Build, review, and test inside CarbConnect with gallery import, webcam capture, and in-platform inference.
Set up the basics for your project and the model you'll create.

Upload the dataset your model will train on.

Confirm everything looks good and is ready to go.

Optionally, start training right away.

Test your model with new samples to verify predictions and evaluate performance before deploying.

How teams are using AI Studio today to detect anomalies and validate results without writing any code.
Upload images of normal samples to teach the model what "good" looks like. It will then flag images that deviate from the expected pattern, without needing labeled defect examples.
Build a model trained on clean baseline images from your research. Run it against new samples to surface anything that looks out of the ordinary—before it slips through review.
Once your model is trained and deployed, test it by picking any image from your gallery. Results show confidence scores and flagged regions without writing a line of code.
Import your dataset from the CarbConnect gallery, upload from your local machine, or capture directly from a webcam. Your data stays within the platform throughout the whole process.
After training, review accuracy, precision, recall, and F1 score. See per-class breakdowns, confusion matrices, and epoch-by-epoch training curves to know exactly how well the model learned.
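As a quick refresher on what those numbers mean, here is how accuracy, precision, recall, and F1 fall out of a binary confusion matrix. This is plain Python for illustration only, not anything from CarbConnect's internals:

```python
def metrics(tp, fp, fn, tn):
    """Compute accuracy, precision, recall, and F1 from confusion-matrix counts."""
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    precision = tp / (tp + fp)          # of everything flagged, how much was right
    recall = tp / (tp + fn)             # of everything real, how much was caught
    f1 = 2 * precision * recall / (precision + recall)
    return accuracy, precision, recall, f1

# Example counts: 90 true positives, 10 false positives,
# 5 false negatives, 95 true negatives.
acc, prec, rec, f1 = metrics(tp=90, fp=10, fn=5, tn=95)
print(round(acc, 3), round(prec, 3), round(rec, 3), round(f1, 3))
```

Reading all four together matters: a model can score high accuracy while still missing rare defects, which is exactly what the recall number surfaces.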
Keep multiple experiments separate with project-level organization. Each project tracks its own dataset, training jobs, and deployed model endpoint in one place.
The current model type. Train a detector on normal images and let it score new samples against the learned baseline. Heatmap feedback shows which regions drove the anomaly score.
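The baseline-scoring idea can be sketched in a few lines of Python. This is an illustrative toy, not CarbConnect's actual algorithm: it learns a per-feature mean and spread from "normal" samples only, then scores a new sample by its largest normalized deviation from that baseline.

```python
import statistics

def fit_baseline(normal_samples):
    """Learn per-feature (mean, stdev) from normal samples only."""
    features = list(zip(*normal_samples))
    return [(statistics.mean(f), statistics.stdev(f)) for f in features]

def anomaly_score(sample, baseline):
    """Largest normalized deviation of any feature from the learned baseline."""
    return max(abs(x - mu) / sigma for x, (mu, sigma) in zip(sample, baseline))

# Train on "good" samples only -- no labeled defect examples needed.
normal = [[1.0, 5.0], [1.1, 5.2], [0.9, 4.8], [1.0, 5.1]]
baseline = fit_baseline(normal)

typical = anomaly_score([1.0, 5.0], baseline)   # close to the baseline
outlier = anomaly_score([3.0, 9.0], baseline)   # far from the baseline
print(outlier > typical)
```

Because the per-feature deviations are computed individually, the same idea extends naturally to the heatmap feedback described above: regions with the largest deviations are the ones driving the score.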
Pull images from your existing CarbConnect gallery, upload files from your computer, or capture directly from a webcam. All sources are supported in the same project.
Choose a project name, training mode (Fast, Accuracy, or Auto), and add tags to keep your work organized. The setup step is quick and the defaults get you going right away.
After training, view accuracy, precision, recall, and F1 score. Per-class breakdowns and confusion matrices are available for every trained model version.
Pick any image from your gallery and run it against a deployed model endpoint without leaving the platform. See the confidence score and anomaly result immediately.
Start and stop training jobs from the project page. Track job status in real time and see how long each training run took.
Each project keeps a record of past training runs, model versions, and inference history so you can compare results and track improvements over time.
Group datasets and models under named projects. Each project has its own settings, dataset, and model endpoint—keeping experiments cleanly separated.
Create a project, bring in your dataset, and launch training from the same workspace.