Model Training

Train your models your way

From no-code to custom PyTorch pipelines. Choose your level of control and let Picsellia manage the infrastructure.

Works with:
PyTorch
TensorFlow
Ultralytics
Hugging Face
+more
1-click GPU allocation
20+ pre-built pipelines
Custom flexibility
0 infrastructure to manage
Flexibility

Choose your level of control

Start with no-code for quick iterations, use the SDK for automation, or build fully custom pipelines when you need complete control.

No-Code Training

Launch training jobs directly from the UI. Select a pre-built pipeline, configure parameters, and start training.

Configure in UI
Select GPU
Launch training
Visual parameter configuration
One-click GPU allocation
Real-time progress monitoring

Python SDK

Full programmatic control with our Python SDK. Integrate into your existing workflows and CI/CD pipelines.

from picsellia import Client

client = Client()
project = client.get_project("defects")

# Create experiment
experiment = project.create_experiment("yolo-training")

# Attach dataset
dataset = client.get_dataset("defects").get_version("v3")
experiment.attach_dataset("train", dataset)
Type-safe API
Jupyter support
Pipeline automation

Custom Pipelines

Build custom training pipelines with CV Engine. Modular steps, any framework, full flexibility.

from picsellia_cv_engine import step, Pipeline

@step
def train(context):
    model = load_model(context.parameters)
    for epoch in range(context.parameters.epochs):
        loss = ...  # your training logic computes the epoch loss here
        context.experiment.log("loss", loss)
    context.experiment.store("model.pt")

pipeline = Pipeline([train])
pipeline.run()
Composable steps
Framework agnostic
Local + remote execution
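The decorator pattern behind `@step` can be sketched with a minimal, framework-free reimplementation. This is illustrative only; the `step` and `Pipeline` names mirror CV Engine's, but the bodies below are not the actual `picsellia_cv_engine` internals:

```python
# Minimal sketch of a step/pipeline pattern like the one CV Engine exposes.
# Illustrative reimplementation only -- not the picsellia_cv_engine API.
from typing import Callable, List


def step(fn: Callable) -> Callable:
    """Mark a function as a pipeline step (here, just a tag attribute)."""
    fn.is_step = True
    return fn


class Pipeline:
    """Runs registered steps in order, threading a shared context through."""

    def __init__(self, steps: List[Callable]):
        self.steps = steps

    def run(self, context: dict) -> dict:
        for s in self.steps:
            s(context)
        return context


@step
def load_data(context: dict) -> None:
    context["dataset"] = [1, 2, 3]


@step
def train(context: dict) -> None:
    # Toy "training": compute one metric per dataset item
    context["logs"] = [{"loss": 1.0 / x} for x in context["dataset"]]


result = Pipeline([load_data, train]).run({})
```

Because each step only reads from and writes to the shared context, steps stay composable: reordering, removing, or reusing them across pipelines requires no changes to the steps themselves.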
Pre-built Pipelines

Production-grade models, ready to train

Start training in minutes with our pre-built pipelines. Ultralytics for YOLO, SAM2 for segmentation, Grounding DINO for zero-shot detection, and more.

One-click deployment to GPU
Pre-configured hyperparameters
Automatic metric logging
Model export to registry
Ultralytics

production

Train YOLOv8/v11 models for detection, segmentation, and classification

Detection · Segmentation · Classification
SAM2

production

Segment Anything Model for automatic mask generation and refinement

Segmentation · Pre-annotation
Grounding DINO

production

Open-set object detection with text prompts for zero-shot labeling

Detection · Zero-shot
CLIP

production

Fine-tune embeddings for domain-specific similarity search

Embeddings · Classification
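Fine-tuned embeddings are typically consumed through cosine similarity. A minimal, pure-Python sketch of similarity search over embedding vectors (the 3-d vectors and defect labels here are toy placeholders, not real CLIP outputs, which are much higher-dimensional):

```python
import math


def cosine_similarity(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)


def nearest(query, index):
    """Return the label in `index` whose embedding is most similar to `query`."""
    return max(index, key=lambda label: cosine_similarity(query, index[label]))


# Toy 3-d "embeddings"; a fine-tuned CLIP model emits e.g. 512-d vectors.
index = {
    "scratch": [0.9, 0.1, 0.0],
    "dent":    [0.1, 0.9, 0.1],
    "clean":   [0.0, 0.1, 0.9],
}
match = nearest([0.8, 0.2, 0.1], index)  # closest to "scratch"
```

Domain-specific fine-tuning moves embeddings of visually similar defects closer together, which is what makes this kind of nearest-neighbor lookup reliable in production.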
CV Engine

Build custom pipelines with ease

Picsellia CV Engine is a modular toolkit for building computer vision workflows. Composable steps, framework extensions, and CLI automation.

terminal
$ pip install picsellia-cv-engine
# Initialize a new training pipeline
$ pxl-pipeline init --type training
# Run locally for testing
$ pxl-pipeline test
# Deploy to Picsellia cloud
$ pxl-pipeline deploy --gpus 1
CLI + Python decorators
View Docs →

Modular Steps

Build pipelines from reusable, composable steps with @step decorators

Framework Extensions

Pre-built integrations for Ultralytics, SAM2, CLIP, and more

Local & Remote

Test locally, deploy to Picsellia cloud with one command

Auto Logging

Metrics, artifacts, and parameters logged automatically

Managed GPUs

Available now
NVIDIA A100
80GB VRAM
$3.50/hr
Pay only for what you use
Auto-shutdown on completion
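With per-use billing at $3.50/hr and auto-shutdown on completion, the cost of a run is simply the hours used times the rate. A quick sketch, assuming simple pro-rata billing (the actual billing granularity isn't specified here):

```python
A100_RATE_PER_HR = 3.50  # managed A100 price quoted above


def training_cost(minutes: float, rate_per_hr: float = A100_RATE_PER_HR) -> float:
    """Cost of a training run billed for the hour fraction actually used.

    Assumes pro-rata billing; real billing granularity may differ.
    """
    return round(minutes / 60 * rate_per_hr, 2)


# A 90-minute fine-tuning run:
cost = training_cost(90)  # 1.5 h * $3.50/hr = $5.25
```

Auto-shutdown matters here: without it, an idle instance left running overnight would keep accruing the hourly rate.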
SageMaker

Bring Your Own Compute

Connect your AWS SageMaker account to train on your own infrastructure while keeping full orchestration through Picsellia.

Your AWS billing
Your GPU instances
Our orchestration
Infrastructure

Zero infrastructure to manage

Focus on your models, not your servers. Train on our managed A100 GPUs at $3.50/hr, or connect your own SageMaker account for full flexibility. Picsellia handles environment setup and job orchestration.

NVIDIA A100 GPUs at $3.50/hr
Bring your own SageMaker account
Pre-configured CUDA environments
Automatic job queuing
Real-time training logs
End-to-End

Connected to your entire workflow

AI Lab connects directly to datasets, experiment tracking, and model deployment. Full lineage from data to production.

Ready to train your models?

Start with no-code training or build custom pipelines. Zero infrastructure to manage.