This runbook is for operators running SmolVLA training in Plato with LeRobot datasets. It complements the parameter contract in Configuration Parameters.
Migration note
Older setup notes may still reference uv sync --extra robotics.
That root-package extra no longer exists. For current Plato, keep LeRobot / SmolVLA dependencies in a dedicated environment and verify them with import lerobot before launching these configs.
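The `import lerobot` verification mentioned above can be scripted without triggering a heavyweight import. A minimal sketch using only the standard library (the helper name is ours, not part of Plato):

```python
import importlib.util

def lerobot_available() -> bool:
    """True if `import lerobot` would succeed in this environment.

    find_spec locates the package without importing it, so this check
    is cheap enough to run before every launch.
    """
    return importlib.util.find_spec("lerobot") is not None

print("lerobot importable:", lerobot_available())
```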
1) Setup
Install core dependencies:
uv sync
Add the optional LeRobot / SmolVLA robotics stack in a dedicated environment so the default Plato install stays lean.
The exact command depends on how you package the robotics dependencies in your environment. Plato itself only requires that lerobot is importable at runtime before you launch a LeRobot config.
Authenticate to Hugging Face when using private repos:

huggingface-cli login

An example Plato config for SmolVLA:

```toml
[data]
datasource = "LeRobot"

[trainer]
type = "lerobot"
model_type = "smolvla"
model_name = "smolvla"

[parameters.policy]
type = "smolvla"
path = "lerobot/smolvla_base"
finetune_mode = "adapter"  # or "full"
precision = "fp32"
device = "cpu"  # or "cuda" / "mps"

[parameters.dataset]
repo_id = "lerobot/pusht_image"
delta_timestamps = { observation_image = [-0.2, -0.1, 0.0] }

[parameters.transforms]
image_size = [224, 224]
normalize = true
interpolation = "bilinear"
```
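The delta_timestamps values are relative offsets in seconds from the current frame. As a sketch of how such offsets map to frame indices in a fixed-fps dataset (frames_for_deltas is illustrative only; LeRobot's actual lookup also handles episode boundaries and tolerance windows):

```python
def frames_for_deltas(current_index: int, fps: int, deltas: list[float]) -> list[int]:
    """Map relative timestamps (seconds) to dataset frame indices.

    Illustrative sketch: a delta of -0.1 s at 10 fps means
    "one frame back" from the current index.
    """
    return [current_index + round(d * fps) for d in deltas]

# At 10 fps, deltas [-0.2, -0.1, 0.0] around frame 50 select frames 48, 49, 50.
print(frames_for_deltas(50, 10, [-0.2, -0.1, 0.0]))  # [48, 49, 50]
```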
5) Plato ↔ lerobot-train Mapping
| Plato config field(s) | lerobot-train equivalent | Type |
| --- | --- | --- |
| parameters.policy.path | --policy.path | Direct |
| parameters.dataset.repo_id | --dataset.repo_id | Direct |
| trainer.batch_size | --batch_size | Direct |
| parameters.policy.device | --policy.device | Direct |
| trainer.rounds + trainer.epochs | --steps | Conceptual scheduling mapping |
| server.checkpoint_path / server.model_path | --output_dir | Conceptual output-location mapping |
| parameters.dataset.delta_timestamps | LeRobot dataset delta_timestamps usage during training | Conceptual data-window mapping |
| parameters.policy.finetune_mode (full/adapter) | Trainable-parameter strategy during policy training | Conceptual finetune-mode mapping |
Notes:
- Upstream LeRobot examples for SmolVLA commonly use --steps; Plato uses round/epoch scheduling.
- Adapter-mode behavior in Plato is implemented via parameters.policy.finetune_mode and adapter parameter selection in the SmolVLA model wrapper.
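A back-of-the-envelope conversion between the two scheduling styles (a sketch only; Plato's actual scheduler may count steps differently, e.g. around partial final batches):

```python
def approx_steps(rounds: int, epochs_per_round: int,
                 dataset_len: int, batch_size: int) -> int:
    """Rough lerobot-train --steps equivalent of Plato's
    round/epoch scheduling (illustrative, not Plato's internal math)."""
    steps_per_epoch = max(1, dataset_len // batch_size)
    return rounds * epochs_per_round * steps_per_epoch

# 10 rounds x 1 epoch over 2000 samples at batch size 8 -> 2500 steps.
print(approx_steps(rounds=10, epochs_per_round=1, dataset_len=2000, batch_size=8))
```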
- Start from configs/LeRobot/smolvla_single_client_smoke.toml (CPU, tiny batch).
- Reduce trainer.batch_size.
- Use parameters.policy.device = "cpu" for smoke checks.
- Move to cuda + higher batch sizes only after smoke passes.
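The cpu-then-cuda escalation above can be automated with a small device probe. This is a sketch (the pick_device helper is ours); it assumes PyTorch is present when a GPU is requested and falls back to "cpu" otherwise:

```python
import importlib.util

def pick_device(allow_gpu: bool = True) -> str:
    """Choose a value for parameters.policy.device.

    allow_gpu=False pins "cpu" for smoke checks; otherwise probe for
    the best available accelerator, falling back to "cpu".
    """
    if not allow_gpu or importlib.util.find_spec("torch") is None:
        return "cpu"
    import torch
    if torch.cuda.is_available():
        return "cuda"
    mps = getattr(torch.backends, "mps", None)
    if mps is not None and mps.is_available():
        return "mps"
    return "cpu"

print(pick_device(allow_gpu=False))  # cpu -> run smoke checks first
print(pick_device())                 # cuda / mps / cpu depending on host
```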
FFmpeg / build issues in robotics stack
Symptom:
Build/runtime errors mentioning FFmpeg or PyAV dependencies.
Actions:
Install host FFmpeg libraries and build toolchain (cmake, build-essential, FFmpeg libs), then reinstall the LeRobot / SmolVLA robotics stack in that environment.
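Before reinstalling, a quick pre-flight check confirms the ffmpeg binary is on PATH (the ffmpeg_ready helper is a sketch; host development libraries such as libavcodec are a separate package-manager concern this check does not cover):

```python
import shutil

def ffmpeg_ready() -> bool:
    """True if an ffmpeg executable is discoverable on PATH."""
    return shutil.which("ffmpeg") is not None

print("ffmpeg found" if ffmpeg_ready() else "install ffmpeg first")
```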
SmolVLA + LeRobot Optional Setup
This setup path is optional. Core Plato federated workloads continue to use the default dependency set from uv sync.
Install the robotics stack in a separate environment
Keep LeRobot / SmolVLA dependencies out of the default Plato environment unless
you are actively working on robotics workloads. The only hard requirement for
Plato's LeRobot path is that import lerobot succeeds in the environment where
you launch the training run.
Environment gating
When adding LeRobot-backed modules, keep imports guarded so non-robotics
environments fail with a clear action instead of a hard crash at import time.
```python
try:
    import lerobot
except ImportError as exc:
    raise ImportError(
        "LeRobot support is optional. Install the LeRobot / SmolVLA robotics "
        "stack in the active environment before using LeRobot configs."
    ) from exc
```
Runtime notes for SmolVLA/LeRobot
CUDA-capable GPUs are recommended for practical SmolVLA fine-tuning; CPU is
mainly suitable for smoke checks.
Install ffmpeg on hosts that read video-backed LeRobot datasets.
Authenticate with Hugging Face (huggingface-cli login) when accessing
private dataset repositories.
LeRobot currently constrains the Torch stack used by this optional path;
if you need different Torch constraints for non-robotics research, keep a
separate virtual environment.
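The Hugging Face authentication note above can also be pre-flighted. A heuristic sketch (the helper is ours; huggingface_hub's own auth logic is authoritative, and the token file location assumes the default cache directory written by huggingface-cli login):

```python
import os
from pathlib import Path

def hf_token_present() -> bool:
    """Heuristic check for Hugging Face credentials: either a token
    environment variable or the default token file written by
    `huggingface-cli login`."""
    if os.environ.get("HF_TOKEN") or os.environ.get("HUGGING_FACE_HUB_TOKEN"):
        return True
    return (Path.home() / ".cache" / "huggingface" / "token").exists()

print("HF credentials found:", hf_token_present())
```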