Quick Start
Running Plato Directly Using uv
To start a federated learning training workload with only a configuration file, run uv run [Python file] -c [configuration file] [optional flags]. For example:
uv run plato.py -c configs/MNIST/fedavg_lenet5.toml
The following command-line parameters are supported:
- -c: the path to the configuration file to be used. The default is config.toml in the project's home directory.
- -b: the base path, used to contain all models, datasets, checkpoints, and results (defaults to ./runtime).
- -r: resume a previously interrupted training session (only works correctly in synchronous training sessions).
- --cpu: use the CPU as the only device.
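These flags can be combined. For example (./my_runs is an arbitrary illustrative path), the following command resumes an interrupted session on the CPU, storing all artifacts under ./my_runs:

# Resume an interrupted run on the CPU; ./my_runs is an arbitrary example path
uv run plato.py -c configs/MNIST/fedavg_lenet5.toml -b ./my_runs -r --cpu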
Datasets required by an example are downloaded automatically the first time it runs; subsequent executions reuse the cached copies stored under the chosen base path.
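As an illustration only (the actual directory names are determined by Plato and may differ), the base path ends up holding content along these lines:

./runtime
├── data         # datasets downloaded on first run and reused afterwards
├── models       # trained models
├── checkpoints  # checkpoints saved during training
└── results      # recorded results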
Plato uses the TOML format for its configuration files to manage runtime configuration parameters. Example configuration files have been provided in the configs/ directory.
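As a rough sketch (the section and key names below are illustrative and vary between examples; consult the files in configs/ for authoritative settings), a configuration file is organized into sections such as:

[clients]
# Illustrative values: the size of the client pool and per-round participation
total_clients = 10
per_round = 5

[server]
address = "127.0.0.1"
port = 8000

[trainer]
# Illustrative training hyperparameters
rounds = 5
epochs = 1
batch_size = 32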
A number of federated learning algorithms are included in examples/. To run one, run the main Python program in its directory with a suitable configuration file. For example, to run the basic example located at examples/basic/, run the command:
uv run examples/basic/basic.py -c configs/MNIST/fedavg_lenet5.toml
Using MLX as a Backend
Plato supports MLX as an alternative backend to PyTorch for Apple Silicon devices. To use MLX, first install the optional dependencies:
uv sync --extra mlx
Then configure your TOML file to use the MLX framework by setting framework = "mlx" in the relevant sections:
[trainer]
type = "mlx"
framework = "mlx"
[algorithm]
type = "mlx_fedavg"
framework = "mlx"
[parameters.model]
framework = "mlx"
A complete example configuration is available at configs/MNIST/fedavg_lenet5_mlx.toml. Run it with:
uv run plato.py -c configs/MNIST/fedavg_lenet5_mlx.toml
Running Plato in a Docker Container
The provided Dockerfile builds a Docker image running Ubuntu 24.04, with a virtual environment called plato pre-configured to run Plato. To build the image, run:
docker build -t plato -f Dockerfile .
To run the Docker image that was just built, use the command:
./dockerrun.sh
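The exact contents of dockerrun.sh may differ, but conceptually it wraps a plain docker run invocation along these lines (a sketch, not the script itself):

# Start an interactive container from the plato image; the container
# persists after exit, which is why the docker rm cleanup below is needed
docker run -it plato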
To remove all the containers after they are run, use the command:
docker rm $(docker ps -a -q)
To remove the plato Docker image, use the command:
docker rmi plato
Running Plato in a Docker Container with GPU Support
First, the NVIDIA Container Toolkit will need to be installed on the host machine. On Ubuntu 24.04, follow these steps:
- Configure the production repository:
curl -fsSL https://nvidia.github.io/libnvidia-container/gpgkey | sudo gpg --dearmor -o /usr/share/keyrings/nvidia-container-toolkit-keyring.gpg \
&& curl -s -L https://nvidia.github.io/libnvidia-container/stable/deb/nvidia-container-toolkit.list | \
sed 's#deb https://#deb [signed-by=/usr/share/keyrings/nvidia-container-toolkit-keyring.gpg] https://#g' | \
sudo tee /etc/apt/sources.list.d/nvidia-container-toolkit.list
- Update the packages list from the repository:
sudo apt-get update
- Install the NVIDIA Container Toolkit packages (where 1.17.8-1 is the latest version as of October 2025):
export NVIDIA_CONTAINER_TOOLKIT_VERSION=1.17.8-1
sudo apt-get install -y \
nvidia-container-toolkit=${NVIDIA_CONTAINER_TOOLKIT_VERSION} \
nvidia-container-toolkit-base=${NVIDIA_CONTAINER_TOOLKIT_VERSION} \
libnvidia-container-tools=${NVIDIA_CONTAINER_TOOLKIT_VERSION} \
libnvidia-container1=${NVIDIA_CONTAINER_TOOLKIT_VERSION}
- Configure the Docker runtime for GPU support:
sudo nvidia-ctk runtime configure --runtime=docker
- Restart Docker:
sudo systemctl restart docker
For more information about installing the NVIDIA Container Toolkit, refer to its official documentation.
The following command can be used to verify that GPU access is available in Docker containers:
docker run --gpus all --rm nvidia/cuda:13.0.1-cudnn-devel-ubuntu24.04 nvidia-smi
This should output a table listing your GPUs, confirming that GPU access works.
The following command can be used to enter a GPU-enabled Docker container with Plato built in:
./dockerrun_gpu.sh
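As with dockerrun.sh, the script's exact contents may differ; conceptually it adds GPU access to the same kind of docker run invocation (a sketch, not the script itself):

# Start an interactive container from the plato image with all GPUs visible
docker run --gpus all -it plato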
Formatting the Code and Fixing Linter Errors
It is strongly recommended that new additions and revisions of the codebase conform to Ruff's formatting and linting guidelines. To format the entire codebase automatically, run:
uvx ruff format
To fix all linter errors automatically, run:
uvx ruff check --fix
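Both commands also accept explicit paths when only part of the tree should be touched; for example (the path here is just an illustration):

uvx ruff format plato
uvx ruff check --fix plato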
Type Checking
It is also strongly recommended that new additions and revisions of the codebase pass Astral's ty type checker cleanly. To install ty globally using uv, run:
uv tool install ty@latest
To run the type checker on any subdirectory in Plato, such as plato, run:
ty check plato