To start a federated learning training workload with only a configuration file, run uv run [Python file] -c [configuration file] .... For example:
uv run plato.py -c configs/MNIST/fedavg_lenet5.toml
The following command-line parameters are supported:
-c: the path to the configuration file to be used. The default is config.toml in the project's home directory.
-b: the base path used to store all models, datasets, checkpoints, and results (defaults to ./runtime).
-r: resume a previously interrupted training session (only works correctly in synchronous training sessions).
--cpu: use only the CPU as the training device.
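For instance, the following sketch combines these options to resume an interrupted session with the sample configuration above, keep all artifacts under an illustrative base path, and restrict training to the CPU:

# ./my_runtime is an illustrative base path; -r resumes the session and --cpu forces CPU-only training
uv run plato.py -c configs/MNIST/fedavg_lenet5.toml -b ./my_runtime -r --cpu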
Datasets required by an example are downloaded automatically the first time it runs; subsequent executions reuse the cached copies stored under the chosen base path.
Plato uses the TOML format for its configuration files to manage runtime configuration parameters. Example configuration files have been provided in the configs/ directory.
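As a rough illustration of what such a file looks like, here is a minimal sketch; the section and key names below are assumptions for illustration only, so consult the shipped files in configs/, such as fedavg_lenet5.toml, for the schema Plato actually expects:

# Illustrative sketch only; section and key names are assumptions, not Plato's documented schema
[clients]
total_clients = 10
per_round = 5
[server]
address = "127.0.0.1"
port = 8000
[data]
datasource = "MNIST"
[trainer]
rounds = 5
epochs = 1
batch_size = 32
model_name = "lenet5"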
A number of federated learning algorithms are included in examples/. To run one of them, run its main Python program with a suitable configuration file. For example, to run the basic example located at examples/basic/, you would run a command along the lines of the sketch below:
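This is a minimal sketch only; the entry-point script and configuration file names are placeholders, so substitute the actual files found under examples/basic/:

# Placeholder file names; use the actual script and .toml file shipped in examples/basic/
uv run examples/basic/basic.py -c examples/basic/basic.toml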
The following command can be used to enter a GPU-enabled Docker container with Plato built in:

./dockerrun_gpu.sh

Inside the container, a GPU status check should output a table listing your GPUs, confirming that GPU access works.
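One such check, assuming the NVIDIA drivers and container toolkit are set up on the host, is:

# Lists the GPUs visible inside the container
nvidia-smi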
Formatting the Code and Fixing Linter Errors
It is strongly recommended that new additions and revisions of the codebase conform to Ruff's formatting and linter guidelines. To format the entire codebase automatically, run:
uvx ruff format
To fix all linter errors automatically, run:
uvx ruff check --fix
Type Checking
It is also strongly recommended that new additions and revisions of the codebase pass Astral's ty type checker cleanly. To install ty globally using uv, run:
uv tool install ty@latest
To type check the code in any sub-directory of Plato, such as plato, run: