Three-layer Federated Learning
Tempo
Tempo is proposed to improve training performance in three-layer federated learning. It adaptively tunes the number of each client's local training epochs based on the difference between its edge server's locally aggregated model and the current global model.
cd examples/three_layer_fl/tempo
uv run tempo.py -c tempo_MNIST_lenet5.toml
Reference: Ying et al., "Tempo: Improving Training Performance in Cross-Silo Federated Learning," in Proc. IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2022.
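The core idea behind Tempo can be illustrated with a minimal sketch: measure how far an edge server's aggregated model has drifted from the global model, then adjust that edge's clients' local-epoch budget accordingly. The function below is a hypothetical illustration, not Tempo's actual tuning rule; the linear scaling, the `scale` parameter, and the clamping bounds are assumptions for demonstration.

```python
import numpy as np

def tune_local_epochs(edge_params, global_params, base=5, lo=1, hi=10, scale=1.0):
    """Hypothetical sketch of divergence-driven epoch tuning (not Tempo's exact rule).

    edge_params / global_params: lists of parameter arrays (same shapes).
    Returns a local-epoch count that grows with model divergence,
    clamped to [lo, hi].
    """
    # L2 distance between the edge's aggregated model and the global model
    divergence = sum(
        float(np.linalg.norm(e - g)) for e, g in zip(edge_params, global_params)
    )
    # Assumed linear mapping from divergence to extra local epochs
    epochs = int(round(base + scale * divergence))
    return max(lo, min(hi, epochs))
```

With zero divergence the edge's clients keep the base epoch count; as the edge model drifts further from the global model, the budget increases up to the cap.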
FedSaw
FedSaw is proposed to improve training performance in three-layer federated learning with L1-norm structured pruning. Edge servers and clients prune their updates before sending them out. FedSaw adaptively tunes the pruning amount of each edge server and its clients based on the difference between the edge server's locally aggregated model and the current global model.
cd examples/three_layer_fl/fedsaw
uv run fedsaw.py -c fedsaw_MNIST_lenet5.toml
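The L1-norm structured pruning that FedSaw applies to updates can be sketched as follows: rank the rows (output channels) of a weight update by their L1 norms and zero out the smallest ones. This is a minimal illustration of the pruning primitive only; the row-wise granularity and the way `amount` would be chosen per edge server are assumptions, not FedSaw's exact implementation.

```python
import numpy as np

def l1_structured_prune(weight, amount):
    """Sketch of L1-norm structured pruning on a 2-D weight update.

    weight: 2-D array where each row is one structure (e.g., output channel).
    amount: fraction of rows to prune, in [0, 1].
    Returns a copy with the lowest-L1-norm rows zeroed out.
    """
    n_prune = int(amount * weight.shape[0])  # number of rows to remove
    row_norms = np.abs(weight).sum(axis=1)   # L1 norm of each row
    prune_idx = np.argsort(row_norms)[:n_prune]
    pruned = weight.copy()
    pruned[prune_idx] = 0.0
    return pruned
```

In FedSaw, an edge server whose aggregated model diverges from the global model would have its pruning amount adjusted adaptively; the sketch above only shows the structured-pruning step itself.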