AsyncFilter defends against untargeted poisoning attacks in asynchronous federated learning with a server-side filter. Using statistical analysis, AsyncFilter identifies potentially poisoned model updates and filters them out before the server aggregation stage.
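The paper does not spell out the filtering rule here, but the idea of statistically screening updates before aggregation can be sketched as follows. This is a minimal illustration using a z-score test on update norms, not AsyncFilter's actual statistic; the function name and threshold are hypothetical.

```python
import numpy as np

def filter_updates(updates, threshold=2.0):
    """Keep updates whose L2-norm z-score is within `threshold`.

    `updates` is a list of 1-D NumPy arrays (flattened model deltas).
    This is an illustrative outlier test, not AsyncFilter's own rule.
    """
    norms = np.array([np.linalg.norm(u) for u in updates])
    mean, std = norms.mean(), norms.std()
    if std == 0:  # all norms identical; nothing to flag
        return list(updates)
    z = np.abs(norms - mean) / std
    return [u for u, score in zip(updates, z) if score <= threshold]

# Benign updates cluster together; a poisoned one has an inflated norm.
rng = np.random.default_rng(0)
benign = [rng.normal(0, 0.1, 100) for _ in range(9)]
poisoned = rng.normal(0, 5.0, 100)
kept = filter_updates(benign + [poisoned])
```

In this toy run the poisoned update's norm is far outside the benign cluster, so it is dropped before aggregation would take place.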
Running the examples
Navigate to the detector workspace before launching any experiment:
```
cd examples/detector
```
Use the provided configuration files to reproduce key experiments from the paper:
CIFAR-10 (Section 5.2):
```
uv run detector.py -c asyncfilter_cifar_2.toml
```
CINIC-10 with LIE attack (Section 5.3, concentration factor 0.01):
```
uv run detector.py -c asyncfilter_cinic_3.toml
```
FashionMNIST with server staleness limit 10 (Section 5.6):
```
uv run detector.py -c asyncfilter_fashionmnist_6.toml
```
Datasets are downloaded automatically to the paths defined in each configuration file, so no additional flags are required. Modify these TOML files to explore custom scenarios or to adjust hyperparameters for new experiments.