Poisoning Detection

AsyncFilter

AsyncFilter defends against untargeted poisoning attacks in asynchronous federated learning with a server-side filter. Using statistical analysis, it identifies potentially poisoned model updates and filters them out before the server aggregation stage.
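
The paper's statistical test is more involved and accounts for client staleness; as a rough, hypothetical sketch of the general idea behind server-side outlier filtering (the function name, z-score test, and threshold below are illustrative assumptions, not AsyncFilter's actual method), consider scoring each pending update by its distance to a robust center of the batch:

import numpy as np

def filter_updates(updates: list[np.ndarray], z_threshold: float = 2.0) -> list[np.ndarray]:
    """Drop updates that are statistical outliers relative to the batch.

    Each flattened model update is scored by its distance to the
    coordinate-wise median; updates whose distance z-score exceeds
    `z_threshold` are discarded before aggregation.
    """
    stacked = np.stack(updates)                       # (n_updates, n_params)
    center = np.median(stacked, axis=0)               # robust estimate of the benign center
    dists = np.linalg.norm(stacked - center, axis=1)  # each update's distance to the center
    z_scores = (dists - dists.mean()) / (dists.std() + 1e-12)
    return [u for u, z in zip(updates, z_scores) if z <= z_threshold]

# Usage: nine benign updates plus one high-variance poisoned update.
rng = np.random.default_rng(0)
benign = [rng.normal(0.0, 0.1, size=1000) for _ in range(9)]
poisoned = rng.normal(0.0, 5.0, size=1000)            # untargeted, high-magnitude update
kept = filter_updates(benign + [poisoned])
print(len(kept))                                      # expected: 9 (poisoned update removed)

In an asynchronous setting, a server would apply such a test to the queue of pending updates before each aggregation step rather than waiting for all clients to report.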

Running the examples

Navigate to the detector workspace before launching any experiment:

cd examples/detector

Use the provided configuration files to reproduce key experiments from the paper:

  • CIFAR-10 (Section 5.2):
uv run detector.py -c asyncfilter_cifar_2.toml
  • CINIC-10 with LIE attack (Section 5.3, concentration factor 0.01):
uv run detector.py -c asyncfilter_cinic_3.toml
  • FashionMNIST with server staleness limit 10 (Section 5.6):
uv run detector.py -c asyncfilter_fashionmnist_6.toml

Datasets are downloaded automatically using the paths defined in each configuration file, so no additional flags are required. Modify these TOML files to explore custom scenarios or to adjust hyperparameters for new experiments.
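
For instance, a configuration file might expose settings along the following lines. The section and key names here are assumptions for illustration only; consult the shipped asyncfilter_*.toml files for the actual schema:

# Hypothetical excerpt; real key names may differ from those in the
# provided asyncfilter_*.toml files.
[data]
datasource = "CIFAR10"    # which dataset to download and load

[server]
staleness_bound = 10      # cap on how stale an accepted update may be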

Reference: Y. Kang and B. Li. "AsyncFilter: Detecting Poisoning Attacks in Asynchronous Federated Learning," in the Proceedings of the 25th ACM/IFIP International Middleware Conference (Middleware), 2024.