Active Federated Learning is a client selection algorithm in which clients are selected not uniformly at random in each round, but with a probability conditioned on the current model and the data on each client, in order to maximize training efficiency. The objective is to reduce the number of required training iterations while maintaining the same model accuracy.
Reference: J. Goetz, K. Malik, D. Bui, S. Moon, H. Liu, A. Kumar. "Active Federated Learning," September 2019.
Alignment with the paper
The client records the valuation before local training by running a forward-only pass over its dataset (afl_client.py). This replicates the paper’s value function $v_k = \frac{1}{n_k} L(w, x_k, y_k)$ using the current global model, ensuring the server samples based on the intended “utility before update.” The additional pass roughly doubles the inference cost per round but keeps the scoring faithful.
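A minimal sketch of such a valuation pass, assuming a PyTorch model and data loader; the helper name `compute_valuation` is illustrative, not Plato's API:

```python
import torch
import torch.nn.functional as F

def compute_valuation(model, data_loader, device="cpu"):
    """Illustrative sketch: score the *current global model* on local data
    before any local training, matching v_k = (1/n_k) * L(w, x_k, y_k)."""
    model.eval()
    total_loss, num_samples = 0.0, 0
    with torch.no_grad():  # forward-only: no gradients, no parameter updates
        for examples, labels in data_loader:
            examples, labels = examples.to(device), labels.to(device)
            logits = model(examples)
            # Sum (not mean) so that dividing by n_k at the end is exact.
            total_loss += F.cross_entropy(logits, labels, reduction="sum").item()
            num_samples += len(labels)
    return total_loss / num_samples  # the valuation v_k reported to the server
```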
Server-side selection (afl_server.py, afl_selection_strategy.py) now matches Algorithm 1 exactly: the lowest $\alpha_1$ fraction of clients have their valuations reset to $-\infty$; the remaining clients receive softmax weights with temperature $\alpha_2$, and only those with positive mass join the weighted draw; the $\alpha_3$ fraction is sampled uniformly from the residual pool, so suppressed clients re-enter solely through this fallback.
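A hedged numpy sketch of this selection step, with `alpha1`, `alpha2`, and `alpha3` standing in for the paper's $\alpha_1$, $\alpha_2$, $\alpha_3$ (the function name and defaults are illustrative):

```python
import numpy as np

def afl_select(valuations, num_selected, alpha1=0.75, alpha2=0.01, alpha3=0.1,
               rng=None):
    """Sketch of AFL's Algorithm 1: softmax sampling over valuations with a
    suppressed bottom fraction and a uniform fallback for exploration."""
    rng = rng or np.random.default_rng()
    valuations = np.asarray(valuations, dtype=float)
    num_clients = len(valuations)

    # 1. Reset the lowest alpha1 fraction of valuations to -inf.
    cutoff = int(alpha1 * num_clients)
    masked = valuations.copy()
    masked[np.argsort(valuations)[:cutoff]] = -np.inf

    # 2. Softmax with temperature alpha2; suppressed clients get zero mass.
    logits = alpha2 * masked
    finite = np.isfinite(logits)
    probs = np.zeros(num_clients)
    probs[finite] = np.exp(logits[finite] - logits[finite].max())
    probs /= probs.sum()

    # 3. Draw a (1 - alpha3) share of the cohort by softmax weight
    #    (assumes enough clients carry positive mass) ...
    num_weighted = int((1 - alpha3) * num_selected)
    weighted = rng.choice(num_clients, size=num_weighted, replace=False, p=probs)

    # ... and the remaining alpha3 share uniformly from the residual pool,
    # which is the only way suppressed clients can re-enter.
    residual = np.setdiff1d(np.arange(num_clients), weighted)
    uniform = rng.choice(residual, size=num_selected - num_weighted, replace=False)
    return np.concatenate([weighted, uniform])
```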
Valuations stay stale until a client trains again, as described in the paper. Differential privacy mechanisms and alternative valuation heads suggested in Section 4.2 remain unimplemented; any privacy guarantees would need to be layered on top.
Pisces
Pisces is an asynchronous federated learning algorithm that performs biased client selection based on overall utilities, and weighted server aggregation based on staleness. In this example, a client running the Pisces algorithm calculates its statistical utility and reports it, together with its model updates, to the Pisces server. The server then evaluates the overall utility of each client based on the reported statistical utility and client staleness, and selects clients for the next communication round. The algorithm also attempts to detect outliers via DBSCAN for better robustness.
The client matches Eq. (2) exactly, logging only first-epoch batch losses and maintaining an EMA of the squared loss before returning $|B_i|\sqrt{\overline{\text{loss}^2}}$ (examples/client_selection/pisces/pisces_client.py, pisces_trainer.py).
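A sketch of that bookkeeping, assuming the trainer hands over the per-batch losses from the first local epoch; the class name `PiscesUtility` and the `smoothing` factor are illustrative, not the paper's notation:

```python
import math

class PiscesUtility:
    """Illustrative: track an EMA of the mean squared batch loss from the
    first local epoch and report |B_i| * sqrt(EMA) as statistical utility."""

    def __init__(self, smoothing=0.8):
        self.smoothing = smoothing
        self.squared_loss_ema = None  # running EMA of the mean squared loss

    def update(self, first_epoch_batch_losses):
        mean_sq = sum(l * l for l in first_epoch_batch_losses) / len(
            first_epoch_batch_losses)
        if self.squared_loss_ema is None:
            self.squared_loss_ema = mean_sq
        else:
            self.squared_loss_ema = (self.smoothing * self.squared_loss_ema
                                     + (1 - self.smoothing) * mean_sq)

    def statistical_utility(self, num_samples):
        # |B_i| * sqrt(EMA of the squared loss), reported with the update.
        return num_samples * math.sqrt(self.squared_loss_ema)
```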
The selection policy mirrors the authors’ release: base utilities are stored independently, then combined with latency-derived speed penalties and sliding-window staleness discounts; latency tracking, exploration decay, and DBSCAN-based reliability credits are reproduced (pisces_selection_strategy.py).
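The DBSCAN step can be illustrated with scikit-learn; this is a minimal sketch, assuming the server clusters clients' reported utilities each round and treats noise points (label -1) as suspected outliers. The `eps` and `min_samples` values are placeholders, not the paper's settings:

```python
import numpy as np
from sklearn.cluster import DBSCAN

def flag_outliers(reported_utilities, eps=0.5, min_samples=3):
    """Sketch: cluster clients' reported utilities; DBSCAN labels noise
    points as -1, which the server can treat as suspected outliers."""
    features = np.asarray(reported_utilities, dtype=float).reshape(-1, 1)
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit(features).labels_
    return [i for i, label in enumerate(labels) if label == -1]
```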
Aggregation uses the same polynomial staleness weighting with a shared history window (pisces_aggregation_strategy.py), but Plato still relies on the framework’s default async server, omitting the paper’s adaptive pacing controller that caps staleness per Algorithm 1.
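Polynomial staleness weighting can be sketched as below; the exponent `a` and the normalization are illustrative, with the exact form following pisces_aggregation_strategy.py:

```python
def staleness_weights(staleness_list, a=0.5):
    """Sketch: down-weight stale updates polynomially, w_i = (1 + s_i)^(-a),
    then normalize so the aggregation weights sum to one."""
    raw = [(1 + s) ** (-a) for s in staleness_list]
    total = sum(raw)
    return [w / total for w in raw]

# e.g. updates that are 0, 2, and 5 rounds stale:
# staleness_weights([0, 2, 5]) -> fresh updates dominate the weighted average
```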
The sample TOML configuration exposes the same hyperparameters as the public repo and paper; robustness remains optional and disabled by default to keep the example lightweight.
Oort
Oort is a federated learning algorithm that performs biased client selection based on both statistical utility and system utility. Oort was originally proposed for synchronous federated learning; in this example, it has been adapted to support both synchronous and asynchronous federated learning. Notably, the Oort server maintains a blacklist for clients that have been selected too many times (10 by default). If per_round / total_clients is large, e.g., 2/5, the Oort server may not work correctly, because most clients end up on the blacklist and there will no longer be a sufficient number of selectable clients.
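A sketch of why this happens, with an illustrative eligibility check (the function and dictionary names are not Oort's actual API):

```python
def eligible_clients(selection_counts, blacklist_threshold=10):
    """Sketch of the blacklist rule: clients already selected the threshold
    number of times are excluded from future selection."""
    return [cid for cid, count in selection_counts.items()
            if count < blacklist_threshold]

# With total_clients = 5 and per_round = 2, each client reaches 10 selections
# after roughly 5 * 10 / 2 = 25 rounds. Once four of the five clients are
# blacklisted, only one eligible client remains -- fewer than per_round --
# and selection can no longer proceed correctly.
```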
Polaris
Polaris is a client selection algorithm for asynchronous federated learning. In this algorithm, clients are selected by balancing local device speed against data quality from an optimization perspective. Since it requires no extra information beyond local updates, Polaris is compatible with any server aggregation algorithm.