Federated unlearning is a concept proposed in the recent research literature that uses an unlearning algorithm, such as retraining from scratch, to guarantee that a client can remove all effects of its local private data samples from the trained model. The implementation in fedunlearning_server.py and fedunlearning_client.py overrides several methods in the client and server APIs, such as the server's aggregate_deltas().
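The override pattern can be sketched as follows. This is a minimal illustration with a hypothetical `BaseServer` class and method names; it is not Plato's actual API, only the shape of subclassing a server and intercepting delta aggregation when a deletion request arrives.

```python
# Hypothetical sketch of a server subclass overriding delta aggregation.
# `BaseServer`, `handle_deletion_request`, and the dict-of-floats model
# representation are illustrative stand-ins, not the real framework API.

class BaseServer:
    def aggregate_deltas(self, deltas):
        """Default behaviour: plain averaging of client weight deltas."""
        n = len(deltas)
        return {k: sum(d[k] for d in deltas) / n for k in deltas[0]}

class UnlearningServer(BaseServer):
    """Flags a retraining phase when a client asks to be forgotten,
    then delegates aggregation to the base implementation."""

    def __init__(self):
        self.retraining = False

    def handle_deletion_request(self):
        # In the real example, this is where the server would rewind
        # to an earlier checkpoint before restarting training.
        self.retraining = True

    def aggregate_deltas(self, deltas):
        # During retraining the override could reject stale deltas;
        # here we simply delegate to the base behaviour.
        return super().aggregate_deltas(deltas)
```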
examples/unlearning/fedunlearning/fedunlearning_server.py:68-193 checkpoints the global state at round 0 and, once a deletion request arrives, rewinds to the earliest affected round before restarting training, exactly mirroring the rapid-retraining recipe spelled out in Liu et al. (INFOCOM 2022).
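The checkpoint-and-rewind bookkeeping behind this can be sketched as below. All names (`RewindState`, `save`, `rewind`) are assumptions for illustration; the real logic lives in fedunlearning_server.py. The idea is that the server snapshots the model each round, records which clients participated, and on a deletion request restarts from the last checkpoint taken before the departing client first joined.

```python
# Assumed-name sketch of checkpointing and rewinding for unlearning.

class RewindState:
    def __init__(self):
        self.checkpoints = {}        # round number -> model snapshot
        self.clients_per_round = {}  # round number -> participating clients

    def save(self, round_no, model, clients):
        """Snapshot the global model and the round's participants."""
        self.checkpoints[round_no] = dict(model)
        self.clients_per_round[round_no] = set(clients)

    def earliest_affected_round(self, deleted_client):
        """First round in which the departing client participated."""
        rounds = [r for r, c in self.clients_per_round.items()
                  if deleted_client in c]
        return min(rounds) if rounds else None

    def rewind(self, deleted_client):
        """Return (round, model) to restart retraining from, or None."""
        r = self.earliest_affected_round(deleted_client)
        if r is None:
            return None
        # Restart from the checkpoint taken *before* the client joined
        # (round 0 holds the initial, untrained global state).
        restart = max(rr for rr in self.checkpoints if rr < r)
        return restart, self.checkpoints[restart]
```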
During the retraining window the server filters out stale payloads (aggregate_deltas at examples/unlearning/fedunlearning/fedunlearning_server.py:77-124) so only models consistent with the rewound checkpoint contribute, matching the paper's requirement that forgotten samples leave no trace.
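The filtering step reduces to a predicate on each update's starting round. The sketch below uses an assumed `(start_round, delta)` representation rather than the real payload format: any delta computed against a pre-rewind global model still carries traces of the forgotten data and must be dropped.

```python
# Hedged sketch of stale-update filtering during retraining.
# The (start_round, delta) tuple format is an assumption for clarity.

def filter_stale(updates, rewind_round):
    """Keep only deltas computed from the rewound checkpoint onward.

    Updates that began before `rewind_round` were trained on a global
    model that still reflects the forgotten samples, so they are dropped.
    """
    return [delta for start_round, delta in updates
            if start_round >= rewind_round]
```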
On the client side, FedUnlearningLifecycleStrategy plus the custom sampler (examples/unlearning/fedunlearning/fedunlearning_client.py:24-66 and examples/unlearning/fedunlearning/unlearning_iid.py:23-66) delete the configured ratio of local data before rejoining, reproducing the data-pruning step that accompanies each retraining pass in the reference design.
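The data-pruning step amounts to dropping a configured fraction of the local dataset before the client rejoins retraining. The helper below is an assumed interface, not the actual sampler in unlearning_iid.py; it only shows the ratio arithmetic and a deterministic random selection of the samples to keep.

```python
# Illustrative sketch (assumed names) of deleting a configured ratio
# of a client's local samples before it rejoins retraining.

import random

def prune_local_data(samples, delete_ratio, seed=1):
    """Remove `delete_ratio` of the local dataset, deterministically.

    A fixed seed keeps the kept subset stable across retraining passes,
    so the same samples stay deleted every time the client rejoins.
    """
    rng = random.Random(seed)
    keep = max(0, len(samples) - int(len(samples) * delete_ratio))
    return rng.sample(samples, keep)
```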
Knot
Knot is implemented in examples/unlearning/knot. It clusters the clients, and server aggregation is carried out within each cluster only. Knot is designed for asynchronous mode: unlearning is performed by retraining from scratch within the affected cluster, and the global model is aggregated at the end of the retraining process. Knot supports a wide range of tasks, including image classification and language tasks.
Reference: N. Su and B. Li. "Asynchronous Federated Unlearning," in Proc. IEEE International Conference on Computer Communications (INFOCOM 2023).
Alignment with the paper
The Knot server extends the baseline retraining workflow but aggregates updates cluster-by-cluster (examples/unlearning/knot/knot_server.py:178-214), which is the cornerstone of Su and Li (INFOCOM 2023): each cluster trains independently and only its members' deltas are fused.
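Cluster-by-cluster fusion can be sketched as a grouped average: each client's delta only contributes to the model of its own cluster. The function below uses assumed data structures (a client-to-cluster map and deltas as flat lists of floats) purely to illustrate the grouping; the real aggregation operates on model weight tensors.

```python
# Sketch of per-cluster delta aggregation (assumed data structures).

def aggregate_by_cluster(cluster_of, deltas):
    """Average deltas within each cluster only.

    cluster_of: client_id -> cluster_id
    deltas:     client_id -> flat list of floats
    Returns:    cluster_id -> element-wise average of member deltas
    """
    sums, counts = {}, {}
    for client, delta in deltas.items():
        cid = cluster_of[client]
        acc = sums.setdefault(cid, [0.0] * len(delta))
        for i, v in enumerate(delta):
            acc[i] += v
        counts[cid] = counts.get(cid, 0) + 1
    return {cid: [v / counts[cid] for v in acc]
            for cid, acc in sums.items()}
```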
When do_optimized_clustering is enabled, the implementation gathers per-client training times and cosine similarities (examples/unlearning/knot/knot_server.py:660-811) and feeds them to the CVXOPT solver in examples/unlearning/knot/solver.py:1-118, matching the optimization-based client assignment proposed in Section 4 of the paper.
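To convey the flavour of that optimization without reproducing the CVXOPT formulation, the sketch below brute-forces a cost-minimizing assignment on a tiny instance using only the standard library. The cost function, its weights, and the per-client similarity score are all assumptions for illustration; the actual solver in solver.py solves a linear program, and this toy objective (small training-time spread within a cluster, high data similarity among members) is only a stand-in for it.

```python
# Toy stand-in for optimization-based client-to-cluster assignment.
# The cost formula and weights are assumptions, not the paper's LP.

from itertools import product

def assignment_cost(assign, train_time, similarity, alpha=0.5):
    """Mix per-cluster training-time spread with data dissimilarity."""
    cost = 0.0
    for c in set(assign):
        members = [i for i, a in enumerate(assign) if a == c]
        times = [train_time[i] for i in members]
        cost += alpha * (max(times) - min(times))          # staleness spread
        cost += (1 - alpha) * sum(1 - similarity[i] for i in members)
    return cost

def best_assignment(n_clients, n_clusters, train_time, similarity):
    """Exhaustively search all assignments (feasible only for tiny n)."""
    return min(product(range(n_clusters), repeat=n_clients),
               key=lambda a: assignment_cost(a, train_time, similarity))
```

With two fast and two slow clients, the search groups clients of similar speed together, which is the intuition behind clustering by training time.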
The clustered tester and rollback hooks (examples/unlearning/knot/knot_server.py:216-340 and examples/unlearning/knot/knot_trainer.py:13-84) ensure that each cluster is retrained in isolation until convergence before the global model is recombined, just as the paper specifies for asynchronous unlearning.
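The retrain-then-recombine flow can be sketched as below. The function names, the accuracy-threshold loop, and the final element-wise average are assumptions chosen for brevity; the real code drives per-cluster testing and rollback through the hooks cited above.

```python
# Assumed-name sketch: retrain each cluster in isolation until it
# reaches a target accuracy, then recombine into one global model.

def retrain_clusters(clusters, train_step, target_acc):
    """clusters:   cluster_id -> model (flat list of floats)
    train_step: callable(cluster_id, model) -> (new_model, accuracy)
    Returns the element-wise average of the converged cluster models."""
    models = {}
    for cid, model in clusters.items():
        acc = 0.0
        while acc < target_acc:          # retrain this cluster in isolation
            model, acc = train_step(cid, model)
        models[cid] = model
    # Recombine: simple element-wise average across cluster models.
    n = len(models)
    size = len(next(iter(models.values())))
    return [sum(m[i] for m in models.values()) / n for i in range(size)]
```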