Sitemap

A list of all the posts and pages found on the site. For the robots out there, an XML version is available for digesting as well.

Pages

Posts

portfolio

publications

Flower: A Friendly Federated Learning Research Framework

Published in arXiv, 2022

This paper is about Flower, the federated learning framework developed at the University of Cambridge by the MLSys group.

Recommended citation: Beutel, D., Topal, T., Mathur, A., Qiu, X., Fernandez-Marques, J., Gao, Y., Sani, L., Kwing, H., Parcollet, T., Gusmão, P., & Lane, N. (2020). Flower: A Friendly Federated Learning Research Framework. arXiv preprint arXiv:2007.14390.
Download Paper


Sheaf HyperNetworks for Personalized Federated Learning

Published in arXiv, 2024

Sheaf HyperNetworks is a novel hypernetwork-based methodology that improves performance in any personalized federated learning setting.

Recommended citation: Bao Nguyen, Lorenzo Sani, Xinchi Qiu, Pietro Liò, & Nicholas D. Lane. (2024). Sheaf HyperNetworks for Personalized Federated Learning.
Download Paper

DEPT: Decoupled Embeddings for Pre-training Language Models

Published in arXiv, 2024

This work proposes a novel pre-training method for language models that decouples the embeddings from the rest of the model.

Recommended citation: Alex Iacob, Lorenzo Sani, Meghdad Kurmanji, William F. Shen, Xinchi Qiu, Dongqi Cai, Yan Gao, & Nicholas D. Lane. (2024). DEPT: Decoupled Embeddings for Pre-training Language Models.
Download Paper

Photon: Federated LLM Pre-Training

Published in arXiv, 2024

We present Photon, the first complete system for federated pre-training of large language models.

Recommended citation: Lorenzo Sani, Alex Iacob, Zeyu Cao, Royson Lee, Bill Marino, Yan Gao, Dongqi Cai, Zexi Li, Wanru Zhao, Xinchi Qiu, & Nicholas D. Lane. (2024). Photon: Federated LLM Pre-Training.
Download Paper

talks

Federating Large Language Models from Scratch @ Alan Turing Institute

The Research Engineering Team at the Alan Turing Institute invited me to present our work on federated learning of large language models. The talk was part of the seminar series titled “Robots in Disguise”. I presented how we can train large language models from scratch in a federated manner, alongside a broader introduction to federated learning and its main challenges.

teaching

Summer intern supervisor for CaMLSys@UniOfCam

Intern supervisor, University of Cambridge, Department of Computer Science and Technology, 2023

I supervised Allen Cong during his summer internship. The project focused on optimising a computer vision task on a Rock64 Rock Pi 4 SE equipped with an Intel RealSense camera.

Master Thesis Supervisor

Thesis supervisor, University of Cambridge, Department of Computer Science and Technology, 2023

I supervised Adriano Guastella (from the University of Bologna) during his Master’s Thesis project when he was visiting the Computer Laboratory. The work investigated the intersection between Powerpropagation, Sparse Weight Activation Training, and federated learning.

MPhil Thesis Supervisor

Thesis supervisor, University of Cambridge, Department of Computer Science and Technology, 2023

I co-supervised Bao Nguyen during his MPhil Thesis project. The work investigated the possibility of applying sheaf neural networks in the context of federated learning. A research paper describing our approach is available on arXiv.