Publications

Published articles

Federated learning for energy constrained devices: a systematic mapping study 

This article presents a comprehensive study of Federated Machine Learning (FedML) and its optimization for energy-constrained devices such as IoT and mobile devices.

Abstract

Federated machine learning (FedML) is a distributed machine learning technique that uses clients’ local data to collaboratively train a global model without transmitting the datasets. Nodes, the participating devices in the ML training, only send parameter updates (e.g., weight updates in the case of neural networks), which are fused together by the server to build the global model without compromising raw data privacy. FedML guarantees confidentiality by not divulging node data to third parties or central servers. Data privacy is a crucial network security aspect of FedML that enables the technique’s use in data-sensitive Internet of Things (IoT) and mobile applications (including smart geo-location and smart grid infrastructure). However, most IoT and mobile devices are particularly energy constrained, which requires optimizing the FedML process for efficient training tasks and reduced power consumption. This paper is, to the best of our knowledge, the first Systematic Mapping Study (SMS) on FedML for energy-constrained devices. First, we selected a total of 67 out of 800 papers that satisfy our criteria; we then provide a structured overview of the field using a set of carefully chosen research questions. Finally, we offer an analysis of state-of-the-art FedML techniques and outline potential recommendations for the research community.
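
To make the fusion step concrete, here is a minimal sketch (ours, not code from the paper) of the server-side aggregation the abstract describes, assuming clients send lists of NumPy weight arrays and the server averages them in proportion to local dataset sizes, as in FedAvg-style schemes; the function name `fed_avg` is an illustrative assumption:

```python
import numpy as np

def fed_avg(client_weights, client_sizes):
    """Fuse per-client weight lists into a global model, weighting each
    client's update by its local dataset size (FedAvg-style sketch)."""
    coeffs = np.array(client_sizes, dtype=float) / sum(client_sizes)
    global_weights = []
    for layers in zip(*client_weights):      # iterate layer by layer
        stacked = np.stack(layers)           # shape: (n_clients, *layer_shape)
        global_weights.append(np.average(stacked, axis=0, weights=coeffs))
    return global_weights

# Example: two clients, a tiny two-layer model, client 2 has twice the data.
w1 = [np.ones((2, 2)), np.zeros(2)]
w2 = [np.zeros((2, 2)), np.ones(2)]
print(fed_avg([w1, w2], client_sizes=[100, 200]))
```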

Keywords Federated machine learning, Energy optimization, Internet of Things, Edge and mobile computing, On-device intelligence, Systematic mapping study

Cite this article

El Mokadem, R., Ben Maissa, Y. & El Akkaoui, Z. Federated learning for energy constrained devices: a systematic mapping study. Cluster Comput (2022). https://doi.org/10.1007/s10586-022-03763-4

DOI: https://doi.org/10.1007/s10586-022-03763-4

Article link: https://link.springer.com/article/10.1007/s10586-022-03763-4

eXtreme Federated Learning (XFL): a layer-wise approach 

Abstract

Federated learning (FL) is a machine learning technique that builds models by using distributed data across devices. FL aggregates parameter updates from locally trained models, avoiding the need for user data exchange. However, for resource-constrained devices (e.g., IoT and mobile devices), optimization in FL becomes crucial, particularly with large deep learning models. In this work, we introduce XFL (eXtreme Federated Learning), a new approach that aims to drastically reduce the amount of exchanged data by transmitting only a single layer of each client’s model in each round. Our main contribution lies in the development and evaluation of this layer-wise model aggregation strategy, which demonstrates its potential to significantly reduce communication costs. Validation experiments demonstrate up to 88.9% data reduction, with a small impact on the global model’s performance compared to the baseline algorithm. By effectively addressing the communication challenge, XFL enables more efficient and practical federated learning on resource-constrained devices.
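
The single-layer exchange idea can be sketched as follows. This is our illustrative reconstruction, not the paper’s exact algorithm: the round-robin layer schedule and the function names are assumptions made for the example.

```python
import numpy as np

def select_layer(round_idx, num_layers):
    """Choose the single layer transmitted this round (round-robin here;
    the actual selection strategy is the paper's design choice)."""
    return round_idx % num_layers

def client_payload(local_weights, round_idx):
    """Client side: send one layer instead of the whole model,
    shrinking the per-round payload accordingly."""
    k = select_layer(round_idx, len(local_weights))
    return k, local_weights[k]

def server_aggregate(global_weights, payloads):
    """Server side: average the received layer across clients and patch
    it into the global model; all other layers stay unchanged."""
    k = payloads[0][0]                        # same layer from every client
    global_weights[k] = np.mean(np.stack([w for _, w in payloads]), axis=0)
    return global_weights
```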

Cite this article

El Mokadem, R., Ben Maissa, Y. & El Akkaoui, Z. eXtreme Federated Learning (XFL): a layer-wise approach. Cluster Comput (2024). https://doi.org/10.1007/s10586-023-04242-0

DOI: https://doi.org/10.1007/s10586-023-04242-0

Article link: https://link.springer.com/article/10.1007/s10586-023-04242-0

Keywords Distributed machine learning; Federated learning; Resource-constrained devices; Layer-wise optimization; Communications optimization

Federated Learning Communications Optimization Using Sparse Single-Layer Updates

Abstract

Federated learning has emerged as a robust framework for distributed machine learning, enabling model training across decentralized data sources while preserving data privacy. Despite its advantages, a persistent challenge remains: the high communication overhead during the model update process results in high energy consumption on client devices. This paper introduces a new approach that combines (1) a weight sparsification technique with (2) a single-layer update of the shared neural network model. Our proposal serves dual purposes: it significantly reduces the volume of data transmitted during each training round while lessening the computational burden on resource-limited devices. Through empirical evaluations, we observe a reduction of up to 98.3% in data exchange during the aggregation phase without significantly compromising the model’s performance. Moreover, we find that the communication cost savings scale with the size of the model, making our approach particularly advantageous for large, complex models. This work opens the door to more energy-efficient and scalable federated learning implementations, especially in resource-constrained environments like IoT and mobile devices.
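
As a rough illustration of how the two ingredients fit together, the following sketch (our assumption-laden reconstruction, not the paper’s code) sparsifies the single transmitted layer’s update with a top-k magnitude rule; the 10% density and the function names are illustrative choices:

```python
import numpy as np

def sparse_layer_update(global_layer, local_layer, density=0.1):
    """Client side: compute the chosen layer's update, then keep only
    the top `density` fraction of entries by magnitude. The returned
    (indices, values) pair is all that gets transmitted."""
    delta = (local_layer - global_layer).ravel()
    k = max(1, int(density * delta.size))
    idx = np.argpartition(np.abs(delta), -k)[-k:]  # largest-magnitude entries
    return idx, delta[idx]

def apply_sparse_update(global_layer, idx, values):
    """Server side: add the sparse values back into the global layer."""
    np.put(global_layer, idx, global_layer.ravel()[idx] + values)
    return global_layer
```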

Cite this article

El Mokadem, R., Ben Maissa, Y. & El Akkaoui, Z. (2024). Federated Learning Communications Optimization Using Sparse Single-Layer Updates. Procedia Computer Science, 236, 168-176.

DOI: https://doi.org/10.1016/j.procs.2024.05.018

Article link: https://www.sciencedirect.com/science/article/pii/S1877050924010342

Keywords Federated Machine Learning; IoT and mobile; Energy Efficiency; Communications Optimization