Publications

Published articles

Federated learning for energy constrained devices: a systematic mapping study 

This article presents a comprehensive study of Federated Machine Learning (FedML) and its optimization for energy-constrained devices such as IoT and mobile devices.

Abstract

Federated machine learning (FedML) is a new distributed machine learning technique that uses clients' local data to collaboratively train a global model without transmitting the datasets. Nodes, the devices participating in the ML training, only send parameter updates (e.g., weight updates in the case of neural networks), which are fused together by the server to build the global model without compromising raw data privacy. FedML guarantees confidentiality by not divulging node data to third-party central servers. Data privacy is a crucial network security aspect of FedML that enables the technique in the context of data-sensitive Internet of Things (IoT) and mobile applications (including smart geo-location and smart grid infrastructure). However, most IoT and mobile devices are particularly energy constrained, which requires optimizing the FedML process for efficient training tasks and reduced power consumption. This paper is, to the best of our knowledge, the first Systematic Mapping Study (SMS) on FedML for energy-constrained devices. First, we selected 67 of 800 papers that satisfy our criteria, then provide a structured overview of the field using a set of carefully chosen research questions. Finally, we offer an analysis of state-of-the-art FedML techniques and outline potential recommendations for the research community.
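The fusion step described in the abstract — clients send only parameter updates and the server averages them into a global model — can be sketched in a few lines. This is a minimal FedAvg-style illustration, not the paper's implementation; the function names and flat weight-vector representation are assumptions for clarity.

```python
# Minimal sketch of parameter-update fusion: each client trains locally
# and transmits only its weight vector; the server averages the vectors,
# weighted by local dataset size, so raw data never leaves the client.

def fedavg_aggregate(client_updates, client_sizes):
    """Fuse per-client weight updates into a single global update.

    client_updates: list of flat weight vectors (one per client)
    client_sizes:   number of local training samples per client
    """
    total = sum(client_sizes)
    n_params = len(client_updates[0])
    global_update = [0.0] * n_params
    for update, size in zip(client_updates, client_sizes):
        weight = size / total  # clients with more data count more
        for i, w in enumerate(update):
            global_update[i] += weight * w
    return global_update

# Two clients with different amounts of local data:
updates = [[1.0, 2.0], [3.0, 4.0]]
sizes = [1, 3]  # second client holds 3x the samples
print(fedavg_aggregate(updates, sizes))  # [2.5, 3.5]
```

Note that only the weight vectors cross the network; the per-client datasets stay on-device, which is the confidentiality property the abstract highlights.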

Keywords Federated machine learning, Energy optimization, Internet of Things, Edge and mobile computing, On-device intelligence, Systematic mapping study

Cite this article

El Mokadem, R., Ben Maissa, Y. & El Akkaoui, Z. Federated learning for energy constrained devices: a systematic mapping study. Cluster Comput (2022). https://doi.org/10.1007/s10586-022-03763-4

DOI: https://doi.org/10.1007/s10586-022-03763-4

Article link: https://link.springer.com/article/10.1007/s10586-022-03763-4

eXtreme Federated Learning (XFL): a layer-wise approach 

Abstract

Federated learning (FL) is a machine learning technique that builds models by using distributed data across devices. FL aggregates parameter updates from locally trained models, avoiding the need for user data exchange. However, for resource-constrained devices (e.g., IoT and mobile devices), optimization in FL becomes crucial, particularly with large deep learning models. In this work, we introduce XFL (eXtreme Federated Learning), a new approach that aims to drastically reduce the amount of exchanged data by transmitting only a single layer of each client's model in each round. Our main contribution lies in the development and evaluation of this layer-wise model aggregation strategy, which demonstrates its potential to significantly reduce communication costs. Validation experiments show up to 88.9% data reduction, with a small impact on the global model's performance compared to the baseline algorithm. By effectively addressing the communication challenge, XFL enables more efficient and practical federated learning on resource-constrained devices.
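The core idea — each client transmits a single layer per round instead of the full model — can be illustrated with a toy round. This is an illustrative sketch only: the round-robin layer-selection policy and all function names here are assumptions, not XFL's actual design.

```python
# Illustrative layer-wise exchange: per round, each client uploads one
# layer's weights; the server averages that layer into the global model.
# Communication per round shrinks from the whole model to one layer.

def select_layer(round_idx, layer_names):
    """Round-robin layer selection (one possible policy)."""
    return layer_names[round_idx % len(layer_names)]

def client_payload(local_model, round_idx):
    """Transmit a single layer's weights instead of the whole model."""
    layers = sorted(local_model)
    name = select_layer(round_idx, layers)
    return name, local_model[name]

def server_merge(global_model, payloads):
    """Average the received layer across clients into the global model."""
    name = payloads[0][0]
    vectors = [weights for _, weights in payloads]
    global_model[name] = [sum(vals) / len(vals) for vals in zip(*vectors)]
    return global_model

model_a = {"conv1": [1.0, 1.0], "fc": [0.0, 2.0]}
model_b = {"conv1": [3.0, 5.0], "fc": [4.0, 6.0]}
global_model = {"conv1": [0.0, 0.0], "fc": [0.0, 0.0]}

# Round 0: both clients send only "conv1"; "fc" is not transmitted.
payloads = [client_payload(model_a, 0), client_payload(model_b, 0)]
global_model = server_merge(global_model, payloads)
print(global_model["conv1"])  # [2.0, 3.0]
```

In this two-layer toy model, each round uploads half the parameters, which is the kind of communication saving the abstract quantifies at up to 88.9% for larger models.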

Cite this article

El Mokadem, R., Ben Maissa, Y. & El Akkaoui, Z. eXtreme Federated Learning (XFL): a layer-wise approach. Cluster Comput (2024). https://doi.org/10.1007/s10586-023-04242-0

DOI: https://doi.org/10.1007/s10586-023-04242-0

Article link: https://link.springer.com/article/10.1007/s10586-023-04242-0

Keywords Distributed machine learning; Federated learning; Resource-constrained devices; Layer-wise optimization; Communications optimization