Home
Hi, I'm Rachid EL Mokadem, a Software Engineer with a Ph.D. in Machine Learning
I am a senior software engineer with strong AI/ML skills and a Ph.D. in machine learning. With over 11 years of experience, I have built a portfolio that spans full-stack software development and AI research. My work includes developing and deploying models for natural language processing, computer vision, and data analytics. I am passionate about harnessing cutting-edge technologies to solve complex challenges and transform ideas into impactful, scalable solutions.
My current work
During my Ph.D. studies, I focused on applying Federated Learning (FL) to IoT devices. This area has garnered significant interest as artificial intelligence (AI) becomes increasingly embedded in every aspect of our lives, alongside the emergence of the Internet of Things (IoT) and smart cities. Federated Learning, in particular, promises to bring on-device intelligence to end-user devices and smart objects while preserving the privacy of their data.
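To make the FL setting concrete, here is a minimal FedAvg-style round in Python/NumPy: each client trains on its own data and uploads its full model, and the server averages the weights layer by layer, so raw data never leaves the device. The `local_update` stand-in and the toy model shapes are illustrative assumptions, not code from my research.

```python
import numpy as np

def local_update(weights, lr=0.01):
    """Stand-in for one client's local training pass on its private data.
    A real client would run a few epochs of SGD here."""
    fake_grads = [np.random.randn(*w.shape) * 0.01 for w in weights]
    return [w - lr * g for w, g in zip(weights, fake_grads)]

def fedavg_round(global_weights, num_clients):
    """One FedAvg round: every client uploads its FULL model and the
    server averages each layer across clients."""
    client_models = [local_update([w.copy() for w in global_weights])
                     for _ in range(num_clients)]
    return [np.mean([m[i] for m in client_models], axis=0)
            for i in range(len(global_weights))]

# Toy two-layer model shared with three clients; only weights travel.
global_weights = [np.zeros((4, 8)), np.zeros((8, 2))]
for _ in range(5):
    global_weights = fedavg_round(global_weights, num_clients=3)
```

Note that in full-model FedAvg, every round uploads every layer of every client's model; that per-round cost is exactly what my later work targets.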
My first research project was a systematic mapping study (SMS) on optimizing Federated Learning techniques for energy-constrained devices. This study allowed us to delve into the approaches and findings published by the research community, identify limitations in existing techniques, and uncover potential directions for improvement. The work is documented in a publication in Springer Cluster Computing, which can be accessed here. I further deepened my understanding of the subject through extensive experimentation with state-of-the-art technologies, detailed on the Projects page of my website.
Expanding upon this foundation, we introduced XFL (eXtreme Federated Learning), an approach that significantly reduces data exchange volume by transmitting only a single layer of each client's model in each round. This layer-wise model aggregation strategy is our main contribution and shows strong potential to substantially lower communication costs. Our validation experiments demonstrated up to 88.9% data reduction with minimal impact on the global model's performance. This work is a significant step towards making federated learning more efficient and practical for resource-constrained devices, such as those used in IoT and mobile environments. The findings from this research are detailed in another publication in Springer Cluster Computing, available here.
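To illustrate the layer-wise idea, the sketch below shows a round in the spirit of XFL: each client uploads only one layer per round, and the server aggregates just that layer while the rest of the global model stays unchanged. The round-robin layer schedule and the `local_update` stand-in are illustrative assumptions, not the exact algorithm from the paper.

```python
import numpy as np

def local_update(weights, lr=0.01):
    """Stand-in for a client's local training pass (as in the sketch above)."""
    fake_grads = [np.random.randn(*w.shape) * 0.01 for w in weights]
    return [w - lr * g for w, g in zip(weights, fake_grads)]

def single_layer_round(global_weights, num_clients, rnd):
    """One round where each client transmits ONLY one layer. With L layers
    of similar size, per-round upload is roughly 1/L of a full exchange."""
    layer = rnd % len(global_weights)            # round-robin schedule (assumed)
    uploads = []
    for _ in range(num_clients):
        local = local_update([w.copy() for w in global_weights])
        uploads.append(local[layer])             # only this layer leaves the device
    global_weights[layer] = np.mean(uploads, axis=0)  # aggregate just that layer
    return global_weights

global_weights = [np.zeros((4, 8)), np.zeros((8, 2))]
for rnd in range(6):
    global_weights = single_layer_round(global_weights, num_clients=3, rnd=rnd)
```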
Next, we worked on an improved technique that further reduces communication overhead. Our method combines weight sparsification with a single-layer update of the shared neural network model. This dual-purpose approach not only minimizes the volume of data transmitted during each training round but also lessens the computational burden on resource-limited devices. In empirical evaluations, it achieved up to a 98.3% reduction in data exchanged during the aggregation phase, with only a minimal compromise in model performance. Moreover, we observed that the communication savings scale with model size, making our strategy particularly advantageous for large, complex models. This work opens the door to more energy-efficient and scalable federated learning implementations, especially for IoT and mobile devices. The results of this research are detailed in a separate publication in Elsevier Procedia Computer Science, available here.
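The sketch below combines both ideas: one layer is exchanged per round, and that layer's weight delta is sparsified before upload, here with simple top-k magnitude selection. The top-k scheme, the `k_ratio` value, and the `local_update` stand-in are illustrative assumptions; the sparsification method used in the paper may differ.

```python
import numpy as np

def local_update(weights, lr=0.01):
    """Stand-in for a client's local training pass (as in the sketches above)."""
    fake_grads = [np.random.randn(*w.shape) * 0.01 for w in weights]
    return [w - lr * g for w, g in zip(weights, fake_grads)]

def top_k_sparsify(delta, k_ratio=0.05):
    """Zero out all but the largest-magnitude entries of a weight delta.
    A real system would transmit only the surviving (index, value) pairs."""
    k = max(1, int(k_ratio * delta.size))
    threshold = np.partition(np.abs(delta).ravel(), -k)[-k]
    return np.where(np.abs(delta) >= threshold, delta, 0.0)

def sparse_single_layer_round(global_weights, num_clients, rnd):
    """Single-layer exchange plus sparsification: each client uploads a
    sparse delta for one layer; the server applies the averaged delta."""
    layer = rnd % len(global_weights)            # round-robin schedule (assumed)
    deltas = []
    for _ in range(num_clients):
        local = local_update([w.copy() for w in global_weights])
        deltas.append(top_k_sparsify(local[layer] - global_weights[layer]))
    global_weights[layer] += np.mean(deltas, axis=0)
    return global_weights

global_weights = [np.zeros((4, 8)), np.zeros((8, 2))]
for rnd in range(6):
    global_weights = sparse_single_layer_round(global_weights, num_clients=3, rnd=rnd)
```

Because only one layer's sparse delta travels per round, the savings grow with the number and size of the model's layers, which matches the observation above that the benefit scales with model size.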