Experimenting with Federated Machine Learning
This project is an implementation of, and experimentation with, Federated Machine Learning, applied to image classification tasks (MNIST and CIFAR-10).
In this project I ran three experiments with different setups, as detailed in the next paragraphs. First, I will introduce the federated learning technique and its benefits; then I will present the setups and methods used in my experiments, along with the results I obtained. Finally, I will discuss some limitations of the technique and the approaches used in the literature to improve it.
Machine learning, in general, is about training a "mathematical" model on a data-set in order to learn the underlying patterns in the data, which allows the resulting model to classify and recognize unseen data based on the acquired 'knowledge'. Federated Machine Learning is a distributed technique in which a number of independent devices, generally with low compute capacity, collaboratively train a model on their own local data.

The key aspect of federated ML is that local data is never shared with other devices, nor with a central server; only the learned model weights are sent to the server, which aggregates all received models into a global model and sends it back to the client devices for the next training round. These operations are repeated for a number of rounds and produce a model that carries the aggregated knowledge from all local data-sets.
Steps of the algorithm
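Concretely, each round proceeds as described above: the server broadcasts the current global weights, every client trains locally for a few epochs on its own data, and the server averages the returned weights into a new global model. Below is a minimal, runnable Python/NumPy sketch of this loop; it uses a toy linear model in place of the actual CNN, and all names and the synthetic data are illustrative, not taken from the project code:

```python
import numpy as np

rng = np.random.default_rng(0)

def make_client_data(n=800):
    """Synthetic local data-set: y = 3*x + 1 + noise (illustrative only)."""
    x = rng.normal(size=(n, 1))
    y = 3.0 * x + 1.0 + 0.1 * rng.normal(size=(n, 1))
    return x, y

def client_update(weights, data, epochs=10, lr=0.1):
    """Local training: a few epochs of gradient descent on the client's own data."""
    w, b = weights
    x, y = data
    for _ in range(epochs):
        err = x * w + b - y          # residuals of the linear model
        w -= lr * (err * x).mean()   # dL/dw for mean squared error
        b -= lr * err.mean()         # dL/db
    return w, b

def aggregate(client_weights):
    """Server step: element-wise average of all client models."""
    ws, bs = zip(*client_weights)
    return float(np.mean(ws)), float(np.mean(bs))

# Federated loop: 50 clients, 20 server rounds, 800 samples/client (as in setup 1).
clients = [make_client_data() for _ in range(50)]
global_weights = (0.0, 0.0)
for _round in range(20):
    local_models = [client_update(global_weights, data) for data in clients]
    global_weights = aggregate(local_models)
print(f"global model: w={global_weights[0]:.3f}, b={global_weights[1]:.3f}")
```

In the general FedAvg algorithm the server weights each client's contribution by its local data-set size; since every client here holds the same amount of data, that reduces to the plain average shown above.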
Experiment preparation
Preliminary experimental results
Setup 1:
Data-set: MNIST
50 nodes
20 server rounds
10 epochs/node
Data distribution: 800 images/node
Classes distribution: 4 random classes/node
Model: CNN (2 Conv layers + 1 hidden layer; sketched below)
Optimization algorithm: Adam
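For reference, a plausible PyTorch definition of the setup-1 model is sketched below; the write-up only specifies the layer counts and the optimizer, so the channel counts, kernel sizes, hidden width, and learning rate are my assumptions:

```python
import torch
import torch.nn as nn

class MnistCNN(nn.Module):
    """2 convolutional layers + 1 hidden fully-connected layer, as listed in
    setup 1. Channel counts and hidden width are assumed values."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1),   # 1x28x28 -> 16x28x28
            nn.ReLU(),
            nn.MaxPool2d(2),                              # -> 16x14x14
            nn.Conv2d(16, 32, kernel_size=3, padding=1),  # -> 32x14x14
            nn.ReLU(),
            nn.MaxPool2d(2),                              # -> 32x7x7
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * 7 * 7, 128),  # the single hidden layer
            nn.ReLU(),
            nn.Linear(128, 10),          # 10 digit classes
        )

    def forward(self, x):
        return self.classifier(self.features(x))

model = MnistCNN()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)  # Adam, as listed; lr assumed
```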
Setup 2:
Data-set: CIFAR10
10 nodes
10 server rounds
10 epochs/node
Data distribution: ~5000 images/node
Classes distribution: 10 classes/node
Model: CNN (4 Conv layers + 2 hidden layers; sketched below)
Optimization algorithm: SGD
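Similarly, a hypothetical PyTorch sketch of the setup-2 model; again, only the layer counts and the optimizer come from the list above, so the channel counts, hidden widths, learning rate, and momentum are assumptions:

```python
import torch
import torch.nn as nn

class Cifar10CNN(nn.Module):
    """4 convolutional layers + 2 hidden fully-connected layers, as listed in
    setup 2. Channel counts and hidden widths are assumed values."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, padding=1),   # 3x32x32 -> 32x32x32
            nn.ReLU(),
            nn.Conv2d(32, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),                              # -> 32x16x16
            nn.Conv2d(32, 64, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv2d(64, 64, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),                              # -> 64x8x8
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * 8 * 8, 256),  # hidden layer 1
            nn.ReLU(),
            nn.Linear(256, 128),         # hidden layer 2
            nn.ReLU(),
            nn.Linear(128, 10),          # 10 CIFAR-10 classes
        )

    def forward(self, x):
        return self.classifier(self.features(x))

model = Cifar10CNN()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9)  # SGD, as listed; lr/momentum assumed
```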