DJAM – Distributed Jacobi Asynchronous Method for Learning Personalized Models

With the widespread deployment of networks of data-collecting agents, distributed optimization and learning methods have become preferable to centralized solutions. Typically, distributed machine learning problems are solved by having the network's agents aim for a common (or consensus) model. In certain applications, however, each agent may be interested in meeting a personal goal that differs from the consensus solution. This problem is referred to as (asynchronous) distributed learning of personalized models: each agent reaches a compromise between agreeing with its neighbours and minimizing its personal loss function. We present a Jacobi-like distributed algorithm which converges with probability one to the centralized solution, provided the personal loss functions are strongly convex. We then show empirically that our algorithm's performance is comparable to or better than that of distributed ADMM in a number of applications. The same experiments suggest that our Jacobi method converges linearly to the centralized solution.
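The compromise described above can be illustrated with a minimal sketch. Assume (purely for illustration; the talk's DJAM algorithm handles general strongly convex losses) that each agent i has a scalar quadratic personal loss f_i(x) = (x - a_i)^2 / 2 and pays an agreement penalty (lam/2)(x_i - x_j)^2 on each edge. The centralized solution then solves (I + lam·L)x = a, with L the graph Laplacian, and a Jacobi-style asynchronous scheme lets a randomly activated agent minimize its local objective while holding its neighbours' current models fixed:

```python
import numpy as np

rng = np.random.default_rng(0)

n = 5                                   # number of agents, on a path graph
edges = [(i, i + 1) for i in range(n - 1)]
neighbors = {i: [] for i in range(n)}
for i, j in edges:
    neighbors[i].append(j)
    neighbors[j].append(i)

lam = 1.0                               # agreement strength (hypothetical value)
a = rng.normal(size=n)                  # each agent's personal target

# Centralized solution of (I + lam * L) x = a, L = graph Laplacian.
L = np.zeros((n, n))
for i, j in edges:
    L[i, i] += 1; L[j, j] += 1
    L[i, j] -= 1; L[j, i] -= 1
x_star = np.linalg.solve(np.eye(n) + lam * L, a)

# Asynchronous Jacobi-style iteration: a random agent "wakes up" and
# minimizes its personal loss plus agreement penalty, neighbours fixed.
x = np.zeros(n)
for _ in range(2000):
    i = rng.integers(n)
    deg = len(neighbors[i])
    x[i] = (a[i] + lam * sum(x[j] for j in neighbors[i])) / (1 + lam * deg)

print(np.max(np.abs(x - x_star)))       # distance to centralized solution
```

For strongly convex quadratics the system matrix I + lam·L is strictly diagonally dominant, so the randomized updates drive x to the centralized solution; this toy run is consistent with the linear convergence the abstract reports.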

Inês Almeida

Inês is currently doing a PhD on distributed optimization at IST/ISR. Before that, she worked for three years as a data scientist at a number of companies; her work focused mostly on credit scoring, mobile data analytics, and model explainability. She completed her Master's degree in Physics at IST in 2013.