With the widespread deployment of networks of data-collecting agents, distributed optimization and learning methods have become preferable to centralized solutions. Typically, distributed machine learning problems are solved by having the network's agents aim for a common (or consensus) model. In certain applications, however, each agent may be interested in meeting a personal goal that differs from the consensus solution. This problem is referred to as (asynchronous) distributed learning of personalized models: each agent reaches a compromise between agreeing with its neighbours and minimizing its personal loss function. We present a Jacobi-like distributed algorithm that converges with probability one to the centralized solution, provided the personal loss functions are strongly convex. We then show that our algorithm's performance is comparable to or better than that of distributed ADMM in a number of applications. These experiments also suggest that our Jacobi method converges linearly to the centralized solution.
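For concreteness, a minimal sketch of a personalized-model formulation of the kind described above (the symbols $f_n$, $x_n$, the coupling weight $\rho$, and the quadratic disagreement penalty are illustrative assumptions, not necessarily the paper's exact setup): each agent $n$ keeps its own model $x_n$, and disagreement across network edges $E$ is penalized,
\begin{equation*}
  \min_{x_1,\dots,x_N}\; \sum_{n=1}^{N} f_n(x_n) \;+\; \frac{\rho}{2} \sum_{(n,m)\in E} \lVert x_n - x_m \rVert^{2} .
\end{equation*}
A Jacobi-like asynchronous step then has the currently active agent $n$ refresh its model using only the latest copies of its neighbours' models $\{x_m : m \in \mathcal{N}_n\}$,
\begin{equation*}
  x_n^{+} \;=\; \operatorname*{arg\,min}_{x}\; f_n(x) \;+\; \frac{\rho}{2} \sum_{m \in \mathcal{N}_n} \lVert x - x_m \rVert^{2} ,
\end{equation*}
which makes explicit the compromise between the personal loss and agreement with neighbours.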
DJAM – Distributed Jacobi Asynchronous Method for Learning Personalized Models
June 6, 2019
1:00 pm