With the widespread deployment of data-collecting agent networks, distributed optimization and learning methods are becoming preferable to centralized solutions. Typically, distributed machine learning problems are solved by having the network's agents aim for a common (or consensus) model. In certain applications, however, each agent may be interested in meeting a personal goal that differs from the consensus solution. This problem is referred to as (asynchronous) distributed learning of personalized models: each agent reaches a compromise between agreeing with its neighbours and minimizing its personal loss function. We present a Jacobi-like distributed algorithm that converges with probability one to the centralized solution, provided the personal loss functions are strongly convex. We then show experimentally that our algorithm's performance is comparable to or better than that of distributed ADMM in a number of applications. The same experiments suggest that our Jacobi method converges linearly to the centralized solution.
DJAM – Distributed Jacobi Asynchronous Method for Learning Personalized Models
June 6, 2019
1:00 pm
Inês Almeida
Inês is currently doing a PhD on distributed optimization at IST/ISR. Before that, she worked for three years as a data scientist at a number of companies; her work focused mostly on credit scoring, mobile data analytics, and model explainability. She completed her Master's degree in Physics at IST in 2013.
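
The exact DJAM update is not spelled out in this announcement. As a rough illustration only, below is a minimal Python sketch of an asynchronous Jacobi-style scheme of the kind the abstract describes, assuming quadratic (hence strongly convex) personal losses and a quadratic disagreement penalty between neighbouring agents; the losses, the penalty weight rho, and the ring network are illustrative assumptions, not details from the talk.

```python
# Illustrative sketch (not necessarily the exact DJAM algorithm): asynchronous
# Jacobi-style block updates for personalized models. Each agent i has a
# personal loss f_i(x) = 0.5*||A_i x - b_i||^2 and pays (rho/2)*||x_i - x_j||^2
# for disagreeing with each neighbour j. All problem data here is made up.
import numpy as np

rng = np.random.default_rng(0)
n_agents, dim, rho = 5, 3, 1.0

# Random strongly convex personal losses and a ring communication graph.
A = [rng.standard_normal((10, dim)) for _ in range(n_agents)]
b = [rng.standard_normal(10) for _ in range(n_agents)]
neighbors = {i: [(i - 1) % n_agents, (i + 1) % n_agents] for i in range(n_agents)}

x = [np.zeros(dim) for _ in range(n_agents)]  # each agent's personal model

for _ in range(2000):
    i = rng.integers(n_agents)  # one agent wakes up asynchronously
    # Exact block minimization with the neighbours' latest models held fixed:
    # x_i <- argmin_x 0.5*||A_i x - b_i||^2 + (rho/2) * sum_j ||x - x_j||^2
    H = A[i].T @ A[i] + rho * len(neighbors[i]) * np.eye(dim)
    g = A[i].T @ b[i] + rho * sum(x[j] for j in neighbors[i])
    x[i] = np.linalg.solve(H, g)

print(np.round(np.stack(x), 3))  # personalized models after the asynchronous updates
```

With strongly convex personal losses each block minimization above is well defined, which matches the setting in which the abstract claims convergence with probability one to the centralized solution.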