We present distributed optimization algorithms for minimizing a sum of convex functions, each being the local cost function of an agent in a connected network. This problem arises in distributed learning, consensus, spectrum sensing for cognitive radio networks, and resource allocation, among other applications. We propose fast gradient-based methods that require fewer communication steps than currently available distributed algorithms for the same problem class and solution accuracy.
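Concretely, the setting admits the standard formulation (the symbols $N$, $f_i$, and $x$ are introduced here for concreteness and are not necessarily the paper's notation)

\[
\min_{x \in \mathbb{R}^d} \; f(x) \;=\; \sum_{i=1}^{N} f_i(x),
\]

where $f_i$ is the convex cost known only to agent $i$, and agents exchange information only with their neighbors in the connected network.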
We establish the convergence rates of the proposed methods theoretically and tie them to the network structure. Numerical simulations illustrate the achievable gains across several applications and network topologies.
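To make the algorithmic pattern concrete, the following is a minimal illustrative sketch of a generic Nesterov-accelerated distributed gradient iteration, not the specific algorithms proposed here. The mixing matrix W (assumed doubly stochastic and consistent with the network topology), the quadratic local costs A[i], b[i], and the step size alpha are all illustrative placeholders; each iteration spends exactly one communication round, the multiplication by W.

# Minimal illustrative sketch (not the paper's algorithms): a generic
# Nesterov-accelerated distributed gradient iteration. Assumptions:
# W is a doubly stochastic mixing matrix matching the connected network;
# each agent i holds a quadratic cost f_i(x) = 0.5 * ||A[i] @ x - b[i]||^2.
import numpy as np

def distributed_fast_gradient(A, b, W, alpha, iters):
    n, d = len(A), A[0].shape[1]
    x = np.zeros((n, d))   # row i: agent i's current estimate
    y = x.copy()           # momentum (auxiliary) sequence
    for k in range(iters):
        # Each agent evaluates only its own local gradient, at y_i.
        grad = np.stack([A[i].T @ (A[i] @ y[i] - b[i]) for i in range(n)])
        # One communication round (averaging with neighbors via W),
        # followed by a local gradient step.
        x_new = W @ y - alpha * grad
        # Nesterov momentum with the usual k/(k+3) weight.
        y = x_new + (k / (k + 3.0)) * (x_new - x)
        x = x_new
    # Rows approximately agree near a minimizer of sum_i f_i; exact
    # convergence requires refinements such as diminishing step sizes.
    return x

# Hypothetical example: 4 agents on a ring with least-squares costs.
rng = np.random.default_rng(0)
A = [rng.standard_normal((5, 3)) for _ in range(4)]
b = [rng.standard_normal(5) for _ in range(4)]
W = np.array([[0.5, 0.25, 0.0, 0.25],
              [0.25, 0.5, 0.25, 0.0],
              [0.0, 0.25, 0.5, 0.25],
              [0.25, 0.0, 0.25, 0.5]])  # doubly stochastic ring
x = distributed_fast_gradient(A, b, W, alpha=0.02, iters=500)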