
Special Issue: Optimization for Data-Driven Learning and Control

Volume 108, Issue 11

November 2020

Guest Editors

Usman A. Khan, Waheed U. Bajwa, Angelia Nedić, Michael G. Rabbat, and Ali H. Sayed

Special Issue Papers

By U. A. Khan, W. U. Bajwa, A. Nedić, M. G. Rabbat, and A. H. Sayed

By R. Xin, S. Pu, A. Nedić, and U. A. Khan

This article presents a general framework for distributed first-order methods for minimizing a finite sum of functions that is applicable to both undirected and directed graphs. Such problems have attracted significant interest in control, signal processing, and estimation, and more recently in large-scale data science and machine learning.
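To make the setting concrete, the following is a minimal sketch, not taken from the article, of decentralized gradient descent for a finite-sum least-squares problem; the complete-graph mixing matrix W, the step size, and the synthetic data are illustrative assumptions.

import numpy as np

# Toy finite-sum problem: agent i holds (A[i], b[i]) and all agents
# cooperate to minimize sum_i 0.5*||A[i] x - b[i]||^2 over a graph.
n, d = 5, 3
rng = np.random.default_rng(0)
A = [rng.standard_normal((10, d)) for _ in range(n)]
b = [rng.standard_normal(10) for _ in range(n)]
W = np.full((n, n), 1.0 / n)      # doubly stochastic mixing matrix (complete graph assumed)

x = np.zeros((n, d))              # one local copy of the decision variable per agent
alpha = 0.01                      # constant step size (illustrative)
for _ in range(500):
    grads = np.array([A[i].T @ (A[i] @ x[i] - b[i]) for i in range(n)])
    x = W @ x - alpha * grads     # average with neighbors, then take a local gradient step

print("disagreement across agents:", np.linalg.norm(x - x.mean(axis=0)))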

By A. K. Sahu and S. Kar

This article presents an overview of the recent work in the area of distributed zeroth-order optimization, focusing on constrained optimization settings and algorithms built around the Frank–Wolfe framework.
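As a rough, hypothetical illustration of the ingredients involved, the sketch below combines a two-point zeroth-order gradient estimate with a Frank-Wolfe step over an l1-ball constraint on a single toy problem; the smoothing parameter mu, the number of random directions, and the data are assumptions, and the article's distributed setting is not reproduced here.

import numpy as np

rng = np.random.default_rng(1)
d, radius = 20, 1.0
A = rng.standard_normal((50, d)); b = rng.standard_normal(50)
f = lambda x: 0.5 * np.linalg.norm(A @ x - b) ** 2   # only function values are available

x = np.zeros(d)
mu = 1e-4                                   # smoothing parameter for the finite differences
for t in range(1, 201):
    g = np.zeros(d)
    for _ in range(10):                     # average a few two-point gradient estimates
        u = rng.standard_normal(d)
        g += (f(x + mu * u) - f(x - mu * u)) / (2 * mu) * u
    g /= 10
    j = np.argmax(np.abs(g))                # linear minimization oracle over the l1-ball
    s = np.zeros(d); s[j] = -radius * np.sign(g[j])
    x = x + (2.0 / (t + 2)) * (s - x)       # Frank-Wolfe convex-combination update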

By A. Mokhtari and A. Ribeiro

This article discusses recent developments to accelerate convergence of stochastic optimization through the exploitation of second-order information and shows applications in the context of predicting the click-through rate of an advertisement displayed in response to a specific search engine query.
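For flavor, here is a small, hypothetical sketch of a stochastic second-order step for logistic regression, a common stand-in for click-through-rate prediction; the mini-batch size, damping term, and synthetic data are assumptions, and the scheme is generic rather than the article's specific algorithm.

import numpy as np

rng = np.random.default_rng(2)
N, d = 1000, 10
X = rng.standard_normal((N, d))
y = (rng.random(N) < 1.0 / (1.0 + np.exp(-X @ rng.standard_normal(d)))).astype(float)

w = np.zeros(d)
for _ in range(100):
    idx = rng.choice(N, size=64, replace=False)          # sample a mini-batch
    Xb, yb = X[idx], y[idx]
    p = 1.0 / (1.0 + np.exp(-Xb @ w))                    # predicted click probabilities
    grad = Xb.T @ (p - yb) / len(idx)
    H = (Xb * (p * (1 - p))[:, None]).T @ Xb / len(idx) + 1e-3 * np.eye(d)  # damped mini-batch Hessian
    w -= np.linalg.solve(H, grad)                        # stochastic Newton-type step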

By D. Jakoveti ́c, D. Bajovi ́c, J. Xavier, and J. M. F. Moura

This article focuses on the augmented Lagrangian method (ALM), in which a constrained optimization problem is solved through a series of unconstrained subproblems in the original (primal) variable, while the constraints are enforced via dual variables.
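A minimal sketch of the ALM mechanism described above, assuming a toy equality-constrained quadratic min 0.5*||x - c||^2 subject to A x = b; the penalty parameter rho and the data are illustrative.

import numpy as np

rng = np.random.default_rng(3)
d, m = 10, 3
A = rng.standard_normal((m, d)); b = rng.standard_normal(m)
c = rng.standard_normal(d)

x = np.zeros(d); lam = np.zeros(m); rho = 10.0
for _ in range(50):
    # unconstrained subproblem in the primal variable:
    # min_x 0.5*||x - c||^2 + lam^T (A x - b) + (rho/2)*||A x - b||^2,
    # whose optimality condition is linear and is solved in closed form here
    x = np.linalg.solve(np.eye(d) + rho * A.T @ A, c - A.T @ lam + rho * A.T @ b)
    lam = lam + rho * (A @ x - b)        # dual (multiplier) update steers x toward feasibility

print("constraint violation:", np.linalg.norm(A @ x - b))

Each outer iteration trades an exact constrained solve for an easier unconstrained one, with the multiplier update gradually enforcing A x = b.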

By G. França and J. Bento

This article reviews recent research quantifying the influence of the network topology on the convergence behavior of distributed methods and further explores the connections between the alternating direction method of multipliers (ADMM) and lifted Markov chains.
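For reference, a compact, hypothetical ADMM sketch for the lasso problem min 0.5*||A x - b||^2 + lam*||z||_1 subject to x = z; the penalty rho, regularization weight, and data are assumptions, and the lifted-Markov-chain connection itself is not illustrated.

import numpy as np

rng = np.random.default_rng(4)
m, d = 40, 15
A = rng.standard_normal((m, d)); b = rng.standard_normal(m)
lam, rho = 0.1, 1.0

x = np.zeros(d); z = np.zeros(d); u = np.zeros(d)        # u is the scaled dual variable
AtA, Atb = A.T @ A, A.T @ b
for _ in range(200):
    x = np.linalg.solve(AtA + rho * np.eye(d), Atb + rho * (z - u))   # x-minimization
    z = np.sign(x + u) * np.maximum(np.abs(x + u) - lam / rho, 0.0)   # soft-thresholding z-update
    u = u + x - z                                                     # dual ascent on the consensus constraint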

By H. Jaleel and J. S. Shamma

This article presents a collection of state-of-the-art results for distributed optimization problems arising in the context of robot networks, with a focus on two special classes of problems, namely, real-time path planning for multirobot systems and self-organization in multirobot systems using game-theoretic approaches.

By R. M. Gower, M. Schmidt, F. Bach, and P. Richtárik

This article discusses stochastic variance-reduced optimization methods for problems where multiple passes through batch training data sets are allowed.
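As a concrete, hypothetical example of one such method, the sketch below implements an SVRG-style loop for a finite-sum least-squares problem, where each epoch recomputes a full gradient at a snapshot and then takes cheap variance-reduced stochastic steps; the step size and data are assumptions.

import numpy as np

rng = np.random.default_rng(5)
N, d = 200, 10
A = rng.standard_normal((N, d)); b = rng.standard_normal(N)
grad_i = lambda w, i: A[i] * (A[i] @ w - b[i])        # gradient of the i-th summand

w = np.zeros(d); eta = 0.01
for epoch in range(30):                               # multiple passes over the batch data
    w_snap = w.copy()
    full_grad = A.T @ (A @ w_snap - b) / N            # full gradient at the snapshot
    for _ in range(N):
        i = rng.integers(N)
        w -= eta * (grad_i(w, i) - grad_i(w_snap, i) + full_grad)   # variance-reduced step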

By M. Nokleby, H. Raja, and W. U. Bajwa

This article reviews recently developed methods that focus on distributed training of large-scale machine learning models from streaming data in the compute-limited and bandwidth-limited regimes, with an emphasis on convergence analysis that explicitly accounts for the mismatch between computation, communication, and streaming rates, and that provides sufficient conditions for order-optimum convergence.

By M. Assran, A. Aytekin, H. R. Feyzmahdavian, M. Johansson, and M. G. Rabbat

This article focuses on asynchronous parallel and distributed methods for large-scale optimization problems in machine learning, where the processors may maintain an inconsistent view of the optimization variables.
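To illustrate that inconsistency in a self-contained way, the hypothetical simulation below applies stochastic gradients computed at stale copies of the parameters, standing in for workers whose view of the variables lags behind the server; the maximum delay, step size, and data are assumptions.

import numpy as np
from collections import deque

rng = np.random.default_rng(6)
N, d = 500, 8
A = rng.standard_normal((N, d)); b = rng.standard_normal(N)

w = np.zeros(d); eta = 0.005; max_delay = 5
history = deque([w.copy()], maxlen=max_delay + 1)     # recent iterates a worker might have read
for _ in range(2000):
    w_stale = history[rng.integers(len(history))]     # worker computes with an out-of-date copy
    i = rng.integers(N)
    g = A[i] * (A[i] @ w_stale - b[i])                # stochastic gradient at the stale iterate
    w = w - eta * g                                   # the server applies it without waiting
    history.append(w.copy())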

By A. Simonetto, E. Dall’Anese, S. Paternain, G. Leus, and G. B. Giannakis

This article reviews a broad class of algorithms for time-varying optimization with an emphasis on both algorithmic development and performance analysis.
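As a toy, hypothetical illustration of the time-varying setting, the loop below takes a few correction (gradient) steps per sampling instant to track the drifting minimizer of a changing quadratic cost; the drift model, step size, and number of inner steps are assumptions.

import numpy as np

d = 4
w = np.zeros(d); eta = 0.2
for k in range(100):
    target = np.sin(0.1 * k) * np.ones(d)     # the cost, and hence its minimizer, drifts over time
    for _ in range(3):                        # a few correction steps per sampling instant
        w -= eta * (w - target)               # gradient step on 0.5*||w - target||^2 at time k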

By V. Matta, A. Santos, and A. H. Sayed

This article examines the network tomography problem and considers the question: How much information can one glean about the underlying graph topology by observing the behavior of certain distributed optimization methods over the graph nodes?

By H. Li, C. Fang, and Z. Lin

This article provides a comprehensive survey of accelerated first-order methods with a particular focus on stochastic algorithms and further introduces some recent developments on accelerated methods for nonconvex optimization problems.
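For orientation, here is a short sketch of Nesterov's accelerated gradient method on a smooth least-squares problem, one of the classical deterministic schemes such surveys start from; the data are synthetic, and the stochastic and nonconvex extensions discussed in the article are not shown.

import numpy as np

rng = np.random.default_rng(7)
m, d = 60, 20
A = rng.standard_normal((m, d)); b = rng.standard_normal(m)
L = np.linalg.norm(A, 2) ** 2                 # Lipschitz constant of the gradient of 0.5*||Ax-b||^2

x = np.zeros(d); y = np.zeros(d); t = 1.0
for _ in range(300):
    grad = A.T @ (A @ y - b)
    x_new = y - grad / L                      # gradient step from the extrapolated point
    t_new = (1 + np.sqrt(1 + 4 * t ** 2)) / 2
    y = x_new + ((t - 1) / t_new) * (x_new - x)   # momentum (extrapolation) step
    x, t = x_new, t_new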

Scanning Our Past
