
Optimization for Data-Driven Learning and Control
Volume 108, Issue 11 | November 2020
Guest Editors:
Usman A. Khan, Waheed U. Bajwa, Angelia Nedić, Michael G. Rabbat, and Ali H. Sayed
Scanning the Issue
By U. A. Khan, W. U. Bajwa, A. Nedić, M. G. Rabbat, and A. H. Sayed
Special Issue Papers
A General Framework for Decentralized Optimization With First-Order Methods
By R. Xin, S. Pu, A. Nedić, and U. A. Khan
This article presents a general framework for distributed first-order methods for minimizing a finite sum of functions, applicable to both undirected and directed graphs. Such problems have attracted significant interest in control, signal processing, and estimation, and more recently in large-scale data science and machine learning.
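A minimal sketch of the gradient-tracking template underlying many such methods may help fix ideas; the complete-graph mixing matrix W and the scalar quadratic costs f_i(x) = 0.5(x - b_i)^2 are illustrative assumptions, not the paper's setting.

```python
import numpy as np

# Gradient-tracking sketch: n agents minimize (1/n) * sum_i f_i(x) over a
# graph. W is a doubly stochastic mixing matrix (here: complete-graph
# averaging); f_i(x) = 0.5*(x - b_i)^2 is an illustrative local cost.
n, T, alpha = 4, 200, 0.1
b = np.array([1.0, 2.0, 3.0, 4.0])               # each agent's private target
grad = lambda x, i: x - b[i]                     # gradient of f_i

W = np.full((n, n), 1.0 / n)                     # mixing weights
x = np.zeros(n)                                  # local iterates
y = np.array([grad(x[i], i) for i in range(n)])  # gradient trackers

for _ in range(T):
    x_new = W @ x - alpha * y                    # mix, then descend along tracker
    g_new = np.array([grad(x_new[i], i) for i in range(n)])
    g_old = np.array([grad(x[i], i) for i in range(n)])
    y = W @ y + g_new - g_old                    # track the average gradient
    x = x_new

print(x)  # all agents converge near the global minimizer, mean(b) = 2.5
```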
Decentralized Zeroth-Order Constrained Stochastic Optimization Algorithms: Frank–Wolfe and Variants With Applications to Black-Box Adversarial Attacks
By A. K. Sahu and S. Kar
This article presents an overview of the recent work in the area of distributed zeroth-order optimization, focusing on constrained optimization settings and algorithms built around the Frank–Wolfe framework.
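As a rough illustration of the two ingredients this line of work combines, the sketch below pairs a two-point zeroth-order gradient estimate with a Frank–Wolfe step over an l1 ball; the quadratic objective, smoothing radius, and constraint set are all hypothetical choices, not the paper's.

```python
import numpy as np

# Zeroth-order Frank-Wolfe sketch: the gradient is estimated from function
# values only, then a linear minimization oracle (LMO) over an l1 ball gives
# the update direction.
rng = np.random.default_rng(0)
d, T, mu, radius = 10, 300, 1e-4, 1.0
A = rng.standard_normal((d, d)); Q = A.T @ A / d
f = lambda x: 0.5 * x @ Q @ x - x[0]            # black-box objective

x = np.zeros(d)
for t in range(T):
    u = rng.standard_normal(d)
    g = (f(x + mu * u) - f(x - mu * u)) / (2 * mu) * u  # ZO gradient estimate
    s = np.zeros(d)
    i = np.argmax(np.abs(g))
    s[i] = -radius * np.sign(g[i])              # LMO over the l1 ball
    gamma = 2.0 / (t + 2)                       # standard Frank-Wolfe step size
    x = (1 - gamma) * x + gamma * s

print(f(x))
```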
Stochastic Quasi-Newton Methods
By A. Mokhtari and A. Ribeiro
This article discusses recent developments to accelerate convergence of stochastic optimization through the exploitation of second-order information and shows applications in the context of predicting the click-through rate of an advertisement displayed in response to a specific search engine query.
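A minimal sketch of one stochastic quasi-Newton device discussed in this literature: forming the BFGS curvature pair from two gradients on the same mini-batch, so that sampling noise does not corrupt the Hessian approximation. The logistic-regression data, batch size, and step size are illustrative.

```python
import numpy as np

# Stochastic BFGS sketch: mini-batch gradients, with the curvature pair
# (s, y) computed on the SAME batch at consecutive iterates.
rng = np.random.default_rng(1)
n, d, T, alpha = 500, 5, 400, 0.5
X = rng.standard_normal((n, d)); w_true = rng.standard_normal(d)
y_lbl = np.sign(X @ w_true)

def grad(w, idx):  # regularized logistic-loss gradient on a mini-batch
    m = y_lbl[idx] * (X[idx] @ w)
    return -(y_lbl[idx] / (1 + np.exp(m))) @ X[idx] / len(idx) + 1e-2 * w

w, H = np.zeros(d), np.eye(d)                    # iterate, inverse-Hessian estimate
for _ in range(T):
    idx = rng.choice(n, size=32, replace=False)
    g = grad(w, idx)
    w_new = w - alpha * H @ g
    s, yv = w_new - w, grad(w_new, idx) - g      # same batch for both gradients
    if s @ yv > 1e-10:                           # curvature condition
        rho = 1.0 / (s @ yv)
        V = np.eye(d) - rho * np.outer(s, yv)
        H = V @ H @ V.T + rho * np.outer(s, s)   # BFGS inverse update
    w = w_new

print(np.mean(np.sign(X @ w) == y_lbl))          # training accuracy
```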
Primal–Dual Methods for Large-Scale and Distributed Convex Optimization and Data Analytics
By D. Jakovetić, D. Bajović, J. Xavier, and J. M. F. Moura
This article focuses on the augmented Lagrangian method (ALM), where a constrained optimization problem is solved with a series of unconstrained subproblems, with respect to the original (primal) variable, while the constraints are controlled via dual variables.
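The ALM loop described above is easy to see on a toy equality-constrained quadratic, where each primal subproblem has a closed form; the problem instance below is an illustrative assumption.

```python
import numpy as np

# Augmented Lagrangian sketch for min 0.5*||x||^2 s.t. Ax = b: each outer
# iteration solves an unconstrained subproblem in the primal variable, then
# takes a dual ascent step on the multipliers.
rng = np.random.default_rng(2)
m, d, rho = 3, 6, 1.0
A = rng.standard_normal((m, d)); b = rng.standard_normal(m)

lam = np.zeros(m)
for _ in range(100):
    # argmin_x 0.5||x||^2 + lam.(Ax - b) + (rho/2)||Ax - b||^2 (closed form)
    x = np.linalg.solve(np.eye(d) + rho * A.T @ A, A.T @ (rho * b - lam))
    lam = lam + rho * (A @ x - b)                # dual update on the constraint
print(np.linalg.norm(A @ x - b))                 # constraint residual -> 0
```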
Distributed Optimization, Averaging via ADMM, and Network Topology
By G. França and J. Bento
This article reviews recent research quantifying the influence of the network topology on the convergence behavior of distributed methods and further explores the connections between the alternating direction method of multipliers (ADMM) and lifted Markov chains.
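For context, the sketch below runs consensus ADMM on the averaging problem in its global-consensus form; the node values and penalty parameter are illustrative, and the article's topology-dependent analysis concerns richer, graph-structured variants of this template.

```python
import numpy as np

# Consensus ADMM sketch for network averaging: node i holds a_i, and all
# nodes agree on a common z via proximal, consensus, and dual steps.
a = np.array([1.0, 4.0, 6.0, 9.0])               # private node values
n, rho, T = len(a), 1.0, 50
x, u, z = np.zeros(n), np.zeros(n), 0.0
for _ in range(T):
    x = (a + rho * (z - u)) / (1 + rho)          # local proximal updates
    z = np.mean(x + u)                           # consensus (averaging) step
    u = u + x - z                                # scaled dual update
print(z)  # -> mean(a) = 5.0
```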
Distributed Optimization for Robot Networks: From Real-Time Convex Optimization to Game-Theoretic Self-Organization
By H. Jaleel and J. S. Shamma
This article presents a collection of state-of-the-art results for distributed optimization problems arising in the context of robot networks, with a focus on two special classes of problems, namely, real-time path planning for multirobot systems and self-organization in multirobot systems using game-theoretic approaches.
Variance-Reduced Methods for Machine Learning
By R. M. Gower, M. Schmidt, F. Bach, and P. Richtárik
This article discusses stochastic variance-reduced optimization methods for problems where multiple passes through batch training data sets are allowed.
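SVRG is a canonical member of this family and shows why multiple passes matter: a full gradient computed at a per-epoch snapshot corrects the noisy per-sample gradients. The least-squares instance and step size below are illustrative.

```python
import numpy as np

# SVRG sketch: snapshot full gradient once per epoch, variance-reduced
# per-sample updates in the inner loop.
rng = np.random.default_rng(3)
n, d, alpha = 200, 10, 0.05
X = rng.standard_normal((n, d)); y = X @ rng.standard_normal(d)

grad_i = lambda w, i: (X[i] @ w - y[i]) * X[i]   # per-sample gradient
w = np.zeros(d)
for epoch in range(30):
    w_snap = w.copy()
    full_grad = (X.T @ (X @ w_snap - y)) / n     # full gradient at snapshot
    for _ in range(n):
        i = rng.integers(n)
        v = grad_i(w, i) - grad_i(w_snap, i) + full_grad  # variance-reduced step
        w = w - alpha * v
print(np.linalg.norm(X @ w - y) / np.sqrt(n))
```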
Scaling-Up Distributed Processing of Data Streams for Machine Learning
By M. Nokleby, H. Raja, and W. U. Bajwa
This article reviews recently developed methods that focus on distributed training of large-scale machine learning models from streaming data in the compute-limited and bandwidth-limited regimes, with an emphasis on convergence analysis that explicitly accounts for the mismatch between computation, communication, and streaming rates, and that provides sufficient conditions for order-optimum convergence.
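A bare-bones picture of the distributed streaming template: each round, every worker draws fresh samples from the stream, and local gradients are averaged over the network before a single model update. The rates, mini-batch size, and least-squares model below are illustrative stand-ins; the article's analysis concerns how such parameters must match the compute, communication, and streaming rates.

```python
import numpy as np

# Distributed mini-batch SGD on a stream: W workers each process B fresh
# samples per round and average gradients before one update.
rng = np.random.default_rng(4)
d, W, B, T, alpha = 5, 4, 8, 500, 0.1
w_true = rng.standard_normal(d)
w = np.zeros(d)
for _ in range(T):
    local = []
    for _ in range(W):                           # each worker's fresh mini-batch
        X = rng.standard_normal((B, d))
        y = X @ w_true + 0.1 * rng.standard_normal(B)
        local.append(X.T @ (X @ w - y) / B)      # local least-squares gradient
    w = w - alpha * np.mean(local, axis=0)       # communicate, average, update
print(np.linalg.norm(w - w_true))
```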
Advances in Asynchronous Parallel and Distributed Optimization
By M. Assran, A. Aytekin, H. R. Feyzmahdavian, M. Johansson, and M. G. Rabbat
This article focuses on asynchronous parallel and distributed methods for large-scale optimization problems in machine learning, where the processors may maintain an inconsistent view of the optimization variables.
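The "inconsistent view" can be simulated without threads by letting each gradient be evaluated at a randomly stale copy of the variable, as in the sketch below; the delay model and regression problem are illustrative assumptions.

```python
import numpy as np

# Asynchronous SGD sketch: a worker's gradient may be computed at an iterate
# several updates old, modeling stale reads of the shared variable.
rng = np.random.default_rng(5)
d, T, alpha, max_delay = 5, 1000, 0.02, 4
w_true = rng.standard_normal(d)
history = [np.zeros(d)]                          # past iterates workers may read
for t in range(T):
    delay = rng.integers(0, min(max_delay, len(history)))
    w_read = history[-1 - delay]                 # stale copy of the variable
    x = rng.standard_normal(d)
    g = (x @ w_read - x @ w_true) * x            # gradient at the stale point
    history.append(history[-1] - alpha * g)      # server applies it to the latest
print(np.linalg.norm(history[-1] - w_true))
```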
Time-Varying Convex Optimization: Time-Structured Algorithms and Applications
By A. Simonetto, E. Dall’Anese, S. Paternain, G. Leus, and G. B. Giannakis
This article reviews a broad class of algorithms for time-varying optimization with an emphasis on both algorithmic development and performance analysis.
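In the simplest time-structured scheme, one gradient (correction) step is taken per sampling instant, so the iterate tracks the drifting minimizer within a bounded error; the drifting target b(t) below is an illustrative assumption.

```python
import numpy as np

# Time-varying optimization sketch: the minimizer of f_t(x) = 0.5*(x - b(t))^2
# drifts, and one correction step is taken per sample.
h, T, alpha = 0.1, 200, 0.8                      # sampling interval, steps, step size
b = lambda t: np.sin(0.5 * t)                    # slowly moving optimal point
x, err = 0.0, []
for k in range(T):
    t = k * h
    x = x - alpha * (x - b(t))                   # one gradient step per instant
    err.append(abs(x - b(t)))
print(max(err[50:]))                             # bounded tracking error
```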
Graph Learning Under Partial Observability
By V. Matta, A. Santos, and A. H. Sayed
This article examines the network tomography problem and considers the question: How much information can one glean about the underlying graph topology by observing the behavior of certain distributed optimization methods over the graph nodes?
Accelerated First-Order Optimization Algorithms for Machine Learning
By H. Li, C. Fang, and Z. Lin
This article provides a comprehensive survey of accelerated first-order methods with a particular focus on stochastic algorithms and further introduces some recent developments on accelerated methods for nonconvex optimization problems.
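The deterministic template behind this family is Nesterov's accelerated gradient method, sketched below on a convex quadratic; the problem instance and iteration count are illustrative.

```python
import numpy as np

# Nesterov's accelerated gradient sketch: a momentum (extrapolation) step
# followed by a gradient step at the look-ahead point.
rng = np.random.default_rng(6)
d = 20
A = rng.standard_normal((d, d)); Q = A.T @ A / d + 0.01 * np.eye(d)
b = rng.standard_normal(d)
L = np.linalg.eigvalsh(Q).max()                  # smoothness constant

grad = lambda x: Q @ x - b
x, x_prev = np.zeros(d), np.zeros(d)
for t in range(1, 300):
    y = x + (t - 1) / (t + 2) * (x - x_prev)     # momentum step
    x_prev = x
    x = y - grad(y) / L                          # gradient step at look-ahead
print(np.linalg.norm(grad(x)))
```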