Abstract:
This paper studies non-smooth convex stochastic optimization problems. Using a smoothing technique that replaces the function value at a point with the function value averaged over a ball of small radius (in the l1-norm or l2-norm) centered at that point, the original problem is reduced to a smooth one whose gradient Lipschitz constant is inversely proportional to the radius of the ball. An essential property of this smoothing is that an unbiased estimate of the gradient of the smoothed function can be computed using only realizations of the original function. The resulting smooth stochastic optimization problem is then solved in a distributed federated learning architecture: the problem is solved in parallel, with nodes making local steps (e.g., stochastic gradient descent), then communicating all-to-all, after which the process repeats. The goal of the article is to build, on the basis of modern achievements in gradient-free non-smooth optimization and in federated learning, gradient-free methods for solving non-smooth stochastic optimization problems in the federated learning architecture.
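For intuition, here is a minimal Python sketch (not the paper's algorithm; the function names, step sizes, and the noisy test function are illustrative assumptions) of the two ingredients the abstract describes: an l2-randomized two-point gradient estimator for the smoothed function, built only from realizations of the original function, and a federated loop of local steps followed by all-to-all averaging.

import numpy as np

def sphere_sample(d, rng):
    # uniform direction on the Euclidean (l2) unit sphere
    v = rng.standard_normal(d)
    return v / np.linalg.norm(v)

def two_point_grad_estimate(f, x, gamma, rng):
    # unbiased estimate of the gradient of the l2-smoothed surrogate
    # f_gamma(x) = E_{u ~ U(B_2)} f(x + gamma * u),
    # computed from two (possibly noisy) realizations of f only
    d = x.size
    e = sphere_sample(d, rng)
    return (d / (2.0 * gamma)) * (f(x + gamma * e) - f(x - gamma * e)) * e

def federated_zero_order_sgd(f, x0, n_nodes=4, rounds=50, local_steps=10,
                             gamma=1e-2, lr=1e-1, seed=0):
    # hypothetical minimal loop: each node makes local zero-order SGD steps,
    # then all nodes average their iterates (all-to-all communication)
    rng = np.random.default_rng(seed)
    xs = [x0.astype(float).copy() for _ in range(n_nodes)]
    for _ in range(rounds):
        for i in range(n_nodes):
            for _ in range(local_steps):
                g = two_point_grad_estimate(f, xs[i], gamma, rng)
                xs[i] -= lr * g
        avg = np.mean(xs, axis=0)          # all-to-all averaging round
        xs = [avg.copy() for _ in range(n_nodes)]
    return xs[0]

# usage: a noisy non-smooth convex objective, e.g. f(x) = ||x||_1 + noise
f = lambda x: np.abs(x).sum() + 1e-3 * np.random.randn()
x_approx = federated_zero_order_sgd(f, x0=np.ones(10))

Note that the smoothing radius gamma trades off bias against the variance of the estimator: the smoothed function approximates the original to within O(gamma), while its gradient Lipschitz constant (and hence the usable step size) scales as 1/gamma, matching the abstract's remark.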
Citation:
B. A. Alashkar, A. V. Gasnikov, D. M. Dvinskikh, A. V. Lobanov, “Gradient-free federated learning methods with l1 and l2-randomization for non-smooth convex stochastic optimization problems”, Zh. Vychisl. Mat. Mat. Fiz., 63:9 (2023), 1458–1512; Comput. Math. Math. Phys., 63:9 (2023), 1600–1653
This publication is cited in the following 9 articles:
Alexander Gasnikov, Darina Dvinskikh, Pavel Dvurechensky, Eduard Gorbunov, Aleksandr Beznosikov, Alexander Lobanov, Encyclopedia of Optimization, 2024, 1
Ekaterina Statkevich, Sofiya Bondar, Darina Dvinskikh, Alexander Gasnikov, Aleksandr Lobanov, “Gradient-free algorithm for saddle point problems under overparametrization”, Chaos, Solitons & Fractals, 185 (2024), 115048
Aleksandr Lobanov, Nail Bashirov, Alexander Gasnikov, “The “Black-Box” Optimization Problem: Zero-Order Accelerated Stochastic Method via Kernel Approximation”, J Optim Theory Appl, 2024
A. V. Gasnikov, M. S. Alkousa, A. V. Lobanov, Y. V. Dorn, F. S. Stonyakin, I. A. Kuruzov, S. R. Singh, “On Quasi-Convex Smooth Optimization Problems by a Comparison Oracle”, Rus. J. Nonlin. Dyn., 20:5 (2024), 813–825
Aleksandr Lobanov, Anton Anikin, Alexander Gasnikov, Alexander Gornov, Sergey Chukanov, Communications in Computer and Information Science, 1881, Mathematical Optimization Theory and Operations Research: Recent Trends, 2023, 92
Nikita Kornilov, Alexander Gasnikov, Pavel Dvurechensky, Darina Dvinskikh, “Gradient-free methods for non-smooth convex stochastic optimization with heavy-tailed noise on convex compact”, Comput Manag Sci, 20:1 (2023)
Aleksandr Lobanov, Lecture Notes in Computer Science, 14395, Optimization and Applications, 2023, 60
Aleksandr Lobanov, Alexander Gasnikov, Lecture Notes in Computer Science, 14395, Optimization and Applications, 2023, 72
Aleksandr Lobanov, Andrew Veprikov, Georgiy Konin, Aleksandr Beznosikov, Alexander Gasnikov, Dmitry Kovalev, “Non-smooth setting of stochastic decentralized convex optimization problem over time-varying Graphs”, Comput Manag Sci, 20:1 (2023)