Distributed learning with compressed gradient differences K Mishchenko, E Gorbunov, M Takáč, P Richtárik arXiv preprint arXiv:1901.09269, 2019 | 213 | 2019 |
A unified theory of SGD: Variance reduction, sampling, quantization and coordinate descent E Gorbunov, F Hanzely, P Richtárik International Conference on Artificial Intelligence and Statistics, 680-690, 2020 | 152 | 2020 |
Stochastic optimization with heavy-tailed noise via accelerated gradient clipping E Gorbunov, M Danilova, A Gasnikov Advances in Neural Information Processing Systems 33, 15042-15053, 2020 | 104 | 2020 |
MARINA: Faster non-convex distributed learning with compression E Gorbunov, KP Burlachenko, Z Li, P Richtárik International Conference on Machine Learning, 3788-3798, 2021 | 100 | 2021 |
Local SGD: Unified theory and new efficient methods E Gorbunov, F Hanzely, P Richtárik International Conference on Artificial Intelligence and Statistics, 3556-3564, 2021 | 98 | 2021 |
Near Optimal Methods for Minimizing Convex Functions with Lipschitz p-th Derivatives A Gasnikov, P Dvurechensky, E Gorbunov, E Vorontsova, ... Conference on Learning Theory, 1392-1393, 2019 | 80 | 2019 |
Linearly converging error compensated SGD E Gorbunov, D Kovalev, D Makarenko, P Richtárik Advances in Neural Information Processing Systems 33, 20889-20900, 2020 | 79 | 2020 |
Optimal tensor methods in smooth convex and uniformly convex optimization A Gasnikov, P Dvurechensky, E Gorbunov, E Vorontsova, ... Conference on Learning Theory, 1374-1391, 2019 | 72* | 2019 |
Optimal decentralized distributed algorithms for stochastic convex optimization E Gorbunov, D Dvinskikh, A Gasnikov arXiv preprint arXiv:1911.07363, 2019 | 71 | 2019 |
Extragradient method: O(1/k) last-iterate convergence for monotone variational inequalities and connections with cocoercivity E Gorbunov, N Loizou, G Gidel International Conference on Artificial Intelligence and Statistics, 366-402, 2022 | 70 | 2022 |
Recent theoretical advances in non-convex optimization M Danilova, P Dvurechensky, A Gasnikov, E Gorbunov, S Guminov, ... High-Dimensional Optimization and Probability: With a View Towards Data …, 2022 | 65 | 2022 |
An accelerated method for derivative-free smooth stochastic convex optimization E Gorbunov, P Dvurechensky, A Gasnikov SIAM Journal on Optimization 32 (2), 1210-1238, 2022 | 65* | 2022 |
Stochastic three points method for unconstrained smooth minimization EH Bergou, E Gorbunov, P Richtárik SIAM Journal on Optimization 30 (4), 2726-2749, 2020 | 51 | 2020 |
EF21 with bells & whistles: Practical algorithmic extensions of modern error feedback I Fatkhullin, I Sokolov, E Gorbunov, Z Li, P Richtárik arXiv preprint arXiv:2110.03294, 2021 | 45 | 2021 |
An accelerated directional derivative method for smooth stochastic convex optimization P Dvurechensky, E Gorbunov, A Gasnikov European Journal of Operational Research 290 (2), 601-621, 2021 | 44 | 2021 |
On primal and dual approaches for distributed stochastic convex optimization over networks D Dvinskikh, E Gorbunov, A Gasnikov, P Dvurechensky, CA Uribe 2019 IEEE 58th Conference on Decision and Control (CDC), 7435-7440, 2019 | 41* | 2019 |
Derivative-free method for composite optimization with applications to decentralized distributed optimization A Beznosikov, E Gorbunov, A Gasnikov IFAC-PapersOnLine 53 (2), 4038-4043, 2020 | 38* | 2020 |
Recent theoretical advances in decentralized distributed convex optimization E Gorbunov, A Rogozin, A Beznosikov, D Dvinskikh, A Gasnikov High-Dimensional Optimization and Probability: With a View Towards Data …, 2022 | 37 | 2022 |
Near-optimal high probability complexity bounds for non-smooth stochastic optimization with heavy-tailed noise E Gorbunov, M Danilova, I Shibaev, P Dvurechensky, A Gasnikov arXiv preprint arXiv:2106.05958, 2021 | 36* | 2021 |
Stochastic gradient descent-ascent: Unified theory and new efficient methods A Beznosikov, E Gorbunov, H Berard, N Loizou International Conference on Artificial Intelligence and Statistics, 172-235, 2023 | 34 | 2023 |