Sebastian Urban Stich
CISPA Helmholtz Center
Verified email at cispa.de
Title · Cited by · Year

Advances and open problems in federated learning
P Kairouz, HB McMahan, B Avent, A Bellet, M Bennis, AN Bhagoji, ...
Foundations and Trends® in Machine Learning 14 (1–2), 1-210, 2021
Cited by 6136 · 2021

SCAFFOLD: Stochastic Controlled Averaging for Federated Learning
SP Karimireddy, S Kale, M Mohri, SJ Reddi, SU Stich, AT Suresh
ICML 2020 - International Conference on Machine Learning, 2019
Cited by 2980 · 2019

Local SGD Converges Fast and Communicates Little
SU Stich
ICLR 2019 - International Conference on Learning Representations, 2019
Cited by 1134 · 2019

Ensemble Distillation for Robust Model Fusion in Federated Learning
T Lin, L Kong, SU Stich, M Jaggi
NeurIPS 2020 - Advances in Neural Information Processing Systems 33, 2020
Cited by 986 · 2020

Sparsified SGD with memory
SU Stich, JB Cordonnier, M Jaggi
NeurIPS 2018 - Advances in Neural Information Processing Systems, 4448-4459, 2018
Cited by 874 · 2018

Decentralized Stochastic Optimization and Gossip Algorithms with Compressed Communication
A Koloskova, SU Stich, M Jaggi
ICML 2019 - International Conference on Machine Learning, 2019
Cited by 541 · 2019

Error Feedback Fixes SignSGD and other Gradient Compression Schemes
SP Karimireddy, Q Rebjock, SU Stich, M Jaggi
ICML 2019 - International Conference on Machine Learning, 2019
Cited by 539 · 2019

A Unified Theory of Decentralized SGD with Changing Topology and Local Updates
A Koloskova, N Loizou, S Boreiri, M Jaggi, SU Stich
ICML 2020 - International Conference on Machine Learning, 2020
Cited by 496 · 2020

Don't Use Large Mini-Batches, Use Local SGD
T Lin, SU Stich, KK Patel, M Jaggi
ICLR 2020 - International Conference on Learning Representations, 2020
Cited by 482 · 2020

A Field Guide to Federated Optimization
J Wang, Z Charles, Z Xu, G Joshi, HB McMahan, M Al-Shedivat, G Andrew, ...
arXiv preprint arXiv:2107.06917, 2021
Cited by 364 · 2021

Is Local SGD Better than Minibatch SGD?
B Woodworth, KK Patel, SU Stich, Z Dai, B Bullins, HB McMahan, ...
ICML 2020 - International Conference on Machine Learning, 2020
Cited by 285 · 2020

Breaking the centralized barrier for cross-device federated learning
SP Karimireddy, M Jaggi, S Kale, M Mohri, S Reddi, SU Stich, AT Suresh
NeurIPS 2021 - Advances in Neural Information Processing Systems 34, 2021
Cited by 283* · 2021

The Error-Feedback Framework: Better Rates for SGD with Delayed Gradients and Compressed Updates
SU Stich, SP Karimireddy
Journal of Machine Learning Research 21, 1-36, 2020
Cited by 278* · 2020

Decentralized Deep Learning with Arbitrary Communication Compression
A Koloskova, T Lin, SU Stich, M Jaggi
ICLR 2020 - International Conference on Learning Representations, 2020
Cited by 238 · 2020

Dynamic Model Pruning with Feedback
T Lin, SU Stich, L Barba, D Dmitriev, M Jaggi
ICLR 2020 - International Conference on Learning Representations, 2020
Cited by 230 · 2020

Stochastic distributed learning with gradient quantization and double-variance reduction
S Horváth, D Kovalev, K Mishchenko, P Richtárik, S Stich
Optimization Methods and Software 38 (1), 91-106, 2023
Cited by 184 · 2023

Efficiency of the Accelerated Coordinate Descent Method on Structured Optimization Problems
Y Nesterov, SU Stich
SIAM Journal on Optimization 27 (1), 110-123, 2017
Cited by 163 · 2017

On the Convergence of SGD with Biased Gradients
A Ajalloeian, SU Stich
ICML 2020 Workshop - Beyond First Order Methods in ML Systems, arXiv …, 2020
Cited by 138* · 2020

ProxSkip: Yes! Local Gradient Steps Provably Lead to Communication Acceleration! Finally!
K Mishchenko, G Malinovsky, S Stich, P Richtárik
ICML 2022 - International Conference on Machine Learning, 2022
Cited by 133 · 2022

Unified Optimal Analysis of the (Stochastic) Gradient Method
SU Stich
arXiv preprint arXiv:1907.04232, 2019
Cited by 128 · 2019

Articles 1–20