Blake Woodworth
Verified email at gwu.edu - Homepage
Title · Cited by · Year
Implicit regularization in matrix factorization
S Gunasekar, BE Woodworth, S Bhojanapalli, B Neyshabur, N Srebro
Advances in neural information processing systems 30, 2017
Cited by 557 · 2017
Learning non-discriminatory predictors
B Woodworth, S Gunasekar, MI Ohannessian, N Srebro
Conference on Learning Theory, 1920-1953, 2017
Cited by 450 · 2017
Kernel and rich regimes in overparametrized models
B Woodworth, S Gunasekar, JD Lee, E Moroshko, P Savarese, I Golan, ...
Conference on Learning Theory, 3635-3673, 2020
Cited by 392 · 2020
A field guide to federated optimization
J Wang, Z Charles, Z Xu, G Joshi, HB McMahan, M Al-Shedivat, G Andrew, ...
arXiv preprint arXiv:2107.06917, 2021
Cited by 381 · 2021
Lower bounds for non-convex stochastic optimization
Y Arjevani, Y Carmon, JC Duchi, DJ Foster, N Srebro, B Woodworth
Mathematical Programming 199 (1), 165-214, 2023
Cited by 353 · 2023
Is local SGD better than minibatch SGD?
B Woodworth, KK Patel, S Stich, Z Dai, B Bullins, B Mcmahan, O Shamir, ...
International Conference on Machine Learning, 10334-10343, 2020
Cited by 297 · 2020
Tight complexity bounds for optimizing composite objectives
BE Woodworth, N Srebro
Advances in neural information processing systems 29, 2016
Cited by 228 · 2016
Minibatch vs local SGD for heterogeneous distributed learning
BE Woodworth, KK Patel, N Srebro
Advances in Neural Information Processing Systems 33, 6281-6292, 2020
Cited by 225 · 2020
Graph oracle models, lower bounds, and gaps for parallel stochastic optimization
BE Woodworth, J Wang, A Smith, B McMahan, N Srebro
Advances in neural information processing systems 31, 2018
Cited by 133 · 2018
Training well-generalizing classifiers for fairness metrics and other data-dependent constraints
A Cotter, M Gupta, H Jiang, N Srebro, K Sridharan, S Wang, B Woodworth, ...
International Conference on Machine Learning, 1397-1405, 2019
Cited by 120 · 2019
Implicit bias in deep linear classification: Initialization scale vs training accuracy
E Moroshko, BE Woodworth, S Gunasekar, JD Lee, N Srebro, D Soudry
Advances in neural information processing systems 33, 22182-22193, 2020
Cited by 92 · 2020
On the implicit bias of initialization shape: Beyond infinitesimal mirror descent
S Azulay, E Moroshko, MS Nacson, BE Woodworth, N Srebro, ...
International Conference on Machine Learning, 468-477, 2021
Cited by 86 · 2021
The complexity of making the gradient small in stochastic convex optimization
DJ Foster, A Sekhari, O Shamir, N Srebro, K Sridharan, B Woodworth
Conference on Learning Theory, 1319-1345, 2019
Cited by 60 · 2019
Asynchronous SGD beats minibatch SGD under arbitrary delays
K Mishchenko, F Bach, M Even, BE Woodworth
Advances in Neural Information Processing Systems 35, 420-433, 2022
Cited by 57 · 2022
The min-max complexity of distributed stochastic convex optimization with intermittent communication
BE Woodworth, B Bullins, O Shamir, N Srebro
Conference on Learning Theory, 4386-4437, 2021
Cited by 53 · 2021
Mirrorless mirror descent: A natural derivation of mirror descent
S Gunasekar, B Woodworth, N Srebro
International Conference on Artificial Intelligence and Statistics, 2305-2313, 2021
Cited by 48* · 2021
The gradient complexity of linear regression
M Braverman, E Hazan, M Simchowitz, B Woodworth
Conference on Learning Theory, 627-647, 2020
Cited by 40 · 2020
Lower bound for randomized first order convex optimization
B Woodworth, N Srebro
arXiv preprint arXiv:1709.03594, 2017
Cited by 40 · 2017
An even more optimal stochastic optimization algorithm: minibatching and interpolation learning
BE Woodworth, N Srebro
Advances in Neural Information Processing Systems 34, 7333-7345, 2021
Cited by 29 · 2021
Towards optimal communication complexity in distributed non-convex optimization
KK Patel, L Wang, BE Woodworth, B Bullins, N Srebro
Advances in Neural Information Processing Systems 35, 13316-13328, 2022
Cited by 20 · 2022
Articles 1–20