On the importance of initialization and momentum in deep learning. I Sutskever, J Martens, G Dahl, G Hinton. International Conference on Machine Learning, 1139-1147, 2013. | Cited by 6613 | 2013 |
Generating text with recurrent neural networks. I Sutskever, J Martens, GE Hinton. Proceedings of the 28th International Conference on Machine Learning (ICML …, 2011. | Cited by 2134 | 2011 |
Deep learning via Hessian-free optimization. J Martens. Proceedings of the 27th International Conference on Machine Learning (ICML), 735-742, 2010. | Cited by 1312 | 2010 |
Optimizing neural networks with Kronecker-factored approximate curvature. J Martens, R Grosse. International Conference on Machine Learning, 2408-2417, 2015. | Cited by 1119 | 2015 |
Learning recurrent neural networks with Hessian-free optimization. J Martens, I Sutskever. Proceedings of the 28th International Conference on Machine Learning (ICML …, 2011. | Cited by 836 | 2011 |
Gemini 1.5: Unlocking multimodal understanding across millions of tokens of context. Gemini Team, P Georgiev, VI Lei, R Burnell, L Bai, A Gulati, G Tanzer, ... arXiv preprint arXiv:2403.05530, 2024. | Cited by 720 | 2024 |
New insights and perspectives on the natural gradient method. J Martens. Journal of Machine Learning Research 21 (146), 1-76, 2020. | Cited by 717 | 2020 |
Adding gradient noise improves learning for very deep networks. A Neelakantan, L Vilnis, QV Le, I Sutskever, L Kaiser, K Kurach, J Martens. arXiv preprint arXiv:1511.06807, 2015. | Cited by 622 | 2015 |
Adversarial robustness through local linearization. C Qin, J Martens, S Gowal, D Krishnan, K Dvijotham, A Fawzi, S De, ... Advances in Neural Information Processing Systems 32, 2019. | Cited by 344 | 2019 |
The mechanics of n-player differentiable games. D Balduzzi, S Racaniere, J Martens, J Foerster, K Tuyls, T Graepel. International Conference on Machine Learning, 354-363, 2018. | Cited by 329 | 2018 |
A Kronecker-factored approximate Fisher matrix for convolution layers. R Grosse, J Martens. International Conference on Machine Learning, 573-582, 2016. | Cited by 294 | 2016 |
Training deep and recurrent networks with Hessian-free optimization. J Martens, I Sutskever. Neural Networks: Tricks of the Trade: Second Edition, 479-535, 2012. | Cited by 268 | 2012 |
Fast convergence of natural gradient descent for over-parameterized neural networks. G Zhang, J Martens, RB Grosse. Advances in Neural Information Processing Systems 32, 2019. | Cited by 150 | 2019 |
Which algorithmic choices matter at which batch sizes? Insights from a noisy quadratic model. G Zhang, L Li, Z Nado, J Martens, S Sachdeva, G Dahl, C Shallue, ... Advances in Neural Information Processing Systems 32, 2019. | Cited by 149 | 2019 |
Distributed second-order optimization using Kronecker-factored approximations. J Ba, RB Grosse, J Martens. ICLR (Poster), 2017. | Cited by 121 | 2017 |
Pre-training via denoising for molecular property prediction. S Zaidi, M Schaarschmidt, J Martens, H Kim, YW Teh, ... arXiv preprint arXiv:2206.00133, 2022. | Cited by 114 | 2022 |
Kronecker-factored curvature approximations for recurrent neural networks. J Martens, J Ba, M Johnson. International Conference on Learning Representations, 2018. | Cited by 102 | 2018 |
Differentiable game mechanics. A Letcher, D Balduzzi, S Racaniere, J Martens, J Foerster, K Tuyls, ... Journal of Machine Learning Research 20 (84), 1-40, 2019. | Cited by 99 | 2019 |
On the representational efficiency of restricted Boltzmann machines. J Martens, A Chattopadhya, T Pitassi, R Zemel. Advances in Neural Information Processing Systems 26, 2013. | Cited by 89 | 2013 |
Estimating the Hessian by back-propagating curvature. J Martens, I Sutskever, K Swersky. arXiv preprint arXiv:1206.6464, 2012. | Cited by 89 | 2012 |