Chi Jin
Assistant Professor, Princeton University
Verified email at princeton.edu - Homepage
Title · Cited by · Year
Escaping from saddle points—online stochastic gradient for tensor decomposition
R Ge, F Huang, C Jin, Y Yuan
Conference on learning theory, 797-842, 2015
879 · 2015
How to escape saddle points efficiently
C Jin, R Ge, P Netrapalli, SM Kakade, MI Jordan
International Conference on Machine Learning, 1724-1732, 2017
565 · 2017
Is Q-learning provably efficient?
C Jin, Z Allen-Zhu, S Bubeck, MI Jordan
arXiv preprint arXiv:1807.03765, 2018
345 · 2018
No spurious local minima in nonconvex low rank problems: A unified geometric analysis
R Ge, C Jin, Y Zheng
International Conference on Machine Learning, 1233-1242, 2017
323 · 2017
Provably efficient reinforcement learning with linear function approximation
C Jin, Z Yang, Z Wang, MI Jordan
Conference on Learning Theory, 2137-2143, 2020
181 · 2020
Accelerated gradient descent escapes saddle points faster than gradient descent
C Jin, P Netrapalli, MI Jordan
Conference On Learning Theory, 1042-1085, 2018
173 · 2018
Gradient descent can take exponential time to escape saddle points
SS Du, C Jin, JD Lee, MI Jordan, B Poczos, A Singh
arXiv preprint arXiv:1705.10412, 2017
166 · 2017
What is local optimality in nonconvex-nonconcave minimax optimization?
C Jin, P Netrapalli, M Jordan
International Conference on Machine Learning, 4880-4889, 2020
145* · 2020
On gradient descent ascent for nonconvex-concave minimax problems
T Lin, C Jin, M Jordan
International Conference on Machine Learning, 6083-6093, 2020
143 · 2020
Streaming PCA: Matching matrix Bernstein and near-optimal finite sample guarantees for Oja's algorithm
P Jain, C Jin, SM Kakade, P Netrapalli, A Sidford
Conference on learning theory, 1147-1164, 2016
120 · 2016
Stochastic cubic regularization for fast nonconvex optimization
N Tripuraneni, M Stern, C Jin, J Regier, MI Jordan
arXiv preprint arXiv:1711.02838, 2017
111 · 2017
Faster eigenvector computation via shift-and-invert preconditioning
D Garber, E Hazan, C Jin, C Musco, P Netrapalli, A Sidford
International Conference on Machine Learning, 2626-2634, 2016
107* · 2016
Local maxima in the likelihood of Gaussian mixture models: Structural results and algorithmic consequences
C Jin, Y Zhang, S Balakrishnan, MJ Wainwright, MI Jordan
Advances in neural information processing systems 29, 4116-4124, 2016
106 · 2016
Provably efficient exploration in policy optimization
Q Cai, Z Yang, C Jin, Z Wang
International Conference on Machine Learning, 1283-1294, 2020
90 · 2020
Provable efficient online matrix completion via non-convex stochastic gradient descent
C Jin, SM Kakade, P Netrapalli
Advances in Neural Information Processing Systems 29, 4520-4528, 2016
85 · 2016
On nonconvex optimization for machine learning: Gradients, stochasticity, and saddle points
C Jin, P Netrapalli, R Ge, SM Kakade, MI Jordan
Journal of the ACM (JACM) 68 (2), 1-29, 2021
80* · 2021
Near-optimal algorithms for minimax optimization
T Lin, C Jin, MI Jordan
Conference on Learning Theory, 2738-2779, 2020
72 · 2020
Sampling can be faster than optimization
YA Ma, Y Chen, C Jin, N Flammarion, MI Jordan
Proceedings of the National Academy of Sciences 116 (42), 20881-20885, 2019
72 · 2019
Efficient algorithms for large-scale generalized eigenvector computation and canonical correlation analysis
R Ge, C Jin, P Netrapalli, A Sidford
International Conference on Machine Learning, 2741-2750, 2016
62 · 2016
Learning Adversarial MDPs with Bandit Feedback and Unknown Transition
C Jin, T Jin, H Luo, S Sra, T Yu
arXiv preprint arXiv:1912.01192, 2020
50* · 2020
Articles 1–20