Yi Xu
Verified email at dlut.edu.cn - Homepage
Title
Cited by
Year
Dash: Semi-supervised learning with dynamic thresholding
Y Xu, L Shang, J Ye, Q Qian, YF Li, B Sun, H Li, R Jin
International conference on machine learning, 11525-11536, 2021
240 · 2021
First-order stochastic algorithms for escaping from saddle points in almost linear time
Y Xu, R Jin, T Yang
Advances in Neural Information Processing Systems, 5530-5540, 2018
142 · 2018
Practical and theoretical considerations in study design for detecting gene-gene interactions using MDR and GMDR approaches
GB Chen, Y Xu, HM Xu, MD Li, J Zhu, XY Lou
PloS one 6 (2), e16981, 2011
75 · 2011
Chex: Channel exploration for CNN model compression
Z Hou, M Qin, F Sun, X Ma, K Yuan, Y Xu, YK Chen, R Jin, Y Xie, SY Kung
Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern …, 2022
74 · 2022
Optimal Epoch Stochastic Gradient Descent Ascent Methods for Min-Max Optimization
Y Yan, Y Xu, Q Lin, W Liu, T Yang
Advances in Neural Information Processing Systems 33, 5789-5800, 2020
74* · 2020
Self-supervised pre-training for transformer-based person re-identification
H Luo, P Wang, Y Xu, F Ding, Y Zhou, F Wang, H Li, R Jin
arXiv preprint arXiv:2111.12084, 2021
65 · 2021
ADMM without a fixed penalty parameter: Faster convergence with new adaptive penalization
Y Xu, M Liu, Q Lin, T Yang
Advances in neural information processing systems 30, 2017
61 · 2017
On stochastic moving-average estimators for non-convex optimization
Z Guo, Y Xu, W Yin, R Jin, T Yang
arXiv preprint arXiv:2104.14840, 2021
59 · 2021
Stochastic convex optimization: Faster local growth implies faster global convergence
Y Xu, Q Lin, T Yang
International Conference on Machine Learning, 3821-3830, 2017
50 · 2017
Towards understanding label smoothing
Y Xu, Y Xu, Q Qian, H Li, R Jin
arXiv preprint arXiv:2006.11653, 2020
46 · 2020
Stochastic optimization for DC functions and non-smooth non-convex regularizers with non-asymptotic convergence
Y Xu, Q Qi, Q Lin, R Jin, T Yang
International conference on machine learning, 6942-6951, 2019
46 · 2019
A novel convergence analysis for algorithms of the adam family
Z Guo, Y Xu, W Yin, R Jin, T Yang
arXiv preprint arXiv:2112.03459, 2021
44 · 2021
Improved fine-tuning by better leveraging pre-training data
Z Liu, Y Xu, Y Xu, Q Qian, H Li, X Ji, A Chan, R Jin
Advances in Neural Information Processing Systems 35, 32568-32581, 2022
40* · 2022
An online method for a class of distributionally robust optimization with non-convex objectives
Q Qi, Z Guo, Y Xu, R Jin, T Yang
Advances in Neural Information Processing Systems 34, 2021
36 · 2021
Federated Deep AUC Maximization for Heterogeneous Data with a Constant Communication Complexity
Z Yuan, Z Guo, Y Xu, Y Ying, T Yang
International Conference on Machine Learning, 12219-12229, 2021
36 · 2021
Effective model sparsification by scheduled grow-and-prune methods
X Ma, M Qin, F Sun, Z Hou, K Yuan, Y Xu, Y Wang, YK Chen, R Jin, Y Xie
arXiv preprint arXiv:2106.09857, 2021
35 · 2021
Sadagrad: Strongly adaptive stochastic gradient methods
Z Chen*, Y Xu*, E Chen, T Yang
International Conference on Machine Learning, 913-921, 2018
35 · 2018
Stochastic Primal-Dual Algorithms with Faster Convergence than O(1/√T) for Problems without Bilinear Structure
Y Yan, Y Xu, Q Lin, L Zhang, T Yang
arXiv preprint arXiv:1904.10112, 2019
33 · 2019
Non-asymptotic analysis of stochastic methods for non-smooth non-convex regularized problems
Y Xu, R Jin, T Yang
Advances In Neural Information Processing Systems 32, 2630-2640, 2019
32* · 2019
Learning with non-convex truncated losses by SGD
Y Xu, S Zhu, S Yang, C Zhang, R Jin, T Yang
Uncertainty in Artificial Intelligence, 701-711, 2020
28 · 2020
Articles 1–20