Sheng Liu
Verified email at stanford.edu - Homepage
Title
Cited by
Year
Early-learning regularization prevents memorization of noisy labels
S Liu, J Niles-Weed, N Razavian, C Fernandez-Granda
34th Conference on Neural Information Processing Systems (NeurIPS 2020), 2020
Cited by 623 · 2020
Robust Training under Label Noise by Over-parameterization
S Liu, Z Zhu, Q Qu, C You
ICML 2022, 2022
Cited by 125 · 2022
Adaptive early-learning correction for segmentation from noisy annotations
S Liu, K Liu, W Zhu, Y Shen, C Fernandez-Granda
CVPR 2022 (Oral), 2606-2616, 2022
Cited by 114 · 2022
Generalizable deep learning model for early Alzheimer’s disease detection from structural MRIs
S Liu, AV Masurkar, H Rusinek, J Chen, B Zhang, W Zhu, ...
Scientific reports 12 (1), 17106, 2022
Cited by 90 · 2022
Monitoring AI-modified content at scale: A case study on the impact of ChatGPT on AI conference peer reviews
W Liang, Z Izzo, Y Zhang, H Lepp, H Cao, X Zhao, L Chen, H Ye, S Liu, ...
arXiv preprint arXiv:2403.07183, 2024
Cited by 73 · 2024
On the design of convolutional neural networks for automatic detection of Alzheimer’s disease
S Liu, C Yadav, C Fernandez-Granda, N Razavian
2019 NeurIPS, 184-201, 2020
Cited by 73 · 2020
On Learning Contrastive Representations for Learning with Noisy Labels
L Yi, S Liu, Q She, AI McLeod, B Wang
CVPR 2022, 16682-16691, 2022
Cited by 64 · 2022
Are all losses created equal: A neural collapse perspective
J Zhou, C You, X Li, K Liu, S Liu, Q Qu, Z Zhu
Advances in Neural Information Processing Systems 35, 31697-31710, 2022
Cited by 56 · 2022
Mapping the increasing use of LLMs in scientific papers
W Liang, Y Zhang, Z Wu, H Lepp, W Ji, X Zhao, H Cao, S Liu, S He, ...
arXiv preprint arXiv:2404.01268, 2024
Cited by 46 · 2024
In-context vectors: Making in-context learning more effective and controllable through latent space steering
S Liu, H Ye, L Xing, J Zou
Forty-first International Conference on Machine Learning (ICML 2024), 2024
Cited by 44 · 2024
Multiple instance learning via iterative self-paced supervised contrastive learning
K Liu, W Zhu, Y Shen, S Liu, N Razavian, KJ Geras, C Fernandez-Granda
Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern …, 2023
Cited by 29 · 2023
Principled and efficient transfer learning of deep models via neural collapse
X Li, S Liu, J Zhou, X Lu, C Fernandez-Granda, Z Zhu, Q Qu
arXiv preprint arXiv:2212.12206, 2022
Cited by 29 · 2022
TextGrad: Automatic "Differentiation" via Text
M Yuksekgonul*, F Bianchi*, J Boen*, S Liu*, Z Huang*, C Guestrin, J Zou
arXiv preprint arXiv:2406.07496, 2024
Cited by 27 · 2024
Swin MAE: Masked autoencoders for small datasets
Y Dai, F Liu, W Chen, Y Liu, L Shi, S Liu, Y Zhou
Cited by 27 · 2023
Convolutional normalization: Improving deep convolutional network robustness and training
S Liu, X Li, Y Zhai, C You, Z Zhu, C Fernandez-Granda, Q Qu
35th Conference on Neural Information Processing Systems (NeurIPS 2021), 2021
Cited by 27 · 2021
Avoiding spurious correlations via logit correction
S Liu, X Zhang, N Sekhar, Y Wu, P Singhal, C Fernandez-Granda
ICLR 2023, 2022
Cited by 17 · 2022
Deep probability estimation
S Liu, A Kaku, W Zhu, M Leibovich, S Mohan, B Yu, L Zanna, N Razavian, ...
ICML 2022, 2021
Cited by 17 · 2021
Few-shot fine-grained action recognition via bidirectional attention and contrastive meta-learning
J Wang, Y Wang, S Liu, A Li
Proceedings of the 29th ACM International Conference on Multimedia, 582-591, 2021
Cited by 16 · 2021
Sparse recovery beyond compressed sensing: Separable nonlinear inverse problems
B Bernstein, S Liu, C Papadaniil, C Fernandez-Granda
IEEE Transactions on Information Theory 66 (9), 5904-5926, 2020
Cited by 13 · 2020
PADDLES: Phase-amplitude spectrum disentangled early stopping for learning with noisy labels
H Huang, H Kang, S Liu, O Salvado, T Rakotoarivelo, D Wang, T Liu
Proceedings of the IEEE/CVF International Conference on Computer Vision …, 2023
Cited by 12 · 2023
Articles 1–20