Shengbang Tong
CTRL: Closed-loop transcription to an LDR via minimaxing rate reduction
X Dai, S Tong, M Li, Z Wu, M Psenka, KHR Chan, P Zhai, Y Yu, X Yuan, ...
Entropy 24 (4), 456, 2022
Incremental learning of structured memory via closed-loop transcription
S Tong, X Dai, Z Wu, M Li, B Yi, Y Ma
ICLR 2023, 2022
Investigating the catastrophic forgetting in multimodal large language models
Y Zhai, S Tong, X Li, M Cai, Q Qu, YJ Lee, Y Ma
CPAL 2024, 2023
Revisiting sparse convolutional model for visual recognition
M Li, P Zhai, S Tong, X Gao, SL Huang, Z Zhu, C You, Y Ma
NeurIPS 2022, 2022
White-Box Transformers via Sparse Rate Reduction
Y Yu, S Buchanan, D Pai, T Chu, Z Wu, S Tong, BD Haeffele, Y Ma
NeurIPS 2023, 2023
Emergence of segmentation with minimalistic white-box transformers
Y Yu, T Chu, S Tong, Z Wu, D Pai, S Buchanan, Y Ma
CPAL 2024, 2023
Mass-Producing Failures of Multimodal Systems with Language Models
S Tong, E Jones, J Steinhardt
NeurIPS 2023, 2023
EMP-SSL: Towards Self-Supervised Learning in One Training Epoch
S Tong, Y Chen, Y Ma, Y LeCun
arXiv preprint arXiv:2304.03977, 2023
Unsupervised manifold linearizing and clustering
T Ding, S Tong, KHR Chan, X Dai, Y Ma, BD Haeffele
ICCV 2023, 2023
Closed-Loop Transcription via Convolutional Sparse Coding
X Dai, K Chen, S Tong, J Zhang, X Gao, M Li, D Pai, Y Zhai, X Yuan, ...
CPAL 2024, 2023
Unsupervised learning of structured representations via closed-loop transcription
S Tong, X Dai, Y Chen, M Li, Z Li, B Yi, Y LeCun, Y Ma
CPAL 2024, 2022
Image Clustering via the Principle of Rate Reduction in the Age of Pretrained Models
T Chu, S Tong, T Ding, X Dai, BD Haeffele, R Vidal, Y Ma
arXiv preprint arXiv:2306.05272, 2023
White-Box Transformers via Sparse Rate Reduction: Compression Is All There Is?
Y Yu, S Buchanan, D Pai, T Chu, Z Wu, S Tong, H Bai, Y Zhai, ...
arXiv preprint arXiv:2311.13110, 2023