Leonard Lausen
Amazon Web Services
Verified email at amazon.com
Title · Cited by · Year
Deep learning for precipitation nowcasting: A benchmark and a new model
X Shi, Z Gao, L Lausen, H Wang, DY Yeung, W Wong, W Woo
Advances in neural information processing systems 30, 2017
Cited by 1020 · 2017
Gluoncv and gluonnlp: Deep learning in computer vision and natural language processing
J Guo, H He, T He, L Lausen, M Li, H Lin, X Shi, C Wang, J Xie, S Zha, ...
Journal of Machine Learning Research 21 (23), 1-7, 2020
Cited by 236 · 2020
Nsml: A machine learning platform that enables you to focus on your models
N Sung, M Kim, H Jo, Y Yang, J Kim, L Lausen, Y Kim, G Lee, D Kwak, ...
arXiv preprint arXiv:1712.05902, 2017
Cited by 84 · 2017
HYTREL: Hypergraph-enhanced tabular data representation learning
P Chen, S Sarkar, L Lausen, B Srinivasan, S Zha, R Huang, G Karypis
Advances in Neural Information Processing Systems 36, 2024
Cited by 25 · 2024
Better context makes better code language models: A case study on function call argument completion
H Pei, J Zhao, L Lausen, S Zha, G Karypis
Proceedings of the AAAI Conference on Artificial Intelligence 37 (4), 5230-5238, 2023
Cited by 18 · 2023
Exploring the role of task transferability in large-scale multi-task learning
V Padmakumar, L Lausen, M Ballesteros, S Zha, H He, G Karypis
arXiv preprint arXiv:2204.11117, 2022
Cited by 18 · 2022
Large language models of code fail at completing code with potential bugs
T Dinh, J Zhao, S Tan, R Negrinho, L Lausen, S Zha, G Karypis
Advances in Neural Information Processing Systems 36, 2024
Cited by 16 · 2024
Testing the Limits of Unified Sequence to Sequence LLM Pretraining on Diverse Table Data Tasks
S Sarkar, L Lausen
arXiv preprint arXiv:2310.00789, 2023
Cited by 4* · 2023
Parameter and Data Efficient Continual Pre-training for Robustness to Dialectal Variance in Arabic
S Sarkar, K Lin, S Sengupta, L Lausen, S Zha, S Mansour
arXiv preprint arXiv:2211.03966, 2022
Cited by 2 · 2022
CrowdRisk: exploring crowdsourcing of risk information
L Lausen, M Rittenbruch, P Mitchell, E Horton, M Foth
Proceedings of the 28th Australian Conference on Computer-Human Interaction …, 2016
Cited by 2 · 2016
Dive into deep learning for natural language processing
H Lin, X Shi, L Lausen, A Zhang, H He, S Zha, A Smola
Proceedings of the 2019 Conference on Empirical Methods in Natural Language …, 2019
Cited by 1 · 2019
Revisiting SMoE Language Models by Evaluating Inefficiencies with Task Specific Expert Pruning
S Sarkar, L Lausen, V Cevher, S Zha, T Brox, G Karypis
arXiv preprint arXiv:2409.01483, 2024
· 2024