Aman Timalsina
Verified email at purdue.edu
Title · Cited by · Year
How to train your hippo: State space models with generalized orthogonal basis projections
A Gu, I Johnson, A Timalsina, A Rudra, C Ré
arXiv preprint arXiv:2206.12037, 2022
Cited by 60 · 2022
Zoology: Measuring and improving recall in efficient language models
S Arora, S Eyuboglu, A Timalsina, I Johnson, M Poli, J Zou, A Rudra, C Ré
arXiv preprint arXiv:2312.04927, 2023
Cited by 31 · 2023
Simple linear attention language models balance the recall-throughput tradeoff
S Arora, S Eyuboglu, M Zhang, A Timalsina, S Alberti, D Zinsley, J Zou, ...
arXiv preprint arXiv:2402.18668, 2024
Cited by 15 · 2024
Laughing hyena distillery: Extracting compact recurrences from convolutions
S Massaroli, M Poli, D Fu, H Kumbong, R Parnichkun, D Romero, ...
Advances in Neural Information Processing Systems 36, 2024
Cited by 12 · 2024
Just read twice: closing the recall gap for recurrent language models
S Arora, A Timalsina, A Singhal, B Spector, S Eyuboglu, X Zhao, A Rao, ...
arXiv preprint arXiv:2407.05483, 2024
2024
Computing Generalized Ranks of Persistence Modules via Unfolding to Zigzag Modules
TK Dey, A Timalsina, C Xin
arXiv preprint arXiv:2403.08110, 2024
2024
Tetrahedralization of a Hexahedral Mesh
A Timalsina, MG Knepley
International Meshing Roundtable, 2022
2022
Articles 1–7