Aman Timalsina
Unknown affiliation
No verified email
Title · Cited by · Year
How to train your hippo: State space models with generalized orthogonal basis projections
A Gu, I Johnson, A Timalsina, A Rudra, C Ré
arXiv preprint arXiv:2206.12037, 2022
Cited by 55 · 2022
Zoology: Measuring and improving recall in efficient language models
S Arora, S Eyuboglu, A Timalsina, I Johnson, M Poli, J Zou, A Rudra, C Ré
arXiv preprint arXiv:2312.04927, 2023
Cited by 18 · 2023
Laughing hyena distillery: Extracting compact recurrences from convolutions
S Massaroli, M Poli, D Fu, H Kumbong, R Parnichkun, D Romero, ...
Advances in Neural Information Processing Systems 36, 2024
Cited by 10* · 2024
Simple linear attention language models balance the recall-throughput tradeoff
S Arora, S Eyuboglu, M Zhang, A Timalsina, S Alberti, D Zinsley, J Zou, ...
arXiv preprint arXiv:2402.18668, 2024
Cited by 5 · 2024
Computing Generalized Ranks of Persistence Modules via Unfolding to Zigzag Modules
TK Dey, A Timalsina, C Xin
arXiv preprint arXiv:2403.08110, 2024
2024
Tetrahedralization of a Hexahedral Mesh
A Timalsina, MG Knepley
International Meshing Roundtable, 2022
2022