Dan Alistarh
Verified email at ist.ac.at - Homepage
Title · Cited by · Year
QSGD: Communication-efficient SGD via gradient quantization and encoding
D Alistarh, D Grubic, J Li, R Tomioka, M Vojnovic
Advances in neural information processing systems 30, 2017
Cited by 1228 · 2017
Model compression via distillation and quantization
A Polino, R Pascanu, D Alistarh
ICLR 2018, 2018
Cited by 531 · 2018
The convergence of sparsified gradient methods
D Alistarh, T Hoefler, M Johansson, N Konstantinov, S Khirirat, C Renggli
Advances in Neural Information Processing Systems 31, 2018
Cited by 358 · 2018
Byzantine stochastic gradient descent
D Alistarh, Z Allen-Zhu, J Li
Advances in Neural Information Processing Systems 31, 2018
Cited by 223 · 2018
ZipML: Training linear models with end-to-end low precision, and a little bit of deep learning
H Zhang, J Li, K Kara, D Alistarh, J Liu, C Zhang
International Conference on Machine Learning, 4035-4043, 2017
Cited by 191* · 2017
Sparsity in Deep Learning: Pruning and growth for efficient inference and training in neural networks
T Hoefler, D Alistarh, T Ben-Nun, N Dryden, A Peste
J. Mach. Learn. Res. 22 (241), 1-124, 2021
Cited by 186 · 2021
The spraylist: A scalable relaxed priority queue
D Alistarh, J Kopinsky, J Li, N Shavit
Proceedings of the 20th ACM SIGPLAN Symposium on Principles and Practice of …, 2015
Cited by 130 · 2015
Time-space trade-offs in population protocols
D Alistarh, J Aspnes, D Eisenstat, R Gelashvili, RL Rivest
Proceedings of the twenty-eighth annual ACM-SIAM symposium on discrete …, 2017
Cited by 123 · 2017
Fast and exact majority in population protocols
D Alistarh, R Gelashvili, M Vojnović
Proceedings of the 2015 ACM Symposium on Principles of Distributed Computing …, 2015
Cited by 112 · 2015
Space-optimal majority in population protocols
D Alistarh, J Aspnes, R Gelashvili
Proceedings of the Twenty-Ninth Annual ACM-SIAM Symposium on Discrete …, 2018
Cited by 101 · 2018
Polylogarithmic-time leader election in population protocols
D Alistarh, R Gelashvili
International Colloquium on Automata, Languages, and Programming, 479-491, 2015
Cited by 98 · 2015
Sparcml: High-performance sparse communication for machine learning
C Renggli, S Ashkboos, M Aghagolzadeh, D Alistarh, T Hoefler
Proceedings of the International Conference for High Performance Computing …, 2019
Cited by 94 · 2019
FPGA-accelerated dense linear machine learning: A precision-convergence trade-off
K Kara, D Alistarh, G Alonso, O Mutlu, C Zhang
2017 IEEE 25th Annual International Symposium on Field-Programmable Custom …, 2017
Cited by 76 · 2017
Tight bounds for asynchronous renaming
D Alistarh, J Aspnes, K Censor-Hillel, S Gilbert, R Guerraoui
Journal of the ACM (JACM) 61 (3), 1-51, 2014
Cited by 70* · 2014
Woodfisher: Efficient second-order approximation for neural network compression
SP Singh, D Alistarh
Advances in Neural Information Processing Systems 33, 18098-18109, 2020
Cited by 63 · 2020
Fast randomized test-and-set and renaming
D Alistarh, H Attiya, S Gilbert, A Giurgiu, R Guerraoui
International Symposium on Distributed Computing, 94-108, 2010
Cited by 61 · 2010
Stacktrack: An automated transactional approach to concurrent memory reclamation
D Alistarh, P Eugster, M Herlihy, A Matveev, N Shavit
Proceedings of the Ninth European Conference on Computer Systems, 1-14, 2014
Cited by 59 · 2014
Are lock-free concurrent algorithms practically wait-free?
D Alistarh, K Censor-Hillel, N Shavit
Journal of the ACM (JACM) 63 (4), 1-20, 2016
Cited by 50 · 2016
Inducing and exploiting activation sparsity for fast inference on deep neural networks
M Kurtz, J Kopinsky, R Gelashvili, A Matveev, J Carr, M Goin, W Leiserson, ...
International Conference on Machine Learning, 5533-5543, 2020
Cited by 47 · 2020
Sub-logarithmic test-and-set against a weak adversary
D Alistarh, J Aspnes
International Symposium on Distributed Computing, 97-109, 2011
Cited by 45 · 2011
Articles 1–20