Omar Mohamed Awad
GOBO: Quantizing attention-based NLP models for low latency and energy efficient inference
AH Zadeh, I Edo, OM Awad, A Moshovos
2020 53rd Annual IEEE/ACM International Symposium on Microarchitecture …, 2020
Cited by 154 · 2020
TensorDash: Exploiting sparsity to accelerate deep neural network training
M Mahmoud, I Edo, AH Zadeh, OM Awad, G Pekhimenko, J Albericio, ...
2020 53rd Annual IEEE/ACM International Symposium on Microarchitecture …, 2020
Cited by 77 · 2020
ShapeShifter: Enabling fine-grain data width adaptation in deep learning
AD Lascorz, S Sharify, I Edo, DM Stuart, OM Awad, P Judd, M Mahmoud, ...
Proceedings of the 52nd Annual IEEE/ACM International Symposium on …, 2019
Cited by 45 · 2019
Security implications of intentional capacitive crosstalk
C Kison, OM Awad, M Fyrbiak, C Paar
IEEE Transactions on Information Forensics and Security 14 (12), 3246-3258, 2019
Cited by 35 · 2019
BitPruning: Learning bitlengths for aggressive and accurate quantization
M Nikolić, GB Hacene, C Bannon, AD Lascorz, M Courbariaux, OM Awad, ...
2024 IEEE International Symposium on Circuits and Systems (ISCAS), 1-5, 2024
Cited by 27 · 2024
FPRaker: A processing element for accelerating neural network training
OM Awad, M Mahmoud, I Edo, AH Zadeh, C Bannon, A Jayarajan, ...
MICRO-54: 54th Annual IEEE/ACM International Symposium on Microarchitecture …, 2021
Cited by 18 · 2021
Compressing pre-trained language models using progressive low rank decomposition
H Hajimolahoseini, M Rezagholizadeh, V Partovinia, M Tahaei, OM Awad, ...
Advances in Neural Information Processing Systems, 2021
Cited by 8 · 2021
SwiftLearn: A Data-Efficient Training Method of Deep Learning Models using Importance Sampling
H Hajimolahoseini, OM Awad, W Ahmed, A Wen, S Asani, M Hassanpour, ...
arXiv preprint arXiv:2311.15134, 2023
Cited by 1 · 2023
GQKVA: Efficient Pre-training of Transformers by Grouping Queries, Keys, and Values
F Javadi, W Ahmed, H Hajimolahoseini, F Ataiefard, M Hassanpour, ...
arXiv preprint arXiv:2311.03426, 2023
Cited by 1 · 2023
Improving ResNet-9 Generalization Trained on Small Datasets
OM Awad, H Hajimolahoseini, M Lim, G Gosal, W Ahmed, Y Liu, G Deng
arXiv preprint arXiv:2309.03965, 2023
Cited by 1 · 2023
Quantization for neural network computation
A Moshovos, AH Zadeh, IE Vivancos, OM Awad
US Patent App. 17/130,690, 2022
Cited by 1 · 2022
GOBO: Quantizing Attention-Based NLP Models for Low Latency and Energy Efficient Inference
AH Zadeh, I Edo, OM Awad, A Moshovos
arXiv preprint arXiv:2005.03842, 2020
Cited by 1 · 2020
TensorDash: Exploiting sparsity to accelerate deep neural network training and inference
M Mahmoud, IE Vivancos, O Awad, AH Zadeh, G Pekhimenko, J Albericio, ...
arXiv preprint cs.AR
Cited by 1
SkipViT: Speeding Up Vision Transformers with a Token-Level Skip Connection
F Ataiefard, W Ahmed, H Hajimolahoseini, S Asani, F Javadi, ...
arXiv preprint arXiv:2401.15293, 2024
2024
Quantization for neural network computation
A Moshovos, AH Zadeh, IE Vivancos, OM Awad
US Patent App. 18/026,927, 2023
2023
System and method for accelerating training of deep learning networks
OM Awad, M Mahmoud, A Moshovos
US Patent App. 18/005,717, 2023
2023
cuSCNN: An Efficient CUDA Implementation of Sparse CNNs
MA Elgammal, OM Awad, IE Vivancos, A Moshovos, V Betz
Proceedings of the 13th International Symposium on Highly Efficient …, 2023
2023
FPRaker: Exploiting Fine-grain Sparsity to Accelerate Neural Network Training
OM Awad
2020