Ahmad Rashid
Vector Institute; University of Waterloo
Verified email at uwaterloo.ca
Title
Cited by
Year
Context-aware adversarial training for name regularity bias in named entity recognition
A Ghaddar, P Langlais, A Rashid, M Rezagholizadeh
Transactions of the Association for Computational Linguistics 9, 586-604, 2021
Cited by: 37
MATE-KD: Masked adversarial text, a companion to knowledge distillation
A Rashid, V Lioutas, M Rezagholizadeh
arXiv preprint arXiv:2105.05912, 2021
Cited by: 30
Systems and methods for multilingual text generation field
M Rezagholizadeh, MA Haidar, A Do-Omri, A Rashid
US Patent 11,151,334, 2021
Cited by: 29
End-to-end self-debiasing framework for robust NLU training
A Ghaddar, P Langlais, M Rezagholizadeh, A Rashid
arXiv preprint arXiv:2109.02071, 2021
Cited by: 28
Kronecker decomposition for GPT compression
A Edalati, M Tahaei, A Rashid, VP Nia, JJ Clark, M Rezagholizadeh
arXiv preprint arXiv:2110.08152, 2021
Cited by: 23
Latent code and text-based generative adversarial networks for soft-text generation
MA Haidar, M Rezagholizadeh, A Do-Omri, A Rashid
arXiv preprint arXiv:1904.07293, 2019
Cited by: 23
A short study on compressing decoder-based language models
T Li, YE Mesbahi, I Kobyzev, A Rashid, A Mahmud, N Anchuri, ...
arXiv preprint arXiv:2110.08460, 2021
Cited by: 21
Towards zero-shot knowledge distillation for natural language processing
A Rashid, V Lioutas, A Ghaddar, M Rezagholizadeh
arXiv preprint arXiv:2012.15495, 2020
Cited by: 20
Stress relief by non-linear fillers in insulating solids
DW Auckland, A Rashid, K Tavernier, BR Varlow
Proceedings of IEEE Conference on Electrical Insulation and Dielectric …, 1994
Cited by: 16
Revisiting pre-trained language models and their evaluation for Arabic natural language understanding
A Ghaddar, Y Wu, S Bagga, A Rashid, K Bibi, M Rezagholizadeh, C Xing, ...
arXiv preprint arXiv:2205.10687, 2022
Cited by: 13
Improving Word Embedding Factorization for Compression Using Distilled Nonlinear Neural Decomposition
V Lioutas, A Rashid, A Do-Omri, M Haidar, M Rezagholizadeh
arXiv preprint arXiv:1910.06720, 2019
Cited by: 13*
Bilingual-GAN: A step towards parallel text generation
A Rashid, A Do-Omri, MA Haidar, Q Liu, M Rezagholizadeh
arXiv preprint arXiv:1904.04742, 2019
Cited by: 12
RW-KD: Sample-wise loss terms re-weighting for knowledge distillation
P Lu, A Ghaddar, A Rashid, M Rezagholizadeh, A Ghodsi, P Langlais
Findings of the Association for Computational Linguistics: EMNLP 2021, 3145-3152, 2021
Cited by: 8
Improving generalization of pre-trained language models via stochastic weight averaging
P Lu, I Kobyzev, M Rezagholizadeh, A Rashid, A Ghodsi, P Langlais
arXiv preprint arXiv:2212.05956, 2022
Cited by: 6
JABER and SABER: Junior and senior Arabic BERT
A Ghaddar, Y Wu, A Rashid, K Bibi, M Rezagholizadeh, C Xing, Y Wang, ...
arXiv preprint arXiv:2112.04329, 2021
Cited by: 6
The influence of particular barriers on treeing in polyester resin
DW Auckland, A Rashid, BR Varlow
[1992] Proceedings of the 4th International Conference on Conduction and …, 1992
Cited by: 6
How to select one among all? An empirical study towards the robustness of knowledge distillation in natural language understanding
T Li, A Rashid, A Jafari, P Sharma, A Ghodsi, M Rezagholizadeh
Findings of the Association for Computational Linguistics: EMNLP 2021, 750-762, 2021
Cited by: 5
JABER: Junior Arabic BERT
A Ghaddar, Y Wu, A Rashid, K Bibi, M Rezagholizadeh, C Xing, Y Wang, ...
arXiv preprint arXiv:2112.04329, 2021
Cited by: 4
How to select one among all? An extensive empirical study towards the robustness of knowledge distillation in natural language understanding
T Li, A Rashid, A Jafari, P Sharma, A Ghodsi, M Rezagholizadeh
arXiv preprint arXiv:2109.05696, 2021
Cited by: 3
From unsupervised machine translation to adversarial text generation
A Rashid, A Do-Omri, MA Haidar, Q Liu, M Rezagholizadeh
ICASSP 2020-2020 IEEE International Conference on Acoustics, Speech and …, 2020
Cited by: 2
Articles 1–20