AdapLeR: Speeding up Inference by Adaptive Length Reduction. A Modarressi, H Mohebbi, MT Pilehvar. Proceedings of the 60th Annual Meeting of the Association for Computational …, 2022. (Cited by 18)

Quantifying Context Mixing in Transformers. H Mohebbi, W Zuidema, G Chrupała, A Alishahi. Proceedings of the 17th Conference of the European Chapter of the …, 2023. (Cited by 14)

Exploring the Role of BERT Token Representations to Explain Sentence Probing Results. H Mohebbi, A Modarressi, MT Pilehvar. Proceedings of the 2021 Conference on Empirical Methods in Natural Language …, 2021. (Cited by 14)

Not All Models Localize Linguistic Knowledge in the Same Place: A Layer-wise Probing on BERToids' Representations. M Fayyaz, E Aghazadeh, A Modarressi, H Mohebbi, MT Pilehvar. Proceedings of the Fourth BlackboxNLP Workshop on Analyzing and Interpreting …, 2021. (Cited by 13)

Nexus2D Team Description Paper. MA Esfahani, M Ghafouri, M Jamili, S Askari, R Etemadi, H Mohebbi, et al. RoboCup 2017 Symposium and Competition, 2017. (Cited by 5)

DecoderLens: Layerwise Interpretation of Encoder-Decoder Transformers. A Langedijk, H Mohebbi, G Sarti, W Zuidema, J Jumelet. arXiv preprint arXiv:2310.03686, 2023. (Cited by 3)

Homophone Disambiguation Reveals Patterns of Context Mixing in Speech Transformers. H Mohebbi, G Chrupała, W Zuidema, A Alishahi. Proceedings of the 2023 Conference on Empirical Methods in Natural Language …, 2023. (Cited by 2)

The Convexity of BERT: From Cause to Solution. H Mohebbi, SA Modarressi. 2020. (Cited by 1)

Transformer-specific Interpretability. H Mohebbi, J Jumelet, M Hanna, A Alishahi, W Zuidema. Proceedings of the 18th Conference of the European Chapter of the …, 2024.

Proceedings of the 6th BlackboxNLP Workshop: Analyzing and Interpreting Neural Networks for NLP. Y Belinkov, S Hao, J Jumelet, N Kim, AD McCarthy, H Mohebbi. Proceedings of the 6th BlackboxNLP Workshop: Analyzing and Interpreting …, 2023.