Prasanna Parthasarathi
Title · Cited by · Year
Extending neural generative conversational model using external knowledge sources
P Parthasarathi, J Pineau
arXiv preprint arXiv:1809.05524, 2018
Cited by 98 · 2018
Unnatural language inference
K Sinha, P Parthasarathi, J Pineau, A Williams
arXiv preprint arXiv:2101.00010, 2020
Cited by 90 · 2020
Learning an unreferenced metric for online dialogue evaluation
K Sinha, P Parthasarathi, J Wang, R Lowe, WL Hamilton, J Pineau
arXiv preprint arXiv:2005.00583, 2020
Cited by 86 · 2020
Attend, adapt and transfer: Attentive deep architecture for adaptive transfer from multiple sources in the same domain
J Rajendran, A Srinivas, MM Khapra, P Prasanna, B Ravindran
arXiv preprint arXiv:1510.02879, 2015
Cited by 64 · 2015
Neural assistant: Joint action prediction, response generation, and latent knowledge reasoning
A Neelakantan, S Yavuz, S Narang, V Prasad, B Goodrich, D Duckworth, ...
arXiv preprint arXiv:1910.14613, 2019
Cited by 15 · 2019
Sometimes we want ungrammatical translations
P Parthasarathi, K Sinha, J Pineau, A Williams
Findings of the Association for Computational Linguistics: EMNLP 2021, 3205-3227, 2021
Cited by 14* · 2021
Local structure matters most: Perturbation study in NLU
L Clouatre, P Parthasarathi, A Zouaq, S Chandar
arXiv preprint arXiv:2107.13955, 2021
Cited by 14 · 2021
Adaapt: A deep architecture for adaptive policy transfer from multiple sources
J Rajendran, P Prasanna, B Ravindran, MM Khapra
arXiv preprint arXiv:1510, 2015
Cited by 12 · 2015
Demystifying neural language models’ insensitivity to word-order
L Clouatre, P Parthasarathi, A Zouaq, S Chandar
arXiv preprint arXiv:2107.13955, 2021
Cited by 11 · 2021
Maca: A modular architecture for conversational agents
HP Truong, P Parthasarathi, J Pineau
Proceedings of the 18th Annual SIGdial Meeting on Discourse and Dialogue, 93-102, 2017
Cited by 11 · 2017
Deep learning on a healthy data diet: Finding important examples for fairness
A Zayed, P Parthasarathi, G Mordido, H Palangi, S Shabanian, S Chandar
Proceedings of the AAAI Conference on Artificial Intelligence 37 (12), 14593 …, 2023
Cited by 7 · 2023
Do Encoder Representations of Generative Dialogue Models have sufficient summary of the Information about the task?
P Parthasarathi, J Pineau, S Chandar
Proceedings of the 22nd Annual Meeting of the Special Interest Group on …, 2021
Cited by 5* · 2021
Memory augmented optimizers for deep learning
PA McRae, P Parthasarathi, M Assran, S Chandar
arXiv preprint arXiv:2106.10708, 2021
Cited by 4 · 2021
Measuring the knowledge acquisition-utilization gap in pretrained language models
A Kazemnejad, M Rezagholizadeh, P Parthasarathi, S Chandar
arXiv preprint arXiv:2305.14775, 2023
Cited by 3 · 2023
On task-level dialogue composition of generative transformer model
P Parthasarathi, A Neelakantan, S Narang
arXiv preprint arXiv:2010.04826, 2020
Cited by 3 · 2020
Practical Takes on Federated Learning with Pretrained Language Models
A Agarwal, M Rezagholizadeh, P Parthasarathi
Findings of the Association for Computational Linguistics: EACL 2023, 454-471, 2023
Cited by 2 · 2023
A brief study on the effects of training generative dialogue models with a semantic loss
P Parthasarathi, M Abdelsalam, J Pineau, S Chandar
arXiv preprint arXiv:2106.10619, 2021
Cited by 2 · 2021
Variational encoder decoder for image generation conditioned on captions
J Romoff, N Angelard-Gontier, P Parthasarathi
Cited by 2 · 2016
EpiK-Eval: Evaluation for Language Models as Epistemic Models
G Prato, J Huang, P Parthasarathi, S Sodhani, S Chandar
arXiv preprint arXiv:2310.15372, 2023
Cited by 1 · 2023
Detecting Languages Unintelligible to Multilingual Models through Local Structure Probes
L Clouâtre, P Parthasarathi, A Zouaq, S Chandar
arXiv preprint arXiv:2211.05015, 2022
Cited by 1 · 2022