BERT-ATTACK: Adversarial Attack Against BERT Using BERT. L Li, R Ma, Q Guo, X Xue, X Qiu. EMNLP 2020 (arXiv:2004.09984). Cited by 681, 2020.
Simplify the Usage of Lexicon in Chinese NER. R Ma, M Peng, Q Zhang, X Huang. ACL 2020 (arXiv:1908.05969). Cited by 342, 2019.
CNN-Based Chinese NER with Lexicon Rethinking. T Gui, R Ma, Q Zhang, L Zhao, YG Jiang, X Huang. IJCAI 2019. Cited by 297, 2019.
Template-free Prompt Tuning for Few-shot NER. R Ma, X Zhou, T Gui, Y Tan, Q Zhang, X Huang. NAACL 2022 (arXiv:2109.13532). Cited by 182, 2021.
Backdoor Attacks on Pre-trained Models by Layerwise Weight Poisoning. L Li, D Song, X Li, J Zeng, R Ma, X Qiu. EMNLP 2021 (arXiv:2108.13888). Cited by 116, 2021.
TextFlint: Unified Multilingual Robustness Evaluation Toolkit for Natural Language Processing. X Wang, Q Liu, T Gui, Q Zhang, Y Zou, X Zhou, J Ye, Y Zhang, R Zheng, et al. ACL 2021. Cited by 90, 2021.
SENT: Sentence-level Distant Relation Extraction via Negative Training. R Ma, T Gui, L Li, Q Zhang, Y Zhou, X Huang. ACL 2021 (arXiv:2106.11566). Cited by 40, 2021.
TextFlint: Unified Multilingual Robustness Evaluation Toolkit for Natural Language Processing. T Gui, X Wang, Q Zhang, Q Liu, Y Zou, X Zhou, R Zheng, C Zhang, Q Wu, et al. arXiv preprint arXiv:2103.11441. Cited by 36, 2021.
KNN-BERT: Fine-tuning Pre-trained Models with KNN Classifier. L Li, D Song, R Ma, X Qiu, X Huang. arXiv preprint arXiv:2110.02523. Cited by 24, 2021.
Are Large Language Models Good Prompt Optimizers? R Ma, X Wang, X Zhou, J Li, N Du, T Gui, Q Zhang, X Huang. arXiv preprint arXiv:2402.02101. Cited by 18, 2024.
Coarse-to-fine Few-shot Learning for Named Entity Recognition. R Ma, Z Lin, X Chen, X Zhou, J Wang, T Gui, Q Zhang, X Gao, YW Chen. Findings of ACL 2023, 4115-4129. Cited by 14, 2023.
TextObfuscator: Making Pre-trained Language Model a Privacy Protector via Obfuscating Word Representations. X Zhou, Y Lu, R Ma, T Gui, Y Wang, Y Ding, Y Zhang, Q Zhang, XJ Huang. Findings of ACL 2023, 5459-5473. Cited by 13, 2023.
TextFusion: Privacy-Preserving Pre-trained Model Inference via Token Fusion. X Zhou, J Lu, T Gui, R Ma, Z Fei, Y Wang, Y Ding, Y Cheung, Q Zhang, et al. EMNLP 2022. Cited by 12, 2022.
Searching for Optimal Subword Tokenization in Cross-domain NER. R Ma, Y Tan, X Zhou, X Chen, D Liang, S Wang, W Wu, T Gui, Q Zhang. IJCAI 2022 (arXiv:2206.03352). Cited by 12, 2022.
Making Harmful Behaviors Unlearnable for Large Language Models. X Zhou, Y Lu, R Ma, T Gui, Q Zhang, X Huang. arXiv preprint arXiv:2311.02105. Cited by 11, 2023.
Cross-Linguistic Syntactic Difference in Multilingual BERT: How Good Is It and How Does It Affect Transfer? N Xu, T Gui, R Ma, Q Zhang, J Ye, M Zhang, X Huang. arXiv preprint arXiv:2212.10879. Cited by 9, 2022.
Making Parameter-efficient Tuning More Efficient: A Unified Framework for Classification Tasks. X Zhou, R Ma, Y Zou, X Chen, T Gui, Q Zhang, XJ Huang, R Xie, W Wu. COLING 2022. Cited by 9, 2022.
Toward Recognizing More Entity Types in NER: An Efficient Implementation Using Only Entity Lexicons. M Peng, R Ma, Q Zhang, L Zhao, M Wei, C Sun, XJ Huang. Findings of EMNLP 2020, 678-688. Cited by 8, 2020.
Learning "O" Helps for Learning More: Handling the Unlabeled Entity Problem for Class-incremental NER. R Ma, X Chen, Z Lin, X Zhou, J Wang, T Gui, Q Zhang, X Gao, YW Chen. ACL 2023. Cited by 7, 2023.
TextMixer: Mixing Multiple Inputs for Privacy-Preserving Inference. X Zhou, Y Lu, R Ma, T Gui, Q Zhang, XJ Huang. Findings of EMNLP 2023, 3749-3762. Cited by 3, 2023.