| Title | Authors | Venue | Cited by | Year |
|---|---|---|---|---|
| Spatiotemporal and frequential cascaded attention networks for speech emotion recognition | S Li, X Xing, W Fan, B Cai, P Fordson, X Xu | Neurocomputing 448, 238-248, 2021 | 52 | 2021 |
| LSSED: a large-scale dataset and benchmark for speech emotion recognition | W Fan, X Xu, X Xing, W Chen, D Huang | ICASSP 2021 IEEE International Conference on Acoustics, Speech and …, 2021 | 35 | 2021 |
| ISNet: individual standardization network for speech emotion recognition | W Fan, X Xu, B Cai, X Xing | IEEE/ACM Transactions on Audio, Speech, and Language Processing 30, 1803-1814, 2022 | 31 | 2022 |
| Multi-modality depression detection via multi-scale temporal dilated CNNs | W Fan, Z He, X Xing, B Cai, W Lu | Proceedings of the 9th International on Audio/Visual Emotion Challenge and …, 2019 | 28 | 2019 |
| Multi-modality hierarchical recall based on GBDTs for bipolar disorder classification | X Xing, B Cai, Y Zhao, S Li, Z He, W Fan | Proceedings of the 2018 on Audio/Visual Emotion Challenge and Workshop, 31-37, 2018 | 28 | 2018 |
| CPED: a large-scale Chinese personalized and emotional dialogue dataset for conversational AI | Y Chen, W Fan, X Xing, J Pang, M Huang, W Han, Q Tie, X Xu | arXiv preprint arXiv:2205.14727, 2022 | 11 | 2022 |
| Adaptive domain-aware representation learning for speech emotion recognition | W Fan, X Xu, X Xing, D Huang | Interspeech, 4089-4093, 2020 | 11 | 2020 |
| MGAT: multi-granularity attention based transformers for multi-modal emotion recognition | W Fan, X Xing, B Cai, X Xu | ICASSP 2023 IEEE International Conference on Acoustics, Speech and …, 2023 | 8 | 2023 |
| Coordination attention based transformers with bidirectional contrastive loss for multimodal speech emotion recognition | W Fan, X Xu, G Zhou, X Deng, X Xing | Available at SSRN 4647924 | | |