Weiquan Fan
Verified email at mail.scut.edu.cn
Title · Cited by · Year
Spatiotemporal and frequential cascaded attention networks for speech emotion recognition
S Li, X Xing, W Fan, B Cai, P Fordson, X Xu
Neurocomputing 448, 238-248, 2021
52 · 2021
LSSED: a large-scale dataset and benchmark for speech emotion recognition
W Fan, X Xu, X Xing, W Chen, D Huang
ICASSP 2021-2021 IEEE International Conference on Acoustics, Speech and …, 2021
35 · 2021
Isnet: Individual standardization network for speech emotion recognition
W Fan, X Xu, B Cai, X Xing
IEEE/ACM Transactions on Audio, Speech, and Language Processing 30, 1803-1814, 2022
31 · 2022
Multi-modality depression detection via multi-scale temporal dilated cnns
W Fan, Z He, X Xing, B Cai, W Lu
Proceedings of the 9th International on Audio/Visual Emotion Challenge and …, 2019
28 · 2019
Multi-modality hierarchical recall based on gbdts for bipolar disorder classification
X Xing, B Cai, Y Zhao, S Li, Z He, W Fan
Proceedings of the 2018 on Audio/Visual Emotion Challenge and Workshop, 31-37, 2018
28 · 2018
CPED: A Large-Scale Chinese Personalized and Emotional Dialogue Dataset for Conversational AI
Y Chen, W Fan, X Xing, J Pang, M Huang, W Han, Q Tie, X Xu
arXiv preprint arXiv:2205.14727, 2022
11 · 2022
Adaptive Domain-Aware Representation Learning for Speech Emotion Recognition.
W Fan, X Xu, X Xing, D Huang
Interspeech, 4089-4093, 2020
11 · 2020
CPED: A large-scale Chinese personalized and emotional dialogue dataset for conversational AI
Y Chen, W Fan, X Xing, J Pang, M Huang, W Han, Q Tie, X Xu
arXiv preprint arXiv:2205.14727, 2022
10 · 2022
MGAT: Multi-granularity attention based transformers for multi-modal emotion recognition
W Fan, X Xing, B Cai, X Xu
ICASSP 2023-2023 IEEE International Conference on Acoustics, Speech and …, 2023
8 · 2023
Coordination Attention Based Transformers with Bidirectional Contrastive Loss for Multimodal Speech Emotion Recognition
W Fan, X Xu, G Zhou, X Deng, X Xing
Available at SSRN 4647924