Songfang Huang
Alibaba DAMO Academy
Semantic relation classification via convolutional neural networks with simple negative sampling
K Xu, Y Feng, S Huang, D Zhao
arXiv preprint arXiv:1506.07650, 2015
Question answering on freebase via relation extraction and textual evidence
K Xu, S Reddy, Y Feng, S Huang, D Zhao
arXiv preprint arXiv:1603.00957, 2016
Combining graph-based learning with automated data collection for code vulnerability detection
H Wang, G Ye, Z Tang, SH Tan, S Huang, D Fang, Y Feng, L Bian, ...
IEEE Transactions on Information Forensics and Security 16, 1943-1958, 2020
RRHF: Rank responses to align language models with human feedback
H Yuan, Z Yuan, C Tan, W Wang, S Huang, F Huang
Advances in Neural Information Processing Systems 36, 2024
Raise a child in large language model: Towards effective and generalizable fine-tuning
R Xu, F Luo, Z Zhang, C Tan, B Chang, S Huang, F Huang
arXiv preprint arXiv:2109.05687, 2021
Biomedical question answering: a survey of approaches and challenges
Q Jin, Z Yuan, G Xiong, Q Yu, H Ying, C Tan, M Chen, S Huang, X Liu, ...
ACM Computing Surveys (CSUR) 55 (2), 1-36, 2022
MELR: Meta-learning via modeling episode-level relationships for few-shot learning
N Fei, Z Lu, T Xiang, S Huang
International Conference on Learning Representations, 2021
StructuralLM: Structural pre-training for form understanding
C Li, B Bi, M Yan, W Wang, S Huang, F Huang, L Si
Proceedings of the 59th Annual Meeting of the Association for Computational …, 2021
E2E-VLP: End-to-end vision-language pre-training enhanced by visual learning
H Xu, M Yan, C Li, B Bi, S Huang, W Xiao, F Huang
arXiv preprint arXiv:2106.01804, 2021
Improving biomedical pretrained language models with knowledge
Z Yuan, Y Liu, C Tan, S Huang, F Huang
arXiv preprint arXiv:2104.10344, 2021
Learning with noise: Enhance distantly supervised relation extraction with dynamic transition matrix
B Luo, Y Feng, Z Wang, Z Zhu, S Huang, R Yan, D Zhao
arXiv preprint arXiv:1705.03995, 2017
IEPT: Instance-level and episode-level pretext tasks for few-shot learning
M Zhang, J Zhang, Z Lu, T Xiang, M Ding, S Huang
International Conference on Learning Representations, 2021
mPLUG-2: A modularized multi-modal foundation model across text, image and video
H Xu, Q Ye, M Yan, Y Shi, J Ye, Y Xu, C Li, B Bi, Q Qian, W Wang, G Xu, ...
International Conference on Machine Learning, 38728-38748, 2023
mPLUG: Effective and efficient vision-language learning by cross-modal skip-connections
C Li, H Xu, J Tian, W Wang, M Yan, B Bi, J Ye, H Chen, G Xu, Z Cao, ...
arXiv preprint arXiv:2205.12005, 2022
PALM: Pre-training an autoencoding&autoregressive language model for context-conditioned generation
B Bi, C Li, C Wu, M Yan, W Wang, S Huang, F Huang, L Si
arXiv preprint arXiv:2004.07159, 2020
How well do large language models perform in arithmetic tasks?
Z Yuan, H Yuan, C Tan, W Wang, S Huang
arXiv preprint arXiv:2304.02015, 2023
VECO: Variable encoder-decoder pre-training for cross-lingual understanding and generation
F Luo, W Wang, J Liu, Y Liu, B Bi, S Huang, F Huang, L Si
Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics, 2021
Hybrid question answering over knowledge base and free text
K Xu, Y Feng, S Huang, D Zhao
Proceedings of COLING 2016, the 26th International Conference on …, 2016
Code synonyms do matter: Multiple synonyms matching network for automatic ICD coding
Z Yuan, C Tan, S Huang
arXiv preprint arXiv:2203.01515, 2022
Automated conformance testing for JavaScript engines via deep compiler fuzzing
G Ye, Z Tang, SH Tan, S Huang, D Fang, X Sun, L Bian, H Wang, Z Wang
Proceedings of the 42nd ACM SIGPLAN international conference on programming …, 2021