Hao Sun
Verified email at pku.edu.cn
Title    Cited by    Year
Multispeech: Multi-speaker text to speech with transformer
M Chen, X Tan, Y Ren, J Xu, H Sun, S Zhao, T Qin, TY Liu
arXiv preprint arXiv:2006.04664, 2020
93    2020
Token-level ensemble distillation for grapheme-to-phoneme conversion
H Sun, X Tan, JW Gan, H Liu, S Zhao, T Qin, TY Liu
arXiv preprint arXiv:1904.03446, 2019
68    2019
LightPAFF: A two-stage distillation framework for pre-training and fine-tuning
K Song, H Sun, X Tan, T Qin, J Lu, H Liu, TY Liu
arXiv preprint arXiv:2004.12817, 2020
16    2020
Knowledge distillation from bert in pre-training and fine-tuning for polyphone disambiguation
H Sun, X Tan, JW Gan, S Zhao, D Han, H Liu, T Qin, TY Liu
2019 IEEE Automatic Speech Recognition and Understanding Workshop (ASRU …, 2019
14    2019