Guolin Ke
DP Technology
Verified email at microsoft.com
LightGBM: A Highly Efficient Gradient Boosting Decision Tree
G Ke, Q Meng, T Finley, T Wang, W Chen, W Ma, Q Ye, TY Liu
Advances in Neural Information Processing Systems, 3148-3156, 2017
Cited by 3733 · 2017
A communication-efficient parallel algorithm for decision tree
Q Meng, G Ke, T Wang, W Chen, Q Ye, ZM Ma, TY Liu
Advances in Neural Information Processing Systems, 1279-1287, 2016
Cited by 78 · 2016
Deep subdomain adaptation network for image classification
Y Zhu, F Zhuang, J Wang, G Ke, J Chen, J Bian, H Xiong, Q He
IEEE Transactions on Neural Networks and Learning Systems 32 (4), 1713-1722, 2020
Cited by 71 · 2020
DeepGBM: A deep learning framework distilled by GBDT for online prediction tasks
G Ke, Z Xu, J Zhang, J Bian, TY Liu
Proceedings of the 25th ACM SIGKDD International Conference on Knowledge …, 2019
Cited by 61* · 2019
Rethinking Positional Encoding in Language Pre-training
G Ke, D He, TY Liu
International Conference on Learning Representations (ICLR), 2021
Cited by 59* · 2020
Invertible image rescaling
M Xiao, S Zheng, C Liu, Y Wang, D He, G Ke, J Bian, Z Lin, TY Liu
European Conference on Computer Vision, 126-144, 2020
Cited by 48 · 2020
Do Transformers Really Perform Bad for Graph Representation?
C Ying, T Cai, S Luo, S Zheng, G Ke, D He, Y Shen, TY Liu
arXiv preprint arXiv:2106.05234, 2021
Cited by 24 · 2021
Transformers with Competitive Ensembles of Independent Mechanisms
A Lamb, D He, A Goyal, G Ke, CF Liao, M Ravanelli, Y Bengio
arXiv preprint arXiv:2103.00336, 2021
Cited by 7 · 2021
MC-BERT: Efficient Language Pre-Training via a Meta Controller
Z Xu, L Gong, G Ke, D He, S Zheng, L Wang, J Bian, TY Liu
arXiv preprint arXiv:2006.05744, 2020
Cited by 7 · 2020
Less is More: Pre-training a Strong Siamese Encoder Using a Weak Decoder
S Lu, C Xiong, D He, G Ke, W Malik, Z Dou, P Bennett, T Liu, A Overwijk
arXiv preprint arXiv:2102.09206, 2021
Cited by 5 · 2021
Taking Notes on the Fly Helps Language Pre-Training
Q Wu, C Xing, Y Li, G Ke, D He, TY Liu
International Conference on Learning Representations, 2020
Cited by 5* · 2020
Stable, fast and accurate: Kernelized attention with relative positional encoding
S Luo, S Li, T Cai, D He, D Peng, S Zheng, G Ke, L Wang, TY Liu
Advances in Neural Information Processing Systems 34, 2021
Cited by 2 · 2021
How could Neural Networks understand Programs?
D Peng, S Zheng, Y Li, G Ke, D He, TY Liu
arXiv preprint arXiv:2105.04297, 2021
Cited by 1 · 2021
Revisiting Language Encoding in Learning Multilingual Representations
S Luo, K Gao, S Zheng, G Ke, D He, L Wang, TY Liu
arXiv preprint arXiv:2102.08357, 2021
Cited by 1 · 2021
First Place Solution of KDD Cup 2021 & OGB Large-Scale Challenge Graph Prediction Track
C Ying, M Yang, S Zheng, G Ke, S Luo, T Cai, C Wu, Y Wang, Y Shen, ...
Cited by 1 · 2021
LightMC: A Dynamic and Efficient Multiclass Decomposition Algorithm
Z Liu, G Ke, J Bian, T Liu
arXiv preprint arXiv:1908.09362, 2019
Cited by 1 · 2019
Less is More: Pretrain a Strong Siamese Encoder for Dense Text Retrieval Using a Weak Decoder
S Lu, D He, C Xiong, G Ke, W Malik, Z Dou, P Bennett, TY Liu, A Overwijk
Proceedings of the 2021 Conference on Empirical Methods in Natural Language …, 2021
Awardee Solution of KDD Cup 2021 OGB Large-Scale Challenge Graph-Level Track
C Ying, M Yang, S Zheng, G Ke, S Luo, T Cai, C Wu, Y Wang, Y Shen, ...
arXiv preprint arXiv:2106.08279, 2021
LazyFormer: Self Attention with Lazy Update
C Ying, G Ke, D He, TY Liu
arXiv preprint arXiv:2102.12702, 2021
Light Multi-Segment Activation for Model Compression
Z Xu, G Ke, J Zhang, J Bian, TY Liu
AAAI, 6542-6549, 2020
Articles 1–20