S Xu, X Zhang, Y Wu, F Wei, M Zhou. Unsupervised Extractive Summarization by Pre-training Hierarchical Transformers. Findings of the Association for Computational Linguistics: EMNLP, 2020. Cited by 34.
S Xu, X Zhang, Y Wu, F Wei. Sequence Level Contrastive Learning for Text Summarization. Proceedings of the AAAI Conference on Artificial Intelligence 36 (10), 11556 …, 2022. Cited by 25.
F Xu, Y Li, S Xu. Attentional Multi-graph Convolutional Network for Regional Economy Prediction with Open Migration Data. Proceedings of the 26th ACM SIGKDD International Conference on Knowledge …, 2020. Cited by 10.
S Xu, H Wang, Y Wu. Grounded Reinforcement Learning: Learning to Win the Game under Human Commands. Advances in Neural Information Processing Systems 35, 7504-7519, 2022. Cited by 1.
Y Wu, S Xu, ST Yau, Y Wu. PhyloTransformer: A Discriminative Model for Mutation Prediction Based on a Multi-head Self-attention Mechanism. arXiv preprint arXiv:2111.01969, 2021. Cited by 1.
S Xu, Y Liu, X Yi, S Zhou, H Li, Y Wu. Native Chinese Reader: A Dataset Towards Native-Level Chinese Machine Reading Comprehension. Thirty-fifth Conference on Neural Information Processing Systems Datasets …, 2021. Cited by 1.
S Xu, Y Liang, Y Li, SS Du, Y Wu. Beyond Information Gain: An Empirical Benchmark for Low-Switching-Cost Reinforcement Learning. Transactions on Machine Learning Research.