Zhiying Jiang
Verified email at uwaterloo.ca
Title · Cited by · Year
Document ranking with a pretrained sequence-to-sequence model
R Nogueira, Z Jiang, J Lin
arXiv preprint arXiv:2003.06713, 2020
384 · 2020
Investigating the limitations of transformers with simple arithmetic tasks
R Nogueira, Z Jiang, J Lin
arXiv preprint arXiv:2102.13019, 2021
82 · 2021
Describing a knowledge base
Q Wang, X Pan, L Huang, B Zhang, Z Jiang, H Ji, K Knight
arXiv preprint arXiv:1809.01797, 2018
57 · 2018
PaperRobot: Incremental draft generation of scientific ideas
Q Wang, L Huang, Z Jiang, K Knight, H Ji, M Bansal, Y Luan
arXiv preprint arXiv:1905.07870, 2019
54 · 2019
What the daam: Interpreting stable diffusion using cross attention
R Tang, L Liu, A Pandey, Z Jiang, G Yang, K Kumar, P Stenetorp, J Lin, ...
arXiv preprint arXiv:2210.04885, 2022
51 · 2022
“Low-Resource” Text Classification: A Parameter-Free Classification Method with Compressors
Z Jiang, M Yang, M Tsirlin, R Tang, Y Dai, J Lin
Findings of the Association for Computational Linguistics: ACL 2023, 6810-6828, 2023
31 · 2023
Navigation-based candidate expansion and pretrained language models for citation recommendation
R Nogueira, Z Jiang, K Cho, J Lin
Scientometrics 125 (3), 3001-3016, 2020
18 · 2020
Chengyu cloze test
Z Jiang, B Zhang, L Huang, H Ji
Proceedings of the Thirteenth Workshop on Innovative Use of NLP for Building …, 2018
15 · 2018
Inserting information bottlenecks for attribution in transformers
Z Jiang, R Tang, J Xin, J Lin
arXiv preprint arXiv:2012.13838, 2020
11 · 2020
Few-shot non-parametric learning with deep latent variable model
Z Jiang, Y Dai, J Xin, M Li, J Lin
Advances in Neural Information Processing Systems 35, 26448-26461, 2022
8 · 2022
How does BERT rerank passages? an attribution analysis with information bottlenecks
Z Jiang, R Tang, J Xin, J Lin
Proceedings of the Fourth BlackboxNLP Workshop on Analyzing and Interpreting …, 2021
8 · 2021
Evaluating pretrained transformer models for citation recommendation
R Nogueira, Z Jiang, K Cho, J Lin
CEUR Workshop Proceedings 2591, 89-100, 2020
8 · 2020
Less is more: Parameter-free text classification with gzip
Z Jiang, MYR Yang, M Tsirlin, R Tang, J Lin
arXiv preprint arXiv:2212.09410, 2022
3 · 2022
Building an efficiency pipeline: Commutativity and cumulativeness of efficiency operators for transformers
J Xin, R Tang, Z Jiang, Y Yu, J Lin
arXiv preprint arXiv:2208.00483, 2022
2 · 2022
Less is More: Restricted Representations for Better Interpretability and Generalizability
Z Jiang
University of Waterloo, 2023
2023
Approximating Human-Like Few-shot Learning with GPT-based Compression
C Huang, Y Xie, Z Jiang, J Lin, M Li
arXiv preprint arXiv:2308.06942, 2023
2023
Operator Selection and Ordering in a Pipeline Approach to Efficiency Optimizations for Transformers
J Xin, R Tang, Z Jiang, Y Yu, J Lin
Findings of the Association for Computational Linguistics: ACL 2023, 2870-2882, 2023
2023
With a Little Help from Gzip: Text Classification with No Training
Z Jiang, MYR Yang, M Tsirlin, R Tang, J Lin
Narrating a Knowledge Base
Q Wang, X Pan, L Huang, B Zhang, Z Jiang, H Ji, K Knight
Q Wang, L Huang, Z Jiang
Articles 1–20