Wenyue Hua
Ph.D. candidate in Computer Science, Rutgers University, New Brunswick
Verified email at rutgers.edu - Homepage
Title
Cited by · Year
OpenAGI: When LLM Meets Domain Experts
Y Ge, W Hua, J Ji, J Tan, S Xu, Y Zhang
NeurIPS 2023, 2023
69 · 2023
EntQA: Entity linking as question answering
W Zhang, W Hua, K Stratos
ICLR 2022, 2021
41 · 2021
How to Index Item IDs for Recommendation Foundation Models
W Hua, S Xu, Y Ge, Y Zhang
SIGIR-AP 2023, 2023
39 · 2023
UP5: Unbiased Foundation Model for Fairness-aware Recommendation
W Hua, Y Ge, S Xu, J Ji, Z Li, Y Zhang
EACL 2024, 2023
23 · 2023
OpenP5: Benchmarking Foundation Models for Recommendation
S Xu, W Hua, Y Zhang
SIGIR 2024, 2023
21 · 2023
War and Peace (WarAgent): Large Language Model-based Multi-Agent Simulation of World Wars
W Hua*, L Fan*, L Li, K Mei, J Ji, Y Ge, L Hemphill, Y Zhang
arXiv preprint arXiv:2311.17227, 2023
19 · 2023
Text-based Large Language Model for Recommendation
J Ji, Z Li, S Xu, W Hua, Y Ge, J Tan, Y Zhang
ECIR 2024, 2023
16* · 2023
A Predicate-Function-Argument Annotation of Natural Language for Open-Domain Information eXpression
M Sun, W Hua, Z Liu, X Wang, K Zheng, P Li
EMNLP 2020, 2020
13 · 2020
Tutorial on Large Language Models for Recommendation
W Hua, L Li, S Xu, L Chen, Y Zhang
RecSys 2023, 1281-1283, 2023
9 · 2023
NPHardEval: Dynamic Benchmark on Reasoning Ability of Large Language Models via Complexity Classes
L Fan*, W Hua*, L Li, H Ling, Y Zhang
arXiv preprint arXiv:2312.14890, 2023
8 · 2023
LLM as OS, Agents as Apps: Envisioning AIOS, Agents and the AIOS-Agent Ecosystem
Y Ge, Y Ren, W Hua, S Xu, J Tan, Y Zhang
arXiv preprint arXiv:2312.03815, 2023
6 · 2023
System 1 + System 2 = Better World: Neural-Symbolic Chain of Logic Reasoning
W Hua, Y Zhang
EMNLP 2022, 2022
5 · 2022
LegalRelectra: Mixed-domain Language Modeling for Long-range Legal Text Comprehension
W Hua, Y Zhang, Z Chen, J Li, M Weber
EMNLP NLLP workshop 2023, 2022
4 · 2022
Learning Underlying Representations and Input-Strictly-Local Functions
W Hua, A Jardine, H Dai
WCCFL 2020, 2020
4 · 2020
Propagation and Pitfalls: Reasoning-based Assessment of Knowledge Editing through Counterfactual Tasks
W Hua*, J Guo*, M Dong, H Zhu, P Ng, Z Wang
arXiv preprint arXiv:2401.17585, 2024
3 · 2024
PromptCrypt: Prompt Encryption for Secure Communication with Large Language Models
G Lin, W Hua, Y Zhang
arXiv preprint arXiv:2402.05868, 2024
2 · 2024
Formal-LLM: Integrating Formal Language and Natural Language for Controllable LLM-based Agents
Z Li, W Hua, H Wang, H Zhu, Y Zhang
arXiv preprint arXiv:2402.00798, 2024
2 · 2024
The impact of reasoning step length on large language models
M Jin, Q Yu, H Zhao, W Hua, Y Meng, Y Zhang, M Du
arXiv preprint arXiv:2401.04925, 2024
2 · 2024
Towards LLM-RecSys Alignment with Textual ID Learning
J Tan, S Xu, W Hua, Y Ge, Z Li, Y Zhang
SIGIR 2024, 2024
1 · 2024
What if LLMs Have Different World Views: Simulating Alien Civilizations with LLM-based Agents
M Jin, B Wang, Z Xue, S Zhu, W Hua, H Tang, K Mei, M Du, Y Zhang
arXiv preprint arXiv:2402.13184, 2024
1 · 2024
Articles 1–20