Zheyuan Liu
Title · Cited by · Year
Towards Safer Large Language Models through Machine Unlearning
Z Liu, G Dou, Z Tan, Y Tian, M Jiang
ACL Findings, 2024
Cited by 19 · 2024
Fair Graph Representation Learning via Diverse Mixture of Experts
Z Liu*, C Zhang*, Y Tian, E Zhang, C Huang, Y Ye, C Zhang
The Web Conference (WWW) 2023, 2023
Cited by 18 · 2023
Chasing all-round graph representation robustness: Model, training, and optimization
C Zhang, Y Tian, M Ju, Z Liu, Y Ye, N Chawla, C Zhang
The Eleventh International Conference on Learning Representations (ICLR) 2023, 2022
Cited by 16 · 2022
Breaking the trilemma of privacy, utility, efficiency via controllable machine unlearning
Z Liu*, G Dou*, Y Tian, C Zhang, E Chien, Z Zhu
The Web Conference (WWW) 2024, 2023
Cited by 11 · 2023
GraphBERT: Bridging graph and text for malicious behavior detection on social media
J Wu, C Zhang, Z Liu, E Zhang, S Wilson, C Zhang
2022 IEEE International Conference on Data Mining (ICDM), 548-557, 2022
Cited by 11 · 2022
Democratizing large language models via personalized parameter-efficient fine-tuning
Z Tan, Q Zeng, Y Tian, Z Liu, B Yin, M Jiang
arXiv preprint arXiv:2402.04401, 2024
Cited by 10 · 2024
Can we soft prompt LLMs for graph learning tasks?
Z Liu, X He, Y Tian, NV Chawla
Companion Proceedings of the ACM on Web Conference 2024, 481-484, 2024
Cited by 4 · 2024
UGMAE: A unified framework for graph masked autoencoders
Y Tian, C Zhang, Z Kou, Z Liu, X Zhang, NV Chawla
arXiv preprint arXiv:2402.08023, 2024
Cited by 3 · 2024
Graph Learning for Parameter Prediction of Quantum Approximate Optimization Algorithm
Z Liang, G Liu, Z Liu, J Cheng, T Hao, K Liu, H Ren, Z Song, J Liu, F Ye, ...
arXiv preprint arXiv:2403.03310, 2024
Cited by 1 · 2024
Avoiding Copyright Infringement via Machine Unlearning
G Dou, Z Liu, Q Lyu, K Ding, E Wong
arXiv preprint arXiv:2406.10952, 2024
2024
Personalized Pieces: Efficient Personalized Large Language Models through Collaborative Efforts
Z Tan, Z Liu, M Jiang
arXiv preprint arXiv:2406.10471, 2024
2024
State-level COVID-19 Trend Forecasting Using Mobility and Policy Data
Y Wang, H Peng, L Sha, Z Liu, P Hong
medRxiv, 2021.01. 04.21249218, 2021
2021
Articles 1–12