Neil Zhenqiang Gong
Associate Professor, Duke University
Verified email at duke.edu - Homepage
Title · Cited by · Year
Local Model Poisoning Attacks to Byzantine-Robust Federated Learning
M Fang, X Cao, J Jia, NZ Gong
USENIX Security Symposium, 2020
Cited by 1247 · 2020
FLTrust: Byzantine-robust Federated Learning via Trust Bootstrapping
X Cao, M Fang, J Liu, NZ Gong
ISOC Network and Distributed System Security Symposium (NDSS), 2021
Cited by 632 · 2021
Stealing Hyperparameters in Machine Learning
B Wang, NZ Gong
IEEE Symposium on Security and Privacy, 2018
Cited by 632 · 2018
MemGuard: Defending against Black-Box Membership Inference Attacks via Adversarial Examples
J Jia, A Salem, M Backes, Y Zhang, NZ Gong
ACM Conference on Computer and Communications Security (CCS), 2019
Cited by 437 · 2019
On the Feasibility of Internet-Scale Author Identification
A Narayanan, H Paskov, NZ Gong, J Bethencourt, E Stefanov, ECR Shin, ...
IEEE Symposium on Security and Privacy, 2012
Cited by 401 · 2012
Joint link prediction and attribute inference using a social-attribute network
NZ Gong, A Talwalkar, L Mackey, L Huang, ECR Shin, E Stefanov, ER Shi, ...
ACM Transactions on Intelligent Systems and Technology (TIST) 5 (2), 27, 2014
Cited by 329* · 2014
Evolution of Social-Attribute Networks: Measurements, Modeling, and Implications using Google+
NZ Gong, W Xu, L Huang, P Mittal, E Stefanov, V Sekar, D Song
ACM Internet Measurement Conference (IMC), 2012
Cited by 271 · 2012
Mitigating Evasion Attacks to Deep Neural Networks via Region-based Classification
X Cao, NZ Gong
Annual Computer Security Applications Conference (ACSAC), 2017
Cited by 258 · 2017
Poisoning Attacks to Graph-Based Recommender Systems
M Fang, G Yang, NZ Gong, J Liu
Annual Computer Security Applications Conference (ACSAC), 2018
Cited by 246 · 2018
TrustLLM: Trustworthiness in Large Language Models
L Sun, Y Huang, H Wang, S Wu, Q Zhang, C Gao, Y Huang, W Lyu, ...
International Conference on Machine Learning (ICML), 2024
Cited by 244* · 2024
SybilBelief: A Semi-supervised Learning Approach for Structure-based Sybil Detection
NZ Gong, M Frank, P Mittal
IEEE Transactions on Information Forensics and Security 9 (6), 2014
Cited by 231 · 2014
Backdoor Attacks to Graph Neural Networks
Z Zhang, J Jia, B Wang, NZ Gong
ACM Symposium on Access Control Models and Technologies (SACMAT), 2021
Cited by 230 · 2021
PromptBench: Towards Evaluating the Robustness of Large Language Models on Adversarial Prompts
K Zhu, J Wang, J Zhou, Z Wang, H Chen, Y Wang, L Yang, W Ye, ...
arXiv preprint arXiv:2306.04528, 2023
Cited by 226 · 2023
FLDetector: Defending Federated Learning Against Model Poisoning Attacks via Detecting Malicious Clients
Z Zhang, X Cao, J Jia, NZ Gong
ACM SIGKDD Conference on Knowledge Discovery and Data Mining (KDD), 2022
Cited by 216 · 2022
AttriGuard: A Practical Defense Against Attribute Inference Attacks via Adversarial Machine Learning
J Jia, NZ Gong
USENIX Security Symposium, 2018
Cited by 206 · 2018
FLCert: Provably Secure Federated Learning against Poisoning Attacks
X Cao, Z Zhang, J Jia, NZ Gong
IEEE Transactions on Information Forensics and Security, 2022
Cited by 196* · 2022
BadEncoder: Backdoor Attacks to Pre-trained Encoders in Self-Supervised Learning
J Jia, Y Liu, NZ Gong
IEEE Symposium on Security and Privacy, 2022
Cited by 188 · 2022
Stealing Links from Graph Neural Networks
X He, J Jia, M Backes, NZ Gong, Y Zhang
USENIX Security Symposium, 2021
Cited by 176 · 2021
You Are Who You Know and How You Behave: Attribute Inference Attacks via Users' Social Friends and Behaviors
NZ Gong, B Liu
USENIX Security Symposium, 2016
Cited by 175 · 2016
Influence Function based Data Poisoning Attacks to Top-N Recommender Systems
M Fang, NZ Gong, J Liu
The Web Conference (WWW), 2020
Cited by 171 · 2020
Articles 1–20