Nicolas Papernot
University of Toronto and Vector Institute
Verified email at utoronto.ca
Title · Cited by · Year
The Limitations of Deep Learning in Adversarial Settings
N Papernot, P McDaniel, S Jha, M Fredrikson, ZB Celik, A Swami
Proceedings of the 1st IEEE European Symposium on Security and Privacy, 2015
2551 · 2015
Practical black-box attacks against machine learning
N Papernot, P McDaniel, I Goodfellow, S Jha, ZB Celik, A Swami
Proceedings of the 2017 ACM on Asia conference on computer and …, 2017
2063 · 2017
Distillation as a Defense to Adversarial Perturbations against Deep Neural Networks
N Papernot, P McDaniel, X Wu, S Jha, A Swami
Proceedings of the 37th IEEE Symposium on Security and Privacy, 2015
2048 · 2015
Ensemble adversarial training: Attacks and defenses
F Tramèr, A Kurakin, N Papernot, I Goodfellow, D Boneh, P McDaniel
International Conference on Learning Representations, 2018
1495 · 2018
Transferability in machine learning: from phenomena to black-box attacks using adversarial samples
N Papernot, P McDaniel, I Goodfellow
arXiv preprint arXiv:1605.07277, 2016
1063 · 2016
Mixmatch: A holistic approach to semi-supervised learning
D Berthelot, N Carlini, I Goodfellow, N Papernot, A Oliver, C Raffel
arXiv preprint arXiv:1905.02249, 2019
800 · 2019
Adversarial examples for malware detection
K Grosse, N Papernot, P Manoharan, M Backes, P McDaniel
European symposium on research in computer security, 62-79, 2017
643* · 2017
SoK: Towards the Science of Security and Privacy in Machine Learning
N Papernot, P McDaniel, A Sinha, MP Wellman
2018 IEEE European Symposium on Security and Privacy (EuroS&P), 2018
577* · 2018
Semi-supervised knowledge transfer for deep learning from private training data
N Papernot, M Abadi, Ú Erlingsson, I Goodfellow, K Talwar
Proceedings of the 5th International Conference on Learning Representations …, 2016
510 · 2016
On the (statistical) detection of adversarial examples
K Grosse, P Manoharan, N Papernot, M Backes, P McDaniel
arXiv preprint arXiv:1702.06280, 2017
469 · 2017
Adversarial attacks on neural network policies
S Huang, N Papernot, I Goodfellow, Y Duan, P Abbeel
arXiv preprint arXiv:1702.02284, 2017
445 · 2017
Technical report on the cleverhans v2.1.0 adversarial examples library
N Papernot, F Faghri, N Carlini, I Goodfellow, R Feinman, A Kurakin, ...
arXiv preprint arXiv:1610.00768, 2016
439* · 2016
On evaluating adversarial robustness
N Carlini, A Athalye, N Papernot, W Brendel, J Rauber, D Tsipras, ...
arXiv preprint arXiv:1902.06705, 2019
390 · 2019
Practical black-box attacks against deep learning systems using adversarial examples
N Papernot, P McDaniel, I Goodfellow, S Jha, ZB Celik, A Swami
arXiv preprint arXiv:1602.02697, 2016
378 · 2016
The space of transferable adversarial examples
F Tramèr, N Papernot, I Goodfellow, D Boneh, P McDaniel
arXiv preprint arXiv:1704.03453, 2017
358 · 2017
Crafting Adversarial Input Sequences for Recurrent Neural Networks
N Papernot, P McDaniel, A Swami, R Harang
Military Communications Conference, MILCOM, 2016
297 · 2016
Scalable Private Learning with PATE
N Papernot, S Song, I Mironov, A Raghunathan, K Talwar, Ú Erlingsson
International Conference on Learning Representations, 2018
264 · 2018
Deep k-nearest neighbors: Towards confident, interpretable and robust deep learning
N Papernot, P McDaniel
arXiv preprint arXiv:1803.04765, 2018
259 · 2018
Making machine learning robust against adversarial inputs
I Goodfellow, P McDaniel, N Papernot
Communications of the ACM 61 (7), 56-66, 2018
254* · 2018
Adversarial examples that fool both computer vision and time-limited humans
GF Elsayed, S Shankar, B Cheung, N Papernot, A Kurakin, I Goodfellow, ...
arXiv preprint arXiv:1802.08195, 2018
163 · 2018
Articles 1–20