Pin-Yu Chen
Principal Research Scientist, IBM Research AI; MIT-IBM Watson AI Lab; RPI-IBM AIRC
Verified email at ibm.com - Homepage
Title · Cited by · Year
ZOO: Zeroth Order Optimization based Black-box Attacks to Deep Neural Networks without Training Substitute Models
PY Chen*, H Zhang*, Y Sharma, J Yi, CJ Hsieh
ACM Workshop on AI and Security (*equal contribution, best paper award finalist), 2017
Cited by 1502 · 2017
Efficient Neural Network Robustness Certification with General Activation Functions
H Zhang, TW Weng, PY Chen, CJ Hsieh, L Daniel
NeurIPS 2018, 2018
Cited by 578 · 2018
EAD: Elastic-Net Attacks to Deep Neural Networks via Adversarial Examples
PY Chen*, Y Sharma*, H Zhang, J Yi, CJ Hsieh
AAAI 2018 (*equal contribution), 2017
Cited by 571 · 2017
Explanations based on the Missing: Towards Contrastive Explanations with Pertinent Negatives
A Dhurandhar*, PY Chen*, R Luss, CC Tu, P Ting, K Shanmugam, P Das
NeurIPS 2018 (*equal contribution), 2018
Cited by 481 · 2018
Evaluating the Robustness of Neural Networks: An Extreme Value Theory Approach
TW Weng, H Zhang, PY Chen, J Yi, D Su, Y Gao, CJ Hsieh, L Daniel
ICLR 2018, 2018
Cited by 403 · 2018
DBA: Distributed Backdoor Attacks against Federated Learning
C Xie, K Huang, PY Chen, B Li
ICLR 2020, 2019
Cited by 371 · 2019
Is Robustness the Cost of Accuracy?--A Comprehensive Study on the Robustness of 18 Deep Image Classification Models
D Su, H Zhang, H Chen, J Yi, PY Chen, Y Gao
ECCV 2018, 2018
Cited by 364 · 2018
Query-efficient hard-label black-box attack: An optimization-based approach
M Cheng, T Le, PY Chen, J Yi, H Zhang, CJ Hsieh
ICLR 2019, 2018
Cited by 346 · 2018
AutoZOOM: Autoencoder-based Zeroth Order Optimization Method for Attacking Black-box Neural Networks
CC Tu*, P Ting*, PY Chen*, S Liu, H Zhang, J Yi, CJ Hsieh, SM Cheng
AAAI 2019 (oral presentation, *equal contribution), 2018
Cited by 332 · 2018
One explanation does not fit all: A toolkit and taxonomy of AI explainability techniques
V Arya, RKE Bellamy, PY Chen, A Dhurandhar, M Hind, SC Hoffman, ...
arXiv preprint arXiv:1909.03012, 2019
Cited by 317 · 2019
Topology Attack and Defense for Graph Neural Networks: An Optimization Perspective
K Xu, H Chen, S Liu, PY Chen, TW Weng, M Hong, X Lin
IJCAI 2019, 2019
Cited by 238 · 2019
Adversarial T-shirt! Evading person detectors in a physical world
K Xu, G Zhang, S Liu, Q Fan, M Sun, H Chen, PY Chen, Y Wang, X Lin
ECCV 2020 (spotlight), 2019
Cited by 233* · 2019
Smart attacks in smart grid communication networks
PY Chen, SM Cheng, KC Chen
IEEE Communications Magazine 50 (8), 24-29, 2012
Cited by 220 · 2012
Seq2Sick: Evaluating the robustness of sequence-to-sequence models with adversarial examples
M Cheng, J Yi, PY Chen, H Zhang, CJ Hsieh
AAAI 2020, 2018
Cited by 208 · 2018
Variational Quantum Circuits for Deep Reinforcement Learning
S Yen-Chi Chen, CH Huck Yang, J Qi, PY Chen, X Ma, HS Goan
IEEE Access, 2020
Cited by 178 · 2020
Structured Adversarial Attack: Towards General Implementation and Better Interpretability
K Xu, S Liu, P Zhao, PY Chen, H Zhang, D Erdogmus, Y Wang, X Lin
ICLR 2019, 2018
Cited by 159 · 2018
Attacking visual language grounding with adversarial examples: A case study on neural image captioning
H Chen, H Zhang, PY Chen, J Yi, CJ Hsieh
ACL 2018 (Long Papers) 1, 2587-2597, 2018
Cited by 158* · 2018
Vision transformers are robust learners
S Paul*, PY Chen*
AAAI 2022 (*equal contribution), 2021
Cited by 157 · 2021
Characterizing Audio Adversarial Examples Using Temporal Dependency
Z Yang, B Li, PY Chen, D Song
ICLR 2019, 2018
Cited by 150 · 2018
Accelerated antimicrobial discovery via deep generative models and molecular dynamics simulations
P Das, T Sercu, K Wadhawan, I Padhi, S Gehrmann, F Cipcigan, ...
Nature Biomedical Engineering 5 (6), 613-623, 2021
Cited by 140* · 2021
Articles 1–20