Pin-Yu Chen
Research Staff Member, IBM Research AI; MIT-IBM Watson AI Lab; RPI-IBM AIRC
Verified email at ibm.com - Homepage
Title · Cited by · Year
ZOO: Zeroth Order Optimization based Black-box Attacks to Deep Neural Networks without Training Substitute Models
PY Chen*, H Zhang*, Y Sharma, J Yi, CJ Hsieh
ACM Workshop on AI and Security (*equal contribution, best paper award finalist), 2017
Cited by 1060 · 2017
EAD: Elastic-Net Attacks to Deep Neural Networks via Adversarial Examples
PY Chen*, Y Sharma*, H Zhang, J Yi, CJ Hsieh
AAAI 2018 (*equal contribution), 2017
Cited by 454 · 2017
Efficient Neural Network Robustness Certification with General Activation Functions
H Zhang, TW Weng, PY Chen, CJ Hsieh, L Daniel
NeurIPS 2018, 2018
Cited by 369 · 2018
Explanations based on the Missing: Towards Contrastive Explanations with Pertinent Negatives
A Dhurandhar*, PY Chen*, R Luss, CC Tu, P Ting, K Shanmugam, P Das
NeurIPS 2018 (*equal contribution), 2018
Cited by 302 · 2018
Evaluating the Robustness of Neural Networks: An Extreme Value Theory Approach
TW Weng, H Zhang, PY Chen, J Yi, D Su, Y Gao, CJ Hsieh, L Daniel
ICLR 2018, 2018
Cited by 274 · 2018
Is Robustness the Cost of Accuracy?--A Comprehensive Study on the Robustness of 18 Deep Image Classification Models
D Su, H Zhang, H Chen, J Yi, PY Chen, Y Gao
ECCV 2018, 2018
Cited by 270 · 2018
Query-efficient hard-label black-box attack: An optimization-based approach
M Cheng, T Le, PY Chen, J Yi, H Zhang, CJ Hsieh
ICLR 2019, 2018
Cited by 236 · 2018
AutoZOOM: Autoencoder-based Zeroth Order Optimization Method for Attacking Black-box Neural Networks
CC Tu*, P Ting*, PY Chen*, S Liu, H Zhang, J Yi, CJ Hsieh, SM Cheng
AAAI 2019 (oral presentation, *equal contribution), 2018
Cited by 229 · 2018
One Explanation Does Not Fit All: A Toolkit and Taxonomy of AI Explainability Techniques
V Arya, RKE Bellamy, PY Chen, A Dhurandhar, M Hind, SC Hoffman, ...
arXiv preprint arXiv:1909.03012, 2019
Cited by 211 · 2019
Smart attacks in smart grid communication networks
PY Chen, SM Cheng, KC Chen
IEEE Communications Magazine 50 (8), 24-29, 2012
Cited by 196 · 2012
DBA: Distributed Backdoor Attacks against Federated Learning
C Xie, K Huang, PY Chen, B Li
ICLR 2020, 2019
Cited by 176 · 2019
Topology Attack and Defense for Graph Neural Networks: An Optimization Perspective
K Xu, H Chen, S Liu, PY Chen, TW Weng, M Hong, X Lin
IJCAI 2019, 2019
Cited by 160 · 2019
Seq2Sick: Evaluating the Robustness of Sequence-to-Sequence Models with Adversarial Examples
M Cheng, J Yi, PY Chen, H Zhang, CJ Hsieh
AAAI 2020, 2018
Cited by 156 · 2018
Attacking Visual Language Grounding with Adversarial Examples: A Case Study on Neural Image Captioning
H Chen, H Zhang, PY Chen, J Yi, CJ Hsieh
ACL 2018 (Long Papers) 1, 2587-2597, 2018
Cited by 135* · 2018
On Modeling Malware Propagation in Generalized Social Networks
SM Cheng, WC Ao, PY Chen, KC Chen
IEEE Communications Letters 15 (1), 25-27, 2010
Cited by 129 · 2010
Structured Adversarial Attack: Towards General Implementation and Better Interpretability
K Xu, S Liu, P Zhao, PY Chen, H Zhang, D Erdogmus, Y Wang, X Lin
ICLR 2019, 2018
Cited by 125 · 2018
Adversarial T-shirt! Evading Person Detectors in a Physical World
K Xu, G Zhang, S Liu, Q Fan, M Sun, H Chen, PY Chen, Y Wang, X Lin
ECCV 2020 (spotlight), 2019
Cited by 117* · 2019
CNN-Cert: An Efficient Framework for Certifying Robustness of Convolutional Neural Networks
A Boopathy, TW Weng, PY Chen, S Liu, L Daniel
AAAI 2019 (oral presentation), 2018
Cited by 102 · 2018
Information Fusion to Defend Intentional Attack in Internet of Things
PY Chen, SM Cheng, KC Chen
IEEE Internet of Things Journal, 2014
Cited by 96 · 2014
Attacking the Madry Defense Model with L1-based Adversarial Examples
Y Sharma, PY Chen
ICLR 2018 Workshop, 2017
Cited by 95* · 2017
Articles 1–20