Dan Hendrycks
PhD Student, UC Berkeley
Verified email at berkeley.edu - Homepage
Title
Cited by
Year
Gaussian Error Linear Units (GELUs)
D Hendrycks, K Gimpel
arXiv preprint arXiv:1606.08415, 2016
Cited by 1008*, 2016
A Baseline for Detecting Misclassified and Out-of-Distribution Examples in Neural Networks
D Hendrycks, K Gimpel
International Conference on Learning Representations (ICLR), 2017
Cited by 505, 2017
Benchmarking Neural Network Robustness to Common Corruptions and Perturbations
D Hendrycks, T Dietterich
International Conference on Learning Representations (ICLR), 2019
Cited by 346*, 2019
Deep Anomaly Detection with Outlier Exposure
D Hendrycks, M Mazeika, T Dietterich
International Conference on Learning Representations (ICLR), 2019
Cited by 186, 2019
Using Trusted Data to Train Deep Networks on Labels Corrupted by Severe Noise
D Hendrycks, M Mazeika, D Wilson, K Gimpel
Neural Information Processing Systems (NeurIPS), 2018
Cited by 130, 2018
Early Methods for Detecting Adversarial Images
D Hendrycks, K Gimpel
International Conference on Learning Representations (ICLR) Workshop, 2017
Cited by 126, 2017
Using Self-Supervised Learning Can Improve Model Robustness and Uncertainty
D Hendrycks, M Mazeika, S Kadavath, D Song
Neural Information Processing Systems (NeurIPS), 2019
Cited by 92, 2019
Using pre-training can improve model robustness and uncertainty
D Hendrycks, K Lee, M Mazeika
arXiv preprint arXiv:1901.09960, 2019
Cited by 83, 2019
AugMix: A Simple Data Processing Method to Improve Robustness and Uncertainty
D Hendrycks, N Mu, ED Cubuk, B Zoph, J Gilmer, B Lakshminarayanan
International Conference on Learning Representations (ICLR), 2020
Cited by 67, 2020
Natural adversarial examples
D Hendrycks, K Zhao, S Basart, J Steinhardt, D Song
arXiv preprint arXiv:1907.07174, 2019
Cited by 62, 2019
Testing robustness against unforeseen adversaries
D Kang, Y Sun, D Hendrycks, T Brown, J Steinhardt
arXiv preprint arXiv:1908.08016, 2019
Cited by 40*, 2019
Open Category Detection with PAC Guarantees
S Liu, R Garrepalli, TG Dietterich, A Fern, D Hendrycks
International Conference on Machine Learning (ICML), 2018
Cited by 25, 2018
Pretrained transformers improve out-of-distribution robustness
D Hendrycks, X Liu, E Wallace, A Dziedzic, R Krishnan, D Song
arXiv preprint arXiv:2004.06100, 2020
Cited by 15, 2020
Adjusting for Dropout Variance in Batch Normalization and Weight Initialization
D Hendrycks, K Gimpel
arXiv preprint arXiv:1607.02488, 2016
Cited by 14*, 2016
The many faces of robustness: A critical analysis of out-of-distribution generalization
D Hendrycks, S Basart, N Mu, S Kadavath, F Wang, E Dorundo, R Desai, ...
arXiv preprint arXiv:2006.16241, 2020
Cited by 7, 2020
A benchmark for anomaly segmentation
D Hendrycks, S Basart, M Mazeika, M Mostajabi, J Steinhardt, D Song
arXiv preprint arXiv:1911.11132, 2019
Cited by 7, 2019
A Discussion of 'Adversarial Examples Are Not Bugs, They Are Features': Adversarial Example Researchers Need to Expand What is Meant by 'Robustness'
J Gilmer, D Hendrycks
Distill 4 (8), e00019.1, 2019
Cited by 5, 2019
Measuring Massive Multitask Language Understanding
D Hendrycks, C Burns, S Basart, A Zou, M Mazeika, D Song, J Steinhardt
arXiv preprint arXiv:2009.03300, 2020
2020
Aligning AI With Shared Human Values
D Hendrycks, C Burns, S Basart, A Critch, J Li, D Song, J Steinhardt
arXiv preprint arXiv:2008.02275, 2020
2020
Using Self-Supervised Learning Can Improve Model Robustness and Uncertainty: Supplementary Materials
D Hendrycks, M Mazeika, S Kadavath, D Song
Articles 1–20