Dan Hendrycks
Director of the Center for AI Safety
Verified email at berkeley.edu - Homepage
Title · Cited by · Year
Gaussian Error Linear Units (GELUs)
D Hendrycks, K Gimpel
arXiv preprint arXiv:1606.08415, 2016
Cited by 4907 · 2016
Benchmarking Neural Network Robustness to Common Corruptions and Perturbations
D Hendrycks, T Dietterich
International Conference on Learning Representations (ICLR), 2019
Cited by 3245 · 2019
A Baseline for Detecting Misclassified and Out-of-Distribution Examples in Neural Networks
D Hendrycks, K Gimpel
International Conference on Learning Representations (ICLR), 2017
Cited by 3172 · 2017
Deep Anomaly Detection with Outlier Exposure
D Hendrycks, M Mazeika, T Dietterich
International Conference on Learning Representations (ICLR), 2019
Cited by 1457 · 2019
AugMix: A Simple Data Processing Method to Improve Robustness and Uncertainty
D Hendrycks, N Mu, ED Cubuk, B Zoph, J Gilmer, B Lakshminarayanan
International Conference on Learning Representations (ICLR), 2020
Cited by 1310 · 2020
The Many Faces of Robustness: A Critical Analysis of Out-of-Distribution Generalization
D Hendrycks, S Basart, N Mu, S Kadavath, F Wang, E Dorundo, R Desai, ...
International Conference on Computer Vision (ICCV), 2020
Cited by 1234 · 2020
Natural Adversarial Examples
D Hendrycks, K Zhao, S Basart, J Steinhardt, D Song
Conference on Computer Vision and Pattern Recognition (CVPR), 2019
Cited by 1161 · 2019
Measuring Massive Multitask Language Understanding
D Hendrycks, C Burns, S Basart, A Zou, M Mazeika, D Song, J Steinhardt
International Conference on Learning Representations (ICLR), 2020
Cited by 1080 · 2020
Using Self-Supervised Learning Can Improve Model Robustness and Uncertainty
D Hendrycks, M Mazeika, S Kadavath, D Song
Neural Information Processing Systems (NeurIPS), 2019
Cited by 955 · 2019
Using Pre-training Can Improve Model Robustness and Uncertainty
D Hendrycks, K Lee, M Mazeika
International Conference on Machine Learning (ICML), 2712-2721, 2019
Cited by 746 · 2019
Beyond the imitation game: Quantifying and extrapolating the capabilities of language models
A Srivastava, A Rastogi, A Rao, AAM Shoeb, A Abid, A Fisch, AR Brown, ...
arXiv preprint arXiv:2206.04615, 2022
Cited by 724 · 2022
Using Trusted Data to Train Deep Networks on Labels Corrupted by Severe Noise
D Hendrycks, M Mazeika, D Wilson, K Gimpel
Neural Information Processing Systems (NeurIPS), 2018
Cited by 591 · 2018
Measuring Mathematical Problem Solving With the MATH Dataset
D Hendrycks, C Burns, S Kadavath, A Arora, S Basart, E Tang, D Song, ...
Neural Information Processing Systems (NeurIPS), 2021
Cited by 479 · 2021
Pretrained Transformers Improve Out-of-Distribution Robustness
D Hendrycks, X Liu, E Wallace, A Dziedzic, R Krishnan, D Song
Association for Computational Linguistics (ACL), 2020
Cited by 406 · 2020
Scaling Out-of-Distribution Detection for Real-World Settings
D Hendrycks, S Basart, M Mazeika, M Mostajabi, J Steinhardt, D Song
International Conference on Machine Learning (ICML), 2022
Cited by 362* · 2022
Early Methods for Detecting Adversarial Images
D Hendrycks, K Gimpel
International Conference on Learning Representations (ICLR) Workshop, 2017
Cited by 303 · 2017
Measuring Coding Challenge Competence With APPS
D Hendrycks, S Basart, S Kadavath, M Mazeika, A Arora, E Guo, C Burns, ...
Neural Information Processing Systems (NeurIPS), 2021
Cited by 291 · 2021
Aligning AI With Shared Human Values
D Hendrycks, C Burns, S Basart, A Critch, J Li, D Song, J Steinhardt
International Conference on Learning Representations (ICLR), 2020
Cited by 252 · 2020
Unsolved Problems in ML Safety
D Hendrycks, N Carlini, J Schulman, J Steinhardt
arXiv preprint arXiv:2109.13916, 2021
Cited by 230 · 2021
Testing robustness against unforeseen adversaries
D Kang, Y Sun, D Hendrycks, T Brown, J Steinhardt
Cited by 178* · 2019
Articles 1–20