Ido Nachum
Postdoctoral Researcher, École polytechnique fédérale de Lausanne (EPFL)
Verified email at epfl.ch - Homepage
Title · Cited by · Year
Learners that use little information
R Bassily, S Moran, I Nachum, J Shafer, A Yehudayoff
Algorithmic Learning Theory, 25-55, 2018
104 · 2018
A direct sum result for the information complexity of learning
I Nachum, J Shafer, A Yehudayoff
Conference On Learning Theory, 1547-1568, 2018
18 · 2018
Average-case information complexity of learning
I Nachum, A Yehudayoff
Algorithmic Learning Theory, 633-646, 2019
13 · 2019
Finite Littlestone dimension implies finite information complexity
A Pradeep, I Nachum, M Gastpar
2022 IEEE International Symposium on Information Theory (ISIT), 3055-3060, 2022
9 · 2022
A Johnson–Lindenstrauss Framework for Randomly Initialized CNNs
I Nachum, J Hązła, M Gastpar, A Khina
ICLR 2022, 2021
8 · 2021
Almost-Reed–Muller codes achieve constant rates for random errors
E Abbe, J Hązła, I Nachum
IEEE Transactions on Information Theory 67 (12), 8034-8050, 2021
7 · 2021
On symmetry and initialization for neural networks
I Nachum, A Yehudayoff
LATIN 2020: Theoretical Informatics: 14th Latin American Symposium, São …, 2020
7 · 2020
Fantastic generalization measures are nowhere to be found
M Gastpar, I Nachum, J Shafer, T Weinberger
arXiv preprint arXiv:2309.13658, 2023
6 · 2023
Learners that leak little information
R Bassily, S Moran, I Nachum, J Shafer, A Yehudayoff
arXiv preprint arXiv:1710.05233, 2017
3 · 2017
Regularization by Misclassification in ReLU Neural Networks
E Cornacchia, J Hązła, I Nachum, A Yehudayoff
arXiv preprint arXiv:2111.02154, 2021
1 · 2021
On the perceptron’s compression
S Moran, I Nachum, I Panasoff, A Yehudayoff
Beyond the Horizon of Computability: 16th Conference on Computability in …, 2020
1 · 2020
LINX
M Bondaschi, MB Dogan, AR Esposito, F Faille, C Feng, MC Gastpar, ...
Articles 1–12