Bilal Piot
Google DeepMind
Verified email at google.com
Title · Cited by · Year
Bootstrap your own latent: A new approach to self-supervised learning
JB Grill, F Strub, F Altché, C Tallec, PH Richemond, E Buchatskaya, ...
arXiv preprint arXiv:2006.07733, 2020
Cited by 5707 · 2020
Rainbow: Combining improvements in deep reinforcement learning
M Hessel, J Modayil, H Van Hasselt, T Schaul, G Ostrovski, W Dabney, ...
Proceedings of the AAAI conference on artificial intelligence 32 (1), 2018
Cited by 2454 · 2018
Deep Q-learning from demonstrations
T Hester, M Vecerik, O Pietquin, M Lanctot, T Schaul, B Piot, D Horgan, ...
Proceedings of the AAAI conference on artificial intelligence 32 (1), 2018
Cited by 1150 · 2018
Leveraging demonstrations for deep reinforcement learning on robotics problems with sparse rewards
M Vecerik, T Hester, J Scholz, F Wang, O Pietquin, B Piot, N Heess, ...
arXiv preprint arXiv:1707.08817, 2017
Cited by 733 · 2017
Agent57: Outperforming the Atari human benchmark
AP Badia, B Piot, S Kapturowski, P Sprechmann, A Vitvitskyi, ZD Guo, ...
International conference on machine learning, 507-517, 2020
Cited by 586 · 2020
Never give up: Learning directed exploration strategies
AP Badia, P Sprechmann, A Vitvitskyi, D Guo, B Piot, S Kapturowski, ...
arXiv preprint arXiv:2002.06038, 2020
Cited by 312 · 2020
Acme: A research framework for distributed reinforcement learning
MW Hoffman, B Shahriari, J Aslanides, G Barth-Maron, N Momchev, ...
arXiv preprint arXiv:2006.00979, 2020
Cited by 227 · 2020
Learning from demonstrations for real world reinforcement learning
T Hester, M Vecerik, O Pietquin, M Lanctot, T Schaul, B Piot, A Sendonaris, ...
arXiv preprint arXiv:1704.03732, 2017
Cited by 175 · 2017
Mastering the game of Stratego with model-free multiagent reinforcement learning
J Perolat, B De Vylder, D Hennes, E Tarassov, F Strub, V de Boer, ...
Science 378 (6623), 990-996, 2022
Cited by 137 · 2022
Bootstrap latent-predictive representations for multitask reinforcement learning
ZD Guo, BA Pires, B Piot, JB Grill, F Altché, R Munos, MG Azar
International Conference on Machine Learning, 3875-3886, 2020
Cited by 137 · 2020
Observe and look further: Achieving consistent performance on Atari
T Pohlen, B Piot, T Hester, MG Azar, D Horgan, D Budden, G Barth-Maron, ...
arXiv preprint arXiv:1805.11593, 2018
Cited by 126 · 2018
Inverse reinforcement learning through structured classification
E Klein, M Geist, B Piot, O Pietquin
Advances in neural information processing systems 25, 2012
Cited by 119 · 2012
Approximate dynamic programming for two-player zero-sum Markov games
J Perolat, B Scherrer, B Piot, O Pietquin
International Conference on Machine Learning, 1321-1329, 2015
Cited by 112 · 2015
Bridging the gap between imitation learning and inverse reinforcement learning
B Piot, M Geist, O Pietquin
IEEE transactions on neural networks and learning systems 28 (8), 1814-1826, 2016
Cited by 98 · 2016
The Reactor: A fast and sample-efficient Actor-Critic agent for Reinforcement Learning
A Gruslys, W Dabney, MG Azar, B Piot, M Bellemare, R Munos
arXiv preprint arXiv:1704.04651, 2017
Cited by 96 · 2017
Boosted Bellman residual minimization handling expert demonstrations
B Piot, M Geist, O Pietquin
Machine Learning and Knowledge Discovery in Databases: European Conference …, 2014
Cited by 87 · 2014
Hindsight credit assignment
A Harutyunyan, W Dabney, T Mesnard, M Gheshlaghi Azar, B Piot, ...
Advances in neural information processing systems 32, 2019
Cited by 85 · 2019
Neural predictive belief representations
ZD Guo, MG Azar, B Piot, BA Pires, R Munos
arXiv preprint arXiv:1811.06407, 2018
Cited by 83 · 2018
End-to-end optimization of goal-driven and visually grounded dialogue systems
F Strub, H De Vries, J Mary, B Piot, A Courville, O Pietquin
arXiv preprint arXiv:1703.05423, 2017
Cited by 83 · 2017
BYOL works even without batch statistics
PH Richemond, JB Grill, F Altché, C Tallec, F Strub, A Brock, S Smith, ...
arXiv preprint arXiv:2010.10241, 2020
Cited by 82 · 2020
Articles 1–20