Vishal Maini
DeepMind
Verified email at deepmind.com
Title · Cited by · Year
Scalable agent alignment via reward modeling: a research direction
J Leike, D Krueger, T Everitt, M Martic, V Maini, S Legg
arXiv preprint arXiv:1811.07871, 2018
Cited by 244 · 2018
Reducing sentiment bias in language models via counterfactual evaluation
PS Huang, H Zhang, R Jiang, R Stanforth, J Welbl, J Rae, V Maini, ...
arXiv preprint arXiv:1911.03064, 2019
Cited by 159 · 2019
Machine learning for humans
V Maini, S Sabri
Retrieved on May 1, 2022, 2017
Cited by 93 · 2017
Building safe artificial intelligence: specification, robustness, and assurance
PA Ortega, V Maini, DMS Team
DeepMind Safety Research Blog, 2018
Cited by 36 · 2018
the DeepMind safety team
PA Ortega, V Maini
Building safe artificial intelligence: specification, robustness, and assurance, 2018
Cited by 13 · 2018
Scalable agent alignment via reward modeling: A research direction. arXiv 2018
J Leike, D Krueger, T Everitt, M Martic, V Maini, S Legg
arXiv preprint arXiv:1811.07871, 2018
Cited by 13 · 2018
Machine learning for humans. 2017
V Maini, S Sabri
Available at: https://everythingcomputerscience.com/books/Machine%20Learning …, 2023
Cited by 11 · 2023
Scalable agent alignment via reward modeling: a research direction. arXiv
J Leike, D Krueger, T Everitt, M Martic, V Maini, S Legg
arXiv preprint arXiv:1811.07871, 2018
Cited by 8 · 2018
Articles 1–8