Jongmin Lee
Verified email at berkeley.edu - Homepage
Title
Cited by
Year
OptiDICE: Offline Policy Optimization via Stationary Distribution Correction Estimation
J Lee, W Jeon, BJ Lee, J Pineau, KE Kim
ICML, 2021
Cited by 69
Monte-Carlo Tree Search for Constrained POMDPs
J Lee, GH Kim, P Poupart, KE Kim
NeurIPS, 2018
Cited by 64
Multi-view automatic lip-reading using neural network
D Lee, J Lee, KE Kim
Computer Vision–ACCV 2016 Workshops: ACCV 2016 International Workshops …, 2017
Cited by 63
DemoDICE: Offline Imitation Learning with Supplementary Imperfect Demonstrations
GH Kim, S Seo, J Lee, W Jeon, HJ Hwang, H Yang, KE Kim
International Conference on Learning Representations (ICLR), 2022
Cited by 56
Representation balancing offline model-based reinforcement learning
BJ Lee, J Lee, KE Kim
International Conference on Learning Representations (ICLR), 2021
Cited by 46
GPT-Critic: Offline Reinforcement Learning for End-to-End Task-Oriented Dialogue Systems
Y Jang, J Lee, KE Kim
International Conference on Learning Representations (ICLR), 2022
Cited by 34
COptiDICE: Offline Constrained Reinforcement Learning via Stationary Distribution Correction Estimation
J Lee, C Paduraru, DJ Mankowitz, N Heess, D Precup, KE Kim, A Guez
International Conference on Learning Representations (ICLR), 2022
Cited by 24
Bayes-Adaptive Monte-Carlo Planning and Learning for Goal-Oriented Dialogues
Y Jang, J Lee, KE Kim
AAAI, 2020
Cited by 18
Monte-Carlo Tree Search in Continuous Action Spaces with Value Gradients
J Lee, W Jeon, GH Kim, KE Kim
AAAI, 2020
Cited by 17
Hierarchically-partitioned Gaussian Process Approximation
BJ Lee, J Lee, KE Kim
Artificial Intelligence and Statistics (AISTATS), 822-831, 2017
Cited by 16
Reinforcement Learning for Control with Multiple Frequencies
J Lee, BJ Lee, KE Kim
Advances in Neural Information Processing Systems (NeurIPS) 33, 2020
Cited by 15
Batch Reinforcement Learning with Hyperparameter Gradients
BJ Lee, J Lee, P Vrancx, D Kim, KE Kim
ICML, 2020
Cited by 15
PyOpenDial: a python-based domain-independent toolkit for developing spoken dialogue systems with probabilistic rules
Y Jang, J Lee, J Park, KH Lee, P Lison, KE Kim
Proceedings of the 2019 conference on empirical methods in natural language …, 2019
Cited by 11
Constrained Bayesian Reinforcement Learning via Approximate Linear Programming
J Lee, Y Jang, P Poupart, KE Kim
IJCAI, 2088-2095, 2017
Cited by 10
LobsDICE: Offline Imitation Learning from Observation via Stationary Distribution Correction Estimation
GH Kim, J Lee, Y Jang, H Yang, KE Kim
Advances in Neural Information Processing Systems (NeurIPS), 2022
Cited by 8
Monte-carlo planning and learning with language action value estimates
Y Jang, S Seo, J Lee, KE Kim
International Conference on Learning Representations (ICLR), 2021
Cited by 8
Bayesian Reinforcement Learning with Behavioral Feedback
T Hong, J Lee, KE Kim, PA Ortega, DD Lee
IJCAI, 1571-1577, 2016
Cited by 3
Local Metric Learning for Off-Policy Evaluation in Contextual Bandits with Continuous Actions
H Lee, J Lee, Y Choi, W Jeon, BJ Lee, YK Noh, KE Kim
Advances in Neural Information Processing Systems (NeurIPS), 2022
Cited by 2
Trust Region Sequential Variational Inference
GH Kim, Y Jang, J Lee, W Jeon, H Yang, KE Kim
Asian Conference on Machine Learning (ACML), 1033-1048, 2019
Cited by 2
Tempo Adaption in Non-stationary Reinforcement Learning
H Lee, Y Ding, J Lee, M Jin, J Lavaei, S Sojoudi
Thirty-seventh Conference on Neural Information Processing Systems (NeurIPS), 2023
Cited by 1
Articles 1–20