Hongkun Yu
Verified email at google.com
Title
Cited by
Year
Scaling instruction-finetuned language models
HW Chung, L Hou, S Longpre, B Zoph, Y Tay, W Fedus, Y Li, X Wang, ...
Journal of Machine Learning Research 25 (70), 1-53, 2024
1525 · 2024
Mobilebert: a compact task-agnostic bert for resource-limited devices
Z Sun, H Yu, X Song, R Liu, Y Yang, D Zhou
arXiv preprint arXiv:2004.02984, 2020
680 · 2020
Gemini: a family of highly capable multimodal models
G Team, R Anil, S Borgeaud, Y Wu, JB Alayrac, J Yu, R Soricut, ...
arXiv preprint arXiv:2312.11805, 2023
419 · 2023
Large language models can self-improve
J Huang, SS Gu, L Hou, Y Wu, X Wang, H Yu, J Han
arXiv preprint arXiv:2210.11610, 2022
246 · 2022
Scaling instruction-finetuned language models
HW Chung, L Hou, S Longpre, B Zoph, Y Tay, W Fedus, E Li, X Wang, ...
arXiv preprint arXiv:2210.11416, 2022
99 · 2022
Latent factor transition for dynamic collaborative filtering
C Zhang, K Wang, H Yu, J Sun, EP Lim
Proceedings of the 2014 SIAM international conference on data mining, 452-460, 2014
87 · 2014
TensorFlow model garden
H Yu, C Chen, X Du, Y Li, A Rashwan, L Hou, P Jin, F Yang, F Liu, J Kim, ...
Model Garden for TensorFlow, 2020
82 · 2020
Scaling instruction-finetuned language models
HW Chung, L Hou, S Longpre, B Zoph, Y Tay, W Fedus, E Li, X Wang, ...
ArXiv, abs …, 2022
59 · 2022
Generating representative headlines for news stories
X Gu, Y Mao, J Han, J Liu, Y Wu, C Yu, D Finnie, H Yu, J Zhai, N Zukoski
Proceedings of The Web Conference 2020, 1773-1784, 2020
58 · 2020
Scaling instruction-finetuned language models
HW Chung, L Hou, S Longpre, B Zoph, Y Tay, W Fedus, Y Li, X Wang, ...
arXiv preprint arXiv:2210.11416, 2022
45 · 2022
On the transformer growth for progressive bert training
X Gu, L Liu, H Yu, J Li, C Chen, J Han
arXiv preprint arXiv:2010.12562, 2020
43 · 2020
Mining multi-aspect reflection of news events in twitter: Discovery, linking and presentation
J Wang, W Tong, H Yu, M Li, X Ma, H Cai, T Hanratty, J Han
2015 IEEE International Conference on Data Mining, 429-438, 2015
40 · 2015
Are features equally representative? A feature-centric recommendation
C Zhang, K Wang, E Lim, Q Xu, J Sun, H Yu
Proceedings of the AAAI Conference on Artificial Intelligence 29 (1), 2015
24 · 2015
Flan-moe: Scaling instruction-finetuned language models with sparse mixture of experts
S Shen, L Hou, Y Zhou, N Du, S Longpre, J Wei, HW Chung, B Zoph, ...
arXiv e-prints, arXiv:2305.14705, 2023
19 · 2023
Data-driven contextual valence shifter quantification for multi-theme sentiment analysis
H Yu, J Shang, M Hsu, M Castellanos, J Han
Proceedings of the 25th ACM international on conference on information and …, 2016
19 · 2016
Mixture-of-experts meets instruction tuning: A winning combination for large language models
S Shen, L Hou, Y Zhou, N Du, S Longpre, J Wei, HW Chung, B Zoph, ...
arXiv preprint arXiv:2305.14705, 2023
18 · 2023
Enct5: Fine-tuning t5 encoder for non-autoregressive tasks
F Liu, S Shakeri, H Yu, J Li
arXiv preprint arXiv:2110.08426, 2021
16 · 2021
Mobilebert: Task-agnostic compression of bert by progressive knowledge transfer
Z Sun, H Yu, X Song, R Liu, Y Yang, D Zhou
16 · 2019
EKNOT: Event knowledge from news and opinions in Twitter
M Li, J Wang, W Tong, H Yu, X Ma, Y Chen, H Cai, J Han
Proceedings of the AAAI Conference on Artificial Intelligence 30 (1), 2016
14 · 2016
TensorFlow model garden
H Yu, C Chen, X Du, Y Li, A Rashwan, L Hou, P Jin, F Yang, F Liu, J Kim, ...
URL https://github.com/tensorflow/models, 2020
12 · 2020
Articles 1–20