The CoT Collection: Improving zero-shot and few-shot learning of language models via chain-of-thought fine-tuning S Kim, SJ Joo, D Kim, J Jang, S Ye, J Shin, M Seo arXiv preprint arXiv:2305.14045, 2023 | 67 | 2023 |
Mind the gap! Injecting commonsense knowledge for abstractive dialogue summarization S Kim, SJ Joo, H Chae, C Kim, S Hwang, J Yeo arXiv preprint arXiv:2209.00930, 2022 | 18 | 2022 |
CoTEVer: Chain-of-thought prompting annotation toolkit for explanation verification S Kim, SJ Joo, Y Jang, H Chae, J Yeo arXiv preprint arXiv:2303.03628, 2023 | 8 | 2023 |
How Well Do Large Language Models Truly Ground? H Lee, S Joo, C Kim, J Jang, D Kim, KW On, M Seo arXiv preprint arXiv:2311.09069, 2023 | 4 | 2023 |
Latent Action Pretraining From Videos S Ye, J Jang, B Jeon, S Joo, J Yang, B Peng, A Mandlekar, R Tan, ... arXiv preprint arXiv:2410.11758, 2024 | | 2024 |
Semiparametric Token-Sequence Co-Supervision H Lee, D Kim, J Jun, S Joo, J Jang, KW On, M Seo arXiv preprint arXiv:2403.09024, 2024 | | 2024 |
Development and implementation of a conditional gated multi-layer perceptron model for natural language processing G Son, S Kim, SJ Joo, W Cho, J Na Proceedings of the Korea Information Processing Society Conference 28 (2), 1116-1119, 2021 | | 2021 |