Seongyun Lee
Verified email at kaist.ac.kr
Title · Cited by · Year
Volcano: mitigating multimodal hallucination through self-feedback guided revision
S Lee, SH Park, Y Jo, M Seo
arXiv preprint arXiv:2311.07362, 2023
Cited by 29 · 2023
LIQUID: A framework for list question answering dataset generation
S Lee, H Kim, J Kang
Proceedings of the AAAI Conference on Artificial Intelligence 37 (11), 13014 …, 2023
Cited by 18 · 2023
Aligning to thousands of preferences via system message generalization
S Lee, SH Park, S Kim, M Seo
arXiv preprint arXiv:2405.17977, 2024
Cited by 12 · 2024
Prometheus-Vision: Vision-language model as a judge for fine-grained evaluation
S Lee, S Kim, SH Park, G Kim, M Seo
arXiv preprint arXiv:2401.06591, 2024
Cited by 12 · 2024
Zero-shot dense video captioning by jointly optimizing text and moment
Y Jo, S Lee, ASJ Lee, H Lee, H Oh, M Seo
arXiv preprint arXiv:2307.02682, 2023
Cited by 6 · 2023
LG AI Research & KAIST at EHRSQL 2024: Self-Training Large Language Models with Pseudo-Labeled Unanswerable Questions for a Reliable Text-to-SQL System on EHRs
Y Jo, S Lee, M Seo, SJ Hwang, M Lee
arXiv preprint arXiv:2405.11162, 2024
Cited by 2 · 2024
The BiGGen Bench: A Principled Benchmark for Fine-grained Evaluation of Language Models with Language Models
S Kim, J Suk, JY Cho, S Longpre, C Kim, D Yoon, G Son, Y Cho, ...
arXiv preprint arXiv:2406.05761, 2024
Cited by 1 · 2024
How Does Vision-Language Adaptation Impact the Safety of Vision Language Models?
S Lee, G Kim, J Kim, H Lee, H Chang, SH Park, M Seo
arXiv preprint arXiv:2410.07571, 2024
2024
Articles 1–8