| Title | Authors | Venue | Cited by | Year |
|---|---|---|---|---|
| Memory-efficient fine-tuning of compressed large language models via sub-4-bit integer quantization | J Kim, JH Lee, S Kim, J Park, KM Yoo, SJ Kwon, D Lee | Advances in Neural Information Processing Systems 36 | 94 | 2024 |
| FlexRound: Learnable rounding based on element-wise division for post-training quantization | JH Lee, J Kim, SJ Kwon, D Lee | International Conference on Machine Learning, 18913-18939 | 30 | 2023 |
| Cluster-promoting quantization with bit-drop for minimizing network quantization loss | JH Lee, J Yun, SJ Hwang, E Yang | Proceedings of the IEEE/CVF International Conference on Computer Vision … | 14 | 2021 |
| HyperCLOVA X Technical Report | KM Yoo, J Han, S In, H Jeon, J Jeong, J Kang, H Kim, KM Kim, M Kim, ... | arXiv preprint arXiv:2404.01954 | 6 | 2024 |
| Compressed sensing via measurement-conditional generative models | KS Kim, JH Lee, E Yang | IEEE Access 9, 155335-155352 | 4 | 2021 |
| LRQ: Optimizing post-training quantization for large language models by learning low-rank weight-scaling matrices | JH Lee, J Kim, JY Yang, SJ Kwon, E Yang, KM Yoo, D Lee | arXiv preprint arXiv:2407.11534 | 3 | 2024 |
| Label-noise robust diffusion models | B Na, Y Kim, HS Bae, JH Lee, SJ Kwon, W Kang, IC Moon | arXiv preprint arXiv:2402.17517 | 3 | 2024 |
| Token-supervised value models for enhancing mathematical reasoning capabilities of large language models | JH Lee, JY Yang, B Heo, D Han, KM Yoo | arXiv preprint arXiv:2407.12863 | 1 | 2024 |