Yuan Yao
Postdoc Research Fellow, National University of Singapore
Verified email at mails.tsinghua.edu.cn - Homepage
Title · Cited by · Year
FewRel: A large-scale supervised few-shot relation classification dataset with state-of-the-art evaluation
X Han, H Zhu, P Yu, Z Wang, Y Yao, Z Liu, M Sun
Proceedings of the 2018 Conference on Empirical Methods in Natural Language …, 2018
Cited by: 603
Pre-trained models: Past, present and future
X Han, Z Zhang, N Ding, Y Gu, X Liu, Y Huo, J Qiu, Y Yao, A Zhang, ...
AI Open 2, 225-250, 2021
Cited by: 553
DocRED: A large-scale document-level relation extraction dataset
Y Yao*, D Ye*, P Li, X Han, Y Lin, Z Liu, Z Liu, L Huang, J Zhou, M Sun
Proceedings of the 57th Annual Meeting of the Association for Computational …, 2019
Cited by: 466
CPT: Colorful prompt tuning for pre-trained vision-language models
Y Yao*, A Zhang*, Z Zhang, Z Liu, TS Chua, M Sun
arXiv preprint arXiv:2109.11797, 2021
Cited by: 182
OpenNRE: An open and extensible toolkit for neural relation extraction
X Han, T Gao, Y Yao, D Ye, Z Liu, M Sun
Proceedings of the 2019 Conference on Empirical Methods in Natural Language …, 2019
Cited by: 169
Onion: A simple and effective defense against textual backdoor attacks
F Qi, Y Chen, M Li, Y Yao, Z Liu, M Sun
Proceedings of the 2021 Conference on Empirical Methods in Natural Language …, 2020
Cited by: 145
Turn the combination lock: Learnable textual backdoor attacks via word substitution
F Qi*, Y Yao*, S Xu*, Z Liu, M Sun
Proceedings of the 59th Annual Meeting of the Association for Computational …, 2021
Cited by: 93
A deep-learning system bridging molecule structure and biomedical text with comprehension comparable to human professionals
Z Zeng*, Y Yao*, Z Liu, M Sun
Nature communications 13 (1), 1-11, 2022
Cited by: 74
CPM-2: Large-scale cost-effective pre-trained language models
Z Zhang, Y Gu, X Han, S Chen, C Xiao, Z Sun, Y Yao, F Qi, J Guan, P Ke, ...
AI Open 2, 216-224, 2021
Cited by: 72
Open relation extraction: Relational knowledge transfer from supervised data to unsupervised data
R Wu*, Y Yao*, X Han, R Xie, Z Liu, F Lin, L Lin, M Sun
Proceedings of the 2019 Conference on Empirical Methods in Natural Language …, 2019
Cited by: 63
Fine-Grained Scene Graph Generation with Data Transfer
A Zhang*, Y Yao*, Q Chen, W Ji, Z Liu, M Sun, TS Chua
Proceedings of the 2022 European Conference on Computer Vision, 2022
Cited by: 54
Kola: Carefully benchmarking world knowledge of large language models
J Yu, X Wang, S Tu, S Cao, D Zhang-Li, X Lv, H Peng, Z Yao, X Zhang, ...
arXiv preprint arXiv:2306.09296, 2023
Cited by: 46
Knowledge transfer via pre-training for recommendation: A review and prospect
Z Zeng, C Xiao, Y Yao, R Xie, Z Liu, F Lin, L Lin, M Sun
Frontiers in big Data 4, 602071, 2021
Cited by: 39
Visual distant supervision for scene graph generation
Y Yao*, A Zhang*, X Han, M Li, C Weber, Z Liu, S Wermter, M Sun
Proceedings of the IEEE/CVF International Conference on Computer Vision …, 2021
Cited by: 38
PEVL: Position-enhanced Pre-training and Prompt Tuning for Vision-language Models
Y Yao*, Q Chen*, A Zhang, W Ji, Z Liu, TS Chua, M Sun
Proceedings of the 2022 Conference on Empirical Methods in Natural Language …, 2022
Cited by: 37
Meta-Information Guided Meta-Learning for Few-Shot Relation Classification
B Dong*, Y Yao*, R Xie, T Gao, X Han, Z Liu, F Lin, L Lin, M Sun
Proceedings of the 28th International Conference on Computational …, 2020
Cited by: 36
Transfer visual prompt generator across llms
A Zhang, H Fei, Y Yao, W Ji, L Li, Z Liu, TS Chua
NeurIPS 2023, 2023
Cited by: 33
Denoising relation extraction from document-level distant supervision
C Xiao, Y Yao, R Xie, X Han, Z Liu, M Sun, F Lin, L Lin
Proceedings of the 2020 Conference on Empirical Methods in Natural Language …, 2020
Cited by: 33
Uprec: User-aware pre-training for recommender systems
C Xiao, R Xie, Y Yao, Z Liu, M Sun, X Zhang, L Lin
arXiv preprint arXiv:2102.10989, 2021
Cited by: 32
Open hierarchical relation extraction
K Zhang*, Y Yao*, R Xie, X Han, Z Liu, F Lin, L Lin, M Sun
Proceedings of the 2021 Conference of the North American Chapter of the …, 2021
Cited by: 27