Yichun Yin
Huawei Noah's Ark Lab
Verified email at huawei.com
Title · Cited by · Year
TinyBERT: Distilling BERT for Natural Language Understanding
X Jiao, Y Yin, L Shang, X Jiang, X Chen, L Li, F Wang, Q Liu
Findings of EMNLP 2020 (most influential paper of EMNLP 2020), 2019
Cited by 1012 · 2019
Unsupervised word and dependency path embeddings for aspect term extraction
Y Yin, F Wei, L Dong, K Xu, M Zhang, M Zhou
IJCAI 2016, 2016
Cited by 182 · 2016
TernaryBERT: Distillation-aware Ultra-low Bit BERT
W Zhang, L Hou, Y Yin, L Shang, X Chen, X Jiang, Q Liu
EMNLP 2020, 2020
Cited by 98 · 2020
Document-level multi-aspect sentiment classification as machine comprehension
Y Yin, Y Song, M Zhang
EMNLP 2017, 2044-2054, 2017
Cited by 77 · 2017
Generate & Rank: A Multi-task Framework for Math Word Problems
J Shen, Y Yin, L Li, L Shang, X Jiang, M Zhang, Q Liu
Findings of EMNLP 2021, 2021
Cited by 37 · 2021
AutoTinyBERT: Automatic Hyper-parameter Optimization for Efficient Pre-trained Language Models
Y Yin, C Chen, L Shang, X Jiang, X Chen, Q Liu
ACL 2021, 2021
Cited by 20 · 2021
Dialog State Tracking with Reinforced Data Augmentation
Y Yin, L Shang, X Jiang, X Chen, Q Liu
AAAI 2020, 2019
Cited by 18 · 2019
NNEMBs at SemEval-2017 Task 4: Neural Twitter sentiment classification: a simple ensemble method with different embeddings
Y Yin, Y Song, M Zhang
Proceedings of the 11th International Workshop on Semantic Evaluation …, 2017
Cited by 18 · 2017
Socialized Word Embeddings
Z Zeng, Y Yin, Y Song, M Zhang
IJCAI, 3915-3921, 2017
Cited by 15 · 2017
Splusplus: a feature-rich two-stage classifier for sentiment analysis of tweets
L Dong, F Wei, Y Yin, M Zhou, K Xu
Proceedings of the 9th International Workshop on Semantic Evaluation …, 2015
Cited by 14 · 2015
bert2BERT: Towards Reusable Pretrained Language Models
C Chen, Y Yin, L Shang, X Jiang, Y Qin, F Wang, Z Wang, X Chen, Z Liu, ...
ACL 2022, 2021
Cited by 11 · 2021
PoD: Positional Dependency-Based Word Embedding for Aspect Term Extraction
Y Yin, C Wang, M Zhang
COLING 2020, 2019
Cited by 9 · 2019
LightMBERT: A simple yet effective method for multilingual BERT distillation
X Jiao, Y Yin, L Shang, X Jiang, X Chen, L Li, F Wang, Q Liu
arXiv preprint arXiv:2103.06418, 2021
Cited by 7 · 2021
Improving Task-Agnostic BERT Distillation with Layer Mapping Search
X Jiao, H Chang, Y Yin, L Shang, X Jiang, X Chen, L Li, F Wang, Q Liu
Neurocomputing 2021, 2020
Cited by 7 · 2020
Extract then Distill: Efficient and Effective Task-Agnostic BERT Distillation
C Chen, Y Yin, L Shang, Z Wang, X Jiang, X Chen, Q Liu
ICANN 2021, 2021
Cited by 5 · 2021
Text processing model training method, and text processing method and apparatus
Y Yin, L Shang, X Jiang, X Chen
US Patent App. 17/682,145, 2022
Cited by 3 · 2022
G-MAP: General Memory-Augmented Pre-trained Language Model for Domain Tasks
Z Wan, Y Yin, W Zhang, J Shi, L Shang, G Chen, X Jiang, Q Liu
EMNLP 2022, 2022
Cited by 1 · 2022
Integrating regular expressions with neural networks via DFA
S Li, Q Liu, X Jiang, Y Yin, C Sun, B Liu, Z Ji, L Shang
arXiv preprint arXiv:2109.02882, 2021
Cited by 1 · 2021
More Chinese women needed to hold up half the computing sky
M Zhang, Y Yin
Proceedings of the ACM Turing Celebration Conference-China, 1-4, 2019
Cited by 1 · 2019
FPT: Improving Prompt Tuning Efficiency via Progressive Training
Y Huang, Y Qin, H Wang, Y Yin, M Sun, Z Liu, Q Liu
Findings of EMNLP 2022, 2022
2022
Articles 1–20