| Title | Authors | Venue | Cited by | Year |
|---|---|---|---|---|
| TinyBERT: Distilling BERT for Natural Language Understanding | X Jiao, Y Yin, L Shang, X Jiang, X Chen, L Li, F Wang, Q Liu | Findings of EMNLP 2020 (most influential paper of EMNLP 2020) | 1012 | 2019 |
| Unsupervised word and dependency path embeddings for aspect term extraction | Y Yin, F Wei, L Dong, K Xu, M Zhang, M Zhou | IJCAI 2016 | 182 | 2016 |
| TernaryBERT: Distillation-aware Ultra-low Bit BERT | W Zhang, L Hou, Y Yin, L Shang, X Chen, X Jiang, Q Liu | EMNLP 2020 | 98 | 2020 |
| Document-level multi-aspect sentiment classification as machine comprehension | Y Yin, Y Song, M Zhang | EMNLP 2017, 2044-2054 | 77 | 2017 |
| Generate & Rank: A Multi-task Framework for Math Word Problems | J Shen, Y Yin, L Li, L Shang, X Jiang, M Zhang, Q Liu | Findings of EMNLP 2021 | 37 | 2021 |
| AutoTinyBERT: Automatic Hyper-parameter Optimization for Efficient Pre-trained Language Models | Y Yin, C Chen, L Shang, X Jiang, X Chen, Q Liu | ACL 2021 | 20 | 2021 |
| Dialog State Tracking with Reinforced Data Augmentation | Y Yin, L Shang, X Jiang, X Chen, Q Liu | AAAI 2020 | 18 | 2019 |
| NNEMBs at SemEval-2017 Task 4: Neural Twitter Sentiment Classification: a Simple Ensemble Method with Different Embeddings | Y Yin, Y Song, M Zhang | Proceedings of the 11th International Workshop on Semantic Evaluation (SemEval-2017) | 18 | 2017 |
| Socialized Word Embeddings | Z Zeng, Y Yin, Y Song, M Zhang | IJCAI 2017, 3915-3921 | 15 | 2017 |
| Splusplus: a feature-rich two-stage classifier for sentiment analysis of tweets | L Dong, F Wei, Y Yin, M Zhou, K Xu | Proceedings of the 9th International Workshop on Semantic Evaluation (SemEval 2015) | 14 | 2015 |
| bert2BERT: Towards Reusable Pretrained Language Models | C Chen, Y Yin, L Shang, X Jiang, Y Qin, F Wang, Z Wang, X Chen, Z Liu, ... | ACL 2022 | 11 | 2021 |
| PoD: Positional Dependency-Based Word Embedding for Aspect Term Extraction | Y Yin, C Wang, M Zhang | COLING 2020 | 9 | 2019 |
| LightMBERT: A Simple Yet Effective Method for Multilingual BERT Distillation | X Jiao, Y Yin, L Shang, X Jiang, X Chen, L Li, F Wang, Q Liu | arXiv preprint arXiv:2103.06418 | 7 | 2021 |
| Improving Task-Agnostic BERT Distillation with Layer Mapping Search | X Jiao, H Chang, Y Yin, L Shang, X Jiang, X Chen, L Li, F Wang, Q Liu | Neurocomputing 2021 | 7 | 2020 |
| Extract then Distill: Efficient and Effective Task-Agnostic BERT Distillation | C Chen, Y Yin, L Shang, Z Wang, X Jiang, X Chen, Q Liu | ICANN 2021 | 5 | 2021 |
| Text processing model training method, and text processing method and apparatus | Y Yin, L Shang, X Jiang, X Chen | US Patent App. 17/682,145 | 3 | 2022 |
| G-MAP: General Memory-Augmented Pre-trained Language Model for Domain Tasks | Z Wan, Y Yin, W Zhang, J Shi, L Shang, G Chen, X Jiang, Q Liu | EMNLP 2022 | 1 | 2022 |
| Integrating regular expressions with neural networks via DFA | S Li, Q Liu, X Jiang, Y Yin, C Sun, B Liu, Z Ji, L Shang | arXiv preprint arXiv:2109.02882 | 1 | 2021 |
| More Chinese women needed to hold up half the computing sky | M Zhang, Y Yin | Proceedings of the ACM Turing Celebration Conference - China, 1-4 | 1 | 2019 |
| FPT: Improving Prompt Tuning Efficiency via Progressive Training | Y Huang, Y Qin, H Wang, Y Yin, M Sun, Z Liu, Q Liu | Findings of EMNLP 2022 | | 2022 |