| Title | Authors | Venue | Cited by | Year |
|-------|---------|-------|----------|------|
| CrossFit: A Few-shot Learning Challenge for Cross-task Generalization in NLP | Q Ye, BY Lin, X Ren | EMNLP 2021 | 126 | 2021 |
| Refining Language Models with Compositional Explanations | H Yao, Y Chen, Q Ye, X Jin, X Ren | NeurIPS 2021 | 39* | 2021 |
| Learning from Explanations with Neural Execution Tree | Z Wang, Y Qin, W Zhou, J Yan, Q Ye, L Neves, Z Liu, X Ren | ICLR 2020 | 38* | 2019 |
| Learning to Generate Task-Specific Adapters from Task Description | Q Ye, X Ren | ACL-IJCNLP 2021 (Short Paper) | 25* | 2021 |
| Teaching Machine Comprehension with Compositional Explanations | Q Ye, X Huang, E Boschee, X Ren | Findings of EMNLP 2020 | 24 | 2020 |
| Looking Beyond Label Noise: Shifted Label Distribution Matters in Distantly Supervised Relation Extraction | Q Ye, L Liu, M Zhang, X Ren | EMNLP-IJCNLP 2019 | 21 | 2019 |
| Semi-automated Protocol Disambiguation and Code Generation | J Yen, T Lévai, Q Ye, X Ren, R Govindan, B Raghavan | SIGCOMM 2021, pp. 272-286 | 18 | 2021 |
| LEAN-LIFE: A Label-Efficient Annotation Framework Towards Learning from Explanation | DH Lee, R Khanna, BY Lin, J Chen, S Lee, Q Ye, E Boschee, L Neves, ... | ACL 2020 (Demo Track) | 17 | 2020 |
| On the Influence of Masking Policies in Intermediate Pre-training | Q Ye, BZ Li, S Wang, B Bolte, H Ma, W Yih, X Ren, M Khabsa | EMNLP 2021 | 12 | 2021 |
| Studying Strategically: Learning to Mask for Closed-book QA | Q Ye, BZ Li, S Wang, B Bolte, H Ma, W Yih, X Ren, M Khabsa | arXiv preprint arXiv:2012.15856 | 10 | 2020 |
| Prompt Engineering a Prompt Engineer | Q Ye, M Axmed, R Pryzant, F Khani | arXiv preprint arXiv:2311.05661 | 7 | 2023 |
| Eliciting and Understanding Cross-Task Skills with Task-Level Mixture-of-Experts | Q Ye, J Zha, X Ren | Findings of EMNLP 2022 | 7* | 2022 |
| FiD-ICL: A Fusion-in-Decoder Approach for Efficient In-Context Learning | Q Ye, I Beltagy, ME Peters, X Ren, H Hajishirzi | ACL 2023 | 4 | 2022 |
| How Predictable Are Large Language Model Capabilities? A Case Study on BIG-bench | Q Ye, HY Fu, X Ren, R Jia | Findings of EMNLP 2023 | 3 | 2023 |
| Estimating Large Language Model Capabilities without Labeled Test Data | HY Fu, Q Ye, A Xu, X Ren, R Jia | Findings of EMNLP 2023 | 3 | 2023 |
| Sparse Distillation: Speeding Up Text Classification by Using Bigger Student Models | Q Ye, M Khabsa, M Lewis, S Wang, X Ren, A Jaech | NAACL 2022 | 2 | 2021 |
| LLM-driven Instruction Following: Progresses and Concerns | W Yin, Q Ye, P Liu, X Ren, H Schütze | EMNLP 2023 | | 2023 |