Qinyuan Ye
Verified email at usc.edu - Homepage
Title | Cited by | Year
CrossFit: A Few-shot Learning Challenge for Cross-task Generalization in NLP
Q Ye, BY Lin, X Ren
EMNLP 2021, 2021
Cited by: 128 | Year: 2021
Refining language models with compositional explanations
H Yao, Y Chen, Q Ye, X Jin, X Ren
NeurIPS 2021, 2021
Cited by: 39* | Year: 2021
Learning from Explanations with Neural Execution Tree
Z Wang, Y Qin, W Zhou, J Yan, Q Ye, L Neves, Z Liu, X Ren
ICLR 2020, 2019
Cited by: 39* | Year: 2019
Learning to Generate Task-Specific Adapters from Task Description
Q Ye, X Ren
ACL-IJCNLP 2021 (Short Paper), 2021
Cited by: 25* | Year: 2021
Teaching Machine Comprehension with Compositional Explanations
Q Ye, X Huang, E Boschee, X Ren
Findings of EMNLP 2020, 2020
Cited by: 24 | Year: 2020
Looking Beyond Label Noise: Shifted Label Distribution Matters in Distantly Supervised Relation Extraction
Q Ye, L Liu, M Zhang, X Ren
EMNLP-IJCNLP 2019, 2019
Cited by: 21 | Year: 2019
Semi-automated protocol disambiguation and code generation
J Yen, T Lévai, Q Ye, X Ren, R Govindan, B Raghavan
SIGCOMM 2021, 272-286, 2021
Cited by: 20 | Year: 2021
LEAN-LIFE: A Label-Efficient Annotation Framework Towards Learning from Explanation
DH Lee, R Khanna, BY Lin, J Chen, S Lee, Q Ye, E Boschee, L Neves, ...
ACL 2020 (Demo Track), 2020
Cited by: 17 | Year: 2020
On the Influence of Masking Policies in Intermediate Pre-training
Q Ye, BZ Li, S Wang, B Bolte, H Ma, W Yih, X Ren, M Khabsa
EMNLP 2021, 2021
Cited by: 12 | Year: 2021
Studying strategically: Learning to mask for closed-book QA
Q Ye, BZ Li, S Wang, B Bolte, H Ma, W Yih, X Ren, M Khabsa
arXiv preprint arXiv:2012.15856, 2020
Cited by: 10 | Year: 2020
Prompt engineering a prompt engineer
Q Ye, M Axmed, R Pryzant, F Khani
arXiv preprint arXiv:2311.05661, 2023
Cited by: 7 | Year: 2023
Eliciting and Understanding Cross-Task Skills with Task-Level Mixture-of-Experts
Q Ye, J Zha, X Ren
Findings of EMNLP 2022, 2022
Cited by: 7* | Year: 2022
FiD-ICL: A Fusion-in-Decoder Approach for Efficient In-Context Learning
Q Ye, I Beltagy, ME Peters, X Ren, H Hajishirzi
ACL 2023, 2022
Cited by: 4 | Year: 2022
How Predictable Are Large Language Model Capabilities? A Case Study on BIG-bench
Q Ye, HY Fu, X Ren, R Jia
Findings of EMNLP 2023, 2023
Cited by: 3 | Year: 2023
Estimating Large Language Model Capabilities without Labeled Test Data
HY Fu, Q Ye, A Xu, X Ren, R Jia
Findings of EMNLP 2023, 2023
Cited by: 3 | Year: 2023
Sparse Distillation: Speeding Up Text Classification by Using Bigger Student Models
Q Ye, M Khabsa, M Lewis, S Wang, X Ren, A Jaech
NAACL 2022, 2021
Cited by: 2 | Year: 2021
LLM-driven Instruction Following: Progresses and Concerns
W Yin, Q Ye, P Liu, X Ren, H Schütze
Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, 2023
Cited by: 1 | Year: 2023
Articles 1–17