Haokun Liu
Verified email at cs.unc.edu - Homepage
Title
Cited by
Year
Intermediate-task transfer learning with pretrained models for natural language understanding: When and why does it work?
Y Pruksachatkun, J Phang, H Liu, PM Htut, X Zhang, RY Pang, C Vania, ...
arXiv preprint arXiv:2005.00628, 2020
Cited by 133 · 2020
BLiMP: The benchmark of linguistic minimal pairs for English
A Warstadt, A Parrish, H Liu, A Mohananey, W Peng, SF Wang, ...
Transactions of the Association for Computational Linguistics 8, 377-392, 2020
Cited by 115 · 2020
Investigating BERT's knowledge of language: five analysis methods with NPIs
A Warstadt, Y Cao, I Grosu, W Peng, H Blix, Y Nie, A Alsop, S Bordia, ...
arXiv preprint arXiv:1909.02597, 2019
Cited by 72 · 2019
jiant: A software toolkit for research on general-purpose text understanding models
Y Pruksachatkun, P Yeres, H Liu, J Phang, PM Htut, A Wang, I Tenney, ...
arXiv preprint arXiv:2003.02249, 2020
Cited by 65* · 2020
English intermediate-task training improves zero-shot cross-lingual transfer too
J Phang, I Calixto, PM Htut, Y Pruksachatkun, H Liu, C Vania, K Kann, ...
arXiv preprint arXiv:2005.13013, 2020
Cited by 50 · 2020
Learning which features matter: RoBERTa acquires a preference for linguistic generalizations (eventually)
A Warstadt, Y Zhang, HS Li, H Liu, SR Bowman
arXiv preprint arXiv:2010.05358, 2020
Cited by 36 · 2020
Counterfactually-augmented SNLI training data does not yield better generalization than unaugmented data
W Huang, H Liu, SR Bowman
arXiv preprint arXiv:2010.04762, 2020
Cited by 17 · 2020
Comparing test sets with item response theory
C Vania, PM Htut, W Huang, D Mungra, RY Pang, J Phang, H Liu, K Cho, ...
arXiv preprint arXiv:2106.00840, 2021
Cited by 12 · 2021
Precise task formalization matters in Winograd schema evaluations
H Liu, W Huang, DA Mungra, SR Bowman
arXiv preprint arXiv:2010.04043, 2020
Cited by 9 · 2020
MEMD: A diversity-promoting learning framework for short-text conversation
M Zou, X Li, H Liu, ZH Deng
Proceedings of the 27th International Conference on Computational …, 2018
Cited by 4 · 2018
Retrieving Relevant and Diverse Image from Social Media Images.
X Chen, H Liu, ZH Deng, Y Yang
MediaEval, 2015
Cited by 3 · 2015
Fine-tuned transformers show clusters of similar representations across layers
J Phang, H Liu, SR Bowman
arXiv preprint arXiv:2109.08406, 2021
Cited by 2 · 2021
Articles 1–12