Katharina Kann
Title | Cited by | Year
Comparative study of CNN and RNN for natural language processing
W Yin, K Kann, M Yu, H Schütze
arXiv preprint arXiv:1702.01923, 2017
Cited by 445, 2017
MED: The LMU system for the SIGMORPHON 2016 shared task on morphological reinflection
K Kann, H Schütze
Proceedings of the 14th SIGMORPHON Workshop on Computational Research in …, 2016
Cited by 82, 2016
The CoNLL–SIGMORPHON 2018 Shared Task: Universal Morphological Reinflection
R Cotterell, C Kirov, J Sylak-Glassman, G Walther, E Vylomova, ...
arXiv preprint arXiv:1810.07125, 2018
Cited by 71, 2018
Single-model encoder-decoder with explicit morphological representation for reinflection
K Kann, H Schütze
arXiv preprint arXiv:1606.00589, 2016
Cited by 61, 2016
Training data augmentation for low-resource morphological inflection
T Bergmanis, K Kann, H Schütze, S Goldwater
Proceedings of the CoNLL SIGMORPHON 2017 Shared Task: Universal …, 2017
Cited by 40, 2017
One-shot neural cross-lingual transfer for paradigm completion
K Kann, R Cotterell, H Schütze
arXiv preprint arXiv:1704.00052, 2017
Cited by 31, 2017
Neural morphological analysis: Encoding-decoding canonical segments
K Kann, R Cotterell, H Schütze
Proceedings of the 2016 Conference on Empirical Methods in Natural Language …, 2016
Cited by 26, 2016
Neural multi-source morphological reinflection
K Kann, R Cotterell, H Schütze
arXiv preprint arXiv:1612.06027, 2016
Cited by 24, 2016
The LMU system for the CoNLL-SIGMORPHON 2017 shared task on universal morphological reinflection
K Kann, H Schütze
Proceedings of the CoNLL SIGMORPHON 2017 Shared Task: Universal …, 2017
Cited by 22, 2017
jiant 1.2: A software toolkit for research on general-purpose text understanding models
A Wang, IF Tenney, Y Pruksachatkun, K Yu, J Hula, P Xia, R Pappagari, ...
Note: http://jiant.info/, 2019
Cited by 21, 2019
Comparative study of CNN and RNN for natural language processing (2017)
W Yin, K Kann, M Yu, H Schütze
arXiv preprint arXiv:1702.01923, 2017
Cited by 21, 2017
Comparative study of CNN and RNN for natural language processing. arXiv 2017
W Yin, K Kann, M Yu, H Schütze
arXiv preprint arXiv:1702.01923, 2017
Cited by 21
Sentence-Level Fluency Evaluation: References Help, But Can Be Spared!
K Kann, S Rothe, K Filippova
arXiv preprint arXiv:1809.08731, 2018
Cited by 20, 2018
Fortification of neural morphological segmentation models for polysynthetic minimal-resource languages
K Kann, M Mager, I Meza-Ruiz, H Schütze
arXiv preprint arXiv:1804.06024, 2018
Cited by 20, 2018
Verb argument structure alternations in word and sentence embeddings
K Kann, A Warstadt, A Williams, SR Bowman
arXiv preprint arXiv:1811.10773, 2018
Cited by 17, 2018
Intermediate-Task Transfer Learning with Pretrained Models for Natural Language Understanding: When and Why Does It Work?
Y Pruksachatkun, J Phang, H Liu, PM Htut, X Zhang, RY Pang, C Vania, ...
arXiv preprint arXiv:2005.00628, 2020
Cited by 14, 2020
Character-level supervision for low-resource POS tagging
K Kann, J Bjerva, I Augenstein, B Plank, A Søgaard
Proceedings of the Workshop on Deep Learning Approaches for Low-Resource NLP …, 2018
Cited by 14, 2018
Towards Realistic Practices In Low-Resource Natural Language Processing: The Development Set
K Kann, K Cho, SR Bowman
arXiv preprint arXiv:1909.01522, 2019
Cited by 10, 2019
Exploring cross-lingual transfer of morphological knowledge in sequence-to-sequence models
H Jin, K Kann
Proceedings of the First Workshop on Subword and Character Level Models in …, 2017
Cited by 8, 2017
Unlabeled data for morphological generation with character-based sequence-to-sequence models
K Kann, H Schütze
arXiv preprint arXiv:1705.06106, 2017
Cited by 8, 2017