Ekin Akyürek, Erenay Dayanık and Deniz Yuret. 2019. Morphological Analysis Using a Sequence Decoder. Transactions of the Association for Computational Linguistics, vol. 7, pp. 567--579, Sep. [ai.ku]
We introduce Morse, a recurrent encoder-decoder model that produces morphological analyses of each word in a sentence. The encoder turns the relevant information about the word and its context into a fixed-size vector representation, and the decoder generates the sequence of characters for the lemma followed by a sequence of individual morphological features. We show that generating morphological features individually rather than as a combined tag allows the model to handle rare or unseen tags and outperform whole-tag models. In addition, generating morphological features as a sequence rather than, e.g., an unordered set allows our model to produce an arbitrary number of features that represent multiple inflectional groups in morphologically complex languages. We obtain state-of-the-art results in nine languages of different morphological complexity under low-resource, high-resource and transfer learning settings. We also introduce TrMor2018, a new high-accuracy Turkish morphology dataset. Our Morse implementation and the TrMor2018 dataset are available online to support future research.
Cemil Cengiz, Ulaş Sert and Deniz Yuret. 2019. KU_ai at MEDIQA 2019: Domain-specific Pre-training and Transfer Learning for Medical NLI. In Proceedings of the 18th BioNLP Workshop and Shared Task, pp. 427--436, Florence, Italy, Aug. Association for Computational Linguistics. [ai.ku]
In this paper, we describe our system and results submitted for the Natural Language Inference (NLI) track of the MEDIQA 2019 Shared Task. As the KU_ai team, we used BERT as our baseline model and pre-processed the MedNLI dataset to mitigate the negative impact of de-identification artifacts. Moreover, we investigated different pre-training and transfer learning approaches to improve performance. We show that pre-training the language model on rich biomedical corpora has a significant effect in teaching the model domain-specific language. In addition, training the model on large NLI datasets such as MultiNLI and SNLI helps in learning task-specific reasoning. Finally, we ensembled our highest-performing models and achieved 84.7% accuracy on the unseen test dataset, ranking 10th out of 17 teams in the official results.