Tom Kwiatkowski, Luke Zettlemoyer, Sharon Goldwater and Mark Steedman. 2011. Lexical generalization in CCG grammar induction for semantic parsing. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, pp. 1512--1523. Association for Computational Linguistics. cit 31. [semparse, d=geo, d=atis, spf]
(***)
Builds on the unification-based approach of Kwiatkowski 2010.
Key observation: groups of words show the same syntactic/semantic tag variation, so the variation can be learned once for the whole group, which is more robust to data sparsity.
i.e. They have discovered that word classes exist :)
This helps generalize the language-independent unification approach to unedited sentences, like those in atis.
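The shared-variation idea can be sketched as a factored lexicon: a lexical entry is split into a lexeme (word plus logical constants) and a template (the syntactic/semantic shape shared by the word class). This is a minimal illustrative sketch, not the paper's implementation; the category strings and logical forms are toy stand-ins.

```python
# Minimal sketch of a factored CCG lexicon (illustrative, not the paper's code).
# A full lexical entry = lexeme (word, constants) + template (category with a hole).
from itertools import product

# Lexemes pair a word with the logical constants it can introduce.
lexemes = {
    "texas": ["texas:st"],
    "utah": ["utah:st"],
}

# Templates capture the syntactic/semantic variation shared by a word class.
# {0} is the slot filled by the lexeme's constant.
templates = [
    ("NP", "{0}"),                        # bare entity reading
    ("N/N", "lambda x. loc(x, {0})"),     # modifier reading, e.g. "Texas cities"
]

def expand(word):
    """Cross the word's lexemes with the shared templates to get full entries."""
    return [(word, cat, sem.format(c))
            for c, (cat, sem) in product(lexemes[word], templates)]

for entry in expand("utah"):
    print(entry)
```

Because the templates are shared by every state name, an alternation observed in training only for "Texas" transfers to "Utah" for free, which is the robustness to data sparsity the notes describe.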
Mentions Clarke10, Liang11, Goldwasser11 as going from sentences to answers without LF.
Mentions Branavan10, Vogel10, Liang09, Poon09, 10 as learning from interactions.
Results (ubl: Kwiatkowski10, fubl: Kwiatkowski11):
               zc05   zc07   wasp   ubl    fubl
atis-exact-f1   -     .852    -     .717   .828
geo880-f1      .870   .888    -     .882   .886
geo250-en       -      -     .829   .826   .837
geo250-sp       -      -     .858   .824   .857
geo250-jp       -      -     .858   .831   .835
geo250-tr       -      -     .781   .746   .731
Yoav Artzi and Luke Zettlemoyer. 2011. Bootstrapping semantic parsers from conversations. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, pp. 421--432. Association for Computational Linguistics. cit 20. [semparse, d=lucent, d=bbn, spf]
Conversations provide rich opportunities for interactive, continuous learning. When something goes wrong, a system can ask for clarification, rewording, or otherwise redirect the interaction to achieve its goals. In this paper, we present an approach for using conversational interactions of this type to induce semantic parsers. We demonstrate learning without any explicit annotation of the meanings of user utterances. Instead, we model meaning with latent variables, and introduce a loss function to measure how well potential meanings match the conversation. This loss drives the overall learning approach, which induces a weighted CCG grammar that could be used to automatically bootstrap the semantic analysis component in a complete dialog system. Experiments on DARPA Communicator conversational logs demonstrate effective learning, despite requiring no explicit meaning annotations.
(*)
Learning with logical forms provided for only part of the data (clarification questions in dialogues).
Loss-sensitive perceptron (Singh-Miller and Collins 2007).
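A hedged sketch of a loss-sensitive perceptron step in the spirit of Singh-Miller and Collins 2007 (not the paper's exact rule; the feature and loss functions here are toy stand-ins): promote the highest-scoring analysis consistent with the conversational evidence, demote the model's current best when it is inconsistent.

```python
# Toy loss-sensitive perceptron step (illustrative; not the paper's exact update).
# Candidates are analyses y with feature dicts feats(y); loss(y) measures how well
# y matches the conversational evidence (0 = consistent, >0 = inconsistent).

def dot(w, f):
    return sum(w.get(k, 0.0) * v for k, v in f.items())

def update(w, candidates, feats, loss, lr=1.0):
    """One perceptron step: promote best zero-loss candidate, demote model best."""
    good = [y for y in candidates if loss(y) == 0]
    if not good:
        return w  # no analysis consistent with the conversation -> skip example
    y_good = max(good, key=lambda y: dot(w, feats(y)))
    y_pred = max(candidates, key=lambda y: dot(w, feats(y)))
    if loss(y_pred) > 0:  # model's best violates the conversational evidence
        for k, v in feats(y_good).items():
            w[k] = w.get(k, 0.0) + lr * v
        for k, v in feats(y_pred).items():
            w[k] = w.get(k, 0.0) - lr * v
    return w
```

For example, with candidates ["a", "b"], indicator features per candidate, loss 0 only for "a", and initial weights {"b": 2.0}, one step raises "a" and lowers "b"; once they tie, the consistent analysis wins and no further update fires.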
DARPA Communicator corpus (Walker 2002).
Has access to logs of conversations in which system utterances are annotated but user utterances are not.
Annotations include speech acts (Walker and Passonneau 2001); these are not predicted for unannotated sentences.
Seems like the annotated part of the dialogue (system utterances) can be seen as a training set and the rest (user utterances) as a test set; is there anything new here?
Uses ZC05, ZC07 (template based, not unification).
Further reading:
Clarke et al. (2010) and Liang et al. (2011) describe approaches for learning semantic parsers from questions paired with database answers, while Goldwasser et al. (2011) presents work on unsupervised learning.
Semantic analysis has also been learned from context-dependent database queries (Miller et al., 1996; Zettlemoyer and Collins, 2009), grounded event streams (Chen et al., 2010; Liang et al., 2009), environment interactions (Branavan et al., 2009; 2010; Vogel and Jurafsky, 2010), and even unannotated text (Poon and Domingos, 2009; 2010).
Uses the BIU Number Normalizer: http://www.cs.biu.ac.il/~nlp/downloads/