Tom Kwiatkowski,
Luke Zettlemoyer,
Sharon Goldwater and
Mark Steedman.
2011.
Lexical generalization in CCG grammar induction for semantic parsing. In
Proceedings of the Conference on Empirical Methods in Natural Language Processing, pp.
1512--1523.
Association for Computational Linguistics. cit
31. [
semparse, d=geo, d=atis, spf]
pdf annote google scholar
(***)
Builds on the unification-based approach of Kwiatkowski 2010.
Key observation: groups of words show the same pattern of syntactic/semantic variation.
So learn the variation once for the whole group (a factored lexicon of lexemes and templates), which is more robust to data sparsity.
i.e. They have discovered that word classes exist :)
This helps generalize the language-independent unification approach to unedited, spontaneous sentences like those in atis.
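A minimal sketch of the factored-lexicon idea (the data and names here are toy assumptions, not from the paper): instead of storing a full CCG lexical item per word, store lexemes (word plus logical constants) and templates (lexical items with constant slots), and expand their cross product. A template learned from one word then transfers to every other word in the same class.

```python
# Toy factored lexicon: lexemes pair a word with the logical constants
# it evokes; templates capture the shared syntactic/semantic variation
# of a word class. Categories and logical forms are plain strings.

lexemes = {
    "texas": ["texas:s"],
    "ohio": ["ohio:s"],
}

# Each template maps a constant list to a (CCG category, logical form) pair.
templates = [
    lambda c: ("NP", c[0]),                               # bare noun phrase
    lambda c: ("S/(S\\NP)", f"lambda p. p({c[0]})"),      # type-raised variant
]

def expand(word):
    """Cross product of a word's lexeme with all templates."""
    return [t(lexemes[word]) for t in templates]

for word in lexemes:
    for cat, lf in expand(word):
        print(word, cat, lf)
```

The point of the factoring: a single template covers every lexeme in its class, so adding one new word yields all of its categorial variants for free.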
Mentions Clarke10, Liang11, Goldwasser11 as going from sentences to answers without LF.
Mentions Branavan10, Vogel10, Liang09, Poon09, 10 as learning from interactions.
Results: (ubl: Kwiatkowski10, fubl: Kwiatkowski11)
atis-exact-f1: zc07:.852 ubl:.717 fubl:.828
geo880-f1: zc05:.870 zc07:.888 ubl:.882 fubl:.886
geo250-en: wasp:.829 ubl:.826 fubl:.837
geo250-sp: wasp:.858 ubl:.824 fubl:.857
geo250-jp: wasp:.858 ubl:.831 fubl:.835
geo250-tr: wasp:.781 ubl:.746 fubl:.731
Yoav Artzi and
Luke Zettlemoyer.
2011.
Bootstrapping semantic parsers from conversations. In
Proceedings of the Conference on Empirical Methods in Natural Language Processing, pp.
421--432.
Association for Computational Linguistics. cit
20. [
semparse, d=lucent, d=bbn, spf]
pdf abstract annote google scholar
Conversations provide rich opportunities for interactive, continuous learning. When something goes wrong, a system can ask for clarification, rewording, or otherwise redirect the interaction to achieve its goals. In this paper, we present an approach for using conversational interactions of this type to induce semantic parsers. We demonstrate learning without any explicit annotation of the meanings of user utterances. Instead, we model meaning with latent variables, and introduce a loss function to measure how well potential meanings match the conversation. This loss drives the overall learning approach, which induces a weighted CCG grammar that could be used to automatically bootstrap the semantic analysis component in a complete dialog system. Experiments on DARPA Communicator conversational logs demonstrate effective learning, despite requiring no explicit meaning annotations.
(*)
Learning with logical forms provided for only part of the data (clarification questions in dialogues).
Loss-sensitive perceptron (Singh-Miller and Collins 2007)
DARPA Communicator corpus (Walker 2002).
Access to logs of conversations where system utterances annotated, user utterances not.
Annotations include speech acts (Walker and Passonneau 2001); these are not predicted for unannotated sentences.
The annotated part of the dialogue (system utterances) looks like a training set
and the rest (user utterances) like a test set; is there anything new here?
Uses ZC05, ZC07 (template-based, not unification).
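A minimal sketch of a loss-sensitive perceptron update in the spirit of Singh-Miller and Collins (2007), as used here to drive learning from conversation-matching loss. The feature dicts and loss values below are toy assumptions, not the paper's actual features:

```python
# Toy loss-driven perceptron step: among candidate parses, promote the
# best-scoring minimal-loss candidate and demote the model's current
# favorite when they differ. Candidates are sparse feature dicts.

def score(weights, feats):
    return sum(weights.get(f, 0.0) * v for f, v in feats.items())

def perceptron_update(weights, candidates, losses, lr=1.0):
    """candidates: list of feature dicts; losses: parallel list of losses."""
    # the model's current highest-scoring candidate
    pred = max(candidates, key=lambda c: score(weights, c))
    # the highest-scoring candidate among those with minimal loss
    min_loss = min(losses)
    good = [c for c, l in zip(candidates, losses) if l == min_loss]
    target = max(good, key=lambda c: score(weights, c))
    if target != pred:
        for f, v in target.items():
            weights[f] = weights.get(f, 0.0) + lr * v
        for f, v in pred.items():
            weights[f] = weights.get(f, 0.0) - lr * v
    return weights
```

The loss here plays the role of the conversation-match signal: it selects which latent meanings count as "good" in the absence of gold logical forms.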
Further reading:
Clarke et al. (2010) and Liang et al. (2011) describe approaches for learning semantic parsers from questions paired with database answers, while Goldwasser et al. (2011) presents work on unsupervised learning.
Semantic analysis tasks include context-dependent database queries (Miller et al., 1996; Zettlemoyer and Collins, 2009), grounded event streams (Chen et al., 2010; Liang et al., 2009), environment interactions (Branavan et al., 2009; 2010; Vogel and Jurafsky, 2010), and even unannotated text (Poon and Domingos, 2009; 2010).
Uses BIU Number Normalizer http://www.cs.biu.ac.il/~nlp/downloads/
Hoifung Poon and
Pedro Domingos.
2010.
Unsupervised ontology induction from text. In
Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics, pp.
296--305.
Association for Computational Linguistics. cit
62. [
semparse]
pdf abstract google scholar
Extracting knowledge from unstructured text is a long-standing goal of NLP. Although learning approaches to many of its subtasks have been developed (e.g., parsing, taxonomy induction, information extraction), all end-to-end solutions to date require heavy supervision and/or manual engineering, limiting their scope and scalability. We present OntoUSP, a system that induces and populates a probabilistic ontology using only dependency-parsed text as input. OntoUSP builds on the USP unsupervised semantic parser by jointly forming ISA and IS-PART hierarchies of lambda-form clusters. The ISA hierarchy allows more general knowledge to be learned, and the use of smoothing for parameter estimation. We evaluate OntoUSP by using it to extract a knowledge base from biomedical abstracts and answer questions. OntoUSP improves on the recall of USP by 47% and greatly outperforms previous state-of-the-art approaches.
Tom Kwiatkowski,
Luke Zettlemoyer,
Sharon Goldwater and
Mark Steedman.
2010.
Inducing probabilistic CCG grammars from logical form with higher-order unification. In
Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing, pp.
1223--1233.
Association for Computational Linguistics. cit
82. [
spf, semparse, d=geo]
pdf annote google scholar
(****)
Sentence-to-logical-form mapping using CCG and unification (UBL) instead of GenLex.
Geo dataset, four languages, two meaning representations (funql, lambda).
Start with a single lexical item for each sentence, mapping it to its LF.
Introduces a vertical-bar slash | to CCG that can match either / or \. (similar to ZC07?)
To understand the SGD gradient, possibly read CC07.
Starting with a single lexical item and trying splits looks much less ad hoc than ZC05/07 with GenLex and an initial lexicon.
Only the proper-noun NPs (e.g. Texas) are in the initial lexicon.
The splitting constraints in Sec 4.1 are interesting; can they be learned from data?
The split-merge process seems a bit ad hoc; a more principled Bayesian approach may be possible.
Good related work discussion in Sec 6.
UBL geo880: p=.941 r=.850 f=.893
UBL-s (2pass): p=.885 r=.879 f=.882
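The splitting step can be sketched roughly as follows (logical forms as nested tuples, "X" marking the hole left in the function part; this ignores CCG category assignment and the typed higher-order unification constraints, and all names are my own):

```python
# Toy sketch of the restricted "split" move in UBL-style grammar
# induction: given one lexical item covering a whole phrase, propose
# (function, argument) pairs whose application reconstructs its logical
# form, paired with every binary split of the word span.

def subterms(lf):
    """Yield the logical form and all of its nested subterms."""
    yield lf
    if isinstance(lf, tuple):
        for part in lf:
            yield from subterms(part)

def replace(lf, target, hole="X"):
    """Abstract out `target`, leaving `hole` where it occurred."""
    if lf == target:
        return hole
    if isinstance(lf, tuple):
        return tuple(replace(p, target, hole) for p in lf)
    return lf

def splits(words, lf):
    """Enumerate ((left words, right words), (function, argument)) pairs."""
    out = []
    for i in range(1, len(words)):
        for arg in subterms(lf):
            fun = replace(lf, arg)
            if fun != "X":  # skip the trivial identity split
                out.append(((words[:i], words[i:]), (fun, arg)))
    return out
```

For the item covering "capital texas" with LF ("capital", "texas"), this proposes splitting off "texas" as the argument of a function with a hole, which is the kind of candidate the learner then scores and keeps or discards.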