Tom Kwiatkowski, Luke Zettlemoyer, Sharon Goldwater and Mark Steedman. 2011.
Lexical generalization in CCG grammar induction for semantic parsing. In
Proceedings of the Conference on Empirical Methods in Natural Language Processing, pp. 1512--1523.
Association for Computational Linguistics. cit 31.
[semparse, d=geo, d=atis, spf]
(***)
Builds on the unification-based approach of Kwiatkowski 2010.
Key observation: groups of words show the same syntactic/semantic tag variation.
So the variation is learned for the whole group, which is more robust to data sparsity.
i.e. They have discovered that word classes exist :)
This helps generalize the language-independent unification approach to unedited sentences like those in atis.
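The grouping idea above can be sketched roughly as follows: a shared template supplies the syntax/semantics pattern for a whole word class, and each word only contributes its logical constants. All names and representations here are illustrative, not the paper's code.

```python
# Sketch: a lexical entry is produced by pairing a lexeme (words + constants)
# with a template shared by the whole word class. Sparsity in one word's
# observed usages is then shared across the group.

def apply_template(template, words, constants):
    """Instantiate a lexical entry from a lexeme and a shared template."""
    category, lf_pattern = template
    return (words, category, lf_pattern.format(*constants))

# One hypothetical template shared by all transitive geography relations:
transitive_template = ("(S\\NP)/NP", "lambda x. lambda y. {0}(y, x)")

# Lexemes: surface words paired with their logical constants.
lexemes = [(["borders"], ["next_to"]), (["traverses"], ["traverse"])]

lexicon = [apply_template(transitive_template, w, c) for w, c in lexemes]
```

Every entry produced this way gets the same category and logical-form shape, so the tag variation is effectively learned once per group rather than once per word.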
Mentions Clarke10, Liang11, Goldwasser11 as going from sentences to answers without LF.
Mentions Branavan10, Vogel10, Liang09, Poon09, 10 as learning from interactions.
Results: (ubl: Kwiatkowski10, fubl: Kwiatkowski11)
atis-exact-f1: zc07:.852 ubl:.717 fubl:.828
geo880-f1: zc05:.870 zc07:.888 ubl:.882 fubl:.886
geo250-en: wasp:.829 ubl:.826 fubl:.837
geo250-sp: wasp:.858 ubl:.824 fubl:.857
geo250-jp: wasp:.858 ubl:.831 fubl:.835
geo250-tr: wasp:.781 ubl:.746 fubl:.731
Luke S. Zettlemoyer and Michael Collins. 2007.
Online learning of relaxed CCG grammars for parsing to logical form. In
Proceedings of the 2007 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning (EMNLP-CoNLL 2007).
Citeseer. cit 121.
[semparse, d=atis, spf, d=geo, afz]
We consider the problem of learning to parse sentences to lambda-calculus representations of their underlying semantics and present an algorithm that learns a weighted combinatory categorial grammar (CCG). A key idea is to introduce non-standard CCG combinators that relax certain parts of the grammar, for example allowing flexible word order, or insertion of lexical items, with learned costs. We also present a new, online algorithm for inducing a weighted CCG. Results for the approach on ATIS data show 86% F-measure in recovering fully correct semantic analyses and 95.9% F-measure by a partial-match criterion, a more than 5% improvement over the 90.3% partial-match figure reported by He and Young (2006).
(*)
Solves the same problem as the ZC05 paper on atis and geo.
Geo, jobs, and restaurant are artificially generated; atis is natural!
New CCG combinators and a new online algorithm make it more flexible with realistic language.
atis exact 1-pass p=.9061 r=.8192 f=.8605
atis exact 2-pass p=.8575 r=.8460 f=.8516
atis partial 1-pass p=.9676 r=.8689 f=.9156
atis partial 2-pass p=.9511 r=.9671 f=.9590
(He and Young 2006 atis partial f=90.3%)
geo880 1-pass p=.9549 r=.8320 f=.8893
geo880 2-pass p=.9163 r=.8607 f=.8876
(ZC05 p=.9625 r=.7929 f=.8695)
Still uses GENLEX, with two additional rules.
Still uses an initial lexicon with nouns and wh-words!
CCG additions include:
1. function application with reversed word order.
2. function composition with reversed word order.
(do we even need the syntactic cats with slashes?)
3. additional type-raising and crossed-composition rules that need more careful reading.
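A toy sketch of what a reversed-order application combinator might look like. Categories are plain strings and semantics are Python lambdas standing in for lambda-calculus terms; this is illustrative only, not the paper's implementation (which attaches a learned cost feature to the relaxed rule).

```python
# Standard forward application is X/Y . Y => X. The relaxed variant lets
# the function find its argument on the "wrong" side, so that reordered
# phrases still parse; a learned penalty keeps canonical order preferred.

def forward_apply(left, right):
    """Standard CCG forward application: X/Y . Y => X."""
    cat, sem = left
    arg_cat, arg_sem = right
    if cat.endswith("/" + arg_cat):
        return (cat[: -(len(arg_cat) + 1)], sem(arg_sem))
    return None  # categories do not combine

def reverse_apply(left, right):
    """Relaxed combinator: Y . X/Y => X, i.e. argument precedes function."""
    return forward_apply(right, left)

noun = ("N", "flight")
modifier = ("N/N", lambda n: f"lambda x. and({n}(x), to(x, boston))")

# Canonical order uses forward application; the reordered string
# "boston-modifier-first" style still combines via the relaxed rule.
assert forward_apply(modifier, noun)[0] == "N"
assert reverse_apply(noun, modifier)[0] == "N"
```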
Two important differences from learning algorithm of ZC05:
1. online updates instead of batch.
2. perceptron updates instead of SGD on NLL.
Learning algorithm:
1. skip the example if it parses correctly with the current lexicon.
2. introduce all GENLEX entries and find the max-score parse with the correct semantics.
3. add the new entries from that max parse to the lexicon and try parsing again.
4. do a perceptron update.
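The four-step loop above can be sketched as below, under assumed interfaces: parse, genlex, and features are placeholders standing in for the paper's parser, GENLEX procedure, and feature function, not real APIs.

```python
# Minimal sketch of the online learning loop (one pass over the data).
# lexicon is a set of entries; weights is a dict of feature weights.

def train_epoch(data, lexicon, weights, parse, genlex, features):
    for sentence, logical_form in data:
        # 1. Skip if the current lexicon already yields the right parse.
        best = parse(sentence, lexicon, weights)
        if best is not None and best.lf == logical_form:
            continue

        # 2. Add all GENLEX entries; find the max-score correct parse.
        expanded = lexicon | genlex(sentence, logical_form)
        correct = parse(sentence, expanded, weights, require_lf=logical_form)
        if correct is None:
            continue  # no correct parse reachable; move on

        # 3. Keep only the new entries actually used in that parse.
        lexicon |= set(correct.lexical_entries)

        # 4. Perceptron update (instead of ZC05's batch gradient on NLL):
        #    push toward the correct parse, away from the current best,
        #    only if the re-parse is still wrong.
        best = parse(sentence, lexicon, weights)
        if best is not None and best.lf != logical_form:
            for f, v in features(correct).items():
                weights[f] = weights.get(f, 0.0) + v
            for f, v in features(best).items():
                weights[f] = weights.get(f, 0.0) - v
    return lexicon, weights
```

Step 3 is what keeps the lexicon compact: only entries that participate in the highest-scoring correct parse survive, rather than everything GENLEX proposes.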