Luke S. Zettlemoyer and Michael Collins. 2007.
Online learning of relaxed CCG grammars for parsing to logical form.
In Proceedings of the 2007 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning (EMNLP-CoNLL 2007).
Citeseer.
[semparse, d=atis, spf, d=geo, afz]
We consider the problem of learning to parse sentences to lambda-calculus representations of their underlying semantics and present an algorithm that learns a weighted combinatory categorial grammar (CCG). A key idea is to introduce non-standard CCG combinators that relax certain parts of the grammar, for example allowing flexible word order or insertion of lexical items, with learned costs. We also present a new, online algorithm for inducing a weighted CCG. Results for the approach on ATIS data show 86% F-measure in recovering fully correct semantic analyses and 95.9% F-measure by a partial-match criterion, a more than 5% improvement over the 90.3% partial-match figure reported by He and Young (2006).
(*)
Solves the same problem as the ZC05 paper, on ATIS and Geo.
Geo, Jobs, and Restaurants are artificially generated; ATIS is natural!
New CCG combinators and a new online algorithm make the approach more flexible for realistic language.
atis exact 1-pass p=.9061 r=.8192 f=.8605
atis exact 2-pass p=.8575 r=.8460 f=.8516
atis partial 1-pass p=.9676 r=.8689 f=.9156
atis partial 2-pass p=.9511 r=.9671 f=.9590
(He and Young 2006 atis partial f=90.3%)
geo880 1-pass p=.9549 r=.8320 f=.8893
geo880 2-pass p=.9163 r=.8607 f=.8876
(ZC05 p=.9625 r=.7929 f=.8695)
Still uses GENLEX with two additional rules.
Still uses an initial lexicon with nouns and wh-words!
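A rough sketch of the GENLEX idea this builds on (my paraphrase of ZC05; the trigger-rule interface and names here are made up, not the authors' code): take the cross product of all substrings of the sentence with all categories proposed by trigger rules applied to the logical form, and let learning prune the bad entries.

    # Hypothetical GENLEX sketch: candidate lexical entries are (substring, category) pairs.
    def GENLEX(sentence, logical_form, trigger_rules):
        words = sentence.split()
        substrings = {" ".join(words[i:j])
                      for i in range(len(words))
                      for j in range(i + 1, len(words) + 1)}
        categories = {cat for rule in trigger_rules
                          for cat in rule(logical_form)}
        return {(span, cat) for span in substrings for cat in categories}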
CCG additions include (relaxed application sketched after this list):
1. function application with reversed word order.
2. function composition with reversed word order.
(do we even need the syntactic cats with slashes?)
3. additional type-raising and crossed-composition rules that need more careful reading.
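A minimal sketch of how I read the reversed-word-order application (the Category type and the cost feature are my own illustration, not the paper's formulation): a forward-slash category is also allowed to take its argument from the left, and doing so fires a feature whose weight is learned.

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Category:
        result: object            # result category, or the atom name for atomic categories
        arg: object = None        # argument category (None for atomic categories)
        slash: str = None         # '/' expects the argument on the right, '\\' on the left

    def apply_forward(left, right):
        # Standard forward application: X/Y  Y  =>  X.
        if left.slash == '/' and left.arg == right:
            return left.result
        return None

    def apply_relaxed(left, right):
        # Relaxed application: Y  X/Y  =>  X, i.e. the argument shows up on the
        # "wrong" side; returns the result plus a feature the model can penalise.
        if right.slash == '/' and right.arg == left:
            return right.result, 1
        return None, 0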
Two important differences from the learning algorithm of ZC05 (update rules sketched after this list):
1. online updates instead of batch.
2. perceptron updates instead of SGD on NLL.
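To make difference 2 concrete, a sketch of the two update rules for a log-linear parsing model (notation mine, not code from either paper):

    import numpy as np

    def perceptron_update(w, gold_feats, pred_feats):
        # ZC07-style: move toward the best correct parse, away from the model's
        # current best parse; no probabilities or learning rate needed.
        return w + (gold_feats - pred_feats)

    def sgd_nll_update(w, gold_feats, expected_feats, lr=0.1):
        # ZC05-style: one stochastic gradient step on the negative log-likelihood;
        # the second term is the model's expected feature vector, which needs a
        # sum over (an approximation of) all parses.
        return w + lr * (gold_feats - expected_feats)

    w = np.zeros(2)
    w = perceptron_update(w, np.array([1.0, 0.0]), np.array([0.0, 1.0]))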
Learning algorithm (rough code sketch after this list):
1. skip the example if it is parsed correctly with the current lexicon.
2. introduce all GENLEX entries and find the max-scoring parse with the correct semantics.
3. add the new entries used in that max parse to the lexicon and try parsing again.
4. do a perceptron update.
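Putting the four steps together, a pseudocode-style sketch of the online loop (parse, GENLEX, and the fields on the returned parse are assumed helpers, not the authors' API):

    def train(data, lexicon, weights, genlex_rules, epochs=5):
        for _ in range(epochs):
            for sentence, logical_form in data:
                # Step 1: skip if the current lexicon already yields the correct semantics.
                best = parse(sentence, lexicon, weights)
                if best is not None and best.semantics == logical_form:
                    continue
                # Step 2: add all GENLEX entries, then find the max-scoring parse
                # constrained to produce the correct logical form.
                expanded = lexicon | GENLEX(sentence, logical_form, genlex_rules)
                gold = parse(sentence, expanded, weights, require=logical_form)
                if gold is None:
                    continue
                # Step 3: keep only the new entries used in that parse and reparse.
                lexicon |= set(gold.lexical_entries)
                best = parse(sentence, lexicon, weights)
                # Step 4: perceptron update if the model's best parse is still wrong.
                if best is not None and best.semantics != logical_form:
                    weights = weights + (gold.features - best.features)
        return lexicon, weights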
Andrew McCallum, Kamal Nigam, et al. 1998.
A comparison of event models for naive Bayes text classification.
In AAAI-98 Workshop on Learning for Text Categorization, vol. 752, pp. 41-48.
Citeseer.
[comp542]
Recent approaches to text classification have used two different first-order probabilistic models for classification, both of which make the naive Bayes assumption. Some use a multi-variate Bernoulli model, that is, a Bayesian network with no dependencies between words and binary word features (e.g. Larkey and Croft 1996; Koller and Sahami 1997). Others use a multinomial model, that is, a uni-gram language model with integer word counts (e.g. Lewis and Gale 1994; Mitchell 1997). This paper aims to clarify the confusion by describing the differences and details of these two models, and by empirically comparing their classification performance on five text corpora. We find that the multi-variate Bernoulli performs well with small vocabulary sizes, but that the multinomial usually performs even better at larger vocabulary sizes, providing on average a 27% reduction in error over the multi-variate Bernoulli model at any vocabulary size.
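For a concrete feel of the two event models, a tiny comparison using scikit-learn (my toy data; assumes scikit-learn is installed, and the paper of course predates it): MultinomialNB works on integer word counts, BernoulliNB on binary word occurrence.

    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.naive_bayes import BernoulliNB, MultinomialNB

    docs = ["grain prices rise", "wheat and grain prices rise again",
            "team wins the final match"]
    labels = ["econ", "econ", "sport"]

    vec = CountVectorizer()
    counts = vec.fit_transform(docs)               # integer word counts (multinomial view)

    multinomial = MultinomialNB().fit(counts, labels)
    bernoulli = BernoulliNB().fit(counts, labels)  # BernoulliNB binarizes the counts itself

    test = vec.transform(["grain in the final match"])
    print(multinomial.predict(test), bernoulli.predict(test))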