Learning Unification-Based Natural Language Grammars
1995-02-03
9502002 | cmp-lg
When parsing unrestricted language, wide-coverage grammars often
undergenerate. Undergeneration can be tackled either by sentence correction or
by grammar correction. This thesis concentrates upon automatic grammar
correction (or machine learning of grammar) as a solution to the problem of
undergeneration. Broadly speaking, grammar correction approaches can be
classified as either {\it data-driven} or {\it model-based}. Data-driven
learners use data-intensive methods to acquire grammar. They typically use
grammar formalisms unsuited to the needs of practical text processing and
cannot guarantee that the resulting grammar is adequate for subsequent semantic
interpretation. In particular, data-driven learners acquire grammars that {\it
overgenerate} (generate strings that humans would judge to be grammatically
ill-formed) and that fail to assign linguistically plausible parses.
Model-based learners are knowledge-intensive, and their success relies upon
the completeness of a {\it model of grammaticality}. In practice, however, the
model will be incomplete. Since this thesis deals with undergeneration by
learning, we hypothesise that combining data-driven and model-based learning
would let each compensate for the other's weakness: data-driven learning for
model-based learning's incompleteness, and model-based learning for
data-driven learning's unsoundness. We describe a system that we have used to
test the hypothesis empirically. The system combines data-driven and
model-based learning to acquire unification-based grammars that are more
suitable for practical text parsing. Using the Spoken English Corpus as data,
and quantitatively measuring undergeneration, overgeneration, and parse
plausibility, we show that the hypothesis is correct.
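The thesis itself supplies the formal machinery; as a rough illustration of
the kind of formalism involved, the following is a minimal Python sketch of
feature-structure unification, the operation at the heart of
unification-based grammars. The representation and names here are our own
illustrative assumptions, not the thesis's.

    def unify(fs1, fs2):
        """Unify two feature structures represented as nested dicts.

        Returns a structure combining the information in both, or None
        if they carry conflicting values. Atomic values are plain
        strings; reentrancy is not modelled in this sketch.
        """
        if fs1 == fs2:
            return fs1
        if isinstance(fs1, dict) and isinstance(fs2, dict):
            result = dict(fs1)
            for feat, val in fs2.items():
                if feat in result:
                    sub = unify(result[feat], val)
                    if sub is None:   # conflicting values: unification fails
                        return None
                    result[feat] = sub
                else:
                    result[feat] = val
            return result
        return None                   # atomic clash, e.g. "sg" vs. "pl"

    # A rule such as S -> NP VP can require the NP and VP to unify on
    # agreement features, so that "the dogs barks" is rejected:
    np = {"cat": "NP", "agr": {"num": "pl", "per": "3"}}
    vp = {"cat": "VP", "agr": {"num": "sg", "per": "3"}}
    print(unify(np["agr"], vp["agr"]))    # None: number clash

Because unification fails exactly when feature values conflict, tightening or
loosening the features carried by rules is one way a learner can trade off
undergeneration against overgeneration.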