
On Implementing an HPSG theory -- Aspects of the logical architecture, the formalization, and the implementation of head-driven phrase structure grammars

Added by Detmar Meurers
Publication date: 1994
Language: English





The paper presents some aspects involved in the formalization and implementation of HPSG theories. As a basis, the logical setups of Carpenter (1992) and King (1989, 1994) are briefly compared regarding their usefulness as a foundation for HPSGII (Pollard and Sag 1994). The possibilities for expressing HPSG theories in the HPSGII architecture and in various computational systems (ALE, Troll, CUF, and TFS) are discussed. Besides a formal characterization of the possibilities, the paper investigates the specific choices for constraints with certain linguistic motivations, i.e. the lexicon, structure licensing, and grammatical principles. An ALE implementation of a theory for German proposed by Hinrichs and Nakazawa (1994) serves as an example, and the ALE grammar is included in the appendix.
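
As an informal illustration of what it means to state a grammatical principle as a constraint on feature structures, here is a minimal Python sketch; the paper itself works in typed feature logics and implements its grammar in ALE, so the dict encoding and names below are simplified stand-ins, not the paper's formalization.

    # Feature structures as nested dicts; the Head Feature Principle
    # (Pollard and Sag 1994) as a constraint requiring a headed phrase
    # to share its HEAD value with its head daughter.

    def head_feature_principle(phrase):
        """Check that the phrase's HEAD equals its head daughter's HEAD."""
        mother = phrase["synsem"]["local"]["cat"]["head"]
        daughter = phrase["head_dtr"]["synsem"]["local"]["cat"]["head"]
        return mother == daughter

    # A toy headed phrase: a finite VP whose head daughter is a finite verb.
    vp = {
        "synsem": {"local": {"cat": {"head": {"pos": "verb", "vform": "fin"}}}},
        "head_dtr": {
            "synsem": {"local": {"cat": {"head": {"pos": "verb", "vform": "fin"}}}}
        },
    }

    assert head_feature_principle(vp)

Note that equality of copies only approximates the structure sharing (token identity) that the HPSG principle actually requires.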



Related research

Finding simple, non-recursive, base noun phrases is an important subtask for many natural language processing applications. While previous empirical methods for base NP identification have been rather complex, this paper instead proposes a very simple algorithm that is tailored to the relative simplicity of the task. In particular, we present a corpus-based approach for finding base NPs by matching part-of-speech tag sequences. The training phase of the algorithm is based on two successful techniques: first the base NP grammar is read from a "treebank" corpus; then the grammar is improved by selecting rules with high "benefit" scores. Using this simple algorithm with a naive heuristic for matching rules, we achieve surprising accuracy in an evaluation on the Penn Treebank Wall Street Journal.
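
A hedged sketch of the tag-sequence idea in Python: treat each base NP rule as a POS-tag sequence read off bracketed training data, then chunk new sentences by longest match against the rule set. The "benefit"-score pruning and the paper's exact matching heuristic are omitted, and all names below are illustrative rather than taken from the paper.

    def extract_rules(bracketed_corpus):
        """bracketed_corpus: list of sentences, each a list of
        (tag, in_np) pairs; returns the set of base NP tag sequences."""
        rules = set()
        for sentence in bracketed_corpus:
            current = []
            for tag, in_np in sentence:
                if in_np:
                    current.append(tag)
                elif current:
                    rules.add(tuple(current))
                    current = []
            if current:
                rules.add(tuple(current))
        return rules

    def chunk(tags, rules):
        """Greedy longest-match chunking of a POS tag sequence."""
        spans, i = [], 0
        while i < len(tags):
            for j in range(len(tags), i, -1):   # try longest match first
                if tuple(tags[i:j]) in rules:
                    spans.append((i, j))
                    i = j
                    break
            else:
                i += 1
        return spans

    rules = extract_rules([[("DT", True), ("NN", True), ("VBD", False)]])
    print(chunk(["DT", "NN", "VBD", "DT", "NN"], rules))  # [(0, 2), (3, 5)]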
Zuchao Li, Junru Zhou, Hai Zhao (2021)
Constituent and dependency parsing, the two classic forms of syntactic parsing, have been found to benefit from joint training and decoding under a uniform formalism, Head-driven Phrase Structure Grammar (HPSG). However, decoding this unified grammar has a higher time complexity ($O(n^5)$) than decoding either form individually ($O(n^3)$), since more factors have to be considered during decoding. We thus propose an improved head scorer that helps achieve a novel performance-preserving parser with $O(n^3)$ time complexity. Furthermore, on the basis of this practical HPSG parser, we investigate the strengths of HPSG-based parsing and explore a general method for training an HPSG-based parser from only constituent or dependency annotations in a multilingual scenario. We thus present more effective, more in-depth, and more general work on HPSG parsing.
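
The cubic bound comes from the standard span-based chart recursion: there are $O(n^2)$ spans and each is filled by trying $O(n)$ split points. A hedged Python sketch of that recursion follows, with a stand-in score function; it illustrates the complexity argument only, not the paper's head scorer or joint decoder.

    import math

    def decode(n, span_score):
        """best[i][j] = best score of a tree over words i..j (exclusive)."""
        best = [[0.0] * (n + 1) for _ in range(n + 1)]
        back = [[None] * (n + 1) for _ in range(n + 1)]
        for length in range(2, n + 1):          # O(n) span lengths
            for i in range(0, n - length + 1):  # O(n) start positions
                j = i + length
                best[i][j] = -math.inf
                for k in range(i + 1, j):       # O(n) split points
                    s = span_score(i, j) + best[i][k] + best[k][j]
                    if s > best[i][j]:
                        best[i][j], back[i][j] = s, k
        return best[0][n], back

    score, back = decode(5, lambda i, j: 1.0 / (j - i))
    print(score)

The earlier $O(n^5)$ joint formulation additionally ranges over candidate head positions for both subspans at each split, which is what the improved head scorer avoids.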
We survey several problems related to logical aspects of quantum structures. In particular, we consider problems related to completions, decidability and axiomatizability, and embedding problems. The historical development is described, as well as recent progress and some suggested paths forward.
We develop an approach to choice principles and their contrapositive bar-induction principles as extensionality schemes connecting an intensional or effective view of, respectively, ill-foundedness and well-foundedness properties to an extensional or ideal view of these properties. After classifying and analysing the relations between different intensional definitions of ill-foundedness and well-foundedness, we introduce, for a domain $A$, a codomain $B$ and a filter $T$ on finite approximations of functions from $A$ to $B$, a generalised form GDC$_{A,B,T}$ of the axiom of dependent choice and, dually, a generalised bar induction principle GBI$_{A,B,T}$ such that GDC$_{A,B,T}$ intuitionistically captures the strength of:
- the general axiom of choice, expressed as $\forall a\,\exists b\, R(a,b) \Rightarrow \exists\alpha\,\forall a\, R(a,\alpha(a))$, when $T$ is a filter that derives point-wise from a relation $R$ on $A \times B$ without introducing further constraints,
- the Boolean Prime Filter Theorem / Ultrafilter Theorem if $B$ is the two-element set $\mathbb{B}$ (for a constructive definition of prime filter),
- the axiom of dependent choice if $A = \mathbb{N}$,
- Weak König's Lemma if $A = \mathbb{N}$ and $B = \mathbb{B}$ (up to weak classical reasoning);
and GBI$_{A,B,T}$ intuitionistically captures the strength of:
- Gödel's completeness theorem, in the form "validity implies provability" for entailment relations, if $B = \mathbb{B}$,
- bar induction when $A = \mathbb{N}$,
- the Weak Fan Theorem when $A = \mathbb{N}$ and $B = \mathbb{B}$.
Contrastingly, even though GDC$_{A,B,T}$ and GBI$_{A,B,T}$ smoothly capture several variants of choice and bar induction, some instances are inconsistent, e.g. when $A$ is $\mathbb{B}^\mathbb{N}$ and $B$ is $\mathbb{N}$.
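
For orientation, the $A = \mathbb{N}$ instance mentioned above corresponds to the usual axiom of dependent choice, one standard rendering of which (ours, not quoted from the paper) is:

    $\forall a\, \exists b\, R(a,b) \;\Rightarrow\; \forall a_0\, \exists \alpha\, \bigl(\alpha(0) = a_0 \wedge \forall n\, R(\alpha(n), \alpha(n+1))\bigr)$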
Instance segmentation models today are very accurate when trained on large annotated datasets, but collecting mask annotations at scale is prohibitively expensive. We address the partially supervised instance segmentation problem in which one can train on (significantly cheaper) bounding boxes for all categories but use masks only for a subset of categories. In this work, we focus on a popular family of models which apply differentiable cropping to a feature map and predict a mask based on the resulting crop. Under this family, we study Mask R-CNN and discover that instead of its default strategy of training the mask-head with a combination of proposals and groundtruth boxes, training the mask-head with only groundtruth boxes dramatically improves its performance on novel classes. This training strategy also allows us to take advantage of alternative mask-head architectures, which we exploit by replacing the typical mask-head of 2-4 layers with significantly deeper off-the-shelf architectures (e.g. ResNet, Hourglass models). While many of these architectures perform similarly when trained in fully supervised mode, our main finding is that they can generalize to novel classes in dramatically different ways. We call this ability of mask-heads to generalize to unseen classes the strong mask generalization effect and show that without any specialty modules or losses, we can achieve state-of-the-art results in the partially supervised COCO instance segmentation benchmark. Finally, we demonstrate that our effect is general, holding across underlying detection methodologies (including anchor-based, anchor-free or no detector at all) and across different backbone networks. Code and pre-trained models are available at https://git.io/deepmac.
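
A hedged sketch of the central training choice, in PyTorch-like Python: crop backbone features at ground-truth boxes only (rather than at RPN proposals) and train the mask-head on those crops, with mask loss applied only where mask annotations exist. The components here (mask_head, the feature map, the shapes) are hypothetical stand-ins, not the authors' code.

    import torch
    import torch.nn.functional as F
    from torchvision.ops import roi_align

    def mask_head_step(features, gt_boxes, gt_masks, mask_head, scale):
        """features: [B, C, H, W] backbone feature map.
        gt_boxes: list of [L_i, 4] ground-truth boxes, one tensor per image.
        gt_masks: [sum L_i, 28, 28] float mask targets, present only for
        the subset of categories that have mask annotations.
        """
        # Crop a fixed-size feature window at every ground-truth box.
        crops = roi_align(features, gt_boxes, output_size=(14, 14),
                          spatial_scale=scale, aligned=True)
        logits = mask_head(crops)      # expected shape [sum L_i, 1, 28, 28]
        # Box-only categories contribute no mask loss (partial supervision).
        return F.binary_cross_entropy_with_logits(logits.squeeze(1), gt_masks)

The deeper off-the-shelf mask-head architectures the abstract mentions would simply replace mask_head here; the cropping and loss wiring stay the same.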
