
Constraining Linear-chain CRFs to Regular Languages

Published by: Sean Papay
Publication date: 2021
Research field: Informatics Engineering
Paper language: English





In structured prediction, a major challenge for models is to represent the interdependencies within their output structures. For the common case where outputs are structured as a sequence, linear-chain conditional random fields (CRFs) are a widely used model class which can learn local dependencies in output sequences. However, the CRF's Markov assumption makes it impossible for these models to capture nonlocal dependencies, and standard CRFs are unable to respect nonlocal constraints of the data (such as global arity constraints on output labels). We present a generalization of CRFs that can enforce a broad class of constraints, including nonlocal ones, by specifying the space of possible output structures as a regular language $\mathcal{L}$. The resulting regular-constrained CRF (RegCCRF) has the same formal properties as a standard CRF, but assigns zero probability to all label sequences not in $\mathcal{L}$. Notably, RegCCRFs can incorporate their constraints during training, while related models only enforce constraints during decoding. We prove that constrained training is never worse than constrained decoding, and show using synthetic data that it can be substantially better in practice. Additionally, we demonstrate a practical benefit on downstream tasks by incorporating a RegCCRF into a deep neural model for semantic role labeling, exceeding state-of-the-art results on a standard dataset.
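To make the construction concrete, here is a minimal sketch (our own toy code, not the authors' implementation) of the core mechanism: the forward algorithm of a linear-chain CRF is run over the product of label positions and the states of a DFA for $\mathcal{L}$, so the partition function sums only over label sequences the DFA accepts. All function and variable names are illustrative.

```python
import numpy as np
from scipy.special import logsumexp

def constrained_log_partition(emissions, transitions, dfa_delta, start, accept):
    """log Z summed only over label sequences accepted by the DFA.

    emissions:   (T, K) per-position label scores from the CRF
    transitions: (K, K) label-to-label transition scores
    dfa_delta:   dict mapping (dfa_state, label) -> next dfa_state;
                 labels absent from the dict are disallowed in that state
    start:       initial DFA state
    accept:      set of accepting DFA states
    """
    T, K = emissions.shape
    # alpha[(q, y)] = log-sum of scores of all prefixes that end in
    # label y with the DFA in state q
    alpha = {}
    for y in range(K):
        q = dfa_delta.get((start, y))
        if q is not None:
            alpha[(q, y)] = emissions[0, y]
    for t in range(1, T):
        new_alpha = {}
        for (q, y_prev), a in alpha.items():
            for y in range(K):
                q_next = dfa_delta.get((q, y))
                if q_next is None:
                    continue  # this label would take the sequence out of L
                score = a + transitions[y_prev, y] + emissions[t, y]
                key = (q_next, y)
                new_alpha[key] = np.logaddexp(new_alpha.get(key, -np.inf), score)
        alpha = new_alpha
    finals = [a for (q, _), a in alpha.items() if q in accept]
    return logsumexp(finals) if finals else -np.inf
```

Training against this constrained log-partition instead of the standard one is what lets the constraints act during training rather than only at decoding time: any label sequence outside $\mathcal{L}$ receives zero probability mass.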


Read also

Leonid Kontorovich, 2007
We give a universal kernel that renders all the regular languages linearly separable. We are not able to compute this kernel efficiently and conjecture that it is intractable, but we do have an efficient $\epsilon$-approximation.
A classical result (often credited to Y. Medvedev) states that every language recognized by a finite automaton is the homomorphic image of a local language over a much larger, so-called local alphabet, namely the alphabet of the edges of the transition graph. Local languages are characterized by the value k=2 of the sliding-window width in McNaughton and Papert's infinite hierarchy of strictly locally testable languages (k-slt). We generalize Medvedev's result in a new direction, studying the relationship between the width and the alphabetic ratio, which tells how much larger the local alphabet is. We prove that every regular language is the image of a k-slt language on an alphabet of doubled size, where the width depends logarithmically on the automaton size, and we exhibit regular languages for which any smaller alphabetic ratio is insufficient. More generally, we express the trade-off between alphabetic ratio and width as a mathematical relation derived from a careful encoding of the states. Finally, we mention some directions for theoretical development and application.
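In symbols (our notation, paraphrasing the abstract's quantitative claim): for every regular language $L$ over an alphabet $\Sigma$ recognized by an $n$-state automaton, there are a $k$-slt language $L'$ over an alphabet $\Sigma'$ and a homomorphism $h$ with

$$L = h(L'), \qquad |\Sigma'| = 2\,|\Sigma|, \qquad k = O(\log n),$$

and the authors exhibit regular languages for which no alphabetic ratio below 2 suffices.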
The fully connected conditional random field (CRF) with Gaussian pairwise potentials has proven popular and effective for multi-class semantic segmentation. While the energy of a dense CRF can be minimized accurately using a linear programming (LP) relaxation, the state-of-the-art algorithm is too slow to be useful in practice. To alleviate this deficiency, we introduce an efficient LP minimization algorithm for dense CRFs. To this end, we develop a proximal minimization framework, where the dual of each proximal problem is optimized via block coordinate descent. We show that each block of variables can be efficiently optimized. Specifically, for one block, the problem decomposes into significantly smaller subproblems, each of which is defined over a single pixel. For the other block, the problem is optimized via conditional gradient descent. This has two advantages: 1) the conditional gradient can be computed in time linear in the number of pixels and labels; and 2) the optimal step size can be computed analytically. Our experiments on standard datasets provide compelling evidence that our approach outperforms all existing baselines, including the previous LP-based approach for dense CRFs.
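To illustrate the second advantage, here is a toy sketch (ours, not the paper's dense-CRF dual) of conditional gradient (Frank-Wolfe) on a generic quadratic objective over the probability simplex, where the exact step size has a closed form. All names are illustrative.

```python
import numpy as np

def frank_wolfe_simplex(A, b, n_iters=100):
    """Minimize f(x) = 0.5 x^T A x + b^T x over the probability simplex."""
    n = len(b)
    x = np.full(n, 1.0 / n)              # feasible starting point
    for _ in range(n_iters):
        grad = A @ x + b
        # Linear minimization oracle over the simplex: a single vertex
        s = np.zeros(n)
        s[np.argmin(grad)] = 1.0
        d = s - x                        # Frank-Wolfe direction
        # Analytic step size: minimize the quadratic f(x + gamma d)
        # over gamma in [0, 1]; its unconstrained minimizer is
        # -grad.d / (d^T A d), clipped to the interval.
        denom = d @ A @ d
        gamma = 1.0 if denom <= 0 else np.clip(-(grad @ d) / denom, 0.0, 1.0)
        x = x + gamma * d
    return x
```

The same closed-form line search is what makes each conditional-gradient step cheap when the objective is quadratic, which is the situation the abstract describes for its dual subproblem.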
Finite automata whose computations can be reversed, at any point, by knowing the last k symbols read from the input, for a fixed k, are considered. These devices and their accepted languages are called k-reversible automata and k-reversible languages, respectively. The existence of k-reversible languages which are not (k-1)-reversible is known for each k>1. This gives an infinite hierarchy of weakly irreversible languages, i.e., languages which are k-reversible for some k. Conditions characterizing the class of k-reversible languages, for each fixed k, and the class of weakly irreversible languages are obtained. From these conditions, a procedure is described that, given a finite automaton, decides if the accepted language is weakly or strongly (i.e., not weakly) irreversible. Furthermore, a construction is presented which allows one to transform any finite automaton which is not k-reversible, but which accepts a k-reversible language, into an equivalent k-reversible finite automaton.
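As a simplified illustration (our own sketch, covering only the no-lookahead case and ignoring any conditions on initial and final states): reversibility with no lookahead amounts to the reverse transition relation being deterministic, i.e., no state has two incoming transitions on the same symbol. The general k case additionally allows a window of the last k input symbols to disambiguate.

```python
def is_reversible_no_lookahead(transitions):
    """Check that reading the transitions backwards is deterministic.

    transitions: iterable of (src_state, symbol, dst_state) triples
    of a deterministic finite automaton.
    """
    seen = set()
    for src, sym, dst in transitions:
        if (dst, sym) in seen:
            return False  # two incoming transitions on sym collide at dst
        seen.add((dst, sym))
    return True
```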
Given a regular language L, we effectively construct a unary semigroup that recognizes the topological closure of L in the free unary semigroup relative to the variety of unary semigroups generated by the pseudovariety R of all finite R-trivial semigroups. In particular, we obtain a new effective solution of the separation problem of regular languages by R-languages.
