
Learning Graphs with Monotone Topology Properties and Multiple Connected Components

Posted by Eduardo Pavez
Publication date: 2017
Research field: Mathematical Statistics
Paper language: English





Recent papers have formulated the problem of learning graphs from data as an inverse covariance estimation problem with graph Laplacian constraints. While such problems are convex, existing methods cannot guarantee that solutions will have specific graph topology properties (e.g., being $k$-partite), which are desirable for some applications. In fact, the problem of learning a graph with given topology properties, e.g., finding the $k$-partite graph that best matches the data, is in general non-convex. In this paper, we develop novel theoretical results that provide performance guarantees for an approach to solve these problems. Our solution decomposes the problem into two sub-problems, for which efficient solutions are known. Specifically, a graph topology inference (GTI) step is employed to select a feasible graph topology, i.e., one having the desired property. Then, a graph weight estimation (GWE) step is performed by solving a generalized graph Laplacian estimation problem, where the edges are constrained to the topology found in the GTI step. Our main result is a bound on the error of the GWE step as a function of the error in the GTI step. This error bound indicates that the GTI step should be solved by an algorithm that approximates the similarity matrix with a matrix whose entries have been thresholded to zero so that it has the desired type of graph topology. The GTI stage can leverage existing methods (e.g., state-of-the-art approaches for graph coloring), which are typically based on minimizing the total weight of removed edges. Since the GWE stage is formulated as an inverse covariance estimation problem with linear constraints, it can be solved with existing convex optimization methods. We demonstrate that our two-step approach achieves good results on both synthetic and texture image data.
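To make the two-step structure concrete, here is a minimal sketch, not the authors' implementation: it uses simple thresholding of the empirical covariance as a stand-in for the GTI step (the paper instead suggests topology-aware methods such as graph coloring for $k$-partite structure), and solves a generic generalized graph Laplacian estimation problem with cvxpy for the GWE step. The function names, the threshold value, and the toy data are assumptions made for illustration.

```python
# Minimal sketch of a two-step GTI + GWE pipeline; illustrative only,
# not the algorithm from the paper.
import numpy as np
import cvxpy as cp

def gti_threshold(S, tau):
    """Stand-in GTI step: keep edge (i, j) whenever |S_ij| exceeds tau."""
    A = (np.abs(S) > tau).astype(int)
    np.fill_diagonal(A, 0)
    return A  # adjacency pattern; feasibility of the topology is assumed

def gwe_ggl(S, A):
    """GWE step: generalized graph Laplacian estimation restricted to support A.

    Solves  min_Theta  -log det(Theta) + trace(S @ Theta)
    s.t.    Theta is PSD, off-diagonal entries are non-positive,
            and Theta_ij = 0 wherever A_ij = 0 (i != j).
    """
    n = S.shape[0]
    Theta = cp.Variable((n, n), PSD=True)
    constraints = []
    for i in range(n):
        for j in range(i + 1, n):
            if A[i, j] == 0:
                constraints += [Theta[i, j] == 0]
            else:
                constraints += [Theta[i, j] <= 0]
    objective = cp.Minimize(-cp.log_det(Theta) + cp.trace(S @ Theta))
    cp.Problem(objective, constraints).solve()
    return Theta.value

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.standard_normal((200, 8))            # toy data: 200 samples, 8 nodes
    S = np.cov(X, rowvar=False) + 1e-3 * np.eye(8)
    A = gti_threshold(S, tau=0.1)                # step 1: pick a topology
    L = gwe_ggl(S, A)                            # step 2: estimate edge weights
    print(np.round(L, 2))
```

Since the GWE stage is a convex problem with linear (support and sign) constraints, any generic convex solver works; the paper's contribution is the bound relating the GWE error to the quality of the topology chosen in step 1.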


Read also

Andrei Gagarin (2008)
We adapt the classical 3-decomposition of any 2-connected graph to the case of simple graphs (no loops or multiple edges). By analogy with the block-cutpoint tree of a connected graph, we deduce from this decomposition a bicolored tree $tc(g)$ associated with any 2-connected graph $g$, whose white vertices are the 3-components of $g$ (3-connected components or polygons) and whose black vertices are bonds linking together these 3-components, arising from separating pairs of vertices of $g$. Two fundamental relationships on graphs and networks follow from this construction. The first one is a dissymmetry theorem which leads to the expression of the class $B = B(F)$ of 2-connected graphs, all of whose 3-connected components belong to a given class $F$ of 3-connected graphs, in terms of various rootings of $B$. The second one is a functional equation which characterizes the corresponding class $R = R(F)$ of two-pole networks all of whose 3-connected components are in $F$. All the rootings of $B$ are then expressed in terms of $F$ and $R$. There follow corresponding identities for all the associated series, in particular the edge index series. Numerous enumerative consequences are discussed.
In an $\mathsf{L}$-embedding of a graph, each vertex is represented by an $\mathsf{L}$-segment, and two segments intersect each other if and only if the corresponding vertices are adjacent in the graph. If the corner of each $\mathsf{L}$-segment in an $\mathsf{L}$-embedding lies on a straight line, we call it a monotone $\mathsf{L}$-embedding. In this paper we give a full characterization of monotone $\mathsf{L}$-embeddings by introducing a new class of graphs which we call non-jumping graphs. We show that a graph admits a monotone $\mathsf{L}$-embedding if and only if the graph is a non-jumping graph. Further, we show that outerplanar graphs, convex bipartite graphs, interval graphs, 3-leaf power graphs, and complete graphs are subclasses of non-jumping graphs. Finally, we show that distance-hereditary graphs and $k$-leaf power graphs ($k \le 4$) admit $\mathsf{L}$-embeddings.
We study amenability of definable groups and topological groups, and prove various results, briefly described below. Among our main technical tools, of interest in its own right, is an elaboration on and strengthening of the Massicot-Wagner version of the stabilizer theorem, and also some results about measures and measure-like functions (which we call means and pre-means). As an application we show that if $G$ is an amenable topological group, then the Bohr compactification of $G$ coincides with a certain ``weak Bohr compactification'' introduced in [24]. In other words, the conclusion says that certain connected components of $G$ coincide: $G^{00}_{topo} = G^{000}_{topo}$. We also prove wide generalizations of this result, implying in particular its extension to a ``definable-topological'' context, confirming the main conjectures from [24]. We also introduce $\bigvee$-definable group topologies on a given $\emptyset$-definable group $G$ (including group topologies induced by type-definable subgroups as well as uniformly definable group topologies), and prove that the existence of a mean on the lattice of closed, type-definable subsets of $G$ implies (under some assumption) that $\mathrm{cl}(G^{00}_M) = \mathrm{cl}(G^{000}_M)$ for any model $M$. Thirdly, we give an example of a $\emptyset$-definable approximate subgroup $X$ in a saturated extension of the group $\mathbb{F}_2 \times \mathbb{Z}$ in a suitable language (where $\mathbb{F}_2$ is the free group in 2 generators) for which the $\bigvee$-definable group $H := \langle X \rangle$ contains no type-definable subgroup of bounded index. This refutes a conjecture by Wagner and shows that the Massicot-Wagner approach to prove that a locally compact (and in consequence also Lie) ``model'' exists for each approximate subgroup does not work in general (they proved in [29] that it works for definably amenable approximate subgroups).
In this paper we model the problem of learning the preferences of a population as an active learning problem. We propose an algorithm that can adaptively choose pairs of items to show to users coming from a heterogeneous population, and use the obtained reward to decide which pair of items to show next. We provide computationally efficient algorithms with provable sample complexity guarantees for this problem in both the noiseless and noisy cases. In the process of establishing sample complexity guarantees for our algorithms, we establish new results using a Nyström-like method which can be of independent interest. We supplement our theoretical results with experimental comparisons.
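The abstract mentions a Nyström-like method in its analysis; as background for that term only, the sketch below shows the standard Nyström low-rank approximation of a kernel matrix, K ≈ C W⁺ Cᵀ built from a sampled subset of columns, and makes no claim about the paper's specific construction. The RBF kernel, its bandwidth, and the landmark count m are assumptions for the demo.

```python
# Standard Nystrom low-rank kernel approximation; background illustration only,
# not the method from the paper.
import numpy as np

def rbf_kernel(X, Y, gamma=0.5):
    """Gaussian (RBF) kernel matrix between the rows of X and Y."""
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def nystrom(X, m, gamma=0.5, seed=0):
    """Approximate K = k(X, X) as C @ pinv(W) @ C.T using m sampled landmarks."""
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(X), size=m, replace=False)
    C = rbf_kernel(X, X[idx], gamma)   # n x m block of kernel evaluations
    W = C[idx, :]                      # m x m block among the landmarks
    return C @ np.linalg.pinv(W) @ C.T

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    X = rng.standard_normal((300, 5))
    K = rbf_kernel(X, X)
    K_hat = nystrom(X, m=50)
    print("relative Frobenius error:", np.linalg.norm(K - K_hat) / np.linalg.norm(K))
```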
Reinforcement learning (RL) typically defines a discount factor as part of the Markov Decision Process. The discount factor values future rewards by an exponential scheme that leads to theoretical convergence guarantees of the Bellman equation. Howev er, evidence from psychology, economics and neuroscience suggests that humans and animals instead have hyperbolic time-preferences. In this work we revisit the fundamentals of discounting in RL and bridge this disconnect by implementing an RL agent that acts via hyperbolic discounting. We demonstrate that a simple approach approximates hyperbolic discount functions while still using familiar temporal-difference learning techniques in RL. Additionally, and independent of hyperbolic discounting, we make a surprising discovery that simultaneously learning value functions over multiple time-horizons is an effective auxiliary task which often improves over a strong value-based RL agent, Rainbow.
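The link between hyperbolic and exponential discounting rests on the identity 1/(1 + k t) = ∫₀¹ γ^{k t} dγ, so averaging exponentially discounted quantities over many discount factors γ recovers a hyperbolic discount. The snippet below is only a numerical check of that identity, not the paper's agent; the grid size and the value of k are arbitrary choices for the demo.

```python
# Numerical check: a hyperbolic discount 1/(1 + k*t) equals the average of
# exponential discounts gamma**(k*t) over gamma in [0, 1]. Demo only.
import numpy as np

def hyperbolic(t, k=1.0):
    return 1.0 / (1.0 + k * t)

def exponential_mixture(t, k=1.0, n_gammas=10_000):
    """Midpoint-rule approximation of the integral of gamma**(k*t) over [0, 1]."""
    gammas = (np.arange(n_gammas) + 0.5) / n_gammas
    return np.mean(gammas[None, :] ** (k * np.asarray(t)[:, None]), axis=1)

if __name__ == "__main__":
    t = np.arange(0, 50, dtype=float)
    err = np.max(np.abs(hyperbolic(t) - exponential_mixture(t)))
    print("max absolute error:", err)  # small, confirming the identity
```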