
A common variable minimax theorem for graphs

Published by Nicholas Marshall
Publication date: 2021
Research field: Informatics engineering
Paper language: English





Let $\mathcal{G} = \{G_1 = (V, E_1), \dots, G_m = (V, E_m)\}$ be a collection of $m$ graphs defined on a common set of vertices $V$ but with different edge sets $E_1, \dots, E_m$. Informally, a function $f : V \rightarrow \mathbb{R}$ is smooth with respect to $G_k = (V, E_k)$ if $f(u) \sim f(v)$ whenever $(u, v) \in E_k$. We study the problem of understanding whether there exists a nonconstant function that is smooth with respect to all graphs in $\mathcal{G}$ simultaneously, and how to find it if it exists.
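The smoothness condition can be quantified with the Laplacian quadratic form $f^T L_k f = \sum_{(u,v) \in E_k} (f(u) - f(v))^2$, which is small exactly when $f$ is smooth with respect to $G_k$. As a minimal sketch (a simple heuristic, not the paper's minimax method), one can look for a nonconstant candidate by taking the Fiedler vector of the summed Laplacian, which makes the total quadratic form small across all graphs at once:

```python
import numpy as np

def laplacian(n, edges):
    """Combinatorial graph Laplacian L = D - A for an edge list on n vertices."""
    L = np.zeros((n, n))
    for u, v in edges:
        L[u, u] += 1; L[v, v] += 1
        L[u, v] -= 1; L[v, u] -= 1
    return L

# Two hypothetical graphs on the same vertex set V = {0, 1, 2, 3}.
E1 = [(0, 1), (1, 2), (2, 3)]   # a path
E2 = [(0, 2), (1, 3)]           # a perfect matching
L1, L2 = laplacian(4, E1), laplacian(4, E2)

# Heuristic candidate: the eigenvector of the second-smallest eigenvalue
# (Fiedler vector) of L1 + L2; it is nonconstant, orthogonal to constants,
# and minimizes the *sum* f^T L1 f + f^T L2 f over unit vectors.
w, U = np.linalg.eigh(L1 + L2)
f = U[:, 1]

smooth1 = f @ L1 @ f            # = sum over E1 of (f(u) - f(v))^2
smooth2 = f @ L2 @ f            # = sum over E2 of (f(u) - f(v))^2
```

Minimizing the sum is only a proxy: the question posed in the abstract concerns simultaneous smoothness, which is closer to controlling the maximum of the per-graph quadratic forms rather than their total.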


Read also

We show that the spectral flow of a one-parameter family of Schrödinger operators on a metric graph is equal to the Maslov index of a path of Lagrangian subspaces describing the vertex conditions. In addition, we derive a Hadamard-type formula for the derivatives of the eigenvalue curves via the Maslov crossing form.
Alexey M. Kulik, 2007
A limit theorem is proved for a sequence of diffusion processes on graphs in the case where both the parameters of the processes (the drift and diffusion coefficients on every edge and the asymmetry coefficients at every vertex) and the configuration of the graphs on which the processes are defined vary. Explicit formulae for the asymmetry parameters at the vertices of the limiting graph are given in the case when, in the pre-limiting graphs, some groups of vertices form knots contracting into points.
The celebrated minimax principle of Yao (1977) says that for any Boolean-valued function $f$ with finite domain, there is a distribution $\mu$ over the domain of $f$ such that computing $f$ to error $\epsilon$ against inputs from $\mu$ is just as hard as computing $f$ to error $\epsilon$ on worst-case inputs. Notably, however, the distribution $\mu$ depends on the target error level $\epsilon$: the hard distribution which is tight for bounded error might be trivial to solve to small bias, and the hard distribution which is tight for a small bias level might be far from tight for bounded error levels. In this work, we introduce a new type of minimax theorem which can provide a hard distribution $\mu$ that works for all bias levels at once. We show that this works for randomized query complexity, randomized communication complexity, some randomized circuit models, quantum query and communication complexities, approximate polynomial degree, and approximate logrank. We also prove an improved version of Impagliazzo's hardcore lemma. Our proofs rely on two innovations over the classical approach of using von Neumann's minimax theorem or linear programming duality. First, we use Sion's minimax theorem to prove a minimax theorem for ratios of bilinear functions representing the cost and score of algorithms. Second, we introduce a new way to analyze low-bias randomized algorithms by viewing them as forecasting algorithms evaluated by a proper scoring rule. The expected score of the forecasting version of a randomized algorithm appears to be a more fine-grained way of analyzing the bias of the algorithm. We show that such expected scores have many elegant mathematical properties: for example, they can be amplified linearly instead of quadratically. We anticipate forecasting algorithms will find use in future work in which a fine-grained analysis of small-bias algorithms is required.
Given a graph with a designated set of boundary vertices, we define a new notion of a Neumann Laplace operator on a graph using a reflection principle. We show that the first eigenvalue of this Neumann graph Laplacian satisfies a Cheeger inequality.
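One natural reading of the reflection principle (an assumption here; the paper's precise construction may differ) is graph doubling: glue two copies of the graph along its boundary vertices, so that reflection-symmetric eigenfunctions of the doubled graph's Laplacian play the role of Neumann eigenfunctions on the original graph. A minimal sketch on a path whose two endpoints form the boundary, where the doubled graph is a cycle:

```python
import numpy as np

def laplacian(n, edges):
    """Combinatorial Laplacian L = D - A for an edge list on n vertices."""
    L = np.zeros((n, n))
    for u, v in edges:
        L[u, u] += 1; L[v, v] += 1
        L[u, v] -= 1; L[v, u] -= 1
    return L

# Path on vertices 0..4 with boundary B = {0, 4}; interior vertices 1, 2, 3
# get mirror copies 5, 6, 7, so the doubled graph is an 8-cycle.
path = [(i, i + 1) for i in range(4)]
mirror = {1: 5, 2: 6, 3: 7}
doubled = path + [(mirror.get(u, u), mirror.get(v, v)) for u, v in path]
L = laplacian(8, doubled)

# Restrict L to reflection-symmetric functions f(v) = f(mirror(v)):
# parameterize them by g on the original 5 vertices via the lifting map P.
P = np.zeros((8, 5))
for v in range(5):
    P[v, v] = 1.0
for v, mv in mirror.items():
    P[mv, v] = 1.0

K = P.T @ L @ P                       # restricted quadratic form
w = np.diag(P.T @ P)                  # vertex weights: 2 interior, 1 boundary
Dinv = np.diag(1.0 / np.sqrt(w))
neumann_evals = np.linalg.eigvalsh(Dinv @ K @ Dinv)
```

For this example the symmetric spectrum of the 8-cycle is $\{0,\ 2-\sqrt{2},\ 2,\ 2+\sqrt{2},\ 4\}$, with the constant function as the eigenfunction of the zero eigenvalue, so the first nontrivial "Neumann" eigenvalue is $2 - \sqrt{2}$.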
Bahman Kalantari, 2019
Given $n \times n$ real symmetric matrices $A_1, \dots, A_m$, the following {\it spectral minimax} property holds: $$\min_{X \in \mathbf{\Delta}_n} \max_{y \in S_m} \sum_{i=1}^m y_i A_i \bullet X = \max_{y \in S_m} \min_{X \in \mathbf{\Delta}_n} \sum_{i=1}^m y_i A_i \bullet X,$$ where $S_m$ is the simplex and $\mathbf{\Delta}_n$ the spectraplex. For diagonal $A_i$'s this reduces to the classic minimax.
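In the diagonal case, $A_i \bullet X$ depends only on $x = \operatorname{diag}(X)$, which ranges over the simplex as $X$ ranges over the spectraplex, so the identity becomes the classical matrix-game minimax $\min_x \max_y y^T M x = \max_y \min_x y^T M x$ with $M_{ij} = (A_i)_{jj}$. A hedged numerical check of that reduction (hypothetical diagonal matrices; the standard LP formulation of a matrix game, not the spectraplex computation):

```python
import numpy as np
from scipy.optimize import linprog

# Row i of M holds diag(A_i) for three hypothetical diagonal 3x3 matrices,
# so y^T M x = sum_i y_i (A_i . X) when X = diag(x).
M = np.array([[1.0, 2.0, 0.5],
              [0.0, 3.0, 1.0],
              [2.0, 0.5, 1.5]])
m, n = M.shape

# min over x in the simplex of max_i (M x)_i:
# minimize v subject to  M x <= v * 1,  sum(x) = 1,  x >= 0.
res1 = linprog(c=np.r_[np.zeros(n), 1.0],
               A_ub=np.c_[M, -np.ones(m)], b_ub=np.zeros(m),
               A_eq=[np.r_[np.ones(n), 0.0]], b_eq=[1.0],
               bounds=[(0, None)] * n + [(None, None)])

# max over y in the simplex of min_j (M^T y)_j:
# maximize w subject to  M^T y >= w * 1,  sum(y) = 1,  y >= 0.
res2 = linprog(c=np.r_[np.zeros(m), -1.0],
               A_ub=np.c_[-M.T, np.ones(n)], b_ub=np.zeros(n),
               A_eq=[np.r_[np.ones(m), 0.0]], b_eq=[1.0],
               bounds=[(0, None)] * m + [(None, None)])

game_value_minmax = res1.fun      # min_x max_y
game_value_maxmin = -res2.fun     # max_y min_x; equal by LP duality
```

The two LP values coinciding is exactly von Neumann's minimax for the matrix game; the abstract's claim is that the same equality survives when the simplex on the $X$ side is enlarged to the spectraplex.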
