
Speeding Up Graph Algorithms via Switching Classes

Posted by Nathan Lindzey
Publication date: 2014
Research field: Informatics Engineering
Paper language: English
Author: Nathan Lindzey





Given a graph $G$, a vertex switch of $v \in V(G)$ results in a new graph where neighbors of $v$ become nonneighbors and vice versa. This operation gives rise to an equivalence relation over the set of labeled digraphs on $n$ vertices. The equivalence class of $G$ with respect to the switching operation is commonly referred to as $G$'s switching class. The algebraic and combinatorial properties of switching classes have been studied in depth; however, they have not been studied as thoroughly from an algorithmic point of view. The intent of this work is to further investigate the algorithmic properties of switching classes. In particular, we show that switching classes can be used to asymptotically speed up several super-linear unweighted graph algorithms. The current techniques for speeding up graph algorithms are all somewhat involved, insofar as they employ sophisticated pre-processing, data structures, or word tricks on the RAM model to achieve at most an $O(\log n)$ speedup for sufficiently dense graphs. Our methods are much simpler and can result in super-polylogarithmic speedups. In particular, we achieve better bounds for diameter, transitive closure, bipartite maximum matching, and general maximum matching.
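As a quick illustration of the switching operation described in this abstract, here is a minimal Python sketch over an adjacency-set representation; the function name and representation are illustrative choices, not taken from the paper.

def switch(adj, v):
    """Return a new adjacency map in which the neighborhood of v is
    complemented: neighbors of v become non-neighbors and vice versa."""
    new_adj = {u: set(nbrs) for u, nbrs in adj.items()}
    for u in set(adj) - {v}:
        if u in new_adj[v]:        # u was a neighbor of v: delete the edge
            new_adj[v].discard(u)
            new_adj[u].discard(v)
        else:                      # u was a non-neighbor of v: add the edge
            new_adj[v].add(u)
            new_adj[u].add(v)
    return new_adj

# Example: the path 0-1-2 switched at vertex 1 loses both of its edges,
# since every other vertex was a neighbor of 1 and none gets added back.
g = {0: {1}, 1: {0, 2}, 2: {1}}
print(switch(g, 1))   # {0: set(), 1: set(), 2: set()}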


Read also

Jan van den Brand, 2020
Many algorithms use data structures that maintain properties of matrices undergoing some changes. The applications are wide-ranging and include, for example, matchings, shortest paths, linear programming, semi-definite programming, convex hull and volume computation. Given the wide range of applications, the exact property these data structures must maintain varies from one application to another, forcing algorithm designers to invent them from scratch or modify existing ones. Thus it is not surprising that these data structures and their proofs are usually tailor-made for their specific application and that maintaining more complicated properties results in more complicated proofs. In this paper we present a unifying framework that captures a wide range of these data structures. The simplicity of this framework allows us to give short proofs for many existing data structures regardless of how complicated the property to be maintained is. We also show how the framework can be used to speed up existing iterative algorithms, such as the simplex algorithm. More formally, consider any rational function $f(A_1,...,A_d)$ with input matrices $A_1,...,A_d$. We show that the task of maintaining $f(A_1,...,A_d)$ under updates to $A_1,...,A_d$ can be reduced to the much simpler problem of maintaining some matrix inverse $M^{-1}$ under updates to $M$. The latter is a well-studied problem called dynamic matrix inverse. By applying our reduction and using known algorithms for dynamic matrix inverse we can obtain fast data structures and iterative algorithms for much more general problems.
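As a side note, the dynamic matrix inverse primitive that this abstract reduces to can be illustrated with the textbook Sherman-Morrison identity for a rank-1 update; the sketch below is that standard formula in NumPy, not the paper's data structure, and all names are ours.

import numpy as np

def rank1_update_inverse(M_inv, u, v):
    """Given M_inv = M^{-1}, return (M + u v^T)^{-1} in O(n^2) time,
    assuming the update is non-degenerate (1 + v^T M^{-1} u != 0)."""
    Mu = M_inv @ u                       # M^{-1} u
    vM = v @ M_inv                       # v^T M^{-1}
    return M_inv - np.outer(Mu, vM) / (1.0 + v @ Mu)

# Usage: maintain the inverse while one entry of M changes.
n = 4
M = np.eye(n) + 0.1 * np.random.rand(n, n)   # strictly diagonally dominant
M_inv = np.linalg.inv(M)
u = np.zeros(n); u[2] = 1.0                  # add 0.5 to entry (2, 0)
v = np.zeros(n); v[0] = 0.5
M_inv = rank1_update_inverse(M_inv, u, v)
assert np.allclose(M_inv, np.linalg.inv(M + np.outer(u, v)))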
There are two distinct approaches to speeding up large parallel computers. The older method uses General Purpose Graphics Processing Units (GPGPU). The newer is the Many Integrated Core (MIC) technology. Here we attempt to focus on the MIC technology and point out differences between the two approaches to accelerating supercomputers. This is a user perspective.
We study the application of a counter-diabatic driving (CD) technique to enhance the thermodynamic efficiency and power of a quantum Otto refrigerator based on a superconducting qubit coupled to two resonant circuits. Although the CD technique is originally designed to counteract non-adiabatic coherent excitations in isolated systems, we find that it also works effectively in the open system dynamics, mitigating the coherence-induced losses of efficiency and power. We compare the CD dynamics with its classical counterpart, and find a deviation that arises because the CD is designed to follow the energy eigenbasis of the original Hamiltonian, but the heat baths thermalize the system in a different basis. We also discuss possible experimental realizations of our model.
This paper studies the problem of error-runtime trade-off, typically encountered in decentralized training based on stochastic gradient descent (SGD) using a given network. While a denser (sparser) network topology results in faster (slower) error convergence in terms of iterations, it incurs more (less) communication time/delay per iteration. In this paper, we propose MATCHA, an algorithm that can achieve a win-win in this error-runtime trade-off for any arbitrary network topology. The main idea of MATCHA is to parallelize inter-node communication by decomposing the topology into matchings. To preserve fast error convergence speed, it identifies and communicates more frequently over critical links, and saves communication time by using other links less frequently. Experiments on a suite of datasets and deep neural networks validate the theoretical analyses and demonstrate that MATCHA takes up to $5\times$ less time than vanilla decentralized SGD to reach the same training loss.
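To make the matching-decomposition idea mentioned in this abstract concrete, here is a toy Python sketch that greedily partitions a topology's edge set into matchings, so the links inside one matching can be activated in parallel; the greedy rule and names are illustrative and do not reproduce the MATCHA algorithm.

def matching_decomposition(edges):
    """Greedily split an edge list into matchings (no shared endpoints)."""
    matchings, remaining = [], list(edges)
    while remaining:
        used, matching, leftover = set(), [], []
        for a, b in remaining:
            if a not in used and b not in used:
                matching.append((a, b))
                used.update((a, b))
            else:
                leftover.append((a, b))
        matchings.append(matching)
        remaining = leftover
    return matchings

# A 5-node ring is an odd cycle, so it decomposes into three matchings.
ring = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]
for i, m in enumerate(matching_decomposition(ring)):
    print(f"round {i}: activate links {m}")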
We show that it is possible to use Bondy-Chvatal closure to design an FPT algorithm that decides whether or not it is possible to cover vertices of an input graph by at most k vertex disjoint paths in the complement of the input graph. More precisely, we show that if a graph has tree-width at most w and its complement is closed under Bondy-Chvatal closure, then it is possible to bound neighborhood diversity of the complement by a function of w only. A simpler proof where tree-depth is used instead of tree-width is also presented.
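For readers unfamiliar with the closure operation named in this abstract, the following Python sketch computes the classical Bondy-Chvatal closure with the Hamiltonian-cycle threshold deg(u) + deg(v) >= n; it only illustrates the operation, not the FPT algorithm, and the threshold used in the paper's path-cover setting may differ.

def bondy_chvatal_closure(n, edges):
    """Return the edge set of the closure of a graph on vertices 0..n-1:
    repeatedly join non-adjacent u, v whose degree sum is at least n."""
    adj = {v: set() for v in range(n)}
    for a, b in edges:
        adj[a].add(b)
        adj[b].add(a)
    changed = True
    while changed:
        changed = False
        for u in range(n):
            for v in range(u + 1, n):
                if v not in adj[u] and len(adj[u]) + len(adj[v]) >= n:
                    adj[u].add(v)
                    adj[v].add(u)
                    changed = True
    return {(u, v) for u in range(n) for v in adj[u] if u < v}

# Example: the 4-cycle closes to the complete graph K4 (all degree sums are 4).
c4 = [(0, 1), (1, 2), (2, 3), (3, 0)]
print(sorted(bondy_chvatal_closure(4, c4)))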
