We introduce a directed acyclic graph (DAG) structure and a mechanism design based on it that allow peers to reach proof-of-work (PoW) consensus at large scale. We also design a mempool transaction assignment method based on the DAG structure that renders negligible the probability of a transaction being processed by more than one miner. The result is a significant scale-up of capacity without sacrificing security or decentralization.
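As a rough illustration of how such an assignment could work (a hypothetical sketch, not the paper's actual mechanism; the function and toy mempool below are invented for exposition), deterministically hashing transaction IDs into miner buckets gives every node the same partition of the mempool:

```python
import hashlib

def assign_transaction(tx_id: str, num_miners: int) -> int:
    """Map a transaction to exactly one miner bucket by hashing its ID.

    Because the hash is deterministic, all nodes agree on the
    assignment, so no two miners pick up the same transaction.
    """
    digest = hashlib.sha256(tx_id.encode()).digest()
    return int.from_bytes(digest[:8], "big") % num_miners

# Partition a toy mempool among 4 miners.
mempool = ["tx-a1", "tx-b2", "tx-c3", "tx-d4"]
print({tx: assign_transaction(tx, 4) for tx in mempool})
```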
Multivariate time series (MTS) data, such as time course gene expression data in genomics, are often collected to study the dynamic nature of the underlying systems. These data provide important information about the causal dependency among a set of random variables. In this paper, we introduce a computationally efficient algorithm to learn directed acyclic graphs (DAGs) from MTS data, focusing on learning the local structure of a given target variable. Our algorithm iteratively learns all parents (P), all children (C), and some descendants (D) (PCD), utilizing the time order of the variables to orient the edges. This time series PCD-PCD algorithm (tsPCD-PCD) extends the previous PCD-PCD algorithm to dependent observations and employs composite likelihood ratio tests (CLRTs) for testing conditional independence. We present the asymptotic distribution of the CLRT statistic and show that tsPCD-PCD is guaranteed to recover the true DAG structure when the faithfulness condition holds and the tests correctly reject the null hypotheses. Simulation studies show that the CLRTs are valid and perform well even when sample sizes are small. In addition, the tsPCD-PCD algorithm outperforms the PCD-PCD algorithm in recovering local graph structures. We illustrate the algorithm by analyzing a time course gene expression dataset related to mouse T-cell activation.
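As a simplified illustration of the likelihood-ratio machinery (an ordinary Gaussian LRT for i.i.d. data, not the paper's composite LRT for dependent observations; all names below are hypothetical), a test of whether y is independent of x given z reduces to comparing nested regressions:

```python
import numpy as np
from scipy import stats

def gaussian_lrt_ci(y, x, z):
    """Test y independent of x given z via nested linear regressions.

    Under H0 (no x effect), 2*(loglik_full - loglik_null) =
    n * log(RSS_null / RSS_full) is asymptotically chi-square(1).
    """
    n = len(y)
    Z_null = np.column_stack([np.ones(n), z])
    Z_full = np.column_stack([np.ones(n), x, z])
    rss_null = np.sum((y - Z_null @ np.linalg.lstsq(Z_null, y, rcond=None)[0]) ** 2)
    rss_full = np.sum((y - Z_full @ np.linalg.lstsq(Z_full, y, rcond=None)[0]) ** 2)
    stat = n * np.log(rss_null / rss_full)
    return stat, stats.chi2.sf(stat, df=1)

# Toy check: x and y are conditionally independent given z,
# so the test should typically not reject H0.
rng = np.random.default_rng(0)
z = rng.normal(size=500)
x, y = z + rng.normal(size=500), z + rng.normal(size=500)
print(gaussian_lrt_ci(y, x, z))
```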
The Minimum Path Cover problem on directed acyclic graphs (DAGs) is a classical problem that provides a clear and simple mathematical formulation for several applications in different areas and that admits an efficient algorithmic solution. In this paper, we study the computational complexity of two constrained variants of Minimum Path Cover motivated by the recent introduction of next-generation sequencing technologies in bioinformatics. The first problem (MinPCRP), given a DAG and a set of pairs of vertices, asks for a minimum-cardinality set of paths covering all the vertices such that both vertices of each pair belong to the same path. For this problem, we show that, while it is NP-hard to decide whether a solution consisting of at most three paths exists, it is possible to decide in polynomial time whether a solution consisting of at most two paths exists. The second problem (MaxRPSP), given a DAG and a set of pairs of vertices, asks for a single path containing the maximum number of the given pairs of vertices. We show its NP-hardness and also its W[1]-hardness when parameterized by the number of covered pairs. On the positive side, we give a fixed-parameter algorithm when the parameter is the maximum overlapping degree, a natural parameter in the bioinformatics applications of the problem.
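For context, the classical (unconstrained) Minimum Path Cover that both variants build on is solvable in polynomial time via the standard reduction to maximum bipartite matching; a minimal sketch (vertex-disjoint paths, simple augmenting-path matching):

```python
def min_path_cover(n, edges):
    """Classical Minimum Path Cover on a DAG with n vertices.

    Split each vertex v into (v_out, v_in) and find a maximum
    bipartite matching; the minimum number of vertex-disjoint paths
    covering all vertices is n - |matching|.
    """
    adj = [[] for _ in range(n)]
    for u, v in edges:
        adj[u].append(v)
    match_to = [-1] * n  # match_to[v] = u means edge u->v is matched

    def try_augment(u, seen):
        for v in adj[u]:
            if not seen[v]:
                seen[v] = True
                if match_to[v] == -1 or try_augment(match_to[v], seen):
                    match_to[v] = u
                    return True
        return False

    matching = sum(try_augment(u, [False] * n) for u in range(n))
    return n - matching

# Example: the diamond DAG 0->1, 0->2, 1->3, 2->3 needs 2 paths.
print(min_path_cover(4, [(0, 1), (0, 2), (1, 3), (2, 3)]))
```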
Motivated by the problem of designing inference-friendly Bayesian nonparametric models in probabilistic programming languages, we introduce a general class of partially exchangeable random arrays which generalizes the notion of hierarchical exchangeability introduced in Austin and Panchenko (2014). We say that our partially exchangeable arrays are DAG-exchangeable since their partially exchangeable structure is governed by a collection of directed acyclic graphs (DAGs). More specifically, such a random array is indexed by $\mathbb{N}^{|V|}$ for some DAG $G=(V,E)$, and its exchangeability structure is governed by the edge set $E$. We prove a representation theorem for such arrays which generalizes the Aldous-Hoover and Austin-Panchenko representation theorems.
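For orientation, the Aldous-Hoover theorem (a special case being generalized) represents any jointly exchangeable array $(X_{ij})$ as a measurable function of independent uniform random variables, with equality in distribution as arrays:

$$X_{ij} \overset{d}{=} f\bigl(U_\emptyset,\; U_{\{i\}},\; U_{\{j\}},\; U_{\{i,j\}}\bigr), \qquad U_\emptyset,\ (U_{\{i\}})_{i \in \mathbb{N}},\ (U_{\{i,j\}})_{i<j} \ \text{i.i.d. } \mathrm{Uniform}[0,1];$$

the representation theorem proved here extends this template to the DAG-exchangeable setting.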
We develop a Bregman proximal gradient method for structure learning on linear structural causal models. While the problem is non-convex, has high curvature, and is in fact NP-hard, Bregman gradient methods allow us to neutralize at least part of the impact of curvature by measuring smoothness against a highly nonlinear kernel. This allows the method to take longer steps and significantly improves convergence. Each iteration requires solving a Bregman proximal step, which is convex and efficiently solvable for our particular choice of kernel. We test our method on various synthetic and real data sets.
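As a generic illustration (a minimal sketch with a quartic kernel and no nonsmooth term; the kernel, step size, and toy objective are assumptions, not the paper's choices), each Bregman proximal gradient update solves $\nabla h(x_{k+1}) = \nabla h(x_k) - \lambda \nabla f(x_k)$:

```python
import numpy as np

def grad_h(x):
    # Kernel h(x) = ||x||^4/4 + ||x||^2/2, so grad h(x) = (||x||^2 + 1) x.
    return (x @ x + 1.0) * x

def grad_h_inv(p):
    # Solve (||x||^2 + 1) x = p: writing x = t * p/||p||,
    # t solves the monotone cubic t^3 + t = ||p||.
    norm_p = np.linalg.norm(p)
    if norm_p == 0.0:
        return np.zeros_like(p)
    t = norm_p
    for _ in range(50):  # Newton's method on the cubic
        t -= (t**3 + t - norm_p) / (3 * t**2 + 1)
    return (t / norm_p) * p

def bregman_pg(grad_f, x0, step, iters=200):
    """Bregman proximal gradient: grad h(x_next) = grad h(x) - step * grad f(x)."""
    x = x0
    for _ in range(iters):
        x = grad_h_inv(grad_h(x) - step * grad_f(x))
    return x

# Toy objective f(x) = ||x - 1||^2 / 2; minimizer is the all-ones vector.
print(bregman_pg(lambda x: x - np.ones(2), np.zeros(2), step=0.5))
```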
We show how every stock-flow consistent model of the macroeconomy can be represented as a directed acyclic graph. The advantages of representing the model in this way include graphical clarity, causal inference, and model specification. We provide many examples implemented with a new software package.
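As a hypothetical sketch of the idea (the variable names and edges are invented, and this is not the paper's software package), each model variable becomes a node, each equation contributes edges from its right-hand-side variables, and a topological sort yields a valid evaluation order:

```python
import networkx as nx

# Hypothetical stock-flow fragment: an edge u -> v means "u appears on
# the right-hand side of the equation determining v". Lagged variables
# (e.g. next-period output) keep the graph acyclic.
model = nx.DiGraph([
    ("government_spending", "output"),
    ("output", "income"),          # disposable income from output
    ("income", "consumption"),     # consumption function C = f(YD)
    ("consumption", "output_next"),
])
assert nx.is_directed_acyclic_graph(model)
print(list(nx.topological_sort(model)))  # order in which to evaluate equations
```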