
Money Transfer Made Simple: a Specification, a Generic Algorithm, and its Proof

Added by Francois Taiani
Publication date: 2020
Language: English





It has recently been shown that, contrary to a common belief, money transfer in the presence of faulty (Byzantine) processes does not require strong agreement such as consensus. This article goes one step further: it first proposes a non-sequential specification of the money-transfer object, and then presents a generic algorithm that implements it, based on a simple FIFO order between each pair of processes. The genericity lies in the underlying reliable broadcast abstraction, which must be suited to the considered failure model. Interestingly, whatever the failure model, the money-transfer algorithm requires adding only a single sequence number to its messages as control information. Moreover, as a side effect of the proposed algorithm, it follows that money transfer is a weaker problem than the construction of a safe/regular/atomic read/write register in the asynchronous message-passing crash-prone model.
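To make the scheme concrete, here is a minimal sketch (not the paper's pseudocode; names such as fifo_broadcast are hypothetical placeholders) of the per-process state machine the abstract describes: transfers are disseminated by a FIFO reliable broadcast suited to the failure model, carry a single sequence number as control information, and are validated against the debtor's balance when delivered.

```python
# Minimal sketch of a money-transfer process, assuming an external
# FIFO reliable broadcast abstraction (passed in as `fifo_broadcast`).

class MoneyTransferProcess:
    def __init__(self, pid, initial_balances):
        self.pid = pid
        self.balance = dict(initial_balances)  # local view of every account
        self.seq = 0                           # single sequence number (control info)

    def transfer(self, dst, amount, fifo_broadcast):
        """Issue a transfer from this process's own account."""
        if self.balance[self.pid] < amount:
            return False                       # locally refused: insufficient funds
        self.seq += 1
        # FIFO reliable broadcast of (source, destination, amount, sequence number).
        fifo_broadcast((self.pid, dst, amount, self.seq))
        return True

    def on_deliver(self, msg):
        """Called when the FIFO reliable broadcast delivers a transfer message."""
        src, dst, amount, _seq = msg
        # Every process replays the same validity check: apply only solvent transfers.
        if self.balance[src] >= amount:
            self.balance[src] -= amount
            self.balance[dst] += amount
```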

Related Research

Many blockchain consensus protocols have been proposed recently to scale the throughput of a blockchain with available bandwidth. However, these protocols are becoming increasingly complex, making it more and more difficult to produce proofs of their security guarantees. We propose a novel permissionless blockchain protocol, OHIE, which explicitly aims for simplicity. OHIE composes as many parallel instances of Bitcoin's original (and simple) backbone protocol as needed to achieve excellent throughput. We formally prove the safety and liveness properties of OHIE. We demonstrate its performance with a prototype implementation and large-scale experiments with up to 50,000 nodes. In our experiments, OHIE achieves linear scaling with available bandwidth, providing about 4-10 Mbps transaction throughput (under 8-20 Mbps per-node available bandwidth configurations) and at least about 20x better decentralization than prior works.
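As a rough illustration of the composition idea, the sketch below routes each block to one of k parallel Bitcoin-style chains based on its hash. This is a simplification for illustration only (the value of K and the modulo routing rule are assumptions here); see the paper for OHIE's actual rules, including how the parallel chains are merged into a single ordered ledger.

```python
# Illustrative sketch: k parallel backbone instances, blocks routed by hash.
import hashlib

K = 16  # assumed number of parallel chains, chosen to match available bandwidth

def chain_index(block_bytes: bytes, k: int = K) -> int:
    """Map a block to one of k parallel chains by its hash."""
    digest = hashlib.sha256(block_bytes).digest()
    return int.from_bytes(digest, "big") % k

chains = [[] for _ in range(K)]  # one Bitcoin-style chain per instance

def append_block(block_bytes: bytes) -> None:
    """Append a block to the chain instance its hash selects."""
    chains[chain_index(block_bytes)].append(block_bytes)
```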
We present a local algorithm (constant-time distributed algorithm) for finding a 3-approximate vertex cover in bounded-degree graphs. The algorithm is deterministic, and no auxiliary information besides port numbering is required.
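For context, the sketch below shows the classical centralized connection between matchings and vertex covers: the endpoints of a maximal matching form a cover within a factor 2 of optimal. It is only an illustration of the problem, not the paper's deterministic constant-time distributed 3-approximation based on port numbering.

```python
# Centralized illustration: endpoints of a greedy maximal matching cover all edges.
def vertex_cover_via_matching(edges):
    """Greedy maximal matching; return the set of matched endpoints."""
    cover = set()
    for u, v in edges:
        if u not in cover and v not in cover:
            cover.update((u, v))  # take both endpoints of the matched edge
    return cover

# Example: on the path 1-2-3-4, the returned cover is {1, 2, 3, 4},
# twice the size of the optimal cover {2, 3}.
print(vertex_cover_via_matching([(1, 2), (2, 3), (3, 4)]))
```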
We present a recursive formulation of the Horn algorithm for deciding the satisfiability of propositional clauses. The usual presentations in imperative pseudo-code are informal and not suitable for simple proofs of its main properties. By defining the algorithm as a recursive function (computing a least fixed-point), we achieve: 1) a concise, yet rigorous, formalisation; 2) a clear form of visualising executions of the algorithm, step-by-step; 3) precise results, simple to state and with clean inductive proofs.
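To illustrate the least fixed-point view, here is a minimal recursive sketch of Horn satisfiability. The clause representation and function names are illustrative choices, not the paper's formalisation: each clause is a pair (body, head), where body is a set of atoms and head is either an atom or None for a goal clause (body implies false).

```python
# Illustrative recursive least fixed-point computation for Horn satisfiability.
def least_model(clauses, facts=frozenset()):
    """Recursively close the set of derived atoms under the definite clauses."""
    new = {head for body, head in clauses
           if head is not None and body <= facts and head not in facts}
    if not new:
        return facts                      # fixed point reached
    return least_model(clauses, facts | new)

def horn_sat(clauses):
    """Satisfiable iff no goal clause has all its body atoms in the least model."""
    model = least_model(clauses)
    return all(not (head is None and body <= model) for body, head in clauses)

# Example: {p, p -> q, (p and q) -> false} is unsatisfiable.
example = [(frozenset(), "p"),
           (frozenset({"p"}), "q"),
           (frozenset({"p", "q"}), None)]
print(horn_sat(example))  # False
```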
Modern science and engineering computing environments often feature storage systems of different types, from parallel file systems in high-performance computing centers to object stores operated by cloud providers. To enable easy, reliable, secure, and performant data exchange among these different systems, we propose Connector, a pluggable data access architecture for diverse, distributed storage. By abstracting low-level storage system details, Connector permits a managed data transfer service (Globus in our case) to interact with a large and easily extended set of storage systems. Equally important, it supports third-party transfers: that is, direct data transfers from source to destination that are initiated by a third-party client but do not engage that third party in the data path. The abstraction also enables management of transfers for performance optimization, error handling, and end-to-end integrity. We present the Connector design, describe implementations for different storage services, evaluate tradeoffs inherent in managed vs. direct transfers, motivate recommended deployment options, and propose a performance model-based method that allows for easy characterization of performance in different contexts without exhaustive benchmarking.
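The kind of pluggable abstraction the abstract describes might look like the following sketch: a small connector interface that hides storage-system details so a transfer service can treat a POSIX file system and an object store uniformly. The class and method names here are hypothetical illustrations, not the actual Globus Connector API.

```python
# Hypothetical sketch of a pluggable storage-connector interface.
import os
from abc import ABC, abstractmethod
from typing import Iterable

class StorageConnector(ABC):
    """Minimal interface a managed transfer service could target."""

    @abstractmethod
    def listdir(self, path: str) -> Iterable[str]: ...

    @abstractmethod
    def read(self, path: str) -> bytes: ...

    @abstractmethod
    def write(self, path: str, data: bytes) -> None: ...

class PosixConnector(StorageConnector):
    """Connector backed by an ordinary (or parallel) POSIX file system."""
    def listdir(self, path):
        return os.listdir(path)
    def read(self, path):
        with open(path, "rb") as f:
            return f.read()
    def write(self, path, data):
        with open(path, "wb") as f:
            f.write(data)

def copy_object(src: StorageConnector, dst: StorageConnector, path: str) -> None:
    """Copy one object between two connectors via the common interface."""
    dst.write(path, src.read(path))
```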
We provide a simple proof of convergence covering both the Adam and Adagrad adaptive optimization algorithms when applied to smooth (possibly non-convex) objective functions with bounded gradients. We show that in expectation, the squared norm of the objective gradient averaged over the trajectory has an upper bound which is explicit in the constants of the problem, the parameters of the optimizer, and the total number of iterations $N$. This bound can be made arbitrarily small: Adam with a learning rate $\alpha=1/\sqrt{N}$ and a momentum parameter on squared gradients $\beta_2=1-1/N$ achieves the same rate of convergence $O(\ln(N)/\sqrt{N})$ as Adagrad. Finally, we obtain the tightest dependency on the heavy-ball momentum among all previous convergence bounds for non-convex Adam and Adagrad, improving from $O((1-\beta_1)^{-3})$ to $O((1-\beta_1)^{-1})$. Our technique also improves the best known dependency for standard SGD by a factor $1-\beta_1$.
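In display form, the claimed rate reads as follows; this is only a schematic restatement of the abstract's statement for Adam with the stated parameter choices, not the paper's precise theorem with its constants.

```latex
% Schematic restatement of the rate claimed in the abstract.
\[
  \mathbb{E}\!\left[ \frac{1}{N} \sum_{n=1}^{N} \big\| \nabla f(x_n) \big\|^2 \right]
  \;=\; O\!\left( \frac{\ln N}{\sqrt{N}} \right),
  \qquad \alpha = \frac{1}{\sqrt{N}}, \quad \beta_2 = 1 - \frac{1}{N}.
\]
```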