
Competitive Parallelism: Getting Your Priorities Right

Added by Stefan Muller
Publication date: 2018
Language: English





Multi-threaded programs have traditionally fallen into one of two domains: cooperative and competitive. These two domains have remained mostly disjoint, with cooperative threading used for increasing throughput in compute-intensive applications such as scientific workloads, and competitive threading used for increasing responsiveness in interactive applications such as GUIs and games. As multicore hardware becomes increasingly mainstream, there is a need to bridge these two disjoint worlds, because many applications mix interaction and computation and would benefit from both cooperative and competitive threading. In this paper, we present techniques for programming and reasoning about parallel interactive applications that can use both cooperative and competitive threading. Our techniques enable the programmer to write rich parallel interactive programs by creating and synchronizing with threads as needed, and by assigning threads user-defined and partially ordered priorities. To ensure important responsiveness properties, we present a modal type system analogous to S4 modal logic that precludes low-priority threads from delaying high-priority threads, thereby statically preventing a crucial set of priority-inversion bugs. We then present a cost model that allows reasoning about the responsiveness and completion time of well-typed programs. The cost model extends the traditional work-span model for cooperative threading to account for the competitive scheduling decisions needed to ensure responsiveness. Finally, we show that our proposed techniques are realistic by implementing them as an extension to the Standard ML language.
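
The priority discipline at the heart of the paper can be made concrete with a small model. The following OCaml sketch is illustrative only: the paper extends Standard ML and enforces its rule statically through the modal type system, whereas this sketch checks it dynamically, and all names here (priority, le, sync_allowed) are invented for the example. It shows a user-defined partial order on priorities and the rule that a thread may synchronize only with threads at equal or higher priority:

    (* Illustrative model, not the paper's syntax.  Four user-defined
       priorities with a partial order: Background is below everything;
       Compute and Interactive are below Audio but incomparable with
       each other. *)
    type priority = Background | Compute | Interactive | Audio

    let le a b =
      a = b ||
      match a, b with
      | Background, _ -> true
      | _, Audio -> true
      | _ -> false

    (* A thread at priority [p] may synchronize with (wait for) a thread
       at priority [q] only if q is at least p; otherwise a high-priority
       thread could be delayed by low-priority work, which is exactly the
       priority inversion the paper's type system rules out statically. *)
    let sync_allowed ~waiter:p ~target:q = le p q

    let () =
      assert (sync_allowed ~waiter:Background ~target:Interactive); (* ok *)
      assert (not (sync_allowed ~waiter:Audio ~target:Compute))     (* rejected *)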


Read More

We present a novel programming language design that attempts to combine the clarity and safety of high-level functional languages with the efficiency and parallelism of low-level numerical languages. We treat arrays as eagerly-memoized functions on typed index sets, allowing abstract function manipulations, such as currying, to work on arrays. In contrast to composing primitive bulk-array operations, we argue for an explicit nested indexing style that mirrors application of functions to arguments. We also introduce a fine-grained typed effects system which affords concise and automatically-parallelized in-place updates. Specifically, an associative accumulation effect allows reverse-mode automatic differentiation of in-place updates in a way that preserves parallelism. Empirically, we benchmark against the Futhark array programming language, and demonstrate that aggressive inlining and type-driven compilation allows array programs to be written in an expressive, pointful style with little performance penalty.
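
The arrays-as-functions idea can be modeled in a few lines. A minimal OCaml sketch (illustrative; the paper presents its own language and benchmarks against Futhark, and tabulate/curry_arr are invented names, with a plain size n standing in for a typed index set) of eagerly memoizing a function over its index set, and of currying carrying over to arrays:

    (* Sketch: an "array" is an eagerly-memoized function on an index
       set, here just [0, n). *)
    type 'a arr = { size : int; data : 'a array }

    (* Eagerly memoize a function over its index set. *)
    let tabulate n f = { size = n; data = Array.init n f }

    let get a i = a.data.(i)

    (* Because arrays are functions on indices, function manipulations
       such as currying carry over: an (m * n)-indexed array becomes an
       m-indexed array of n-indexed arrays. *)
    let curry_arr m n a =
      tabulate m (fun i -> tabulate n (fun j -> get a ((i * n) + j)))

    let () =
      let flat = tabulate 6 (fun k -> k * k) in   (* indices 0..5 *)
      let mat = curry_arr 2 3 flat in
      assert (get (get mat 1) 2 = 25)             (* element at flat index 5 *)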
This work focuses on representing very high-dimensional global image descriptors using very compact 64-1024 bit binary hashes for instance retrieval. We propose DeepHash: a hashing scheme based on deep networks. Key to making DeepHash work at extremely low bitrates are three important considerations -- regularization, depth and fine-tuning -- each requiring solutions specific to the hashing problem. In-depth evaluation shows that our scheme consistently outperforms state-of-the-art methods across all data sets for both Fisher Vectors and Deep Convolutional Neural Network features, by up to 20 percent over other schemes. The retrieval performance with 256-bit hashes is close to that of the uncompressed floating point features -- a remarkable 512 times compression.
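
The closing arithmetic is consistent with a 4096-dimensional float32 descriptor: 4096 x 32 = 131,072 bits, and 131,072 / 256 = 512 (the dimensionality is an assumption that matches the stated ratio, not given in the abstract). A minimal OCaml sketch of retrieval over compact binary hashes, with simple sign-thresholding standing in for the learned DeepHash network:

    (* Sketch: retrieval with compact binary hashes.  In DeepHash a deep
       network produces the representation; here sign-thresholding a raw
       descriptor stands in for it.  All names are illustrative. *)

    (* Binarize a real-valued descriptor: one bit per dimension. *)
    let binarize (v : float array) : bool array =
      Array.map (fun x -> x > 0.) v

    (* Hamming distance between two binary hashes. *)
    let hamming a b =
      let d = ref 0 in
      Array.iteri (fun i x -> if x <> b.(i) then incr d) a;
      !d

    (* Rank database hashes by distance to the query hash. *)
    let rank query db =
      db
      |> List.map (fun h -> (hamming query h, h))
      |> List.sort compare

    let () =
      let q  = binarize [| 0.3; -1.2; 0.9; -0.1 |] in
      let d1 = binarize [| 0.1; -0.4; 0.2; -0.3 |] in   (* distance 0 *)
      let d2 = binarize [| -0.5; 0.8; 0.7; 0.2 |] in    (* farther *)
      match rank q [d2; d1] with
      | (0, _) :: _ -> ()                               (* nearest first *)
      | _ -> assert false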
Robust model-fitting to spectroscopic transitions is a requirement across many fields of science. The corrected Akaike and Bayesian information criteria (AICc and BIC) are most frequently used to select the optimal number of fitting parameters. In general, AICc is thought to overfit (too many model parameters) and BIC to underfit. For spectroscopic modelling, both AICc and BIC fall short in two important respects: (a) no penalty distinction is made according to line strength, so parameters of weak lines close to the detection threshold are treated with the same importance as those of strong lines, and (b) no account is taken of the way in which spectral lines impact only narrow regions of the data. In this paper we introduce a new information criterion that addresses these shortcomings, the Spectral Information Criterion (SpIC). Spectral simulations are used to compare performances. The main findings are (i) SpIC clearly outperforms AICc for high signal-to-noise data, (ii) SpIC and AICc work equally well for lower signal-to-noise data, although SpIC achieves this with fewer parameters, and (iii) BIC does not perform well (for this application) and should be avoided. The new method should be of broader applicability (beyond spectroscopy) wherever different model parameters influence separated small ranges within a larger dataset and/or have widely varying sensitivities.
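
For reference, the two baseline criteria have standard closed forms (with $k$ free parameters, $n$ data points, and maximized likelihood $\hat{L}$); SpIC itself is defined in the paper and is not reproduced here:

    $$ \mathrm{AIC} = 2k - 2\ln\hat{L}, \qquad
       \mathrm{AICc} = \mathrm{AIC} + \frac{2k(k+1)}{n-k-1}, \qquad
       \mathrm{BIC} = k\ln n - 2\ln\hat{L} $$

The AICc correction term grows as $k$ approaches $n$, and BIC's $k\ln n$ penalty exceeds AIC's $2k$ once $n > e^2 \approx 7.4$, which is why BIC tends to select fewer parameters.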
Software developers compose systems from components written in many different languages. A business-logic component may be written in Java or OCaml, a resource-intensive component in C or Rust, and a high-assurance component in Coq. In this multi-language world, program execution sends values from one linguistic context to another. This boundary-crossing exposes values to contexts with unforeseen behavior: behavior that could not arise in the source language of the value. For example, a Rust function may end up being applied in an ML context that violates the memory usage policy enforced by Rust's type system. This leads to the question of how developers ought to reason about code in such a multi-language world where behavior inexpressible in one language is easily realized in another. This paper proposes the novel idea of linking types to address the problem of reasoning about single-language components in a multi-lingual setting. Specifically, linking types allow programmers to annotate where in a program they can link with components inexpressible in their unadulterated language. This enables developers to reason about (behavioral) equality using only their own language and the annotations, even though their code may be linked with code written in a language with more expressive power.
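
The reasoning problem can be seen in miniature even inside one language. In the following OCaml sketch (an illustration of the phenomenon, not the paper's linking-types formalism; all names are invented), two functions that are behaviorally equal against pure callers are distinguished once linked against effectful code, which plays the role of a component from a more expressive language:

    (* Against pure callers, [twice_a] and [twice_b] look behaviorally
       equal: both return their argument applied twice.  Linked code
       with effects (mutable state standing in for a foreign component)
       can tell them apart. *)
    let twice_a f x = f (f x)
    let twice_b f x = ignore (f x); f (f x)   (* extra call, invisible to a pure caller *)

    let count_calls twice =
      let n = ref 0 in
      let f x = incr n; x in
      ignore (twice f 0);
      !n

    let () =
      assert (count_calls twice_a = 2);
      assert (count_calls twice_b = 3)   (* distinguished by effectful linking *)

Linking types would let the programmer annotate whether linked components may observe such differences, recovering equational reasoning within the annotated language.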
We identify the two scalar leptoquarks capable of generating sign-dependent contributions to leptonic magnetic moments, $R_2 \sim (\mathbf{3}, \mathbf{2}, 7/6)$ and $S_1 \sim (\mathbf{3}, \mathbf{1}, -1/3)$, as favoured by current measurements. We consider the case in which the electron and muon sectors are decoupled, and real-valued Yukawa couplings are specified using an up-type quark mass-diagonal basis. Contributions to $\Delta a_e$ arise from charm-containing loops and $\Delta a_\mu$ from top-containing loops -- hence avoiding dangerous LFV constraints, particularly from $\mu \to e \gamma$. The strongest constraints on these models arise from contributions to the $Z$ leptonic decay widths, high-$p_T$ leptonic tails at the LHC, and from (semi)leptonic kaon decays. To be a comprehensive solution to the $(g-2)_{e/\mu}$ puzzle we find that the mass of either leptoquark must be $\lesssim 65$ TeV. This analysis can be embedded within broader flavour anomaly studies, including those of hierarchical leptoquark coupling structures. It can also be straightforwardly adapted to accommodate future measurements of leptonic magnetic moments, such as those expected from the Muon $g-2$ collaboration in the near future.
