
GRANDMA: a network to coordinate them all

Added by Bruce Gendre
Publication date: 2020
Field: Physics
Language: English
Authors: S. Agayeva





GRANDMA is an international project that coordinates telescope observations of transient sources with large localization uncertainties. Such sources include gravitational wave events, gamma-ray bursts and neutrino events. GRANDMA currently coordinates 25 telescopes (70 scientists), with the aim of optimizing the imaging strategy to maximize the probability of identifying an optical counterpart of a transient source. This paper describes the motivation for the project, organizational structure, methodology and initial results.
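To make the coordination idea concrete, here is a minimal sketch of the kind of probability-ranked tile scheduling a network like this could use: rank sky tiles by localization probability and greedily hand the most probable unobserved tiles to each telescope. All function names, telescope names, and numbers are hypothetical illustrations, not GRANDMA's actual algorithm.

```python
# Toy sketch of probability-ranked tile scheduling for a telescope network.
# Everything here is a hypothetical illustration, not GRANDMA's real scheduler.

def rank_tiles(tile_probs):
    """Return tile indices sorted by localization probability, highest first."""
    return sorted(range(len(tile_probs)), key=lambda i: tile_probs[i], reverse=True)

def assign_tiles(tile_probs, telescopes, tiles_per_scope):
    """Greedily assign the most probable unobserved tiles to each telescope."""
    order = iter(rank_tiles(tile_probs))
    schedule = {t: [] for t in telescopes}
    for scope in telescopes:
        for _ in range(tiles_per_scope):
            try:
                schedule[scope].append(next(order))
            except StopIteration:
                return schedule  # fewer tiles than observing slots
    return schedule

# Six tiles of a localization map; two telescopes, two pointings each.
probs = [0.02, 0.30, 0.05, 0.25, 0.10, 0.08]
plan = assign_tiles(probs, ["scope_A", "scope_B"], tiles_per_scope=2)
print(plan)  # scope_A gets the two most probable tiles, scope_B the next two
```

A real scheduler must also weigh telescope visibility windows, limiting magnitudes, and slew times, which this sketch deliberately omits.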




Read More

The analytical solution of the three-dimensional linear pendulum in a rotating frame of reference is obtained, including Coriolis and centrifugal accelerations, and expressed in terms of initial conditions. This result offers the possibility of treating Foucault and Bravais pendula as trajectories of the very same system of equations, each of them with particular initial conditions. We compare with the common two-dimensional approximations in textbooks. A previously unnoticed pattern in the three-dimensional Foucault pendulum attractor is presented.
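The common two-dimensional textbook approximation mentioned above can be sketched numerically: with vertical rotation rate Oz = Omega*sin(latitude), the small-oscillation equations are x'' = 2*Oz*y' - w^2*x and y'' = -2*Oz*x' - w^2*y, and the swing plane precesses clockwise at rate Oz. The rotation rate below is exaggerated so the precession is visible over a few swings; the centrifugal terms and the paper's full 3-D solution are omitted.

```python
import math

# 2-D Foucault pendulum approximation: x'' = 2*Oz*y' - w2*x, y'' = -2*Oz*x' - w2*y.
# Oz is exaggerated versus Earth's rotation so precession shows up quickly.

def step(state, dt, Oz, w2):
    """One forward-Euler step of the coupled equations."""
    x, y, vx, vy = state
    ax = 2 * Oz * vy - w2 * x
    ay = -2 * Oz * vx - w2 * y
    return (x + vx * dt, y + vy * dt, vx + ax * dt, vy + ay * dt)

def swing_plane_angle(Oz=0.05, w2=1.0, dt=1e-3, t_end=20.0):
    """Integrate from rest along +x and return the final position angle."""
    state = (1.0, 0.0, 0.0, 0.0)
    for _ in range(int(t_end / dt)):
        state = step(state, dt, Oz, w2)
    x, y, _, _ = state
    return math.atan2(y, x)

# The exact solution precesses as exp(-i*Oz*t): after t = 20 with Oz = 0.05
# the plane has rotated by roughly -1 radian (clockwise).
print(swing_plane_angle())
```

A higher-order integrator (e.g. RK4) would be needed for long integrations; forward Euler is kept here only for brevity.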
One Monad to Prove Them All is a modern fairy tale about curiosity and perseverance, two important properties of a successful PhD student. We follow the PhD student Mona on her adventure of proving properties about Haskell programs in the proof assistant Coq. On the one hand, as a PhD student in computer science, Mona observes an increasing demand for correct software products. In particular, because of the large amount of existing software, verifying existing software products is becoming more important. Verifying programs in the functional programming language Haskell is no exception. On the other hand, Mona is delighted to see that communities in the area of theorem proving are flourishing. Thus, Mona sets out to learn more about the interactive theorem prover Coq and about verifying Haskell programs in Coq. To prove properties about a Haskell function in Coq, Mona has to translate the function into Coq code. As Coq programs have to be total and Haskell programs often are not, Mona has to model partiality explicitly in Coq. In her quest for a solution Mona finds an ancient manuscript that explains how properties about Haskell functions can be proven in the proof assistant Agda by translating Haskell programs into monadic Agda programs. By instantiating the monadic program with a concrete monad instance, the proof can be performed in either a total or a partial setting. Mona discovers that the proposed transformation does not work in Coq due to a restriction in the termination checker. In fact, the transformation no longer works in Agda either, as Agda's termination checker has since been improved. We follow Mona on an educational journey through the land of functional programming where she learns about concepts like free monads and containers as well as the basics and restrictions of proof assistants like Coq.
These concepts are well-known individually, but their interplay gives rise to a solution for Mona's problem, based on the originally proposed monadic transformation, that has not been presented before. When Mona starts to test her approach by proving a statement about simple Haskell functions, she realizes that her approach has an additional advantage over the original idea in Agda. Mona's final solution not only works for a specific monad instance but even allows her to prove monad-generic properties. Instead of proving properties over and over again for specific monad instances, she is able to prove properties that hold for all monads representable by a container-based instance of the free monad. In order to strengthen her confidence in the practicability of her approach, Mona evaluates it in a case study that compares two implementations of queues. In order to share the results with other functional programmers, the fairy tale is available as a literate Coq file. If you are a citizen of the land of functional programming or are at least familiar with its customs, have had a journey that involved reasoning about functional programs of your own, or are just a curious soul looking for the next story about monads and proofs, then this tale is for you.
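The core modeling trick, writing a program once generically over a monad and then instantiating it in either a total or a partial setting, can be rendered as a toy sketch even in Python. This only illustrates the idea; the paper's actual development uses free monads and containers in Coq, and all names below are hypothetical.

```python
# Toy rendering of monad-generic programming: one function, two settings.
# Identity = total setting; Maybe = partiality modeled as an explicit value.

class Identity:
    """Total setting: computations are just values."""
    @staticmethod
    def ret(x):
        return x
    @staticmethod
    def bind(m, f):
        return f(m)

class Maybe:
    """Partial setting: failure is an explicit value, not a crash."""
    NOTHING = ("Nothing",)
    @staticmethod
    def ret(x):
        return ("Just", x)
    @staticmethod
    def bind(m, f):
        return f(m[1]) if m != Maybe.NOTHING else m

def zip_add(M, xs, ys, fail):
    """Element-wise sum, written once over any monad M; partial on
    lists of unequal length (only Maybe can represent that case)."""
    if not xs and not ys:
        return M.ret([])
    if not xs or not ys:
        return fail()
    return M.bind(zip_add(M, xs[1:], ys[1:], fail),
                  lambda tail: M.ret([xs[0] + ys[0]] + tail))

print(zip_add(Identity, [1, 2], [3, 4], fail=None))            # [4, 6]
print(zip_add(Maybe, [1, 2], [3], fail=lambda: Maybe.NOTHING))  # ('Nothing',)
```

In the paper, the free monad plays the role of the parameter M, so a single proof covers all container-representable instances at once; Python cannot express that generic proof, only the generic program.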
The second LIGO-Virgo catalog of gravitational wave transients has more than quadrupled the observational sample of binary black holes. We analyze this catalog using a suite of five state-of-the-art binary black hole population models covering a range of isolated and dynamical formation channels and infer branching fractions between channels as well as constraints on uncertain physical processes that impact the observational properties of mergers. Given our set of formation models, we find significant differences between the branching fractions of the underlying and detectable populations, and that the diversity of detections suggests that multiple formation channels are at play. A mixture of channels is strongly preferred over any single channel dominating the detected population: an individual channel does not contribute more than $\simeq 70\%$ of the observational sample of binary black holes. We calculate the preference between the natal spin assumptions and common envelope efficiencies in our models, favoring natal spins of isolated black holes of $\lesssim 0.1$, and marginally preferring common envelope efficiencies of $\gtrsim 2.0$ while strongly disfavoring highly inefficient common envelopes. We show that it is essential to consider multiple channels when interpreting gravitational wave catalogs, as inference on branching fractions and physical prescriptions becomes biased when contributing formation scenarios are not considered or incorrect physical prescriptions are assumed. Although our quantitative results can be affected by uncertain assumptions in model predictions, our methodology is capable of including models with updated theoretical considerations and additional formation channels.
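The branching-fraction inference described above can be illustrated with a deliberately tiny mixture model: each event carries a likelihood under each channel, and the posterior over the mixture weight follows from the product over events. This two-channel, grid-based sketch stands in for the paper's five-channel hierarchical analysis; all numbers are made up.

```python
import math

# Toy two-channel branching-fraction inference: posterior over the mixture
# weight beta (fraction of events from channel A) on a grid, flat prior.
# The per-event likelihoods here are invented illustrative numbers.

def branching_posterior(like_a, like_b, n_grid=101):
    """Return (grid of beta values, normalized posterior) for the mixture
    likelihood prod_i [beta * like_a[i] + (1 - beta) * like_b[i]]."""
    grid = [i / (n_grid - 1) for i in range(n_grid)]
    log_post = []
    for beta in grid:
        logp = 0.0
        for la, lb in zip(like_a, like_b):
            logp += math.log(beta * la + (1 - beta) * lb)
        log_post.append(logp)
    m = max(log_post)                      # stabilize before exponentiating
    w = [math.exp(p - m) for p in log_post]
    s = sum(w)
    return grid, [x / s for x in w]

# Four events whose likelihoods all favor channel A: the posterior for
# beta should pile up near 1.
grid, post = branching_posterior([0.9, 0.8, 0.7, 0.9], [0.1, 0.2, 0.3, 0.1])
print(grid[post.index(max(post))])
```

The real analysis additionally folds in selection effects (the underlying versus detectable populations mentioned above), which this sketch ignores.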
We present the Global Rapid Advanced Network Devoted to the Multi-messenger Addicts (GRANDMA). The network consists of 21 telescopes with both photometric and spectroscopic facilities, connected through a dedicated infrastructure. The network aims at coordinating observations of transient events with large sky localization uncertainties, to enhance their follow-up and reduce the delay between the initial detection and the optical confirmation. The GRANDMA program mainly focuses on follow-up of gravitational-wave alerts, to find and characterise the electromagnetic counterpart during the third observational campaign of the Advanced LIGO and Advanced Virgo detectors, but it also allows follow-up of other transient alerts involving neutrinos or gamma-ray bursts, even with poor spatial localisation. We present the different facilities, tools, and methods we developed for this network, and show its efficiency using observations of LIGO/Virgo S190425z, a binary neutron star merger candidate. We furthermore report on all GRANDMA follow-up observations performed during the first six months of the LIGO-Virgo observational campaign, and we derive constraints on the kilonova properties assuming that the events' locations were imaged by our telescopes.
Magnetic resonance imaging (MRI) acquisition, reconstruction, and segmentation are usually processed independently in the conventional MRI workflow. These tasks are, however, strongly interrelated, and processing them independently severs potential connections, which may lead to losing clinically important information for the final diagnosis. To exploit these relations for further performance improvement, a sequential multi-task joint learning network model is proposed to train a combined end-to-end pipeline in a differentiable way, aiming at exploring the mutual influence among those tasks simultaneously. Our design consists of three cascaded modules: 1) a deep sampling pattern learning module optimizes the $k$-space sampling pattern with a predetermined sampling rate; 2) a deep reconstruction module is dedicated to reconstructing MR images from the undersampled data using the learned sampling pattern; 3) a deep segmentation module encodes MR images reconstructed by the previous module to segment the tissues of interest. The proposed model retrieves the latently interactive and cyclic relations among those tasks, from which each task mutually benefits. The proposed framework is verified on the MRB dataset, where it outperforms other state-of-the-art methods in terms of both reconstruction and segmentation.
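The three cascaded stages can be sketched with fixed, non-learned stand-ins: a hand-picked low-frequency sampling mask for stage 1, a zero-filled inverse-FFT "reconstruction" for stage 2, and an intensity-threshold "segmentation" for stage 3. The paper's modules are trained networks; everything below, including shapes and thresholds, is a hypothetical illustration of the data flow only.

```python
import numpy as np

# Non-learned stand-ins for the three cascaded modules described above.

def sample_kspace(image, keep_frac=0.25):
    """Stage 1 stand-in: keep only the central (low-frequency) k-space rows."""
    k = np.fft.fftshift(np.fft.fft2(image))
    n = image.shape[0]
    mask = np.zeros_like(k, dtype=bool)
    half = int(n * keep_frac / 2)
    mask[n // 2 - half : n // 2 + half, :] = True
    return k * mask

def reconstruct(undersampled_k):
    """Stage 2 stand-in: zero-filled inverse FFT reconstruction."""
    return np.abs(np.fft.ifft2(np.fft.ifftshift(undersampled_k)))

def segment(recon, thresh=0.5):
    """Stage 3 stand-in: threshold relative to the maximum intensity."""
    return recon > thresh * recon.max()

# A bright square "tissue" on a 32x32 phantom, pushed through the pipeline.
phantom = np.zeros((32, 32))
phantom[12:20, 12:20] = 1.0
seg = segment(reconstruct(sample_kspace(phantom)))
```

In the proposed framework these three stages are differentiable and trained jointly, so the segmentation loss can shape the learned sampling pattern; the fixed stand-ins above only show how data moves through the cascade.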
