
The Ultimate Display

Added by: Christopher Fluke
Publication date: 2016
Fields: Physics
Language: English





Astronomical images and datasets are increasingly high-resolution and multi-dimensional. The vast majority of astronomers perform all of their visualisation and analysis tasks on low-resolution, two-dimensional desktop monitors. If there were no technological barriers to designing the ultimate stereoscopic display for astronomy, what would it look like? What capabilities would we require of our compute hardware to drive it? And are existing technologies even close to providing a true 3D experience that is compatible with the depth resolution of human stereoscopic vision? We consider the CAVE2 (an 80 Megapixel, hybrid 2D and 3D virtual reality environment directly integrated with a 100 Tflop/s GPU-powered supercomputer) and the Oculus Rift (a low-cost, head-mounted display) as examples at opposite financial ends of the immersive display spectrum.
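To make the question about matching the depth resolution of human stereoscopic vision concrete, the sketch below estimates the smallest depth difference a viewer can discriminate at a given viewing distance. It is a back-of-the-envelope illustration only: the interpupillary distance (65 mm), the stereo acuity (20 arcseconds) and the sample viewing distances are assumed typical values, not figures taken from the paper.

```python
import math

def depth_resolution(distance_m, ipd_m=0.065, stereo_acuity_arcsec=20.0):
    """Smallest resolvable depth difference (in metres) at a viewing distance,
    from the small-angle relation delta_d ~ d**2 * delta_theta / ipd.
    The interpupillary distance and stereo acuity defaults are assumed
    typical values, not numbers from the paper."""
    delta_theta = math.radians(stereo_acuity_arcsec / 3600.0)  # arcsec -> radians
    return distance_m ** 2 * delta_theta / ipd_m

# Roughly arm's length (desktop / HMD virtual image) vs. a tiled wall a few metres away.
for d in (0.5, 1.0, 3.0):
    print(f"viewing distance {d:4.1f} m -> depth resolution ~ {1000 * depth_resolution(d):5.1f} mm")
```

The quadratic dependence on viewing distance is the point of the exercise: the farther the display surface, the coarser the depth steps a stereoscopic system must reproduce to stay within what the eye can actually resolve.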

Related research

DARk matter WImp search with liquid xenoN (DARWIN) will be an experiment for the direct detection of dark matter using a multi-ton liquid xenon time projection chamber at its core. Its primary goal will be to explore the experimentally accessible parameter space for Weakly Interacting Massive Particles (WIMPs) in a wide mass-range, until neutrino interactions with the target become an irreducible background. The prompt scintillation light and the charge signals induced by particle interactions in the xenon will be observed by VUV sensitive, ultra-low background photosensors. Besides its excellent sensitivity to WIMPs above a mass of 5 GeV/c$^2$, such a detector with its large mass, low-energy threshold and ultra-low background level will also be sensitive to other rare interactions. It will search for solar axions, galactic axion-like particles and the neutrinoless double-beta decay of $^{136}$Xe, as well as measure the low-energy solar neutrino flux with <1% precision, observe coherent neutrino-nucleus interactions, and detect galactic supernovae. We present the concept of the DARWIN detector and discuss its physics reach, the main sources of backgrounds and the ongoing detector design and R&D efforts.
The Maunakea Spectroscopic Explorer (MSE) is a next-generation observatory, designed to provide highly multiplexed, multi-object spectroscopy over a wide field of view. The observatory will consist of (1) a telescope with an 11.25 m aperture, (2) a 1.5 square-degree science field of view, (3) fibre optic positioning and transmission systems, and (4) a suite of low (R=3000), moderate (R=6000) and high resolution (R=40,000) spectrographs. The Fibre Transmission System (FiTS) consists of 4332 optical fibres, designed to transmit the light from the telescope prime focus to the dedicated spectrographs. The ambitious science goals of MSE require the Fibre Transmission System to deliver performance well beyond the current state of the art for multi-fibre systems, e.g., the sensitivity to observe magnitude 24 objects over a very broad wavelength range (0.37 - 1.8 microns) while achieving relative spectrophotometric accuracy of <3% and radial velocity precision of 20 km/s.
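As a quick illustration of what the quoted resolving powers mean in practice, the snippet below converts R = lambda / delta_lambda into the width of one spectral resolution element. The 600 nm sample wavelength (within the quoted 0.37-1.8 micron range) is chosen here purely for illustration and is not specified in the text.

```python
def resolution_element_nm(wavelength_nm, resolving_power):
    """Width of one spectral resolution element, from R = lambda / delta_lambda."""
    return wavelength_nm / resolving_power

# MSE resolving powers quoted above; the 600 nm sample wavelength is an assumption.
for R in (3000, 6000, 40000):
    dl = resolution_element_nm(600.0, R)
    print(f"R = {R:>6}: delta_lambda ~ {dl:.3f} nm at 600 nm")
```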
As of 2023, the low-frequency part of the Square Kilometre Array will go online in Australia. It will constitute the largest and most powerful low-frequency radio-astronomical observatory to date, and will facilitate a rich science programme in astronomy and astrophysics. With modest engineering changes, it will also be able to measure cosmic rays via the radio emission from extensive air showers. The extreme antenna density and the homogeneous coverage provided by more than 60,000 antennas within an area of one km$^2$ will push radio detection of cosmic rays in the energy range around 10$^{17}$ eV to ultimate precision, with superior capabilities in the reconstruction of arrival direction, energy, and an expected depth-of-shower-maximum resolution of 6 g/cm$^2$.
Near-field cosmology -- using detailed observations of the Local Group and its environs to study wide-ranging questions in galaxy formation and dark matter physics -- has become a mature and rich field over the past decade. There are lingering concerns, however, that the relatively small size of the present-day Local Group ($\sim 2$ Mpc diameter) imposes insurmountable sample-variance uncertainties, limiting its broader utility. We consider the region spanned by the Local Group's progenitors at earlier times and show that it reaches $\approx 7$ co-moving Mpc in linear size (a volume of $\approx 350\,{\rm Mpc}^3$) at $z=7$. This size at early cosmic epochs is large enough to be representative in terms of the matter density and counts of dark matter halos with $M_{\rm vir}(z=7) \lesssim 2\times 10^{9}\,M_{\odot}$. The Local Group's stellar fossil record traces the cosmic evolution of galaxies with $10^{3} \lesssim M_{\star}(z=0) / M_{\odot} \lesssim 10^{9}$ (reaching $M_{1500} > -9$ at $z \sim 7$) over a region that is comparable to or larger than the Hubble Ultra-Deep Field (HUDF) for the entire history of the Universe. It is highly complementary to the HUDF, as it probes much fainter galaxies but does not contain the intrinsically rarer, brighter sources that are detectable in the HUDF. Archaeological studies in the Local Group also provide the ability to trace the evolution of individual galaxies across time as opposed to evaluating statistical connections between temporally distinct populations. In the JWST era, resolved stellar populations will probe regions larger than the HUDF and any deep JWST fields, further enhancing the value of near-field cosmology.
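A quick consistency check of the quoted numbers, treating the $\approx 7$ comoving Mpc linear size as the side of a roughly cubical region (an assumption made only for this illustration) and converting comoving to physical extent at $z=7$:

```python
z = 7
linear_size_cMpc = 7.0                            # quoted comoving linear size
volume_cMpc3 = linear_size_cMpc ** 3              # ~343, consistent with the quoted ~350 Mpc^3
physical_size_Mpc = linear_size_cMpc / (1 + z)    # comoving -> physical at z = 7

print(f"volume ~ {volume_cMpc3:.0f} comoving Mpc^3 (quoted: ~350 Mpc^3)")
print(f"physical extent at z = {z}: ~ {physical_size_Mpc:.2f} Mpc")
```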
Working with any gradient-based machine learning algorithm involves the tedious task of tuning the optimizer's hyperparameters, such as the learning rate. There exist many techniques for automated hyperparameter optimization, but they typically introduce even more hyperparameters to control the hyperparameter optimization process. We propose to instead learn the hyperparameters themselves by gradient descent, and furthermore to learn the hyper-hyperparameters by gradient descent as well, and so on ad infinitum. As these towers of gradient-based optimizers grow, they become significantly less sensitive to the choice of top-level hyperparameters, hence decreasing the burden on the user to search for optimal values.
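The stacked-optimizer idea lends itself to a compact illustration. The sketch below shows only the first level of the tower (adjusting a scalar learning rate by gradient descent on the loss) applied to a toy quadratic problem; it is not the authors' implementation, and the toy loss, the initial learning rate, and the fixed meta learning rate are all assumptions made for this example.

```python
import numpy as np

def loss_grad(w):
    """Gradient of the toy loss 0.5 * ||w||^2."""
    return w

w = np.array([5.0, -3.0])
lr = 0.01        # hyperparameter learned below
meta_lr = 1e-3   # hyper-hyperparameter, kept fixed here (the paper stacks further levels)

prev_grad = np.zeros_like(w)
for _ in range(100):
    g = loss_grad(w)
    # Since w_t = w_{t-1} - lr * g_{t-1}, the gradient of the loss with respect
    # to lr is -g_t . g_{t-1}; take a descent step on lr before updating w.
    lr -= meta_lr * (-np.dot(g, prev_grad))
    w -= lr * g
    prev_grad = g

print(f"learned lr {lr:.3f}, final loss {0.5 * np.dot(w, w):.2e}")
```

In this toy setting the learning rate grows while successive gradients point the same way and settles once the loss stops improving, which is the behaviour the abstract describes: the top-level value matters less because the lower level adapts it.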