
This document presents in detail the processing criteria and the analysis techniques used to produce the Health Vulnerability Map from public and open data sources. The paper makes use of statistical analysis techniques (MCA, PCA, etc.) and machine learning (autoencoders) for the processing and analysis of the information. The final product is a map at the census-tract level that seeks to quantify the population's access to basic health services.
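As a rough illustration of the kind of dimensionality-reduction step mentioned above, the following is a minimal sketch (not the authors' actual pipeline) of deriving a per-census-tract vulnerability score from open indicators with PCA; the column names and values are hypothetical placeholders.

```python
import numpy as np
import pandas as pd
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA

# Hypothetical table: one row per census tract, columns are open-data indicators.
tracts = pd.DataFrame({
    "dist_to_hospital_km": [0.8, 5.2, 12.0, 2.1],
    "physicians_per_1000": [3.1, 0.9, 0.4, 1.8],
    "pct_without_coverage": [12.0, 35.0, 61.0, 22.0],
})

X = StandardScaler().fit_transform(tracts)   # put indicators on a common scale
pca = PCA(n_components=1)                    # first component as a summary axis
score = pca.fit_transform(X).ravel()         # raw projection per tract

# Rescale to [0, 1] so higher means more vulnerable (the sign may need flipping
# depending on the component loadings).
vulnerability = (score - score.min()) / (score.max() - score.min())
print(vulnerability)
```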
Plastic scintillators are widely used as particle detectors in many fields, mainly medicine, particle physics and astrophysics. Traditionally they are coupled to a photo-multiplier (PMT), but silicon photo-multipliers (SiPM) are now evolving as a promising robust alternative, especially in space-borne experiments, since plastic scintillators may be a light option for low Earth orbit missions. It is therefore timely to make a new analysis of the optimal design for experiments based on plastic scintillators in realistic conditions in such a configuration. We analyze here their response to an isotropic flux of electron and proton primaries in the energy range from 1 MeV to 1 GeV, a typical scenario for cosmic ray or space weather experiments, through detailed GEANT4 simulations. First, we focus on the effect of increasing the ratio between the plastic volume and the area of the photo-detector itself and, second, on the benefits of using a reflective coating around the plastic, the most common technique to increase the light collection efficiency. In order to achieve a general approach, several detector setups have to be considered. Therefore, we have performed a full set of simulations using the highly tested GEANT4 simulation tool: several parameters have been analyzed, such as the energy lost in the coating, the energy deposited in the scintillator, the optical absorption, the fraction of scintillation photons that are not detected, the light collection at the photo-detector, the pulse shape and its time parameters and, finally, other design parameters such as the surface roughness, the coating reflectivity and the case of a scintillator with two decay components. This work could serve as a guide for the design of future experiments based on plastic scintillators.
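To make the light-collection trade-off above concrete, here is a toy back-of-the-envelope estimate (not taken from the paper's GEANT4 runs): the expected number of detected photons for a single energy deposit, factoring in light yield, collection efficiency and SiPM photon-detection efficiency. All numbers are illustrative assumptions.

```python
def detected_photons(e_dep_mev, light_yield_per_mev=10_000,
                     collection_eff=0.1, sipm_pde=0.35):
    """Mean number of detected photons for a deposit of e_dep_mev MeV."""
    return e_dep_mev * light_yield_per_mev * collection_eff * sipm_pde

# A 2 MeV deposit in a bare plastic vs. one wrapped in a reflective coating,
# modelled here only as an increase of the collection efficiency.
print(detected_photons(2.0, collection_eff=0.05))   # bare scintillator
print(detected_photons(2.0, collection_eff=0.20))   # with reflective wrapping
```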
In this work we investigate the problem of road scene semantic segmentation using Deconvolutional Networks (DNs). Several constraints limit the practical performance of DNs in this context: firstly, the paucity of existing pixel-wise labelled training data, and secondly, the memory constraints of embedded hardware, which rule out the practical use of state-of-the-art DN architectures such as fully convolutional networks (FCN). To address the first constraint, we introduce a Multi-Domain Road Scene Semantic Segmentation (MDRS3) dataset, aggregating data from six existing densely and sparsely labelled datasets for training our models, and two existing, separate datasets for testing their generalisation performance. We show that, while MDRS3 offers a greater volume and variety of data, end-to-end training of a memory-efficient DN does not yield satisfactory performance. We propose a new training strategy to overcome this, based on (i) the creation of a best-possible source network (S-Net) from the aggregated data, ignoring time and memory constraints, and (ii) the transfer of knowledge from S-Net to the memory-efficient target network (T-Net). We evaluate different techniques for S-Net creation and T-Net transfer, and demonstrate that training a constrained deconvolutional network in this manner can unlock better performance than existing training approaches. Specifically, we show that a target network can be trained to achieve improved accuracy versus an FCN despite using less than 1% of the memory. We believe that our approach can be useful beyond automotive scenarios, wherever labelled data is similarly scarce or fragmented and practical constraints exist on the desired model size. We make our network models and aggregated multi-domain dataset available for reproducibility.
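The S-Net to T-Net transfer described above is in the spirit of knowledge distillation; the sketch below (PyTorch) shows a generic soft-target distillation loss for segmentation as an illustration of the idea, not the paper's exact transfer procedure, and the temperature/weighting values are assumptions.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.5):
    """Pixel-wise loss: softened teacher targets plus hard ground-truth labels.

    student_logits, teacher_logits: (N, C, H, W) segmentation logits.
    labels: (N, H, W) integer class map.
    """
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)                                   # scale back the T^2 factor
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1.0 - alpha) * hard
```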
The current methods to determine the primary energy of ultra-high energy cosmic rays (UHECRs) differ when dealing with hadron or photon primaries. Current experiments combine two different techniques, an array of surface detectors and fluorescence telescopes. The latter allow an almost calorimetric measurement of the primary energy. Thus, hadron-initiated showers detected by both types of detectors are used to calibrate the energy estimator from the surface array (usually the interpolated signal at a certain distance from the shower core, S(r0)) against the primary energy. On the other hand, this calibration is not feasible when searching for photon primaries, since no high energy photon has been unambiguously detected so far. Therefore, pure Monte Carlo parametrizations are used instead. In this work, we present a new method to determine the primary energy of hadron-induced showers in a hybrid experiment, based on a technique previously developed for photon primaries. It consists of a set of calibration curves that relate the surface energy estimator, S(r0), to the depth of maximum development of the shower, Xmax, obtained from the fluorescence telescopes. The primary energy can then be determined from surface information alone, since only S(r0) and the zenith angle of the incoming shower are needed. Considering a mixed sample of ultra-high energy proton and iron primaries, and taking into account the reconstruction uncertainties and shower-to-shower fluctuations, we demonstrate that the primary energy may be determined with a systematic uncertainty below 1% and a resolution around 16% in the energy range from 10^{18.5} to 10^{19.6} eV. Several array geometries, the shape of the energy error distributions and the uncertainties due to the unknown composition of the primary flux have been analyzed as well.
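The following is a schematic of the calibration idea only, with an illustrative functional form and synthetic placeholder data rather than the paper's exact parametrization: fit the primary energy of hybrid events as a function of the surface estimator S(r0) and the shower maximum Xmax.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
log_s = rng.uniform(1.0, 3.0, n)        # log10 S(r0), placeholder units
xmax = rng.uniform(700.0, 850.0, n)     # Xmax in g/cm^2
log_e = 16.6 + 0.95 * log_s + 0.001 * xmax + rng.normal(0, 0.05, n)  # toy truth

# Least-squares fit of the calibration: log10(E) = a + b*log10(S) + c*Xmax
A = np.column_stack([np.ones(n), log_s, xmax])
coeff, *_ = np.linalg.lstsq(A, log_e, rcond=None)
a, b, c = coeff

log_e_rec = A @ coeff
resolution = np.std(log_e_rec - log_e)  # spread of the reconstructed energies
print(a, b, c, resolution)
```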
We address the problem of efficient sparse fixed-rank (S-FR) matrix decomposition, i.e., splitting a corrupted matrix $M$ into an uncorrupted matrix $L$ of rank $r$ and a sparse matrix of outliers $S$. Fixed-rank constraints are usually imposed by the physical restrictions of the system under study. Here we propose a method to perform accurate and very efficient S-FR decomposition that is more suitable for large-scale problems than existing approaches. Our method combines geometrical and algebraic techniques in a way that avoids the bottleneck caused by the truncated SVD (TSVD). Instead, a polar factorization is used to exploit the manifold structure of fixed-rank problems as the product of two Stiefel manifolds and an SPD manifold, leading to better convergence and stability. Closed-form projectors then help to speed up each iteration of the method. We introduce a novel and fast projector for the $\text{SPD}$ manifold together with a proof of its validity. Further acceleration is achieved using a Nystrom scheme. Extensive experiments with synthetic and real data in the context of robust photometric stereo and spectral clustering show that our proposals outperform the state of the art.
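For orientation, here is a generic sparse plus fixed-rank split written as a simple alternating scheme. It is an illustrative baseline only: the paper replaces the truncated SVD used below with a polar factorization on the product of two Stiefel manifolds and an SPD manifold, while this sketch keeps the simpler TSVD form; the regularization value is an assumption.

```python
import numpy as np

def s_fr_decompose(M, r, lam=0.1, n_iter=50):
    """Alternate a rank-r projection of M - S with entrywise soft-thresholding."""
    S = np.zeros_like(M)
    for _ in range(n_iter):
        # Fixed-rank step: best rank-r approximation of M - S (via TSVD here).
        U, s, Vt = np.linalg.svd(M - S, full_matrices=False)
        L = (U[:, :r] * s[:r]) @ Vt[:r, :]
        # Sparse step: soft-threshold the residual entrywise.
        R = M - L
        S = np.sign(R) * np.maximum(np.abs(R) - lam, 0.0)
    return L, S
```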
In this work, we address the problem of outlier detection for robust motion estimation by using modern sparse-low-rank decompositions, i.e., Robust PCA-like methods, to impose global rank constraints. Robust decompositions have been shown to be good at splitting a corrupted matrix into an uncorrupted low-rank matrix and a sparse matrix containing the outliers. However, this process only works when the matrices have relatively low rank with respect to their ambient space, a property not met in motion estimation problems. As a solution, we propose to exploit the partial information present in the decomposition to decide which matches are outliers. We provide evidence showing that, even when it is not possible to recover an uncorrupted low-rank matrix, the resulting information can be exploited for outlier detection. To this end we propose the Robust Decomposition with Constrained Rank (RD-CR), a proximal-gradient-based method that enforces the rank constraints inherent to motion estimation. We also present a general framework to perform robust estimation for stereo Visual Odometry, based on our RD-CR and a simple but effective compressed optimization method that achieves high performance. Our evaluation on synthetic data and on the KITTI dataset demonstrates the applicability of our approach in complex scenarios, where it yields state-of-the-art performance.
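As an illustration of how the sparse component of such a decomposition can drive outlier rejection in motion estimation, the sketch below flags matches whose columns carry large energy in the sparse matrix. The threshold rule is a simple heuristic for illustration, not the RD-CR criterion from the paper.

```python
import numpy as np

def flag_outlier_matches(S, k=2.5):
    """S: sparse component with one column per feature match.

    Returns a boolean mask marking the matches suspected to be outliers.
    """
    col_energy = np.linalg.norm(S, axis=0)                  # per-match energy
    cutoff = np.median(col_energy) + k * np.std(col_energy)  # heuristic cutoff
    return col_energy > cutoff
```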
The search for photons at EeV energies and beyond has considerable astrophysical interest and will remain one of the key challenges for ultra-high energy cosmic ray (UHECR) observatories in the near future. Several upper limits to the photon flux have been established, since no photon has been unambiguously observed up to now. Apart from an increase in statistics, improving these limits requires a better reconstruction efficiency for photon showers and/or better discrimination tools. Following this direction, we analyze in this work the ability of the surface parameter Sb, originally proposed for hadron discrimination, to identify photon primaries. Semi-analytical and numerical studies are performed in order to optimize Sb for the discrimination of photons from a proton background in the energy range from 10^18.5 to 10^19.6 eV. Although not shown explicitly, the same analysis has been performed for Fe nuclei and the corresponding results are discussed where appropriate. The effects of different array geometries and of the underestimation of the muon component in the shower simulations are analyzed, as well as the dependence of Sb on primary energy and zenith angle.
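A minimal sketch of the Sb-type observable follows: the sum of station signals weighted by their distance to the shower core raised to a power b. The reference distance and exponent below are placeholders; choosing the optimal b is precisely what the optimization study above addresses.

```python
import numpy as np

def s_b(signals, distances, b=3.0, r_ref=1000.0):
    """signals: station signals (e.g. in VEM); distances: core distances in m."""
    signals = np.asarray(signals, dtype=float)
    distances = np.asarray(distances, dtype=float)
    return np.sum(signals * (distances / r_ref) ** b)

# Example: three triggered stations of a toy event.
print(s_b(signals=[58.0, 12.5, 3.2], distances=[450.0, 900.0, 1400.0]))
```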
The current methods to determine the primary energy in surface arrays differ when dealing with hadron- or photon-initiated showers. In this work, we adapt a method previously developed for photon-initiated showers to hadron primaries. We determine the Monte Carlo parametrizations that relate the surface energy estimator to the depth of maximum shower development for both proton and iron primaries. Using for each primary its own set of calibration curves, which is of course impossible in practice, we show that the energy could be inferred with a negligible bias and 12% resolution. However, we show that a mixed calibration, including both types of primaries, can also be performed, such that the bias still remains low and the achieved resolution is around 15%. In addition, the method allows the simultaneous determination of Xmax in pure surface arrays with a resolution better than 7%.
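The bias and resolution figures quoted above can be computed, once a calibration has been applied, by comparing reconstructed and true energies on a simulated sample; the sketch below uses placeholder arrays purely to show the bookkeeping.

```python
import numpy as np

def bias_and_resolution(e_rec, e_true):
    """Relative bias (mean) and resolution (spread) of reconstructed energies."""
    rel = (np.asarray(e_rec) - np.asarray(e_true)) / np.asarray(e_true)
    return np.mean(rel), np.std(rel)

e_true = np.array([1.0e18, 3.0e18, 1.0e19])   # placeholder true energies (eV)
e_rec = np.array([1.02e18, 2.9e18, 1.1e19])   # placeholder reconstructed energies
print(bias_and_resolution(e_rec, e_true))
```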
A new family of parameters intended for composition studies in cosmic ray surface array detectors is proposed, and its application to different array layout designs has been analyzed. The parameters make exclusive use of surface data, combining the information from the total signal at each triggered detector with the array geometry. They are sensitive to the combined effects of the different muon and electromagnetic components on the lateral distribution function of proton- and iron-initiated showers at any given primary energy. Analytical and numerical studies have been performed in order to assess the reliability, stability and optimization of these parameters. Experimental uncertainties, the underestimation of the muon component in the shower simulation codes, intrinsic fluctuations and reconstruction errors are considered and discussed in a quantitative way. The potential discrimination power of these parameters under realistic experimental conditions is compared, in a simplified albeit quantitative way, with that expected from other surface and fluorescence estimators.
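One standard way to quantify the discrimination power of such a parameter is the merit factor between its proton and iron distributions; the sketch below shows that figure of merit, which is a common choice but not necessarily the exact comparison metric used in the study.

```python
import numpy as np

def merit_factor(param_p, param_fe):
    """Separation between the proton and iron distributions of a parameter."""
    param_p, param_fe = np.asarray(param_p), np.asarray(param_fe)
    return abs(param_p.mean() - param_fe.mean()) / np.sqrt(
        param_p.var() + param_fe.var()
    )
```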
A new family of parameters intended for composition studies is presented. They make exclusive use of surface data, combining the information from the total signal at each triggered detector with the array geometry. We perform an analytical study of these composition estimators in order to assess their reliability, stability and possible optimization. The influence of the different slopes of the proton and iron lateral distribution functions on the discrimination power of the estimators is also studied. Additionally, we study the stability of the parameters in the face of a possible underestimation of the size of the muon component by the shower simulation codes, as suggested by experimental evidence.
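Schematically, the sensitivity to the lateral distribution function (LDF) slope can be seen with a single-slope power-law LDF (a simplification; the real LDF and slope values are detector- and model-dependent):

\[
  S(r) = S(r_{0}) \left(\frac{r}{r_{0}}\right)^{-\beta}, \qquad
  S_{b} = \sum_{i} S_{i} \left(\frac{r_{i}}{r_{\mathrm{ref}}}\right)^{b}
  \;\Rightarrow\;
  S_{b} \propto S(r_{0}) \sum_{i} r_{i}^{\,b-\beta},
\]

so primaries with different LDF slopes $\beta$ (e.g. proton vs. iron) yield different values of the estimator for the same $S(r_{0})$, which is the effect the analytical study quantifies.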