108 - L. A. Perez, K. Xu, M. R. Wagner 2021
We developed a novel contactless frequency-domain approach to study thermal transport, which is particularly convenient when thermally anisotropic materials are considered. The method is based on a line-shaped heater geometry similar to that of the 3-omega method, while keeping all the technical advantages offered by non-contact methodologies. The present method is especially suitable for determining all the elements of the thermal conductivity tensor, which is experimentally achieved by simply rotating the sample with respect to the line-shaped optical heater. We provide the mathematical solution of the heat equation for the cases of anisotropic substrates, multilayers, as well as thin films. This methodology allows an accurate determination of the thermal conductivity and does not require complex modeling or intensive computational efforts to process the experimental data, i.e., the thermal conductivity is obtained through a simple linear fit (slope method), in a similar fashion as in the 3-omega method. We demonstrate the potential of this approach by studying isotropic and anisotropic materials in a wide range of thermal conductivities. In particular, we have studied the following inorganic and organic systems: (i) glass, Si, and Ge substrates (isotropic); (ii) $\beta$-Ga$_2$O$_3$ and a Kapton substrate (anisotropic); and (iii) a 285 nm SiO$_2$/Si thin film. The accuracy in the determination of the thermal conductivity is estimated at $\approx$ 5%, whereas the best temperature resolution is $\Delta T \approx 3$ mK.
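The "slope method" mentioned above mirrors the classic 3-omega analysis. As a hedged sketch (based on Cahill's line-source solution, with the optical modulation frequency $\omega$ playing the role of the electrical $2\omega$; the exact prefactors for the optical variant are our assumption, not the paper's):

$$ \Delta T(\omega) \simeq \frac{P}{\pi l \kappa}\left(\eta - \frac{1}{2}\ln\omega\right) \quad\Longrightarrow\quad \kappa = -\frac{P}{2\pi l}\left(\frac{\mathrm{d}\,\Delta T}{\mathrm{d}\ln\omega}\right)^{-1}, $$

where $P/l$ is the absorbed power per unit heater length and $\eta$ collects frequency-independent terms, so $\kappa$ follows from a linear fit of $\Delta T$ against $\ln\omega$.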
We study encodings of the lambda-calculus into the pi-calculus in the unexplored case of calculi with non-determinism and failures. On the sequential side, we consider lambdafail, a new non-deterministic calculus in which intersection types control resources (terms); on the concurrent side, we consider spi, a pi-calculus in which non-determinism and failure rest upon a Curry-Howard correspondence between linear logic and session types. We present a typed encoding of lambdafail into spi and establish its correctness. Our encoding precisely explains the interplay of non-deterministic and fail-prone evaluation in lambdafail via typed processes in spi. In particular, it shows how failures in sequential evaluation (absence/excess of resources) can be neatly codified as interaction protocols.
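To make the "absence/excess of resources" reading concrete, here is a tiny toy model (ours alone; lambdafail's actual semantics is given by intersection types, not by this sketch): a function that uses its argument a fixed number of times is applied to a bag of resources, evaluation fails unless the bag is exactly the right size, and non-determinism arises from the order in which resources are consumed.

```python
from itertools import permutations

def apply_bag(n_uses, bag):
    """Toy reading of fail-prone, non-deterministic application:
    fail on absence or excess of resources; otherwise return the set
    of all possible consumption orders (the non-deterministic outcomes)."""
    if len(bag) != n_uses:
        return {"fail"}                              # wrong number of resources
    return {tuple(p) for p in permutations(bag)}     # all ways to consume them

print(apply_bag(2, ["a", "b"]))   # {('a', 'b'), ('b', 'a')}: two outcomes
print(apply_bag(2, ["a"]))        # {'fail'}: too few resources
```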
The COVID-19 pandemic has challenged authorities at different levels of government administration around the globe. When faced with diseases of this severity, it is useful for the authorities to have prediction tools to estimate in advance the impact on the health system and the human, material, and economic resources that will be necessary. In this paper, we construct an extended Susceptible-Exposed-Infected-Recovered model that incorporates the social structure of Mar del Plata, the fourth most inhabited city in Argentina and head of the Municipality of General Pueyrredon. Moreover, we consider detailed partitions of infected individuals according to the illness severity, as well as data of local health resources, to bring these predictions closer to the local reality. Tuning the corresponding epidemic parameters for COVID-19, we study an alternating quarantine strategy, in which a part of the population can circulate without restrictions at any time, while the rest is equally divided into two groups and goes on successive periods of normal activity and lockdown, each one with a duration of $\tau$ days. In addition, we implement a random testing strategy over the population. We found that $\tau = 7$ is a good choice for the quarantine strategy, since it matches the weekly cycle and reduces the infected population. Focusing on the health system, projecting from the situation as of September 30, we foresee difficulty in avoiding saturation of the ICU system, given the extremely low levels of mobility that would be required. In the worst case, our model estimates that four thousand deaths would occur, of which 30% could be avoided with proper medical attention. Nonetheless, we found that aggressive testing would allow an increase in the percentage of people that can circulate without restrictions, since the equipment required to deal with the additional critical patients is relatively low.
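A minimal sketch of the alternating-quarantine mechanism (ours; a well-mixed SEIR without the paper's social structure, severity partitions, or testing, and with illustrative placeholder rates):

```python
import numpy as np

def seir_alternating(beta=0.5, sigma=1/5, gamma=1/10, tau=7,
                     free_frac=0.2, eps=0.1, days=300, dt=0.05):
    """SEIR with three groups: a freely circulating fraction and two
    halves that alternate tau-day periods of activity and lockdown.
    Locked-down individuals keep a residual contact factor eps."""
    pop = np.array([free_frac, (1 - free_frac) / 2, (1 - free_frac) / 2])
    S = pop * (1 - 1e-4)
    E = np.zeros(3)
    I = pop * 1e-4
    R = np.zeros(3)
    peak = 0.0
    for step in range(int(days / dt)):
        t = step * dt
        a_on = int(t // tau) % 2 == 0            # is half A active now?
        act = np.array([1.0, 1.0 if a_on else eps, eps if a_on else 1.0])
        # force of infection from mixing proportional to activity
        lam = beta * np.dot(act, I) / np.dot(act, pop)
        new_inf = lam * act * S
        dS, dE = -new_inf, new_inf - sigma * E
        dI, dR = sigma * E - gamma * I, gamma * I
        S += dS * dt; E += dE * dt; I += dI * dt; R += dR * dt
        peak = max(peak, I.sum())
    return peak

for tau in (3, 7, 14):
    print(f"tau={tau:2d}: peak infected fraction {seir_alternating(tau=tau):.3f}")
```

Tracking the two halves separately is what makes $\tau$ matter: each half accumulates exposure only during its active periods.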
The definition of Linear Symmetry-Based Disentanglement (LSBD) proposed by Higgins et al. (2018) outlines the properties that should characterize a disentangled representation that captures the symmetries of data. However, it is not clear how to measure the degree to which a data representation fulfills these properties. We propose a metric for the evaluation of the level of LSBD that a data representation achieves. We provide a practical method to evaluate this metric and use it to evaluate the disentanglement of the data representations obtained for three datasets with underlying $SO(2)$ symmetries.
Learning low-dimensional representations that disentangle the underlying factors of variation in data has been posited as an important step towards interpretable machine learning with good generalization. To address the fact that there is no consensus on what disentanglement entails, Higgins et al. (2018) propose a formal definition for Linear Symmetry-Based Disentanglement, or LSBD, arguing that underlying real-world transformations give exploitable structure to data. Although several works focus on learning LSBD representations, such methods require supervision on the underlying transformations for the entire dataset, and cannot deal with unlabeled data. Moreover, none of these works provide a metric to quantify LSBD. We propose a metric to quantify LSBD representations that is easy to compute under certain well-defined assumptions. Furthermore, we present a method that can leverage unlabeled data, such that LSBD representations can be learned with limited supervision on transformations. Using our LSBD metric, our results show that limited supervision is indeed sufficient to learn LSBD representations.
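A minimal numpy sketch of the flavor of such a metric for $SO(2)$ (our simplified reading, not the authors' exact definition): given codes for data points and for their rotated counterparts, fit an integer frequency k so that a 2-D rotation by k*theta best maps one onto the other, and report the residual.

```python
import numpy as np

def rot(a):
    """2-D rotation matrix for angle a."""
    return np.array([[np.cos(a), -np.sin(a)],
                     [np.sin(a),  np.cos(a)]])

def lsbd_residual(z, z_rot, thetas, max_k=8):
    """Score how linearly a 2-D code transforms under SO(2): minimum,
    over integer frequencies k, of the mean squared error between
    R(k*theta) z and the code of the rotated input. 0 = perfectly
    equivariant in this simplified LSBD sense."""
    best = np.inf
    for k in range(1, max_k + 1):
        pred = np.stack([rot(k * t) @ v for v, t in zip(z, thetas)])
        best = min(best, np.mean(np.sum((pred - z_rot) ** 2, axis=1)))
    return best

# sanity check: codes that rotate at frequency k=2 score ~0
rng = np.random.default_rng(0)
z = rng.normal(size=(100, 2))
thetas = rng.uniform(0, 2 * np.pi, size=100)
z_rot = np.stack([rot(2 * t) @ v for v, t in zip(z, thetas)])
print(lsbd_residual(z, z_rot, thetas))  # ~0.0
```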
91 - Lucia A. Perez 2020
We calculate the void probability function (VPF) in simulations of Lyman-$\alpha$ emitters (LAEs) across a wide redshift range ($z=3.1, 4.5, 5.7, 6.6$). The VPF measures the zero-point correlation function (i.e. places devoid of galaxies) and naturally connects to higher order correlation functions, while being computationally simple to calculate. We explore the Poissonian and systematic errors on the VPF, specify its accuracy as a function of average source density and the volume probed, and provide the appropriate size scales to measure the VPF. At small radii the accuracy of the VPF is limited by galaxy density, while at large radii the VPF is limited by the number of independent volumes probed. We also offer guidelines for understanding and quantifying the error in the VPF. We approximate the error in the VPF by using independent sub-volumes of the catalogs, after finding that jackknife statistics underestimate the uncertainty. We use the VPF to probe the strength of higher order correlation functions by measuring and examining the hierarchical scaling between the correlation functions using count-in-cells. The negative binomial model (NBM) has been shown to best describe the scaling between the two-point correlation function and the VPF for low-redshift galaxy observations. We further test the fit of the NBM by directly deriving the volume-averaged two-point correlation function from the VPF and vice versa. We find the NBM best describes the $z=3.1, 4.5, 5.7$ simulated LAEs, with a 1$\sigma$ deviation from the model in the $z=6.6$ catalog. This suggests that LAEs show higher order clustering terms similar to those of normal low-redshift galaxies.
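A minimal count-in-cells sketch (ours) of how the VPF is measured and compared with the negative binomial model, whose standard form $P_0 = (1 + \bar N \bar\xi)^{-1/\bar\xi}$ comes from the low-redshift literature; the box size, cell counts, and Poisson toy catalog below are illustrative only.

```python
import numpy as np

def vpf(points, box, radius, n_cells=2000, rng=None):
    """Void probability function: fraction of randomly placed spheres
    of the given radius (in a periodic box) containing zero galaxies."""
    rng = rng or np.random.default_rng(0)
    centers = rng.uniform(0, box, size=(n_cells, 3))
    d = np.abs(points[None, :, :] - centers[:, None, :])
    d = np.minimum(d, box - d)                      # periodic wrapping
    counts = (np.sum(d ** 2, axis=2) < radius ** 2).sum(axis=1)
    return np.mean(counts == 0), counts

def nbm_p0(nbar, xibar):
    """Negative binomial model prediction for the VPF, given the mean
    count nbar and the volume-averaged two-point function xibar."""
    return (1.0 + nbar * xibar) ** (-1.0 / xibar)

# toy check on an unclustered (Poisson) catalog: as xibar -> 0 the
# NBM tends to the Poisson VPF, exp(-nbar)
rng = np.random.default_rng(1)
box, n_gal, r = 100.0, 1000, 5.0
pts = rng.uniform(0, box, size=(n_gal, 3))
p0, counts = vpf(pts, box, r)
print(p0, np.exp(-counts.mean()), nbm_p0(counts.mean(), 1e-6))  # roughly agree
```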
This article presents the complexity of reachability decision problems for parametric Markov decision processes (pMDPs), an extension of Markov decision processes (MDPs) where transition probabilities are described by polynomials over a finite set of parameters. In particular, we study the complexity of finding values for these parameters such that the induced MDP satisfies some maximal or minimal reachability probability constraints. We discuss different variants depending on the comparison operator in the constraints and the domain of the parameter values. We improve all known lower bounds for this problem, and notably provide ETR-completeness results for distinct variants of this problem.
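As a toy illustration (ours, not an example from the paper) of why the existential theory of the reals (ETR) is the natural yardstick: consider a pMDP with a single parameter $p$ in which $s_0$ moves to $s_1$ with probability $p$ and to $s_1'$ with probability $1-p$, while $s_1$ reaches the target $t$ with probability $p$ and $s_1'$ reaches it with probability $1-p$ (the remaining mass goes to a sink). The induced reachability probability is the polynomial

$$ \Pr(\lozenge t) = p^2 + (1-p)^2, $$

so asking for a valuation achieving at least $3/4$ is exactly the ETR sentence $\exists p.\; 0 \le p \le 1 \,\wedge\, p^2 + (1-p)^2 \ge 3/4$.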
We investigate the inner regions of the Milky Way with a sample of unprecedented size and coverage thanks to APOGEE DR16 and {\it Gaia} DR3 data. Our inner Galactic sample has more than 26,000 stars within $|X_{\rm Gal}| < 5$ kpc, $|Y_{\rm Gal}| < 3.5$ kpc, $|Z_{\rm Gal}| < 1$ kpc, and we also carry out the analysis for a foreground-cleaned sub-sample of 8,000 stars more representative of the bulge-bar populations. The inner Galaxy shows a clear chemical discontinuity in key abundance ratios [$\alpha$/Fe], [C/N], and [Mn/O], probing different enrichment timescales, which suggests a star formation gap (quenching) between the high- and low-$\alpha$ populations. For the first time, we are able to fully characterize the different populations co-existing in the innermost regions of the Galaxy via a joint analysis of the distributions of rotational velocities, metallicities, orbital parameters, and chemical abundances. The chemo-kinematic analysis reveals the presence of the bar; of an inner thin disk; of a thick disk; and of a broad-metallicity population with a large velocity dispersion, indicative of a pressure-supported component. We find and characterize, chemically and kinematically, a group of counter-rotating stars, which could be the result of a gas-rich merger event or simply of clumpy star formation during the earliest phases of the early disk that migrated into the bulge. Finally, based on the 6D information, we assign each star a probability of being on a bar orbit and find that most of the stars with large bar-orbit probabilities come from the innermost 3 kpc. Even stars with a high probability of belonging to the bar show the chemical bimodality in the [$\alpha$/Fe] vs. [Fe/H] diagram. This suggests bar trapping to be an efficient mechanism, explaining why stars on bar orbits do not show a significantly distinct chemical abundance ratio signature.
We study the (parameter) synthesis problem for one-counter automata with parameters. One-counter automata are obtained by extending classical finite-state automata with a counter whose value can range over non-negative integers and be tested for zero. The updates and tests applicable to the counter can further be made parametric by introducing a set of integer-valued variables called parameters. The synthesis problem for such automata asks whether there exists a valuation of the parameters such that all infinite runs of the automaton satisfy some omega-regular property. Lechner showed that (the complement of) the problem can be encoded in a restricted one-alternation fragment of Presburger arithmetic with divisibility. In this work, (i) we argue that said fragment, called $\forall\exists_R$PAD$^+$, is unfortunately undecidable. Nevertheless, by a careful re-encoding of the problem into a decidable restriction of $\forall\exists_R$PAD$^+$, (ii) we prove that the synthesis problem is decidable in general and in N2EXP for several fixed omega-regular properties. Finally, (iii) we give a polynomial-space algorithm for the special case of the problem where parameters can only be used in tests, and not updates, of the counter.
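As a toy illustration (ours, not from the paper) of why divisibility constraints arise: take a parametric one-counter automaton that starts with counter value $0$, has a self-loop adding the parameter $p$, and a single exit transition guarded by the test "counter $= q$". The exit is reachable iff some number of loop iterations $k$ satisfies

$$ \exists k \in \mathbb{N}.\; k\,p = q \;\iff\; p \mid q, $$

which is why synthesis questions about such automata compile naturally into Presburger arithmetic with divisibility.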
70 - I. A. Perez 2020
The frequent emergence of diseases with the potential to become threats at local and global scales, such as influenza A(H1N1), SARS, MERS, and recently the COVID-19 disease, makes it crucial to keep designing models of disease propagation and strategies to prevent or mitigate their effects on populations. Since isolated systems are exceptionally rare in any context, especially in human contact networks, here we examine the susceptible-infected-recovered model of disease spreading in a multiplex network formed by two distinct networks or layers, interconnected through a fraction $q$ of shared individuals (overlap). We model the interactions through weighted networks, because person-to-person interactions are diverse (or disordered); weights represent the contact times of the interactions. Using branching theory supported by simulations, we analyze a social distancing strategy that reduces the average contact time in both layers, where the intensity of the distancing is related to the topology of the layers. We find that the critical values of the distancing intensities, above which an epidemic can be prevented, increase with the overlap $q$. We also study the effect of the social distancing on the mutual giant component of susceptible individuals, which is crucial for maintaining the functionality of the system. In addition, we find that for relatively small values of the overlap $q$, social distancing policies might not be needed at all to maintain the functionality of the system.
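A minimal simulation sketch (ours, not the paper's branching-theory calculation): two random layers sharing a fraction q of individuals are flattened into one contact graph for brevity, edge weights stand for contact times, and distancing scales them down by a factor (1 - cut). The use of networkx and all parameter values are our choices for illustration.

```python
import numpy as np
import networkx as nx

def sir_multiplex(n=2000, k=4, q=0.3, cut=0.0, t_rec=5, beta_w=0.2, seed=0):
    """Weighted SIR on two ER layers sharing a fraction q of nodes.
    Edge weights are contact times, reduced by the distancing factor;
    per-step infection probability is 1 - exp(-beta_w * w)."""
    rng = np.random.default_rng(seed)
    g1 = nx.fast_gnp_random_graph(n, k / n, seed=seed)
    g2 = nx.fast_gnp_random_graph(n, k / n, seed=seed + 1)
    m = int(q * n)  # the first m node ids exist in both layers (overlap)
    g2 = nx.relabel_nodes(g2, {v: (v if v < m else v + n) for v in g2})
    g = nx.compose(g1, g2)                       # flatten the two layers
    for u, v in g.edges:
        g.edges[u, v]["w"] = rng.exponential(1.0) * (1 - cut)
    status = {v: "S" for v in g}                 # S, I, or R
    clock = {}
    patient_zero = rng.choice(list(g))
    status[patient_zero] = "I"; clock[patient_zero] = 0
    while any(s == "I" for s in status.values()):
        for v in [u for u in g if status[u] == "I"]:
            for u in g.neighbors(v):
                if status[u] == "S" and \
                   rng.random() < 1 - np.exp(-beta_w * g.edges[v, u]["w"]):
                    status[u] = "I"; clock[u] = 0
            clock[v] += 1
            if clock[v] >= t_rec:
                status[v] = "R"
    return sum(s == "R" for s in status.values()) / g.number_of_nodes()

for cut in (0.0, 0.3, 0.6):
    print(f"distancing {cut:.1f}: final epidemic size {sir_multiplex(cut=cut):.2f}")
```

Raising cut shrinks the final epidemic size, the qualitative effect the abstract quantifies via the critical distancing intensities.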