Application designers often face the question of whether to store large objects in a filesystem or in a database. Often this decision is made for application design simplicity; sometimes performance measurements are also used. This paper looks at the question of fragmentation, one of the operational issues that can affect the performance and manageability of a system as deployed over the long term. As expected from the common wisdom, objects smaller than 256KB are best stored in a database, while objects larger than 1MB are best stored in the filesystem. Between 256KB and 1MB, the read:write ratio and the rate of object overwrite or replacement are important factors. We used the notion of storage age, the number of object overwrites, as a way of normalizing wall-clock time. Storage age allows our results, or similar such results, to be applied across a range of read:write ratios and object replacement rates.
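A minimal sketch of how storage age might be computed from a workload trace, following the abstract's own definition (number of object overwrites); the trace format and the name `storage_age` are illustrative assumptions, not from the paper.

```python
from collections import defaultdict

def storage_age(write_trace):
    """Storage age per the abstract's definition: the number of times
    stored objects have been overwritten, used to normalize wall-clock
    time across different workloads.

    write_trace: iterable of object IDs, one entry per write operation.
    Returns the mean number of overwrites per stored object.
    """
    writes = defaultdict(int)
    for obj_id in write_trace:
        writes[obj_id] += 1
    # The first write creates an object; each subsequent write overwrites it.
    overwrites = sum(n - 1 for n in writes.values())
    return overwrites / len(writes) if writes else 0.0

# Example: objects "a" and "b" each overwritten once -> storage age 1.0
print(storage_age(["a", "b", "a", "b"]))
```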
When an object impacts the free surface of a liquid, it ejects a splash curtain upwards and creates an air cavity below the free surface. As the object descends into the liquid, the air cavity eventually closes under the action of hydrostatic pressure (deep seal). In contrast, the splash curtain may splash outwards or dome over and close, creating a surface seal. In this paper we experimentally investigate how the splash curtain dynamics are governed by the interplay of the cavity pressure difference, gravity, and surface tension, and how they control whether or not a surface seal occurs. Based on the experimental observations and measurements, we develop an analytical model to describe the trajectory and dynamics of the splash curtain. The model enables us to reveal the scaling relationship for the dimensionless surface seal time and to discover the existence of a critical dimensionless number that predicts the occurrence of surface seal. This scaling indicates that the most significant parameter governing the occurrence of surface seal is the velocity of the airflow rushing into the cavity. This is in contrast to the current understanding, which considers the impact velocity to be the determining parameter.
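For context, the dimensionless groups conventionally used to nondimensionalize water-entry problems (a sketch of standard definitions, not the paper's specific model): with impact velocity $U_0$, object diameter $d$, liquid density $\rho$, surface tension $\sigma$, and gravitational acceleration $g$,

$$\mathrm{Fr} = \frac{U_0^2}{g\,d}, \qquad \mathrm{We} = \frac{\rho\, U_0^2\, d}{\sigma}, \qquad t_{ss}^{*} = \frac{t_{ss}\, U_0}{d},$$

where $t_{ss}$ is the surface seal time. The abstract's central claim is that a scaling built on the velocity of the airflow rushing into the cavity, rather than on $U_0$, is what governs surface seal.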
We review some aspects, especially those we can tackle analytically, of a minimal model of a closed economy, analogous to the kinetic theory model of ideal gases, in which the agents exchange wealth amongst themselves such that the total wealth is conserved and each individual agent saves a fraction $\lambda$ ($0 < \lambda < 1$) of their wealth before each transaction. We are interested in the special case where the fraction $\lambda$ is constant for all the agents (global saving propensity) in the closed system. We show by moment calculations that the resulting wealth distribution cannot be the Gamma distribution that was conjectured in Phys. Rev. E 70, 016104 (2004). We also derive a form for the distribution at low wealth, which is a new result.
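A minimal simulation sketch of the model described above, assuming the standard pairwise exchange rule with a global saving propensity $\lambda$ (the Chakraborti-Chakrabarti rule); the parameter values are illustrative.

```python
import numpy as np

def simulate(n_agents=1000, n_steps=200_000, lam=0.5, seed=0):
    """Kinetic wealth exchange with a global saving propensity lam.

    At each step, two randomly chosen agents keep a fraction lam of
    their wealth, pool the remainder, and split the pool randomly.
    The total wealth is conserved at every step.
    """
    rng = np.random.default_rng(seed)
    x = np.ones(n_agents)  # start every agent with unit wealth
    for _ in range(n_steps):
        i, j = rng.choice(n_agents, size=2, replace=False)
        pool = (1.0 - lam) * (x[i] + x[j])
        eps = rng.random()  # random split fraction, uniform on [0, 1)
        x[i] = lam * x[i] + eps * pool
        x[j] = lam * x[j] + (1.0 - eps) * pool
    return x

wealth = simulate()
print(wealth.mean())  # stays at 1.0 by conservation of total wealth
```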
In cases where both components of a binary system show oscillations, asteroseismology has been proposed as a method to identify the system. For KIC 2568888, observed with $Kepler$, we detect oscillation modes for two red giants in a single power density spectrum. Through an asteroseismic study we investigate whether the stars have similar properties, which could indicate that they are physically bound in a binary system. While one star lies on the red giant branch (RGB), the other, more evolved star is either an RGB or an asymptotic-giant-branch star. We find similar ages for the red giants and a mass ratio close to 1. Based on these asteroseismic results we propose KIC 2568888 as a rare candidate binary system ($\sim 0.1\%$ chance). However, when combining the asteroseismic data with ground-based $BVI$ photometry we estimated different distances for the stars, which we cross-checked with $Gaia$ DR2. From $Gaia$ we obtained for one object a distance broadly consistent with the distances from $BVI$ photometry. For the other object we have a negative parallax and thus no reliable $Gaia$ distance solution yet. The derived distances challenge a binary interpretation and may point either to a triple system, which could explain the visible magnitudes, or to a rare chance alignment ($\sim 0.05\%$ chance based on stellar magnitudes). This probability would be even smaller if calculated for close pairs of stars with a mass ratio close to unity in addition to similar magnitudes, which may indicate that a binary scenario is more favourable.
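For reference, the standard asteroseismic scaling relations commonly used to estimate masses and radii from the global observables $\nu_{\max}$ (frequency of maximum oscillation power) and $\Delta\nu$ (large frequency separation); this is a sketch of the conventional approach, not necessarily the exact relations adopted in the paper:

$$\frac{M}{M_\odot} \simeq \left(\frac{\nu_{\max}}{\nu_{\max,\odot}}\right)^{3} \left(\frac{\Delta\nu}{\Delta\nu_{\odot}}\right)^{-4} \left(\frac{T_{\mathrm{eff}}}{T_{\mathrm{eff},\odot}}\right)^{3/2}, \qquad \frac{R}{R_\odot} \simeq \frac{\nu_{\max}}{\nu_{\max,\odot}} \left(\frac{\Delta\nu}{\Delta\nu_{\odot}}\right)^{-2} \left(\frac{T_{\mathrm{eff}}}{T_{\mathrm{eff},\odot}}\right)^{1/2}.$$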
Instrumental variables (IVs) are extensively used to estimate treatment effects when the treatment and outcome are confounded by unmeasured confounders; however, weak IVs are often encountered in empirical studies and can make the resulting inferences unreliable. Many studies have considered building a stronger IV from the original, possibly weak, IV in the design stage of a matched study, at the cost of discarding some samples from the analysis. It is widely accepted that strengthening an IV tends to make nonparametric tests more powerful and increases the power of sensitivity analyses in large samples. In this article, we re-evaluate this conventional wisdom to bring new insights into the topic. We consider matched observational studies from three perspectives. First, we evaluate the trade-off between IV strength and sample size for nonparametric tests assuming the IV is valid, and exhibit conditions under which strengthening an IV increases power and, conversely, conditions under which it decreases power. Second, we derive a necessary condition for a valid sensitivity analysis model with continuous doses. We show that the $\Gamma$ sensitivity analysis model, which has previously been used to conclude that strengthening an IV increases the power of sensitivity analyses in large samples, does not apply to the continuous IV setting, so this previously reached conclusion may be invalid. Third, we quantify the bias of the Wald estimator with a possibly invalid IV under an oracle, and leverage it to develop a valid sensitivity analysis framework; under this framework, we show that strengthening an IV may amplify or mitigate the bias of the estimator, and may or may not increase the power of sensitivity analyses. We also discuss how to better adjust for the observed covariates when building an IV in matched studies.
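To fix ideas, a standard textbook illustration with a binary IV (the paper's setting also covers continuous doses, which this sketch does not): under the linear structural model $Y = \beta D + \gamma Z + U$ with $Z$ independent of $U$, where $\gamma \neq 0$ encodes an invalid IV that directly affects the outcome, the Wald estimand is

$$\beta_{\mathrm{Wald}} = \frac{E[Y \mid Z=1] - E[Y \mid Z=0]}{E[D \mid Z=1] - E[D \mid Z=0]} = \beta + \frac{\gamma}{E[D \mid Z=1] - E[D \mid Z=0]}.$$

A stronger IV enlarges the compliance denominator and thus shrinks the contribution of a fixed $\gamma$; but if building the stronger IV also changes $\gamma$, the net bias can grow or shrink, consistent with the article's point that strengthening an IV may amplify or mitigate the bias.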
With the advent of increasingly elaborate experimental techniques in physics, chemistry and materials science, measured data are becoming bigger and more complex. The observables are typically a function of several stimuli, resulting in multidimensional data sets that span a range of experimental parameters. As an example, a common approach to studying ferroelectric switching is to observe the effects of an applied electric field, but switching can also be enacted by pressure and is influenced by strain fields, material composition, temperature, time, etc. Moreover, the parameters are usually interdependent, so that decoupling them into univariate measurements or analyses may not be straightforward. On the other hand, both explicit and hidden parameters provide an opportunity to gain deeper insight into the measured properties, provided there exists a well-defined path to capture and analyze such data. Here, we introduce a new, two-dimensional approach to representing the hysteretic response of a material system to an applied electric field. Using ferroelectric polarization as a model hysteretic property, we demonstrate how explicit consideration of the electromechanical response to two control voltages rather than one enables significantly more transparent and robust interpretation of the observed hysteresis, such as differentiating between charge trapping and ferroelectricity. Furthermore, we demonstrate how the new data representation readily fits into a variety of machine-learning methodologies, from unsupervised classification of the origins of hysteretic response via linear clustering algorithms to neural-network-based inference of the sample temperature from the specific morphology of the hysteresis.
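A minimal sketch of the unsupervised-classification step described above, assuming hypothetical data: each sample's hysteretic response is a 2D map over the two control voltages, flattened into a feature vector and clustered with k-means as one simple choice (the specific preprocessing and clustering algorithm used in the paper may differ).

```python
import numpy as np
from sklearn.cluster import KMeans

# Hypothetical data: n_samples responses, each a 2D map over a grid of
# (first control voltage, second control voltage) points.
n_samples, grid = 500, 64
responses = np.random.rand(n_samples, grid, grid)  # placeholder maps

# Flatten each 2D hysteresis map into one feature vector per sample.
features = responses.reshape(n_samples, -1)

# Cluster the response morphologies, e.g. to separate
# charge-trapping-like from ferroelectric-like behavior.
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(features)
print(np.bincount(labels))  # number of samples assigned to each cluster
```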