84 - Ari Belenkiy 2016
In February 1700, Isaac Newton needed a precise tropical year to design a new universal calendar that would supersede the Gregorian one. However, 17th-century astronomers were uncertain of the long-term variation in the inclination of the Earth's axis and were suspicious of Ptolemy's equinox observations. As a result, they produced a wide range of tropical years. Facing this problem, Newton attempted to compute the length of the year on his own, using the ten ancient equinox observations reported by the famous Greek astronomer Hipparchus of Rhodes. Though Newton had a very thin sample of data, he obtained a tropical year only a few seconds longer than the correct length. The reason lies in Newton's application of a technique similar to modern regression analysis: Newton wrote down the first of the two so-called normal equations known from the ordinary least-squares (OLS) method. In that procedure, Newton seems to have been the first to employ the mean (average) value of the data set, while the other leading astronomers of the era (Tycho Brahe, Galileo, and Kepler) used the median. Fifty years after Newton, in 1750, Newton's method was rediscovered and enhanced by Tobias Mayer. Remarkably, the same regression method served with distinction in the late 1920s, when the founding fathers of modern cosmology, Georges Lemaître (1927), Edwin Hubble (1929), and Willem de Sitter (1930), employed it to derive the Hubble constant.
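The key step attributed to Newton above, using the mean of the data set, is exactly what the first OLS normal equation enforces: the fitted line passes through the point of sample means, so the residuals sum to zero. A minimal sketch, using made-up numbers rather than Hipparchus' actual equinox observations:

```python
# Fit a line by solving the two OLS normal equations directly.
# The data below are hypothetical (observation epochs in years and
# accumulated equinox drift in days), chosen only to illustrate the method.
def fit_line(xs, ys):
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    # second normal equation gives the slope from mean-centred products
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    # first normal equation: residuals sum to zero, so the line
    # passes through the sample means
    a = my - b * mx
    return a, b

years = [0, 10, 20, 30, 40]        # hypothetical observation epochs
drift = [0.0, 2.4, 4.9, 7.3, 9.8]  # hypothetical equinox drift (days)
a, b = fit_line(years, drift)
# residuals sum to (numerically) zero, as the first normal equation demands
assert abs(sum(y - (a + b * x) for x, y in zip(years, drift))) < 1e-9
```

The slope `b` is the drift rate per year; in Newton's setting such a rate converts into a correction to the assumed length of the tropical year.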
The dual problem of testing the predictive significance of a particular covariate and identifying the set of relevant covariates is common in applied research and methodological investigations. To study this problem in the context of functional linear regression models with predictor variables observed over a grid and a scalar response, we consider basis expansions of the functional covariates and apply the likelihood ratio test. Based on p-values from testing each predictor, we propose a new variable selection method, which is consistent in selecting the relevant predictors from a set of available predictors that is allowed to grow with the sample size n. Numerical simulations suggest that the proposed variable selection procedure outperforms existing methods found in the literature. A real dataset from weather stations in Japan is analyzed.
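The basis-expansion-then-test idea can be sketched in a few lines: project each functional covariate onto a small basis, fit the resulting multiple regression, and compute a likelihood ratio statistic for dropping each predictor's block of coefficients. Everything below is illustrative (a small Fourier basis, Gaussian errors, two simulated predictors of which only `X1` is relevant), not the paper's actual procedure or code.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, K = 200, 50, 3                      # samples, grid size, basis size
grid = np.linspace(0.0, 1.0, m)
basis = np.column_stack([np.ones(m)] +
                        [np.sin(2 * np.pi * j * grid) for j in range(1, K)])

X1 = rng.normal(size=(n, m))              # relevant functional predictor
X2 = rng.normal(size=(n, m))              # irrelevant functional predictor
beta = np.sin(2 * np.pi * grid)           # true coefficient function
y = X1 @ beta / m + rng.normal(scale=0.1, size=n)

def scores(X):
    return X @ basis / m                  # basis coefficients of each curve

def rss(Z):
    coef, *_ = np.linalg.lstsq(Z, y, rcond=None)
    return float(np.sum((y - Z @ coef) ** 2))

Z = np.column_stack([np.ones(n), scores(X1), scores(X2)])
full = rss(Z)
lr = {  # LR statistic for dropping each predictor's coefficient block
    "X1": n * np.log(rss(Z[:, [0, 4, 5, 6]]) / full),
    "X2": n * np.log(rss(Z[:, [0, 1, 2, 3]]) / full),
}
# a large statistic (small p-value against chi-square with K degrees of
# freedom) flags the predictor as relevant
```

In the simulation, dropping `X1` inflates the residual sum of squares sharply while dropping `X2` barely changes it, which is what the p-value-based selection rule exploits.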
55 - Pierre E. Jacob 2013
This article considers the problem of storing the paths generated by a particle filter and, more generally, by a sequential Monte Carlo algorithm. It provides a theoretical result bounding the expected memory cost by $T + CN\log N$, where $T$ is the time horizon, $N$ is the number of particles, and $C$ is a constant, as well as an efficient algorithm to realise this bound. The theoretical result and the algorithm are illustrated with numerical experiments.
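The reason tree-based storage is cheap is that resampling makes ancestral lines coalesce: each surviving particle keeps only a pointer to its ancestor, and branches with no surviving descendants can be reclaimed. A toy sketch under simple assumptions (multinomial resampling, scalar Gaussian dynamics; this is an illustration of the idea, not the paper's implementation):

```python
import random

# Each node stores one state and a pointer to its ancestor. After
# resampling, particles whose branch dies out become unreachable, so
# live memory stays near T + C*N*log(N) instead of the naive T*N that
# storing every full path would cost. (In Python the garbage collector
# reclaims unreachable nodes; a manual implementation would keep
# reference counts and prune explicitly.)
class Node:
    __slots__ = ("parent", "value")
    def __init__(self, parent, value):
        self.parent, self.value = parent, value

random.seed(1)
N, T = 100, 50
leaves = [Node(None, 0.0) for _ in range(N)]
for _ in range(T):
    ancestors = [random.choice(leaves) for _ in range(N)]   # resampling
    leaves = [Node(a, a.value + random.gauss(0.0, 1.0)) for a in ancestors]

def live_count(leaves):
    """Count distinct nodes still reachable from the current generation."""
    seen = set()
    for node in leaves:
        while node is not None and id(node) not in seen:
            seen.add(id(node))
            node = node.parent
    return len(seen)

print(live_count(leaves), "live nodes vs naive", N * (T + 1))
```

Running this shows the reachable tree is far smaller than the $N(T+1)$ nodes that storing all paths separately would require, in line with the bound quoted above.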
Rank-order relational data, in which each actor ranks the others according to some criterion, often arise from sociometric measurements of judgment (e.g., self-reported interpersonal interaction) or preference (e.g., relative liking). We propose a class of exponential-family models for rank-order relational data and derive a new class of sufficient statistics for such data, which assume no more than within-subject ordinal properties. Application of MCMC MLE to this family allows us to estimate effects for a variety of plausible mechanisms governing rank structure in a cross-sectional context, and to model the evolution of such structures over time. We apply this framework to model the evolution of relative liking judgments in an acquaintance process, and to model recall of relative volume of interpersonal interaction among members of a technology education program.
Exponential-family random graph models (ERGMs) provide a principled and flexible way to model and simulate features common in social networks, such as propensities for homophily, mutuality, and friend-of-a-friend triad closure, through choice of model terms (sufficient statistics). However, those ERGMs modeling the more complex features have, to date, been limited to binary data: presence or absence of ties. Thus, analysis of valued networks, such as those where counts, measurements, or ranks are observed, has necessitated dichotomizing them, losing information and introducing biases. In this work, we generalize ERGMs to valued networks. Focusing on modeling counts, we formulate an ERGM for networks whose ties are counts and discuss issues that arise when moving beyond the binary case. We introduce model terms that generalize and model common social network features for such data and apply these methods to a network dataset whose values are counts of interactions.
Models of dynamic networks --- networks that evolve over time --- have manifold applications. We develop a discrete-time generative model for social network evolution that inherits the richness and flexibility of the class of exponential-family random graph models. The model --- a Separable Temporal ERGM (STERGM) --- facilitates separable modeling of the tie duration distributions and the structural dynamics of tie formation. We develop likelihood-based inference for the model, and provide computational algorithms for maximum likelihood estimation. We illustrate the interpretability of the model in analyzing a longitudinal network of friendship ties within a school.
This work presents an empirical study of the evolution of the personal income distribution in Brazil. Yearly samples available from 1978 to 2005 were studied, and evidence was found that the complementary cumulative distribution of personal income for the economically less favored 99% of the population is well represented by a Gompertz curve of the form $G(x) = \exp[\exp(A - Bx)]$, where $x$ is the normalized individual income. The complementary cumulative distribution of the remaining 1% richest part of the population is well represented by a Pareto power law distribution $P(x) = \beta x^{-\alpha}$. This result means that, similarly to other countries, Brazil's income distribution is characterized by a well-defined two-class system. The parameters $A$, $B$, $\alpha$, $\beta$ were determined by a mixture of boundary conditions, normalization, and fitting methods for every year in the time span of this study. Since the Gompertz curve is characteristic of growth models, its presence here suggests that these patterns in income distribution could be a consequence of the growth dynamics of the underlying economic system. In addition, we found that the percentage share of both the Gompertzian and Paretian components relative to the total income shows an approximately cyclical pattern with a period of about 4 years, whose maximum and minimum peaks in each component alternate about every 2 years. This finding suggests that the growth dynamics of Brazil's economic system might possibly follow Goodwin-type class model dynamics based on the application of the Lotka-Volterra equation to economic growth and cycles.
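One of the "boundary conditions" mentioned above can be illustrated concretely: requiring the Gompertz and Pareto branches to agree at the crossover income fixes $\beta$ once $A$, $B$, $\alpha$ and the crossover point are chosen. The sketch below uses made-up parameter values and an arbitrary crossover, not the fitted Brazilian ones, and keeps the abstract's unnormalized functional forms:

```python
import math

A, B, alpha = 2.0, 1.0, 2.5   # illustrative values, not the fitted ones
x_c = 4.0                     # hypothetical Gompertz/Pareto crossover income

def gompertz(x):
    # lower-income branch, G(x) = exp[exp(A - B*x)], on the abstract's
    # (unnormalized) scale
    return math.exp(math.exp(A - B * x))

# continuity at x_c is the boundary condition that determines beta:
# beta * x_c**(-alpha) == gompertz(x_c)
beta = gompertz(x_c) * x_c ** alpha

def pareto(x):
    return beta * x ** (-alpha)   # upper-tail branch, P(x) = beta * x^(-alpha)

def ccdf(x):
    # piecewise complementary distribution: Gompertz below the
    # crossover, Pareto power law above it
    return gompertz(x) if x < x_c else pareto(x)

assert abs(gompertz(x_c) - pareto(x_c)) < 1e-9   # branches meet at x_c
```

With the pieces joined this way, `ccdf` is continuous and strictly decreasing in income, as a complementary distribution must be; fitting then reduces to estimating $A$, $B$, $\alpha$ and the crossover from data.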
We consider families of Abelian integrals arising from perturbations of planar Hamiltonian systems. The tangential center focus problem asks for the conditions under which these integrals vanish identically. The problem is closely related to the monodromy problem, which asks when the monodromy of a vanishing cycle generates the whole homology of the level curves of the Hamiltonian. We solve both these questions for the case when the Hamiltonian is hyperelliptic. As a side-product, we solve the corresponding problems for the 0-dimensional Abelian integrals defined by Gavrilov and Movasati.