
Gaussian Tree Constraints Applied to Acoustic Linguistic Functional Data

Posted by John Aston
Publication date: 2014
Research field: Mathematical Statistics
Paper language: English





Evolutionary models of languages are usually considered to take the form of trees. With the development of so-called tree constraints the plausibility of the tree model assumptions can be addressed by checking whether the moments of observed variables lie within regions consistent with trees. In our linguistic application, the data set comprises acoustic samples (audio recordings) from speakers of five Romance languages or dialects. We wish to assess these functional data for compatibility with a hereditary tree model at the language level. A novel combination of canonical function analysis (CFA) with a separable covariance structure provides a method for generating a representative basis for the data. This resulting basis is formed of components which emphasize language differences whilst maintaining the integrity of the observational language-groupings. A previously unexploited Gaussian tree constraint is then applied to component-by-component projections of the data to investigate adherence to an evolutionary tree. The results indicate that while a tree model is unlikely to be suitable for modeling all aspects of the acoustic linguistic data, certain features of the spoken Romance languages highlighted by the separable-CFA basis may indeed be suitably modeled as a tree.
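The abstract does not spell out the tree constraint used. As a minimal sketch of one standard necessary condition: a Gaussian tree model induces an additive tree metric through the information distance d_ij = -log|rho_ij| between pairwise correlations, and any additive tree metric must satisfy the four-point condition. The function names below are hypothetical, and this check is only one ingredient of moment-based tree constraints:

```python
import numpy as np
from itertools import combinations

def four_point_holds(D, tol=1e-9):
    """Check the four-point condition for a distance matrix D:
    for every quadruple {i,j,k,l}, the two largest of the sums
    D[i,j]+D[k,l], D[i,k]+D[j,l], D[i,l]+D[j,k] must be equal."""
    n = D.shape[0]
    for i, j, k, l in combinations(range(n), 4):
        s = sorted([D[i, j] + D[k, l],
                    D[i, k] + D[j, l],
                    D[i, l] + D[j, k]])
        if s[2] - s[1] > tol:
            return False
    return True

def tree_compatible(R, tol=1e-9):
    """In a Gaussian tree model, d_ij = -log|rho_ij| is additive along
    the tree, so the induced distances must pass the four-point check."""
    D = -np.log(np.abs(R))
    np.fill_diagonal(D, 0.0)
    return four_point_holds(D, tol)
```

A violation of the four-point condition on these distances rules out an exact Gaussian tree model for the chosen projection; the constraints in the paper additionally work with regions of moments rather than exact equalities.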




Read also

Positron Emission Tomography (PET) is an imaging technique which can be used to investigate chemical changes in human biological processes such as cancer development or neurochemical reactions. Most dynamic PET scans are currently analyzed based on the assumption that linear first order kinetics can be used to adequately describe the system under observation. However, there has recently been strong evidence that this is not the case. In order to provide an analysis of PET data which is free from this compartmental assumption, we propose a nonparametric deconvolution and analysis model for dynamic PET data based on functional principal component analysis. This yields flexibility in the possible deconvolved functions while still performing well when a linear compartmental model setup is the true data generating mechanism. As the deconvolution needs to be performed on only a relatively small number of basis functions rather than voxel by voxel in the entire 3-D volume, the methodology is both robust to typical brain-imaging noise levels and computationally efficient. The new methodology is investigated through simulations in both 1-D functions and 2-D images and also applied to a neuroimaging study whose goal is the quantification of opioid receptor concentration in the brain.
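As a rough illustration of the functional-principal-component step (not the authors' deconvolution model), a principal component basis can be obtained from an SVD of discretized, mean-centered curves; the function name and example data here are hypothetical:

```python
import numpy as np

def fpca(curves, n_components=3):
    """Functional PCA on curves sampled on a common grid.
    curves: (n_curves, n_gridpoints) array.
    Returns (mean, components, scores); each curve is approximated
    as mean + scores @ components."""
    mean = curves.mean(axis=0)
    centered = curves - mean
    # The rows of Vt are the principal component functions (evaluated
    # on the grid); projecting the centered curves gives the scores.
    U, s, Vt = np.linalg.svd(centered, full_matrices=False)
    components = Vt[:n_components]
    scores = centered @ components.T
    return mean, components, scores
```

When the curves are smooth, a handful of components typically captures most of the variance, which is what makes the downstream deconvolution cheap relative to a voxel-by-voxel analysis.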
One of the classic concerns in statistics is determining whether two samples come from the same population, i.e. homogeneity testing. In this paper, we propose a homogeneity test in the context of Functional Data Analysis, adopting an idea from multivariate data analysis: the data depth plot (DD-plot). The DD-plot is a generalization of the univariate Q-Q plot (quantile-quantile plot). We propose some statistics based on these DD-plots, and we use bootstrapping techniques to estimate their distributions. We estimate the finite-sample size and power of our test via simulation, obtaining better results than other homogeneity tests proposed in the literature. Finally, we illustrate the procedure on samples of real heterogeneous data and obtain consistent results.
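A minimal sketch of the DD-plot idea, assuming Mahalanobis depth for concreteness (the paper may use a different depth notion, and all names here are hypothetical):

```python
import numpy as np

def mahalanobis_depth(points, sample):
    """Depth of each row of `points` with respect to `sample`:
    1 / (1 + squared Mahalanobis distance to the sample mean)."""
    mu = sample.mean(axis=0)
    cov_inv = np.linalg.inv(np.cov(sample, rowvar=False))
    diff = points - mu
    d2 = np.einsum('ij,jk,ik->i', diff, cov_inv, diff)
    return 1.0 / (1.0 + d2)

def dd_plot_coords(x, y):
    """DD-plot coordinates: depth of the pooled points in x vs in y.
    Under homogeneity the scatter concentrates near the diagonal."""
    pooled = np.vstack([x, y])
    return mahalanobis_depth(pooled, x), mahalanobis_depth(pooled, y)
```

Under homogeneity the DD-plot scatter hugs the 45-degree line, so a statistic such as the mean absolute deviation from the diagonal, calibrated by bootstrap resampling, yields a test.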
Crime prevention strategies based on early intervention depend on accurate risk assessment instruments for identifying high risk youth. It is important in this context that the instruments be convenient to administer, which means, in particular, that they must be reasonably brief; adaptive screening tests are useful for this purpose. Although item response theory (IRT) bears a long and rich history in producing reliable adaptive tests, adaptive tests constructed using classification and regression trees are becoming a popular alternative to the traditional IRT approach for item selection. On the upside, unlike IRT, tree-based questionnaires require no real-time parameter estimation during administration. On the downside, while item response theory provides robust criteria for terminating the exam, the stopping criterion for a tree-based adaptive test (the maximum tree depth) is unclear. We present a Bayesian decision theory approach for characterizing the trade-offs of administering tree-based questionnaires of different lengths. This formalism involves specifying 1) a utility function measuring the goodness of the assessment; 2) a target population over which this utility should be maximized; 3) an action space comprised of different-length assessments, populated via a tree-fitting algorithm. Using this framework, we provide uncertainty estimates for the trade-offs of shortening the exam, allowing practitioners to determine an optimal exam length in a principled way. The method is demonstrated through an application to youth delinquency risk assessment in Honduras.
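The decision-theoretic trade-off can be sketched once posterior draws of assessment quality per candidate exam length are available (the draws below are hypothetical inputs, not output of the paper's tree-fitting step, and the linear utility is one simple choice among many):

```python
import numpy as np

def expected_utility(acc_samples, lengths, cost_per_item):
    """acc_samples: (n_draws, n_lengths) posterior draws of assessment
    accuracy for each candidate exam length. The utility trades accuracy
    against administration burden: u = accuracy - cost * length.
    Returns posterior mean and standard deviation of the utility."""
    util = acc_samples - cost_per_item * np.asarray(lengths)
    return util.mean(axis=0), util.std(axis=0)

def optimal_length(acc_samples, lengths, cost_per_item):
    """Exam length maximizing posterior expected utility."""
    mean_u, _ = expected_utility(acc_samples, lengths, cost_per_item)
    return lengths[int(np.argmax(mean_u))]
```

The posterior standard deviation of the utility is what lets a practitioner judge whether the gain from a longer exam is decisive or within the uncertainty band.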
We present and describe the GPFDA package for R. The package provides flexible functionalities for dealing with Gaussian process regression (GPR) models for functional data. Multivariate functional data, functional data with multidimensional inputs, and nonseparable and/or nonstationary covariance structures can be modeled. In addition, the package fits functional regression models where the mean function depends on scalar and/or functional covariates and the covariance structure is modeled by a GPR model. In this paper, we present the versatility of GPFDA with respect to mean function and covariance function specifications and illustrate the implementation of estimation and prediction of some models through reproducible numerical examples.
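GPFDA itself is an R package; as a language-neutral illustration of the underlying Gaussian process regression (not the package's API), a zero-mean GPR with a squared-exponential covariance can be sketched as:

```python
import numpy as np

def sqexp_kernel(x1, x2, lengthscale=1.0, variance=1.0):
    """Squared-exponential covariance between two sets of 1-D inputs."""
    d2 = (x1[:, None] - x2[None, :]) ** 2
    return variance * np.exp(-0.5 * d2 / lengthscale**2)

def gp_predict(x_train, y_train, x_test, noise=1e-4, **kern):
    """Posterior mean and pointwise variance of a zero-mean GP."""
    K = sqexp_kernel(x_train, x_train, **kern) + noise * np.eye(len(x_train))
    Ks = sqexp_kernel(x_test, x_train, **kern)
    Kss = sqexp_kernel(x_test, x_test, **kern)
    alpha = np.linalg.solve(K, y_train)
    mean = Ks @ alpha
    cov = Kss - Ks @ np.linalg.solve(K, Ks.T)
    return mean, np.diag(cov)
```

GPFDA extends this basic machinery to multivariate functional responses, multidimensional inputs, and nonseparable or nonstationary covariance structures, with the mean function driven by scalar and/or functional covariates.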
The Argo data is a modern oceanography dataset that provides unprecedented global coverage of temperature and salinity measurements in the upper 2,000 meters of depth of the ocean. We study the Argo data from the perspective of functional data analysis (FDA). We develop spatio-temporal functional kriging methodology for mean and covariance estimation to predict temperature and salinity at a fixed location as a smooth function of depth. By combining tools from FDA and spatial statistics, including smoothing splines, local regression, and multivariate spatial modeling and prediction, our approach provides advantages over current methodologies that consider pointwise estimation at fixed depths. Our approach naturally leverages the irregularly-sampled data in space, time, and depth to fit a space-time functional model for temperature and salinity. The developed framework provides new tools to address fundamental scientific problems involving the entire upper water column of the oceans such as the estimation of ocean heat content, stratification, and thermohaline oscillation. For example, we show that our functional approach yields more accurate ocean heat content estimates than ones based on discrete integral approximations in pressure. Further, using the derivative function estimates, we obtain a new product of a global map of the mixed layer depth, a key component in the study of heat absorption and nutrient circulation in the oceans. The derivative estimates also reveal evidence for density