
Search for Evergreens in Science: A Functional Data Analysis

Published by: Jian Wang
Publication date: 2017
Research language: English





Evergreens in science are papers that display a continual rise in annual citations without decline, at least within a sufficiently long time period. Aiming to better understand evergreens in particular and patterns of citation trajectories in general, this paper develops a functional data analysis method to cluster the citation trajectories of a sample of 1699 research papers published in 1980 in the American Physical Society (APS) journals. We propose a functional Poisson regression model for individual papers' citation trajectories, and fit the model to the observed 30-year citations of individual papers by functional principal component analysis and maximum likelihood estimation. Based on the estimated paper-specific coefficients, we apply the K-means clustering algorithm to cluster papers into different groups, uncovering general types of citation trajectories. The results demonstrate the existence of an evergreen cluster of papers that do not exhibit any decline in annual citations over 30 years.
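The pipeline described in the abstract (a basis representation of each paper's log citation intensity, per-paper Poisson maximum likelihood, then K-means on the fitted coefficients) can be sketched as follows. This is an illustrative sketch, not the authors' code: the citation counts are simulated, and a small polynomial basis stands in for the FPCA eigenfunctions estimated in the paper.

```python
# Minimal sketch: per-paper Poisson regression on a fixed basis, then K-means
# on the fitted coefficients. Data and basis are illustrative stand-ins.
import numpy as np
from scipy.optimize import minimize
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
years = np.arange(1, 31)                       # 30 post-publication years
n_papers = 200                                 # hypothetical sample size
true_peaks = rng.uniform(3, 25, n_papers)
lam = 5 * np.exp(-((years[None, :] - true_peaks[:, None]) ** 2) / 50.0)
counts = rng.poisson(lam)                      # simulated annual citation counts

# simple polynomial basis as a stand-in for FPCA eigenfunctions
t = (years - years.mean()) / years.std()
B = np.column_stack([np.ones_like(t), t, t ** 2])   # 30 x 3

def neg_loglik(beta, y):
    """Negative Poisson log-likelihood (up to a constant), log link: log lambda = B @ beta."""
    eta = B @ beta
    return np.sum(np.exp(eta) - y * eta)

coefs = np.empty((n_papers, B.shape[1]))
for i in range(n_papers):
    res = minimize(neg_loglik, x0=np.zeros(B.shape[1]), args=(counts[i],))
    coefs[i] = res.x

# cluster papers by their fitted trajectory coefficients
labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(coefs)
print(np.bincount(labels))
```

In this sketch, a cluster whose fitted mean trajectory keeps rising over the 30-year window would play the role of the evergreen group identified in the paper.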




Read also

The Argo data is a modern oceanography dataset that provides unprecedented global coverage of temperature and salinity measurements in the upper 2,000 meters of depth of the ocean. We study the Argo data from the perspective of functional data analysis (FDA). We develop spatio-temporal functional kriging methodology for mean and covariance estimation to predict temperature and salinity at a fixed location as a smooth function of depth. By combining tools from FDA and spatial statistics, including smoothing splines, local regression, and multivariate spatial modeling and prediction, our approach provides advantages over current methodology that considers pointwise estimation at fixed depths. Our approach naturally leverages the irregularly-sampled data in space, time, and depth to fit a space-time functional model for temperature and salinity. The developed framework provides new tools to address fundamental scientific problems involving the entire upper water column of the oceans such as the estimation of ocean heat content, stratification, and thermohaline oscillation. For example, we show that our functional approach yields more accurate ocean heat content estimates than ones based on discrete integral approximations in pressure. Further, using the derivative function estimates, we obtain a new product, a global map of the mixed layer depth, a key component in the study of heat absorption and nutrient circulation in the oceans. The derivative estimates also reveal evidence for density
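To illustrate the contrast drawn above between a discrete depth-level integral and a functional estimate of ocean heat content, the following sketch fits a smooth temperature profile over depth and integrates the fitted function. The profile values and the physical constants are hypothetical, and a simple spline smoother stands in for the paper's spatio-temporal functional kriging.

```python
# Minimal sketch: heat content from a discrete trapezoid rule vs. from a
# fitted smooth temperature profile T(depth). Values are illustrative.
import numpy as np
from scipy.interpolate import UnivariateSpline
from scipy.integrate import trapezoid

rho, cp = 1025.0, 3850.0        # assumed seawater density (kg/m^3) and heat capacity (J/kg/K)
depth = np.array([5, 25, 60, 120, 250, 500, 1000, 1500, 2000], float)   # m
temp = np.array([18.2, 17.9, 16.1, 13.0, 9.5, 6.8, 4.3, 3.1, 2.4])       # deg C

# discrete approximation: trapezoid rule on the irregular sample
ohc_discrete = rho * cp * trapezoid(temp, depth)

# functional approach: smooth profile as a function of depth, then integrate it
profile = UnivariateSpline(depth, temp, k=3, s=0.5)
ohc_functional = rho * cp * profile.integral(depth[0], depth[-1])

print(f"discrete:   {ohc_discrete:.3e} J/m^2")
print(f"functional: {ohc_functional:.3e} J/m^2")
```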
Over the past decade, the field of forensic science has received recommendations from the National Research Council of the U.S. National Academy of Sciences, the U.S. National Institute of Standards and Technology, and the U.S. President's Council of Advisors on Science and Technology to study the validity and reliability of forensic analyses. More specifically, these committees recommend estimation of the rates of occurrence of erroneous conclusions drawn from forensic analyses. Black box studies for the various subjective feature-based comparison methods are intended for this purpose. In general, black box studies often have unbalanced designs, comparisons that are not independent, and missing data. These aspects pose difficulty in the analysis of the results and are often ignored. Instead, interpretation of the data relies on methods that assume independence between observations and a balanced experiment. Furthermore, all of these projects are interpreted within the frequentist framework and result in point estimates associated with confidence intervals that are confusing to communicate and understand. We propose to use an existing likelihood-free Bayesian inference method, called Approximate Bayesian Computation (ABC), that is capable of handling unbalanced designs, dependencies among the observations, and missing data. ABC allows for studying the parameters of interest without recourse to incoherent and misleading measures of uncertainty such as confidence intervals. By taking into account information from all decision categories for a given examiner and information from the population of examiners, our method also allows for quantifying the risk of error for the given examiner, even when no error has been recorded for that examiner. We illustrate our proposed method by reanalysing the results of the Noblis Black Box study by Ulery et al. in 2011.
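A minimal rejection-ABC sketch for a single examiner's false-positive rate illustrates the idea of quantifying error risk even when no error has been recorded. The number of comparisons, the prior, and the exact-match tolerance below are assumptions, and the dependence and missing-data structure handled by the full method is omitted.

```python
# Minimal sketch of rejection ABC for one examiner's error rate.
import numpy as np

rng = np.random.default_rng(1)
n_comparisons = 100        # comparisons made by one examiner (assumed)
observed_errors = 0        # no erroneous conclusions recorded for this examiner

# rejection ABC: draw error rates from the prior, simulate the study for each
# draw, and keep draws whose simulated error count matches the observed one
prior_draws = rng.beta(1, 20, size=200_000)            # assumed weakly informative prior
sim = rng.binomial(n_comparisons, prior_draws)          # simulated error counts
accepted = prior_draws[sim == observed_errors]          # tolerance = exact match

# posterior summary: the error risk is quantified even with zero observed errors
print(accepted.mean(), np.quantile(accepted, [0.025, 0.975]))
```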
Fabrice Gamboa, 2013
Let $X:=(X_1, \ldots, X_p)$ be random objects (the inputs), defined on some probability space $(\Omega, \mathcal{F}, \mathbb{P})$ and valued in some measurable space $E=E_1\times\ldots\times E_p$. Further, let $Y := f(X_1, \ldots, X_p)$ be the output. Here, $f$ is a measurable function from $E$ to some Hilbert space $\mathbb{H}$ ($\mathbb{H}$ could be either of finite or infinite dimension). In this work, we give a natural generalization of the Sobol indices (that are classically defined when $Y\in\mathbb{R}$), when the output belongs to $\mathbb{H}$. These indices have very nice properties. First, they are invariant under isometry and scaling. Further, they can be, as in dimension $1$, easily estimated by using the so-called Pick and Freeze method. We investigate the asymptotic behaviour of such an estimation scheme.
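For the scalar case $Y\in\mathbb{R}$, the Pick and Freeze estimator mentioned above can be sketched as follows; the paper's contribution is the extension to $\mathbb{H}$-valued outputs, which roughly replaces products by inner products. The test function and sample size below are illustrative, not taken from the paper.

```python
# Minimal sketch of the Pick-and-Freeze estimator of the first-order Sobol
# index of X1 for a scalar output, using an Ishigami-type toy model.
import numpy as np

rng = np.random.default_rng(2)
N = 100_000

def f(x1, x2, x3):
    return np.sin(x1) + 7 * np.sin(x2) ** 2 + 0.1 * x3 ** 4 * np.sin(x1)

# "freeze" X1 and "re-pick" the remaining inputs in a second sample
x1 = rng.uniform(-np.pi, np.pi, N)
x2, x3 = rng.uniform(-np.pi, np.pi, (2, N))
x2p, x3p = rng.uniform(-np.pi, np.pi, (2, N))

y = f(x1, x2, x3)
y1 = f(x1, x2p, x3p)          # same X1, fresh copies of the other inputs

m = 0.5 * (y.mean() + y1.mean())
num = np.mean(y * y1) - m ** 2
den = 0.5 * (np.mean(y ** 2) + np.mean(y1 ** 2)) - m ** 2
print("estimated first-order Sobol index of X1:", num / den)
```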
One of the classic concerns in statistics is determining if two samples come from the same population, i.e. homogeneity testing. In this paper, we propose a homogeneity test in the context of Functional Data Analysis, adopting an idea from multivariate data analysis: the data depth plot (DD-plot). This DD-plot is a generalization of the univariate Q-Q plot (quantile-quantile plot). We propose some statistics based on these DD-plots, and we use bootstrapping techniques to estimate their distributions. We estimate the finite-sample size and power of our test via simulation, obtaining better results than other homogeneity tests proposed in the literature. Finally, we illustrate the procedure on samples of real heterogeneous data and get consistent results.
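A rough sketch of the DD-plot idea: compute each observation's depth with respect to both samples, measure how far the DD-plot departs from the diagonal, and calibrate the statistic by resampling. Mahalanobis depth and a label-resampling null distribution are simplifications of the functional depths and bootstrap scheme used in the paper; the two normal samples are synthetic.

```python
# Minimal sketch of a DD-plot based homogeneity statistic with a resampled null.
import numpy as np

rng = np.random.default_rng(3)

def mahalanobis_depth(points, sample):
    """Depth of each point w.r.t. a sample: 1 / (1 + squared Mahalanobis distance)."""
    mu = sample.mean(axis=0)
    cov_inv = np.linalg.inv(np.cov(sample, rowvar=False))
    d = points - mu
    md2 = np.einsum("ij,jk,ik->i", d, cov_inv, d)
    return 1.0 / (1.0 + md2)

def dd_statistic(x, y):
    """Mean deviation of the DD-plot from the diagonal over the pooled points."""
    z = np.vstack([x, y])
    return np.mean(np.abs(mahalanobis_depth(z, x) - mahalanobis_depth(z, y)))

x = rng.normal(0.0, 1, size=(100, 2))
y = rng.normal(0.5, 1, size=(100, 2))     # shifted sample -> heterogeneous

obs = dd_statistic(x, y)
pooled = np.vstack([x, y])
null = []
for _ in range(500):                       # reshuffle sample labels under homogeneity
    idx = rng.permutation(len(pooled))
    null.append(dd_statistic(pooled[idx[:100]], pooled[idx[100:]]))
p_value = np.mean(np.array(null) >= obs)
print(f"statistic={obs:.3f}, p-value={p_value:.3f}")
```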
Positron Emission Tomography (PET) is an imaging technique which can be used to investigate chemical changes in human biological processes such as cancer development or neurochemical reactions. Most dynamic PET scans are currently analyzed based on the assumption that linear first order kinetics can be used to adequately describe the system under observation. However, there has recently been strong evidence that this is not the case. In order to provide an analysis of PET data which is free from this compartmental assumption, we propose a nonparametric deconvolution and analysis model for dynamic PET data based on functional principal component analysis. This yields flexibility in the possible deconvolved functions while still performing well when a linear compartmental model setup is the true data generating mechanism. As the deconvolution needs to be performed on only a relatively small number of basis functions rather than voxel by voxel in the entire 3-D volume, the methodology is robust to typical brain imaging noise levels while also being computationally efficient. The new methodology is investigated through simulations in both 1-D functions and 2-D images and also applied to a neuroimaging study whose goal is the quantification of opioid receptor concentration in the brain.
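The functional principal component step underlying the proposed deconvolution can be sketched with a plain SVD of centred sample curves, so that each curve is summarized by a few component scores. The synthetic time-activity curves and the choice of three components are illustrative, and the deconvolution step itself is omitted.

```python
# Minimal sketch of functional PCA on sampled curves via SVD of the centred data.
import numpy as np

rng = np.random.default_rng(4)
t = np.linspace(0, 60, 120)                        # minutes (assumed grid)
n_curves = 50
# synthetic smooth curves plus noise, as stand-ins for time-activity curves
amp = rng.uniform(0.5, 2.0, (n_curves, 1))
rate = rng.uniform(0.05, 0.2, (n_curves, 1))
curves = amp * (1 - np.exp(-rate * t)) + 0.02 * rng.normal(size=(n_curves, len(t)))

mean_curve = curves.mean(axis=0)
centred = curves - mean_curve
U, s, Vt = np.linalg.svd(centred, full_matrices=False)

k = 3                                              # keep a few components
eigenfunctions = Vt[:k]                            # k x len(t) sampled eigenfunctions
scores = centred @ eigenfunctions.T                # n_curves x k score matrix
explained = (s[:k] ** 2) / np.sum(s ** 2)
print("variance explained by first 3 components:", explained.round(3))
```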