
The Early Roots of Statistical Learning in the Psychometric Literature: A review and two new results

Added by Mark De Rooij
Publication date: 2019
Language: English





Machine learning and statistical learning techniques are becoming increasingly important for the analysis of psychological data. Four core concepts of machine learning are the bias-variance trade-off, cross-validation, regularization, and basis expansion. We present some early psychometric papers, from almost a century ago, that dealt with cross-validation and regularization. From this review it is safe to conclude that the origins of these concepts lie partly in the field of psychometrics. Our historical review gave rise to two new ideas, which we investigated further: the first concerns the relationship between reliability and predictive validity; the second is whether optimal regression weights should be estimated by regularizing their values towards equality or by shrinking them towards zero. In a simulation study we show that the reliability of a test score does not influence predictive validity as much as psychometric textbooks usually suggest. Using an empirical example, we show that regularization towards equal regression coefficients is beneficial in terms of prediction error.
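The abstract's second question, regularizing regression weights towards equality rather than towards zero, can be sketched in a few lines. The code below is not the authors' implementation; it is a minimal illustration, assuming a ridge-style estimator in which the usual penalty λ·Σ βⱼ² is replaced by λ·Σ (βⱼ − β̄)², written in matrix form with the centring matrix C = I − (1/p)·11ᵀ. The data, seed, and penalty strength are all made up for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data with roughly equal true weights: the setting where
# shrinking towards equality is expected to help (illustrative values).
n, p = 100, 6
beta_true = np.full(p, 1.0) + rng.normal(0, 0.1, p)
X = rng.normal(size=(n, p))
y = X @ beta_true + rng.normal(0, 1.0, n)

def ridge_to_zero(X, y, lam):
    """Ordinary ridge: penalize lam * sum_j beta_j**2."""
    p = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)

def ridge_to_equality(X, y, lam):
    """Penalize deviations of each weight from the mean weight:
    lam * sum_j (beta_j - mean(beta))**2 = lam * beta' C beta,
    with C the centring matrix (the mean weight itself is unpenalized)."""
    p = X.shape[1]
    C = np.eye(p) - np.ones((p, p)) / p
    return np.linalg.solve(X.T @ X + lam * C, X.T @ y)

lam = 50.0
b0 = ridge_to_zero(X, y, lam)
b_eq = ridge_to_equality(X, y, lam)

# A strong zero-penalty pulls every weight towards 0, while the
# equality penalty only pulls the weights towards their common mean.
print("mean towards zero:    ", np.mean(b0))
print("mean towards equality:", np.mean(b_eq))
print("error towards zero:    ", np.linalg.norm(b0 - beta_true))
print("error towards equality:", np.linalg.norm(b_eq - beta_true))
```

When the true weights really are close to equal, the equality penalty introduces far less bias than shrinkage towards zero at the same λ, which is the intuition behind the abstract's finding on prediction error.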



Related research

Monitoring several correlated quality characteristics of a process is common in modern manufacturing and service industries. Although a lot of attention has been paid to monitoring the multivariate process mean, not many control charts are available for monitoring the covariance matrix. This paper presents a comprehensive overview of the literature on control charts for monitoring the covariance matrix in a multivariate statistical process monitoring (MSPM) framework. It classifies the research that has previously appeared in the literature. We highlight the challenging areas for research and provide some directions for future research.
This paper reviews two main types of prediction interval methods under a parametric framework. First, we describe methods based on an (approximate) pivotal quantity. Examples include the plug-in, pivotal, and calibration methods. Then we describe methods based on a predictive distribution (sometimes derived based on the likelihood). Examples include Bayesian, fiducial, and direct-bootstrap methods. Several examples involving continuous distributions along with simulation studies to evaluate coverage probability properties are provided. We provide specific connections among different prediction interval methods for the (log-)location-scale family of distributions. This paper also discusses general prediction interval methods for discrete data, using the binomial and Poisson distributions as examples. We also overview methods for dependent data, with application to time series, spatial data, and Markov random fields, for example.
Randomization-based Machine Learning methods for prediction are currently a hot topic in Artificial Intelligence, due to their excellent performance in many prediction problems, with a bounded computation time. The application of randomization-based approaches to renewable energy prediction problems has been massive in the last few years, including many different types of randomization-based approaches, their hybridization with other techniques and also the description of n
Deep reinforcement learning (DRL) augments the reinforcement learning framework, which learns a sequence of actions that maximizes the expected reward, with the representative power of deep neural networks. Recent works have demonstrated the great potential of DRL in medicine and healthcare. This paper presents a literature review of DRL in medical imaging. We start with a comprehensive tutorial of DRL, including the latest model-free and model-based algorithms. We then cover existing DRL applications for medical imaging, which are roughly divided into three main categories: (i) parametric medical image analysis tasks including landmark detection, object/lesion detection, registration, and view plane localization; (ii) solving optimization tasks including hyperparameter tuning, selecting augmentation strategies, and neural architecture search; and (iii) miscellaneous applications including surgical gesture segmentation, personalized mobile health intervention, and computational model personalization. The paper concludes with discussions of future perspectives.
When making choices in software projects, engineers and other stakeholders engage in decision making that involves uncertain future outcomes. Research in psychology, behavioral economics and neuroscience has questioned many of the classical assumptions of how such decisions are made. This literature review aims to characterize the assumptions that underpin the study of these decisions in Software Engineering. We identify empirical research on this subject and analyze how the role of time has been characterized in the study of decision making in SE. The literature review aims to support the development of descriptive frameworks for empirical studies of intertemporal decision making in practice.
