
How little data do we need for patient-level prediction?

Published by: Luis John
Publication date: 2020
Research language: English





Objective: Provide guidance on sample size considerations for developing predictive models by empirically establishing the adequate sample size, which balances the competing objectives of improving model performance and reducing model complexity as well as computational requirements.

Materials and Methods: We empirically assess the effect of sample size on prediction performance and model complexity by generating learning curves for 81 prediction problems in three large observational health databases, requiring the training of 17,248 prediction models. The adequate sample size was defined as the sample size at which the performance of a model equalled the maximum model performance minus a small threshold value.

Results: The adequate sample size achieves a median reduction in the number of observations of between 9.5% and 78.5% for threshold values between 0.001 and 0.02. The median reduction in the number of predictors in the models at the adequate sample size varied between 8.6% and 68.3%, respectively.

Discussion: Based on our results, a conservative yet significant reduction in sample size and model complexity can be estimated for future prediction work. However, if a researcher is willing to generate a learning curve, a much larger reduction in model complexity may be possible, as suggested by the large outcome-dependent variability.

Conclusion: Our results suggest that in most cases only a fraction of the available data was sufficient to produce a model close in performance to one developed on the full data set, but with substantially reduced model complexity.
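As an illustration of the adequate-sample-size definition used above, the sketch below computes it from a learning curve of performance values (e.g., AUC) recorded at increasing training-set sizes. The function name, the example values, and the 0.01 threshold are illustrative assumptions for this sketch, not taken from the study's code or data.

```python
def adequate_sample_size(sample_sizes, performances, threshold=0.01):
    """Smallest sample size whose model performs within `threshold`
    of the maximum performance observed on the learning curve.
    `sample_sizes` is assumed to be sorted in increasing order."""
    max_perf = max(performances)
    for n, perf in zip(sample_sizes, performances):
        if perf >= max_perf - threshold:
            return n
    return sample_sizes[-1]  # fallback: the full data set

# Hypothetical learning curve (AUC at increasing training-set sizes)
sizes = [1_000, 5_000, 10_000, 50_000, 100_000]
aucs = [0.680, 0.720, 0.745, 0.758, 0.760]

print(adequate_sample_size(sizes, aucs, threshold=0.01))  # -> 50000
```

Tightening the threshold (e.g., 0.001 instead of 0.02) moves the cutoff closer to the maximum performance and yields a larger adequate sample size, which is consistent with the 9.5% to 78.5% range of observation reductions reported above.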


