Uncertainty representation for early phase clinical test evaluations: a case study

Posted by: Dr Kevin Wilson
Publication date: 2020
Research field: Mathematical Statistics
Paper language: English





In early clinical test evaluations, the potential benefits of introducing a new technology into the healthcare system are assessed in the challenging situation of limited available empirical data. The aim of these evaluations is to provide additional evidence for the decision maker, typically a funder or the company developing the test, to judge which technologies should progress to the next stage of evaluation. In this paper we consider the evaluation of a diagnostic test for patients suffering from Chronic Obstructive Pulmonary Disease (COPD). We describe the use of graphical models, prior elicitation and uncertainty analysis to provide the evidence required for the test to progress to the next stage of evaluation. We specifically discuss inferring an influence diagram from a care pathway and conducting an elicitation exercise to specify prior distributions over all model parameters. We describe the uncertainty analysis, via Monte Carlo simulation, which allowed us to demonstrate that the potential value of the test was robust to these uncertainties. This paper provides a case study illustrating how a careful Bayesian analysis can enhance early clinical test evaluations.
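To make the analysis concrete, below is a minimal sketch of the kind of Monte Carlo uncertainty analysis the abstract describes: parameters are drawn from elicited prior distributions and propagated through a simple expected-value model of the test. The Beta and Normal priors, parameter names, and the net-value expression here are entirely hypothetical stand-ins, not the quantities elicited in the COPD case study.

```python
# A minimal sketch of Monte Carlo uncertainty analysis over elicited priors.
# All distributions and values are hypothetical, not those from the paper.
import numpy as np

rng = np.random.default_rng(1)
n_sims = 10_000

# Hypothetical elicited priors over model parameters:
sensitivity = rng.beta(18, 3, n_sims)       # P(test+ | disease progression)
specificity = rng.beta(25, 4, n_sims)       # P(test- | no progression)
prevalence  = rng.beta(6, 14, n_sims)       # P(progression) in the cohort
benefit     = rng.normal(1.0, 0.2, n_sims)  # value of a true positive
harm        = rng.normal(0.3, 0.1, n_sims)  # cost of a false positive

# Propagate each sampled parameter set through a simple expected-value model.
tp = prevalence * sensitivity
fp = (1 - prevalence) * (1 - specificity)
net_value = tp * benefit - fp * harm

# Summarise how robust the test's potential value is to prior uncertainty.
print(f"mean net value:  {net_value.mean():.3f}")
print(f"95% interval:    ({np.quantile(net_value, 0.025):.3f}, "
      f"{np.quantile(net_value, 0.975):.3f})")
print(f"P(net value > 0): {(net_value > 0).mean():.3f}")
```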



Read also

Shu Wang, Ji-Hyun Lee (2021)
Nowadays, more and more clinical trials choose combination agents as the intervention to achieve better therapeutic responses. However, dose-finding for combination agents is much more complicated than for a single agent, as the full ordering of combination-dose toxicity is unknown, so standard phase I designs are unable to identify the maximum tolerated dose (MTD) of combination agents. Motivated by this need, many novel phase I clinical trial designs for combination agents have been proposed. Yet with so many available designs, research that compares their performance, explores the impact of design parameters, and provides recommendations remains very limited. We therefore conducted a simulation study to evaluate multiple phase I designs proposed to identify a single MTD for combination agents under various scenarios, and we explored the influence of different design parameters. Finally, we summarize the pros and cons of each design and provide a general guideline for design selection.
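As a rough illustration of how such simulation studies assess operating characteristics, the sketch below simulates trials under an assumed true toxicity matrix for a two-agent combination and records how often a deliberately naive rule selects the correct MTD. The toxicity probabilities, cohort size, and selection rule are hypothetical; none of the published combination designs is implemented here.

```python
# A toy operating-characteristics simulation for combination dose-finding.
# The true toxicity matrix, sample size, and selection rule are invented.
import numpy as np

rng = np.random.default_rng(0)

# Assumed true toxicity probabilities for a 3x3 grid of dose combinations
# (rows: doses of agent A, columns: doses of agent B). Only one cell sits
# exactly at the 30% target, so the "correct" MTD is unambiguous.
true_tox = np.array([[0.05, 0.12, 0.20],
                     [0.10, 0.30, 0.45],
                     [0.25, 0.50, 0.65]])
target, n_per_combo, n_trials = 0.30, 12, 2000
true_mtd = (1, 1)

correct = 0
for _ in range(n_trials):
    # Simulate dose-limiting toxicities at every combination, then apply a
    # naive rule: pick the cell whose empirical rate is nearest the target.
    dlt = rng.binomial(n_per_combo, true_tox)
    est = dlt / n_per_combo
    picked = np.unravel_index(np.argmin(np.abs(est - target)), est.shape)
    correct += (picked == true_mtd)

print(f"correct MTD selection rate: {correct / n_trials:.1%}")
```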
Illegal wildlife poaching threatens ecosystems and drives endangered species toward extinction. However, efforts for wildlife protection are constrained by the limited resources of law enforcement agencies. To help combat poaching, the Protection Assistant for Wildlife Security (PAWS) is a machine learning pipeline that has been developed as a data-driven approach to identify areas at high risk of poaching throughout protected areas and compute optimal patrol routes. In this paper, we take an end-to-end approach to the data-to-deployment pipeline for anti-poaching. In doing so, we address challenges including extreme class imbalance (up to 1:200), bias, and uncertainty in wildlife poaching data to enhance PAWS, and we apply our methodology to three national parks with diverse characteristics. (i) We use Gaussian processes to quantify predictive uncertainty, which we exploit to improve robustness of our prescribed patrols and increase detection of snares by an average of 30%. We evaluate our approach on real-world historical poaching data from Murchison Falls and Queen Elizabeth National Parks in Uganda and, for the first time, Srepok Wildlife Sanctuary in Cambodia. (ii) We present the results of large-scale field tests conducted in Murchison Falls and Srepok Wildlife Sanctuary which confirm that the predictive power of PAWS extends promisingly to multiple parks. This paper is part of an effort to expand PAWS to 800 parks around the world through integration with SMART conservation software.
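A minimal sketch of the Gaussian-process idea in point (i): fit a GP to historical observations so that each unpatrolled grid cell receives both a predicted risk and a predictive standard deviation, which can then feed a robustness-aware ranking of cells to patrol. The features, kernel, and upper-confidence-bound rule below are illustrative assumptions, not the PAWS pipeline itself.

```python
# A sketch of GP predictive uncertainty for patrol prioritisation.
# Features, kernel, and the UCB ranking are assumptions for illustration.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(7)

# Hypothetical per-cell features: distance to road, distance to water (km).
X_train = rng.uniform(0, 10, size=(60, 2))
y_train = np.exp(-0.3 * X_train[:, 0]) + 0.1 * rng.standard_normal(60)

gp = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True)
gp.fit(X_train, y_train)

# Score unpatrolled grid cells, keeping both mean risk and uncertainty.
X_grid = rng.uniform(0, 10, size=(200, 2))
mean, std = gp.predict(X_grid, return_std=True)

# One way to make patrols robust: rank cells by an upper confidence bound,
# so that highly uncertain cells are not silently ignored.
ucb = mean + 1.0 * std
top_cells = np.argsort(ucb)[::-1][:10]
print("cells to prioritise:", top_cells)
```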
Observational studies are valuable for estimating the effects of various medical interventions, but are notoriously difficult to evaluate because the methods used in observational studies require many untestable assumptions. This lack of verifiability makes it difficult both to compare different observational study methods and to trust the results of any particular observational study. In this work, we propose TrialVerify, a new approach for evaluating observational study methods based on ground truth sourced from clinical trial reports. We process trial reports into a denoised collection of known causal relationships that can then be used to estimate the precision and recall of various observational study methods. We then use TrialVerify to evaluate multiple observational study methods in terms of their ability to identify the known causal relationships from a large national insurance claims dataset. We found that inverse propensity score weighting is an effective approach for accurately reproducing known causal relationships and outperforms other observational study methods. TrialVerify is made freely available for others to evaluate observational study methods.
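For readers unfamiliar with the winning method, here is a minimal sketch of inverse propensity score weighting on synthetic data: a propensity model estimates each unit's probability of treatment, and weighting by its inverse removes measured confounding from the outcome comparison. The data-generating process and variable names are invented; this is the textbook estimator, not the TrialVerify evaluation itself.

```python
# A textbook IPW sketch on synthetic confounded data (true effect = 0.5).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)
n = 5000
confounder = rng.normal(size=n)                        # e.g. disease severity
treated = rng.binomial(1, 1 / (1 + np.exp(-confounder)))
outcome = 0.5 * treated + confounder + rng.normal(size=n)

# Step 1: model the propensity score P(treated | confounder).
ps = LogisticRegression().fit(confounder.reshape(-1, 1), treated)
e = ps.predict_proba(confounder.reshape(-1, 1))[:, 1]

# Step 2: weight each unit by the inverse probability of its own treatment.
w = treated / e + (1 - treated) / (1 - e)

# Step 3: the weighted difference in mean outcomes estimates the effect.
ate = (np.sum(w * treated * outcome) / np.sum(w * treated)
       - np.sum(w * (1 - treated) * outcome) / np.sum(w * (1 - treated)))
print(f"IPW estimate of treatment effect: {ate:.3f}  (truth: 0.5)")
```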
This paper presents a case study on short-term load forecasting for France, with emphasis on special days, such as public holidays. We investigate the generalisability to French data of a recently proposed approach, which generates forecasts for normal and special days in a coherent and unified framework, by incorporating subjective judgment in univariate statistical models using a rule-based methodology. The intraday, intraweek, and intrayear seasonality in load are accommodated using a rule-based triple seasonal adaptation of a seasonal autoregressive moving average (SARMA) model. We find that, for application to French load, the method requires an important adaptation. We also adapt a recently proposed SARMA model that accommodates special day effects on an hourly basis using indicator variables. Using a rule formulated specifically for the French load, we compare the SARMA models with a range of different benchmark methods based on an evaluation of their point and density forecast accuracy. As sophisticated benchmarks, we employ the rule-based triple seasonal adaptations of Holt-Winters-Taylor (HWT) exponential smoothing and artificial neural networks (ANNs). We use nine years of half-hourly French load data, and consider lead times ranging from one half-hour up to a day ahead. The rule-based SARMA approach generated the most accurate forecasts.
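A minimal sketch of the indicator-variable idea: a seasonal ARMA model fitted with a special-day dummy as an exogenous regressor, assuming hourly data and a single daily seasonal cycle for brevity. The synthetic series, model orders, and holiday dates are placeholders; the paper's rule-based triple seasonal adaptation and half-hourly French data are not reproduced here.

```python
# A sketch of a seasonal ARMA model with a special-day indicator variable.
# Data, holiday dates, and model orders are placeholder assumptions.
import numpy as np
import pandas as pd
from statsmodels.tsa.statespace.sarimax import SARIMAX

rng = np.random.default_rng(5)
idx = pd.date_range("2019-01-01", periods=24 * 90, freq="h")  # 90 days, hourly
base = 50 + 10 * np.sin(2 * np.pi * idx.hour / 24)            # intraday cycle
holiday = np.isin(idx.date,
                  pd.to_datetime(["2019-01-01", "2019-03-01"]).date)
load = base - 8 * holiday + rng.normal(0, 2, len(idx))        # holidays cut load

# The special-day dummy enters as an exogenous regressor of the SARMA model.
model = SARIMAX(load, exog=holiday.astype(float),
                order=(1, 0, 1), seasonal_order=(1, 0, 1, 24))
fit = model.fit(disp=False)
print(fit.params[0])  # first parameter: the estimated special-day effect
```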
Epidemics are a serious public health threat, and the resources for mitigating their effects are typically limited. Decision-makers face challenges in forecasting the demand for these resources as prior information about the disease is often not available, the behaviour of the disease can periodically change (either naturally or as a result of public health policies) and can differ by geographical region. In this work, we discuss a model that is suitable for short-term real-time supply and demand forecasting during emerging outbreaks without having to rely on demographic information. We propose a data-driven mixed-integer programming (MIP) resource allocation model that assigns available resources to maximize a notion of fairness among the resource-demanding entities. Numerical results from applying our MIP model to a COVID-19 Convalescent Plasma (CCP) case study suggest that our approach can help balance the supply and demand of limited products such as CCP and minimize the unmet demand ratios of the demand entities.
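A minimal sketch of a fairness-oriented allocation model in this spirit: maximise the worst fill ratio across demand entities subject to a supply limit. The supplies, demands, and max-min objective below are illustrative assumptions rather than the paper's exact MIP formulation (this simplified version reduces to a linear program).

```python
# A max-min fairness allocation sketch in PuLP; numbers are hypothetical.
import pulp

demand = {"hospital_A": 120, "hospital_B": 80, "hospital_C": 50}
supply = 180  # total CCP units available this period (assumed)

prob = pulp.LpProblem("fair_allocation", pulp.LpMaximize)
alloc = {h: pulp.LpVariable(f"x_{h}", lowBound=0, upBound=d)
         for h, d in demand.items()}
worst = pulp.LpVariable("worst_fill_ratio", lowBound=0, upBound=1)

prob += worst                                 # objective: max-min fairness
prob += pulp.lpSum(alloc.values()) <= supply  # cannot exceed total supply
for h, d in demand.items():
    prob += alloc[h] >= worst * d             # everyone meets the worst ratio

prob.solve(pulp.PULP_CBC_CMD(msg=False))
for h, d in demand.items():
    print(h, alloc[h].value(), f"fill = {alloc[h].value() / d:.2f}")
```

Maximising the minimum fill ratio is equivalent to minimising the maximum unmet demand ratio, which is how the abstract frames the objective.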