
OpenKBP: The open-access knowledge-based planning grand challenge

Added by Aaron Babier
Publication date: 2020
Language: English





The purpose of this work is to advance fair and consistent comparisons of dose prediction methods for knowledge-based planning (KBP) in radiation therapy research. We hosted OpenKBP, a 2020 AAPM Grand Challenge, and challenged participants to develop the best method for predicting dose from contoured CT images. The models were evaluated according to two separate scores: (1) the dose score, which evaluates the full 3D dose distribution, and (2) the dose-volume histogram (DVH) score, which evaluates a set of DVH metrics. Participants were given the data of 340 patients who were treated for head-and-neck cancer with radiation therapy. The data was partitioned into training (n=200), validation (n=40), and testing (n=100) datasets. All participants performed training and validation with the corresponding datasets during the validation phase of the Challenge, and we ranked the models in the testing phase based on out-of-sample performance. The Challenge attracted 195 participants from 28 countries, and 73 of those participants formed 44 teams in the validation phase, which received a total of 1750 submissions. The testing phase garnered submissions from 28 teams. On average, over the course of the validation phase, participants improved the dose and DVH scores of their models by factors of 2.7 and 5.7, respectively. In the testing phase, one model achieved significantly better dose and DVH scores than the runner-up models. Lastly, many of the top-performing teams reported using generalizable techniques (e.g., ensembles) to achieve higher performance than their competition. This is the first competition for knowledge-based planning research, and it helped launch the first platform for comparing KBP prediction methods fairly and consistently. The OpenKBP datasets are publicly available to help benchmark future KBP research; releasing them has also democratized KBP research by making it accessible to everyone.
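
As a rough illustration of the two evaluation scores, the sketch below computes a voxel-wise mean absolute dose error and a DVH-metric error for dose grids stored as NumPy arrays with boolean structure masks. The specific metrics, mask definitions, and normalization used in the Challenge are defined by its official evaluation code; the names and choices here are simplifications for illustration only.

```python
import numpy as np

def dose_score(pred, ref, mask):
    """Voxel-wise mean absolute dose error inside the evaluation mask."""
    return float(np.mean(np.abs(pred[mask] - ref[mask])))

def dvh_metrics(dose, structure_mask, voxel_volume_cc):
    """A few illustrative DVH metrics for a single structure."""
    d = np.sort(dose[structure_mask])[::-1]                 # structure doses, high to low
    n = d.size
    k = min(max(int(round(0.1 / voxel_volume_cc)), 1), n)   # number of voxels in ~0.1 cc
    return {
        "D_mean": float(d.mean()),                    # mean structure dose
        "D_99": float(d[max(int(0.99 * n) - 1, 0)]),  # dose covering 99% of the volume
        "D_0.1cc": float(d[k - 1]),                   # near-maximum dose
    }

def dvh_score(pred, ref, structures, voxel_volume_cc):
    """Mean absolute error over all DVH metrics of all structures."""
    errors = []
    for structure_mask in structures.values():
        p = dvh_metrics(pred, structure_mask, voxel_volume_cc)
        r = dvh_metrics(ref, structure_mask, voxel_volume_cc)
        errors += [abs(p[key] - r[key]) for key in p]
    return float(np.mean(errors))
```

In this formulation both scores are errors, so lower values are better.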




Related research

Knowledge of whole heart anatomy is a prerequisite for many clinical applications. Whole heart segmentation (WHS), which delineates substructures of the heart, can be very valuable for modeling and analysis of the anatomy and functions of the heart. However, automating this segmentation can be arduous due to the large variation of the heart shape and the differing image quality of clinical data. To achieve this goal, a set of training data is generally needed for constructing priors or for training. In addition, it is difficult to perform comparisons between different methods, largely due to differences in the datasets and evaluation metrics used. This manuscript presents the methodologies and evaluation results for the WHS algorithms selected from the submissions to the Multi-Modality Whole Heart Segmentation (MM-WHS) challenge, held in conjunction with MICCAI 2017. The challenge provided 120 three-dimensional cardiac images covering the whole heart, including 60 CT and 60 MRI volumes, all acquired in clinical environments with manual delineation. Ten algorithms for CT data and eleven algorithms for MRI data, submitted from twelve groups, have been evaluated. The results show that many of the deep learning (DL) based methods achieved high accuracy, even though the number of training datasets was limited. A number of them also reported poor results in the blinded evaluation, probably due to overfitting in their training. The conventional algorithms, mainly based on multi-atlas segmentation, demonstrated robust and stable performance, even though their accuracy is not as good as the best DL method in CT segmentation. The challenge, including the provision of the annotated training data and the blinded evaluation of submitted algorithms on the test data, continues as an ongoing benchmarking resource via its homepage (www.sdspeople.fudan.edu.cn/zhuangxiahai/0/mmwhs/).
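
The abstract above does not spell out its evaluation metrics; as an illustration only, overlap between an automatic and a manual whole heart segmentation is commonly summarized with per-substructure Dice coefficients, as sketched below. The label set and the plain averaging are assumptions for the sketch, not the challenge's official protocol.

```python
import numpy as np

def dice(pred_labels, gt_labels, label):
    """Dice overlap between prediction and ground truth for one substructure label."""
    p = pred_labels == label
    g = gt_labels == label
    denom = p.sum() + g.sum()
    return 1.0 if denom == 0 else 2.0 * np.logical_and(p, g).sum() / denom

def whole_heart_dice(pred_labels, gt_labels, labels=range(1, 8)):
    """Average Dice over hypothetical substructure labels 1..7 (e.g., chambers, myocardium, great vessels)."""
    return float(np.mean([dice(pred_labels, gt_labels, label) for label in labels]))
```
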
Stereotactic body radiation therapy (SBRT) for pancreatic cancer requires a skillful approach to deliver ablative doses to the tumor while limiting dose to the highly sensitive duodenum, stomach, and small bowel. Here, we develop knowledge-based artificial neural network dose models (ANN-DMs) to predict dose distributions that would be approved by experienced physicians. Using dose distributions calculated by a commercial treatment planning system (TPS), physician-approved treatment plans were used to train ANN-DMs that could predict physician-approved dose distributions based on a set of geometric parameters (which vary from voxel to voxel) and plan parameters (constant across all voxels for a given patient). Differences between TPS and ANN-DM dose distributions were used to evaluate model performance. ANN-DM design, including neural network structure and parameter choices, was evaluated to optimize dose model performance. Mean dose errors were less than 5% at all distances from the PTV, and mean absolute dose errors were on the order of 5%, but no more than 10%. Dose-volume histogram errors demonstrated good model performance above 25 Gy, but much larger errors were seen at lower doses. ANN-DM dose distributions showed excellent overall agreement with TPS dose distributions, and accuracy was substantially improved when each physician's treatment approach was taken into account by training their own dedicated models. In this manner, one could feasibly train ANN-DMs that predict the dose distribution desired by a given physician for a given treatment site.
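
A rough sketch of the voxel-wise regression setup described above, using a generic scikit-learn multilayer perceptron: per-voxel geometric features are concatenated with the patient's plan parameters and regressed against the physician-approved TPS dose. The feature layout, network size, and training settings are placeholders, not the configuration reported in the study.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

def train_ann_dm(geom, plan, patient_idx, dose):
    """Train one (hypothetical) ANN dose model.
    geom        : (n_voxels, n_geom) per-voxel geometric features
    plan        : (n_patients, n_plan) per-patient plan parameters
    patient_idx : (n_voxels,) patient index of each voxel
    dose        : (n_voxels,) physician-approved TPS dose per voxel
    """
    X = np.hstack([geom, plan[patient_idx]])   # broadcast plan parameters to voxels
    model = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=500)
    model.fit(X, dose)
    return model

def predict_voxel_dose(model, geom, plan_params):
    """Predict a dose value for every voxel of a new patient."""
    X = np.hstack([geom, np.tile(plan_params, (geom.shape[0], 1))])
    return model.predict(X)
```

Training a separate model per physician, as the study does, would amount to calling train_ann_dm once per physician on that physician's approved plans.
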
Purpose: A Monte Carlo (MC) beam model and its implementation in a clinical treatment planning system (TPS, Varian Eclipse) are presented for a modified ultra-high dose-rate electron FLASH radiotherapy (eFLASH-RT) LINAC. Methods: The gantry head without scattering foils or targets, representative of the LINAC modifications, was modelled in Geant4. The energy spectrum (σE) and beam source emittance cone angle (θcone) were varied to match the calculated and Gafchromic-film-measured central-axis percent depth dose (PDD) and lateral profiles. Its Eclipse configuration was validated with measured profiles of the open field and nominal fields for clinical applicators. eFLASH-RT plans were MC forward calculated in Geant4 for a mouse brain treatment and compared to a conventional (Conv-RT) plan in Eclipse for a human patient with metastatic renal cell carcinoma. Results: The beam model and its Eclipse configuration agreed best with measurements at σE = 0.5 MeV and θcone = 3.9 ± 0.2 degrees to clinically acceptable accuracy (the absolute average error was within 1.5% for in-water lateral, 3% for in-air lateral, and 2% for PDD). The forward dose calculation showed dose was delivered to the entire mouse brain with adequate conformality. The human patient case demonstrated the planning capability with routine accessories in relatively complex geometry to achieve an acceptable plan (90% of the tumor volume receiving 95% and 90% of the prescribed dose for eFLASH and Conv-RT, respectively). Conclusion: To the best of our knowledge, this is the first functional beam model commissioned in a clinical TPS for eFLASH-RT, enabling planning and evaluation with minimal deviation from the Conv-RT workflow. It facilitates clinical translation, as eFLASH-RT and Conv-RT plan quality were comparable for a human patient. The methods can be expanded to model other eFLASH irradiators.
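
The beam-model tuning step can be pictured as a search over (σE, θcone) that minimizes the discrepancy between simulated and film-measured curves. In the sketch below, simulate_pdd is a placeholder for a full Geant4 run, and the parameter grids and error definition are assumptions for illustration rather than the commissioning procedure used in the study.

```python
import numpy as np

def percent_error(calculated, measured):
    """Mean absolute difference between two depth-dose curves, as a percent of the measured maximum."""
    return 100.0 * np.mean(np.abs(calculated - measured) / measured.max())

def tune_beam_model(simulate_pdd, measured_pdd,
                    sigma_e_grid=np.arange(0.1, 1.05, 0.1),   # MeV (assumed search range)
                    theta_grid=np.arange(3.0, 5.05, 0.1)):    # degrees (assumed search range)
    """Grid search for the (sigma_E, theta_cone) pair whose simulated PDD best matches measurement."""
    best_params, best_err = None, np.inf
    for sigma_e in sigma_e_grid:
        for theta in theta_grid:
            err = percent_error(simulate_pdd(sigma_e, theta), measured_pdd)
            if err < best_err:
                best_params, best_err = (sigma_e, theta), err
    return best_params, best_err
```
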
Cheapfake is a recently coined term that encompasses non-AI (cheap) manipulations of multimedia content. Cheapfakes are known to be more prevalent than deepfakes. Cheapfake media can be created using editing software for image/video manipulations, or even without using any software, by simply altering the context of an image/video by sharing the media alongside misleading claims. This alteration of context is referred to as out-of-context (OOC) misuse of media. OOC media is much harder to detect than fake media, since the images and videos are not tampered with. In this challenge, we focus on detecting OOC images, and more specifically the misuse of real photographs with conflicting image captions in news items. The aim of this challenge is to develop and benchmark models that can be used to detect whether given samples (a news image and associated captions) are OOC, based on the recently compiled COSMOS dataset.
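
One simple illustrative baseline, not the challenge's official method, is to embed the image and both captions in a joint space and flag a sample as out-of-context when both captions plausibly match the image yet disagree with each other. The model name and thresholds below are arbitrary choices made for this sketch.

```python
from PIL import Image
from sentence_transformers import SentenceTransformer, util

# CLIP-style joint image/text embedding model.
model = SentenceTransformer("clip-ViT-B-32")

def is_out_of_context(image_path, caption_1, caption_2,
                      image_match_threshold=0.25, caption_agreement_threshold=0.5):
    """Heuristic OOC check: both captions ground in the image, but contradict each other."""
    image_emb = model.encode(Image.open(image_path))
    cap_1_emb, cap_2_emb = model.encode([caption_1, caption_2])
    image_matches_both = (util.cos_sim(image_emb, cap_1_emb).item() > image_match_threshold
                          and util.cos_sim(image_emb, cap_2_emb).item() > image_match_threshold)
    captions_disagree = util.cos_sim(cap_1_emb, cap_2_emb).item() < caption_agreement_threshold
    return image_matches_both and captions_disagree
```
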
In a real-world setting, visual recognition systems can be brought to make predictions for images belonging to previously unknown class labels. In order to make semantically meaningful predictions for such inputs, we propose a two-step approach that utilizes information from knowledge graphs. First, a knowledge-graph representation is learned to embed a large set of entities into a semantic space. Second, an image representation is learned to embed images into the same space. Under this setup, we are able to predict structured properties in the form of relationship triples for any open-world image. This is true even when a set of labels has been omitted from the training protocols of both the knowledge graph and image embeddings. Furthermore, we augment this learning framework with appropriate smoothness constraints and show how prior knowledge can be incorporated into the model. Both of these improvements combined increase performance for visual recognition by a factor of six compared to our baseline. Finally, we propose a new, extended dataset which we use for experiments.
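
As a minimal sketch of how relationship triples might be scored once images and knowledge-graph entities share an embedding space, the snippet below uses a TransE-style distance. The scoring function, embedding dimensions, and random initial embeddings are assumptions for illustration, not the trained model from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
dim, n_entities, n_relations = 64, 1000, 20
entity_emb = rng.normal(size=(n_entities, dim))      # learned KG entity embeddings (random here)
relation_emb = rng.normal(size=(n_relations, dim))   # learned relation embeddings (random here)

def triple_score(head_vec, relation_id, tail_id):
    """TransE-style score: lower means the triple (head, relation, tail) is more plausible."""
    return float(np.linalg.norm(head_vec + relation_emb[relation_id] - entity_emb[tail_id]))

def predict_tails(image_vec, relation_id, top_k=5):
    """Rank candidate tail entities for an image embedded into the shared semantic space."""
    scores = np.linalg.norm(image_vec + relation_emb[relation_id] - entity_emb, axis=1)
    return np.argsort(scores)[:top_k]
```
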
