
A new simulation-based model for calculating post-mortem intervals using developmental data for Lucilia sericata (Dipt.: Calliphoridae)

Published by: Philip von Doetinchem
Publication date: 2009
Research field: Biology
Paper language: English





Homicide investigations often depend on the determination of a minimum post-mortem interval (PMI$_{min}$) by forensic entomologists. The age of the most developed insect larvae (mostly blow fly larvae) gives reasonably reliable information about the minimum time a person has been dead. Methods such as isomegalen diagrams or ADH calculations can have problems in their reliability, so in this study we established a new growth model to calculate the larval age of \textit{Lucilia sericata} (Meigen 1826). It is based on the actual non-linear development of the blow fly and is designed to incorporate uncertainties, e.g. in temperature values from the crime scene. We used published data for the development of \textit{L. sericata} to estimate non-linear functions describing the temperature-dependent behavior of each developmental state. For the new model it is most important to determine the progress within one developmental state as accurately as possible, since this affects the accuracy of the PMI estimation by up to 75%. We found that PMI calculations based on a single mean temperature value differ by up to 65% from PMIs based on a 12-hourly time-temperature profile. Differences of 2°C in the estimation of the crime-scene temperature result in a deviation in the PMI calculation of 15-30%.
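To make the contrast between mean-temperature and profile-based calculations concrete, the sketch below accumulates fractional progress through a single developmental state over a 12-hourly temperature profile. The quadratic duration function `stage_duration` is a hypothetical placeholder, not one of the curves fitted in the paper; it only illustrates why, for a non-linear model, averaging temperatures first gives a different answer than accumulating development at each recorded temperature.

```python
import numpy as np

# Hypothetical duration D(T), in hours, of one developmental state at
# constant temperature T (deg C). The paper fits non-linear curves to
# published L. sericata data; this quadratic is only a placeholder.
def stage_duration(T, D_min=20.0, T_opt=30.0, a=0.01):
    return D_min * (1.0 + a * (T - T_opt) ** 2)

def stage_progress(temps, dt_hours=12.0):
    """Fraction of one developmental state completed over a 12-hourly
    temperature profile; a value of 1.0 means the state finished."""
    return sum(dt_hours / stage_duration(T) for T in temps)

# A three-day, 12-hourly crime-scene profile vs. its single mean value:
profile = np.array([14.0, 22.0, 12.0, 25.0, 15.0, 24.0])
print(stage_progress(profile))                     # profile-based
print(stage_progress(np.full(6, profile.mean())))  # mean-based, differs
```

Because development is non-linear in temperature, the two printed estimates disagree; feeding a model a single mean temperature is exactly the shortcut whose error the abstract quantifies at up to 65%.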




Read also

One of the answers to the measurement problem in quantum theory is given by the Copenhagen interpretation of quantum theory (i.e. orthodox quantum theory), in which the wave function collapse happens in (by) the mind of the observer. In fact, at first, great scientists like von Neumann, London, Bauer and Wigner (initially) believed that the wave function collapse occurs in the brain or is caused by the consciousness of the observer. However, this issue has remained very controversial. There are many challenging discussions about the survival of quantum effects in microscopic structures of the human brain, mainly because the quick decoherence of quantum states in the hot, wet and noisy environment of the brain forbids long-lived coherence for brain processing. Nevertheless, there are also several arguments and pieces of evidence that the emergence of large coherent states is feasible in the brain. In this paper, our approach is based on the latter view, in which macroscopic quantum states are probable in the human brain. Here, we simulate the delayed luminescence of photons in neurons with a Brassard-like teleportation circuit, i.e. equivalent to the transfer of quantum states of photons through the visual pathways from the retina to the visual cortex. Our simulation considers both classical and quantum mechanical aspects of processing in neurons. As a result, based on our simulation, it is possible for our brain to receive the exact quantum states of photons in the visual cortex to be collapsed by our consciousness, which supports the Copenhagen interpretation of the measurement problem in quantum theory.
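For readers unfamiliar with the circuit being referenced, here is a minimal state-vector sketch of the standard teleportation protocol (in the spirit of Brassard's circuit), not the authors' neuron-level simulation; the random input state, qubit ordering, and NumPy representation are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(1)
I2 = np.eye(2)
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
X = np.array([[0, 1], [1, 0]])
Z = np.array([[1, 0], [0, -1]])

def kron3(a, b, c):
    """Tensor product of three single-qubit operators."""
    return np.kron(np.kron(a, b), c)

# Unknown photon state |psi> = a|0> + b|1> (random, normalized).
amps = rng.normal(size=2) + 1j * rng.normal(size=2)
psi = amps / np.linalg.norm(amps)

# Qubits 1 and 2 share the Bell pair (|00> + |11>)/sqrt(2);
# qubit 0 (most significant) carries |psi>.
bell = np.array([1, 0, 0, 1]) / np.sqrt(2)
state = np.kron(psi, bell)

# CNOT with control qubit 0 and target qubit 1, then H on qubit 0.
CNOT01 = np.zeros((8, 8))
for col in range(8):
    q0, q1, q2 = (col >> 2) & 1, (col >> 1) & 1, col & 1
    CNOT01[(q0 << 2) | ((q1 ^ q0) << 1) | q2, col] = 1
state = kron3(H, I2, I2) @ (CNOT01 @ state)

# Born-rule measurement of qubits 0 and 1.
probs = np.abs(state) ** 2
outcome = rng.choice(8, p=probs / probs.sum())
m0, m1 = (outcome >> 2) & 1, (outcome >> 1) & 1
keep = np.array([((k >> 2) & 1, (k >> 1) & 1) == (m0, m1) for k in range(8)])
state = np.where(keep, state, 0)
state /= np.linalg.norm(state)

# Classical corrections on qubit 2: X if m1 == 1, then Z if m0 == 1.
if m1:
    state = kron3(I2, I2, X) @ state
if m0:
    state = kron3(I2, I2, Z) @ state

# Qubit 2 now carries |psi>.
received = state.reshape(2, 2, 2)[m0, m1, :]
print(abs(np.vdot(psi, received)))  # ~1.0: the state arrived intact
```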
A large volume of genomics data is produced on a daily basis due to advances in sequencing technology. This data is of no value if it is not properly analysed. Different kinds of analytics are required to extract useful information from this raw data. Classification, prediction, clustering and pattern extraction are useful data-mining techniques. These techniques require appropriate selection of data attributes to produce accurate results. However, Bioinformatics data is high-dimensional, usually having hundreds of attributes. Such a large number of attributes affects the performance of the machine learning algorithms used for classification/prediction. So, dimensionality reduction techniques are required to reduce the number of attributes that can then be used for analysis. In this paper, Principal Component Analysis and Factor Analysis are used for dimensionality reduction of Bioinformatics data. These techniques were applied to a Leukaemia data set and the number of attributes was substantially reduced.
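As a minimal sketch of the two reduction techniques named above, using synthetic data in place of the Leukaemia set and scikit-learn rather than whatever toolchain the authors used:

```python
import numpy as np
from sklearn.decomposition import PCA, FactorAnalysis

# Stand-in for a high-dimensional Bioinformatics matrix
# (samples x attributes); the paper's Leukaemia data set is not
# reproduced here.
rng = np.random.default_rng(0)
X = rng.normal(size=(72, 500))

# Keep enough principal components to explain 95% of the variance.
pca = PCA(n_components=0.95)
X_pca = pca.fit_transform(X)
print(X.shape[1], "->", X_pca.shape[1], "attributes (PCA)")

# Factor Analysis with a fixed number of latent factors.
fa = FactorAnalysis(n_components=10)
X_fa = fa.fit_transform(X)
print(X.shape[1], "->", X_fa.shape[1], "attributes (FA)")
```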
Computing has revolutionized the biological sciences over the past several decades, such that virtually all contemporary research in the biosciences utilizes computer programs. The computational advances have come on many fronts, spurred by fundamental developments in hardware, software, and algorithms. These advances have influenced, and even engendered, a phenomenal array of bioscience fields, including molecular evolution and bioinformatics; genome-, proteome-, transcriptome- and metabolome-wide experimental studies; structural genomics; and atomistic simulations of cellular-scale molecular assemblies as large as ribosomes and intact viruses. In short, much of post-genomic biology is increasingly becoming a form of computational biology. The ability to design and write computer programs is among the most indispensable skills that a modern researcher can cultivate. Python has become a popular programming language in the biosciences, largely because (i) its straightforward semantics and clean syntax make it a readily accessible first language; (ii) it is expressive and well-suited to object-oriented programming, as well as other modern paradigms; and (iii) the many available libraries and third-party toolkits extend the functionality of the core language into virtually every biological domain (sequence and structure analyses, phylogenomics, workflow management systems, etc.). This primer offers a basic introduction to coding, via Python, and it includes concrete examples and exercises to illustrate the language's usage and capabilities; the main text culminates with a final project in structural bioinformatics. A suite of Supplemental Chapters is also provided. Starting with basic concepts, such as that of a variable, the Chapters methodically advance the reader to the point of writing a graphical user interface to compute the Hamming distance between two DNA sequences.
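The final-project computation mentioned at the end, the Hamming distance between two equal-length DNA sequences, reduces to a few lines of Python (a bare sketch, without the GUI layer the primer builds):

```python
def hamming_distance(seq1: str, seq2: str) -> int:
    """Number of positions at which two equal-length DNA sequences differ."""
    if len(seq1) != len(seq2):
        raise ValueError("sequences must have equal length")
    return sum(a != b for a, b in zip(seq1, seq2))

print(hamming_distance("GATTACA", "GACTATA"))  # -> 2
```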
Mario V. Balzan (2021)
The term nature-based solutions has often been used to refer to adequate green infrastructure, which is cost-effective and simultaneously provides environmental, social and economic benefits through the delivery of ecosystem services, and contributes to building resilience. This paper provides an overview of recent work mapping and assessing ecosystem services in Malta and the implications for decision-making. Research has focused on the identification and mapping of ecosystems and ecosystem condition, the capacity to deliver key ecosystem services, and the actual use (flow) of these services by local communities, leading to benefits to human well-being. The integration of results from these different assessments demonstrates several significant synergies between ecosystem services, indicating multifunctionality in the provision of ecosystem services leading to human well-being. This is considered a key criterion in the identification of green infrastructure in the Maltese Islands. A gradient in green infrastructure cover and ecosystem services capacity is observed between rural and urban areas, but ecosystem services flow per unit area was in some cases higher in urban environments. These results indicate a potential mismatch between ecosystem service demand and capacity, but also provide a scientific baseline for evidence-based policy which fosters the development of green infrastructure through nature-based innovation, promoting more specific and novel solutions for landscape and urban planning.
Biological data mainly comprises Deoxyribonucleic acid (DNA) and protein sequences. These are the biomolecules present in all cells of human beings. Due to its self-replicating property, DNA is a key constituent of the genetic material that exists in all living creatures. This biomolecule (DNA) contains the genetic material required for the functioning and development of all living organisms. Storing the DNA data of a single person requires about 10 CD-ROMs. Moreover, this size is increasing constantly, and more and more sequences are being added to the public databases. This abundant increase in sequence data raises challenges for precise information extraction, since many data analysis and visualization tools do not support the processing of such huge amounts of data. To reduce the size of DNA and protein sequences, many scientists have introduced various types of sequence compression algorithms such as compress or gzip, Context Tree Weighting (CTW), Lempel-Ziv-Welch (LZW), arithmetic coding, run-length encoding, and the substitution method. These techniques have contributed substantially to minimizing the volume of biological datasets. On the other hand, traditional compression techniques are not particularly suitable for the compression of these types of sequential data. In this paper, we explore diverse types of techniques for the compression of large amounts of DNA sequence data. The analysis reveals that efficient techniques not only reduce the size of the sequence but also avoid any information loss. The review of existing studies also shows that compression of a DNA sequence is significant for understanding the critical characteristics of DNA data, in addition to improving storage efficiency and data transmission. In addition, the compression of protein sequences remains a challenge for the research community. The major parameters for the evaluation of these compression algorithms include compression ratio and running-time complexity.
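As an illustration of the substitution idea surveyed above, the sketch below packs each base into 2 bits, a fixed 4x reduction over one-byte-per-character text; real DNA compressors (CTW, LZW, arithmetic coding) additionally exploit repeats and higher-order statistics. The function names and the omission of non-ACGT symbols are illustrative choices.

```python
CODE = {"A": 0b00, "C": 0b01, "G": 0b10, "T": 0b11}
BASE = "ACGT"

def pack(seq: str) -> bytes:
    """Encode an A/C/G/T string at 2 bits per base (4 bases per byte)."""
    out = bytearray()
    for i in range(0, len(seq), 4):
        group = seq[i:i + 4]
        byte = 0
        for base in group:
            byte = (byte << 2) | CODE[base]
        byte <<= 2 * (4 - len(group))  # left-align a final partial group
        out.append(byte)
    return bytes(out)

def unpack(data: bytes, n: int) -> str:
    """Decode n bases back out of 2-bit-packed data (lossless round trip)."""
    bases = []
    for byte in data:
        for shift in (6, 4, 2, 0):
            bases.append(BASE[(byte >> shift) & 0b11])
    return "".join(bases[:n])

seq = "GATTACAGATTACA"
packed = pack(seq)
assert unpack(packed, len(seq)) == seq
print(f"{len(seq)} bytes -> {len(packed)} bytes")  # roughly 4x smaller
```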