We consider a network consisting of $n$ components (links or nodes) and assume that the network has two states, up and down. We further suppose that the network is subject to shocks that arrive according to a counting process and that each shock may lead to component failures. Under some assumptions on the shock occurrences, we present a new variant of the notion of signature, which we call the t-signature. t-signature-based mixture representations for the reliability function of the network are then obtained, and several stochastic properties of the network lifetime are investigated. In particular, under the assumptions that the number of failures at each shock follows a binomial distribution and that the shock process is a non-homogeneous Poisson process, an explicit form of the network reliability is derived and its aging properties are explored. Several examples are also provided.
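To make the binomial/NHPP special case concrete, the following Python sketch estimates the network reliability $R(t)$ by Monte Carlo. The power-law intensity, the binomial parameters, and the k-out-of-n failure criterion are all illustrative assumptions (the paper works with a general t-signature, not this particular structure):

    import numpy as np

    rng = np.random.default_rng(0)

    def reliability(t, n=10, k=4, m=3, p=0.3, rate=0.5, power=1.2, sims=100_000):
        # Hypothetical NHPP mean function Lambda(t) = rate * t**power,
        # so the number of shocks in [0, t] is Poisson(Lambda(t)).
        shocks = rng.poisson(rate * t ** power, size=sims)
        # Each shock fails Binomial(m, p) components; a sum of N such
        # binomials is Binomial(N * m, p), so failures can be drawn at once.
        failed = rng.binomial(shocks * m, p)
        # Assume the network stays up while fewer than k of its n
        # components have failed (a k-out-of-n stand-in for the t-signature).
        return np.mean(failed < k)

    for t in (0.5, 1.0, 2.0):
        print(f"R({t}) ≈ {reliability(t):.3f}")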
A new approach called RESID is proposed in this paper for estimating the reliability of software while allowing for imperfect debugging. Unlike earlier approaches based on counting the number of bugs or modelling inter-failure time gaps, RESID focuses on the probability that different parts of a program are buggy. This perspective provides a natural way to incorporate the structure of the software under test, as well as imperfect debugging. One main design objective behind RESID is ease of implementation in practical scenarios.
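As a rough illustration of the "probability of bugginess" viewpoint (this is not the paper's code; the number of parts, the usage profile, the detection probability d, and the fix-success probability q below are all hypothetical):

    import numpy as np

    theta = np.full(5, 0.2)                          # prior P(part i is buggy)
    usage = np.array([0.5, 0.3, 0.1, 0.07, 0.03])    # operational profile (assumed)
    d, q = 0.7, 0.8                                  # detection / fix-success probs

    def after_pass(theta_i, d=d):
        # Bayes update when a test exercising part i passes.
        return theta_i * (1 - d) / (theta_i * (1 - d) + (1 - theta_i))

    def after_fail_and_fix(q=q):
        # A failure reveals the part is buggy; imperfect debugging means
        # the subsequent fix only removes the bug with probability q.
        return 1 - q

    def run_reliability(theta, usage):
        # P(a single run touches no buggy part), treating parts independently.
        return np.prod(1 - usage * theta)

    theta[0] = after_fail_and_fix()    # part 0 failed and was "fixed"
    theta[1] = after_pass(theta[1])    # part 1 passed a test
    print(f"estimated run reliability: {run_reliability(theta, usage):.3f}")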
There is increasing appetite for analysing multiple network data, which differ from traditional data sets in that each observation in the data comprises a network. Recent technological advancements have allowed the collection of this type of data in a range of different applications. This has inspired researchers to develop statistical models that most accurately describe the probabilistic mechanism generating a network population, and to use these models to make inferences about the underlying structure of the network data. However, only a few studies developed to date consider the heterogeneity that can exist in a network population. We propose a Mixture of Measurement Error Models for identifying clusters of networks in a network population, with respect to similarities detected in the connectivity patterns among the networks' nodes. Extensive simulation studies show that our model performs well both in clustering multiple network data and in inferring the model parameters. We further apply our model to two real-world multiple network data sets from the fields of Computing (Human Tracking Systems) and Neuroscience.
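A minimal sketch of the clustering idea, assuming (since the paper's exact likelihood is not reproduced here) that each cluster is summarised by an edge-probability matrix and each observed network is Bernoulli noise around it, fitted by EM:

    import numpy as np

    def cluster_networks(adjs, G=2, iters=50, seed=0):
        rng = np.random.default_rng(seed)
        S, n, _ = adjs.shape
        iu = np.triu_indices(n, 1)
        X = adjs[:, iu[0], iu[1]].astype(float)      # S x E upper-triangle edges
        pi = np.full(G, 1.0 / G)                     # mixing weights
        P = rng.uniform(0.3, 0.7, (G, X.shape[1]))   # per-cluster edge probs
        for _ in range(iters):
            # E-step: responsibilities from Bernoulli log-likelihoods
            ll = X @ np.log(P).T + (1 - X) @ np.log(1 - P).T + np.log(pi)
            r = np.exp(ll - ll.max(axis=1, keepdims=True))
            r /= r.sum(axis=1, keepdims=True)
            # M-step: update weights and edge probabilities
            Nk = r.sum(axis=0)
            pi = Nk / S
            P = np.clip((r.T @ X) / Nk[:, None], 1e-3, 1 - 1e-3)
        return r.argmax(axis=1)

    # Toy data: two groups of networks with different connectivity levels
    rng = np.random.default_rng(1)
    A = (rng.random((20, 15, 15)) < 0.2).astype(int)
    A[10:] |= (rng.random((10, 15, 15)) < 0.4).astype(int)
    A = np.triu(A, 1)
    A = A + A.transpose(0, 2, 1)                     # symmetrise
    print(cluster_networks(A))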
Distribution network operators (DNOs) are increasingly concerned about the impact of low carbon technologies on low voltage (LV) networks. More advanced metering infrastructures provide numerous opportunities for more accurate load flow analysis of LV networks. However, such data may not be readily available to DNOs and, in any case, is likely to be expensive. Modelling tools are therefore required which can provide realistic, yet accurate, load profiles as input for a network modelling tool, without needing access to large amounts of monitored customer data. In this paper we outline some simple methods for accurately modelling a large number of unmonitored residential customers at the LV level. We do this by a process we call buddying, which models unmonitored customers by assigning them load profiles from a limited sample of monitored customers who have smart meters; the presented method therefore requires access to only a relatively small amount of domestic customer data. The assignment is efficiently optimised using a genetic algorithm that minimises a weighted cost function combining the mismatch with the substation data and the mismatch with the individual mean daily demands, which also demonstrates the value of substation monitoring in LV network modelling. Using real LV network modelling, we show that our methods perform significantly better than a comparative Monte Carlo approach, and we provide a description of the peak demand behaviour.
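A toy version of the buddying optimisation, with synthetic stand-ins for the monitored profiles, the substation feeder load, and the customers' estimated mean daily demands (the real data, cost weights, and GA settings are the paper's own and are not reproduced here):

    import numpy as np

    rng = np.random.default_rng(1)
    profiles = rng.gamma(2.0, 0.25, (20, 48))     # 20 monitored half-hourly profiles
    truth = rng.integers(0, 20, 50)               # hidden "true" buddies
    substation = profiles[truth].sum(axis=0)      # observed aggregate feeder load
    mean_daily = profiles[truth].sum(axis=1)      # customers' estimated daily kWh

    def cost(assign, w=0.5):
        # Weighted mismatch: substation profile error + mean-demand error
        load = profiles[assign]
        return (w * np.abs(load.sum(axis=0) - substation).sum()
                + (1 - w) * np.abs(load.sum(axis=1) - mean_daily).sum())

    def buddy_ga(pop=40, gens=300, mut=0.05):
        popn = rng.integers(0, 20, (pop, 50))
        for _ in range(gens):
            popn = popn[np.argsort([cost(a) for a in popn])]
            for i in range(pop // 2, pop):                     # refill worst half
                a, b = popn[rng.integers(0, pop // 2, 2)]
                child = np.where(rng.random(50) < 0.5, a, b)   # uniform crossover
                flip = rng.random(50) < mut                    # random mutation
                child[flip] = rng.integers(0, 20, flip.sum())
                popn[i] = child
        return popn[0]

    best = buddy_ga()
    print("final cost:", round(cost(best), 2))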
Mortality differs across countries, states and regions. Several empirical studies, however, reveal that mortality trends exhibit a common pattern and show similar structures across populations. The key element in analyzing mortality rates is a time-varying indicator curve. Our main interest lies in validating the existence of common trends among these curves, of similar gender differences, and of their variability in location among the curves at the national level. Motivated by these empirical findings, we study the estimation and forecasting of mortality rates based on a semi-parametric approach, applied to multiple curves with shape-related nonlinear variation. This approach allows us to capture the common features contained in the curve functions while characterizing the nonlinear variation via a few deviation parameters. These parameters provide an instructive summary of the time-varying curve functions and can further be used to produce a suggestive forecast for countries with sparse data sets. The model is illustrated with mortality rates of Japan and China, and extended to incorporate more countries.
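The flavour of the approach can be sketched with a simple shape-invariant model, $y_c(t) \approx a_c + b_c\, g(t)$, where $g$ is the common curve and $(a_c, b_c)$ are per-country deviation parameters; the backfitting scheme and normalisation below are illustrative choices, not the paper's estimator:

    import numpy as np

    def fit_shape_model(curves, iters=25):
        # curves: C x T array of (log-)mortality indicator curves
        C, T = curves.shape
        a, b = curves.mean(axis=1), np.ones(C)
        g = curves.mean(axis=0) - curves.mean()
        for _ in range(iters):
            # per-curve least squares for the deviation parameters (a_c, b_c)
            A = np.column_stack([np.ones(T), g])
            for c in range(C):
                a[c], b[c] = np.linalg.lstsq(A, curves[c], rcond=None)[0]
            # update the common curve g given (a_c, b_c)
            g = ((curves - a[:, None]) * b[:, None]).sum(axis=0) / (b ** 2).sum()
            # normalise g for identifiability (zero mean, unit scale)
            m = g.mean()
            a += b * m
            g -= m
            s = np.sqrt((g ** 2).mean())
            g /= s
            b *= s
        return a, b, g

A country with sparse data can then be forecast by borrowing the common curve g and estimating only its two deviation parameters.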
Air pollution is a major risk factor for global health, with both ambient and household air pollution contributing substantial components of the overall global disease burden. One of the key drivers of adverse health effects is fine particulate matter ambient pollution (PM$_{2.5}$), to which an estimated 3 million deaths can be attributed annually. The primary source of information for estimating exposures has been measurements from ground monitoring networks but, although coverage is increasing, there remain regions in which monitoring is limited. Ground monitoring data therefore need to be supplemented with information from other sources, such as satellite retrievals of aerosol optical depth and chemical transport models. A hierarchical modelling approach for integrating data from multiple sources is proposed, allowing spatially-varying relationships between ground measurements and other factors that estimate air quality. Set within a Bayesian framework, the resulting Data Integration Model for Air Quality (DIMAQ) is used to estimate exposures, together with associated measures of uncertainty, on a high resolution grid covering the entire world. Bayesian analysis on this scale can be computationally challenging, and here approximate Bayesian inference is performed using Integrated Nested Laplace Approximations. Model selection and assessment are performed by cross-validation, with the final model offering substantial increases in predictive accuracy, particularly in regions with sparse ground monitoring, when compared to current approaches: root mean square error (RMSE) is reduced from 17.1 to 10.7 $\mu$g m$^{-3}$, and population-weighted RMSE from 23.1 to 12.1 $\mu$g m$^{-3}$. Based on summaries of the posterior distributions for each grid cell, it is estimated that 92% of the world's population reside in areas exceeding the World Health Organization's Air Quality Guidelines.
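The data-integration idea can be caricatured in a few lines: per-region calibration coefficients between ground measurements and a satellite-based estimate, shrunk toward a global fit. DIMAQ itself is a far richer spatial model fitted with R-INLA; the prior scale tau2 below is an arbitrary illustrative value:

    import numpy as np

    def calibrate(y, x, region, tau2=0.05):
        # y: log ground PM2.5; x: log satellite-based estimate; region: labels
        X = np.column_stack([np.ones_like(x), x])
        beta_global = np.linalg.lstsq(X, y, rcond=None)[0]
        coeffs = {}
        for r in np.unique(region):
            Xr, yr = X[region == r], y[region == r]
            # MAP estimate under a N(beta_global, tau2 I) prior: data-rich
            # regions keep their own fit, sparse ones shrink to the global one
            coeffs[r] = np.linalg.solve(Xr.T @ Xr + np.eye(2) / tau2,
                                        Xr.T @ yr + beta_global / tau2)
        return beta_global, coeffs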