
Simulating recurrent events that mimic actual data: a review of the literature with emphasis on event-dependence

Added by Aurelien Latouche
Publication date: 2015
Language: English





We conduct a review to assess how simulations of repeated or recurrent events are planned. For such multivariate time-to-event data, it is well established that the underlying mechanism is likely to be complex and to involve, in particular, both heterogeneity in the population and event-dependence. In this respect, we focus on these two dimensions of the event dynamics when mimicking actual data. Next, we investigate whether the processes generated in the simulation studies have properties similar to those expected in the clinical data of interest. Finally, we describe a simulation scheme for generating data according to the timescale of choice (gap time or calendar time) and to whether heterogeneity and/or event-dependence are to be considered. The main finding is that event-dependence is less widely considered in simulation studies than heterogeneity. This is unfortunate, since the occurrence of an event may alter the risk of occurrence of new events.
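As a concrete illustration of such a scheme, the sketch below simulates recurrent events on the gap-time scale with both sources of dynamics: heterogeneity through a subject-specific gamma frailty and event-dependence through a multiplicative effect of the number of prior events on the hazard. This is a minimal toy generator, not the exact algorithm reviewed in the paper; all parameter names and values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2015)

def simulate_recurrent(n_subjects=500, tau=10.0, base_rate=0.5,
                       frailty_var=0.5, dep_coef=0.2):
    """Toy generator of recurrent events on the gap-time scale.

    Heterogeneity: subject-specific gamma frailty Z with mean 1 and
    variance frailty_var. Event-dependence: after k prior events the
    hazard is base_rate * Z * exp(dep_coef * min(k, 10)); the cap keeps
    this toy process from exploding. Follow-up is censored at tau.
    """
    rows = []
    for i in range(n_subjects):
        z = rng.gamma(shape=1.0 / frailty_var, scale=frailty_var)  # E[Z] = 1
        t, k = 0.0, 0
        while True:
            rate_k = base_rate * z * np.exp(dep_coef * min(k, 10))
            gap = rng.exponential(1.0 / rate_k)        # exponential gap time
            if t + gap > tau:                          # administrative censoring
                rows.append((i, t, tau, k, 0))
                break
            t += gap
            rows.append((i, t - gap, t, k, 1))         # observed event
            k += 1
    return np.array(rows)  # columns: id, start, stop, prior events, event indicator

data = simulate_recurrent()
print("events per subject on average:", data[:, 4].sum() / 500)
```

Setting dep_coef to zero removes event-dependence, and setting frailty_var close to zero removes heterogeneity, so the two mechanisms can be switched on and off independently in a simulation study.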

Related research


Andrej Srakar, 2020
Multiple Indicators Multiple Causes (MIMIC) models are a type of structural equation model: a theory-based approach to confirm the influence of a set of exogenous causal variables on a latent variable, and also the effect of the latent variable on observed indicator variables. In a common MIMIC model, multiple indicators reflect the underlying latent variables/factors, and the multiple causes (observed predictors) affect the latent variables/factors. Basic assumptions of MIMIC are clearly violated when a variable is both an indicator and a cause, i.e. in the presence of reverse causality; furthermore, the model is then unidentified. To resolve this situation, which can arise frequently, and because MIMIC estimation lacks closed-form solutions for the parameters, we utilize a version of Bollen's (1996) 2SLS estimator for structural equation models, combined with Joreskog's (1970) method of the analysis of covariance structures, to derive a new 2SLS estimator for MIMIC models. Our empirical 2SLS estimation is based on a static MIMIC specification, but we also point to a dynamic/error-correction MIMIC specification and a 2SLS solution for it. We derive basic asymptotic theory for the static 2SLS-MIMIC estimator, present a simulation study, and apply the findings to an interesting empirical case: estimating the precarious status of older workers (using the Survey of Health, Ageing and Retirement in Europe dataset), which addresses the important issue of defining precarious work as a multidimensional concept, something not adequately modelled so far.
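For readers unfamiliar with the estimation strategy, the sketch below shows generic two-stage least squares on simulated data with one endogenous regressor and two instruments. It is not the authors' MIMIC-specific estimator (which works on covariance structures); all variables and values are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000

# Simulated data: x is endogenous (correlated with the error u),
# z1 and z2 are instruments correlated with x but not with u.
z1, z2 = rng.normal(size=n), rng.normal(size=n)
u = rng.normal(size=n)
x = 0.8 * z1 + 0.5 * z2 + 0.7 * u + rng.normal(size=n)   # endogeneity via u
y = 1.0 + 2.0 * x + u                                     # true slope = 2

def two_sls(y, x_endog, z_instr):
    """Generic 2SLS: project the endogenous regressor on the instruments,
    then run OLS of y on the fitted values (plus an intercept)."""
    n = len(y)
    Z = np.column_stack([np.ones(n), z_instr])             # first-stage design
    X = np.column_stack([np.ones(n), x_endog])             # structural design
    X_hat = Z @ np.linalg.lstsq(Z, X, rcond=None)[0]       # first-stage fits
    return np.linalg.lstsq(X_hat, y, rcond=None)[0]        # second-stage OLS

ols = np.linalg.lstsq(np.column_stack([np.ones(n), x]), y, rcond=None)[0]
print("naive OLS slope:", ols[1])                          # biased upward
print("2SLS slope     :", two_sls(y, x, np.column_stack([z1, z2]))[1])
```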
Multichannel adaptive signal detection jointly uses the test and training data to form an adaptive detector, and then makes a decision on whether a target is present. Remarkably, the resulting adaptive detectors usually possess the constant false alarm rate (CFAR) property, and hence no additional CFAR processing is needed. Filtering is not needed as a separate processing step either, since its function is embedded in the adaptive detector. Moreover, adaptive detection usually exhibits better detection performance than the filtering-then-CFAR technique. It has been more than 30 years since the first multichannel adaptive detector was proposed by Kelly in 1986, yet there are few overview articles on this topic. In this paper we give a tutorial overview of multichannel adaptive signal detection, with emphasis on Gaussian backgrounds. We present the main design criteria for adaptive detectors, investigate the relationships between adaptive detection and filtering-then-CFAR detection and between adaptive detectors and adaptive filters, summarize typical adaptive detectors, show numerical examples, give a comprehensive literature review, and discuss some possible further research tracks.
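For orientation, the sketch below evaluates the test statistic of Kelly's detector, in its commonly cited form, on simulated complex Gaussian data. The steering vector, dimensions, and signal level are illustrative assumptions, and no calibrated CFAR threshold is applied.

```python
import numpy as np

rng = np.random.default_rng(1)

def kelly_glrt(x, training, s):
    """Kelly's GLRT statistic as commonly written:
    S = sum of training snapshot outer products,
    lam = |s^H S^-1 x|^2 / ((s^H S^-1 s) * (1 + x^H S^-1 x))."""
    S = training @ training.conj().T            # N x N unnormalised sample covariance
    Si = np.linalg.inv(S)
    num = np.abs(s.conj() @ Si @ x) ** 2
    den = (s.conj() @ Si @ s).real * (1.0 + (x.conj() @ Si @ x).real)
    return float(num / den)

N, K = 8, 32                                    # channels, training snapshots

def cgauss(m):
    """Unit-power complex Gaussian noise, N channels by m snapshots."""
    return (rng.normal(size=(N, m)) + 1j * rng.normal(size=(N, m))) / np.sqrt(2)

s = np.ones(N, dtype=complex) / np.sqrt(N)      # assumed steering vector
training = cgauss(K)                            # target-free training data
x_h0 = cgauss(1)[:, 0]                          # test cell: noise only
x_h1 = 4.0 * s + cgauss(1)[:, 0]                # test cell: target present

print("H0 statistic:", kelly_glrt(x_h0, training, s))
print("H1 statistic:", kelly_glrt(x_h1, training, s))
```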
Cloud computing has become a powerful and indispensable technology for complex, high-performance and scalable computation. The exponential expansion in the deployment of cloud technology has produced a massive amount of data from a variety of applications, resources and platforms. In turn, the rapid rate and volume of data creation have begun to pose significant challenges for data management and security. The design and deployment of intrusion detection systems (IDS) in the big data setting has, therefore, become a topic of importance. In this paper, we conduct a systematic literature review (SLR) of data mining techniques (DMT) used in IDS-based solutions over the period 2013-2018. We employed criterion-based, purposive sampling, identifying 32 articles that constitute the primary sources of the present survey. After a careful investigation of these articles, we identified 17 separate DMTs deployed in an IDS context. This paper also presents the merits and disadvantages of the various current research works that implemented DMTs and distributed streaming frameworks (DSF) to detect and/or prevent malicious attacks in a big data environment.
Yue Liu, Qinghua Lu, Liming Zhu, 2021
Blockchain has been increasingly used as a software component to enable decentralisation in software architecture for a variety of applications. Blockchain governance has received considerable attention to ensure the safe and appropriate use and evolution of blockchain, especially after the Ethereum DAO attack in 2016. To understand the state of the art of blockchain governance and provide actionable guidance for academia and practitioners, in this paper we conduct a systematic literature review, identifying 34 primary studies. Our study comprehensively investigates blockchain governance via 5W1H questions. The study results reveal several major findings: 1) the adaptation and upgrade of blockchain are the primary purposes of blockchain governance, while both software quality attributes and human value attributes need to be increasingly considered; 2) blockchain governance mainly relies on the project team, node operators, and users of a blockchain platform; and 3) existing governance solutions can be classified into process mechanisms and product mechanisms, which mainly focus on the operation phase over the blockchain platform layer.
We present new estimators for the statistical analysis of the dependence of the mean gap-time length between consecutive recurrent events on a set of explanatory random variables, in the presence of right censoring. The dependence is expressed through regression-like and overdispersion parameters, estimated via conditional estimating equations. The mean and variance of the length of each gap time, conditioned on the observed history of prior events and other covariates, are known functions of the parameters and covariates. Under certain conditions on censoring, we construct normalized estimating functions that are asymptotically unbiased and contain only observed data. We discuss the existence, consistency and asymptotic normality of a sequence of estimators of the parameters, which are roots of these estimating equations. Simulations suggest that our estimators could be used successfully with a relatively small sample size in a study of short duration.
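As a rough illustration of the estimating-equation idea only (ignoring the censoring and prior-event history handled by the authors), the sketch below solves an unbiased estimating equation for a log-linear mean model of gap times on simulated, uncensored data; all names and values are illustrative.

```python
import numpy as np
from scipy.optimize import root

rng = np.random.default_rng(7)
n = 1000

# Toy uncensored gap times with a log-linear mean: E[Y | x] = exp(b0 + b1 * x).
x = rng.normal(size=n)
true_beta = np.array([0.5, 0.8])
mu = np.exp(true_beta[0] + true_beta[1] * x)
y = rng.gamma(shape=2.0, scale=mu / 2.0)        # overdispersed positive gap times

X = np.column_stack([np.ones(n), x])

def estimating_equation(beta):
    """Unbiased estimating function U(beta) = X^T (y - exp(X beta));
    its root is a consistent estimator of beta under the mean model."""
    return X.T @ (y - np.exp(X @ beta))

beta_hat = root(estimating_equation, x0=np.zeros(2)).x
print("estimated beta:", beta_hat)              # should be near (0.5, 0.8)
```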
