
Utilizing Players' Playtime Records for Churn Prediction: Mining Playtime Regularity

Added by Wanshan Yang
Publication date: 2019
Research language: English





In the free online game industry, churn prediction is an important research topic. Reducing a game's churn rate contributes significantly to its success. Churn prediction helps a game operator identify players who are likely to churn and keep them engaged via appropriate operational strategies, marketing strategies, and/or incentives. Playtime-related features are among the most widely used universal features in churn prediction models. In this paper, we develop new universal features for churn prediction for long-term players based on players' playtime records.
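The abstract does not spell out how playtime regularity is turned into features, so the following is only a minimal sketch under assumed inputs: a session table with `player_id`, `session_start`, and `playtime_minutes` columns, an entropy-based regularity proxy, and a gradient-boosting classifier. None of these choices are taken from the paper.

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier

# Hypothetical input: one row per play session with columns
# player_id, session_start (datetime), playtime_minutes.
def playtime_features(sessions: pd.DataFrame) -> pd.DataFrame:
    sessions = sessions.copy()
    sessions["day"] = sessions["session_start"].dt.floor("D")
    daily = (sessions.groupby(["player_id", "day"])["playtime_minutes"]
                     .sum().reset_index())

    def regularity(g: pd.DataFrame) -> pd.Series:
        p = g["playtime_minutes"].to_numpy(dtype=float)
        p = p / p.sum()  # distribution of playtime over active days
        entropy = -(p * np.log(p + 1e-12)).sum()  # higher = more evenly spread
        return pd.Series({
            "total_playtime": g["playtime_minutes"].sum(),
            "active_days": len(g),
            "mean_daily_playtime": g["playtime_minutes"].mean(),
            "playtime_entropy": entropy,  # one possible regularity proxy
        })

    return daily.groupby("player_id").apply(regularity).reset_index()

# Train a churn classifier on the universal features.
# `labels` is assumed to be a Series indexed by player_id with 1 = churned.
def train_churn_model(features: pd.DataFrame, labels: pd.Series):
    X = features.drop(columns=["player_id"])
    y = labels.loc[features["player_id"]].to_numpy()
    return GradientBoostingClassifier().fit(X, y)
```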



Related research

We introduce IP over Xylophone Players (IPoXP), a novel Internet protocol between two computers using xylophone-based Arduino interfaces. In our implementation, human operators are situated within the lowest layer of the network, transmitting data between computers by striking designated keys. We discuss how IPoXP inverts the traditional mode of human-computer interaction, with a computer using the human as an interface to communicate with another computer.
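The abstract describes the protocol only at a high level; a minimal illustrative encoder might map each byte to two key strikes on a 16-key xylophone. Both the key layout and the nibble encoding below are assumptions for the sketch, not the paper's actual scheme.

```python
# Hypothetical encoding for an IPoXP-style link: split each byte into two
# 4-bit nibbles and map each nibble to one of 16 xylophone keys that the
# human operator strikes in order.
KEYS = "C4 D4 E4 F4 G4 A4 B4 C5 D5 E5 F5 G5 A5 B5 C6 D6".split()

def encode(payload: bytes) -> list[str]:
    """Return the sequence of keys the operator should strike."""
    strikes = []
    for byte in payload:
        strikes.append(KEYS[byte >> 4])    # high nibble
        strikes.append(KEYS[byte & 0x0F])  # low nibble
    return strikes

def decode(strikes: list[str]) -> bytes:
    """Reassemble bytes from observed key strikes (no error handling)."""
    nibbles = [KEYS.index(k) for k in strikes]
    return bytes((hi << 4) | lo for hi, lo in zip(nibbles[::2], nibbles[1::2]))

assert decode(encode(b"ping")) == b"ping"
```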
Electronic Health Records (EHRs) are typically stored as time-stamped encounter records, and observing the temporal relationships between medical records is an integral part of interpreting the information. Hence, statistical analysis of EHRs requires that clinically informed time-interdependent analysis variables (TIAVs) be created. The formulation and creation of these variables are often iterative and require custom code. We describe a technique that uses sequences of time-referenced entities as the building blocks for TIAVs. These sequences represent different aspects of a patient's medical history in a contiguous fashion. To illustrate the principles and applications of the method, we provide examples using the Veterans Health Administration's research databases. In the first example, sequences representing medication exposure were used to assess patient selection criteria for a treatment comparative effectiveness study. In the second example, sequences of Charlson Comorbidity conditions and of inpatient or outpatient clinical settings were used to create variables that revealed data anomalies and trends. The third example demonstrates the creation of an analysis variable derived from the temporal dependency between medication exposure and comorbidity. Complex time-interdependent analysis variables can thus be created from the sequences with simple, reusable code, enabling unscripted or automated TIAV creation.
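As a rough illustration of the sequence idea, the sketch below collapses a hypothetical time-stamped record table (`patient_id`, `event_date`, `event_type`) into per-patient event sequences and derives one example TIAV from them; the column names and the specific variable are assumptions, not the authors' schema.

```python
import pandas as pd

# Hypothetical encounter table: patient_id, event_date, event_type
# (e.g. a medication class or a Charlson comorbidity condition).
def build_sequences(records: pd.DataFrame) -> pd.Series:
    """Collapse each patient's time-stamped records into one ordered
    sequence of events, the building block for TIAVs."""
    ordered = records.sort_values(["patient_id", "event_date"])
    return ordered.groupby("patient_id")["event_type"].agg(list)

def exposure_before_index(sequences: pd.Series, drug: str,
                          index_events: pd.Series) -> pd.Series:
    """Example TIAV: was `drug` observed before the patient's index event?
    `index_events` is assumed to map patient_id -> index event type."""
    def check(patient_id, seq):
        idx_event = index_events.get(patient_id)
        if idx_event not in seq:
            return False
        return drug in seq[: seq.index(idx_event)]
    return pd.Series({pid: check(pid, seq) for pid, seq in sequences.items()})
```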
We introduce AirWare, an in-air hand-gesture recognition system that uses the speaker and microphone already embedded in most electronic devices, together with embedded infrared proximity sensors. Gestures identified by AirWare are performed in the air above a touchscreen or a mobile phone. AirWare uses convolutional neural networks to classify a large vocabulary of hand gestures from multi-modal audio Doppler signatures and infrared (IR) sensor information. Unlike systems that rely on high-frequency Doppler radars or depth cameras to identify in-air gestures, AirWare does not require any external sensors. In our analysis, we use openly available APIs to interface with the Samsung Galaxy S5 audio and proximity sensors for data collection. We find that AirWare is not reliable enough for a deployable interaction system when classifying a set of 21 gestures, with an average true positive rate of only 50.5% per gesture. To improve performance, we train AirWare to identify subsets of the 21-gesture vocabulary based on possible usage scenarios. We find that AirWare can identify three gesture sets with an average true positive rate greater than 80% using 4-7 gestures per set, which together comprise a vocabulary of 16 unique in-air gestures.
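The abstract only names the classifier family, so here is a small two-branch model in PyTorch as one plausible shape for fusing a Doppler spectrogram with IR proximity features; the layer sizes, input dimensions, and fusion strategy are assumptions, not AirWare's actual architecture.

```python
import torch
import torch.nn as nn

# Illustrative two-branch classifier (not the paper's architecture):
# a small CNN over the audio Doppler spectrogram fused with a dense
# branch over summary features from the IR proximity sensor.
class GestureNet(nn.Module):
    def __init__(self, n_classes: int, ir_dim: int = 8):
        super().__init__()
        self.audio = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4), nn.Flatten(),  # -> 32 * 4 * 4 features
        )
        self.ir = nn.Sequential(nn.Linear(ir_dim, 32), nn.ReLU())
        self.head = nn.Linear(32 * 4 * 4 + 32, n_classes)

    def forward(self, spectrogram, ir_features):
        fused = torch.cat([self.audio(spectrogram), self.ir(ir_features)], dim=1)
        return self.head(fused)

# Example: batch of 4 spectrograms (1 x 64 x 64) and IR feature vectors.
model = GestureNet(n_classes=21)
logits = model(torch.randn(4, 1, 64, 64), torch.randn(4, 8))
```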
This paper presents a cognitive-behavioral driver mood repair platform in intelligent transportation cyber-physical systems (IT-CPS) for road safety. In particular, we propose a driving safety platform for distracted drivers, named drive safe, in IT-CPS. The proposed platform recognizes drivers' distracting activities as well as their emotions for mood repair. Further, we develop a prototype of the proposed drive safe platform to establish a proof of concept (PoC) for road safety in IT-CPS. In the developed platform, we employ five AI and statistics-based models for cognitive-behavioral mining of the vehicle driver to ensure safe driving. Specifically, a capsule network (CN), maximum likelihood (ML) estimation, a convolutional neural network (CNN), the Apriori algorithm, and a Bayesian network (BN) are deployed for driver activity recognition, environmental feature extraction, mood recognition, sequential pattern mining, and content recommendation for affective mood repair of the driver, respectively. In addition, we develop a communication module to interact with the systems in IT-CPS asynchronously. Thus, the developed drive safe PoC can guide vehicle drivers when they are distracted from driving due to cognitive-behavioral factors. Finally, we performed a qualitative evaluation to measure the usability and effectiveness of the developed platform. The P-value in the ANOVA test is 0.0041 (i.e., < 0.05), and the confidence interval analysis shows significant gains, with a prevalence value of around 0.93 at a 95% confidence level. These statistical results indicate high reliability in terms of drivers' safety and mental state.
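To make the division of labor among the five models concrete, here is a thin orchestration sketch; the interfaces, data flow between components, and class names are assumptions for illustration, not the authors' implementation.

```python
from dataclasses import dataclass
from typing import Protocol

class Model(Protocol):
    def predict(self, data): ...

# Illustrative wiring of the five components named in the abstract.
@dataclass
class DriveSafePipeline:
    activity_recognizer: Model   # capsule network: what is the driver doing?
    env_extractor: Model         # ML estimation: environmental features
    mood_recognizer: Model       # CNN: driver's emotional state
    pattern_miner: Model         # Apriori: frequent activity/mood patterns
    recommender: Model           # Bayesian network: mood-repair content

    def step(self, cabin_frame, road_frame):
        activity = self.activity_recognizer.predict(cabin_frame)
        env = self.env_extractor.predict(road_frame)
        mood = self.mood_recognizer.predict(cabin_frame)
        patterns = self.pattern_miner.predict((activity, mood))
        return self.recommender.predict((activity, env, mood, patterns))
```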
This paper presents an integration of a game system and the art therapy concept for promoting the mental well-being of video game players. In the proposed system, the player plays an Angry-Birds-like game whose levels are generated from images they draw. Upon finishing a level, the player also receives positive feedback (praising words) about their drawing and the generated level from an Art Therapy AI. The proposed system is composed of three major parts: (1) a drawing recognizer that identifies what object the player has drawn (Sketcher), (2) a level generator that converts the drawing into a pixel image and then into a set of blocks representing a game level (PCG AI), and (3) the Art Therapy AI, which encourages the player and improves their emotional state. This paper gives an overview of the system and explains how its major components function.
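Since the abstract lays out a clear three-stage pipeline, the stub below shows how the pieces could fit together; the class and method names are placeholders for the sketch, not the authors' code.

```python
# Illustrative glue code for the three components described in the abstract
# (Sketcher, PCG AI, Art Therapy AI). All names are placeholders.
class Sketcher:
    def recognize(self, drawing) -> str:
        """Return a label for the object drawn by the player."""
        raise NotImplementedError

class PCGAI:
    def generate_level(self, drawing) -> list[list[int]]:
        """Rasterize the drawing and convert it into a block layout."""
        raise NotImplementedError

class ArtTherapyAI:
    def praise(self, label: str) -> str:
        """Return encouraging feedback about the recognized drawing."""
        raise NotImplementedError

def play_round(drawing, sketcher: Sketcher, pcg: PCGAI, therapist: ArtTherapyAI):
    label = sketcher.recognize(drawing)       # what did the player draw?
    level = pcg.generate_level(drawing)       # turn the drawing into a level
    feedback = therapist.praise(label)        # positive feedback for the player
    return level, feedback
```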