Disaster monitoring is challenging due to the lack of infrastructure in the monitored areas. Based on the theory of Games With A Purpose (GWAP), this paper contributes a novel large-scale crowdsourced disaster monitoring system. The system analyzes satellite images tagged by anonymous players and then reports aggregated, evaluated monitoring results to its stakeholders. An algorithm based on directed-graph centralities is presented to address the core issues of malicious-user detection and disaster-level calculation. Our method can be readily applied in other human computation systems. Finally, open issues and possible solutions are discussed as future work.
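A minimal sketch of the centrality-based reliability idea described above, assuming (since the abstract does not specify) that agreement between players' tags defines the edges of the directed graph and that PageRank is the chosen centrality; the function names and the severity-tag encoding are hypothetical:

```python
# Hypothetical sketch: weight player tags by a directed-graph centrality.
# PageRank is an assumed choice; the paper does not name the exact centrality.
import networkx as nx

def reliability_scores(agreements):
    """agreements: iterable of (confirmed_player, confirming_player) pairs,
    meaning the first player's tag on some image was confirmed by the second's."""
    g = nx.DiGraph()
    for confirmed, confirmer in agreements:
        # An edge from confirmer to confirmed transfers trust to the confirmed player.
        g.add_edge(confirmer, confirmed)
    return nx.pagerank(g)  # centrality serves as a per-player reliability estimate

def disaster_level(tags, scores):
    """tags: dict player -> severity tag in [0, 1]; aggregate by reliability."""
    total = sum(scores.get(p, 0.0) for p in tags)
    if total == 0:
        return None  # no reliable tags for this region
    return sum(scores.get(p, 0.0) * s for p, s in tags.items()) / total
```

Under this reading, weighting each tag by its player's centrality both suppresses malicious users (whose tags are rarely confirmed by others) and yields a graded disaster level in a single pass.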
As a means of human-based computation, crowdsourcing has been widely used to annotate large-scale unlabeled datasets. One obvious challenge is how to aggregate the possibly noisy labels provided by a set of heterogeneous annotators. Another challenge stems from the difficulty of evaluating annotator reliability without knowing the ground truth; such reliability estimates can be used to build incentive mechanisms in crowdsourcing platforms. When each instance can be associated with many labels simultaneously, the problem becomes even harder because of its combinatorial nature. In this paper, we present new flexible Bayesian models and efficient inference algorithms for multi-label annotation aggregation that take both annotator reliability and label dependency into account. Extensive experiments on real-world datasets confirm that the proposed methods outperform competitive alternatives and that the model can recover the types of annotators with high accuracy.
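As a rough illustration of reliability-aware aggregation, here is a "one-coin" Dawid-Skene-style EM for a single binary label; the paper's actual Bayesian models additionally capture dependencies across the multi-label space, which this simplification omits:

```python
# Simplified EM for joint truth/reliability inference on one binary label.
import numpy as np

def aggregate_binary(votes, n_iter=20, prior=0.5):
    """votes: (n_annotators, n_items) 0/1 matrix; prior: P(true label = 1)."""
    n_ann, n_items = votes.shape
    r = np.full(n_ann, 0.7)  # initial belief in each annotator's reliability
    for _ in range(n_iter):
        # E-step: posterior probability that each item's true label is 1.
        like1 = np.prod(np.where(votes == 1, r[:, None], 1 - r[:, None]), axis=0)
        like0 = np.prod(np.where(votes == 0, r[:, None], 1 - r[:, None]), axis=0)
        p = prior * like1 / (prior * like1 + (1 - prior) * like0)
        # M-step: reliability = expected agreement with the inferred truth.
        r = (votes @ p + (1 - votes) @ (1 - p)) / n_items
    return p, r  # per-item label posteriors, per-annotator reliabilities
```

Extending this to the multi-label setting naively means one such model per label; the combinatorial difficulty the abstract mentions arises precisely because true labels are not independent across the label set.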
Various domain users increasingly leverage real-time social media data to gain rapid situational awareness. However, given the high noise in this deluge of data, effectively determining semantically relevant information can be difficult, a task further complicated by each end user's changing definition of relevancy across events. Most existing methods for short-text relevance classification fail to incorporate users' knowledge into the classification process, and those that do incorporate interactive user feedback focus on historical datasets. Consequently, classifiers cannot be interactively retrained in real time for specific events or user-dependent needs. This limits real-time situational awareness, because incorrectly classified streaming data cannot be corrected immediately, allowing important incoming data to be misclassified as well. We present a novel interactive learning framework that improves the classification process by letting the user iteratively correct the relevancy of tweets in real time, training the classification model on the fly for immediate predictive improvements. We computationally evaluate our classification model, which is adapted to learn at interactive rates, and our results show that it outperforms state-of-the-art machine learning models. In addition, we integrate our framework with the extended Social Media Analytics and Reporting Toolkit (SMART) 2.0 system, allowing our interactive learning framework to be used within a visual analytics system tailored for real-time situational awareness. To demonstrate our framework's effectiveness, we provide domain-expert feedback from first responders who used the extended SMART 2.0 system.
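One plausible way to realize such on-the-fly retraining is an incremental classifier over a stateless text vectorizer. This is a hedged sketch under that assumption, not the actual SMART 2.0 implementation; the label encoding and function names are illustrative:

```python
# Sketch: interactive relevance retraining via incremental updates.
from sklearn.feature_extraction.text import HashingVectorizer
from sklearn.linear_model import SGDClassifier

vec = HashingVectorizer(n_features=2**18)  # stateless, so it suits streams
clf = SGDClassifier(loss="log_loss")       # supports partial_fit updates
CLASSES = [0, 1]                           # assumed: 0 = irrelevant, 1 = relevant

def apply_user_corrections(tweets, user_labels):
    """Fold the user's relevancy corrections into the live model immediately."""
    clf.partial_fit(vec.transform(tweets), user_labels, classes=CLASSES)

def classify_stream(tweets):
    """Score incoming tweets with the current (continually updated) model."""
    return clf.predict(vec.transform(tweets))
```

The design choice worth noting is the hashing vectorizer: because it needs no fitted vocabulary, previously unseen event-specific terms can be folded in at interactive rates without rebuilding the feature space.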
Could social media data aid in disaster response and damage assessment? Countries face natural disasters of increasing frequency and intensity due to climate change, and during such events citizens turn to social media platforms for disaster-related communication and information. Social media improves situational awareness, facilitates dissemination of emergency information, enables early warning systems, and helps coordinate relief efforts. Additionally, the spatiotemporal distribution of disaster-related messages helps with real-time monitoring and assessment of the disaster itself. Here we present a multiscale analysis of Twitter activity before, during, and after Hurricane Sandy. We examine the online response of 50 metropolitan areas of the United States and find a strong relationship between proximity to Sandy's path and hurricane-related social media activity. We show that real and perceived threats, together with the physical disaster effects, are directly observable through the intensity and composition of Twitter's message stream. We demonstrate that per-capita Twitter activity strongly correlates with the per-capita economic damage inflicted by the hurricane. Our findings suggest that massive online social networks can be used for rapid assessment (nowcasting) of damage caused by a large-scale disaster.
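The per-capita correlation claim reduces to a short computation; the variable names below are hypothetical placeholders, not the study's dataset:

```python
# Sketch of the per-capita activity/damage correlation across metro areas.
from scipy.stats import pearsonr

def activity_damage_correlation(tweet_counts, damage_usd, populations):
    """Each argument: per-metro-area totals, aligned by area (e.g., 50 entries)."""
    tweets_pc = [t / p for t, p in zip(tweet_counts, populations)]
    damage_pc = [d / p for d, p in zip(damage_usd, populations)]
    r, p_value = pearsonr(tweets_pc, damage_pc)  # strength of the relationship
    return r, p_value
```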
Successful analysis of player skills in video games has important implications for enhancing player experience without undermining continuous skill development. Player skill analysis becomes even more intriguing in team-based video games, because such studies can help discover useful factors for effective team formation. In this paper, we consider the problem of skill decomposition in MOBA (Multiplayer Online Battle Arena) games, with the goal of understanding which player skill factors are essential to the outcome of a match. To understand the construct of MOBA player skills, we utilize various skill-based predictive models to decompose player skills into interpretable components, whose impacts are assessed in statistical terms. We apply this analysis approach to two widely known MOBAs, League of Legends (LoL) and Defense of the Ancients 2 (DOTA2). We find that the base skills of in-game avatars, the base skills of players, and players' champion-specific skills are three prominent components influencing LoL match outcomes, whereas DOTA2 outcomes are mainly affected by avatars' base skills and much less by the other two.
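A hedged sketch of one possible skill-decomposition setup: fit a match-outcome model on features grouped into the three skill components and compare the groups' aggregate coefficient magnitudes. The feature names are illustrative, and a rigorous analysis would also test the contributions statistically, as the abstract indicates:

```python
# Sketch: comparing grouped skill components via a match-outcome model.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical feature grouping into the three skill components.
GROUPS = {
    "avatar_base":    ["champion_winrate", "champion_pickrate"],
    "player_base":    ["player_overall_winrate", "games_played"],
    "champ_specific": ["player_champion_winrate", "player_champion_games"],
}

def group_contributions(X, y, columns):
    """X: (n_matches, n_features) with standardized columns, so coefficient
    magnitudes are comparable; y: 0/1 match outcomes; columns: feature names."""
    model = LogisticRegression(max_iter=1000).fit(X, y)
    coef = dict(zip(columns, model.coef_[0]))
    # Sum absolute coefficients within each group as a rough importance proxy.
    return {g: float(np.sum(np.abs([coef[c] for c in cols])))
            for g, cols in GROUPS.items()}
```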
To minimize the enormous havoc caused by disasters, continuous environmental monitoring is required. We therefore propose a novel energy management protocol for energy-harvesting wireless sensor networks (EH-WSNs), named the adaptive sensor node management protocol (ASMP). The protocol enables system components to systematically throttle their performance to conserve energy; through it, sensor nodes autonomously activate additional energy conservation algorithms. ASMP embeds three sampling algorithms. For optimized environment sampling, we propose the adaptive sampling algorithm for monitoring (ASA-m). ASA-m estimates the expected time until a meaningful change occurs, where a meaningful change is defined as the distance between two target data values required by the monitoring QoS; ASA-m therefore gathers only the data the system demands. The continuous adaptive sampling algorithm (CASA) addresses the energy drain that persists even under ASA-m: when the monitored environment exhibits a linear trend, the sensor node suspends sampling and the server generates predicted data at the estimated time slots. To guarantee self-sustainability, ASMP uses the recoverable adaptive sampling algorithm (RASA), which keeps consumed energy below harvested energy by substituting predicted data for measurements, allowing the sensor node's energy store to recharge. Through these methods, ASMP achieves both energy conservation and service quality.
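A minimal sketch of the ASA-m idea as described: estimate, from the recent rate of change, how long until the signal moves by a meaningful amount (delta), and schedule the next sample accordingly. The bounds and parameter names are assumptions, not the paper's specification:

```python
# Sketch: adaptive sampling delay based on the expected time to a
# meaningful change of size `delta` in the monitored signal.
def next_sampling_delay(t_prev, v_prev, t_now, v_now, delta,
                        min_delay=1.0, max_delay=600.0):
    """Return seconds to wait before the next sample (all times in seconds)."""
    rate = abs(v_now - v_prev) / max(t_now - t_prev, 1e-9)
    if rate == 0.0:
        return max_delay              # signal is flat: sample as rarely as allowed
    expected = delta / rate           # time for the signal to move by delta
    return min(max(expected, min_delay), max_delay)
```

In the same spirit, CASA would suspend sampling entirely while a linear fit predicts the signal within the QoS bound, and RASA would extend that suspension until the harvested energy surplus restores the node's reserve.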