
Don't cross that stop line: Characterizing Traffic Violations in Metropolitan Cities

 Added by Shashank Srikanth
 Publication date 2019
Research language: English





In modern metropolitan cities, ensuring safe roads is of paramount importance. Automated e-challan (electronic traffic-violation receipt) systems are now being deployed across cities to record traffic violations and issue fines. In the present study, we analyze the automated e-challan system of Ahmedabad (Gujarat, India) to characterize user behaviour and violation types, and to find spatial and temporal patterns in the data. We describe a method for collecting e-challan data from the e-challan portal of the Ahmedabad traffic police and build a dataset of over 3 million e-challans. We first analyze the dataset to characterize user behaviour with respect to repeat offenses and fine payment, and show that many users repeat their traffic violations frequently and are less likely to pay fines of higher value. Next, we analyze the data from spatial and temporal perspectives and identify several spatio-temporal patterns: the number of e-challans issued changes drastically on festival days, and a few hotspots in the city show a high intensity of traffic violations. Finally, we propose a set of five features to model recidivism in traffic violations and train multiple classifiers on our dataset to evaluate the effectiveness of the proposed features. The proposed approach achieves 95% accuracy on the dataset.
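As a rough illustration of the final step, the sketch below trains a classifier on per-offender features, in the spirit of the recidivism modelling described above. The five feature names, the input file, and the classifier choice are illustrative placeholders: the abstract does not list the paper's actual features or models.

```python
# Hypothetical sketch of the recidivism-classification step; feature names,
# input file, and classifier are illustrative, not the paper's actual setup.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

df = pd.read_csv("echallan_features.csv")  # hypothetical per-offender table
features = ["num_prior_challans", "mean_fine_amount", "payment_rate",
            "days_since_last_violation", "hotspot_violation_ratio"]

X_train, X_test, y_train, y_test = train_test_split(
    df[features], df["is_repeat_offender"], test_size=0.2, random_state=42)

clf = RandomForestClassifier(n_estimators=100, random_state=42)
clf.fit(X_train, y_train)
print(f"Accuracy: {accuracy_score(y_test, clf.predict(X_test)):.2%}")
```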




Read More

Most current approaches to characterize and detect hate speech focus on the content posted in Online Social Networks (OSNs). They face difficulties in collecting and annotating hateful speech owing to the incompleteness and noisiness of OSN text and the subjectivity of hate speech. These limitations are often mitigated with constraints that oversimplify the problem, such as considering only tweets containing hate-related words. In this work we partially address these issues by shifting the focus towards users. We develop and employ a robust methodology to collect and annotate hateful users that does not depend directly on a lexicon, and in which users are annotated based on their entire profile. This results in a sample of Twitter's retweet graph containing 100,386 users, of which 4,972 were annotated. We also collect the users who were banned in the three months that followed the data collection. We show that hateful users differ from normal ones in terms of their activity patterns, word usage, and network structure. We obtain similar results when comparing the neighbors of hateful vs. normal users, and suspended vs. active users, increasing the robustness of our analysis. We observe that hateful users are densely connected, and thus formulate hate speech detection as a task of semi-supervised learning over a graph, exploiting the network of connections on Twitter. We find that a node embedding algorithm, which exploits the graph structure, outperforms content-based approaches for the detection of both hateful (95% AUC vs. 88% AUC) and suspended users (93% AUC vs. 88% AUC). Altogether, we present a user-centric view of hate speech, paving the way for better detection and understanding of this relevant and challenging issue.
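A minimal sketch of the graph-based detection idea above, using a node2vec-style embedding (via the third-party node2vec package) as a stand-in for the paper's unnamed node embedding algorithm; the edge-list file and toy labels are assumptions.

```python
# Illustrative sketch (not the paper's code): embed users of a retweet graph
# and train a classifier on the annotated subset.
import networkx as nx
from node2vec import Node2Vec          # third-party: pip install node2vec
from sklearn.linear_model import LogisticRegression

G = nx.read_edgelist("retweet_graph.edgelist")  # hypothetical retweet graph
labels = {"user_a": 1, "user_b": 0}             # hypothetical: 1 = hateful

# Learn structural embeddings via biased random walks over the graph.
embedder = Node2Vec(G, dimensions=64, walk_length=30, num_walks=10, workers=2)
model = embedder.fit(window=5, min_count=1)

annotated = [u for u in G.nodes if u in labels]
X = [model.wv[str(u)] for u in annotated]
y = [labels[u] for u in annotated]
clf = LogisticRegression(max_iter=1000).fit(X, y)  # score with AUC in practice
```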
Social media provides many opportunities to monitor and evaluate political phenomena such as referendums and elections. In this study, we propose a set of approaches to analyze long-running political events on social media with a real-world experiment: the debate about Brexit, i.e., the process through which the United Kingdom activated the option of leaving the European Union. We address the following research questions: Could Twitter-based stance classification be used to demonstrate public stance with respect to political events? What is the most efficient and comprehensive approach to measuring the impact of politicians on social media? Which of the polarized sides of the debate is more responsive to politician messages and the main issues of the Brexit process? What is the share of bot accounts in the Brexit discussion and which side are they for? By combining the user stance classification, topic discovery, sentiment analysis, and bot detection, we show that it is possible to obtain useful insights about political phenomena from social media data. We are able to detect relevant topics in the discussions, such as the demand for a new referendum, and to understand the position of social media users with respect to the different topics in the debate. Our comparative and temporal analysis of political accounts can detect the critical periods of the Brexit process and the impact they have on the debate.
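For the stance-classification component mentioned above, a bag-of-words baseline such as the following could serve as a starting point; the toy tweets, labels, and model choice are illustrative, not the paper's setup.

```python
# Minimal stance-classification baseline (not the paper's model):
# TF-IDF features with logistic regression on toy data.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

tweets = ["We must leave the EU now", "Brexit is a mistake, we should remain"]
stances = ["leave", "remain"]

stance_clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
stance_clf.fit(tweets, stances)
print(stance_clf.predict(["Give the people a second referendum"]))
```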
Numerous urban indicators scale with population in a power law across cities, but whether this cross-sectional scaling law also applies to the temporal growth of individual cities is unclear. Here we first find two paradoxical scaling relationships: urban built-up area scales sub-linearly with population across cities, but super-linearly with population over time in most individual cities, because urban land expands faster than population grows. Different cities have diverse temporal scaling exponents, and one city even exhibits opposite temporal scaling regimes during two periods, strongly supporting the absence of a single temporal scaling law and further illustrating the failure of cross-sectional urban scaling in predicting the temporal growth of cities. We propose a conceptual model that clarifies the essential differences, and also the connections, between the cross-sectional scaling law and the temporal trajectories of cities. Our model shows that cities gain extra growth of built-up area over time beyond the growth predicted by the cross-sectional scaling law. Disparities in extra growth among different-sized cities change the cross-sectional scaling exponent. Further analyses of GDP and other indicators confirm the contradiction between cross-sectional and temporal scaling relationships and the validity of the conceptual model. Our findings may open a new avenue towards the science of cities.
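The contrast drawn above can be stated compactly as two power laws; the notation below is ours, not the paper's:

```latex
% Cross-sectional scaling across cities j at a fixed time:
% built-up area A scales sub-linearly with population P.
A_j = a \, P_j^{\beta}, \qquad \beta < 1
% Temporal scaling within an individual city i:
% area grows super-linearly because land expands faster than population.
A_i(t) = a_i \, P_i(t)^{\gamma_i}, \qquad \gamma_i > 1 \ \text{(for most cities)}
```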
An infodemic is an emerging phenomenon caused by an overabundance of information online. This proliferation of information makes it difficult for the public to distinguish trustworthy news and credible information from untrustworthy sites and non-credible sources. The perils of an infodemic became apparent with the outbreak of the COVID-19 pandemic, during which bots (i.e., automated accounts controlled by a set of algorithms) were suspected of spreading the infodemic. Although previous research has revealed that bots played a central role in spreading misinformation during major political events, how bots behaved during the infodemic is unclear. In this paper, we examined the roles of bots in the COVID-19 infodemic and in the diffusion of non-credible information, such as 5G and Bill Gates conspiracy theories and content related to Trump and WHO, by analyzing retweet networks and retweeted items. We show the segregated topology of these retweet networks, which indicates that right-wing self-media accounts and conspiracy theorists may drive this opinion cleavage, while malicious bots might amplify the diffusion of non-credible information. Although human users may exert a larger baseline influence on information diffusion than bots, the effects of bots are non-negligible in an infodemic situation.
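One simple way to probe the "segregated topology" finding above is to partition the retweet network into communities and measure modularity; this sketch uses networkx with a hypothetical edge-list file, since the paper's actual pipeline is not specified in the abstract.

```python
# Illustrative check (not the paper's code) of whether a retweet network
# is segregated: detect communities and measure modularity.
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities, modularity

G = nx.read_edgelist("covid_retweets.edgelist")  # hypothetical retweet network
communities = greedy_modularity_communities(G)
q = modularity(G, communities)
print(f"{len(communities)} communities, modularity = {q:.2f}")
# Modularity close to 1 would indicate the cleaved, echo-chamber-like
# topology described above.
```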
The onset of the Coronavirus disease 2019 (COVID-19) pandemic instigated a global infodemic that has brought unprecedented challenges for society as a whole. During this time, a number of manual fact-checking initiatives have emerged to alleviate the spread of dis/mis-information. This study examines COVID-19 debunks published in multiple languages by different fact-checking organisations, sometimes several months apart, even though the claim had already been fact-checked. The spatiotemporal analysis reveals that similar or nearly duplicate false COVID-19 narratives have been spreading in multifarious modalities on various social media platforms in different countries. We also find that misinformation involving general medical advice has spread across multiple countries and hence has the highest proportion of false COVID-19 narratives that keep being debunked. Furthermore, since manual fact-checking is an onerous task in itself, recurrently debunking similar claims wastes resources. To this end, we propose including a multilingual debunk search in the fact-checking pipeline.
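A minimal sketch of the multilingual debunk search proposed above, assuming the sentence-transformers library and a multilingual encoder; the model name and toy claims are illustrative.

```python
# Sketch: embed existing debunks with a multilingual sentence encoder and
# retrieve near-duplicates for an incoming claim. Illustrative only.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("paraphrase-multilingual-MiniLM-L12-v2")
debunks = [
    "5G towers do not spread the coronavirus.",   # English
    "Las torres 5G no propagan el coronavirus.",  # Spanish
]
new_claim = "COVID-19 is transmitted by 5G antennas"

scores = util.cos_sim(model.encode(new_claim), model.encode(debunks))[0]
best = int(scores.argmax())
print(f"Closest debunk (score {float(scores[best]):.2f}): {debunks[best]}")
# A high score suggests the claim was already fact-checked in another language.
```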
