
Making Public Safety Data Accessible in the Westside Atlanta Data Dashboard

Published by Katie O'Connell
Publication date: 2016
Research field: Informatics Engineering
Language: English
Author: Katie O'Connell





Individual neighborhoods within large cities can benefit from independent analysis of public data in the context of ongoing efforts to improve the community. Yet existing tools for public data analysis and visualization are often mismatched to community needs, for reasons including geographic granularity that does not correspond to community boundaries, siloed data sets, inaccurate assumptions about data literacy, and limited user input in design and implementation phases. In Atlanta this need is being addressed through a Data Dashboard developed under the auspices of the Westside Communities Alliance (WCA), a partnership between Georgia Tech and community stakeholders. In this paper we present an interactive analytic and visualization tool for public safety data within the WCA Data Dashboard. We describe a human-centered approach to understanding the needs of users and to building accessible mapping tools for visualization and analysis. The tools include a variety of overlays that allow users to spatially correlate features of the built environment, such as vacant properties, with criminal activity as well as with crime prevention efforts. We are in the final stages of developing the first version of the tool, with plans for a public release in the fall of 2016.
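To make the overlay idea concrete, here is a minimal sketch of one way to spatially correlate vacant properties with crime incidents using geopandas. The file names, column names, and 250 m buffer radius are hypothetical; this is an illustration of the general technique, not the dashboard's actual implementation.

```python
# Hypothetical sketch of a spatial overlay: counting crime incidents
# near vacant properties. File paths, layer contents, and the buffer
# radius are invented; this is not the WCA Data Dashboard's code.
import geopandas as gpd

# Load point layers and project to a metric CRS so buffers are in meters.
vacants = gpd.read_file("vacant_properties.geojson").to_crs(epsg=3857)
crimes = gpd.read_file("crime_incidents.geojson").to_crs(epsg=3857)

# Buffer each vacant parcel by 250 m and spatially join the crimes that
# fall inside, yielding a per-parcel incident count.
buffered = vacants.copy()
buffered["geometry"] = buffered.geometry.buffer(250)
joined = gpd.sjoin(crimes, buffered, how="inner", predicate="within")
counts = joined.groupby("index_right").size()

vacants["crime_count"] = counts.reindex(vacants.index, fill_value=0)
print(vacants["crime_count"].describe())
```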


Read also

Despite an increasing reliance on fully-automated algorithmic decision-making in our day-to-day lives, human beings still make highly consequential decisions. As frequently seen in business, healthcare, and public policy, recommendations produced by algorithms are provided to human decision-makers to guide their decisions. While there exists a fast-growing literature evaluating the bias and fairness of such algorithmic recommendations, an overlooked question is whether they help humans make better decisions. We develop a statistical methodology for experimentally evaluating the causal impacts of algorithmic recommendations on human decisions. We also show how to examine whether algorithmic recommendations improve the fairness of human decisions and derive the optimal decision rules under various settings. We apply the proposed methodology to preliminary data from the first-ever randomized controlled trial that evaluates the pretrial Public Safety Assessment (PSA) in the criminal justice system. A goal of the PSA is to help judges decide which arrested individuals should be released. On the basis of the preliminary data available, we find that providing the PSA to the judge has little overall impact on the judge's decisions and subsequent arrestee behavior. However, our analysis yields some potentially suggestive evidence that the PSA may help avoid unnecessarily harsh decisions for female arrestees regardless of their risk levels, while it encourages the judge to make stricter decisions for male arrestees who are deemed to be risky. In terms of fairness, the PSA appears to increase the gender bias against males while having little effect on any existing racial differences in judges' decisions. Finally, we find that the PSA's recommendations might be unnecessarily severe unless the cost of a new crime is sufficiently high.
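As a minimal sketch of the experimental comparison this abstract describes (estimating the average intention-to-treat effect of providing the PSA on a binary release decision under randomization), the snippet below uses a hypothetical data file and invented column names; it is not the paper's actual methodology.

```python
# Minimal sketch of estimating the average effect of providing the PSA
# on a binary judicial decision (e.g., release). The data file and
# column names are hypothetical; this is not the paper's code.
import numpy as np
import pandas as pd
from scipy import stats

df = pd.read_csv("psa_trial.csv")  # one row per case
treated = df[df["psa_provided"] == 1]["released"]
control = df[df["psa_provided"] == 0]["released"]

# Under randomization, the difference in means is an unbiased
# intention-to-treat (ITT) estimate.
itt = treated.mean() - control.mean()
se = np.sqrt(treated.var(ddof=1) / len(treated)
             + control.var(ddof=1) / len(control))
z = itt / se
print(f"ITT = {itt:.3f}, SE = {se:.3f}, p = {2 * stats.norm.sf(abs(z)):.3f}")
```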
This article surveys the use of algorithmic systems to support decision-making in the public sector. Governments adopt, procure, and use algorithmic systems to support their functions within several contexts -- including criminal justice, education, and benefits provision -- with important consequences for accountability, privacy, social inequity, and public participation in decision-making. We explore the social implications of municipal algorithmic systems across a variety of stages, including problem formulation, technology acquisition, deployment, and evaluation. We highlight several open questions that require further empirical research.
This paper introduces the concept of Open Source Intelligence (OSINT) as an important application in intelligent profiling of individuals. With a variety of tools available, a significant amount of data can be obtained about an individual by analyzing his or her internet presence, but all of this comes at the cost of low relevance. To increase the relevance score in profiling, PeopleXploit is introduced. PeopleXploit is a hybrid tool that helps in collecting publicly available information that is reliable and relevant to the given input. The tool is used to track and trace the given target through digital footprints such as name, email, phone number, and user IDs; it then scans and searches for other associated data in publicly available records on the internet and creates a summary report on the target. PeopleXploit profiles a person using authorship analysis and finds the best matching guess. The type of analysis performed (professional/matrimonial/criminal entity) varies with the requirements of the user.
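The matching step described above can be pictured as weighted scoring over identifier matches. The sketch below is a generic, hypothetical illustration of that idea, not PeopleXploit's actual code; the weights and field names are invented.

```python
# Generic, hypothetical sketch of relevance scoring: score candidate
# public records by which of the target's identifiers they match,
# weighting stronger identifiers more heavily. Not PeopleXploit's code.
WEIGHTS = {"email": 0.4, "phone": 0.3, "user_id": 0.2, "name": 0.1}

def relevance(target: dict, candidate: dict) -> float:
    """Return a 0..1 relevance score for a candidate public record."""
    score = 0.0
    for field, weight in WEIGHTS.items():
        if target.get(field) and target.get(field) == candidate.get(field):
            score += weight
    return score

target = {"name": "J. Doe", "email": "jdoe@example.com"}
records = [
    {"name": "J. Doe", "email": "jdoe@example.com", "phone": "555-0100"},
    {"name": "J. Doe", "email": "other@example.com"},
]
best = max(records, key=lambda r: relevance(target, r))
print(best, relevance(target, best))
```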
Measuring public opinion is a key focus during democratic elections, enabling candidates to gauge their popularity and alter their campaign strategies accordingly. Traditional survey polling remains the most popular estimation technique, despite its cost and time intensity, measurement errors, lack of real-time capabilities, and lagged representation of public opinion. In recent years, Twitter opinion mining has attempted to combat these issues. Despite achieving promising results, it has its own shortcomings, such as an unrepresentative sample population and a lack of long-term stability. This paper aims to merge data from both techniques using Bayesian data assimilation to arrive at a more accurate estimate of true public opinion for the Brexit referendum, and demonstrates the effectiveness of the proposed approach using Twitter opinion data and survey data from trusted pollsters. First, a possible time gap of 16 days between the two data sets is identified. This gap is then incorporated into the proposed assimilation architecture. The method was found to adequately incorporate information from both sources and to measure a strong upward trend in Leave support leading up to the Brexit referendum. The proposed technique provides useful estimates of true opinion, which is essential to future opinion measurement and forecasting research.
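A minimal sketch of the assimilation idea, assuming Gaussian noise on both sources: shift the Twitter series by the identified 16-day lag, then fuse the two series by inverse-variance (precision) weighting, a simple Bayesian combination of two noisy Gaussian estimates. All data and noise levels below are invented for illustration.

```python
# Minimal sketch of Bayesian assimilation of two noisy opinion series,
# assuming Gaussian errors. The 16-day shift mirrors the lag identified
# in the paper; every number here is synthetic.
import numpy as np

rng = np.random.default_rng(0)
days = 60
true_support = np.linspace(0.48, 0.52, days)        # hypothetical trend

poll = true_support + rng.normal(0, 0.02, days)     # noisy survey series
twitter = true_support + rng.normal(0, 0.04, days)  # noisier Twitter series

# Align the Twitter series to the polls by the estimated 16-day gap.
lag = 16
twitter_aligned = np.roll(twitter, lag)

# Inverse-variance weighting: the posterior mean of two Gaussian
# estimates weights each source by its precision.
w_poll, w_twitter = 1 / 0.02**2, 1 / 0.04**2
fused = (w_poll * poll + w_twitter * twitter_aligned) / (w_poll + w_twitter)

# Skip the first `lag` days, which np.roll wraps around meaninglessly.
print("RMSE poll :", np.sqrt(np.mean((poll[lag:] - true_support[lag:])**2)))
print("RMSE fused:", np.sqrt(np.mean((fused[lag:] - true_support[lag:])**2)))
```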
Many powerful computing technologies rely on implicit and explicit data contributions from the public. This dependency suggests a potential source of leverage for the public in its relationship with technology companies: by reducing, stopping, redirecting, or otherwise manipulating data contributions, the public can reduce the effectiveness of many lucrative technologies. In this paper, we synthesize emerging research that seeks to better understand and help people act on this data leverage. Drawing on prior work in areas including machine learning, human-computer interaction, and fairness and accountability in computing, we present a framework for understanding data leverage that highlights new opportunities to change technology company behavior related to privacy, economic inequality, content moderation, and other areas of societal concern. Our framework also points towards ways that policymakers can bolster data leverage as a means of changing the balance of power between the public and tech companies.