
regvis.net -- A Visual Bibliography of Regulatory Visualization

Published by Zhibin Niu
Publication date: 2020
Research field: Informatics Engineering
Paper language: English





Information visualization and visual analytics technology has attracted significant attention from the financial regulation community. In this research, we present regvis.net, a visual survey of regulatory visualization that allows researchers from both the computing and financial communities to review their literature of interest. We have collected and manually tagged more than 80 regulation visualization related publications. To the best of our knowledge, this is the first publication set tailored for regulatory visualization. We have provided a webpage (http://regvis.net) for interactive searches and filtering. Each publication is represented by a thumbnail of the representative system interface or key visualization chart, and users can conduct multi-condition screening explorations and fixed text searches.
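The abstract describes multi-condition screening and plain-text search over a manually tagged publication set. Purely as a hedged illustration of that kind of filtering, the sketch below shows one way it could be expressed; the Publication fields, example tags, and function names are assumptions for this sketch and are not taken from regvis.net itself.

```python
# Hypothetical sketch of multi-condition screening over tagged publications.
# The data model below is assumed for illustration, not the regvis.net schema.
from dataclasses import dataclass, field

@dataclass
class Publication:
    title: str
    year: int
    tags: set = field(default_factory=set)  # e.g. {"fraud detection", "network view"}
    thumbnail: str = ""                      # representative interface/chart image

def screen(pubs, required_tags=None, text=None, year_range=None):
    """Keep only publications that satisfy every supplied condition."""
    results = []
    for p in pubs:
        if required_tags and not required_tags.issubset(p.tags):
            continue  # missing one of the requested tags
        if text and text.lower() not in p.title.lower():
            continue  # plain-text search over the title
        if year_range and not (year_range[0] <= p.year <= year_range[1]):
            continue  # outside the requested publication years
        results.append(p)
    return results
```

For example, `screen(pubs, required_tags={"risk monitoring"}, text="market")` would narrow the collection to tagged publications whose titles also mention the search term.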




Read also

Background: While it is possible to find many different visual representations of data values in visualizations, it is less common to see visual representations that include uncertainty, especially in visualizations intended for non-technical audiences. Objective: Our aim is to rigorously define and evaluate the novel use of visual entropy as a measure of shape that allows us to construct an ordered scale of glyphs for use in representing both uncertainty and value in 2D and 3D environments. Method: We use sample entropy as a numerical measure of visual entropy to construct a set of glyphs, using R and Blender, which vary in their complexity. Results: A Bradley-Terry analysis of a pairwise comparison of the glyphs shows that participants (n=19) ordered the glyphs as predicted by the visual entropy score (linear regression R^2 > 0.97, p < 0.001). We also evaluate whether the glyphs can effectively represent uncertainty using a signal detection method; participants (n=15) were able to search for glyphs representing uncertainty with high sensitivity and low error rates. Conclusion: Visual entropy is a novel cue for representing ordered data and provides a channel that allows the uncertainty of a measure to be presented alongside its mean value.
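The Method sentence names sample entropy as the numerical measure of visual entropy; the paper's actual glyph construction used R and Blender. As a minimal sketch of that measure only, here is SampEn(m, r) for a 1-D sequence in Python; the parameter defaults follow common practice and are not taken from the paper.

```python
import numpy as np

def sample_entropy(series, m=2, r=0.2):
    """Sample entropy SampEn(m, r) of a 1-D series: -ln(A / B), where B counts
    pairs of length-m templates within tolerance r (Chebyshev distance) and A
    counts the same for length m + 1."""
    x = np.asarray(series, dtype=float)
    n = len(x)
    tol = r * np.std(x)  # tolerance scaled by the series' standard deviation

    def count_matches(length):
        # All overlapping templates of the given length.
        templates = np.array([x[i:i + length] for i in range(n - length + 1)])
        count = 0
        for i in range(len(templates)):
            # Chebyshev distance to every later template (self-matches excluded).
            dist = np.max(np.abs(templates[i + 1:] - templates[i]), axis=1)
            count += np.sum(dist <= tol)
        return count

    b = count_matches(m)
    a = count_matches(m + 1)
    return -np.log(a / b) if a > 0 and b > 0 else float("inf")
```

The ratio A/B measures how often templates that were close at length m stay close at length m + 1, so a lower SampEn value corresponds to a more regular, lower-entropy shape.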
Recently, a simple military exercise on the Internet was perceived as the beginning of a new civil war in the US. Social media aggregate people around common interests, eliciting a collective framing of narratives and worldviews. However, the wide availability of user-provided content and the direct path between producers and consumers of information often foster confusion about causation, encouraging mistrust, rumors, and even conspiracy thinking. To counter such a trend, attempts to debunk are often undertaken. Here, we examine the effectiveness of debunking through a quantitative analysis of 54 million users over a time span of five years (Jan 2010 to Dec 2014). In particular, we compare how users interact with proven (scientific) and unsubstantiated (conspiracy-like) information on Facebook in the US. Our findings confirm the existence of echo chambers where users interact primarily with either conspiracy-like or scientific pages. Both groups interact similarly with the information within their echo chamber. We examine 47,780 debunking posts and find that attempts at debunking are largely ineffective. For one, only a small fraction of usual consumers of unsubstantiated information interact with the posts. Furthermore, we show that those few are often the most committed conspiracy users and, rather than internalizing debunking information, they often react to it negatively. Indeed, after interacting with debunking posts, users retain, or even increase, their engagement within the conspiracy echo chamber.
While visualizations play a crucial role in gaining insights from data, generating useful visualizations from a complex dataset is far from an easy task. Besides understanding the functionality provided by existing visualization libraries, generating the desired visualization also requires reshaping and aggregating the underlying data as well as composing different visual elements to achieve the intended visual narrative. This paper aims to simplify visualization tasks by automatically synthesizing the required program from simple visual sketches provided by the user. Specifically, given an input data set and a visual sketch that demonstrates how to visualize a very small subset of this data, our technique automatically generates a program that can be used to visualize the entire data set. Automating visualization poses several challenges. First, because many visualization tasks require data wrangling in addition to generating plots, we need to decompose the end-to-end synthesis task into two separate sub-problems. Second, the intermediate specification that results from the decomposition is necessarily imprecise, which makes the data wrangling task particularly challenging in our context. In this paper, we address these problems by developing a new compositional visualization-by-example technique that (a) decomposes the end-to-end task into two different synthesis problems over different DSLs and (b) leverages bi-directional program analysis to deal with the complexity that arises from having an imprecise intermediate specification. We implemented our visualization-by-example algorithm and evaluated it on 83 visualization tasks collected from online forums and tutorials. Viser can solve 84% of these benchmarks within a 600-second time limit, and, for those tasks that can be solved, the desired visualization is among the top-5 generated by Viser in 70% of the cases.
Xin Qian, Ryan A. Rossi, Fan Du (2021)
Visualization recommendation work has focused solely on scoring visualizations based on the underlying dataset and not the actual user and their past visualization feedback. These systems recommend the same visualizations for every user, even though the underlying user interests, intent, and visualization preferences are likely to be fundamentally different, yet vitally important. In this work, we formally introduce the problem of personalized visualization recommendation and present a generic learning framework for solving it. In particular, we focus on recommending visualizations personalized for each individual user based on their past visualization interactions (e.g., viewed, clicked, manually created) along with the data from those visualizations. More importantly, the framework can learn from visualizations relevant to other users, even if the visualizations are generated from completely different datasets. Experiments demonstrate the effectiveness of the approach as it leads to higher-quality visualization recommendations tailored to the specific user intent and preferences. To support research on this new problem, we release our user-centric visualization corpus consisting of 17.4k users exploring 94k datasets with 2.3 million attributes and 32k user-generated visualizations.
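As a toy illustration only (not the learning framework introduced in the paper), personalization of this kind can be caricatured as building a preference vector from the visualizations a user previously interacted with and ranking new candidates against it. The embedding construction is left abstract below, and every name is hypothetical.

```python
# Toy personalized ranking sketch: visualization embeddings are assumed to
# already exist (e.g. features of chart type, encodings, attribute statistics);
# how they are learned is exactly what a real framework would have to specify.
import numpy as np

def user_profile(interacted_embeddings):
    """Average the embeddings of visualizations the user viewed, clicked, or created."""
    return np.mean(np.asarray(interacted_embeddings, dtype=float), axis=0)

def rank_candidates(profile, candidate_embeddings):
    """Rank candidate visualizations by cosine similarity to the user's profile."""
    profile = profile / np.linalg.norm(profile)
    scored = []
    for idx, emb in enumerate(np.asarray(candidate_embeddings, dtype=float)):
        scored.append((idx, float(emb @ profile / np.linalg.norm(emb))))
    return sorted(scored, key=lambda s: s[1], reverse=True)
```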
The introduction of robots into our society will also introduce new concerns about personal privacy. In order to study these concerns, we must do human-subject experiments that involve measuring privacy-relevant constructs. This paper presents a taxonomy of privacy constructs based on a review of the privacy literature. Future work in operationalizing privacy constructs for HRI studies is also discussed.