
ClinicalVis: Supporting Clinical Task-Focused Design Evaluation

Added by Marzyeh Ghassemi
Publication date: 2018
Language: English





Making decisions about which clinical tasks to prepare for is multi-factored, and especially challenging in intensive care environments, where resources must be balanced with patient needs. Electronic health records (EHRs) are a rich data source, but they are task-agnostic and can be difficult to use as summarizations of patient needs for a specific task, such as "Could this patient need a ventilator tomorrow?" In this paper, we introduce ClinicalVis, an open-source EHR visualization-based prototype system for task-focused design evaluation of interactions between healthcare providers (HCPs) and EHRs. We situate ClinicalVis in a task-focused proof-of-concept design study targeting these interactions with real patient data. We conduct an empirical study with 14 HCPs and discuss our findings on usability, accuracy, preference, and confidence in treatment decisions. We also present the design implications our findings suggest for future EHR interfaces, for the presentation of clinical data for task-based planning, and for evaluating task-focused HCP/EHR interactions in practice.
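As a rough illustration of what reducing a task-agnostic record to a task-focused summary might look like, here is a minimal TypeScript sketch. The record shape, field names, and the list of ventilation-relevant signals are hypothetical assumptions, not taken from the ClinicalVis codebase.

```ts
// Hypothetical sketch: task-focused summarization of an EHR record.
// The VitalSign/PatientRecord shapes and the signal list below are
// illustrative assumptions, not ClinicalVis's actual schema.

interface VitalSign {
  name: string;       // e.g. "SpO2", "RespRate"
  timestamp: Date;
  value: number;
}

interface PatientRecord {
  id: string;
  vitals: VitalSign[];
}

// Signals a clinician might scan when asking
// "Could this patient need a ventilator tomorrow?"
const VENTILATION_SIGNALS = ["SpO2", "RespRate", "FiO2"];

// Reduce a task-agnostic record to the latest value of each
// task-relevant signal, ready to render in a focused view.
function summarizeForTask(record: PatientRecord): Map<string, VitalSign> {
  const latest = new Map<string, VitalSign>();
  for (const v of record.vitals) {
    if (!VENTILATION_SIGNALS.includes(v.name)) continue;
    const seen = latest.get(v.name);
    if (!seen || v.timestamp.getTime() > seen.timestamp.getTime()) {
      latest.set(v.name, v);
    }
  }
  return latest;
}
```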



Related research

Zehua Zeng, Phoebe Moh, Fan Du (2021)
Although we have seen a proliferation of algorithms for recommending visualizations, these algorithms are rarely compared with one another, making it difficult to ascertain which algorithm is best for a given visual analysis scenario. Though several formal frameworks have been proposed in response, we believe this issue persists because visualization recommendation algorithms are inadequately specified from an evaluation perspective. In this paper, we propose an evaluation-focused framework to contextualize and compare a broad range of visualization recommendation algorithms. We present the structure of our framework, where algorithms are specified using three components: (1) a graph representing the full space of possible visualization designs, (2) the method used to traverse the graph for potential candidates for recommendation, and (3) an oracle used to rank candidate designs. To demonstrate how our framework guides the formal comparison of algorithmic performance, we not only theoretically compare five existing representative recommendation algorithms, but also empirically compare four new algorithms generated based on our findings from the theoretical comparison. Our results show that these algorithms behave similarly in terms of user performance, highlighting the need for more rigorous formal comparisons of recommendation algorithms to further clarify their benefits in various analysis scenarios.
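The three-component specification lends itself to a compact interface. Below is a minimal TypeScript sketch of how a recommender could be assembled from a design graph, a traversal, and an oracle; the names and the breadth-first traversal choice are illustrative assumptions, not the paper's implementation.

```ts
// Illustrative sketch of the framework's three components:
// (1) a graph of candidate visualization designs, (2) a traversal
// that enumerates candidates, (3) an oracle that ranks them.

interface Design {
  id: string;
  neighbors: string[]; // edges to related designs in the graph
}

// (1) The design graph, keyed by design id.
type DesignGraph = Map<string, Design>;

// (2) Traversal: breadth-first enumeration of reachable candidates.
function* traverse(graph: DesignGraph, startId: string): Generator<Design> {
  const queue = [startId];
  const seen = new Set<string>();
  while (queue.length > 0) {
    const id = queue.shift()!;
    if (seen.has(id)) continue;
    seen.add(id);
    const node = graph.get(id);
    if (!node) continue;
    yield node;
    queue.push(...node.neighbors);
  }
}

// (3) Oracle: any scoring function over candidate designs.
type Oracle = (d: Design) => number;

// Compose the three components into a top-k recommender.
function recommend(
  graph: DesignGraph, startId: string, oracle: Oracle, k: number
): Design[] {
  return [...traverse(graph, startId)]
    .sort((a, b) => oracle(b) - oracle(a))
    .slice(0, k);
}
```

Framing algorithms this way makes them directly comparable: two recommenders differ only in which graph, traversal, or oracle they plug in.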
Low-quality results have been a long-standing problem on microtask crowdsourcing platforms, driving away requesters and justifying low wages for workers. To date, workers have been blamed for low-quality results: they are said to make as little effort as possible, not pay attention to detail, and lack expertise. In this paper, we hypothesize that requesters may also be responsible for low-quality work: they launch unclear task designs that confuse even earnest workers, under-specify edge cases, and neglect to include examples. We introduce prototype tasks, a crowdsourcing strategy requiring all new task designs to launch a small number of sample tasks. Workers attempt these tasks and leave feedback, enabling the requester to iterate on the design before publishing it. We report a field experiment in which tasks that underwent prototype task iteration produced higher-quality work than the original task designs. With this research, we suggest that a simple and rapid iteration cycle can improve crowd work, and we provide empirical evidence that requester quality directly impacts result quality.
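A minimal TypeScript sketch of the iterate-before-publish loop described above; the platform calls are stubs of my own invention, since no real crowdsourcing API is specified here.

```ts
// Hedged sketch of the prototype-task loop: launch a few sample
// tasks, collect worker feedback, revise, then publish at scale.

interface TaskDesign {
  instructions: string;
  examples: string[];
  revision: number;
}

interface Feedback {
  confusing: boolean;
  comment: string;
}

// Stub: in practice this posts n sample tasks and waits for workers.
function launchSamples(design: TaskDesign, n: number): Feedback[] {
  return Array.from({ length: n }, () => ({
    confusing: design.revision === 0, // simulate: first draft confuses
    comment: "please add an example",
  }));
}

// Stub: fold worker comments into a clearer task design.
function revise(design: TaskDesign, feedback: Feedback[]): TaskDesign {
  const notes = feedback.map(f => f.comment).filter(Boolean).join("; ");
  return {
    ...design,
    instructions: `${design.instructions} (clarified: ${notes})`,
    revision: design.revision + 1,
  };
}

function prototypeTaskLoop(initial: TaskDesign, maxIterations = 3): TaskDesign {
  let design = initial;
  for (let i = 0; i < maxIterations; i++) {
    const feedback = launchSamples(design, 5);
    const confusedShare =
      feedback.filter(f => f.confusing).length / feedback.length;
    if (confusedShare < 0.2) break; // clear enough to publish at scale
    design = revise(design, feedback);
  }
  return design; // ready for the full run
}
```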
The remote work ecosystem is transforming patterns of communication between teams and individuals located at a distance. In particular, the absence of certain subtle cues in current communication tools, which fall short of capturing them, may hinder an online meeting's outcome by negatively impacting attendees' overall experience and often making them feel disconnected. To partly address this, we developed an online platform, MeetCues, with the aim of supporting online communication during meetings. MeetCues is a companion platform for a commercial communication tool, with interactive and visual UI features that support back-channels of communication. It allows attendees to be more engaged during a meeting and to reflect in real time or post-meeting. We evaluated our platform in a diverse set of five real-world corporate meetings, and we found that people were not only more engaged and aware during their meetings, but also felt more connected. These findings suggest promise for the design of new communication tools, and reinforce the role of InfoVis in augmenting and enriching online meetings.
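One way such back-channels can feed real-time or post-meeting reflection is to bucket reaction events into a simple engagement timeline. The sketch below is an illustrative assumption, not the MeetCues implementation.

```ts
// Illustrative sketch: aggregate back-channel reactions into
// per-minute counts for an engagement timeline.

interface ReactionEvent {
  attendee: string;
  kind: "agree" | "confused" | "question";
  timestamp: Date;
}

// Bucket events by minute; the resulting map can drive a live
// visualization or a post-meeting summary.
function engagementTimeline(events: ReactionEvent[]): Map<string, number> {
  const buckets = new Map<string, number>();
  for (const e of events) {
    const minute = e.timestamp.toISOString().slice(0, 16); // "YYYY-MM-DDTHH:MM"
    buckets.set(minute, (buckets.get(minute) ?? 0) + 1);
  }
  return buckets;
}
```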
Nowadays, developing Web applications that support distributed user interfaces (DUIs) is straightforward, yet Web sites supporting this kind of user interaction remain hard to find. Although studies in this field have demonstrated that DUIs can improve the user experience, most users are not empowered to manage these kinds of interactions. In this setting, we propose moving the responsibility for distributing both the UI and the user interaction from the application (a Web application) to the client (the Web browser), which also gives rise to inter-application interaction distribution. This paper presents a platform for client-side DUIs, built on the foundations of Web augmentation and End User Development. The idea is to empower end users to apply an augmentation layer over existing Web applications, considering both frequent-use and opportunistic DUI requirements. In this work, we present the architecture and a prototype tool supporting this approach, and illustrate the incorporation of some DUI features through case studies.
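To make the client-side idea concrete, here is a hedged browser-side sketch of an augmentation layer that forwards interactions on an existing page to another device; the selector, relay endpoint, and function name are hypothetical, not the paper's actual tool.

```ts
// Browser-side sketch of the augmentation-layer idea: a client-side
// script decorates an existing page and forwards selected
// interactions to a companion device over a relay.

function augmentForDistribution(selector: string, relayUrl: string): void {
  // A real tool would wait for the socket to open and handle errors.
  const socket = new WebSocket(relayUrl);
  document.querySelectorAll<HTMLElement>(selector).forEach(el => {
    el.style.outline = "2px dashed steelblue"; // mark augmented UI
    el.addEventListener("click", () => {
      // Distribute the interaction instead of handling it locally.
      socket.send(JSON.stringify({ action: "click", target: selector }));
    });
  });
}

// Example: distribute interactions with a video player's controls.
augmentForDistribution(".video-controls button", "wss://relay.example/dui");
```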
Critical human-machine interfaces are present in many systems, including avionics systems and medical devices. Use error is a concern in these systems, both in the hardware panels and input devices and in the software that drives the interfaces. Guaranteeing safe usability, in terms of buttons, knobs, and displays, is now a key element in the overall safety of the system. New integrated development environments (IDEs) based on formal methods technologies have been developed by the research community to support the design and analysis of high-confidence human-machine interfaces. To date, little work has focused on comparing these particular types of formal IDEs. This paper compares and evaluates two state-of-the-art toolkits: CIRCUS, a model-based development and analysis tool based on Petri net extensions, and PVSio-web, a prototyping toolkit based on the PVS theorem proving system.
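The kind of property such toolkits verify can be illustrated with a toy state machine and an exhaustive reachability check, sketched below in TypeScript; the device model and safety rule are invented for illustration, and neither CIRCUS's Petri nets nor PVS's logic look like this.

```ts
// Toy illustration, in the spirit (not the syntax) of CIRCUS or
// PVSio-web: model a small medical-device interface as a state
// machine and check a safety property over all reachable states.

interface State { infusing: boolean; doorOpen: boolean; }

type Event = "start" | "stop" | "openDoor" | "closeDoor";

function step(s: State, e: Event): State {
  switch (e) {
    case "start":     return s.doorOpen ? s : { ...s, infusing: true };
    case "stop":      return { ...s, infusing: false };
    case "openDoor":  return { ...s, doorOpen: true, infusing: false };
    case "closeDoor": return { ...s, doorOpen: false };
  }
}

// Safety property: the pump never infuses while the door is open.
const safe = (s: State) => !(s.infusing && s.doorOpen);

// Exhaustive reachability check from the initial state.
function checkAllReachable(init: State): boolean {
  const events: Event[] = ["start", "stop", "openDoor", "closeDoor"];
  const seen = new Set<string>();
  const stack = [init];
  while (stack.length > 0) {
    const s = stack.pop()!;
    const key = JSON.stringify(s);
    if (seen.has(key)) continue;
    seen.add(key);
    if (!safe(s)) return false;
    events.forEach(e => stack.push(step(s, e)));
  }
  return true;
}

console.log(checkAllReachable({ infusing: false, doorOpen: false })); // true
```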
