Virtual labs allow researchers to design high-throughput and macro-level experiments that are not feasible in traditional in-person physical lab settings. Despite the increasing popularity of online research, researchers still face many technical and logistical barriers when designing and deploying virtual lab experiments. While several platforms exist to facilitate the development of virtual lab experiments, they typically present researchers with a stark trade-off between usability and functionality. We introduce Empirica: a modular virtual lab that offers a solution to the usability-functionality trade-off by employing a flexible-defaults design strategy. This strategy enables us to maintain complete "build anything" flexibility while offering a development platform that is accessible to novice programmers. Empirica's architecture is designed to allow for parameterizable experimental designs, reusable protocols, and rapid development. These features will increase the accessibility of virtual lab experiments, remove barriers to innovation in experiment design, and enable rapid progress in the understanding of distributed human computation.
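The flexible-defaults strategy described above can be illustrated with a short sketch: every experiment parameter carries a sensible default, and researchers override only what their design requires. The parameter names and the `make_experiment` helper below are hypothetical illustrations of the pattern, not Empirica's actual API (which is JavaScript-based).

```python
# Hypothetical sketch of a flexible-defaults configuration pattern.
# All names are illustrative, not Empirica's real interface.

DEFAULTS = {
    "players_per_game": 4,
    "round_count": 10,
    "round_duration_s": 60,
    "treatment": "control",
}

def make_experiment(**overrides):
    """Merge researcher overrides onto defaults; reject unknown keys."""
    unknown = set(overrides) - set(DEFAULTS)
    if unknown:
        raise ValueError(f"unknown parameters: {sorted(unknown)}")
    return {**DEFAULTS, **overrides}

# A novice accepts every default; an expert parameterizes freely.
baseline = make_experiment()
variant = make_experiment(players_per_game=8, treatment="high_incentive")
```

The point of the pattern is that the novice path (call with no arguments) and the expert path (override anything) share one code path, so accessibility does not cap functionality.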
In this paper we report on a study conducted with a group of older adults in which they engaged in participatory design workshops to create a VR ATM training simulation. Based on observations, recordings, and the developed VR application, we present the results of the workshops and offer considerations and recommendations for organizing opportunities for end users, in this case older adults, to directly engage in co-creation of cutting-edge ICT solutions. These include co-designing interfaces and interaction schemes for emerging technologies like VR and AR. We discuss such aspects as user engagement and hardware and software tools suitable for participatory prototyping of VR applications. Finally, we present ideas for further research in the area of VR participatory prototyping with users of various proficiency levels, taking steps towards developing a unified framework for co-design in AR and VR.
Many researchers studying online social communities seek to make such communities better. However, understanding what "better" means is challenging, due to the divergent opinions of community members and the multitude of possible community values, which often conflict with one another. Community members' own values for their communities are not well understood, and how these values align with one another is an open question. Previous research has mostly focused on specific and comparatively well-defined harms within online communities, such as harassment, rule-breaking, and misinformation. In this work, we ask 39 community members on Reddit to describe their values for their communities. We gather 301 responses in members' own words, spanning 125 unique communities, and use iterative categorization to produce a taxonomy of 29 different community values across 9 major categories. We find that members value a broad range of topics ranging from technical features to the diversity of the community, and most frequently prioritize content quality. We identify important understudied topics such as content quality and community size, highlight where values conflict with one another, and call for research into governance methods for communities that protect vulnerable members.
The rapid development of virtual reality technology has increased its availability and, consequently, the number of its possible applications. Interest in the new medium has grown due to the entertainment industry (games, VR experiences, and movies). The number of freely available training and therapeutic applications is also increasing. Contrary to popular opinion, new technologies are also adopted by older adults. Creating virtual environments tailored to the needs and capabilities of older adults requires intense research on the behaviour of these participants in the most common situations, towards commonly used elements of the virtual environment, in typical sceneries. Comfortable immersion in a virtual environment is key to achieving the impression of presence. Presence is, in turn, necessary to obtain appropriate training, persuasive, and therapeutic effects. A virtual agent (a humanoid representation of an algorithm or artificial intelligence) is often an element of the virtual environment interface. Maintaining an appropriate distance to the agent is, therefore, a key parameter for the creator of the VR experience. Older (65+) participants maintain a greater distance towards an agent (a young white male) than younger ones (25-35). This may be caused by differences in the level of arousal, but also by cultural norms. As a consequence, VR developers are advised to use algorithms that maintain the agent at the appropriate distance, depending on the user's age.
A significant problem with immersive virtual reality (IVR) experiments is the ability to compare research conditions. VR kits and IVR environments are complex and diverse, but researchers from different fields, e.g. ICT, psychology, or marketing, often neglect to describe them with a level of detail sufficient to situate their research on the IVR landscape. Careful reporting of these conditions may increase the applicability of research results and their impact on the shared body of knowledge on HCI and IVR. Based on a literature review, our experience and practice, and a synthesis of key IVR factors, in this article we present a reference checklist for describing the research conditions of IVR experiments. Including these in publications will contribute to the comparability of IVR research and help other researchers decide to what extent reported results are relevant to their own research goals. The compiled checklist is a ready-to-use reference tool and takes into account key hardware, software, and human factors, as well as diverse factors connected to visual, audio, tactile, and other aspects of interaction.
Accurately and efficiently crowdsourcing complex, open-ended tasks can be difficult, as crowd participants tend to favor short, repetitive microtasks. We study the crowdsourcing of large networks where the crowd provides the network topology via microtasks. Crowds can explore many types of social and information networks, but we focus on the network of causal attributions, an important network that signifies cause-and-effect relationships. We conduct experiments on Amazon Mechanical Turk (AMT) testing how workers propose and validate individual causal relationships, and introduce a method for independent crowd workers to explore large networks. The core of the method, Iterative Pathway Refinement, is a theoretically-principled mechanism for efficient exploration via microtasks. We evaluate the method using synthetic networks and apply it on AMT to extract a large-scale causal attribution network, then investigate the structure of this network as well as the activity patterns and efficiency of the workers who constructed it. Worker interactions reveal important characteristics of causal perception, and the network data they generate can improve our understanding of causality and causal inference.
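The general propose/validate idea behind crowdsourced network exploration can be sketched as a toy simulation: workers propose effects for a given cause, other workers validate each proposed edge, and accepted effects become new causes to explore, extending pathways iteratively. This is only a minimal illustration of that loop; the `KNOWLEDGE` data, `validate`, and `crowdsource_network` below are hypothetical stand-ins, not the paper's Iterative Pathway Refinement algorithm.

```python
# Toy "ground truth" that simulated workers draw on (hypothetical data).
KNOWLEDGE = {
    "smoking": ["cancer"],
    "cancer": ["death"],
    "rain": ["wet roads"],
    "wet roads": ["accidents"],
}

def validate(edge, n_workers=3, threshold=2):
    """Simulated validation microtask: majority vote among workers.
    In this toy, every worker answers according to KNOWLEDGE."""
    agree = sum(1 for _ in range(n_workers)
                if edge[1] in KNOWLEDGE.get(edge[0], []))
    return agree >= threshold

def crowdsource_network(seeds):
    """Grow a causal network from seed causes via propose/validate
    microtasks, following accepted edges to extend each pathway."""
    network, frontier, seen = set(), list(seeds), set()
    while frontier:
        cause = frontier.pop()
        if cause in seen:
            continue
        seen.add(cause)
        for effect in KNOWLEDGE.get(cause, []):   # proposal microtasks
            if validate((cause, effect)):         # validation microtasks
                network.add((cause, effect))
                frontier.append(effect)           # refine the pathway further
    return network
```

Starting from the seed `"smoking"`, the loop accepts the edge to `"cancer"` and then extends the pathway onward to `"death"`, while the unrelated `"rain"` pathway is never explored; this locality is what lets independent workers cover a large network through small tasks.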