
StripBrush: A Constraint-Relaxed 3D Brush Reduces Physical Effort and Enhances the Quality of Spatial Drawing

Added by Enrique Rosales
Publication date: 2021
Research language: English





Spatial drawing using ruled-surface brush strokes is a popular mode of content creation in immersive VR, yet little is known about the usability of existing spatial drawing interfaces or potential improvements. We address these questions in a three-phase study. (1) Our exploratory need-finding study (N=8) indicates that popular spatial brushes require users to perform large wrist motions, causing physical strain. We speculate that this is partly due to constraining users to align their 3D controllers with their intended stroke normal orientation. (2) We designed and implemented a new brush interface that significantly reduces the physical effort and wrist motion involved in VR drawing, with the additional benefit of increasing drawing accuracy. We achieve this by relaxing the normal alignment constraints, allowing users to control stroke rulings, and estimating normals from them instead. (3) Our comparative evaluation of StripBrush (N=17) against the traditional brush shows that StripBrush requires significantly less physical effort and allows users to more accurately depict their intended shapes while offering competitive ease-of-use and speed.
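The key idea of estimating stroke normals from user-controlled rulings can be sketched in a few lines. This is a minimal illustration, not the authors' implementation; the function name, array shapes, and sampling scheme are assumptions. Given controller positions sampled along a stroke and the ruling direction at each sample, a surface normal falls out of the cross product of the local stroke tangent and the ruling, so the controller never has to be aligned with the normal itself.

```python
import numpy as np

def estimate_stroke_normals(positions, ruling_dirs):
    """Estimate per-sample normals for a ruled-surface brush stroke.

    positions:   (N, 3) controller positions sampled along the stroke.
    ruling_dirs: (N, 3) user-controlled ruling directions (need not be unit).
    Returns (N, 3) unit normals, each perpendicular to both the local
    stroke tangent and the corresponding ruling.
    """
    positions = np.asarray(positions, dtype=float)
    ruling_dirs = np.asarray(ruling_dirs, dtype=float)
    tangents = np.gradient(positions, axis=0)      # central differences
    normals = np.cross(tangents, ruling_dirs)
    lengths = np.linalg.norm(normals, axis=1, keepdims=True)
    return normals / np.clip(lengths, 1e-9, None)  # avoid divide-by-zero
```

For a straight stroke along x with rulings along y, the estimated normals point along z, even though the controller never has to face that direction.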

Related research

We believe that 3D visualisations should not be used alone; by concurrently displaying alternative views, the user can gain the best understanding of all situations. The different presentations signify manifold meanings and afford different tasks. Natural 3D worlds implicitly tell many stories. For instance, walking into a living room and seeing the TV, the types of magazines, and the pictures on the wall tells us much about the occupiers: their occupation, standard of living, taste in design, whether they have kids, and so on. How can we similarly create rich and diverse 3D visualisation presentations? How can we create visualisations that allow people to understand different stories from the data? In a multivariate 2D visualisation, a developer may coordinate and link many views together to provide exploratory visualisation functionality. But how can this be achieved in 3D and in immersive visualisations? Different visualisation types each have specific uses, and each has the potential to tell or evoke a different story. Through several use cases, we discuss the challenges of 3D visualisation, present our argument for concurrent and coordinated visualisations of alternative styles, and encourage developers to consider using alternative representations with any 3D view, even if that view is displayed in a virtual, augmented or mixed reality setup.
With the continuing development of affordable immersive virtual reality (VR) systems, there is now a growing market for consumer content. The current form of consumer systems is not dissimilar to the lab-based VR systems of the past 30 years: the primary input mechanism is a head-tracked display and one or two tracked hands with buttons and joysticks on hand-held controllers. Over those 30 years, a very diverse academic literature has emerged covering the design and ergonomics of 3D user interfaces (3DUIs). However, the growing consumer market has engaged a very broad range of creatives who have built a very diverse set of designs. Sometimes these designs adopt findings from the academic literature, but other times they experiment with completely novel or counter-intuitive mechanisms. In this paper and its online adjunct, we report on novel 3DUI design patterns that are interesting from both design and research perspectives: they are highly novel, potentially broadly re-usable, and/or suggest interesting avenues for evaluation. The supplemental material, which is a living document, is a crowd-sourced repository of interesting patterns. This paper is a curated snapshot of those patterns that were considered to be the most fruitful for further elaboration.
Aaron Hertzmann, 2021
It has often been conjectured that the effectiveness of line drawings can be explained by the similarity of edge images to line drawings. This paper presents several problems with explaining line drawing perception in terms of edges, and shows how the recently proposed Realism Hypothesis of Hertzmann (2020) resolves these problems. There is nonetheless existing evidence that edges are often the best features for predicting where people draw lines; this paper describes how the Realism Hypothesis can explain this evidence.
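The evidence that edges predict where people draw lines can be made concrete with a minimal gradient-magnitude edge map. This is a deliberately simple stand-in for the edge detectors the line-drawing literature compares against, not a method from the paper; the function name is illustrative.

```python
import numpy as np

def gradient_edge_map(image):
    """Gradient-magnitude edge strength of a grayscale image (H, W).

    Pixels with strong intensity gradients are the candidate
    locations for drawn lines under the edge-based account.
    """
    img = np.asarray(image, dtype=float)
    gy, gx = np.gradient(img)          # row and column derivatives
    return np.hypot(gx, gy)            # per-pixel gradient magnitude
```

On a synthetic image whose left half is black and right half is white, the edge map peaks exactly at the columns straddling the intensity step.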
We consider a class of variable effort human annotation tasks in which the number of labels required per item can greatly vary (e.g., finding all faces in an image, named entities in a text, bird calls in an audio recording, etc.). In such tasks, some items require far more effort than others to annotate. Furthermore, the per-item annotation effort is not known until after each item is annotated, since determining the number of labels required is an implicit part of the annotation task itself. On an image bounding-box task with crowdsourced annotators, we show that annotator accuracy and recall consistently drop as effort increases. We hypothesize reasons for this drop and investigate a set of approaches to counteract it. First, we benchmark a set of general best-practice methods for quality crowdsourcing on this task. Notably, only one of these methods actually improves quality: the use of visible gold questions that provide periodic feedback to workers on their accuracy as they work. Given these promising results, we then investigate and evaluate variants of the visible gold approach, yielding further improvement. Final results show a 7% improvement in bounding-box accuracy over the baseline. We discuss the generality of the visible gold approach and promising directions for future research.
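A hedged sketch of the visible-gold idea described above (the scheduling policy, names, and data shapes here are assumptions, not the paper's protocol): insert a known-answer item every k items and record the worker's running gold accuracy as the feedback shown to them.

```python
def run_annotation_session(items, gold_items, answer_fn, gold_every=10):
    """Interleave visible gold questions into an annotation stream.

    Every `gold_every` items, a known-answer (gold) item is inserted
    and the worker's running gold accuracy is recorded as feedback.
    `answer_fn(item)` stands in for the worker's annotation.
    """
    results = []
    feedback_log = []               # running accuracy shown to the worker
    gold_correct = gold_seen = 0
    for i, item in enumerate(items, start=1):
        results.append((item, answer_fn(item)))
        if i % gold_every == 0 and gold_items:
            gold = gold_items[(i // gold_every - 1) % len(gold_items)]
            gold_seen += 1
            gold_correct += int(answer_fn(gold["item"]) == gold["answer"])
            feedback_log.append(gold_correct / gold_seen)
    return results, feedback_log
```

Making the gold questions *visible* (feedback after each one) is the variant the study found effective, as opposed to hidden gold used only for post-hoc filtering.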
To improve viewers' Quality of Experience (QoE) and optimize computer graphics applications, 3D model quality assessment (3D-QA) has become an important task in the multimedia area. Point clouds and meshes are the two most widely used digital representation formats of 3D models, and their visual quality is quite sensitive to lossy operations like simplification and compression. Therefore, many related studies, such as point cloud quality assessment (PCQA) and mesh quality assessment (MQA), have been carried out to measure the resulting visual quality degradation. However, a large part of previous studies utilize full-reference (FR) metrics, which means they fail to predict the quality level in the absence of the reference 3D model. Furthermore, few 3D-QA metrics consider color information, which significantly restricts their effectiveness and scope of application. In this paper, we propose a no-reference (NR) quality assessment metric for colored 3D models represented by both point clouds and meshes. First, we project the 3D models from 3D space into quality-related geometry and color feature domains. Then, natural scene statistics (NSS) and entropy are utilized to extract quality-aware features. Finally, a Support Vector Regressor (SVR) is employed to regress the quality-aware features into quality scores. Our method is validated mainly on the colored point cloud quality assessment database (SJTU-PCQA) and the colored mesh quality assessment database (CMDM). The experimental results show that the proposed method outperforms all state-of-the-art NR 3D-QA metrics and achieves an acceptable gap to the state-of-the-art FR 3D-QA metrics.
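The entropy part of the feature-extraction step can be sketched as follows. This is a hedged illustration, not the authors' code: the function names are hypothetical, and in the described pipeline such quality-aware features (together with NSS statistics) would subsequently be regressed to quality scores with an SVR.

```python
import numpy as np

def entropy_feature(values, bins=32):
    """Shannon entropy (in bits) of a sample of projected values.

    `values` could be, e.g., the pixels of one geometry or color map
    obtained by projecting the 3D model into 2D feature domains.
    """
    hist, _ = np.histogram(np.ravel(values), bins=bins)
    p = hist / max(hist.sum(), 1)      # normalize counts to probabilities
    p = p[p > 0]                       # drop empty bins before log
    return float(-(p * np.log2(p)).sum())

def features_for_model(projections, bins=32):
    """One entropy value per projected feature map; these would feed an
    SVR (e.g. scikit-learn's) alongside the NSS features."""
    return [entropy_feature(proj, bins) for proj in projections]
```

A uniformly spread sample over 32 bins gives the maximum entropy of 5 bits, while a constant map (all mass in one bin) gives 0, so the feature tracks how "busy" each projection is.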