
Efficient Spatial Anti-Aliasing Rendering for Line Joins on Vector Maps

Added by Chaoyang He
Publication date: 2019
Language: English





The spatial anti-aliasing technique for line joins (intersections of road segments) on vector maps is crucial to both visual quality and system performance. Owing to limitations of the OpenGL API, a common practice for achieving an anti-aliased effect is to splice multiple triangles at varying scale levels to approximate the fan-shaped line join. This approximation, however, inevitably introduces visual artifacts, and the rendering performance is not optimal. To circumvent these drawbacks, we propose a simple but efficient algorithm that substitutes just two triangles for the multi-triangle approximation and then renders a realistic fan-shaped curve with an alpha operation based on geometric-relation computation. Our experiments show that it offers a realistic anti-aliasing effect, lower memory cost, a higher frame rate, and line joins drawn without overlapping rendering. The proposed technique has been widely deployed in Internet maps such as Tencent Mobile Maps and Tencent Automotive Maps.
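
The paper's exact geometry computation is not reproduced on this page; the following is a minimal sketch of the general idea under stated assumptions: two triangles form a quad covering the join region around the segments' shared endpoint, and each fragment's alpha is derived from its distance to that endpoint so the blended result reads as a smooth fan-shaped cap. The function name and feathering scheme are illustrative, not the authors' implementation.

    import math

    def join_fragment_alpha(frag_x, frag_y, join_x, join_y,
                            half_width, feather=1.0):
        # Distance from this fragment to the shared endpoint of the two
        # road segments, i.e. the centre of the fan-shaped join.
        dist = math.hypot(frag_x - join_x, frag_y - join_y)
        # Fully opaque inside the cap, fully transparent outside it, with
        # a linear ramp across a 'feather'-pixel band centred on the rim;
        # the ramp is what produces the anti-aliased edge.
        return max(0.0, min(1.0, (half_width + 0.5 * feather - dist) / feather))

Rasterizing the two covering triangles with this per-fragment alpha yields a round cap whose rim fades over roughly one pixel, replacing the many-triangle fan approximation with a single smooth curve.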



Related research

Gaochang Wu, Yebin Liu, Lu Fang (2021)
Light field (LF) reconstruction is mainly confronted with two challenges: large disparity and the non-Lambertian effect. Typical approaches either address the large-disparity challenge using depth estimation followed by view synthesis, or eschew explicit depth information to enable non-Lambertian rendering, but rarely solve both challenges in a unified framework. In this paper, we revisit the classic LF rendering framework to address both challenges by incorporating advanced deep learning techniques. First, we analytically show that the essential issue behind the large-disparity and non-Lambertian challenges is aliasing. Classic LF rendering approaches typically mitigate aliasing with a reconstruction filter in the Fourier domain, which is, however, intractable to implement within a deep learning pipeline. Instead, we introduce an alternative framework that performs anti-aliasing reconstruction in the image domain and analytically show comparable efficacy on the aliasing issue. To explore its full potential, we then embed the anti-aliasing framework into a deep neural network through the design of an integrated architecture and trainable parameters. The network is trained end to end on a specially constructed training set that includes both regular and unstructured LFs. The proposed deep learning pipeline shows substantial superiority in solving both the large-disparity and non-Lambertian challenges compared with other state-of-the-art approaches. Beyond view interpolation for an LF, we show that the proposed pipeline also benefits light-field view extrapolation.
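
The band-limiting argument at the heart of the abstract above is standard: aliasing appears when a signal is resampled without first removing frequencies above the new Nyquist limit. A generic toy illustration of that principle (not the paper's network; the function name and Gaussian sigma choice are illustrative, and scipy is assumed available):

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def antialias_downsample(image, factor):
        # Band-limit first: blur away spatial frequencies above the new
        # Nyquist limit, then subsample. Skipping the blur is what
        # produces aliasing artifacts in the resampled image.
        blurred = gaussian_filter(image, sigma=0.5 * factor)
        return blurred[::factor, ::factor]

The paper's contribution is, in effect, to make this kind of anti-aliasing operate in the image domain inside a trainable pipeline rather than as a fixed Fourier-domain filter.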
Real-time rendering and animation of humans is a core function in games, movies, and telepresence applications. Existing methods have a number of drawbacks we aim to address with our work. Triangle meshes have difficulty modeling thin structures like hair, volumetric representations like Neural Volumes are too low-resolution given a reasonable memory budget, and high-resolution implicit representations like Neural Radiance Fields are too slow for use in real-time applications. We present Mixture of Volumetric Primitives (MVP), a representation for rendering dynamic 3D content that combines the completeness of volumetric representations with the efficiency of primitive-based rendering, e.g., point-based or mesh-based methods. Our approach achieves this by leveraging spatially shared computation with a deconvolutional architecture and by minimizing computation in empty regions of space with volumetric primitives that can move to cover only occupied regions. Our parameterization supports the integration of correspondence and tracking constraints, while being robust to areas where classical tracking fails, such as around thin or translucent structures and areas with large topological variability. MVP is a hybrid that generalizes both volumetric and primitive-based representations. Through a series of extensive experiments we demonstrate that it inherits the strengths of each, while avoiding many of their limitations. We also compare our approach to several state-of-the-art methods and demonstrate that MVP produces superior results in terms of quality and runtime performance.
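
As a rough illustration of the primitive-based volumetric rendering idea described above (skip empty space by sampling only inside primitives the ray actually hits), here is a minimal numpy sketch; the axis-aligned box primitives and the payload callback are assumptions for illustration, not the MVP architecture:

    import numpy as np

    def ray_box(origin, direction, lo, hi):
        # Slab test: parametric entry/exit of the ray against an AABB.
        inv = 1.0 / np.where(direction == 0.0, 1e-12, direction)
        t0, t1 = (lo - origin) * inv, (hi - origin) * inv
        return np.max(np.minimum(t0, t1)), np.min(np.maximum(t0, t1))

    def composite_ray(origin, direction, primitives, step=0.05):
        # Front-to-back alpha compositing, sampling only inside the
        # volumetric primitives this ray intersects, so empty space
        # between them costs nothing.
        color, transmittance = np.zeros(3), 1.0
        hits = []
        for prim in primitives:
            tmin, tmax = ray_box(origin, direction, prim["lo"], prim["hi"])
            if tmax > max(tmin, 0.0):
                hits.append((max(tmin, 0.0), tmax, prim))
        for tmin, tmax, prim in sorted(hits, key=lambda h: h[0]):
            t = tmin
            while t < tmax and transmittance > 1e-3:   # early ray termination
                rgb, sigma = prim["payload"](origin + t * direction)
                alpha = 1.0 - np.exp(-sigma * step)     # opacity of this step
                color += transmittance * alpha * np.asarray(rgb)
                transmittance *= 1.0 - alpha
                t += step
        return color, 1.0 - transmittance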
Preoperative gestures include tactile sampling of the mechanical properties of biological tissue for both histological and pathological considerations. Tactile properties used in conjunction with visual cues can provide useful feedback to the surgeon. The development of novel, cost-effective haptic-based simulators and their introduction into the minimally invasive surgery learning cycle can help absorb the learning curve for residents. Receiving pre-training in a core set of surgical skills can reduce skill acquisition time and risks. We present the integration of a real-time surface stiffness adjustment algorithm and a novel paradigm, force maps, in a visuo-haptic simulator module designed to train internal-organ disease diagnostics through palpation.
An efficient computer algorithm is described for the perspective drawing of a wide class of surfaces. The class includes surfaces corresponding to single-valued, continuous functions defined over rectangular domains. The algorithm automatically computes and eliminates hidden lines. The number of computations in the algorithm grows linearly with the number of sample points on the surface to be drawn. An analysis of the algorithm is presented, and extensions to certain multi-valued functions are indicated. The algorithm is implemented and tested on the .NET 2.0 platform for interactive use. Running times are found to be fast enough for visualization, where on-line interaction and viewpoint control enable effective and rapid examination of a surface from many perspectives.
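
The abstract does not name the algorithm, but the description (single-valued surfaces over rectangular domains, automatic hidden-line elimination, cost linear in the number of samples) matches the classic floating-horizon method; a minimal sketch under that assumption, with an illustrative projection callback and screen width:

    import numpy as np

    def draw_surface(f, xs, ys, project, width=800):
        # Floating-horizon hidden-line elimination for z = f(x, y).
        # Rows are processed front to back; a sample is visible only if
        # its projected height rises above the upper horizon (or dips
        # below the lower horizon) recorded so far in its screen column.
        upper = np.full(width, -np.inf)
        lower = np.full(width, np.inf)
        segments = []
        for y in ys:                              # nearest row first
            prev = None
            for x in xs:
                sx, sy = project(x, y, f(x, y))   # 3-D point -> screen coords
                col = int(sx)
                if not (0 <= col < width):
                    prev = None
                    continue
                visible = sy > upper[col] or sy < lower[col]
                if visible and prev is not None:
                    segments.append((prev, (sx, sy)))   # draw this piece
                prev = (sx, sy) if visible else None
                upper[col] = max(upper[col], sy)        # raise the horizons
                lower[col] = min(lower[col], sy)
        return segments   # visible segments; work is linear in the samples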
Spatial reasoning is an important component of human intelligence. We can imagine the shapes of 3D objects and reason about their spatial relations by merely looking at their three-view line drawings in 2D, with different levels of competence. Can deep networks be trained to perform spatial reasoning tasks? How can we measure their spatial intelligence? To answer these questions, we present the SPARE3D dataset. Based on cognitive science and psychometrics, SPARE3D contains three types of 2D-3D reasoning tasks on view consistency, camera pose, and shape generation, with increasing difficulty. We then design a method to automatically generate a large number of challenging questions with ground truth answers for each task. They are used to provide supervision for training our baseline models using state-of-the-art architectures like ResNet. Our experiments show that although convolutional networks have achieved superhuman performance in many visual learning tasks, their spatial reasoning performance on SPARE3D tasks is either lower than average human performance or even close to random guesses. We hope SPARE3D can stimulate new problem formulations and network designs for spatial reasoning to empower intelligent robots to operate effectively in the 3D world via 2D sensors. The dataset and code are available at https://ai4ce.github.io/SPARE3D.
