
Deep Vectorization of Technical Drawings

Added by Vage Egiazarian
Publication date: 2020
Language: English





We present a new method for vectorization of technical line drawings, such as floor plans, architectural drawings, and 2D CAD images. Our method includes (1) a deep learning-based cleaning stage to eliminate the background and imperfections in the image and fill in missing parts, (2) a transformer-based network to estimate vector primitives, and (3) an optimization procedure to obtain the final primitive configurations. We train the networks on synthetic data, renderings of vector line drawings, and manually vectorized scans of line drawings. Our method quantitatively and qualitatively outperforms a number of existing techniques on a collection of representative technical drawings.
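As a rough illustration of the three-stage pipeline described in the abstract, the sketch below wires up the stages in PyTorch. All module names, layer sizes, and the differentiable rasterizer passed to the refinement step are illustrative assumptions, not the authors' published code.

import torch
import torch.nn as nn
import torch.nn.functional as F

class CleaningNet(nn.Module):
    # Stage 1 (sketch): map a noisy grayscale scan to a cleaned image.
    # A small stand-in for a full image-to-image cleaning network.
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 3, padding=1), nn.Sigmoid(),
        )

    def forward(self, x):
        return self.net(x)

class PrimitiveTransformer(nn.Module):
    # Stage 2 (sketch): predict a fixed-size set of line primitives
    # (x1, y1, x2, y2, width) for each cleaned image patch.
    def __init__(self, n_primitives=10, d_model=128, patch=64):
        super().__init__()
        self.embed = nn.Linear(patch * patch, d_model)
        self.queries = nn.Parameter(torch.randn(n_primitives, d_model))
        layer = nn.TransformerDecoderLayer(d_model, nhead=4, batch_first=True)
        self.decoder = nn.TransformerDecoder(layer, num_layers=2)
        self.head = nn.Linear(d_model, 5)

    def forward(self, patches):                               # (B, 1, 64, 64)
        memory = self.embed(patches.flatten(1)).unsqueeze(1)  # (B, 1, D)
        q = self.queries.unsqueeze(0).expand(patches.shape[0], -1, -1)
        return self.head(self.decoder(q, memory))             # (B, N, 5)

def refine(primitives, target, render, steps=200, lr=1e-2):
    # Stage 3 (sketch): adjust primitive parameters so that a
    # differentiable rasterization matches the cleaned image.
    # render is an assumed differentiable rasterizer, not a real API.
    prims = primitives.detach().clone().requires_grad_(True)
    opt = torch.optim.Adam([prims], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = F.mse_loss(render(prims), target)
        loss.backward()
        opt.step()
    return prims.detach()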




Read More

Aaron Hertzmann, 2020
Why is it that we can recognize object identity and 3D shape from line drawings, even though they do not exist in the natural world? This paper hypothesizes that the human visual system perceives line drawings as if they were approximately realistic images. Moreover, the techniques of line drawing are chosen to accurately convey shape to a human observer. Several implications and variants of this hypothesis are explored.
Analysis of human sketches in deep learning has advanced immensely through the use of waypoint sequences rather than raster-graphic representations. We further aim to model sketches as sequences of low-dimensional parametric curves. To this end, we propose an inverse-graphics framework capable of approximating a raster- or waypoint-based stroke, encoded as a point cloud, with a variable-degree Bézier curve. Building on this module, we present Cloud2Curve, a generative model for scalable high-resolution vector sketches that can be trained end-to-end using point-cloud data alone. As a consequence, our model is also capable of deterministic vectorization, mapping novel raster- or waypoint-based sketches to their corresponding high-resolution, scalable Bézier equivalents. We evaluate the generation and vectorization capabilities of our model on the Quick, Draw! and K-MNIST datasets.
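To make the curve representation concrete, here is a minimal least-squares fit of a single cubic Bézier curve to ordered stroke points under a chord-length parameterization. Cloud2Curve itself learns the fit (and a variable degree) with a neural network; this closed-form version only illustrates what a Bézier approximation of a stroke means.

import numpy as np

def fit_cubic_bezier(points):
    # points: (N, 2) array of roughly ordered stroke samples.
    # Chord-length parameter t in [0, 1] for each sample.
    d = np.linalg.norm(np.diff(points, axis=0), axis=1)
    t = np.concatenate([[0.0], np.cumsum(d)]) / max(d.sum(), 1e-9)
    # Bernstein basis for a degree-3 Bézier curve.
    B = np.stack([(1 - t) ** 3,
                  3 * t * (1 - t) ** 2,
                  3 * t ** 2 * (1 - t),
                  t ** 3], axis=1)                    # (N, 4)
    # Solve B @ C = points for the control points in a least-squares sense.
    C, *_ = np.linalg.lstsq(B, points, rcond=None)
    return C                                          # (4, 2) control points

# Example: noisy samples along a quarter-circle arc.
theta = np.linspace(0, np.pi / 2, 50)
pts = np.stack([np.cos(theta), np.sin(theta)], axis=1)
pts += 0.01 * np.random.randn(*pts.shape)
print(fit_cubic_bezier(pts))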
Fencing is a sport that relies heavily on tactics. However, most existing methods for analyzing fencing data are based on statistical models in which hidden patterns are difficult to discover. Unlike sequential games such as tennis and table tennis, fencing is a simultaneous game, so existing sports-visualization methods do not work well for fencing matches. In this study, we cooperated with experts to analyze the technical and tactical characteristics of fencing competitions. To meet the requirements of the fencing experts, we designed and implemented FencingVis, an interactive visualization system for fencing competition data. The action sequences in a bout are first visualized by modified bar charts that reveal the footwork and bladework actions of both fencers. An interactive technique is then provided for exploring fencers' behavior patterns. The different combinations of tactical behavior patterns are further mapped to a graph model and visualized as a tactical flow graph, which can reveal the strategies adopted by both fencers and their mutual influence within a bout. We also provide a number of coordinated views that supplement the tactical flow graph and display information about the competition from different perspectives, integrating with the flow graph through a consistent visual style and view coordination. We demonstrate the usability and effectiveness of the proposed system with three case studies. Based on expert feedback, FencingVis can help analysts find not only the tactical patterns hidden in fencing bouts but also the technical and tactical characteristics of the contestants.
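The tactical flow graph can be pictured as a weighted transition graph over tactic labels. The toy sketch below counts transitions between consecutive tactics across bouts; the tactic names and data are invented for illustration and are not from the FencingVis dataset.

from collections import Counter

# Hypothetical tactic sequences, one list per bout.
bouts = [
    ["attack", "parry-riposte", "attack", "counter-attack"],
    ["feint", "attack", "parry-riposte", "attack"],
]

# Edge weights: how often one tactic is followed by another.
edges = Counter()
for seq in bouts:
    for a, b in zip(seq, seq[1:]):
        edges[(a, b)] += 1

for (a, b), w in sorted(edges.items()):
    print(f"{a} -> {b}: {w}")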
We present the first deep implicit 3D morphable model (i3DMM) of full heads. Unlike earlier morphable face models, it not only captures identity-specific geometry, texture, and expressions of the frontal face but also models the entire head, including hair. We collect a new dataset consisting of 64 people with different expressions and hairstyles to train i3DMM. Our approach has the following favorable properties: (i) it is the first full-head morphable model that includes hair; (ii) in contrast to mesh-based models, it can be trained on merely rigidly aligned scans, without requiring difficult non-rigid registration; (iii) we design a novel architecture that decouples the shape model into an implicit reference shape and a deformation of this reference shape, so that dense correspondences between shapes can be learned implicitly; (iv) this architecture allows us to semantically disentangle the geometry and color components, as color is learned in the reference space. Geometry is further disentangled into identity, expression, and hairstyle components, while color is disentangled into identity and hairstyle components. We show the merits of i3DMM using ablation studies, comparisons to state-of-the-art models, and applications such as semantic head editing and texture transfer. We will make our model publicly available.
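The decoupled architecture in property (iii) can be sketched as two small networks: a deformation field that warps query points into a canonical reference space conditioned on a latent code, and an implicit reference shape evaluated there. Layer sizes and the latent-code handling below are illustrative assumptions, not the i3DMM implementation.

import torch
import torch.nn as nn

class DeformField(nn.Module):
    # Maps (query point, subject code) to a warped point in reference space.
    def __init__(self, code_dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(3 + code_dim, 128), nn.ReLU(),
            nn.Linear(128, 128), nn.ReLU(),
            nn.Linear(128, 3),
        )

    def forward(self, x, z):
        return x + self.net(torch.cat([x, z], dim=-1))  # predicted offset

class ReferenceSDF(nn.Module):
    # Implicit reference shape: point -> signed distance.
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(3, 128), nn.ReLU(),
            nn.Linear(128, 128), nn.ReLU(),
            nn.Linear(128, 1),
        )

    def forward(self, x):
        return self.net(x)

deform, sdf = DeformField(), ReferenceSDF()
x = torch.randn(1024, 3)       # query points in observation space
z = torch.randn(1024, 64)      # per-subject latent code, broadcast per point
distance = sdf(deform(x, z))   # SDF evaluated in the shared reference space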
RGBD images, combining high-resolution color and lower-resolution depth from various types of depth sensors, are increasingly common. One can significantly improve the resolution of depth maps by taking advantage of color information; deep learning methods make combining color and depth information particularly easy. However, fusing these two sources of data may lead to a variety of artifacts. If depth maps are used to reconstruct 3D shapes, e.g., for virtual reality applications, the visual quality of upsampled images is particularly important. The main idea of our approach is to measure the quality of depth map upsampling using renderings of resulting 3D surfaces. We demonstrate that a simple visual appearance-based loss, when used with either a trained CNN or simply a deep prior, yields significantly improved 3D shapes, as measured by a number of existing perceptual metrics. We compare this approach with a number of existing optimization and learning-based techniques.
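The core idea, comparing renderings of the resulting surface rather than raw depth values, can be sketched with a simple Lambertian shading term computed from depth gradients. The shading model and finite-difference normals below are illustrative assumptions, not the paper's exact loss.

import torch
import torch.nn.functional as F

def shade(depth, light=(0.0, 0.0, 1.0)):
    # depth: (B, 1, H, W). Estimate normals by finite differences and
    # return a Lambertian shading image under a fixed directional light.
    dzdx = depth[..., :, 1:] - depth[..., :, :-1]
    dzdy = depth[..., 1:, :] - depth[..., :-1, :]
    dzdx = F.pad(dzdx, (0, 1, 0, 0))   # pad back to (B, 1, H, W)
    dzdy = F.pad(dzdy, (0, 0, 0, 1))
    n = torch.cat([-dzdx, -dzdy, torch.ones_like(depth)], dim=1)
    n = n / n.norm(dim=1, keepdim=True).clamp_min(1e-8)
    l = torch.tensor(light).view(1, 3, 1, 1)
    return (n * l).sum(dim=1, keepdim=True).clamp(0, 1)

def render_loss(pred_depth, gt_depth):
    # Penalize differences in appearance rather than in raw depth.
    return F.l1_loss(shade(pred_depth), shade(gt_depth))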
