
Surface2Volume: Surface Segmentation Conforming Assemblable Volumetric Partition

Added by Marco Livesu
Publication date: 2019
Language: English





Users frequently seek to fabricate objects whose outer surfaces consist of regions with different surface attributes, such as color or material. Manufacturing such objects in a single piece is often challenging or even impossible. The alternative is to partition them into single-attribute volumetric parts that can be fabricated separately and then assembled to form the target object. Facilitating this approach requires partitioning the input model into parts that conform to the surface segmentation and that can be moved apart with no collisions. We propose Surface2Volume, a partition algorithm capable of producing such assemblable parts, each of which is affiliated with a single attribute, the outer surface of whose assembly conforms to the input surface geometry and segmentation. In computing the partition we strictly enforce conformity with the surface segmentation and assemblability, and optimize for ease of fabrication by minimizing part count, promoting part simplicity, and simplifying assembly sequencing. We note that computing the desired partition requires solving for three types of variables: per-part assembly trajectories; partition topology, i.e., the connectivity of the interface surfaces separating the different parts; and the geometry, or location, of these interfaces. We efficiently produce the desired partitions by addressing one type of variable at a time: first computing the assembly trajectories, then determining interface topology, and finally computing interface locations that allow the parts to be assembled. We algorithmically identify inputs that necessitate sequential assembly, and partition these inputs gradually by computing and disassembling a subset of assemblable parts at a time. We demonstrate our method....
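A rough illustration of the assemblability constraint discussed above (not the authors' algorithm): the Python sketch below tests whether a voxelized part can be slid along a straight trajectory without ever overlapping the remaining volume. The function name and its voxel-set inputs are hypothetical simplifications; the real algorithm operates on a volumetric discretization of the input and solves for the trajectories rather than testing a fixed one.

import numpy as np

def translation_is_collision_free(part_voxels, rest_voxels, direction,
                                  steps=64, step_size=1.0):
    # Toy assemblability test: translate a voxelized part along a straight
    # trajectory and report whether it ever overlaps the rest of the object.
    # part_voxels / rest_voxels are (N, 3) integer voxel coordinates.
    rest = {tuple(v) for v in rest_voxels}
    d = np.asarray(direction, dtype=float)
    d = d / np.linalg.norm(d)
    for k in range(1, steps + 1):
        offset = np.round(k * step_size * d).astype(int)
        moved = part_voxels + offset
        if any(tuple(v) in rest for v in moved):
            return False   # collision: this trajectory does not extract the part
    return True            # the part can be slid out along `direction`

# Example: two 4x4x4 blocks stacked along z.
part = np.argwhere(np.ones((4, 4, 4), dtype=bool))            # block at the origin
rest = part + np.array([0, 0, 4])                             # identical block above it
print(translation_is_collision_free(part, rest, [0, 0, -1]))  # True: slides away downward
print(translation_is_collision_free(part, rest, [0, 0, 1]))   # False: collides with `rest`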

Related research

We present the first algorithm for designing volumetric Michell Trusses. Our method uses a parametrization approach to generate trusses made of structural elements aligned with the primary direction of an object's stress field. Such trusses exhibit high strength-to-weight ratios. We demonstrate the structural robustness of our designs via a posteriori physical simulation. We believe our algorithm serves as an important complement to existing structural optimization tools and as a novel standalone design tool itself.
Field-guided parametrization methods have proven effective for quad meshing of surfaces; these methods compute smooth cross fields to guide the meshing process and then integrate the fields to construct a discrete mesh. A key challenge in extending these methods to three dimensions, however, is representation of field values. Whereas cross fields can be represented by tangent vector fields that form a linear space, the 3D analog, an octahedral frame field, takes values in a nonlinear manifold. In this work, we describe the space of octahedral frames in the language of differential and algebraic geometry. With this understanding, we develop geometry-aware tools for optimization of octahedral fields, namely geodesic stepping and exact projection via semidefinite relaxation. Our algebraic approach not only provides an elegant and mathematically-sound description of the space of octahedral frames but also suggests a generalization to frames whose three axes scale independently, better capturing the singular behavior we expect to see in volumetric frame fields. These new odeco frames, so-called as they are represented by orthogonally decomposable tensors, also admit a semidefinite-program-based projection operator. Our description of the spaces of octahedral and odeco frames suggests computing frame fields via manifold-based optimization algorithms; we show that these algorithms efficiently produce high-quality fields while maintaining stability and smoothness.
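As a loose illustration of the "step in an ambient space, then project back onto the nonlinear manifold" pattern that such frame-field optimization relies on, the Python sketch below smooths a small graph of frames represented as plain 3x3 rotation matrices, projecting back to the rotation group with an SVD. This is a generic stand-in, not the paper's geodesic stepping or its exact semidefinite-relaxation projection for octahedral and odeco frames.

import numpy as np

def project_to_rotation(M):
    # Closest rotation matrix to M in the Frobenius norm (polar decomposition via SVD).
    U, _, Vt = np.linalg.svd(M)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(U @ Vt))])
    return U @ D @ Vt

def smooth_frames(frames, edges, iters=200, step=0.1):
    # Projected-gradient smoothing of per-cell frames (here: 3x3 rotations).
    # frames: (n, 3, 3) array; edges: list of (i, j) adjacency pairs.
    F = frames.copy()
    for _ in range(iters):
        grad = np.zeros_like(F)
        for i, j in edges:
            diff = F[i] - F[j]              # gradient of 0.5 * ||F_i - F_j||_F^2
            grad[i] += diff
            grad[j] -= diff
        F = F - step * grad                                  # unconstrained step ...
        F = np.stack([project_to_rotation(Fi) for Fi in F])  # ... then project back
    return F

# Example: smooth three random frames along a path graph.
rng = np.random.default_rng(0)
frames = np.stack([project_to_rotation(rng.standard_normal((3, 3))) for _ in range(3)])
smoothed = smooth_frames(frames, edges=[(0, 1), (1, 2)])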
Real-time rendering and animation of humans is a core function in games, movies, and telepresence applications. Existing methods have a number of drawbacks we aim to address with our work. Triangle meshes have difficulty modeling thin structures like hair, volumetric representations like Neural Volumes are too low-resolution given a reasonable memory budget, and high-resolution implicit representations like Neural Radiance Fields are too slow for use in real-time applications. We present Mixture of Volumetric Primitives (MVP), a representation for rendering dynamic 3D content that combines the completeness of volumetric representations with the efficiency of primitive-based rendering, e.g., point-based or mesh-based methods. Our approach achieves this by leveraging spatially shared computation with a deconvolutional architecture and by minimizing computation in empty regions of space with volumetric primitives that can move to cover only occupied regions. Our parameterization supports the integration of correspondence and tracking constraints, while being robust to areas where classical tracking fails, such as around thin or translucent structures and areas with large topological variability. MVP is a hybrid that generalizes both volumetric and primitive-based representations. Through a series of extensive experiments we demonstrate that it inherits the strengths of each, while avoiding many of their limitations. We also compare our approach to several state-of-the-art methods and demonstrate that MVP produces superior results in terms of quality and runtime performance.
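The core idea, a set of movable primitives that carry volumetric payloads so that empty space can be skipped, can be illustrated in a few lines. The Python sketch below is a hypothetical toy: it only performs the coverage test and payload lookup for axis-aligned box primitives at query points, without the deconvolutional decoder or the actual rendering (integration of samples along camera rays) of the full system.

import numpy as np

def evaluate_primitives(points, centers, half_sizes, payloads):
    # Toy evaluation of a mixture of box-shaped volumetric primitives at query points.
    # points: (P, 3); centers, half_sizes: (K, 3); payloads: list of (R, R, R, 4) RGBA grids.
    out = np.zeros((len(points), 4))
    for c, h, payload in zip(centers, half_sizes, payloads):
        local = (points - c) / h                         # map into the primitive's [-1, 1]^3 frame
        inside = np.all(np.abs(local) <= 1.0, axis=1)
        if not inside.any():
            continue                                     # empty space: no work done here
        res = payload.shape[0]
        idx = np.clip(((local[inside] + 1.0) * 0.5 * (res - 1)).astype(int), 0, res - 1)
        out[inside] += payload[idx[:, 0], idx[:, 1], idx[:, 2]]   # nearest-neighbour lookup
    return out

# Example: one primitive covering the unit cube around the origin.
rgba = np.random.rand(8, 8, 8, 4)
samples = np.random.uniform(-2, 2, size=(1000, 3))
colors = evaluate_primitives(samples, centers=np.array([[0.0, 0.0, 0.0]]),
                             half_sizes=np.array([[1.0, 1.0, 1.0]]), payloads=[rgba])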
Given an algorithm, the quality of the output largely depends on a proper specification of the input parameters. A lot of work has been done to analyze tasks related to using a fixed model [25] and finding a good set of inputs. In this paper we present a different scenario: model building. In contrast to model usage, the underlying algorithm, i.e., the underlying model, changes, and therefore the associated parameters change as well. Developing a new algorithm requires a particular set of parameters that, on the one hand, give access to an expected range of outputs and, on the other hand, are still interpretable. As the model is developed and parameters are added, deleted, or changed, different features of the outputs become of interest. Therefore it is important to find objective measures that quantify these features. In a model-building process these features are prone to change and need to be adaptable as the model changes. We discuss these problems in the context of cellPACK, a tool that generates virtual 3D cells. Our analysis is based on an output set generated by sampling the input parameter space. Hence we also present techniques and metrics to analyze an ensemble of probabilistic volumes.
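As a minimal example of the kind of ensemble summary such an analysis can build on (generic statistics, not the specific measures proposed in the paper), the Python sketch below reduces a set of probabilistic volumes, one per sampled parameter configuration, to per-voxel mean, variance, and binary-entropy maps.

import numpy as np

def ensemble_statistics(volumes):
    # volumes: (n_runs, X, Y, Z) array of per-voxel occupancy probabilities in [0, 1].
    V = np.asarray(volumes, dtype=float)
    mean = V.mean(axis=0)                    # expected occupancy per voxel
    var = V.var(axis=0)                      # disagreement between runs
    p = np.clip(mean, 1e-9, 1.0 - 1e-9)
    entropy = -(p * np.log2(p) + (1.0 - p) * np.log2(1.0 - p))   # highest where runs disagree most
    return mean, var, entropy

# Example: 20 sampled parameter settings, each producing a 32^3 probabilistic volume.
ensemble = np.random.rand(20, 32, 32, 32)
mean, var, entropy = ensemble_statistics(ensemble)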
Transformers, the default model of choice in natural language processing, have drawn scant attention from the medical imaging community. Given their ability to exploit long-term dependencies, transformers are promising aids for typical convolutional neural networks (convnets), helping them overcome the inherent limitations of their spatial inductive bias. However, most recently proposed transformer-based segmentation approaches simply treat transformers as assisting modules that encode global context into convolutional representations, without investigating how to optimally combine self-attention (i.e., the core of transformers) with convolution. To address this issue, in this paper we introduce nnFormer (i.e., Not-aNother transFormer), a powerful segmentation model with an interleaved architecture based on an empirical combination of self-attention and convolution. In practice, nnFormer learns volumetric representations from 3D local volumes. Compared to a naive voxel-level self-attention implementation, such volume-based operations reduce the computational complexity by approximately 98% and 99.5% on the Synapse and ACDC datasets, respectively. In comparison to prior-art network configurations, nnFormer achieves substantial improvements over previous transformer-based methods on the two commonly used datasets Synapse and ACDC. For instance, nnFormer outperforms Swin-UNet by over 7 percent on Synapse. Even when compared to nnUNet, currently the best-performing fully convolutional medical segmentation network, nnFormer still provides slightly better performance on Synapse and ACDC.
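A loose PyTorch sketch of the interleaving idea is shown below: a residual 3D convolution stage alternates with self-attention restricted to non-overlapping local volumes, which is what keeps the cost far below voxel-level global attention. The module names, layer choices, and window size are illustrative assumptions, not the published nnFormer architecture.

import torch
import torch.nn as nn

class LocalVolumeAttention(nn.Module):
    # Self-attention restricted to non-overlapping local 3D volumes (windows).
    def __init__(self, channels, window=4, heads=4):
        super().__init__()
        self.window = window
        self.attn = nn.MultiheadAttention(channels, heads, batch_first=True)

    def forward(self, x):                       # x: (B, C, D, H, W)
        B, C, D, H, W = x.shape
        w = self.window
        assert D % w == 0 and H % w == 0 and W % w == 0, "volume size must be divisible by the window"
        # Partition the volume into (D/w * H/w * W/w) windows of w^3 voxels each.
        x = x.reshape(B, C, D // w, w, H // w, w, W // w, w)
        x = x.permute(0, 2, 4, 6, 3, 5, 7, 1).reshape(-1, w * w * w, C)
        x, _ = self.attn(x, x, x)               # attention within each window only
        # Undo the window partition.
        x = x.reshape(B, D // w, H // w, W // w, w, w, w, C)
        x = x.permute(0, 7, 1, 4, 2, 5, 3, 6).reshape(B, C, D, H, W)
        return x

class InterleavedBlock(nn.Module):
    # One residual convolution stage followed by one local self-attention stage.
    def __init__(self, channels, window=4, heads=4):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv3d(channels, channels, kernel_size=3, padding=1),
            nn.InstanceNorm3d(channels),
            nn.GELU(),
        )
        self.attn = LocalVolumeAttention(channels, window, heads)

    def forward(self, x):
        x = x + self.conv(x)    # convolution supplies the spatial inductive bias
        x = x + self.attn(x)    # attention adds longer-range context
        return x

# Example: a 32-channel feature volume of size 16^3 with 4^3 attention windows.
x = torch.randn(1, 32, 16, 16, 16)
y = InterleavedBlock(channels=32)(x)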
