
ScalarFlow: A Large-Scale Volumetric Data Set of Real-world Scalar Transport Flows for Computer Animation and Machine Learning

 Added by Nils Thuerey
 Publication date 2020
Language: English





In this paper, we present ScalarFlow, a first large-scale data set of reconstructions of real-world smoke plumes. We additionally propose a framework for accurate physics-based reconstructions from a small number of video streams. Central components of our algorithm are a novel estimation of unseen inflow regions and an efficient regularization scheme. Our data set includes a large number of complex, natural buoyancy-driven flows, which transition to turbulence and contain observable scalar transport processes. As such, the ScalarFlow data set is tailored towards computer graphics, vision, and learning applications. The published data set will contain volumetric reconstructions of velocity and density, input image sequences, calibration data, code, and instructions on how to recreate the commodity-hardware capture setup. We further demonstrate one of the many potential application areas: a first perceptual evaluation study, which reveals that the captured flows are complex enough that regular solvers require very high simulation resolutions to recreate even part of their natural complexity.
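The buoyancy-driven plumes described above can be pictured with a simple Boussinesq-style force term, in which the advected scalar drives vertical velocity. The following is a minimal sketch under that assumption, not the paper's actual reconstruction solver; the function name, time step, and buoyancy coefficient are illustrative.

```python
import numpy as np

def add_buoyancy(vel_y, density, dt=0.1, alpha=1.0):
    """Boussinesq-style buoyancy: the advected scalar (smoke density)
    accelerates the flow upward in proportion to its concentration.
    vel_y and density are 2D grids of identical shape."""
    return vel_y + dt * alpha * density

# Toy domain: a blob of smoke near the bottom row (the "inflow" region).
density = np.zeros((4, 4))
density[3, 1:3] = 1.0
vel_y = np.zeros_like(density)
vel_y = add_buoyancy(vel_y, density)
# Cells holding smoke gain upward velocity; empty cells are unchanged.
```

In a full solver this force step would be followed by pressure projection and advection; the sketch isolates only the buoyancy coupling that makes these plumes rise and eventually transition to turbulence.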


Related research

Simulating realistic radar data has the potential to significantly accelerate the development of data-driven approaches to radar processing. However, it is fraught with difficulty due to the notoriously complex image formation process. Here we propose to learn a radar sensor model capable of synthesising faithful radar observations based on simulated elevation maps. In particular, we adopt an adversarial approach to learning a forward sensor model from unaligned radar examples. In addition, modelling the backward model encourages the output to remain aligned to the world state through a cyclical consistency criterion. The backward model is further constrained to predict elevation maps from real radar data that are grounded by partial measurements obtained from corresponding lidar scans. Both models are trained in a joint optimisation. We demonstrate the efficacy of our approach by evaluating a downstream segmentation model trained purely on simulated data in a real-world deployment. This achieves performance within four percentage points of the same model trained entirely on real data.
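The cyclical consistency criterion mentioned above can be illustrated as an L1 penalty keeping backward(forward(x)) close to x, in the style of CycleGAN training. The sketch below uses toy stand-in models and invented names; it is not the paper's network or loss implementation.

```python
import numpy as np

def cycle_consistency_loss(elevation, forward, backward):
    """L1 penalty tying the composed mapping back to the input:
    `forward` maps elevation -> radar, `backward` maps radar -> elevation,
    and the loss measures how far the round trip drifts from identity."""
    reconstructed = backward(forward(elevation))
    return np.mean(np.abs(elevation - reconstructed))

# Toy models: forward doubles the signal; an imperfect backward
# halves it but adds a constant bias of 0.1.
elevation = np.array([[1.0, 2.0], [3.0, 4.0]])
loss = cycle_consistency_loss(elevation,
                              forward=lambda x: 2.0 * x,
                              backward=lambda y: 0.5 * y + 0.1)
# The round-trip error is the uniform bias, so the loss is 0.1.
```

In the actual joint optimisation this term would be combined with adversarial losses on both mappings and the lidar-grounded supervision of the backward model.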
A considerable limitation of employing sparse voxel octrees (SVOs) as a model format for ray tracing has been that the octree data structure is inherently static. Because traversal algorithms depend on the strict hierarchical structure of octrees, it has been challenging to achieve real-time performance of SVO model animation in ray tracing, since the octree data structure would typically have to be regenerated every frame. Presented in this article is a novel method for animation of models specified in the SVO format. The method distinguishes itself by permitting model transformations such as rotation, translation, and anisotropic scaling, while preserving the hierarchical structure of SVO models so that they may be efficiently traversed. Due to its modest memory footprint and straightforward arithmetic operations, the method is well-suited for implementation in hardware. A software ray tracing implementation of animated SVO models demonstrates real-time performance on current-generation desktop GPUs, and shows that the animation method does not substantially slow down the rendering procedure compared to rendering static SVOs.
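A standard way to ray-trace a rigidly transformed model without rebuilding its acceleration structure, in the spirit of the method above, is to apply the inverse model transform to the ray and traverse the static structure unchanged. The 2D sketch below (rotation and translation only, with illustrative names) shows the idea; it is not the article's actual algorithm, which also handles anisotropic scaling.

```python
import numpy as np

def to_model_space(origin, direction, rotation, translation):
    """Transform a world-space ray into the model's local frame so the
    unmodified SVO can be traversed as if the model were static."""
    inv_rot = rotation.T  # inverse of an orthonormal rotation matrix
    local_origin = inv_rot @ (origin - translation)
    local_dir = inv_rot @ direction
    return local_origin, local_dir

# Model rotated 90 degrees and translated by (1, 0).
theta = np.pi / 2
rot = np.array([[np.cos(theta), -np.sin(theta)],
                [np.sin(theta),  np.cos(theta)]])
o, d = to_model_space(np.array([1.0, 0.0]), np.array([0.0, 1.0]),
                      rot, translation=np.array([1.0, 0.0]))
# The ray now starts at the model origin and points along local +x.
```

Transforming one ray per frame is far cheaper than regenerating the octree, which is why the hierarchy can stay intact across animation frames.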
Real-time rendering and animation of humans is a core function in games, movies, and telepresence applications. Existing methods have a number of drawbacks we aim to address with our work. Triangle meshes have difficulty modeling thin structures like hair, volumetric representations like Neural Volumes are too low-resolution given a reasonable memory budget, and high-resolution implicit representations like Neural Radiance Fields are too slow for use in real-time applications. We present Mixture of Volumetric Primitives (MVP), a representation for rendering dynamic 3D content that combines the completeness of volumetric representations with the efficiency of primitive-based rendering, e.g., point-based or mesh-based methods. Our approach achieves this by leveraging spatially shared computation with a deconvolutional architecture and by minimizing computation in empty regions of space with volumetric primitives that can move to cover only occupied regions. Our parameterization supports the integration of correspondence and tracking constraints, while being robust to areas where classical tracking fails, such as around thin or translucent structures and areas with large topological variability. MVP is a hybrid that generalizes both volumetric and primitive-based representations. Through a series of extensive experiments we demonstrate that it inherits the strengths of each, while avoiding many of their limitations. We also compare our approach to several state-of-the-art methods and demonstrate that MVP produces superior results in terms of quality and runtime performance.
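The efficiency argument above rests on standard front-to-back alpha compositing along each ray, which can terminate early once the ray becomes opaque and skip empty space entirely. The following is a generic sketch of that compositing step, not the MVP implementation; names and the termination threshold are illustrative.

```python
import numpy as np

def composite(colors, alphas, eps=1e-4):
    """Front-to-back alpha compositing of samples along one ray.
    Accumulates color weighted by remaining transmittance and stops
    early when the ray is effectively opaque."""
    out = np.zeros(3)
    transmittance = 1.0
    for c, a in zip(colors, alphas):
        out += transmittance * a * c
        transmittance *= (1.0 - a)
        if transmittance < eps:  # early termination in opaque regions
            break
    return out

# Two samples: a half-transparent red one in front of an opaque green one.
color = composite([np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0])],
                  [0.5, 1.0])
```

Because primitives that move to cover only occupied regions contribute samples only where content exists, most rays accumulate few samples, which is the source of the runtime advantage over dense volumetric grids.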
Existing online 3D shape repositories contain thousands of 3D models but lack photorealistic appearance. We present an approach to automatically assign high-quality, realistic appearance models to large scale 3D shape collections. The key idea is to jointly leverage three types of online data -- shape collections, material collections, and photo collections, using the photos as reference to guide assignment of materials to shapes. By generating a large number of synthetic renderings, we train a convolutional neural network to classify materials in real photos, and employ 3D-2D alignment techniques to transfer materials to different parts of each shape model. Our system produces photorealistic, relightable, 3D shapes (PhotoShapes).
Executing machine learning (ML) pipelines in real-time on radiology images is hard due to the limited computing resources in clinical environments and the lack of efficient data transfer capabilities to run them on research clusters. We propose Niffler, an integrated framework that enables the execution of ML pipelines at research clusters by efficiently querying and retrieving radiology images from the Picture Archiving and Communication Systems (PACS) of the hospitals. Niffler uses the Digital Imaging and Communications in Medicine (DICOM) protocol to fetch and store imaging data, and provides metadata extraction capabilities and application programming interfaces (APIs) to apply filters on the images. Niffler further enables the sharing of the outcomes from the ML pipelines in a de-identified manner. Niffler has been running stably for more than 19 months and has supported several research projects at the department. In this paper, we present its architecture and three of its use cases: inferior vena cava (IVC) filter detection from the images in real-time, identification of scanner utilization, and scanner clock calibration. Evaluations on the Niffler prototype highlight its feasibility and efficiency in facilitating the ML pipelines on the images and metadata in real-time and retrospectively.
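The metadata filtering described above can be pictured as predicate matching over extracted DICOM tags. The sketch below uses plain dictionaries and a made-up helper rather than Niffler's actual API or real DICOM parsing; it only illustrates the filtering concept.

```python
def filter_studies(records, **criteria):
    """Keep only metadata records whose tags match every criterion.
    Each record is a plain dict standing in for extracted DICOM metadata
    (e.g. the Modality and StationName tags)."""
    return [r for r in records
            if all(r.get(tag) == value for tag, value in criteria.items())]

# Toy metadata extracted from three studies.
studies = [
    {"Modality": "CT", "StationName": "SCANNER-A"},
    {"Modality": "MR", "StationName": "SCANNER-B"},
    {"Modality": "CT", "StationName": "SCANNER-B"},
]
ct_on_b = filter_studies(studies, Modality="CT", StationName="SCANNER-B")
# Only the third study matches both criteria.
```

Use cases such as scanner-utilization analysis reduce to exactly this kind of query over extracted tags, run either in real time or retrospectively over the stored metadata.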
