A recent development, called isogeometric analysis, provides a unified approach for the design, analysis, and optimization of functional products in industry. Traditional volume rendering methods for inspecting the results of such numerical simulations cannot be applied directly to isogeometric models. We present a novel approach for interactive visualization of isogeometric analysis results that ensures correct, i.e., pixel-accurate, geometry of the volume, including its bounding surfaces. The entire OpenGL pipeline is used in a multi-stage algorithm that leverages techniques from surface rendering and order-independent transparency, as well as the theory and numerical methods of ordinary differential equations. We showcase the efficiency of our approach on different models relevant to industry, ranging from quality inspection of the geometry parametrization to stress analysis in linear elasticity and visualization of computational fluid dynamics results.
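As an illustration of where ordinary differential equations enter such a pipeline, the sketch below (not the authors' implementation) marches a view ray through the parametric domain of a trivariate volume G(u): a straight physical-space ray x(t) = x0 + t d pulls back to the ODE du/dt = J(u)^{-1} d with J = dG/du, integrated here with classic RK4. The `jacobian` callback stands in for whatever spline evaluation an isogeometric model provides and is purely hypothetical.

```python
import numpy as np

def pullback_velocity(u, d, jacobian):
    """Right-hand side of the ray ODE in parameter space: J(u)^{-1} d."""
    return np.linalg.solve(jacobian(u), d)

def march_ray(u0, d, jacobian, dt=0.01, steps=200):
    """Classic RK4 integration of the ray preimage; returns parameter samples."""
    u, samples = np.asarray(u0, dtype=float), []
    for _ in range(steps):
        k1 = pullback_velocity(u, d, jacobian)
        k2 = pullback_velocity(u + 0.5 * dt * k1, d, jacobian)
        k3 = pullback_velocity(u + 0.5 * dt * k2, d, jacobian)
        k4 = pullback_velocity(u + dt * k3, d, jacobian)
        u = u + (dt / 6.0) * (k1 + 2.0 * k2 + 2.0 * k3 + k4)
        if np.any(u < 0.0) or np.any(u > 1.0):   # ray has left the parametric domain
            break
        samples.append(u.copy())
    return samples
```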
Despite recent advances in graphics hardware capabilities, a brute-force approach is incapable of interactively displaying terabytes of data. We have implemented a system that uses hierarchical levels of detail for the results of cosmological simulations in order to display visually accurate results without loading the full dataset (containing over 10 billion points). The guiding principle of the program is that the user should not be able to distinguish what they are seeing from a full rendering of the original data. Furthermore, because the levels of detail are organized in a tree, the size of the underlying data is limited only by the capacity of the I/O system that stores it.
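A minimal sketch of the tree-based selection idea, under assumptions not taken from the paper (an octree-like hierarchy whose nodes store a representative point subsample, and a screen-space error measured by the projected size of a node's bounding sphere): traversal stops as soon as a node's footprint falls below the pixel budget, so only a bounded number of nodes is touched regardless of the total data size.

```python
from dataclasses import dataclass, field
import numpy as np

@dataclass
class Node:
    center: np.ndarray                 # world-space center of the point cluster
    radius: float                      # bounding-sphere radius
    points: np.ndarray                 # representative subsample stored at this level
    children: list = field(default_factory=list)

def projected_size_px(node, cam_pos, focal_px):
    """Approximate on-screen diameter of the node's bounding sphere, in pixels."""
    dist = max(np.linalg.norm(node.center - cam_pos), 1e-6)
    return 2.0 * node.radius * focal_px / dist

def select_lod(node, cam_pos, focal_px, budget_px=1.0, out=None):
    """Collect the coarsest nodes whose screen-space footprint is within budget."""
    if out is None:
        out = []
    if not node.children or projected_size_px(node, cam_pos, focal_px) <= budget_px:
        out.append(node)                       # coarse representative suffices here
    else:
        for child in node.children:            # otherwise refine this subtree
            select_lod(child, cam_pos, focal_px, budget_px, out)
    return out
```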
We have recently developed an algorithm for vector field visualization with oriented streamlines, which depicts the flow directions everywhere in a dense vector field together with the sense of the local orientations. The algorithm has useful applications in the visualization of the director field in nematic liquid crystals. Here we propose an improvement to the algorithm that enhances the visualization of the local magnitude of the field. This new variant is compared with the same procedure applied to Line Integral Convolution (LIC) visualization.
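For reference, the comparison baseline is standard Line Integral Convolution; a minimal, unoptimized sketch is shown below (the kernel length, nearest-neighbour sampling, and Euler stepper are simplifications chosen here, not details from the paper). White noise is averaged along short streamlines traced forward and backward from each pixel, so intensities become correlated along the flow.

```python
import numpy as np

def lic(vx, vy, noise, length=15, step=0.5):
    """Naive Line Integral Convolution of a noise texture over a 2D field (vx, vy)."""
    h, w = noise.shape
    mag = np.hypot(vx, vy) + 1e-12
    ux, uy = vx / mag, vy / mag                    # unit direction field
    out = np.zeros_like(noise, dtype=float)
    for i in range(h):
        for j in range(w):
            acc, cnt = 0.0, 0
            for sign in (1.0, -1.0):               # trace forward and backward
                x, y = float(j), float(i)
                for _ in range(length):
                    xi, yi = int(round(x)), int(round(y))
                    if not (0 <= xi < w and 0 <= yi < h):
                        break
                    acc += noise[yi, xi]
                    cnt += 1
                    x += sign * step * ux[yi, xi]  # Euler step along the streamline
                    y += sign * step * uy[yi, xi]
            out[i, j] = acc / max(cnt, 1)
    return out
```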
We present a system to convert any set of images (e.g., a video clip or a photo album) into a storyboard. We aim to create multiple pleasing graphic representations of the content at interactive rates, so the user can explore and find the storyboard (images, layout, and stylization) that best suits their needs and taste. The main challenges of this work are selecting the content images, placing them into panels, and applying a stylization. For the latter, we propose an interactive design tool to create new stylizations using a wide range of filter blocks. This approach unleashes creativity by allowing the user to tune, modify, and intuitively design new sequences of filters. In parallel to this manual design, we propose a novel procedural approach that automatically assembles sequences of filters for innovative results. We aim to keep the algorithmic complexity as low as possible so that it can run interactively on a mobile device. Our results include examples of styles designed using both our interactive and procedural tools, as well as their final composition into interesting and appealing storyboards.
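The filter-block idea can be pictured as composing small image operations into a chain, either curated by hand or assembled procedurally; the toy filters and the random assembly policy below are illustrative assumptions, not the actual blocks shipped with the system.

```python
import random
import numpy as np

# Toy filter blocks operating on float images in [0, 1].
def contrast(img, k=1.3):      return np.clip((img - 0.5) * k + 0.5, 0.0, 1.0)
def brightness(img, b=0.1):    return np.clip(img + b, 0.0, 1.0)
def posterize(img, levels=4):  return np.round(img * (levels - 1)) / (levels - 1)
def invert(img):               return 1.0 - img

FILTER_BLOCKS = [contrast, brightness, posterize, invert]

def apply_chain(img, chain):
    """Run an image through a sequence of filter blocks (the stylization)."""
    for block in chain:
        img = block(img)
    return img

def procedural_chain(max_len=4, seed=0):
    """Assemble a random filter sequence, mimicking the procedural design mode."""
    rng = random.Random(seed)
    return [rng.choice(FILTER_BLOCKS) for _ in range(rng.randint(1, max_len))]

# Example: stylize a synthetic gradient panel with a procedurally built chain.
panel = np.tile(np.linspace(0.0, 1.0, 64), (64, 1))
styled = apply_chain(panel, procedural_chain())
```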
We present a novel privacy-preservation strategy for decentralized visualization. The key idea is to mirror the workflow of the federated learning framework and reformulate the visualization process within a federated infrastructure. The federation of visualization is achieved through a shared global module that composes the encrypted externalizations of the transformed visual features of data pieces held in local modules. We design two implementations of federated visualization: a prediction-based scheme and a query-based scheme. We demonstrate the effectiveness of our approach with a set of visual forms and verify its robustness with evaluations. We report the value of federated visualization in real scenarios with an expert review.
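A loosely inspired sketch of the composition step, with all details assumed rather than taken from the paper: each local module turns its data piece into a shareable visual feature (here, a histogram) and releases only an additively masked version; the global module composes the masked features, and the masks cancel in the sum, so no individual feature is exposed in clear.

```python
import numpy as np

rng = np.random.default_rng(0)

def local_feature(data, bins=16, lo=0.0, hi=1.0):
    """Transform a local data piece into a visual feature (a fixed-range histogram)."""
    hist, _ = np.histogram(data, bins=bins, range=(lo, hi))
    return hist.astype(float)

def mask_features(features):
    """Additively mask each client's feature so that only the sum is recoverable."""
    n = len(features)
    masks = [rng.normal(scale=100.0, size=features[0].shape) for _ in range(n)]
    total = sum(masks)
    return [f + m - total / n for f, m in zip(features, masks)]

# Three local modules, one global composition of their masked features.
pieces = [rng.random(200), rng.random(150), rng.random(300)]
masked = mask_features([local_feature(p) for p in pieces])
global_histogram = sum(masked)          # equals the sum of the unmasked histograms
```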
In this paper, we propose a perceptually guided visualization sharpening technique. We analyze the spectral behavior of an established comprehensive perceptual model to arrive at an approximate model based on an adapted weighting of the bandpass images of a Gaussian pyramid. The main benefit of this approximate model is its controllability and predictability for sharpening color-mapped visualizations. Our method can be integrated into any visualization tool, since it relies on generic image-based post-processing, and it is intuitive and easy to use, as viewing distance is the only parameter. Using highly diverse datasets, we show the usefulness of our method across a wide range of typical visualizations.
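A schematic sketch of the pyramid mechanism, assuming a grayscale image and placeholder choices for the blur kernel, the number of levels, and how viewing distance maps to per-band weights (the calibrated perceptual weighting is the paper's contribution and is not reproduced here): the image is split into bandpass layers of a Gaussian pyramid, each band is reweighted, and the result is reconstructed by summation.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def bandpass_decompose(img, levels=4, sigma=1.0):
    """Split a 2D image into same-size bandpass layers plus a lowpass residual."""
    bands, current = [], np.asarray(img, dtype=float)
    for _ in range(levels):
        low = gaussian_filter(current, sigma)
        bands.append(current - low)            # detail removed by this blur level
        current, sigma = low, sigma * 2.0
    return bands, current

def sharpen(img, viewing_distance_m, levels=4):
    """Reconstruct with boosted bands; the distance-to-weight map is a placeholder."""
    bands, residual = bandpass_decompose(img, levels)
    weights = [1.0 + 0.3 * viewing_distance_m / (k + 1) for k in range(levels)]
    return residual + sum(w * b for w, b in zip(weights, bands))
```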