Fractal image generation algorithms exhibit extreme parallelizability. Using general-purpose graphics processing unit (GPU) programming to implement escape-time algorithms for Julia sets of functions, parallel methods generate visually attractive fractal images much faster than traditional methods. The vastly improved speeds achieved with this method of computation allow real-time generation and display of images. A comparison is made between sequential and parallel implementations of the algorithm. An application created by the authors demonstrates how the increased speed can be used to create dynamic imaging of fractals, in which the user may explore paths of parameter values corresponding to a given function's Mandelbrot set. Examples are given of artistic and mathematical insights gained by experiencing fractals interactively and from the ability to sample the parameter space quickly and comprehensively.
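The following is a minimal sequential sketch of the escape-time iteration described above, assuming the quadratic map z -> z^2 + c; the viewing window, resolution, and iteration limit are illustrative choices, and a GPU implementation would run the per-pixel loop in parallel rather than sequentially.

```python
# Minimal sequential sketch of the escape-time algorithm for a Julia set of
# the map z -> z**2 + c.  A GPU version would assign one thread per pixel.
import numpy as np

def julia_escape_time(c, width=400, height=300, max_iter=256, bound=2.0):
    # Sample the complex plane over a fixed viewing window.
    xs = np.linspace(-2.0, 2.0, width)
    ys = np.linspace(-1.5, 1.5, height)
    counts = np.zeros((height, width), dtype=np.int32)
    for j, y in enumerate(ys):
        for i, x in enumerate(xs):
            z = complex(x, y)
            n = 0
            while abs(z) <= bound and n < max_iter:
                z = z * z + c        # escape-time iteration of the chosen map
                n += 1
            counts[j, i] = n         # iteration count determines pixel colour
    return counts

image = julia_escape_time(c=-0.8 + 0.156j)
```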
An algorithm to generate the locus of a circle using the intersection points of straight lines is proposed. The pixels on the circle are plotted independently of one another, and the operations involved in finding the locus of the circle from the intersection of straight lines are parallelizable. Integer-only arithmetic and algorithmic optimizations are used for speedup. The proposed algorithm makes use of an envelope to form a parabolic arc, which is subsequently transformed into a circle. The use of parabolic arcs for the transformation results in higher pixel errors as the radius of the circle to be drawn increases. In its current state, the algorithm presented may be suitable only for generating circles for string art.
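Below is a hedged sketch of the string-art construction alluded to above: chords joining (t, 0) to (0, 1 - t) have a parabolic arc as their envelope, and intersecting consecutive chords approximates points on that arc. The subsequent transformation of the arc into a circle, and the integer-only optimizations, are not reproduced here; all names are illustrative.

```python
# Approximate a parabolic arc as the envelope of straight chords by
# intersecting consecutive chords (the classic string-art construction).
import numpy as np

def chord(t):
    # Coefficients (A, B, C) of the line A*x + B*y = C through (t, 0) and (0, 1 - t).
    return (1.0 - t, t, t * (1.0 - t))

def envelope_points(n=64):
    pts = []
    ts = np.linspace(0.05, 0.95, n)
    for t0, t1 in zip(ts[:-1], ts[1:]):
        a0, b0, c0 = chord(t0)
        a1, b1, c1 = chord(t1)
        # The intersection of two consecutive chords lies close to the envelope.
        x, y = np.linalg.solve([[a0, b0], [a1, b1]], [c0, c1])
        pts.append((x, y))
    return np.array(pts)

arc = envelope_points()   # points tracing a parabolic arc
```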
Graphics Processing Units (GPUs) are becoming increasingly important as target architectures in scientific High Performance Computing (HPC). NVIDIA established CUDA as a parallel computing architecture for controlling and exploiting the compute power of GPUs. CUDA provides sufficient support for C++ language elements to enable the Expression Template (ET) technique in the device memory domain. QDP++ is a C++ vector class library suited to quantum field theory that provides vector data types and expressions and forms the basis of the lattice QCD software suite Chroma. In this work, GPU acceleration of QDP++ expression evaluation was successfully implemented by leveraging the ET technique and Just-In-Time (JIT) compilation. The Portable Expression Template Engine (PETE) and the C API for CUDA kernel arguments were used to build the bridge between the host and device memory domains. This makes it possible to accelerate on the GPU those Chroma routines that are typically not subject to special optimisation. As an application example, a smearing routine was offloaded to the GPU, and a significant speed-up compared to normal CPU execution was measured.
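As a rough illustration of the expression-template idea behind this work, the Python analogue below uses operator overloading to build an expression tree instead of computing intermediate results, with a single evaluate() pass that fuses the whole expression; in the approach described above the fused expression is instead JIT-compiled into a CUDA kernel. The class and function names are illustrative, not the QDP++/PETE API.

```python
# Deferred evaluation via operator overloading: expressions build a tree,
# and evaluate() walks the tree once instead of materialising temporaries.
import numpy as np

class Expr:
    def __add__(self, other):
        return BinOp(np.add, self, other)
    def __mul__(self, other):
        return BinOp(np.multiply, self, other)

class Vec(Expr):
    def __init__(self, data):
        self.data = np.asarray(data, dtype=np.float64)
    def evaluate(self):
        return self.data

class BinOp(Expr):
    def __init__(self, op, lhs, rhs):
        self.op, self.lhs, self.rhs = op, lhs, rhs
    def evaluate(self):
        # A real engine would emit one fused (JIT-compiled) kernel; here we recurse.
        return self.op(self.lhs.evaluate(), self.rhs.evaluate())

a, b, c = Vec([1, 2, 3]), Vec([4, 5, 6]), Vec([7, 8, 9])
result = (a + b * c).evaluate()   # evaluated only when requested
```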
In order to generate novel 3D shapes with machine learning, one must allow for interpolation. The typical approach for incorporating this creative process is to interpolate in a learned latent space so as to avoid the problem of generating unrealistic instances by exploiting the model's learned structure. The interpolation process is expected to form a semantically smooth morphing. While this approach is sound for synthesizing realistic media such as lifelike portraits or new designs for everyday objects, it subjectively fails to directly model the unexpected, unrealistic, or creative. In this work, we present a method for learning how to interpolate point clouds. Because prior knowledge about real-world objects is encoded, the intermediate forms are both realistic and unlike any existing forms. We show not only how this method can be used to generate creative point clouds, but how the method can also be leveraged to generate 3D models suitable for sculpture.
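The sketch below shows only the mechanics of latent-space interpolation between two point clouds. The encode/decode pair here just flattens and reshapes; in the approach described above they would be replaced by a trained point-cloud autoencoder, so these functions, shapes, and sizes are assumptions made purely for illustration.

```python
# Linear interpolation in a (toy) latent space between two point clouds.
import numpy as np

def encode(cloud):
    # Placeholder for a learned encoder: point cloud (N, 3) -> latent vector.
    return cloud.reshape(-1)

def decode(latent, n_points):
    # Placeholder for a learned decoder: latent vector -> point cloud (N, 3).
    return latent.reshape(n_points, 3)

def interpolate_clouds(cloud_a, cloud_b, steps=5):
    za, zb = encode(cloud_a), encode(cloud_b)
    frames = []
    for t in np.linspace(0.0, 1.0, steps):
        z = (1.0 - t) * za + t * zb           # linear path in latent space
        frames.append(decode(z, cloud_a.shape[0]))
    return frames

cloud_a = np.random.rand(1024, 3)             # stand-ins for real shapes
cloud_b = np.random.rand(1024, 3)
morph = interpolate_clouds(cloud_a, cloud_b)
```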
In this paper, we extend our earlier polycube-based all-hexahedral mesh generation method to hexahedral-dominant mesh generation, and present the HexDom software package. Given the boundary representation of a solid model, HexDom creates a hex-dominant mesh by using a semi-automated polycube-based mesh generation method. The resulting hexahedral-dominant mesh includes hexahedra, tetrahedra, and triangular prisms. By adding non-hexahedral elements, we are able to generate better-quality hexahedral elements than in all-hexahedral meshes. We explain the underlying algorithms in four modules (segmentation, polycube construction, hex-dominant mesh generation, and quality improvement), and use a rockerarm model to explain how to run the software. We also apply our software to a number of other complex models to test its robustness. The software package and all tested models are available on GitHub (https://github.com/CMU-CBML/HexDom).
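The following is a hedged sketch of how the four stages named above could be chained; the function names are hypothetical stand-ins, not the HexDom API, and each stage is stubbed so that only the ordering of the workflow is illustrated.

```python
# Hypothetical orchestration of the four-stage hex-dominant meshing workflow.
def segment_surface(brep):
    return {"patches": brep}           # placeholder: surface segmentation

def build_polycube(segmentation):
    return {"polycube": segmentation}  # placeholder: polycube construction

def generate_hex_dominant_mesh(polycube):
    return {"mesh": polycube}          # placeholder: hexes plus tets/prisms

def improve_quality(mesh):
    return mesh                        # placeholder: quality improvement pass

def hexdom_pipeline(brep):
    return improve_quality(
        generate_hex_dominant_mesh(build_polycube(segment_surface(brep))))

mesh = hexdom_pipeline("rockerarm boundary representation")
```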
Generative Adversarial Networks (GANs) have received a great deal of attention due in part to recent success in generating original, high-quality samples from visual domains. However, most current methods only allow users to guide this image generation process through limited interactions. In this work we develop a novel GAN framework that allows humans to be in the loop of the image generation process. Our technique iteratively accepts relative constraints of the form "Generate an image more like image A than image B." After each constraint is given, the user is presented with new outputs from the GAN, informing the next round of feedback. This feedback is used to constrain the output of the GAN with respect to an underlying semantic space that can be designed to model a variety of different notions of similarity (e.g. classes, attributes, object relationships, color, etc.). In our experiments, we show that our GAN framework is able to generate images that are of comparable quality to equivalent unsupervised GANs while satisfying a large number of the constraints provided by users, effectively turning an unsupervised GAN into one that gives users interactive control over image generation without sacrificing image quality.
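As a rough illustration of how a single relative constraint could be scored, the sketch below penalises a generated sample that is not closer to image A than to image B in some semantic embedding space. The embed() function here is a trivial stand-in (mean colour) for whichever semantic space is chosen; the function names, margin, and distance are assumptions for illustration, not the paper's implementation.

```python
# Hinge-style penalty for the relative constraint "more like A than B".
import numpy as np

def embed(image):
    # Placeholder semantic embedding: the mean colour of an (H, W, C) image.
    return image.reshape(-1, image.shape[-1]).mean(axis=0)

def relative_constraint_penalty(generated, image_a, image_b, margin=0.1):
    g, a, b = embed(generated), embed(image_a), embed(image_b)
    d_a = np.linalg.norm(g - a)      # distance to the preferred image A
    d_b = np.linalg.norm(g - b)      # distance to the contrasted image B
    # Zero when the constraint "closer to A than to B" is satisfied by the margin.
    return max(0.0, d_a - d_b + margin)

gen = np.random.rand(64, 64, 3)      # stand-ins for generated / reference images
img_a = np.random.rand(64, 64, 3)
img_b = np.random.rand(64, 64, 3)
penalty = relative_constraint_penalty(gen, img_a, img_b)
```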