This paper presents an image-quality-guided strategy for optimizing bicubic interpolation (BIC) and interpolated scan conversion algorithms. The strategy uses feature selection through line-chart data visualization and the first index of the minimum absolute difference between computed and ideal quality scores to determine an image-quality-guided coefficient k, which maps all sixteen BIC coefficients to the new coefficients on which the optimized bicubic (OBIC) interpolation algorithm is based. Perceptual evaluations of cropped sector images from a MATLAB implementation of the interpolated scan conversion algorithms are presented. An evaluation based on image quality assessment (IQA) metrics is also presented; it shows that the overall performance of the OBIC algorithm is 92.22% when compared with BIC alone, but drops to 57.22% when compared against all the other methods considered.
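A minimal sketch of the coefficient-selection step described above (not from the paper; the quality-score function, the candidate list, and the simple per-coefficient scaling are assumptions made for illustration only):

    #include <math.h>
    #include <stddef.h>

    /* Pick k as the first candidate whose computed quality score is closest to
     * the ideal score (first index of the minimum absolute difference). */
    double select_k(const double *candidates, size_t n,
                    double (*score)(double), double ideal_score)
    {
        size_t best = 0;
        double best_diff = fabs(score(candidates[0]) - ideal_score);
        for (size_t i = 1; i < n; ++i) {
            double d = fabs(score(candidates[i]) - ideal_score);
            if (d < best_diff) {   /* strict '<' keeps the first index on ties */
                best_diff = d;
                best = i;
            }
        }
        return candidates[best];
    }

    /* Map the sixteen BIC kernel coefficients to OBIC coefficients using k.
     * A plain multiplicative mapping is shown only as a placeholder; the
     * actual mapping is the one defined in the paper. */
    void to_obic(const double bic[16], double k, double obic[16])
    {
        for (int i = 0; i < 16; ++i)
            obic[i] = k * bic[i];
    }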
Compressed bitmap indexes are used in systems such as Git or Oracle to accelerate queries. They represent sets and often support operations such as unions, intersections, differences, and symmetric differences. Several important systems such as Elasticsearch, Apache Spark, Netflix's Atlas, LinkedIn's Pinot, Metamarkets' Druid, Pilosa, Apache Hive, Apache Tez, Microsoft Visual Studio Team Services and Apache Kylin rely on a specific type of compressed bitmap index called Roaring. We present an optimized software library written in C implementing Roaring bitmaps: CRoaring. It benefits from several algorithms designed for the single-instruction-multiple-data (SIMD) instructions available on commodity processors. In particular, we present vectorized algorithms to compute the intersection, union, difference and symmetric difference between arrays. We benchmark the library against a wide range of competitive alternatives, identifying weaknesses and strengths in our software. Our work is available under a liberal open-source license.
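For orientation, a minimal usage sketch of the CRoaring C API covering the four set operations named above (the header path assumes an installed or amalgamated build; the SIMD kernels are used internally by these calls):

    #include <stdio.h>
    #include <stdint.h>
    #include "roaring/roaring.h"

    int main(void) {
        roaring_bitmap_t *a = roaring_bitmap_create();
        roaring_bitmap_t *b = roaring_bitmap_create();
        for (uint32_t i = 0; i < 1000000; i += 2) roaring_bitmap_add(a, i);
        for (uint32_t i = 0; i < 1000000; i += 3) roaring_bitmap_add(b, i);

        roaring_bitmap_t *inter = roaring_bitmap_and(a, b);    /* intersection */
        roaring_bitmap_t *uni   = roaring_bitmap_or(a, b);     /* union */
        roaring_bitmap_t *diff  = roaring_bitmap_andnot(a, b); /* difference */
        roaring_bitmap_t *sym   = roaring_bitmap_xor(a, b);    /* symmetric difference */

        printf("|a AND b| = %llu\n",
               (unsigned long long)roaring_bitmap_get_cardinality(inter));

        roaring_bitmap_free(a);     roaring_bitmap_free(b);
        roaring_bitmap_free(inter); roaring_bitmap_free(uni);
        roaring_bitmap_free(diff);  roaring_bitmap_free(sym);
        return 0;
    }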
We present a method for the efficient processing of contact and collision in volumetric elastic models simulated using the Projective Dynamics paradigm. Our approach enables interactive simulation of tetrahedral meshes with more than half a million elements, provided that the model satisfies two fundamental properties: the region of the model's surface that is susceptible to collision events needs to be known in advance, and the simulation degrees of freedom associated with that surface region should be limited to a small fraction (e.g. 5%) of the total simulation nodes. Despite this conscious delineation of scope, our hypotheses hold true for common animation subjects, such as simulated models of the human face and parts of the body. In such scenarios, a partial Cholesky factorization can abstract away the behavior of the collision-safe subset of the face into the Schur complement matrix with respect to the collision-prone region. We demonstrate how fast and accurate updates of penalty-based collision terms can be incorporated into this representation, and solved with high efficiency on the GPU. We also demonstrate the opportunity to iterate a partial update of the element rotations, akin to a selective application of the local step, specifically on the smaller collision-prone region without explicitly paying the cost associated with the rest of the simulation mesh. We demonstrate efficient and robust interactive simulation in detailed models from animation and medical applications.
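As a standard illustration of the block elimination alluded to above (generic notation, not the paper's): if the Projective Dynamics global-step matrix is partitioned into collision-safe interior degrees of freedom (subscript $i$) and collision-prone surface degrees of freedom (subscript $c$),
\[
\begin{bmatrix} A_{ii} & A_{ic} \\ A_{ci} & A_{cc} \end{bmatrix}
\begin{bmatrix} x_i \\ x_c \end{bmatrix}
=
\begin{bmatrix} b_i \\ b_c \end{bmatrix},
\]
then a partial Cholesky factorization of $A_{ii}$ reduces the solve to the small Schur complement system
\[
S\,x_c = b_c - A_{ci}A_{ii}^{-1}b_i, \qquad S = A_{cc} - A_{ci}A_{ii}^{-1}A_{ic},
\]
so penalty-based collision terms only modify $S$ and $b_c$, leaving the expensive factorization of the large interior block untouched.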
The main output of the FORCE11 Software Citation working group (https://www.force11.org/group/software-citation-working-group) was a paper on software citation principles (https://doi.org/10.7717/peerj-cs.86) published in September 2016. This paper laid out a set of six high-level principles for software citation (importance, credit and attribution, unique identification, persistence, accessibility, and specificity) and discussed how they could be used to implement software citation in the scholarly community. In a series of talks and other activities, we have promoted software citation using these increasingly accepted principles. At the time the initial paper was published, we also provided guidance and examples on how to make software citable, though we now realize there are unresolved problems with that guidance. The purpose of this document is to provide an explanation of current issues impacting scholarly attribution of research software, organize updated implementation guidance, and identify where best practices and solutions are still needed.
The creation of high-fidelity computer-generated (CG) characters used in film and gaming requires intensive manual labor and a comprehensive set of facial assets to be captured with complex hardware, resulting in high cost and long production cycles. In order to simplify and accelerate this digitization process, we propose a framework for the automatic generation of high-quality dynamic facial assets, including rigs which can be readily deployed for artists to polish. Our framework takes a single scan as input to generate a set of personalized blendshapes, dynamic and physically-based textures, as well as secondary facial components (e.g., teeth and eyeballs). Built upon a facial database consisting of pore-level details, with over 4,000 scans of varying expressions and identities, we adopt a self-supervised neural network to learn personalized blendshapes from a set of template expressions. We also model the joint distribution between identities and expressions, enabling the inference of the full set of personalized blendshapes with dynamic appearances from a single neutral input scan. Our generated personalized face rig assets are seamlessly compatible with cutting-edge industry pipelines for facial animation and rendering. We demonstrate that our framework is robust and effective by inferring on a wide range of novel subjects, and illustrate compelling rendering results while animating faces with generated customized physically-based dynamic textures.
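For context, a generic delta blendshape rig (standard formulation, not necessarily the paper's exact parameterization) synthesizes a face mesh from a neutral shape $b_0$ and $n$ expression shapes $b_1,\dots,b_n$ as
\[
f(w) = b_0 + \sum_{i=1}^{n} w_i\,(b_i - b_0), \qquad w_i \in [0,1],
\]
so the personalized blendshapes inferred by the framework correspond to the per-subject shapes $b_i$, which are then driven by ordinary animation weights $w$.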
This work compares the performance of software implementations of different Gabidulin decoders. The parameter sets used within the comparison stem from their applications in recently proposed cryptographic schemes. The complexity analysis of the decoders is recalled, counting the occurrences of each operation within the respective decoders. It is shown that knowing the number of operations may be misleading when comparing different algorithms, since the run-time of the implementation depends on the instruction set of the device on which the algorithm is executed.
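A minimal instrumentation sketch of the operation counting mentioned above (hypothetical helpers, not taken from the compared implementations; gf_mul_raw stands in for whatever GF(2^m) multiplication backend is used):

    #include <stdint.h>

    /* Counters for the two field operations tallied in the complexity analysis. */
    typedef struct { uint64_t adds, muls; } op_counter_t;
    static op_counter_t counter;

    /* Hypothetical backend multiplication; its run-time differs greatly between
     * devices with and without carry-less multiply instructions, which is why
     * equal operation counts need not imply equal run-times. */
    extern uint64_t gf_mul_raw(uint64_t a, uint64_t b);

    static inline uint64_t gf_add(uint64_t a, uint64_t b) {
        counter.adds++;
        return a ^ b;              /* addition in GF(2^m) is a bitwise XOR */
    }

    static inline uint64_t gf_mul(uint64_t a, uint64_t b) {
        counter.muls++;
        return gf_mul_raw(a, b);
    }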