
Deep Surface Light Fields

Posted by Anpei Chen
Publication date: 2018
Research field: Informatics Engineering
Paper language: English





A surface light field represents the radiance of rays originating from any point on the surface in any direction. Traditional approaches require ultra-dense sampling to ensure the rendering quality. In this paper, we present a novel neural network-based technique called deep surface light field or DSLF to use only moderate sampling for high fidelity rendering. DSLF automatically fills in the missing data by leveraging different sampling patterns across the vertices and at the same time eliminates redundancies due to the network's prediction capability. For real data, we address the image registration problem as well as conduct texture-aware remeshing for aligning texture edges with vertices to avoid blurring. Comprehensive experiments show that DSLF can further achieve a high data compression ratio while facilitating real-time rendering on the GPU.
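
To make the input/output contract concrete, the following is a minimal sketch in PyTorch of a surface light field regressor. This is our own hypothetical architecture, not the paper's DSLF network: a small MLP maps a 3D surface point and a unit viewing direction to RGB radiance, and fitting it to moderately sampled observations lets it interpolate view directions that were never captured.

```python
# Hypothetical sketch, not the paper's exact DSLF architecture: a small MLP that
# regresses RGB radiance from a 3D surface point and a unit viewing direction.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SurfaceLightFieldMLP(nn.Module):
    def __init__(self, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(6, hidden), nn.ReLU(),      # input: point (3) + direction (3)
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 3), nn.Sigmoid(),   # output: RGB radiance in [0, 1]
        )

    def forward(self, points, directions):
        return self.net(torch.cat([points, directions], dim=-1))

# Fit the network to moderately sampled (point, direction, color) observations;
# at render time it fills in directions that were never observed.
model = SurfaceLightFieldMLP()
optim = torch.optim.Adam(model.parameters(), lr=1e-4)
points = torch.rand(1024, 3)                        # placeholder surface samples
dirs = F.normalize(torch.randn(1024, 3), dim=-1)    # placeholder view directions
colors = torch.rand(1024, 3)                        # placeholder observed radiance
for _ in range(100):
    loss = F.mse_loss(model(points, dirs), colors)
    optim.zero_grad()
    loss.backward()
    optim.step()
```

The actual paper additionally handles per-vertex sampling patterns, image registration, and texture-aware remeshing, which this toy omits.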




Read also

Implicit representations of 3D objects have recently achieved impressive results on learning-based 3D reconstruction tasks. While existing works use simple texture models to represent object appearance, photo-realistic image synthesis requires reasoning about the complex interplay of light, geometry and surface properties. In this work, we propose a novel implicit representation for capturing the visual appearance of an object in terms of its surface light field. In contrast to existing representations, our implicit model represents surface light fields in a continuous fashion and independent of the geometry. Moreover, we condition the surface light field with respect to the location and color of a small light source. Compared to traditional surface light field models, this allows us to manipulate the light source and relight the object using environment maps. We further demonstrate the capabilities of our model to predict the visual appearance of an unseen object from a single real RGB image and corresponding 3D shape information. As evidenced by our experiments, our model is able to infer rich visual appearance including shadows and specular reflections. Finally, we show that the proposed representation can be embedded into a variational auto-encoder for generating novel appearances that conform to the specified illumination conditions.
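
The relighting capability described above essentially amounts to widening the network input with the light parameters. Below is a minimal, hypothetical PyTorch sketch (an assumed interface, not the authors' model): the same point-and-direction query is concatenated with the light's position and color, so changing the light means re-evaluating the same network with different inputs.

```python
# Minimal sketch (assumed interface, not the authors' code): conditioning the
# implicit surface light field on a point light simply widens the network input.
import torch
import torch.nn as nn

class ConditionedSurfaceLightField(nn.Module):
    def __init__(self, hidden=256):
        super().__init__()
        # point (3) + view direction (3) + light position (3) + light color (3) = 12
        self.net = nn.Sequential(
            nn.Linear(12, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 3), nn.Sigmoid(),
        )

    def forward(self, x, d, light_pos, light_rgb):
        return self.net(torch.cat([x, d, light_pos, light_rgb], dim=-1))

# Relighting: same surface points and view directions, different light inputs.
model = ConditionedSurfaceLightField()
x, d = torch.rand(8, 3), torch.rand(8, 3)
warm = model(x, d, torch.full((8, 3), 1.0), torch.tensor([[1.0, 0.8, 0.6]]).expand(8, 3))
cool = model(x, d, torch.full((8, 3), -1.0), torch.tensor([[0.6, 0.8, 1.0]]).expand(8, 3))
```
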
This study starts from the counter-intuitive question of how we can render a conventional stiff, non-stretchable and even brittle material conformable so that it can fully wrap around a curved surface, such as a sphere, without failure. Here, we answer this conundrum by extending geometrical design in computational kirigami (paper cutting and folding) to paper wrapping. Our computational paper-wrapping approach provides more robust and reliable fabrication of conformal devices than paper-folding approaches. This in turn leads to a significant increase in the applicability of computational kirigami to real-world fabrication. This new computer-aided design transforms 2D-based conventional materials, such as Si and copper, into a variety of targeted conformal structures that can fully wrap the desired 3D structure without plastic deformation or fracture. We further demonstrate that our novel approach enables a pluripotent design platform to transform conventional non-stretchable 2D-based devices, such as electroluminescent lighting and a paper battery, into wearable and conformable 3D curved devices.
It is known that the region $V(s)$ of a simple polygon $P$, directly visible (illuminable) from an internal point $s$, is simply connected. Aronov et al. \cite{addpp981} established that the region $V_1(s)$ of a simple polygon visible from an internal point $s$ due to at most one diffuse reflection on the boundary of the polygon $P$ is also simply connected. In this paper we establish that the region $V_2(s)$, visible from $s$ due to at most two diffuse reflections, may be multiply connected; we demonstrate the construction of an $n$-sided simple polygon with a point $s$ inside it such that the region of $P$ visible from $s$ after at most two diffuse reflections is multiply connected.
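
For readers unfamiliar with the notation, the visibility regions can be written recursively. The following LaTeX formalization is our own reading of the standard definitions, not copied from the paper: a point is visible after at most $k$ reflections if it is directly visible from some boundary point that is itself visible after at most $k-1$ reflections.

```latex
% V_0(s): points of the simple polygon P directly visible from s.
V_0(s) = V(s) = \{\, p \in P : \overline{sp} \subseteq P \,\},
\qquad
V_k(s) = V_{k-1}(s) \;\cup \bigcup_{q \,\in\, V_{k-1}(s) \cap \partial P} V(q),
\quad k \ge 1.
% The result above: V_1(s) is always simply connected, but V_2(s) need not be.
```
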
Delaunay flip is an elegant, simple tool to convert a triangulation of a point set to its Delaunay triangulation. The technique has been researched extensively for full dimensional triangulations of point sets. However, an important case of triangulations which are not full dimensional is surface triangulations in three dimensions. In this paper we address the question of converting a surface triangulation to a subcomplex of the Delaunay triangulation with edge flips. We show that the surface triangulations which closely approximate a smooth surface with uniform density can be transformed to a Delaunay triangulation with a simple edge flip algorithm. The condition on uniformity becomes less stringent with increasing density of the triangulation. If the condition is dropped completely, the flip algorithm still terminates although the output surface triangulation becomes almost Delaunay instead of exactly Delaunay.
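
The flip loop the abstract refers to can be sketched generically. Below is a hedged Python sketch that uses an opposite-angle criterion computed in 3D as the local Delaunay test; the paper's exact flip condition for surface triangulations (and its termination argument) may differ, and triangle orientation bookkeeping is omitted for brevity.

```python
# Hedged sketch of an edge-flip loop on a triangle mesh. Assumed local test:
# flip an interior edge when the two angles opposite it sum to more than pi.
import math

def angle_at(p, a, b):
    """Angle at vertex p in triangle (p, a, b), computed in 3D."""
    u = [a[i] - p[i] for i in range(3)]
    v = [b[i] - p[i] for i in range(3)]
    dot = sum(ui * vi for ui, vi in zip(u, v))
    nu = math.sqrt(sum(ui * ui for ui in u))
    nv = math.sqrt(sum(vi * vi for vi in v))
    return math.acos(max(-1.0, min(1.0, dot / (nu * nv))))

def flip_to_delaunay(verts, tris, max_passes=1000):
    """Each pass performs at most one flip (kept simple); stops when no edge fails the test."""
    for _ in range(max_passes):
        flipped = False
        edge_to_tris = {}
        for ti, (a, b, c) in enumerate(tris):
            for e in ((a, b), (b, c), (c, a)):
                edge_to_tris.setdefault(frozenset(e), []).append(ti)
        for edge, owners in edge_to_tris.items():
            if len(owners) != 2:
                continue  # boundary or non-manifold edge: never flipped
            i, j = tuple(edge)
            k = next(v for v in tris[owners[0]] if v not in edge)  # apex of first triangle
            l = next(v for v in tris[owners[1]] if v not in edge)  # apex of second triangle
            if angle_at(verts[k], verts[i], verts[j]) + \
               angle_at(verts[l], verts[i], verts[j]) > math.pi + 1e-9:
                tris[owners[0]] = (k, i, l)   # replace diagonal (i, j) by (k, l)
                tris[owners[1]] = (k, l, j)
                flipped = True
                break  # the edge map is stale after a flip: rebuild it next pass
        if not flipped:
            return tris
    return tris
```
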
We present a novel approach that enables photo-realistic re-animation of portrait videos using only an input video. In contrast to existing approaches that are restricted to manipulations of facial expressions only, we are the first to transfer the full 3D head position, head rotation, face expression, eye gaze, and eye blinking from a source actor to a portrait video of a target actor. The core of our approach is a generative neural network with a novel space-time architecture. The network takes as input synthetic renderings of a parametric face model, based on which it predicts photo-realistic video frames for a given target actor. The realism in this rendering-to-video transfer is achieved by careful adversarial training, and as a result, we can create modified target videos that mimic the behavior of the synthetically-created input. In order to enable source-to-target video re-animation, we render a synthetic target video with the reconstructed head animation parameters from a source video, and feed it into the trained network -- thus taking full control of the target. With the ability to freely recombine source and target parameters, we are able to demonstrate a large variety of video rewrite applications without explicitly modeling hair, body or background. For instance, we can reenact the full head using interactive user-controlled editing, and realize high-fidelity visual dubbing. To demonstrate the high quality of our output, we conduct an extensive series of experiments and evaluations, where for instance a user study shows that our video edits are hard to detect.
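
To make the rendering-to-video transfer concrete, here is a rough, self-contained PyTorch sketch of one conditional adversarial training step. It is a pix2pix-style toy with made-up layer sizes, not the authors' space-time architecture: a generator turns a synthetic face-model rendering into a photo-real frame, and a discriminator judges (rendering, frame) pairs.

```python
# Rough sketch of conditional adversarial training (assumed pix2pix-style setup,
# toy layer sizes; not the authors' space-time network).
import torch
import torch.nn as nn
import torch.nn.functional as F

G = nn.Sequential(nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(),
                  nn.Conv2d(64, 3, 3, padding=1), nn.Sigmoid())  # toy generator
D = nn.Sequential(nn.Conv2d(6, 64, 3, padding=1), nn.ReLU(),
                  nn.Conv2d(64, 1, 3, padding=1))                # toy patch critic
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

render = torch.rand(1, 3, 64, 64)   # synthetic rendering of the parametric face model
frame = torch.rand(1, 3, 64, 64)    # corresponding real target frame

# Discriminator step: real (render, frame) pairs vs. fake (render, G(render)) pairs.
fake = G(render)
d_loss = bce(D(torch.cat([render, frame], 1)), torch.ones(1, 1, 64, 64)) + \
         bce(D(torch.cat([render, fake.detach()], 1)), torch.zeros(1, 1, 64, 64))
opt_d.zero_grad(); d_loss.backward(); opt_d.step()

# Generator step: fool the discriminator while staying close to the target frame.
g_loss = bce(D(torch.cat([render, fake], 1)), torch.ones(1, 1, 64, 64)) + \
         F.l1_loss(fake, frame)
opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```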