
BurstLink: Techniques for Energy-Efficient Conventional and Virtual Reality Video Display

Published by: Jawad Haj-Yahya
Publication date: 2021
Research field: Informatics Engineering
Paper language: English





Conventional planar video streaming is the most popular application in mobile systems, and the rapid growth of 360° video content and virtual reality (VR) devices is accelerating the adoption of VR video streaming. Unfortunately, video streaming consumes significant system energy due to the high power consumption of the system components (e.g., DRAM, display interfaces, and display panel) involved in this process. We propose BurstLink, a novel system-level technique that improves the energy efficiency of planar and VR video streaming. BurstLink is based on two key ideas. First, BurstLink directly transfers a decoded video frame from the host system to the display panel, bypassing the host DRAM. To this end, we extend the display panel with a double remote frame buffer (DRFB), instead of the DRAM's double frame buffer, so that the system can directly update the DRFB with a new frame while updating the panel's pixels with the current frame stored in the DRFB. Second, BurstLink transfers a complete decoded frame to the display panel in a single burst, using the maximum bandwidth of modern display interfaces. Unlike conventional systems, where the frame transfer rate is limited by the pixel-update throughput of the display panel, BurstLink can always take full advantage of the high bandwidth of modern display interfaces by decoupling the frame transfer from the pixel update, as enabled by the DRFB. BurstLink's direct and burst frame transfer significantly reduces energy consumption in video display by reducing accesses to the host DRAM and increasing the system's residency at idle power states. We evaluate BurstLink using an analytical power model that we rigorously validate on a real modern mobile system. Our evaluation shows that BurstLink reduces system energy consumption for 4K planar and VR video streaming by 41% and 33%, respectively.
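To illustrate the double-buffering idea behind the DRFB, here is a minimal sketch (not the authors' implementation; all names are hypothetical) of how a panel-side double buffer lets a host-side burst frame transfer proceed independently of the panel's pixel update:

```python
import threading

class DoubleRemoteFrameBuffer:
    """Hypothetical sketch of a panel-side double frame buffer (DRFB).

    The host bursts a complete decoded frame into the back buffer at full
    display-interface bandwidth while the panel refreshes its pixels from
    the front buffer. Swapping buffer roles decouples the two rates.
    """

    def __init__(self, frame_size):
        self.buffers = [bytearray(frame_size), bytearray(frame_size)]
        self.front = 0  # index of the buffer the panel currently scans out
        self.lock = threading.Lock()

    def burst_write(self, frame_bytes):
        """Host side: burst a whole frame into the back buffer, then swap."""
        back = 1 - self.front
        self.buffers[back][:] = frame_bytes  # one burst at max link bandwidth
        with self.lock:
            self.front = back  # the new frame becomes visible atomically

    def scan_out(self):
        """Panel side: read the front buffer at the panel's own refresh rate."""
        with self.lock:
            return bytes(self.buffers[self.front])
```

Because the host never waits for the panel's scan-out in this scheme, it can drop into an idle power state as soon as the burst completes, which is where the reported energy savings come from.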




Read also

Computers continue to diversify with respect to system designs, emerging memory technologies, and application memory demands. Unfortunately, continually adapting the conventional virtual memory framework to each possible system configuration is challenging, and often results in performance loss or requires non-trivial workarounds. To address these challenges, we propose a new virtual memory framework, the Virtual Block Interface (VBI). We design VBI based on the key idea that delegating memory management duties to hardware can reduce the overheads and software complexity associated with virtual memory. VBI introduces a set of variable-sized virtual blocks (VBs) to applications. Each VB is a contiguous region of the globally-visible VBI address space, and an application can allocate each semantically meaningful unit of information (e.g., a data structure) in a separate VB. VBI decouples access protection from memory allocation and address translation. While the OS controls which programs have access to which VBs, dedicated hardware in the memory controller manages the physical memory allocation and address translation of the VBs. This approach enables several architectural optimizations to (1) efficiently and flexibly cater to different and increasingly diverse system configurations, and (2) eliminate key inefficiencies of conventional virtual memory. We demonstrate the benefits of VBI with two important use cases: (1) reducing the overheads of address translation (for both native execution and virtual machine environments), as VBI reduces the number of translation requests and associated memory accesses; and (2) two heterogeneous main memory architectures, where VBI increases the effectiveness of managing fast memory regions. For both cases, VBI significantly improves performance over conventional virtual memory.
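
A schematic sketch of the allocation model the abstract describes (hypothetical names and interface, not the VBI paper's actual API): each semantically meaningful unit gets its own contiguous, variable-sized virtual block, with OS-managed protection kept separate from hardware-managed translation.

```python
class VirtualBlockInterface:
    """Hypothetical sketch of VBI-style allocation: each data structure lives
    in its own variable-sized, contiguous virtual block (VB)."""

    def __init__(self):
        self.next_base = 0
        self.blocks = {}       # vb_id -> (base address, size)
        self.permissions = {}  # (process, vb_id) -> access rights (OS-managed)

    def alloc_vb(self, size):
        """Reserve a contiguous region of the global VBI address space.
        Physical allocation and translation would be handled by hardware
        in the memory controller, not modeled here."""
        vb_id = len(self.blocks)
        self.blocks[vb_id] = (self.next_base, size)
        self.next_base += size
        return vb_id

    def grant(self, process, vb_id, rights):
        """OS-controlled access protection, decoupled from translation."""
        self.permissions[(process, vb_id)] = rights

# Example: one VB per semantically meaningful unit (e.g., a hash table).
vbi = VirtualBlockInterface()
table_vb = vbi.alloc_vb(size=1 << 20)  # a 1 MiB block for one data structure
vbi.grant(process="app", vb_id=table_vb, rights="rw")
```
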
Traditional cinematography has relied for over a century on a well-established set of editing rules, called continuity editing, to create a sense of situational continuity. Despite massive changes in visual content across cuts, viewers in general experience no trouble perceiving the discontinuous flow of information as a coherent set of events. However, Virtual Reality (VR) movies are intrinsically different from traditional movies in that the viewer controls the camera orientation at all times. As a consequence, common editing techniques that rely on camera orientations, zooms, etc., cannot be used. In this paper we investigate key relevant questions to understand how well traditional movie editing carries over to VR. To do so, we rely on recent cognition studies and the event segmentation theory, which states that our brains segment continuous actions into a series of discrete, meaningful events. We first replicate one of these studies to assess whether the predictions of such theory can be applied to VR. We next gather gaze data from viewers watching VR videos containing different edits with varying parameters, and provide the first systematic analysis of viewers' behavior and the perception of continuity in VR. From this analysis we make a series of relevant findings; for instance, our data suggests that predictions from the cognitive event segmentation theory are useful guides for VR editing; that different types of edits are equally well understood in terms of continuity; and that spatial misalignments between regions of interest at the edit boundaries favor a more exploratory behavior even after viewers have fixated on a new region of interest. In addition, we propose a number of metrics to describe viewers' attentional behavior in VR. We believe the insights derived from our work can be useful as guidelines for VR content creation.
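
As one illustration of the kind of quantity such an analysis works with, here is a hypothetical sketch (not one of the paper's proposed metrics) of measuring the spatial misalignment between regions of interest on either side of an edit boundary:

```python
import numpy as np

def to_unit(yaw_deg, pitch_deg):
    """Convert spherical viewing angles to a unit direction vector."""
    yaw, pitch = np.radians(yaw_deg), np.radians(pitch_deg)
    return np.array([np.cos(pitch) * np.cos(yaw),
                     np.cos(pitch) * np.sin(yaw),
                     np.sin(pitch)])

def angular_misalignment(roi_before, roi_after):
    """Angle between the region of interest just before a cut and the ROI
    just after it, both as unit view vectors. Larger misalignment is what
    the study associates with more exploratory post-cut behavior."""
    cos = np.clip(np.dot(roi_before, roi_after), -1.0, 1.0)
    return np.degrees(np.arccos(cos))

print(angular_misalignment(to_unit(0, 0), to_unit(40, 10)))  # ~41 degrees
```
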
Lun Wang, Damai Dai, Jie Jiang (2017)
The panoramic video is widely used to build virtual reality (VR) and is expected to be one of the next-generation killer apps. Transmitting panoramic VR videos is a challenging task because of two problems: 1) panoramic VR videos are typically much larger than normal videos, but they need to be transmitted with limited bandwidth in mobile networks; 2) high-resolution and fluent views should be provided to guarantee a superior user experience and avoid side effects such as dizziness and nausea. To address these two problems, we propose a novel interactive streaming technology, namely the Focus-based Interactive Streaming Framework (FISF). FISF consists of three parts: 1) we use the classic clustering algorithm DBSCAN to analyze real user data for Video Focus Detection (VFD); 2) we propose a Focus-based Interactive Streaming Technology (FIST), including a static version and a dynamic version; 3) we propose two optimization methods: focus merging and a prefetch strategy. Experimental results show that FISF significantly outperforms the state of the art. The paper is submitted to Sigcomm 2017, VR/AR Network on 31 Mar 2017 at 10:44:04am EDT.
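
The abstract names DBSCAN as the clustering step behind Video Focus Detection. A minimal sketch of that idea, using scikit-learn's DBSCAN on a hypothetical gaze-data layout (the paper does not specify its parameters or data format):

```python
import numpy as np
from sklearn.cluster import DBSCAN

# Hypothetical input: (yaw, pitch) viewing directions in degrees, collected
# from many users watching the same panoramic frame.
gaze_points = np.array([
    [10.2, -3.1], [11.0, -2.7], [9.8, -3.5],  # dense region -> a "focus"
    [175.4, 20.1], [176.0, 19.6],             # a second focus
    [300.0, -60.0],                           # isolated sample -> noise
])

# eps and min_samples are illustrative values, not the paper's.
labels = DBSCAN(eps=5.0, min_samples=2).fit(gaze_points).labels_

# Each non-negative label is a detected focus cluster; -1 marks noise.
for cluster_id in set(labels) - {-1}:
    center = gaze_points[labels == cluster_id].mean(axis=0)
    print(f"focus {cluster_id}: center at yaw/pitch {center}")
```

A real VFD step would also need to handle yaw wrap-around at 360°, which this planar distance metric ignores.
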
Dawen Xu, Cheng Chu, Cheng Liu (2021)
Deformable convolution networks (DCNs), proposed to address image recognition with geometric or photometric variations, typically involve deformable convolution that convolves on arbitrary locations of input features. The locations change with different inputs and induce considerable dynamic and irregular memory accesses which cannot be handled by classic neural network accelerators (NNAs). Moreover, the bilinear interpolation (BLI) operation that is required to obtain deformed features in DCNs also cannot be deployed on existing NNAs directly. Although a general-purpose processor (GPP) sitting alongside classic NNAs can process the deformable convolution, the processing on the GPP can be extremely slow due to the lack of parallel computing capability. To address the problem, we develop a DCN accelerator on existing NNAs to support both standard convolution and deformable convolution. Specifically, for the dynamic and irregular accesses in DCNs, we divide both the input and output features into tiles and build a tile dependency table (TDT) to track the irregular tile dependency at runtime. With the TDT, we further develop an on-chip tile scheduler to handle the dynamic and irregular accesses efficiently. In addition, we propose a novel mapping strategy to enable parallel BLI processing on NNAs and apply layer fusion techniques for more energy-efficient DCN processing. According to our experiments, the proposed accelerator achieves orders of magnitude higher performance and energy efficiency compared to typical computing architectures including ARM, ARM+TPU, and GPU, with a 6.6% chip area penalty relative to a classic NNA.
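
For reference, the bilinear interpolation step that deformable convolution needs at fractional sampling locations can be sketched as follows (plain NumPy for illustration, not the accelerator's hardware mapping):

```python
import numpy as np

def bilinear_sample(feature_map, y, x):
    """Sample a 2D feature map at a fractional location (y, x) via bilinear
    interpolation, as deformable convolution requires for its offset taps."""
    h, w = feature_map.shape
    wy, wx = y - np.floor(y), x - np.floor(x)  # fractional weights
    y0, x0 = int(np.floor(y)), int(np.floor(x))
    # Clamp the four neighbor indices to the feature-map bounds.
    y0, x0 = max(y0, 0), max(x0, 0)
    y1, x1 = min(y0 + 1, h - 1), min(x0 + 1, w - 1)

    top = (1 - wx) * feature_map[y0, x0] + wx * feature_map[y0, x1]
    bot = (1 - wx) * feature_map[y1, x0] + wx * feature_map[y1, x1]
    return (1 - wy) * top + wy * bot

fmap = np.arange(16, dtype=float).reshape(4, 4)
print(bilinear_sample(fmap, 1.5, 2.25))  # blends the four nearest entries
```

The data-dependent (y, x) locations are exactly what makes these accesses irregular from a classic NNA's point of view: the four neighbors to fetch are unknown until runtime, which is what the tile dependency table is tracking.
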
With the mounting global interest in optical see-through head-mounted displays (OST-HMDs) across medical, industrial, and entertainment settings, many systems with different capabilities are rapidly entering the market. Despite such variety, they all require display calibration to create a proper mixed reality environment. With the aid of tracking systems, it is possible to register rendered graphics with tracked objects in the real world. We propose a calibration procedure to properly align the coordinate system of the 3D virtual scene that the user sees with that of the tracker. Our method takes a black-box approach towards the HMD calibration, where the tracker's data is its input and the 3D coordinates of a virtual object in the observer's eye are the output; the objective is thus to find the 3D projection that aligns the virtual content with its real counterpart. In addition, a faster and more intuitive version of this calibration is introduced, in which the user simultaneously aligns multiple points of a single virtual 3D object with its real counterpart; this reduces the number of required repetitions in the alignment from 20 to only 4, which leads to a much easier calibration task for the user. In this paper, both internal (HMD camera) and external tracking systems are studied. We perform experiments with Microsoft HoloLens, taking advantage of its self-localization and spatial mapping capabilities to eliminate the requirement for line of sight from the HMD to the object or external tracker. The experimental results indicate an accuracy of up to 4 mm in the average reprojection error based on two separate evaluation methods. We further perform experiments with the internal tracking on the Epson Moverio BT-300 to demonstrate that the method can provide similar results with other HMDs.
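
A compact sketch of the kind of alignment problem described: given tracked 3D points and the 2D locations where the user aligned them, estimate a projection by least squares. This uses the standard Direct Linear Transform, which is not necessarily the paper's exact solver; the function name and data layout are hypothetical.

```python
import numpy as np

def estimate_projection(world_pts, image_pts):
    """Direct Linear Transform: find the 3x4 matrix P with x ~ P @ X for
    homogeneous tracked points X and user-aligned 2D points x.
    Needs at least 6 point correspondences; a multi-point alignment of a
    single 3D object can supply several correspondences per repetition."""
    rows = []
    for (X, Y, Z), (u, v) in zip(world_pts, image_pts):
        Xh = np.array([X, Y, Z, 1.0])
        rows.append(np.concatenate([Xh, np.zeros(4), -u * Xh]))
        rows.append(np.concatenate([np.zeros(4), Xh, -v * Xh]))
    # The projection is the null-space direction of the stacked constraints,
    # recovered (up to scale) from the smallest singular vector.
    _, _, vt = np.linalg.svd(np.asarray(rows))
    return vt[-1].reshape(3, 4)
```
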