
A Practical Approach to Spatiotemporal Data Compression

Published by: Niall Robinson, PhD
Publication date: 2016
Research field: Informatics Engineering
Paper language: English





Datasets representing the world around us are becoming ever more unwieldy as data volumes grow. This is largely due to increased measurement and modelling resolution, but the problem is often exacerbated when data are stored at spuriously high precisions. In an effort to facilitate analysis of these datasets, computationally intensive calculations are increasingly being performed on specialised remote servers before the reduced data are transferred to the consumer. Due to bandwidth limitations, this often means data are displayed as simple 2D visualisations, such as scatter plots or images. We present here a novel way to efficiently encode and transmit 4D data fields on demand so that they can be locally visualised and interrogated. This nascent 4D video format allows us to move the boundary between data server and consumer client more flexibly. Beyond purely scientific visualisation, it also has applications in transmitting data to virtual and augmented reality.
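To make the idea concrete, here is a minimal sketch (hypothetical, not the authors' implementation) of one way such an encoding could work: quantise a 4D field with dimensions (time, level, y, x) to 8 bits and flatten it into a stack of 2D frames that an ordinary video codec could then compress and stream on demand.

```python
import numpy as np

def quantize_to_frames(field, levels=256):
    """Quantise a 4D field (time, z, y, x) to 8-bit frames.

    Returns the frame stack plus the (offset, scale) metadata the
    client needs to reconstruct approximate physical values.
    """
    lo, hi = float(field.min()), float(field.max())
    scale = (hi - lo) / (levels - 1) or 1.0
    frames = np.round((field - lo) / scale).astype(np.uint8)
    # Flatten (time, z) into a single frame index so the stack can be
    # handed to a standard video encoder as a sequence of 2D images.
    t, z, y, x = field.shape
    return frames.reshape(t * z, y, x), (lo, scale)

def dequantize(frames, meta, t, z):
    """Client-side reconstruction of the approximate field."""
    lo, scale = meta
    y, x = frames.shape[1:]
    return frames.astype(np.float32).reshape(t, z, y, x) * scale + lo

# Example round trip on a synthetic field.
field = np.random.rand(10, 5, 64, 64).astype(np.float32)
frames, meta = quantize_to_frames(field)
recon = dequantize(frames, meta, 10, 5)
print("max abs error:", np.abs(recon - field).max())
```

Deliberately discarding precision this way is exactly the trade the abstract describes: the reconstruction error is bounded by half the quantisation step, which the server can choose to match the data's true precision.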




Read also

Noel Alben, Ranjani H.G. (2021)
We present a computational assessment system that promotes the learning of basic rhythmic patterns. The system can generate multiple rhythmic patterns of increasing complexity within various cycle lengths. For a generated rhythm pattern, the learner's performance is assessed through statistical deviations calculated from onset detection and temporal analysis of the performance. This is compared with the generated pattern, and the resulting performance accuracy forms the feedback to the learner. The system proceeds to generate a new pattern of increased complexity when the assessment results fall within certain error bounds, thus mimicking a learner-teacher relationship as the learner progresses through feedback-based learning. The choice of progression within a cycle for each pattern is determined by a predefined complexity metric, based on a coded-element model for the perceptual processing of sequential stimuli. The model, earlier proposed for sequences of tones and non-tones, is here applied to onsets and silences. The system has been developed into a web-based application for accessible learning. Analysis of the performance assessments shows that the complexity metric is indicative of the perceptual processing of rhythm patterns and can be used for rhythm learning.
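The deviation-based assessment can be sketched as follows (a simplified illustration; the detected onsets are assumed to come from a separate onset-detection stage, and names such as `assess_performance` are hypothetical):

```python
import numpy as np

def assess_performance(reference_onsets, detected_onsets, tolerance=0.05):
    """Score a performance by its temporal deviation from the target.

    reference_onsets, detected_onsets: onset times in seconds.
    tolerance: maximum mean absolute deviation (s) allowed to progress.
    """
    ref = np.asarray(reference_onsets, dtype=float)
    det = np.asarray(detected_onsets, dtype=float)
    # Match each reference onset to its nearest detected onset.
    deviations = np.array([np.abs(det - r).min() for r in ref])
    mean_dev = deviations.mean()
    return mean_dev, mean_dev <= tolerance

mean_dev, advance = assess_performance([0.0, 0.5, 1.0, 1.5],
                                       [0.02, 0.51, 0.98, 1.55])
print(f"mean deviation {mean_dev:.3f}s, advance to next pattern: {advance}")
```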
Explicit high-order feature interactions efficiently capture essential structural knowledge about the data of interest and have been used for constructing generative models. We present a supervised discriminative High-Order Parametric Embedding (HOPE) approach to data visualization and compression. Compared to deep embedding models with complicated deep architectures, HOPE generates a more effective high-order feature mapping through an embarrassingly simple shallow model. Furthermore, we propose two approaches to generating a small number of exemplars that convey high-order interactions and represent large-scale data sets. These exemplars, in combination with the feature mapping learned by HOPE, effectively capture essential data variations. Moreover, through HOPE, these exemplars speed up kNN classification for fast information retrieval by thousands of times. For classification in two-dimensional embedding space on the MNIST and USPS datasets, our shallow method HOPE with simple sigmoid transformations significantly outperforms state-of-the-art supervised deep embedding models based on deep neural networks, and even achieves a historically low test error rate of 0.65% in two-dimensional space on MNIST, demonstrating the representational efficiency and power of supervised shallow models with high-order feature interactions.
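A toy illustration of why exemplars make kNN so much faster (generic NumPy, not the HOPE method itself): brute-force kNN cost is dominated by distance computations against stored points, so replacing the full training set with a handful of exemplars shrinks that cost proportionally.

```python
import numpy as np

def knn_predict(queries, points, labels, k=1):
    """Brute-force kNN: majority label among the k nearest points."""
    d = ((queries[:, None, :] - points[None, :, :]) ** 2).sum(-1)
    nearest = np.argsort(d, axis=1)[:, :k]
    return np.array([np.bincount(labels[row]).argmax() for row in nearest])

rng = np.random.default_rng(0)
train_x = rng.normal(size=(50_000, 2))
train_y = (train_x[:, 0] > 0).astype(int)
# Hypothetical exemplar selection: one class mean per class. Distances
# are now computed against 2 points instead of 50,000.
exemplars = np.stack([train_x[train_y == c].mean(0) for c in (0, 1)])
ex_labels = np.array([0, 1])
print(knn_predict(rng.normal(size=(5, 2)), exemplars, ex_labels))
```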
Zhaoxia Yin, Yinyin Peng, et al. (2019)
Reversible data hiding in encrypted images (RDHEI) is receiving growing attention because it protects the content of the original image while the embedded data can be accurately extracted and the original image can be reconstructed losslessly. To make full use of the correlation between adjacent pixels, this paper proposes an RDHEI scheme based on pixel prediction and bit-plane compression. First, to vacate room for data embedding, the prediction error of the original image is calculated and used for bit-plane rearrangement and compression. Then, the image with vacated room is encrypted by a stream cipher. Finally, the additional data is embedded in the vacated room by multi-LSB substitution. Experimental results show that the embedding capacity of the proposed method outperforms state-of-the-art methods.
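The multi-LSB embedding step on its own is easy to sketch (a minimal illustration of this one stage only; the paper's full pipeline also includes prediction, bit-plane compression, and stream-cipher encryption):

```python
import numpy as np

def embed_multi_lsb(pixels, bits, n=3):
    """Embed a bit stream into the n least significant bits per pixel."""
    stego = pixels.copy()
    flat = stego.ravel()
    for i in range(0, len(bits), n):
        chunk = bits[i:i + n]
        value = int("".join(map(str, chunk)), 2)
        mask = (1 << len(chunk)) - 1
        flat[i // n] = (int(flat[i // n]) & ~mask) | value
    return stego

def extract_multi_lsb(pixels, n_bits, n=3):
    """Recover the embedded bit stream from the n LSBs per pixel."""
    flat, bits, i = pixels.ravel(), [], 0
    while len(bits) < n_bits:
        take = min(n, n_bits - len(bits))
        value = int(flat[i]) & ((1 << take) - 1)
        bits.extend(int(b) for b in format(value, f"0{take}b"))
        i += 1
    return bits

img = np.array([[200, 120], [64, 33]], dtype=np.uint8)
payload = [1, 0, 1, 1, 1, 0, 0, 1]
stego = embed_multi_lsb(img, payload)
assert extract_multi_lsb(stego, len(payload)) == payload
```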
As a technology that can prevent the information of the original image and the additional information from being disclosed, reversible data hiding in encrypted images (RDHEI) has attracted wide attention from researchers, and further improving the performance of RDHEI methods has become a focus of research. To this end, this work proposes a high-capacity RDHEI method based on bit-plane compression of prediction errors. First, to reserve room for embedding information, the image owner rearranges and compresses the bit planes of the prediction error. Next, the image with reserved room is encrypted with a secret key. Finally, the information-hiding device embeds the additional information into the reserved room. The method makes full use of the correlation between adjacent pixels. Experimental results show that it achieves true reversibility and provides higher embedding capacity than state-of-the-art works.
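A minimal sketch of the prediction-error idea common to both methods (a simple left-neighbour predictor and a zigzag sign mapping are assumed here; the papers' actual predictors and bit-plane handling may differ): small prediction errors concentrate zeros in the high bit planes, which is what makes the rearrangement and compression effective.

```python
import numpy as np

def prediction_error(img):
    """Predict each pixel from its left neighbour (first column as-is)."""
    pred = np.empty_like(img)
    pred[:, 0] = img[:, 0]
    pred[:, 1:] = img[:, :-1]
    return img.astype(np.int16) - pred.astype(np.int16)

def bit_planes(values, n=8):
    """Split non-negative values into n bit planes, MSB first."""
    return [(values >> b) & 1 for b in reversed(range(n))]

img = np.array([[100, 101, 99], [100, 100, 102]], dtype=np.uint8)
err = prediction_error(img)
# Zigzag-map signed errors to non-negative codes before taking planes.
coded = np.where(err >= 0, 2 * err, -2 * err - 1).astype(np.uint8)
for b, plane in zip(reversed(range(8)), bit_planes(coded)):
    print(f"bit plane {b}: {plane.ravel()}")
```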
Deep neural networks (DNNs) frequently contain far more weights, represented at a higher precision, than are required for the specific task which they are trained to perform. Consequently, they can often be compressed using techniques such as weight pruning and quantization that reduce both the model size and inference time without appreciable loss in accuracy. However, finding the best compression strategy and corresponding target sparsity for a given DNN, hardware platform, and optimization objective currently requires expensive, frequently manual, trial-and-error experimentation. In this paper, we introduce a programmable system for model compression called Condensa. Users programmatically compose simple operators, in Python, to build more complex and practically interesting compression strategies. Given a strategy and user-provided objective (such as minimization of running time), Condensa uses a novel Bayesian optimization-based algorithm to automatically infer desirable sparsities. Our experiments on four real-world DNNs demonstrate memory footprint and hardware runtime throughput improvements of 188x and 2.59x, respectively, using at most ten samples per search. We have released a reference implementation of Condensa at https://github.com/NVlabs/condensa.
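The operator-composition idea can be illustrated with a generic sketch (plain NumPy; this is not the Condensa API, and the operator names are hypothetical):

```python
import numpy as np

def prune(weights, sparsity):
    """Magnitude pruning: zero out the smallest-|w| fraction of weights."""
    threshold = np.quantile(np.abs(weights), sparsity)
    return np.where(np.abs(weights) < threshold, 0.0, weights)

def quantize(weights, bits=8):
    """Uniform symmetric quantization to the given bit width."""
    scale = np.abs(weights).max() / (2 ** (bits - 1) - 1) or 1.0
    return np.round(weights / scale) * scale

def compose(*ops):
    """Chain simple compression operators into one strategy."""
    def strategy(weights):
        for op in ops:
            weights = op(weights)
        return weights
    return strategy

# A composed strategy; in Condensa's setting, an optimizer would tune
# the sparsity against a user-provided accuracy/runtime objective.
strategy = compose(lambda w: prune(w, sparsity=0.9), quantize)
w = np.random.randn(256, 256).astype(np.float32)
w_c = strategy(w)
print("nonzero fraction:", np.count_nonzero(w_c) / w_c.size)
```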
