The needs of multimedia systems have evolved with their architectures, which are now distributed across heterogeneous contexts. A critical issue is that these systems handle, process, and transmit multimedia data. Such data carries several properties that must be taken into account, since they hold a considerable part of its semantics, for instance lip synchronization in a video. In this paper, we focus on the definition of a model as a basic abstraction for describing and modeling media in multimedia systems while taking these properties into account. This model is intended for use in software architectures so that data can be handled efficiently. The proposed model offers an interesting solution for integrating media into applications: we propose to consider and handle them in a uniform way. The model is complemented with synchronization policies to ensure the synchronous transport of media. We then use it in a component model that we develop for the design and deployment of distributed multimedia systems.
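As a minimal sketch of the uniform media abstraction and synchronization policy described above (all names here are hypothetical illustrations, not the paper's actual API), a media unit could carry a presentation timestamp that a policy compares against a tolerance, as in lip synchronization between audio and video:

```python
from dataclasses import dataclass

@dataclass
class MediaUnit:
    """Hypothetical uniform abstraction for a piece of media data."""
    stream: str        # e.g. "audio" or "video"
    timestamp_ms: int  # presentation timestamp in milliseconds
    payload: bytes     # the raw media data

def in_sync(a: MediaUnit, b: MediaUnit, tolerance_ms: int = 80) -> bool:
    """A simple synchronization policy: two units are considered
    synchronous when their presentation timestamps differ by no
    more than a tolerance (80 ms is a common lip-sync bound)."""
    return abs(a.timestamp_ms - b.timestamp_ms) <= tolerance_ms

# Usage: an audio unit arriving 40 ms after the matching video frame
video = MediaUnit("video", 1000, b"")
audio = MediaUnit("audio", 1040, b"")
print(in_sync(audio, video))  # True: within lip-sync tolerance
```

Because every medium is represented by the same abstraction, the same policy applies uniformly across stream types, which is the kind of uniform handling the model aims for.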