Surveying the Social, Smart and Converged TV Landscape: Where is Television Research Headed?

Publication date: 2012
Language: English





The "TV is dead" motto of just a few years ago has been replaced by the prospect that Internet Protocol (IP) television experiences over converged networks will become one of the great technology opportunities of the next few years. As an introduction to the Special Issue on Smart, Social and Converged Television, this extended editorial reviews the current IP television landscape in its many realizations: operator-based, over-the-top, and user-generated. We address new services such as social TV and recommendation engines; dissemination, including new paradigms built on peer-to-peer and content-centric networks; and the all-important quality of experience that challenges services and networks alike. But we intend to go further than reviewing existing work, by proposing areas for the future of television research. These include strategies to provide services that are more efficient in network and energy usage while remaining socially engaging, novel services that give consumers a broader choice of content and devices, and metrics that enable operators and users alike to define the level of service they require or are ready to provide. These topics are addressed in this survey paper, which attempts to create a unifying framework that links them all together. Not only is television not dead, it is alive, thriving, and fostering innovation, and this paper will hopefully prove it.
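The editorial's call for service-level metrics can be made concrete with a small example. Below is an illustrative sketch of a linear quality-of-experience (QoE) score of the kind commonly used for streaming video; the weights, units, and scoring function are hypothetical, not taken from the editorial, and would need calibrating against real viewer studies.

```python
# Illustrative linear QoE score: reward delivered bitrate, penalise
# stalls, startup delay, and abrupt quality switches. All weights are
# hypothetical placeholders.

def qoe_score(bitrates_kbps, rebuffer_s, startup_s,
              w_rebuffer=3000.0, w_switch=1.0, w_startup=1000.0):
    """Higher is better."""
    avg_quality = sum(bitrates_kbps) / len(bitrates_kbps)
    switching = sum(abs(b2 - b1)
                    for b1, b2 in zip(bitrates_kbps, bitrates_kbps[1:]))
    return (avg_quality
            - w_rebuffer * rebuffer_s
            - w_switch * switching / max(1, len(bitrates_kbps) - 1)
            - w_startup * startup_s)

# Example: a four-segment session with a two-second stall.
print(qoe_score([1500, 3000, 3000, 2000], rebuffer_s=2.0, startup_s=1.0))
```

A linear form keeps the trade-off between bitrate, stalls, and switching explicit, which is exactly what would let operators and users negotiate a target service level.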




Related research

Motivated by the growing popularity of smart TVs, we present a large-scale measurement study of smart TVs, collecting and analyzing their network traffic from two different vantage points. First, we analyze the aggregate network traffic of smart TVs in the wild, collected from the residential gateways of tens of homes and covering several smart TV platforms, including Apple, Samsung, Roku, and Chromecast. In addition to accessing video streaming and cloud services, we find that smart TVs frequently connect to well-known as well as platform-specific advertising and tracking services (ATS). Second, we instrument Roku and Amazon Fire TV, two popular smart TV platforms, by setting up a controlled testbed to systematically exercise the top-1000 apps on each platform, and analyze their network traffic at the granularity of individual apps. We again find that smart TV apps connect to a wide range of ATS, and that the key players in the ATS ecosystems of the two platforms differ from each other and from those of the mobile platform. Third, we evaluate the (in)effectiveness of state-of-the-art DNS-based blocklists in filtering advertising and tracking traffic for smart TVs. We find that personally identifiable information (PII) is exfiltrated to platform-related Internet endpoints and to third parties, and that blocklists are generally better at preventing exposure of PII to third parties than to platform-related endpoints. Our work demonstrates the segmentation of the smart TV ATS ecosystem across platforms and its differences from the mobile ATS ecosystem, motivating the need for privacy-enhancing tools designed specifically for each smart TV platform.
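The blocklist evaluation in this study reduces, at its core, to matching contacted domain names against curated lists. A minimal sketch of suffix-based DNS blocklist matching, in the style of Pi-hole-like blockers, is shown below; the blocklist entries and query names are illustrative placeholders, not the paper's data.

```python
# Suffix-based DNS blocklist matching: a query is blocked if the name
# itself or any parent domain appears on the list. Domains below are
# hypothetical examples, not taken from the study.

BLOCKLIST = {"doubleclick.net", "scorecardresearch.com", "tracker.example"}

def is_blocked(qname: str) -> bool:
    """Check the queried name and every parent domain against the list."""
    labels = qname.lower().rstrip(".").split(".")
    return any(".".join(labels[i:]) in BLOCKLIST for i in range(len(labels)))

queries = ["ads.doubleclick.net", "cdn.video.example", "b.scorecardresearch.com"]
for q in queries:
    print(q, "-> blocked" if is_blocked(q) else "-> allowed")
```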
In this position paper, we consider some foundational topics regarding smart contracts (such as terminology, automation, enforceability, and semantics) and define a smart contract as an automatable and enforceable agreement. We explore a simple semantic framework for smart contracts, covering both operational and non-operational aspects, and describe templates and agreements for legally-enforceable smart contracts, based on legal documents. Building upon the Ricardian Contract, we identify operational parameters in the legal documents and use these to connect legal agreements to standardised code. We also explore the design landscape, including increasing sophistication of parameters, increasing use of common standardised code, and long-term research.
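The core idea, identifying operational parameters in a legal document and binding them to standardised code, can be sketched briefly. The following is a hedged illustration with a hypothetical agreement type and field names; it is not the paper's own template format.

```python
# Hedged sketch of the template idea described above: legal prose exposes
# named operational parameters, which are bound to standardised executable
# logic. The agreement type and its fields are hypothetical examples.

from dataclasses import dataclass

@dataclass(frozen=True)
class LoanAgreement:
    # Operational parameters identified in the legal document.
    lender: str
    borrower: str
    principal: float
    annual_rate: float   # e.g. 0.05 for 5% per annum
    term_years: float

    def amount_due(self) -> float:
        """Standardised code driven purely by the template's parameters:
        simple-interest payoff at the end of the term."""
        return self.principal * (1.0 + self.annual_rate * self.term_years)

# An agreement is the template plus concrete parameter values; the prose
# half of the Ricardian pair would carry the same values for enforceability.
deal = LoanAgreement("Alice Ltd", "Bob plc", 10_000.0, 0.05, 2.0)
print(deal.amount_due())  # 11000.0
```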
The abundance and ease of utilizing sound, along with the fact that auditory cues reveal so much about what happens in a scene, make the audio-visual space a perfectly intuitive choice for self-supervised representation learning. However, the current literature suggests that training on uncurated data yields considerably poorer representations than the curated alternatives collected in a supervised manner, and that the gap only narrows when the volume of data increases significantly. Furthermore, the quality of learned representations is known to be heavily influenced by the size and taxonomy of the curated datasets used for self-supervised training. This raises the question of whether we are celebrating too early on catching up with supervised learning when our self-supervised efforts still rely almost exclusively on curated data. In this paper, we study the efficacy of learning from movies and TV shows as forms of uncurated data for audio-visual self-supervised learning. We demonstrate that a simple model based on contrastive learning, trained on a collection of movies and TV shows, not only dramatically outperforms more complex methods trained on uncurated datasets that are orders of magnitude larger, but also performs very competitively with the state of the art that learns from large-scale curated data. We identify that audio-visual patterns such as the appearance of the main character, prominent scenes, and mise-en-scène, which recur throughout the whole duration of a movie, lead to an overabundance of easy negative instances in the contrastive learning formulation. Capitalizing on this observation, we propose a hierarchical sampling policy which, despite its simplicity, effectively improves performance, particularly when learning from TV shows, which naturally exhibit less semantic diversity.
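The hierarchical sampling policy is described only at a high level here, but one plausible reading can be sketched: draw most contrastive negatives from other titles, so that recurring characters and scenes within a single movie do not flood training with easy negatives. The function below is a hypothetical illustration under that reading, not the paper's implementation.

```python
# Hedged sketch of a hierarchical negative-sampling policy: prefer
# negatives from other titles; fall back to far-apart clips of the
# anchor's own title. All parameters are illustrative assumptions.

import random

def sample_negatives(anchor, clips, n_neg=8, p_cross_title=0.75):
    """clips: list of (title_id, clip_id) pairs; assumes several titles."""
    a_title, a_clip = anchor
    other = [c for c in clips if c[0] != a_title]
    same = [c for c in clips if c[0] == a_title and abs(c[1] - a_clip) > 10]
    negs = []
    for _ in range(n_neg):
        pool = other if (random.random() < p_cross_title and other) else (same or other)
        negs.append(random.choice(pool))
    return negs

clips = [(t, i) for t in range(3) for i in range(50)]
print(sample_negatives((0, 5), clips))
```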
Giancarlo Ruffo, 2021
The history of journalism and news diffusion is tightly coupled with the effort to dispel hoaxes, misinformation, propaganda, unverified rumours, poor reporting, and messages containing hate and division. With the explosive growth of online social media and billions of individuals engaged in consuming, creating, and sharing news, this ancient problem has resurfaced with a renewed intensity, threatening our democracies, public health, and the credibility of news outlets. This has led many researchers to develop new methods for studying, understanding, detecting, and preventing fake-news diffusion; as a consequence, thousands of scientific papers have been published in a relatively short period, making it difficult for researchers from different disciplines to identify open problems and the most relevant trends. The aim of this survey is threefold: first, we provide researchers interested in this multidisciplinary and challenging area with a network-based analysis of the existing literature, to assist them in a visual exploration of papers that may be of interest; second, we present a selection of the main results achieved so far, adopting the network as a unifying framework to represent and make sense of data, to model diffusion processes, and to evaluate different debunking strategies; finally, we outline the most relevant research trends, focusing on the moving target of identifying fake news, bots, and trolls by means of data mining and text technologies. Although scholars working on computational linguistics and on networks traditionally belong to different scientific communities, we expect that forthcoming computational approaches to preventing fake news from polluting social media will have to be developed using hybrid and up-to-date methodologies.
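Among the network-based tools the survey covers, diffusion models are central. A minimal independent-cascade sketch, a standard model of the kind used to reason about fake-news spread, is shown below; it is generic textbook code, not the survey's own.

```python
# Minimal independent-cascade diffusion model: each newly infected node
# gets one chance to infect each neighbour with probability p. Generic
# illustration, not code from the survey.

import random

def independent_cascade(adj, seeds, p=0.1, rng=random.Random(0)):
    """adj: {node: [neighbours]}. Returns the set of infected nodes."""
    infected, frontier = set(seeds), list(seeds)
    while frontier:
        nxt = []
        for u in frontier:
            for v in adj.get(u, []):
                if v not in infected and rng.random() < p:
                    infected.add(v)
                    nxt.append(v)
        frontier = nxt
    return infected

adj = {0: [1, 2], 1: [2, 3], 2: [3, 4], 3: [4], 4: []}
print(independent_cascade(adj, seeds=[0], p=0.5))
```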
Xavier Bost, 2018
Speaker diarization may be difficult to achieve when applied to narrative films, where speakers usually talk in adverse acoustic conditions: background music, sound effects, and wide variations in intonation may hide the inter-speaker variability and make audio-based speaker diarization approaches error-prone. On the other hand, such fictional movies exhibit strong regularities at the image level, particularly within dialogue scenes. In this paper, we propose to perform speaker diarization within dialogue scenes of TV series by combining the audio and video modalities: speaker diarization is first performed using each modality separately, the two resulting partitions of the instance set are then optimally matched, and the remaining instances, corresponding to cases of disagreement between the two modalities, are finally processed. The results obtained by applying this multi-modal approach to fictional films outperform those obtained by relying on a single modality.
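The optimal matching of the audio-derived and video-derived partitions can be illustrated with the Hungarian algorithm applied to the clusters' overlap (contingency) matrix. The sketch below is one standard way to implement such a matching step and is an assumption, not necessarily the authors' exact procedure.

```python
# Match two partitions of the same instances by maximising total overlap
# between paired clusters (Hungarian algorithm on the contingency matrix).
# A standard technique, assumed here rather than taken from the paper.

import numpy as np
from scipy.optimize import linear_sum_assignment

def match_partitions(audio_labels, video_labels):
    """Returns a mapping from audio clusters to their best video clusters."""
    a_ids, a_inv = np.unique(audio_labels, return_inverse=True)
    v_ids, v_inv = np.unique(video_labels, return_inverse=True)
    overlap = np.zeros((len(a_ids), len(v_ids)), dtype=int)
    np.add.at(overlap, (a_inv, v_inv), 1)       # contingency counts
    rows, cols = linear_sum_assignment(-overlap)  # maximise overlap
    return {a_ids[r]: v_ids[c] for r, c in zip(rows, cols)}

audio = ["s1", "s1", "s2", "s2", "s3"]
video = ["f2", "f2", "f1", "f1", "f3"]
print(match_partitions(audio, video))  # {'s1': 'f2', 's2': 'f1', 's3': 'f3'}
```

Instances on which the matched clusters disagree would then be handled in the final processing step the abstract describes.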
