
Live Video Comment Generation Based on Surrounding Frames and Live Comments

Posted by: Damai Dai
Publication date: 2018
Research field: Informatics Engineering
Paper language: English
Author: Damai Dai





In this paper, we propose the task of live comment generation. Live comments are a new form of comments on videos, which can be regarded as a mixture of comments and chats. A high-quality live comment should be not only relevant to the video, but also interactive with other users. In this work, we first construct a new dataset for live comment generation. Then, we propose a novel end-to-end model to generate human-like live comments by referring to the video and other users' comments. Finally, we evaluate our model on the constructed dataset. Experimental results show that our method can significantly outperform the baselines.
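As a rough illustration of the kind of end-to-end model described in this abstract, the sketch below conditions a comment decoder on pooled video-frame features and an encoding of the surrounding comments. All module names, dimensions, and the simple additive fusion are illustrative assumptions, not the paper's actual architecture.

```python
# Hypothetical sketch: an encoder-decoder that conditions comment generation
# on video frame features and surrounding comments. Module names, dimensions,
# and the additive fusion are assumptions, not the paper's design.
import torch
import torch.nn as nn

class LiveCommentGenerator(nn.Module):
    def __init__(self, vocab_size, frame_dim=2048, hidden=512):
        super().__init__()
        self.frame_proj = nn.Linear(frame_dim, hidden)                # project CNN frame features
        self.comment_embed = nn.Embedding(vocab_size, hidden)
        self.comment_enc = nn.GRU(hidden, hidden, batch_first=True)   # encode surrounding comments
        self.decoder = nn.GRU(hidden, hidden, batch_first=True)
        self.out = nn.Linear(hidden, vocab_size)

    def forward(self, frame_feats, context_tokens, target_tokens):
        # frame_feats: (B, F, frame_dim); context_tokens/target_tokens: (B, T)
        v = self.frame_proj(frame_feats).mean(dim=1)                  # pooled video representation
        _, c = self.comment_enc(self.comment_embed(context_tokens))   # final state of context encoder
        init = (v + c.squeeze(0)).unsqueeze(0)                        # fuse video + context as decoder init state
        dec_out, _ = self.decoder(self.comment_embed(target_tokens), init)
        return self.out(dec_out)                                      # per-step vocabulary logits
```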




Read also

We analyze the claims that video recreations of shoulder surfing attacks offer a suitable alternative and a baseline, as compared to evaluation in a live setting. We recreated a subset of the factors of a prior video-simulation experiment conducted by Aviv et al. (ACSAC 2017), and model the same scenario using live participants ($n=36$) instead (i.e., the victim and attacker were both present). The live experiment confirmed that for Android's graphical patterns, video simulation is consistent with the live setting for attacker success rates. However, both 4- and 6-digit PINs demonstrate statistically significant differences in attacker performance, with live attackers performing as much as 1.9x better than in the video simulation. The security benefits gained from removing feedback lines in Android's graphical patterns are also greatly diminished in the live setting, particularly under multiple attacker observations, but overall, the data suggests that video recreations can provide a suitable baseline measure for attacker success rate. However, we caution that researchers should consider that these baselines may greatly underestimate the threat of an attacker in live settings.
Automatic live commenting aims to provide real-time comments on videos for viewers. It encourages users' engagement on online video sites, and is also a good benchmark for video-to-text generation. Recent work on this task adopts encoder-decoder models to generate comments. However, these methods do not model the interaction between videos and comments explicitly, so they tend to generate popular comments that are often irrelevant to the videos. In this work, we aim to improve the relevance between live comments and videos by modeling the cross-modal interactions among different modalities. To this end, we propose a multimodal matching transformer to capture the relationships among comments, vision, and audio. The proposed model is based on the transformer framework and can iteratively learn the attention-aware representations for each modality. We evaluate the model on a publicly available live commenting dataset. Experiments show that the multimodal matching transformer model outperforms the state-of-the-art methods.
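The following is a minimal sketch of one cross-modal attention step in the spirit of the matching transformer described above: the comment representation attends over vision and audio, and the attended views are fused with a residual connection. Dimensions, head counts, and the averaging-based fusion are assumptions rather than the paper's exact design.

```python
# Illustrative cross-modal attention step: text attends to vision and audio.
# All hyperparameters and the fusion scheme are assumptions for illustration.
import torch
import torch.nn as nn

class CrossModalLayer(nn.Module):
    def __init__(self, dim=512, heads=8):
        super().__init__()
        self.attn_v = nn.MultiheadAttention(dim, heads, batch_first=True)  # text attends to vision
        self.attn_a = nn.MultiheadAttention(dim, heads, batch_first=True)  # text attends to audio
        self.norm = nn.LayerNorm(dim)

    def forward(self, text, vision, audio):
        # text: (B, Lt, D), vision: (B, Lv, D), audio: (B, La, D)
        t2v, _ = self.attn_v(text, vision, vision)
        t2a, _ = self.attn_a(text, audio, audio)
        # fuse the two attended views and keep a residual connection
        return self.norm(text + 0.5 * (t2v + t2a))
```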
We focus on the task of Automatic Live Video Commenting (ALVC), which aims to generate real-time video comments with both video frames and other viewers' comments as inputs. A major challenge in this task is how to properly leverage the rich and diverse information carried by video and text. In this paper, we aim to collect diversified information from video and text for informative comment generation. To achieve this, we propose a Diversified Co-Attention (DCA) model for this task. Our model builds bidirectional interactions between video frames and surrounding comments from multiple perspectives via metric learning, to collect a diversified and informative context for comment generation. We also propose an effective parameter orthogonalization technique to avoid excessive overlap of information learned from different perspectives. Results show that our approach outperforms existing methods in the ALVC task, achieving new state-of-the-art results.
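The abstract above mentions a parameter orthogonalization technique to limit overlap between perspectives. One common way to realize that idea is a Frobenius-norm penalty on pairwise products of the projection matrices, sketched below; the DCA paper's exact formulation may differ, and the helper name and loss weighting are hypothetical.

```python
# Hypothetical orthogonalization penalty: discourage overlap between
# projection matrices learned from different perspectives.
import torch

def orthogonality_penalty(weights):
    """weights: list of (D, D) projection matrices, one per perspective."""
    penalty = weights[0].new_zeros(())
    for i in range(len(weights)):
        for j in range(i + 1, len(weights)):
            # Frobenius norm of W_i^T W_j; zero when the column spaces are orthogonal
            penalty = penalty + torch.norm(weights[i].t() @ weights[j], p="fro")
    return penalty

# illustrative usage:
# total_loss = generation_loss + lambda_orth * orthogonality_penalty([W1, W2, W3])
```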
Call centers, in which human operators attend clients using textual chat, are very common in modern e-commerce. Training enough skilled operators who are able to provide good service is a challenge. We suggest an algorithm and a method to train and implement an assisting agent that provides on-line advice to operators while they attend clients. The agent is domain-independent and can be introduced to new domains without major efforts in design, training and organizing structured knowledge of the professional discipline. We demonstrate the applicability of the system in an experiment that realizes its full life-cycle on a specific domain and analyze its capabilities.
This paper proposes a method for generating bullet comments for live-streaming games based on highlights (i.e., the exciting parts of video clips) extracted from the game content, and evaluates its effect on mental health promotion. Game live streaming is becoming a popular theme for academic research. Compared to traditional online video sharing platforms, such as YouTube and Vimeo, video live streaming platforms have the benefit of communicating with other viewers in real time. In sports broadcasting, the commentator plays an essential role as a mood maker by making matches more exciting. The enjoyment that emerges while watching game live streaming also benefits the audience's mental health. However, many e-sports live streaming channels do not have a commentator to entertain viewers. Therefore, this paper presents a design of an AI commentator that can be embedded in live-streaming games. To generate bullet comments for real-time game live streaming, the system employs highlight evaluation to detect highlights and then generates the bullet comments. An experiment is conducted and the effectiveness of the generated bullet comments in a live-streaming fighting game channel is evaluated.
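A minimal sketch of the flow described in the last abstract, assuming a simple threshold on a highlight score: comments are generated only when a frame is judged to be a highlight. The scoring and generation functions are placeholders, not the paper's actual components.

```python
# Placeholder pipeline: highlight evaluation first, bullet comment generation
# only for detected highlights. Threshold and function names are assumptions.
def run_ai_commentator(frame_stream, score_highlight, generate_comment, threshold=0.8):
    for frame in frame_stream:
        score = score_highlight(frame)           # excitement score in [0, 1]
        if score >= threshold:                   # only comment on highlights
            yield generate_comment(frame, score) # bullet comment for this moment
```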