
Screenplay Quality Assessment: Can We Predict Who Gets Nominated?

Posted by: Ming-Chang Chiu
Publication date: 2020
Research field: Informatics Engineering
Paper language: English





Deciding which scripts to turn into movies is a costly and time-consuming process for filmmakers, so a tool that aids script selection, an initial phase of movie production, can be very beneficial. Toward that goal, we present a method to evaluate the quality of a screenplay based on linguistic cues. We take a two-fold approach: (1) we define the task as predicting whether a script is nominated at major film awards, under the hypothesis that peer-recognized scripts have a greater chance of success; (2) drawing on industry opinions and narratology, we extract domain-specific features and integrate them into common classification techniques. We face two challenges: (1) scripts are much longer than documents in other datasets, and (2) nominated scripts are scarce and thus difficult to collect. Nevertheless, with narratology-inspired modeling and domain features, our approach offers clear improvements over strong baselines. Our work provides a new approach for future work in screenplay analysis.
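For illustration only, here is a minimal sketch of how such nomination prediction could be framed as binary classification over extracted screenplay features. The feature names, data, and classifier below are hypothetical placeholders, not the paper's actual features, dataset, or model.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Hypothetical per-script features, e.g. dialogue ratio, scene count,
# average scene length, variance of a sentiment arc. Placeholders only.
n_scripts = 200
X = rng.normal(size=(n_scripts, 4))

# Imbalanced labels: only a small fraction of scripts get nominated.
y = (rng.random(n_scripts) < 0.15).astype(int)

# class_weight="balanced" is one simple way to cope with the scarcity
# of nominated scripts; the paper's actual handling may differ.
clf = LogisticRegression(class_weight="balanced", max_iter=1000)
scores = cross_val_score(clf, X, y, cv=5, scoring="f1")
print("cross-validated F1: %.3f" % scores.mean())
```

With random synthetic data the score is meaningless; the point is only the framing of script-level features feeding a nomination classifier evaluated with cross-validation.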


Read also

J. Iliopoulos (2006)
In the framework of the Standard Model the mass of the physical Higgs boson is an arbitrary parameter. In this note we examine whether it is possible to determine the ratio of $m_H /M$, where $M$ denotes any other mass in the theory, such as the $W$ or the $Z$-boson mass. We show that no such relation can be stable under renormalisation.
Several quality dimensions of natural language arguments have been investigated. Some are likely to be reflected in linguistic features (e.g., an argument's arrangement), whereas others depend on context (e.g., relevance) or topic knowledge (e.g., acceptability). In this paper, we study the intrinsic computational assessment of 15 dimensions, i.e., learning only from an argument's text. In systematic experiments with eight feature types on an existing corpus, we observe moderate but significant learning success for most dimensions. Rhetorical quality seems hardest to assess, and subjectivity features turn out to be strong, although length bias in the corpus impedes full validity. We also find that human assessors differ more from each other than from our approach.
We investigate the effects of multi-task learning using the recently introduced task of semantic tagging. We employ semantic tagging as an auxiliary task for three different NLP tasks: part-of-speech tagging, Universal Dependency parsing, and Natural Language Inference. We compare full neural network sharing, partial neural network sharing, and what we term the "learning what to share" setting, where negative transfer between tasks is less likely. Our findings show considerable improvements for all tasks, particularly in the "learning what to share" setting, which yields consistent gains across all tasks.
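As a rough illustration of the full-sharing setting mentioned above, the sketch below puts one shared encoder under two task-specific output layers. The layer sizes, tag inventories, and architecture are assumptions for the example, not the paper's actual model.

```python
import torch
import torch.nn as nn

class SharedTagger(nn.Module):
    """One shared BiLSTM encoder with per-task classification heads."""
    def __init__(self, vocab_size, emb_dim=64, hidden=128,
                 n_pos_tags=17, n_sem_tags=66):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.encoder = nn.LSTM(emb_dim, hidden, batch_first=True,
                               bidirectional=True)
        # Task-specific heads on top of the shared representation.
        self.pos_head = nn.Linear(2 * hidden, n_pos_tags)
        self.sem_head = nn.Linear(2 * hidden, n_sem_tags)

    def forward(self, token_ids, task):
        h, _ = self.encoder(self.embed(token_ids))
        return self.pos_head(h) if task == "pos" else self.sem_head(h)

model = SharedTagger(vocab_size=10000)
logits = model(torch.randint(0, 10000, (2, 12)), task="pos")
print(logits.shape)  # torch.Size([2, 12, 17])
```

Partial sharing and "learning what to share" would modify how much of the encoder the tasks have in common; this sketch only shows the fully shared case.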
Using the perturbative QCD amplitudes for $B \to \pi\pi$ and $B \to K\pi$, we have performed an extensive study of the parameter space where the theoretical predictions for the branching ratios are consistent with recent experimental data. From this allowed range of parameter space, we predict the mixing-induced CP asymmetry for $B \to \pi^+\pi^-$ with about 11% uncertainty and the other CP asymmetries for $B \to \pi\pi$ and $B \to K\pi$ with 40%-47% uncertainty. These errors are expected to be reduced as we restrict the parameter space by studying other decay modes and by further improvements in the experimental data.
Recent work has presented intriguing results examining the knowledge contained in language models (LMs) by having the LM fill in the blanks of prompts such as "Obama is a _ by profession." These prompts are usually manually created and quite possibly sub-optimal; another prompt such as "Obama worked as a _" may result in more accurately predicting the correct profession. Because of this, given an inappropriate prompt, we might fail to retrieve facts that the LM does know, and thus any given prompt provides only a lower-bound estimate of the knowledge contained in an LM. In this paper, we attempt to more accurately estimate the knowledge contained in LMs by automatically discovering better prompts to use in this querying process. Specifically, we propose mining-based and paraphrasing-based methods to automatically generate high-quality and diverse prompts, as well as ensemble methods to combine answers from different prompts. Extensive experiments on the LAMA benchmark for extracting relational knowledge from LMs demonstrate that our methods can improve accuracy from 31.1% to 39.6%, providing a tighter lower bound on what LMs know. We have released the code and the resulting LM Prompt And Query Archive (LPAQA) at https://github.com/jzbjyb/LPAQA.
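To give a concrete sense of how prompt wording changes what a masked LM retrieves, here is a toy example using the Hugging Face fill-mask pipeline. This is not the released LPAQA code, and the choice of model is an assumption made for the example.

```python
from transformers import pipeline

# A standard masked LM; any BERT-style model with a [MASK] token works.
unmasker = pipeline("fill-mask", model="bert-base-uncased")

# Two paraphrases of the same relational query about a profession.
prompts = [
    "Obama is a [MASK] by profession.",
    "Obama worked as a [MASK].",
]
for prompt in prompts:
    top = unmasker(prompt, top_k=3)
    print(prompt, "->", [cand["token_str"] for cand in top])
```

Different phrasings can surface different top predictions, which is why the paper mines and paraphrases prompts and then ensembles their answers rather than relying on a single manually written template.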