
Frame-semantic parsers traditionally predict predicates, frames, and semantic roles in a fixed order. This paper explores the 'chicken-or-egg' problem of interdependencies between these components, both theoretically and practically. We introduce a flexible BERT-based sequence labeling architecture that allows frames and roles to be predicted independently of each other or combined in several ways. Our results show that our setups can approximate the performance of more complex traditional models, while allowing a clearer view of the interdependencies between the pipeline's components, and of how frame and role prediction models make different use of BERT's layers.
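To make the architecture concrete, below is a minimal sketch (not the authors' released code) of a BERT sequence labeler with separate frame and role heads over a shared encoder, which is the "independent prediction" setup the abstract describes; the label-set sizes, head structure, and model name are illustrative assumptions.

```python
# Sketch of a BERT-based sequence labeler with independent frame and
# role prediction heads. Label inventories and head design are assumed
# for illustration; they are not specified in the abstract.
import torch.nn as nn
from transformers import BertModel

class FrameRoleLabeler(nn.Module):
    def __init__(self, num_frame_labels, num_role_labels,
                 model_name="bert-base-cased"):
        super().__init__()
        self.bert = BertModel.from_pretrained(model_name)
        hidden = self.bert.config.hidden_size
        # Two separate linear heads: frames and roles are predicted
        # independently from the shared per-token BERT encoding.
        self.frame_head = nn.Linear(hidden, num_frame_labels)
        self.role_head = nn.Linear(hidden, num_role_labels)

    def forward(self, input_ids, attention_mask):
        hidden_states = self.bert(
            input_ids=input_ids, attention_mask=attention_mask
        ).last_hidden_state
        # Per-token logits for each task; conditioning one head on the
        # other's output would give one of the combined variants the
        # paper compares against this independent setup.
        return self.frame_head(hidden_states), self.role_head(hidden_states)
```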
We use dialogue act recognition (DAR) to investigate how well BERT represents utterances in dialogue, and how fine-tuning and large-scale pre-training contribute to its performance. We find that while both standard BERT pre-training and pre-training on dialogue-like data are useful, task-specific fine-tuning is essential for good performance.
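As a rough illustration of the task-specific fine-tuning step, the following sketch treats DAR as utterance classification with a BERT sequence-classification head; the dialogue-act tag set, model checkpoint, and hyperparameters are hypothetical placeholders, not values from the paper.

```python
# Hedged sketch of fine-tuning BERT for dialogue act recognition as
# utterance classification. Tag set and learning rate are illustrative.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

NUM_DIALOGUE_ACTS = 4  # hypothetical tags, e.g. statement/question/backchannel/other
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=NUM_DIALOGUE_ACTS
)
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)

def finetune_step(utterances, act_labels):
    """One gradient step on a batch of (utterance, dialogue-act) pairs."""
    batch = tokenizer(utterances, padding=True, truncation=True,
                      return_tensors="pt")
    labels = torch.tensor(act_labels)
    loss = model(**batch, labels=labels).loss
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
    return loss.item()
```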
