
Automated Timeline Length Selection for Flexible Timeline Summarization

Published by: Qianren Mao
Publication date: 2021
Research field: Informatics Engineering
Paper language: English





By producing summaries for long-running events, timeline summarization (TLS) underpins many information retrieval tasks. Successful TLS requires identifying an appropriate set of key dates (the timeline length) to cover. However, doing so is challenging because the right length can change from one topic to another. Existing TLS solutions rely on either an event-agnostic fixed length or an expert-supplied setting. Neither strategy is desirable for real-life TLS scenarios. A fixed, event-agnostic setting ignores the diversity of events and their development and hence can lead to low-quality TLS. Relying on expert-crafted settings is neither scalable nor sustainable for processing many dynamically changing events. This paper presents a better TLS approach for automatically and dynamically determining the TLS timeline length. We achieve this by employing the established elbow method from the machine learning community to automatically find the minimum number of dates within the time series needed to generate concise and informative summaries. We applied our approach to four TLS datasets in English and Chinese and compared it against three prior methods. Experimental results show that our approach delivers comparable or even better summaries than state-of-the-art TLS methods, but it achieves this without expert involvement.
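The abstract does not spell out the paper's exact elbow criterion, but the general idea can be sketched. A common geometric formulation picks the point on a decreasing cost curve (e.g., clustering inertia as a function of the number of key dates) with the maximum perpendicular distance from the line joining the curve's endpoints. The function name `elbow_index` and the sample cost values below are illustrative, not from the paper:

```python
import numpy as np

def elbow_index(costs):
    """Return the index of the 'elbow' in a decreasing cost curve.

    Heuristic: the elbow is the point farthest (perpendicularly) from
    the straight line connecting the first and last points of the curve.
    """
    costs = np.asarray(costs, dtype=float)
    n = len(costs)
    xs = np.arange(n)
    p1 = np.array([0.0, costs[0]])          # first point of the curve
    p2 = np.array([n - 1.0, costs[-1]])     # last point of the curve
    line = (p2 - p1) / np.linalg.norm(p2 - p1)  # unit direction of the chord
    points = np.stack([xs, costs], axis=1) - p1
    # Project each point onto the chord, then measure the residual distance
    proj = np.outer(points @ line, line)
    dists = np.linalg.norm(points - proj, axis=1)
    return int(np.argmax(dists))

# Illustrative cost curve: drops sharply, then flattens after 3 dates
costs = [100, 40, 15, 12, 11, 10.5, 10.2]
num_dates = elbow_index(costs) + 1  # 0-based index -> count of key dates
# num_dates == 3: the curve flattens after the third candidate date
```

This is only a minimal sketch of the elbow heuristic itself; the paper applies it to select the minimum number of dates for a timeline, which would require a TLS-specific cost signal.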


Read also

In timeline-based planning, domains are described as sets of independent, but interacting, components, whose behaviour over time (the set of timelines) is governed by a set of temporal constraints. A distinguishing feature of timeline-based planning systems is the ability to integrate planning with execution by synthesising control strategies for flexible plans. However, flexible plans can only represent temporal uncertainty, while more complex forms of nondeterminism are needed to deal with a wider range of realistic problems. In this paper, we propose a novel game-theoretic approach to timeline-based planning problems, generalising the state of the art while uniformly handling temporal uncertainty and nondeterminism. We define a general concept of timeline-based game and we show that the notion of winning strategy for these games is strictly more general than that of control strategy for dynamically controllable flexible plans. Moreover, we show that the problem of establishing the existence of such winning strategies is decidable using a doubly exponential amount of space.
Xiaoyu Du, Quan Le, Mark Scanlon (2020)
Case-hindering, multi-year digital forensic evidence backlogs have become commonplace in law enforcement agencies throughout the world. This is due to an ever-growing number of cases requiring digital forensic investigation coupled with the growing volume of data to be processed per case. Leveraging previously processed digital forensic cases and their component artefact relevancy classifications can facilitate an opportunity for training automated artificial intelligence based evidence processing systems. These can significantly aid investigators in the discovery and prioritisation of evidence. This paper presents one approach for file artefact relevancy determination building on the growing trend towards a centralised, Digital Forensics as a Service (DFaaS) paradigm. This approach enables the use of previously encountered pertinent files to classify newly discovered files in an investigation. Trained models can aid in the detection of these files during the acquisition stage, i.e., during their upload to a DFaaS system. The technique generates a relevancy score for file similarity using each artefact's filesystem metadata and associated timeline events. The approach presented is validated against three experimental usage scenarios.
Recovering images from undersampled linear measurements typically leads to an ill-posed linear inverse problem, that asks for proper statistical priors. Building effective priors is however challenged by the low train and test overhead dictated by real-time tasks; and the need for retrieving visually plausible and physically feasible images with minimal hallucination. To cope with these challenges, we design a cascaded network architecture that unrolls the proximal gradient iterations by permeating benefits from generative residual networks (ResNet) to modeling the proximal operator. A mixture of pixel-wise and perceptual costs is then deployed to train proximals. The overall architecture resembles back-and-forth projection onto the intersection of feasible and plausible images. Extensive computational experiments are examined for a global task of reconstructing MR images of pediatric patients, and a more local task of superresolving CelebA faces, that are insightful to design efficient architectures. Our observations indicate that for MRI reconstruction, a recurrent ResNet with a single residual block effectively learns the proximal. This simple architecture appears to significantly outperform the alternative deep ResNet architecture by 2dB SNR, and the conventional compressed-sensing MRI by 4dB SNR with 100x faster inference. For image superresolution, our preliminary results indicate that modeling the denoising proximal demands deep ResNets.
The timeline of the lunar bombardment in the first Gy of the Solar System remains unclear. Some basin-forming impacts occurred 3.9-3.7Gy ago. Many other basins formed before, but their exact ages are not precisely known. There are two possible interpretations of the data: in the cataclysm scenario there was a surge in the impact rate approximately 3.9Gy ago, while in the accretion tail scenario the lunar bombardment declined since the era of planet formation and the latest basins formed in its tail-end. Here, we revisit the work of Morbidelli et al.(2012) that examined which scenario could be compatible with both the lunar crater record in the 3-4Gy period and the abundance of highly siderophile elements (HSE) in the lunar mantle. We use updated numerical simulations of the fluxes of impactors. Under the traditional assumption that the HSEs track the total amount of material accreted by the Moon since its formation, we conclude that only the cataclysm scenario can explain the data. The cataclysm should have started ~3.95Gy ago. However we show that HSEs could have been sequestered from the lunar mantle due to iron sulfide exsolution during magma ocean crystallization, followed by mantle overturn. Based on the hypothesis that the lunar magma ocean crystallized about 100-150My after Moon formation, and therefore that HSEs accumulated in the lunar mantle only after this time, we show that the bombardment in the 3-4Gy period can be explained in the accretion tail scenario. This hypothesis would also explain why the Moon appears so depleted in HSEs relative to the Earth. We also extend our analysis of the cataclysm and accretion tail scenarios to the case of Mars. The accretion tail scenario requires a global resurfacing event on Mars ~4.4Gy ago, possibly associated with the formation of the Borealis basin, and it is consistent with the HSE budget of the planet.
Given a collection of timestamped web documents related to an evolving topic, timeline summarization (TS) highlights its most important events in the form of relevant summaries to represent the development of a topic over time. Most of the previous work focuses on fully-observable ranking models and depends on hand-designed features or complex mechanisms that may not generalize well. We present a novel dynamic framework for evolutionary timeline generation leveraging distributed representations, which dynamically finds the most likely sequence of evolutionary summaries in the timeline, called the Viterbi timeline, and reduces the impact of events that are irrelevant or repetitive with respect to the topic. The assumptions of coherence and the global view run through our model. We explore adjacent relevance to constrain timeline coherence and make sure the events evolve on the same topic with a global view. Experimental results demonstrate that our framework is feasible for extracting summaries for timeline generation, outperforms various competitive baselines, and achieves state-of-the-art performance as an unsupervised approach.
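The "Viterbi timeline" above is a dynamic-programming decode over candidate summaries per date, scored by per-date relevance and adjacent coherence. The abstract does not give the scoring functions, so the shapes below (`emission[t][s]` as the relevance of candidate `s` at date `t`, `transition[s1][s2]` as the coherence between adjacent choices) are an assumed, generic formulation:

```python
import numpy as np

def viterbi_timeline(emission, transition):
    """Return the highest-scoring sequence of candidate indices.

    emission:   (T, S) array, relevance score of candidate s at date t
    transition: (S, S) array, coherence score between adjacent candidates
    Scores are additive (log-probabilities would fit this form).
    """
    T, S = emission.shape
    score = np.zeros((T, S))          # best score ending in candidate s at t
    back = np.zeros((T, S), dtype=int)  # backpointers for path recovery
    score[0] = emission[0]
    for t in range(1, T):
        # cand[s1, s2] = best path through s1 at t-1, then s2 at t
        cand = score[t - 1][:, None] + transition + emission[t][None, :]
        back[t] = np.argmax(cand, axis=0)
        score[t] = np.max(cand, axis=0)
    # Trace back from the best final candidate
    path = [int(np.argmax(score[-1]))]
    for t in range(T - 1, 0, -1):
        path.append(int(back[t][path[-1]]))
    return path[::-1]

# Toy example: 2 dates, 2 candidate summaries per date
emission = np.array([[1.0, 0.0], [0.0, 1.0]])
transition = np.array([[0.5, 0.0], [0.0, 0.5]])
timeline = viterbi_timeline(emission, transition)  # one candidate per date
```

This is the standard Viterbi recurrence, not the paper's full framework, which also incorporates distributed representations and global-view constraints into the scores.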
