Framing a news article means to portray the reported event from a specific perspective, e.g., from an economic or a health perspective. Reframing means to change this perspective. Depending on the audience or the submessage, reframing can become necessary to achieve the desired effect on the readers. Reframing is related to adapting style and sentiment, which can be tackled with neural text generation techniques. However, it is more challenging since changing a frame requires rewriting entire sentences rather than single phrases. In this paper, we study how to computationally reframe sentences in news articles while maintaining their coherence to the context. We treat reframing as a sentence-level fill-in-the-blank task for which we train neural models on an existing media frame corpus. To guide the training, we propose three strategies: framed-language pretraining, named-entity preservation, and adversarial learning. We evaluate the respective models automatically and manually for topic consistency, coherence, and successful reframing. Our results indicate that generating properly framed text works well but with tradeoffs.
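As a minimal sketch of the sentence-level fill-in-the-blank setup described above (the model choice and masking scheme here are illustrative assumptions, not the authors' exact configuration), one can blank out the sentence to be reframed and let a pretrained sequence-to-sequence model generate a replacement conditioned on the surrounding context:

```python
# Sketch: sentence-level fill-in-the-blank reframing with a seq2seq model.
# Model (facebook/bart-base) and prompt format are assumptions for illustration.
from transformers import BartForConditionalGeneration, BartTokenizer

tokenizer = BartTokenizer.from_pretrained("facebook/bart-base")
model = BartForConditionalGeneration.from_pretrained("facebook/bart-base")

context_before = "The new policy was announced on Monday."
context_after = "Officials expect the effects to be felt within a year."
# Replace the sentence to be reframed with BART's mask token.
blanked = f"{context_before} {tokenizer.mask_token} {context_after}"

inputs = tokenizer(blanked, return_tensors="pt")
outputs = model.generate(**inputs, max_length=64, num_beams=4)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

The paper's three training strategies would sit on top of such a backbone, e.g., continuing pretraining on framed language before fine-tuning on the blank-filling objective.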
A computational fluid dynamics (CFD) simulation framework for predicting complex flows is developed on the Tensor Processing Unit (TPU) platform. The TPU architecture features accelerated dense matrix multiplication, large high-bandwidth memory, and a fast inter-chip interconnect, which makes it attractive for high-performance scientific computing. The CFD framework solves the variable-density Navier-Stokes equations using a low-Mach approximation, and the governing equations are discretized by a finite difference method on a collocated structured mesh. It uses graph-based TensorFlow as its programming paradigm. The accuracy and performance of this framework are studied both numerically and analytically, specifically focusing on the effects of TPU-native single-precision floating-point arithmetic on solution accuracy. The algorithm and implementation are validated with canonical 2D and 3D Taylor-Green vortex simulations. To demonstrate the capability for simulating turbulent flows, simulations are conducted for two configurations, namely decaying homogeneous isotropic turbulence and a turbulent planar jet. Both simulations show good statistical agreement with reference solutions. The performance analysis shows linear weak scaling and super-linear strong scaling up to a full TPU v3 pod with 2048 cores.
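To illustrate the programming paradigm (and only that: the stencil below is a generic second-order central difference on a periodic mesh, not the paper's solver, discretization details, or TPU partitioning), a finite-difference operator can be expressed as TensorFlow graph ops that a TPU can consume:

```python
# Sketch: second-order central-difference Laplacian on a 3D periodic mesh,
# written as TensorFlow ops. Illustrative only; the paper's framework handles
# variable density, boundary conditions, and multi-chip domain decomposition.
import tensorflow as tf

@tf.function  # traced into a graph, the form in which TPUs run computations
def laplacian(f, dx):
    lap = tf.zeros_like(f)
    for axis in range(3):  # sum 1D stencils over the three mesh directions
        lap += (tf.roll(f, shift=-1, axis=axis)
                - 2.0 * f
                + tf.roll(f, shift=1, axis=axis)) / dx**2
    return lap

f = tf.random.normal([64, 64, 64], dtype=tf.float32)  # single precision, as native on TPU
print(laplacian(f, dx=0.1).shape)
```

Keeping the whole solver inside such traced graphs is what lets the framework exploit the TPU's dense-arithmetic throughput without a Python-level inner loop.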
We describe a novel application of the end-to-end deep learning technique to the task of discriminating top quark-initiated jets from those originating from the hadronization of a light quark or a gluon. The end-to-end deep learning technique combines deep learning algorithms and low-level detector representations of the high-energy collision event. In this study, we use low-level detector information from the simulated CMS Open Data samples to construct the top jet classifiers. To optimize classifier performance, we progressively add low-level information from the CMS tracking detector, including pixel detector reconstructed hits and impact parameters, and demonstrate the value of additional tracking information even when no new spatial structures are added. Relying only on calorimeter energy deposits and reconstructed pixel detector hits, the end-to-end classifier achieves an AUC score of 0.975$\pm$0.002 for the task of classifying boosted top quark jets. After adding derived track quantities, the classifier AUC score increases to 0.9824$\pm$0.0013, serving as the first performance benchmark for these CMS Open Data samples. We additionally provide a timing performance comparison of different processor unit architectures for training the network.
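The end-to-end idea is to feed multi-channel detector "images" directly to a network rather than hand-engineered jet observables. A minimal sketch follows, with the input shape, channel count, and architecture all assumed for illustration (the paper's networks and inputs are larger and more detailed):

```python
# Sketch: a small CNN over stacked detector channels (e.g. calorimeter
# deposits plus pixel-hit maps), with AUC as the evaluation metric.
import numpy as np
import tensorflow as tf
from sklearn.metrics import roc_auc_score

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(125, 125, 8)),      # 8 detector channels (assumed)
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(64, 3, activation="relu"),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # P(top-quark jet)
])
model.compile(optimizer="adam", loss="binary_crossentropy")

# Random arrays standing in for simulated CMS Open Data samples.
x = np.random.rand(128, 125, 125, 8).astype("float32")
y = np.random.randint(0, 2, size=128)
model.fit(x, y, epochs=1, batch_size=32, verbose=0)
print("AUC:", roc_auc_score(y, model.predict(x, verbose=0).ravel()))
```

Adding tracking information in this picture amounts to appending further channels (hits, impact-parameter maps) to the input stack.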
The Ly$\alpha$ forest provides one of the best means of mapping large-scale structure at high redshift, including our tightest constraint on the distance-redshift relation before cosmic noon. We describe how the large-scale correlations in the Ly$\alpha$ forest can be understood as an expansion in cumulants of the optical depth field, which itself can be related to the density field by a bias expansion. This provides a direct connection between the observable and the statistics of the matter fluctuations which can be computed in a systematic manner. We discuss the way in which complex, small-scale physics enters the predictions, the origin of the much-discussed velocity bias and the `renormalization' of the large-scale bias coefficients. Our calculations are within the context of perturbation theory, but we also make contact with earlier work using the peak-background split. Using the structure of the equations of motion we demonstrate, to all orders in perturbation theory, that the large-scale flux power spectrum becomes the linear spectrum times the square of a quadratic in the cosine of the angle to the line of sight. Unlike the case of galaxies, both the isotropic and anisotropic pieces receive contributions from small-scale physics.
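The all-orders large-scale limit stated above can be written compactly. The notation below is the standard Kaiser-like convention, not necessarily the paper's own symbols:

```latex
% Large-scale limit of the Ly$\alpha$ flux power spectrum: the linear matter
% spectrum times the square of a quadratic in $\mu$, the cosine of the angle
% between $\mathbf{k}$ and the line of sight. Here $b_F$ is the large-scale
% flux bias, $b_\eta$ the velocity-gradient bias, and $f$ the growth rate.
P_F(k,\mu) \xrightarrow{\;k\to 0\;} \left(b_F + b_\eta f\,\mu^2\right)^{2} P_{\rm lin}(k)
```

The statement that both pieces receive small-scale contributions means that, unlike galaxy bias, neither $b_F$ nor $b_\eta$ is protected from renormalization by small-scale physics.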
When engaging in argumentative discourse, skilled human debaters tailor claims to the beliefs of the audience, to construct effective arguments. Recently, the field of computational argumentation witnessed extensive effort to address the automatic generation of arguments. However, existing approaches do not perform any audience-specific adaptation. In this work, we aim to bridge this gap by studying the task of belief-based claim generation: Given a controversial topic and a set of beliefs, generate an argumentative claim tailored to the beliefs. To tackle this task, we model people's prior beliefs through their stances on controversial topics and extend state-of-the-art text generation models to generate claims conditioned on the beliefs. Our automatic evaluation confirms the ability of our approach to adapt claims to a set of given beliefs. In a manual study, we additionally evaluate the generated claims in terms of informativeness and their likelihood to be uttered by someone with a respective belief. Our results reveal the limitations of modeling users' beliefs based on their stances, but demonstrate the potential of encoding beliefs into argumentative texts, laying the ground for future exploration of audience reach.
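One simple way to condition a generator on stance-based beliefs, sketched below, is to serialize the stances into the encoder input; the control-token format and model are illustrative assumptions, not the paper's exact setup:

```python
# Sketch: belief-conditioned claim generation by serializing a user's stances
# on controversial topics into the prompt of a seq2seq model.
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("t5-small")
model = AutoModelForSeq2SeqLM.from_pretrained("t5-small")

topic = "school uniforms"
beliefs = {"abortion": "con", "gun control": "pro"}  # stances as a belief proxy
belief_str = " ; ".join(f"{t}={s}" for t, s in beliefs.items())
prompt = f"generate claim: topic: {topic} beliefs: {belief_str}"

inputs = tokenizer(prompt, return_tensors="pt")
claim = model.generate(**inputs, max_length=48, num_beams=4)
print(tokenizer.decode(claim[0], skip_special_tokens=True))
```

Fine-tuning such a model on (beliefs, topic, claim) triples is what teaches it to exploit the belief tokens rather than ignore them.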
Engineered swift equilibration (ESE) is a class of driving protocols that enforce an equilibrium distribution with respect to external control parameters at the beginning and end of rapid state transformations of open, classical non-equilibrium systems. ESE protocols have previously been derived and experimentally realized for Brownian particles in simple, one-dimensional, time-varying trapping potentials; one recent study considered ESE in two-dimensional Euclidean configuration space. Here we extend the ESE framework to generic, overdamped Brownian systems in arbitrary curved configuration space and illustrate our results with specific examples not amenable to previous techniques. Our approach may be used to impose the necessary dynamics to control the full temporal configurational distribution in a wide variety of experimentally realizable settings.
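To make the inverse-engineering logic concrete, here is the schematic one-dimensional, flat-space version of the problem; the paper's contribution is the generalization of this construction to arbitrary curved configuration space:

```latex
% Overdamped Fokker-Planck equation for the distribution \rho(x,t), with
% friction coefficient \gamma, temperature T, and potential U(x,t):
\partial_t \rho \;=\; \frac{1}{\gamma}\,
    \partial_x\!\left(\rho\,\partial_x U + k_B T\,\partial_x \rho\right)
% ESE prescribes a \rho(x,t) that equals the Boltzmann distribution of the
% control parameters at t = 0 and t = t_f, then inverts this equation for
% the time-dependent driving potential U(x,t) that realizes it.
```

Because the target distribution is fixed in advance, the protocol reaches equilibrium at $t_f$ regardless of how fast the transformation is driven.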
We present the one-loop 2-point function of biased tracers in redshift space computed with Lagrangian perturbation theory, including a full resummation of both long-wavelength (infrared) displacements and associated velocities. The resulting model accurately predicts the power spectrum and correlation function of halos and mock galaxies from two different sets of N-body simulations at the percent level for quasi-linear scales, including the damping of the baryon acoustic oscillation signal due to the bulk motions of galaxies. We compare this full resummation with other, approximate, techniques including the moment expansion and Gaussian streaming model. We discuss infrared resummation in detail and compare our Lagrangian formulation with the Eulerian theory augmented by an infrared resummation based on splitting the input power spectrum into wiggle and no-wiggle components. We show that our model is able to recover unbiased cosmological parameters in mock data encompassing a volume much larger than what will be available to future galaxy surveys. We demonstrate how to efficiently compute the resulting expressions numerically, making available a fast Python code capable of rapidly computing these statistics in both configuration and Fourier space.
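For orientation, the wiggle/no-wiggle infrared resummation mentioned above has a very simple leading-order form: split the linear spectrum into a smooth part and a BAO (wiggle) part, and exponentially damp the wiggles by the infrared displacement dispersion. The sketch below uses a Savitzky-Golay smoothing and a toy spectrum, both assumptions for illustration rather than the paper's implementation:

```python
# Sketch: leading-order wiggle/no-wiggle IR resummation.
# sigma2 is the IR displacement dispersion; in practice it is an integral
# over the no-wiggle spectrum, here it is just a number.
import numpy as np
from scipy.signal import savgol_filter

def ir_resummed_pk(k, p_lin, sigma2):
    # No-wiggle part: smooth log P along the (log-spaced) k grid.
    p_nw = np.exp(savgol_filter(np.log(p_lin), window_length=51, polyorder=3))
    p_w = p_lin - p_nw                        # wiggle (BAO) component
    return p_nw + np.exp(-k**2 * sigma2) * p_w

k = np.logspace(-3, 0, 400)                   # h/Mpc
p_lin = 1e4 * k / (1 + (k / 0.02)**2)**2 * (1 + 0.05 * np.sin(k / 0.01))  # toy spectrum
print(ir_resummed_pk(k, p_lin, sigma2=30.0)[:3])
```

The Lagrangian resummation of the paper goes beyond this: it keeps the full exponential dependence on long-wavelength displacements and velocities rather than damping only the wiggle component.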
Media organizations bear great responsibility because of their considerable influence on shaping beliefs and positions of our society. Any form of media can contain overly biased content, e.g., by reporting on political events in a selective or incomplete manner. A relevant question is hence whether and how such forms of imbalanced news coverage can be exposed. The research presented in this paper addresses not only the automatic detection of bias but goes one step further in that it explores how political bias and unfairness are manifested linguistically. In this regard, we utilize a new corpus of 6964 news articles with labels derived from adfontesmedia.com and develop a neural model for bias assessment. By analyzing this model on article excerpts, we find insightful bias patterns at different levels of text granularity, from single words to the whole article discourse.
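A neural bias-assessment model of the kind described can be set up, at its simplest, as a fine-tuned text classifier; the backbone, label set, and hyperparameters below are assumptions for illustration and need not match the paper's architecture:

```python
# Sketch: fine-tuning a pretrained encoder as an article-level bias classifier.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2)  # biased vs. not biased (assumed labels)

texts = ["Example article excerpt ...", "Another excerpt ..."]
labels = torch.tensor([1, 0])
batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")

out = model(**batch, labels=labels)   # returns cross-entropy loss and logits
out.loss.backward()                   # gradients for one training step
print(out.logits.softmax(-1))
```

Probing such a model on excerpts of varying length is one way to surface the word-level versus discourse-level bias patterns the paper analyzes.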
Media plays an important role in shaping public opinion. Biased media can influence people in undesirable directions and hence should be unmasked as such. We observe that feature-based and neural text classification approaches which rely only on the distribution of low-level lexical information fail to detect media bias. This weakness becomes most noticeable for articles on new events, where words appear in new contexts and hence their bias predictiveness is unclear. In this paper, we therefore study how second-order information about biased statements in an article helps to improve detection effectiveness. In particular, we utilize the probability distributions of the frequency, positions, and sequential order of lexical and informational sentence-level bias in a Gaussian Mixture Model. On an existing media bias dataset, we find that the frequency and positions of biased statements strongly impact article-level bias, whereas their exact sequential order is secondary. Using a standard model for sentence-level bias detection, we provide empirical evidence that article-level bias detectors that use second-order information clearly outperform those without.
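The second-order idea can be sketched as follows: reduce each article to statistics of where and how often biased sentences occur, and model those statistics with a Gaussian mixture. The feature design below follows the abstract's description (frequency and positions) but is an illustrative assumption, not the paper's exact parameterization:

```python
# Sketch: second-order article features from sentence-level bias labels,
# modeled with a Gaussian Mixture Model.
import numpy as np
from sklearn.mixture import GaussianMixture

def article_features(sentence_bias):                 # 0/1 bias label per sentence
    s = np.asarray(sentence_bias, dtype=float)
    pos = np.flatnonzero(s) / max(len(s) - 1, 1)     # relative positions of bias
    return [s.mean(),                                # frequency of biased sentences
            pos.mean() if pos.size else 0.0,         # mean position
            pos.std() if pos.size else 0.0]          # spread of positions

X = np.array([article_features(a) for a in [
    [1, 0, 1, 1, 0], [0, 0, 1, 0, 0, 0], [1, 1, 0, 1, 1]]])
gmm = GaussianMixture(n_components=2, random_state=0).fit(X)
print(gmm.predict(X))  # mixture assignment usable as a second-order signal
```

Sequential-order information would enter as additional features; the paper's finding is that frequency and position carry most of the signal.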
Traffic evacuation plays a critical role in saving lives in devastating disasters such as hurricanes, wildfires, floods, earthquakes, etc. An ability to evaluate evacuation plans in advance for these rare events, including identifying traffic flow bottlenecks, improving traffic management policies, and understanding the robustness of the traffic management policy, is critical for emergency management. Given the rareness of such events and the corresponding lack of real data, traffic simulation provides a flexible and versatile approach for such scenarios, and furthermore allows dynamic interaction with the simulated evacuation. In this paper, we build a traffic simulation pipeline to explore the above problems, covering many aspects of evacuation, including map creation, demand generation, vehicle behavior, bottleneck identification, traffic management policy improvement, and results analysis. We apply the pipeline to two case studies in California. The first is Paradise, which was destroyed by a large wildfire in 2018 and experienced catastrophic traffic jams during the evacuation. The second is Mill Valley, which has a high risk of wildfire and potential traffic issues since the city is situated in a narrow valley.
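The dynamic interaction with a simulated evacuation can be illustrated with a step-by-step control loop over a microscopic traffic simulator. The sketch below uses SUMO's TraCI API as one common choice (the abstract does not name the simulator, and the config file and thresholds are placeholders):

```python
# Sketch: stepping a SUMO evacuation scenario via TraCI and flagging edges
# that stay near-stopped as bottleneck candidates. The paper's pipeline adds
# map creation, demand generation, and policy evaluation around such a loop.
import traci

traci.start(["sumo", "-c", "evacuation.sumocfg"])   # placeholder scenario config
congested = {}
while traci.simulation.getMinExpectedNumber() > 0:  # vehicles still en route
    traci.simulationStep()
    for edge in traci.edge.getIDList():
        if traci.edge.getLastStepMeanSpeed(edge) < 1.0:   # mean speed in m/s
            congested[edge] = congested.get(edge, 0) + 1
traci.close()
print(sorted(congested, key=congested.get, reverse=True)[:5])
```

Running the same loop under alternative management policies (contraflow, staged departures) is how candidate plans can be compared before a real event.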