Sentence weighting is a simple and powerful domain adaptation technique. We carry out domain classification for computing sentence weights with 1) language model cross-entropy difference, 2) a convolutional neural network, and 3) a Recursive Neural Tensor Network. We compare these approaches with regard to domain classification accuracy and study the posterior probability distributions. Then we carry out NMT experiments in the scenario where we have no in-domain parallel corpora and only very limited in-domain monolingual corpora. Here we use the domain classifier to reweight the sentences of our out-of-domain training corpus. This leads to improvements of up to 2.1 BLEU for German-to-English translation.
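The first scoring method above, language model cross-entropy difference (Moore–Lewis-style scoring), can be sketched as follows. The unigram LMs, toy corpora, and add-alpha smoothing are illustrative assumptions, not the paper's actual setup:

```python
import math
from collections import Counter

def unigram_lm(tokens, vocab_size, alpha=1.0):
    """Add-alpha smoothed unigram language model, returned as p(word)."""
    counts, total = Counter(tokens), len(tokens)
    return lambda w: (counts[w] + alpha) / (total + alpha * vocab_size)

def cross_entropy(sentence, lm):
    """Per-word cross entropy (bits) of a sentence under an LM."""
    return -sum(math.log2(lm(w)) for w in sentence) / len(sentence)

def ce_difference(sentence, in_lm, out_lm):
    """H_in(s) - H_out(s): lower means the sentence looks more in-domain."""
    return cross_entropy(sentence, in_lm) - cross_entropy(sentence, out_lm)

# toy in-/out-of-domain corpora (illustrative only)
in_dom = "the patient received a dose of the drug".split()
out_dom = "the market rallied as stocks rose sharply".split()
V = len(set(in_dom) | set(out_dom))
in_lm, out_lm = unigram_lm(in_dom, V), unigram_lm(out_dom, V)

# a medical-sounding sentence scores lower (more in-domain) than a finance one
assert ce_difference("the drug dose".split(), in_lm, out_lm) \
     < ce_difference("the stocks rose".split(), in_lm, out_lm)
```

A sentence weight can then be derived by passing the negated score through a monotone function, e.g. `exp(-score)`, so more in-domain-looking sentences receive larger weights.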
Abstractive summarization, the task of generating a concise summary of input documents, requires: (1) reasoning over the source document to determine the salient pieces of information scattered across the long document, and (2) composing a cohesive text by reconstructing these salient facts into a shorter summary that faithfully reflects the complex relations connecting these facts. In this paper, we adapt TP-Transformer (Schlag et al., 2019), an architecture that enriches the original Transformer (Vaswani et al., 2017) with the explicitly compositional Tensor Product Representation (TPR), for the task of abstractive summarization. The key feature of our model is a structural bias that we introduce by encoding two separate representations for each token, representing the syntactic structure (with role vectors) and the semantic content (with filler vectors) separately. The model then binds the role and filler vectors into the TPR as the layer output. We argue that the structured intermediate representations enable the model to take better control of the contents (salient facts) and structures (the syntax that connects the facts) when generating the summary. Empirically, we show that our TP-Transformer significantly outperforms the Transformer and the original TP-Transformer on several abstractive summarization datasets, based on both automatic and human evaluations. On several syntactic and semantic probing tasks, we demonstrate the emergent structural information in the role vectors, as well as the performance gain brought by the information specificity of the role vectors and the improved syntactic interpretability of the TPR layer outputs. (Code and models are available at https://github.com/jiangycTarheel/TPT-Summ)
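The role/filler binding can be illustrated with the classic outer-product form of the TPR (the TP-Transformer itself uses a learned, per-head binding inside attention; the orthonormal roles and the dimensions below are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
d_filler, n_tokens = 4, 3

fillers = rng.normal(size=(n_tokens, d_filler))  # semantic content vectors
roles = np.eye(n_tokens)                         # orthonormal structural role vectors

# bind: the TPR is the sum of outer products filler_i (x) role_i
tpr = sum(np.outer(f, r) for f, r in zip(fillers, roles))

# unbind: with orthonormal roles, multiplying by a role retrieves its filler
recovered = tpr @ roles[1]
assert np.allclose(recovered, fillers[1])
```

The exact unbinding here relies on the roles being orthonormal; with merely linearly independent roles, unbinding uses the dual basis instead.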
Deep learning has recently achieved considerable success, especially in the field of computer vision. This research aims to describe the classification method applied to a dataset of multiple types of images (Synthetic Aperture Radar (SAR) images and non-SAR images). For this classification, transfer learning was used, followed by fine-tuning. In addition, architectures pre-trained on the well-known ImageNet database were used. The VGG16 model served as a feature extractor, and a new classifier was trained on the extracted features. The input dataset consists of five classes: the SAR image class (houses) and the non-SAR image classes (cats, dogs, horses, and humans). The Convolutional Neural Network (CNN) was chosen for the training process because it yields high accuracy. The final accuracy reached 91.18% across the five classes. The results are discussed in terms of the per-class classification accuracy, in percent: the cats class reached 99.6%, the houses class 100%, and the remaining classes averaged 90% and above.
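The workflow of freezing a pre-trained feature extractor and training only a new classifier head can be sketched schematically. Here a fixed random projection stands in for the VGG16 convolutional base, and the blob data, dimensions, and softmax head are illustrative assumptions, not the paper's pipeline:

```python
import numpy as np

rng = np.random.default_rng(0)

def frozen_features(x):
    """Stand-in for the frozen VGG16 convolutional base: a fixed random
    projection plus ReLU. In the real pipeline these are VGG16 activations."""
    w = np.random.default_rng(42).normal(size=(x.shape[1], 16))
    return np.maximum(x @ w, 0.0)

def train_head(feats, labels, n_classes, lr=0.1, steps=500):
    """Train only the new softmax classifier on the frozen features."""
    w = np.zeros((feats.shape[1], n_classes))
    onehot = np.eye(n_classes)[labels]
    for _ in range(steps):
        logits = feats @ w
        p = np.exp(logits - logits.max(axis=1, keepdims=True))
        p /= p.sum(axis=1, keepdims=True)
        w -= lr * feats.T @ (p - onehot) / len(feats)
    return w

# toy stand-in for the 5-class data (houses/SAR, cats, dogs, horses, humans)
X = rng.normal(size=(200, 8)) + np.repeat(np.eye(5, 8) * 6.0, 40, axis=0)
y = np.repeat(np.arange(5), 40)
feats = frozen_features(X)
head = train_head(feats, y, 5)
accuracy = (np.argmax(feats @ head, axis=1) == y).mean()
assert accuracy > 0.9
```

Because only the small head is trained, this approach needs far less data and compute than training the full network, which is the point of the transfer-learning setup described above.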
Given a heterogeneous social network, can we forecast its future? Can we predict who will start using a given hashtag on Twitter? Can we leverage side information, such as who retweets or follows whom, to improve our membership forecasts? We present TENSORCAST, a novel method that forecasts time-evolving networks more accurately than current state-of-the-art methods by incorporating multiple data sources in coupled tensors. TENSORCAST is (a) scalable, being linearithmic in the number of connections; (b) effective, achieving over 20% higher precision on top-1000 forecasts of community members; (c) general, being applicable to data sources with different structures. We run our method on multiple real-world networks, including DBLP and a Twitter temporal network with over 310 million nonzeros, where we predict the evolution of political hashtag activity.
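The precision-on-top-1000-forecasts metric reduces to standard precision@k; a minimal sketch (the user IDs and the small k are illustrative):

```python
def precision_at_k(predicted_ranking, actual_members, k=1000):
    """Fraction of the top-k forecasted users that actually joined."""
    top_k = predicted_ranking[:k]
    return sum(1 for u in top_k if u in actual_members) / k

# toy example with k=4: 3 of the top-4 forecasts are true members
ranking = ["u7", "u2", "u9", "u4", "u1"]
joined = {"u7", "u9", "u4", "u5"}
assert precision_at_k(ranking, joined, k=4) == 0.75
```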
In this paper, we define tensors and Riemann spaces of fixed curvature, and offer a study of some cases associated with the research topic. The basic aim is to study the relationships that remain valid when one coordinate system is changed to another.
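The coordinate-independence referred to here is captured by the tensor transformation law; for example, a type-(1,1) tensor transforms under a change of coordinates $x \to x'$ as

```latex
T'^{\,i}_{\;j} \;=\; \frac{\partial x'^{\,i}}{\partial x^{k}}\,
                     \frac{\partial x^{l}}{\partial x'^{\,j}}\; T^{k}_{\;l},
```

so any equation built from tensors (e.g. a proportionality between two tensors) holds in every coordinate system if it holds in one.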
The effective differential cross-section of the studied interaction was calculated within and outside the Standard Model. Radiative corrections resulting from the introduction of scalar (S), pseudoscalar (P), and tensor (T) components into the interaction amplitude were calculated in two different ways. It is shown that they depend only on the squares of the coupling constants. The study discusses the robustness of the Standard Model when expanding the amplitude of elastic neutrino-electron scattering, using the experimental values of the coupling constants newly obtained from the TEXONO and LSND experiments.
In this paper we define parabolically Sasakian spaces and find necessary and sufficient conditions for the existence of a geodesic mapping between two Sasakian spaces. We prove that the necessary and sufficient condition for the existence of a geodesic mapping between two Sasakian spaces with equivalent affinors is that they are equidistant. Finally, we find that if geodesic mappings exist between two constant-curvature parabolically Sasakian spaces, then their Ricci tensors are proportional.
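For reference, the classical (Levi-Civita) characterization of a geodesic mapping between two Riemannian spaces relates their Christoffel symbols through a gradient covector $\psi_i$:

```latex
\bar{\Gamma}^{h}_{ij} \;=\; \Gamma^{h}_{ij} \;+\; \delta^{h}_{i}\,\psi_{j} \;+\; \delta^{h}_{j}\,\psi_{i},
\qquad \psi_{i} = \partial_{i}\psi .
```

The mapping is nontrivial (non-affine) precisely when $\psi_i \neq 0$.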
In this paper we:
1) define Riemannian spaces, conformal mappings, Einstein spaces, and Ricci-recurrent Einstein spaces;
2) study conformal mappings between Einstein spaces corresponding to flat surfaces, and Ricci-recurrent Einstein spaces.
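For reference, the two central definitions: a conformal mapping rescales the metric by a positive factor, and an Einstein space is one whose Ricci tensor is proportional to the metric,

```latex
\bar{g}_{ij} = e^{2\sigma(x)}\, g_{ij},
\qquad
R_{ij} = \frac{R}{n}\, g_{ij} \quad (n = \dim),
```

where $\sigma(x)$ is a smooth function and $R$ is the scalar curvature.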
This research studies the distributional solutions of some second-order partial differential equations. We specifically study the distributional solutions of the Laplace equation, the heat equation, the wave equation, and the Schrödinger equation. We introduce the fundamental solutions of these equations and derive the distributional solutions using the concept of convolution of distributions. To that end we use several lemmas and theorems with proofs, especially for the Laplace equation, and we first present some preliminary concepts, definitions, and remarks.
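The convolution method described above can be made concrete for the Laplace equation in $\mathbb{R}^{3}$, whose fundamental solution is classical:

```latex
\Delta E = \delta, \qquad E(x) = -\,\frac{1}{4\pi\,\lvert x\rvert} \quad (x \in \mathbb{R}^{3}),
\qquad u = E * f \ \Longrightarrow\ \Delta u = (\Delta E) * f = \delta * f = f .
```

The same pattern (convolve the fundamental solution with the data) yields the distributional solutions of the other equations listed.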
partial differential equations of second order
Distributions
Tensor product of distributions
Convolution of distributions
Fundamental solution
Distributional solution
In this research, we have calculated the transverse component of energy distortion in the elasticity wave modes of a quantum liquid, using Landau's Fermi-liquid theory and taking into account the effect of the transverse component of an external disturbance on the liquid. We calculated the current density related to this component, and the stress tensor component corresponding to this state. Throughout, we assumed that the temperature is low enough that T ≪ T_F holds, where T_F is the Fermi temperature.
We compared the response of the liquid to the transverse component of the external disturbance with its response to the longitudinal one under the same conditions, by studying the transverse and longitudinal shear moduli (which characterize these responses) as functions of the frequency and wave vector of the external disturbance. We found that in general these responses are different, but that they become equal in a particular case involving the velocity on the Fermi surface, and in this case the hypotheses of the viscoelastic model become valid.