Unsupervised cross-domain dependency parsing aims to accomplish domain adaptation for dependency parsing without using labeled data in the target domain. Existing methods are often of the pseudo-annotation type, which generates data through self-annotation of the base model and performs iterative training. However, these methods fail to consider changes in model structure for domain adaptation. In addition, the structural information contained in the text is not fully exploited. To remedy these drawbacks, we propose a Semantics-Structure Adaptative Dependency Parser (SSADP), which accomplishes unsupervised cross-domain dependency parsing without relying on pseudo-annotation or data selection. In particular, we design two feature extractors to extract semantic and structural features respectively. For each type of feature, a corresponding feature adaptation method is utilized to align the domain distributions, which effectively enhances the unsupervised cross-domain transfer capability of the model. We validate the effectiveness of our model by conducting experiments on CODT and CTB9, and the results demonstrate that our model achieves consistent performance improvements. Besides, we verify the structure transfer ability of the proposed model by introducing the Weisfeiler-Lehman test.
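The abstract does not specify how the feature adaptation is implemented, so the sketch below only illustrates one common way to align source- and target-domain feature distributions: a maximum mean discrepancy (MMD) penalty added to the supervised parsing loss. The function names, the RBF kernel, and the choice of MMD (rather than, say, adversarial alignment) are assumptions for illustration, not the authors' exact method.

```python
# Illustrative sketch (not the authors' exact method): aligning source- and
# target-domain feature distributions with an RBF-kernel MMD penalty in PyTorch.
import torch


def rbf_kernel(x, y, sigma=1.0):
    """Pairwise RBF kernel values between the rows of x and y."""
    sq_dists = torch.cdist(x, y) ** 2              # squared Euclidean distances
    return torch.exp(-sq_dists / (2 * sigma ** 2))


def mmd_loss(source_feats, target_feats, sigma=1.0):
    """A simple (biased) MMD^2 estimate between two batches of features."""
    k_ss = rbf_kernel(source_feats, source_feats, sigma).mean()
    k_tt = rbf_kernel(target_feats, target_feats, sigma).mean()
    k_st = rbf_kernel(source_feats, target_feats, sigma).mean()
    return k_ss + k_tt - 2 * k_st


# Usage: given semantic (or structural) features from both domains, the
# alignment term would be added to the supervised loss on the source domain.
src = torch.randn(32, 256)   # hypothetical source-domain feature batch
tgt = torch.randn(32, 256)   # hypothetical target-domain feature batch
alignment_term = mmd_loss(src, tgt)
```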
Automatic news recommendation has gained much attention from the academic community and industry. Recent studies reveal that the key to this task lies in the effective representation learning of both news and users. Existing works typically encode the news title and content separately while neglecting their semantic interaction, which is inadequate for news text comprehension. Besides, previous models encode user browsing history without leveraging the structural correlation of the user's browsed news to reflect user interests explicitly. In this work, we propose a news recommendation framework consisting of collaborative news encoding (CNE) and structural user encoding (SUE) to enhance news and user representation learning. CNE, equipped with bidirectional LSTMs, encodes news title and content collaboratively with cross-selection and cross-attention modules to learn semantic-interactive news representations. SUE utilizes graph convolutional networks to extract cluster-structural features of user history, followed by intra-cluster and inter-cluster attention modules to learn hierarchical user interest representations. Experimental results on the MIND dataset validate the effectiveness of our model in improving news recommendation performance.
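The exact CNE architecture is not detailed here; as a minimal sketch of the cross-attention idea, the code below lets title tokens attend over content tokens and vice versa, assuming BiLSTM outputs of shape (batch, sequence length, hidden size). Tensor names and dimensions are illustrative assumptions.

```python
# Minimal sketch of cross-attention between title and content encodings.
# Shapes and names are illustrative, not the exact CNE design.
import torch
import torch.nn.functional as F

batch, t_len, c_len, hidden = 8, 12, 40, 256
title_h = torch.randn(batch, t_len, hidden)    # BiLSTM output for titles
content_h = torch.randn(batch, c_len, hidden)  # BiLSTM output for contents

# Title-to-content attention: each title token attends over content tokens.
scores = torch.bmm(title_h, content_h.transpose(1, 2))       # (batch, t_len, c_len)
title_ctx = torch.bmm(F.softmax(scores, dim=-1), content_h)  # content-aware titles

# Content-to-title attention: the same scores, softmaxed over the title axis.
content_ctx = torch.bmm(F.softmax(scores.transpose(1, 2), dim=-1), title_h)

# Interactive representations combine each side with its cross-attended context.
title_inter = torch.cat([title_h, title_ctx], dim=-1)
content_inter = torch.cat([content_h, content_ctx], dim=-1)
```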
Web search is an essential way for humans to obtain information, but it is still a great challenge for machines to understand the contents of web pages. In this paper, we introduce the task of web-based structural reading comprehension. Given a web page and a question about it, the task is to find an answer from the web page. This task requires a system to understand not only the semantics of the text but also the structure of the web page. Moreover, we propose WebSRC, a novel Web-based Structural Reading Comprehension dataset. WebSRC consists of 400K question-answer pairs, which are collected from 6.4K web pages with corresponding HTML source code, screenshots, and metadata. Each question in WebSRC requires a certain structural understanding of a web page to answer, and the answer is either a text span on the web page or yes/no. We evaluate various strong baselines on our dataset to show the difficulty of our task. We also investigate the usefulness of structural information and visual features. Our dataset and baselines are publicly available.
Due to efficient end-to-end training and fluency in generated texts, several encoder-decoder framework-based models have recently been proposed for data-to-text generation. Appropriate encoding of input data is a crucial part of such encoder-decoder models. However, only a few research works have concentrated on proper encoding methods. This paper presents a novel encoder-decoder based data-to-text generation model where the proposed encoder carefully encodes input data according to the underlying structure of the data. The effectiveness of the proposed encoder is evaluated both extrinsically and intrinsically by shuffling the input data without changing its meaning. To select appropriate content information from the encoded data, the proposed model incorporates attention gates in the decoder. Through extensive experiments on the WikiBio and E2E datasets, we show that our model outperforms state-of-the-art models and several standard baseline systems. Analysis of the model through component ablation tests and human evaluation endorses the proposed model as a well-grounded system.
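The paper's exact gate formulation is not given here; the following is a hypothetical sketch of the general idea of an attention gate in the decoder, where a learned sigmoid gate decides how much of the attended context enters the decoder state. The class name, sizes, and mixing rule are assumptions.

```python
# Hypothetical sketch of an "attention gate" in a decoder: a learned sigmoid
# gate interpolates between the attended context and the decoder state.
# Illustrative only; not the paper's exact design.
import torch
import torch.nn as nn


class AttentionGate(nn.Module):
    def __init__(self, hidden_size):
        super().__init__()
        self.gate = nn.Linear(2 * hidden_size, hidden_size)

    def forward(self, decoder_state, context):
        # decoder_state, context: (batch, hidden_size)
        g = torch.sigmoid(self.gate(torch.cat([decoder_state, context], dim=-1)))
        return g * context + (1 - g) * decoder_state


gate = AttentionGate(hidden_size=128)
mixed = gate(torch.randn(4, 128), torch.randn(4, 128))
```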
Probes are models devised to investigate the encoding of knowledge---e.g. syntactic structure---in contextual representations. Probes are often designed for simplicity, which has led to restrictions on probe design that may not allow for the full exploitation of the structure of encoded information; one such restriction is linearity. We examine the case of a structural probe (Hewitt and Manning, 2019), which aims to investigate the encoding of syntactic structure in contextual representations through learning only linear transformations. By observing that the structural probe learns a metric, we are able to kernelize it and develop a novel non-linear variant with an identical number of parameters. We test on 6 languages and find that the radial-basis function (RBF) kernel, in conjunction with regularization, achieves a statistically significant improvement over the baseline in all languages---implying that at least part of the syntactic knowledge is encoded non-linearly. We conclude by discussing how the RBF kernel resembles BERT's self-attention layers and speculate that this resemblance leads to the RBF-based probe's stronger performance.
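To make the kernelization claim concrete: the Hewitt and Manning (2019) probe predicts tree distances from a learned linear map B of the contextual vectors, and the non-linear variant follows by computing the same distance in a kernel-induced feature space. The display below is a reconstruction from the description above using standard kernel identities, not the paper's exact notation.

```latex
% Linear structural probe distance (Hewitt and Manning, 2019):
\[
d_B(\mathbf{h}_i, \mathbf{h}_j)^2
  = \big(B(\mathbf{h}_i - \mathbf{h}_j)\big)^{\top}\big(B(\mathbf{h}_i - \mathbf{h}_j)\big).
\]
% Kernelized distance via the feature map \varphi with kernel k, e.g. the RBF kernel:
\[
d_k(\mathbf{h}_i, \mathbf{h}_j)^2
  = \lVert \varphi(\mathbf{h}_i) - \varphi(\mathbf{h}_j) \rVert^2
  = k(\mathbf{h}_i, \mathbf{h}_i) - 2\,k(\mathbf{h}_i, \mathbf{h}_j) + k(\mathbf{h}_j, \mathbf{h}_j),
\qquad
k_{\mathrm{RBF}}(\mathbf{x}, \mathbf{y})
  = \exp\!\Big(-\tfrac{\lVert \mathbf{x}-\mathbf{y}\rVert^2}{2\sigma^2}\Big).
\]
```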
In this paper, we define an abstract task called structural realization that generates words given a prefix of words and a partial representation of a parse tree. We also present a method for solving instances of this task using a Gated Graph Neural Network (GGNN). We evaluate it with standard accuracy measures, as well as with respect to perplexity, in which its comparison to previous work on language modelling serves to quantify the information added to a lexical selection task by the presence of syntactic knowledge. That the addition of parse-tree-internal nodes to this neural model should improve the model, with respect both to accuracy and to more conventional measures such as perplexity, may seem unsurprising, but previous attempts have not met with nearly as much success. We have also learned that transverse links through the parse tree compromise the model's accuracy at generating adjectival and nominal parts of speech.
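As a minimal sketch of the GGNN machinery referred to above, the snippet below performs one propagation step in the spirit of gated graph neural networks: neighbours send linearly transformed messages and a GRU cell updates each node state. The toy graph, shapes, and module names are assumptions and do not reproduce the paper's model.

```python
# One illustrative GGNN propagation step over a toy partial parse tree.
# Not the paper's exact model; shapes and names are assumptions.
import torch
import torch.nn as nn

num_nodes, hidden = 6, 64
adj = torch.zeros(num_nodes, num_nodes)
adj[0, 1] = adj[1, 2] = adj[2, 3] = 1.0   # toy tree edges
adj = adj + adj.t()                        # treat edges as undirected here

h = torch.randn(num_nodes, hidden)         # initial node states
msg_fn = nn.Linear(hidden, hidden)         # message transformation
gru = nn.GRUCell(hidden, hidden)           # gated state update

messages = adj @ msg_fn(h)                 # aggregate neighbour messages
h = gru(messages, h)                       # one propagation step
```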
In this research I studied structural concepts and how they moved into the Arabic cultural field. I defined these concepts and identified the most important figures who founded them and contributed to their construction as a structural approach. I then traced the ways by which this approach reached modern Arabic criticism, and the role of competence in the process of influence and reception. Perhaps the practical steps I have taken have given the structural approach its clear features in our Arabic criticism. This research is presented in detail through the structural critic Kamal Abu Dib and his structural applications. Because it is impossible to take note of all structural aspects, which resemble an archipelago viewed from above, I tried to study structuralism as a way of thinking and of literary criticism. In particular, structuralism seeks to discover the relationship between the literary system (the text) and the culture of which the text is a part.
In order to build a 3D structural model for the Mamlaht Al-Kom structure in the North Palmyride Chain, the potential reflections were defined, namely the Korushina Anhydrite (K.A) as the cover and the Korushina Dolomite (K.D) as the reservoir. Accordingly, time, velocity, and depth maps for the Korushina Dolomite reflection were constructed. Finally, the 3D structural model for the K.D formation was defined, and the hydrocarbon potential is discussed.
This research aims to produce a diagnosis system for breast cancer using a neural network based on the back propagation algorithm (BPNN) and an Adaptive Neuro-Fuzzy Inference System (ANFIS); both studies were carried out using structural features of biopsies in the Wisconsin Breast Cancer database. Finally, a comparison was made between the two studies of malignant-benign classification of breast masses, which achieved an accuracy of 95.95% with the BPNN and 91.9% with the ANFIS system. These results can be considered very important when compared with research based on image features obtained from various devices such as mammography and magnetic resonance.
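The abstract does not give the network configuration; as an illustrative stand-in for the BPNN study (the ANFIS model is not reproduced), the sketch below trains a backpropagation-based MLP on the scikit-learn copy of the Wisconsin Breast Cancer data. Layer sizes and splits are assumptions, so the resulting accuracy will not match the reported figures.

```python
# Illustrative sketch only: a backpropagation-trained MLP on the Wisconsin
# Breast Cancer data via scikit-learn; not the paper's exact configuration.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0, stratify=y)

model = make_pipeline(
    StandardScaler(),
    MLPClassifier(hidden_layer_sizes=(16,), max_iter=1000, random_state=0),
)
model.fit(X_train, y_train)
print("malignant/benign test accuracy:", model.score(X_test, y_test))
```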
Mn-doped tin oxide transparent conducting thin films were deposited at a substrate temperature of 450°C by the spray pyrolysis method. Structural properties of the films were investigated as a function of Mn-doping level (0, 1, 3, 5, and 7 wt%), while all other deposition parameters, such as substrate temperature, spray rate, carrier gas pressure, and the distance between the spray nozzle and the substrate, were kept constant.