
Extracting Business Process Models from Natural Language Texts

Extracting Business Process Models from Texts

Publication date: 2017
Research language: Arabic

In our work, we chose to follow a semantic-transfer-based approach. Our approach consists of two main phases. The first phase, the Natural Language Analysis phase, aims to analyze the text and extract the required knowledge from it. In addition to the syntactic analysis results, one of the main outputs of this phase is a concept map that summarizes the concepts of the related domain and the relationships between these concepts.
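As a rough illustration of what such a concept-map extraction step can look like, here is a minimal sketch: it is not the paper's pipeline (which targets Arabic text), and the spaCy English model and the subject-verb-object heuristic are assumptions made purely for illustration.

```python
# Minimal sketch: build a concept map as subject-[relation]->object triples
# from dependency parses. Assumes spaCy's English model "en_core_web_sm";
# the paper's actual implementation (for Arabic) is not reproduced here.
from collections import defaultdict
import spacy

nlp = spacy.load("en_core_web_sm")

def extract_concept_map(text):
    """Return a dict mapping (subject, object) concept pairs to relation verbs."""
    concept_map = defaultdict(set)
    doc = nlp(text)
    for token in doc:
        # A verb with an explicit subject and object yields one relation.
        if token.pos_ == "VERB":
            subjects = [c for c in token.children if c.dep_ in ("nsubj", "nsubjpass")]
            objects = [c for c in token.children if c.dep_ in ("dobj", "obj")]
            for s in subjects:
                for o in objects:
                    concept_map[(s.lemma_, o.lemma_)].add(token.lemma_)
    return concept_map

if __name__ == "__main__":
    sample = "The clerk checks the invoice. The system archives the invoice."
    for (subj, obj), relations in extract_concept_map(sample).items():
        print(f"{subj} -[{', '.join(relations)}]-> {obj}")
```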

Related research

Various methods have been developed to measure the location of physical objects on a landscape with high positional accuracy. A method that has been gaining popularity is Airborne Light Detection and Ranging (LiDAR). LiDAR works by scanning a landscape (the combination of ground, buildings, vegetation, etc.) in multiple passes. In each scan (pass), pulses of laser light are emitted from an airborne platform and their return time is measured, enabling the range from the point of emission to the landscape to be determined. The product of airborne laser scanning (ALS) is a cloud of points located in 3D space. ALS is capable of delivering very dense and accurate point clouds that represent the landscape in a relatively short time. However, despite this high positional accuracy, the automatic detection and interpretation of individual objects in landscapes remains a challenge. An example of such a challenge is the classification of the point clouds produced by ALS: the points are first assigned as either object points or bare-ground points, and the points labeled object points are then further classified as either buildings or vegetation. Because LiDAR is a highly promising measurement technique, research has been conducted to automate the detection of bare ground, buildings, and vegetation in LiDAR point clouds. In this research, we describe a new automated scheme that uses the so-called "Edge Topology based Iterative Segmentation" (ETIS) model to classify LiDAR points as ground points and object points. First, ground seed points are selected based on edge topology and an initial DTM (digital terrain model) is constructed; the second step is an iterative densification of the DTM using a point-cloud segmentation method based on a local slope parameter. General ground-point filtering parameters were used in this method, instead of scene-wise optimization of the parameters, so that many groups of benchmark datasets could be processed without changing the threshold values. Data provided by the International Society for Photogrammetry and Remote Sensing (ISPRS) commission were used to evaluate the performance of ETIS, and the new method was also tested against 16 other published filtering methods. The results indicate that the proposed method is capable of producing a high-fidelity terrain model.
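For intuition, the densification step described above resembles classic slope-based progressive ground filtering. The sketch below is not the authors' ETIS code; the grid cell size, slope threshold, and iteration count are illustrative assumptions.

```python
# Hedged sketch of slope-based iterative ground filtering, in the spirit of
# the seed-then-densify scheme described above. Not the ETIS implementation.
import numpy as np
from scipy.spatial import cKDTree

def filter_ground(points, cell=10.0, max_slope=0.3, iterations=5):
    """points: (N, 3) array of x, y, z. Returns a boolean ground mask."""
    xy, z = points[:, :2], points[:, 2]
    # Seed selection: take the lowest point in each coarse grid cell.
    keys = np.floor(xy / cell).astype(int)
    _, inverse = np.unique(keys, axis=0, return_inverse=True)
    ground = np.zeros(len(points), dtype=bool)
    for cell_id in np.unique(inverse):
        idx = np.flatnonzero(inverse == cell_id)
        ground[idx[np.argmin(z[idx])]] = True
    # Iterative densification: accept points whose slope to the nearest
    # current ground point stays below the threshold.
    for _ in range(iterations):
        tree = cKDTree(points[ground][:, :2])
        dist, nearest = tree.query(points[~ground][:, :2])
        dz = np.abs(points[~ground][:, 2] - points[ground][nearest, 2])
        accept = dz / np.maximum(dist, 1e-9) < max_slope
        candidates = np.flatnonzero(~ground)
        ground[candidates[accept]] = True
    return ground
```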
In this research, the total content of phenols and flavonoids in the peels of some Syrian citrus fruits was determined, their total antioxidant activity was studied, and the effect of changing the extraction method on this activity and content was examined.
Using a corpus of compiled codes from U.S. states containing labeled tax-law sections, we train text classifiers to automatically tag tax-law documents and, further, to identify the associated revenue source (e.g. income, property, or sales). After evaluating classifier performance on held-out test data, we apply the classifiers to a historical corpus of U.S. state legislation to extract the flow of relevant laws over the years 1910 through 2010. We document that the classifiers are effective in the historical corpus, for example by automatically detecting establishments of state personal income taxes. The trained models with replication code are published at https://github.com/luyang521/tax-classification.
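The study's replication code lives at the URL above; as a hedged illustration of the general approach, a minimal revenue-source classification pipeline could look like the following (the example sections and labels are invented for the sketch, not taken from the corpus):

```python
# Toy sketch of a text classifier for tagging tax-law sections by revenue
# source. Illustrative only; see the cited GitHub repo for the real models.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical labeled sections: section text plus revenue-source label.
docs = [
    "a tax is hereby imposed on the net income of every resident",
    "an annual tax on real property at its assessed valuation",
    "a tax upon retail sales of tangible personal property",
]
labels = ["income", "property", "sales"]

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
clf.fit(docs, labels)
print(clf.predict(["tax levied on gross retail sales"]))  # e.g. ['sales']
```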
Pretrained language models like BERT have advanced the state of the art for many NLP tasks. For resource-rich languages, one can choose between a number of language-specific models, while multilingual models are also worth considering. These models are well known for their crosslingual performance, but have also shown competitive in-language performance on some tasks. We consider monolingual and multilingual models from the perspective of historical texts, and in particular for texts enriched with editorial notes: how do language models deal with the historical and editorial content in these texts? We present a new Named Entity Recognition dataset for Dutch based on 17th- and 18th-century United East India Company (VOC) reports extended with modern editorial notes. Our experiments with multilingual and Dutch pretrained language models confirm the crosslingual abilities of multilingual models while showing that all language models can leverage mixed-variant data. In particular, language models successfully incorporate notes for the prediction of entities in historical texts. We also find that multilingual models outperform monolingual models on our data, but that this superiority is linked to the task at hand: multilingual models lose their advantage when confronted with more semantic tasks.
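As a small illustration of applying a pretrained multilingual model to such mixed historical and editorial text, the sketch below uses a publicly available multilingual NER checkpoint; it is not the model or dataset from the study.

```python
# Minimal sketch of multilingual NER on historical Dutch text. The
# checkpoint below is a public multilingual NER model chosen for
# illustration, not the one evaluated in the study.
from transformers import pipeline

ner = pipeline("token-classification",
               model="Davlan/bert-base-multilingual-cased-ner-hrl",
               aggregation_strategy="simple")

text = "De Verenigde Oostindische Compagnie zond schepen naar Batavia."
for entity in ner(text):
    print(entity["entity_group"], entity["word"], round(entity["score"], 2))
```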
Automatic detection of the Myers-Briggs Type Indicator (MBTI) from short posts has attracted noticeable attention in the last few years. Recent studies have shown that this is quite a difficult task, especially on commonly used Twitter data. Obtaining MBTI labels is also difficult, as human annotation requires trained psychologists, and the automatic way of obtaining them is through long questionnaires of questionable usability for the task. In this paper, we present a method for collecting reliable MBTI labels via only four carefully selected questions that can be applied to any type of textual data.
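The paper's actual four questions and their scoring are not reproduced here. Purely as a speculative sketch, if each question were mapped to one of the four MBTI dichotomies (an assumption on our part), label collection could reduce to:

```python
# Speculative sketch: map four yes/no answers onto the four MBTI
# dichotomies. The real questions and scoring in the paper may differ.
DIMENSIONS = [("E", "I"), ("S", "N"), ("T", "F"), ("J", "P")]

def mbti_from_answers(answers):
    """answers: four booleans, True selecting the first pole of each pair."""
    return "".join(first if pick else second
                   for (first, second), pick in zip(DIMENSIONS, answers))

print(mbti_from_answers([True, False, True, False]))  # -> "ENTP"
```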
