
A Support Tool for Tagset Mapping

Posted by: Simone Teufel
Publication date: 1995
Research field: Informatics Engineering
Language: English
Author: Simone Teufel





Many different tagsets are used in existing corpora; these tagsets vary according to the objectives of specific projects (which may be as far apart as robust parsing vs. spelling correction). In many situations, however, one would like to have uniform access to the linguistic information encoded in corpus annotations without having to know the classification schemes in detail. This paper describes a tool which maps unstructured morphosyntactic tags to a constraint-based, typed, configurable specification language, a "standard tagset". The mapping relies on a manually written set of mapping rules, which is automatically checked for consistency. In certain cases, unsharp mappings are unavoidable, and noise, i.e. groups of word forms not conforming to the specification, will appear in the output of the mapping. The system automatically detects such noise and informs the user about it. The tool has been tested with rules for the UPenn tagset [up] and the SUSANNE tagset [garside], in the framework of the EAGLES (LRE project EAGLES, cf. [eagles]) validation phase for standardised tagsets for European languages.
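To make the mapping idea concrete, the following is a minimal, hypothetical Python sketch. It does not reproduce the paper's actual rule syntax or specification language; the rule table, feature names, and helper functions are illustrative assumptions. It only shows the general pattern the abstract describes: manually written rules map source tags to typed feature bundles, the rules are checked against a specification for consistency, and word forms with no conforming mapping are reported as noise.

# Hypothetical sketch of tagset mapping with noise detection.
# Rule format and feature names are illustrative, not the paper's.

# Manually written mapping rules: source tag -> typed feature bundle.
MAPPING_RULES = {
    "NN":  {"cat": "noun", "number": "singular"},
    "NNS": {"cat": "noun", "number": "plural"},
    "VBD": {"cat": "verb", "tense": "past"},
    "JJ":  {"cat": "adjective"},
}

# A toy "standard tagset" specification: which features each category allows.
SPEC = {
    "noun":      {"cat", "number"},
    "verb":      {"cat", "tense"},
    "adjective": {"cat"},
}

def check_consistency(rules, spec):
    """Reject rules whose feature bundle is not licensed by the specification."""
    for tag, features in rules.items():
        allowed = spec.get(features.get("cat"), set())
        if not set(features) <= allowed:
            raise ValueError(f"Rule for {tag!r} uses features outside the spec")

def map_corpus(tagged_tokens, rules):
    """Map (word, tag) pairs; word forms with no rule are collected as noise."""
    mapped, noise = [], []
    for word, tag in tagged_tokens:
        if tag in rules:
            mapped.append((word, rules[tag]))
        else:
            noise.append((word, tag))
    return mapped, noise

if __name__ == "__main__":
    check_consistency(MAPPING_RULES, SPEC)
    tokens = [("dogs", "NNS"), ("barked", "VBD"), ("loudly", "RB")]
    mapped, noise = map_corpus(tokens, MAPPING_RULES)
    print("mapped:", mapped)
    print("noise (no conforming mapping):", noise)  # [("loudly", "RB")]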




Read also

The ALMA Observation Support Tool (OST) is an ALMA simulator which is interacted with solely via a standard web browser. It is aimed at users who may or may not be experts in interferometry, or those who do not wish to familiarise themselves with the simulation components of a data reduction package. It has been designed to offer full imaging simulation capability for an arbitrary ALMA observation while maintaining the accessibility of other online tools such as the ALMA Sensitivity Calculator. Simulation jobs are defined by selecting and entering options on a standard web form. The user can specify the standard parameters that would need to be considered for an ALMA observation (e.g. pointing direction, frequency set up, duration), and there is also the option to upload arbitrary sky models in FITS format. Once submitted, jobs are sequentially processed by a remote server running a CASA-based back-end system. The user is notified by email when the job is complete, and directed to a standard web page which contains the results of the simulation and a range of downloadable data products. The system is currently hosted by the UK ALMA Regional Centre, and can be accessed by directing a web browser to http://almaost.jb.man.ac.uk.
In this work, we present a web-based annotation and querying tool Sangrahaka. It annotates entities and relationships from text corpora and constructs a knowledge graph (KG). The KG is queried using templatized natural language queries. The application is language and corpus agnostic, but can be tuned for special needs of a specific language or a corpus. A customized version of the framework has been used in two annotation tasks. The application is available for download and installation. Besides having a user-friendly interface, it is fast, supports customization, and is fault tolerant on both client and server side. The code is available at https://github.com/hrishikeshrt/sangrahaka and the presentation with a demo is available at https://youtu.be/nw9GFLVZMMo.
Mutation testing can be used to assess the fault-detection capabilities of a given test suite. To this aim, two characteristics of mutation testing frameworks are of paramount importance: (i) they should generate mutants that are representative of real faults; and (ii) they should provide a complete tool chain able to automatically generate, inject, and test the mutants. To address the first point, we recently proposed an approach using a Recurrent Neural Network Encoder-Decoder architecture to learn mutants from ~787k faults mined from real programs. The empirical evaluation of this approach confirmed its ability to generate mutants representative of real faults. In this paper, we address the second point, presenting DeepMutation, a tool wrapping our deep learning model into a fully automated tool chain able to generate, inject, and test mutants learned from real faults. Video: https://sites.google.com/view/learning-mutation/deepmutation
The design and development process for Internet of Things (IoT) applications is more complicated than for desktop, mobile, or web applications. IoT applications require both software and hardware to work together across multiple different types of nodes (e.g., microcontrollers, system-on-chips, mobile phones, miniaturised single-board computers, and cloud platforms) with different capabilities under different conditions. IoT applications typically collect and analyse personal data that can be used to derive sensitive information about individuals. Without proper privacy protections in place, IoT applications could lead to serious privacy violations. Thus far, privacy concerns have not been explicitly considered in software engineering processes when designing and developing IoT applications, partly due to a lack of tools, technologies, and guidance. This paper presents a research vision that argues the importance of developing a privacy-aware IoT application design tool to address the challenges mentioned above. This tool should not only transform IoT application designs into privacy-aware application designs but also validate and verify them. First, we outline how this proposed tool should work in practice and its core functionalities. Then, we identify research challenges and potential directions towards developing the proposed tool. We anticipate that this proposed tool will save many engineering hours which engineers would otherwise need to spend on developing privacy expertise and applying it. We also highlight the usefulness of this tool towards privacy education and privacy compliance.
Deep learning applications in shaping ad hoc planning proposals are limited by the difficulty in integrating professional knowledge about cities with artificial intelligence. We propose a novel, complementary use of deep neural networks and planning guidance to automate street network generation that can be context-aware, example-based and user-guided. The model tests suggest that the incorporation of planning knowledge (e.g., road junctions and neighborhood types) in the model training leads to a more realistic prediction of street configurations. Furthermore, the new tool provides both professional and lay users an opportunity to systematically and intuitively explore benchmark proposals for comparisons and further evaluations.
