Automatic identification of multiword expressions (MWEs) is a prerequisite for semantically oriented downstream applications. This task is challenging because MWEs, especially verbal ones (VMWEs), exhibit surface variability. However, this variability is usually more restricted than in regular (non-VMWE) constructions, which leads to various variability profiles. We use this fact to determine the optimal set of features for a supervised classification setting addressing a subproblem of VMWE identification: the identification of occurrences of previously seen VMWEs. Surprisingly, a simple custom frequency-based feature selection method proves more efficient than standard methods such as the chi-squared test, information gain, or decision trees. An SVM classifier using the optimal set of only 6 features outperforms the best systems from a recent shared task on the French seen data.
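For illustration only, the scikit-learn sketch below shows the kind of baseline setting the abstract refers to: selecting a small number of features with standard methods (chi-squared, and mutual information as a stand-in for information gain) and training a linear SVM. The feature matrix, labels, and the choice of 6 features are placeholders; the custom frequency-based selection method itself is not reproduced here.

```python
import numpy as np
from sklearn.feature_selection import SelectKBest, chi2, mutual_info_classif
from sklearn.svm import LinearSVC
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

# Placeholder data: each row would describe one candidate occurrence of a
# previously seen VMWE, labelled 1 if it is an actual VMWE occurrence, else 0.
rng = np.random.default_rng(0)
X = rng.random((200, 20))          # hypothetical non-negative feature matrix
y = rng.integers(0, 2, size=200)   # hypothetical binary labels

for name, selector in [("chi-squared", SelectKBest(chi2, k=6)),
                       ("mutual information", SelectKBest(mutual_info_classif, k=6))]:
    clf = make_pipeline(selector, LinearSVC())
    score = cross_val_score(clf, X, y, cv=5).mean()
    print(f"{name}: {score:.3f}")
```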
We review some aspects, especially those we can tackle analytically, of a minimal model of a closed economy analogous to the kinetic theory model of ideal gases, in which agents exchange wealth amongst themselves such that the total wealth is conserved.
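A minimal simulation sketch of such a conservative exchange model is given below, assuming the simplest rule in which a randomly chosen pair of agents reshuffles its combined wealth by a uniform random fraction; the specific exchange rule analysed in the text may differ.

```python
import numpy as np

# Minimal sketch of a conservative pairwise wealth-exchange ("kinetic") model.
rng = np.random.default_rng(0)
N, steps = 1000, 200_000
w = np.ones(N)                     # every agent starts with unit wealth

for _ in range(steps):
    i, j = rng.choice(N, size=2, replace=False)
    eps = rng.random()             # random fraction of the pair's combined wealth
    total = w[i] + w[j]
    w[i], w[j] = eps * total, (1.0 - eps) * total

print(w.sum())                     # total wealth is conserved (up to rounding): ~N
```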
In cases where both components of a binary system show oscillations, asteroseismology has been proposed as a method to identify the system. For KIC 2568888, observed with $Kepler$, we detect oscillation modes for two red giants in a single power density spectrum.
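As an illustration of the underlying tool, a power density spectrum of a light curve can be estimated with a Lomb-Scargle periodogram. The sketch below uses a synthetic time series and astropy; the injected frequencies are arbitrary and it is not tied to the actual KIC 2568888 data.

```python
import numpy as np
from astropy.timeseries import LombScargle

# Synthetic, unevenly sampled "light curve" with two injected oscillation
# frequencies standing in for modes from two different stars.
rng = np.random.default_rng(42)
t = np.sort(rng.uniform(0, 90, 3000))               # time in days
flux = (np.sin(2 * np.pi * 3.1 * t)                 # mode near 3.1 cycles/day
        + 0.5 * np.sin(2 * np.pi * 7.4 * t)         # mode near 7.4 cycles/day
        + 0.3 * rng.normal(size=t.size))            # noise

# Both sets of peaks appear in the same power spectrum.
frequency, power = LombScargle(t, flux).autopower(maximum_frequency=15)
print(frequency[np.argmax(power)])                  # strongest peak, ~3.1 cycles/day
```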
In recent years we have come to view the outer parts of galaxies as holding vital clues to their formation and evolution. Here we briefly present our results from a complete sample of nearby, late-type spiral galaxies, using data fr
The cosmological missing baryons at $z<1$ most likely hide in the hot ($T \gtrsim 10^{5.5}$ K) phase of the Warm Hot Intergalactic Medium (WHIM). While the hot WHIM is hard to detect due to its high ionisation level, the warm ($T \lesssim 10^{5.5}$ K) phase
Due to the discrete nature of words, language GANs must be optimized from rewards provided by discriminator networks, via reinforcement learning methods. This is a much harder setting than for continuous tasks, which enjoy gradient flows from d
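A minimal PyTorch sketch of this setting is given below, assuming a toy generator and a toy discriminator (both placeholders, not the architectures from the text): tokens are sampled, so gradients cannot flow through them, and the generator is instead updated with a REINFORCE-style policy gradient weighted by the discriminator's reward.

```python
import torch
import torch.nn as nn

# Toy REINFORCE-style generator update from a discriminator reward.
# All sizes and architectures below are placeholder assumptions.
vocab, seq_len, batch, hidden = 100, 10, 16, 32

embed = nn.Embedding(vocab, hidden)
gen_rnn = nn.GRU(hidden, hidden, batch_first=True)
gen_out = nn.Linear(hidden, vocab)
# Stand-in discriminator: maps a token-id sequence to a "realness" score in (0, 1).
disc = nn.Sequential(nn.Linear(seq_len, hidden), nn.ReLU(),
                     nn.Linear(hidden, 1), nn.Sigmoid())

opt = torch.optim.Adam(list(embed.parameters()) + list(gen_rnn.parameters())
                       + list(gen_out.parameters()), lr=1e-3)

start = torch.zeros(batch, 1, dtype=torch.long)      # start-of-sequence token id 0
inp, h = embed(start), None
log_probs, tokens = [], []
for _ in range(seq_len):
    out, h = gen_rnn(inp, h)
    dist = torch.distributions.Categorical(logits=gen_out(out[:, -1]))
    tok = dist.sample()                              # discrete step: no gradient flows here
    log_probs.append(dist.log_prob(tok))
    tokens.append(tok)
    inp = embed(tok).unsqueeze(1)

seq = torch.stack(tokens, dim=1)                     # (batch, seq_len) sampled token ids
reward = disc(seq.float()).squeeze(1).detach()       # scalar reward per sequence

# Policy gradient: increase log-probability of sequences the discriminator rewards.
loss = -(torch.stack(log_probs, dim=1).sum(dim=1) * reward).mean()
opt.zero_grad()
loss.backward()
opt.step()
```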