In contrast to their word- or sentence-level counterparts, character embeddings are still poorly understood. We aim to close this gap with an in-depth study of English character embeddings. For this, we use resources from research on grapheme-color synesthesia -- a neuropsychological phenomenon in which letters are associated with colors -- which give us insight into which characters are similar for synesthetes and how characters are organized in color space. Comparing 10 different character embeddings, we ask: How similar are character embeddings to a synesthete's perception of characters? And how similar are character embeddings extracted from different models? We find that LSTMs agree with humans more than transformers do. Comparing across tasks, grapheme-to-phoneme conversion results in the most human-like character embeddings. Finally, ELMo embeddings differ from both humans and other models.
As the number of submissions to conferences grows quickly, the task of assessing the quality of academic papers automatically, convincingly, and with high accuracy attracts increasing attention. We argue that studying interpretable dimensions of thes
We explore how the expulsion of gas from star-cluster-forming cloud cores due to supernova explosions affects the shape of the initial cluster mass function, that is, the mass function of star clusters once the effects of gas expulsion are over. We demon
Combining insights from both the effective field theory of quantum gravity and black hole thermodynamics, we derive two novel consistency relations to be satisfied by any quantum theory of gravity. First, we show that a particular combination of the
Using an artificial neural network we explore the parameter space of supergravity grand unified models consistent with the combined Fermilab E989 and Brookhaven E821 data on $(g-2)_\mu$. The analysis indicates that the region favored by the data is t
When parsing morphologically-rich languages with neural models, it is beneficial to model input at the character level, and it has been claimed that this is because character-level models learn morphology. We test these claims by comparing character-