
Extending rational models of communication from beliefs to actions

Published by Theodore Sumers
Publication date: 2021
Research field: Informatics Engineering
Paper language: English





Speakers communicate to influence their partners' beliefs and shape their actions. Belief- and action-based objectives have been explored independently in recent computational models, but it has been challenging to explicitly compare or integrate them. Indeed, we find that they are conflated in standard referential communication tasks. To distinguish these accounts, we introduce a new paradigm called signaling bandits, generalizing classic Lewis signaling games to a multi-armed bandit setting where all targets in the context have some relative value. We develop three speaker models: a belief-oriented speaker with a purely informative objective; an action-oriented speaker with an instrumental objective; and a combined speaker which integrates the two by inducing listener beliefs that generally lead to desirable actions. We then present a series of simulations demonstrating that grounding production choices in future listener actions results in relevance effects and flexible uses of nonliteral language. More broadly, our findings suggest that language games based on richer decision problems are a promising avenue for insight into rational communication.
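
A minimal sketch of the belief-oriented versus action-oriented contrast described above, assuming a toy two-feature bandit, a literal listener that filters weight hypotheses by consistency and then picks the arm with the highest expected value, and illustrative feature values; none of these choices reflect the paper's actual parameterization, and the combined speaker is omitted.

# Hedged sketch: belief- vs. action-oriented speakers in a tiny "signaling bandit".
# The feature space, utterance semantics, and hypothesis grid are illustrative
# assumptions, not the models used in the paper.
import itertools
import numpy as np

ARMS = np.array([[1, 0], [0, 1], [1, 1]])      # arms described by 2 binary features
TRUE_W = np.array([2.0, -1.0])                 # true feature values, known to the speaker

# Hypothesis space: each feature's weight is one of {-1, 0, 2}; uniform prior.
HYPOTHESES = np.array(list(itertools.product([-1.0, 0.0, 2.0], repeat=2)))
PRIOR = np.full(len(HYPOTHESES), 1.0 / len(HYPOTHESES))

# Utterances assert the sign of one feature, e.g. (0, +1) = "feature 0 is good".
UTTERANCES = [(f, sign) for f in range(2) for sign in (+1, -1)]

def listener_posterior(utterance):
    """Literal listener: keep only hypotheses consistent with the asserted sign."""
    f, sign = utterance
    consistent = np.sign(HYPOTHESES[:, f]) == sign
    post = PRIOR * consistent
    return post / post.sum() if post.sum() > 0 else PRIOR

def listener_choice(posterior):
    """The listener picks the arm with highest expected value under its beliefs."""
    expected_w = posterior @ HYPOTHESES
    return int(np.argmax(ARMS @ expected_w))

def belief_utility(utterance):
    """Belief-oriented objective: probability mass the listener puts on the true weights."""
    post = listener_posterior(utterance)
    true_idx = np.where((HYPOTHESES == TRUE_W).all(axis=1))[0][0]
    return post[true_idx]

def action_utility(utterance):
    """Action-oriented objective: true reward of the arm the listener would choose."""
    arm = listener_choice(listener_posterior(utterance))
    return float(ARMS[arm] @ TRUE_W)

print("belief-oriented speaker says:", max(UTTERANCES, key=belief_utility))
print("action-oriented speaker says:", max(UTTERANCES, key=action_utility))

In this toy setting, "feature 0 is good" and "feature 1 is bad" are equally informative about the true weights, so the belief-oriented speaker is indifferent between them (the code breaks the tie arbitrarily), while the action-oriented speaker prefers the latter because it steers the listener toward the highest-value arm.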


Read also

Given a group $\Gamma$ acting on a set $X$, a $k$-coloring $\phi:X\to\{1,\dots,k\}$ of $X$ is distinguishing with respect to $\Gamma$ if the only $\gamma\in\Gamma$ that fixes $\phi$ is the identity action. The distinguishing number of the action of $\Gamma$, denoted $D_{\Gamma}(X)$, is then the smallest positive integer $k$ such that there is a distinguishing $k$-coloring of $X$ with respect to $\Gamma$. This notion has been studied in a number of settings, but by far the largest body of work has been concerned with finding the distinguishing number of the action of the automorphism group of a graph $G$ upon its vertex set, which is referred to as the distinguishing number of $G$. The distinguishing number of a group action is a measure of how difficult it is to break all of the permutations arising from that action. In this paper, we aim to further differentiate the resilience of group actions with the same distinguishing number. In particular, we introduce a precoloring extension framework to address this issue. A set $S \subseteq X$ is a fixing set for $\Gamma$ if for every non-identity element $\gamma \in \Gamma$ there is an element $s \in S$ such that $\gamma(s) \neq s$. The distinguishing extension number $\operatorname{ext}_D(X,\Gamma;k)$ is the minimum number $m$ such that for all fixing sets $W \subseteq X$ with $|W| \geq m$, every $k$-coloring $c : X \setminus W \to [k]$ can be extended to a $k$-coloring that distinguishes $X$. In this paper, we prove that $\operatorname{ext}_D(\mathbb{R},\operatorname{Aut}(\mathbb{R}),2) = 4$, where $\operatorname{Aut}(\mathbb{R})$ is comprised of compositions of translations and reflections. We also consider the distinguishing extension number of the circle and (finite) cycles, obtaining several exact results and bounds.
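
As a concrete illustration of the basic definition (not of the paper's results, which concern infinite actions such as $\operatorname{Aut}(\mathbb{R})$ on $\mathbb{R}$), the distinguishing number of a small graph can be computed by brute force:

# Hedged illustration, not code from the paper: brute-force the distinguishing
# number of a small graph by testing every coloring against every automorphism.
import itertools

def automorphisms(vertices, edges):
    """All vertex permutations that preserve the edge set."""
    edge_set = {frozenset(e) for e in edges}
    for perm in itertools.permutations(vertices):
        mapping = dict(zip(vertices, perm))
        if {frozenset((mapping[u], mapping[v])) for u, v in edges} == edge_set:
            yield mapping

def distinguishing_number(vertices, edges):
    """Smallest k admitting a coloring preserved only by the identity."""
    autos = list(automorphisms(vertices, edges))
    for k in range(1, len(vertices) + 1):
        for colors in itertools.product(range(k), repeat=len(vertices)):
            coloring = dict(zip(vertices, colors))
            preserving = [a for a in autos
                          if all(coloring[a[v]] == coloring[v] for v in vertices)]
            if len(preserving) == 1:   # only the identity fixes this coloring
                return k
    return len(vertices)

# The 4-cycle: its automorphism group has order 8 and no 2-coloring breaks
# every symmetry, so this prints 3.
print(distinguishing_number([0, 1, 2, 3], [(0, 1), (1, 2), (2, 3), (3, 0)]))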
We give an algebro-geometric classification of smooth real affine algebraic surfaces endowed with an effective action of the real algebraic circle group $\mathbb{S}^1$ up to equivariant isomorphisms. As an application, we show that every compact differentiable surface endowed with an action of the circle $S^1$ admits a unique smooth rational real quasi-projective model up to $\mathbb{S}^1$-equivariant birational diffeomorphism.
Language is crucial for human intelligence, but what exactly is its role? We take language to be a part of a system for understanding and communicating about situations. The human ability to understand and communicate about situations emerges gradually from experience and depends on domain-general principles of biological neural networks: connection-based learning, distributed representation, and context-sensitive, mutual constraint satisfaction-based processing. Current artificial language processing systems rely on the same domain-general principles, embodied in artificial neural networks. Indeed, recent progress in this field depends on query-based attention, which extends the ability of these systems to exploit context and has contributed to remarkable breakthroughs. Nevertheless, most current models focus exclusively on language-internal tasks, limiting their ability to perform tasks that depend on understanding situations. These systems also lack memory for the contents of prior situations outside of a fixed contextual span. We describe the organization of the brain's distributed understanding system, which includes a fast learning system that addresses the memory problem. We sketch a framework for future models of understanding, drawing equally on cognitive neuroscience and artificial intelligence and exploiting query-based attention. We highlight relevant current directions and consider further developments needed to fully capture human-level language understanding in a computational system.
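
For reference, the query-based attention mentioned above is standard scaled dot-product attention; the sketch below is a generic NumPy rendering of that operation, not code from the paper:

# Hedged sketch of query-based (scaled dot-product) attention.
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(queries, keys, values):
    """Each query retrieves a context-weighted blend of the values."""
    d = queries.shape[-1]
    scores = queries @ keys.T / np.sqrt(d)   # query-key similarity
    weights = softmax(scores, axis=-1)       # one distribution over the context per query
    return weights @ values                  # context-sensitive summary

rng = np.random.default_rng(0)
ctx = rng.normal(size=(5, 8))                # 5 context tokens, 8-dim representations
print(attention(ctx, ctx, ctx).shape)        # self-attention over the context -> (5, 8)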
We address the task of text translation on the How2 dataset using a state-of-the-art transformer-based multimodal approach. The question we ask ourselves is whether visual features can support the translation process; in particular, given that this is a dataset extracted from videos, we focus on the translation of actions, which we believe are poorly captured in the static image-text datasets currently used for multimodal translation. For that purpose, we extract different types of action features from the videos and carefully investigate how helpful this visual information is by testing whether it can increase translation quality when used in conjunction with (i) the original text and (ii) the original text where action-related words (or all verbs) are masked out. The latter is a simulation that helps us assess the utility of the image in cases where the text does not provide enough context about the action, or in the presence of noise in the input text.
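
A minimal sketch of the verb-masking condition described above, assuming spaCy's part-of-speech tagger as the verb detector; the paper does not specify which tagger or mask token it used:

# Hedged sketch: mask out verbs so the source text no longer describes the action.
# Requires spaCy and its small English model (an assumption, not the paper's setup).
import spacy

nlp = spacy.load("en_core_web_sm")

def mask_verbs(sentence, mask_token="[MASK]"):
    """Replace every token tagged as a verb with a placeholder."""
    doc = nlp(sentence)
    return " ".join(mask_token if tok.pos_ == "VERB" else tok.text for tok in doc)

print(mask_verbs("The man slices the onion and drops it into the pan."))
# e.g. "The man [MASK] the onion and [MASK] it into the pan ."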
Sports competitions are widely researched in computer and social science, with the goal of understanding how players act under uncertainty. While there is an abundance of computational work on player metrics prediction based on past performance, very few attempts to incorporate out-of-game signals have been made. Specifically, it was previously unclear whether linguistic signals gathered from players' interviews can add information which does not appear in performance metrics. To bridge that gap, we define text classification tasks of predicting deviations from the mean in NBA players' in-game actions, which are associated with strategic choices, player behavior and risk, using their choice of language prior to the game. We collected a dataset of transcripts from key NBA players' pre-game interviews and their in-game performance metrics, totalling 5,226 interview-metric pairs. We design neural models for players' action prediction based on increasingly more complex aspects of the language signals in their open-ended interviews. Our models can make their predictions based on the textual signal alone, or on a combination with signals from past-performance metrics. Our text-based models outperform strong baselines trained on performance metrics only, demonstrating the importance of language usage for action prediction. Moreover, the models that employ both textual input and past-performance metrics produce the best results. Finally, as neural networks are notoriously difficult to interpret, we propose a method for gaining further insight into what our models have learned. In particular, we present an LDA-based analysis, where we interpret model predictions in terms of correlated topics. We find that our best-performing textual model is most associated with topics that are intuitively related to each prediction task and that better models yield higher correlation with more informative topics.
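
A minimal sketch of the label construction implied above, where a game is labeled by whether a player's metric falls above that player's own mean; the metric name and data layout are illustrative assumptions, not the authors' dataset:

# Hedged sketch: binary "deviation from the player's own mean" labels.
import pandas as pd

games = pd.DataFrame({
    "player": ["A", "A", "A", "B", "B", "B"],
    "assists": [4, 9, 5, 11, 7, 12],
})

player_mean = games.groupby("player")["assists"].transform("mean")
games["above_mean"] = (games["assists"] > player_mean).astype(int)  # binary target per game
print(games)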
