
An Entropic Associative Memory

Published by Luis A. Pineda
Publication date: 2020
Research field: Informatics Engineering
Paper language: English





Natural memories are associative, declarative and distributed. Symbolic computing memories resemble natural memories in their declarative character, and information can be stored and recovered explicitly; however, they lack the associative and distributed properties of natural memories. Sub-symbolic memories developed within the connectionist or artificial neural networks paradigm are associative and distributed, but are unable to express symbolic structure, and information cannot be stored and retrieved explicitly; hence, they lack the declarative property. To address this dilemma, we use Relational-Indeterminate Computing to model associative memory registers that hold distributed representations of individual objects. This mode of computing has an intrinsic computing entropy, which measures the indeterminacy of representations. This parameter determines the operational characteristics of the memory. Associative registers are embedded in an architecture that maps concrete images expressed in modality-specific buffers into abstract representations, and vice versa, and the memory system as a whole fulfills the three properties of natural memories. The system has been used to model a visual memory holding the representations of hand-written digits, and recognition and recall experiments show that there is a range of entropy values, not too low and not too high, in which associative memory registers have satisfactory performance. The similarity between the cue and the object recovered in memory retrieval operations depends on the entropy of the memory register holding the representation of the corresponding object. The experiments were implemented in a simulation on a standard computer, but a parallel architecture may be built in which the memory operations would take a very small number of computing steps.
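The abstract does not spell out the register operations, so the following Python sketch is only a rough picture of what an associative memory register with a measurable entropy could look like: a binary relation over argument columns and value rows, stored by superposition, recognized by containment, and retrieved by sampling per column. The class and method names, and the concrete entropy formula (mean log-number of alternative values per column), are illustrative assumptions, not the paper's definitions.

import numpy as np

class AssociativeRegister:
    # Illustrative sketch: a memory register as a binary relation with
    # n argument columns and m possible values per column (hypothetical API).

    def __init__(self, n_columns, m_values):
        # relation[v, c] is True when value v is associated with column c.
        self.relation = np.zeros((m_values, n_columns), dtype=bool)

    def register(self, pattern):
        # Store an object (one value per column) by OR-ing it into the relation,
        # so representations of different objects are superposed (distributed).
        for col, val in enumerate(pattern):
            self.relation[val, col] = True

    def recognize(self, cue):
        # Accept the cue if every (value, column) pair it asserts is in the relation.
        return all(self.relation[val, col] for col, val in enumerate(cue))

    def retrieve(self, cue):
        # Recover an object: for each column choose one of the stored values,
        # so the answer is constrained by the relation rather than copied verbatim.
        recovered = []
        for col, val in enumerate(cue):
            candidates = np.flatnonzero(self.relation[:, col])
            recovered.append(val if candidates.size == 0
                             else int(np.random.choice(candidates)))
        return recovered

    def entropy(self):
        # One possible indeterminacy measure: the average log2 of the number of
        # stored values per column (0 when every column is fully determinate).
        counts = self.relation.sum(axis=0)
        return float(np.mean(np.log2(np.maximum(counts, 1))))

# Minimal usage: a single stored pattern is recognized and the register is
# fully determinate, so the entropy is 0; storing more patterns raises it.
reg = AssociativeRegister(n_columns=64, m_values=16)
reg.register([3] * 64)
print(reg.recognize([3] * 64), reg.entropy())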


Read also

106 - Yiming Ding, Xuyan Xiang, 2016
Long memory or long range dependency is an important phenomenon that may arise in the analysis of time series or spatial data. Most definitions of long memory of a stationary process $X=\{X_1, X_2,\cdots\}$ are based on the second-order properties of the process. The excess entropy of a stationary process is the summation of redundancies, which relates to the rate of convergence of the conditional entropy $H(X_n|X_{n-1},\cdots, X_1)$ to the entropy rate. It is proved that the excess entropy is identical to the mutual information between the past and the future when the entropy $H(X_1)$ is finite. We suggest the definition that a stationary process is long memory if the excess entropy is infinite. Since the definition of excess entropy of a stationary process requires only a very weak moment condition on the distribution of the process, it can be applied to processes whose distributions do not have a bounded second moment. A significant property of excess entropy is that it is invariant under invertible transformations, which enables us to obtain the excess entropy of a stationary process from the excess entropy of another process. For stationary Gaussian processes, the excess entropy characterization of long memory relates well to the popular characterizations. It is proved that the excess entropy of fractional Gaussian noise is infinite if the Hurst parameter $H \in (1/2, 1)$.
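Written out in the notation of this abstract (a restatement in standard form, with $h$ denoting the entropy rate), the quantities involved are: the excess entropy $E = \sum_{n=1}^{\infty}\bigl(H(X_n \mid X_{n-1},\dots,X_1) - h\bigr)$, where $h = \lim_{n\to\infty} H(X_n \mid X_{n-1},\dots,X_1)$; when $H(X_1)$ is finite, $E = I\bigl((\dots,X_{-1},X_0);(X_1,X_2,\dots)\bigr)$, the mutual information between the past and the future; and the proposed definition is that the process has long memory if and only if $E = \infty$.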
We investigate a new method to augment recurrent neural networks with extra memory without increasing the number of network parameters. The system has an associative memory based on complex-valued vectors and is closely related to Holographic Reduced Representations and Long Short-Term Memory networks. Holographic Reduced Representations have limited capacity: as they store more information, each retrieval becomes noisier due to interference. Our system in contrast creates redundant copies of stored information, which enables retrieval with reduced noise. Experiments demonstrate faster learning on multiple memorization tasks.
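As a rough illustration of the kind of mechanism this abstract describes (not the paper's actual model), the Python sketch below binds keys and values by element-wise complex multiplication, in the spirit of Holographic Reduced Representations in the frequency domain, and keeps redundant copies of the stored information under independently permuted keys so that averaging the retrievals reduces interference noise. The function names and the permutation-based redundancy scheme are assumptions.

import numpy as np

def random_key(d, rng):
    # Unit-modulus complex key (HRR-style binding in the frequency domain).
    return np.exp(1j * rng.uniform(0, 2 * np.pi, d))

def store(memory, key, value):
    # Bind key and value by element-wise complex multiplication and superpose.
    return memory + key * value

def retrieve(memory, key):
    # Unbind with the complex conjugate of the key; other stored items act as noise.
    return memory * np.conj(key)

# Redundant copies: the same item is stored in several memory traces under
# independently permuted keys, and the retrievals are averaged, which lowers
# the variance of the interference noise when many items are superposed.
rng = np.random.default_rng(0)
d, n_copies = 256, 4
perms = [rng.permutation(d) for _ in range(n_copies)]
memories = [np.zeros(d, dtype=complex) for _ in range(n_copies)]

key = random_key(d, rng)
value = rng.standard_normal(d) + 0j
for s in range(n_copies):
    memories[s] = store(memories[s], key[perms[s]], value)
    # ... further key/value pairs would be superposed into each copy ...

estimate = np.mean([retrieve(memories[s], key[perms[s]]) for s in range(n_copies)], axis=0)
print(np.allclose(estimate, value, atol=1e-8))  # exact with a single stored pair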
Despite recent progress in memory augmented neural network (MANN) research, associative memory networks with a single external memory still show limited performance on complex relational reasoning tasks. In particular, content-based addressable memory networks often fail to encode input data into representations rich enough for relational reasoning, and this limits the relation modeling performance of MANNs on long temporal sequence data. To address these problems, here we introduce a novel Distributed Associative Memory architecture (DAM) with a Memory Refreshing Loss (MRL) which enhances the relation reasoning performance of MANNs. Inspired by how the human brain works, our framework encodes data with a distributed representation across multiple memory blocks and repeatedly refreshes the contents for enhanced memorization, similar to the rehearsal process of the brain. For this procedure, we replace a single external memory with a set of multiple smaller associative memory blocks and update these sub-memory blocks simultaneously and independently to obtain the distributed representation of the input data. Moreover, we propose MRL, which assists the task's target objective while learning the relational information existing in the data. MRL enables the MANN to reinforce an association between input data and task objective by reproducing stochastically sampled input data from stored memory contents. With this procedure, the MANN further enriches the stored representations with relational information. In experiments, we apply our approaches to the Differentiable Neural Computer (DNC), which is one of the representative content-based addressing memory models, and achieve state-of-the-art performance on both memorization and relational reasoning tasks.
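Purely as an illustration of the distributed-memory idea and the refreshing loss described above (none of this is the paper's actual DNC-based model), here is a minimal NumPy sketch: several small content-addressable blocks written and read independently through their own random projections, with a reconstruction loss on a stochastically sampled past input standing in for MRL. Block sizes, projections, and the loss form are all assumptions.

import numpy as np

def softmax(x):
    z = np.exp(x - x.max())
    return z / z.sum()

class DistributedMemory:
    def __init__(self, n_blocks, slots, width, rng):
        # Several smaller blocks replace one large external memory.
        self.blocks = [np.zeros((slots, width)) for _ in range(n_blocks)]
        # Each block encodes the input through its own random projection,
        # giving a distributed representation of the same datum.
        self.proj = [rng.standard_normal((width, width)) / np.sqrt(width)
                     for _ in range(n_blocks)]
        self.ptr = [0] * n_blocks
        self.slots = slots

    def write(self, x):
        # Simultaneous, independent updates of every sub-memory block.
        for b, (M, W) in enumerate(zip(self.blocks, self.proj)):
            M[self.ptr[b] % self.slots] = x @ W
            self.ptr[b] += 1

    def read(self, query):
        # Content-based addressing in each block, mapped back and combined.
        recons = []
        for M, W in zip(self.blocks, self.proj):
            att = softmax(M @ (query @ W))
            recons.append((att @ M) @ W.T)
        return np.mean(recons, axis=0)

def memory_refreshing_loss(memory, past_inputs, rng):
    # Stand-in for MRL: sample a stored input and measure how well the
    # memory reproduces it from its current contents.
    x = past_inputs[rng.integers(len(past_inputs))]
    return float(np.mean((memory.read(x) - x) ** 2))

rng = np.random.default_rng(0)
mem = DistributedMemory(n_blocks=4, slots=8, width=32, rng=rng)
data = [rng.standard_normal(32) for _ in range(5)]
for x in data:
    mem.write(x)
print(memory_refreshing_loss(mem, data, rng))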
We consider the problem of identifying the causal direction between two discrete random variables using observational data. Unlike previous work, we keep the most general functional model but make an assumption on the unobserved exogenous variable: inspired by Occam's razor, we assume that the exogenous variable is simple in the true causal direction. We quantify simplicity using Rényi entropy. Our main result is that, under natural assumptions, if the exogenous variable has low $H_0$ entropy (cardinality) in the true direction, it must have high $H_0$ entropy in the wrong direction. We establish several algorithmic hardness results about estimating the minimum-entropy exogenous variable. We show that the problem of finding the exogenous variable with minimum entropy is equivalent to the problem of finding the minimum joint entropy given $n$ marginal distributions, also known as the minimum entropy coupling problem. We propose an efficient greedy algorithm for the minimum entropy coupling problem that, for $n=2$, provably finds a local optimum. This gives a greedy algorithm for finding the exogenous variable with minimum $H_1$ (Shannon entropy). Our greedy entropy-based causal inference algorithm has performance similar to the state-of-the-art additive noise models on real datasets. One advantage of our approach is that we make no use of the values of the random variables but only their distributions. Our method can therefore be used for causal inference for both ordinal and categorical data, unlike additive noise models.
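The abstract only names the greedy algorithm; one plausible reading of the greedy idea for the $n=2$ case is the matching scheme sketched below in Python, which repeatedly pairs the largest remaining probability masses of the two marginals and assigns their minimum to a joint cell. This is offered to make the coupling problem concrete, not as the paper's exact procedure.

import heapq
from math import log2

def greedy_min_entropy_coupling(p, q, tol=1e-12):
    # Greedy sketch for two marginals p and q: repeatedly match the largest
    # remaining masses and put their minimum into the joint distribution.
    hp = [(-pi, i) for i, pi in enumerate(p) if pi > tol]   # max-heap via negation
    hq = [(-qj, j) for j, qj in enumerate(q) if qj > tol]
    heapq.heapify(hp)
    heapq.heapify(hq)
    joint = {}
    while hp and hq:
        pi, i = heapq.heappop(hp)
        qj, j = heapq.heappop(hq)
        m = min(-pi, -qj)                                    # mass assigned to cell (i, j)
        joint[(i, j)] = joint.get((i, j), 0.0) + m
        if -pi - m > tol:
            heapq.heappush(hp, (pi + m, i))                  # push back leftover mass
        if -qj - m > tol:
            heapq.heappush(hq, (qj + m, j))
    return joint

def entropy(masses):
    return -sum(m * log2(m) for m in masses if m > 0)

# The resulting joint distribution has the given marginals and (heuristically)
# low Shannon entropy.
p = [0.5, 0.3, 0.2]
q = [0.6, 0.4]
joint = greedy_min_entropy_coupling(p, q)
print(sorted(joint.items()), entropy(joint.values()))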
The paper proposes an improved quantum associative algorithm with distributed query, based on the model proposed by Ezhov et al. We introduce two modifications of the query that optimize the retrieval of correct multi-patterns simultaneously, for any ratio of the number of recognition patterns to the total number of patterns. Simulation results are given.
