
Disentangling Options with Hellinger Distance Regularizer

Added by Minsung Hyun
Publication date: 2019
Language: English





In reinforcement learning (RL), temporal abstraction remains an important and unsolved problem. The options framework provided clues to temporal abstraction in RL, and the option-critic architecture elegantly solved the two problems of discovering options and learning an RL agent in an end-to-end manner. However, it is necessary to examine whether the options learned through this method play mutually exclusive roles. In this paper, we propose the Hellinger distance regularizer, a method for disentangling options. In addition, we shed light on several statistical indicators for comparing the disentangled options with those learned through the existing option-critic architecture.
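To make the proposal concrete, here is a minimal sketch of one plausible form of such a regularizer: maximizing the pairwise Hellinger distance between the intra-option policies' action distributions on the same states. The function names and the exact pairwise form are illustrative assumptions, not the paper's reference implementation.

import torch

def hellinger(p, q, eps=1e-8):
    # Hellinger distance between batches of categorical distributions:
    # H(p, q) = sqrt(1 - sum_i sqrt(p_i * q_i)), bounded in [0, 1].
    bc = torch.sqrt(p * q + eps).sum(dim=-1)  # Bhattacharyya coefficient
    return torch.sqrt(torch.clamp(1.0 - bc, min=0.0))

def disentangling_regularizer(option_probs):
    # option_probs: (n_options, batch, n_actions), each intra-option policy
    # evaluated on the same batch of states.  Penalizing (1 - H) for every
    # option pair pushes the option policies toward mutually exclusive roles.
    n = option_probs.shape[0]
    pairs = [(i, j) for i in range(n) for j in range(i + 1, n)]
    loss = sum(1.0 - hellinger(option_probs[i], option_probs[j]).mean()
               for i, j in pairs)
    return loss / len(pairs)

Such a term would be added, with a small coefficient, to the usual option-critic losses.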



Related research

In this paper we study the local linearization of the Hellinger–Kantorovich distance via its Riemannian structure. We give explicit expressions for the logarithmic and exponential maps and identify a suitable notion of a Riemannian inner product. Samples can thus be represented as vectors in the tangent space of a suitable reference measure, where the norm locally approximates the original metric. Working with the local linearization and the corresponding embeddings allows for the advantages of the Euclidean setting, such as faster computations and a plethora of data-analysis tools, while still enjoying approximately the descriptive power of the Hellinger–Kantorovich metric.
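As a rough illustration of the workflow (not the paper's formulas: the explicit log and exp maps are derived in the paper and assumed precomputed here), once each sample measure is embedded as a tangent vector at the reference measure, distances reduce to weighted Euclidean norms:

import numpy as np

def linearized_distance(v_i, v_j, ref_weights):
    # v_i, v_j: log-map embeddings (tangent vectors at the reference measure)
    # of two sample measures, arrays of identical shape.
    # ref_weights: weights of the reference measure that define the
    # Riemannian inner product on the tangent space.
    diff = v_i - v_j
    return np.sqrt(np.sum(ref_weights * diff ** 2))

Standard Euclidean tools (PCA, clustering, regression) can then be applied directly to the embedded vectors.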
We introduce Inner Ensemble Networks (IENs), which reduce the variance within the neural network itself without an increase in model complexity. IENs utilize ensemble parameters during the training phase to reduce the network variance. In the testing phase, these parameters are removed without any loss of the enhanced performance. IENs reduce the variance of an ordinary deep model by a factor of $1/m^{L-1}$, where $m$ is the number of inner ensembles and $L$ is the depth of the model. We also show empirically and theoretically that IENs lead to a greater variance reduction than other similar approaches such as dropout and maxout. Our results show a decrease in error rates of between 1.7% and 17.3% relative to an ordinary deep model. We also show that IENs were preferred by Neural Architecture Search (NAS) methods over prior approaches. Code is available at https://github.com/abduallahmohamed/inner_ensemble_nets.
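The claimed exactness of removing the ensemble parameters at test time is easiest to see for a linear layer, where the mean of $m$ linear maps equals a single linear map with averaged weights. The sketch below illustrates that idea and the stated variance factor; it is an assumption-laden toy, not the repository's implementation:

import torch
import torch.nn as nn

m, L = 4, 3
print(f"claimed variance reduction factor: 1/{m ** (L - 1)}")  # 1/m^(L-1) = 1/16

class InnerEnsembleLinear(nn.Module):
    # m weight sets during training; their outputs are averaged.
    def __init__(self, d_in, d_out, m=4):
        super().__init__()
        self.layers = nn.ModuleList([nn.Linear(d_in, d_out) for _ in range(m)])

    def forward(self, x):
        return torch.stack([f(x) for f in self.layers]).mean(dim=0)

    def collapse(self):
        # Fold the m sublayers into one nn.Linear for inference: exact here,
        # since mean_k(W_k x + b_k) = (mean_k W_k) x + mean_k b_k.
        fused = nn.Linear(self.layers[0].in_features, self.layers[0].out_features)
        with torch.no_grad():
            fused.weight.copy_(torch.stack([f.weight for f in self.layers]).mean(0))
            fused.bias.copy_(torch.stack([f.bias for f in self.layers]).mean(0))
        return fused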
Noisy labels are ubiquitous in real-world datasets, which poses a challenge for robustly training deep neural networks (DNNs), since DNNs can easily overfit to noisy labels. Most recent efforts have been devoted to defending against noisy labels by discarding noisy samples from the training set or by assigning weights to training samples, where the weight associated with a noisy sample is expected to be small. These efforts thereby waste samples, especially those assigned small weights. Yet the input $x$ is always useful regardless of whether its observed label $y$ is clean. To make full use of all samples, we introduce a manifold regularizer, named Paired Softmax Divergence Regularization (PSDR), that penalizes the Kullback-Leibler (KL) divergence between the softmax outputs of similar inputs. In particular, similar inputs can be effectively generated by data augmentation. PSDR can be easily implemented on any type of DNN to improve robustness against noisy labels. As demonstrated empirically on benchmark datasets, PSDR improves state-of-the-art results by a significant margin.
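A minimal sketch of the stated idea, assuming a generic stochastic augment function (a hypothetical name) and a standard KL penalty between the softmax outputs of two augmented views; the paper's exact (a)symmetric formulation may differ:

import torch
import torch.nn.functional as F

def psdr_loss(model, x, augment):
    # Two stochastic augmentations of the same inputs should yield
    # similar predictive distributions.
    log_p = F.log_softmax(model(augment(x)), dim=-1)
    q = F.softmax(model(augment(x)), dim=-1)
    # KL(q || p), averaged over the batch.
    return F.kl_div(log_p, q, reduction="batchmean")

The total training objective would be the usual cross-entropy on the (possibly noisy) labels plus this regularizer with a tunable weight.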
Intensively investigated in recent years, the Gaussian discord can be quantified by a distance between a given two-mode Gaussian state and the set of all zero-discord two-mode Gaussian states. However, as this set consists only of product states, such a distance captures all the correlations (quantum and classical) between modes. It is therefore merely an upper bound for the geometric discord, no matter which distance is employed. In this work we choose for this purpose the Hellinger metric, which is known to have many beneficial properties recommending it as a good measure of quantum behaviour. In general, this metric is determined by the affinity, a relative of the Uhlmann fidelity with which it shares many important features. As a first step, we write down the affinity of a pair of $n$-mode Gaussian states. Then, in the two-mode case, we determine exactly the closest Gaussian product state and compute the Gaussian discord accordingly. The obtained general formula is remarkably simple and becomes still friendlier in the significant case of symmetric two-mode Gaussian states. We then analyze in detail two special classes of two-mode Gaussian states of both theoretical and experimental interest: the squeezed thermal states and the mode-mixed thermal ones. The former are separable below a well-known squeezing threshold, while the latter are always separable. It is worth stressing that, for symmetric states belonging to either of these classes, we find consistency between their geometric Hellinger discord and the discord originally defined in the Gaussian approach. At the same time, the Gaussian Hellinger discord of such a state turns out to be a reliable measure of the total amount of its cross correlations.
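For reference, the standard definitions behind this abstract (conventions for the factor of 2 vary across the literature) are the affinity, the quantum Hellinger distance it induces, and the discord-type measure obtained by minimizing over product states:

\[
\mathcal{A}(\rho,\sigma) = \operatorname{Tr}\!\left(\sqrt{\rho}\,\sqrt{\sigma}\right),
\qquad
d_H^2(\rho,\sigma) = 2\left[1 - \mathcal{A}(\rho,\sigma)\right],
\]
\[
D_H(\rho_{AB}) = \min_{\sigma_A \otimes \sigma_B} d_H^2\!\left(\rho_{AB},\, \sigma_A \otimes \sigma_B\right).
\]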
Motivated by the need to audit complex and black-box models, there has been extensive research on quantifying how data features influence model predictions. Feature influence can be direct (a direct influence on model outcomes) or indirect (model outcomes are influenced via proxy features). Feature influence can also be expressed in aggregate over the training or test data, or locally with respect to a single point. Current research has typically focused on only one of these dimensions. In this paper, we develop disentangled influence audits, a procedure for auditing the indirect influence of features. Specifically, we show that disentangled representations provide a mechanism to identify proxy features in the dataset, while allowing an explicit computation of feature influence on either individual or aggregate-level outcomes. We show through both theory and experiments that disentangled influence audits can detect proxy features and show, for each individual or in aggregate, which of these proxy features most affects the classifier being audited. In this respect, our method is more powerful than existing methods for ascertaining feature influence.
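A highly simplified sketch of how such an audit could look, assuming a disentangled encoder/decoder is already trained (model, decode, and the per-coordinate factorization are all hypothetical stand-ins for the paper's procedure):

import numpy as np

def indirect_influence(model, decode, z, proxy_dim, baseline=0.0):
    # z: disentangled codes for a dataset, one coordinate per factor.
    # Estimate the indirect influence of one factor as the mean change in
    # the audited model's output when that coordinate is neutralized.
    z_int = z.copy()
    z_int[:, proxy_dim] = baseline       # intervene on the proxy factor
    y_orig = model(decode(z))            # predictions on original inputs
    y_int = model(decode(z_int))         # predictions without that factor
    return float(np.mean(np.abs(y_orig - y_int)))

Local (per-individual) influence corresponds to the unaveraged |y_orig - y_int| values.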
