The problem of obtaining optimal projections for performing discriminant analysis with Gaussian class densities is studied. Unlike most existing approaches to the problem, the optimisation focuses on the multinomial likelihood based on posterior probability estimates, which directly captures the discriminability of the classes. In addition to the classification problem more commonly considered in this context, the unsupervised clustering counterpart is also addressed. Finding optimal projections offers utility for dimension reduction and regularisation, as well as instructive visualisation for better model interpretability. Practical applications of the proposed approach show considerable promise for both classification and clustering. Code to implement the proposed method is available in the form of an R package from https://github.com/DavidHofmeyr/OPGD.
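The core idea — fit Gaussian class densities in a projected subspace and choose the projection that maximises the multinomial likelihood of the labels under the resulting posterior probabilities — can be sketched as follows. This is a minimal Python illustration, not the OPGD package's implementation: the function names, the toy data, and the choice of a derivative-free optimiser are all assumptions made for the example.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import logsumexp
from scipy.stats import multivariate_normal

def neg_multinomial_loglik(v_flat, X, y, p):
    """Negative mean log posterior probability of the observed labels,
    with Gaussian class densities fitted in the projected subspace."""
    n, d = X.shape
    V, _ = np.linalg.qr(v_flat.reshape(d, p))   # orthonormalise the projection basis
    Z = X @ V                                    # project the data
    classes = np.unique(y)
    log_joint = np.empty((n, classes.size))
    for j, c in enumerate(classes):
        Zc = Z[y == c]
        mu = Zc.mean(axis=0)
        # small ridge keeps the covariance estimate non-singular
        S = np.atleast_2d(np.cov(Zc, rowvar=False)) + 1e-6 * np.eye(p)
        log_joint[:, j] = (np.log(Zc.shape[0] / n)
                           + multivariate_normal.logpdf(Z, mu, S))
    # posterior log-probabilities via log-sum-exp normalisation
    log_post = log_joint - logsumexp(log_joint, axis=1, keepdims=True)
    idx = np.searchsorted(classes, y)
    return -log_post[np.arange(n), idx].mean()

# Toy data: two Gaussian classes separated along the first coordinate only.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(size=(200, 3)) - [2, 0, 0],
               rng.normal(size=(200, 3)) + [2, 0, 0]])
y = np.repeat([0, 1], 200)

v0 = rng.normal(size=3)                          # random 1-d projection (p = 1)
init_obj = neg_multinomial_loglik(v0, X, y, 1)
res = minimize(neg_multinomial_loglik, v0, args=(X, y, 1),
               method="Nelder-Mead")
V_opt, _ = np.linalg.qr(res.x.reshape(3, 1))     # optimised projection direction
```

On this toy problem the optimised direction concentrates on the first coordinate, the only axis along which the classes differ — which is exactly what an objective built on posterior class probabilities should recover.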