We address the use of neural networks (NNs) in classifying the environmental parameters of single-qubit dephasing channels. In particular, we investigate the performance of linear perceptrons and of two non-linear NN architectures. At variance with time-series-based approaches, our goal is to learn a discretized probability distribution over the parameters using tomographic data at just two random instants of time. We consider dephasing channels originating either from classical $1/f^\alpha$ noise or from the interaction with a bath of quantum oscillators. The parameters to be classified are the color $\alpha$ of the classical noise or the Ohmicity parameter $s$ of the quantum environment. In both cases, we find that NNs are able to exactly classify the parameters into 16 classes using noiseless data (a linear NN suffices for the color, whereas a single-layer NN is needed for the Ohmicity). In the presence of noisy data (e.g., data from noisy tomographic measurements), the network is able to classify the color of the $1/f^\alpha$ noise into 16 classes with about 70% accuracy, whereas classification of the Ohmicity turns out to be challenging. We also consider a more coarse-grained task and train the network to discriminate between two macro-classes corresponding to $\alpha \lessgtr 1$ and $s \lessgtr 1$, obtaining up to 96% and 79% accuracy, respectively, using single-layer NNs.
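As a toy illustration of the kind of classifier described above (this sketch is not the authors' code: the class count, feature dimension, learning rate, and synthetic data are all illustrative assumptions standing in for tomographic data sampled at two times):

```python
# Minimal sketch: a single-layer softmax classifier trained on synthetic
# feature vectors standing in for tomographic data. All sizes and data here
# are illustrative assumptions, not the setup of the paper.
import numpy as np

rng = np.random.default_rng(0)
n_classes, dim, n_per_class = 4, 6, 50

# Synthetic, well-separated clusters: one center per "noise parameter" class.
centers = rng.normal(size=(n_classes, dim)) * 3.0
X = np.vstack([c + 0.1 * rng.normal(size=(n_per_class, dim)) for c in centers])
y = np.repeat(np.arange(n_classes), n_per_class)

W = np.zeros((dim, n_classes))
b = np.zeros(n_classes)

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)   # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

for _ in range(300):                        # plain gradient descent on cross-entropy
    p = softmax(X @ W + b)
    p[np.arange(len(y)), y] -= 1.0          # dL/dz for softmax + cross-entropy
    W -= 0.1 * X.T @ p / len(y)
    b -= 0.1 * p.mean(axis=0)

accuracy = (softmax(X @ W + b).argmax(axis=1) == y).mean()
print(f"training accuracy: {accuracy:.2f}")
```

On such cleanly separated synthetic clusters a linear classifier reaches high training accuracy quickly, mirroring the abstract's finding that a linear NN is enough on noiseless data.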
We investigate the dynamics of quantum and classical correlations in a system of two qubits under local colored-noise dephasing channels. The time evolution of a single qubit interacting with its own environment is described by a memory kernel non-Markovian master equation. The memory effects of the non-Markovian reservoirs introduce new features in the dynamics of quantum and classical correlations compared to the white noise Markovian case. Depending on the geometry of the initial state, the system can exhibit frozen discord and multiple sudden transitions between classical and quantum decoherence [L. Mazzola, J. Piilo and S. Maniscalco, Phys. Rev. Lett. 104 (2010) 200401]. We provide a geometric interpretation of those phenomena in terms of the distance of the state under investigation to its closest classical state in the Hilbert space of the system.
We discuss the problem of estimating a frequency via N-qubit probes undergoing independent dephasing channels that can be continuously monitored via homodyne or photo-detection. We derive the corresponding analytical solutions for the conditional states, for generic initial states and for arbitrary efficiency of the continuous monitoring. For the detection strategies considered, we show that: i) in the case of perfect continuous detection, the quantum Fisher information (QFI) of the conditional states is equal to the one obtained in the noiseless dynamics; ii) for smaller detection efficiencies, the QFI of the conditional state is equal to the QFI of a state undergoing the (unconditional) dephasing dynamics, but with an effectively reduced noise parameter.
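For reference (this definition is standard and not spelled out in the abstract), the quantum Fisher information appearing in these results is the usual one: for a parameter $\theta$ and a pure conditional state $|\psi_\theta\rangle$ it reduces to
$$ F_Q(\theta) = 4\left( \langle \partial_\theta \psi_\theta | \partial_\theta \psi_\theta \rangle - \left| \langle \psi_\theta | \partial_\theta \psi_\theta \rangle \right|^2 \right), $$
while for mixed states it is defined through the symmetric logarithmic derivative $L_\theta$, satisfying $\partial_\theta \rho_\theta = (L_\theta \rho_\theta + \rho_\theta L_\theta)/2$, as $F_Q(\theta) = \mathrm{Tr}[\rho_\theta L_\theta^2]$.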
This paper introduces a new online learning framework for multiclass classification called learning with diluted bandit feedback. At every time step, the algorithm predicts a candidate label set, instead of a single label, for the observed example. It then receives feedback from the environment indicating whether the actual label lies in this candidate label set or not. This feedback is called diluted bandit feedback. Learning in this setting is even more challenging than in the bandit feedback setting, as there is more uncertainty in the supervision. We propose an algorithm for multiclass classification using diluted bandit feedback (MC-DBF), which uses an exploration-exploitation strategy to predict the candidate set in each trial. We show that the proposed algorithm achieves an $O(T^{1-\frac{1}{m+2}})$ mistake bound if the candidate label set size (in each step) is $m$. We demonstrate the effectiveness of the proposed approach with extensive simulations.
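The interaction protocol can be sketched as follows. Only the loop structure (predict a size-$m$ candidate set, observe a single bit of feedback) follows the abstract; the exploration rate, the linear scorers, and the weight update are simplified placeholders, not the actual MC-DBF algorithm:

```python
# Illustrative sketch of the diluted-bandit-feedback protocol. The update
# rule below is a crude placeholder, not the MC-DBF algorithm; only the
# one-bit feedback structure follows the abstract.
import numpy as np

rng = np.random.default_rng(1)
n_classes, dim, m, gamma = 6, 6, 2, 0.2   # gamma: exploration rate (assumed)

W = np.zeros((n_classes, dim))            # one linear scorer per class

def predict_set(x):
    """Exploration-exploitation: usually exploit the top-m scores,
    occasionally explore a uniformly random m-subset."""
    if rng.random() < gamma:
        return set(rng.choice(n_classes, size=m, replace=False))
    return set(np.argsort(W @ x)[-m:])

mistakes = 0
for t in range(2000):
    true_label = rng.integers(n_classes)
    x = rng.normal(size=dim)
    x[true_label] += 2.0                  # weak synthetic label signal
    S = predict_set(x)
    feedback = true_label in S            # the only supervision available
    if feedback:
        for c in S:                       # credit every candidate equally
            W[c] += x / m
    else:
        mistakes += 1
        for c in S:                       # demote every candidate equally
            W[c] -= x / m
print("mistake rate:", mistakes / 2000)
```

Even this crude update beats the uniform-guessing baseline (a random $m$-subset misses the true label with probability $1 - m/K$), which conveys why learning is possible at all from such diluted supervision.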
We provide a unifying view of statistical information measures, multi-way Bayesian hypothesis testing, loss functions for multi-class classification problems, and multi-distribution $f$-divergences, elaborating equivalence results between all of these objects and extending existing results for binary outcome spaces to more general ones. We consider a generalization of $f$-divergences to multiple distributions, and we provide a constructive equivalence between divergences, statistical information (in the sense of DeGroot), and losses for multiclass classification. A major application of our results is in multi-class classification problems in which we must both infer a discriminant function $\gamma$ (for making predictions on a label $Y$ from datum $X$) and a data representation (or, in the setting of a hypothesis testing problem, an experimental design), represented as a quantizer $\mathsf{q}$ from a family of possible quantizers $\mathsf{Q}$. In this setting, we characterize the equivalence between loss functions, meaning that optimizing either of two losses yields an optimal discriminant and quantizer $\mathsf{q}$, complementing and extending earlier results of Nguyen et al. to the multiclass case. Our results provide a more substantial basis than standard classification calibration results for comparing different losses: we describe the convex losses that are consistent for jointly choosing a data representation and minimizing the (weighted) probability of error in multiclass classification problems.
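For context (this definition is standard and not spelled out in the abstract): for a convex function $f$ with $f(1) = 0$, the classical two-distribution $f$-divergence is
$$ D_f(P \,\|\, Q) = \int f\!\left(\frac{dP}{dQ}\right) dQ, $$
recovering, e.g., the KL divergence for $f(t) = t \log t$ and the total variation distance for $f(t) = \tfrac{1}{2}|t - 1|$. The multi-distribution generalization studied here takes several distributions as arguments simultaneously rather than a single pair.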
In this paper, we propose online algorithms for multiclass classification using partial labels. We propose two variants of the Perceptron, called Avg Perceptron and Max Perceptron, to deal with partially labeled data. We also propose Avg Pegasos and Max Pegasos, which are extensions of the Pegasos algorithm. We provide a mistake bound for Avg Perceptron and a regret bound for Avg Pegasos. We show the effectiveness of the proposed approaches by experimenting on various datasets and comparing them with the standard Perceptron and Pegasos.
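A minimal sketch of a Perceptron-style learner driven by partial labels follows. The averaged update shown is one plausible reading of "Avg" (credit split uniformly over the candidate set); the paper's exact Avg Perceptron update may differ, and the data here is synthetic:

```python
# Sketch of a Perceptron-style update with partial labels. Assumption: on a
# mistake, the positive update is averaged uniformly over the candidate set;
# the paper's exact Avg Perceptron rule may differ.
import numpy as np

rng = np.random.default_rng(2)
n_classes, dim, T = 5, 5, 1500

W = np.zeros((n_classes, dim))
mistakes = 0
for t in range(T):
    y = rng.integers(n_classes)
    x = rng.normal(size=dim)
    x[y] += 2.0                            # synthetic label signal
    # Partial label: true class plus one uniformly random distractor.
    distract = rng.integers(n_classes - 1)
    distract += distract >= y              # skip the true class
    Y = {int(y), int(distract)}
    y_hat = int(np.argmax(W @ x))
    if y_hat not in Y:                     # mistake w.r.t. the partial label
        mistakes += 1
        for c in Y:                        # averaged positive update
            W[c] += x / len(Y)
        W[y_hat] -= x                      # demote the wrong prediction
print("mistakes:", mistakes, "of", T)
```

Note that the learner never observes the true label itself, only the candidate set, which is exactly the supervision model of the partial-label setting.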