Despite extensive research efforts, few quantum algorithms for classical optimization demonstrate realizable advantage. The utility of many quantum algorithms is limited by high requisite circuit depth and nonconvex optimization landscapes. We tackle these challenges to quantum advantage with two new variational quantum algorithms, which utilize multi-basis graph encodings and nonlinear activation functions to outperform existing methods with shallow quantum circuits. Additionally, both algorithms provide a polynomial reduction in measurement complexity and either a factor of two speedup \textit{or} a factor of two reduction in quantum resources. Typically, the classical simulation of such algorithms with many qubits is impossible due to the exponential scaling of traditional quantum formalism and the limitations of tensor networks. Nonetheless, the shallow circuits and moderate entanglement of our algorithms, combined with efficient tensor method-based simulation, enable us to successfully optimize the MaxCut of high-connectivity graphs with up to $512$ nodes (qubits) on a single GPU.
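The multi-basis encoding and nonlinear activation described above can be illustrated in a few lines. The sketch below is not the authors' implementation: it assumes a small random graph, treats each qubit's Z- and X-basis expectation values as free parameters rather than outputs of a shallow circuit, and relaxes the MaxCut objective with a tanh activation on those values; the graph size, the even/odd vertex-to-basis convention, and the finite-difference optimizer are all illustrative choices.

```python
import numpy as np

# Toy sketch of a multi-basis encoding for MaxCut: two graph vertices share
# one qubit, one read out in the Z basis and one in the X basis. Here the
# expectation values are free parameters in [-1, 1] instead of circuit
# outputs, so the example runs without any quantum simulator.
rng = np.random.default_rng(0)

n_nodes = 8                      # assumed small example graph
edges = [(i, j) for i in range(n_nodes) for j in range(i + 1, n_nodes)
         if rng.random() < 0.5]  # random unweighted edges

# Assumed convention: vertex v lives on qubit v // 2, in the Z basis if v is
# even and the X basis if v is odd.
n_qubits = n_nodes // 2
params = rng.uniform(-1, 1, size=(n_qubits, 2))  # stand-ins for <Z_q>, <X_q>

def vertex_value(p, v):
    return p[v // 2, v % 2]

def maxcut_loss(p):
    # Relaxed MaxCut objective: sum over edges of tanh(<B_i>) * tanh(<B_j>),
    # minimized when adjacent vertices take opposite signs.
    return sum(np.tanh(vertex_value(p, i)) * np.tanh(vertex_value(p, j))
               for i, j in edges)

# Crude finite-difference descent on the relaxed loss; a real run would update
# circuit parameters and estimate expectation values from measurements.
lr, eps = 0.2, 1e-4
for step in range(200):
    grad = np.zeros_like(params)
    for idx in np.ndindex(params.shape):
        shift = np.zeros_like(params)
        shift[idx] = eps
        grad[idx] = (maxcut_loss(params + shift)
                     - maxcut_loss(params - shift)) / (2 * eps)
    params = np.clip(params - lr * grad, -1, 1)

cut = np.sign([vertex_value(params, v) for v in range(n_nodes)])
cut_size = sum(cut[i] != cut[j] for i, j in edges)
print("cut size:", cut_size, "of", len(edges), "edges")
```

Rounding the activated expectation values to their signs yields the binary vertex partition, which is why the relaxed loss above is a useful surrogate for the cut size.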
A large fraction of the arithmetic operations required to evaluate deep neural networks (DNNs) consists of matrix multiplications, in both convolution and fully connected layers. We perform end-to-end learning of low-cost approximations of matrix multiplications in DNN layers by casting matrix multiplications as 2-layer sum-product networks (SPNs) (arithmetic circuits) and learning their (ternary) edge weights from data. The SPNs disentangle multiplication and addition operations and enable us to impose a budget on the number of multiplication operations. Combining our method with knowledge distillation and applying it to image classification DNNs (trained on ImageNet) and language modeling DNNs (using LSTMs), we obtain a first-of-a-kind reduction in the number of multiplications (over 99.5%) while maintaining the predictive performance of the full-precision models. Finally, we demonstrate that the proposed framework is able to rediscover Strassen's matrix multiplication algorithm, learning to multiply $2 \times 2$ matrices using only 7 multiplications instead of 8.
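To make the SPN view concrete, the sketch below hard-codes ternary weights that realize Strassen's $2 \times 2$ algorithm rather than learning them from data as the paper does. The two $7 \times 4$ input sum layers, the 7-node product layer, and the $4 \times 7$ output sum layer mirror the 2-layer sum-product structure the abstract describes; the numerical check only confirms that this particular ternary assignment reproduces exact matrix multiplication with 7 scalar multiplications.

```python
import numpy as np

# A 2-layer sum-product network for 2x2 matrix multiplication. The first
# (sum) layer forms ternary-weighted linear combinations of the flattened
# entries of A and B, the product layer multiplies the paired combinations
# elementwise, and the final sum layer combines the 7 products into the
# flattened output. These fixed weights encode Strassen's algorithm.

# Sum-layer weights for A (7 x 4) and B (7 x 4); rows are Strassen's m1..m7.
Wa = np.array([[ 1, 0, 0, 1],    # m1: (a11 + a22)
               [ 0, 0, 1, 1],    # m2: (a21 + a22)
               [ 1, 0, 0, 0],    # m3:  a11
               [ 0, 0, 0, 1],    # m4:  a22
               [ 1, 1, 0, 0],    # m5: (a11 + a12)
               [-1, 0, 1, 0],    # m6: (a21 - a11)
               [ 0, 1, 0, -1]])  # m7: (a12 - a22)
Wb = np.array([[ 1, 0, 0, 1],    # m1: (b11 + b22)
               [ 1, 0, 0, 0],    # m2:  b11
               [ 0, 1, 0, -1],   # m3: (b12 - b22)
               [-1, 0, 1, 0],    # m4: (b21 - b11)
               [ 0, 0, 0, 1],    # m5:  b22
               [ 1, 1, 0, 0],    # m6: (b11 + b12)
               [ 0, 0, 1, 1]])   # m7: (b21 + b22)
# Output sum-layer weights (4 x 7); rows are c11, c12, c21, c22.
Wc = np.array([[ 1, 0, 0, 1, -1, 0, 1],
               [ 0, 0, 1, 0, 1, 0, 0],
               [ 0, 1, 0, 1, 0, 0, 0],
               [ 1, -1, 1, 0, 0, 1, 0]])

def spn_matmul(A, B):
    """Multiply 2x2 matrices with 7 scalar multiplications via the SPN."""
    products = (Wa @ A.reshape(4)) * (Wb @ B.reshape(4))  # 7 multiplications
    return (Wc @ products).reshape(2, 2)

A, B = np.random.randn(2, 2), np.random.randn(2, 2)
assert np.allclose(spn_matmul(A, B), A @ B)
print(spn_matmul(A, B))
```

In the paper's setting the three weight matrices are learned (and need only approximate the exact product), with the multiplication budget enforced by the number of product nodes; fixing that budget to 7 is what allows Strassen-like solutions to emerge.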
We consider the problem of training input-output recurrent neural networks (RNN) for sequence labeling tasks. We propose a novel spectral approach for learning the network parameters. It is based on decomposition of the cross-moment tensor between the output and a non-linear transformation of the input, based on score functions. We guarantee consistent learning with polynomial sample and computational complexity under transparent conditions such as non-degeneracy of model parameters, polynomial activations for the neurons, and a Markovian evolution of the input sequence. We also extend our results to bidirectional RNNs, which use both previous and future information to output the label at each time point and are employed in many NLP tasks such as POS tagging.
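A stripped-down version of the score-function cross-moment idea can be seen in the single-neuron, feedforward case. The sketch below is only an illustration, not the paper's RNN algorithm: it assumes standard Gaussian inputs (for which the second-order score function is $x x^\top - I$), a single softplus neuron instead of the polynomial activations and Markovian inputs the guarantees cover, and recovers the weight vector from the top eigenvector of the empirical cross moment via Stein's identity.

```python
import numpy as np

# Toy illustration of the score-function cross-moment construction. For
# standard Gaussian x, the second-order score function is S2(x) = x x^T - I,
# and Stein's identity gives E[y * S2(x)] = E[Hessian_x y]. For a single
# neuron y = softplus(<a, x>) this expectation is proportional to a a^T, so
# the top eigenvector of the empirical cross moment recovers a up to sign
# and scale. Dimension, sample size, and activation are assumed toy choices.
rng = np.random.default_rng(1)
d, n = 10, 200_000

a_true = rng.normal(size=d)
a_true /= np.linalg.norm(a_true)

X = rng.normal(size=(n, d))          # standard Gaussian inputs
y = np.logaddexp(0, X @ a_true)      # single softplus neuron

# Empirical cross moment  M = E[y * (x x^T - I)].
M = (X * y[:, None]).T @ X / n - y.mean() * np.eye(d)

# Top eigenvector of the symmetric cross-moment matrix.
eigvals, eigvecs = np.linalg.eigh(M)
a_hat = eigvecs[:, np.argmax(np.abs(eigvals))]

alignment = abs(a_hat @ a_true)      # close to 1.0 means good recovery
print(f"|<a_hat, a_true>| = {alignment:.3f}")
```

The paper's method replaces this matrix with a higher-order cross-moment tensor whose CP decomposition yields the full weight matrices of the input-output RNN, with the score functions supplying the non-linear transformation of the (Markovian) input sequence.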