Simulating quantum circuits on classical computers requires resources that grow exponentially with system size. A real quantum computer with noise, however, may be simulated with polynomial cost by various methods tailored to different noise models. In this work, we simulate one-dimensional random quantum circuits with Matrix Product Density Operators (MPDO) under several noise models, including dephasing, depolarizing, and amplitude damping. We show that the method based on Matrix Product States (MPS) fails to approximate the noisy output quantum states for any of the noise models considered, while the MPDO method approximates them well. Compared with the Matrix Product Operator (MPO) method, the MPDO method reflects a clear physical picture of noise (with inner indices taking care of the noise simulation) and quantum entanglement (with bond indices taking care of the two-qubit gate simulation). Consequently, when the system noise is weak, the resource cost of the MPDO method is significantly lower than that of the MPO method, since a relatively small inner dimension suffices for the simulation. When the system noise is strong, a relatively small bond dimension may be sufficient to simulate the noisy circuits, indicating a regime in which the noise is large enough for an 'easy' classical simulation. Moreover, we propose a more effective tensor-update scheme with optimal truncations of both the inner and the bond dimensions, performed after each layer of the circuit, which exploits a canonical form of the MPDO to improve simulation accuracy. With the inner dimension truncated to a maximum value $\kappa$ and the bond dimension to a maximum value $\chi$, the cost of our simulation scales as $\sim N D \kappa^3 \chi^3$ for an $N$-qubit circuit of depth $D$.
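The core primitive behind such truncations is a singular value decomposition of a two-site tensor, discarding singular values beyond the target dimension. Below is a minimal numpy sketch of truncating one bond of an MPDO-like tensor; the index layout, the dimensions, and the helper name `truncated_svd` are illustrative assumptions, not code from the paper.

```python
import numpy as np

def truncated_svd(mat, max_dim):
    """SVD split keeping at most max_dim (numerically nonzero) singular values."""
    u, s, vh = np.linalg.svd(mat, full_matrices=False)
    k = int(min(max_dim, np.sum(s > 1e-12)))
    return u[:, :k], s[:k], vh[:k, :]

# Toy two-site piece of an MPDO-like network with index order
# (left_bond, phys_1, inner_1, phys_2, inner_2, right_bond).
chi, d, kappa = 16, 2, 4
theta = np.random.randn(chi, d, kappa, d, kappa, chi)

# Truncate the central bond between the two sites to chi_max.
chi_max = 8
mat = theta.reshape(chi * d * kappa, d * kappa * chi)
u, s, vh = truncated_svd(mat, chi_max)
left = u.reshape(chi, d, kappa, -1)                   # new bond index last
right = (np.diag(s) @ vh).reshape(-1, d, kappa, chi)  # new bond index first
print(left.shape, right.shape)  # shared bond dimension <= chi_max
```

Repeating this at every bond after each circuit layer, together with the analogous truncation of the inner indices, is what keeps the cost bounded by $\sim N D \kappa^3 \chi^3$.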
A deep neural network is a parametrization of a multilayer mapping of signals in terms of many alternately arranged linear and nonlinear transformations. The linear transformations, which are generally used in the fully connected as well as the convolutional layers, contain most of the variational parameters that are trained and stored. Compressing a deep neural network to reduce its number of variational parameters, but not its prediction power, is an important but challenging step toward training these parameters efficiently and lowering the risk of overfitting. Here we show that this problem can be effectively solved by representing linear transformations with matrix product operators (MPOs), a tensor network originally proposed in physics to characterize the short-range entanglement in one-dimensional quantum states. We have tested this approach on five typical neural networks, including FC2, LeNet-5, VGG, ResNet, and DenseNet, on two widely used data sets, namely MNIST and CIFAR-10, and found that this MPO representation indeed sets up a faithful and efficient mapping between input and output signals, which can maintain or even improve the prediction accuracy with a dramatically reduced number of parameters. Our method greatly simplifies the representations in deep learning and opens a possible route toward establishing a framework for modern neural networks that may be simpler and cheaper, yet more efficient.
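To make the construction concrete, here is a minimal numpy sketch that factors a dense fully connected weight matrix into a two-core MPO via a truncated SVD; the shapes, the rank cutoff `r_max`, and the helper names are illustrative assumptions rather than the paper's implementation.

```python
import numpy as np

def matrix_to_mpo2(w, i_dims, j_dims, r_max):
    """Factor w (shape prod(i_dims) x prod(j_dims)) into two MPO cores."""
    i1, i2 = i_dims
    j1, j2 = j_dims
    # Group (input_1, output_1) on the left and (input_2, output_2) on the
    # right, then split with a truncated SVD.
    t = w.reshape(i1, i2, j1, j2).transpose(0, 2, 1, 3).reshape(i1 * j1, i2 * j2)
    u, s, vh = np.linalg.svd(t, full_matrices=False)
    r = min(r_max, len(s))
    core1 = u[:, :r].reshape(i1, j1, r)                   # (i1, j1, r)
    core2 = (np.diag(s[:r]) @ vh[:r]).reshape(r, i2, j2)  # (r, i2, j2)
    return core1, core2

def mpo2_matvec(core1, core2, x, i_dims):
    """Apply the MPO to an input x of length prod(i_dims); returns y = x @ w."""
    xt = x.reshape(i_dims)
    y = np.einsum('ab,ajr,rbk->jk', xt, core1, core2)
    return y.reshape(-1)

w = np.random.randn(16, 16)  # dense layer mapping rows (in) to columns (out)
c1, c2 = matrix_to_mpo2(w, (4, 4), (4, 4), r_max=4)
x = np.random.randn(16)
err = np.linalg.norm(mpo2_matvec(c1, c2, x, (4, 4)) - x @ w)
print(c1.size + c2.size, "MPO params vs", w.size, "| reconstruction error:", err)
```

For a random matrix this rank truncation is lossy; the point of the approach is that trained weights carry enough structure to be compressed this way with little or no loss of prediction accuracy.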
This paper proposes an original solution to input saturation and dead zone in fractional-order systems. To handle these nonsmooth nonlinearities, the control input is decomposed into two independent parts by introducing an intermediate variable, so that the combined dead-zone and saturation problem is transformed into a problem of bounded disturbance plus saturation. Through the design of a fractional-order adaptive backstepping controller, the bound of the disturbance is estimated, and the saturation is compensated by the virtual signal of an auxiliary system. Despite the presence of these nonsmooth nonlinearities, the proposed method guarantees that the output asymptotically tracks the reference signal. Finally, simulation studies demonstrate the effectiveness of the method.
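The decomposition can be illustrated on a symmetric dead zone: writing $DZ(v) = m v + d(v)$, the mismatch $d(v)$ is bounded by $m b$ and can therefore be treated as a disturbance whose bound the adaptive scheme estimates. A minimal numpy sketch, where the slope $m$ and breakpoint $b$ are illustrative values, not the paper's:

```python
import numpy as np

m, b = 1.5, 0.4  # hypothetical dead-zone slope and breakpoint

def dead_zone(v):
    """Actuator dead zone: no output until |v| exceeds the breakpoint b."""
    return np.where(v > b, m * (v - b),
           np.where(v < -b, m * (v + b), 0.0))

v = np.linspace(-2.0, 2.0, 401)
u = dead_zone(v)

# Decomposition in the spirit of the abstract: u = m*v + d(v), where the
# mismatch d(v) is a bounded disturbance with |d(v)| <= m*b.
d = u - m * v
print(np.max(np.abs(d)) <= m * b + 1e-12)  # True: the disturbance is bounded
```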
The modern world is built on the resilience of interdependent infrastructures, which can be characterized as complex networks. Recently, a framework for the analysis of interdependent networks has been developed to explain the mechanism of resilience in such systems. Here we extend this interdependent network model by considering flows in the networks and study the system's resilience under different attack strategies. In our model, nodes may fail due to either overload or loss of interdependency. Under the interaction between these two failure mechanisms, interdependent scale-free (SF) networks are shown to be extremely vulnerable: in our simulations, their resilience is much lower than that of a single SF network or of interdependent SF networks without flows.
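A toy version of such a cascade can be sketched with networkx; the load model (betweenness centrality, with capacity set to $(1+\alpha)$ times the initial load), the one-to-one node coupling, and all parameter values are illustrative assumptions, not the paper's model.

```python
import networkx as nx

N, ALPHA = 200, 0.3
A = nx.barabasi_albert_graph(N, 2, seed=1)  # scale-free network A
B = nx.barabasi_albert_graph(N, 2, seed=2)  # scale-free network B

def capacity(g):
    """Capacity = (1 + ALPHA) * initial betweenness load of each node."""
    return {n: (1 + ALPHA) * l
            for n, l in nx.betweenness_centrality(g, normalized=False).items()}

cap = {"A": capacity(A), "B": capacity(B)}

def overloaded(g, c):
    """Nodes whose current load exceeds their fixed capacity."""
    load = nx.betweenness_centrality(g, normalized=False)
    return {n for n, l in load.items() if l > c[n]}

# Attack the highest-degree hub of A, then iterate the two failure
# mechanisms (overload, loss of interdependency) to a fixed point.
A.remove_node(max(A.degree, key=lambda t: t[1])[0])
changed = True
while changed:
    changed = False
    for g, partner, key in ((B, A, "B"), (A, B, "A")):
        # Overload failures in g, plus dependency failures: nodes of g
        # whose one-to-one counterpart in the partner network has failed.
        dead = overloaded(g, cap[key]) | (set(g.nodes) - set(partner.nodes))
        if dead:
            g.remove_nodes_from(dead)
            changed = True

giant = max((len(c) for c in nx.connected_components(A)), default=0)
print("surviving fraction of A:", giant / N)
```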