The quantum singular value transformation is a powerful quantum algorithm that allows one to apply a polynomial transformation to the singular values of a matrix that is embedded as a block of a unitary transformation. This paper shows how to perform the quantum singular value transformation for a matrix that can be embedded as a block of a Hamiltonian. The transformation can be implemented in a purely Hamiltonian context by the alternating application of Hamiltonians for chosen intervals: it is an example of the Quantum Alternating Operator Ansatz (generalized QAOA). We also show how to use the Hamiltonian quantum singular value transformation to perform inverse block encoding to implement a unitary of which a given Hamiltonian is a block. Inverse block encoding leads to novel procedures for matrix multiplication and for solving differential equations on quantum information processors in a purely Hamiltonian fashion.
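The transformation this abstract builds on can be illustrated classically: applying a polynomial $p$ to the singular values of a matrix $A = U \Sigma V^\dagger$ means forming $U\,p(\Sigma)\,V^\dagger$, which QSVT implements coherently when $A$ is block-encoded in a unitary. The sketch below computes this transformation directly from the SVD; the odd polynomial and the test matrix are invented for the example, not taken from the paper.

```python
import numpy as np

def singular_value_transform(A, poly):
    """Classically apply a polynomial to the singular values of A.

    Computes p^(SV)(A) = U diag(p(sigma_i)) V^T from the SVD. For an odd
    polynomial this is well defined independently of the SVD's sign
    conventions; QSVT realizes the same map on a block-encoded matrix.
    """
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return U @ np.diag(poly(s)) @ Vt

# Example: an odd polynomial that pushes singular values toward 1,
# the kind of building block used in sign-function and inversion constructions.
p = lambda x: 1.5 * x - 0.5 * x**3
A = np.diag([0.2, 0.5, 0.9])
B = singular_value_transform(A, p)
# B is diagonal with entries p(0.2), p(0.5), p(0.9).
```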
When the amount of entanglement in a quantum system is limited, the relevant dynamics of the system is restricted to a very small part of the state space. When restricted to this subspace, the description of the system becomes efficient in the system size. A class of algorithms, exemplified by the Time-Evolving Block-Decimation (TEBD) algorithm, makes use of this observation by selecting the relevant subspace through a decimation technique relying on the Singular Value Decomposition (SVD). In these algorithms, the complexity of each time-evolution step is dominated by the SVD. Here we show that, by applying a randomized version of the SVD routine (RRSVD), the power law governing the computational complexity of TEBD is lowered by one degree, resulting in a considerable speed-up. We exemplify the potential gains in efficiency with some real-world examples to which TEBD can be successfully applied, and demonstrate that for those systems RRSVD delivers results as accurate as state-of-the-art deterministic SVD routines.
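The core idea behind randomized SVD routines of this kind can be sketched in a few lines: project the matrix onto a random low-dimensional subspace, optionally sharpen the subspace with power iterations, and run an exact SVD only on the small projected matrix. The following is a generic sketch in the spirit of RRSVD and Halko et al., not the paper's actual routine; function and parameter names are invented for the example.

```python
import numpy as np

def randomized_svd(A, rank, n_oversample=10, n_iter=2, seed=0):
    """Sketch of a randomized truncated SVD.

    Captures the dominant range of A with a random test matrix, then
    computes an exact SVD of the much smaller projected matrix.
    """
    rng = np.random.default_rng(seed)
    m, n = A.shape
    k = min(rank + n_oversample, min(m, n))
    Omega = rng.standard_normal((n, k))   # random test matrix
    Y = A @ Omega
    for _ in range(n_iter):               # power iterations sharpen the subspace
        Y = A @ (A.T @ Y)
    Q, _ = np.linalg.qr(Y)                # orthonormal basis for the captured range
    B = Q.T @ A                           # small (k x n) matrix
    Ub, s, Vt = np.linalg.svd(B, full_matrices=False)
    U = Q @ Ub
    return U[:, :rank], s[:rank], Vt[:rank]

# Usage: a matrix of exact rank 5 is recovered almost exactly.
rng = np.random.default_rng(1)
A = rng.standard_normal((200, 5)) @ rng.standard_normal((5, 150))
U, s, Vt = randomized_svd(A, rank=5)
err = np.linalg.norm(A - U @ np.diag(s) @ Vt) / np.linalg.norm(A)
```

The cost is dominated by matrix products with thin matrices rather than a full SVD, which is the source of the complexity reduction the abstract describes.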
Since being analyzed by Rokhlin, Szlam, and Tygert and popularized by Halko, Martinsson, and Tropp, randomized Simultaneous Power Iteration has become the method of choice for approximate singular value decomposition. It is more accurate than simpler sketching algorithms, yet still converges quickly for any matrix, independently of singular value gaps. After $\tilde{O}(1/\epsilon)$ iterations, it gives a low-rank approximation within $(1+\epsilon)$ of optimal for spectral norm error. We give the first provable runtime improvement on Simultaneous Iteration: a simple randomized block Krylov method, closely related to the classic Block Lanczos algorithm, gives the same guarantees in just $\tilde{O}(1/\sqrt{\epsilon})$ iterations and performs substantially better experimentally. Despite the long history of Krylov subspace methods, our analysis is the first for such a method that does not depend on singular value gaps, which are unreliable in practice. Furthermore, while it is a simple accuracy benchmark, even $(1+\epsilon)$ error for spectral norm low-rank approximation does not imply that an algorithm returns high-quality principal components, a major issue for data applications. We address this problem for the first time by showing that both Block Krylov Iteration and a minor modification of Simultaneous Iteration give nearly optimal PCA for any matrix. This result further justifies their strength over non-iterative sketching methods. Finally, we give insight beyond the worst case, justifying why both algorithms can run much faster in practice than predicted. We clarify how simple techniques can take advantage of common matrix properties to significantly improve runtime.
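The block Krylov idea can be sketched concretely: instead of keeping only the last power-iterated block $(AA^T)^q A\Pi$, collect all intermediate blocks into a Krylov subspace and extract the best rank-$k$ approximation within it by Rayleigh-Ritz. This is a minimal illustration in the spirit of the method described, under assumed parameter choices; it is not the authors' implementation.

```python
import numpy as np

def block_krylov_lowrank(A, rank, q=3, seed=0):
    """Sketch of randomized Block Krylov Iteration for low-rank approximation.

    Builds the subspace spanned by [A Pi, (AA^T) A Pi, ..., (AA^T)^q A Pi]
    from a random start block Pi, orthonormalizes it, and extracts the top
    singular directions of A restricted to that subspace (Rayleigh-Ritz).
    """
    rng = np.random.default_rng(seed)
    m, n = A.shape
    Pi = rng.standard_normal((n, rank))
    blocks, Y = [], A @ Pi
    for _ in range(q + 1):
        blocks.append(Y)
        Y = A @ (A.T @ Y)
    K = np.concatenate(blocks, axis=1)
    Q, _ = np.linalg.qr(K)                # orthonormal basis of the Krylov space
    Ub, s, Vt = np.linalg.svd(Q.T @ A, full_matrices=False)
    U = Q @ Ub
    return U[:, :rank], s[:rank], Vt[:rank]

# Usage: near-optimal spectral-norm low-rank approximation of a random matrix.
rng = np.random.default_rng(2)
A = rng.standard_normal((120, 80))
U, s, Vt = block_krylov_lowrank(A, rank=10)
```

Note that each iteration contributes a whole block to the subspace, which is why fewer iterations suffice compared to power iteration, where intermediate blocks are discarded.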
We present an interferometric technique for measuring ultra-small tilts. Information about a tilt in one of the mirrors of a modified Sagnac interferometer is carried by the phase difference between the counter-propagating laser beams. Using a small misalignment of the interferometer, orthogonal to the plane of the tilt, a bimodal (or two-fringe) pattern is induced in the beams' transverse power distribution. By tracking the mean of this distribution with a split detector, a sensitive measurement of the phase is performed. With 1.2 mW of continuous-wave laser power, the technique has a shot-noise-limited sensitivity of 56 frad/$\sqrt{\mbox{Hz}}$, and a measured noise floor of 200 frad/$\sqrt{\mbox{Hz}}$ for tilt frequencies above 2 Hz. A tilt of 200 frad corresponds to a differential displacement of 4.0 fm in our setup. The novelty of the protocol relies on signal amplification due to the misalignment, and on good performance at low frequencies. A noise floor of about 70 prad/$\sqrt{\mbox{Hz}}$ is observed between 2 and 100 mHz.
The identifiability of a system is concerned with whether the unknown parameters in the system can be uniquely determined from all the possible data generated by a certain experimental setting. A test of quantum Hamiltonian identifiability is an important tool to save time and cost when exploring the identification capability of quantum probes and experimentally implementing quantum identification schemes. In this paper, we generalize the identifiability test based on the Similarity Transformation Approach (STA) in classical control theory and extend it to the domain of quantum Hamiltonian identification. We employ STA to prove the identifiability of spin-1/2 chain systems of arbitrary dimension assisted by single-qubit probes. We further extend the traditional STA method by proposing a Structure Preserving Transformation (SPT) method for non-minimal systems. We use the SPT method to introduce an indicator for the existence of economic quantum Hamiltonian identification algorithms, whose computational complexity directly depends on the number of unknown parameters (which could be much smaller than the system dimension). Finally, we give an example of such an economic Hamiltonian identification algorithm and perform simulations to demonstrate its effectiveness.
This chapter describes gene expression analysis by Singular Value Decomposition (SVD), emphasizing initial characterization of the data. We describe SVD methods for visualization of gene expression data, representation of the data using a smaller number of variables, and detection of patterns in noisy gene expression data. In addition, we describe the precise relation between SVD analysis and Principal Component Analysis (PCA) when PCA is calculated using the covariance matrix, enabling our descriptions to apply equally well to either method. Our aim is to provide definitions, interpretations, examples, and references that will serve as resources for understanding and extending the application of SVD and PCA to gene expression analysis.
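The precise relation mentioned here is standard: for a column-centered data matrix $X_c$ with $n$ samples, the eigenvalues of the covariance matrix $X_c^T X_c/(n-1)$ equal the squared singular values of $X_c$ divided by $n-1$, and the covariance eigenvectors coincide (up to sign) with the right singular vectors. A small numerical check, with data dimensions invented for the example:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((50, 8))          # e.g. 50 samples x 8 genes
Xc = X - X.mean(axis=0)                   # column-center the data

# Route 1: PCA via the covariance matrix.
C = Xc.T @ Xc / (X.shape[0] - 1)
evals, evecs = np.linalg.eigh(C)
evals, evecs = evals[::-1], evecs[:, ::-1]    # sort descending

# Route 2: SVD of the centered data matrix.
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)

# Covariance eigenvalues are squared singular values / (n-1), and the
# right singular vectors are the principal axes (up to sign).
assert np.allclose(evals, s**2 / (X.shape[0] - 1))
assert np.allclose(np.abs(Vt), np.abs(evecs.T))
```

This equivalence is why descriptions of SVD-based and covariance-based PCA analyses of gene expression data apply interchangeably, as the chapter notes.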