We introduce Q-space, the tensor product of an index space with a primary space, to achieve a more general mathematical description of correlations in terms of q-tuples. Topics discussed include the decomposition of Q-space into a sum-variable (location) subspace S plus an orthogonal difference-variable subspace D, and a systematisation of q-tuple size estimation in terms of p-norms. The GHP sum prescription for q-tuple size emerges naturally as the 2-norm of difference-space vectors. Maximum- and minimum-size prescriptions are found to be special cases of a continuum of p-sizes.
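As an illustration only (our own sketch, not the paper's construction; the function name p_size and the centroid-based split are assumptions), a q-tuple can be separated into a location part and a difference part, with a p-size taken as the p-norm of the difference part; p = 2 then plays the role of a GHP-like sum prescription, while large p weights the largest deviation:

```python
import numpy as np

def p_size(points, p=2.0):
    """Hypothetical p-size of a q-tuple: the p-norm of its
    difference-space (deviation-from-centroid) component.
    `points` has shape (q, d), i.e. q points in a d-dimensional primary space."""
    points = np.asarray(points, dtype=float)
    centroid = points.mean(axis=0)   # sum-variable (location) part
    diffs = points - centroid        # difference-variable part
    return np.linalg.norm(diffs.ravel(), ord=p)

# Example: a 3-tuple in a 2-dimensional primary space
triple = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
print(p_size(triple, p=2))       # 2-norm size (GHP-like sum prescription)
print(p_size(triple, p=np.inf))  # large-p limit emphasises the largest deviation
```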
The CMS collaboration at the LHC has reported a remarkable and unexpected phenomenon in very-high-multiplicity, high-energy proton-proton collisions: a positive correlation between two particles produced at similar azimuthal angles, spanning a large range in rapidity. We suggest that this ridge-like correlation may be a reflection of the rare events generated by the collision of aligned flux tubes connecting the valence quarks in the wave functions of the colliding protons. The spray of particles resulting from the approximate line source produced in such inelastic collisions then gives rise to events with a strong correlation between particles produced over a large range of both positive and negative rapidity. We suggest an additional variable, related to the commonly used measure of ellipticity, that is sensitive to such a line source.
In this paper, based on a weighted projection of the bipartite user-object network, we introduce a personalized recommendation algorithm, called network-based inference (NBI), which has higher accuracy than the classical algorithm, collaborative filtering. In the NBI, the correlation resulting from a specific attribute may be repeatedly counted in the cumulative recommendations from different objects. By considering higher-order correlations, we design an improved algorithm that can, to some extent, eliminate these redundant correlations. We test our algorithm on two benchmark data sets, MovieLens and Netflix. Compared with the NBI, the algorithmic accuracy, measured by the ranking score, is further improved by 23% for MovieLens and 22% for Netflix. The present algorithm can even outperform the Latent Dirichlet Allocation algorithm, which requires much longer computational time. Furthermore, most previous studies considered algorithmic accuracy only; in this paper we argue that diversity and popularity, as two significant criteria of algorithmic performance, should also be taken into account. With more or less the same accuracy, an algorithm giving higher diversity and lower popularity is more favorable. Numerical results show that the present algorithm outperforms the standard one simultaneously in all five adopted metrics: lower ranking score and higher precision for accuracy, larger Hamming distance and lower intra-similarity for diversity, and smaller average degree for popularity.
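For orientation, below is a minimal sketch of the resource-allocation projection usually associated with NBI (our own illustration; the helper nbi_scores and the toy matrix are assumptions, and the redundancy-eliminating correction of the improved algorithm is not included):

```python
import numpy as np

def nbi_scores(A, user):
    """Minimal sketch of network-based inference (NBI).
    A: binary user-object matrix, A[u, o] = 1 if user u has collected object o.
    Returns recommendation scores over objects for the given user."""
    k_user = A.sum(axis=1)   # user degrees
    k_obj = A.sum(axis=0)    # object degrees
    # Resource-allocation weight from object j to object i:
    # W[i, j] = (1 / k_obj[j]) * sum_u A[u, i] * A[u, j] / k_user[u]
    W = (A / k_user[:, None]).T @ A / k_obj[None, :]
    scores = W @ A[user]             # redistribute the user's resources
    scores[A[user] == 1] = -np.inf   # do not re-recommend collected objects
    return scores

# Toy example: 3 users, 4 objects
A = np.array([[1, 1, 0, 0],
              [0, 1, 1, 0],
              [0, 0, 1, 1]], dtype=float)
print(nbi_scores(A, user=0))
```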
The correlation properties of the magnitudes of a time series (sometimes called volatility) are associated with nonlinear and multifractal properties and have been applied in a great variety of fields. Here, we obtain analytically the autocorrelation of the magnitude series of a linear Gaussian noise as a function of the correlation of the noise itself, together with several analytical relations involving them. For both models and natural signals, the deviation from these equations can be used as an index of nonlinearity that can be applied to relatively short records and that does not require the presence of scaling in the time series under study. We apply this approach to show that heartbeat records during rest exhibit higher nonlinearity than records of the same subject during moderate exercise. This behavior also holds on average for the analyzed set of 10 semiprofessional soccer players. This result agrees with the fact that other measures of complexity are dramatically reduced during exercise and can shed light on its relationship with the withdrawal of parasympathetic tone and/or the activation of sympathetic activity during physical activity.
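For context, the closed form that such a derivation typically rests on is the standard bivariate-Gaussian magnitude identity (stated here as a sketch; the paper's expression for linear Gaussian noise may be written in a different form). For a zero-mean, unit-variance Gaussian series with autocorrelation $\rho(\tau)$,

$$\mathbb{E}\left[\,|x_t|\,|x_{t+\tau}|\,\right] = \frac{2}{\pi}\left(\sqrt{1-\rho^2(\tau)} + \rho(\tau)\arcsin\rho(\tau)\right), \qquad C_{|x|}(\tau) = \frac{\sqrt{1-\rho^2(\tau)} + \rho(\tau)\arcsin\rho(\tau) - 1}{\pi/2 - 1}.$$

Since this expression depends only on $\rho^2(\tau)$, a measured deviation of the empirical magnitude autocorrelation from this curve signals nonlinearity, which is the logic behind the proposed index.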
The numerous recent breakthroughs in machine learning (ML) make it imperative to consider carefully how the scientific community can benefit from a technology that, although not necessarily new, is today living its golden age. This Grand Challenge review paper focuses on the present and future role of machine learning in space weather. The purpose is twofold. On one hand, we discuss previous works that use ML for space weather forecasting, focusing in particular on the few areas that have seen the most activity: the forecasting of geomagnetic indices, of relativistic electrons at geosynchronous orbit, of solar flare occurrence, of coronal mass ejection propagation time, and of solar wind speed. On the other hand, this paper serves as a gentle introduction to the field of machine learning tailored to the space weather community and as a pointer to a number of open challenges that we believe the community should undertake in the next decade. The recurring themes throughout the review are the need to shift our forecasting paradigm to a probabilistic approach focused on the reliable assessment of uncertainties, and the combination of physics-based and machine learning approaches, known as the gray-box approach.
Entanglement properties of the IBM Q 53-qubit quantum computer are carefully examined within the noisy intermediate-scale quantum (NISQ) regime. We study GHZ-like states with multiple qubits (N = 2 to N = 7) on IBM Rochester and compare their maximal violations of Mermin polynomials with analytic results. A rule of N-qubit orthogonal measurements is adopted to further justify entanglement when the measured values fall below the maximal violation allowed beyond local realism (LR). The orthogonality of measurements thus provides a reliable criterion for entanglement in addition to the maximal LR-violation values. Our results indicate that the entanglement of the IBM 53-qubit device is reasonably good for N <= 4, while for longer entangled chains entanglement is observed only for some special choices of connectivity.
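As a minimal sketch (our own illustration in Qiskit; the function name ghz_circuit is an assumption, and the measurement settings needed to evaluate Mermin polynomials are not included), an N-qubit GHZ-like state can be prepared with a Hadamard followed by a CNOT chain:

```python
from qiskit import QuantumCircuit

def ghz_circuit(n):
    """Prepare an n-qubit GHZ state (|0...0> + |1...1>)/sqrt(2)
    using a Hadamard followed by a CNOT chain, then measure all qubits."""
    qc = QuantumCircuit(n, n)
    qc.h(0)
    for q in range(n - 1):
        qc.cx(q, q + 1)
    qc.measure(range(n), range(n))
    return qc

print(ghz_circuit(4))
```

On hardware such as IBM Rochester, the CNOT chain must be mapped onto the physical coupling graph, which is consistent with the observation that longer chains retain entanglement only for particular choices of connectivity.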