In applications of dynamical systems, situations arise in which it is desirable to predict the onset of synchronization, as it can lead to characteristic and significant changes in system performance and behavior, for better or worse. In experimental and real-world settings the system equations are often unknown, raising the need for a prediction framework that is model-free and fully data-driven. We contemplate that this challenging problem can be addressed with machine learning. In particular, exploiting reservoir computing (echo state networks), we devise a parameter-aware scheme to train the neural machine using asynchronous time series, i.e., data from the parameter regime prior to the onset of synchronization. A properly trained machine can then predict the synchronization transition: for a given amount of parameter drift, it accurately anticipates whether the system will remain asynchronous or exhibit synchronous dynamics. We demonstrate the machine-learning-based framework using representative chaotic models and small network systems that exhibit continuous (second-order) or abrupt (first-order) transitions. Remarkably, for a network system exhibiting an explosive (first-order) transition and a hysteresis loop in synchronization, the scheme accurately predicts these features, including the precise locations of the transition points on the forward and backward transition paths.
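To make the parameter-aware idea concrete, below is a minimal numpy sketch, under stated assumptions, of an echo state network whose input layer carries an extra channel for the bifurcation parameter, so that a single readout is trained across time series sampled at several pre-transition parameter values. The placeholder training series, reservoir size, and parameter values are purely illustrative, not the paper's actual models or hyperparameters.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical placeholder for "asynchronous" training series at parameter p;
# in the paper's setting these would come from the coupled chaotic system.
def make_series(p, T=2000):
    t = np.arange(T) * 0.02
    return np.column_stack([np.sin((1 + p) * t), np.cos((1 + 2 * p) * t)])

N, dim = 200, 2
W_in = rng.uniform(-0.5, 0.5, (N, dim + 1))      # extra column: parameter channel
W = rng.uniform(-1.0, 1.0, (N, N))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))  # spectral radius < 1 (echo state heuristic)

def reservoir_states(u, p):
    """Drive the reservoir with time series u (T x dim) at parameter value p."""
    r, states = np.zeros(N), []
    for u_t in u:
        r = np.tanh(W @ r + W_in @ np.append(u_t, p))  # parameter-aware input
        states.append(r.copy())
    return np.array(states)

# One ridge-regression readout trained jointly over several parameter values.
X, Y = [], []
for p in (0.1, 0.2, 0.3):
    u = make_series(p)
    X.append(reservoir_states(u[:-1], p))
    Y.append(u[1:])                                # one-step-ahead targets
X, Y = np.vstack(X), np.vstack(Y)
W_out = np.linalg.solve(X.T @ X + 1e-6 * np.eye(N), X.T @ Y).T

# For prediction at a drifted parameter p_new, one would close the loop
# (feed W_out @ r back as the next input) and inspect the generated dynamics.
```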
We show that the degree distributions of graphs do not suffice to characterize the synchronization of systems evolving on them. We prove that, for any given degree sequence satisfying certain conditions, there exists a connected graph having that degree sequence for which the first nontrivial eigenvalue of the graph Laplacian is arbitrarily close to zero. Consequently, complex dynamical systems defined on such graphs have poor synchronization properties. The result holds under quite mild assumptions and shows that there exist classes of random, scale-free, regular, small-world, and other common network architectures that impede synchronization. The proof is based on a construction that also serves as an algorithm for building non-synchronizing networks having a prescribed degree distribution.
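As a hedged illustration of the underlying point (not the paper's construction, which is more general), the networkx sketch below builds two connected graphs with identical degree sequences whose algebraic connectivity, the first nontrivial Laplacian eigenvalue $\lambda_2$, differs markedly: a degree-preserving two-edge rewiring joins two regular halves through a two-edge cut, pushing $\lambda_2$ toward zero.

```python
import networkx as nx
import numpy as np

lam2 = lambda g: np.sort(nx.laplacian_spectrum(g))[1]  # first nontrivial eigenvalue

# A well-mixed 4-regular graph on 40 nodes (assumed connected for this seed).
G = nx.random_regular_graph(4, 40, seed=1)

# Same degree sequence, but bottlenecked: two 4-regular halves joined by a
# degree-preserving 2-swap, leaving only a two-edge cut between them.
A = nx.random_regular_graph(4, 20, seed=2)
B = nx.relabel_nodes(nx.random_regular_graph(4, 20, seed=3), lambda v: v + 20)
H = nx.union(A, B)
(a, b), (c, d) = next(iter(A.edges())), next(iter(B.edges()))
H.remove_edges_from([(a, b), (c, d)])
H.add_edges_from([(a, c), (b, d)])               # degrees unchanged

assert sorted(d for _, d in G.degree()) == sorted(d for _, d in H.degree())
print(lam2(G), lam2(H))   # lambda_2 of H is much closer to zero
```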
We study quantum synchronization between a pair of two-level systems inside two coupled cavities. Using a digital-analog decomposition of the master equation that governs the system dynamics, we show that this approach leads to quantum synchronization between the two two-level systems. Moreover, we can identify in this digital-analog block decomposition the fundamental elements of a quantum machine learning protocol, in which the agent and the environment (learning units) interact through a mediating system, namely, the register. If we additionally equip this algorithm with a classical feedback mechanism, consisting of projective measurements on the register, reinitialization of the register state, and local conditional operations on the agent and environment subspaces, a powerful and flexible quantum machine learning protocol emerges. Indeed, numerical simulations show that this protocol enhances the synchronization process, even when each subsystem experiences different loss/decoherence mechanisms, and gives us the flexibility to choose the synchronization state. Finally, we propose an implementation based on current technologies in superconducting circuits.
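Purely as an illustrative stand-in, and not the digital-analog decomposition or the measurement-feedback protocol described above, the following QuTiP sketch integrates a Lindblad master equation for two exchange-coupled two-level systems with different decay rates and computes a crude correlation-based synchronization diagnostic; the Hamiltonian, rates, and initial state are all assumptions chosen for illustration.

```python
import numpy as np
from qutip import basis, tensor, qeye, sigmam, sigmax, mesolve

# Two two-level systems with an excitation-exchange coupling (a stand-in for
# the cavity-mediated interaction), each with its own decay channel.
sm1 = tensor(sigmam(), qeye(2))
sm2 = tensor(qeye(2), sigmam())
g, gamma1, gamma2 = 1.0, 0.05, 0.08                     # illustrative rates

H = g * (sm1.dag() * sm2 + sm2.dag() * sm1)
c_ops = [np.sqrt(gamma1) * sm1, np.sqrt(gamma2) * sm2]  # distinct loss channels

# Qubit 1 starts in a superposition, qubit 2 in a basis state.
psi0 = tensor((basis(2, 0) + basis(2, 1)).unit(), basis(2, 0))
tlist = np.linspace(0, 50, 500)
e_ops = [tensor(sigmax(), qeye(2)), tensor(qeye(2), sigmax())]
result = mesolve(H, psi0, tlist, c_ops, e_ops)

# Crude diagnostic: Pearson correlation of the late-time <sigma_x> signals.
x1, x2 = result.expect
print(np.corrcoef(x1[200:], x2[200:])[0, 1])
```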
In previously identified forms of remote synchronization between two nodes, the intermediate portion of the network connecting the two nodes is not synchronized with them but generally exhibits some coherent dynamics. Here we report on a network phenomenon we call incoherence-mediated remote synchronization (IMRS), in which two non-contiguous parts of the network are identically synchronized while the dynamics of the intermediate part is statistically and information-theoretically incoherent. We identify mirror symmetry in the network structure as a mechanism allowing for such behavior, and show that IMRS is robust against dynamical noise as well as against parameter changes. IMRS may underlie neuronal information processing and potentially lead to network solutions for encryption key distribution and secure communication.
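A minimal numpy sketch of the mirror-symmetry mechanism, on a hypothetical 5-node path rather than one of the paper's networks: if the permutation exchanging the two outer nodes (and reflecting the intermediate part) leaves the adjacency matrix invariant, then the subspace on which the mirrored nodes evolve identically is dynamically invariant, so identical synchronization of the non-contiguous pair is admissible whatever the middle does.

```python
import numpy as np

# Hypothetical 5-node path 0-1-2-3-4: nodes 0 and 4 are the non-contiguous
# pair, nodes 1-3 the intermediate part.
A = np.zeros((5, 5))
for i, j in [(0, 1), (1, 2), (2, 3), (3, 4)]:
    A[i, j] = A[j, i] = 1.0

# Mirror permutation swapping 0<->4 and 1<->3 (node 2 fixed).
P = np.eye(5)[[4, 3, 2, 1, 0]]
print(np.allclose(P @ A @ P.T, A))   # True: the network has the mirror symmetry
```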
Chimera and solitary states have captivated scientists and engineers due to the peculiar coexistence of coherent and incoherent dynamical evolution among coupled units in various natural and artificial systems. It has further been demonstrated that such states can be engineered in systems of coupled oscillators by a suitable implementation of communication delays. Here, using supervised machine learning, we predict (a) the precise value of the delay sufficient to engineer chimera and solitary states for a given set of system parameters, and (b) the intensity of incoherence of such engineered states. The results are demonstrated for two examples: a single-layer and a multilayer network. First, chimera states (solitary states) are engineered by introducing delays in the neighboring links of a node (the interlayer links) in a 2D lattice (multiplex network) of oscillators. Then, different machine-learning classifiers, KNN, SVM, and an MLP neural network, are employed by feeding them the data obtained from the network models. Once a machine-learning model is trained on a limited amount of data, it makes predictions for given unknown values of the system parameters. Testing accuracy, sensitivity, and specificity analyses reveal that the MLP-NN classifier is better suited than the KNN or SVM classifiers for predicting the parameter values of engineered chimera and solitary states. The technique provides an easy methodology for predicting critical delay values and the intensity of incoherence when designing an experimental setup to create solitary and chimera states.
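A hedged scikit-learn sketch of the classification step: hypothetical (coupling, delay) feature vectors with toy state labels stand in for the data obtained from the delayed network models, and the three classifiers named above are compared on held-out accuracy.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.neural_network import MLPClassifier

# Toy stand-in data: rows are (coupling strength, delay); labels encode the
# observed state (0 = coherent, 1 = chimera, 2 = solitary). In practice these
# would come from simulating the delayed lattice / multiplex network models.
rng = np.random.default_rng(0)
X = rng.uniform([0.0, 0.0], [1.0, np.pi], size=(300, 2))
y = (X[:, 1] > 1.0).astype(int) + (X[:, 1] > 2.0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=1)
for clf in (KNeighborsClassifier(5), SVC(kernel="rbf"),
            MLPClassifier((32, 32), max_iter=2000, random_state=1)):
    print(type(clf).__name__, clf.fit(X_tr, y_tr).score(X_te, y_te))
```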
The synchronization phenomenon is ubiquitous in nature. In ensembles of coupled oscillators, explosive synchronization is a particular type of transition to phase synchrony that is first-order as the coupling strength increases. Explosive synchronization has been observed in several natural systems, and recent evidence suggests that it might also occur in the brain. A natural system in which to study this phenomenon is the Kuramoto model, which describes an ensemble of coupled phase oscillators. Here we calculate bivariate similarity measures (the cross-correlation, $\rho_{ij}$, and the phase locking value, PLV$_{ij}$) between the phases, $\phi_i(t)$ and $\phi_j(t)$, of pairs of oscillators and determine the lag time between them as the time shift, $\tau_{ij}$, which gives maximum similarity (i.e., the maximum of $\rho_{ij}(\tau)$ or PLV$_{ij}(\tau)$). We find that, as the transition to synchrony is approached, changes in the distribution of lag times provide an early warning of the synchronization transition (either gradual or explosive). The analysis of experimental data recorded from Rössler-like electronic chaotic oscillators suggests that these findings are not limited to phase oscillators, as the lag times display qualitatively similar behavior with increasing coupling strength as in the Kuramoto oscillators. We also analyze the statistical relationship between the lag times of pairs of oscillators and the existence of a direct connection between them. We find that, depending on the strength of the coupling, the lags can be informative of the network connectivity.
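A minimal numpy sketch of the lag-time computation for the PLV variant: PLV$_{ij}(\tau)$ is scanned over time shifts and $\tau_{ij}$ is the maximizer (the $\rho_{ij}(\tau)$ variant would instead cross-correlate, e.g., $\sin\phi_i$ and $\sin\phi_j$ at each shift). The synthetic drifting phase is an assumption used only for the self-check.

```python
import numpy as np

def lag_time(phi_i, phi_j, max_lag=100):
    """Shift tau maximizing PLV_ij(tau) = |<exp(i (phi_i(t+tau) - phi_j(t)))>|."""
    best = (0, -1.0)
    for tau in range(-max_lag, max_lag + 1):
        if tau >= 0:
            a, b = phi_i[tau:], phi_j[:len(phi_j) - tau]
        else:
            a, b = phi_i[:tau], phi_j[-tau:]
        plv = abs(np.mean(np.exp(1j * (a - b))))
        if plv > best[1]:
            best = (tau, plv)
    return best

# Self-check: two copies of a synthetic drifting phase offset by 25 samples.
rng = np.random.default_rng(3)
phi = (2 * np.pi * 1.3 * np.arange(2000) * 0.01
       + np.cumsum(0.05 * rng.standard_normal(2000)))
print(lag_time(phi[:-25], phi[25:]))   # -> (25, 1.0)
```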