
State-Space Based Network Topology Identification

Added by Mario Coutino
Publication date: 2019
Language: English





In this work, we explore the state-space formulation of network processes to recover the underlying structure of the network (local connections). To do so, we employ subspace techniques borrowed from the system identification literature and extend them to the network topology inference problem. This approach provides a unified view of traditional network control theory and signal processing on networks. In addition, it provides theoretical guarantees for the recovery of the topological structure of a deterministic linear dynamical system from input-output observations, even though the input and state evolution networks can be different.
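
As an illustration of the kind of subspace step the abstract refers to, the toy sketch below applies a classical Ho-Kalman realization to the exact Markov parameters of a small, made-up linear network; it is not the authors' algorithm, and all dimensions and matrices are assumptions chosen for the example.

```python
# Toy subspace (Ho-Kalman) realization sketch; sizes and system matrices are assumptions.
import numpy as np

rng = np.random.default_rng(0)
n, m, p, N = 4, 2, 2, 8            # nodes/states, inputs, outputs, Hankel block rows

A = rng.standard_normal((n, n))
A *= 0.9 / max(abs(np.linalg.eigvals(A)))     # rescale so the dynamics are stable
B = rng.standard_normal((n, m))
C = rng.standard_normal((p, n))

# Markov parameters h_k = C A^{k-1} B, k = 1, ..., 2N (estimated from data in practice)
h = [C @ np.linalg.matrix_power(A, k) @ B for k in range(2 * N)]

H0 = np.block([[h[i + j] for j in range(N)] for i in range(N)])       # Hankel of h_1..h_{2N-1}
H1 = np.block([[h[i + j + 1] for j in range(N)] for i in range(N)])   # shifted Hankel

U, s, Vt = np.linalg.svd(H0)
Obs = U[:, :n] * np.sqrt(s[:n])               # extended observability factor
Ctr = np.sqrt(s[:n])[:, None] * Vt[:n, :]     # extended controllability factor
A_hat = np.linalg.pinv(Obs) @ H1 @ np.linalg.pinv(Ctr)

# The state matrix is recovered only up to a similarity transform, so we compare spectra;
# pinning down the topology itself requires the structural priors discussed in the paper.
print(np.round(np.sort(np.linalg.eigvals(A)), 3))
print(np.round(np.sort(np.linalg.eigvals(A_hat)), 3))
```
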



Related Research

In this work, we explore the state-space formulation of a network process to recover, from partial observations, the underlying network topology that drives its dynamics. To do so, we employ subspace techniques borrowed from the system identification literature and extend them to the network topology identification problem. This approach provides a unified view of traditional network control theory and signal processing on graphs. In addition, it provides theoretical guarantees for the recovery of the topological structure of a deterministic continuous-time linear dynamical system from input-output observations, even though the input and state interaction networks might be different. The derived mathematical analysis is accompanied by an algorithm for identifying, from data, a network topology that is consistent with the dynamics of the system and conforms to the prior information about the underlying structure. The proposed algorithm relies on alternating projections and is provably convergent. Numerical results corroborate the theoretical findings and the applicability of the proposed algorithm.
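
To make the alternating-projections idea concrete, here is a generic sketch (not necessarily the paper's exact algorithm): it projects back and forth between two linear subspaces, matrices diagonalized by a given orthonormal eigenbasis (standing in for what the identification step pins down) and symmetric matrices with a prescribed sparsity support (standing in for the structural prior). Because both sets are subspaces, the iteration converges whenever their intersection is nonempty; the sizes, ground-truth matrix, and prior below are assumptions.

```python
# Generic alternating-projections sketch; the ground truth, support, and eigenbasis are assumptions.
import numpy as np

rng = np.random.default_rng(1)
n = 6

# Sparse symmetric "topology" used only to build a consistent example
S_true = np.triu(rng.standard_normal((n, n)) * (rng.random((n, n)) < 0.4), 1)
S_true = S_true + S_true.T
support = (S_true != 0) | np.eye(n, dtype=bool)   # structural prior: known support (plus diagonal)
_, V = np.linalg.eigh(S_true)                     # stand-in for what the identification step returns

def proj_eigenbasis(M, V):
    """Closest matrix (in Frobenius norm) diagonalized by the columns of V."""
    return V @ np.diag(np.diag(V.T @ M @ V)) @ V.T

def proj_support(M, support):
    """Closest symmetric matrix with the prescribed sparsity support."""
    M = 0.5 * (M + M.T)
    return np.where(support, M, 0.0)

X = rng.standard_normal((n, n))                   # arbitrary initialization
for _ in range(500):
    X = proj_support(proj_eigenbasis(X, V), support)

print(np.linalg.norm(proj_eigenbasis(X, V) - X))  # distance between the two sets (decreases toward 0)
```
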
Data defined over a network have been successfully modelled by means of graph filters. However, although in many scenarios the connectivity of the network is known, e.g., smart grids, social networks, etc., the lack of well-defined interaction weights hinders the ability to model the observed networked data using graph filters. Therefore, in this paper, we focus on the joint identification of the coefficients and graph weights defining the graph filter that best models the observed input/output network data. While these two problems have mostly been addressed separately, here we propose an iterative method that exploits knowledge of the support of the graph for the joint identification of graph filter coefficients and edge weights. We further show that our iterative scheme guarantees a non-increasing cost at every iteration, ensuring globally convergent behavior. Numerical experiments confirm the applicability of our proposed approach.
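
A simplified sketch of this alternating idea is given below, restricted (as an assumption made here, to keep both subproblems plain least squares) to a first-order graph filter y = h0*x + h1*S*x with a known symmetric support for S: the filter coefficients and the edge weights are updated in turn, so the recorded data-fit cost cannot increase from one iteration to the next.

```python
# Alternating least-squares sketch for a first-order graph filter; data and support are assumptions.
import numpy as np

rng = np.random.default_rng(2)
n, T = 8, 200                                     # nodes, input/output pairs

support = np.triu(rng.random((n, n)) < 0.3, 1)
support = support | support.T                     # assumed known, symmetric edge support
S_true = np.where(support, rng.standard_normal((n, n)), 0.0)
S_true = 0.5 * (S_true + S_true.T)
h_true = np.array([0.5, -1.3])

X = rng.standard_normal((n, T))
Y = h_true[0] * X + h_true[1] * (S_true @ X)      # noiseless observations for the example

idx = np.column_stack(np.nonzero(np.triu(support, 1)))   # free upper-triangular edges
S = np.where(support, rng.standard_normal((n, n)), 0.0)  # random feasible initialization
S = 0.5 * (S + S.T)

costs = []
for _ in range(30):
    # (a) coefficient step: least squares in (h0, h1) for the current S
    A = np.column_stack([X.ravel(), (S @ X).ravel()])
    h, *_ = np.linalg.lstsq(A, Y.ravel(), rcond=None)

    # (b) weight step: least squares in the edge weights for the current (h0, h1)
    R = (Y - h[0] * X).ravel()                    # residual to be explained by h1 * S @ X
    cols = []
    for i, j in idx:                              # edge (i, j) acts on rows i and j of S @ X
        E = np.zeros((n, T))
        E[i], E[j] = X[j], X[i]
        cols.append(h[1] * E.ravel())
    w, *_ = np.linalg.lstsq(np.column_stack(cols), R, rcond=None)
    S = np.zeros((n, n))
    S[idx[:, 0], idx[:, 1]] = w
    S = S + S.T

    costs.append(np.linalg.norm(h[0] * X + h[1] * (S @ X) - Y) ** 2)

print(np.round(costs[:3], 4), np.round(costs[-1], 6))    # non-increasing data-fit cost
```
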
Signal processing and machine learning algorithms for data supported over graphs require knowledge of the graph topology. Unless this information is given by the physics of the problem (e.g., water supply networks, power grids), the topology has to be learned from data. Topology identification is a challenging task, as the problem is often ill-posed, and it becomes even harder when the graph structure is time-varying. In this paper, we address the problem of dynamic topology identification by building on recent results from time-varying optimization, devising a general-purpose online algorithm that operates in non-stationary environments. Because of its iteration-constrained nature, the proposed approach exhibits an intrinsic temporal regularization of the graph topology without explicitly enforcing it. As a case study, we specialize our method to the Gaussian graphical model (GGM) problem and corroborate its performance.
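
As a rough illustration of the online, iteration-budgeted viewpoint (not the paper's algorithm), the sketch below runs one proximal-gradient step of the l1-penalized Gaussian graphical model objective per incoming sample, warm-started at the previous precision-matrix estimate so that the earlier topology implicitly regularizes the new one; the data stream, step size, and penalty are assumptions.

```python
# Online proximal-gradient sketch for a time-varying Gaussian graphical model; constants are assumptions.
import numpy as np

rng = np.random.default_rng(3)
n, lam, eta = 5, 0.05, 0.05                     # nodes, l1 penalty, step size

def soft_threshold_offdiag(M, tau):
    S = np.sign(M) * np.maximum(np.abs(M) - tau, 0.0)
    np.fill_diagonal(S, np.diag(M))             # penalize only off-diagonal entries
    return S

def true_precision(t):
    """Slowly varying ground-truth precision matrix that generates the data stream."""
    Theta = np.eye(n)
    Theta[0, 1] = Theta[1, 0] = 0.4 * np.cos(0.01 * t)
    Theta[2, 3] = Theta[3, 2] = 0.4 * np.sin(0.01 * t)
    return Theta

Theta_hat = np.eye(n)                           # running precision-matrix estimate
S_ema = np.eye(n)                               # exponential moving average of x x^T
for t in range(2000):
    x = rng.multivariate_normal(np.zeros(n), np.linalg.inv(true_precision(t)))
    S_ema = 0.99 * S_ema + 0.01 * np.outer(x, x)

    # one proximal-gradient step per sample (the constrained iteration budget)
    grad = S_ema - np.linalg.inv(Theta_hat)     # gradient of tr(S Theta) - logdet(Theta)
    Theta_hat = soft_threshold_offdiag(Theta_hat - eta * grad, eta * lam)
    Theta_hat = 0.5 * (Theta_hat + Theta_hat.T)

    w, V = np.linalg.eigh(Theta_hat)            # simple eigenvalue floor keeps the iterate PD
    Theta_hat = V @ np.diag(np.maximum(w, 1e-3)) @ V.T

print(np.round(Theta_hat, 2))                   # tracks the slowly varying sparse precision
```
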
In some Internet of Things (IoT) applications, multipath propagation is a main constraint of the communication channel. Recently, the chaotic baseband wireless communication system (CBWCS) has shown promise in eliminating the inter-symbol interference (ISI) caused by multipath propagation. However, the current technique is only capable of removing part of the ISI, because only past decoded bits are available for the suboptimal decoding threshold calculation, even though future transmitted bits also contribute to the threshold. The unavailability of the future information bits needed by the optimal decoding threshold is an obstacle to further improving the bit error rate (BER) performance. Unlike the previous method, which uses an echo state network (ESN) to predict one future information bit, the method proposed in this paper predicts the optimal threshold directly using an ESN. The proposed ESN-based threshold prediction method simplifies the symbol decoding operation by removing the threshold calculation from the transmitted symbols and channel information, achieving better BER performance than the previous method. The reason for this superior result is twofold: first, the proposed ESN is capable of using more future symbol information, conveyed by the ESN input, to obtain a more accurate threshold; second, the proposed method does not need to estimate the channel information using the least squares method, which avoids the extra error caused by inaccurate channel estimation. In this way, the computational complexity is decreased compared with the previous method. Simulation results and an experiment based on a wireless open-access research platform under a practical wireless channel show the effectiveness and superiority of the proposed method.
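
For readers unfamiliar with echo state networks, the minimal sketch below shows the generic recipe the paper builds on: a fixed random reservoir driven by the received samples and a ridge-regression readout trained to output a threshold-like target directly. The signal, the target, and all sizes here are synthetic assumptions, not the CBWCS setup from the paper.

```python
# Minimal echo state network with a ridge-regression readout; data and sizes are assumptions.
import numpy as np

rng = np.random.default_rng(4)
n_in, n_res, T = 1, 200, 3000

# toy "received signal" and toy "threshold" target, purely for illustration
u = np.cumsum(rng.standard_normal(T)) * 0.1
target = 0.5 * np.tanh(np.convolve(u, np.ones(5) / 5, mode="same"))

W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
W = rng.standard_normal((n_res, n_res))
W *= 0.9 / max(abs(np.linalg.eigvals(W)))        # spectral radius < 1 for the echo-state property

# drive the reservoir and collect its states
X = np.zeros((n_res, T))
x = np.zeros(n_res)
for t in range(T):
    x = np.tanh(W_in @ np.array([u[t]]) + W @ x)
    X[:, t] = x

# ridge-regression readout (discard an initial washout period)
washout, lam = 100, 1e-2
Xw, yw = X[:, washout:], target[washout:]
W_out = yw @ Xw.T @ np.linalg.inv(Xw @ Xw.T + lam * np.eye(n_res))

pred = W_out @ X
print(np.sqrt(np.mean((pred[washout:] - target[washout:]) ** 2)))   # readout RMSE
```
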
Graph-based representations play a key role in machine learning. The fundamental step in these representations is the association of a graph structure with a dataset. In this paper, we propose a method that aims at finding a block-sparse representation of the graph signal, leading to a modular graph whose Laplacian matrix admits the found dictionary as its eigenvectors. The role of sparsity here is to induce a band-limited representation or, equivalently, a modular structure of the graph. The proposed strategy is composed of two optimization steps: i) learning an orthonormal sparsifying transform from the data; ii) recovering the Laplacian, and hence the topology, from the transform. The first step is achieved through an iterative algorithm whose alternating intermediate solutions are expressed in closed form. The second step recovers the Laplacian matrix from the sparsifying transform through a convex optimization method. Numerical results corroborate the effectiveness of the proposed methods on both synthetic data and real brain data, the latter used for inferring the brain functional network through experiments conducted on patients affected by epilepsy.
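
The sketch below illustrates the first step only, using a generic orthonormal sparsifying-transform learning loop with closed-form alternating updates (hard-thresholding of the coefficients, then an orthogonal Procrustes/SVD update of the transform); the paper's actual formulation additionally promotes block sparsity tied to a Laplacian eigenbasis, and the data and sparsity level here are assumptions.

```python
# Generic orthonormal sparsifying-transform learning sketch; data and sparsity level are assumptions.
import numpy as np

rng = np.random.default_rng(5)
n, T, k = 10, 400, 3                       # signal size, samples, nonzeros kept per sample

Q, _ = np.linalg.qr(rng.standard_normal((n, n)))   # hidden orthonormal basis for synthetic data
C_true = np.zeros((n, T))
C_true[:k] = rng.standard_normal((k, T))
Y = Q @ C_true                             # signals that are exactly k-sparse in the basis Q

U = np.eye(n)                              # current orthonormal transform estimate
for _ in range(30):
    # sparse-coding step (closed form): keep the k largest-magnitude coefficients per column
    C = U.T @ Y
    thr = np.sort(np.abs(C), axis=0)[-k]
    C = np.where(np.abs(C) >= thr, C, 0.0)

    # transform update (closed form): orthogonal Procrustes, argmin_U ||Y - U C||_F s.t. U^T U = I
    W, _, Vt = np.linalg.svd(Y @ C.T)
    U = W @ Vt

C = U.T @ Y
thr = np.sort(np.abs(C), axis=0)[-k]
C = np.where(np.abs(C) >= thr, C, 0.0)
print(np.linalg.norm(Y - U @ C) / np.linalg.norm(Y))   # relative sparse-representation error
```
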
