
Fast Graph Filters for Decentralized Subspace Projection

Added by Daniel Romero
Publication date: 2020
Language: English





A number of inference problems in sensor networks involve projecting a measured signal onto a given subspace. In existing decentralized approaches, sensors communicate with their local neighbors to obtain a sequence of iterates that asymptotically converges to the desired projection. In contrast, the present paper develops methods that produce these projections in a finite and approximately minimal number of iterations. Building upon tools from graph signal processing, the problem is cast as the design of a graph filter which, in turn, is reduced to the design of a suitable graph shift operator. Exploiting the eigenstructure of the projection and shift matrices leads to an objective whose minimization yields approximately minimum-order graph filters. To cope with the fact that this problem is not convex, the present work introduces a novel convex relaxation of the number of distinct eigenvalues of a matrix, based on the nuclear norm of a Kronecker difference. To tackle the case where no graph filter is capable of implementing a given subspace projection with a given network topology, a second optimization criterion is presented that approximates the desired projection while trading the number of iterations off against approximation error. Two algorithms are proposed to optimize the aforementioned criteria based on the alternating-direction method of multipliers. An exhaustive simulation study demonstrates that the obtained filters compute subspace projections markedly faster than existing algorithms.
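The finite-time idea can be made concrete: if the shift operator shares an eigenbasis with the projection matrix, a polynomial graph filter whose order is one less than the number of distinct shift eigenvalues reproduces the projection exactly. The numpy sketch below illustrates this mechanism under an idealized setup (a hand-built shift with a shared eigenbasis and hand-picked eigenvalues); it is not the paper's design algorithm, and all names and dimensions are illustrative.

```python
# Minimal sketch: a polynomial graph filter H = sum_k c_k S^k that
# implements a subspace projection P in finitely many local exchanges.
# The shift S is constructed to share an eigenbasis with P (assumption).
import numpy as np

rng = np.random.default_rng(0)
n, r = 8, 3  # network size and subspace dimension (illustrative)

# Orthonormal basis of the target subspace and its projection matrix.
U = np.linalg.qr(rng.standard_normal((n, r)))[0]
P = U @ U.T

# Shift diagonalized by an eigenbasis containing U: one eigenvalue (2.0)
# on the subspace, distinct eigenvalues on its orthogonal complement.
V = np.linalg.qr(np.hstack([U, rng.standard_normal((n, n - r))]))[0]
lam = np.concatenate([np.full(r, 2.0), 0.5 * np.arange(n - r) - 1.0])
S = V @ np.diag(lam) @ V.T

# Filter order = (#distinct eigenvalues - 1). Solve a Vandermonde system
# so the filter polynomial maps subspace eigenvalues to 1, the rest to 0.
mu = np.unique(lam)
c = np.linalg.solve(np.vander(mu, increasing=True), (mu == 2.0).astype(float))

# Each power of S costs one exchange with local neighbors, so the
# projection is reached after len(c) - 1 communication rounds.
H = sum(ck * np.linalg.matrix_power(S, k) for k, ck in enumerate(c))
print(np.allclose(H, P))  # True: the filter realizes the projection exactly
```

Designing a shift with few distinct eigenvalues is what keeps the filter order, and hence the number of communication rounds, small; the paper's nuclear-norm relaxation targets exactly that quantity.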



Related research

57 - Daniel Romero 2021
This correspondence disproves the main results in the paper "Design of Asymmetric Shift Operators for Efficient Decentralized Subspace Projection" by S. Mollaebrahim and B. Beferull-Lozano. Counterexamples and counterproofs are provided where applicable. Where the identified problems can be amended, corrections are suggested. In most cases, however, no correction is possible, since the problem addressed by that paper is for the most part intractable.
We develop algorithms for computing expectations of the laws of models associated with stochastic differential equations (SDEs) driven by pure Lévy processes. We consider filtering such processes as well as pricing path-dependent options. We propose a multilevel particle filter (MLPF) to address the computational issues involved in solving these continuum problems. We show via numerical simulations and theoretical results that, under suitable assumptions on the discretization of the underlying driving Lévy process, our proposed method achieves optimal convergence rates. The cost to obtain MSE $O(\epsilon^2)$ scales like $O(\epsilon^{-2})$ for our method, compared with $O(\epsilon^{-3})$ for the standard particle filter.
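The quoted cost rates stem from the multilevel mechanism: the expectation is telescoped across discretization levels, and coupled coarse/fine samples make the level corrections cheap to estimate. Below is a generic multilevel Monte Carlo toy in numpy, not the paper's multilevel particle filter; coupled_payoffs and its noise model are illustrative stand-ins for coupled discretizations of the driving Lévy process.

```python
# Generic MLMC sketch: E[f_L] = E[f_0] + sum_l E[f_l - f_{l-1}].
import numpy as np

rng = np.random.default_rng(1)

def coupled_payoffs(level, n):
    """Toy fine/coarse payoffs sharing their driving noise z (the
    coupling); the O(2^-level) term mimics discretization bias."""
    z = rng.standard_normal(n)
    fine = z + 2.0 ** (-level) * rng.standard_normal(n)
    if level == 0:
        return fine, np.zeros(n)
    coarse = z + 2.0 ** (-(level - 1)) * rng.standard_normal(n)
    return fine, coarse

def mlmc(n_per_level):
    est = 0.0
    for level, n in enumerate(n_per_level):
        fine, coarse = coupled_payoffs(level, n)
        est += np.mean(fine - coarse)  # level-l correction term
    return est

# Sample sizes shrink with level: coupling makes the correction variance
# decay as the grids refine, which is what buys the improved cost rate.
print(mlmc([4000, 2000, 1000, 500, 250]))
```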
134 - Shiyuan He, Xiaomeng Yan 2021
Functional data analysis (FDA) methods have computational and theoretical appeal for some high-dimensional data, but lack scalability to modern large-sample datasets. To tackle this challenge, we develop randomized algorithms for two important FDA methods: functional principal component analysis (FPCA) and functional linear regression (FLR) with scalar response. The two methods are connected in that both rely on accurate estimation of the functional principal subspace. The proposed algorithms draw subsamples from the large dataset at hand and apply FPCA or FLR over the subsamples to reduce the computational cost. To effectively preserve subspace information in the subsamples, we propose a functional principal subspace sampling probability, which removes the eigenvalue scale effect inside the functional principal subspace and properly weights the residual. Based on operator perturbation analysis, we show the proposed probability has precise control over the first-order error of the subspace projection operator and can be interpreted as importance sampling for functional subspace estimation. Moreover, concentration bounds for the proposed algorithms are established, reflecting the low intrinsic dimension of functional data in an infinite-dimensional space. The effectiveness of the proposed algorithms is demonstrated on synthetic and real datasets.
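As a rough illustration of the subsampling idea, the sketch below subsamples rows with importance probabilities and reweights them so the subsampled covariance stays unbiased, then compares the recovered principal subspace with the full-data one. It uses generic squared-norm probabilities on multivariate data, not the paper's functional principal subspace sampling probability; all names and sizes are illustrative.

```python
# Hedged sketch: importance-weighted subsampling for principal subspace
# estimation (generic multivariate stand-in for the functional setting).
import numpy as np

rng = np.random.default_rng(2)
N, d, r, m = 5000, 20, 3, 500  # samples, dimension, rank, subsample size

X = rng.standard_normal((N, d)) @ np.diag(np.linspace(3.0, 0.1, d))

# Sampling probabilities proportional to squared row norms (illustrative;
# the paper designs a subspace-aware probability instead).
p = (X ** 2).sum(axis=1)
p /= p.sum()

idx = rng.choice(N, size=m, replace=True, p=p)
# Reweight so E[Xs^T Xs] = X^T X / N (unbiased covariance estimate).
Xs = X[idx] / np.sqrt(N * m * p[idx])[:, None]

# Principal subspace from the subsample vs. the full data.
_, _, Vt_sub = np.linalg.svd(Xs, full_matrices=False)
_, _, Vt_full = np.linalg.svd(X / np.sqrt(N), full_matrices=False)
err = np.linalg.norm(Vt_sub[:r].T @ Vt_sub[:r] - Vt_full[:r].T @ Vt_full[:r])
print(f"projection-operator error: {err:.3f}")
```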
Given a reference model that includes all the available variables, projection predictive inference replaces its posterior with a constrained projection including only a subset of all variables. We extend projection predictive inference to enable computationally efficient variable and structure selection in models outside the exponential family. By adopting a latent-space projection predictive perspective, we are able to: 1) propose a unified and general framework for variable selection in complex models while fully honouring the original model structure; 2) properly identify relevant structure and retain posterior uncertainties from the original model; and 3) provide an improved approach also for non-Gaussian models in the exponential family. We demonstrate the superior performance of our approach by thoroughly testing and comparing it against popular variable selection approaches in a wide range of settings, including realistic datasets. Our results show that our approach successfully recovers relevant terms and model structure in complex models, selecting fewer variables than competing approaches on realistic datasets.
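In the Gaussian linear case the projection step has a closed form: projecting a reference draw's linear predictor onto a submodel amounts to least squares. The toy sketch below shows only this special case with hypothetical data and names; it is not the authors' general latent-space method.

```python
# Toy projection step (Gaussian case): each reference-posterior draw's
# linear predictor is projected onto a candidate submodel by least squares.
import numpy as np

rng = np.random.default_rng(4)
n, p = 200, 6
X = rng.standard_normal((n, p))
# Stand-in reference posterior: only the first two variables matter.
beta_draws = rng.standard_normal((100, p)) * np.array([2.0, 1.5, 0, 0, 0, 0])

sub = [0, 1]                # candidate subset of variables
mu = beta_draws @ X.T       # reference linear predictors, (draws, n)
# The KL projection reduces to argmin_b ||X_sub b - mu||^2 per draw.
b_proj, *_ = np.linalg.lstsq(X[:, sub], mu.T, rcond=None)
print(b_proj.shape)         # (len(sub), draws): projected coefficients
```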
Dynamical systems composed of autonomous agents arise in many relevant problems such as multi-agent robotics, smart grids, and smart cities. Controlling these systems is of paramount importance to guarantee a successful deployment. Optimal centralized controllers are readily available but face limitations in terms of scalability and practical implementation. Optimal decentralized controllers, on the other hand, are difficult to find. In this paper, we propose a framework that uses graph neural networks (GNNs) to learn decentralized controllers from data. While GNNs are naturally distributed architectures, making them well suited for the task, we also adapt them to handle delayed communications. Furthermore, they are equivariant and stable, leading to good scalability and transferability properties. The problem of flocking is explored to illustrate the potential of GNNs in learning decentralized controllers.
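The local-information structure of such controllers is easy to see in a single graph-convolution layer: each filter tap aggregates information one extra hop away, so a K-tap layer needs K exchanges with neighbors. The numpy sketch below shows this basic layer only; the random graph, shapes, and tap count are illustrative, and the paper's handling of delayed communications is omitted.

```python
# Minimal graph-convolutional layer for decentralized agents:
# out = tanh( sum_k S^k X W_k ), one neighbor exchange per tap.
import numpy as np

rng = np.random.default_rng(3)
n, f_in, f_out, K = 10, 4, 2, 3   # agents, feature dims, filter taps

A = (rng.random((n, n)) < 0.3).astype(float)
S = np.maximum(A, A.T)            # symmetric adjacency as shift operator
X = rng.standard_normal((n, f_in))         # local agent states
W = rng.standard_normal((K, f_in, f_out))  # learnable tap weights

def gnn_layer(S, X, W):
    out, Z = 0.0, X
    for k in range(W.shape[0]):
        out = out + Z @ W[k]   # combine k-hop aggregated features
        Z = S @ Z              # one round of neighbor communication
    return np.tanh(out)

print(gnn_layer(S, X, W).shape)   # (n, f_out): one control output per agent
```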