
An efficient algorithm for graph Laplacian optimization based on effective resistances

Posted by Eduardo Pavez
Publication date: 2020
Research field: Electronic engineering
Paper language: English





In graph signal processing, data samples are associated with vertices of a graph, while edge weights represent similarities between those samples. We propose a convex optimization problem to learn sparse, well-connected graphs from data. We prove that each edge weight in our solution is upper bounded by the inverse of the distance between the data features of the corresponding nodes. We also show that the effective resistance distance between nodes is upper bounded by the distance between nodal data features. Thus, our proposed method learns a sparse, well-connected graph that encodes geometric properties of the data. We also propose a coordinate minimization algorithm that, at each iteration, updates an edge weight using exact minimization. The algorithm has a simple, low-complexity implementation based on closed-form expressions.
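The effective resistance that appears in these bounds is a standard quantity: for nodes i and j it equals $(e_i - e_j)^\top L^{+} (e_i - e_j)$, where $L^{+}$ is the Moore-Penrose pseudoinverse of the graph Laplacian. A minimal NumPy sketch of this computation (illustrative only, not the paper's coordinate minimization algorithm; the function and variable names are assumptions):

    import numpy as np

    def laplacian(n, edges, weights):
        """Combinatorial graph Laplacian L = sum_e w_e * b_e b_e^T."""
        L = np.zeros((n, n))
        for (i, j), w in zip(edges, weights):
            L[i, i] += w
            L[j, j] += w
            L[i, j] -= w
            L[j, i] -= w
        return L

    def effective_resistance(L, i, j):
        """R_ij = (e_i - e_j)^T L^+ (e_i - e_j), via the pseudoinverse."""
        Lp = np.linalg.pinv(L)
        b = np.zeros(L.shape[0])
        b[i], b[j] = 1.0, -1.0
        return float(b @ Lp @ b)

    # Example: a weighted path graph on 3 nodes (weights act as conductances).
    edges = [(0, 1), (1, 2)]
    weights = [2.0, 1.0]
    L = laplacian(3, edges, weights)
    print(effective_resistance(L, 0, 2))  # resistances in series: 1/2 + 1/1 = 1.5

In a coordinate minimization scheme of the kind described above, a quantity like this would be recomputed, or updated incrementally, as individual edge weights change.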




Read also

This work demonstrates a hardware-efficient support vector machine (SVM) training algorithm based on the alternating direction method of multipliers (ADMM) optimizer. Low-rank approximation is exploited to reduce the dimension of the kernel matrix by employing the Nyström method. Verified on four datasets, the proposed ADMM-based training algorithm with rank approximation reduces the matrix dimension by $32\times$ with only a 2% drop in inference accuracy. Compared to the conventional sequential minimal optimization (SMO) algorithm, the ADMM-based training algorithm achieves a $9.8\times10^{7}$ times shorter latency for training 2048 samples. Hardware design techniques, including pre-computation and memory sharing, are proposed to reduce the computational complexity by 62% and the memory usage by 60%. As a proof of concept, an epileptic seizure detector chip is designed to demonstrate the effectiveness of the proposed hardware-efficient training algorithm. The chip achieves a $153,310\times$ higher energy efficiency and a $364\times$ higher throughput-to-area ratio for SVM training than a high-end CPU. This work provides a promising solution for edge devices that require low-power and real-time training.
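As a rough, software-only illustration of the low-rank step (not the paper's hardware design), the Nyström method approximates an n×n kernel matrix from m sampled landmark columns as K ≈ C W⁺ Cᵀ. A NumPy sketch assuming an RBF kernel and uniform random landmark selection (all names and parameters here are illustrative):

    import numpy as np

    def rbf_kernel(X, Y, gamma=1.0):
        """Gaussian (RBF) kernel matrix between rows of X and rows of Y."""
        d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
        return np.exp(-gamma * d2)

    def nystrom(X, m, gamma=1.0, seed=0):
        """Rank-m Nystrom factors: K is approximated by C @ W_pinv @ C.T."""
        rng = np.random.default_rng(seed)
        idx = rng.choice(len(X), size=m, replace=False)   # landmark samples
        C = rbf_kernel(X, X[idx], gamma)                  # n x m
        W = rbf_kernel(X[idx], X[idx], gamma)             # m x m
        return C, np.linalg.pinv(W)

    X = np.random.default_rng(1).normal(size=(200, 8))
    C, Wp = nystrom(X, m=20)
    K_approx = C @ Wp @ C.T
    K = rbf_kernel(X, X)
    print(np.linalg.norm(K - K_approx) / np.linalg.norm(K))  # relative error

Downstream, the ADMM solver would operate on the small m×m factors instead of the full n×n kernel, which is where the reported dimension reduction comes from.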
We study the problem of optimal power allocation in a single-hop ad hoc wireless network. In solving this problem, we propose a hybrid neural architecture inspired by the algorithmic unfolding of the iterative weighted minimum mean squared error (WMMSE) method, which we denote unfolded WMMSE (UWMMSE). The learnable weights within UWMMSE are parameterized using graph neural networks (GNNs), where the time-varying underlying graphs are given by the fading interference coefficients in the wireless network. These GNNs are trained through a gradient descent approach based on multiple instances of the power allocation problem. Once trained, UWMMSE achieves performance comparable to that of WMMSE while significantly reducing the computational complexity. This is illustrated through numerical experiments, which also demonstrate robustness and generalization to wireless networks of different densities and sizes.
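For context, a plain (non-learned) scalar WMMSE iteration for single-antenna interference links can be sketched as below; UWMMSE replaces parts of this fixed-point update with GNN-parameterized learnable terms, which are not reproduced here. The channel convention H[i, j] (gain from transmitter j to receiver i) and all names are assumptions of this sketch:

    import numpy as np

    def wmmse_power(H, Pmax=1.0, sigma2=1.0, iters=50):
        """Classical scalar WMMSE power allocation for an interference channel.
        H[i, j]: nonnegative channel gain from transmitter j to receiver i."""
        n = H.shape[0]
        h = np.diag(H)                            # direct-link gains
        v = np.sqrt(Pmax) * np.ones(n)            # sqrt of transmit powers
        for _ in range(iters):
            rx_power = (H ** 2) @ (v ** 2) + sigma2        # total power at each receiver
            u = h * v / rx_power                           # MMSE receive gains
            w = 1.0 / (1.0 - u * h * v)                    # MSE weights
            v = (w * u * h) / ((H.T ** 2) @ (w * u ** 2))  # closed-form transmit update
            v = np.clip(v, 0.0, np.sqrt(Pmax))             # per-link power constraint
        return v ** 2                                      # allocated powers

    H = np.abs(np.random.default_rng(0).normal(size=(5, 5)))
    print(wmmse_power(H))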
In this paper, we redefine the Graph Fourier Transform (GFT) under the DSP$_{\mathrm{G}}$ framework. We consider the Jordan eigenvectors of the directed Laplacian as graph harmonics and the corresponding eigenvalues as the graph frequencies. For this purpose, we propose a shift operator based on the directed Laplacian of a graph. Based on this shift operator, we then define the total variation of graph signals, which is used for frequency ordering. We achieve a natural frequency ordering and interpretation via the proposed definition of the GFT. Moreover, we show that our proposed shift operator makes the LSI filters under DSP$_{\mathrm{G}}$ polynomial in the directed Laplacian.
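A minimal sketch of this kind of construction, assuming a diagonalizable directed Laplacian so that ordinary eigenvectors can stand in for the Jordan basis, and using eigenvalue magnitude as a crude stand-in for the paper's total-variation-based frequency ordering (all names are illustrative):

    import numpy as np

    def directed_laplacian(A):
        """L = D_out - A, with D_out the diagonal matrix of out-degrees (row sums)."""
        return np.diag(A.sum(axis=1)) - A

    def gft(A, x):
        """Project a graph signal x onto the eigenbasis of the directed Laplacian.
        Assumes L is diagonalizable; the paper's construction uses Jordan bases."""
        L = directed_laplacian(A)
        lam, V = np.linalg.eig(L)            # graph frequencies / harmonics
        order = np.argsort(np.abs(lam))      # crude frequency ordering by magnitude
        Vinv = np.linalg.inv(V[:, order])
        return lam[order], Vinv @ x          # spectrum of x

    # Directed 4-cycle (edge i -> i+1 mod 4) and a signal on it.
    A = np.roll(np.eye(4), 1, axis=1)
    x = np.array([1.0, 0.0, -1.0, 0.0])
    freqs, x_hat = gft(A, x)
    print(freqs.round(3), x_hat.round(3))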
Modern image and video compression codes exploit elaborate structures present in such signals to encode them into a small number of bits. Compressed sensing recovery algorithms, on the other hand, use such signal structures to recover signals from a few linear observations. Despite the steady progress in the field of compressed sensing, the structures typically used for signal recovery are still much simpler than those employed by state-of-the-art compression codes. The main goal of this paper is to bridge this gap by answering the following question: can one employ a given compression code to build an efficient (polynomial-time) compressed sensing recovery algorithm? In response to this question, the compression-based gradient descent (C-GD) algorithm is proposed. C-GD, a low-complexity iterative algorithm, is able to employ a generic compression code for compressed sensing and therefore elevates the scope of structures used in compressed sensing to those used by compression codes. The convergence of C-GD and its required number of measurements are theoretically analyzed in terms of the rate-distortion performance of the compression code. It is also shown that C-GD is robust to additive white Gaussian noise. Finally, the presented simulation results show that combining C-GD with commercial image compression codes such as JPEG2000 yields state-of-the-art performance in imaging applications.
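The overall recovery loop can be sketched generically: take a gradient step on the data-fidelity term, then project back onto the range of the compression code by compressing and decompressing the iterate. The codec below is a crude placeholder (keep the k largest-magnitude coefficients), not JPEG2000 or the paper's exact algorithm, and the step size and names are assumptions:

    import numpy as np

    def compress_decompress(x, k):
        """Placeholder codec: keep the k largest-magnitude entries, zero the rest."""
        z = np.zeros_like(x)
        idx = np.argsort(np.abs(x))[-k:]
        z[idx] = x[idx]
        return z

    def cgd(y, A, k, step=None, iters=200):
        """Compression-based gradient descent sketch: x <- codec(x + step * A^T (y - A x))."""
        if step is None:
            step = 1.0 / np.linalg.norm(A, 2) ** 2   # conservative step for the quadratic term
        x = np.zeros(A.shape[1])
        for _ in range(iters):
            x = compress_decompress(x + step * A.T @ (y - A @ x), k)
        return x

    rng = np.random.default_rng(0)
    x_true = np.zeros(128); x_true[rng.choice(128, 8, replace=False)] = rng.normal(size=8)
    A = rng.normal(size=(48, 128)) / np.sqrt(48)
    y = A @ x_true
    print(np.linalg.norm(cgd(y, A, k=8) - x_true))  # reconstruction error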
There has been an influx of heterogeneous information network (HIN) based recommender systems in recent years, since HINs are capable of characterizing complex graphs and contain rich semantics. Although existing approaches have achieved performance improvements, they still face the following practical problems. On the one hand, most existing HIN-based methods rely on explicit path reachability to leverage path-based semantic relatedness between users and items, e.g., metapath-based similarities. These methods are hard to use and integrate, since path connections are sparse or noisy and are often of different lengths. On the other hand, other graph-based methods aim to learn effective heterogeneous network representations by compressing a node together with its neighborhood information into a single embedding before prediction. This weakly coupled manner of modeling overlooks the rich interactions among nodes, which introduces an early summarization issue. In this paper, we propose an end-to-end Neighborhood-based Interaction Model for Recommendation (NIRec) to address the above problems. Specifically, we first analyze the significance of learning interactions in HINs and then propose a novel formulation to capture the interactive patterns between each pair of nodes through their metapath-guided neighborhoods. Then, to explore complex interactions between metapaths and deal with the learning complexity on large-scale networks, we formulate the interaction in a convolutional way and learn it efficiently with the fast Fourier transform. Extensive experiments on four different types of heterogeneous graphs demonstrate the performance gains of NIRec compared with state-of-the-art methods. To the best of our knowledge, this is the first work providing an efficient neighborhood-based interaction model for HIN-based recommendation.
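The FFT-based interaction idea can be illustrated in isolation: the circular convolution of two embedding vectors equals the inverse FFT of the element-wise product of their FFTs, reducing the cost from O(d^2) to O(d log d). A small NumPy sketch (illustrative only; it does not reproduce NIRec's actual interaction module or tensor shapes):

    import numpy as np

    def circular_conv_interaction(eu, ei):
        """Interaction of two embeddings via circular convolution, computed
        in O(d log d) with the FFT instead of O(d^2) directly."""
        return np.fft.ifft(np.fft.fft(eu) * np.fft.fft(ei)).real

    rng = np.random.default_rng(0)
    eu = rng.normal(size=64)   # e.g., a metapath-guided user-side embedding
    ei = rng.normal(size=64)   # e.g., a metapath-guided item-side embedding
    inter = circular_conv_interaction(eu, ei)

    # Sanity check against the direct O(d^2) circular convolution.
    direct = np.array([sum(eu[j] * ei[(n - j) % 64] for j in range(64)) for n in range(64)])
    print(np.allclose(inter, direct))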