
Neural Ordinary Differential Equation Model for Evolutionary Subspace Clustering and Its Applications

Published by: Mingyuan Bai
Publication date: 2021
Research field: Informatics Engineering
Paper language: English





The neural ordinary differential equation (neural ODE) model has attracted increasing attention in time series analysis for its ability to process irregular time steps, i.e., data that are not observed over equally spaced time intervals. In multi-dimensional time series analysis, one task is evolutionary subspace clustering, which aims to cluster temporal data according to their evolving low-dimensional subspace structures. Many existing methods can only process time series with regular time steps, yet in many situations, such as missing data, time series are unevenly sampled. In this paper, we propose a neural ODE model for evolutionary subspace clustering that overcomes this limitation, and we introduce a new objective function with a subspace self-expressiveness constraint. We demonstrate that this method can not only interpolate data at any time step for the evolutionary subspace clustering task, but also achieve higher accuracy than other state-of-the-art evolutionary subspace clustering methods. Both synthetic and real-world data are used to illustrate the efficacy of the proposed method.
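For intuition, here is a minimal sketch of the self-expressiveness-plus-spectral-clustering pipeline the abstract refers to, written with NumPy and scikit-learn. It assumes the irregular series has already been interpolated onto a common grid (the role the neural ODE plays in the paper) and uses the ridge-regularized closed form for the coefficients, handling the zero-diagonal constraint with a final heuristic step; this is an illustration under those assumptions, not the authors' implementation.

```python
# Minimal self-expressive subspace clustering sketch.
# X is a (features x samples) matrix of (assumed) interpolated states.
# The closed form below solves the relaxation
#   min_C ||X - XC||_F^2 + lam * ||C||_F^2,
# ignoring the zero-diagonal constraint until the heuristic step.
import numpy as np
from sklearn.cluster import SpectralClustering

def self_expressive_coefficients(X, lam=0.1):
    """Solve min_C ||X - XC||_F^2 + lam * ||C||_F^2 in closed form."""
    n = X.shape[1]
    G = X.T @ X                       # Gram matrix of the samples
    C = np.linalg.solve(G + lam * np.eye(n), G)
    np.fill_diagonal(C, 0.0)          # suppress trivial self-representation
    return C

def cluster_from_coefficients(C, n_clusters):
    """Symmetrize the coefficients into an affinity, then spectrally cluster."""
    W = np.abs(C) + np.abs(C).T
    model = SpectralClustering(n_clusters=n_clusters, affinity="precomputed")
    return model.fit_predict(W)

# Toy usage: two 1-D subspaces (lines) in R^3.
rng = np.random.default_rng(0)
basis1, basis2 = rng.normal(size=(3, 1)), rng.normal(size=(3, 1))
X = np.hstack([basis1 @ rng.normal(size=(1, 20)),
               basis2 @ rng.normal(size=(1, 20))])
labels = cluster_from_coefficients(self_expressive_coefficients(X), 2)
```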


Read also

We introduce a new family of deep neural network models. Instead of specifying a discrete sequence of hidden layers, we parameterize the derivative of the hidden state using a neural network. The output of the network is computed using a black-box differential equation solver. These continuous-depth models have constant memory cost, adapt their evaluation strategy to each input, and can explicitly trade numerical precision for speed. We demonstrate these properties in continuous-depth residual networks and continuous-time latent variable models. We also construct continuous normalizing flows, a generative model that can train by maximum likelihood, without partitioning or ordering the data dimensions. For training, we show how to scalably backpropagate through any ODE solver, without access to its internal operations. This allows end-to-end training of ODEs within larger models.
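As a concrete illustration, a continuous-depth model of this kind can be sketched in a few lines with the torchdiffeq package (released alongside this line of work); the small derivative network and the evaluation times below are illustrative choices, not prescribed by the paper.

```python
# A minimal continuous-depth model: a small network parameterizes dh/dt,
# and "depth" becomes the integration time of a black-box ODE solver.
import torch
import torch.nn as nn
from torchdiffeq import odeint  # pip install torchdiffeq

class ODEFunc(nn.Module):
    """dh/dt = f(h, t), with f given by a small neural network."""
    def __init__(self, dim):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, 64), nn.Tanh(),
                                 nn.Linear(64, dim))
    def forward(self, t, h):
        return self.net(h)

func = ODEFunc(dim=2)
h0 = torch.randn(16, 2)            # batch of initial hidden states
t = torch.tensor([0.0, 0.5, 1.0])  # arbitrary evaluation times
h = odeint(func, h0, t)            # shape (len(t), 16, 2)
loss = h[-1].pow(2).mean()         # any downstream loss
loss.backward()                    # gradients flow through the solver
```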
Bei Li, Quan Du, Tao Zhou (2021)
It has been found that residual networks are an Euler discretization of solutions to Ordinary Differential Equations (ODEs). In this paper, we explore a deeper relationship between the Transformer and numerical methods for ODEs. We show that a residual block of layers in the Transformer can be described as a higher-order solution to an ODE. This leads us to design a new architecture (called the ODE Transformer) analogous to the Runge-Kutta method, which is well motivated in ODEs. As a natural extension of the Transformer, the ODE Transformer is easy to implement and parameter efficient. Our experiments on three WMT tasks demonstrate the genericity of this model and large improvements in performance over several strong baselines. It achieves 30.76 and 44.11 BLEU scores on the WMT14 En-De and En-Fr test data, setting a new state of the art on the WMT14 En-Fr task.
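To make the Euler-versus-higher-order contrast concrete, here is a hedged sketch of a residual block that follows a second-order Runge-Kutta (Heun) update instead of the Euler step y + F(y); the class names and toy sublayer are illustrative, not the paper's implementation.

```python
# Heun's-method residual update: y + (F(y) + F(y + F(y))) / 2,
# where F stands in for any Transformer sublayer (attention or FFN).
import torch
import torch.nn as nn

class RK2Block(nn.Module):
    """Second-order Runge-Kutta residual block around a sublayer F."""
    def __init__(self, sublayer: nn.Module):
        super().__init__()
        self.F = sublayer
    def forward(self, y):
        k1 = self.F(y)        # Euler predictor
        k2 = self.F(y + k1)   # corrector evaluated at the predicted point
        return y + 0.5 * (k1 + k2)

# Usage with a toy feed-forward sublayer.
block = RK2Block(nn.Sequential(nn.Linear(8, 8), nn.ReLU(), nn.Linear(8, 8)))
out = block(torch.randn(4, 8))
```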
Many state-of-the-art subspace clustering methods follow a two-step process by first constructing an affinity matrix between data points and then applying spectral clustering to this affinity. Most of the research into these methods focuses on the first step of generating the affinity, which often exploits the self-expressive property of linear subspaces, with little consideration typically given to the spectral clustering step that produces the final clustering. Moreover, existing methods often obtain the final affinity that is used in the spectral clustering step by applying ad-hoc or arbitrarily chosen post-processing steps to the affinity generated by a self-expressive clustering formulation, which can have a significant impact on the overall clustering performance. In this work, we unify these two steps by learning both a self-expressive representation of the data and an affinity matrix that is well-normalized for spectral clustering. In our proposed models, we constrain the affinity matrix to be doubly stochastic, which results in a principled method for affinity matrix normalization while also exploiting known benefits of doubly stochastic normalization in spectral clustering. We develop a general framework and derive two models: one that jointly learns the self-expressive representation along with the doubly stochastic affinity, and one that sequentially solves for one then the other. Furthermore, we leverage sparsity in the problem to develop a fast active-set method for the sequential solver that enables efficient computation on large datasets. Experiments show that our method achieves state-of-the-art subspace clustering performance on many common datasets in computer vision.
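For reference, one standard way to push a nonnegative affinity toward the doubly stochastic matrices (all rows and columns summing to one) is Sinkhorn-Knopp normalization, sketched below; the paper's contribution is to build this constraint into the learning of the affinity itself rather than to apply such a projection after the fact.

```python
# Sinkhorn-Knopp: alternately rescale rows and columns of a nonnegative
# matrix until it is (approximately) doubly stochastic.
import numpy as np

def sinkhorn(W, n_iters=100, eps=1e-9):
    """Return a doubly stochastic approximation of a nonnegative matrix W."""
    A = W.copy().astype(float)
    for _ in range(n_iters):
        A /= A.sum(axis=1, keepdims=True) + eps   # rows sum to 1
        A /= A.sum(axis=0, keepdims=True) + eps   # columns sum to 1
    return A

A = sinkhorn(np.abs(np.random.randn(5, 5)))
print(A.sum(axis=0), A.sum(axis=1))  # both approximately all ones
```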
In this paper, we develop fast procedures for solving linear systems arising from the discretization of ordinary and partial differential equations with a Caputo fractional derivative with respect to the time variable. First, we consider a finite difference scheme to solve a two-sided fractional ordinary equation. Furthermore, we present a fast solution technique to accelerate the Toeplitz matrix-vector multiplications arising from the finite difference discretization. This technique is based on the fast Fourier transform and depends on the special structure of the coefficient matrices; it reduces the computational work from the $O(N^{3})$ required by traditional methods to $O(N\log^{2}N)$ and the memory requirement from $O(N^{2})$ to $O(N)$ without using any lossy compression, where $N$ is the number of unknowns. Two finite difference schemes to solve time fractional hyperbolic equations with different fractional orders $\gamma$ are considered. We present a fast solution technique that depends on the special structure of the coefficient matrices by rearranging the order of unknowns. It reduces the computational work from the $O(N^{2}M)$ required by traditional methods to $O(N\log^{2}N)$ and the memory requirement from $O(NM)$ to $O(N)$ without using any lossy compression, where $N=\tau^{-1}$ with $\tau$ the size of the time step, and $M=h^{-1}$ with $h$ the size of the space step. Importantly, a fast method is employed to solve the classical time fractional diffusion equation with a lower cost of $O(MN\log^{2}N)$, where the direct method requires an overall computational complexity of $O(N^{2}M)$. Moreover, the applicability and accuracy of the scheme are demonstrated by numerical experiments to support our theoretical analysis.
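The kernel behind these complexity reductions is the classical FFT-based Toeplitz matrix-vector product: embed the Toeplitz matrix in a circulant matrix, which the FFT diagonalizes, so the product costs $O(N\log N)$ instead of $O(N^{2})$. Below is a hedged NumPy sketch of that standard trick, not the paper's specific solver.

```python
# Fast Toeplitz matrix-vector product via circulant embedding and the FFT.
import numpy as np

def toeplitz_matvec(first_col, first_row, x):
    """Compute T @ x where T is Toeplitz with the given first column/row."""
    n = len(x)
    # First column of the 2n-sized circulant matrix that embeds T.
    c = np.concatenate([first_col, [0.0], first_row[:0:-1]])
    pad = np.concatenate([x, np.zeros(n)])
    # A circulant matvec is a circular convolution, diagonalized by the FFT.
    y = np.fft.ifft(np.fft.fft(c) * np.fft.fft(pad))
    return y[:n].real

# Check against the dense product on a small example.
from scipy.linalg import toeplitz
col, row = np.random.randn(6), np.random.randn(6)
row[0] = col[0]
T = toeplitz(col, row)
x = np.random.randn(6)
assert np.allclose(toeplitz_matvec(col, row, x), T @ x)
```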
Subspace clustering is an unsupervised clustering technique designed to cluster data that is supported on a union of linear subspaces, with each subspace defining a cluster of dimension lower than the ambient space. Many existing formulations for this problem are based on exploiting the self-expressive property of linear subspaces, where any point within a subspace can be represented as a linear combination of other points within the subspace. To extend this approach to data supported on a union of non-linear manifolds, numerous studies have proposed learning an embedding of the original data using a neural network that is regularized by a self-expressive loss function on the embedded data, to encourage a union-of-linear-subspaces prior in the embedded space. Here we show that there are a number of potential flaws with this approach which have not been adequately addressed in prior work. In particular, we show that the model formulation is often ill-posed in that it can lead to a degenerate embedding of the data, which need not correspond to a union of subspaces at all and is poorly suited for clustering. We validate our theoretical results experimentally and also repeat prior experiments reported in the literature, where we conclude that a significant portion of the previously claimed performance benefits can be attributed to an ad-hoc post-processing step rather than the deep subspace clustering model.
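To fix ideas, here is a hedged sketch of the kind of self-expressive embedding objective the abstract critiques: an autoencoder loss plus a self-expressive term on the embedded data, with the coefficient matrix as a trainable parameter. The names and architecture are illustrative, not any specific published model.

```python
# Self-expressive autoencoder objective (illustrative):
#   reconstruction + ||C Z - Z||^2 + sparsity on C, with diag(C) = 0.
import torch
import torch.nn as nn

class SelfExpressiveAE(nn.Module):
    def __init__(self, n_samples, in_dim, emb_dim):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, emb_dim), nn.ReLU())
        self.decoder = nn.Sequential(nn.Linear(emb_dim, in_dim))
        self.C = nn.Parameter(1e-4 * torch.randn(n_samples, n_samples))

    def loss(self, X, lam1=1.0, lam2=1.0):
        Z = self.encoder(X)                          # embed the data
        recon = (self.decoder(Z) - X).pow(2).mean()
        C = self.C - torch.diag(torch.diag(self.C))  # enforce zero diagonal
        self_expr = (C @ Z - Z).pow(2).mean()        # Z ~ C Z in the embedding
        return recon + lam1 * self_expr + lam2 * C.abs().mean()

model = SelfExpressiveAE(n_samples=100, in_dim=20, emb_dim=10)
model.loss(torch.randn(100, 20)).backward()
# Note: without further constraints the encoder can collapse Z toward a
# degenerate configuration that makes the self-expressive term trivially
# small -- the ill-posedness the abstract describes.
```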
