
On orthogonal projections for dimension reduction and applications in augmented target loss functions for learning problems

Added by Anna Breger
Publication date: 2019
Language: English





The use of orthogonal projections on high-dimensional input and target data in learning frameworks is studied. First, we investigate the relations between two standard objectives in dimension reduction, preservation of variance and of pairwise relative distances. Investigations of their asymptotic correlation as well as numerical experiments show that a projection usually does not satisfy both objectives at once. In a standard classification problem we determine projections on the input data that balance the two objectives and compare the subsequent results. Next, we extend our application of orthogonal projections to deep learning tasks and introduce a general framework of augmented target loss functions. These loss functions integrate additional information via transformations and projections of the target data. In two supervised learning problems, clinical image segmentation and music information classification, the application of our proposed augmented target loss functions increases accuracy.
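
The numpy sketch below is an illustration of the two themes in the abstract, not the authors' code: it contrasts how a variance-preserving PCA projection and a random orthogonal projection handle the two dimension-reduction objectives on toy data, and ends with a schematic augmented target loss. The data, target dimension, transformation T, and weight are placeholder choices.

```python
# Illustrative sketch (not the authors' code): two dimension-reduction objectives
# and a schematic augmented target loss.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 50)) @ np.diag(np.linspace(3.0, 0.1, 50))   # toy data
X -= X.mean(axis=0)
k = 5                                                                  # target dimension

_, s, Vt = np.linalg.svd(X, full_matrices=False)
P_pca = Vt[:k].T                                       # maximizes preserved variance
P_rand, _ = np.linalg.qr(rng.normal(size=(50, k)))     # random orthogonal projection

def preserved_variance(X, P):
    return np.var(X @ P, axis=0).sum() / np.var(X, axis=0).sum()

def relative_distance_distortion(X, P, n_pairs=2000):
    i, j = rng.integers(0, len(X), (2, n_pairs))
    d_full = np.linalg.norm(X[i] - X[j], axis=1)
    d_proj = np.linalg.norm((X[i] - X[j]) @ P, axis=1)
    c = (d_proj @ d_full) / (d_proj @ d_proj)          # best global rescaling
    return np.mean(np.abs(c * d_proj - d_full) / (d_full + 1e-12))

for name, P in [("PCA", P_pca), ("random orthogonal", P_rand)]:
    print(name, preserved_variance(X, P), relative_distance_distortion(X, P))

# Augmented target loss (schematic): a standard loss plus a penalty on a
# transformation T of the target data; T and the weight are placeholders.
def augmented_target_loss(y_pred, y_true, T, weight=0.1):
    base = np.mean((y_pred - y_true) ** 2)
    aug = np.mean((T(y_pred) - T(y_true)) ** 2)
    return base + weight * aug
```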



Related research

Random projections reduce the dimension of a set of vectors while preserving structural information, such as distances between vectors in the set. This paper proposes a novel use of row-product random matrices in random projection, which we call the Tensor Random Projection (TRP). It requires substantially less memory than existing dimension reduction maps. The TRP map is formed as the Khatri-Rao product of several smaller random projections and is compatible with any base random projection, including sparse maps, which enable dimension reduction with very low query cost and no floating point operations. We also develop a reduced-variance extension. We provide a theoretical analysis of the bias and variance of the TRP, and a non-asymptotic error analysis for a TRP composed of two smaller maps. Experiments on both synthetic and MNIST data show that our method performs as well as conventional methods with substantially less storage.
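
A small numpy sketch of the TRP idea under my own assumptions (Gaussian factors, a 1/sqrt(k) scaling, and a two-factor map): the projection matrix is the column-wise Khatri-Rao product of two smaller random maps, so only the small factors need to be stored and the projection can be evaluated without forming the full matrix.

```python
# Hedged sketch of a two-factor Tensor Random Projection (TRP).
import numpy as np

rng = np.random.default_rng(0)
d1, d2, k = 32, 32, 25          # ambient dim d = d1*d2, target dim k (toy sizes)

A1 = rng.normal(size=(d1, k))   # small base random projections; sparse maps
A2 = rng.normal(size=(d2, k))   # could be used here instead of Gaussians

def khatri_rao(A, B):
    """Column-wise Kronecker product: column j is kron(A[:, j], B[:, j])."""
    return np.einsum('ik,jk->ijk', A, B).reshape(A.shape[0] * B.shape[0], -1)

Omega = khatri_rao(A1, A2)      # (d1*d2, k), formed here only to check the result
x = rng.normal(size=(d1 * d2,))

# Dense reference projection vs. memory-light evaluation via the small factors:
y_dense = Omega.T @ x / np.sqrt(k)
X = x.reshape(d1, d2)
y_factored = np.einsum('ik,ij,jk->k', A1, X, A2) / np.sqrt(k)
print(np.allclose(y_dense, y_factored))     # True: same projection, less storage
```
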
In this paper, we present a dimension reduction method to reduce the dimension of parameter space and state space and efficiently solve inverse problems. To this end, proper orthogonal decomposition (POD) and radial basis functions (RBF) are combined to represent the solution of the forward model in a variable-separated form. This POD-RBF method can be used to evaluate the model's output efficiently. A gradient regularization method is presented to solve the inverse problem with fast convergence. A generalized cross-validation method is suggested to select the regularization parameter and the differential step size for the gradient computation. Because the regularization method requires many model evaluations, the efficient POD-RBF surrogate is desirable. Thus, the POD-RBF method is integrated with the gradient regularization method to provide an efficient approach to solving inverse problems. We focus on the coefficient inversion of diffusion equations using the proposed approach. Based on different types of measurement data and different basis functions for the coefficients, we present a few numerical examples of coefficient inversion. The numerical results show that accurate reconstruction of the coefficient can be achieved efficiently.
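
The following is a hedged sketch of the POD-RBF surrogate idea, with a made-up one-parameter forward model, a Gaussian RBF kernel, and toy dimensions: POD extracts a basis from snapshots of the forward model, and an RBF interpolant maps the parameter to the POD coefficients, giving a cheap surrogate evaluation.

```python
# Illustrative POD-RBF surrogate (placeholder forward model, not the paper's code).
import numpy as np

rng = np.random.default_rng(0)

def forward_model(mu, x):
    """Placeholder forward model: output field on grid x for parameter mu."""
    return np.sin(mu * x) + 0.5 * mu * x**2

x = np.linspace(0.0, 1.0, 200)
params = np.linspace(0.5, 3.0, 15)                            # training parameters
S = np.column_stack([forward_model(m, x) for m in params])    # snapshot matrix

# POD step: truncated SVD of the snapshots gives the reduced basis.
U, s, _ = np.linalg.svd(S, full_matrices=False)
r = 5
Phi = U[:, :r]                                                # POD basis
coeffs = Phi.T @ S                                            # training coefficients (r, n_train)

# RBF step: interpolate each POD coefficient over the parameter space.
def rbf(d, eps=2.0):
    return np.exp(-(eps * d) ** 2)                            # Gaussian kernel

G = rbf(np.abs(params[:, None] - params[None, :]))            # kernel Gram matrix
W = np.linalg.solve(G, coeffs.T)                              # RBF weights (n_train, r)

def surrogate(mu):
    phi = rbf(np.abs(mu - params))
    return Phi @ (W.T @ phi)                                  # cheap model evaluation

mu_test = 1.7
err = np.linalg.norm(surrogate(mu_test) - forward_model(mu_test, x))
print(f"surrogate error at mu={mu_test}: {err:.2e}")
```
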
Philipp Trunschke, 2021
We consider the problem of approximating a function in general nonlinear subsets of $L^2$ when only a weighted Monte Carlo estimate of the $L^2$-norm can be computed. Of particular interest in this setting is the concept of sample complexity, the number of samples that are necessary to recover the best approximation. Bounds for this quantity have been derived in a previous work; they depend primarily on the model class and are not influenced positively by the regularity of the sought function. This result, however, is only a worst-case bound and cannot explain the remarkable performance of iterative hard thresholding algorithms observed in practice. We reexamine the results of the previous paper and derive a new bound that is able to utilize the regularity of the sought function. A critical analysis of our results allows us to derive a sample-efficient algorithm for the model set of low-rank tensors. The viability of this algorithm is demonstrated by recovering quantities of interest for a classical high-dimensional random partial differential equation.
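
As a rough illustration of iterative hard thresholding on a low-rank model class, the sketch below recovers a low-rank matrix from randomly sampled entries. Matrices stand in for low-rank tensors and the sampled entries stand in for weighted Monte Carlo samples; this is my own toy construction, not the algorithm derived in the paper.

```python
# Generic iterative hard thresholding sketch for a low-rank model class.
import numpy as np

rng = np.random.default_rng(0)
n, r = 40, 3
M = rng.normal(size=(n, r)) @ rng.normal(size=(r, n))    # ground-truth low-rank matrix

mask = rng.random((n, n)) < 0.35                          # randomly sampled entries
Y = np.where(mask, M, 0.0)

def hard_threshold(A, rank):
    """Project onto the set of rank-<=rank matrices via truncated SVD."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return (U[:, :rank] * s[:rank]) @ Vt[:rank]

X = np.zeros((n, n))
for _ in range(200):
    grad = np.where(mask, X - M, 0.0)        # gradient of the sampled least-squares loss
    X = hard_threshold(X - grad, r)          # gradient step followed by rank projection

print("relative error:", np.linalg.norm(X - M) / np.linalg.norm(M))
```
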
In this work, we describe a new approach that uses deep neural networks (DNN) to obtain regularization parameters for solving inverse problems. We consider a supervised learning approach, where a network is trained to approximate the mapping from observation data to regularization parameters. Once the network is trained, regularization parameters for newly obtained data can be computed by efficient forward propagation of the DNN. We show that a wide variety of regularization functionals, forward models, and noise models may be considered. The network-obtained regularization parameters can be computed more efficiently and may even lead to more accurate solutions compared to existing regularization parameter selection methods. We emphasize that the key advantage of using DNNs for learning regularization parameters, compared to previous works on learning via optimal experimental design or empirical Bayes risk minimization, is greater generalizability. That is, rather than computing one set of parameters that is optimal with respect to one particular design objective, DNN-computed regularization parameters are tailored to the specific features or properties of the newly observed data. Thus, our approach may better handle cases where the observation is not a close representation of the training set. Furthermore, we avoid the need for expensive and challenging bilevel optimization methods as utilized in other existing training approaches. Numerical results demonstrate the potential of using DNNs to learn regularization parameters.
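
A minimal sketch of the general idea, assuming a Tikhonov-regularized linear inverse problem, a grid search to produce "optimal" training parameters, and a scikit-learn MLP as the network; none of these specific choices come from the paper.

```python
# Hedged sketch: learn a mapping from observation data to a regularization parameter.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
m, n = 30, 20
A = rng.normal(size=(m, n)) / np.sqrt(m)                  # fixed forward operator
lams = np.logspace(-4, 1, 60)                             # candidate parameters

def tikhonov(y, lam):
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ y)

def best_lambda(y, x_true):
    errs = [np.linalg.norm(tikhonov(y, l) - x_true) for l in lams]
    return lams[int(np.argmin(errs))]

# Supervised training set: observation data -> (log of) optimal parameter.
X_train, t_train = [], []
for _ in range(500):
    x = rng.normal(size=n)
    sigma = rng.uniform(0.01, 0.3)                        # varying noise level
    y = A @ x + sigma * rng.normal(size=m)
    X_train.append(y)
    t_train.append(np.log10(best_lambda(y, x)))

net = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=0)
net.fit(np.array(X_train), np.array(t_train))

# Test time: a single forward pass of the network yields the parameter.
x_new = rng.normal(size=n)
y_new = A @ x_new + 0.1 * rng.normal(size=m)
lam_new = 10.0 ** net.predict(y_new[None, :])[0]
print("predicted regularization parameter:", lam_new)
```
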
Model Order Reduction (MOR) methods enable the generation of real-time-capable digital twins, which can open up various novel value streams in industry. While traditional projection-based methods are robust and accurate for linear problems, incorporating machine learning to handle nonlinearity has become a new option for reducing complex problems. Such methods usually consist of two steps: dimension reduction by a projection-based method, followed by model reconstruction by a neural network. In this work, we apply modifications to both steps and investigate their impact on three simulation models. In all cases Proper Orthogonal Decomposition (POD) is used for dimension reduction. For this step, generating the input snapshot database with constant input parameters is compared with using time-dependent input parameters. For the model reconstruction step, two types of neural network architectures are compared: the Multilayer Perceptron (MLP) and the Runge-Kutta Neural Network (RKNN). The MLP learns the system state directly, while the RKNN learns the derivative of the system state and predicts the new state through a Runge-Kutta integration step.
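
A toy comparison of the two reconstruction strategies, with placeholder dynamics (a pendulum standing in for the POD-reduced system) and scikit-learn MLPs: one model learns the next state directly, the other learns the time derivative and advances the state with a classical RK4 step, mirroring the RKNN idea. These choices are mine, not the paper's setup.

```python
# Toy sketch: direct next-state MLP vs. RKNN-style derivative learning + RK4 step.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
dt = 0.05

def dzdt(z):
    """Toy reduced dynamics (stand-in for the POD-reduced system): a pendulum."""
    return np.stack([z[..., 1], -np.sin(z[..., 0])], axis=-1)

def rk4_step(f, z, dt):
    k1 = f(z); k2 = f(z + 0.5 * dt * k1)
    k3 = f(z + 0.5 * dt * k2); k4 = f(z + dt * k3)
    return z + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

# Training data: pairs (state, next state) sampled from the toy dynamics.
Z = rng.uniform(-1.5, 1.5, size=(3000, 2))
Z_next = rk4_step(dzdt, Z, dt)

# Strategy 1 (MLP): learn the next state directly.
mlp = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=3000, random_state=0)
mlp.fit(Z, Z_next)

# Strategy 2 (RKNN-style): learn the derivative, then advance with an RK4 step.
rknn = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=3000, random_state=0)
rknn.fit(Z, dzdt(Z))
predict_rknn = lambda z: rk4_step(lambda q: rknn.predict(q), z, dt)

z_test = rng.uniform(-1.5, 1.5, size=(200, 2))
z_true = rk4_step(dzdt, z_test, dt)
for name, pred in [("MLP", mlp.predict(z_test)), ("RKNN", predict_rknn(z_test))]:
    print(name, np.mean(np.linalg.norm(pred - z_true, axis=1)))
```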
