
GAIL---Guaranteed Automatic Integration Library in MATLAB: Documentation for Version 2.1

Added by Sou-Cheng Choi
Publication date: 2015
Language: English

Automatic and adaptive approximation, optimization, or integration of functions in a cone with a guarantee of accuracy is a relatively new paradigm. Our purpose is to create an open-source MATLAB package, the Guaranteed Automatic Integration Library (GAIL), following the philosophy of reproducible research and sustainable practices of robust scientific software development. Out of our conviction that true scholarship in the computational sciences is characterized by reliable reproducibility, we employ the best practices in mathematical research and software engineering known to us and available in MATLAB. This document describes the key features of the functions in GAIL, which include one-dimensional function approximation and minimization using linear splines, one-dimensional numerical integration using the trapezoidal rule, and, last but not least, mean estimation and multidimensional integration by Monte Carlo or quasi-Monte Carlo methods.
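
As a minimal usage sketch, assuming GAIL 2.1 is installed and on the MATLAB path: the function names are the guaranteed algorithms documented for this release, while the test functions and tolerances below are our own illustrations, and exact signatures may differ slightly in the shipped code.

    % Minimal GAIL 2.1 usage sketch; test functions and tolerances are illustrative.
    f = @(x) exp(-100*(x - 0.5).^2);       % a peaky test function on [0,1]

    % One-dimensional function approximation by linear splines
    fappx = funappx_g(f, 0, 1, 1e-6);      % absolute error tolerance 1e-6

    % One-dimensional minimization by linear splines
    fmin = funmin_g(@(x) (x - 0.3).^2, 0, 1, 1e-6);

    % One-dimensional integration by the trapezoidal rule
    q = integral_g(f, 0, 1, 1e-8);

    % Mean estimation by Monte Carlo: E[U^2] for U ~ Uniform(0,1)
    mu = meanMC_g(@(n) rand(n, 1).^2, 1e-3);

    % Multidimensional integration over the unit cube [0,1]^3 by Monte Carlo
    I = cubMC_g(@(x) sum(x.^2, 2), [zeros(1, 3); ones(1, 3)], 'uniform', 1e-3);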

Related research

In this paper we introduce DiffSharp, an automatic differentiation (AD) library designed with machine learning in mind. AD is a family of techniques that evaluate derivatives at machine precision with only a small constant factor of overhead, by systematically applying the chain rule of calculus at the elementary operator level. DiffSharp aims to make an extensive array of AD techniques available, in convenient form, to the machine learning community. These include arbitrary nesting of forward/reverse AD operations, AD with linear algebra primitives, and a functional API that emphasizes the use of higher-order functions and composition. The library exposes this functionality through an API that provides gradients, Hessians, Jacobians, directional derivatives, and matrix-free Hessian- and Jacobian-vector products. Bearing the performance requirements of the latest machine learning techniques in mind, the underlying computations are run through a high-performance BLAS/LAPACK backend, using OpenBLAS by default. GPU support is currently being implemented.
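
DiffSharp itself targets the .NET ecosystem, so rather than guess at its API, here is a language-neutral MATLAB sketch of the forward-mode technique the abstract describes: each elementary operation propagates a [value, derivative] pair by its local chain rule.

    % Forward-mode AD sketch with dual numbers stored as [value, derivative].
    dadd = @(a, b) [a(1) + b(1), a(2) + b(2)];           % sum rule
    dmul = @(a, b) [a(1)*b(1), a(1)*b(2) + a(2)*b(1)];   % product rule
    dsin = @(a) [sin(a(1)), cos(a(1))*a(2)];             % (sin u)' = cos(u)*u'
    dexp = @(a) [exp(a(1)), exp(a(1))*a(2)];             % (exp u)' = exp(u)*u'

    x = [2, 1];                            % seed: value 2, derivative dx/dx = 1
    y = dexp(dadd(dsin(x), dmul(x, x)));   % y = exp(sin(x) + x^2) at x = 2
    % y(1) is the value; y(2) is the exact derivative (cos(2) + 4)*exp(sin(2) + 4)
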
James Yang, 2021
Automatic differentiation is a set of techniques for efficiently and accurately computing the derivative of a function represented by a computer program. Existing C++ libraries for automatic differentiation (e.g. Adept, Stan Math Library), however, exhibit large memory consumption and runtime performance issues. This paper introduces FastAD, a new C++ template library for automatic differentiation that overcomes these challenges by using vectorization, simpler memory management through a fully expression-template-based design, and other compile-time optimizations that remove some run-time overhead. Benchmarks show that FastAD performs 2-10 times faster than Adept and 2-19 times faster than Stan across various test cases, including a few real-world examples.
Alberto Gomez, 2021
This paper presents a MATLAB toolbox for basic image processing and visualization tasks, particularly designed for medical image processing. The functionalities available are similar to basic functions found in widely used non-MATLAB libraries such as the Insight Toolkit (ITK). The toolbox is entirely written in native MATLAB code, yet is fast and flexible. The main use cases for the toolbox are illustrated here, including image input/output, pre-processing, filtering, image registration, and visualization. Both the code and sample data are made publicly available and open source.
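
The abstract does not name the toolbox's functions, so the sketch below is not its API; it only illustrates the kind of task the toolbox covers, using base MATLAB alone (the image path is a placeholder).

    % Gaussian smoothing and display with base MATLAB only (no extra toolboxes).
    img = double(imread('example.png'));      % placeholder: any grayscale image
    [u, v] = meshgrid(-3:3, -3:3);            % 7x7 kernel support
    g = exp(-(u.^2 + v.^2) / (2*1.5^2));      % Gaussian kernel, sigma = 1.5
    g = g / sum(g(:));                        % normalize to unit sum
    smoothed = conv2(img, g, 'same');         % filter, keeping the image size
    imagesc(smoothed); colormap gray; axis image;   % visualize the result
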
Transformer, BERT, and their variants have achieved great success in natural language processing. Since Transformer models are huge in size, serving these models is a challenge for real industrial applications. In this paper, we propose LightSeq, a highly efficient inference library for models in the Transformer family. LightSeq includes a series of GPU optimization techniques to streamline the computation of neural layers and to reduce memory footprint. LightSeq can easily import models trained using PyTorch and TensorFlow. Experimental results on machine translation benchmarks show that LightSeq achieves up to 14x speedup compared with TensorFlow and 1.4x compared with FasterTransformer, a concurrent CUDA implementation. The code is available at https://github.com/bytedance/lightseq.
We describe in this paper new design techniques used in the C++ exact linear algebra library LinBox, intended to make the library safer and easier to use while keeping it generic and efficient. First, we review the new simplified structure for containers, based on our founding scope allocation model. We explain the design choices and their impact on coding: unification of our matrix classes, a clearer model for matrices and submatrices, etc. Then we present a variation of the strategy design pattern built on a controller-plugin system: the controller (solution) chooses among plug-ins (algorithms) that always call back the controller for subtasks. We give examples using the solution mul. Finally, we present a benchmark architecture that serves two purposes: providing the user with easier ways to produce graphs, and creating a framework for automatically tuning the library and supporting regression testing.
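
LinBox is C++, but the controller-plugin idea is language-independent; the MATLAB sketch below (our own names, not LinBox's) shows a controller mul that picks a plug-in by a size heuristic, while the blocked plug-in calls the controller back for its subtasks.

    % Controller-plugin sketch (save as mul.m): the controller chooses a
    % plug-in, and plug-ins delegate subtasks back to the controller.
    function C = mul(A, B)
        if size(A, 1) <= 64
            C = mul_classic(A, B);            % plug-in 1: direct multiply
        else
            C = mul_blocked(A, B);            % plug-in 2: blocked multiply
        end
    end

    function C = mul_classic(A, B)
        C = A * B;
    end

    function C = mul_blocked(A, B)
        h = floor(size(A, 1) / 2);
        % each half-product goes back through the controller
        C = [mul(A(1:h, :), B); mul(A(h+1:end, :), B)];
    end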
