
A data-driven P-spline smoother and the P-Spline-GARCH models

Publication date: 2020. Language: English.





Penalized spline smoothing of time series and its asymptotic properties are studied. A data-driven algorithm for selecting the smoothing parameter is developed. The proposal is applied to define a semiparametric extension of the well-known Spline-GARCH, called the P-Spline-GARCH, based on the log-transformation of the squared returns. It is shown that the resulting error process is exponentially strongly mixing with finite moments of all orders. Asymptotic normality of the P-spline smoother in this context is proved. The practical relevance of the proposal is illustrated by data examples and a simulation study, and the method is further applied to the estimation of value at risk and expected shortfall.
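As a rough illustration of the idea (not the paper's own algorithm), the sketch below fits a P-spline — a B-spline basis with a second-order difference penalty — to log-squared "returns", and picks the smoothing parameter in a data-driven way via generalized cross-validation (GCV), used here as a stand-in for the selection rule developed in the paper. All function names, the simulated data, and tuning choices are illustrative assumptions:

```python
import numpy as np

def bspline_basis(x, n_inner=20, degree=3):
    """Cubic B-spline design matrix on [min(x), max(x)] via Cox-de Boor recursion."""
    x = np.asarray(x, float)
    lo, hi = x.min(), x.max()
    inner = np.linspace(lo, hi, n_inner)
    t = np.r_[[lo] * degree, inner, [hi] * degree]
    # degree-0 indicator functions on the knot intervals
    B = ((t[:-1] <= x[:, None]) & (x[:, None] < t[1:])).astype(float)
    B[x == hi, np.searchsorted(t, hi) - 1] = 1.0   # right endpoint belongs to last interval
    for d in range(1, degree + 1):
        nb = B.shape[1] - 1
        Bn = np.zeros((len(x), nb))
        for j in range(nb):
            left = (x - t[j]) / (t[j + d] - t[j]) * B[:, j] if t[j + d] > t[j] else 0.0
            right = (t[j + d + 1] - x) / (t[j + d + 1] - t[j + 1]) * B[:, j + 1] if t[j + d + 1] > t[j + 1] else 0.0
            Bn[:, j] = left + right
        B = Bn
    return B

def pspline_fit(x, y, lam, n_inner=20, diff_order=2):
    """Penalized least squares with a difference penalty; returns the fit and its GCV score."""
    B = bspline_basis(x, n_inner)
    D = np.diff(np.eye(B.shape[1]), n=diff_order, axis=0)
    A = B.T @ B + lam * (D.T @ D)
    coef = np.linalg.solve(A, B.T @ y)
    yhat = B @ coef
    edf = np.trace(np.linalg.solve(A, B.T @ B))    # effective degrees of freedom
    gcv = len(y) * np.sum((y - yhat) ** 2) / (len(y) - edf) ** 2
    return yhat, gcv

rng = np.random.default_rng(0)
n = 400
u = np.linspace(0, 1, n)
scale = np.exp(0.5 * np.sin(2 * np.pi * u))        # slowly varying volatility level
r = scale * rng.standard_normal(n)                 # simulated "returns"
y = np.log(r ** 2 + 1e-12)                         # log-squared-return transform
lams = 10.0 ** np.arange(-2, 6)
fits = {lam: pspline_fit(u, y, lam) for lam in lams}
best = min(lams, key=lambda lam: fits[lam][1])     # data-driven choice via GCV
trend = fits[best][0]
```

The fitted trend estimates the smooth log-variance component; in the Spline-GARCH construction the exponentiated trend rescales the returns before a parametric GARCH fit.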



Related research

Partial Differential Equations (PDEs) are notoriously difficult to solve. In general, closed-form solutions are not available and numerical approximation schemes are computationally expensive. In this paper, we propose to approach the solution of PDEs with a novel technique that combines the advantages of two recently emerging machine-learning-based approaches. First, physics-informed neural networks (PINNs) learn continuous solutions of PDEs and can be trained with little to no ground truth data. However, PINNs do not generalize well to unseen domains. Second, convolutional neural networks provide fast inference and generalize, but either require large amounts of training data or a physics-constrained loss based on finite differences that can lead to inaccuracies and discretization artifacts. We leverage the advantages of both of these approaches by using Hermite spline kernels to continuously interpolate a grid-based state representation that can be handled by a CNN. This allows for training without any precomputed training data, using a physics-informed loss function only, and provides fast, continuous solutions that generalize to unseen domains. We demonstrate the potential of our method on the examples of the incompressible Navier-Stokes equation and the damped wave equation. Our models are able to learn several intriguing phenomena such as Kármán vortex streets, the Magnus effect, the Doppler effect, interference patterns and wave reflections. Our quantitative assessment and an interactive real-time demo show that we are narrowing the accuracy gap between unsupervised ML-based methods and industrial CFD solvers while being orders of magnitude faster.
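The continuous grid interpolation behind this approach can be illustrated in one dimension: a cubic Hermite spline kernel blends grid values and derivatives into a smooth field that can be queried at arbitrary positions. This is a hedged 1-D sketch (the paper works on multi-dimensional grids inside a CNN); all names are illustrative:

```python
import numpy as np

def hermite_kernel_interp(values, derivs, x):
    """Evaluate a cubic Hermite interpolant of a unit-spaced 1-D grid at positions x.

    values[i] and derivs[i] hold the field value and its derivative at grid node i.
    """
    x = np.asarray(x, float)
    i = np.clip(np.floor(x).astype(int), 0, len(values) - 2)   # enclosing cell index
    t = x - i                                                  # local coordinate in [0, 1]
    h00 = 2 * t**3 - 3 * t**2 + 1     # Hermite basis: weight of left-node value
    h10 = t**3 - 2 * t**2 + t         # weight of left-node derivative
    h01 = -2 * t**3 + 3 * t**2        # weight of right-node value
    h11 = t**3 - t**2                 # weight of right-node derivative
    return h00 * values[i] + h10 * derivs[i] + h01 * values[i + 1] + h11 * derivs[i + 1]

# Example: reconstruct a sine wave from its nodal values and derivatives.
nodes = np.arange(8.0)
vals = np.sin(nodes)
ders = np.cos(nodes)
x = np.linspace(0, 7, 50)
approx = hermite_kernel_interp(vals, ders, x)
```

Because the interpolant is differentiable everywhere, PDE residuals can be evaluated at arbitrary off-grid points, which is what enables a purely physics-informed loss on a grid-based representation.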
A standard construction in approximation theory is mesh refinement. For a simplicial or polyhedral mesh D in R^k, we study the subdivision D' obtained by subdividing a maximal cell of D. We give sufficient conditions for the module of splines on D' to split as the direct sum of splines on D and splines on the subdivided cell. As a consequence, we obtain dimension formulas and explicit bases for several commonly used subdivisions and their multivariate generalizations.
Partially linear additive models generalize linear models: some covariates are assumed to have a linear relation with the response, while each of the others enters through an unknown univariate smooth function. The harmful effect of outliers, either in the residuals or in the covariates involved in the linear component, has been described for partially linear models, that is, when only one nonparametric component is involved. When dealing with additive components, providing reliable estimators in the presence of atypical data is of practical importance, motivating the need for robust procedures. Hence, we propose a family of robust estimators for partially linear additive models by combining $B$-splines with robust linear regression estimators. We obtain consistency results, rates of convergence and asymptotic normality for the linear components under mild assumptions. A Monte Carlo study compares the performance of the robust proposal with its classical counterpart under different models and contamination schemes; the numerical experiments show the advantage of the proposed methodology for finite samples. We also illustrate the usefulness of the proposed approach on a real data set.
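A minimal sketch of the general idea — not the authors' estimator — combines a cubic spline basis (truncated-power form here, for brevity, in place of $B$-splines) with Huber-weighted iteratively reweighted least squares for the whole design, so gross outliers in the response are downweighted. All names, knots, and tuning constants are illustrative assumptions:

```python
import numpy as np

def spline_basis(z, knots):
    """Cubic truncated-power spline basis (stand-in for a B-spline basis)."""
    cols = [z, z**2, z**3] + [np.maximum(0.0, z - k) ** 3 for k in knots]
    return np.column_stack(cols)

def huber_plam(X, z, y, knots, c=1.345, n_iter=30):
    """IRLS Huber-type fit of the partially linear additive model y ~ X*beta + f(z)."""
    D = np.column_stack([np.ones_like(y), X, spline_basis(z, knots)])
    theta = np.linalg.lstsq(D, y, rcond=None)[0]
    for _ in range(n_iter):
        r = y - D @ theta
        s = np.median(np.abs(r - np.median(r))) / 0.6745      # robust MAD scale
        a = np.abs(r) / max(s, 1e-12)
        w = np.minimum(1.0, c / np.maximum(a, 1e-12))         # Huber weights
        sw = np.sqrt(w)
        theta = np.linalg.lstsq(sw[:, None] * D, sw * y, rcond=None)[0]
    return theta                                              # theta[1] is the linear slope

rng = np.random.default_rng(1)
n = 500
x = rng.standard_normal(n)
z = rng.uniform(0, 1, n)
y = 2.0 * x + np.sin(2 * np.pi * z) + 0.3 * rng.standard_normal(n)
y[:25] += 10.0                                                # 5% gross outliers
theta = huber_plam(x[:, None], z, y, knots=np.linspace(0.1, 0.9, 8))
beta_hat = theta[1]
```

An ordinary least-squares fit on the same contaminated data would bias the linear coefficient toward the outliers; the Huber weights keep the slope estimate near its true value.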
In this paper, we investigate the problem of designing compact support interpolation kernels for a given class of signals. Using the calculus of variations, we reduce the optimization problem from an infinite-dimensional nonlinear one to a finite-dimensional linear one, and then find the optimal compact support function that best approximates a given filter in the least-squares sense (l2 norm). The benefit of compact support interpolants is the low computational complexity of the interpolation process, while the optimal compact support interpolant guarantees the highest achievable signal-to-noise ratio (SNR). Our simulation results confirm the superior performance of the proposed splines compared to conventional compact support interpolants such as the cubic spline.
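The finite-dimensional linear reduction can be illustrated with a toy discretized version: expand the candidate kernel in a basis of compactly supported functions and solve for the l2-optimal coefficients by ordinary least squares. The target filter, basis, and grid below are illustrative assumptions, not the paper's setup:

```python
import numpy as np

# Target: the ideal (sinc) interpolation filter, truncated to the support [-2, 2].
x = np.linspace(-2, 2, 401)
h = np.sinc(x)

# Finite-dimensional linear model: shifted triangular (hat) functions on [-2, 2].
centers = np.linspace(-2, 2, 17)
width = centers[1] - centers[0]
Phi = np.maximum(0.0, 1.0 - np.abs(x[:, None] - centers[None, :]) / width)

# l2-optimal coefficients by ordinary least squares.
c, *_ = np.linalg.lstsq(Phi, h, rcond=None)
kernel = Phi @ c                                  # the optimized compact-support kernel
rmse = np.sqrt(np.mean((kernel - h) ** 2))
```

By construction the resulting kernel vanishes outside [-2, 2], so interpolation with it touches only a few neighboring samples per output point, which is the computational advantage the abstract refers to.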
K. Kopotun, D. Leviatan (2014)
Several results on constrained spline smoothing are obtained. In particular, we establish a general result showing how one can constructively smooth any monotone or convex piecewise polynomial function (ppf) (or any $q$-monotone ppf, $q \geq 3$, with one additional degree of smoothness) to be of minimal defect while keeping it close to the original function in the $\mathbb{L}_p$-(quasi)norm. It is well known that approximating a function by ppfs of minimal defect (splines) avoids the introduction of artifacts which may be unrelated to the original function, so it is always preferable. On the other hand, it is usually easier to construct constrained ppfs with as few smoothness requirements as possible. Our results allow one to obtain shape-preserving splines of minimal defect with equidistant or Chebyshev knots. The validity of the corresponding Jackson-type estimates for shape-preserving spline approximation is summarized; in particular, we show that the $\mathbb{L}_p$-estimates, $p \ge 1$, can be immediately derived from the $\mathbb{L}_\infty$-estimates.
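A classical example of the shape-preservation theme is monotone piecewise-cubic interpolation in the style of Fritsch and Carlson, which limits nodal slopes so the interpolant never oscillates against the data. This sketch illustrates the general idea only and is not the constructions of the paper:

```python
import numpy as np

def monotone_cubic(xs, ys, x):
    """Fritsch-Carlson monotone piecewise-cubic interpolation (a minimal sketch)."""
    xs, ys, x = np.asarray(xs, float), np.asarray(ys, float), np.asarray(x, float)
    h = np.diff(xs)
    delta = np.diff(ys) / h                       # secant slopes
    m = np.empty_like(ys)
    m[1:-1] = (delta[:-1] + delta[1:]) / 2.0      # initial nodal slopes
    m[0], m[-1] = delta[0], delta[-1]
    for i in range(len(delta)):                   # limit slopes to preserve monotonicity
        if delta[i] == 0.0:
            m[i] = m[i + 1] = 0.0
        else:
            a, b = m[i] / delta[i], m[i + 1] / delta[i]
            s = a * a + b * b
            if s > 9.0:
                tau = 3.0 / np.sqrt(s)
                m[i], m[i + 1] = tau * a * delta[i], tau * b * delta[i]
    i = np.clip(np.searchsorted(xs, x) - 1, 0, len(h) - 1)
    t = (x - xs[i]) / h[i]
    h00 = 2 * t**3 - 3 * t**2 + 1                 # cubic Hermite basis
    h10 = t**3 - 2 * t**2 + t
    h01 = -2 * t**3 + 3 * t**2
    h11 = t**3 - t**2
    return h00 * ys[i] + h10 * h[i] * m[i] + h01 * ys[i + 1] + h11 * h[i] * m[i + 1]

# Monotone data stays monotone under interpolation -- no spurious overshoot.
xs = np.arange(5.0)
ys = np.array([0.0, 1.0, 1.0, 2.0, 5.0])
grid = np.linspace(0.0, 4.0, 201)
vals = monotone_cubic(xs, ys, grid)
```

An unconstrained cubic spline through the same data would overshoot around the flat segment; the slope limiting above trades a little smoothness (minimal defect is not enforced here) for guaranteed shape preservation.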
