
Lévy Adaptive B-spline Regression via Overcomplete Systems

Added by Sewon Park
Publication date: 2021
Language: English





The estimation of functions with varying degrees of smoothness is a challenging problem in nonparametric function estimation. In this paper, we propose the LABS (Lévy Adaptive B-Spline regression) model, an extension of the LARK models, for estimating functions with varying degrees of smoothness. The LABS model is a LARK model with B-spline bases as generating kernels. A B-spline basis of degree k consists of piecewise polynomials of degree k with k-1 continuous derivatives, so it can systematically express functions with varying degrees of smoothness. By changing the orders of the B-spline basis, LABS can systematically adapt to the smoothness of functions, capturing features such as jump discontinuities and sharp peaks. Results of simulation studies and real data examples show that this model captures not only smooth regions but also jumps and sharp peaks of functions, and it achieves the best performance in almost all examples. Finally, we provide theoretical results showing that the mean function of the LABS model belongs to certain Besov spaces determined by the orders of the B-spline basis, and that the prior of the model has full support on those Besov spaces.
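To make the role of the basis concrete, here is a minimal sketch (Python with NumPy/SciPy; the helper name bspline_kernel and all numeric choices are illustrative, not from the paper) of a LABS-style expansion that mixes B-spline atoms of different degrees, where degree 0 captures jumps, degree 1 sharp peaks, and degree 3 smooth regions:

```python
import numpy as np
from scipy.interpolate import BSpline

def bspline_kernel(x, knot, degree, width=0.1):
    """Evaluate a single B-spline bump of the given degree starting at `knot`.

    degree 0 gives a step (jumps), degree 1 a hat (sharp peaks),
    degree 3 a smooth C^2 bump (smooth regions).
    """
    # A single B-spline basis element of degree k is defined by k+2 knots.
    t = knot + width * np.arange(degree + 2)
    return BSpline.basis_element(t, extrapolate=False)(x)

x = np.linspace(0, 1, 500)
# LABS-style expansion: f(x) = sum_j beta_j * B_{k_j}(x; knot_j), with the
# knots and coefficients notionally governed by a Levy random measure.
rng = np.random.default_rng(0)
f = np.zeros_like(x)
for _ in range(20):
    knot = rng.uniform(0, 0.9)
    degree = rng.choice([0, 1, 3])   # mixing orders mixes smoothness levels
    beta = rng.normal()
    f += beta * np.nan_to_num(bspline_kernel(x, knot, degree))
```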



Related research

Sewon Park, Jaeyong Lee (2021)
We develop a fully Bayesian nonparametric regression model based on a Lévy process prior, named the MLABS (Multivariate Lévy Adaptive B-Spline regression) model, a multivariate version of the LARK (Lévy Adaptive Regression Kernels) models, for estimating unknown functions with either varying degrees of smoothness or high interaction orders. Lévy process priors have the advantages of encouraging sparsity in the expansions and providing automatic selection of the number of basis functions. The unknown regression function is expressed as a weighted sum of tensor products of B-spline basis functions, the elements of an overcomplete system, which can handle multi-dimensional data. The B-spline basis can systematically express functions with varying degrees of smoothness. By changing the set of degrees of the tensor product basis functions, MLABS can adapt to the smoothness of target functions thanks to the nice properties of B-spline bases. The local support of the B-spline basis enables MLABS to make more delicate predictions than other existing methods on two-dimensional surface data. Experiments on various simulated and real-world datasets illustrate that the MLABS model has comparable performance on regression and classification problems. We also show that the MLABS model has more stable and accurate predictive abilities than state-of-the-art nonparametric regression models on relatively low-dimensional data.
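A minimal sketch of the tensor-product atoms described above (Python/SciPy; the function tensor_bspline and the knot choices are my own illustration, not the paper's implementation):

```python
import numpy as np
from scipy.interpolate import BSpline

def tensor_bspline(x, y, knots_x, knots_y):
    """Tensor product of two univariate B-spline basis elements.

    Each factor has degree len(knots)-2, so varying the knot vectors
    varies the smoothness in each coordinate independently, and the
    atom is supported only on the knot box (local support).
    """
    bx = BSpline.basis_element(knots_x, extrapolate=False)
    by = BSpline.basis_element(knots_y, extrapolate=False)
    return np.nan_to_num(bx(x)) * np.nan_to_num(by(y))

# One locally supported 2-D atom: cubic in x (5 knots), linear in y (3 knots).
xx, yy = np.meshgrid(np.linspace(0, 1, 100), np.linspace(0, 1, 100))
atom = tensor_bspline(xx, yy,
                      np.linspace(0.2, 0.6, 5),
                      np.array([0.3, 0.5, 0.7]))
```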
Motivated by the Lévy foraging hypothesis -- the premise that various animal species have adapted to follow Lévy walks to optimize their search efficiency -- we study the parallel hitting time of Lévy walks on the infinite two-dimensional grid. We consider $k$ independent discrete-time Lévy walks, with the same exponent $\alpha \in (1,\infty)$, that start from the same node, and analyze the number of steps until the first walk visits a given target at distance $\ell$. We show that for any choice of $k$ and $\ell$ from a large range, there is a unique optimal exponent $\alpha_{k,\ell} \in (2,3)$ for which the hitting time is $\tilde{O}(\ell^2/k)$ w.h.p., while modifying the exponent by an $\epsilon$ term increases the hitting time by a polynomial factor, or the walks fail to hit the target almost surely. Based on that, we propose a surprisingly simple and effective parallel search strategy for the setting where $k$ and $\ell$ are unknown: the exponent of each Lévy walk is simply chosen independently and uniformly at random from the interval $(2,3)$. This strategy achieves optimal search time (modulo polylogarithmic factors) among all possible algorithms (even centralized ones that know $k$). Our results should be contrasted with a line of previous work showing that the exponent $\alpha = 2$ is optimal for various search problems. In our setting of $k$ parallel walks, we show that the optimal exponent depends on $k$ and $\ell$, and that randomizing the choice of the exponents works simultaneously for all $k$ and $\ell$.
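A toy simulation of the randomized strategy (Python/NumPy; the function levy_walk_hits, the step-length sampler, and all parameters are my assumptions for illustration, not the paper's analysis):

```python
import numpy as np

rng = np.random.default_rng(1)

def levy_walk_hits(target, alpha, max_steps):
    """Discrete-time Levy walk on Z^2: repeatedly pick one of the four axis
    directions uniformly and a run length L with tail P(L >= l) ~ l^(1-alpha),
    then traverse the run one grid step at a time.
    Returns the number of steps until `target` is first visited, else None."""
    pos = np.array([0, 0])
    dirs = np.array([[1, 0], [-1, 0], [0, 1], [0, -1]])
    steps = 0
    while steps < max_steps:
        d = dirs[rng.integers(4)]
        # Pareto-type run length: density ~ l^(-alpha) on l >= 1.
        run = int(np.ceil(rng.pareto(alpha - 1) + 1))
        for _ in range(run):
            pos += d
            steps += 1
            if tuple(pos) == target:
                return steps
            if steps >= max_steps:
                break
    return None

# Randomized strategy: each of the k parallel walks draws its own exponent
# uniformly from (2, 3), which the paper shows is near-optimal for all k, l.
k = 8
alphas = rng.uniform(2.0, 3.0, size=k)
hits = [levy_walk_hits((5, 7), a, 10_000) for a in alphas]
```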
Recent experiments (G. Ariel et al., Nature Comm. 6, 8396 (2015)) revealed an intriguing behavior of swarming bacteria: they fundamentally change their collective motion from simple diffusion into superdiffusive Lévy walk dynamics. We introduce a nonlinear non-Markovian persistent random walk model that explains the emergence of superdiffusive Lévy walks. We show that the alignment interaction between individuals can lead to superdiffusive growth of the mean squared displacement and a power-law distribution of run lengths with infinite variance. The main result is that the superdiffusive behavior emerges as a nonlinear collective phenomenon, rather than from the standard assumption of a power-law distribution of run distances from the inception. At the same time, we find that repulsion/collision effects lead to a density-dependent exponential tempering of the power-law distributions. This qualitatively explains the experimentally observed transition from superdiffusion to diffusion of mussels as their density increases (M. de Jager et al., Proc. R. Soc. B 281, 20132605 (2014)).
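The tempering effect can be caricatured with a toy simulation (Python/NumPy; the function msd_after_runs and its rejection-style tempering are my own crude stand-in, not the paper's non-Markovian model):

```python
import numpy as np

rng = np.random.default_rng(2)

def msd_after_runs(alpha, temper, n_walkers=5000, n_runs=100):
    """Mean squared displacement of 1-D walkers whose run lengths follow a
    power law (density ~ l^-alpha, infinite variance for alpha < 3), with an
    optional exponential tempering mimicking density-dependent collisions."""
    disp = np.zeros(n_walkers)
    for _ in range(n_runs):
        l = rng.pareto(alpha - 1, n_walkers) + 1       # power-law run lengths
        if temper > 0:
            # crude tempering: a run survives with prob exp(-temper*(l-1)),
            # otherwise it is cut to a single step
            cut = rng.random(n_walkers) > np.exp(-temper * (l - 1))
            l[cut] = 1.0
        disp += rng.choice([-1.0, 1.0], size=n_walkers) * l
    return np.mean(disp ** 2)

# untempered (Levy-like, heavy-tailed) vs. strongly tempered (diffusive)
print(msd_after_runs(2.5, 0.0), msd_after_runs(2.5, 0.5))
```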
Jean Bertoin (2018)
In a step-reinforced random walk, at each integer time and with a fixed probability $p \in (0,1)$, the walker repeats one of his previous steps chosen uniformly at random, and with complementary probability $1-p$ the walker makes an independent new step with a given distribution. Examples in the literature include the so-called elephant random walk and the shark random swim. We consider here a continuous-time analog, where the random walk is replaced by a Lévy process. For sub-critical (or admissible) memory parameters $p < p_c$, where $p_c$ is related to the Blumenthal-Getoor index of the Lévy process, we construct a noise-reinforced Lévy process. Our main result shows that the step-reinforced random walks corresponding to discrete-time skeletons of the Lévy process converge weakly to the noise-reinforced Lévy process as the time mesh goes to 0.
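The discrete dynamics described in the first sentence translate directly into code. A minimal sketch (Python/NumPy; the function name and the Gaussian step distribution are my choices, with Gaussian increments giving an "elephant"-style walk rather than a heavy-tailed Lévy skeleton):

```python
import numpy as np

rng = np.random.default_rng(3)

def step_reinforced_walk(n, p, step_sampler):
    """Step-reinforced random walk: at each integer time, with probability p
    repeat a uniformly chosen previous step; otherwise draw a fresh
    independent step from the given distribution."""
    steps = [step_sampler()]
    for _ in range(n - 1):
        if rng.random() < p:
            steps.append(steps[rng.integers(len(steps))])  # reinforcement
        else:
            steps.append(step_sampler())                   # new i.i.d. step
    return np.cumsum(steps)

# Gaussian increments; an alpha-stable step_sampler would instead discretize
# a heavy-tailed Levy process, as in the paper's skeleton construction.
path = step_reinforced_walk(10_000, p=0.3, step_sampler=lambda: rng.normal())
```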
Wenni Zheng, Pengbo Bo, Yang Liu (2011)
We propose a novel method for fitting planar B-spline curves to unorganized data points. In traditional methods, optimization of control points and foot points is performed in two very time-consuming steps in each iteration: 1) control points are updated by setting up and solving a linear system of equations; and 2) foot points are computed by projecting each data point onto the B-spline curve. Our method uses the L-BFGS optimization method to optimize control points and foot points simultaneously, and therefore does not need to perform either matrix computation or foot point projection in every iteration. As a result, our method is much faster than existing methods.
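A minimal sketch of this joint optimization (Python/SciPy; fit_bspline, the clamped knot vector, and the initialization are my assumptions, and real implementations would supply analytic gradients rather than the numeric ones scipy falls back on here):

```python
import numpy as np
from scipy.interpolate import BSpline
from scipy.optimize import minimize

def fit_bspline(data, n_ctrl=8, degree=3):
    """Fit a planar B-spline to points by optimizing control points and
    foot-point parameters jointly with L-BFGS, instead of alternating
    linear solves with point-to-curve projections."""
    m = len(data)
    # clamped knot vector of length n_ctrl + degree + 1
    t = np.concatenate(([0.0] * degree,
                        np.linspace(0.0, 1.0, n_ctrl - degree + 1),
                        [1.0] * degree))

    def unpack(z):
        ctrl = z[:2 * n_ctrl].reshape(n_ctrl, 2)
        u = np.clip(z[2 * n_ctrl:], 0.0, 1.0)   # one foot parameter per point
        return ctrl, u

    def objective(z):
        ctrl, u = unpack(z)
        curve = BSpline(t, ctrl, degree)(u)     # curve points at foot params
        return np.sum((curve - data) ** 2)

    ctrl0 = np.linspace(data[0], data[-1], n_ctrl)  # crude initial control polygon
    z0 = np.concatenate([ctrl0.ravel(), np.linspace(0.0, 1.0, m)])
    res = minimize(objective, z0, method="L-BFGS-B")
    return unpack(res.x)

# toy usage: points along a half circle
pts = np.column_stack([np.cos(np.linspace(0, np.pi, 40)),
                       np.sin(np.linspace(0, np.pi, 40))])
ctrl, foot = fit_bspline(pts)
```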
