Developing efficient kernel methods for regression has attracted considerable attention over the past decade. In this paper, by applying boosting to kernel-based weak learners, we propose a novel kernel-based learning algorithm called kernel-based re-scaled boosting with truncation, dubbed KReBooT. The proposed KReBooT controls the structure of the estimator, produces sparse estimates, and is nearly resistant to over-fitting. We conduct both theoretical analysis and numerical simulations to illustrate the power of KReBooT. Theoretically, we prove that KReBooT achieves an almost optimal numerical convergence rate for nonlinear approximation. Furthermore, using the recently developed integral-operator approach and a variant of Talagrand's concentration inequality, we derive fast learning rates for KReBooT, setting a new record for boosting-type algorithms. Numerically, we carry out a series of simulations showing the promising performance of KReBooT in terms of its good generalization, near over-fitting resistance, and structure control.
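To make the general idea concrete, the following is a minimal toy sketch of re-scaled boosting with truncation over a dictionary of kernel weak learners. It is not the paper's exact specification: the Gaussian kernel dictionary, the re-scaling schedule `2/(k+2)`, the line-search step size, and the truncation bound `trunc` are all illustrative assumptions.

```python
import numpy as np

def gaussian_kernel(x, centers, width=1.0):
    # Kernel weak-learner dictionary: one Gaussian bump per center.
    return np.exp(-((x[:, None] - centers[None, :]) ** 2) / (2 * width ** 2))

def rescaled_boost_trunc(x, y, n_iter=50, width=0.3, trunc=5.0):
    """Toy re-scaled boosting with truncation on kernel weak learners.

    At step k the current estimator is shrunk by (1 - alpha_k) before a
    newly selected kernel atom is added; the coefficient vector is then
    truncated to [-trunc, trunc], which constrains the estimator's
    structure and keeps it sparse (illustrative choices throughout).
    """
    K = gaussian_kernel(x, x, width)          # weak-learner dictionary
    coef = np.zeros(len(x))                   # one coefficient per atom
    f = np.zeros(len(x))                      # current estimator on the sample
    for k in range(1, n_iter + 1):
        alpha = 2.0 / (k + 2)                 # assumed re-scaling schedule
        residual = y - (1 - alpha) * f
        # Greedy selection: atom most correlated with the residual.
        j = np.argmax(np.abs(K.T @ residual))
        g = K[:, j]
        beta = (g @ residual) / (g @ g)       # line-search step size
        coef = (1 - alpha) * coef
        coef[j] += beta
        coef = np.clip(coef, -trunc, trunc)   # truncation step
        f = K @ coef
    return coef

# Usage: fit a noisy sine curve on the training sample.
rng = np.random.default_rng(0)
x = np.linspace(0, 1, 80)
y = np.sin(2 * np.pi * x) + 0.1 * rng.standard_normal(80)
coef = rescaled_boost_trunc(x, y)
fit = gaussian_kernel(x, x, 0.3) @ coef
```

At most `n_iter` atoms ever receive a nonzero coefficient, which is one simple way the boosting iteration count acts as a structural/sparsity constraint on the resulting kernel estimate.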