
Bootstrapping Through Discrete Convolutional Methods

Published by: Richard Warr
Publication date: 2021
Research field: Mathematical Statistics
Paper language: English





Bootstrapping was designed to randomly resample data from a fixed sample using Monte Carlo techniques. However, the original sample itself defines a discrete distribution. Convolutional methods are well suited for discrete distributions, and we show the advantages of utilizing these techniques for bootstrapping. The discrete convolutional approach can provide exact numerical solutions for bootstrap quantities, or at least mathematical error bounds. In contrast, Monte Carlo bootstrap methods can only provide confidence intervals which converge slowly. Additionally, for some problems the computation time of the convolutional approach can be dramatically less than that of Monte Carlo resampling. This article provides several examples of bootstrapping using the proposed convolutional technique and compares the results to those of the Monte Carlo bootstrap, and to those of the competing saddlepoint method.
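To make the idea concrete, here is a minimal sketch (my own illustration of the general principle, not code from the paper; the sample values are hypothetical): because a bootstrap resample consists of n i.i.d. draws from the empirical distribution, the exact bootstrap distribution of an additive statistic such as the sample sum is the n-fold self-convolution of the empirical pmf, so it can be computed by repeated discrete convolution rather than by Monte Carlo resampling.

```python
import numpy as np

# Hypothetical sample placed on an integer grid so that probability mass
# can be tracked exactly by array index.
sample = np.array([2, 3, 3, 5, 7])
n = len(sample)

# Empirical pmf: each observation carries mass 1/n.
pmf = np.zeros(sample.max() + 1)
for x in sample:
    pmf[x] += 1.0 / n

# n-fold discrete convolution: exact distribution of the bootstrap sum
# (and hence of the bootstrap mean, sum / n).
dist = np.array([1.0])                   # point mass at 0
for _ in range(n):
    dist = np.convolve(dist, pmf)

support_mean = np.arange(dist.size) / n  # attainable bootstrap means
exact_se = np.sqrt(np.sum(dist * (support_mean - sample.mean()) ** 2))
print("exact bootstrap SE of the mean:", exact_se)   # ≈ 0.8 for this sample
```

For the sample mean this reproduces the closed-form bootstrap standard error with no simulation error; the abstract's point is that the same convolution machinery yields exact answers, or mathematical error bounds, for bootstrap quantities that Monte Carlo resampling can only approximate.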


Read also

We introduce two new bootstraps for exchangeable random graphs. One, the empirical graphon, is based purely on resampling, while the other, the histogram stochastic block model, is a model-based sieve bootstrap. We show that both of them accurately approximate the sampling distributions of motif densities, i.e., of the normalized counts of the number of times fixed subgraphs appear in the network. These densities characterize the distribution of (infinite) exchangeable networks. Our bootstraps therefore give, for the first time, a valid quantification of uncertainty in inferences about fundamental network statistics, and so of parameters identifiable from them.
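As a rough illustration of the purely resampling-based (empirical graphon) variant, here is a hedged sketch of what a bootstrap replicate might look like (my own reading of the abstract, not the authors' implementation); the simplest motif density, the edge density, stands in for general motif counts.

```python
import numpy as np

def bootstrap_edge_density(adj, n_boot=500, seed=None):
    """Resample vertices with replacement and recompute the edge density
    on the induced adjacency submatrix (self-pairs arising from duplicated
    vertices are left as zeros for simplicity)."""
    rng = np.random.default_rng(seed)
    n = adj.shape[0]
    densities = np.empty(n_boot)
    for b in range(n_boot):
        idx = rng.integers(0, n, size=n)      # vertices drawn with replacement
        sub = adj[np.ix_(idx, idx)]           # induced (multi-)subgraph
        densities[b] = sub.sum() / (n * (n - 1))
    return densities
```

The spread of the returned densities approximates the sampling distribution of the motif density, which is the quantity whose uncertainty the paper's bootstraps are shown to quantify validly.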
This paper investigates the (in)consistency of various bootstrap methods for inference on a change-point in time in the Cox model with right-censored survival data. A criterion is established for the consistency of any bootstrap method. It is shown that the usual nonparametric bootstrap is inconsistent for the maximum partial likelihood estimation of the change-point. A new model-based bootstrap approach is proposed and its consistency established. Simulation studies are carried out to assess the performance of various bootstrap schemes.
Weiyu Guo, Jiabin Ma, Liang Wang (2019)
As deep neural networks are increasingly used in applications suited for low-power devices, a fundamental dilemma becomes apparent: the trend is to grow models to absorb ever more data, which makes them memory intensive; however, low-power devices are designed with very limited memory that cannot store large models. Parameter pruning is therefore critical for deploying deep models on low-power devices. Existing efforts mainly focus on designing highly efficient structures or pruning redundant connections for networks. They are usually sensitive to the task or rely on dedicated and expensive hashing storage strategies. In this work, we introduce a novel approach for achieving a lightweight model from the viewpoints of reconstructing the structure of convolutional kernels and efficient storage. Our approach transforms a traditional square convolution kernel into line segments, and automatically learns a proper strategy for equipping these line segments to model diverse features. The experimental results indicate that our approach can massively reduce the number of parameters (pruned 69% on DenseNet-40) and calculations (pruned 59% on DenseNet-40) while maintaining acceptable performance (a loss of less than 2% accuracy).
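As a hedged sketch of the underlying parameter-saving idea only (my own illustration; the paper's actual contribution, automatically learning how to equip the line segments, is not reproduced here): replacing one square k x k kernel with a 1 x k and a k x 1 "line segment" kernel cuts the per-filter weight count from k*k to 2*k.

```python
import torch.nn as nn

def square_block(channels, k=3):
    # conventional square kernel
    return nn.Conv2d(channels, channels, kernel_size=k, padding=k // 2)

def line_segment_block(channels, k=3):
    # a horizontal and a vertical "line segment" in place of the square kernel
    return nn.Sequential(
        nn.Conv2d(channels, channels, kernel_size=(1, k), padding=(0, k // 2)),
        nn.Conv2d(channels, channels, kernel_size=(k, 1), padding=(k // 2, 0)),
    )

count = lambda m: sum(p.numel() for p in m.parameters())
print(count(square_block(64)), count(line_segment_block(64)))  # 36928 vs 24704
```

This fixed decomposition only shows why line-segment kernels are lighter; the pruning figures quoted above come from the paper's learned placement strategy, not from this sketch.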
We develop Bayesian models for density regression with emphasis on discrete outcomes. The problem of density regression is approached by considering methods for multivariate density estimation of mixed scale variables, and obtaining conditional densities from the multivariate ones. The approach to multivariate mixed scale outcome density estimation that we describe represents discrete variables, either responses or covariates, as discretis…
Recent work in word spotting in handwritten documents has yielded impressive results. This progress has largely been made by supervised learning systems, which are dependent on manually annotated data, making deployment to new collections a significant effort. In this paper, we propose an approach that utilises transcripts without bounding box annotations to train segmentation-free query-by-string word spotting models, given a partially trained model. This is done through a training-free alignment procedure based on hidden Markov models. This procedure creates a tentative mapping between word region proposals and the transcriptions to automatically create additional weakly annotated training data, without choosing any single alignment possibility as the correct one. When only using between 1% and 7% of the fully annotated training sets for partial convergence, we automatically annotate the remaining training data and successfully train using it. On all our datasets, our final trained model then comes within a few mAP% of the performance from a model trained with the full training set used as ground truth. We believe that this will be a significant advance towards a more general use of word spotting, since digital transcription data will already exist for parts of many collections of interest.