
A novel learning-based frame pooling method for Event Detection

Published by: Jiang Liu
Publication date: 2016
Research field: Computer engineering and informatics
Paper language: English





Detecting complex events in a large video collection crawled from video websites is a challenging task. When directly applying good image-based feature representations, e.g., HOG or SIFT, to videos, we face the problem of how to pool multiple frame-level feature representations into a single representation. In this paper, we propose a novel learning-based frame pooling method. We formulate the pooling weight learning as an optimization problem, so our method can automatically learn the best pooling weight configuration for each specific event category. Experimental results on TRECVID MED 2011 reveal that our method outperforms the commonly used average pooling and max pooling strategies on both high-level and low-level 2D image features.
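The core idea above can be illustrated with a minimal sketch: pool per-frame feature vectors with a learned weight per frame, normalised so that average pooling (all weights equal) and max pooling (one weight dominant) are special cases. The function name and softmax parameterisation are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def pooled_feature(frame_features, pooling_logits):
    """Pool per-frame features into one clip-level vector.

    frame_features: (n_frames, dim) array of per-frame descriptors.
    pooling_logits: (n_frames,) learned scores; softmax-normalised so
    uniform logits recover average pooling and a single dominant logit
    approaches max-style pooling over frames.
    """
    w = np.exp(pooling_logits - pooling_logits.max())  # stable softmax
    w = w / w.sum()
    return w @ frame_features  # weighted sum over frames
```

With uniform logits this reduces exactly to the frame-wise mean; in the learned setting the logits would be optimised per event category.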




Read also

Early wildfire detection is of paramount importance to avoid as much damage as possible to the environment, properties, and lives. Deep Learning (DL) models that can leverage both visible and infrared information have the potential to display state-of-the-art performance, with lower false-positive rates than existing techniques. However, most DL-based image fusion methods have not been evaluated in the domain of fire imagery. Additionally, to the best of our knowledge, no publicly available dataset contains visible-infrared fused fire images. There is a growing interest in DL-based image fusion techniques due to their reduced complexity. For that reason, we select three state-of-the-art, DL-based image fusion techniques and evaluate them for the specific task of fire image fusion. We compare the performance of these methods on selected metrics. Finally, we also present an extension to one of the said methods, which we call FIRe-GAN, that improves the generation of artificial infrared images and fused ones on selected metrics.
In recent years, the involvement of synthetic strongly labeled data, weakly labeled data, and unlabeled data has drawn much research attention in semi-supervised sound event detection (SSED). Self-training models carry out predictions without strong annotations and then take predictions with high probabilities as pseudo-labels for retraining. Such models have shown their effectiveness in SSED. However, probabilities are poorly calibrated confidence estimates, and samples with low probabilities are ignored. Hence, we introduce a method of learning confidence deliberately and retaining all data distinctly by applying confidence as weights. Additionally, linear pooling has been considered a state-of-the-art aggregation function for SSED with weak labeling. In this paper, we propose a power pooling function whose coefficient can be trained automatically to achieve nonlinearity. A confidence-based semi-supervised sound event detection (C-SSED) framework is designed to combine confidence and power pooling. The experimental results demonstrate that confidence is proportional to the accuracy of the predictions. The power pooling function outperforms linear pooling on both error rate and F1 results. In addition, the C-SSED framework achieves a relative error rate reduction of 34% in contrast to the baseline model.
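The confidence-as-weights idea above can be sketched as a weighted binary cross-entropy in which low-confidence pseudo-labels are down-weighted rather than discarded. This is a minimal illustration, assuming a per-sample confidence in [0, 1]; the paper learns that confidence with a dedicated branch rather than taking it as input.

```python
import numpy as np

def confidence_weighted_bce(pred, pseudo_label, confidence, eps=1e-7):
    """Binary cross-entropy where each sample's loss is scaled by a
    confidence weight, so unreliable pseudo-labels contribute less
    instead of being dropped entirely."""
    pred = np.clip(pred, eps, 1 - eps)
    bce = -(pseudo_label * np.log(pred) + (1 - pseudo_label) * np.log(1 - pred))
    return float((confidence * bce).mean())
```

Setting all confidences to 1 recovers the ordinary cross-entropy, while a zero confidence removes a sample's influence entirely.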
Access to large corpora with strongly labelled sound events is expensive and difficult in engineering applications. Much research turns to the problem of how to detect both the types and the timestamps of sound events with weak labels that only specify the types. This task can be treated as a multiple instance learning (MIL) problem, and the key to it is the design of a pooling function. In this paper, we propose an adaptive power pooling function which can automatically adapt to various sound sources. On two public datasets, the proposed power pooling function outperforms the state-of-the-art linear softmax pooling on both coarse-grained and fine-grained metrics. Notably, it improves the event-based F1 score (which evaluates the detection of event onsets and offsets) by 11.4% and 10.2% relative on the two datasets. While this paper focuses on sound event detection applications, the proposed method can be applied to MIL tasks in other domains.
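One common way to parameterise a power-pooling aggregator (a hedged sketch, not necessarily the exact formulation of the two papers above) is to weight each frame-level probability by that probability raised to an exponent n: n = 0 recovers average pooling, n = 1 recovers linear softmax pooling, and large n approaches max pooling. In the papers the exponent is a trainable parameter; here it is passed in explicitly.

```python
import numpy as np

def power_pool(frame_probs, n):
    """Aggregate frame-level probabilities into one clip-level probability.

    frame_probs: 1-D array of per-frame event probabilities in [0, 1].
    n: pooling exponent; interpolates between average (n=0),
    linear softmax (n=1), and max-like (n large) pooling.
    """
    w = frame_probs ** n
    return float((frame_probs * w).sum() / w.sum())
```

Because the exponent enters the computation differentiably, it can be trained jointly with the rest of the network, which is what lets the pooling adapt per sound source.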
Jiahui Cheng, Bin Guo, Jiaqi Liu (2021)
Surface defect detection plays an increasingly important role in the manufacturing industry to guarantee product quality. Many deep learning methods have been widely used in surface defect detection tasks, and have been proven to perform well in defect classification and location. However, deep learning-based detection methods often require plenty of data for training, which fails in real industrial scenarios since the distribution of defect categories is often imbalanced. In other words, common defect classes have many samples but rare defect classes have extremely few samples, and it is difficult for these methods to detect rare defect classes well. To solve the imbalanced distribution problem, in this paper we propose TL-SDD: a novel Transfer Learning-based method for Surface Defect Detection. First, we adopt a two-phase training scheme to transfer the knowledge from common defect classes to rare defect classes. Second, we propose a novel Metric-based Surface Defect Detection (M-SDD) model. We design three modules for this model: (1) feature extraction module: containing feature fusion which combines high-level semantic information with low-level structural information; (2) feature reweighting module: transforming examples to a reweighting vector that indicates the importance of features; (3) distance metric module: learning a metric space in which defects are classified by computing distances to representations of each category. Finally, we validate the performance of our proposed method on a real dataset including surface defects of aluminum profiles. Compared to the baseline methods, the performance of our proposed method has improved by up to 11.98% for rare defect classes.
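The distance metric module described in (3) can be illustrated with a minimal prototype-based classifier: each class is represented by a prototype embedding (e.g., the mean embedding of its samples), and a defect is assigned to the class with the nearest prototype. The function and variable names are illustrative, not the paper's API.

```python
import numpy as np

def classify_by_prototype(embedding, prototypes):
    """Assign a defect embedding to the nearest class prototype.

    embedding: 1-D feature vector for one defect sample.
    prototypes: dict mapping class name -> representative embedding.
    Returns the class whose prototype is closest in Euclidean distance.
    """
    return min(prototypes, key=lambda c: np.linalg.norm(embedding - prototypes[c]))
```

Because a prototype can be estimated from very few samples, this kind of metric-space classification is a natural fit for the rare defect classes the paper targets.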
Event cameras are activity-driven bio-inspired vision sensors, offering advantages such as sparsity, high temporal resolution, low latency, and low power consumption. Given the different sensing modality of the event camera and the high quality of the conventional vision paradigm, event processing is predominantly solved by transforming the sparse and asynchronous events into a 2D grid and subsequently applying standard vision pipelines. Despite the promising results displayed by supervised learning approaches in 2D grid generation, these approaches treat the task in a supervised manner, and labeled task-specific ground truth event data is challenging to acquire. To overcome this limitation, we propose Event-LSTM, an unsupervised Auto-Encoder architecture made up of LSTM layers, as a promising alternative to learn 2D grid representations from event sequences. Compared to competing supervised approaches, ours is a task-agnostic approach ideally suited for the event domain, where task-specific labeled data is scarce. We also tailor the proposed solution to exploit the asynchronous nature of the event stream, which gives it desirable characteristics such as speed-invariant and energy-efficient 2D grid generation. Besides, we also push state-of-the-art event de-noising forward by introducing memory into the de-noising process. Evaluations on activity recognition and gesture recognition demonstrate that our approach yields improvement over state-of-the-art approaches, while providing the flexibility to learn from unlabelled data.
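The event-to-grid transformation the paragraph refers to can be sketched with the simplest histogram-style baseline: accumulate each event's polarity into the pixel it fired at. This is only the naive baseline that learned approaches such as Event-LSTM improve on, not the model itself; the event tuple layout is an assumption.

```python
import numpy as np

def events_to_grid(events, height, width):
    """Accumulate a sparse event stream into a dense 2D grid.

    events: iterable of (x, y, polarity) tuples, polarity in {-1, +1}.
    Each event adds its polarity to the corresponding pixel, producing
    the kind of 2D representation standard vision pipelines consume.
    """
    grid = np.zeros((height, width), dtype=np.float32)
    for x, y, p in events:
        grid[y, x] += p
    return grid
```

Timestamps are deliberately ignored here; exploiting them (and the asynchronous ordering of events) is precisely where sequence models like LSTMs add value over this fixed accumulation.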