
Fast-AT: Fast Automatic Thumbnail Generation using Deep Neural Networks

Published by Seyed Abdulaziz Esmaeili
Publication date: 2016
Research field: Informatics engineering
Paper language: English





Fast-AT is an automatic thumbnail generation system based on deep neural networks. It is a fully-convolutional deep neural network, which learns specific filters for thumbnails of different sizes and aspect ratios. During inference, the appropriate filter is selected depending on the dimensions of the target thumbnail. Unlike most previous work, Fast-AT does not utilize saliency but addresses the problem directly. In addition, it eliminates the need to conduct region search on the saliency map. The model generalizes to thumbnails of different sizes including those with extreme aspect ratios and can generate thumbnails in real time. A data set of more than 70,000 thumbnail annotations was collected to train Fast-AT. We show competitive results in comparison to existing techniques.
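The abstract's key inference-time idea is that a bank of size-specific learned filters exists and the appropriate one is selected from the target thumbnail's dimensions. A minimal sketch of that selection step, with an entirely hypothetical filter bank (the real system's filters are learned convolutional weights, not strings):

```python
# Hypothetical sketch of Fast-AT-style filter selection at inference time:
# a bank of learned filters, one per (discretized) aspect ratio, with the
# closest one chosen for the requested thumbnail dimensions.
FILTER_BANK = {
    0.5: "filter_tall",    # placeholder for learned conv weights
    1.0: "filter_square",
    2.0: "filter_wide",
}

def select_filter(target_w, target_h, bank=FILTER_BANK):
    """Pick the filter whose trained aspect ratio is closest to the target's."""
    ratio = target_w / target_h
    best = min(bank, key=lambda r: abs(r - ratio))
    return bank[best]
```

This keeps inference a single forward pass: the dimensions pick the filter, and no saliency map or region search is involved.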




Read also

Photo retouching enables photographers to invoke dramatic visual impressions by artistically enhancing their photos through stylistic color and tone adjustments. However, it is also a time-consuming and challenging task that requires advanced skills beyond the abilities of casual photographers. Using an automated algorithm is an appealing alternative to manual work, but such an algorithm faces many hurdles. Many photographic styles rely on subtle adjustments that depend on the image content and even its semantics. Further, these adjustments are often spatially varying. Because of these characteristics, existing automatic algorithms are still limited and cover only a subset of these challenges. Recently, deep machine learning has shown unique abilities to address hard problems that long resisted machine algorithms. This motivated us to explore the use of deep learning in the context of photo editing. In this paper, we explain how to formulate the automatic photo adjustment problem in a way suitable for this approach. We also introduce an image descriptor that accounts for the local semantics of an image. Our experiments demonstrate that our deep learning formulation, applied using these descriptors, successfully captures sophisticated photographic styles. In particular, and unlike previous techniques, it can model local adjustments that depend on the image semantics. We show on several examples that this yields results that are qualitatively and quantitatively better than previous work.
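The formulation described above amounts to per-pixel regression: each pixel gets a descriptor combining color, spatial, and semantic information, and a learned function maps the descriptor to an adjusted color. A toy sketch of that data flow, assuming a made-up 8-dimensional descriptor and a single random linear layer standing in for the paper's network:

```python
import numpy as np

# Illustrative sketch (not the paper's actual model): automatic adjustment
# framed as per-pixel regression from a descriptor that encodes color,
# position, and a one-hot semantic label.
rng = np.random.default_rng(0)

def build_descriptor(rgb, xy, semantic_label, n_labels=3):
    """Concatenate color (3), normalized position (2), and a one-hot label."""
    onehot = np.zeros(n_labels)
    onehot[semantic_label] = 1.0
    return np.concatenate([rgb, xy, onehot])

# A stand-in "network": one linear layer with small random weights.
W = rng.normal(size=(3, 8)) * 0.1

def adjust_pixel(descriptor):
    """Map an 8-d descriptor to an adjusted RGB triple."""
    return W @ descriptor
```

Because the semantic label is part of the input, the learned mapping can apply different tone adjustments to, say, sky versus skin, which is the property the abstract emphasizes.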
Numerical integration is a foundational technique in scientific computing and is at the core of many computer vision applications. Among these applications, neural volume rendering has recently been proposed as a new paradigm for view synthesis, achieving photorealistic image quality. However, a fundamental obstacle to making these methods practical is the extreme computational and memory requirements caused by the required volume integrations along the rendered rays during training and inference. Millions of rays, each requiring hundreds of forward passes through a neural network, are needed to approximate those integrations with Monte Carlo sampling. Here, we propose automatic integration, a new framework for learning efficient, closed-form solutions to integrals using coordinate-based neural networks. For training, we instantiate the computational graph corresponding to the derivative of the network. The graph is fitted to the signal to integrate. After optimization, we reassemble the graph to obtain a network that represents the antiderivative. By the fundamental theorem of calculus, this enables the calculation of any definite integral in two evaluations of the network. Applying this approach to neural rendering, we improve the tradeoff between rendering speed and image quality: render times improve by more than 10x at the cost of slightly reduced image quality.
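The core trick above can be shown with a toy stand-in: parameterize the antiderivative F (a polynomial here, where the abstract uses a coordinate network), fit its *derivative* F' to the signal, then evaluate any definite integral as F(b) - F(a), i.e. two evaluations, per the fundamental theorem of calculus. All names below are illustrative:

```python
import numpy as np

def fit_antiderivative(f, a, b, degree=8, n_samples=200):
    """Fit the derivative of a polynomial F to f, return the learned F."""
    xs = np.linspace(a, b, n_samples)
    # Design matrix for F'(x) = sum_k k * c_k * x^(k-1), k = 1..degree.
    A = np.stack([k * xs ** (k - 1) for k in range(1, degree + 1)], axis=1)
    coeffs, *_ = np.linalg.lstsq(A, f(xs), rcond=None)

    def F(x):  # the learned antiderivative (constant term cancels anyway)
        return sum(c * x ** k for k, c in enumerate(coeffs, start=1))
    return F

F = fit_antiderivative(np.cos, 0.0, 1.0)
integral = F(1.0) - F(0.0)   # should approximate sin(1)
```

Once F is fitted, every definite integral over the trained domain costs two evaluations instead of hundreds of Monte Carlo samples, which is the source of the speedup claimed for rendering.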
Thumbnails are widely used all over the world as a preview for digital images. In this work we propose a deep neural framework to generate thumbnails of any size and aspect ratio, even for unseen values during training, with high accuracy and precision. We use Global Context Aggregation (GCA) and a modified Region Proposal Network (RPN) with adaptive convolutions to generate thumbnails in real time. GCA is used to selectively attend and aggregate the global context information from the entire image while the RPN is used to predict candidate bounding boxes for the thumbnail image. Adaptive convolution eliminates the problem of generating thumbnails of various aspect ratios by using filter weights dynamically generated from the aspect ratio information. The experimental results indicate the superior performance of the proposed model over existing state-of-the-art techniques.
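The adaptive-convolution idea above is that a small "hypernetwork" generates the convolution weights from the aspect ratio itself, so a single model can serve arbitrary, even unseen, ratios. A minimal sketch under that assumption, with a random linear generator standing in for the trained one:

```python
import numpy as np

rng = np.random.default_rng(42)
K = 3                                       # generated filter is K x K
W_gen = rng.normal(size=(K * K, 2)) * 0.1   # hypothetical generator weights

def generate_filter(aspect_ratio):
    """Map the scalar aspect ratio (plus a bias feature) to K*K filter weights."""
    feat = np.array([aspect_ratio, 1.0])
    return (W_gen @ feat).reshape(K, K)

def adaptive_conv(image, aspect_ratio):
    """Valid 2D correlation of a grayscale image with the generated filter."""
    f = generate_filter(aspect_ratio)
    H, W = image.shape
    out = np.empty((H - K + 1, W - K + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + K, j:j + K] * f)
    return out
```

Because the ratio is a continuous input rather than an index into a fixed filter bank, the generator can interpolate to ratios never seen during training.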
Reconstruction of PET images is an ill-posed inverse problem and often requires iterative algorithms to achieve good image quality for reliable clinical use in practice, at huge computational costs. In this paper, we consider PET reconstruction as a dense prediction problem where large-scale contextual information is essential, and propose a novel architecture of multi-scale fully convolutional neural networks (msfCNN) for fast PET image reconstruction. The proposed msfCNN gains large receptive fields with both memory and computational efficiency, by using a downscaling-upscaling structure and dilated convolutions. Instead of pooling and deconvolution, we propose to use the periodic shuffling operation from sub-pixel convolution and its inverse to scale the size of feature maps without losing resolution. Residual connections were added to improve training. We trained the proposed msfCNN model with simulated data, and applied it to clinical PET data acquired on a Siemens mMR scanner. The results from real oncological and neurodegenerative cases show that the proposed msfCNN-based reconstruction outperforms the iterative approaches in terms of computational time while achieving comparable image quality for quantification. The proposed msfCNN model can be applied to other dense prediction tasks, and fast msfCNN-based PET reconstruction could facilitate the potential use of molecular imaging in interventional/surgical procedures, where cancer surgery can particularly benefit.
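The periodic shuffling operation mentioned above rescales feature maps by moving values between the channel and spatial dimensions, so unlike pooling it discards nothing and is exactly invertible. A NumPy sketch of the pair (matching the usual sub-pixel convolution layout):

```python
import numpy as np

def pixel_shuffle(x, r):
    """Periodic shuffling: (C*r^2, H, W) -> (C, H*r, W*r)."""
    c2, h, w = x.shape
    c = c2 // (r * r)
    return (x.reshape(c, r, r, h, w)
             .transpose(0, 3, 1, 4, 2)   # interleave sub-pixel offsets
             .reshape(c, h * r, w * r))

def pixel_unshuffle(x, r):
    """Exact inverse: (C, H*r, W*r) -> (C*r^2, H, W)."""
    c, hr, wr = x.shape
    h, w = hr // r, wr // r
    return (x.reshape(c, h, r, w, r)
             .transpose(0, 2, 4, 1, 3)   # gather offsets back into channels
             .reshape(c * r * r, h, w))
```

Round-tripping through the pair recovers the input exactly, which is why the downscaling-upscaling structure loses no resolution.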
The aim of this study is to develop an automatic system for detection of gait-related health problems using Deep Neural Networks (DNNs). The proposed system takes a video of patients as the input and estimates their 3D body pose using a DNN based method. Our code is publicly available at https://github.com/rmehrizi/multi-view-pose-estimation. The resulting 3D body pose time series are then analyzed in a classifier, which classifies input gait videos into four different groups: Healthy, with Parkinson's disease, Post Stroke patient, and with orthopedic problems. The proposed system removes the requirement of complex and heavy equipment and large laboratory space, and makes the system practical for home use. Moreover, it does not need domain knowledge for feature engineering since it is capable of extracting semantic and high level features from the input data. The experimental results showed classification accuracy of 56% to 96% for different groups. Furthermore, only 1 out of 25 healthy subjects was misclassified (false positive), and only 1 out of 70 patients was classified as a healthy subject (false negative). This study presents a starting point toward a powerful tool for automatic classification of gait disorders and can be used as a basis for future applications of Deep Learning in clinical gait analysis. Since the system uses digital cameras as the only required equipment, it can be employed in the domestic environment of patients and elderly people for consistent gait monitoring and early detection of gait alterations.
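The pipeline's final stage maps a 3D pose time series to one of four groups. A hypothetical sketch of that data flow, with hand-crafted summary statistics and a nearest-centroid rule standing in for the paper's learned DNN classifier:

```python
import numpy as np

# Placeholder pipeline stage: pose time series -> features -> group label.
GROUPS = ["healthy", "parkinsons", "post_stroke", "orthopedic"]

def pose_features(pose_seq):
    """pose_seq: (T, J, 3) array of J joints over T frames -> feature vector."""
    velocity = np.diff(pose_seq, axis=0)          # frame-to-frame joint motion
    return np.concatenate([
        pose_seq.mean(axis=(0, 1)),               # mean joint position (3)
        np.abs(velocity).mean(axis=(0, 1)),       # mean absolute joint speed (3)
    ])

def classify(pose_seq, centroids):
    """Assign the group whose feature centroid is nearest (illustrative only)."""
    f = pose_features(pose_seq)
    dists = [np.linalg.norm(f - centroids[g]) for g in GROUPS]
    return GROUPS[int(np.argmin(dists))]
```

The actual system learns its features end to end from the pose sequences, which is what removes the need for this kind of manual feature engineering.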