The weighted Gaussian curvature is an important measurement for images. However, its conventional computation scheme has low performance, low accuracy, and requires that the input image be second-order differentiable. To tackle these three issues, we propose a novel discrete computation scheme for the weighted Gaussian curvature. Our scheme does not require second-order differentiability. Moreover, it is more accurate, has a smaller support region, and is computationally more efficient than the conventional schemes. Therefore, our scheme holds promise for a wide range of applications where the weighted Gaussian curvature is needed, for example image smoothing, cartoon-texture decomposition, and optical flow estimation.
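For context, the sketch below shows the conventional route the abstract argues against: computing the Gaussian curvature of the image graph surface z = img(x, y) from second-order finite differences. It is not the proposed discrete scheme (which the abstract does not detail), and the exact weighting used by the authors is not specified here; only the standard Monge-patch formula is assumed.

```python
import numpy as np

def conventional_gaussian_curvature(img):
    """Conventional scheme: Gaussian curvature of the surface z = img(x, y),
    computed via second-order finite differences (hence requiring a
    twice-differentiable image, as the abstract points out)."""
    fy, fx = np.gradient(img.astype(float))   # first derivatives (rows = y, cols = x)
    fxy, fxx = np.gradient(fx)                # second derivatives of fx
    fyy, _ = np.gradient(fy)                  # second derivative of fy
    # Gaussian curvature of a Monge patch; (1 + |grad|^2)^2 is the metric factor.
    return (fxx * fyy - fxy ** 2) / (1.0 + fx ** 2 + fy ** 2) ** 2
```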
This paper presents results on the detection and identification of mango fruits from colour images of trees. We evaluate the behaviour and performance of the Faster R-CNN network to determine whether it is robust enough to detect and classify fruits under particularly heterogeneous conditions in terms of plant cultivars, plantation scheme, and visual information acquisition contexts. The network is trained to distinguish the Kent, Keitt, and Boucodiekhal mango cultivars from 3,000 representative labelled fruit annotations. The validation set, composed of about 7,000 annotations, was then tested with a confidence threshold of 0.7 and a Non-Maximum Suppression threshold of 0.25. With an F1-score of 0.90, Faster R-CNN is well suited to simple fruit detection in tiles of 500x500 pixels. We then combine a multi-tiling approach with a Jaccard matrix to merge the different parts of objects detected several times, and thus transfer the detections made at the tile scale back to the native 6,000x4,000 pixel images. Nonetheless, with an F1-score of 0.56, the cultivar-identification Faster R-CNN network shows some limitations in simultaneously detecting the mango fruits and identifying their respective cultivars. Despite the proven errors in fruit detection, the cultivar identification rates of the detected mango fruits are on the order of 80%. The ideal solution could combine a Mask R-CNN for the image pre-segmentation of trees and a double-stream Faster R-CNN for detecting the mango fruits and identifying their respective cultivars, to provide predictions more relevant to users' expectations.
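The abstract's multi-tiling step merges boxes of the same fruit detected in several adjacent tiles using a Jaccard (IoU) matrix. The sketch below illustrates that idea under simplifying assumptions: boxes are already mapped back to full-image coordinates, merging is a single greedy pass rather than a full connected-components merge, and the function names are hypothetical.

```python
import numpy as np

def iou_matrix(boxes):
    """Pairwise Jaccard (IoU) matrix for axis-aligned boxes (x1, y1, x2, y2)
    given in full-image coordinates (tile offsets already added back)."""
    x1 = np.maximum(boxes[:, None, 0], boxes[None, :, 0])
    y1 = np.maximum(boxes[:, None, 1], boxes[None, :, 1])
    x2 = np.minimum(boxes[:, None, 2], boxes[None, :, 2])
    y2 = np.minimum(boxes[:, None, 3], boxes[None, :, 3])
    inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
    area = (boxes[:, 2] - boxes[:, 0]) * (boxes[:, 3] - boxes[:, 1])
    return inter / (area[:, None] + area[None, :] - inter)

def merge_tile_detections(boxes, iou_thresh=0.25):
    """Greedily merge boxes that overlap across tiles (the same fruit cut by a
    tile border) into a single enclosing box."""
    iou = iou_matrix(boxes)
    merged, used = [], np.zeros(len(boxes), dtype=bool)
    for i in range(len(boxes)):
        if used[i]:
            continue
        group = np.where(iou[i] > iou_thresh)[0]   # includes box i itself
        used[group] = True
        merged.append([boxes[group, 0].min(), boxes[group, 1].min(),
                       boxes[group, 2].max(), boxes[group, 3].max()])
    return np.asarray(merged)
```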
The morphometric approach [HRC13, RHK06] writes the solvation free energy as a linear combination of weighted geometric measures of the solute.
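As a reminder, the morphometric ansatz of [RHK06] is commonly written in the form below; the line above is truncated, so the coefficient symbols and wording follow the usual convention in that literature rather than this document.

```latex
% Morphometric form of the solvation free energy (standard statement):
% p: solvent pressure, \sigma: surface tension, \kappa, \bar{\kappa}: bending rigidities.
\begin{equation}
  F_{\mathrm{sol}} = p\,V + \sigma\,A + \kappa\,C + \bar{\kappa}\,X,
\end{equation}
% where V is the solvent-excluded volume, A the surface area, C the integrated
% mean curvature, and X the integrated Gaussian curvature of the solute surface.
```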
Deformable image registration is a fundamental task in medical imaging. Due to the large computational complexity of deformable registration of volumetric images, conventional iterative methods usually face a trade-off between registration accuracy and computation time in practice. In order to boost the registration performance in both accuracy and runtime, we propose a fast convolutional neural network (FDRN). Specifically, to efficiently utilize the memory resources and enlarge the model capacity, we adopt additive forwarding instead of channel concatenation and deepen the network in each encoder and decoder stage. To facilitate the learning efficiency, we leverage skip connections within the encoder and decoder stages to enable residual learning, and employ an auxiliary loss at the bottom layer with the lowest resolution to provide deep supervision. In particular, the low-resolution auxiliary loss is weighted by an exponentially decayed parameter during the training phase. In conjunction with the main loss on the high-resolution grid, a coarse-to-fine learning strategy is achieved. Last but not least, we introduce an auxiliary loss based on the segmentation prior to improve the registration performance in Dice score. Compared to an auxiliary loss using the average Dice score, the proposed multi-label segmentation loss does not induce additional memory cost in the training phase and can be employed on images with an arbitrary number of label categories. In the experiments, we show that FDRN outperforms the existing state-of-the-art registration methods for brain MR images by resorting to the compact network structure and efficient learning. Besides, FDRN is a generalized framework for image registration which is not confined to a particular type of medical images or anatomy.
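A minimal sketch of the coarse-to-fine loss weighting described above: the low-resolution auxiliary loss is down-weighted exponentially over training so supervision gradually shifts to the high-resolution main loss. The schedule form, parameter names, and default values are illustrative assumptions, not the paper's exact settings.

```python
import math

def composite_registration_loss(main_loss, lowres_aux_loss, seg_loss,
                                epoch, w0=1.0, decay=0.1, seg_weight=1.0):
    """Combine the high-resolution main loss, the exponentially decayed
    low-resolution auxiliary loss, and the segmentation-prior loss into a
    single training objective (all weights here are placeholder assumptions)."""
    aux_weight = w0 * math.exp(-decay * epoch)   # exponentially decayed weight
    return main_loss + aux_weight * lowres_aux_loss + seg_weight * seg_loss
```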
This paper explores the connection between steganography and adversarial images. On the one hand, steganalysis helps in detecting adversarial perturbations. On the other hand, steganography helps in forging adversarial perturbations that are not only invisible to the human eye but also statistically undetectable. This work explains how to use these information hiding tools for attacking or defending computer vision image classification. We play this cat-and-mouse game with state-of-the-art classifiers, steganalyzers, and steganographic embedding schemes. It turns out that steganography helps the attacker more than the defender.
Because exoplanets are extremely dim, an Electron Multiplying Charge-Coupled Device (EMCCD) operating in photon counting (PC) mode is necessary to reduce the detector noise level and enable their detection. Typically, PC images are added together into a co-added image before processing. We present here a signal detection and estimation technique that works directly with individual PC images. The method is based on the generalized likelihood ratio test (GLRT) and uses a Bernoulli distribution for the individual PC images. The Bernoulli distribution is derived from a stochastic model of the detector, which accurately represents its noise characteristics. We show that our technique outperforms a previously used GLRT method that relies on co-added images under a Gaussian noise assumption, as well as two detection algorithms based on the signal-to-noise ratio (SNR). Furthermore, our method provides the maximum likelihood estimates of the exoplanet intensity and background intensity while performing detection. It can be applied online, so it is possible to stop observations once a specified threshold is reached, providing confidence in the existence (or absence) of planets. As a result, the observation time is used efficiently. Besides the observation time, the analysis of detection performance introduced in the paper also gives quantitative guidance on the choice of imaging parameters, such as the threshold. Lastly, though this work focuses on the example of detecting a point source, the framework is widely applicable.
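As a rough illustration of a per-frame Bernoulli GLRT (not the authors' implementation), the sketch below assumes a thresholded-Poisson detector model with detection probability p = 1 - exp(-rate), a known background rate, and a fixed candidate source location; the function names, the intensity search bound, and all model details beyond the abstract are assumptions.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def bernoulli_loglike(counts, lam):
    """Log-likelihood of binary photon-counting frames whose per-frame detection
    probability is p = 1 - exp(-lam) (thresholded-Poisson assumption)."""
    p = 1.0 - np.exp(-lam)
    return np.sum(counts * np.log(p) + (1 - counts) * np.log1p(-p))

def glrt_statistic(counts, psf, bkg):
    """GLRT for a point source at a fixed location: best-fit planet intensity
    a >= 0 (H1) versus background only, a = 0 (H0). `counts` holds the 0/1 PC
    frames, `psf` the normalized point spread function, `bkg` the background rate."""
    h0 = bernoulli_loglike(counts, bkg)
    # Maximize the H1 likelihood over the planet intensity a (1-D bounded search;
    # the upper bound is arbitrary and only serves this sketch).
    res = minimize_scalar(lambda a: -bernoulli_loglike(counts, bkg + a * psf),
                          bounds=(0.0, 10.0), method="bounded")
    h1 = -res.fun
    return 2.0 * (h1 - h0), res.x   # GLRT statistic and ML intensity estimate
```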