The histogram of oriented gradients (HOG) is a widely used feature descriptor in computer vision for object detection. This paper describes a modified HOG descriptor that uses a lookup table and the integral image method to speed up detection by a factor of 5 to 10. By exploiting the special hardware features of a given platform (e.g., a digital signal processor), further improvements can be made to the HOG descriptor to achieve real-time object detection and tracking.
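As a rough illustration of the two speed-ups mentioned above (this is a minimal sketch, not the paper's implementation), the orientation of each pixel's gradient can be quantized through a precomputed lookup table, and one integral image per orientation bin allows any cell histogram to be read off in constant time:

```python
import numpy as np

N_BINS = 9  # unsigned gradients, 0..180 degrees

def orientation_lut(levels=256):
    # Hypothetical LUT: maps a quantized angle index to a histogram bin.
    angles = np.linspace(0.0, 180.0, levels, endpoint=False)
    return np.minimum((angles / (180.0 / N_BINS)).astype(np.int32), N_BINS - 1)

def integral_hog(gray):
    gray = gray.astype(np.float32)
    gx = np.zeros_like(gray); gy = np.zeros_like(gray)
    gx[:, 1:-1] = gray[:, 2:] - gray[:, :-2]     # central differences
    gy[1:-1, :] = gray[2:, :] - gray[:-2, :]
    mag = np.hypot(gx, gy)
    ang = np.rad2deg(np.arctan2(gy, gx)) % 180.0
    lut = orientation_lut()
    bins = lut[np.minimum((ang / 180.0 * len(lut)).astype(np.int32), len(lut) - 1)]
    # One integral image per orientation bin.
    H, W = gray.shape
    integral = np.zeros((N_BINS, H + 1, W + 1), dtype=np.float64)
    for b in range(N_BINS):
        vote = np.where(bins == b, mag, 0.0)
        integral[b, 1:, 1:] = vote.cumsum(0).cumsum(1)
    return integral

def cell_histogram(integral, y0, x0, y1, x1):
    # Sum of gradient votes inside [y0, y1) x [x0, x1), O(1) per bin.
    return (integral[:, y1, x1] - integral[:, y0, x1]
            - integral[:, y1, x0] + integral[:, y0, x0])

if __name__ == "__main__":
    img = np.random.randint(0, 256, (128, 64))
    ii = integral_hog(img)
    print(cell_histogram(ii, 0, 0, 8, 8))  # 9-bin histogram of the top-left cell
```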
Detection and description of keypoints from an image is a well-studied problem in computer vision. Methods such as SIFT, SURF, and ORB are computationally efficient. This paper proposes a solution for a particular case study on object recognition of industrial parts based on hierarchical classification. Reducing the number of candidate instances at each level leads to better performance, which is precisely what hierarchical classification aims to achieve. We demonstrate that this method performs better than using a single descriptor such as ORB, SIFT, or FREAK, although it is noticeably slower.
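A hedged sketch of the general idea (not the paper's pipeline): a query part is first matched against one prototype per coarse group, and then only against the instances of the winning group, so the number of candidates shrinks at each level. The group structure and the match-count scoring rule below are illustrative assumptions.

```python
import cv2

orb = cv2.ORB_create(nfeatures=500)
bf = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

def describe(img):
    # ORB keypoint descriptors for a grayscale image.
    _, des = orb.detectAndCompute(img, None)
    return des

def score(des_q, des_t):
    # Number of cross-checked ORB matches as a simple similarity score.
    if des_q is None or des_t is None:
        return 0
    return len(bf.match(des_q, des_t))

def classify(query_img, hierarchy):
    # hierarchy: {group: {"prototype": image, "instances": {label: image}}}
    des_q = describe(query_img)
    best_group = max(hierarchy,
                     key=lambda g: score(des_q, describe(hierarchy[g]["prototype"])))
    instances = hierarchy[best_group]["instances"]
    best_label = max(instances, key=lambda l: score(des_q, describe(instances[l])))
    return best_group, best_label
```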
Content-based image retrieval (CBIR) is an essential part of computer vision research, especially in medical expert systems. A discriminative image descriptor with the fewest possible tuning parameters is desirable in CBIR systems. In this paper, we introduce a new, simple descriptor based on the histogram of local Radon projections. We also propose a very fast convolution-based local Radon estimator to overcome the slow computation of Radon projections. We performed our experiments on pathology images (KimiaPath24) and lung CT patches to test the proposed solution for medical image processing. We achieved superior results compared with other histogram-based descriptors such as LBP and HOG, as well as with some pre-trained CNNs.
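A minimal sketch of the general idea, not the paper's exact estimator: local Radon projections can be approximated by convolving the image with short oriented line kernels, after which the per-pixel projection profile is binarized into a code and histogrammed over a spatial grid. The kernel size, angle set, and binarization rule below are assumptions.

```python
import numpy as np
from scipy.ndimage import convolve

def line_kernel(theta_deg, size=9):
    # Binary kernel whose "on" pixels approximate a line through the centre.
    k = np.zeros((size, size), dtype=np.float32)
    c = size // 2
    t = np.deg2rad(theta_deg)
    for r in range(-c, c + 1):
        k[int(round(c + r * np.sin(t))), int(round(c + r * np.cos(t)))] = 1.0
    return k / k.sum()

def local_radon_histogram(gray, angles=(0, 45, 90, 135), grid=4):
    gray = gray.astype(np.float32)
    # One projection map per angle: each pixel holds the mean along a short line.
    proj = np.stack([convolve(gray, line_kernel(a)) for a in angles])
    # Binarize: is the projection at this angle above the per-pixel mean?
    bits = (proj > proj.mean(axis=0, keepdims=True)).astype(np.int32)
    # Pack per-angle bits into one code per pixel (0 .. 2^len(angles) - 1).
    codes = (bits * (2 ** np.arange(len(angles)))[:, None, None]).sum(axis=0)
    # Histogram the codes over a coarse spatial grid and concatenate.
    H, W, n_codes = *gray.shape, 2 ** len(angles)
    hist = []
    for i in range(grid):
        for j in range(grid):
            block = codes[i * H // grid:(i + 1) * H // grid,
                          j * W // grid:(j + 1) * W // grid]
            h, _ = np.histogram(block, bins=n_codes, range=(0, n_codes))
            hist.append(h / max(block.size, 1))
    return np.concatenate(hist)
```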
The incompatibility between the image descriptor and the ranking scheme is often neglected in image retrieval. In this paper, manifold learning and Gestalt psychology theory are employed to address this incompatibility. We propose a new holistic descriptor, the Perceptual Uniform Descriptor (PUD), based on Gestalt psychology, which combines color and gradient direction to imitate human visual uniformity. Because PUD improves the visual uniformity of traditional descriptors, PUD features of images from the same class usually lie on a single manifold. We therefore combine PUD with manifold ranking to perform image retrieval. Experiments were carried out on five benchmark datasets, and the proposed method greatly improves retrieval accuracy. On the UKBench and Corel-1K datasets, the N-S score reached 3.58 (versus 3.40 for HSV) and the mAP reached 81.77% (versus 77.9% for ODBTC), respectively, using PUD with only 280 dimensions. These results surpass those of other holistic image descriptors (and even some local ones) as well as state-of-the-art retrieval methods.
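For reference, a minimal sketch of the manifold ranking step in its standard closed form (Zhou et al. style), applied to any holistic descriptor such as PUD; the descriptor itself is not reproduced, and the Gaussian affinity and alpha below are conventional choices rather than the paper's settings.

```python
import numpy as np

def manifold_rank(features, query_index, alpha=0.99, sigma=0.5):
    X = np.asarray(features, dtype=np.float64)
    # Pairwise squared Euclidean distances and a Gaussian affinity matrix.
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    W = np.exp(-d2 / (2 * sigma ** 2))
    np.fill_diagonal(W, 0.0)
    # Symmetric normalization S = D^{-1/2} W D^{-1/2}.
    d_inv_sqrt = 1.0 / np.sqrt(np.maximum(W.sum(1), 1e-12))
    S = W * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]
    # Closed-form ranking scores f = (I - alpha * S)^{-1} y.
    y = np.zeros(len(X)); y[query_index] = 1.0
    f = np.linalg.solve(np.eye(len(X)) - alpha * S, y)
    return np.argsort(-f)  # database indices, most similar first

# Example: rank five random 280-dimensional descriptors against item 0.
ranking = manifold_rank(np.random.rand(5, 280), query_index=0)
```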
Under current conditions, the cosmic ray spectrum incident on the Earth is dominated by particles with energies below 1 GeV. Astrophysical sources, including high energy solar flares, supernovae, and gamma ray bursts, produce high energy cosmic rays (HECRs) with drastically higher energies. The Earth is likely exposed episodically to a greatly increased HECR flux from such events, some of which last thousands to millions of years. The air showers produced by HECRs ionize the atmosphere and produce harmful secondary particles such as muons and neutrons. Neutrons already contribute a significant radiation dose at commercial passenger airplane altitudes, and at higher cosmic ray energies these effects extend to ground level. This work presents the results of Monte Carlo simulations quantifying the neutron flux due to high energy cosmic rays at various primary energies and altitudes. We provide lookup tables that can be used to determine neutron fluxes from primaries with total energies of 1 GeV to 1 PeV; by convolving them with an arbitrary cosmic ray spectrum, one can compute the corresponding neutron flux. Our results demonstrate that deducing the nature of primaries from ground-level neutron enhancements would be very difficult.
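A hedged sketch of the convolution step described above: fold a tabulated neutron yield per primary (the lookup-table quantity) with an arbitrary primary spectrum to obtain the total neutron flux at one altitude. The yield values and the E^-2.7 power-law spectrum below are placeholders, not the simulation output.

```python
import numpy as np

# Hypothetical lookup table: primary total energy [GeV] -> neutrons per primary.
E_table = np.logspace(0, 6, 25)            # 1 GeV .. 1 PeV
yield_table = 1e-3 * E_table ** 0.9        # placeholder, not simulation output

def total_neutron_flux(spectrum, E_min=1.0, E_max=1e6, n=2000):
    """Integrate spectrum(E) * yield(E) dE over the tabulated energy range.

    spectrum(E): differential primary flux, e.g. primaries / (GeV m^2 s sr).
    Returns the neutron flux in the corresponding per-area, per-time units.
    """
    E = np.logspace(np.log10(E_min), np.log10(E_max), n)
    # Interpolate the lookup table in log-log space between tabulated energies.
    y = np.exp(np.interp(np.log(E), np.log(E_table), np.log(yield_table)))
    return np.trapz(spectrum(E) * y, E)

# Example: a simple E^-2.7 power-law primary spectrum (placeholder normalization).
flux = total_neutron_flux(lambda E: 1.8e4 * E ** -2.7)
```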
Recently, deep learning-based image enhancement algorithms have achieved state-of-the-art (SOTA) performance on several publicly available datasets. However, most existing methods fail to meet practical requirements for either visual perception or computational efficiency, especially for high-resolution images. In this paper, we propose a novel real-time image enhancer via learnable spatial-aware 3-dimensional lookup tables (3D LUTs), which accounts for both the global scenario and local spatial information. Specifically, we introduce a lightweight two-head weight predictor with two outputs: a 1D weight vector used for image-level scenario adaptation and a 3D weight map used for pixel-wise category fusion. We learn the spatial-aware 3D LUTs and fuse them according to these weights in an end-to-end manner. The fused LUT is then used to transform the source image into the target tone efficiently. Extensive experiments show that our model outperforms SOTA image enhancement methods on public datasets both subjectively and objectively, and that it takes only about 4 ms to process a 4K-resolution image on one NVIDIA V100 GPU.
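A minimal NumPy sketch of the final step only (the predictor network and the paper's GPU implementation are omitted): basis 3D LUTs are fused per pixel using an image-level weight vector and a pixel-wise weight map, and the fused LUT is applied to the image. Shapes follow the abstract; the nearest-neighbour lookup stands in for trilinear interpolation.

```python
import numpy as np

def fuse_weights(image_weights, pixel_weights):
    # image_weights: (N,) image-level scenario weights
    # pixel_weights: (N, H, W) pixel-wise fusion weights
    w = image_weights[:, None, None] * pixel_weights
    return w / np.maximum(w.sum(0, keepdims=True), 1e-8)   # normalized (N, H, W)

def apply_luts(img, basis_luts, weights):
    # img: (H, W, 3) in [0, 1]; basis_luts: (N, D, D, D, 3); weights: (N, H, W)
    D = basis_luts.shape[1]
    idx = np.clip((img * (D - 1)).round().astype(int), 0, D - 1)   # nearest bin
    # Look up each basis LUT (nearest neighbour here; trilinear in the paper).
    per_lut = np.stack([lut[idx[..., 0], idx[..., 1], idx[..., 2]]
                        for lut in basis_luts])                    # (N, H, W, 3)
    return (weights[..., None] * per_lut).sum(0)                   # (H, W, 3)

# Example with random data: three 33-point basis LUTs, a 64x64 RGB image.
N, D, H, W = 3, 33, 64, 64
luts = np.random.rand(N, D, D, D, 3)
img = np.random.rand(H, W, 3)
w = fuse_weights(np.random.rand(N), np.random.rand(N, H, W))
out = apply_luts(img, luts, w)   # enhanced image, shape (64, 64, 3)
```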