Thresholding is an important task in image processing. It is a main tool in pattern recognition, image segmentation, edge detection and scene analysis. In this paper, we present a new thresholding technique based on two-dimensional Tsallis entropy. The two-dimensional Tsallis entropy is obtained from the two-dimensional histogram, which is determined from the gray value of each pixel and the local average gray value of its neighbourhood; the work thus applies a generalized entropy formalism that represents a recent development in statistical mechanics. The effectiveness of the proposed method is demonstrated on real-world and synthetic images. A performance evaluation of the proposed technique in terms of the quality of the thresholded images is presented. Experimental results demonstrate that the proposed method achieves better results than the Shannon-entropy method.
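A minimal sketch of the idea, not the paper's implementation, is given below: it builds the 2D histogram of (gray value, local average gray value), computes the Tsallis entropy of the background and object quadrants, and selects the threshold pair maximizing their pseudo-additive sum. The entropic index q, bin count, and the simplification of ignoring the off-diagonal quadrants are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def tsallis_2d_threshold(image, q=0.8, bins=64):
    """Illustrative 2D Tsallis-entropy thresholding sketch for an 8-bit grayscale image."""
    # quantize gray values and 3x3 local-average gray values into `bins` levels
    g = np.clip(np.floor(image / 256.0 * bins).astype(int), 0, bins - 1)
    a = np.clip(np.floor(uniform_filter(image.astype(float), size=3) / 256.0 * bins).astype(int),
                0, bins - 1)
    hist2d, _, _ = np.histogram2d(g.ravel(), a.ravel(), bins=bins,
                                  range=[[0, bins], [0, bins]])
    p = hist2d / hist2d.sum()

    def tsallis(block):
        w = block.sum()
        if w <= 0:
            return 0.0
        pn = block[block > 0] / w                  # normalized region probabilities
        return (1.0 - np.sum(pn ** q)) / (q - 1.0)  # S_q = (1 - sum p^q) / (q - 1)

    best, best_st = -np.inf, (0, 0)
    for s in range(1, bins - 1):
        for t in range(1, bins - 1):
            Sb = tsallis(p[:s, :t])                 # background quadrant
            So = tsallis(p[s:, t:])                 # object quadrant (off-diagonal quadrants ignored)
            total = Sb + So + (1.0 - q) * Sb * So   # pseudo-additivity of Tsallis entropy
            if total > best:
                best, best_st = total, (s, t)
    s, _ = best_st
    return (g >= s).astype(np.uint8) * 255          # simplified: binarize on the gray axis only
```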
Image denoising is gaining significance, especially in Computed Tomography (CT), which is an important and widely used modality in medical imaging. This is mainly because the effectiveness of clinical diagnosis from a CT image depends on the image quality. The denoising technique for CT images based on window-based multi-wavelet transformation and thresholding is effective, but it has a drawback in selecting the closest windows during the window-based multi-wavelet transformation and thresholding process. Generally, the windows of the duplicate noisy image that are closest to each window of the original noisy image are obtained by checking them sequentially. This can miss windows that are in fact very close, so an enhancement of this step of the denoising technique is required. In this paper, we propose a GA-based window selection methodology to incorporate into the denoising technique. With the aid of the GA-based window selection methodology, the windows of the duplicate noisy image that are closest to every window of the original noisy image are extracted effectively. By incorporating the proposed GA-based window selection methodology, denoising of the CT image is performed effectively. Finally, a comparison is made between the denoising technique with and without the proposed GA-based window selection methodology.
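The following is a hedged sketch of what a GA-based window search could look like: candidate window positions in the duplicate noisy image evolve toward the position closest (in mean squared error) to a reference window of the original noisy image, instead of being checked sequentially. The fitness definition, population size, crossover and mutation operators are illustrative choices, not the paper's exact design.

```python
import numpy as np

def ga_closest_window(ref_win, dup_image, win=8, pop=30, gens=40, rng=None):
    """GA sketch: find the window of `dup_image` closest to `ref_win` (win x win)."""
    rng = np.random.default_rng() if rng is None else rng
    H, W = dup_image.shape
    # chromosomes are (row, col) top-left corners of candidate windows
    popu = np.column_stack([rng.integers(0, H - win, pop),
                            rng.integers(0, W - win, pop)])

    def fitness(rc):
        r, c = rc
        cand = dup_image[r:r + win, c:c + win].astype(float)
        return -np.mean((cand - ref_win.astype(float)) ** 2)   # closer window -> higher fitness

    for _ in range(gens):
        scores = np.array([fitness(rc) for rc in popu])
        elite = popu[np.argsort(scores)[::-1][: pop // 2]]      # selection: keep the better half
        partners = elite[rng.permutation(len(elite))]
        children = elite.copy()
        swap = rng.random(len(elite)) < 0.5
        children[swap, 1] = partners[swap, 1]                   # crossover: inherit column gene
        jitter = rng.integers(-3, 4, children.shape)            # mutation: small positional jitter
        children = np.clip(children + jitter, 0, [H - win, W - win])
        popu = np.vstack([elite, children])[:pop]
    best = popu[np.argmax([fitness(rc) for rc in popu])]
    return tuple(best)                                          # (row, col) of the closest window
```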
There has been a debate in 3D medical image segmentation on whether to use 2D or 3D networks, and both pipelines have advantages and disadvantages. 2D methods enjoy low inference time and greater transferability, while 3D methods are superior for hard targets that require contextual information. This paper investigates efficient 3D segmentation from another perspective: using 2D networks to mimic 3D segmentation. To compensate for the lack of contextual information in the 2D setting, we propose to thicken the 2D network inputs by feeding multiple slices as multiple channels into the 2D networks, so that 3D contextual information is incorporated. We also propose early-stage multiplexing and slice-sensitive attention to address the information-loss problem that arises when 2D networks face thickened inputs. With this design, we achieve higher performance while maintaining lower inference latency on several abdominal organs from CT scans, in particular when the organ has a peculiar 3D shape and thus strongly requires contextual information, demonstrating our method's effectiveness and ability to capture 3D information. We also point out that thickened 2D inputs pave a new path toward 3D segmentation, and we look forward to more efforts in this direction. Experiments on segmenting several abdominal targets, in particular blood vessels that require strong 3D context, demonstrate the advantages of our approach.
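A minimal sketch of the thickened-input idea follows, using a toy 2D convolutional head: K neighbouring slices of a CT volume are stacked as K input channels so that ordinary 2D convolutions can see 3D context around the centre slice. The network, slice count K, and tensor shapes are hypothetical; the paper's early-stage multiplexing and slice-sensitive attention are not reproduced here.

```python
import torch
import torch.nn as nn

K = 5  # number of stacked slices ("thickness"); illustrative value

class Thickened2DSeg(nn.Module):
    """Toy 2D segmentation head taking K slices as K input channels."""
    def __init__(self, k=K, n_classes=2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(k, 32, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, n_classes, 1),     # per-pixel logits for the centre slice
        )

    def forward(self, x):                    # x: (batch, K, H, W)
        return self.net(x)

volume = torch.randn(1, 64, 256, 256)        # toy CT volume: (batch, depth, H, W)
z = 30                                        # index of the slice to segment
thick_input = volume[:, z - K // 2 : z + K // 2 + 1]   # K neighbouring slices as channels
logits = Thickened2DSeg()(thick_input)        # (1, n_classes, 256, 256)
```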
Hashing has been recognized as an efficient representation learning method to effectively handle big data due to its low computational complexity and memory cost. Most of the existing hashing methods focus on learning low-dimensional vectorized binary features from high-dimensional raw vectorized features. However, studies on how to obtain preferable binary codes from the original 2D image features for retrieval are very limited. This paper proposes a bilinear supervised discrete hashing (BSDH) method based on 2D image features, which utilizes bilinear projections to binarize the image matrix features so that the intrinsic characteristics of the 2D image space are preserved in the learned binary codes. Meanwhile, the bilinear projection approximation and the vectorized binary code regression are seamlessly integrated to formulate the final robust learning framework. Furthermore, a discrete optimization strategy is developed to alternately update each variable so as to obtain high-quality binary codes. In addition, two 2D image features, a traditional SURF-based FVLAD feature and a CNN-based AlexConv5 feature, are designed to further improve the performance of the proposed BSDH method. Results of extensive experiments conducted on four benchmark datasets show that the proposed BSDH method outperforms almost all competing hashing methods with different input features under different evaluation protocols.
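The core bilinear binarization step can be illustrated with the sketch below, which forms a binary code as the sign of a two-sided projection of the 2D feature matrix, B = sign(U^T X V). Here U and V are random placeholders and all dimensions are made up for illustration; BSDH instead learns the projections jointly with the discrete regression terms.

```python
import numpy as np

rng = np.random.default_rng(0)
d1, d2 = 64, 64            # size of a 2D image feature matrix X (e.g. an FVLAD or AlexConv5 map)
c1, c2 = 8, 4              # code dimensions, giving c1 * c2 = 32 bits per image

X = rng.standard_normal((d1, d2))       # one 2D image feature matrix
U = rng.standard_normal((d1, c1))       # left projection (learned in BSDH, random here)
V = rng.standard_normal((d2, c2))       # right projection (learned in BSDH, random here)

B = np.sign(U.T @ X @ V)                # bilinear projection followed by sign -> binary matrix
code = (B.ravel() > 0).astype(np.uint8) # flatten to a 32-bit binary code for retrieval
```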
Classifiers based on sparse representations have recently been shown to provide excellent results in many visual recognition and classification tasks. However, the high cost of computing sparse representations at test time is a major obstacle that limits the applicability of these methods in large-scale problems, or in scenarios where computational power is restricted. We consider in this paper a simple yet efficient alternative to sparse coding for feature extraction. We study a classification scheme that applies a soft-thresholding nonlinear mapping over a dictionary, followed by a linear classifier. A novel supervised dictionary learning algorithm tailored for this low-complexity classification architecture is proposed. The dictionary learning problem, which jointly learns the dictionary and the linear classifier, is cast as a difference-of-convex (DC) program and solved efficiently with an iterative DC solver. We conduct experiments on several datasets and show that our learning algorithm, which leverages the structure of the classification problem, outperforms generic learning procedures. Our simple classifier based on soft-thresholding also competes with recent sparse coding classifiers when the dictionary is learned appropriately. The adopted classification scheme further requires less computational time at the testing stage than other classifiers. The proposed scheme shows the potential of an adequately trained soft-thresholding mapping for classification and paves the way towards the development of very efficient classification methods for vision problems.
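To make the low test-time cost concrete, the sketch below applies the soft-thresholding feature map h(x) = max(D^T x - alpha, 0) followed by a linear classifier: a single matrix product and a thresholding, with no sparse-coding optimization at test time. The dictionary D, threshold alpha and classifier weights are random placeholders here, whereas the paper learns D and the classifier jointly via the DC program.

```python
import numpy as np

rng = np.random.default_rng(1)
d, n_atoms = 128, 256

D = rng.standard_normal((d, n_atoms))       # dictionary (columns are atoms); learned in the paper
alpha = 0.5                                  # soft-threshold level (illustrative value)
w, b = rng.standard_normal(n_atoms), 0.0     # linear classifier on top of the features

x = rng.standard_normal(d)                   # a test sample
features = np.maximum(D.T @ x - alpha, 0.0)  # soft-thresholding map: one matrix product at test time
score = features @ w + b                     # binary decision from sign(score)
```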
One of the major issues studied in finance that has always intrigued both scholars and practitioners, and for which no unified theory has yet been established, is why prices move over time. While there are several well-known traditional techniques in the literature for measuring stock market volatility, the central point in this debate, which constitutes the actual scope of this paper, is to compare such popular techniques, notably the standard deviation, with an innovative methodology based on Econophysics. In our study, we use the concept of Tsallis entropy to capture the nature of volatility. More precisely, we want to find out whether Tsallis entropy is able to detect volatility in stock market indexes and to compare its values with those obtained from the standard deviation. We also note that one of the advantages of this new methodology is its ability to capture nonlinear dynamics. For this purpose, we focus on the behaviour of stock market indexes and consider the CAC 40, MIB 30, NIKKEI 225, PSI 20, IBEX 35, FTSE 100 and S&P 500 for a comparative analysis between the approaches mentioned above.
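As a rough, self-contained illustration of the two volatility proxies being compared, the sketch below computes the standard deviation and the Tsallis entropy of the empirical return distribution for two synthetic return series. The entropic index q, the bin grid and the simulated returns are all illustrative assumptions and not the paper's data or parameter choices.

```python
import numpy as np

def tsallis_entropy(returns, q=1.5, bins=50):
    """Tsallis entropy S_q = (1 - sum p^q) / (q - 1) of the empirical return histogram."""
    p, _ = np.histogram(returns, bins=bins, range=(-0.1, 0.1))  # fixed support so regimes are comparable
    p = p[p > 0] / p.sum()
    return (1.0 - np.sum(p ** q)) / (q - 1.0)

rng = np.random.default_rng(7)
calm = rng.normal(0.0, 0.01, 1000)        # synthetic low-volatility return series
turbulent = rng.normal(0.0, 0.03, 1000)   # synthetic high-volatility return series

for name, r in [("calm", calm), ("turbulent", turbulent)]:
    print(name, "std =", round(float(np.std(r)), 4),
          "Tsallis S_q =", round(tsallis_entropy(r), 4))
```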