
The purpose of this article is to shed light on the mechanism and procedures of a neuro-fuzzy controller that classifies an input face into one of four facial expressions: happiness, sadness, anger and fear. The program works according to facial characteristic points (FCPs) taken from one side of the face and, in contrast with some traditional studies that rely on the whole face, depends on three components: the eyebrows, the eyes and the mouth.
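The fuzzy half of such a controller can be sketched in a few lines. The sketch below is illustrative only: the feature names, membership breakpoints and rules are hypothetical stand-ins, not the paper's actual FCP rules, and a real system would tune them from training data.

```python
# Hypothetical sketch: fuzzy classification of an expression from FCP-derived
# distances on one side of the face. All breakpoints below are invented.

def tri(x, a, b, c):
    """Triangular membership function peaking at b over the interval [a, c]."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def classify(eyebrow_raise, eye_opening, mouth_corner_lift):
    """Score each expression with simple fuzzy rules (min acts as AND)
    and return the label with the highest rule activation."""
    scores = {
        "Happiness": min(tri(mouth_corner_lift, 0.4, 0.8, 1.2),
                         tri(eye_opening,       0.3, 0.5, 0.8)),
        "Sadness":   min(tri(mouth_corner_lift, -1.2, -0.8, -0.4),
                         tri(eyebrow_raise,     -0.8, -0.4, 0.0)),
        "Anger":     min(tri(eyebrow_raise,     -1.2, -0.9, -0.5),
                         tri(eye_opening,        0.6, 0.9, 1.2)),
        "Fear":      min(tri(eyebrow_raise,      0.5, 0.9, 1.3),
                         tri(eye_opening,        0.7, 1.0, 1.3)),
    }
    return max(scores, key=scores.get)
```

In the paper's design the rule outputs would feed a neural stage rather than a plain arg-max, but the membership-and-rule structure is the same.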
The study proposes a weighting model for iris features and selection of the best ones, to show the effect of the weighting and selection process on system performance. It introduces a new weighting and fusion algorithm that depends on inter- and intra-class differences and on fuzzy logic; the output of the algorithm is the weight of each selected feature. The designed system consists of four stages: iris segmentation, feature extraction, implementation of the weighting-selection-fusion model, and recognition. The system uses region descriptors to define the center and radius of the iris region; the iris is then cropped and transformed into polar coordinates by rotation and by selecting radius-sized pixel windows of fixed width from the center to the circumference. The feature extraction stage uses the wavelet vertical details and the statistical metrics of the first and second derivatives of the normalized iris image. In the weighting and fusion step, the best features are selected and fused for the classification stage, which is performed by a distance classifier. The algorithm was applied to the CASIA database, which contains iris images of 250 persons, and achieved 100% segmentation precision and a 98.7% recognition rate. The results show that the segmentation algorithm is robust against illumination and rotation variations and against occlusion by eyelashes and eyelids, and that the weighting-selection-fusion algorithm enhances system performance.
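The core idea of inter/intra-class weighting is that a feature which varies little within a class but widely between class means deserves a larger weight. A minimal Fisher-ratio-style sketch of that idea, feeding a weighted distance classifier, might look as follows; this is a stand-in for intuition, not the paper's fuzzy weighting rule.

```python
# Sketch (assumed, Fisher-ratio style): weight each feature by its
# between-class spread divided by its average within-class spread.
from statistics import mean, pvariance

def feature_weights(classes):
    """classes: a list of classes, each a list of feature vectors.
    Returns one weight per feature, normalized to sum to 1."""
    n_feat = len(classes[0][0])
    weights = []
    for f in range(n_feat):
        class_means = [mean(v[f] for v in c) for c in classes]
        inter = pvariance(class_means)                         # between classes
        intra = mean(pvariance([v[f] for v in c]) for c in classes)  # within
        weights.append(inter / (intra + 1e-9))
    total = sum(weights)
    return [w / total for w in weights]

def weighted_distance(a, b, w):
    """Distance classifier metric with per-feature weights."""
    return sum(wi * (ai - bi) ** 2 for ai, bi, wi in zip(a, b, w)) ** 0.5
```

A feature that is identical across classes receives weight near zero, so it is effectively dropped, which is the selection effect the abstract describes.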
This paper proposes a new approach for segmenting retina images to obtain the optic nerve and blood vessel regions. Retinal images from the DRIVE and STARE databases were used, covering different situations such as illumination variations and different optic nerve positions (left, right and center). The illumination problem is solved in a preprocessing stage that includes histogram-based illumination correction. Next, morphological operations filter the preprocessed image to obtain the region of interest (ROI); the center and radius of the optic nerve are then determined, and the optic nerve region is extracted from the original image. For blood vessel segmentation, illumination correction and median filtering are applied first; then closing, subtraction and further morphological operations yield the blood vessel image, which is thresholded and thinned to produce the final blood vessel image.
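The morphological operations this pipeline leans on reduce to two primitives, erosion and dilation, from which closing (dilate then erode) is built. A minimal binary sketch with a 3x3 structuring element, assuming images as nested 0/1 lists, is:

```python
# Minimal sketch of binary morphology with a 3x3 structuring element.
# Library implementations (e.g. MATLAB's imclose) follow the same logic.

def _neigh(img, r, c):
    """3x3 neighborhood of (r, c), clipped at the image border."""
    h, w = len(img), len(img[0])
    return [img[i][j]
            for i in range(max(r - 1, 0), min(r + 2, h))
            for j in range(max(c - 1, 0), min(c + 2, w))]

def erode(img):
    """Pixel survives only if its whole neighborhood is foreground."""
    return [[int(all(_neigh(img, r, c)))
             for c in range(len(img[0]))] for r in range(len(img))]

def dilate(img):
    """Pixel turns on if any neighbor is foreground."""
    return [[int(any(_neigh(img, r, c)))
             for c in range(len(img[0]))] for r in range(len(img))]

def closing(img):
    """Dilate then erode: fills small gaps, as used before the
    subtraction step in vessel segmentation."""
    return erode(dilate(img))
```

Closing a vessel mask fills one-pixel holes while leaving the overall shape intact, which is why it precedes the subtraction step in the abstract.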
This research aims to develop a new method for breast tumor extraction and feature detection in breast magnetic resonance images, relying on clustering and image-processing algorithms. First, a clustering algorithm segments the image, grouping pixels by their gray-scale values. Morphological operations are then applied to remove noise and undesired regions, after which suspected areas are extracted. Finally, shape features of each extracted area are detected; these features can be very useful for tumor diagnosis. A database of 96 breast magnetic resonance images was used, the proposed approach was implemented in MATLAB, and the extracted tumors and their features were compared with a doctor's opinion.
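The abstract does not name the clustering algorithm, so as one plausible reading, grouping pixels purely by gray value can be done with a one-dimensional k-means. The sketch below is an assumption for illustration (k, the initialization and the iteration count are all arbitrary):

```python
# Assumed sketch: 1-D k-means over pixel gray values, one common way to
# group pixels by intensity before morphological cleanup.
from statistics import mean

def kmeans_gray(values, k=2, iters=20):
    """Cluster scalar gray values; returns the k cluster centers."""
    centers = sorted(values)[:: max(len(values) // k, 1)][:k]  # spread-out init
    for _ in range(iters):
        groups = [[] for _ in centers]
        for v in values:
            i = min(range(len(centers)), key=lambda i: abs(v - centers[i]))
            groups[i].append(v)
        centers = [mean(g) if g else c for g, c in zip(groups, centers)]
    return centers

def label(v, centers):
    """Index of the nearest center, i.e. the pixel's cluster label."""
    return min(range(len(centers)), key=lambda i: abs(v - centers[i]))
```

Pixels labeled with the bright cluster would form the candidate tumor mask that the morphological stage then cleans up.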
Personal identification based on the handprint has been gaining attention with the increasing need for high levels of security. This study proposes a novel approach to human recognition based on the handprint. The wavelet transform, via the wavelet zero-crossing method, is used to extract features from the palm image. First, the wavelet transform of the whole palm image is computed at the fourth level, which yields four matrices: three detail matrices (horizontal, vertical and diagonal) and one approximation matrix. Throughout this study only the detail matrices are used, because the required information (hand lines and curves) is contained in them. Sixteen features are extracted from each detail matrix and arranged in one vector, so each palm sample yields a feature vector of 48 inputs for the neural network. For this purpose a database was built consisting of 400 palm images belonging to 40 people, at 10 images per person. Practical tests showed that the designed system successfully identified 91.36% of the tested images.
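One level of a 2-D Haar decomposition already shows where the four matrices come from (the paper iterates to level four on the approximation). The statistics below are illustrative placeholders, not the sixteen zero-crossing features the study actually uses, and Haar is assumed only as the simplest wavelet.

```python
# Sketch: one level of a 2-D Haar transform on a 2h x 2w image, producing
# the approximation and the three detail matrices the abstract describes.

def haar2d(img):
    """Returns (approx LL, horizontal LH, vertical HL, diagonal HH)."""
    h, w = len(img) // 2, len(img[0]) // 2
    LL = [[0.0] * w for _ in range(h)]
    LH = [[0.0] * w for _ in range(h)]
    HL = [[0.0] * w for _ in range(h)]
    HH = [[0.0] * w for _ in range(h)]
    for r in range(h):
        for c in range(w):
            a, b = img[2 * r][2 * c],     img[2 * r][2 * c + 1]
            d, e = img[2 * r + 1][2 * c], img[2 * r + 1][2 * c + 1]
            LL[r][c] = (a + b + d + e) / 4   # approximation
            LH[r][c] = (a + b - d - e) / 4   # horizontal detail
            HL[r][c] = (a - b + d - e) / 4   # vertical detail
            HH[r][c] = (a - b - d + e) / 4   # diagonal detail
    return LL, LH, HL, HH

def detail_features(mat):
    """Two placeholder statistics per detail matrix: mean |value| and energy.
    The paper instead derives 16 zero-crossing-based features per matrix."""
    vals = [v for row in mat for v in row]
    n = len(vals)
    return [sum(abs(v) for v in vals) / n,
            sum(v * v for v in vals) / n]
```

A vertical palm line produces energy in the vertical detail matrix and none in the others, which is why the detail matrices, not the approximation, carry the line-and-curve information the recognizer needs.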
