This research proposes a new method to verify a handwritten signature image and to decide whether the signature belongs to the claimed person or is a forgery. This is done by extracting geometric features from the signature image and applying statistical functions to them as a way of verifying that person's signature.
The features are extracted from the signature image in several stages. The signature image is first converted from grayscale to binary format; then statistical features are extracted from the genuine signature image: the maximum of the most frequently repeated values along the coordinate lines of ones that outline the signature shape, together with the total number of ones, which also characterizes the shape. Finally, two ranges of acceptable values are defined for the genuine signature image. In the same way, statistical features are extracted from the questioned signature image and tested to determine whether they fall within the specified range of acceptable values. The research also compares the results of the proposed approach with previous methods in this field. The proposed method was tested on a database of 16200 signatures belonging to 300 persons, and the signature images were verified with a good success rate.
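The pipeline above can be sketched in a few lines of Python. This is an illustrative sketch only: the binarization threshold, the choice of image columns as the "coordinate lines" of ones, and all function names are assumptions, not the authors' exact implementation.

```python
# Sketch: binarize a grayscale signature, extract the two shape
# statistics described above, and test them against accepted ranges.

def binarize(gray, threshold=128):
    """Convert a grayscale image (list of rows) into a 0/1 matrix.
    Ink pixels (dark) become 1. The threshold is an assumption."""
    return [[1 if px < threshold else 0 for px in row] for row in gray]

def signature_features(binary):
    """Return (mode_of_column_counts, total_ones).

    For each column, count the ones (ink pixels) along it; the most
    repeated count (the statistical mode) summarizes the stroke
    profile, and total_ones is the overall number of ink pixels.
    """
    cols = len(binary[0])
    col_counts = [sum(row[c] for row in binary) for c in range(cols)]
    mode = max(set(col_counts), key=col_counts.count)
    total_ones = sum(col_counts)
    return mode, total_ones

def accept(features, mode_range, ones_range):
    """Verify a questioned signature: both features must fall inside
    the ranges learned from the genuine signatures."""
    mode, ones = features
    return (mode_range[0] <= mode <= mode_range[1]
            and ones_range[0] <= ones <= ones_range[1])
```

A questioned signature is accepted only when both statistics land inside the ranges computed from the genuine samples, which mirrors the two-range acceptance test described in the abstract.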
facial characteristic points-FCP
neuro-fuzzy controller
Pattern recognition
Signature image
Signature detection
Signature Image's features
Off-line signature verification
statistical functions
This paper proposes a new approach for segmenting side-face images to obtain the ear region. The proposed approach consists of two basic steps. The first step classifies the image pixels into skin and non-skin pixels using a likelihood skin detector.
The resulting likelihood image is processed with morphological operations to detect the ear region. In the second step, the image containing the ear region is isolated from the side-face image by one of two methods: the first is based on experiment, while the second is based on measurements. The study includes a comparison between the results of the proposed approach and previous ones to identify the differences. The proposed approach was applied to a database containing 146 images of 20 persons. These images were taken under different illumination, pose, day, and location variations; partial occlusion by hair or an earring was also taken into account. The results showed that the system achieved correct segmentation at a rate of 95.8%.
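The two-step pipeline above might be sketched as follows. The crude Gaussian-like chroma likelihood, its mean, spread, and threshold, and the 3x3 structuring element are all illustrative assumptions; the paper's actual skin model is not specified here.

```python
import math

def skin_mask(pixels, mean=(150, 120), spread=40.0, thresh=0.5):
    """Step 1 sketch: score (Cr, Cb)-like chroma pairs with a crude
    likelihood and threshold the score into a binary skin mask."""
    mask = []
    for row in pixels:
        out = []
        for (a, b) in row:
            d2 = ((a - mean[0]) ** 2 + (b - mean[1]) ** 2) / spread ** 2
            out.append(1 if math.exp(-d2) >= thresh else 0)
        mask.append(out)
    return mask

def dilate(mask):
    """3x3 binary dilation: a pixel becomes 1 if any neighbour is 1."""
    h, w = len(mask), len(mask[0])
    return [[1 if any(mask[y][x]
                      for y in range(max(0, i - 1), min(h, i + 2))
                      for x in range(max(0, j - 1), min(w, j + 2)))
             else 0 for j in range(w)] for i in range(h)]

def erode(mask):
    """3x3 binary erosion: a pixel stays 1 only if all neighbours are 1."""
    h, w = len(mask), len(mask[0])
    return [[1 if all(mask[y][x]
                      for y in range(max(0, i - 1), min(h, i + 2))
                      for x in range(max(0, j - 1), min(w, j + 2)))
             else 0 for j in range(w)] for i in range(h)]

def close_mask(mask):
    """Morphological closing (dilate then erode): fills small holes in
    the skin region before the ear region is isolated."""
    return erode(dilate(mask))
```

Closing the likelihood mask removes pinholes and thin gaps so that the ear region survives as one connected blob for the isolation step.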
Morphological Operations
Pattern recognition
Ear image
Ear shape
Ear detection
Ear segmentation
Ear recognition
Skin detection
Likelihood
The study proposes a weighting model for iris features and the selection of the best ones, in order to show the effect of the weighting and selection process on system performance. The research introduces a new weighting and fusion algorithm that depends on the inter- and intra-class differences and on fuzzy logic. The output of the algorithm is the weight of each selected feature. The designed system consists of four stages: iris segmentation, feature extraction, implementation of the feature weighting-selection-fusion model, and recognition. The system uses region descriptors to define the center and radius of the iris region; the iris is then cropped and transformed into polar coordinates by rotation and by sampling radius-sized pixel runs of a fixed window from center to circumference. Feature extraction is performed with wavelet vertical details and the statistical metrics of the first and second derivatives of the normalized iris image. In the weighting and fusion step the best features are selected and fused for the classification stage, which uses a distance classifier. The algorithm was applied to the CASIA database, which consists of iris images belonging to 250 persons, and achieved 100% segmentation precision and a 98.7% recognition rate. The results show that the segmentation algorithm is robust against illumination and rotation variations and against occlusion by eyelashes and eyelids, and that the weighting-selection-fusion algorithm enhances system performance.
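The polar unwrapping step (sampling radius-sized pixel runs from the detected center to the circumference) might look like the sketch below. The grid sizes, nearest-neighbour sampling, and border clamping are assumptions for illustration, not the study's exact procedure.

```python
import math

def to_polar(image, cx, cy, radius, n_angles=64, n_radii=16):
    """Unwrap a cropped iris into polar coordinates.

    Returns an n_angles x n_radii matrix: each row is one ray of
    nearest-neighbour samples taken from the center (cx, cy) out to
    the circumference at the given radius.
    """
    polar = []
    for a in range(n_angles):
        theta = 2 * math.pi * a / n_angles
        ray = []
        for r in range(n_radii):
            rho = radius * r / (n_radii - 1)
            x = int(round(cx + rho * math.cos(theta)))
            y = int(round(cy + rho * math.sin(theta)))
            # Clamp to the image bounds so rays near the border stay valid.
            y = min(max(y, 0), len(image) - 1)
            x = min(max(x, 0), len(image[0]) - 1)
            ray.append(image[y][x])
        polar.append(ray)
    return polar
```

In this fixed-window form, an in-plane rotation of the eye becomes a cyclic shift of the rows, which is what makes the polar representation convenient for rotation-tolerant matching.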
Personal identification based on the handprint has been gaining more attention with the increasing need for a high level of security. In this study a novel approach for human recognition based on the handprint is proposed. The wavelet transform was used to extract the features present in the palm image, based on the wavelet zero-crossing method. First, the wavelet transform of the whole palm image was computed at the fourth level, which yields four matrices: three detail matrices (horizontal, vertical, and diagonal) and one approximation matrix. Throughout this study only the detail matrices were used, because the required information (the hand lines and curves) is contained in them. Sixteen features were extracted from each detail matrix and arranged in one vector, so that each palm sample yields a feature vector of 48 features, which form the inputs of the neural network. For this purpose, a database of 400 palm images belonging to 40 people, at 10 images per person, was built. Practical test results showed that the designed system correctly identified 91.36% of the tested images.
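The decomposition and feature-vector construction can be sketched as below. This is a minimal sketch under stated assumptions: it uses a Haar filter at one level for brevity (the study uses the fourth level), and since the study does not say which 16 statistics it computes, the 16 numbers per detail matrix are taken here as mean absolute coefficients over a 4x4 grid of blocks.

```python
def haar2d(img):
    """One 2D Haar step: returns (LL, LH, HL, HH) quarter matrices.
    img must have even width and height."""
    h, w = len(img) // 2, len(img[0]) // 2
    LL = [[0.0] * w for _ in range(h)]; LH = [[0.0] * w for _ in range(h)]
    HL = [[0.0] * w for _ in range(h)]; HH = [[0.0] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            a, b = img[2 * i][2 * j],     img[2 * i][2 * j + 1]
            c, d = img[2 * i + 1][2 * j], img[2 * i + 1][2 * j + 1]
            LL[i][j] = (a + b + c + d) / 4.0   # approximation
            LH[i][j] = (a + b - c - d) / 4.0   # horizontal detail
            HL[i][j] = (a - b + c - d) / 4.0   # vertical detail
            HH[i][j] = (a - b - c + d) / 4.0   # diagonal detail
    return LL, LH, HL, HH

def block_features(mat, grid=4):
    """Illustrative 16 numbers per matrix: mean absolute coefficient
    over a grid x grid tiling of the matrix."""
    h, w = len(mat), len(mat[0])
    bh, bw = h // grid, w // grid
    feats = []
    for bi in range(grid):
        for bj in range(grid):
            vals = [abs(mat[i][j])
                    for i in range(bi * bh, (bi + 1) * bh)
                    for j in range(bj * bw, (bj + 1) * bw)]
            feats.append(sum(vals) / len(vals))
    return feats

def palm_vector(img):
    """48-feature vector: 16 block features from each of the three
    detail matrices; the approximation matrix is discarded, as in
    the study."""
    _, LH, HL, HH = haar2d(img)
    return block_features(LH) + block_features(HL) + block_features(HH)
```

The 48-element output of `palm_vector` corresponds to the input layer of the neural network described above; only the detail matrices contribute, since the hand lines and curves live in the high-frequency sub-bands.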