Article

Key Point Localization Based on Intersecting Circle for Palmprint Preprocessing in Public Security

Zibo Zhou1,3, Qi Chen2, Lu Leng1,3,*
1Key Laboratory of Jiangxi Province for Image Processing and Pattern Recognition, Nanchang Hangkong University, Nanchang 330063, China
2School of Information Engineering, Nanchang Hangkong University, Nanchang 330063, China
3School of Software, Nanchang Hangkong University, Nanchang 330063, China
*Corresponding Author : Lu Leng, Nanchang Hangkong University, 696 Fenghe South Avenue, Nanchang 330063, China. E-mail: drluleng@gmail.com, Tel. +86-791-86453251

© Copyright 2019 Innovative Defense Acquisition Society. This is an Open-Access article distributed under the terms of the Creative Commons Attribution Non-Commercial License (http://creativecommons.org/licenses/by-nc/4.0/) which permits unrestricted non-commercial use, distribution, and reproduction in any medium, provided the original work is properly cited.

Received: Dec 26, 2019; Revised: Dec 30, 2019; Accepted: Dec 30, 2019

Published Online: Dec 31, 2019

Abstract

Secure identity authentication is important in defense technology fields and public security. Palmprint is a good biometric modality for authentication on mobile devices. Unfortunately, the background of palmprint images collected with mobile devices is random and changeable. Usually, the region of interest is localized based on the key points of the inter-finger-valleys. In order to resist random and variable background interference and effectively remove false inter-finger-valley contours, this paper develops a key point localization method based on the intersecting circle for palmprint preprocessing. Since the contour curvature at the inter-finger-valleys is greater than that of the rest of the hand, each contour point is taken as the center of an intersecting circle, and the radius of the intersecting circle is set according to the finger width. The developed method determines the set of candidate contour points at the inter-finger-valleys. The remaining false inter-finger-valley contour points are further removed according to the direction of the central angle. The bottom point of the contour at each inter-finger-valley is used as the key point. Sufficient experiments show that the proposed method is less dependent on the segmentation quality and is robust to false contours and background interference, so its localization and recognition accuracies are high.

Keywords: Key Point Localization; Palmprint Preprocessing; Inter-Finger-Valley; Intersecting Circle; Mobile Device

I. INTRODUCTION

Secure identity authentication is important in defense technology fields and public security. As a novel authentication mode, biometric recognition has become popular due to its notable advantages over traditional authentication modes [1, 2]. Currently, biometric samples can be directly acquired and used for authentication on mobile devices. Palmprint is a good biometric modality due to its rich texture information [3, 4], relatively low resolution requirement, low cost of acquisition equipment, etc. [5, 6]. In addition, palmprint can be fused with other hand biometric modalities [7-9] and used for multi-instance fusion [10].

Palmprint image acquisition can be briefly divided into contact and non-contact modes [11]. In the non-contact mode, users do not need to touch the device surface, and the operation is relatively convenient [12]. Although this improves acquisition flexibility to a certain extent, it suffers from more severe challenges than the contact mode, including background complexity, illumination variance, uncontrollable hand posture and position, and so on [13]. The environment of contact image collection is ideal, so contact palmprint preprocessing methods cannot be used directly in the non-contact mode due to the aforementioned challenges.

The palmprint image can be captured with the built-in camera of a mobile device, and the hand does not need to touch the device. Therefore, the mobile mode can be regarded as a special case of the non-contact mode [14, 15]. Since mobile devices can be used almost anytime and anywhere, the challenges existing in the non-contact mode are even more remarkable in the mobile mode [16]. In addition, due to the limited resources of mobile devices, the computational complexity and storage overhead cannot be too high [17, 18].

1.1. Assistant Palmprint Image Acquisition

In order to reduce the complexity of segmentation and localization, some researchers designed assistant image acquisition techniques [19]. Shoichiro et al. designed an assistant rectangular guide window to help users place their hands [20]. Similarly, Kim et al. designed an assistant hand-shaped guide window [21]. Ramli et al. designed a hand template to control the appropriate distance between the hand and the input device [22]. In [23], Lee et al. designed three inter-finger-valley (IFV) assistant lines to help users place their hands. In 2017, Leng et al. designed two assistant points to align the IFV key points (KPs) [24] and implemented the scheme on the iOS mobile platform [25]. In 2018, Leng et al. designed a dual-line-single-point acquisition method [26] and implemented the scheme on the Android mobile platform [27]. The dual-line-single-point assistant graph consists of two parallel line segments of different lengths and an anchor point. This method requires few restrictions and operations, and offers a comfortable user experience. However, the region of interest (ROI) is directly cropped with the assistant graph, so this method requires a high degree of user cooperation.

1.2. Non-contact Palmprint Segmentation and Localization

Palmprint preprocessing is important, because its segmentation and localization affect the accuracy of feature extraction and matching. Non-contact palmprint preprocessing methods can be summarized as follows:

Skin color model: In the YCbCr, HSV, CIELab, and other color spaces, segmentation and localization methods are based on hand skin color. In order to improve the robustness of segmentation to illumination changes, Rotinwa-Akinbile et al. proposed a skin color neural network model [28]. Zhao et al. first used an elliptical skin color model for skin color detection, and then performed secondary skin color detection based on split K-means clustering [29]. In 2017, Zhang et al. proposed an adaptive segmentation method based on a Gaussian skin color model [30], but the method is susceptible to skin-like colors, which leads to over- or under-segmentation.

Maximum inscribed circle: A maximum inscribed circle, which is tangent to both sides of the hand and contains rich palm information, is detected as the ROI [31, 32]. The disadvantage of this method is that finding the center of the largest inscribed circle is time-consuming.

Statistical model: The Active Appearance Model (AAM) and Active Shape Model (ASM) are two typical statistical models [33]. Fratric et al. proposed a real-time hand localization method based on the Viola-Jones approach [34]. Aykut et al. obtained prior knowledge from the shapes and textures observed in a training set, and segmented the hand shape [35, 36]. The AAM method has a strong ability to resist changes in hand posture and background, so it improves the accuracy and robustness of the segmentation algorithm. However, this method mixes two complex features, shape and texture, so its computational overhead is large.

Projection: Huang et al. performed vertical projection on the segmented binary image. The length of the fingers affects the accuracy of KP localization [37].

Tangent model: Zhang et al. used a line tangent to the two IFV contours to localize the IFV KPs [38].

Competitive hand valley detection (CHVD): This method checks four conditions on all pixels in the hand image. Only the pixels satisfying all four conditions at the same time are determined to be candidate KPs [39].

Triple-perpendicular-directional translation residual (TPDTR): Leng et al. proposed the logical conjunction of TPDTR to detect IFVs. The three directions are the direction toward the middle fingertip and its two perpendicular directions [40, 41]. However, when the background is complicated, some false IFVs can be detected.

In addition to the above palmprint KP localization methods, there are also methods based on the radial distance function [42, 43], line fitting [44, 45], and morphological corner detection [46]. However, these methods can suffer from low localization accuracy in complex environments, or from a limited applicable range. When KP localization is inaccurate, subsequent operations such as feature extraction [47, 48] and coding [49, 50] can be degraded.

In order to overcome the disadvantages of the above methods, this paper designs a KP localization method based on the intersecting circle (IC). This method is good at removing false contours and resisting interference, so both the localization accuracy and the recognition accuracy are high.

II. METHODOLOGY

2.1. Inter-Finger-Valley Localization Based on Intersecting Circle

The hand region and contour are segmented with an improved Gaussian skin-color model [25]. According to the characteristic that the contour curvature is large at the IFVs, we design a KP localization method based on the IC. The larger the contour curvature is, the smaller the central angle is. According to this characteristic, the contours of the fingertips and IFVs are retained. Then, the IFV direction range is used to exclude the fingertips and false IFVs. Finally, the middle point at the bottom of each IFV contour is determined as the KP.

2.1.1. Obtaining inter-finger-valley and fingertip contours

Hand images are captured with the I-shaped guide assistant method; the geometric structure of the hand is shown in Fig. 1. The coordinate system is established on the image with the upper-left corner as the origin; the positive directions of the X-axis and Y-axis are horizontally rightward and vertically downward, respectively. The height and width of the hand image are H and W, respectively. The implementation steps are as follows:

Fig. 1. Geometric structure of hand.

Step 1: Extract edge.

The Sobel operator is used to extract the edges.
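
A minimal sketch of this step, assuming OpenCV is available and `gray` is the segmented grayscale hand image (the function name `sobel_edge` is illustrative, not from the paper):

```python
import cv2

def sobel_edge(gray):
    # Horizontal and vertical gradients, combined into an edge-magnitude map.
    gx = cv2.Sobel(gray, cv2.CV_64F, 1, 0, ksize=3)
    gy = cv2.Sobel(gray, cv2.CV_64F, 0, 1, ksize=3)
    return cv2.convertScaleAbs(cv2.magnitude(gx, gy))  # 8-bit edge image
```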

Step 2: Remove wrist.

The wrist affects hand contour extraction, so it needs to be removed. As shown in Fig. 1, the ratio of middle finger length to palm length is about 3:4 and AB:BC is 5:3, so the y coordinate of B is:

y_B = 0.1H + (5/8) × 0.9H ≈ 0.66H
(1)
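
The wrist-removal step can be sketched as follows (a hedged illustration, assuming the hand region starts about 0.1H from the top of the image and the contour is a list of (x, y) points; the names are hypothetical):

```python
def wrist_row(H):
    # Eq. (1): y_B = 0.1*H + (5/8) * 0.9*H, roughly 0.66*H.
    return int(0.1 * H + (5.0 / 8.0) * 0.9 * H)

def remove_wrist(contour_pts, H):
    # Discard contour points at or below the wrist row (y grows downward).
    y_b = wrist_row(H)
    return [(x, y) for (x, y) in contour_pts if y < y_b]
```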

Step 3: Extract intersection points.

Each contour point is used as the center of an IC. The width occupied by each finger is about W/8 to W/10. The selected IC radius should be smaller than the finger width, i.e., r < W/10, so we set r = W/13. Let (x_0, y_0) denote the center of the IC; its intersection points with the hand contour satisfy:

r − 0.5 < √((x − x_0)^2 + (y − y_0)^2) < r + 0.5
(2)

where (x, y) represents the coordinates of the contour points.
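
Eq. (2) selects the contour pixels that lie on a one-pixel-wide ring of radius r around the IC center; a minimal NumPy sketch (not the authors' code) is:

```python
import numpy as np

def intersection_points(contour_pts, center, r):
    # contour_pts: array-like of (x, y) contour coordinates; center: (x0, y0).
    pts = np.asarray(contour_pts, dtype=np.float64)
    d = np.hypot(pts[:, 0] - center[0], pts[:, 1] - center[1])
    return pts[(d > r - 0.5) & (d < r + 0.5)]   # points satisfying Eq. (2)
```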

Step 4: Count the number of intersection points.

Let n denote the number of intersection points between the IC and the hand contour. Whether the center point is retained depends on the value of n.

If n = 1, the point is close to a contour endpoint on either side of the wrist.

If n = 2, the point is on a fingertip or IFV contour.

If n = 3 or 4, the point is on the contour of a finger body, because the distance between two adjacent fingers is less than the IC radius.

If n ≥ 5, the hand is abnormally collected, or there are major defects in the segmentation or contour extraction.

The contour points with different values of n are shown in Fig. 2, and a sketch of this step follows below. Only the contour points with n = 2 are retained.
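
One possible reading of Step 4, sketched below under stated assumptions, groups the ring pixels returned by Eq. (2) into clusters so that each crossing of the contour is counted once as n, and keeps only the centers with n = 2; the 3-pixel clustering gap is an assumption, and `intersection_points` is the sketch from Step 3:

```python
import numpy as np

def count_crossings(ring_pts, gap=3.0):
    # Merge nearby ring pixels into clusters; each cluster is one crossing.
    if len(ring_pts) == 0:
        return 0
    pts = np.asarray(ring_pts, dtype=np.float64)
    labels = -np.ones(len(pts), dtype=int)
    n = 0
    for i in range(len(pts)):
        if labels[i] >= 0:
            continue
        labels[i] = n
        stack = [i]
        while stack:                       # flood-fill points within `gap` pixels
            j = stack.pop()
            d = np.hypot(pts[:, 0] - pts[j, 0], pts[:, 1] - pts[j, 1])
            for k in np.where((d < gap) & (labels < 0))[0]:
                labels[k] = n
                stack.append(k)
        n += 1
    return n

def candidate_points(contour_pts, r):
    # Retain contour points whose IC crosses the hand contour exactly twice (n = 2).
    return [p for p in contour_pts
            if count_crossings(intersection_points(contour_pts, p, r)) == 2]
```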

Fig. 2. The contour points with different intersection point numbers.
2.1.2. Inter-finger-valley detection

Let P be a contour point, and let E and F be the two points where its IC intersects the hand contour. If ∠EPF < 135°, the contour point P is kept; otherwise, it is removed.

IFV and fingertip contours both have "U"- or "V"-shaped characteristics. According to the central angle bisector directions of the different contours, the fingertip contours and the false IFV contours are excluded.

In Fig. 3(a), Arrows 1, 2, and 3 indicate the central angle bisector direction ranges of fingertips, false IFVs, and true IFVs, respectively [51]. A contour is removed when all of its points satisfy that the angle between the central angle bisector and the vertical direction is larger than 75°. Finally, the four IFV contours with a few interference points, as shown in Fig. 3(b), are retained.
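
A hedged sketch of the two angular tests follows; it assumes the fingers point toward the top of the image (as implied by Fig. 1), so the "vertical direction" is taken here as the -y image direction, and P, E, F are the IC center and its two contour crossings:

```python
import numpy as np

def central_angle_deg(P, E, F):
    # Angle EPF subtended at the IC center P.
    v1 = np.asarray(E, float) - np.asarray(P, float)
    v2 = np.asarray(F, float) - np.asarray(P, float)
    c = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    return np.degrees(np.arccos(np.clip(c, -1.0, 1.0)))

def bisector_deviation_deg(P, E, F, vertical=(0.0, -1.0)):
    # Angle between the bisector of EPF and the assumed vertical direction.
    v1 = np.asarray(E, float) - np.asarray(P, float)
    v2 = np.asarray(F, float) - np.asarray(P, float)
    b = v1 / np.linalg.norm(v1) + v2 / np.linalg.norm(v2)
    c = np.dot(b, np.asarray(vertical)) / np.linalg.norm(b)
    return np.degrees(np.arccos(np.clip(c, -1.0, 1.0)))

def keep_point(P, E, F):
    # High-curvature test: keep only centers with a central angle below 135 degrees.
    return central_angle_deg(P, E, F) < 135.0

def keep_contour(triples):
    # triples: (P, E, F) for every point of one candidate contour; the contour is
    # removed only when all bisectors deviate from vertical by more than 75 degrees.
    return any(bisector_deviation_deg(P, E, F) <= 75.0 for (P, E, F) in triples)
```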

Fig. 3. Remove interference contours.
2.2. Inter-Finger-Valley Contour Repair

After the preceding steps, there may be a few unsatisfactory points in the IFV contours, which cause fractures in the IFV contours. In order to connect the broken IFV contours while keeping their shapes unchanged, a 3 × 3 template is used to perform a morphological close operation on all contour points. Finally, the four longest contours are kept as the IFV contours.
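
The repair step can be sketched with OpenCV morphology (a minimal illustration, assuming `mask` is a binary image of the retained IFV contour points and OpenCV 4.x return conventions):

```python
import cv2
import numpy as np

def repair_ifv(mask):
    # 3 x 3 morphological closing reconnects small breaks in the IFV contours.
    kernel = np.ones((3, 3), np.uint8)
    closed = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)
    # Keep only the four longest connected contours as the IFV contours.
    contours, _ = cv2.findContours(closed, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
    return sorted(contours, key=len, reverse=True)[:4]
```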

2.3. Key Point Localization

First, the row of the points at the bottom of each IFV contour is found; then the middle point of these points is selected as the KP.
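
A minimal sketch of this rule (assuming the y-axis grows downward, so the bottom of an IFV contour has the largest y):

```python
import numpy as np

def key_point(ifv_contour):
    # ifv_contour: OpenCV-style contour, reshaped to an (N, 2) array of (x, y).
    pts = np.asarray(ifv_contour, dtype=int).reshape(-1, 2)
    y_bottom = pts[:, 1].max()                      # bottom row of the IFV contour
    bottom = pts[pts[:, 1] == y_bottom]
    bottom = bottom[np.argsort(bottom[:, 0])]
    return tuple(bottom[len(bottom) // 2])          # middle point of the bottom row
```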

III. EXPERIMENTS AND DISCUSSION

3.1. Database and Setup

The palmprint samples are acquired according to the I-shaped guide assistant method. Five to seven images of each hand are collected. The database contains 918 samples of 160 hands from 80 people. The male-to-female ratio is about 7:3, and the ages range from 18 to 60; most individuals are undergraduates. The collection mobile device is a Huawei G750-t01. The collection environments include indoor and outdoor scenes under various illuminations, and even no-illumination conditions.

3.2. Key Point Localization

Fig. 4 shows the palmprint images taken against different backgrounds. Several KP localization methods are compared. Red dots and blue boxes indicate KPs and ROIs, respectively.

Fig. 4. Comparison of localization results in different environments (projection method, tangent method, CHVD, TPDTR, our method).

In the ideal background (Fig. 4(a)), all methods produce good results. In the background with skin-like colors (Fig. 4(b)), the KP localization of our method is more accurate than that of the others. In the complex background (Fig. 4(c)), only our method succeeds in localizing the KP and ROI.

Our method has a lower error rate. In complex backgrounds, the projection method and the tangent method have a weak ability to resist background interference. CHVD is easily affected by the texture near the IFVs, which results in localization errors. The TPDTR method has good stability, but the "U"- and "V"-shaped contours in complex backgrounds can lead to the detection of false IFV contours.

We judge the accuracy of KP localization with the correct rate. The lowest points of the IFVs between the index finger and the middle finger, as well as between the ring finger and the little finger, are manually labeled as the standard KPs. The images are resized to 600 × 338. When the distance between the detected KP and the labeled point is within 5 pixels, the localization is judged to be correct. As shown in Table 1, the correct rate of our method is 96.3%, which is higher than those of the compared methods.
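
The evaluation criterion can be sketched as follows (an illustration of the 5-pixel rule, not the authors' evaluation script):

```python
import numpy as np

def correct_rate(detected_kps, labeled_kps, tol=5.0):
    # Both arguments are (N, 2) arrays of (x, y) coordinates on the 600 x 338 images.
    detected = np.asarray(detected_kps, float)
    labeled = np.asarray(labeled_kps, float)
    dist = np.hypot(detected[:, 0] - labeled[:, 0], detected[:, 1] - labeled[:, 1])
    return 100.0 * np.mean(dist <= tol)             # percentage of correct localizations
```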

Table 1. Comparison of the correct rate (%) of each localization method

                     Projection   Tangent   CHVD   TPDTR   Ours
Correct quantity        623         704      742     779    884
Correct rate (%)       67.9        76.7     80.8    84.9   96.3
3.3. Verification Performance

The equal error rate (EER), d', and the receiver operating characteristic (ROC) curve are used as palmprint verification performance indicators. The palmprint feature templates are Binary Orientation Co-occurrence Vectors (BOCV) [52], which is a good coding scheme [53] with good verification performance thanks to fusion at the score level [54, 55]. The experimental results are shown in Table 2 and Fig. 5.
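
For reference, the two scalar indicators can be computed from genuine and impostor matching scores as sketched below (standard definitions, not the authors' code, assuming smaller scores indicate better matches, e.g., normalized Hamming distances between BOCV templates):

```python
import numpy as np

def eer(genuine, impostor):
    # Equal error rate (%): the operating point where FAR and FRR coincide.
    g, i = np.asarray(genuine, float), np.asarray(impostor, float)
    thresholds = np.sort(np.concatenate([g, i]))
    far = np.array([np.mean(i <= t) for t in thresholds])   # impostors accepted
    frr = np.array([np.mean(g > t) for t in thresholds])    # genuines rejected
    k = np.argmin(np.abs(far - frr))
    return 100.0 * (far[k] + frr[k]) / 2.0

def d_prime(genuine, impostor):
    # Decidability index: separation between the two score distributions.
    g, i = np.asarray(genuine, float), np.asarray(impostor, float)
    return abs(g.mean() - i.mean()) / np.sqrt((g.var() + i.var()) / 2.0)
```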

Table 2. Comparison of EER (%) and d'

            Projection   Tangent   CHVD   TPDTR   Ours
EER (%)        5.78        3.01    3.18    2.32   1.98
d'             3.24        3.59    4.01    4.49   5.11
Fig. 5. ROC comparison.

In summary, our IC-based KP localization achieves higher recognition accuracy than the compared methods and a stronger ability to resist background interference.

IV. CONCLUSION

Uncontrollable palm placement and other factors in the mobile acquisition environment can easily lead to KP localization failure. The proposed IC-based KP localization first selects an appropriate radius and central angle threshold to exclude contours with large central angles, and then uses the direction of the central angle bisector formed by the IC and the IFV contour to exclude fingertip contours and false IFV contours. A morphological close operation is performed to repair the IFV contours. According to the experimental results, this method is a practical scheme for palmprint verification in mobile environments.

ACKNOWLEDGEMENT

This work was supported partially by the National Natural Science Foundation of China (61866028, 61763033, 61662049, 61663031, and 61866025), the Key Program Project of Research and Development (Jiangxi Provincial Department of Science and Technology) (20171ACE50024, 20161BBE50085), the Construction Project of Advantageous Science and Technology Innovation Team in Jiangxi Province (20165BCB19007), the Application Innovation Plan (Ministry of Public Security of P. R. China) (2017YY CXJXST048), and the Open Foundation of Key Laboratory of Jiangxi Province for Image Processing and Pattern Recognition (ET201680245, TX201604002), Innovation Foundation for Postgraduate (YC2018094).

REFERENCES

[1].

W. M. Matkowski, T. Chai, and A. W. K. Kong, “Palmprint recognition in uncontrolled and uncooperative environment,” IEEE Transactions on Information Forensics and Security, 2020 (in press).

[2].

L. Leng and J. S. Zhang, “PalmHash code vs. PalmPhasor code,” Neurocomputing, vol. 108, 2013, pp. 1-12.

[3].

L. Fei, G. Lu, W. Jia, et al, "Feature extraction methods for palmprint recognition: a survey and evaluation," IEEE Transactions on Systems, Man, and Cybernetics: Systems, vol. 49, no. 2, 2018, pp. 346-363.

[4].

L. Leng, A. B. J. Teoh, M. Li, et al, “A remote cancelable palmprint authentication protocol based on multi-directional two-dimensional PalmPhasor-fusion,” Security and Communication Networks, vol. 7, no. 11, 2014, pp. 1860-1871.

[5].

L. Leng, J. S. Zhang, M. K. Khan, et al. "Cancelable PalmCode generated from randomized Gabor filters for palmprint template protection," Scientific Research and Essays, vol. 6, no. 4, 2010, pp. 784-792.

[6].

L. Leng, A. B. Teoh, and M. Li, “Simplified 2DPalmHash code for secure palmprint verification,” Multimedia Tools and Applications, vol. 76, no. 6, 2017, pp. 8373-8398.

[7].

G. Liu, M. Li, L. Leng, et al. “Adaptive conjugate discrimination power analysis for palm-print-vein verification,” Computer Simulation, vol. 32, no. 5, 2015, pp. 364-367.

[8].

L. Leng, G. Liu, and M. Li, "Conjugate verification of cancelable palm-print-vein using 2DPalmHash Code," Journal of Optoelectronics Laser, vol. 25, no. 9, 2014, pp. 1791-1795.

[9].

L. Leng, M. Li, and A. B. J. Teoh, “Conjugate 2DPalmHash code for secure palm-print-vein verification,” in 6th International Congress on Image and Signal Processing (CISP), IEEE, Hangzhou, Dec. 2013, pp. 1694-1699.

[10].

L. Leng, M. Li, C. Kim, et al, “Dual-source discrimination power analysis for multi-instance contactless palmprint recognition,” Multimedia Tools and Applications, vol. 76, no. 1, 2017, pp. 333-354.

[11].

D. Zhang, W. Zuo, and F. Yue, “A comparative study of palmprint recognition algorithms,” ACM Computing Surveys, vol. 44, no. 1, 2012, pp. 1-37.

[12].

L. Leng and A. B. J. Teoh, "Alignment-free row-co-occurrence cancelable palmprint Fuzzy Vault," Pattern Recognition, vol. 48, no. 7, 2015, pp. 2290-2303.

[13].

L. Leng, A. B. J. Teoh, M. Li, et al, “Analysis of correlation of 2DPalmHash Code and orientation range suitable for transposition,” Neurocomputing, vol. 131, 2014, pp. 377-387.

[14].

L. Leng and J. S. Zhang, “Dual-key-binding cancelable palmprint cryptosystem for palmprint protection and information security,” Journal of Network and Computer Applications, vol. 34, no. 6, 2011, pp. 1979-1989.

[15].

L. Leng, A. B. J. Teoh, M. Li, et al, “Orientation range for transposition according to the correlation analysis of 2DPalmHash Code,” in International Symposium on Biometrics and Security Technologies (ISBAST), IEEE, Chengdu, Jul. 2013, pp. 230-234.

[16].

Y. X. Wu, L. Leng, H. P. Mao, et al. “Non-contact palmprint attendance system on PC platform,” Journal of Multimedia Information System, vol. 5, no. 3, 2018, pp. 179-188.

[17].

G. Raja, S. B. M. Baskaran, D. Ghosal, et al, "Reduced overhead frequent user authentication in EAP-dependent broadband wireless networks," Mobile Networks and Applications, vol. 21, no. 3, 2016, pp. 523-538.

[18].

L. Leng, J. S. Zhang, G. Chen, et al. “Two-directional two-dimensional random projection and its variations for face and palmprint recognition,” in International Conference Computational Science and Its Applications, Springer, Berlin, Heidelberg, Dec. 2011, pp. 458-470.

[19].

L. Leng and J. S. Zhang, “Development and integration of software platform for palmprint recognition based on software component,” Journal of Jilin University (Engineering and Technology Edition), vol. 42, no. 1, 2012, pp. 166-169.

[20].

A. Shoichiro, I. Koichi, and A. A. Takafumi. “Contactless palmprint recognition algorithm for mobile phones,” in International Workshop on Advanced Image Technology, Dec. 2013, pp. 409-411.

[21].

J. S. Kim, G. Li, B. J. Son, et al. "An empirical study of palmprint recognition for mobile phones," IEEE Transactions on Consumer Electronics, vol. 61, no. 3, 2015, pp. 311-319.

[22].

D. A. Ramli and S. Ibrahim, “Evaluation on palm-print ROI selection techniques for smart phone based touch-less biometric system,” American Academic and Scholarly Research Journal, vol. 5, no. 5, 2013, pp. 205-210.

[23].

S. H. Lee, S. B. Kang, D. H. Nyang, et al, “Effective palmprint authentication guideline image with smart phone,” Journal of Korea Communications Society, vol. 39, no. 11, 2014, pp. 994-999.

[24].

K. S. Cao and L. Leng, "Preprocessing based on double-point auxiliary for palmprint recognition on mobile devices," Journal of Optoelectronics Laser, vol. 29, no. 2, 2018, pp. 205-211.

[25].

F. M. Gao, L. Leng, and J. X. Zeng, “Palmprint recognition system with double-assistant-point on iOS mobile devices,” in Workshop (Image Analysis for Human Facial and Activity Recognition) in British Machine Vision Conference (BMVC), Newcastle, UK, Sept. 2018.

[26].

F. M. Gao, L. Leng and X. Fu, “Palmprint verification system with dual-line-single-point assistance on Android mobile devices,” in 4th Annual International Conference on Network and Information Systems for Computers (ICNISC), IEEE, Wuhan, Apr. 2018, pp. 437-441.

[27].

L. Leng, F. Gao, Q. Chen, et al, “Palmprint recognition system on mobile devices with double-line-single-point assistance,” Personal and Ubiquitous Computing, vol. 22, no. 1, 2018, pp. 1-12.

[28].

M. O. Rotinwa-Akinbile, A. M. Aibinu, and M. J. E. Salami, “A novel palmprint segmentation technique,” in First International Conference on Informatics and Computational Intelligence, IEEE, Dec. 2011, pp. 235-239.

[29].

J. Zhao, S. Q. Bing, and Y. K. Liu, “Skin color detection method based on split K-means clustering,” Computer Engineering and Applications, vol. 50, no. 1, 2014, pp. 134-138.

[30].

Q. Zhang, M. Li, and L. Leng, “Palmprint segmentation combining adaptive skin-color model and region growing,” Computer Engineering and Design, vol. 38, no. 10, 2017, pp. 2794-2798.

[31].

Y. X. Wang and Q. Q. Ruan, “A new palmprint image preprocessing method,” Journal of Image and Graphics, vol. 13, no. 6, 2008, pp. 1115-1122.

[32].

Z. L. Li, Q. C Tian, Y. C. Zhu, et al, “Palmprint region segmentation algorithm based on geometric features,” Application Research of Computers, vol. 27, no. 6, 2010, pp. 2370-2372.

[33].

F. M. Gao, K. S. Cao, L. Leng, et al, "Mobile palmprint segmentation based on improved active shape model," Journal of Multimedia Information System, vol. 5, no. 4, 2018, pp. 221-228.

[34].

I. Fratric and S. Ribaric, “Real-time model-based hand localization for unsupervised palmar image acquisition,” in International Conference on Biometrics, Springer, Berlin, Heidelberg, Dec. 2009, pp. 1280-1289.

[35].

M. Aykut and M. Ekinci, “AAM-based palm segmentation in unrestricted backgrounds and various postures for palmprint recognition,” Pattern Recognition Letters, vol. 34, no. 9, 2013, pp. 955-962.

[36].

M. Aykut and M. Ekinci, “Developing a contactless palmprint authentication system by introducing a novel ROI extraction method,” Image and Vision Computing, vol. 40, no. 8, 2015, pp. 65-74.

[37].

S. Huang, C. Xu, J. H. Xu, et al, "Extraction and repair of palmprint main line based on wavelet theory," Journal of Image and Graphics, vol. 11, no. 8, 2006, pp. 1139-1149.

[38].

D. Zhang, A. W. K. Kong, J. You, et al. “Online palmprint identification,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 25, no. 9, 2003, pp. 1041-1050.

[39].

T. A. B. Wirayuda, H. A. Adhi, D. H. Kuswanto, et al. “Real-time hand-tracking on video image based on palm geometry,” in Information and Communication Technology, IEEE, Dec. 2013, pp. 241-246.

[40].

L. Leng, G. Liu, M. Li, et al. “Logical conjunction of triple-perpendicular-directional translation residual for contactless palmprint preprocessing,” in 2014 11th International Conference on Information Technology: New Generations, IEEE, Dec. 2014, pp. 523-528.

[41].

G. Liu, M. Li, and L. Leng, “Location method of touchless palmvein image using point set operation,” Computer Simulation, vol. 32, no. 1, 2015, pp. 458-462.

[42].

S. Aoyama, K. Ito, T. Aoki, et al. "A contactless palmprint authentication algorithm for mobile phones," Transactions of the Institute of Electronics Information and Communication Engineers A, vol. 96, no. 5, 2013, pp. 250-263.

[43].

K. Ito, T. Sato, S. Aoyama, et al. “Palm region extraction for contactless palmprint recognition,” in International Conference on Biometrics, IEEE, Dec. 2015, pp. 334-340.

[44].

F. Li, “Mobile based palmprint recognition system,” in 2015 International Conference on Control, Automation and Robotics, IEEE, Dec. 2015, pp. 233-237.

[45].

W. Xie, "Multispectral palmprint coding technology for human body recognition," Computer Engineering, vol. 42, no. 3, 2016, pp. 226-231.

[46].

J. W. Liu and Y. Chen, "Segmentation of palmprint image based on morphological corner detection," Modern Computer, vol. 24, no. 9, 2016, pp. 35-38.

[47].

L. Leng, J. S. Zhang, M. K. Khan, et al. “Dynamic weighted discrimination power analysis: a novel approach for face and palmprint recognition in DCT domain,” International Journal of the Physical Sciences, vol. 5, no. 17, 2010, pp. 2543-2554.

[48].

L. Leng, J. S. Zhang, J. Xu, et al. “Dynamic weighted discrimination power analysis in DCT domain for face and palmprint recognition,” in International Conference on Information and Communication Technology Convergence (ICTC), IEEE, Jeju, South Korea, Nov. 2010, pp. 467-471.

[49].

L. Leng, A. B. J. Teoh, M. Li, et al. "Orientation range of transposition for vertical correlation suppression of 2DPalmPhasor code," Multimedia Tools and Applications, vol. 72, no. 24, 2015, pp. 11683-11701.

[50].

L. Leng and J. S. Zhang, “PalmHash code for palmprint verification and protection,” in 25th Annual Canadian Conference on Electrical and Computer Engineering (CCECE), IEEE, Montreal, Canada, Apr. 2012, pp. 1-4.

[51].

Q. Chen, Q. Zhang, H. Chen, et al. “Angular bisector fitting of finger valley edge for palmprint localization,” Microelectronics and Computer, vol. 35, no. 2, 2018.

[52].

Z. Guo, D. Zhang, L. Zhang, et al. "Palmprint verification using binary orientation co-occurrence vector," Pattern Recognition Letters, vol. 30, no. 13, 2009, pp. 1219-1227.

[53].

L. Leng, M. Li, and A. B. J. Teoh, “Matching reduction of 2DPalmHash code,” in International Symposium on Biometrics and Security Technologies (ISBAST), IEEE, Kuala Lumpur, Malaysia, Aug. 2014, pp. 124-128.

[54].

L. Leng, J. S. Zhang, G. Chen, et al. “Two dimensional PalmPhasor enhanced by multi-orientation score level fusion,” in 8th FTRA International Conference on Secure and Trust Computing, Data Management, and Applications (STA), Springer, Berlin, Heidelberg, Jun. 2011, pp. 122-129.

[55].

L. Leng, M. Li, A. B. J. Teoh, et al. "Theoretical analysis on the performance of PalmPhasor algorithm," Journal of Software, vol. 26, no. 5, 2015, pp. 1237-1250.

AUTHOR


Zibo Zhou received his BS in Tongda College of Nanjing University of Posts and Telecommunications, Yangzhou, P. R. China in 2017. Currently he is pursuing his MS degree in School of Software, Nanchang Hangkong University, Nanchang, P. R. China. His research interests include palmprint recognition, image processing and deep learning.


Qi Chen received his MS degree from Nanchang Hangkong University, Nanchang, P. R. China. His research interests include palmprint recognition, image processing, and computer vision.


Lu Leng received his Ph.D. degree from Southwest Jiaotong University, Chengdu, P. R. China, in 2012. He performed his post-doctoral research at Yonsei University, Seoul, South Korea, and Nanjing University of Aeronautics and Astronautics, Nanjing, P. R. China. He was a visiting scholar at West Virginia University, USA. Currently, he is an associate professor at Nanchang Hangkong University. He has published more than 60 international journal and conference papers and been granted several scholarships and funding projects for his academic research. He is the reviewer of several international journals and conferences. His research interests include biometric recognition, computer vision, and biometric template protection. Dr. Leng is a member of the Institute of Electrical and Electronics Engineers (IEEE), the Association for Computing Machinery (ACM), the China Society of Image and Graphics (CSIG), and the China Computer Federation (CCF).