Abstract: The hairline is an important feature of the human head, and its extraction has significant research and application value in face perception systems, ergonomics, plastic surgery, etc. A direct hairline extraction method based on a 3D color point cloud of the human head is proposed. First, the point cloud is transformed into a face coordinate system built from human facial features. Second, the boundary points of the dark head regions are extracted by layering and sorting, exploiting the abrupt change of RGB values near the hairline. Finally, noise points among the boundary points are filtered out according to prior knowledge of the human face, and the hairline is fitted from the de-noised boundary points. Experiments on actual 3D color point clouds of human heads demonstrate the effectiveness of the proposed method.
Key words: hairline / layering / sorting / color point cloud
Abstract: As the hairline is an important feature of the human head, hairline extraction has great research significance and wide applications, such as face perception systems, plastic surgery, 3D film and television, face-lifting games, and wig customization. With the development of 3D point cloud acquisition technology, the study of three-dimensional (3D) hairline extraction, which allows the characteristics of the hairline to be analyzed qualitatively and quantitatively, has gradually become a research hotspot. Based on the 3D color point cloud of the human head, a direct 3D hairline extraction method is proposed. Firstly, the point cloud is transformed into a face coordinate system built on the basis of human facial features. Secondly, the dark parts of the head, including the eyeballs, eyebrows and hair, are extracted using a gray threshold T1, which separates hair color from skin color and is calculated with the Otsu algorithm. Thirdly, the boundary points of the dark parts are picked out: the dark parts are layered by Y value, the points within each layer are sorted by X value, and for each layer the difference d_{j,j+1} between the X coordinates of consecutive points p_j and p_{j+1} is computed; the two points are selected as boundary points if this difference exceeds a threshold T2. Visiting all layers in this way yields the complete set of boundary points. Fourthly, the 3D hairline points are obtained by filtering out noise points: according to the prior knowledge that, at the same height of the face, the eyeballs and eyebrows lie in front of the hairline, the boundary points of the eyeballs and eyebrows are deleted, and the remaining points are the 3D hairline points. Finally, the 3D hairline points are fitted to obtain the 3D hairline curve.
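The layer-and-sort boundary search described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function name, the fixed layer height `layer_h`, and the X-gap threshold `t2` are assumptions for the example; the paper only specifies that layers are formed by Y value, points are sorted by X, and consecutive points whose X gap exceeds T2 are kept as boundary points.

```python
import numpy as np

def extract_boundary_points(dark_pts, layer_h=2.0, t2=10.0):
    """Sketch of the layer-and-sort boundary search.

    dark_pts : (N, 3) array of dark-part points (X, Y, Z), already in
               the face coordinate system. layer_h (layer height) and
               t2 (X-gap threshold T2) are illustrative values.
    """
    boundary = []
    # Assign each point to a horizontal layer by its Y value.
    layer_ids = np.floor(dark_pts[:, 1] / layer_h).astype(int)
    for lid in np.unique(layer_ids):
        layer = dark_pts[layer_ids == lid]
        layer = layer[np.argsort(layer[:, 0])]  # sort by X within the layer
        # A large X gap between consecutive points marks a boundary pair.
        gaps = np.diff(layer[:, 0])
        for j in np.flatnonzero(gaps > t2):
            boundary.append(layer[j])
            boundary.append(layer[j + 1])
    return np.asarray(boundary)
```

On a layer whose points form two separated X clusters, the two points flanking the gap are returned as boundary points, which is the behavior the abstract describes for each layer.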
In order to speed up the fitting procedure, the hairline points are first simplified with a bounding-box method that largely preserves the shape of the hairline, and the simplified points are then fitted with a cubic B-spline curve fitting algorithm. Several actual 3D color point clouds of human heads are used to extract 3D hairlines. The experimental results show that the proposed method is feasible and effective. Moreover, compared with 2D hairline extraction algorithms, it recovers more information about the hairline.
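The simplify-then-fit stage can be sketched as below, assuming a simple interpretation of the bounding-box method (partition space into cubic cells and keep one centroid per occupied cell) and using SciPy's `splprep`/`splev` with `k=3` as one possible cubic B-spline implementation; the cell size `box`, the sample count `n_samples`, and the ordering of points by X before fitting are all assumptions of this sketch, not details from the paper.

```python
import numpy as np
from scipy.interpolate import splprep, splev

def simplify_and_fit(hairline_pts, box=5.0, n_samples=100):
    """Bounding-box point reduction followed by cubic B-spline fitting.

    hairline_pts : (N, 3) array of de-noised hairline points.
    box          : assumed edge length of the cubic simplification cells.
    n_samples    : number of points sampled on the fitted curve.
    """
    # Bounding-box simplification: keep the centroid of each occupied cell.
    cell = np.floor(hairline_pts / box).astype(int)
    _, inv = np.unique(cell, axis=0, return_inverse=True)
    reduced = np.array([hairline_pts[inv == i].mean(axis=0)
                        for i in range(inv.max() + 1)])
    # Order the reduced points (here simply by X) before fitting.
    reduced = reduced[np.argsort(reduced[:, 0])]
    # Cubic B-spline fit (k=3); s=0 interpolates the reduced points.
    tck, _ = splprep(reduced.T, k=3, s=0)
    u = np.linspace(0.0, 1.0, n_samples)
    curve = np.array(splev(u, tck)).T  # (n_samples, 3) curve points
    return reduced, curve
```

Fitting the reduced set rather than the raw points is what speeds up the procedure: the B-spline solve scales with the number of input points, while the occupied-cell centroids still trace the hairline's overall shape.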