Intelligent Virtual Lipstick Trial Makeup Based on OpenCV and Dlib



Abstract. To allow consumers to try different lipstick products in a reusable, low-cost, and hygienic manner, an intelligent virtual lipstick trial algorithm is proposed with the assistance of Dlib and OpenCV. In the proposed method, face detection is performed with the pre-trained dlib.get_frontal_face_detector model in the Dlib library, and facial feature points are then extracted from each video frame using a pre-trained landmark model. Once the lips and other important facial regions are recognized, they are filled with the target lipstick color, adjusted to the lighting environment so that the key characteristics of the lipstick are highlighted and displayed reliably on a smartphone screen. The whole process is simple, quick, and convenient.

Keywords: Lipstick makeup · Virtual reality · OpenCV · Dlib

1 Introduction

With the worldwide improvement in living standards, consumption of cosmetics has grown considerably. In the traditional brick-and-mortar sales model, consumers face hygiene problems when trying makeup products, and business owners bear rising sales costs. Confronting the global epidemic of 2020, many major e-commerce companies launched the new business model of online live streaming with goods, providing consumers an unprecedented shopping experience. The cosmetics sector benefited greatly under this model, yet its consumers can only judge makeup effects visually online, without actually trying the products. To overcome this dilemma, online virtual makeup methods have been proposed. Built on mature face recognition and image processing technology, such shopping experiences will inevitably become a hot topic in the field of computer graphics.

To the best of our knowledge, there is little research on online virtual makeup, and most of it is image-based. In 2007, Tong et al. [1] proposed a virtual makeup trial method based on images of the same person before and after makeup, using steps of deformation, segmentation, and repair; the makeup was preserved completely, but the process was cumbersome. In 2009, Guo et al. [2] proposed an improvement in which a single reference makeup image is used for transfer, weakening the precondition constraints. However, this method uses the Poisson equation to fuse the highlights and shadows of the image, and due to the lack of continuity in the solution of the Poisson equation, the visual effect is still flawed. In 2010, Zhen et al. [3] proposed a digital face makeup technology based on sample pictures, which lacks automation in makeup transfer and has a complex working mechanism. In 2013, Du et al. [4] proposed a multi-sample makeup transfer technology to transfer different sample makeups to the same face, using image fusion

Y. Feng et al.
© The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2021. Q. Liu et al. (Eds.): CENet 2020, AISC 1274, pp. 121–127, 2021. https://doi.org/10.1007/978-981-15-8462-6_14