In this series, “Portrait Parlé”, Alice delves into the origin of Alphonse Bertillon’s invention of the mug shot and the development of his anthropometric measurement system for documenting an individual’s identity. She extends this exploration into the present day, looking at how early photographic technology has evolved into modern surveillance and the legal privacy issues facing our society today.
Figure 1.1. Basic Emotions (BEs)
Six basic human emotions, i.e., happiness, surprise, sadness, anger, disgust, and fear, are proposed as the standard categories in FER (Facial Expression Recognition); FER-related datasets are generally labelled with these six BEs.
Figure 1.2. Multi-view face data
A multi-view face detection system locates head positions and indicates the direction of each face in 3-D space over a multi-camera surveillance system. To locate 3-D head positions, conventional methods rely on face detection in 2-D images and project the detected face regions back into 3-D space to establish correspondence.
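The back-projection step described above can be sketched as follows. Given a detected face center in pixel coordinates and an assumed depth, the pixel is lifted through the inverse of the camera intrinsic matrix to a 3-D point in the camera frame. The intrinsic values below are illustrative placeholders, not parameters from any particular system.

```python
import numpy as np

def backproject_center(u, v, depth, K):
    """Back-project a 2-D face-detection center (u, v) at an assumed
    depth (in metres) into a 3-D point in the camera frame."""
    K_inv = np.linalg.inv(K)
    ray = K_inv @ np.array([u, v, 1.0])  # homogeneous pixel -> viewing ray
    return ray * depth                   # scale so the z-component equals depth

# Hypothetical intrinsics: 800 px focal length, principal point (320, 240).
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
point = backproject_center(320, 240, 2.0, K)  # face at the image center
```

With multiple cameras, intersecting (or triangulating) such rays removes the need for an assumed depth; the single-camera version above is the minimal building block.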
Figure 2. Lattice Point Subregions
Each face in the database is warped into a normalized coordinate frame using the hand-labeled locations of both eyes and the midpoint of the mouth. A 7x3 lattice is placed on the normalized face, and a 9x15-pixel subregion is extracted around every lattice point, resulting in a total of 21 subregions.
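The lattice extraction can be sketched as below. The exact lattice placement used in the original system is not specified here, so this sketch simply spaces the points evenly inside the image; the orientation (7 columns by 3 rows) is also an assumption.

```python
import numpy as np

def extract_subregions(face, rows=3, cols=7, patch_h=15, patch_w=9):
    """Extract fixed-size patches around the points of a rows x cols
    lattice placed on a normalized face image (evenly spaced lattice
    is an assumption of this sketch)."""
    h, w = face.shape
    # Lattice point coordinates, kept far enough from the border
    # that every patch fits inside the image.
    ys = np.linspace(patch_h // 2, h - patch_h // 2 - 1, rows).astype(int)
    xs = np.linspace(patch_w // 2, w - patch_w // 2 - 1, cols).astype(int)
    patches = []
    for y in ys:
        for x in xs:
            patches.append(face[y - patch_h // 2: y + patch_h // 2 + 1,
                                x - patch_w // 2: x + patch_w // 2 + 1])
    return patches  # 21 patches, each 15 rows x 9 columns

patches = extract_subregions(np.zeros((60, 40)))
```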
Figure 3. Image Preprocessing
Face detection has developed into an independent field. Image preprocessing is an essential pre-step in FER systems, with the purpose of localizing and extracting the face region. Scale and grayscale normalization standardize the size and color of input images, reducing computational complexity while preserving the key features of the face.
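A minimal sketch of the normalization step described above, using only numpy: luminance-weighted grayscale conversion followed by nearest-neighbour resizing to a fixed resolution. The 48x48 target size is an assumption (a common choice in FER-style datasets), not a requirement of any particular system.

```python
import numpy as np

def preprocess(image, size=(48, 48)):
    """Grayscale and scale normalization for an RGB face crop
    (H, W, 3). Returns a size[0] x size[1] grayscale array."""
    # Luminance-weighted grayscale conversion (ITU-R BT.601 weights).
    gray = image @ np.array([0.299, 0.587, 0.114])
    # Nearest-neighbour resize to the fixed target resolution.
    h, w = gray.shape
    rows = (np.arange(size[0]) * h / size[0]).astype(int)
    cols = (np.arange(size[1]) * w / size[1]).astype(int)
    return gray[np.ix_(rows, cols)]

normalized = preprocess(np.ones((96, 64, 3)))
```

Production pipelines would typically use a library resampler (e.g. area or bilinear interpolation) instead of nearest-neighbour, but the normalization intent is the same.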
Figure 4. Facial Landmarks
Facial landmarks are visually salient points in the facial area, such as the alae of the nose, the ends of the eyebrows, and the corners of the mouth. The locations of the FLs around the facial components and contour capture facial deformations due to head movements and facial expressions. Point-to-point correspondences of facial landmarks can establish a feature vector for a human face.
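One simple way to turn corresponding landmark points into a feature vector is sketched below: the points are normalized for translation and scale, then flattened. This is a sketch only; real systems typically also normalize for rotation (e.g. via Procrustes alignment).

```python
import numpy as np

def landmark_feature_vector(landmarks):
    """Build a feature vector from an (N, 2) list of corresponding
    facial landmark (x, y) coordinates."""
    pts = np.asarray(landmarks, dtype=float)
    pts = pts - pts.mean(axis=0)       # remove translation
    scale = np.linalg.norm(pts)        # overall shape size
    return (pts / scale).ravel()       # unit-norm (2N,) feature vector

# Three illustrative landmarks (nose alae and a mouth corner).
vec = landmark_feature_vector([[0, 0], [2, 0], [1, 2]])
```

Because every face is described by the same ordered set of points, such vectors are directly comparable across images.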
Figure 5. Shape Model
To define a shape model, each face is annotated with a fixed number of points that define the key facial features and represent the shape of the face in the image. Typically, points are placed around the main facial features (eyes, nose, mouth and eyebrows) together with points that define the boundary of the face.
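From a set of such annotated faces, a point-distribution shape model can be built as sketched below: stack the flattened point coordinates, compute the mean shape, and take the leading principal modes of variation via SVD. This is a minimal sketch; a full active-shape-model pipeline would first align the training shapes with Procrustes analysis.

```python
import numpy as np

def build_shape_model(shapes, n_modes=2):
    """Build a simple point-distribution model.

    shapes: sequence of flattened (2N,) landmark-coordinate vectors,
            one per annotated face, in corresponding point order.
    Returns (mean_shape, modes) where modes holds the n_modes leading
    principal directions of shape variation."""
    X = np.asarray(shapes, dtype=float)          # (num_faces, 2N)
    mean = X.mean(axis=0)
    centered = X - mean
    # SVD of the centered data gives the principal shape modes.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return mean, vt[:n_modes]

# Three toy shapes of three points each (x1, y1, x2, y2, x3, y3).
mean, modes = build_shape_model(
    [[0, 0, 1.0, 0, 0, 1],
     [0, 0, 1.1, 0, 0, 1],
     [0, 0, 0.9, 0, 0, 1]], n_modes=1)
```

New shapes can then be approximated as the mean plus a small weighted sum of the modes, which is what makes the model useful for fitting.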