Face Detection Systems

Marco Niño
May 20, 2021

Computers are remarkably good at recognizing patterns, in some cases even better than humans. In the 1960s, three pioneers, Woody Bledsoe, Helen Chan Wolf, and Charles Bisson, started working on a facial-recognition system they called the “man-machine” project, so named because a human had to establish the coordinates of the facial features before feeding them to the machine. Given that input, the machine would calculate a few characteristic distances, such as the width of the mouth and the eyes, then compare those distances against its records and return the closest ones as possible matches.

Later, in 1970, Takeo Kanade presented a face-matching system that could locate some anatomical features, such as the chin, and calculate distance ratios between other features without human input. Although his system was not always capable of accurate identification, it sparked interest in the subject, and later that decade (1977) Kanade published the first book on facial recognition technology.

The systems we know today, capable of real-time face detection, came to life in 2001 with the Viola–Jones object detection framework, created by Paul Viola and Michael Jones. By combining Haar-like features with the AdaBoost learning algorithm, they built the first real-time frontal-view face detection system. To this day, the Viola–Jones framework is still used in facial recognition systems (FRS), in applications ranging from user interfaces to teleconferencing.

We humans find it easy to recognize faces, so much so that we even see them in objects that shouldn’t have faces at all. For computers, on the other hand, this task is a complicated pattern-recognition problem, but as with any other algorithm, it can be broken down into a handy set of steps:

Step 1. Face detection: First, the system needs a camera (or another optical sensor) to determine whether what it is looking at is a human face (depending on the algorithm, this can be an individual’s face or the faces in a crowd). The computer usually has an easier time recognizing a face when the person is looking directly at the camera.

[Image: Some systems are better than others]

For a machine to recognize an image as a “human face”, it needs machine learning and deep neural networks trained on large databases of images of human faces captured at different angles and in different positions.

[Image: This was not in training…]

Step 2. Face Analysis: Now that the system knows it is looking at a human face, the software identifies facial landmarks such as the gap between the eyes, the nose, the mouth, etc. Each of these landmarks is considered a “nodal point” (each face can have up to 80 nodal points), and these points are used to distinguish each face in the database.

[Image: Chihuahua or Muffin?]

The system is also able to adjust the registered face in position, size, and scale to match the user’s face; this helps it recognize the person in front of it even if their expression changes or they move.
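The nodal-point idea in Step 2 can be illustrated with a toy example: given landmark coordinates (here, made-up pixel positions, not output from any real detector), the system measures characteristic distances between them. Ratios of distances are more useful than raw pixel values because they do not change with image scale.

```python
# Toy Step 2 sketch: facial landmarks as "nodal points" and the
# characteristic distances between them. Coordinates are invented
# for illustration only.
import numpy as np

landmarks = {
    "left_eye": np.array([120.0, 95.0]),
    "right_eye": np.array([180.0, 95.0]),
    "nose_tip": np.array([150.0, 140.0]),
    "mouth_center": np.array([150.0, 175.0]),
}

def distance(a, b):
    """Euclidean distance between two named nodal points."""
    return float(np.linalg.norm(landmarks[a] - landmarks[b]))

eye_gap = distance("left_eye", "right_eye")           # 60.0 pixels
nose_to_mouth = distance("nose_tip", "mouth_center")  # 35.0 pixels
# A ratio is scale-invariant: it stays the same if the image is resized
ratio = eye_gap / nose_to_mouth
```

A production system would obtain these coordinates from a landmark detector (e.g. a trained neural network) rather than a hand-written dictionary.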

Step 3. Data conversion: After the previous step, the system converts the user’s nodal points into a facial signature (essentially a mathematical formula, or in some cases a vector), which is similar to a fingerprint in other biometric systems; each person has a unique identifier inside the database, which is used in the final step.

[Image: In the end, we are all Math in the eyes of a computer]

Step 4. Match: Finally, with your “mathematical faceprint” in hand, the system searches its internal database for a match. The time this takes depends on the number of faces registered, how many databases the system has access to, and the efficiency of the search algorithm. If there is a match, the software returns the person’s stored information (which depends on the database).
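The matching in Step 4 can be sketched as a nearest-neighbor search over stored signatures. The database entries, names, and threshold below are hypothetical; the point is that “match” means “closest stored vector within some distance cutoff”.

```python
# Sketch of Step 4: match a probe signature against enrolled signatures
# by Euclidean distance, with a threshold for rejecting non-matches.
# Names, vectors, and the threshold are illustrative assumptions.
import numpy as np

database = {
    "alice": np.array([1.0, 0.58, 0.80]),
    "bob":   np.array([1.0, 0.71, 0.65]),
}

def best_match(probe, threshold=0.1):
    """Return the closest enrolled identity, or None if nothing is near enough."""
    name, dist = min(
        ((n, float(np.linalg.norm(probe - sig))) for n, sig in database.items()),
        key=lambda item: item[1])
    return name if dist <= threshold else None

best_match(np.array([1.0, 0.59, 0.79]))  # -> "alice"
best_match(np.array([1.0, 0.10, 0.10]))  # -> None (no one is close enough)
```

With millions of enrolled faces, a linear scan like this becomes the bottleneck, which is why the search algorithm’s efficiency matters, as noted above; large systems use approximate nearest-neighbor indexes instead.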

Now that you have a general understanding of how facial recognition works, you may be asking yourself: what uses can it have? These systems have a wide range of uses, from helping identify missing persons, to helping authorities apprehend criminals, to granting access to restricted areas inside a company, or simply powering applications that change the appearance of a face.

Facial recognition systems are tools, and like any other computer software or automated system, they are not inherently good or bad; the uses people give them determine the benefits they bring or the threats they pose, whether that is helping save people or other… questionable practices. For this reason, it is important to always make sure these systems’ developers have their moral compass well calibrated.
