Understanding Facial Recognition
Let’s explore the nooks and crannies of facial recognition.
In this post I’ll briefly cover topics to give you an idea about facial recognition, what it is, how it works, where it works, when it works and more.
We’ll explore a few burning topics in this post —
- What is facial recognition?
- Dawn of facial recognition — a brief look at the history behind
- Fundamentals of facial recognition
- Key elements of facial recognition
- Who uses facial recognition?
- Facial recognition software
- Problems with facial recognition systems
- Reasons to be concerned about facial recognition technology
- What lies ahead in the future for facial recognition?
What is Facial Recognition?
Facial recognition is a way of recognizing a human face through technology. A facial recognition system uses biometrics to map facial features from a photograph or video. It compares the information with a database of known faces to find a match. You've probably used this system to unlock your phone. Face ID, Windows Hello and Google's own face authentication systems directly implement this technology.
Emotion recognition from facial expressions is an exciting field of research with applications in safety, security, personal information and marketing. Facial recognition can help verify a person’s identity, but it also raises privacy issues. The technology is fairly ubiquitous these days, even if people are not that aware of it. Many people use it effortlessly to log onto their smartphones, and with advanced face detection software, surveillance operators are able to pick criminal faces out of crowds. What is less well known are the developments in computer vision and machine learning technologies behind face recognition — basically, what happens under the hood.
To recap from the previous article, machine learning is the science of getting computers to act without being explicitly programmed; in effect, it is getting computers to program themselves.
Computer vision is the field of computer science that focuses on creating digital systems that can process, analyze, and make sense of images or videos in the same way that humans do. The concept of computer vision is based on teaching computers to process an image at a pixel level and understand it.
But it’s not all bells and whistles, so to speak. Concerns about systems used to collect, track, or surveil a unique and exposed part of the human body — one that is, for many, directly associated with identity, privacy, safety, democracy, and security — raise important questions about the appropriate role of this technology in society. The underlying theory has been around for more than 50 years. Recent advancements have made facial recognition systems much, much easier to implement.
Discussions surrounding the technical and social impacts of facial recognition systems are complicated by the fact that there is no one standard system design. Some systems operate entirely on a user’s device, others may be accessed online via consumer applications, or are optimized to work in the cloud as a service, while others consist of custom systems designed, developed, tested, deployed, and operated for a specific purpose.
The Dawn Of Facial Recognition
The earliest pioneers of facial recognition were Woody Bledsoe, Helen Chan Wolf and Charles Bisson. In 1964 and 1965, Bledsoe, along with Wolf and Bisson, began work using computers to recognise the human face. Because the project received next to no funding, much of their work was never published.
Their early steps in pioneering this technology were hampered by the technological limitations of the era. In the 1970s, the research was revived by Goldstein, Harmon and Lesk, who extended the work and made it more accurate. However, their work still proved to be extremely labour intensive.
In the 1980s, we saw further progress with the development of facial recognition software as a viable biometric for businesses. In 1988, Sirovich and Kirby began applying linear algebra to the problem of facial recognition.
Then, in the 90s, Turk and Pentland carried on the work of Sirovich and Kirby by discovering how to detect faces within an image, which led to the earliest instances of automatic facial recognition. This significant breakthrough was yet again hindered by the technological limitations of the time.
And since then, the advancements in technology took off, to where we are now. This technology is right in the palm of our hands.
Fundamentals of Facial Recognition
Facial recognition is a computerized method of identifying people based solely on their facial features. These systems range from small cameras that can detect different emotional responses to pinpointing the identities of multiple people in a moving crowd.
A facial recognition system works by first detecting whether an image contains a face. If so, it then tries to recognize the face in one of two ways:
- During facial verification, the system attempts to verify the identity of the face. It does so by determining whether the face in the image potentially matches a specific face (identity) previously stored in the system.
- During facial identification, the system attempts to predict the identity of the face. It does so by determining whether the face in the image potentially matches any of the faces (identities) previously stored in the system.
A high-definition camera captures images of a person’s face and checks for specific facial landmarks, such as the distance between the eyes, width of the nose, and shape of cheekbones. The recognition system then compares these findings to faces within its database. The more images that are in the database, the more likely the system will be able to identify the face.
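The landmark measurements described above can be sketched in a few lines of Python. The coordinates below are hypothetical pixel values invented purely for illustration; a real system would obtain them from a face-detection step.

```python
import math

# Hypothetical landmark coordinates (in pixels) that a face-detection
# step might return; real systems extract dozens of such points.
landmarks = {
    "left_eye":  (120, 95),
    "right_eye": (180, 95),
    "nose_tip":  (150, 135),
    "mouth_left":  (130, 165),
    "mouth_right": (170, 165),
}

def distance(p, q):
    """Euclidean distance between two landmark points."""
    return math.hypot(p[0] - q[0], p[1] - q[1])

# Simple geometric measurements of the kind compared against a database.
eye_distance = distance(landmarks["left_eye"], landmarks["right_eye"])
mouth_width = distance(landmarks["mouth_left"], landmarks["mouth_right"])

print(eye_distance)  # 60.0
print(mouth_width)   # 40.0
```

In practice these raw distances would be normalized (for example, by overall face size) so that the same person photographed at different distances yields comparable measurements.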
Key elements of Facial Recognition
1) Face Detection
The facial recognition process starts with the human face and the pattern of facial features of the person to be identified. When we think of a human face, we probably also think of the very basic set of features, which are eyes, nose, and mouth. Similarly, facial recognition technology also needs to learn what a face is and how it looks.
You might be good at recognizing faces. You probably find it a cinch to identify the face of a family member, friend, or your teacher, perhaps. You’re familiar with their facial features — their eyes, nose, mouth — and how they come together.
The process starts with the human eyes, which are among the most accessible features to detect, and then proceeds to the eyebrows, nose, mouth, etc., calculating the width of the nose, the distance between the eyes, and the shape and size of the mouth. Once the facial region is found, the algorithm is trained on large datasets to improve its accuracy in detecting faces and their positions.
2) Feature Extraction
Once the face is detected, the machine learning algorithm is trained, with the help of computer vision algorithms, to detect facial landmarks (eyebrow corners, the gap between the eyes, the tip of the nose, mouth corners, etc.). Each landmark is treated as a nodal point, and each face has approximately 80 nodal points. These landmarks are the key to distinguishing each face present in the database.
There is a pattern involved: different faces have different dimensions, while similar faces have similar dimensions. Machine learning algorithms only understand numbers, so representing a face numerically is essential. This numerical representation of a “face” (or an element in the training set) is termed a feature vector. A feature vector comprises various numbers in a specific order.
As a simple example, we can map a “face” into a feature vector which can comprise various features like:
- Height of face (cm)
- Width of the face (cm)
- Average color of face (R, G, B)
- Width of lips (cm)
- Height of nose (cm)
Now the face is converted to a mathematical representation, and these facial features become numbers. This numerical code is known as a faceprint. Just as every person has a unique fingerprint, every person has a unique faceprint.
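As a toy illustration, the features listed above can be packed into a single ordered vector. The function name and every value here are made up for the example; real systems use far more numerous and more abstract features.

```python
# A toy "faceprint": the listed measurements packed into one ordered
# numeric vector. All values here are invented for illustration.
def make_faceprint(face_height_cm, face_width_cm, avg_color_rgb,
                   lip_width_cm, nose_height_cm):
    r, g, b = avg_color_rgb
    # Order matters: the same features must always occupy the same slots,
    # otherwise vectors from different faces cannot be compared.
    return [face_height_cm, face_width_cm, r, g, b,
            lip_width_cm, nose_height_cm]

vector = make_faceprint(23.1, 15.8, (255, 224, 189), 5.2, 4.4)
print(vector)  # [23.1, 15.8, 255, 224, 189, 5.2, 4.4]
```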
Machine learning performs two major functions in face recognition technology:
- Deriving the feature vector: it is difficult to manually list all of the features because there are so many. A machine learning algorithm can automatically derive many such features. For instance, a complex feature could be the ratio of the height of the nose to the width of the forehead.
- Matching algorithms: Once the feature vectors have been obtained, a Machine Learning algorithm needs to match a new image with the set of feature vectors present in the corpus.
After generating the unique vector code, it is compared against the faces in the database, which holds the information of all registered users. If the distance between the compared feature vectors is below a certain threshold, the feature-based classifier returns the id of the match found in the database, along with that person’s details. The matching process is a fundamental aspect of facial recognition systems: in simpler terms, it tries to match two or more faceprints to determine whether they belong to the same person.
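The threshold-based matching just described can be sketched as a nearest-neighbour search. The database entries, vectors, and threshold below are all illustrative values, not taken from any real system; a match is returned only when its distance falls below the threshold.

```python
import math

# Toy database of enrolled faceprints (person id -> feature vector).
database = {
    "alice": [23.1, 15.8, 5.2, 4.4],
    "bob":   [25.0, 17.2, 6.1, 5.0],
}

def euclidean(u, v):
    """Euclidean distance between two feature vectors."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

def identify(probe, db, threshold=1.0):
    """Return the id of the closest enrolled face, or None if no
    distance falls below the threshold."""
    best_id, best_dist = None, float("inf")
    for person_id, vec in db.items():
        d = euclidean(probe, vec)
        if d < best_dist:
            best_id, best_dist = person_id, d
    return best_id if best_dist < threshold else None

print(identify([23.0, 15.9, 5.1, 4.5], database))  # alice
print(identify([40.0, 30.0, 9.0, 8.0], database))  # None
```

Verification (the 1:1 case from earlier) is the same comparison restricted to a single claimed identity: compute the distance to that one stored vector and accept only if it is below the threshold.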
All forms of face matching raise serious digital rights concerns, including face identification, verification, tracking, and clustering. Lawmakers must address them all. Any face recognition system used for “tracking”, “clustering”, or “verification” of an unknown person can easily be used for “identification” as well. The underlying technology is often exactly the same.
Other steps include facial characterization. Facial characterization is an automated process that also begins with face detection, and then goes on to interpret, predict, and categorize the physical appearance of features on a face. These features can be relatively straightforward categories such as hair color or the presence of glasses. Facial characterization and analysis systems can also be used to observe facial expressions and attempt to link them to attributes such as emotional state, mental health, personality, attractiveness, etc.
Who uses facial recognition?
A lot of people and organizations use facial recognition — and in a lot of different places. Here’s a short run-down:
- Mobile phone makers in products. Apple first used facial recognition to unlock its iPhone X, and has continued with the technology with the iPhone XS. Face ID authenticates — it makes sure you’re you when you access your phone. Apple says the chance of a random face unlocking your phone is about one in 1 million.
- Colleges in the classroom. Facial recognition software can, in essence, take roll. If you decide to cut class, your professor could know. Don’t even think of sending your brainy roommate to take your test.
- Social media companies on websites. Facebook uses an algorithm to spot faces when you upload a photo to its platform. The social media company asks if you want to tag people in your photos. If you say yes, it creates a link to their profiles. Facebook can recognize faces with 98 percent accuracy.
- Businesses at entrances and restricted areas. Some companies have traded in security badges for facial recognition systems. Beyond security, it could be one way to get some face time with the boss.
- Retailers in stores. Retailers can combine surveillance cameras and facial recognition to scan the faces of shoppers. One goal: identifying suspicious characters and potential shoplifters.
- Airlines at departure gates. You might be accustomed to having an agent scan your boarding pass at the gate to board your flight. At least one airline scans your face.
- Marketers and advertisers in campaigns. Marketers often consider things like gender, age, and ethnicity when targeting groups for a product or idea. Facial recognition can be used to define those audiences even at something like a concert.
Law enforcement has long been interested in facial recognition. They’ve deployed it at airports, train stations, border crossings, and sporting events around the world, hoping to catch wanted criminals and prevent terrorist attacks. However, most agencies have been unwilling to disclose how effective facial recognition is at finding offenders and reducing crime.
Facial recognition databases play a significant role in law enforcement today. Law enforcement agencies routinely collect mugshots from those who have been arrested and compare them to local, state, and federal facial recognition databases. Agencies can sift through these mugshot databases to identify people in photos taken from a variety of sources: traffic cameras, social media, or photos that police officers have taken themselves.
Examples of the use of facial recognition today
Apple: Apple could be considered a pioneer in facial recognition. The tech giant has long allowed consumers to unlock their phones, log into apps, and make purchases just by showing their face to their smartphones and other devices.
Driving: Automakers are testing facial recognition technology to help cut down on car theft. Consider Project Mobil: Ford and Intel are testing a project in which a dashboard camera uses facial recognition to identify the primary driver of a car and, perhaps, other authorized drivers.
Banking: Banking giants such as HSBC and Chase already use Apple’s Face ID to let customers log into their mobile banking apps. Other financial institutions are testing facial recognition to allow customers to use their phone’s cameras to approve online purchases.
Even soft drinks: Coca-Cola has been a longtime user of facial recognition. For instance, the company uses the technology to reward customers for recycling at some of its vending machines in China.
Facial Recognition Software
Many renowned companies are constantly innovating and improving in order to develop face recognition software that is foolproof and dependable.
Deep Vision AI
Deep Vision AI is a front-runner company excelling in facial recognition software. The company owns proprietary advanced computer vision technology that can understand images and videos automatically. Deep Vision AI provides a plug-and-play platform to its users worldwide. Users receive real-time alerts and faster responses based on the analysis of camera streams through various AI-based modules.
The software is so flexible that it can be connected to any existing camera system or deployed through the cloud. At present, Deep Vision AI offers one of the best-performing solutions on the market, supporting real-time processing at 15+ streams per GPU.
Deep Vision AI is a certified partner for NVIDIA’s Metropolis, Dell Digital Cities, Amazon AWS, Microsoft, Red Hat, and others.
Amazon provides a cloud-based computer vision platform, Amazon Rekognition. This service makes it easy to add image and video analysis to various applications, and it uses highly scalable, proven deep learning technology. The user is not required to have any machine learning expertise to use this software.
The platform can be utilized to identify objects, text, people, activities, and scenes in images and videos. It can also detect any inappropriate content. The user gets a highly accurate facial analysis and facial search capabilities.
Many more facial recognition software products exist that we haven’t discussed in this post.
How to get started with using facial recognition?
Amazon’s AWS provides 10-minute tutorials and in-depth documentation with prescriptive guidance to help you get started using facial recognition, utilizing the Amazon Rekognition technology.
Check out this free course by AWS to understand how you, too, can implement facial recognition technologies in your own projects.
The Problems with Facial Recognition
Is it safe? Yes.
A common misconception is that humans are far better at recognising faces than machines are. However, numerous studies have shown that facial recognition systems, even earlier generations of them, were already capable of recognising faces with much higher accuracy than humans.
As previously discussed in this article, facial recognition offers an incredible amount of power and utility, which means it can also be abused, especially when it comes to invading people’s privacy. The technology isn’t flawless quite yet, either, as it still has problems with bias and inaccuracy.
Invasion of privacy is a big concern, as any company or government institution could potentially collect your facial data without your consent whenever you enter a public place. Despite the immense power of facial recognition, it can be stumped. A number of things can confuse the technology, including poor lighting, sunglasses, or masks. There are even clothes and glasses designed for the sole purpose of disrupting facial recognition systems.
Aging lowers its effectiveness: Studies have found that as people age, and their features change, facial recognition has an increasingly difficult time identifying them. Other studies have shown that facial recognition is less effective in identifying people of color and women.
Reasons to be concerned
Privacy matters. Privacy refers to any rights you have to control your personal information and how it’s used — and that can include your face print.
- Security. Your facial data can be collected and stored, often without your permission. It’s possible hackers could access and steal that data.
- Prevalence. Facial recognition technology is becoming more widespread. That means your facial signature could end up in a lot of places. You probably won’t know who has access to it.
- Safety. Facial recognition could lead to online harassment and stalking. How? For example, someone takes your picture on a subway or some other public place and uses facial recognition software to find out exactly who you are.
- Mistaken identity. Say, for instance, law enforcement uses facial recognition to try to identify someone who robbed a corner store. Facial recognition systems may not be 100 percent accurate. What if the police think the suspect is you?
- Basic freedoms. Government agencies and others could have the ability to track you. What you do and where you go might no longer be private. It could become impossible to remain anonymous.
What lies ahead for facial recognition?
Facial recognition technology continues to develop at pace and the uses of the technology are becoming more widespread. Facial recognition solutions are expected to be present in 1.3 billion devices by 2024. Powered by AI, facial recognition software in mobile phones is already being used by companies like Mastercard to authenticate payments and other high-end authentication tasks.
Taking things one step further, Google Nest Cam IQ watches over your property 24/7 and can detect people from a distance of 50m away. The system can recognise familiar faces and you can pre-set actions for those visitors such as opening a gate or front door.
While facial recognition is undoubtedly changing the world in which we live, that world is also changing the way facial recognition is being deployed.
The design, development, and use of facial recognition systems cannot be separated from our daily lives. These systems can make some aspects of life easier, and at the same time can amplify civil liberties and human rights concerns, including challenges of bias.
Face recognition is an emerging technology that can provide many benefits. Face recognition can save resources and time, and even generate new income streams, for companies that implement it right.
In case you’re wondering, I’m still going to use it on my iPhone.