Face Detection
Five Positive Use Cases for Facial Recognition
While negative headlines around facial recognition tend to dominate the media landscape, the technology is creating positive impacts on a daily basis, even if those stories are often drowned out by the noise. It is the mission of industry leaders in computer vision, biometrics and facial recognition to help the public see how this technology can solve a range of human problems.
The industry as a whole is also tasked with advocating for clear and sensible regulation while applying guiding principles to the design, development and distribution of the technologies it pursues. AI solutions are already solving real-world problems, with a special focus on deploying the technology for good.
In this eWEEK Data Points article, Dan Grimm, VP of Computer Vision and GM of SAFR, a RealNetworks company, uses his own industry information to describe five use cases on the socially beneficial side of facial recognition.
Data Point No. 1: Facial Recognition for School Safety
School security is a top priority for parents, teachers and communities, and ensuring a safe space is vitally important. It is difficult to monitor everyone coming and going, and school administrators need a way to streamline secure entry onto campus property.
K-12 schools are using facial recognition for secure access: only authorized individuals, such as teachers and staff, can gain entry to the building. This not only helps keep students safe but also makes it easier for parents and faculty to enter school grounds during off-peak hours.
Facial recognition is being used to alert staff when threats, concerns or strangers are present on school grounds. Any number of security responses can be configured for common if-this-then-that scenarios, including initiating building lockdowns and notifying law enforcement, when needed.
Data Point No. 2: Facial Recognition for Health Care
As our population grows, so does the need for more efficient healthcare; in busy physician offices there is simply no time for mistakes or delays. Facial recognition is transforming the healthcare industry, whether through AI-powered screening and diagnosis or secure access.
Healthcare professionals are using facial recognition technologies in some patient screening procedures. For example, the technology is being used to identify changes to facial features over time, which in some cases represent symptoms of illnesses that might otherwise require extensive tests to diagnose--or worse, go unnoticed.
Data Point No. 3: Facial Recognition for Disaster Response and Recovery
When first responders arrive on the scene of an emergency, they're looked to as calming forces amid the chaos. Every second is precious and could spell the difference between favorable and unfavorable outcomes.
A first responder outfitted with a facial recognition bodycam could quickly scan a disaster site for matches against a database of victims. By identifying victims by name on the spot, the technology enables first responders to deliver more efficient care, improve outcomes and bring faster peace of mind to family members awaiting news of their loved ones.
In critical-care situations, knowing the blood type of each person identified in a disaster zone could, in turn, save more lives. This application would require victims' family members to provide photos and blood type information so that emergency responders could scan the disaster area for the blood types needed.
Data Point No. 4: Facial Recognition for Assisting the Blind
In our media-driven world, it can be challenging for blind persons to gain access to information. Finding ways to translate visual information into aural cues to make data more easily accessible has the potential to be life changing.
Facial recognition apps tuned to facial expressions help blind persons read body language. For example, an app equipped with this technology lets a person “see” a smile by facing their mobile phone outward: when someone around them smiles, the phone vibrates. That is a transformative experience for someone who may never have seen a smile and otherwise has to work extra hard, using other senses, to tell whether the people nearby are smiling.
Another mobile app is geared toward greater situational awareness for the blind, announcing physical obstacles like a chair or a dog along the way, as well as reading exit signs and currency values when shopping. This not only enables blind persons to navigate their surroundings more efficiently, but also gives them greater control and confidence to go about everyday life without the usual hurdles.
Data Point No. 5: Facial Recognition for Missing Persons
From runaways to victims of abduction and child trafficking, tens of thousands of kids are believed to go missing every year. That statistic is unacceptable, especially in our digitally connected world. It is up to us, as technology entrepreneurs, to find new ways to work with local authorities to protect our most vulnerable demographic.
Facial recognition is already addressing the missing persons crisis in India. In New Delhi, police reportedly traced nearly 3,000 missing children within four days of launching a new facial recognition system. Using a custom database, the system matched previous images of missing kids against about 45,000 current images of children around the city.
Because children tend to change in appearance significantly as they mature, facial recognition technology has also been used with images of missing children to identify them years -- or even decades -- later. Parents and guardians provide local authorities with the last known photos they have of their children, and police match those against a missing persons database. Police can then search local shelters, homeless encampments and abandoned homes with this advanced technology, giving parents hope long after investigations have seemingly stalled.
Face Recognition with Python, in Under 25 Lines of Code
OpenCV uses machine learning algorithms to search for faces within a picture.
In a picture, there are thousands of small patterns/features that must be matched. The algorithms break the task of identifying the face into thousands of smaller, bite-sized tasks, each of which is easy to solve. These tasks are also called classifiers.
For something like a face, you might have 6,000 or more classifiers, all of which must match for a face to be detected (within error limits, of course).
To get around these heavy calculations, OpenCV uses cascades (in the dictionary sense: a waterfall or series of waterfalls).
The OpenCV cascade breaks the problem of detecting faces into multiple stages. For each block, it does a very rough and quick test.
If that passes, it does a slightly more detailed test, and so on. The algorithm may have 30-50 of these stages or cascades, and it will only detect a face if all stages pass.
The advantage is that the majority of the picture will return a negative during the first few stages, which means the algorithm won't waste time testing all 6,000 features on those regions.
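The early-rejection idea can be sketched in a few lines of plain Python. This is purely illustrative pseudologic, not OpenCV's actual implementation; region and stage_tests are hypothetical placeholders.
def passes_cascade(region, stage_tests):
    # Run the cheapest tests first and bail out at the first failing stage.
    for stage in stage_tests:      # e.g. 30-50 stages
        if not stage(region):
            return False           # most regions are rejected in the first few stages
    return True                    # only a region that passes every stage counts as a face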
The cascades themselves are just a bunch of XML files that contain OpenCV data used to detect objects. You initialize your code with the cascade you want: faces, eyes, hands, legs...
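For example, loading a different XML file gives you a different detector. A minimal sketch, assuming the pip opencv-python package (which exposes the bundled cascade files via cv2.data.haarcascades; for a source build, point at the data/haarcascades directory of the extracted sources instead):
import cv2
# Each XML file is one trained cascade; load whichever detector you need.
face_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
eye_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_eye.xml")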
Set up the test environment:
- Installing OpenCV on Ubuntu 18.04.1. The easiest way to install cv2 is via pip:
python3 -m pip install opencv-python
If you have Anaconda installed, you can install cv2 through conda instead and skip the pip installation:
conda install -c conda-forge opencv
conda install -c conda-forge/label/gcc7 opencv
conda install -c conda-forge/label/broken opencv
conda install -c conda-forge/label/cf201901 opencv
- Otherwise, if you hit "ModuleNotFoundError: No module named 'cv2'" when running "import cv2", you can build OpenCV from source as follows.
- Install build tools
sudo apt-get install build-essential cmake pkg-config
sudo apt-get install libjpeg-dev libpng-dev libtiff-dev
sudo apt-get install libavcodec-dev libavformat-dev libswscale-dev libv4l-dev libxvidcore-dev libx264-dev
sudo apt-get install libgtk-3-dev
sudo apt-get install libatlas-base-dev gfortran
sudo apt-get install python3-dev
sudo install -d /usr/local/src/opencv/build
cd /usr/local/src/opencv/
sudo wget https://github.com/opencv/opencv/archive/3.4.7.tar.gz
sudo tar xf 3.4.7.tar.gz
cd ..
sudo mkdir opencv_contrib
cd opencv_contrib
sudo wget https://github.com/opencv/opencv_contrib/archive/3.4.7.tar.gz
sudo tar xf 3.4.7.tar.gz
Build OpenCV from source and test the installation:
sudo apt-get install python3-venv
sudo python3 -m venv /usr/local/src/opencv_venv
source /usr/local/src/opencv_venv/bin/activate
You should see the following:
(opencv_venv) (base) jerry@jerry-Latitude-E6410:/usr/local/src$
sudo wget https://bootstrap.pypa.io/get-pip.py
sudo python3 get-pip.py
sudo pip install numpy
cd /usr/local/src/
sudo mkdir opencv_build
sudo mkdir opencv_3.4.7
cd opencv_build
sudo cmake -D CMAKE_BUILD_TYPE=RELEASE -D CMAKE_INSTALL_PREFIX=/usr/local -D INSTALL_PYTHON_EXAMPLES=ON -D OPENCV_EXTRA_MODULES_PATH=/usr/local/src/opencv_contrib/opencv_contrib-3.4.7/modules -D PYTHON_EXECUTABLE=/usr/bin/python3 -D BUILD_EXAMPLES=ON /usr/local/src/opencv/opencv-3.4.7
sudo make -j4
sudo make install
sudo ldconfig
$ python
>>> import cv2
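If the import succeeds, you can also confirm which build is active (for the source build above it should report 3.4.7):
>>> cv2.__version__
'3.4.7'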
face_detect_cv3.py:
import cv2
import sys
# Hard-code the arguments instead of passing them on the command line
sys.argv = [ "face_detect.py", "abba.png", "haarcascade_frontalface_default.xml" ]
# Get user supplied values
imagePath = sys.argv[1]
cascPath = sys.argv[2]
# Create the haar cascade
faceCascade = cv2.CascadeClassifier(cascPath)
# Read the image
image = cv2.imread(imagePath)
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
# Detect faces in the image
faces = faceCascade.detectMultiScale(
    gray,
    scaleFactor=1.1,   # how much the image is shrunk at each detection scale
    minNeighbors=5,    # how many neighboring detections are needed to keep a face
    minSize=(30, 30)   # ignore candidate faces smaller than 30x30 pixels
    # flags=cv2.CASCADE_SCALE_IMAGE
)
print("Found {0} faces!".format(len(faces)))
# Draw a rectangle around the faces
for (x, y, w, h) in faces:
    cv2.rectangle(image, (x, y), (x+w, y+h), (0, 255, 0), 2)
cv2.imshow("Faces found", image)
cv2.waitKey(0)
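Note that if you are running on a headless machine where cv2.imshow cannot open a window, a minimal alternative (assuming you just want the annotated image on disk; "faces_found.png" is an arbitrary name) is to save the result instead:
# Save the annotated image instead of displaying it
cv2.imwrite("faces_found.png", image)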
Test result: the script prints the number of faces found and displays the image with green rectangles around them.
Live camera:
import cv2
cap = cv2.VideoCapture(0)
# Create the haar cascade
faceCascade = cv2.CascadeClassifier("haarcascade_frontalface_default.xml")
while True:
    # Capture frame-by-frame
    ret, frame = cap.read()
    # Our operations on the frame come here
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Detect faces in the image
    faces = faceCascade.detectMultiScale(
        gray,
        scaleFactor=1.1,
        minNeighbors=5,
        minSize=(30, 30)
        # flags=cv2.CASCADE_SCALE_IMAGE
    )
    print("Found {0} faces!".format(len(faces)))
    # Draw a rectangle around the faces
    for (x, y, w, h) in faces:
        cv2.rectangle(frame, (x, y), (x+w, y+h), (0, 255, 0), 2)
    # Display the resulting frame
    cv2.imshow('frame', frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break
# When everything is done, release the capture
cap.release()
cv2.destroyAllWindows()
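The script above assumes the camera opens and every frame grab succeeds. A minimal guard skeleton (a sketch, not part of the original script; the detection code is omitted so only the checks stand out):
import cv2

cap = cv2.VideoCapture(0)
if not cap.isOpened():      # camera index 0 may not exist on every machine
    raise RuntimeError("Cannot open the camera")
while True:
    ret, frame = cap.read()
    if not ret:             # grab failed: stop cleanly instead of crashing later
        break
    cv2.imshow('frame', frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break
cap.release()
cv2.destroyAllWindows()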