Thanks to the organizers for providing the stereo camera module for testing. This project combines the stereo camera module, a LattePanda Delta 432, and an Intel NCS2 Neural Compute Stick running OpenVINO to build an automatic tracking device for persons of interest. Deployed in a hotspot area or on top of a vehicle, it automatically searches for designated faces (e.g. missing persons). The system compares the camera feed against the reference images in a target folder; whenever the similarity exceeds the configured threshold, it raises an alarm and displays the match.
The OpenVINO environment was set up, developed against, and test-run on the LattePanda Delta 432, as follows.

1. Setting up the development environment:
1) Install Microsoft Visual Studio 2019 with C++ and MSBuild (use the Community edition; select the ".NET desktop development", "Desktop development with C++" and "Universal Windows Platform development" workloads, and include MSBuild).
2) Install CMake 3.4 or higher, 64-bit.
3) Install Python 3.5 - 3.7, 64-bit (not needed if Anaconda is already installed).
4) Set the environment variables.
5) Install OpenVINO. Pay attention to the version: the newest release is not necessarily the best (newer versions are worth investigating when time permits). After installing the 2020.3 release, the bundled OpenCV library appeared to be missing files, and only installing OpenCV separately fixed it; the 2020.2 release has worked without problems so far.
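Step 4) above is normally handled by OpenVINO's setupvars.bat, which must be run in every new cmd session before building or running demos. A minimal sketch, assuming the default install location (adjust the path if you installed elsewhere):

```shell
:: Initialize the OpenVINO environment variables for this cmd session
"C:\Program Files (x86)\IntelSWTools\openvino\bin\setupvars.bat"
```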
After these steps, the test command demo_security_barrier_camera.bat -d CPU should run and display a sample image with the car's license plate recognized, confirming that the installation succeeded.

2. Usage notes:
1) Before running anything in cmd, right-click cmd and choose "Run as administrator"; otherwise some commands fail for lack of directory permissions.
2) Device names such as CPU, GPU and MYRIAD must be written in upper case on the command line. They are matched as parameters, and Python is case-sensitive, so a lower-case name will fail to match and cause errors.
3) Some downloads come from servers abroad, and large files often fail to download; switch to a domestic mirror such as the Tsinghua source and retry.
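For note 3), pip can be pointed at the Tsinghua mirror per install; for example (the package shown is just an illustration):

```shell
:: Install a package from the Tsinghua PyPI mirror instead of the default index
pip install -i https://pypi.tuna.tsinghua.edu.cn/simple opencv-python
```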
4) OpenVINO requires at least a 6th-generation Intel Core processor; CPUs that are too old will not work. The LattePanda Delta 432 used here runs an Intel 8th Gen Celeron N4100, which does not appear to be on the supported list either, so we ended up running inference on the NCS2 Neural Compute Stick instead.

3. Writing the program:
The program was adapted from OpenVINO's official demo code to fit the project requirements.
1) Program files:
face_detector.py: face detection
face_identifier.py: face comparison and identification
faces_database.py: handling of the reference face images
ie_module.py: model handling
landmarks_detector.py: facial landmark detection
face_recognition_demo.py: main program
2) Models used:
face-detection-adas-0001
face-reidentification-retail-0095
landmarks-regression-retail-0009
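These three models can be fetched with the Model Downloader bundled with OpenVINO; a sketch, assuming the default install path:

```shell
:: Download the three pretrained models from the Open Model Zoo
cd "C:\Program Files (x86)\IntelSWTools\openvino\deployment_tools\tools\model_downloader"
python downloader.py --name face-detection-adas-0001
python downloader.py --name landmarks-regression-retail-0009
python downloader.py --name face-reidentification-retail-0095
```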
3) Main program walkthrough:
The key file is the main program, face_recognition_demo.py. We set the alarm to trigger when the similarity exceeds the 70% threshold. The alarm itself can take many forms, e.g. an on-screen warning window or an alert e-mail, and can be rewritten as required:
import logging as log
import os.path as osp
import sys
import time
from argparse import ArgumentParser
import cv2
import numpy as np
from openvino.inference_engine import IENetwork
from ie_module import InferenceContext
from landmarks_detector import LandmarksDetector
from face_detector import FaceDetector
from faces_database import FacesDatabase
from face_identifier import FaceIdentifier
MATCH_ALGO = ['HUNGARIAN', 'MIN_DIST']
DEVICE_KINDS = ['CPU', 'GPU', 'FPGA', 'MYRIAD', 'HETERO', 'HDDL']

parser = ArgumentParser()
general = parser.add_argument_group('General')
general.add_argument('-i', '--input', metavar="PATH", default='0',
help="(optional) Path to the input video "
"('0' for the camera, default)")
general.add_argument('-o', '--output', metavar="PATH", default="",
help="(optional) Path to save the output video to")
general.add_argument('--no_show', action='store_true',
help="(optional) Do not display output")
general.add_argument('-tl', '--timelapse', action='store_true',
help="(optional) Auto-pause after each frame")
general.add_argument('-cw', '--crop_width', default=0, type=int,
help="(optional) Crop the input stream to this width "
"(default: no crop). Both -cw and -ch parameters "
"should be specified to use crop.")
general.add_argument('-ch', '--crop_height', default=0, type=int,
help="(optional) Crop the input stream to this height "
"(default: no crop). Both -cw and -ch parameters "
"should be specified to use crop.")
general.add_argument('--match_algo', default='HUNGARIAN', choices=MATCH_ALGO,
help="(optional) Algorithm for face matching (default: %(default)s)")
gallery = parser.add_argument_group('Faces database')
gallery.add_argument('-fg', metavar="PATH", required=True,
help="Path to the face images directory")
gallery.add_argument('--run_detector', action='store_true',
help="(optional) Use Face Detection model to find faces"
" on the face images, otherwise use full images.")
models = parser.add_argument_group('Models')
models.add_argument('-m_fd', metavar="PATH", default="", required=True,
help="Path to the Face Detection model XML file")
models.add_argument('-m_lm', metavar="PATH", default="", required=True,
help="Path to the Facial Landmarks Regression model XML file")
models.add_argument('-m_reid', metavar="PATH", default="", required=True,
help="Path to the Face Reidentification model XML file")
models.add_argument('-fd_iw', '--fd_input_width', default=0, type=int,
help="(optional) specify the input width of detection model "
"(default: use default input width of model). Both -fd_iw and -fd_ih parameters "
"should be specified for reshape.")
models.add_argument('-fd_ih', '--fd_input_height', default=0, type=int,
help="(optional) specify the input height of detection model "
"(default: use default input height of model). Both -fd_iw and -fd_ih parameters "
"should be specified for reshape.")
infer = parser.add_argument_group('Inference options')
infer.add_argument('-d_fd', default='CPU', choices=DEVICE_KINDS,
help="(optional) Target device for the "
"Face Detection model (default: %(default)s)")
infer.add_argument('-d_lm', default='CPU', choices=DEVICE_KINDS,
help="(optional) Target device for the "
"Facial Landmarks Regression model (default: %(default)s)")
infer.add_argument('-d_reid', default='CPU', choices=DEVICE_KINDS,
help="(optional) Target device for the "
"Face Reidentification model (default: %(default)s)")
infer.add_argument('-l', '--cpu_lib', metavar="PATH", default="",
help="(optional) For MKLDNN (CPU)-targeted custom layers, if any. "
"Path to a shared library with custom layers implementations")
infer.add_argument('-c', '--gpu_lib', metavar="PATH", default="",
help="(optional) For clDNN (GPU)-targeted custom layers, if any. "
"Path to the XML file with descriptions of the kernels")
infer.add_argument('-v', '--verbose', action='store_true',
help="(optional) Be more verbose")
infer.add_argument('-pc', '--perf_stats', action='store_true',
help="(optional) Output detailed per-layer performance stats")
infer.add_argument('-t_fd', metavar='[0..1]', type=float, default=0.6,
help="(optional) Probability threshold for face detections "
"(default: %(default)s)")
infer.add_argument('-t_id', metavar='[0..1]', type=float, default=0.3,
help="(optional) Cosine distance threshold between two vectors "
"for face identification (default: %(default)s)")
infer.add_argument('-exp_r_fd', metavar='NUMBER', type=float, default=1.15,
help="(optional) Scaling ratio for bboxes passed to face recognition "
"(default: %(default)s)")
infer.add_argument('--allow_grow', action='store_true',
help="(optional) Allow to grow faces gallery and to dump on disk. "
"Available only if --no_show option is off.")
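Put together, the options above give an invocation like the following; the gallery and model paths are illustrative, and MYRIAD (upper case, per usage note 2) selects the NCS2:

```shell
python face_recognition_demo.py -i 0 -fg .\gallery ^
    -m_fd .\models\face-detection-adas-0001.xml ^
    -m_lm .\models\landmarks-regression-retail-0009.xml ^
    -m_reid .\models\face-reidentification-retail-0095.xml ^
    -d_fd MYRIAD -d_lm MYRIAD -d_reid MYRIAD
```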
assert (args.fd_input_height and args.fd_input_width) or \
       (args.fd_input_height == 0 and args.fd_input_width == 0), \
    "Both -fd_iw and -fd_ih parameters should be specified for reshape"
if args.fd_input_height and args.fd_input_width:
    face_detector_net.reshape({"data": [1, 3, args.fd_input_height, args.fd_input_width]})
landmarks_net = self.load_model(args.m_lm)
face_reid_net = self.load_model(args.m_reid)
log.info("Building faces database using images from '%s'" % (args.fg))
self.faces_database = FacesDatabase(args.fg, self.face_identifier,
self.landmarks_detector,
self.face_detector if args.run_detector else None, args.no_show)
self.face_identifier.set_faces_database(self.faces_database)
log.info("Database is built, registered %s identities" %
(len(self.faces_database)))
self.allow_grow = args.allow_grow and not args.no_show
def load_model(self, model_path):
    model_path = osp.abspath(model_path)
    model_description_path = model_path
    model_weights_path = osp.splitext(model_path)[0] + ".bin"
    log.info("Loading the model from '%s'" % (model_description_path))
    assert osp.isfile(model_description_path), \
        "Model description is not found at '%s'" % (model_description_path)
    assert osp.isfile(model_weights_path), \
        "Model weights are not found at '%s'" % (model_weights_path)
    model = IENetwork(model_description_path, model_weights_path)
    log.info("Model is loaded")
    return model
def process(self, frame):
    assert len(frame.shape) == 3, \
        "Expected input frame in (H, W, C) format"
    assert frame.shape[2] in [3, 4], \
        "Expected BGR or BGRA input"
    orig_image = frame.copy()
    self.face_detector.start_async(frame)
    rois = self.face_detector.get_roi_proposals(frame)
    if self.QUEUE_SIZE < len(rois):
        log.warning("Too many faces for processing."
                    " Will be processed only %s of %s." %
                    (self.QUEUE_SIZE, len(rois)))
        rois = rois[:self.QUEUE_SIZE]
    self.landmarks_detector.start_async(frame, rois)
    landmarks = self.landmarks_detector.get_landmarks()
    self.face_identifier.start_async(frame, rois, landmarks)
    face_identities, unknowns = self.face_identifier.get_matches()
    if self.allow_grow and len(unknowns) > 0:
        for i in unknowns:
            # Skip detections touching the image border, to avoid saving half-cropped faces
            if rois[i].position[0] == 0.0 or rois[i].position[1] == 0.0 or \
                    (rois[i].position[0] + rois[i].size[0] > orig_image.shape[1]) or \
                    (rois[i].position[1] + rois[i].size[1] > orig_image.shape[0]):
                continue
            crop = orig_image[int(rois[i].position[1]):int(rois[i].position[1] + rois[i].size[1]),
                              int(rois[i].position[0]):int(rois[i].position[0] + rois[i].size[0])]
            # Saving unknown faces into the gallery (from the original demo) is disabled here:
            # name = self.faces_database.ask_to_save(crop)
            # if name:
            #     id = self.faces_database.dump_faces(crop, face_identities[i].descriptor, name)
            #     face_identities[i].id = id
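The alarm logic itself is not shown in the excerpt above; a minimal sketch of the 70% threshold check follows. The function names here are hypothetical (not part of the demo), and the window/e-mail actions are only indicated in comments. Note that the demo's -t_id option is a cosine distance, so its default of 0.3 corresponds to roughly 70% similarity:

```python
import logging as log

ALARM_THRESHOLD = 0.7  # similarity above this value triggers the alarm


def should_alarm(distance, threshold=ALARM_THRESHOLD):
    """Convert the demo's cosine distance into a similarity score
    and compare it against the alarm threshold."""
    similarity = 1.0 - distance
    return similarity > threshold


def raise_alarm(identity_label, similarity):
    # Placeholder alarm action: log a warning. This could instead pop up
    # a window (cv2.imshow) or send an alert e-mail (smtplib).
    log.warning("ALARM: matched '%s' with similarity %.1f%%",
                identity_label, similarity * 100)


# Example: a match at cosine distance 0.25 -> similarity 0.75 -> alarm
if should_alarm(0.25):
    raise_alarm("missing_person_01", 1.0 - 0.25)
```

Hooking this into the demo means calling should_alarm() on each identity returned by get_matches() and dispatching the alarm of your choice.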