Enable Face and Face Attribute Recognition
FindFace Multi allows you to recognize human faces and face attributes. Depending on your needs, you can enable recognition of such face attributes as age, gender, emotions, beard, glasses, medical masks, head position, eye state, or liveness.
Face and face attribute recognition can be automatically enabled and configured during the FindFace Multi installation. This section describes how to enable face and face attribute recognition if this step was skipped during installation.
To enable face and face attribute recognition, do the following:
Specify the neural network models for face detection in the `/opt/findface-multi/configs/findface-extraction-api/findface-extraction-api.yaml` configuration file.

Important: Be sure to choose the right acceleration type for each model, matching the acceleration type of `findface-extraction-api`: CPU or GPU. Be aware that `findface-extraction-api` on CPU can work only with CPU models, while `findface-extraction-api` on GPU supports both CPU and GPU models.

Open the `findface-extraction-api.yaml` configuration file:

```
sudo vi /opt/findface-multi/configs/findface-extraction-api/findface-extraction-api.yaml
```
Specify the face detector model in the `detectors` → `models` section by pasting the following code:

GPU:

```yaml
detectors:
  ...
  models:
    face_jasmine:
      aliases:
        - face
        - nnd
        - cheetah
      model: detector/facedet.kali.005.gpu.fnk
      options:
        min_object_size: 32
        resolutions:
          - 2048x2048
  ...
```

CPU:

```yaml
detectors:
  ...
  models:
    face_jasmine:
      aliases:
        - face
        - nnd
        - cheetah
      model: detector/facedet.jasmine_fast.004.cpu.fnk
      options:
        min_object_size: 32
        resolutions:
          - 2048x2048
  ...
```
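Mixing CPU and GPU model files in one configuration is a common source of startup errors. As a quick sanity check, you can count the acceleration suffixes in the file. The sketch below is illustrative, not part of the product; the embedded here-doc stands in for your real `findface-extraction-api.yaml`:

```shell
# Sketch: verify that all .fnk model paths in the config share one acceleration
# suffix. The here-doc is a stand-in for the real
# /opt/findface-multi/configs/findface-extraction-api/findface-extraction-api.yaml.
CFG=$(mktemp)
cat > "$CFG" <<'EOF'
model: detector/facedet.jasmine_fast.004.cpu.fnk
base_normalizer: facenorm/crop2x.v2_maxsize400.cpu.fnk
EOF
gpu=$(grep -c '\.gpu\.fnk' "$CFG" || true)
cpu=$(grep -c '\.cpu\.fnk' "$CFG" || true)
if [ "$gpu" -gt 0 ] && [ "$cpu" -gt 0 ]; then
  echo "WARNING: mixed acceleration types ($cpu CPU, $gpu GPU)"
else
  echo "OK: $cpu CPU, $gpu GPU model paths"
fi
rm -f "$CFG"
```

Point `CFG` at the real configuration file on your host to check an actual installation.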
In the `objects` → `face` section, specify `quality_attribute: face_quality` and `base_normalizer: facenorm/crop2x.v2_maxsize400.gpu.fnk` or `base_normalizer: facenorm/crop2x.v2_maxsize400.cpu.fnk`, depending on your acceleration type:

GPU:

```yaml
objects:
  ...
  face:
    base_normalizer: facenorm/crop2x.v2_maxsize400.gpu.fnk
    quality_attribute: face_quality
  ...
```

CPU:

```yaml
objects:
  ...
  face:
    base_normalizer: facenorm/crop2x.v2_maxsize400.cpu.fnk
    quality_attribute: face_quality
  ...
```
Specify the face normalizer models in the `normalizers` section:

GPU:

```yaml
normalizers:
  ...
  models:
    crop1x:
      model: facenorm/crop1x.v2_maxsize400.gpu.fnk
    crop2x:
      model: facenorm/crop2x.v2_maxsize400.gpu.fnk
    cropbbox:
      model: facenorm/cropbbox.v2.gpu.fnk
    multicrop_full_center:
      model: ''
    multicrop_full_crop2x:
      model: facenorm/facenorm.multicrop_full_crop2x_size400.gpu.fnk
    norm200:
      model: facenorm/bee.v3.gpu.fnk
  ...
```

CPU:

```yaml
normalizers:
  ...
  models:
    crop1x:
      model: facenorm/crop1x.v2_maxsize400.cpu.fnk
    crop2x:
      model: facenorm/crop2x.v2_maxsize400.cpu.fnk
    cropbbox:
      model: facenorm/cropbbox.v2.cpu.fnk
    multicrop_full_center:
      model: ''
    multicrop_full_crop2x:
      model: facenorm/facenorm.multicrop_full_crop2x_size400.cpu.fnk
    norm200:
      model: facenorm/bee.v3.cpu.fnk
  ...
```
Specify the extraction models in the `extractors` → `models` section, subject to the extractors you want to enable:

Important: The `face_liveness` extraction model `faceattr/faceattr.liveness_web.v1` is enabled by default. Do not disable it if you use authentication by face.

GPU:

```yaml
extractors:
  ...
  models:
    face_age:
      default:
        model: faceattr/faceattr.age.v3.gpu.fnk
    face_beard:
      default:
        model: faceattr/beard.v0.gpu.fnk
    face_beard4:
      default:
        model: ''
    face_countries47:
      default:
        model: ''
    face_emben:
      default:
        model: face/nectarine_l_320.gpu.fnk
    face_emotions:
      default:
        model: faceattr/emotions.v1.gpu.fnk
    face_eyes_attrs:
      default:
        model: faceattr/faceattr.eyes_attrs.v0.gpu.fnk
    face_eyes_openness:
      default:
        model: ''
    face_gender:
      default:
        model: faceattr/faceattr.gender.v3.gpu.fnk
    face_glasses3:
      default:
        model: ''
    face_glasses4:
      default:
        model: faceattr/faceattr.glasses4.v0.gpu.fnk
    face_hair:
      default:
        model: ''
    face_headpose:
      default:
        model: faceattr/headpose.v3.gpu.fnk
    face_headwear:
      default:
        model: ''
    face_highlight:
      default:
        model: ''
    face_liveness:
      default:
        model: faceattr/faceattr.liveness_web.v1.gpu.fnk
    face_luminance_overexposure:
      default:
        model: ''
    face_luminance_underexposure:
      default:
        model: ''
    face_luminance_uniformity:
      default:
        model: ''
    face_medmask3:
      default:
        model: faceattr/medmask3.v2.gpu.fnk
    face_medmask4:
      default:
        model: ''
    face_mouth_attrs:
      default:
        model: ''
    face_quality:
      default:
        model: faceattr/faceattr.quality.v5.gpu.fnk
    face_scar:
      default:
        model: ''
    face_sharpness:
      default:
        model: ''
    face_tattoo:
      default:
        model: ''
    face_validity:
      default:
        model: ''
```

CPU:

```yaml
extractors:
  ...
  models:
    face_age:
      default:
        model: faceattr/faceattr.age.v3.cpu.fnk
    face_beard:
      default:
        model: faceattr/beard.v0.cpu.fnk
    face_beard4:
      default:
        model: ''
    face_countries47:
      default:
        model: ''
    face_emben:
      default:
        model: face/nectarine_l_320.cpu.fnk
    face_emotions:
      default:
        model: faceattr/emotions.v1.cpu.fnk
    face_eyes_attrs:
      default:
        model: faceattr/faceattr.eyes_attrs.v0.cpu.fnk
    face_eyes_openness:
      default:
        model: ''
    face_gender:
      default:
        model: faceattr/faceattr.gender.v3.cpu.fnk
    face_glasses3:
      default:
        model: ''
    face_glasses4:
      default:
        model: faceattr/faceattr.glasses4.v0.cpu.fnk
    face_hair:
      default:
        model: ''
    face_headpose:
      default:
        model: faceattr/headpose.v3.cpu.fnk
    face_headwear:
      default:
        model: ''
    face_highlight:
      default:
        model: ''
    face_liveness:
      default:
        model: faceattr/faceattr.liveness_web.v1.cpu.fnk
    face_luminance_overexposure:
      default:
        model: ''
    face_luminance_underexposure:
      default:
        model: ''
    face_luminance_uniformity:
      default:
        model: ''
    face_medmask3:
      default:
        model: faceattr/medmask3.v2.cpu.fnk
    face_medmask4:
      default:
        model: ''
    face_mouth_attrs:
      default:
        model: ''
    face_quality:
      default:
        model: faceattr/faceattr.quality.v5.cpu.fnk
    face_scar:
      default:
        model: ''
    face_sharpness:
      default:
        model: ''
    face_tattoo:
      default:
        model: ''
    face_validity:
      default:
        model: ''
```
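Since disabling `face_emben` or `face_quality` breaks face recognition entirely, it can save debugging time to confirm that both are mapped to a non-empty model. The sketch below is illustrative only; the embedded `sample` variable stands in for the extractors section of your real `findface-extraction-api.yaml`:

```shell
# Sketch: confirm the mandatory extractors have a non-empty model assigned.
# `sample` stands in for the extractors fragment of findface-extraction-api.yaml.
sample="    face_emben:
      default:
        model: face/nectarine_l_320.cpu.fnk
    face_quality:
      default:
        model: faceattr/faceattr.quality.v5.cpu.fnk"
for ex in face_emben face_quality; do
  # take the two lines following the extractor key and pull the model path
  model=$(printf '%s\n' "$sample" | grep -A2 " $ex:" | awk '/model:/ {print $2}')
  if [ -z "$model" ] || [ "$model" = "''" ]; then
    echo "$ex: DISABLED - face recognition will not work"
  else
    echo "$ex: $model"
  fi
done
```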
The following extraction models are available:

| Extractor | Acceleration | Configure as follows |
|---|---|---|
| age | CPU | `face_age: faceattr/faceattr.age.v3.cpu.fnk` |
| | GPU | `face_age: faceattr/faceattr.age.v3.gpu.fnk` |
| beard | CPU | `face_beard: faceattr/beard.v0.cpu.fnk` |
| | GPU | `face_beard: faceattr/beard.v0.gpu.fnk` |
| individual face feature vector | CPU | `face_emben: face/nectarine_l_320.cpu.fnk` |
| | GPU | `face_emben: face/nectarine_l_320.gpu.fnk` |
| gender | CPU | `face_gender: faceattr/faceattr.gender.v3.cpu.fnk` |
| | GPU | `face_gender: faceattr/faceattr.gender.v3.gpu.fnk` |
| emotions | CPU | `face_emotions: faceattr/emotions.v1.cpu.fnk` |
| | GPU | `face_emotions: faceattr/emotions.v1.gpu.fnk` |
| glasses | CPU | `face_glasses4: faceattr/faceattr.glasses4.v0.cpu.fnk` |
| | GPU | `face_glasses4: faceattr/faceattr.glasses4.v0.gpu.fnk` |
| head position | CPU | `face_headpose: faceattr/headpose.v3.cpu.fnk` |
| | GPU | `face_headpose: faceattr/headpose.v3.gpu.fnk` |
| face liveness | CPU | `face_liveness: faceattr/faceattr.liveness_web.v1.cpu.fnk` |
| | GPU | `face_liveness: faceattr/faceattr.liveness_web.v1.gpu.fnk` |
| face mask | CPU | `face_medmask3: faceattr/medmask3.v2.cpu.fnk` |
| | GPU | `face_medmask3: faceattr/medmask3.v2.gpu.fnk` |
| face quality | CPU | `face_quality: faceattr/faceattr.quality.v5.cpu.fnk` |
| | GPU | `face_quality: faceattr/faceattr.quality.v5.gpu.fnk` |
| eyes | CPU | `face_eyes_attrs: faceattr/faceattr.eyes_attrs.v0.cpu.fnk` |
| | GPU | `face_eyes_attrs: faceattr/faceattr.eyes_attrs.v0.gpu.fnk` |

Important: For face recognition to work properly, the `face_emben` and `face_quality` extractors must be enabled.
Note: The default glasses recognition model is `faceattr/faceattr.glasses4.v0`; it predicts four classes and is specified in the `face_glasses4` extractor. If you use a model that predicts three classes when recognizing glasses, specify it in the `face_glasses3` extractor in the `/opt/findface-multi/configs/findface-extraction-api/findface-extraction-api.yaml` file.

In the `/opt/findface-multi/configs/findface-multi-legacy/findface-multi-legacy.py` file, configure the `FACE_GLASSES_EXTRACTOR` parameter, which defaults to `face_glasses4`, to specify the enabled glasses recognition extractor. E.g., if you enabled the `faceattr/glasses3.v0` model, specify `'FACE_GLASSES_EXTRACTOR': 'face_glasses3'`.

The standard FindFace Multi installation package includes the `faceattr/faceattr.glasses4.v0` glasses recognition model. If you use the `faceattr/glasses3.v0` model, copy it to the `/opt/findface-multi/models/faceattr/` directory before editing the configuration files.

Important: FindFace Multi will recognize the enabled attributes. The confidence value of a recognized attribute depends on the neural network models used. For more information, please contact our support team at support@ntechlab.com.
Modify the `/opt/findface-multi/configs/findface-video-worker/findface-video-worker.yaml` configuration file:

```
sudo vi /opt/findface-multi/configs/findface-video-worker/findface-video-worker.yaml
```

In the `models` section, specify the face neural network models by analogy with the example below:

GPU:

```yaml
models:
  ...
  detectors:
    ...
    face:
      fnk_path: /usr/share/findface-data/models/detector/facedet.kali.005.gpu.fnk
      min_size: 60
    ...
  normalizers:
    ...
    face_norm:
      fnk_path: /usr/share/findface-data/models/facenorm/crop2x.v2_maxsize400.gpu.fnk
    face_norm_quality:
      fnk_path: /usr/share/findface-data/models/facenorm/crop1x.v2_maxsize400.gpu.fnk
    ...
  extractors:
    ...
    face_quality:
      fnk_path: /usr/share/findface-data/models/faceattr/faceattr.quality.v5.gpu.fnk
      normalizer: face_norm_quality
```

CPU:

```yaml
models:
  ...
  detectors:
    ...
    face:
      fnk_path: /usr/share/findface-data/models/detector/facedet.jasmine_fast.004.cpu.fnk
      min_size: 60
    ...
  normalizers:
    ...
    face_norm:
      fnk_path: /usr/share/findface-data/models/facenorm/crop2x.v2_maxsize400.cpu.fnk
    face_norm_quality:
      fnk_path: /usr/share/findface-data/models/facenorm/crop1x.v2_maxsize400.cpu.fnk
    ...
  extractors:
    ...
    face_quality:
      fnk_path: /usr/share/findface-data/models/faceattr/faceattr.quality.v5.cpu.fnk
      normalizer: face_norm_quality
```
Add the `face` section within `objects`:

```yaml
objects:
  ...
  face:
    normalizer: face_norm
    quality: face_quality
    track_features: ''
```
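The video worker and the extraction API must also agree on acceleration type, since both reference `.fnk` model files. The sketch below compares the suffix of the detector model referenced in each file; the two embedded one-line samples stand in for lines read from the real YAML files:

```shell
# Sketch: compare the acceleration suffix (.cpu.fnk / .gpu.fnk) of the detector
# model referenced in findface-video-worker.yaml vs. findface-extraction-api.yaml.
# The two variables stand in for lines read from the real files.
worker="fnk_path: /usr/share/findface-data/models/detector/facedet.jasmine_fast.004.cpu.fnk"
api="model: detector/facedet.jasmine_fast.004.cpu.fnk"
suffix() { printf '%s\n' "$1" | grep -oE '\.(cpu|gpu)\.fnk' | head -n1; }
if [ "$(suffix "$worker")" = "$(suffix "$api")" ]; then
  echo "acceleration types match: $(suffix "$worker")"
else
  echo "WARNING: video-worker and extraction-api use different acceleration types"
fi
```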
Open the `/opt/findface-multi/configs/findface-video-manager/findface-video-manager.yaml` configuration file and make sure it contains a `face` section in `detectors` that looks similar to the example below:

```
sudo vi /opt/findface-multi/configs/findface-video-manager/findface-video-manager.yaml
```

```yaml
detectors:
  face:
    filter_min_quality: 0.42
    filter_min_size: 60
    filter_max_size: 8192
    roi: ''
    fullframe_crop_rot: false
    fullframe_use_png: false
    jpeg_quality: 95
    overall_only: true
    realtime_post_first_immediately: false
    realtime_post_interval: 1
    realtime_post_every_interval: false
    track_interpolate_bboxes: true
    track_miss_interval: 1
    track_overlap_threshold: 0.25
    track_max_duration_frames: 0
    track_send_history: false
    post_best_track_frame: true
    post_best_track_normalize: true
    post_first_track_frame: false
    post_last_track_frame: false
    tracker_type: simple_iou
    track_deep_sort_matching_threshold: 0.65
    track_deep_sort_filter_unconfirmed_tracks: true
    track_object_is_principal: false
    track_history_active_track_miss_interval: 0
    filter_track_min_duration_frames: 1
    tracker_settings:
      oc_sort:
        filter_unconfirmed_tracks: true
        high_quality_detects_threshold: 0.6
        momentum_delta_time: 3
        smooth_factor: 0.5
        time_since_update: 0
    extractors_track_triggers: {}
```
Enable recognition of faces and face attributes in the `/opt/findface-multi/configs/findface-multi-legacy/findface-multi-legacy.py` configuration file. Do the following:

In the `FFSECURITY` section, set `'ENABLE_FACES': True`:

```
sudo vi /opt/findface-multi/configs/findface-multi-legacy/findface-multi-legacy.py
```

```python
FFSECURITY = {
    ...
    # optional objects to detect
    'ENABLE_FACES': True,
    ...
```
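If the file is edited by a script rather than by hand, a one-line grep is enough to confirm the flag took effect. This sketch checks an embedded sample line standing in for the contents of the real `findface-multi-legacy.py`:

```shell
# Sketch: verify ENABLE_FACES is set to True. `line` stands in for the contents
# of /opt/findface-multi/configs/findface-multi-legacy/findface-multi-legacy.py.
line="    'ENABLE_FACES': True,"
if printf '%s\n' "$line" | grep -q "'ENABLE_FACES': True"; then
  echo "faces enabled"
else
  echo "faces disabled or not set"
fi
```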
In the `FACE_EVENTS_FEATURES` parameter, specify the face attributes that you want to display for face recognition events:

```python
# available features: age, beard, emotions, gender, glasses, headpose, medmask, eyes_attrs
'FACE_EVENTS_FEATURES': ['gender', 'beard', 'emotions', 'headpose', 'age', 'medmask', 'glasses', 'eyes_attrs'],
```
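A typo in `FACE_EVENTS_FEATURES` silently hides that attribute from events, so it can be worth checking each configured entry against the list of available features. The sketch below hard-codes both lists from this section as an example; it is not a supported tool:

```shell
# Sketch: every configured feature must be one of the available ones;
# unknown entries are collected into $bad.
available="age beard emotions gender glasses headpose medmask eyes_attrs"
configured="gender beard emotions headpose age medmask glasses eyes_attrs"
bad=""
for f in $configured; do
  case " $available " in
    *" $f "*) ;;            # known feature, nothing to do
    *) bad="$bad $f" ;;     # collect unknown entries
  esac
done
if [ -z "$bad" ]; then
  echo "FACE_EVENTS_FEATURES OK"
else
  echo "Unknown features:$bad"
fi
```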
Restart all FindFace Multi containers:

```
cd /opt/findface-multi/
sudo docker-compose restart
```
In the web interface, navigate to Video Sources. Select a camera in the Cameras tab (or an uploaded file in the Uploads tab, or an external detector in the corresponding tab). Navigate to the General tab. Select Faces in the Detectors section.