Enable Face and Face Attribute Recognition
FindFace Server allows you to recognize human faces and face attributes. Depending on your needs, you can enable recognition of face attributes such as age, gender, emotions, beard, glasses, medical masks, head position, and liveness.
Face and face attribute recognition is enabled and configured manually and works with both GPU and CPU acceleration.
Note
The examples below show how the configuration sections should look. The exact contents may vary depending on the models you have selected.
To enable face recognition, do the following:
Specify neural network models for face object detection in the /opt/ffserver/configs/extraction-api.yaml configuration file.

Important

Be sure to choose the right acceleration type for each model, matching the acceleration type of extraction-api: CPU or GPU. Be aware that extraction-api on CPU can work only with CPU models, while extraction-api on GPU supports both CPU and GPU models. A quick way to double-check this is shown after the normalizers step below.

Open the extraction-api.yaml configuration file.

sudo vi /opt/ffserver/configs/extraction-api.yaml
Specify the face detector model in the detectors -> models section by pasting the following code:

GPU

detectors:
  ...
  models:
    ...
    face_jasmine:
      aliases:
        - face
        - nnd
        - cheetah
      model: detector/facedet.jasmine_fast.004.gpu.fnk
      options:
        min_object_size: 32
        resolutions:
          - 2048x2048
  ...

CPU

detectors:
  ...
  models:
    ...
    face_jasmine:
      aliases:
        - face
        - nnd
        - cheetah
      model: detector/facedet.jasmine_fast.004.cpu.fnk
      options:
        min_object_size: 32
        resolutions:
          - 2048x2048
  ...
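If you are not sure which face detector models are installed on your host, you can list them first. This is an optional check and assumes the default model directory /usr/share/findface-data/models/, the same one used in the video-worker.yaml example later in this section; your installation may keep models elsewhere.

# List the installed face detector models (path is an assumption, adjust if needed)
ls /usr/share/findface-data/models/detector/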
Make sure that the objects -> face section contains quality_attribute: face_quality and either base_normalizer: facenorm/crop2x.v2_maxsize400.gpu.fnk or base_normalizer: facenorm/crop2x.v2_maxsize400.cpu.fnk, depending on your acceleration type:

GPU

objects:
  ...
  face:
    base_normalizer: facenorm/crop2x.v2_maxsize400.gpu.fnk
    quality_attribute: face_quality
  ...

CPU

objects:
  ...
  face:
    base_normalizer: facenorm/crop2x.v2_maxsize400.cpu.fnk
    quality_attribute: face_quality
  ...
Specify the face normalizer models in the normalizers section by pasting the following code:

GPU

normalizers:
  ...
  models:
    crop1x:
      model: facenorm/crop1x.v2_maxsize400.gpu.fnk
    crop2x:
      model: facenorm/crop2x.v2_maxsize400.gpu.fnk
    cropbbox:
      model: facenorm/cropbbox.v2.gpu.fnk
    multicrop_full_center:
      model: ''
    multicrop_full_crop2x:
      model: facenorm/facenorm.multicrop_full_crop2x_size400.gpu.fnk
    norm200:
      model: facenorm/bee.v3.gpu.fnk
  ...

CPU

normalizers:
  ...
  models:
    crop1x:
      model: facenorm/crop1x.v2_maxsize400.cpu.fnk
    crop2x:
      model: facenorm/crop2x.v2_maxsize400.cpu.fnk
    cropbbox:
      model: facenorm/cropbbox.v2.cpu.fnk
    multicrop_full_center:
      model: ''
    multicrop_full_crop2x:
      model: facenorm/facenorm.multicrop_full_crop2x_size400.cpu.fnk
    norm200:
      model: facenorm/bee.v3.cpu.fnk
  ...
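Before moving on, you can verify that every model referenced in extraction-api.yaml matches the acceleration type of extraction-api. The command below is only a rough spot check; it assumes each model reference sits on its own line, as in the examples above.

# Print every model reference together with its acceleration suffix (.cpu.fnk or .gpu.fnk)
grep -nE '\.(cpu|gpu)\.fnk' /opt/ffserver/configs/extraction-api.yaml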
Note
This step is required to enable face attribute recognition.
To enable face attribute recognition, do the following:
In the /opt/ffserver/configs/extraction-api.yaml configuration file, specify the extraction models in the extractors section, depending on which extractors you want to enable. Be sure to indicate the right acceleration type for each model, matching the acceleration type of extraction-api: CPU or GPU.

GPU

extractors:
  ...
  models:
    face_age:
      default:
        model: faceattr/faceattr.age.v3.gpu.fnk
    face_beard:
      default:
        model: faceattr/beard.v0.gpu.fnk
    face_emben:
      default:
        model: face/nectarine_xl_320.gpu.fnk
    face_emotions:
      default:
        model: faceattr/emotions.v1.gpu.fnk
    face_gender:
      default:
        model: faceattr/faceattr.gender.v3.gpu.fnk
    face_glasses3:
      default:
        model: faceattr/glasses3.v0.gpu.fnk
    face_headpose:
      default:
        model: faceattr/headpose.v3.gpu.fnk
    face_liveness:
      default:
        model: faceattr/faceattr.liveness_web.v1.gpu.fnk
    face_medmask3:
      default:
        model: faceattr/medmask3.v2.gpu.fnk
    face_quality:
      default:
        model: faceattr/faceattr.quality.v5.gpu.fnk
  ...

CPU

extractors:
  ...
  models:
    face_age:
      default:
        model: faceattr/faceattr.age.v3.cpu.fnk
    face_beard:
      default:
        model: faceattr/beard.v0.cpu.fnk
    face_emben:
      default:
        model: face/nectarine_xl_320.cpu.fnk
    face_emotions:
      default:
        model: faceattr/emotions.v1.cpu.fnk
    face_gender:
      default:
        model: faceattr/faceattr.gender.v3.cpu.fnk
    face_glasses3:
      default:
        model: faceattr/glasses3.v0.cpu.fnk
    face_headpose:
      default:
        model: faceattr/headpose.v3.cpu.fnk
    face_liveness:
      default:
        model: faceattr/faceattr.liveness_web.v1.cpu.fnk
    face_medmask3:
      default:
        model: faceattr/medmask3.v2.cpu.fnk
    face_quality:
      default:
        model: faceattr/faceattr.quality.v5.cpu.fnk
  ...
The most used extraction models are the following:

Extractor                        Acceleration   Configure as follows
age                              CPU            face_age: faceattr/faceattr.age.v3.cpu.fnk
                                 GPU            face_age: faceattr/faceattr.age.v3.gpu.fnk
beard                            CPU            face_beard: faceattr/beard.v0.cpu.fnk
                                 GPU            face_beard: faceattr/beard.v0.gpu.fnk
                                 CPU            face_beard4: faceattr/faceattr.beard4.v1.cpu.fnk
                                 GPU            face_beard4: faceattr/faceattr.beard4.v1.gpu.fnk
individual face feature vector   CPU            face_emben: face/nectarine_xl_320.cpu.fnk
                                 GPU            face_emben: face/nectarine_xl_320.gpu.fnk
gender                           CPU            face_gender: faceattr/faceattr.gender.v3.cpu.fnk
                                 GPU            face_gender: faceattr/faceattr.gender.v3.gpu.fnk
emotions                         CPU            face_emotions: faceattr/emotions.v1.cpu.fnk
                                 GPU            face_emotions: faceattr/emotions.v1.gpu.fnk
glasses                          CPU            face_glasses3: faceattr/glasses3.v0.cpu.fnk
                                 GPU            face_glasses3: faceattr/glasses3.v0.gpu.fnk
                                 CPU            face_glasses4: faceattr/faceattr.glasses4.v0.cpu.fnk
head position                    CPU            face_headpose: faceattr/headpose.v3.cpu.fnk
                                 GPU            face_headpose: faceattr/headpose.v3.gpu.fnk
face liveness                    CPU            face_liveness: faceattr/faceattr.liveness_web.v1.cpu.fnk
                                 GPU            face_liveness: faceattr/faceattr.liveness_web.v1.gpu.fnk
face mask                        CPU            face_medmask3: faceattr/medmask3.v2.cpu.fnk
                                 GPU            face_medmask3: faceattr/medmask3.v2.gpu.fnk
                                 CPU            face_medmask4: faceattr/faceattr.medmask4.v0.cpu.fnk
                                 GPU            face_medmask4: faceattr/faceattr.medmask4.v0.gpu.fnk
face quality                     CPU            face_quality: faceattr/faceattr.quality.v5.cpu.fnk
                                 GPU            face_quality: faceattr/faceattr.quality.v5.gpu.fnk
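To see which of these attribute models are actually installed on your host, you can list the attribute model directory. The path below is an assumption based on the default model directory /usr/share/findface-data/models/ used in the video-worker.yaml example that follows; adjust it if your models are installed elsewhere.

# List the installed face attribute models (path is an assumption, adjust if needed)
ls /usr/share/findface-data/models/faceattr/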
To enable face recognition, modify the /opt/ffserver/configs/video-worker.yaml configuration file.

sudo vi /opt/ffserver/configs/video-worker.yaml

In the models section, specify the face neural network models by analogy with the example below:

GPU

models:
  ...
  detectors:
    ...
    face:
      fnk_path: /usr/share/findface-data/models/detector/facedet.jasmine_fast.004.gpu.fnk
      min_size: 60
    ...
  normalizers:
    ...
    face_norm:
      fnk_path: /usr/share/findface-data/models/facenorm/crop2x.v2_maxsize400.gpu.fnk
    face_norm_quality:
      fnk_path: /usr/share/findface-data/models/facenorm/crop1x.v2_maxsize400.gpu.fnk
    ...
  extractors:
    ...
    face_quality:
      fnk_path: /usr/share/findface-data/models/faceattr/faceattr.quality.v5.gpu.fnk
      normalizer: face_norm_quality
  ...

CPU

models:
  ...
  detectors:
    ...
    face:
      fnk_path: /usr/share/findface-data/models/detector/facedet.jasmine_fast.004.cpu.fnk
      min_size: 60
    ...
  normalizers:
    ...
    face_norm:
      fnk_path: /usr/share/findface-data/models/facenorm/crop2x.v2_maxsize400.cpu.fnk
    face_norm_quality:
      fnk_path: /usr/share/findface-data/models/facenorm/crop1x.v2_maxsize400.cpu.fnk
    ...
  extractors:
    ...
    face_quality:
      fnk_path: /usr/share/findface-data/models/faceattr/faceattr.quality.v5.cpu.fnk
      normalizer: face_norm_quality
  ...
Make sure that the objects -> face section is included:

objects:
  ...
  face:
    normalizer: face_norm
    quality: face_quality
    track_features: ''
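Optionally, confirm that every fnk_path in video-worker.yaml points to a model file that exists on disk. The check below is only a sketch and assumes each fnk_path fits on a single line, as in the example above.

# Flag any missing model files referenced in video-worker.yaml
grep -oE '/usr/share/findface-data/models/[^ ]+\.fnk' /opt/ffserver/configs/video-worker.yaml \
  | xargs -I{} sh -c 'test -f "{}" && echo "OK       {}" || echo "MISSING  {}"'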
To enable face recognition, open the /opt/ffserver/configs/video-manager.yaml configuration file and make sure it contains a face section in detectors that looks similar to the example below.

sudo vi /opt/ffserver/configs/video-manager.yaml

detectors:
  face:
    filter_min_quality: 0.45
    filter_min_size: 1
    filter_max_size: 8192
    roi: ""
    fullframe_crop_rot: false
    fullframe_use_png: false
    jpeg_quality: 95
    overall_only: false
    realtime_post_first_immediately: false
    realtime_post_interval: 1
    realtime_post_every_interval: false
    track_interpolate_bboxes: true
    track_miss_interval: 1
    track_overlap_threshold: 0.25
    track_max_duration_frames: 0
    track_send_history: false
    post_best_track_frame: true
    post_best_track_normalize: true
    post_first_track_frame: false
    post_last_track_frame: false
    tracker_type: simple_iou
    track_deep_sort_matching_threshold: 0.65
    track_deep_sort_filter_unconfirmed_tracks: true
    track_object_is_principal: false
    track_history_active_track_miss_interval: 0
    filter_track_min_duration_frames: 1
    extractors_track_triggers: {}
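As a rough check that the face section is in place, you can search the file for it; this simple text search may also match other keys containing "face", so treat it only as a quick confirmation.

# Show the face section and its first few filter settings
grep -n -A 4 'face:' /opt/ffserver/configs/video-manager.yaml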