Enable Face and Face Attribute Recognition

FindFace Multi allows you to recognize human faces and face attributes. Depending on your needs, you can enable recognition of face attributes such as age, gender, emotions, beard, glasses, medical masks, head position, eye state, or liveness.

Face and face attribute recognition can be automatically enabled and configured during the FindFace Multi installation. This section describes how to enable face and face attribute recognition if this step was skipped during installation.

To enable face and face attribute recognition, do the following:

  1. Specify neural network models for face object detection in the /opt/findface-multi/configs/findface-extraction-api/findface-extraction-api.yaml configuration file.

    Important

    Be sure to choose the right acceleration type for each model, matching the acceleration type of findface-extraction-api: CPU or GPU. Note that findface-extraction-api on CPU works only with CPU models, while findface-extraction-api on GPU supports both CPU and GPU models.
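
    If you are not sure which model variants are present on your host, you can list them before editing the configuration. A quick check (a sketch; it assumes the default model location used by a standard installation):

      ls /opt/findface-multi/models/detector/ | grep -E '\.(cpu|gpu)\.fnk$'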

    1. Open the findface-extraction-api.yaml configuration file.

      sudo vi /opt/findface-multi/configs/findface-extraction-api/findface-extraction-api.yaml
      
    2. Specify the face detector model in the models subsection of the detectors section by pasting the following code:

      GPU

      detectors:
        ...
        models:
          face_jasmine:
            aliases:
            - face
            - nnd
            - cheetah
            model: detector/facedet.kali.005.gpu.fnk
            options:
              min_object_size: 32
              resolutions:
              - 2048x2048
        ...
      

      CPU

      detectors:
        ...
        models:
          face_jasmine:
            aliases:
            - face
            - nnd
            - cheetah
            model: detector/facedet.jasmine_fast.004.cpu.fnk
            options:
              min_object_size: 32
              resolutions:
              - 2048x2048
        ...
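
      Before moving on, you can confirm that the detector model file referenced above exists on the host. A sketch, assuming the standard layout in which models reside under /opt/findface-multi/models/ and are mounted into the container:

      ls -l /opt/findface-multi/models/detector/facedet.*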
      
    3. In the face subsection of the objects section, specify quality_attribute: face_quality and base_normalizer: facenorm/crop2x.v2_maxsize400.gpu.fnk or facenorm/crop2x.v2_maxsize400.cpu.fnk, depending on your acceleration type:

      GPU

      objects:
        ...
        face:
          base_normalizer: facenorm/crop2x.v2_maxsize400.gpu.fnk
          quality_attribute: face_quality
        ...
      

      CPU

      objects:
        ...
        face:
          base_normalizer: facenorm/crop2x.v2_maxsize400.cpu.fnk
          quality_attribute: face_quality
        ...
      
    4. Specify the face normalizer models in the normalizers section:

      GPU

      normalizers:
        ...
        models:
          crop1x:
            model: facenorm/crop1x.v2_maxsize400.gpu.fnk
          crop2x:
            model: facenorm/crop2x.v2_maxsize400.gpu.fnk
          cropbbox:
            model: facenorm/cropbbox.v2.gpu.fnk
          multicrop_full_center:
            model: ''
          multicrop_full_crop2x:
            model: facenorm/facenorm.multicrop_full_crop2x_size400.gpu.fnk
          norm200:
            model: facenorm/bee.v3.gpu.fnk
        ...
      

      CPU

      normalizers:
        ...
        models:
          crop1x:
            model: facenorm/crop1x.v2_maxsize400.cpu.fnk
          crop2x:
            model: facenorm/crop2x.v2_maxsize400.cpu.fnk
          cropbbox:
            model: facenorm/cropbbox.v2.cpu.fnk
          multicrop_full_center:
            model: ''
          multicrop_full_crop2x:
            model: facenorm/facenorm.multicrop_full_crop2x_size400.cpu.fnk
          norm200:
            model: facenorm/bee.v3.cpu.fnk
        ...
      
    5. Specify the extraction models in the models subsection of the extractors section, depending on which extractors you want to enable:

      Important

      The face_liveness extraction model faceattr/faceattr.liveness_web.v1 is enabled by default. Do not disable it if you use authentication by face.

      GPU

      extractors:
        ...
        models:
          face_age:
            default:
              model: faceattr/faceattr.age.v3.gpu.fnk
          face_beard:
            default:
              model: faceattr/beard.v0.gpu.fnk
          face_beard4:
            default:
              model: ''
          face_countries47:
            default:
              model: ''
          face_emben:
            default:
              model: face/nectarine_l_320.gpu.fnk
          face_emotions:
            default:
              model: faceattr/emotions.v1.gpu.fnk
          face_eyes_attrs:
            default:
              model: faceattr/faceattr.eyes_attrs.v0.gpu.fnk
          face_eyes_openness:
            default:
              model: ''
          face_gender:
            default:
              model: faceattr/faceattr.gender.v3.gpu.fnk
          face_glasses3:
            default:
              model: ''
          face_glasses4:
            default:
              model: faceattr/faceattr.glasses4.v0.gpu.fnk
          face_hair:
            default:
              model: ''
          face_headpose:
            default:
              model: faceattr/headpose.v3.gpu.fnk
          face_headwear:
            default:
              model: ''
          face_highlight:
            default:
              model: ''
          face_liveness:
            default:
              model: faceattr/faceattr.liveness_web.v1.gpu.fnk
          face_luminance_overexposure:
            default:
              model: ''
          face_luminance_underexposure:
            default:
              model: ''
          face_luminance_uniformity:
            default:
              model: ''
          face_medmask3:
            default:
              model: faceattr/medmask3.v2.gpu.fnk
          face_medmask4:
            default:
              model: ''
          face_mouth_attrs:
            default:
              model: ''
          face_quality:
            default:
              model: faceattr/faceattr.quality.v5.gpu.fnk
          face_scar:
            default:
              model: ''
          face_sharpness:
            default:
              model: ''
          face_tattoo:
            default:
              model: ''
          face_validity:
            default:
              model: ''
      

      CPU

      extractors:
        ...
        models:
          face_age:
            default:
              model: faceattr/faceattr.age.v3.cpu.fnk
          face_beard:
            default:
              model: faceattr/beard.v0.cpu.fnk
          face_beard4:
            default:
              model: ''
          face_countries47:
            default:
              model: ''
          face_emben:
            default:
              model: face/nectarine_l_320.cpu.fnk
          face_emotions:
            default:
              model: faceattr/emotions.v1.cpu.fnk
          face_eyes_attrs:
            default:
              model: faceattr/faceattr.eyes_attrs.v0.cpu.fnk
          face_eyes_openness:
            default:
              model: ''
          face_gender:
            default:
              model: faceattr/faceattr.gender.v3.cpu.fnk
          face_glasses3:
            default:
              model: ''
          face_glasses4:
            default:
              model: faceattr/faceattr.glasses4.v0.cpu.fnk
          face_hair:
            default:
              model: ''
          face_headpose:
            default:
              model: faceattr/headpose.v3.cpu.fnk
          face_headwear:
            default:
              model: ''
          face_highlight:
            default:
              model: ''
          face_liveness:
            default:
              model: faceattr/faceattr.liveness_web.v1.cpu.fnk
          face_luminance_overexposure:
            default:
              model: ''
          face_luminance_underexposure:
            default:
              model: ''
          face_luminance_uniformity:
            default:
              model: ''
          face_medmask3:
            default:
              model: faceattr/medmask3.v2.cpu.fnk
          face_medmask4:
            default:
              model: ''
          face_mouth_attrs:
            default:
              model: ''
          face_quality:
            default:
              model: faceattr/faceattr.quality.v5.cpu.fnk
          face_scar:
            default:
              model: ''
          face_sharpness:
            default:
              model: ''
          face_tattoo:
            default:
              model: ''
          face_validity:
            default:
              model: ''
      

      The following extraction models are available (CPU and GPU configurations are listed for each extractor):

      age
        CPU: face_age: faceattr/faceattr.age.v3.cpu.fnk
        GPU: face_age: faceattr/faceattr.age.v3.gpu.fnk

      beard
        CPU: face_beard: faceattr/beard.v0.cpu.fnk
        GPU: face_beard: faceattr/beard.v0.gpu.fnk

      individual face feature vector
        CPU: face_emben: face/nectarine_l_320.cpu.fnk
        GPU: face_emben: face/nectarine_l_320.gpu.fnk

      gender
        CPU: face_gender: faceattr/faceattr.gender.v3.cpu.fnk
        GPU: face_gender: faceattr/faceattr.gender.v3.gpu.fnk

      emotions
        CPU: face_emotions: faceattr/emotions.v1.cpu.fnk
        GPU: face_emotions: faceattr/emotions.v1.gpu.fnk

      glasses
        CPU: face_glasses4: faceattr/faceattr.glasses4.v0.cpu.fnk
        GPU: face_glasses4: faceattr/faceattr.glasses4.v0.gpu.fnk

      head position
        CPU: face_headpose: faceattr/headpose.v3.cpu.fnk
        GPU: face_headpose: faceattr/headpose.v3.gpu.fnk

      face liveness
        CPU: face_liveness: faceattr/faceattr.liveness_web.v1.cpu.fnk
        GPU: face_liveness: faceattr/faceattr.liveness_web.v1.gpu.fnk

      face mask
        CPU: face_medmask3: faceattr/medmask3.v2.cpu.fnk
        GPU: face_medmask3: faceattr/medmask3.v2.gpu.fnk

      face quality
        CPU: face_quality: faceattr/faceattr.quality.v5.cpu.fnk
        GPU: face_quality: faceattr/faceattr.quality.v5.gpu.fnk

      eyes
        CPU: face_eyes_attrs: faceattr/faceattr.eyes_attrs.v0.cpu.fnk
        GPU: face_eyes_attrs: faceattr/faceattr.eyes_attrs.v0.gpu.fnk

      Important

      For face recognition to work properly, the face_emben and the face_quality extractors must be enabled.

      Note

      The default glasses recognition model is faceattr/faceattr.glasses4.v0; it predicts four classes and is specified in the face_glasses4 extractor. If you use a model that predicts three classes, specify it in the face_glasses3 extractor in the /opt/findface-multi/configs/findface-extraction-api/findface-extraction-api.yaml file.

      In the /opt/findface-multi/configs/findface-multi-legacy/findface-multi-legacy.py file, the FACE_GLASSES_EXTRACTOR parameter defines which glasses recognition extractor is enabled; it defaults to face_glasses4. Set it to match the extractor you enabled: e.g., if you enabled the faceattr/glasses3.v0 model, specify 'FACE_GLASSES_EXTRACTOR': 'face_glasses3'.

      The standard FindFace Multi installation package includes the faceattr/faceattr.glasses4.v0 glasses recognition model. If you use the faceattr/glasses3.v0 model, copy it to the /opt/findface-multi/models/faceattr/ directory before editing the configuration files.
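
      For example, switching to the three-class model on a CPU-accelerated installation would touch both files, roughly as follows (a sketch; the exact .fnk file name is an assumption, so use the name of the file you copied):

      # findface-extraction-api.yaml
      face_glasses3:
        default:
          model: faceattr/glasses3.v0.cpu.fnk

      # findface-multi-legacy.py
      'FACE_GLASSES_EXTRACTOR': 'face_glasses3',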

      Important

      FindFace Multi will recognize only the attributes you have enabled. The confidence value of a recognized attribute depends on the neural network model used. For more information, please contact our support team at support@ntechlab.com.
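
      After editing findface-extraction-api.yaml, it is worth confirming that the file is still valid YAML before restarting anything. A minimal sketch, assuming Python 3 with PyYAML is available on the host:

      python3 -c "import yaml; yaml.safe_load(open('/opt/findface-multi/configs/findface-extraction-api/findface-extraction-api.yaml'))" \
        && echo "YAML OK"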

  2. Modify the /opt/findface-multi/configs/findface-video-worker/findface-video-worker.yaml configuration file.

    1. In the models section, specify the face neural network models, as in the example below:

      GPU

      sudo vi /opt/findface-multi/configs/findface-video-worker/findface-video-worker.yaml
      
      models:
        ...
        detectors:
          ...
          face:
            fnk_path: /usr/share/findface-data/models/detector/facedet.kali.005.gpu.fnk
            min_size: 60
          ...
        normalizers:
          ...
          face_norm:
            fnk_path: /usr/share/findface-data/models/facenorm/crop2x.v2_maxsize400.gpu.fnk
          face_norm_quality:
            fnk_path: /usr/share/findface-data/models/facenorm/crop1x.v2_maxsize400.gpu.fnk
          ...
        extractors:
          ...
          face_quality:
            fnk_path: /usr/share/findface-data/models/faceattr/faceattr.quality.v5.gpu.fnk
            normalizer: face_norm_quality
      

      CPU

      sudo vi /opt/findface-multi/configs/findface-video-worker/findface-video-worker.yaml
      
      models:
        ...
        detectors:
          ...
          face:
            fnk_path: /usr/share/findface-data/models/detector/facedet.jasmine_fast.004.cpu.fnk
            min_size: 60
          ...
        normalizers:
          ...
          face_norm:
            fnk_path: /usr/share/findface-data/models/facenorm/crop2x.v2_maxsize400.cpu.fnk
          face_norm_quality:
            fnk_path: /usr/share/findface-data/models/facenorm/crop1x.v2_maxsize400.cpu.fnk
          ...
        extractors:
          ...
          face_quality:
            fnk_path: /usr/share/findface-data/models/faceattr/faceattr.quality.v5.cpu.fnk
            normalizer: face_norm_quality
      
    2. Add the face section within objects:

      objects:
        ...
        face:
          normalizer: face_norm
          quality: face_quality
          track_features: ''
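
      The face_norm normalizer configured here should reference the same crop2x model as the base_normalizer you set for findface-extraction-api in step 1. A quick cross-check (a sketch) is to grep both files:

      grep -n 'crop2x' \
        /opt/findface-multi/configs/findface-extraction-api/findface-extraction-api.yaml \
        /opt/findface-multi/configs/findface-video-worker/findface-video-worker.yaml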
      
  3. Open the /opt/findface-multi/configs/findface-video-manager/findface-video-manager.yaml configuration file and make sure that its detectors section contains a face section similar to the example below.

    sudo vi /opt/findface-multi/configs/findface-video-manager/findface-video-manager.yaml
    
    detectors:
      face:
        filter_min_quality: 0.42
        filter_min_size: 60
        filter_max_size: 8192
        roi: ''
        fullframe_crop_rot: false
        fullframe_use_png: false
        jpeg_quality: 95
        overall_only: true
        realtime_post_first_immediately: false
        realtime_post_interval: 1
        realtime_post_every_interval: false
        track_interpolate_bboxes: true
        track_miss_interval: 1
        track_overlap_threshold: 0.25
        track_max_duration_frames: 0
        track_send_history: false
        post_best_track_frame: true
        post_best_track_normalize: true
        post_first_track_frame: false
        post_last_track_frame: false
        tracker_type: simple_iou
        track_deep_sort_matching_threshold: 0.65
        track_deep_sort_filter_unconfirmed_tracks: true
        track_object_is_principal: false
        track_history_active_track_miss_interval: 0
        filter_track_min_duration_frames: 1
        tracker_settings:
          oc_sort:
            filter_unconfirmed_tracks: true
            high_quality_detects_threshold: 0.6
            momentum_delta_time: 3
            smooth_factor: 0.5
            time_since_update: 0
        extractors_track_triggers: {}
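
    If the section is missing, add it. A quick way to check without opening the file (a sketch):

    grep -n 'face:' /opt/findface-multi/configs/findface-video-manager/findface-video-manager.yaml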
    
  4. Enable recognition of faces and face attributes in the /opt/findface-multi/configs/findface-multi-legacy/findface-multi-legacy.py configuration file. Do the following:

    1. In the FFSECURITY section, set 'ENABLE_FACES': True:

      sudo vi /opt/findface-multi/configs/findface-multi-legacy/findface-multi-legacy.py
      
      FFSECURITY = {
          ...
      
          # optional objects to detect
          'ENABLE_FACES': True,
          ...
      
    2. In the FACE_EVENTS_FEATURES parameter, specify the face attributes that you want to display for the face recognition events.

      # available features: age, beard, emotions, gender, glasses, headpose, medmask, eyes_attrs
      'FACE_EVENTS_FEATURES': ['gender', 'beard', 'emotions', 'headpose', 'age', 'medmask', 'glasses', 'eyes_attrs'],
      
    3. Restart all FindFace Multi containers.

      cd /opt/findface-multi/
      sudo docker-compose restart
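
      Optionally, verify that all containers came back up and are in the Up state:

      sudo docker-compose ps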
      
    4. In the web interface, navigate to Video Sources. Select a camera on the Cameras tab (or an uploaded file on the Uploads tab, or an external detector on the corresponding tab). Navigate to the General tab and select Faces in the Detectors section.
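
    Once the Faces detector is selected, face events should start coming in from that source. If they do not, the video worker logs are a good first place to look (a sketch; the service name is an assumption based on the default Compose file, so run sudo docker-compose ps --services to see the actual names):

      cd /opt/findface-multi/
      sudo docker-compose logs --tail=20 findface-video-worker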