findface-extraction-api

The findface-extraction-api service uses neural networks to detect a face in an image, extract face biometric data (feature vector), and recognize gender, age, emotions, and other features.

It interfaces with the findface-sf-api service as follows:

  • Gets original images with faces and normalized face images.
  • Returns the coordinates of the face bounding box and, if requested by findface-sf-api, the feature vector and the gender, age, and emotion data.

Functionality:

  • face detection in an original image (returning the bbox coordinates),
  • face normalization,
  • feature vector extraction from a normalized image,
  • face feature recognition (gender, age, emotions, beard, glasses, face mask, etc.).

The findface-extraction-api service can be CPU-based (installed from the findface-extraction-api package) or GPU-based (installed from the findface-extraction-api-gpu package). For both the CPU- and GPU-accelerated service, configuration is done through the /etc/findface-extraction-api.ini configuration file. Its content differs depending on the acceleration type.
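
To check which variant is installed on a host, you can list the installed packages (a quick check, assuming the standard package names above):

dpkg -l | grep findface-extraction-api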

CPU-service configuration file:

detectors:
  max_batch_size: 1
  instances: 4
  models:
    cheetah:
      model: facedet/cheetah.cpu.fnk
      options:
        min_object_size: 32
        resolutions:
        - 256x256
        - 384x384
        - 512x512
        - 768x768
        - 1024x1024
        - 1536x1536
        - 2048x2048
  quality_estimator: true
normalizers:
  max_batch_size: 8
  instances: 4
  models:
    crop1x:
      model: ''
    crop2x:
      model: facenorm/crop2x.v2_maxsize400.cpu.fnk
    cropbbox:
      model: ''
    norm200:
      model: facenorm/bee.v2.cpu.fnk
extractors:
  max_batch_size: 8
  instances: 4
  models:
    age: ''
    beard: ''
    carattr_color: ''
    carattr_description: ''
    carattr_make: ''
    carattr_trash: ''
    countries47: ''
    emotions: ''
    face: face/ifruit_320.cpu.fnk
    gender: ''
    glasses3: ''
    headpose: ''
    liveness: ''
    luminance_overexposure: ''
    luminance_underexposure: ''
    medmask3: ''
    quality: faceattr/quality.v0.cpu.fnk
    sharpness: ''
    validity: ''
gpu_device: 0
models_root: /usr/share/findface-data/models
cache_dir: /var/cache/findface/models_cache
listen: 127.0.0.1:18666
license_ntls_server: 127.0.0.1:3133
fetch:
  enabled: true
  size_limit: 10485760
max_dimension: 6000
allow_cors: false
ticker_interval: 5000
prometheus:
  timing_buckets: [0.001, 0.005, 0.01, 0.02, 0.03, 0.05, 0.1, 0.2, 0.3, 0.5, 0.75,
    0.9, 1, 1.1, 1.3, 1.5, 1.7, 2, 3, 5, 10, 20, 30, 50]
  resolution_buckets: [10000, 20000, 40000, 80000, 100000, 200000, 400000, 800000,
    1e+06, 2e+06, 3e+06, 4e+06, 5e+06, 6e+06, 8e+06, 1e+07, 1.2e+07, 1.5e+07, 1.8e+07,
    2e+07, 3e+07, 5e+07, 1e+08]
  faces_buckets: [0, 1, 2, 5, 10, 20, 50, 75, 100, 200, 300, 400, 500, 600, 700, 800,
    900, 1000]

GPU-service configuration file:

detectors:
  max_batch_size: 1
  instances: 1
  models:
    cheetah:
      aliases:
      - face
      - nnd
      model: facedet/cheetah.gpu.fnk
      options:
        min_object_size: 32
        resolutions: [256x256, 384x384, 512x512, 768x768, 1024x1024, 1536x1536, 2048x2048]
  quality_estimator: true
normalizers:
  max_batch_size: 8
  instances: 1
  models:
    crop1x:
      model: ""
    crop2x:
      model: facenorm/crop2x.v2_maxsize400.gpu.fnk
    cropbbox:
      model: ""
    norm200:
      model: facenorm/bee.v2.gpu.fnk
extractors:
  max_batch_size: 8
  instances: 1
  models:
    age: faceattr/age.v1.gpu.fnk
    beard: ""
    carattr_color: ""
    carattr_description: ""
    carattr_license_plate: ""
    carattr_make: ""
    carattr_trash: ""
    countries47: ""
    emotions: ""
    face: ""
    gender: ""
    glasses3: ""
    headpose: ""
    liveness: ""
    luminance_overexposure: ""
    luminance_underexposure: ""
    medmask3: ""
    quality: faceattr/quality.v1.gpu.fnk
    sharpness: ""
    validity: ""
gpu_device: 0
models_root: /usr/share/findface-data/models
cache_dir: /var/cache/findface/models_cache
listen: :18666
license_ntls_server: 127.0.0.1:3133
fetch:
  enabled: true
  size_limit: 10485760
max_dimension: 6000
allow_cors: false
ticker_interval: 5000
debug: false
prometheus:
  timing_buckets: [0.001, 0.005, 0.01, 0.02, 0.03, 0.05, 0.1, 0.2, 0.3, 0.5, 0.75,
    0.9, 1, 1.1, 1.3, 1.5, 1.7, 2, 3, 5, 10, 20, 30, 50]
  resolution_buckets: [10000, 20000, 40000, 80000, 100000, 200000, 400000, 800000,
    1e+06, 2e+06, 3e+06, 4e+06, 5e+06, 6e+06, 8e+06, 1e+07, 1.2e+07, 1.5e+07, 1.8e+07,
    2e+07, 3e+07, 5e+07, 1e+08]
  faces_buckets: [0, 1, 2, 5, 10, 20, 50, 75, 100, 200, 300, 400, 500, 600, 700, 800,
    900, 1000]

When configuring findface-extraction-api (on CPU or GPU), refer to the following parameters:

  • cheetah -> min_object_size: the minimum size of a face (bbox) guaranteed to be detected. The larger the value, the fewer resources face detection requires.
  • gpu_device: (GPU only) the number of the GPU device used by findface-extraction-api-gpu.
  • license_ntls_server: the IP address and port of the ntls license server.
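
For instance, to skip faces smaller than 50 pixels, use the second GPU device, and point the service to a remote ntls server, you might adjust these parameters as follows (an illustrative fragment of /etc/findface-extraction-api.ini; the values are examples only, keep the rest of the file as shown above):

detectors:
  models:
    cheetah:
      options:
        min_object_size: 50             # detect only faces of at least 50 px
gpu_device: 1                           # GPU only: use the second GPU device
license_ntls_server: 192.168.0.5:3133   # illustrative address of the ntls license server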

Depending on your needs, you will also have to enable recognition models for face features such as gender, age, emotions, glasses3, and beard. Be sure to choose the acceleration type of each model to match the acceleration type of findface-extraction-api: CPU or GPU. Note that findface-extraction-api on CPU can work only with CPU models, while findface-extraction-api on GPU supports both CPU and GPU models. For example, for a CPU instance:

models:
  age: faceattr/age.v1.cpu.fnk
  emotions: faceattr/emotions.v1.cpu.fnk
  face: face/ifruit_320.cpu.fnk
  gender: faceattr/gender.v2.cpu.fnk
  beard: faceattr/beard.v0.cpu.fnk
  glasses3: faceattr/glasses3.v0.cpu.fnk
  medmask3: faceattr/medmask3.v2.cpu.fnk

The following models are available (grouped by face feature and acceleration type, with the corresponding configuration file parameter):

  • face (biometry)
      CPU: face: face/ifruit_320.cpu.fnk or face: face/ifruit_160.cpu.fnk
      GPU: face: face/ifruit_320.gpu.fnk or face: face/ifruit_160.gpu.fnk
  • age
      CPU: age: faceattr/age.v1.cpu.fnk
      GPU: age: faceattr/age.v1.gpu.fnk
  • gender
      CPU: gender: faceattr/gender.v2.cpu.fnk
      GPU: gender: faceattr/gender.v2.gpu.fnk
  • emotions
      CPU: emotions: faceattr/emotions.v1.cpu.fnk
      GPU: emotions: faceattr/emotions.v1.gpu.fnk
  • glasses3
      CPU: glasses3: faceattr/glasses3.v0.cpu.fnk
      GPU: glasses3: faceattr/glasses3.v0.gpu.fnk
  • beard
      CPU: beard: faceattr/beard.v0.cpu.fnk
      GPU: beard: faceattr/beard.v0.gpu.fnk
  • face mask
      CPU: medmask3: faceattr/medmask3.v2.cpu.fnk
      GPU: medmask3: faceattr/medmask3.v2.gpu.fnk
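
For example, a GPU-accelerated instance could enable the same set of features as the CPU example above by using the GPU builds listed in this table:

models:
  age: faceattr/age.v1.gpu.fnk
  emotions: faceattr/emotions.v1.gpu.fnk
  face: face/ifruit_320.gpu.fnk
  gender: faceattr/gender.v2.gpu.fnk
  beard: faceattr/beard.v0.gpu.fnk
  glasses3: faceattr/glasses3.v0.gpu.fnk
  medmask3: faceattr/medmask3.v2.gpu.fnk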

To enable the neural network model that provides the standalone liveness service, specify it in the liveness parameter: faceattr/liveness.alleyn.cpu.fnk for a CPU instance or faceattr/liveness.alleyn.gpu.fnk for a GPU instance.

CPU:

models:
 ...
 liveness: faceattr/liveness.alleyn.cpu.fnk
 ...

GPU:

models:
 ...
 liveness: faceattr/liveness.alleyn.gpu.fnk

Tip

To disable a model, simply pass an empty value to the relevant parameter. Do not remove the parameter itself; otherwise, the system will look for the default model.

models:
  gender: ""
  age: ""
  emotions: ""