Use Multiple Extraction Models within a Single extraction-api Instance

Starting from version 12.240830, FindFace Server supports using multiple attribute extraction models on installations with a single extraction-api instance. This functionality can be employed in user liveness verification scenarios, for example, when users register on a platform from various devices. Depending on the user's device type (e.g., a mobile phone or a laptop camera), you can configure which neural network model is used for liveness recognition. Along with the default model, you can now configure additional variants.

This section covers the configuration of models for the face_liveness attribute. Use this instruction as an example to configure models for other attributes if needed.

Important

The sf-api service does not work with any variant of the *_emben models other than default. Even if you set up a variant for an *_emben extractor, the sf-api service will only send feature vectors extracted by the default model to the Tarantool gallery.

To start using multiple face_liveness extraction models on an installation with a single extraction-api instance, configure the extraction-api.yaml file.

In this section:

Configure face_liveness Extraction Models in the extraction-api.yaml

In the extraction-api.yaml file, specify a variant for each model of the face_liveness attribute. Do the following:

  1. Open the extraction-api.yaml configuration file.

    sudo vi /path/to/ffserver-12.240830.2/configs/extraction-api.yaml
    
  2. Locate the face_liveness section under extractors → models. The default configuration will look like this:

    GPU

    extractors:
      max_batch_size: 1
      models:
        face_liveness:
          default:
            model: faceattr/faceattr.liveness_web.v1.gpu.fnk
          ...
    

    CPU

    extractors:
      max_batch_size: 1
      models:
        face_liveness:
          default:
            model: faceattr/faceattr.liveness_web.v1.cpu.fnk
          ...
    
  3. Specify a variant for each model of the face_liveness attribute, e.g., mobile and pvn. A variant name may contain only lowercase letters, digits, and underscores. Specify a corresponding neural network model for each variant:

    GPU

    extractors:
      models:
        face_liveness:
          default:
            model: faceattr/faceattr.liveness_web.v1.gpu.fnk
          mobile:
            model: faceattr/faceattr.liveness_mobile.hart.gpu.fnk
          pvn:
            model: faceattr/liveness.pvn.v2.gpu.fnk
          ...
    

    CPU

    extractors:
      models:
        face_liveness:
          default:
            model: faceattr/faceattr.liveness_web.v1.cpu.fnk
          mobile:
            model: faceattr/faceattr.liveness_mobile.hart.cpu.fnk
          pvn:
            model: faceattr/liveness.pvn.v2.cpu.fnk
          ...
    

Once the configuration is complete, specify the liveness_variant parameter in HTTP API requests to liveness-api to select which model to use for a given request.
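
For illustration, such a request might look like the sketch below. Only the liveness_variant parameter is described in this section; the endpoint path, port, and other field names are assumptions, so consult the liveness-api reference for the exact request schema.

    # Hypothetical request sketch: the /check endpoint and the photo field
    # are assumptions; liveness_variant selects the model variant configured
    # in extraction-api.yaml (e.g., default, mobile, or pvn).
    curl -X POST "http://<liveness-api-host>:<port>/check" \
        -F "photo=@selfie.jpg" \
        -F "liveness_variant=mobile"

If liveness_variant is omitted, the default model is expected to be used.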