extraction-api

The extraction-api service uses neural networks to detect objects in an image and extract their feature vectors. It also recognizes object attributes (for face objects, for example: gender, age, emotions, beard, glasses, face mask).

It interfaces with the sf-api service as follows:

  • Gets original images with objects and normalized object images.

  • Returns the coordinates of the object bounding box and, if requested by sf-api, the feature vector and object attribute data.

Tip

You can use the HTTP API to access extraction-api directly (see the example below).
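
For illustration, here is a minimal sketch of a direct request from Python, assuming the service is reachable at the default listen address :18666. The "/detect" route and the "photo" form field are placeholders invented for this example; consult the HTTP API reference for the actual request format.

    import requests

    # Placeholder values: the route and field name below are assumptions for
    # illustration, not documented extraction-api routes. The address uses the
    # default -listen value (:18666).
    EXTRACTION_API = "http://127.0.0.1:18666"

    with open("photo.jpg", "rb") as image:
        response = requests.post(f"{EXTRACTION_API}/detect", files={"photo": image})

    response.raise_for_status()
    # The response is expected to describe the detected objects: bounding box
    # coordinates and, if requested, feature vectors and attribute data.
    print(response.json())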

Functionality (see the sketch after this list):

  • object detection in an original image (returning the bbox coordinates),

  • object normalization,

  • feature vector extraction from a normalized image,

  • object attribute recognition (gender, age, emotions, vehicle model, vehicle color, etc.)
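
Conceptually, each detected object passes through these stages and ends up with a bounding box plus, optionally, a feature vector and attributes. The following Python sketch is purely illustrative; the class and field names are invented for the example and do not reflect the service's actual response schema.

    from dataclasses import dataclass, field
    from typing import Dict, List, Tuple

    @dataclass
    class DetectedObject:
        """Illustrative container only; not the real extraction-api schema."""
        bbox: Tuple[int, int, int, int]                            # box in the original image
        features: List[float] = field(default_factory=list)       # vector from the normalized image
        attributes: Dict[str, str] = field(default_factory=dict)  # e.g. gender, age, emotions

    # Example: a face found in the original image with two recognized attributes.
    face = DetectedObject(bbox=(10, 20, 110, 140),
                          attributes={"gender": "female", "age": "34"})
    print(face)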

The extraction-api service can be CPU-based (installed from the docker.int.ntl/ntech/universe/extraction-api-cpu image) or GPU-based (installed from the docker.int.ntl/ntech/universe/extraction-api-gpu image). For both CPU- and GPU-accelerated services, configuration is done through environment variables and command line flags. You can also use the extraction-api configuration file. Its content depends on the acceleration type: you can find the default content here for CPU and here for GPU, or obtain it by running the docker command with the -config-template flag.

When configuring extraction-api (on CPU or GPU), refer to the following parameters:

Command line flags (the value type, if any, is shown in parentheses):

-allow-cors
    Add CORS headers to allow cross-origin requests.

-ascend-device (uint)
    Ascend device ID on which to launch inference.

-cache-dir (string)
    Directory for the GPU model cache (default /var/cache/findface/models_cache).

-config (string)
    Path to the configuration file.

-config-template
    Output the configuration template and exit.

-debug
    Enable verbose logging.

-detectors-instances (int)
    DEPRECATED [use -detectors-max-batch-size]. Number of parallel detector instances.

-detectors-max-batch-size (int)
    Upper limit on the detection batch size (default 1). When using the CPU, you can specify max_batch_size: -1; the detector maximum batch size will then correspond to the number of CPU cores.

-extractors-instances (int)
    DEPRECATED [use -extractors-max-batch-size]. Number of parallel extractor instances.

-extractors-max-batch-size (int)
    Upper limit on the extraction batch size (default 1). When using the CPU, you can specify max_batch_size: -1; the extractor maximum batch size will then correspond to the number of CPU cores.

-extractors-models (value)
    Attribute models.

-fetch-enabled
    Enable fetching from remote URLs (default true).

-fetch-size-limit (int)
    File size limit (default 10485760).

-gpu-device (uint)
    GPU device ID on which to launch inference.

-help
    Print help information.

-license-ntls-server (string)
    NTLS (ntechlab license server) address (default 127.0.0.1:3133).

-listen (string)
    IP:port to listen on (default :18666).

-max-dimension (int)
    Maximum dimension (default 6000).

-models-root (string)
    Root directory for model files (.fnk) (default /usr/share/findface-data/models).

-normalizers-instances (int)
    DEPRECATED [use -normalizers-max-batch-size]. Number of parallel normalizer instances.

-normalizers-max-batch-size (int)
    Upper limit on the normalization batch size (default 1). When using the CPU, you can specify max_batch_size: -1; the normalizer maximum batch size will then correspond to the number of CPU cores.

-normalizers-models (value)
    Map of facenkit normalizers and their parameters.

-ticker-interval (int)
    Interval between ticker lines in the log, in milliseconds; 0 disables them (default 5000).

Each command line flag has an environment variable counterpart: the variable for the flag -my-flag has the format CFG_MY_FLAG.
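
A small sketch of this naming rule in Python (the example flags are taken from the table above):

    def env_var_for_flag(flag: str) -> str:
        """Convert a command line flag name to its CFG_* environment variable."""
        return "CFG_" + flag.lstrip("-").replace("-", "_").upper()

    print(env_var_for_flag("-listen"))                    # CFG_LISTEN
    print(env_var_for_flag("-detectors-max-batch-size"))  # CFG_DETECTORS_MAX_BATCH_SIZE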

Priority (see the sketch after this list):

  1. Defaults from source code (lowest priority).

  2. Configuration file.

  3. Environment variables.

  4. Command line flags (highest priority).
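
A minimal sketch of how this precedence resolves, for illustration only (the helper below is not part of the service):

    def resolve(defaults: dict, config_file: dict, env: dict, cli: dict) -> dict:
        """Later sources override earlier ones: defaults < config file < env < CLI."""
        merged = dict(defaults)
        for source in (config_file, env, cli):
            merged.update(source)
        return merged

    # The listen address from the config file is overridden by the command line flag.
    print(resolve(
        defaults={"listen": ":18666"},
        config_file={"listen": ":19000"},
        env={},
        cli={"listen": ":20000"},
    ))  # {'listen': ':20000'}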

If necessary, you can also enable recognition models for face attributes, body and body attributes, vehicle and vehicle attributes, and liveness detection. You can find detailed step-by-step instructions in the sections that follow.

Important

The acceleration type for each model must match the acceleration type of extraction-api: CPU or GPU. Note that extraction-api on CPU works only with CPU models, while extraction-api on GPU supports both CPU and GPU models.