extraction-api
The extraction-api service uses neural networks to detect an object in an image and extract its feature vector. It also recognizes object attributes (for example, gender, age, emotions, beard, glasses, and face mask for face objects).
It interfaces with the sf-api service as follows:
Gets original images with objects and normalized object images.
Returns the coordinates of the object bounding box and, if requested by sf-api, the feature vector and object attribute data.
Tip
You can use the HTTP API to access extraction-api directly.
Functionality:
object detection in an original image (returns the bbox coordinates),
object normalization,
feature vector extraction from a normalized image,
object attribute recognition (gender, age, emotions, vehicle model, vehicle color, etc.)
The extraction-api service can be CPU-based (installed from the docker.int.ntl/ntech/universe/extraction-api-cpu image) or GPU-based (installed from the docker.int.ntl/ntech/universe/extraction-api-gpu image). For both CPU- and GPU-accelerated services, configuration is done through environment variables and command line flags. You can also use the extraction-api configuration file; its content depends on the acceleration type. You can find its default content here for CPU and here for GPU, or by running the docker command with the -config-template flag.
When configuring extraction-api (on CPU or GPU), refer to the following parameters:
Command line flags | Type | Description
---|---|---
 | – | Add CORS headers to allow cross-origin requests.
 | uint | Ascend device ID on which to launch inference.
 | string | Directory for the GPU model cache (default …).
 | string | Path to the config file.
 | – | Output the config template and exit.
 | – | Enable verbose logging.
 | int | DEPRECATED [use …].
 | int | Upper limit on detection batch size. When using the CPU, you can specify …
 | int | DEPRECATED [use …].
 | int | Upper limit on extraction batch size. When using the CPU, you can specify …
 | value | Attribute models.
 | – | Enable fetching from remote URLs (default true).
 | int | File size limit (default 10485760).
 | uint | GPU device ID on which to launch inference.
 | – | Print help information.
 | string | NTLS (NtechLab license server) address (default …).
 | string | IP:port to listen on (default …).
 | int | Maximum dimension (default 6000).
 | string | Root directory for model files (.fnk) (default …).
 | int | DEPRECATED [use …].
 | int | Upper limit on normalization batch size. When using the CPU, you can specify …
 | value | Map of facenkit normalizers and their parameters.
 | int | Interval between ticker lines in the log, in milliseconds (0 = disabled; default 5000).
The environment variable for a flag -my-flag has the format CFG_MY_FLAG.
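This naming convention can be sketched as a small helper. The function name is hypothetical; only the mapping rule (drop the leading dash, uppercase, replace dashes with underscores, prefix with CFG_) comes from the docs.

```python
# Hypothetical helper illustrating the documented flag-to-env-var convention.
def flag_to_env(flag: str) -> str:
    """Convert a command line flag like '-my-flag' to 'CFG_MY_FLAG'."""
    return "CFG_" + flag.lstrip("-").replace("-", "_").upper()

print(flag_to_env("-my-flag"))          # CFG_MY_FLAG
print(flag_to_env("-config-template"))  # CFG_CONFIG_TEMPLATE
```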
Priority:
Defaults from source code (lowest priority).
Configuration file.
Environment variables.
Command line flags (highest priority).
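The precedence order above can be sketched as layered dictionary merging. The setting names and values here are hypothetical examples, not actual extraction-api defaults; only the layering logic mirrors the docs.

```python
# Each later layer overrides the earlier ones:
# source-code defaults < config file < environment variables < command line.
defaults    = {"listen": ":1111", "verbose": False}  # lowest priority
config_file = {"listen": ":2222"}
env_vars    = {"verbose": True}
cli_flags   = {"listen": ":3333"}                    # highest priority

effective = {**defaults, **config_file, **env_vars, **cli_flags}
print(effective)  # {'listen': ':3333', 'verbose': True}
```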
If necessary, you can also enable recognition models for face attributes, body and body attributes, vehicle and vehicle attributes, and liveness detection. You can find the detailed step-by-step instructions in the following sections:
Important
The acceleration type for each model must match the acceleration type of extraction-api: CPU or GPU. Note that extraction-api on CPU can work only with CPU models, while extraction-api on GPU supports both CPU and GPU models.
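The compatibility rule above can be expressed as a short check. The function and argument names are hypothetical; the rule itself (a CPU service accepts only CPU models, a GPU service accepts both) is as stated in the note.

```python
# Sketch of the model/service acceleration compatibility rule.
def model_is_compatible(service_accel: str, model_accel: str) -> bool:
    if service_accel == "cpu":
        return model_accel == "cpu"       # CPU service: CPU models only
    if service_accel == "gpu":
        return model_accel in ("cpu", "gpu")  # GPU service: both
    raise ValueError(f"unknown acceleration type: {service_accel}")

print(model_is_compatible("cpu", "gpu"))  # False
print(model_is_compatible("gpu", "cpu"))  # True
```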