Deploy Components
This section will guide you through the FindFace Server installation process.
FindFace Server is configurable through a configuration file, various command-line flags, and environment variables.
A reusable configuration file is a YAML/env/INI file containing the names and values of one or more of the command-line flags described below. To use this file, pass its path as the value of the -c / --config flag or the CFG_CONFIG environment variable. The sample configuration file can be used as a starting point for creating a new configuration file as needed.
Options set on the command line take precedence over those from the environment.
The environment variable format for a flag --my-flag is CFG_MY_FLAG. This rule applies to all flags.
The instructions below are given for reference only and describe a base configuration. See Components in Depth to build a configuration suited to a specific project. For the latest available flags, use the --help flag:
docker run --rm -ti --name <container_name> docker.int.ntl/ntech/universe/<service_name>:ffserver-11.240325 --help
Create a Docker Network
To create a docker network, use the following command:
docker network create --attachable server
The network name must be unique. The --attachable option enables manual container attachment. Here, server is the name of your network.
Tip
Useful commands:
List networks.
docker network ls
Display detailed information on your network.
docker network inspect server
When a container is created and connected to a created network, Docker automatically creates a DNS record for that container, using the container name as the hostname and the IP address of the container as the record’s value. This enables other containers on the same network to access each other by name, rather than needing to know the IP address of the target container.
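As a quick, optional check of this name resolution, you can start a throwaway container on the same network and ping another container by name. This is a sketch only; it assumes the public docker.io/library/busybox image is available, and etcd-1 (created in the next step) stands for any container already running on the server network.
docker run --rm -ti --network server docker.io/library/busybox ping -c 1 etcd-1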
Running etcd under Docker
etcd is third-party software that implements a distributed key-value store for video-manager. It is used as a coordination service in the distributed system, providing the video object detector with fault tolerance.
Run etcd under Docker:
docker run -tid --name etcd-1 --network server --restart always \
    quay.io/coreos/etcd:v3.5.11 /usr/local/bin/etcd \
    -advertise-client-urls http://0.0.0.0:2379 \
    -listen-client-urls http://0.0.0.0:2379
This docker run command exposes the etcd client API over port 2379 and runs etcd v3.5.11. You can specify a different version after consulting with our specialists.
Use docker options:
--name: specify a custom identifier for a container, for example etcd-1.
--network: connect a container to a network named server.
--restart: restart policy to apply when a container exits. Set always to always restart the container if it stops.
Use configuration flags for etcd:
-advertise-client-urls: list of this member's client URLs to advertise to the public. Default: http://localhost:2379.
-listen-client-urls: list of URLs to listen on for client traffic. This flag tells etcd to accept incoming requests from clients on the specified scheme://IP:port combinations. The scheme can be either http or https. If 0.0.0.0 is specified as the IP, etcd listens on the given port on all interfaces. If an IP address is given as well as a port, etcd will listen on the given port and interface. Multiple URLs may be used to specify a number of addresses and ports to listen on; etcd will respond to requests from any of the listed addresses and ports. Default: http://localhost:2379.
See etcd for more information.
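To verify that etcd is healthy before deploying video-manager, you can query it with the etcdctl client bundled in the same image. This is an optional check and assumes the container name etcd-1 from the example above.
docker exec -ti etcd-1 etcdctl --endpoints=http://127.0.0.1:2379 endpoint health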
Running memcached under Docker
memcached is third-party software that implements a distributed memory caching system. It is used by sf-api as temporary storage for extracted object feature vectors before they are written to the feature vector database powered by Tarantool.
Run memcached under Docker:
docker run -tid --name memcached --restart always --network server \
    docker.io/library/memcached:1.5.22 -v -m '1024' -I 16m -u memcache
This docker run command exposes the memcached client API over port 11211 and runs memcached v1.5.22. You can specify a different version after consulting with our specialists.
Use docker options:
--name: specify a custom identifier for a container, for example memcached.
--network: connect a container to a network named server.
--restart: restart policy to apply when a container exits. Set always to always restart the container if it stops.
Use configuration flags for memcached:
-v: be verbose during the event loop; print out errors and warnings.
-m: use 1 GB of memory to store features.
-I: override the default size of each slab page.
-u: assume the identity of memcache.
Running redis under Docker
redis is third-party software that implements a distributed memory caching system. It is used by sf-api as temporary storage for extracted object feature vectors before they are written to the feature vector database powered by Tarantool.
Run redis under Docker:
docker run -tid --name redis --restart always --network server \
    --volume /opt/ffserver/redis-data:/data \
    docker.io/redis:7 --appendonly no --save "" --maxmemory 1073741824 --maxmemory-policy allkeys-lru
This docker run command exposes the redis client API over port 6379 and runs redis version 7. You can specify a different version after consulting with our specialists.
Use docker options:
--name: specify a custom identifier for a container, for example redis.
--network: connect a container to a network named server.
--restart: restart policy to apply when a container exits. Set always to always restart the container if it stops.
Use configuration flags for redis:
--appendonly: asynchronously dump the dataset on disk.
--save: save a snapshot of the database to disk.
--maxmemory: limit memory usage to the specified number of bytes.
--maxmemory-policy: how redis will select what to remove when maxmemory is reached.
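To verify the redis container and confirm the memory limit it applied, you can use the redis-cli bundled in the image. This is an optional check and assumes the container name redis from the example above.
docker exec -ti redis redis-cli ping
docker exec -ti redis redis-cli config get maxmemory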
Running postgres under Docker
postgres is a third-party object-relational database system. It is used by counter as storage for extracted object feature vectors and other information.
Run postgres under Docker:
docker run -tid --name db-postgres --restart always --network server \
    --env POSTGRES_USER=login \
    --env POSTGRES_PASSWORD=password \
    --env POSTGRES_DB=ntech \
    --volume /opt/ffserver/postgres-data:/var/lib/postgresql \
    docker.io/postgres:14.12
This docker run command exposes the postgres client API over port 5432 and runs postgres version 14.12. You can specify a different version after consulting with our specialists.
Use docker options:
--name: specify a custom identifier for a container, for example db-postgres.
--network: connect a container to a network named server.
--restart: restart policy to apply when a container exits. Set always to always restart the container if it stops.
Use environment variables for postgres:
POSTGRES_USER and POSTGRES_PASSWORD: the credentials used to authenticate with the database.
POSTGRES_DB: the name of the database.
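To confirm that PostgreSQL is up and the ntech database is reachable, you can run psql inside the container. This is an optional check and assumes the credentials and container name from the example above.
docker exec -ti db-postgres psql -U login -d ntech -c '\conninfo'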
Provide Licensing
You receive a license file from your NtechLab manager. If you opt for the on-premise licensing, we will also send you a USB dongle.
The FindFace Server licensing is provided as follows:
Follow the steps required for your type of licensing. These steps are described here: Licensing
Deploy ntls, the license server of FindFace Server.
docker run -tid --name ntls --restart always --network server \
    --env CFG_LISTEN=0.0.0.0:3133 \
    --env CFG_UI=0.0.0.0:3185 \
    --volume /opt/ffserver/licenses:/ntech/license \
    --publish 127.0.0.1:3185:3185 \
    docker.int.ntl/ntech/universe/ntls:ffserver-11.240325
Use docker options:
--name: specify a custom identifier for a container, for example ntls.
--network: connect a container to a network named server. You should use the host namespace instead of server if your license type requires it. But be aware that in this case other services won't be able to connect to ntls using the container name ntls. As a workaround, you can specify the IP address of the ntls host instead of the ntls container name in the configuration files of licensable components.
--restart: restart policy to apply when a container exits. Set always to always restart the container if it stops.
--publish 127.0.0.1:3185:3185: this allows you to connect to the ntls API from the host with http://localhost:3185 (if you need to access not only from the local machine, remove 127.0.0.1:). This option is unnecessary if --net=host is specified.
Use environment variables:
CFG_LISTEN: address to accept incoming client connections (IP:PORT) (-l flag).
CFG_UI: bind address for the embedded UI (IP:PORT) (--ui flag).
The directory you chose to store the licenses must be mounted to /ntech/license. In the example, /opt/ffserver/licenses is used as such a directory.
Important
There must be only one ntls instance in each FindFace Server installation.
Tip
In the ntls configuration file, you can change the license folder and specify your proxy server IP address if necessary. You can also change the ntls web interface remote access settings. See ntls for details.
Upload the license file via the ntls web interface in one of the following ways:
Navigate to the ntls web interface at http://<NTLS_IP_address>:3185/#/ and upload the license file.
Tip
Later on, use the ntls web interface to consult your license information, and upgrade or extend your license.
Directly put the license file into the license folder (by default, /ntech/license; the folder can be changed in the configuration file).
For the on-premise licensing, insert the USB dongle into a USB port.
If the licensable components are installed on remote hosts, specify the IP address of the ntls host in their configuration files. See extraction-api, tntapi, Video Object Detection: video-manager and video-worker for details.
Deploy extraction-api
The extraction-api is a service that uses neural networks to detect an object in an image and extract its feature vector. It also recognizes object attributes (for example, gender, age, emotions, beard, glasses, and face mask for face objects).
To deploy the extraction-api component, do the following:
Important
This component requires the installation of neural network models. Load the desired models manually and mount a volume to the container.
Note
To deploy the extraction-api service with acceleration on the GPU, use the extraction-api-gpu image. Don't forget to use the --runtime=nvidia flag in the docker run command. You can specify the identifier of the GPU device on which inference will run using the -gpu-device flag or the CFG_GPU_DEVICE environment variable.
Create a default extraction-api configuration file.
docker run --rm -ti docker.int.ntl/ntech/universe/extraction-api-cpu:ffserver-11.240325 \
    --config-template > /opt/ffserver/configs/extraction-api.yaml
/opt/ffserver/configs: the directory on the host to store the configuration file.
Open the extraction-api.yaml configuration file.
sudo vi /opt/ffserver/configs/extraction-api.yaml
Enable recognition models, subject to your needs. Be sure to choose the right acceleration type for each model, matching the acceleration type of extraction-api: CPU or GPU. Be aware that extraction-api on CPU can work only with CPU models, while extraction-api on GPU supports both CPU and GPU models.
detectors:
  models:
    jasmine:
      aliases:
        - face
      model: detector/facedet.jasmine_fast.004.cpu.fnk
      options:
        min_object_size: 32
        resolutions: [2048x2048]
objects:
  face:
    base_normalizer: facenorm/crop2x.v2_maxsize400.cpu.fnk
    quality_attribute: face_quality
normalizers:
  crop2x:
    model: facenorm/crop2x.v2_maxsize400.cpu.fnk
  norm200:
    model: facenorm/bee.v3.cpu.fnk
extractors:
  models_root: /usr/share/findface-data/models
  models:
    face_emben:
      default:
        model: face/nectarine_m_160.cpu.fnk
    face_age:
      default:
        model: faceattr/faceattr.age.v3.cpu.fnk
    face_quality:
      default:
        model: faceattr/faceattr.quality.v5.cpu.fnk
Configure other parameters, if needed. For example, enable or disable image fetching from a remote server for certain kinds of requests.
fetch:
  enabled: true
  size_limit: 10485760
(Optional) Enable emben cutting.
extractors:
  models:
    face_emben:
      default:
        model: face/nectarine_xl_320.gpu.fnk
        params:
          emben_cut:
            - 64
Currently, only one value in the emben_cut list is supported.
The value must be divisible by 16 without a remainder.
Make sure the model supports emben cutting. Be sure to consult our technical experts beforehand (support@ntechlab.com).
Specify an upper limit on the detection/extraction/normalization batch size, if needed.
detectors:
  max_batch_size: 1
extractors:
  max_batch_size: 1
normalizers:
  max_batch_size: 1
Note
The max_batch_size value determines the maximum number of images that are processed in parallel on the GPU or CPU. For GPU, max_batch_size: 8 or 16 is recommended. For CPU, you can specify max_batch_size: -1, which means max_batch_size is equal to the number of CPU cores.
Warning
The *-instances parameter is DEPRECATED and its fields are outdated. It indicated how many extraction-api instances were used; the number of instances was determined by your license. The value 0 does not mean that this number is equal to the number of CPU cores!
When you have edited the configuration file, run the extraction-api component with the docker run command.
docker run -tid --name extraction-api --restart always --network server \
    --env CFG_LICENSE_NTLS_SERVER=ntls:3133 \
    --volume /opt/ffserver/models:/usr/share/findface-data/models \
    --volume /opt/ffserver/configs/extraction-api.yaml:/extraction-api.yaml \
    --publish 127.0.0.1:18666:18666 \
    docker.int.ntl/ntech/universe/extraction-api-cpu:ffserver-11.240325 \
    --config /extraction-api.yaml
Use docker options:
--name: specify a custom identifier for a container, for example extraction-api.
--network: connect a container to a network named server.
--restart: restart policy to apply when a container exits. Set always to always restart the container if it stops.
Use environment variables and configuration flags:
CFG_LICENSE_NTLS_SERVER=ntls:3133: host and port of the ntls container. See Licensing.
/opt/ffserver/configs: the directory on the host to store the configuration file.
/opt/ffserver/models: the directory on the host to store the models.
--publish 127.0.0.1:18666:18666: this allows you to connect to the extraction-api API from the host with http://localhost:18666 (if you need to access not only from the local machine, remove 127.0.0.1:).
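After starting extraction-api, it is worth checking that the container stays up and reviewing its log for model loading or licensing errors. This is an optional check; it makes no assumptions beyond the container name used in the example above.
docker ps --filter name=extraction-api --format '{{.Names}}: {{.Status}}'
docker logs --tail 20 extraction-api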
Deploy tntapi
The tntapi component provides interaction between the sf-api component and the Tarantool database for efficient storage of and search by object feature vectors. To increase search speed, multiple tntapi shards can be created on each Tarantool host; running them concurrently leads to a remarkable increase in performance.
Each shard can handle up to approximately 10,000,000 faces. In the case of the standalone deployment, you need only one shard (already created by default). In a cluster environment, the number of shards has to be calculated depending on your hardware configuration and database size (see details below).
To deploy the tntapi component, do the following:
Create a directory for snapshots and xlogs.
sudo mkdir -p /opt/ffserver/tnt/001-01/{snapshots,xlogs}
Run the docker command and configure the parameters:
docker run -tid --name tnt-1-1 --restart always --network server \
    --env CFG_LISTEN_HOST=0.0.0.0 \
    --env CFG_NTLS=ntls:3133 \
    --env TT_LISTEN=0.0.0.0:32001 \
    --env TT_MEMTX_MEMORY=$((1024 * 1024 * 1024)) \
    --volume /opt/ffserver/tnt/001-01:/opt/ntech/var/lib/tarantool/default \
    docker.int.ntl/ntech/universe/tntapi:ffserver-11.240325
Use docker options:
--name: specify a custom identifier for a container, for example tnt-1-1. Throughout this page we use the tnt-<shard>-<replica> naming convention for tntapi containers.
--network: connect a container to a network named server.
--restart: restart policy to apply when a container exits. Set always to always restart the container if it stops.
By default, the configuration is done via environment variables (ENV), but it is also possible to use the FindFace.lua configuration file.
CFG_LISTEN_HOST=0.0.0.0: host for the public HTTP API.
CFG_NTLS=ntls:3133: host and port of the ntls server.
TT_LISTEN=0.0.0.0:32001: binary host/port, used for admin operations and replication.
TT_MEMTX_MEMORY=$((1024 * 1024 * 1024)): the maximum memory usage in bytes.
/opt/ffserver/tnt/: the directory on the host to store tntapi data.
If needed, configure via the FindFace.lua file:
Download the FindFace.lua file and put it into some directory on the host (for example, /opt/ffserver/configs). Open the configuration file:
sudo vi /opt/ffserver/configs/FindFace.lua
Edit the maximum memory usage. The memory usage must be set in bytes, depending on the number of faces the shard handles, at a rate of roughly 1,280 bytes per face. For example, the value 1.2*1024*1024*1024 corresponds to 1,000,000 faces:
memtx_memory = 1.2 * 1024 * 1024 * 1024,
When using the default FindFace.lua configuration file, you can define a set of meta_scheme/meta_indexes in the global variable cfg_spaces.
Create a database structure to store the face recognition results. The structure is created as a set of space fields. Describe each field with the following parameters:
id: field id (starting from 1!);
name: field name, must be the same as the name of a relevant object parameter, string;
field_type: data type (unsigned | string | set[string] | set[unsigned]);
default: field default value. If a default value exceeds 1e14 – 1, use a string data type to specify it, for example, "123123.." instead of 123123..;
meta_indexes: the names of the meta fields to build the index on.
You can find the custom tnt-schema.lua here.
Mount your edited tnt-scheme.lua file into the container when running the tntapi component:
docker run ... \
    --volume /opt/ffserver/configs/tnt-scheme.lua:/tnt-scheme.lua \
    ...
Use the environment variable CFG_EXTRA_LUA for the file tnt-scheme.lua:
CFG_EXTRA_LUA='dofile("/tnt-scheme.lua")'
Mount your edited FindFace.lua file into the container when running the tntapi component; otherwise, the default one will be used.
docker run ... \
    --volume /opt/ffserver/configs/FindFace.lua:/etc/tarantool/instances.enabled/FindFace.lua \
    ...
Deploy upload
The upload service is storage for normalized object images and video chunks. FindFace Server uses the saved normalized object images to migrate the database to the EmbeN of another neural network, and video chunks are used to implement the Video Recorder. The upload API is based on the WebDAV protocol.
Tip
If you don’t need this functionality, skip this step.
Install upload as follows:
docker run -tid --name upload --restart always --network server \
    -v /opt/ffserver/upload:/var/lib/ffupload \
    docker.int.ntl/ntech/universe/upload:ffserver-11.240325
Use docker options:
--name: specify a custom identifier for a container, for example upload.
--network: connect a container to a network named server.
--restart: restart policy to apply when a container exits. Set always to always restart the container if it stops.
/var/lib/ffupload: directory to store the data.
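Because the upload API is WebDAV-based, you can verify it by uploading and fetching a small test file from another container on the server network. This is an optional sketch; it assumes the public docker.io/curlimages/curl image is available and that upload accepts PUT requests on port 3333 under /uploads/, as referenced by the sf-api configuration below.
docker run --rm --network server --entrypoint sh docker.io/curlimages/curl:8.8.0 \
    -c 'echo test > /tmp/t.txt && curl -sf -T /tmp/t.txt http://upload:3333/uploads/t.txt && curl -sf http://upload:3333/uploads/t.txt'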
Deploy sf-api
Note
sf-api requires third-party software such as memcached. Deploy it first.
The sf-api is a service that implements an internal, user-friendly HTTP API for extraction-api and tntapi. That is, it makes it possible to detect an object in an image and extract its feature vector (or do so from normalized object images), as well as to save it in the Tarantool database.
To deploy the sf-api component, do the following:
Create a default sf-api configuration file.
docker run --rm -ti docker.int.ntl/ntech/universe/sf-api:ffserver-11.240325 \
    --config-template > /opt/ffserver/configs/sf-api.yaml
/opt/ffserver/configs/: the directory on the host to store the configuration file.
Open the /opt/ffserver/configs/sf-api.yaml configuration file.
sudo vi /opt/ffserver/configs/sf-api.yaml
Specify the addresses and ports of the extraction-api container (extraction-api -> url, in the format http://<domain_or_ip>:<port>) and of the tntapi shards (storage-api -> shards -> master, in the format http://<domain_or_ip>:<port>/v2/).
extraction-api:
  url: http://extraction-api:18666
...
storage-api:
  shards:
    - master: http://tnt-1-1:8001/v2/
      slaves: []
    - master: http://tnt-2-1:8001/v2/
      slaves: []
  read_slave_only: false
  read_slave_first: false
  galleries_read_slave_first: false
  max_slave_attempts: 2
  cooldown: 2s
....
The main part of the detected object data is stored in the sf-api cache.
Depending on the load profile, the cache may be:
inmemory
cache:
  type: inmemory
  inmemory:
    size: 16384
memcache (see Memcached)
cache:
  type: memcache
  memcache:
    nodes:
      - memcached:11211
redis
cache:
  type: redis
  redis: # https://redis.io/
    nodes:
      - redis:6379
    timeout: 5s
    network: tcp
    password: ""
    db: 0
The sf-api service implements an HTTP API to access FindFace Server functions such as object detection and object recognition. To allow migration from one EmbeN model to another, sf-api provides an API for getting normalized object images.
One normalized object image can take from one to hundreds of kilobytes, so its storage should be several times larger than the storage for vectors.
Two storage types are currently supported:
webdav
normalized-storage:
  type: webdav
  enabled: true
  webdav: # <http_client options>
    upload-url: http://upload:3333/uploads/
s3 storage
normalized-storage:
  type: s3
  enabled: true
  s3:
    endpoint: ""
    bucket-name: ""
    access-key: ""
    secret-access-key: ""
    secure: true
    region: ""
    public-url: ""
    operation-timeout: 30
If a vector migration is not needed, you can disable the storage of normalized object images:
normalized-storage:
  enabled: false
When you have edited the configuration file, run the sf-api component with the docker run command.
docker run -tid --name sf-api --restart always --network server \
    --volume /opt/ffserver/configs/sf-api.yaml:/sf-api.yaml \
    --publish 127.0.0.1:18411:18411 \
    docker.int.ntl/ntech/universe/sf-api:ffserver-11.240325 \
    --config /sf-api.yaml
Use docker options:
--name: specify a custom identifier for a container, for example sf-api.
--restart: restart policy to apply when a container exits. Set always to always restart the container if it stops.
/opt/ffserver/configs: the directory on the host to store the configuration file.
--network: connect a container to a network named server.
--publish 127.0.0.1:18411:18411: this allows you to connect to the sf-api API from the host with http://localhost:18411 (if you need to access not only from the local machine, remove 127.0.0.1:).
Use sf-api options:
--config: path to the sf-api.yaml configuration file.
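After sf-api starts, you can check from the host that the published port responds. This is an optional reachability check only; the actual request endpoints are described in the sf-api reference, so nothing beyond the HTTP status code of the root path is assumed here.
curl -s -o /dev/null -w '%{http_code}\n' http://localhost:18411/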
Deploy storage-api-proxy (optional)
The storage-api-proxy is an optional component that proxies requests to a specified service that implements storage-api (for example, the tntapi service or other storage-api-proxy services).
To run storage-api-proxy, do the following:
Create a default storage-api-proxy configuration file.
docker run --rm -ti --entrypoint "/storage-api-proxy" docker.int.ntl/ntech/universe/sf-api:ffserver-11.240325 \
    --config-template > /opt/ffserver/configs/storage-api-proxy.yaml
/opt/ffserver/configs/: the directory on the host to store the configuration file.
Modify the configuration file depending on your needs; refer to the storage-api parameters:
storage-api: # <http_client options>
  shards:
    - master: http://tnt-1-1:8001/v2/
      slaves: [] # an array of URLs to the other shard nodes
    - master: http://tnt-2-1:8001/v2/
      slaves: []
  read_slave_only: false # If true: ignore master on read requests. read_slave_first will be ignored.
  read_slave_first: false # If true: prefer slaves over master for requests.
  galleries_read_slave_first: false # If true: prefer slaves over master for get/list galleries requests.
  max_slave_attempts: 2 # Give up after trying to read from max_slave_attempts slaves (default 2).
  cooldown: 2s # Cooldown timeout after communication error.
Run storage-api-proxy with the configuration file.
docker run --rm -ti --network server --entrypoint "/storage-api-proxy" \
    --volume /opt/ffserver/configs/storage-api-proxy.yaml:/storage-api-proxy.yaml \
    --publish 127.0.0.1:18411:18411 \
    docker.int.ntl/ntech/universe/sf-api:ffserver-11.240325 \
    --config /storage-api-proxy.yaml
Use docker options:
/opt/ffserver/configs: the directory on the host to store the configuration file.
--network: connect a container to a network named server.
--publish 127.0.0.1:18411:18411: this allows you to connect to the storage-api-proxy API from the host with http://localhost:18411 (if you need to access not only from the local machine, remove 127.0.0.1:).
Use storage-api-proxy options:
--config: path to the storage-api-proxy.yaml configuration file.
Deploy Video Object Detection
Video object detection is provided by the video-manager and video-worker services.
Note
video-manager requires third-party software such as etcd. Deploy it first.
To deploy the video-manager component, run its image and specify the environment variables:
docker run -tid --name video-manager-1 --restart always --network server \
    --env CFG_ETCD_ENDPOINTS=http://etcd-1:2379,http://etcd-2:2379,http://etcd-3:2379 \
    --env CFG_RPC_LISTEN=:18811 \
    --env CFG_MASTER_SELF_URL=video-manager-1:18811 \
    --env CFG_MASTER_SELF_URL_HTTP=video-manager-1:18810 \
    --publish 127.0.0.1:18810:18810 \
    docker.int.ntl/ntech/universe/video-manager:ffserver-11.240325
Use docker options:
--name: specify a custom identifier for a container, for example video-manager-1.
--network: connect a container to a network named server.
--restart: restart policy to apply when a container exits. Specify always to always restart the container if it stops.
Environment variables and flags:
CFG_ETCD_ENDPOINTS=http://etcd-1:2379,http://etcd-2:2379,http://etcd-3:2379: list of etcd URLs, where etcd-1 is the etcd container name.
CFG_RPC_LISTEN=:18811: allow accessing the video-manager service from any IP address.
CFG_MASTER_SELF_URL=video-manager-1:18811: self URL.
CFG_MASTER_SELF_URL_HTTP=video-manager-1:18810: self HTTP URL.
--publish 127.0.0.1:18810:18810: this allows you to connect to the video-manager API from the host with http://localhost:18810 (if you need to access not only from the local machine, remove 127.0.0.1:).
If needed, configure the following parameters using environment variables/flags or through a configuration file:
If you need to deploy video-worker instances on remote hosts, specify --publish 18811:18811 and, in CFG_MASTER_SELF_URL and CFG_MASTER_SELF_URL_HTTP, replace the container's hostname with the hostname or IP address of the host. A sketch of running video-worker on a remote host is given at the end of this section.
If needed, in the router_url parameter, specify the IP address and port of the facerouter component (if installed), which will receive detected faces from video-worker (env: CFG_ROUTER_URL).
--env CFG_ROUTER_URL=http://<facerouter-ip>:8085/
Note
<facerouter-ip> must be substituted with the IP address of the facerouter component.
If you need to run multiple video-manager instances and you need the license limit on the number of cameras to be checked by video-manager, specify the environment variables CFG_NTLS_ENABLED=true and CFG_NTLS_URL=ntls:3185, where video-manager can send requests.
If necessary, configure the video processing settings that apply to all video streams in the system.
Tip
You can skip this step: when creating a job for video-manager, you will be able to individually configure processing settings for each video stream (see Video Object Detection API).
As an alternative, you can use the configuration file:
Create a default video-manager.yaml configuration file.
docker run --rm -ti docker.int.ntl/ntech/universe/video-manager:ffserver-11.240325 \
    --config-template > /opt/ffserver/configs/video-manager.yaml
/opt/ffserver/configs/: the directory on the host to store the configuration file.
Open the /opt/ffserver/configs/video-manager.yaml configuration file and specify the parameters as above.
sudo vi /opt/ffserver/configs/video-manager.yaml
Run the video-manager service using the --config /video-manager.yaml flag with your modified configuration file mounted into the container.
docker run -tid --name video-manager-1 --restart always --network server \
    -v /opt/ffserver/configs/video-manager.yaml:/video-manager.yaml \
    docker.int.ntl/ntech/universe/video-manager:ffserver-11.240325 --config /video-manager.yaml
To deploy the video-worker component, do the following:
Note
To deploy the video-worker service with acceleration on the GPU, use the video-worker-gpu image. Don't forget to use the --runtime=nvidia flag in the docker run command.
Create a default video-worker.yaml configuration file.
docker run --rm -ti docker.int.ntl/ntech/universe/video-worker-cpu:ffserver-11.240325 \
    --config-template > /opt/ffserver/configs/video-worker.yaml
/opt/ffserver/configs/: the directory on the host to store the configuration file.
Open the /opt/ffserver/configs/video-worker.yaml configuration file.
sudo vi /opt/ffserver/configs/video-worker.yaml
Fill out the values for all required parameters. Make sure that detectors, normalizers, and extractors are specified in the models section and the objects section has corresponding values. Below is an example of how the section should look; it may vary depending on the recognition objects that you have selected.
models:
  cache_dir: /var/cache/findface/models_cache
  detectors:
    car:
      fnk_path: /usr/share/findface-data/models/detector/cardet.kali.008.cpu.fnk
      min_size: 60
    body:
      fnk_path: /usr/share/findface-data/models/detector/bodydet.kali.021.cpu.fnk
      min_size: 60
    face:
      fnk_path: /usr/share/findface-data/models/detector/facedet.jasmine_fast.004.cpu.fnk
      min_size: 60
  normalizers:
    car_norm:
      fnk_path: /usr/share/findface-data/models/facenorm/cropbbox.v2.cpu.fnk
    car_norm_quality:
      fnk_path: /usr/share/findface-data/models/facenorm/cropbbox.v2.cpu.fnk
    body_norm:
      fnk_path: /usr/share/findface-data/models/facenorm/cropbbox.v2.cpu.fnk
    body_norm_quality:
      fnk_path: /usr/share/findface-data/models/facenorm/cropbbox.v2.cpu.fnk
    face_norm:
      fnk_path: /usr/share/findface-data/models/facenorm/crop2x.v2_maxsize400.cpu.fnk
    face_norm_quality:
      fnk_path: /usr/share/findface-data/models/facenorm/crop1x.v2_maxsize400.cpu.fnk
  extractors:
    car_quality:
      fnk_path: /usr/share/findface-data/models/carattr/carattr.quality.v1.cpu.fnk
      normalizer: car_norm_quality
    body_quality:
      fnk_path: /usr/share/findface-data/models/pedattr/pedattr.quality.v0.cpu.fnk
      normalizer: body_norm_quality
    face_quality:
      fnk_path: /usr/share/findface-data/models/faceattr/faceattr.quality.v5.cpu.fnk
      normalizer: face_norm_quality
objects:
  car:
    normalizer: car_norm
    quality: car_quality
    track_features: ''
  body:
    normalizer: body_norm
    quality: body_quality
    track_features: ''
  face:
    normalizer: face_norm
    quality: face_quality
    track_features: ''
Run the video-worker image and specify the environment variables. Mount the volumes with models and the modified configuration file.
docker run -tid --name video-worker --network server --restart always \
    --env CFG_MGR_STATIC=video-manager-1:18811 \
    --env CFG_NTLS_ADDR=ntls:3133 \
    --volume /opt/ffserver/models:/usr/share/findface-data/models \
    --volume /opt/ffserver/configs/video-worker.yaml:/video-worker.yaml \
    docker.int.ntl/ntech/universe/video-worker-cpu:ffserver-11.240325 \
    --config /video-worker.yaml
Use docker options:
--name: specify a custom identifier for a container, for example video-worker.
--network: connect a container to a network named server.
--restart: restart policy to apply when a container exits. Specify always to always restart the container if it stops.
Environment variables and flags:
CFG_MGR_STATIC=video-manager-1:18811: video-manager gRPC ip:port, which provides video-worker with settings and the video stream list.
CFG_NTLS_ADDR=ntls:3133: ntls server ip:port.
/opt/ffserver/models: the directory on the host to store models.
/opt/ffserver/configs: the directory on the host to store the configuration file.
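If video-worker runs on a different host than video-manager (see the note on remote hosts above), the manager's RPC port must be published and the worker must address the manager by the host's IP rather than by container name. A minimal sketch, assuming 192.0.2.10 is a placeholder for the video-manager host address, <ntls-host-ip> is a placeholder for the ntls host, and the remaining variables stay as in the examples above:
# On the video-manager host: publish the RPC port and advertise the host IP.
docker run -tid --name video-manager-1 --restart always --network server \
    --env CFG_ETCD_ENDPOINTS=http://etcd-1:2379 \
    --env CFG_RPC_LISTEN=:18811 \
    --env CFG_MASTER_SELF_URL=192.0.2.10:18811 \
    --env CFG_MASTER_SELF_URL_HTTP=192.0.2.10:18810 \
    --publish 18811:18811 --publish 127.0.0.1:18810:18810 \
    docker.int.ntl/ntech/universe/video-manager:ffserver-11.240325
# On the remote host: point the worker at the manager's and ntls' host addresses.
docker run -tid --name video-worker --restart always \
    --env CFG_MGR_STATIC=192.0.2.10:18811 \
    --env CFG_NTLS_ADDR=<ntls-host-ip>:3133 \
    --volume /opt/ffserver/models:/usr/share/findface-data/models \
    --volume /opt/ffserver/configs/video-worker.yaml:/video-worker.yaml \
    docker.int.ntl/ntech/universe/video-worker-cpu:ffserver-11.240325 \
    --config /video-worker.yaml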
Deploy Video Recorder: video-storage and video-streamer-cpu
The Video Recorder operation requires third-party software, MongoDB. Install it as follows:
docker run -tid --name mongo --restart always --network server \
    --volume /opt/ffserver/mongo-data:/data/db \
    docker.io/library/mongo:4.4
To deploy the video-storage component, run its image and specify the environment variables:
docker run -tid --name video-storage --restart always --network server \
    --env CFG_STREAMER_ENDPOINTS=video-streamer:9000 \
    --env CFG_CHUNK_STORAGE_TYPE=webdav \
    --env CFG_CHUNK_STORAGE_WEBDAV_UPLOAD_URL=http://upload:3333/uploads/ \
    --env CFG_META_STORAGE_MONGO_URI=mongodb://mongo \
    --publish 127.0.0.1:18611:18611 \
    docker.int.ntl/ntech/universe/video-storage:ffserver-11.240325
Use docker options:
--name: specify a custom identifier for a container, for example video-storage.
--network: connect a container to a network named server.
--restart: restart policy to apply when a container exits. Specify always to always restart the container if it stops.
Environment variables and flags:
CFG_STREAMER_ENDPOINTS=video-streamer:9000: list of streamer endpoint URLs.
CFG_CHUNK_STORAGE_TYPE=webdav: set webdav as the chunk storage type.
CFG_CHUNK_STORAGE_WEBDAV_UPLOAD_URL=http://upload:3333/uploads/: WebDAV storage URL.
CFG_META_STORAGE_MONGO_URI=mongodb://mongo: meta storage MongoDB URI.
--publish 127.0.0.1:18611:18611: this allows you to connect to the video-storage API from the host with http://localhost:18611 (if you need to access not only from the local machine, remove 127.0.0.1:).
To deploy the video-streamer-cpu component, run its image and specify the environment variables:
docker run -tid --name video-streamer --restart always --network server \
    --env CFG_VIDEO_STORAGE_URL=http://video-storage:18611 \
    --volume /opt/ffserver/video-streamer:/var/cache/findface/video-streamer \
    --publish 127.0.0.1:9000:9000 \
    docker.int.ntl/ntech/universe/video-streamer-cpu:ffserver-11.240325
Use docker options:
--name: specify a custom identifier for a container, for example video-streamer.
--network: connect a container to a network named server.
--restart: restart policy to apply when a container exits. Specify always to always restart the container if it stops.
Environment variables and flags:
CFG_VIDEO_STORAGE_URL=http://video-storage:18611: URL of the video-storage API.
--volume /opt/ffserver/video-streamer:/var/cache/findface/video-streamer: mount the cache directory as a volume.
--publish 127.0.0.1:9000:9000: this allows you to connect to the video-streamer API from the host with http://localhost:9000 (if you need to access not only from the local machine, remove 127.0.0.1:).
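To verify the Video Recorder dependencies, you can ping MongoDB from inside its container and check the published video-storage port from the host. This is an optional sketch; the mongo shell is bundled in the mongo:4.4 image, and the port mapping is taken from the examples above.
docker exec -ti mongo mongo --quiet --eval 'db.runCommand({ ping: 1 })'
curl -s -o /dev/null -w '%{http_code}\n' http://localhost:18611/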
Deploy liveness-api
To deploy the liveness-api component, run its image and specify the environment variables:
docker run -tid --name liveness-api --restart always --network server \
    --env CFG_EXTRACTION_API_EXTRACTION_API=http://extraction-api:18666 \
    --env CFG_SF_API_SF_API=http://sf-api:18411 \
    --publish 127.0.0.1:18301:18301 \
    docker.int.ntl/ntech/universe/liveness-api:ffserver-11.240325
Use docker options:
--name: specify a custom identifier for a container, for example liveness-api.
--network: connect a container to a network named server.
--restart: restart policy to apply when a container exits. Specify always to always restart the container if it stops.
Environment variables and flags:
CFG_EXTRACTION_API_EXTRACTION_API=http://extraction-api:18666: URL of the extraction-api service.
CFG_SF_API_SF_API=http://sf-api:18411: URL of the sf-api service.
--publish 127.0.0.1:18301:18301: this allows you to connect to the liveness-api API from the host with http://localhost:18301 (if you need to access not only from the local machine, remove 127.0.0.1:).
Deploy counter
Note
counter requires third-party software such as postgres. Deploy it first.
The counter component lets you get statistics on unique individuals.
To deploy the counter component, run its image and specify the environment variables:
docker run -tid --name counter --restart always --network server \
    --env CFG_DATABASE_CONNECTION_STRING=postgres://login:password@db-postgres/ntech?sslmode=disable \
    --publish 127.0.0.1:18300:18300 \
    docker.int.ntl/ntech/universe/counter:ffserver-11.240325
Use docker options:
--name: specify a custom identifier for a container, for example counter.
--network: connect a container to a network named server.
--restart: restart policy to apply when a container exits. Specify always to always restart the container if it stops.
Environment variables and flags:
CFG_DATABASE_CONNECTION_STRING=postgres://login:password@db-postgres/ntech?sslmode=disable: PostgreSQL connection string.
login:password: the credentials used to authenticate with the database.
db-postgres: the name of the container where the PostgreSQL database is located, or a remote host.
ntech: the name of the database.
--publish 127.0.0.1:18300:18300: this allows you to connect to the counter API from the host with http://localhost:18300 (if you need to access not only from the local machine, remove 127.0.0.1:).
Deploy deduplicator
To deploy the deduplicator component, run its image and specify the environment variables and flags:
docker run -tid --name deduplicator --restart always --network server \
    --publish 127.0.0.1:18310:18310 \
    docker.int.ntl/ntech/universe/deduplicator:ffserver-11.240325
Use docker options:
--name: specify a custom identifier for a container, for example deduplicator.
--network: connect a container to a network named server.
--restart: restart policy to apply when a container exits. Specify always to always restart the container if it stops.
Environment variables and flags:
--publish 127.0.0.1:18310:18310: this allows you to connect to the deduplicator API from the host with http://localhost:18310 (if you need to access not only from the local machine, remove 127.0.0.1:).
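As a final check, you can confirm from the host that every API published on the loopback interface in this section is answering. This is an optional sketch; it assumes the default port mappings used above (ntls 3185, extraction-api 18666, sf-api 18411, video-manager 18810, video-storage 18611, video-streamer 9000, liveness-api 18301, counter 18300, deduplicator 18310).
for port in 3185 18666 18411 18810 18611 9000 18301 18300 18310; do
    printf '%s -> ' "$port"
    curl -s --max-time 2 -o /dev/null -w '%{http_code}\n' "http://localhost:$port/"
done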