Guide to Typical Cluster Installation
This section describes how to deploy FindFace Enterprise Server in a cluster environment.
Tip
If, after reading this section, you still have questions, do not hesitate to contact our experts at support@ntechlab.com.
The reasons for deploying FindFace Enterprise Server in a cluster are the following:
The need to distribute a high video processing load.
The need to process video streams from a group of cameras at the place of their physical location.
Note
The most common use cases where such a need arises are hotel chains, chain stores, several security checkpoints in the same building, etc.
The need to distribute a high biometric sample extraction load.
A large number of faces to search through, requiring the implementation of a distributed face database.
Before you start the deployment, outline your system architecture, depending on its load and allotted resources (see System Requirements). The most common distributed scheme is as follows:
One principal server with the following components: findface-ntls, findface-sf-api, findface-video-manager, findface-upload, findface-video-worker, findface-extraction-api, findface-tarantool-server, and third-party software.
Several additional video processing servers with installed findface-video-worker.
(If needed) Several additional biometric servers with installed findface-extraction-api.
(If needed) Additional database servers with multiple Tarantool shards.
This section describes the most common distributed deployment. In high load systems, it may also be necessary to distribute the API processing (findface-sf-api and findface-video-manager) across several additional servers. In this case, refer to Fully Customized Installation.
To deploy FindFace Enterprise Server in a cluster environment, follow the steps below:
Deploy Principal Server
To deploy the principal server as part of a distributed architecture, do the following:
On the designated physical server, install FindFace Enterprise Server from the console installer as follows:
Product to install: FindFace Server.
Installation type: Single server, multiple video workers. In this case, FindFace Enterprise Server will be installed and configured to interact with additional remote findface-video-worker instances.
Type of the findface-video-worker acceleration (on the principal server): CPU or GPU, subject to your hardware configuration.
Type of the findface-extraction-api acceleration (on the principal server): CPU or GPU, subject to your hardware configuration.
After the installation is complete, the following output will be shown on the console:
Tip
Be sure to save this data: you will need it later.
#############################################################################
#                         Installation is complete                         #
#############################################################################
- upload your license to http://127.0.0.1:3185/
- FindFace SF-API address: http://172.20.77.78:18411/
- FindFace VideoManager address: http://172.20.77.78:18411/
Upload the FindFace Enterprise Server license file via the findface-ntls web interface http://<ntls_host_IP_address>:3185.
Note
The IP address is shown in the links to the FindFace web services in the following way: as an external IP address if the host belongs to a network, or 127.0.0.1 otherwise.
Allow the licensable services to access the findface-ntls license server from any IP address. To do so, open the /etc/findface-ntls.cfg configuration file and set listen = 0.0.0.0:3133.
sudo vi /etc/findface-ntls.cfg
# Listen address of NTLS server where services will connect to.
# The format is IP:PORT
# Use 0.0.0.0:PORT to listen on all interfaces
# This parameter is mandatory and may occur multiple times
# if you need to listen on several specific interfaces or ports.
listen = 0.0.0.0:3133
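For the new listen address to take effect, restart the license server and make sure the port is open on all interfaces. This is a minimal check, assuming the systemd unit is named findface-ntls.service and the ss utility is available on the host:
# Restart the license server so the new listen address takes effect
# (assuming the systemd unit is named findface-ntls.service)
sudo systemctl restart findface-ntls.service
# Verify that the server now listens on 0.0.0.0:3133
sudo ss -tlnp | grep 3133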
Deploy Video Processing Servers
On an additional video processing server, install only a findface-video-worker
instance following the step-by-step instructions. Answer the installer questions as follows:
Product to install: FindFace Video Worker.
Type of the findface-video-worker acceleration: CPU or GPU, subject to your hardware configuration.
FindFace Enterprise Server IP address: IP address of the principal server.
After that, the installation process will automatically begin. The answers will be saved to the file /tmp/<findface-installer-*>.json. Use this file to install FindFace Video Worker on other hosts without having to answer the questions again, by executing:
sudo ./<findface-security-and-server-xxx>.run -f /tmp/<findface-installer-*>.json
Note
After the installation, specify the findface-ntls and/or findface-video-manager IP addresses in the findface-video-worker configuration file.
sudo vi /etc/findface-video-worker-cpu.ini
sudo vi /etc/findface-video-worker-gpu.ini
In the ntls-addr parameter, specify the findface-ntls host IP address.
ntls-addr=127.0.0.1:3133
In the mgr-static parameter, specify the findface-video-manager host IP address, which provides findface-video-worker with settings and the video stream list.
mgr-static=127.0.0.1:18811
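For the new addresses to take effect, restart the findface-video-worker instance after editing its configuration. A minimal sketch, assuming the systemd unit name matches the installed acceleration type:
# Restart the worker matching your acceleration type
# (assuming units findface-video-worker-cpu.service / findface-video-worker-gpu.service)
sudo systemctl restart findface-video-worker-cpu.service
# or, on a GPU host:
# sudo systemctl restart findface-video-worker-gpu.service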
Deploy Biometric Servers
On an additional biometric server, install only a findface-extraction-api
instance from the console installer. Answer the installer questions as follows:
Product to install: FindFace Server.
Installation type: Fully customized installation.
FindFace Enterprise Server components to install: findface-extraction-api and findface-data. To make a selection, first deselect all the listed components by entering -* in the command line. Select findface-extraction-api and findface-data by entering their sequence numbers (keywords): 1 7. Enter done to save your selection and proceed to the next step.
Type of findface-extraction-api acceleration: CPU or GPU.
Modification of the findface-extraction-api configuration file: specify the IP address of the findface-ntls server.
Neural network models to install: CPU or GPU model for face biometrics (mandatory), and (optional) CPU/GPU models for gender, age, emotions, glasses, and/or beard recognition. To make a selection, first deselect all the listed models by entering -* in the command line. Select the required models by entering their sequence numbers (keywords), for example, 8 2 to select the GPU models for biometric sample extraction and age recognition. Enter done to save your selection and proceed to the next step. Be sure to choose the right acceleration type for each model, matching the acceleration type of findface-extraction-api: CPU or GPU. Be aware that findface-extraction-api on CPU can work only with CPU models, while findface-extraction-api on GPU supports both CPU and GPU models. See Face Features Recognition for details.
The following models are available:
Face feature     Acceleration   Package
face (biometry)  CPU            findface-data-grapefruit-160-cpu_3.0.0_amd64.deb,
                                findface-data-grapefruit-480-cpu_3.0.0_amd64.deb
                 GPU            findface-data-grapefruit-160-gpu_3.0.0_amd64.deb,
                                findface-data-grapefruit-480-gpu_3.0.0_amd64.deb
age              CPU            findface-data-age.v1-cpu_3.0.0_amd64.deb
                 GPU            findface-data-age.v1-gpu_3.0.0_amd64.deb
gender           CPU            findface-data-gender.v2-cpu_3.0.0_amd64.deb
                 GPU            findface-data-gender.v2-gpu_3.0.0_amd64.deb
emotions         CPU            findface-data-emotions.v1-cpu_3.0.0_amd64.deb
                 GPU            findface-data-emotions.v1-gpu_3.0.0_amd64.deb
glasses3         CPU            findface-data-glasses3.v0-cpu_3.0.0_amd64.deb
                 GPU            findface-data-glasses3.v0-gpu_3.0.0_amd64.deb
beard            CPU            findface-data-beard.v0-cpu_3.0.0_amd64.deb
                 GPU            findface-data-beard.v0-gpu_3.0.0_amd64.deb
After that, the installation process will automatically begin. The answers will be saved to the file /tmp/<findface-installer-*>.json. Use this file to install findface-extraction-api on other hosts without having to answer the questions again:
sudo ./<findface-security-and-server-xxx>.run -f /tmp/<findface-installer-*>.json
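Before setting up load balancing, you may want to confirm that the extraction service came up on each biometric server. This is a minimal check, assuming the systemd unit is named findface-extraction-api.service (the same unit name used in the restart step later in this section):
# Confirm the extraction service is active on this biometric server
sudo systemctl status findface-extraction-api.service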
After all the biometric servers are deployed, distribute load across them by using a load balancer.
Distribute Load across Biometric Servers
To distribute load across several biometric servers, you need to set up load balancing. The following step-by-step instructions demonstrate how to set up NGINX load balancing in a round-robin fashion for 3 findface-extraction-api instances located on different physical hosts: one on the FindFace Enterprise Server principal server (172.168.1.9), and 2 on additional remote servers (172.168.1.10, 172.168.1.11). Should you have more biometric servers in your system, load-balance them by analogy.
Tip
You can use any load balancer according to your preference. Please refer to the relevant official documentation for guidance.
To set up load balancing, do the following:
Designate the FindFace Enterprise Server principal server (recommended) or any other server with NGINX as a gateway to all the biometric servers.
Important
You will have to specify the gateway server IP address when configuring the FindFace Enterprise Server network.
Tip
You can install NGINX as follows:
sudo apt update
sudo apt install nginx
On the gateway server, create a new NGINX configuration file.
sudo vi /etc/nginx/sites-available/extapi
Insert the following entry into the newly created configuration file. In the upstream directive (upstream extapibackends), substitute the exemplary IP addresses with the actual IP addresses of the biometric servers. In the server directive, specify the gateway server listening port as listen. You will have to enter this port when configuring the FindFace Enterprise Server network.
upstream extapibackends {
        server 172.168.1.9:18666; ## ``findface-extraction-api`` on principal server
        server 172.168.1.10:18666; ## 1st additional extraction server
        server 172.168.1.11:18666; ## 2nd additional extraction server
}
server {
        listen 18667;
        server_name extapi;
        client_max_body_size 64m;
        location / {
                proxy_pass http://extapibackends;
                proxy_next_upstream error;
        }
        access_log /var/log/nginx/extapi.access_log;
        error_log /var/log/nginx/extapi.error_log;
}
Enable the load balancer in NGINX.
sudo ln -s /etc/nginx/sites-available/extapi /etc/nginx/sites-enabled/
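Before restarting NGINX, it is worth validating the configuration syntax. A quick check using the standard NGINX self-test:
# Validate the NGINX configuration before applying it
sudo nginx -t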
Restart nginx.
sudo service nginx restart
On the principal server and each additional biometric server, open the /etc/findface-extraction-api.ini configuration file. Substitute localhost in the listen parameter with the relevant server address that you have specified in upstream extapibackends (/etc/nginx/sites-available/extapi) before. In our example, the address of the 1st additional extraction server has to be substituted as follows:
sudo vi /etc/findface-extraction-api.ini
listen: 172.168.1.10:18666
Restart findface-extraction-api on the principal server and each additional biometric server.
sudo systemctl restart findface-extraction-api.service
Load balancing is now set up. Be sure to specify the actual gateway server IP address and listening port when configuring the FindFace Enterprise Server network.
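To make sure the balancer actually forwards requests, you can probe the gateway port from any host that can reach it. This is only a reachability sketch using the example addresses above (gateway 172.168.1.9, port 18667); the exact HTTP response depends on the findface-extraction-api API and is not important here:
# Probe the load-balanced endpoint; any HTTP response (even an error status)
# confirms that NGINX reaches an upstream findface-extraction-api instance
curl -v http://172.168.1.9:18667/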
Distribute Database
The findface-tarantool-server component connects the Tarantool database and the findface-sf-api component, transferring search results from the database to findface-sf-api for further processing. To increase search speed, multiple findface-tarantool-server shards can be created on each Tarantool host. Running them concurrently leads to a remarkable increase in performance.
Each shard can handle up to approximately 10,000,000 faces. When deploying findface-tarantool-server from the console installer, shards are created automatically based on the server hardware.
To distribute the face database, install only a findface-tarantool-server
instance on each additional database server. Answer the installer questions as follows:
Product to install: FindFace Server.
Installation type: Fully customized installation.
FindFace Enterprise Server components to install: findface-tarantool-server. To make a selection, first deselect all the listed components by entering -* in the command line. Select findface-tarantool-server by entering its sequence number (keyword): 13. Enter done to save your selection and proceed to the next step.
After that, the installation process will automatically begin. The answers will be saved to the file /tmp/<findface-installer-*>.json. Use this file to install findface-tarantool-server on other hosts without having to answer the questions again:
sudo ./<findface-security-and-server-xxx>.run -f /tmp/<findface-installer-*>.json
As a result of the installation, findface-tarantool-server shards will be automatically installed in the amount of N = max(min(mem_mb // 2000, cpu_cores), 1), i.e., the smaller of the RAM size in MB divided by 2000 and the number of physical CPU cores, but at least 1 shard.
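If you want to estimate the expected shard count for a given host in advance, the formula can be approximated in the shell. This is a rough sketch: nproc reports logical rather than physical cores, so treat the result as an estimate; the actual number is determined by the installer itself.
# Approximate N = max(min(mem_mb // 2000, cpu_cores), 1)
mem_mb=$(free -m | awk '/^Mem:/{print $2}')
cpu_cores=$(nproc)   # logical cores; the installer counts physical cores
n=$(( mem_mb / 2000 ))
[ "$n" -gt "$cpu_cores" ] && n=$cpu_cores
[ "$n" -lt 1 ] && n=1
echo "Expected number of findface-tarantool-server shards: $n"
# The shards actually created by the installer appear as instance files:
ls /etc/tarantool/instances.enabled/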
Be sure to specify the shard IP addresses and ports when configuring the FindFace Enterprise Server network. To learn the port numbers, execute the following on each database server:
sudo cat /etc/tarantool/instances.enabled/*shard* | grep -E ".start|(listen =)"
You will get the following result:
listen = '127.0.0.1:33001',
FindFace.start("127.0.0.1", 8101, {
listen = '127.0.0.1:33002',
FindFace.start("127.0.0.1", 8102, {
You can find the port number of a shard in the FindFace.start section, for example, 8101, 8102, etc.
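If you only need the list of shard ports, for example to fill in the findface-sf-api shard configuration below, the same files can be filtered down to the port numbers. A sketch assuming the FindFace.start("<IP>", <port>, { line format shown above:
# Print one shard port per line, assuming lines of the form:
#   FindFace.start("127.0.0.1", 8101, {
sudo grep -h 'FindFace.start' /etc/tarantool/instances.enabled/*shard* \
    | sed -E 's/.*FindFace\.start\("[^"]+", *([0-9]+).*/\1/'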
Configure Network
After all the FindFace Enterprise Server components are deployed, configure their interaction over the network. Do the following:
Open the findface-sf-api configuration file:
sudo vi /etc/findface-sf-api.ini
Specify the following parameters:
Parameter                            Description
extraction-api -> extraction-api     IP address and listening port of the gateway biometric server with load balancing set up.
storage-api -> shards -> master      IP address and port of a findface-tarantool-server master shard. Specify each shard by analogy.
upload-url                           WebDAV NGINX path to send original images, thumbnails, and normalized face images to the findface-upload service.
...
extraction-api:
  extraction-api: http://172.168.1.9:18667
...
webdav:
  upload-url: http://127.0.0.1:3333/uploads/
...
storage-api:
  ...
  shards:
  - master: http://172.168.1.9:8101/v2/
    slave: ''
  - master: http://172.168.1.9:8102/v2/
    slave: ''
  - master: http://172.168.1.12:8101/v2/
    slave: ''
  - master: http://172.168.1.12:8102/v2/
    slave: ''
  - master: http://172.168.1.13:8101/v2/
    slave: ''
  - master: http://172.168.1.13:8102/v2/
    slave: ''
Open the findface-facerouter configuration file. Specify the IP address of the findface-sf-api host.
sudo vi /etc/findface-facerouter.py
sfapi_url = 'http://localhost:18411'
Open the findface-video-manager configuration file. In the router_url parameter, specify the IP address and port of the findface-facerouter host to receive detected faces from findface-video-worker.
sudo vi /etc/findface-video-manager.conf
...
router_url: http://127.0.0.1:18820/v0/frame
The interaction between the FindFace Enterprise Server components is now set up.
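As a final sanity check, you can verify from the principal server that the key endpoints respond once the configuration changes take effect (you may need to restart the edited services first). This is only a connectivity sketch using the example addresses from this section (findface-sf-api on port 18411, the extraction gateway on 18667); any HTTP status code confirms reachability:
# Reachability checks from the principal server; exact responses depend on the APIs
curl -sS -o /dev/null -w 'sf-api: HTTP %{http_code}\n' http://127.0.0.1:18411/
curl -sS -o /dev/null -w 'extraction gateway: HTTP %{http_code}\n' http://172.168.1.9:18667/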