Guide to Typical Multi-Host Deployment
This section describes how to deploy FindFace Multi in a multi-host environment.
Tip
If, after having read this section, you still have questions, do not hesitate to contact our experts at support@ntechlab.com.
Important
This section doesn’t cover the Video Recorder deployment. You can find step-by-step instructions on this subject here.
The reasons for deploying FindFace Multi in a multi-host environment are the following:
The need to distribute a high video processing load.
The need to process video streams from a group of cameras close to their physical location.
Note
The most common use cases are hotel and retail chains, multiple security checkpoints in the same building, and the like.
The need to distribute a high feature vector extraction load.
A large number of objects to search through, which requires a distributed object database.
Before you start the deployment, outline your system architecture, depending on its load and allotted resources (see Requirements). The most common distributed scheme is as follows:
One principal server with the following components: findface-ntls, findface-multi-legacy, findface-sf-api, findface-video-manager, findface-upload, findface-video-worker, findface-extraction-api, findface-tarantool-server, and third-parties.
Several additional video processing servers with installed findface-video-worker.
(If needed) Several additional extraction servers with installed findface-extraction-api.
(If needed) Additional database servers with multiple Tarantool shards.
This section describes the most common distributed deployment. In high-load systems, it may also be necessary to distribute the API processing (findface-sf-api and findface-video-manager) across several additional servers. This procedure requires a high level of expertise and some extra coding. Please do not hesitate to contact our experts for help (support@ntechlab.com).
Important
Installing new FindFace Multi components into a directory with already deployed FindFace Multi components will overwrite the contents of the installation directory and the docker-compose.yaml file. If you need to install a combination of components on the selected server, it is recommended to install all required components at once.
To deploy FindFace Multi in a multi-host environment, follow the steps below:
Deploy Principal Server
To deploy the principal server as part of a distributed architecture, do the following:
On the designated physical server, install FindFace Multi from the installer as follows (don’t forget to prepare the server prior to the FindFace Multi deployment):
Product to install: FindFace Multi.
Installation type: Single Server. FindFace Multi will be installed and configured to interact with additional remote findface-video-worker instances.
Type of the findface-video-worker acceleration (on the principal server): CPU or GPU, subject to your hardware configuration.
Type of the findface-extraction-api acceleration (on the principal server): CPU or GPU, subject to your hardware configuration.
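For reference, launching the installer typically looks like the sketch below; it assumes the installer file (referred to here generically as <findface-*>.run, as elsewhere in this guide) has already been copied to the server.

cd <installer_directory>         # the directory where the .run file was copied (placeholder name)
sudo chmod +x <findface-*>.run   # make the installer executable if it is not already
sudo ./<findface-*>.run          # launch the installer and answer the questions listed above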
Once the installation is complete, the following output will be shown in the console:
#############################################################################
#                          Installation is complete                         #
#############################################################################
- all configuration and data is stored in /opt/findface-multi
- upload your license to http://172.168.1.9/#/license/
- user interface: http://172.168.1.9/
  superuser: admin
  documentation: http://172.168.1.9/doc/
Upload the FindFace Multi license file via the main web interface http://<Host_IP_address>/#/license. To access the web interface, use the superuser credentials specified during installation.
Note
The host IP address in the FindFace Multi web interface URL is the IP address that you specified as the external address during installation.
Important
Do not disclose the superuser (Super Administrator) credentials to others. To administer the system, create a new user with administrator privileges. Whatever the role, the Super Administrator cannot be deprived of its rights.
Deploy Video Processing Servers
On an additional video processing server, install a findface-video-worker instance following the step-by-step instructions. Answer the installer questions as follows:
Product to install: FindFace Video Worker.
Type of the findface-video-worker acceleration: CPU or GPU, subject to your hardware configuration.
The domain name or IP address that is used to access the findface-ntls and findface-video-manager services.
After that, the installation process will automatically begin. The answers will be saved to a file /tmp/<findface-installer-*>.json. Use this file to install FindFace Video Worker on other hosts without having to answer the questions again, by executing:
sudo ./<findface-*>.run -f /tmp/<findface-installer-*>.json
Note
If findface-ntls and/or findface-video-manager are installed on different hosts, specify their IP addresses in the /opt/findface-multi/configs/findface-video-worker/findface-video-worker.yaml configuration file after the installation. Suppose findface-ntls is installed on the principal server 172.168.1.9 and findface-video-manager on an additional server 172.168.1.10; in this case, configure the /opt/findface-multi/configs/findface-video-worker/findface-video-worker.yaml file accordingly.
sudo vi /opt/findface-multi/configs/findface-video-worker/findface-video-worker.yaml
In the ntls_addr parameter, specify the findface-ntls host IP address.
ntls_addr: 172.168.1.9:3133
In the mgr → static parameter, specify the findface-video-manager host IP address, which provides findface-video-worker with settings and the video stream list.
static: 172.168.1.10:18811
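For the example topology above, the relevant fragment of findface-video-worker.yaml would look roughly as follows (a sketch: the parameter names come from this section, but the exact layout of the file may vary between versions):

ntls_addr: 172.168.1.9:3133
mgr:
  static: 172.168.1.10:18811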
Deploy Extraction Servers
On an additional extraction server, install a findface-extraction-api instance from the console installer. Answer the installer questions as follows:
Product to install: FindFace Multi.
Installation type: Fully customized installation. Refer to the Fully Customized Installation section to see the subsequent installer questions.
Once the installer asks you to select FindFace Multi components to install, specify the findface-extraction-api and findface-data components. To make a selection, first deselect all the listed components by entering -* in the command line, then select findface-extraction-api and findface-data by entering their sequence numbers (keywords). Enter done to save your selection and proceed to the next step.
Type of findface-extraction-api acceleration: CPU or GPU.
Neural network models to install: CPU or GPU models for object detection and object attribute recognition. Be sure to choose the acceleration type for each model that matches the acceleration type of findface-extraction-api: CPU or GPU. Be aware that findface-extraction-api on CPU can work only with CPU models, while findface-extraction-api on GPU supports both CPU and GPU models.
Detectors and attributes to install: you can install detectors and attributes at once during the installation process or continue with the face detector at default settings. Depending on your choice, the system will ask you additional questions about detectors and attributes to install or proceed to the installation of the neural network models.
The system will invite you to edit configuration files. Modify the /opt/findface-multi/configs/findface-extraction-api/findface-extraction-api.yaml configuration file: in the license_ntls_server parameter, specify the IP address of the findface-ntls server.
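Assuming findface-ntls runs on the principal server 172.168.1.9 and listens on its default port 3133, the parameter would look roughly like this (a sketch):

license_ntls_server: 172.168.1.9:3133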
After that, the installation process will automatically begin. The answers will be saved to a file /tmp/<findface-installer-*>.json. Use this file to install findface-extraction-api on other hosts without having to answer the questions again:

sudo ./<findface-*>.run -f /tmp/<findface-installer-*>.json
To move the principal findface-extraction-api instance to another host, in the /opt/findface-multi/configs/findface-sf-api/findface-sf-api.yaml configuration file, specify the IP address of the extraction server host. E.g., if the extraction server host is 172.168.1.11, specify extraction-api: http://172.168.1.11:18666.

listen: :18411
extraction-api:
  timeouts:
    connect: 5s
    response_header: 30s
    overall: 35s
    idle_connection: 10s
  max-idle-conns-per-host: 20
  keepalive: 24h0m0s
  trace: false
  url: http://127.0.0.1:18666
  extraction-api: http://172.168.1.11:18666
On each extraction server, configure the /opt/findface-multi/docker-compose.yaml file. Specify the ports for the findface-extraction-api service:

findface-extraction-api:
  command: [--config=/etc/findface-extraction-api.yml]
  depends_on: [findface-ntls]
  image: docker.int.ntl/ntech/universe/extraction-api-cpu:ffserver-12.250328.2
  logging: {driver: journald}
  ports: ['18666:18666']
  networks: [product-network]
  restart: always
  volumes: ['./configs/findface-extraction-api/findface-extraction-api.yaml:/etc/findface-extraction-api.yml:ro',
    './models:/usr/share/findface-data/models:ro',
    './cache/findface-extraction-api/models:/var/cache/findface/models_cache']
Restart all FindFace Multi containers.
cd /opt/findface-multi/
sudo docker-compose down
sudo docker-compose up -d
Important
Starting the GPU-accelerated findface-extraction-api service for the first time after deployment may take a considerable amount of time due to the caching process (approximately two hours). During this time, object detection in videos and photos, as well as feature vector extraction, will be unavailable.
To make sure that the findface-extraction-api service is up and running, view the service log.
docker logs findface-multi-findface-extraction-api-1 --tail 30 -f
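Optionally, you can also check that the service accepts HTTP connections on its port. The sketch below only prints the HTTP status code of a request to the service root; any response at all indicates that findface-extraction-api is listening.

curl -s -o /dev/null -w '%{http_code}\n' http://127.0.0.1:18666/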
After all the extraction servers are deployed, distribute load across them by using a load balancer.
Distribute Load across Extraction Servers
To distribute load across several extraction servers, you need to set up load balancing. The following step-by-step instructions demonstrate how to set up nginx load balancing in a round-robin fashion for 3 findface-extraction-api instances located on different physical hosts: one on the FindFace Multi principal server (172.168.1.9), and 2 on additional remote servers (172.168.1.10, 172.168.1.11). Should you have more extraction servers in your system, load-balance them by analogy.
Tip
You can use any load balancer according to your preference. Please refer to the relevant official documentation for guidance.
To set up load balancing, do the following:
Designate the FindFace Multi principal server (recommended) or any other server with a running findface-sf-api service as a gateway to all the extraction servers.
Important
You will have to specify the gateway server IP address when configuring the FindFace Multi network.
On the designated server with the installed findface-sf-api instance, create an nginx folder containing the extapi.conf file in the /opt/findface-multi/configs/ directory. Make sure that the extapi.conf file includes the information like in the example below. In the upstream directive (upstream extapibackends), substitute the exemplary IP addresses with the actual IP addresses of the extraction servers. In the server directive, specify the gateway server listening port in listen. You will have to enter this port when configuring the FindFace Multi network.

upstream extapibackends {
    server 172.168.1.9:18666;   ## findface-extraction-api on principal server
    server 172.168.1.10:18666;  ## 1st additional extraction server
    server 172.168.1.11:18666;  ## 2nd additional extraction server
}
server {
    listen 18667;
    server_name extapi;
    client_max_body_size 64m;
    location / {
        proxy_pass http://extapibackends;
        proxy_next_upstream error;
    }
    access_log /var/log/nginx/extapi.access_log;
    error_log /var/log/nginx/extapi.error_log;
}
Define the nginx service in the docker-compose.yaml file. To do that, add the container with the Nginx image to the docker-compose.yaml file:

sudo vi /opt/findface-multi/docker-compose.yaml

nginx:
  image: nginx:latest
  ports:
    - 18667:18667
  volumes:
    - ./configs/nginx/extapi.conf:/etc/nginx/conf.d/default.conf:ro
In the findface-sf-api.yaml configuration file, specify the distributor address:

sudo vi /opt/findface-multi/configs/findface-sf-api/findface-sf-api.yaml

...
extraction-api: http://172.168.1.9:18667
Restart all FindFace Multi containers.
cd /opt/findface-multi/
sudo docker-compose down
sudo docker-compose up -d
The load balancing is now successfully set up. Be sure to specify the actual gateway server IP address and listening port when configuring the FindFace Multi network.
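To quickly check that the balancer proxies requests, you can validate the nginx configuration inside its container and send a test request to the gateway port. This is only a sketch: the container name is assumed to follow the findface-multi Compose project naming and may differ on your host, and any HTTP status code in the response merely confirms that an upstream answered.

sudo docker ps --format '{{.Names}}' | grep nginx                    # find the actual nginx container name
sudo docker exec findface-multi-nginx-1 nginx -t                     # validate the mounted configuration (substitute the name found above)
curl -s -o /dev/null -w '%{http_code}\n' http://172.168.1.9:18667/   # test request through the balancer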
Deploy Additional Database Servers
The findface-tarantool-server component connects the Tarantool-based feature vector database and the findface-sf-api component, transferring search results from the database to findface-sf-api for further processing.
To increase search speed, you can allocate several additional servers to the feature vector database and create multiple findface-tarantool-server shards on each additional server. The concurrent functioning of multiple shards will lead to a remarkable increase in performance, as each shard can handle up to approximately 10,000,000 feature vectors.
To deploy additional database servers, do the following:
Install the findface-tarantool-server component on the first designated server. Answer the installer questions as follows:
Product to install: FindFace Multi.
Installation type: Fully customized installation. Refer to the Fully Customized Installation section to see the subsequent installer questions.
Once the installer asks you to select FindFace Multi components to install, specify the findface-tarantool-server component. To make a selection, first deselect all the listed components by entering -* in the command line, then select findface-tarantool-server by entering its sequence number (keyword). Enter done to save your selection and proceed to the next step.
Number of tntapi instances: the default number of tntapi instances is 8. Specify the required number of instances according to your system configuration.
Detectors and attributes to install: you can install detectors and attributes at once during the installation process or continue with the face detector at default settings. Depending on your choice, the system will ask you additional questions about detectors and attributes to install or proceed to the installation of the neural network models.
Edit configuration files: the system will invite you to edit configuration files. You can agree or skip to the next step.
After that, the installation process will automatically begin.
As a result of the installation, the findface-tarantool-server shards will be automatically installed in the amount of N = min(max(min(mem_mb // 2000, cpu_cores), 1), 16 * cpu_cores). That is, the shard count is the lesser of the RAM size in MB divided by 2000 and the number of physical CPU cores, but no fewer than one shard and no more than 16 shards per physical CPU core.
Use the created /tmp/<findface-installer-*>.json file to install findface-tarantool-server on other servers without answering the questions again. To do so, execute:

sudo ./<findface-*>.run -f /tmp/<findface-installer-*>.json
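If you want to estimate in advance how many shards the installer will create on a given server, the sketch below reproduces the formula above; it uses the logical CPU count reported by nproc, so substitute the physical core count if it differs on your hardware.

mem_mb=$(free -m | awk '/^Mem:/{print $2}')   # total RAM in MB
cpu_cores=$(nproc)                            # logical CPUs; use the physical core count if it differs
n=$(( mem_mb / 2000 ))
[ "$n" -gt "$cpu_cores" ] && n=$cpu_cores     # min(mem_mb // 2000, cpu_cores)
[ "$n" -lt 1 ] && n=1                         # at least one shard
max=$(( 16 * cpu_cores ))
[ "$n" -gt "$max" ] && n=$max                 # no more than 16 shards per core
echo "Expected number of shards: $n"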
Be sure to specify the IP addresses and ports of the shards later on when configuring the FindFace Multi network. To learn the port numbers, execute on each database server:
sudo cat /opt/findface-multi/docker-compose.yaml | grep -E "CFG_LISTEN_PORT"
You will get the following result:
CFG_LISTEN_PORT: '8101', CFG_NTLS: 'findface-ntls:3133', TT_CHECKPOINT_COUNT: 3,
CFG_LISTEN_PORT: '8101', CFG_NTLS: 'findface-ntls:3133', TT_CHECKPOINT_COUNT: 3,
CFG_LISTEN_PORT: '8101', CFG_NTLS: 'findface-ntls:3133', TT_CHECKPOINT_COUNT: 3,
CFG_LISTEN_PORT: '8101', CFG_NTLS: 'findface-ntls:3133', TT_CHECKPOINT_COUNT: 3,
CFG_LISTEN_PORT: '8101', CFG_NTLS: 'findface-ntls:3133', TT_CHECKPOINT_COUNT: 3,
CFG_LISTEN_PORT: '8101', CFG_NTLS: 'findface-ntls:3133', TT_CHECKPOINT_COUNT: 3,
CFG_LISTEN_PORT: '8101', CFG_NTLS: 'findface-ntls:3133', TT_CHECKPOINT_COUNT: 3,
CFG_LISTEN_PORT: '8101', CFG_NTLS: 'findface-ntls:3133', TT_CHECKPOINT_COUNT: 3,
The CFG_LISTEN_PORT number is 8101 (the same for all the shards) and is configured for deployment in a bridge network. On the next step, configure the CFG_LISTEN_PORT for each shard.
On the designated server with the installed findface-tarantool-server component, modify the configuration of each shard in the /opt/findface-multi/docker-compose.yaml file. Specify the findface-ntls license server address in the CFG_NTLS parameter. For each shard, except for the first one, add 1 to the external port, e.g., 8101, 8102, 8103, etc.

sudo vi /opt/findface-multi/docker-compose.yaml

findface-tarantool-server-shard-001:
  depends_on: [findface-ntls]
  environment: {CFG_EXTRA_LUA: loadfile("/tnt_schema.lua")(), CFG_LISTEN_HOST: 0.0.0.0,
    CFG_LISTEN_PORT: '8101', CFG_NTLS: 'findface-ntls:3133', TT_CHECKPOINT_COUNT: 3,
    TT_CHECKPOINT_INTERVAL: '14400', TT_FORCE_RECOVERY: 'true', TT_LISTEN: '0.0.0.0:32001',
    TT_MEMTX_DIR: snapshots, TT_MEMTX_MEMORY: '2147483648', TT_WAL_DIR: xlogs,
    TT_WORK_DIR: /var/lib/tarantool/FindFace}
  image: docker.int.ntl/ntech/universe/tntapi:ffserver-12.250328.2
  logging: {driver: journald}
  networks: [product-network]
  restart: always
  ports: ['8101:8101']
  volumes: ['./data/findface-tarantool-server/shard-001:/var/lib/tarantool/FindFace',
    './configs/findface-tarantool-server/tnt-schema.lua:/tnt_schema.lua:ro']

findface-tarantool-server-shard-002:
  depends_on: [findface-ntls]
  environment: {CFG_EXTRA_LUA: loadfile("/tnt_schema.lua")(), CFG_LISTEN_HOST: 0.0.0.0,
    CFG_LISTEN_PORT: '8101', CFG_NTLS: 'findface-ntls:3133', TT_CHECKPOINT_COUNT: 3,
    TT_CHECKPOINT_INTERVAL: '14400', TT_FORCE_RECOVERY: 'true', TT_LISTEN: '0.0.0.0:32001',
    TT_MEMTX_DIR: snapshots, TT_MEMTX_MEMORY: '2147483648', TT_WAL_DIR: xlogs,
    TT_WORK_DIR: /var/lib/tarantool/FindFace}
  image: docker.int.ntl/ntech/universe/tntapi:ffserver-12.250328.2
  logging: {driver: journald}
  networks: [product-network]
  restart: always
  ports: ['8102:8101']
  volumes: ['./data/findface-tarantool-server/shard-002:/var/lib/tarantool/FindFace',
    './configs/findface-tarantool-server/tnt-schema.lua:/tnt_schema.lua:ro']
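After editing the file, you can optionally validate its syntax before restarting the containers (docker-compose config only checks and renders the file, it does not change anything):

cd /opt/findface-multi/
sudo docker-compose config --quiet && echo "docker-compose.yaml is valid"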
Restart the containers.
cd /opt/findface-multi/
sudo docker-compose down
sudo docker-compose up -d
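Once the containers are up, you can confirm that each shard is published on its own external port (a sketch; container names follow the Compose project naming and may differ slightly):

sudo docker ps --format 'table {{.Names}}\t{{.Ports}}' | grep tarantool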
In the /opt/findface-multi/configs/findface-sf-api/findface-sf-api.yaml configuration file, specify the shards:

sudo vi /opt/findface-multi/configs/findface-sf-api/findface-sf-api.yaml

...
shards:
  - master: http://172.168.1.11:8101/v2/
    slave: ''
  - master: http://172.168.1.11:8102/v2/
    slave: ''
To apply migrations, restart FindFace Multi containers.
cd /opt/findface-multi/
sudo docker-compose restart
Configure Network
After all the FindFace Multi components are deployed, configure their interaction over the network. Do the following:
Open the /opt/findface-multi/configs/findface-sf-api/findface-sf-api.yaml configuration file:

sudo vi /opt/findface-multi/configs/findface-sf-api/findface-sf-api.yaml
Specify the following parameters:
extraction-api → extraction-api: IP address and listening port of the gateway extraction server with load balancing set up.
storage-api → shards → master: IP address and port of the findface-tarantool-server master shard. Specify each shard by analogy.
upload_url: WebDAV Nginx path to send original images, thumbnails, and normalized object images to the findface-upload service.

...
extraction-api:
  extraction-api: http://172.168.1.9:18667
...
webdav:
  upload-url: http://findface-upload:3333/uploads/
...
storage-api:
  ...
  shards:
  - master: http://172.168.1.9:8101/v2/
    slave: ''
  - master: http://172.168.1.9:8102/v2/
    slave: ''
  - master: http://172.168.1.12:8101/v2/
    slave: ''
  - master: http://172.168.1.12:8102/v2/
    slave: ''
  - master: http://172.168.1.13:8101/v2/
    slave: ''
  - master: http://172.168.1.13:8102/v2/
    slave: ''
Restart the findface-multi-findface-sf-api-1 container.

sudo docker restart findface-multi-findface-sf-api-1
Open the /opt/findface-multi/configs/findface-multi-legacy/findface-multi-legacy.py configuration file.

sudo vi /opt/findface-multi/configs/findface-multi-legacy/findface-multi-legacy.py
Specify the following parameters:
SERVICE_EXTERNAL_ADDRESS: FindFace Multi IP address or URL prioritized for webhooks. If this parameter is not specified, the system uses EXTERNAL_ADDRESS for these purposes. To use webhooks, be sure to specify at least one of these parameters: SERVICE_EXTERNAL_ADDRESS, EXTERNAL_ADDRESS.
EXTERNAL_ADDRESS: (Optional) IP address or URL that can be used to access the FindFace Multi web interface. If this parameter is not specified, the system auto-detects it as the external IP address. To access FindFace Multi, you can use both the auto-detected and the specified IP addresses.
VIDEO_DETECTOR_TOKEN: To authorize the video object detection module, come up with a token and specify it here.
VIDEO_MANAGER_ADDRESS: IP address of the findface-video-manager host.
NTLS_HTTP_URL: IP address of the findface-ntls host.
ROUTER_URL: External IP address of the findface-multi-legacy host that will receive detected objects from the findface-video-worker instance(s).
SF_API_ADDRESS: IP address of the findface-sf-api host.

sudo vi /opt/findface-multi/configs/findface-multi-legacy/findface-multi-legacy.py

...
SERVICE_EXTERNAL_ADDRESS = 'http://172.168.1.9'
...
EXTERNAL_ADDRESS = 'http://172.168.1.9'
...
FFSECURITY = {
    'VIDEO_DETECTOR_TOKEN': '7ce2679adfc4d74edcf508bea4d67208',
    ...
    'NTLS_HTTP_URL': 'http://findface-ntls:3185',
    ...
    'ROUTER_URL': 'http://172.168.1.9',
    ...
    'VIDEO_MANAGER_ADDRESS': 'http://findface-video-manager:18810',
    'SF_API_ADDRESS': 'http://findface-sf-api:18411',
    ...
}
Restart all FindFace Multi containers.
cd /opt/findface-multi/
sudo docker-compose restart
The FindFace Multi components interaction is now set up.
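As a final check, you can verify that all containers are up and running on each host; the sketch below lists the container states on the current server (run it on every host by analogy).

cd /opt/findface-multi/
sudo docker-compose ps   # all services should be in the "Up" / "running" state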
Important
To preserve FindFace Multi compatibility with the installation environment, we highly recommend that you disable Ubuntu automatic updates. In this case, you will be able to update your OS manually, fully controlling which packages to update.
To disable Ubuntu automatic update, execute the following commands:
sudo apt-get remove unattended-upgrades
sudo systemctl stop apt-daily.timer
sudo systemctl disable apt-daily.timer
sudo systemctl disable apt-daily.service
sudo systemctl daemon-reload
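You can then confirm that the automatic update timer is no longer enabled (an optional check; it should report "disabled"):

systemctl is-enabled apt-daily.timer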
Important
FindFace Multi services log a large amount of data, which can eventually lead to disk overload. To prevent this from happening, we advise you to disable rsyslog due to its suboptimal log rotation scheme and use the appropriately configured systemd-journal service instead. See Logging for the step-by-step instructions.