Install Components¶
Now that you have prepared the FindFace Enterprise Server SDK packages and provided the licensing, install the Server components on designated host(s) according to your architecture outline.
Install facenapi¶
Install and configure the findface-facenapi component as follows:
Install the component.
sudo apt-get update && sudo apt-get install python3-facenapi
If MongoDB is installed on a remote host, specify its IP address in the findface-facenapi configuration file:
sudo vi /etc/findface-facenapi.ini
mongo_host = '192.168.113.1'
Check that the component is runnable. To do so, invoke the findface-facenapi application by executing the command below. Once the application starts, wait about a minute; if no errors are displayed, press Ctrl+C. If MongoDB is installed on the same host, execute:
findface-facenapi
If MongoDB is installed on a remote host, execute:
sudo findface-facenapi --config=/etc/findface-facenapi.ini
Check whether autostart of the findface-facenapi service at system startup is disabled:
systemctl list-unit-files | grep findface-facenapi
Enable the service autostart and launch the service.
sudo systemctl enable findface-facenapi.service && sudo service findface-facenapi start
Make sure that the service is up and running. The command will return the service description, a status (should be Active), path, and running time.
sudo service findface-facenapi status
Tip
You can view the findface-facenapi logs by executing:
sudo tail -f /var/log/syslog | grep facenapi
Install extraction-api¶
Install and configure the findface-extraction-api component as follows:
Note
The extraction-api component requires the model packages <findface-data>.deb. Make sure they have been installed.
Install the component.
sudo apt-get update && sudo apt-get install findface-extraction-api
Open the findface-extraction-api.ini configuration file:
sudo vi /etc/findface-extraction-api.ini
If NTLS is remote, specify its IP address.
license_ntls_server: 192.168.113.2:3133
The model_instances parameter indicates how many extraction-api instances are used. Specify the number of instances that you purchased. The default value (0) means that this number equals the number of CPU cores.
Note
This parameter severely affects RAM consumption.
model_instances: 2
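Since each instance consumes RAM, it can help to sanity-check the value against the host's memory before raising it. A minimal sketch; the per-instance footprint and RAM figures below are hypothetical placeholders, so measure the real numbers on your hardware:

```shell
# Rough upper bound on model_instances given available RAM.
# All figures are illustrative; substitute measured values.
PER_INSTANCE_MB=2048   # assumed RAM footprint of one extraction-api instance, MB
TOTAL_MB=16384         # assumed total RAM of the host, MB
HEADROOM_MB=4096       # reserve for the OS and other services, MB
MAX_INSTANCES=$(( (TOTAL_MB - HEADROOM_MB) / PER_INSTANCE_MB ))
echo "At most $MAX_INSTANCES instances fit in RAM"
```

If the result is lower than the number of instances you purchased, add RAM or spread the instances over several hosts.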
Enable the findface-extraction-api service autostart and launch the service:
sudo systemctl enable findface-extraction-api && sudo systemctl start findface-extraction-api
Make sure that the service is up and running. The command will return the service description, a status (should be Active), path, and running time.
sudo service findface-extraction-api status
Tip
You can view the extraction-api logs by executing:
sudo tail -f /var/log/syslog | grep extraction-api
Install findface-upload¶
To store all the original images processed by the Server, as well as such Server artifacts as face thumbnails and normalized images, install and configure the findface-upload component.
Tip
Skip this procedure if you do not want to store this data on the FindFace Enterprise Server SDK host. In this case, only face feature vectors (facens) will be stored (in the MongoDB and Tarantool databases).
Do the following:
Install the component:
sudo apt-get update && sudo apt-get install findface-upload
By default, the original images, thumbnails, and normalized images are stored at /var/lib/ffupload/uploads/. You can view the content of this folder at http://127.0.0.1:3333/uploads/ in your browser. Make sure that this address is reachable; a 403 Forbidden response indicates that the component is working:
curl -I http://127.0.0.1:3333/uploads/
##HTTP/1.1 403 Forbidden
Important
You will have to specify this address when configuring network.
Install tntapi¶
The tntapi component connects the Tarantool database and the facenapi component, transferring search results from the database to facenapi for further processing. To increase search speed, multiple tntapi shards can be created on each Tarantool host; running them concurrently yields a considerable performance gain. Each shard can handle up to approximately 10,000,000 faces. In the case of standalone deployment, you need only one shard (created by default), while in a cluster environment the number of shards has to be calculated based on several parameters (see below).
Install tntapi standalone¶
Install and configure the tntapi component as follows:
Install tntapi. Tarantool will be installed automatically along with it.
sudo apt-get update && sudo apt-get install findface-tarantool-server
Disable the Tarantool example service autostart and stop the service.
sudo systemctl disable tarantool@example && sudo systemctl stop tarantool@example
For a small-scale project, the tntapi shard created by default (tarantool@FindFace) will suffice, as one shard can handle up to 10,000,000 faces. Configuration settings of the default shard are defined in the file /etc/tarantool/instances.enabled/FindFace.lua. We strongly recommend that you not add or edit anything in this file except the maximum memory usage (memtx_memory), the NTLS IP address required for tntapi licensing, and the remote access setting. The maximum memory usage should be set in bytes, depending on the number of faces the shard handles, at a rate of roughly 1,280 bytes per face.

Open the configuration file:
sudo vi /etc/tarantool/instances.enabled/FindFace.lua
Edit the value according to the number of faces the shard handles. The value 1.2 * 1024 * 1024 * 1024 corresponds to 1,000,000 faces:
memtx_memory = 1.2 * 1024 * 1024 * 1024,
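The setting can also be derived directly from the face count at the stated rate of roughly 1,280 bytes per face. A minimal sketch (integer arithmetic; add a safety margin for real deployments):

```shell
# Estimate memtx_memory in bytes from the number of faces a shard handles,
# at the documented rate of roughly 1,280 bytes per face.
FACES=1000000
MEMTX_MEMORY=$(( FACES * 1280 ))
echo "memtx_memory = $MEMTX_MEMORY"   # prints memtx_memory = 1280000000
```

For 1,000,000 faces this yields 1,280,000,000 bytes, close to the 1.2 * 1024^3 value used in the example above.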
Specify the NTLS IP address if NTLS is remote:
FindFace.start("127.0.0.1", 8001, {license_ntls_server="192.168.113.2:3133"})
With standalone deployment, you can by default access Tarantool only locally (127.0.0.1). If you want to access Tarantool from a remote host, either specify the remote host IP address in the FindFace.start section, or change 127.0.0.1 to 0.0.0.0 there to allow access to Tarantool from any IP address. In the case study below, you allow access only from 192.168.113.10:
FindFace.start("192.168.113.10", 8001, {license_ntls_server="192.168.113.2:3133"})
Now you allow access from any IP address:
FindFace.start("0.0.0.0", 8001, {license_ntls_server="192.168.113.2:3133"})
Configure the tntapi shard to autostart and start the shard.
sudo systemctl enable tarantool@FindFace && sudo systemctl start tarantool@FindFace
Retrieve the shard status. The command will return the service description, a status (should be Active), path, and running time.
sudo systemctl status tarantool@FindFace
The tntapi.json file containing the tntapi shard parameters is automatically installed along with tntapi into the /etc/ folder.

Important

You will have to uncomment the path to tntapi.json when configuring network.
Install tntapi cluster¶
Install and configure the tntapi component as follows:
Install tntapi on the designated hosts. Tarantool will be installed automatically along with it.
sudo apt-get update && sudo apt-get install findface-tarantool-server
Create tntapi shards on each tntapi host. To learn how to shard, let's consider a case study of a cluster environment containing 4 database hosts. We'll create 4 shards on each.

Important
When creating shards in large installations, observe the following rules:
- One tntapi shard can handle approximately 10,000,000 faces.
- The number of shards on a single host should not exceed the number of its physical processor cores minus 1.
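These rules can be turned into a quick capacity estimate. A sketch with illustrative numbers (substitute your own total face count and core count):

```shell
# Shard planning estimate based on the rules above.
# FACES and CORES are illustrative; replace with your deployment figures.
FACES=120000000        # total faces to be handled
CORES=8                # physical cores per Tarantool host
PER_SHARD=10000000     # ~10,000,000 faces per shard
SHARDS=$(( (FACES + PER_SHARD - 1) / PER_SHARD ))   # ceil(FACES / PER_SHARD)
PER_HOST=$(( CORES - 1 ))                           # at most cores-1 shards per host
HOSTS=$(( (SHARDS + PER_HOST - 1) / PER_HOST ))     # ceil(SHARDS / PER_HOST)
echo "$SHARDS shards on $HOSTS hosts"
```

For 120,000,000 faces on 8-core hosts this gives 12 shards spread over 2 hosts.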
Disable the Tarantool example service autostart and stop the service. Do so on all 4 hosts.
sudo systemctl disable tarantool@example && sudo systemctl stop tarantool@example
Disable the shard created by default. Do so on all 4 hosts.
sudo systemctl disable tarantool@FindFace
Write a bash script shard.sh that will automatically create configuration files for all the shards on a particular host. Do so on each of the 4 hosts. Use the following script as a base for your own code. The exemplary script creates 4 shards listening on the tntapi ports 33001..33004 and the http ports 8001..8004.

Important
The script below creates configuration files based on the default configuration settings stored in the file /etc/tarantool/instances.enabled/FindFace.lua. We strongly recommend that you not add or edit anything in the default file except the maximum memory usage (memtx_memory) and the NTLS IP address required for tntapi licensing. The maximum memory usage should be set in bytes for each shard, depending on the number of faces a shard handles, at a rate of roughly 1,280 bytes per face.

Open the configuration file:
sudo vi /etc/tarantool/instances.enabled/FindFace.lua
Edit the value according to the number of faces a shard handles. The value 1.2 * 1024 * 1024 * 1024 corresponds to 1,000,000 faces:
memtx_memory = 1.2*1024*1024*1024,
Specify the NTLS IP address if NTLS is remote:
FindFace.start("127.0.0.1", 8001, {license_ntls_server="192.168.113.2:3133"})
#!/bin/sh
set -e
for I in `seq 1 4`; do
  TNT_PORT=$((33000+$I)) && HTTP_PORT=$((8000+$I)) && sed "
    s#/opt/ntech/var/lib/tarantool/default#/opt/ntech/var/lib/tarantool/shard_$I#g;
    s/listen = .*$/listen = '127.0.0.1:$TNT_PORT',/;
    s/\"127.0.0.1\", 8001,/\"0.0.0.0\", $HTTP_PORT,/;
  " /etc/tarantool/instances.enabled/FindFace.lua > /etc/tarantool/instances.enabled/FindFace_shard_$I.lua;
  mkdir -p /opt/ntech/var/lib/tarantool/shard_$I/snapshots
  mkdir -p /opt/ntech/var/lib/tarantool/shard_$I/xlogs
  mkdir -p /opt/ntech/var/lib/tarantool/shard_$I/index
  chown -R tarantool:tarantool /opt/ntech/var/lib/tarantool/shard_$I
  echo "Shard #$I inited"
done;
Tip
Download the exemplary script.

Run the script from the home directory.
sudo sh ~/shard.sh
Check the configuration files created.
ls /etc/tarantool/instances.enabled/
##example.lua FindFace.lua FindFace_shard_1.lua FindFace_shard_2.lua FindFace_shard_3.lua FindFace_shard_4.lua
Launch all 4 shards. Do so on each host.
for I in `seq 1 4`; do sudo systemctl enable tarantool@FindFace_shard_$I; done;
for I in `seq 1 4`; do sudo systemctl start tarantool@FindFace_shard_$I; done;
Retrieve the shards status.
sudo systemctl status tarantool@FindFace*
You should get the following output:
tarantool@FindFace_shard_3.service - Tarantool Database Server
   Loaded: loaded (/lib/systemd/system/tarantool@.service; disabled; vendor preset: enabled)
   Active: active (running) since Tue 2017-01-10 16:22:07 MSK; 32s ago
...
tarantool@FindFace_shard_2.service - Tarantool Database Server
   Loaded: loaded (/lib/systemd/system/tarantool@.service; disabled; vendor preset: enabled)
   Active: active (running) since Tue 2017-01-10 16:22:07 MSK; 32s ago
...
tarantool@FindFace_shard_1.service - Tarantool Database Server
   Loaded: loaded (/lib/systemd/system/tarantool@.service; disabled; vendor preset: enabled)
   Active: active (running) since Tue 2017-01-10 16:22:07 MSK; 32s ago
...
tarantool@FindFace_shard_4.service - Tarantool Database Server
   Loaded: loaded (/lib/systemd/system/tarantool@.service; disabled; vendor preset: enabled)
   Active: active (running) since Tue 2017-01-10 16:22:07 MSK; 32s ago
...
Tip
You can view the tntapi logs by executing:
sudo tail -f /var/log/tarantool/FindFace_shard_{1,2,3,4}.log
On the findface-facenapi host, create a file tntapi_cluster.json containing the addresses and ports of all the shards. Distribute the available shards evenly over ~1024 cells in one line.

Tip

You can create tntapi_cluster.json as follows:

Create a file that lists all the shards, each shard on a new line.
sudo vi s.txt
Run the script below. As a result, a new file tntapi_cluster.json will be created, containing a list of all the shards distributed evenly over 1024 cells.
cat s.txt | perl -lane 'push(@s,$_); END{$m=1024; $t=scalar @s;for($i=0;$i<$m;$i++){$k=int($i*$t/$m); push(@r,"\"".$s[$k]."\"")} print "[[".join(", ",@r)."]]"; }' > tntapi_cluster.json
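Before moving the file into place, you can sanity-check the distribution logic end to end on sample data. The sketch below generates an s.txt for a hypothetical 4 hosts with 4 shards each (the addresses are illustrative), runs the same one-liner, and verifies that the result is valid JSON with exactly 1024 cells:

```shell
# Generate a sample shard list: 4 hosts x 4 shards (illustrative addresses).
for H in 1 2 3 4; do
  for P in 33001 33002 33003 33004; do
    echo "192.168.113.$H:$P"
  done
done > s.txt

# Distribute the shards evenly over 1024 cells (same one-liner as above).
cat s.txt | perl -lane 'push(@s,$_); END{$m=1024; $t=scalar @s;for($i=0;$i<$m;$i++){$k=int($i*$t/$m); push(@r,"\"".$s[$k]."\"")} print "[[".join(", ",@r)."]]"; }' > tntapi_cluster.json

# Verify: the file must parse as JSON and contain exactly 1024 cells.
python3 -c "import json; print(len(json.load(open('tntapi_cluster.json'))[0]))"
```

If the last command prints anything other than 1024, or fails to parse the file, recheck the shard list in s.txt.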
Move tntapi_cluster.json to the /etc/ directory.

Important

You will have to uncomment and specify the path to tntapi_cluster.json when configuring network.