Deploy with Docker Compose
The FindFace Server 12.240830.2 distribution package now includes example docker-compose files (for CPU and GPU acceleration) that can be used to quickly set up a simple single-server, single-shard installation. This setup is useful for development environments or small installations that are not expected to receive heavy load and do not require high availability. For production applications, we recommend using Kubernetes, Docker Swarm, or another container orchestration platform.
Do the following:
Download the FindFace Server package.
wget http://path/to/ffserver-12.240830.2.zip
Put the .zip archive into some directory on the designated host (for example, /home/username/). Create a ffserver-12.240830.2 directory and unzip the archive.
mkdir ffserver-12.240830.2
unzip ffserver-12.240830.2.zip -d ffserver-12.240830.2
Note
To install unzip in Ubuntu, use the following command:
sudo apt install unzip
List the files to check the contents.
ls ffserver-12.240830.2
configs  docker-compose.cpu.yml  docker-compose.gpu.yml  images  load-images.sh  models  push-images.sh  SHA1SUMS
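Before going further, you may want to verify the unpacked files against the bundled SHA1SUMS file. Assuming it uses the standard `sha1sum` checksum-list format, a minimal check could look like this (the guard lets the snippet no-op if run outside the package directory):

```shell
# Verify the unpacked files against the bundled checksum list.
# Skips gracefully if the archive has not been unpacked here.
if [ -f ffserver-12.240830.2/SHA1SUMS ]; then
    (cd ffserver-12.240830.2 && sha1sum -c SHA1SUMS)
fi
```

`sha1sum -c` prints one `OK` line per file and exits non-zero if any checksum mismatches.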
The default configuration files of all services are located in the /path/to/ffserver-12.240830.2/configs directory. Please configure them to your needs before deploying. See Deploy Components.
ls
counter.yaml  deduplicator.yaml  extraction-api.yaml  liveness-api.yaml  ntls.yaml  sf-api.yaml  tnt-FindFace.lua  tnt-scheme.lua  video-manager.yaml  video-storage.yaml  video-streamer.yaml  video-worker.yaml
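Before editing, it can be handy to keep a pristine copy of the stock configuration files so you can later see exactly what you changed. This is a suggested convenience, not part of the official procedure; the `configs.orig` name is an example:

```shell
# Keep a pristine copy of the stock configs before editing.
# Run this from the directory where you unpacked the archive.
if [ -d configs ] && [ ! -e configs.orig ]; then
    cp -r configs configs.orig
fi
```

Later, `diff -r configs.orig configs` shows every change you made.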
Put recognition models, subject to your needs, into the /path/to/ffserver-12.240830.2/models directory.
sudo cp -rT /path/to/your-dir/models /path/to/ffserver-12.240830.2/models
Enable recognition models in the /path/to/ffserver-12.240830.2/configs/extraction-api.yaml file. See steps 3-6 in the Deploy extraction-api section.
Open the /path/to/configs/video-worker.yaml configuration file and complete all required parameters. See the example in step #3 that shows the typical structure of the models section. The specifics will depend on your selected recognition objects and models.
Please configure video-manager.yaml to your needs. See Deploy Video Objects Detection.
From the /path/to/ffserver-12.240830.2 directory, make the load-images.sh and push-images.sh scripts executable.
chmod +x load-images.sh push-images.sh
Run the script to load images.
./load-images.sh
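Once the script finishes, you can confirm the images landed in the local Docker daemon. The snippet below is a sketch; plain `docker images` lists everything, and the guard keeps it from failing on hosts where Docker is not installed:

```shell
# Confirm the FindFace Server images were loaded into the local
# Docker daemon. Skips gracefully if Docker is unavailable.
if command -v docker >/dev/null 2>&1; then
    docker images
fi
```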
If you plan to deploy FindFace Server in a distributed manner, either on a container orchestration platform (e.g., Kubernetes) or by manually running different containers on different machines, you may find it useful to store all container images in a centralized registry.
If you don’t yet have a registry, you can set up a simple one as follows:
docker run -d -p 5000:5000 --restart always --name registry docker.io/library/registry:2
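Once the registry container is running, you can confirm it is reachable: the Docker Registry HTTP API v2 answers GET /v2/ with HTTP 200 when the service is healthy. A minimal check:

```shell
# Probe the Docker Registry v2 base endpoint; success means the
# registry is up. The fallback message keeps the snippet non-fatal
# when the registry is not running.
curl -fsS http://localhost:5000/v2/ >/dev/null \
    && echo "registry is up" \
    || echo "registry not reachable"
```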
The -d flag runs the container in detached mode. The -p flag publishes port 5000 on your local machine's network.
FindFace Server images loaded into your local Docker daemon can then be uploaded to the registry you set up using the push-images.sh script:
./push-images.sh "localhost:5000"
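To confirm the push succeeded, you can ask the registry which repositories it now holds via the v2 API's _catalog endpoint; the response is a small JSON document listing repository names:

```shell
# List repositories stored in the local registry
# (Docker Registry HTTP API v2).
curl -fsS http://localhost:5000/v2/_catalog || echo '(registry not reachable)'
```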
From the /path/to/ffserver-12.240830.2 directory, run docker-compose.cpu.yml or docker-compose.gpu.yml for CPU or GPU acceleration, respectively.
docker-compose -f docker-compose.cpu.yml up -d
docker-compose -f docker-compose.gpu.yml up -d
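After bringing the stack up, it is worth checking that all service containers are actually running. A quick status check (adjust the -f file to the GPU variant if that is what you deployed; the guard keeps the snippet non-fatal where docker-compose is absent):

```shell
# Show the state of all services defined in the compose file.
if command -v docker-compose >/dev/null 2>&1; then
    docker-compose -f docker-compose.cpu.yml ps
fi
```

Any service shown as exited or restarting usually points to a configuration error in the corresponding file under configs.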
Upload the license file via the ntls web interface, or put the license file directly into the license directory /path/to/ffserver-12.240830.2/data/licenses. See steps #3-5 in the Provide Licensing section.
Following any modifications to the configuration files, restart the service containers.
docker-compose -f docker-compose.cpu.yml restart
docker-compose -f docker-compose.gpu.yml restart
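After a restart, tailing the logs helps confirm that the services picked up the new configuration and started cleanly. A sketch (swap in the GPU compose file if applicable):

```shell
# Tail recent log output from all services after a restart;
# --tail limits how much scrollback is printed.
if command -v docker-compose >/dev/null 2>&1; then
    docker-compose -f docker-compose.cpu.yml logs --tail=50
fi
```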