Deploy with Docker Compose

The FindFace Server 12.240830.2 distribution package includes example docker-compose files (for CPU and GPU acceleration) that can be used to quickly set up a simple single-server, single-shard installation. Such a setup is suitable for development environments or small installations that are not expected to receive heavy load and do not require high availability. For production applications, we recommend Kubernetes, Docker Swarm, or another container orchestration platform.

Do the following:

  1. Download the FindFace Server package.

    wget http://path/to/ffserver-12.240830.2.zip
    
  2. Put the .zip archive into a directory on the designated host (for example, /home/username/).

  3. Create an ffserver-12.240830.2 directory and unzip the archive into it.

    mkdir ffserver-12.240830.2
    unzip ffserver-12.240830.2.zip -d ffserver-12.240830.2
    

    Note

    To install unzip in Ubuntu, use the following command:

    sudo apt install unzip
    
  4. List the files to verify the contents.

    ls ffserver-12.240830.2
    
     configs
     docker-compose.cpu.yml
     docker-compose.gpu.yml
     images
     load-images.sh
     models
     push-images.sh
     SHA1SUMS
    
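The listing above includes a SHA1SUMS file, which you can use to verify the integrity of the unpacked files, assuming GNU coreutils' `sha1sum` is available. In the distribution directory this amounts to running `sha1sum -c SHA1SUMS`; the sketch below demonstrates the mechanism on a throwaway file, since the actual archive contents vary:

```shell
# In the unpacked distribution directory you would simply run:
#   cd ffserver-12.240830.2 && sha1sum -c SHA1SUMS
# Self-contained demonstration of how -c (check mode) works:
tmpdir=$(mktemp -d)
echo "example payload" > "$tmpdir/archive.bin"
# Record the checksum, then verify it the same way SHA1SUMS is verified
(cd "$tmpdir" && sha1sum archive.bin > SHA1SUMS)
result=$(cd "$tmpdir" && sha1sum -c SHA1SUMS)
echo "$result"    # prints: archive.bin: OK
```

A non-zero exit status from `sha1sum -c` indicates that at least one file failed verification.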
  5. The default configuration files of all services are located in the directory /path/to/ffserver-12.240830.2/configs. Configure them to your needs before deploying. See Deploy Components.

    ls ffserver-12.240830.2/configs
    
    counter.yaml
    deduplicator.yaml
    extraction-api.yaml
    liveness-api.yaml
    ntls.yaml
    sf-api.yaml
    tnt-FindFace.lua
    tnt-scheme.lua
    video-manager.yaml
    video-storage.yaml
    video-streamer.yaml
    video-worker.yaml
    
  6. Copy the recognition models you need into the directory /path/to/ffserver-12.240830.2/models.

    sudo cp -rT /path/to/your-dir/models /path/to/ffserver-12.240830.2/models
    
  7. Enable recognition models in the /path/to/ffserver-12.240830.2/configs/extraction-api.yaml file. See steps 3-6 in the Deploy extraction-api section.
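As a purely illustrative sketch, enabling a model typically means pointing a key in extraction-api.yaml at a model file placed under the models directory in step 6. The key and file names below are hypothetical; the authoritative structure and the actual model names are given in steps 3-6 of the Deploy extraction-api section.

```yaml
# extraction-api.yaml -- hypothetical fragment, for orientation only
extractors:
  models:
    face_emben: face/example_model.plan   # hypothetical file under ./models
```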

  8. Open the /path/to/ffserver-12.240830.2/configs/video-worker.yaml configuration file and fill in all required parameters. See the example in step #3, which shows the typical structure of the models section; the specifics depend on your selected recognition objects and models. Also configure video-manager.yaml to your needs. See Deploy Video Objects Detection.
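For orientation only, the models section of video-worker.yaml generally maps recognition objects to model files. The keys and file names below are hypothetical; follow the example referenced above and the Deploy Video Objects Detection section for the real structure.

```yaml
# video-worker.yaml -- hypothetical fragment, for orientation only
models:
  detectors:
    face:
      model: detector/example_face_detector.plan   # hypothetical file under ./models
```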

  9. From the /path/to/ffserver-12.240830.2 directory, make load-images.sh and push-images.sh scripts executable.

    chmod +x load-images.sh push-images.sh
    
  10. Run the script to load the container images into your local Docker daemon.

    ./load-images.sh
    
  11. If you plan to deploy FindFace Server in a distributed manner, either on a container orchestration platform (e.g., Kubernetes) or by manually running different containers on different machines, you may find it useful to store all container images in a centralized registry.

    1. If you don’t yet have a registry, you can set up a simple one as follows:

      docker run -d -p 5000:5000 --restart always --name registry docker.io/library/registry:2
      

      The -d flag runs the container in detached mode. The -p flag publishes the container's port 5000 on the host.

    2. FindFace Server images loaded into your local Docker daemon can then be uploaded to the registry you set up using the push-images.sh script:

      ./push-images.sh "localhost:5000"
      
  12. From the /path/to/ffserver-12.240830.2 directory, bring the services up using docker-compose.cpu.yml or docker-compose.gpu.yml, for CPU or GPU acceleration respectively.

    docker-compose -f docker-compose.cpu.yml up -d
    
    docker-compose -f docker-compose.gpu.yml up -d
    
  13. Upload the license file via the ntls web interface, or put it directly into the license directory /path/to/ffserver-12.240830.2/data/licenses. See steps #3-5 in the Provide Licensing section.

  14. Following any modifications to the configuration files, restart the service containers.

    docker-compose -f docker-compose.cpu.yml restart
    
    docker-compose -f docker-compose.gpu.yml restart