.. _step-by-step: Deploy Components ===================================== This section will guide you through the FindFace Server installation process. FindFace Server is configurable through a configuration file, various command-line flags, and environment variables. A reusable configuration file is a yaml/env/ini file containing the names and values of one or more of the command-line flags described below. To use this file, specify the file path as the value of the ``-c`` / ``--config`` flag or the ``CFG_CONFIG`` environment variable. The sample configuration file can be used as a starting point to create a new configuration file as needed. Options set on the command line take precedence over those from the environment. .. If a configuration file is provided, other command-line flags and environment variables will be ignored. For example, ``--config sample_conf.yml --data-dir /tmp`` will ignore the ``--data-dir`` flag. The environment variable corresponding to a flag ``--my-flag`` is ``CFG_MY_FLAG``. This applies to all flags. The instructions below are given for reference only and describe base configurations. See :ref:`components` to build a configuration suitable for a specific project. For the latest available flags, use the ``--help`` flag: .. code:: docker run --rm -ti docker.int.ntl/ntech/universe/<component>:ffserver-12.240830.2 --help .. rubric:: In this section: .. contents:: :local: .. _docker_network: Create a Docker Network ---------------------------- To create a docker network, use the following command: .. code:: docker network create --attachable server The network name must be unique. The ``--attachable`` option is used to enable manual container attachment. ``server`` is the name of your network. .. tip:: Useful commands: #. List networks. .. code:: docker network ls #. Display detailed information on your network. .. 
code:: docker network inspect server When a container is created and connected to a created network, Docker automatically creates a DNS record for that container, using the container name as the hostname and the IP address of the container as the record's value. This enables other containers on the same network to access each other by name, rather than needing to know the IP address of the target container. .. _deploy_etcd: Running ``etcd`` under Docker ----------------------------------- ``etcd`` is third-party software that implements a distributed key-value store for ``video-manager``. It is used as a coordination service in the distributed system, providing the video object detector with fault tolerance. Run ``etcd`` under Docker: .. code:: bash docker run -tid --name etcd-1 --network server --restart always \ quay.io/coreos/etcd:v3.5.11 /usr/local/bin/etcd \ -advertise-client-urls http://0.0.0.0:2379 \ -listen-client-urls http://0.0.0.0:2379 This ``docker run`` command will expose the ``etcd`` client API over port 2379. It will run version ``v3.5.11`` of ``etcd``. You can specify a different version after consulting with our specialists. Use docker options: * ``--name``: specify a custom identifier for a container, for example ``etcd-1``. * ``--network``: connect a container to a network, named ``server``. * ``--restart``: restart policy to apply when a container exits. Set ``always`` to always restart the container if it stops. Pass configuration flags to ``etcd``: * ``-advertise-client-urls``: list of this member's client URLs to advertise to the public. Default: ``http://localhost:2379``. * ``-listen-client-urls``: list of URLs to listen on for client traffic. This flag tells ``etcd`` to accept incoming requests from clients on the specified scheme://IP:port combinations. The scheme can be either http or https. If 0.0.0.0 is specified as the IP, ``etcd`` listens on the given port on all interfaces. 
If an IP address is given as well as a port, ``etcd`` will listen on the given port and interface. Multiple URLs may be used to specify a number of addresses and ports to listen on. ``etcd`` will respond to requests from any of the listed addresses and ports. Default: ``http://localhost:2379``. See `etcd `_ for more information. .. _deploy_memcached: Running ``memcached`` under Docker ----------------------------------- ``memcached`` is third-party software that implements a distributed memory caching system. It is used by ``sf-api`` as temporary storage for extracted object feature vectors before they are written to the feature vector database powered by ``Tarantool``. Run ``memcached`` under Docker: .. code:: bash docker run -tid --name memcached --restart always --network server \ docker.io/library/memcached:1.5.22 -v -m '1024' -I 16m -u memcache This ``docker run`` command will expose the ``memcached`` client API over port 11211. It will run version ``1.5.22`` of ``memcached``. You can specify a different version after consulting with our specialists. Use docker options: * ``--name``: specify a custom identifier for a container, for example ``memcached``. * ``--network``: connect a container to a network, named ``server``. * ``--restart``: restart policy to apply when a container exits. Set ``always`` to always restart the container if it stops. Pass configuration flags to ``memcached``: * ``-v``: be verbose during the event loop; print out errors and warnings. * ``-m``: use 1 GB (1024 MB) of memory to store features. * ``-I``: override the default size of each slab page. * ``-u``: assume the identity of the ``memcache`` user. .. _deploy_redis: Running ``redis`` under Docker ----------------------------------- ``redis`` is third-party software that implements an in-memory data store. Like ``memcached``, it can be used by ``sf-api`` as temporary storage for extracted object feature vectors before they are written to the feature vector database powered by ``Tarantool``. 
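Both caches described in this section are capped at the same 1 GiB limit: ``memcached`` takes its limit in megabytes (``-m '1024'``), while ``redis`` takes its limit in bytes (``--maxmemory 1073741824``). A quick shell sanity check (a sketch) that the two limits are identical:

```shell
# memcached: -m '1024' means 1024 megabytes; redis: --maxmemory takes bytes.
memcached_bytes=$((1024 * 1024 * 1024))
redis_bytes=1073741824
[ "$memcached_bytes" -eq "$redis_bytes" ] && echo "both limits equal 1 GiB"
```

If you resize one cache, keep this unit difference in mind when resizing the other.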
Run ``redis`` under Docker: .. code:: bash docker run -tid --name redis --restart always --network server \ --volume /opt/ffserver/redis-data:/data \ docker.io/redis:7 --appendonly no --save "" --maxmemory 1073741824 --maxmemory-policy allkeys-lru This ``docker run`` command will expose the ``redis`` client API over port 6379. It will run version ``7`` of ``redis``. You can specify a different version after consulting with our specialists. Use docker options: * ``--name``: specify a custom identifier for a container, for example ``redis``. * ``--network``: connect a container to a network, named ``server``. * ``--restart``: restart policy to apply when a container exits. Set ``always`` to always restart the container if it stops. Pass configuration flags to ``redis``: * ``--appendonly``: enables or disables the append-only persistence log; ``no`` disables it. * ``--save``: snapshotting schedule; an empty value (``""``) disables snapshots. * ``--maxmemory``: limits memory usage to the specified number of bytes. * ``--maxmemory-policy``: how ``redis`` will select what to remove when ``maxmemory`` is reached. .. _deploy_postgres: Running ``postgres`` under Docker ----------------------------------- ``postgres`` is a third-party object-relational database system. It is used by ``counter`` as storage for extracted object feature vectors and other information. Run ``postgres`` under Docker: .. code:: bash docker run -tid --name db-postgres --restart always --network server \ --env POSTGRES_USER=login \ --env POSTGRES_PASSWORD=password \ --env POSTGRES_DB=ntech \ --volume /opt/ffserver/postgres-data:/var/lib/postgresql \ docker.io/postgres:14.12 This ``docker run`` command will expose the ``postgres`` client API over port 5432. It will run version ``14.12`` of ``postgres``. You can specify a different version after consulting with our specialists. Use docker options: * ``--name``: specify a custom identifier for a container, for example ``db-postgres``. * ``--network``: connect a container to a network, named ``server``. 
* ``--restart``: restart policy to apply when a container exits. Set ``always`` to always restart the container if it stops. Pass environment variables to ``postgres``: * ``POSTGRES_USER`` and ``POSTGRES_PASSWORD``: the credentials used to authenticate with the database. * ``POSTGRES_DB``: the name of the database. .. _provide_licensing: Provide Licensing ----------------------------------- You receive a license file from your NtechLab manager. If you opt for on-premise licensing, we will also send you a USB dongle. FindFace Server licensing is provided as follows: #. Follow the steps required for your type of licensing. These steps are described here: :ref:`licensing-principles` #. Deploy ``ntls``, the license server of FindFace Server. .. code:: docker run -tid --name ntls --restart always --network server \ --env CFG_LISTEN=0.0.0.0:3133 \ --env CFG_UI=0.0.0.0:3185 \ --volume /opt/ffserver/licenses:/ntech/license \ --publish 127.0.0.1:3185:3185 \ docker.int.ntl/ntech/universe/ntls:ffserver-12.240830.2 Use docker options: * ``--name``: specify a custom identifier for a container, for example ``ntls``. * ``--network``: connect a container to a network, named ``server``. You should use the ``host`` network namespace instead of ``server`` if your license type requires it. But be aware that in this case other services won't be able to connect to ``ntls`` using the ``ntls`` container name. As a workaround, you can specify the IP address of the ``ntls`` host instead of the ``ntls`` container name in the configuration files of licensable components. * ``--restart``: restart policy to apply when a container exits. Set ``always`` to always restart the container if it stops. * ``--publish 127.0.0.1:3185:3185``: this allows you to connect to the ``ntls`` API from the host with ``http://localhost:3185`` (if you need access not only from the local machine, remove ``127.0.0.1:``). This option is unnecessary if ``--net=host`` is specified. 
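As mentioned at the top of this section, settings can also be kept in a reusable env file instead of being repeated as ``--env`` flags. For instance, the two ``ntls`` variables from the command above could be collected in a file (the path is hypothetical) and passed to ``docker run`` via Docker's standard ``--env-file`` option:

```ini
# /opt/ffserver/configs/ntls.env (hypothetical path)
CFG_LISTEN=0.0.0.0:3133
CFG_UI=0.0.0.0:3185
```

With this file in place, replace the two ``--env`` flags with ``--env-file /opt/ffserver/configs/ntls.env``.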
Use environment variables: * ``CFG_LISTEN``: address to accept incoming client connections (IP:PORT) (``-l`` flag). * ``CFG_UI``: bind address for the embedded UI (IP:PORT) (``--ui`` flag). The directory you chose to store the licenses must be mounted to ``/ntech/license``. In the example, ``/opt/ffserver/licenses`` is used as such a directory. .. important:: There must be only one ``ntls`` instance in each FindFace Server installation. .. tip:: In the ``ntls`` configuration file, you can change the license folder and specify your proxy server IP address if necessary. You can also change the ``ntls`` web interface remote access settings. See :ref:`ntls-config` for details. #. Upload the license file via the ``ntls`` web interface in one of the following ways: * Navigate to the ``ntls`` web interface at ``http://<ntls_host>:3185/#/``. Upload the license file. .. tip:: Later on, use the ``ntls`` web interface to consult your license information, and upgrade or extend your license. * Directly put the license file into the license folder (by default, ``/ntech/license``; this can be changed in the configuration file). #. For on-premise licensing, insert the USB dongle into a USB port. #. If the licensable components are installed on remote hosts, specify the IP address of the ``ntls`` host in their configuration files. See :ref:`extraction-api-config`, :ref:`tntapi-config`, :ref:`video-config` for details. .. seealso:: :ref:`ntls` .. _deploy_extraction-api: Deploy ``extraction-api`` -------------------------------------- ``extraction-api`` is a service that uses neural networks to detect an object in an image and extract its feature vector. It also recognizes object attributes (for example, gender, age, emotions, beard, glasses, face mask - for face objects). To deploy the ``extraction-api`` component, do the following: .. important:: This component requires the installation of neural network models. Load the desired models manually and mount a volume to the container. .. 
note:: To deploy the ``extraction-api`` service with acceleration on the GPU, use the ``extraction-api-gpu`` image. Don't forget to use the ``--runtime=nvidia`` flag in the ``docker run`` command. You can specify the identifier of the GPU device on which inference will run using the ``-gpu-device`` flag or the ``CFG_GPU_DEVICE`` environment variable. #. Create a default ``extraction-api`` configuration file. .. code:: docker run --rm -ti docker.int.ntl/ntech/universe/extraction-api-cpu:ffserver-12.240830.2 \ --config-template > /opt/ffserver/configs/extraction-api.yaml * ``/opt/ffserver/configs``: the directory on the host to store the configuration file. #. Open the ``extraction-api.yaml`` configuration file. .. code:: sudo vi /opt/ffserver/configs/extraction-api.yaml #. Enable recognition models according to your needs. Be sure to choose the right acceleration type for each model, matching the acceleration type of ``extraction-api``: CPU or GPU. Be aware that ``extraction-api`` on CPU can work only with CPU models, while ``extraction-api`` on GPU supports both CPU and GPU models. .. code:: detectors: models: jasmine: aliases: - face model: detector/facedet.jasmine_fast.004.cpu.fnk options: min_object_size: 32 resolutions: [2048x2048] objects: face: base_normalizer: facenorm/crop2x.v2_maxsize400.cpu.fnk quality_attribute: face_quality normalizers: crop2x: model: facenorm/crop2x.v2_maxsize400.cpu.fnk norm200: model: facenorm/bee.v3.cpu.fnk extractors: models_root: /usr/share/findface-data/models models: face_emben: default: model: face/nectarine_m_160.cpu.fnk face_age: default: model: faceattr/faceattr.age.v3.cpu.fnk face_quality: default: model: faceattr/faceattr.quality.v5.cpu.fnk #. Configure other parameters, if needed. For example, enable or disable image fetching from a remote server for certain kinds of requests. .. code:: fetch: enabled: true size_limit: 10485760 #. (Optional) Enable emben cutting. .. 
code:: extractors: models: face_emben: default: model: face/nectarine_xl_320.gpu.fnk params: emben_cut: - 64 * Currently, only 1 value in the ``emben_cut`` list is supported. * The value must be divisible by 16 without remainder. * Make sure the model supports emben cutting. Be sure to consult with our technical experts first (support@ntechlab.com). #. Specify an upper limit on the detection/extraction/normalization batch size, if needed. .. code:: detectors: max_batch_size: 1 extractors: max_batch_size: 1 normalizers: max_batch_size: 1 .. note:: The ``max_batch_size`` value determines the maximum number of images that are processed in parallel on the GPU or CPU. For GPU, the recommended value is ``8`` or ``16``. For CPU, you can specify ``max_batch_size: -1``, which means that ``max_batch_size`` is equal to the number of CPU cores. .. warning:: The ``*-instances`` parameters are DEPRECATED, and their fields are outdated. They indicated how many ``extraction-api`` instances were used; the number of instances was determined by your license. The value ``0`` does not mean that this number is equal to the number of CPU cores! #. When you have edited the configuration file, run the ``extraction-api`` component with the ``docker run`` command. .. code:: docker run -tid --name extraction-api --restart always --network server \ --env CFG_LICENSE_NTLS_SERVER=ntls:3133 \ --volume /opt/ffserver/models:/usr/share/findface-data/models \ --volume /opt/ffserver/configs/extraction-api.yaml:/extraction-api.yaml \ --publish 127.0.0.1:18666:18666 \ docker.int.ntl/ntech/universe/extraction-api-cpu:ffserver-12.240830.2 \ --config /extraction-api.yaml Use docker options: * ``--name``: specify a custom identifier for a container, for example ``extraction-api``. * ``--network``: connect a container to a network, named ``server``. * ``--restart``: restart policy to apply when a container exits. Set ``always`` to always restart the container if it stops. 
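Regarding the CPU note above: with ``max_batch_size: -1``, the effective batch size equals the number of CPU cores on the host. You can check that number in advance (a minimal sketch using the standard ``nproc`` utility):

```shell
# nproc prints the number of CPU cores available to the current process;
# this is the effective batch size when max_batch_size is set to -1 on CPU.
nproc
```

On an 8-core host, for example, this prints ``8``.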
Use environment variables and configuration flags: * ``CFG_LICENSE_NTLS_SERVER=ntls:3133``: host and port of the ``ntls`` container. See :ref:`licensing-principles`. * ``/opt/ffserver/configs``: the directory on the host to store the configuration file. * ``/opt/ffserver/models``: the directory on the host to store the models. * ``--publish 127.0.0.1:18666:18666``: this allows you to connect to the ``extraction-api`` API from the host with ``http://localhost:18666`` (if you need access not only from the local machine, remove ``127.0.0.1:``). .. _deploy_tntapi: Deploy ``tntapi`` ---------------------------------------- The ``tntapi`` component provides interaction between the ``sf-api`` component and the Tarantool database for efficient storage and search of object feature vectors. To increase search speed, multiple ``tntapi`` shards can be created on each Tarantool host. Running them concurrently leads to a remarkable increase in performance. Each shard can handle up to approximately 10,000,000 faces. In the case of a standalone deployment, you need only one shard (already created by default). In a cluster environment, the number of shards has to be calculated depending on your hardware configuration and database size (see details below). To deploy the ``tntapi`` component, do the following: #. Create a directory for snapshots and xlogs. .. code:: sudo mkdir -p /opt/ffserver/tnt/001-01/{snapshots,xlogs} #. Run the docker command and configure parameters: .. code:: docker run -tid --name tnt-1-1 --restart always --network server \ --env CFG_LISTEN_HOST=0.0.0.0 \ --env CFG_NTLS=ntls:3133 \ --env TT_LISTEN=0.0.0.0:32001 \ --env TT_MEMTX_MEMORY=$((1024 * 1024 * 1024)) \ --volume /opt/ffserver/tnt/001-01:/opt/ntech/var/lib/tarantool/default \ docker.int.ntl/ntech/universe/tntapi:ffserver-12.240830.2 Use docker options: * ``--name``: specify a custom identifier for a container, for example ``tnt-1-1``. 
Throughout this page we use the ``tnt-<host>-<shard>`` naming convention for ``tntapi`` containers. * ``--network``: connect a container to a network, named ``server``. * ``--restart``: restart policy to apply when a container exits. Set ``always`` to always restart the container if it stops. By default, the configuration is set via environment variables (ENV), but it is also possible to use the configuration file ``FindFace.lua``. * ``CFG_LISTEN_HOST=0.0.0.0``: host for the public HTTP API. * ``CFG_NTLS=ntls:3133``: host and port of the ``ntls`` server. * ``TT_LISTEN=0.0.0.0:32001``: binary host/port, used for admin operations and replication. * ``TT_MEMTX_MEMORY=$((1024 * 1024 * 1024))``: the maximum memory usage in bytes. * ``/opt/ffserver/tnt/``: the directory on the host to store ``tntapi`` data. If needed, configure via the :download:`FindFace.lua <_scripts/FindFace.lua>` file: #. Download the ``FindFace.lua`` file and put it into some directory on the host (for example, ``/opt/ffserver/configs``). Open the configuration file: .. code:: sudo vi /opt/ffserver/configs/FindFace.lua #. Edit the maximum memory usage. The memory usage must be set in bytes, depending on the number of faces the shard handles, at a rate of roughly 1,280 bytes per face. For example, the value ``1.2*1024*1024*1024`` corresponds to 1,000,000 faces:: memtx_memory = 1.2 * 1024 * 1024 * 1024, #. When using the default ``FindFace.lua`` configuration file, you can define the ``meta_scheme`` and ``meta_indexes`` settings in the global variable ``cfg_spaces``. Create a database structure to store the face recognition results. The structure is created as a set of spaces with fields. Describe each field with the following parameters: * ``id``: field id (starting from 1!); * ``name``: field name, must be the same as the name of a relevant object parameter, string; * ``field_type``: data type (``unsigned|string|set[string]|set[unsigned]``); * ``default``: field default value. 
If a default value exceeds ``1e14 - 1``, use a string data type to specify it, for example, ``"123123.."`` instead of ``123123..``. * ``meta_indexes``: the names of the meta fields to build the index on. You can find the custom ``tnt-scheme.lua`` :ref:`here `. Mount your edited ``tnt-scheme.lua`` file into the container when running the ``tntapi`` component: .. code:: docker run ... \ --volume /opt/ffserver/configs/tnt-scheme.lua:/tnt-scheme.lua \ ... Use the environment variable ``CFG_EXTRA_LUA`` for the file ``tnt-scheme.lua``: .. code:: CFG_EXTRA_LUA='dofile("/tnt-scheme.lua")' #. Mount your edited ``FindFace.lua`` file into the container when running the ``tntapi`` component; otherwise, the default one will be used. .. code:: docker run ... \ --volume /opt/ffserver/configs/FindFace.lua:/etc/tarantool/instances.enabled/FindFace.lua \ ... .. _deploy_upload: Deploy ``upload`` ---------------------------------------- The ``upload`` service is storage for normalized object images and video chunks. FindFace Server uses the saved normalized object images in order to migrate the database to the EmbeN of another neural network, and video chunks are used to implement the Video Recorder. The ``upload`` API is based on the `WebDAV `_ protocol. .. tip:: If you don't need this functionality, skip this step. Install ``upload`` as follows: .. code:: docker run -tid --name upload --restart always --network server \ -v /opt/ffserver/upload:/var/lib/ffupload \ docker.int.ntl/ntech/universe/upload:ffserver-12.240830.2 Use docker options: * ``--name``: specify a custom identifier for a container, for example ``upload``. * ``--network``: connect a container to a network, named ``server``. * ``--restart``: restart policy to apply when a container exits. Set ``always`` to always restart the container if it stops. * ``/var/lib/ffupload``: the directory inside the container where the data is stored (``/opt/ffserver/upload`` on the host). .. _deploy_sf-api: Deploy ``sf-api`` ---------------------------------------- .. 
note:: ``sf-api`` requires such third-party software as ``memcached``. :ref:`Deploy ` it first. ``sf-api`` is a service that implements an internal user-friendly HTTP API for the ``extraction-api`` and ``tntapi``. That is, it provides an opportunity to detect an object in an image and extract its feature vector (or do so from normalized object images), as well as save it in the Tarantool database. To deploy the ``sf-api`` component, do the following: #. Create a default ``sf-api`` configuration file. .. code:: docker run --rm -ti docker.int.ntl/ntech/universe/sf-api:ffserver-12.240830.2 \ --config-template > /opt/ffserver/configs/sf-api.yaml * ``/opt/ffserver/configs/``: the directory on the host to store the configuration file. #. Open the ``/opt/ffserver/configs/sf-api.yaml`` configuration file. .. code:: sudo vi /opt/ffserver/configs/sf-api.yaml #. Specify the address and port of the ``extraction-api`` container (``extraction-api -> url``, in the format ``http://<host>:<port>``) and of the ``tntapi`` shards (``storage-api -> shards -> master``, in the format ``http://<host>:<port>/v2/``). .. code:: extraction-api: url: http://extraction-api:18666 ... storage-api: shards: - master: http://tnt-1-1:8001/v2/ slaves: [] - master: http://tnt-2-1:8001/v2/ slaves: [] read_slave_only: false read_slave_first: false galleries_read_slave_first: false max_slave_attempts: 2 cooldown: 2s ... #. The main part of the detected object data is stored in the ``sf-api`` cache. Depending on the load profile, the cache may be: * ``inmemory`` .. code:: cache: type: inmemory inmemory: size: 16384 * ``memcache`` (see `Memcached `_). .. code:: cache: type: memcache memcache: nodes: - memcached:11211 * ``redis`` .. code:: cache: type: redis redis: # https://redis.io/ nodes: - redis:6379 timeout: 5s network: tcp password: "" db: 0 #. The ``sf-api`` service implements an HTTP API to access FindFace Server functions such as object detection and object recognition. 
In order to allow migration from one ``EmbeN`` model to another, the ``sf-api`` provides API to get normalized object images. One normalized object image can take from one to hundreds of kilobytes, so the storage for it should be several times larger than for vectors. Two storage types are currently supported: * ``webdav`` .. code:: normalized-storage: type: webdav enabled: true webdav: # upload-url: http://upload:3333/uploads/ * ``s3`` storage .. code:: normalized-storage: type: s3 enabled: true s3: endpoint: "" bucket-name: "" access-key: "" secret-access-key: "" secure: true region: "" public-url: "" operation-timeout: 30 * If a vector migration is not needed, you can disable the storage of normalized object images: .. code:: normalized-storage: enabled: false #. When you have edited the configuration file, run ``sf-api`` component with the docker run command. .. code:: docker run -tid --name sf-api --restart always --network server \ --volume /opt/ffserver/configs/sf-api.yaml:/sf-api.yaml \ --publish 127.0.0.1:18411:18411 \ docker.int.ntl/ntech/universe/sf-api:ffserver-12.240830.2 \ --config /sf-api.yaml Use docker options: * ``--name``: specify a custom identifier for a container, for example ``sf-api``. * ``--restart``: restart policy to apply when a container exits. Set ``always`` to always restart the container if it stops. * ``/opt/ffserver/configs``: the directory on the host to store the configuration file. * ``--network``: connect a container to a network, named ``server``. * ``--publish 127.0.0.1:18411:18411``: this allows you to connect to the ``sf-api`` API from the host with ``http://localhost:18411`` (if you need to access not only from the local machine, remove ``127.0.0.1:``). Use ``sf-api`` options: * ``--config``: path to ``sf-api.yaml`` configuration file. .. 
_deploy_storage-api-proxy: Deploy ``storage-api-proxy`` (optional) ---------------------------------------------------------- ``storage-api-proxy`` is an optional component that proxies requests to a specified service that implements ``storage-api`` (for example, the ``tntapi`` service or other ``storage-api-proxy`` services). To run ``storage-api-proxy``, do the following: #. Create a default ``storage-api-proxy`` configuration file. .. code:: bash docker run --rm -ti --entrypoint "/storage-api-proxy" docker.int.ntl/ntech/universe/sf-api:ffserver-12.240830.2 \ --config-template > /opt/ffserver/configs/storage-api-proxy.yaml * ``/opt/ffserver/configs/``: the directory on the host to store the configuration file. #. Modify the configuration file depending on your needs; refer to the ``storage-api`` parameters: .. code:: storage-api: # shards: - master: http://tnt-1-1:8001/v2/ slaves: [] # an array of URLs to the other shard nodes - master: http://tnt-2-1:8001/v2/ slaves: [] read_slave_only: false # If true: Ignore master on read requests. read_slave_first will be ignored. read_slave_first: false # If true: Prefer slaves over master for requests. galleries_read_slave_first: false # If true: Prefer slaves over master for get/list galleries requests. max_slave_attempts: 2 # Give up after trying to read from max_slave_attempts slaves (default 2). cooldown: 2s # Cooldown timeout after communication error. #. Run ``storage-api-proxy`` with the configuration file. .. code:: docker run --rm -ti --network server --entrypoint "/storage-api-proxy" \ --volume /opt/ffserver/configs/storage-api-proxy.yaml:/storage-api-proxy.yaml \ --publish 127.0.0.1:18411:18411 \ docker.int.ntl/ntech/universe/sf-api:ffserver-12.240830.2 \ --config /storage-api-proxy.yaml Use docker options: * ``/opt/ffserver/configs``: the directory on the host to store the configuration file. * ``--network``: connect a container to a network, named ``server``. 
* ``--publish 127.0.0.1:18411:18411``: this allows you to connect to the ``storage-api-proxy`` API from the host with ``http://localhost:18411`` (if you need access not only from the local machine, remove ``127.0.0.1:``). Use ``storage-api-proxy`` options: * ``--config``: path to the ``storage-api-proxy.yaml`` configuration file. .. _deploy_video-objects-detection: Deploy Video Objects Detection ---------------------------------------- Video objects detection is provided by the ``video-manager`` and ``video-worker`` services. .. note:: ``video-manager`` requires such third-party software as ``etcd``. :ref:`Deploy ` it first. To deploy the ``video-manager`` component, run its image and specify the environment variables: .. code:: docker run -tid --name video-manager-1 --restart always --network server \ --env CFG_ETCD_ENDPOINTS=http://etcd-1:2379,http://etcd-2:2379,http://etcd-3:2379 \ --env CFG_RPC_LISTEN=:18811 \ --env CFG_MASTER_SELF_URL=video-manager-1:18811 \ --env CFG_MASTER_SELF_URL_HTTP=video-manager-1:18810 \ --publish 127.0.0.1:18810:18810 \ docker.int.ntl/ntech/universe/video-manager:ffserver-12.240830.2 Use docker options: * ``--name``: specify a custom identifier for a container, for example ``video-manager-1``. * ``--network``: connect a container to a network, named ``server``. * ``--restart``: restart policy to apply when a container exits. Specify ``always`` to always restart the container if it stops. Environment variables and flags: * ``CFG_ETCD_ENDPOINTS=http://etcd-1:2379,http://etcd-2:2379,http://etcd-3:2379``: list of ``etcd`` URLs, where ``etcd-1`` is the etcd container name. * ``CFG_RPC_LISTEN=:18811``: allow accessing the ``video-manager`` service from any IP address. * ``CFG_MASTER_SELF_URL=video-manager-1:18811``: the RPC address this instance advertises to other components. * ``CFG_MASTER_SELF_URL_HTTP=video-manager-1:18810``: the HTTP address this instance advertises to other components. 
* ``--publish 127.0.0.1:18810:18810``: this allows you to connect to the ``video-manager`` API from the host with ``http://localhost:18810`` (if you need access not only from the local machine, remove ``127.0.0.1:``). If needed, configure the following parameters using environment variables/flags or through a configuration file: #. If you need to deploy ``video-worker`` instances on remote hosts, specify ``--publish 18811:18811``, and in ``CFG_MASTER_SELF_URL`` and ``CFG_MASTER_SELF_URL_HTTP`` replace the container's hostname with the hostname or IP of the host. #. If needed, in the ``router_url`` parameter, specify the IP address and port of the ``facerouter`` component (if installed), which will receive detected faces from ``video-worker`` (env: ``CFG_ROUTER_URL``). .. code:: --env CFG_ROUTER_URL=http://<ip>:8085/ .. note:: ``<ip>`` must be replaced with the IP address of the host running the ``facerouter`` component. #. If you need to run multiple ``video-manager`` instances and you need the license limit on the number of cameras to be checked by ``video-manager``, specify the environment variables ``CFG_NTLS_ENABLED=true`` and ``CFG_NTLS_URL=ntls:3185``, the address to which ``video-manager`` can send requests. #. If necessary, configure the video processing settings which apply to all video streams in the system. .. tip:: You can skip this step: when creating a job for ``video-manager``, you will be able to individually configure processing settings for each video stream (see :ref:`video-api`). As an alternative, you can use the configuration file: #. Create a default ``video-manager.yaml`` configuration file. .. code:: docker run --rm -ti docker.int.ntl/ntech/universe/video-manager:ffserver-12.240830.2 \ --config-template > /opt/ffserver/configs/video-manager.yaml * ``/opt/ffserver/configs/``: the directory on the host to store the configuration file. #. Open the ``/opt/ffserver/configs/video-manager.yaml`` configuration file and specify the parameters as above. .. 
code:: sudo vi /opt/ffserver/configs/video-manager.yaml #. Run the ``video-manager`` service using the ``--config /video-manager.yaml`` flag, with your modified configuration file mounted into the container. .. code:: docker run -tid --name video-manager-1 --restart always --network server \ -v /opt/ffserver/configs/video-manager.yaml:/video-manager.yaml \ docker.int.ntl/ntech/universe/video-manager:ffserver-12.240830.2 --config /video-manager.yaml .. _deploy_video-worker: To deploy the ``video-worker`` component, do the following: .. note:: To deploy the ``video-worker`` service with acceleration on the GPU, use the ``video-worker-gpu`` image. Don't forget to use the ``--runtime=nvidia`` flag in the ``docker run`` command. #. Create a default ``video-worker.yaml`` configuration file. .. code:: docker run --rm -ti docker.int.ntl/ntech/universe/video-worker-cpu:ffserver-12.240830.2 \ --config-template > /opt/ffserver/configs/video-worker.yaml * ``/opt/ffserver/configs/``: the directory on the host to store the configuration file. #. Open the ``/opt/ffserver/configs/video-worker.yaml`` configuration file. .. code:: sudo vi /opt/ffserver/configs/video-worker.yaml #. Fill out the values for all required parameters. Make sure that ``detectors``, ``normalizers``, and ``extractors`` are specified in the ``models`` section and that the ``objects`` section has corresponding values. Below is an example of what the section should look like. It may vary depending on the recognition objects that you have selected. .. 
   .. code::

      models:
        cache_dir: /var/cache/findface/models_cache
        detectors:
          car:
            fnk_path: /usr/share/findface-data/models/detector/cardet.kali.008.cpu.fnk
            min_size: 60
          body:
            fnk_path: /usr/share/findface-data/models/detector/bodydet.kali.021.cpu.fnk
            min_size: 60
          face:
            fnk_path: /usr/share/findface-data/models/detector/facedet.jasmine_fast.004.cpu.fnk
            min_size: 60
        normalizers:
          car_norm:
            fnk_path: /usr/share/findface-data/models/facenorm/cropbbox.v2.cpu.fnk
          car_norm_quality:
            fnk_path: /usr/share/findface-data/models/facenorm/cropbbox.v2.cpu.fnk
          body_norm:
            fnk_path: /usr/share/findface-data/models/facenorm/cropbbox.v2.cpu.fnk
          body_norm_quality:
            fnk_path: /usr/share/findface-data/models/facenorm/cropbbox.v2.cpu.fnk
          face_norm:
            fnk_path: /usr/share/findface-data/models/facenorm/crop2x.v2_maxsize400.cpu.fnk
          face_norm_quality:
            fnk_path: /usr/share/findface-data/models/facenorm/crop1x.v2_maxsize400.cpu.fnk
        extractors:
          car_quality:
            fnk_path: /usr/share/findface-data/models/carattr/carattr.quality.v1.cpu.fnk
            normalizer: car_norm_quality
          body_quality:
            fnk_path: /usr/share/findface-data/models/pedattr/pedattr.quality.v0.cpu.fnk
            normalizer: body_norm_quality
          face_quality:
            fnk_path: /usr/share/findface-data/models/faceattr/faceattr.quality.v5.cpu.fnk
            normalizer: face_norm_quality
      objects:
        car:
          normalizer: car_norm
          quality: car_quality
          track_features: ''
        body:
          normalizer: body_norm
          quality: body_quality
          track_features: ''
        face:
          normalizer: face_norm
          quality: face_quality
          track_features: ''

#. Run the ``video-worker`` image and specify the environment variables. Mount the volumes with the models and the modified configuration file.
   .. code::

      docker run -tid --name video-worker --network server --restart always \
          --env CFG_MGR_STATIC=video-manager-1:18811 \
          --env CFG_NTLS_ADDR=ntls:3133 \
          --volume /opt/ffserver/models:/usr/share/findface-data/models \
          --volume /opt/ffserver/configs/video-worker.yaml:/video-worker.yaml \
          docker.int.ntl/ntech/universe/video-worker-cpu:ffserver-12.240830.2 \
          --config /video-worker.yaml

Use docker options:

* ``--name``: specify a custom identifier for a container, for example ``video-worker``.
* ``--network``: connect a container to a network, named ``server``.
* ``--restart``: restart policy to apply when a container exits. Specify ``always`` to always restart the container if it stops.

Environment variables and flags:

* ``CFG_MGR_STATIC=video-manager-1:18811``: the ``video-manager`` gRPC ``ip:port``, which provides ``video-worker`` with settings and the video stream list.
* ``CFG_NTLS_ADDR=ntls:3133``: the ``ntls`` server ``ip:port``.
* ``/opt/ffserver/models``: the directory on the host that stores the models.
* ``/opt/ffserver/configs``: the directory on the host that stores the configuration file.

.. _deploy_VMS:

Deploy Video Recorder: ``video-storage`` and ``video-streamer-cpu``
-----------------------------------------------------------------------------------------

The Video Recorder operation requires third-party software: MongoDB. Install it as follows:

.. code::

   docker run -tid --name mongo --restart always --network server \
       --volume /opt/ffserver/mongo-data:/data/db \
       docker.io/library/mongo:4.4

To deploy the ``video-storage`` component, run its image and specify the environment variables:
.. code::

   docker run -tid --name video-storage --restart always --network server \
       --env CFG_STREAMER_ENDPOINTS=video-streamer:9000 \
       --env CFG_CHUNK_STORAGE_TYPE=webdav \
       --env CFG_CHUNK_STORAGE_WEBDAV_UPLOAD_URL=http://upload:3333/uploads/ \
       --env CFG_META_STORAGE_MONGO_URI=mongodb://mongo \
       --publish 127.0.0.1:18611:18611 \
       docker.int.ntl/ntech/universe/video-storage:ffserver-12.240830.2

Use docker options:

* ``--name``: specify a custom identifier for a container, for example ``video-storage``.
* ``--network``: connect a container to a network, named ``server``.
* ``--restart``: restart policy to apply when a container exits. Specify ``always`` to always restart the container if it stops.

Environment variables and flags:

* ``CFG_STREAMER_ENDPOINTS=video-streamer:9000``: the list of streamer endpoint URLs.
* ``CFG_CHUNK_STORAGE_TYPE=webdav``: the chunk storage type; set it to ``webdav``.
* ``CFG_CHUNK_STORAGE_WEBDAV_UPLOAD_URL=http://upload:3333/uploads/``: the webdav storage upload URL.
* ``CFG_META_STORAGE_MONGO_URI=mongodb://mongo``: the meta storage MongoDB URI.
* ``--publish 127.0.0.1:18611:18611``: this allows you to connect to the ``video-storage`` API from the host at ``http://localhost:18611`` (if you need access from other machines as well, remove ``127.0.0.1:``).

To deploy the ``video-streamer-cpu`` component, run its image and specify the environment variables:

.. code::

   docker run -tid --name video-streamer --restart always --network server \
       --env CFG_VIDEO_STORAGE_URL=http://video-storage:18611 \
       --volume /opt/ffserver/video-streamer:/var/cache/findface/video-streamer \
       --publish 127.0.0.1:9000:9000 \
       docker.int.ntl/ntech/universe/video-streamer-cpu:ffserver-12.240830.2

Use docker options:

* ``--name``: specify a custom identifier for a container, for example ``video-streamer``.
* ``--network``: connect a container to a network, named ``server``.
* ``--restart``: restart policy to apply when a container exits.
  Specify ``always`` to always restart the container if it stops.

Environment variables and flags:

* ``CFG_VIDEO_STORAGE_URL=http://video-storage:18611``: the ``video-storage`` API URL.
* ``--volume /opt/ffserver/video-streamer:/var/cache/findface/video-streamer``: mount the cache directory as a volume.
* ``--publish 127.0.0.1:9000:9000``: this allows you to connect to the ``video-streamer`` API from the host at ``http://localhost:9000`` (if you need access from other machines as well, remove ``127.0.0.1:``).

.. _deploy_liveness:

Deploy ``liveness-api``
--------------------------------------------------

To deploy the ``liveness-api`` component, run its image and specify the environment variables:

.. code::

   docker run -tid --name liveness-api --restart always --network server \
       --env CFG_EXTRACTION_API_EXTRACTION_API=http://extraction-api:18666 \
       --env CFG_SF_API_SF_API=http://sf-api:18411 \
       --publish 127.0.0.1:18301:18301 \
       docker.int.ntl/ntech/universe/liveness-api:ffserver-12.240830.2

Use docker options:

* ``--name``: specify a custom identifier for a container, for example ``liveness-api``.
* ``--network``: connect a container to a network, named ``server``.
* ``--restart``: restart policy to apply when a container exits. Specify ``always`` to always restart the container if it stops.

Environment variables and flags:

* ``--publish 127.0.0.1:18301:18301``: this allows you to connect to the ``liveness-api`` API from the host at ``http://localhost:18301`` (if you need access from other machines as well, remove ``127.0.0.1:``).

.. seealso::

   :ref:`liveness-api`

.. _deploy_counter:

Deploy ``counter``
---------------------------------------------

.. note:: ``counter`` requires such third-party software as ``postgres``. :ref:`Deploy ` it first.

The ``counter`` lets you get statistics on unique individuals.

To deploy the ``counter`` component, run its image and specify the environment variables:
.. code::

   docker run -tid --name counter --restart always --network server \
       --env CFG_DATABASE_CONNECTION_STRING=postgres://login:password@db-postgres/ntech?sslmode=disable \
       --publish 127.0.0.1:18300:18300 \
       docker.int.ntl/ntech/universe/counter:ffserver-12.240830.2

Use docker options:

* ``--name``: specify a custom identifier for a container, for example ``counter``.
* ``--network``: connect a container to a network, named ``server``.
* ``--restart``: restart policy to apply when a container exits. Specify ``always`` to always restart the container if it stops.

Environment variables and flags:

* ``CFG_DATABASE_CONNECTION_STRING=postgres://login:password@db-postgres/ntech?sslmode=disable``: the PostgreSQL connection string.

  * ``login:password``: the credentials used to authenticate with the database.
  * ``db-postgres``: the name of the container where the PostgreSQL database is located, or a remote host.
  * ``ntech``: the name of the database.

* ``--publish 127.0.0.1:18300:18300``: this allows you to connect to the ``counter`` API from the host at ``http://localhost:18300`` (if you need access from other machines as well, remove ``127.0.0.1:``).

.. _deploy_deduplicator:

Deploy ``deduplicator``
---------------------------------------------

To deploy the ``deduplicator`` component, run its image and specify the environment variables and flags:

.. code::

   docker run -tid --name deduplicator --restart always --network server \
       --publish 127.0.0.1:18310:18310 \
       docker.int.ntl/ntech/universe/deduplicator:ffserver-12.240830.2

Use docker options:

* ``--name``: specify a custom identifier for a container, for example ``deduplicator``.
* ``--network``: connect a container to a network, named ``server``.
* ``--restart``: restart policy to apply when a container exits. Specify ``always`` to always restart the container if it stops.
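The PostgreSQL connection string passed to ``counter`` above is a standard URL-style DSN, so its parts can be inspected with Python's standard library. A minimal sketch (illustrative only, not part of FindFace Server):

.. code:: python

   from urllib.parse import urlsplit, parse_qs

   # The sample counter connection string from the docker run command above.
   dsn = "postgres://login:password@db-postgres/ntech?sslmode=disable"

   parts = urlsplit(dsn)
   print(parts.username)                    # login -> database user
   print(parts.password)                    # password -> database password
   print(parts.hostname)                    # db-postgres -> container or remote host
   print(parts.path.lstrip("/"))            # ntech -> database name
   print(parse_qs(parts.query)["sslmode"])  # ['disable'] -> TLS disabled

Substitute your own credentials, host, and database name into the same pattern when composing the value for ``CFG_DATABASE_CONNECTION_STRING``.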
Environment variables and flags:

* ``--publish 127.0.0.1:18310:18310``: this allows you to connect to the ``deduplicator`` API from the host at ``http://localhost:18310`` (if you need access from other machines as well, remove ``127.0.0.1:``).
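As noted at the beginning of this section, every ``CFG_…`` environment variable used above is derived mechanically from the corresponding FindFace Server command-line flag: ``--my-flag`` becomes ``CFG_MY_FLAG``. A small illustrative helper (not part of FindFace Server) that performs this mapping:

.. code:: python

   def flag_to_env(flag: str) -> str:
       """Map a command-line flag to its CFG_ environment variable name.

       Follows the documented rule: ``--my-flag`` -> ``CFG_MY_FLAG``.
       Illustrative helper only; not part of FindFace Server.
       """
       return "CFG_" + flag.lstrip("-").replace("-", "_").upper()

   print(flag_to_env("--router-url"))  # CFG_ROUTER_URL
   print(flag_to_env("--ntls-url"))    # CFG_NTLS_URL

Note that this rule applies to FindFace Server's own flags, not to ``docker run`` options such as ``--name`` or ``--network``.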