Welcome to FindFace Multi documentation!
FindFace Multi is multifunctional, multi-object video analytics software based on FindFace Enterprise Server, a cutting-edge AI recognition technology. FindFace Multi is a turnkey solution that you can harness in such areas as retail, banking, social networking, entertainment, sports, event management, dating services, video surveillance, public safety, homeland security, and others.
FindFace Multi can detect, identify, and analyze the following objects in the video:
Human faces, along with recognition of such facial attributes as gender, age, emotions, glasses, face mask, beard, and many others. The integrated 2D anti-spoofing system verifies that a live person, not an image on paper or a mobile device screen, is in front of the camera, eliminating this kind of fraud.
Human bodies (silhouettes), along with recognition of clothing type and color.
Cars, with recognition of such car attributes as make, model, body style, color, license plate number, and others.
After FindFace Multi identifies and analyzes an object, it notifies responsible officials about its appearance within fractions of a second. Additional information about the object, such as a person’s gender, age, car license plate number, etc., is displayed in the relevant recognition event.
FindFace Multi supports the integration of third-party solutions via HTTP API and webhooks, so you can enhance your current system or application with multi-object recognition functionality.
Core features
AI-based platform.
Comprehensive cards that can accommodate aggregated information about a person and car/vehicle. Possibility of creating relations between cards of the same type or different types.
Fast and robust real-time video monitoring against a card index.
Multi-object identification and analytics: faces, bodies (silhouettes), cars.
Support for live video and archives, most video formats and codecs that FFmpeg can decode.
Advanced camera management, including ONVIF support, the possibility of changing video orientation, fine-tuning a camera for each object type, video recording (if enabled).
Multi-object verification: faces, bodies, cars.
AI recognition of gender, age, emotions, glasses, beard, face mask, and other face attributes.
AI face liveness detector.
AI recognition of an exact person and vehicle.
AI recognition of clothing type and color.
AI recognition of special vehicles, a car’s make, model, body style, color, and license plate number.
Database search for faces, bodies, cars.
Possibility of counting faces, bodies, and cars on connected cameras and measuring distance between bodies. Single- and multi-camera counting support.
Video surveillance. Video Recorder.
Possibility of monitoring the presence of people in given areas, using rules and monitoring schedules.
Automatic clustering of objects of the same origin (face/body images belonging to the same person, images of the same car), enriched with the end-to-end integration with the card index.
Deployment environment
Developer-friendly installer and user-friendly interface.
Single- and multi-host deployment.
Increased performance and fault-tolerance in high load systems with numerous cameras and clients.
Network or on-premise licensing.
CPU- or GPU-based acceleration, at your choice.
Mobile app.
System security
Advanced user management.
Authentication based on a password, certificate, and face recognition for guaranteed system protection.
Card security.
Comprehensive, friendly, searchable audit logs.
Backup and recovery utilities.
Possibility of monitoring user sessions and blocking devices without deactivating user accounts.
Ethical data usage
Full compliance with personal data protection laws (GDPR and similar).
Make the most of your system
Social interaction analysis.
Know your customer analytics (KYC).
Detailed reports on face recognition events, episodes, search events, clusters, counters, cameras, cards, audit logs, KYC analytics, and areas.
Face liveness detector as a standalone service.
Useful little things
Quick card index creation.
Complete card customization.
Deduplication support for events and cards.
Extended set of search filters.
Scheduled database cleanup.
Integration
Integration via HTTP API and webhooks.
Integrations with favored vendors.
Edge device integration.
What’s New in FindFace Multi 1.2
New Features:
Video Recorder (BETA): a brand-new functionality that records, stores, and plays back video data from cameras.
Card index: the logical evolution of the dossier database. The card index contains cards of two types. A person card aggregates data about a person, such as photos of their face and body. Vehicle-related data, such as photos and license plate numbers, are stored in car cards.
Relations: enrich your card index and analytics capacity by drawing relations between cards of the same type or different types.
See Card Relations.
Special vehicles: a new AI-based feature that allows you to automatically recognize police, ambulance, rescue service, and taxi vehicles.
Counters: a possibility of counting cars in addition to faces and bodies.
Clusters: a completely revised functionality of what was formerly known as Persons. Includes automatic clustering of objects of the same origin (face/body images belonging to the same person, images of the same car), end-to-end integration with the card index, and many other features.
Episodes: a major enhancement that divides episodes into two types: people and cars.
Webhooks: a possibility of setting webhooks for areas and new episode types.
See Webhooks.
Reports: a possibility of building reports on areas, area activations, and activation records.
See Reports.
Reports: new work time reports with the possibility of export to Excel. They capture the exact moments when staff enter and exit an enterprise or a designated area and calculate the total time spent there. Can be built on the Events and Areas tabs.
See Reports.
Enhanced Algorithms, UI, UX:
Expanded usage of progress bars throughout the web interface.
Technical Changes:
To save disc space, the system now saves normalized images in JPEG format at 95% quality instead of PNG.
Video processing: statistics are now partitioned by object type.
Cleanup utility: new entities removal support.
Cluster migration to a different neural network.
Renamed:
dossier → card.
Relations → Interaction Analysis.
Persons → Clusters.
Fresh Neural Networks:
Special vehicles recognition: a brand-new neural network carattr.special_types.v0 to recognize police cars, ambulances, rescue service vehicles, and taxis.
Liveness recognition: brand-new neural networks liveness.pacs.v0 (embedded into findface-video-worker) and liveness.colombo (findface-liveness-api) with improved characteristics.
Car license plate normalizer: anaferon.v3 with improved characteristics.
Car license plate recognition: a new version license_plate.v4 that supports license plates of Uzbekistan, Brazil, and India in addition to the UAE, Russia, Kazakhstan, Georgia, South Africa, Vietnam, Belarus, Ukraine, Armenia, and Kyrgyzstan.
Coming Soon:
Distributed card index: a possibility of distributing a card index among several hosts with synchronization and replication.
Flexible structure of the feature vector database: a possibility of creating different meta-fields in different Tarantool galleries.
Note
This functionality is already available in the FindFace Core. If you need this functionality now, contact our support team (support@ntechlab.com).
System Administrator’s Guide
This chapter covers FindFace Multi deployment, as well as further updates and maintenance during operation.
Architecture
Though you mostly interact with FindFace Multi through its web interface, be sure to take a minute to learn the FindFace Multi architecture. This knowledge is essential for the FindFace Multi deployment, integration, maintenance, and troubleshooting.
In this chapter:
Recognition Objects and Recognition Process
FindFace Multi can recognize the following objects and their features:
human faces
human bodies (silhouettes)
cars
Note
The face recognition functionality is enabled by default. Make changes to configuration files to enable the body and car recognition.
FindFace Multi detects an object in the photo or video and prepares its image through normalization. The normalized image is then used for extracting the object’s feature vector (an n-dimensional vector of numerical features that represent the object). Object feature vectors are stored in the database and further used for verification and identification purposes.
Architectural Elements
FindFace Multi consists of the following fundamental architectural elements:
FindFace Core, a cutting-edge AI-based recognition technology that can be used as a separate product FindFace Enterprise Server.
FindFace Multi, which is a turnkey application module for FindFace Enterprise Server.
The FindFace Core internal architecture allows for embedding Video Recorder, an additional functionality that records, stores, and plays back video data from cameras.
Architecture scheme
FindFace Core
The FindFace Core includes the following components:
| Component | Ports in use | Description | Vendor |
|---|---|---|---|
| findface-extraction-api | 18666 | Service that uses neural networks to detect an object in an image and extract its feature vector. It also recognizes object attributes (for example, gender, age, emotions, beard, glasses, face mask for face objects). CPU- or GPU-acceleration. | NtechLab own deployment |
| findface-sf-api | 18411 | Service that implements the internal HTTP API for object detection and recognition. | |
| findface-tarantool-server | 32001, shard ports (default 330xx, 81xx) | Service that provides interaction between the … | |
| findface-upload | 3333 | NginX-based web server used as a storage for original images, thumbnails, and normalized object images. If Video Recorder is installed, … | |
| findface-facerouter | 18820 | Service used to define processing directives for detected objects. In FindFace Multi, its functions are performed by … | |
| findface-video-manager | 18810, 18811 | Service, part of the video object detection module, that is used for managing the video object detection functionality, configuring the video object detector settings, and specifying the list of to-be-processed video streams. | |
| findface-video-worker | 18999 | Service, part of the video object detection module, that recognizes an object in the video and posts its normalized image, full frame, and metadata (such as detection time) to the … | |
| findface-ntls | 443 (TCP), 3133, 3185 | License server that interfaces with the NtechLab Global License Server, a USB dongle, or hardware fingerprint to verify the license of your FindFace Multi instance. | |
| findface-counter | 18300 | Service used for event deduplication. | |
| findface-liveness-api | 18301 | Besides the embedded functionality provided by … | |
| Tarantool | Shard ports (default 330xx, 81xx) | Third-party software that implements the feature vector database that stores extracted object feature vectors and identification events. The system data, cards, user accounts, and camera settings are stored in PostgreSQL (part of the FindFace Multi application module). | |
| etcd | 2379 | Third-party software that implements a distributed key-value store for … | |
| NginX | 80; SSL: 8002, 8003, 443, 80 | Third-party software that implements the system web interfaces. | |
| memcached | 11211 | Third-party software that implements a distributed memory caching system. Used by … | |
FindFace Multi Application Module
The FindFace Multi application module includes the following components:
| Component | Ports in use | Description | Vendor |
|---|---|---|---|
| findface-security | Configurable | Component that serves as a gateway to the FindFace Core. Provides interaction between the FindFace Core and the web interface, the system functioning as a whole, HTTP and web socket, object monitoring, event notifications, episodes, webhooks, object clusterization, and counters. Includes the following internal services: NTLS checker, Counter manager, Webhooks manager, Clusterizator, Event episodes manager, and Video archive queue manager. The last four can be enabled and disabled via the … | NtechLab own deployment |
| findface-security-ui | Configurable | Main web interface used to interact with FindFace Multi. Based on the Django framework. Allows you to work with object recognition events, search for objects, manage cameras, users, cards, watch lists, collect real-time statistics, and many more. | |
| NATS | 4222 | Third-party software that implements a message broker inside … | |
| etcd | 2379 | Third-party software that implements locks in the … | |
| Pgbouncer | 5439 | Third-party software, a lightweight connection pooler for PostgreSQL. Optional, used to increase the database performance under high load. | |
| PostgreSQL | 5432 | Third-party software that implements the main system database. This database stores cards of persons and cars and data for internal use, including user accounts and camera settings. The object feature vectors and object identification events are stored in Tarantool (part of the FindFace Core). | |
Video Recorder
Video Recorder is an optional part of the FindFace Core. It includes the following components:
| Component | Ports in use | Description | Vendor |
|---|---|---|---|
| findface-video-storage | 18611 | Service that implements video chunk management. It takes video chunks from the … | NtechLab own deployment |
| findface-video-streamer-cpu | 9000 | After receiving a … | |
| MongoDB | 27017 | Third-party software that implements the Video Recorder database. The database stores meta-information of the video chunks, including their location. The video chunks themselves are stored in the … | |
Single- and Multi-Host Deployment
You can deploy FindFace Multi on a single host or in a multi-host environment. If you opt for the latter, we offer you one of the following deployment schemes:
Deploy FindFace Multi standalone and distribute additional findface-video-worker components across multiple hosts. See Additional findface-video-worker Deployment on Remote Hosts.
Distribute the FindFace Multi components across multiple hosts. If necessary, set up load balancing.
CPU- and GPU-acceleration
The findface-extraction-api
and findface-video-worker
services can be either CPU- or GPU-based. During installation from the developer-friendly installer, you will have an opportunity to choose the acceleration type you need.
If you opt to install FindFace Multi from the repository package, deploy the findface-extraction-api
and findface-video-worker-cpu
packages on a CPU-based server, and the findface-extraction-api-gpu
and/or findface-video-worker-gpu
packages on a GPU-based server.
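For reference, a minimal sketch of the package selection for each acceleration type (the package names are the ones used elsewhere in this guide; adjust the set to your deployment):
sudo apt update
# CPU-based server
sudo apt install -y findface-extraction-api findface-video-worker-cpu
# GPU-based server
sudo apt install -y findface-extraction-api-gpu findface-video-worker-gpu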
Important
Refer to Requirements when choosing hardware configuration.
Important
If the resolution of a camera(s) in use is more than 1280x720px, it is strongly recommended to use the GPU-accelerated package findface-video-worker-gpu
.
Note
The liveness detector is much slower on CPU than on GPU.
Requirements
In this chapter:
System Requirements for Basic Configuration
To calculate the FindFace Multi host(s) characteristics, use the requirements provided below.
Tip
Be sure to learn about the FindFace Multi architecture first.
Important
If the resolution of a camera(s) in use is more than 1280x720px, it is strongly recommended to use the GPU-accelerated package findface-video-worker-gpu
.
Important
On AMD CPU servers, the full functionality of the CPU-accelerated findface-extraction-api
service is not guaranteed. Use the GPU-accelerated service findface-extraction-api-gpu
along with the GPU-version of neural networks instead.
Note
In the case of a high-load system (~> 15 events per second), we recommend using an SSD.
| | Minimum | Recommended | Notes |
|---|---|---|---|
| CPU | Intel Core i5 CPU with 4+ physical cores 3+ GHz. AVX2 support | Intel Xeon Silver/Gold with 6+ physical cores | The own needs of FindFace Multi require 2 cores HT > 2.5 GHz. The characteristics also depend on the number of cameras in use. A single camera 720p@25FPS requires 2 cores >2.5 GHz. AVX2 support |
| GPU (optional) | NVIDIA GeForce® GTX 1060 6 GB | NVIDIA GeForce® GTX 1080Ti+ with 11+ GB RAM | Supported series: GeForce (Maxwell, Pascal, Turing, and above), Tesla (Maxwell, Pascal, Volta V100, Turing, and above). Note: NVIDIA GeForce RTX 40 Series graphics cards are currently not supported |
| RAM | 10 GB | 16+ GB | The own needs of FindFace Multi require 8 GB. The RAM consumption also depends on the number of cameras in use. A single camera 720p@25FPS requires 2 GB RAM |
| HDD (SSD for best performance) | 16 GB | 16+ GB | The own needs of the operating system and FindFace Multi require 15 GB. The total volume is subject to the required depth of the event archive in the database and in the log, at the rate of 1.5 MB per event |
| Operating system | Ubuntu 18.04, x64 only | | |
Note
You can also use an Intel-based VM if there is AVX2 support, and eight physical cores are allocated exclusively to the VM.
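A quick way to check AVX2 support and the allocated physical cores on the target host or VM, using standard Linux tools only:
grep -m1 -o avx2 /proc/cpuinfo    # prints "avx2" if the CPU supports it
lscpu | grep -E 'Model name|Core\(s\) per socket|Socket\(s\)'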
Tip
For more accurate hardware selection, contact our support team by support@ntechlab.com.
Required Administrator Skills
A FindFace Multi administrator must know and understand OS Ubuntu at the level of an advanced user.
Requirements for CCTV Cameras
Face Recognition
The primary requirements for installation and characteristics of CCTV cameras in your FindFace Multi-based face recognition system are the following:
For correct face detection in a video stream, mount the camera so that the face of each individual entering the monitored area is guaranteed to appear in the camera field of view.
The vertical tilt angle of the camera should not exceed 15°. The vertical tilt is a deviation of the camera’s optical axis from the horizontal plane, positioned at the face center’s level for an average height person (160 cm).
The horizontal deflection angle should not exceed 30°. The horizontal deflection is a deviation of the camera’s optical axis from the motion vector of the main flow of objects subject to recognition.
The minimum pixel density required for identification is 500 pixels/m (roughly corresponds to a face width of 80 pixels).
Select a focal length of the camera's lens that provides the required pixel density at a predetermined distance to the recognition objects. The required focal length depends on the distance between the camera and the recognition objects; estimating it for a particular camera requires either calculators or a methodology provided by the camera manufacturer. A rough worked example is given after this list.
The exposure must be adjusted so that the face images are sharp (“in focus”), non-blurred, and evenly lit (not overlit or too dark).
For imperfect lighting conditions such as flare, too bright or too dim illumination, choose cameras with WDR hardware (Wide Dynamic Range) or other technologies that provide compensation for backlight and low illumination. Consider BLC, HLC, DNR, high optical sensitivity, Smart infrared backlight, AGC, and such.
Video compression: most video formats and codecs that FFmpeg can decode.
Video stream delivery protocols: RTSP, HTTP.
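As a rough, illustrative calculation only (the sensor width, resolution, and distance below are assumed example values, not requirements): at 500 pixels/m and a 1920 px wide image, the field of view at the subject must be at most 1920 / 500 = 3.84 m wide; for a sensor about 5.4 mm wide and a subject 10 m away, the focal length is approximately sensor width × distance / field-of-view width:
echo "scale=2; 5.4 * 10 / (1920 / 500)" | bc    # ≈ 14.06 mm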
Tip
To calculate the precise hardware configuration tailored to your purposes, contact our experts by support@ntechlab.com.
Body and Car Recognition
Since the body and car recognition are relatively new features, we still need a great deal of testing in the field to formulate universal requirements for CCTV cameras applicable to the operational phase. However, we are always happy to individually assist you with hardware selection and deployment tailored to your business needs. Please do not hesitate to contact our experts (support@ntechlab.com).
Licensing Info
In this chapter:
Licensing Principles
The FindFace Multi licensing is granted using the following criteria:
The overall number of extracted feature vectors, regardless of the object type (face, body, car).
Note
The feature vectors are extracted from objects detected in the video, from card photos and user photos, and when building so-called cluster centroids.
The licensing scheme is the following:
Events: 1 event of video object detection = 1 object in a license.
Card: 1 photo in a card = 1 object in a license.
Clusters: 1 person = 1 object in a license; 1 car = 1 object in a license.
Users: 1 photo of a user = 1 object in a license. (A worked sizing example is shown after this list.)
The number of video sources currently in use (i.e., active video processing jobs for cameras and video files).
The number of model instances in use in the
findface-extraction-api
component.
Face attribute recognition: gender/age/emotions/glasses/beard/face mask.
Body attribute recognition: clothing color/type.
Car attribute recognition: make/model/color/body style.
License plate recognition.
Face liveness detection.
Integration with partners.
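A purely hypothetical sizing example under the object-counting scheme above: 20,000 card photos, 5,000 person clusters, 1,000 car clusters, 50 user photos, and headroom for 100,000 events each count as one licensed object:
echo $(( 20000 + 5000 + 1000 + 50 + 100000 ))    # 126050 objects to cover in the license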
You can choose between the following licensing methods:
The online licensing is provided by interaction with the NtechLab Global License Manager
license.ntechlab.com
and requires a stable internet connection, DNS, and open TCP port 443 (a quick connectivity check is shown after this list). If disconnected from the internet, the system will continue working off-grid for 4 hours.
Note
It is possible to prolong the off-grid period for up to 2 days. Inform your manager if you need that.
The offline licensing via a USB dongle requires a USB port on the physical server with the
findface-ntls
component (license server in the FindFace core).The offline licensing via hardware fingerprint requires Sentinel drivers installed on the physical server with the
findface-ntls
component.
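For the online licensing method, a minimal connectivity check from the host with findface-ntls (hostname and port are those given above; your network policy may require additional rules):
nc -zv license.ntechlab.com 443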
Important
For the system to function, a single instance of findface-ntls
should be enough. If your system requires more license servers, contact your NtechLab manager beforehand to prevent your system from being blocked.
View and Update License
After installing FindFace Multi, upload the license file you obtained from your manager into the system. To do so, navigate to the License tab. Use the same tab to consult current licensing information and upgrade your license.
Offline Licensing via Hardware Fingerprint
Note
Sentinel is a type of offline license that, unlike Guardant licenses, does not require any physical media to work.
Glossary:
Sentinel is a software protection and licensing system by Thales. It allows you to implement offline licensing without access to a global server.
The C2V file contains data about the hardware fingerprint of the client's machine and is used to bind the license to that machine only. It is generated by the Sentinel library on the client's machine where the license key will later be installed.
To implement the fingerprint licensing to your system, do the following:
Inform your manager that you intend to apply this licensing method and request your unique license id.
Install the Sentinel drivers on the physical server with the
findface-ntls
component. Do the following:
Download Sentinel drivers from the official website.
Unzip the downloaded archive and browse to it.
tar -xvzf Sentinel_LDK_Linux_Runtime_Installer_script.tar.gz
cd Sentinel_LDK_Linux_Runtime_Installer_script/
There is another archive
aksusbd-8.31.1.tar.gz
inside the archive. Unzip it and browse to the resulting directory.
tar -xvzf aksusbd-8.31.1.tar.gz
cd aksusbd-8.31.1/
Run the installation command.
sudo ./dinst
Run and check the statuses of the Sentinel services.
sudo systemctl start aksusbd.service hasplmd.service
sudo systemctl status aksusbd.service hasplmd.service
Put the
findface-sentinel-lib_*.deb
package received from your manager into some directory on the same host. Install the package.
sudo dpkg -i /path/to/findface-sentinel-lib_*.deb
Get the C2V file from FindFace Multi.
In the FindFace Multi web interface, navigate to the License tab. Take a hardware fingerprint (C2V file) by clicking the Download C2V for activation button.
Tip
If you prefer working with the console, you can send the following API request to
findface-ntls
instead:
curl 'http://<findface-ntls-server-ip>/ntls/c2v' > my_pc.c2v
Send the License ID and the C2V file to your manager and receive your license file in return.
Upload the license file on the License tab.
Deploy FindFace Multi
FindFace Multi provides the following deployment options:
from a console installer
step-by-step from an APT repository
Important
Starting the GPU-accelerated services findface-extraction-api
and findface-video-worker-gpu
for the first time after deployment may take a considerable amount of time (up to 45 minutes) due to the caching process.
Important
Although FindFace Multi provides tools to protect it from unauthorized access, they are not a replacement for a properly configured firewall. Be sure to use a firewall to strengthen the FindFace Multi network protection.
Deploy from Console Installer
To deploy FindFace Multi, use a developer-friendly console installer.
Tip
Before deployment, be sure to consult the system requirements.
Important
If you intend to use the additional functionality of the Video Recorder and you are located in Russia or Belarus, before you start deploying, you need to contact support@ntechlab.com to get the MongoDB distribution package. Then follow the installation instructions in the section Troubleshoot MongoDB distribution.
Important
The FindFace Multi host must have a static IP address for the system to run successfully. To make the IP address static, open the
file and modify the current primary network interface entry as shown in the case study below. Be sure to substitute the suggested addresses with the actual ones, subject to your network specification.
sudo vi /etc/network/interfaces
iface eth0 inet static
address 192.168.112.144
netmask 255.255.255.0
gateway 192.168.112.254
dns-nameservers 192.168.112.254
Restart networking.
sudo service networking restart
Be sure to edit the /etc/network/interfaces
file with extreme care. Please refer to the Ubuntu guide on networking before proceeding.
To deploy FindFace Multi from the console installer, do the following:
Download the installer file
findface-multi-1.2-and-server-5.2.run
.Put the
.run
file into some directory on the designated host (for example,/home/username
).From this directory, make the
.run
file executable.
chmod +x findface-multi-1.2-and-server-5.2.run
Execute the
.run
file.
sudo ./findface-multi-1.2-and-server-5.2.run
The installer will ask you a few questions and perform several automated checks to ensure that the host meets the system requirements. Fill out the prompts appropriately once requested. The questions and answers are the following:
Product to install: FindFace Multi.
Installation type:
1: install FindFace Multi standalone.
2: install FindFace Multi and configure it to interact with additional remote findface-video-worker instances.
Tip
To install only findface-video-worker on a host, refer to Additional findface-video-worker Deployment on Remote Hosts.
3: install the apt repository for the step-by-step deployment.
Note
If you select installation type #3 or #4, remember to install the necessary neural network models along with the findface-extraction-api component.
Type of findface-video-worker package: CPU or GPU.
Type of findface-extraction-api package: CPU or GPU.
Do you want to install NtechLab VMS?: yes or no to install Video Recorder.
Should you choose to install FindFace Multi standalone, its components will be automatically installed, configured and/or started in the following configuration:
Important
In the case of a clean install, the installer will automatically configure findface-extraction-api to use the kiwi_320 neural network. Otherwise, you will be able to choose between kiwi_320 and the previous model. It is strictly not recommended to use the installer to update the system. See Update to FindFace Multi 1.2 for the instructions.

| Service | Configuration |
|---|---|
| postgresql-10 | Installed and started. |
| nats-server | Installed and started. |
| etcd | Installed and started. |
| pgbouncer | Installed and started. |
| memcached | Installed and started. |
| nginx | Installed and started. |
| django | Installed and started as a web framework for the FindFace Multi web interface. |
| findface-ntls | Installed and started. |
| findface-tarantool-server | Installed and started. The number of instances (shards) is calculated using the formula: N = min(max(min(mem_mb // 2000, cpu_cores), 1), 16 * cpu_cores). I.e., it is equal to the RAM size in MB divided by 2000, or the number of physical CPU cores (but at least one shard), capped at the number of physical CPU cores multiplied by 16. |
| findface-extraction-api | Installed and started (CPU/GPU-acceleration). |
| findface-sf-api | Installed and started. |
| findface-upload | Installed. |
| findface-video-manager | Installed and started. |
| findface-video-worker-* | Installed and started (CPU/GPU-acceleration). |
| findface-data-* | Neural network models for object and object attribute recognition. Installed. |
| findface-security | Installed and started. |
| findface-counter | Installed and started. |
| findface-liveness-api | Installed and started. |
| jq | Installed. Used to pretty-print API responses from FindFace Multi. |
| python3-ntech.* | Internal and auxiliary services. Installed and started. |
| findface-video-storage | Installed and started (only with Video Recorder). |
| findface-video-streamer-cpu | Installed and started (only with Video Recorder). |
| mongod | Installed and started (only with Video Recorder). |
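A worked example of the findface-tarantool-server shard formula from the table above, for a hypothetical host with 32768 MB of RAM and 8 physical cores:
mem_mb=32768; cpu_cores=8
inner=$(( mem_mb / 2000 < cpu_cores ? mem_mb / 2000 : cpu_cores ))   # min(16, 8) = 8
lower=$(( inner > 1 ? inner : 1 ))                                   # at least 1 shard
cap=$(( 16 * cpu_cores ))                                            # upper bound: 128
echo $(( lower < cap ? lower : cap ))                                # N = 8 shards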
After the installation is complete, the following output is shown on the console:
Tip
Be sure to save this data: you will need it later.
#############################################################################
#                          Installation is complete                         #
#############################################################################
- upload your license to http://172.20.77.17/#/license/
- user interface: http://172.20.77.17/
  superuser: admin
  password: admin
  documentation: http://172.20.77.17/doc/
Specify your time zone in the
/etc/findface-security/config.py
configuration file, either in theRegion/Country/City
orEtc/GMT+H
format. The time zone determines the time in reports, logs, and names of such FindFace Multi artifacts as event full frames and thumbnails, counter screenshots, etc.
Tip
The best way to do so is to copy/paste your time zone from this table on Wikipedia.
sudo vi /etc/findface-security/config.py
# time zone
TIME_ZONE = 'America/Argentina/Buenos_Aires'
(Optional) In the same configuration file, you can modify the default web interface language.
# language code
LANGUAGE_CODE = 'es-ar'
Restart the
findface-security
service.
sudo systemctl restart findface-security.service
Upload the FindFace Multi license file via the main web interface
http://<Host_IP_address>/#/license
. To access the web interface, use the providedsuperuser
credentials.
Note
The host IP address is shown in the links to FindFace web services in the following way: as an external IP address if the host belongs to a network, or
127.0.0.1
otherwise.
Important
Do not disclose the
superuser
(Super Administrator) credentials to others. To administer the system, create a new user with administrator privileges. Whatever the role, the Super Administrator cannot be deprived of its rights.
The answers to the installer questions were saved to a file
/tmp/<findface-installer-*>.json
. You can edit this file and use it to install FindFace Multi on other hosts without having to answer the questions again. To do so, execute:
sudo ./findface-multi-1.2-and-server-5.2.run -f /tmp/<findface-installer-*>.json
Tip
You can find an example of the installation file in Installation File.
Important
To preserve the FindFace Multi compatibility with the installation environment, we highly recommend you to disable the Ubuntu automatic update. In this case, you will be able to update your OS manually, fully controlling which packages to update.
To disable the Ubuntu automatic update, execute the following commands:
sudo apt-get remove unattended-upgrades
sudo systemctl stop apt-daily.timer
sudo systemctl disable apt-daily.timer
sudo systemctl disable apt-daily.service
sudo systemctl daemon-reload
Important
The FindFace Multi services log a large amount of data, which can eventually lead to disc overload. To prevent this from happening, we advise you to disable rsyslog
due to its suboptimal log rotation scheme and use the appropriately configured systemd-journal
service instead. See Service Logs for the step-by-step instructions.
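As a minimal illustration only (the authoritative procedure and values are in Service Logs), capping journal disk usage typically comes down to a couple of settings in /etc/systemd/journald.conf:
sudo vi /etc/systemd/journald.conf
[Journal]
# example caps only; use the values from Service Logs
SystemMaxUse=2G
RuntimeMaxUse=1G
sudo systemctl restart systemd-journald.service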
Note
To configure Video Recorder after the deployment, follow these instructions.
Basic Step-by-Step Deployment from Repository
This section will guide you through the FindFace Multi basic step-by-step deployment process. Follow the instructions below minding the sequence.
Tip
Be sure to learn the FindFace Multi architecture first.
Important
This section doesn’t cover the Video Recorder deployment. You can find the step-by-step instruction on this subject here.
Important
If you are located in Russia or Belarus and intend to use Video Recorder, before you start deploying, you need to contact support@ntechlab.com to get the MongoDB distribution package. Then follow the installation instructions in the section Troubleshoot MongoDB distribution.
In this section:
Install APT Repository
First of all, install the FindFace apt repository as follows:
Download the installer file
findface-multi-1.2-and-server-5.2.run
.Put the
.run
file into some directory on the designated host (for example,/home/username
).From this directory, make the
.run
file executable.
chmod +x findface-multi-1.2-and-server-5.2.run
Execute the
.run
file.
sudo ./findface-multi-1.2-and-server-5.2.run
The installer will ask you a few questions and perform several automated checks to ensure that the host meets the system requirements. Fill out the prompts appropriately once requested. The questions and answers are the following:
Product to install:
FindFace Multi
.Installation type:
repo: Don't install anything, just set up the APT repository
.Neural network models to install if necessary. To select a model(s), deselect all those on the list by entering
-*
in the command line, then select the required models by entering their sequence numbers (keyword): for example,1 3 4
. Enter done to save your selection and proceed to another step.Important
You must install a detector model for at least one object type (face, body, or car).
After that, the FindFace apt repository will be automatically installed.
Prerequisites
FindFace Multi requires such third-party software as PostgreSQL, Pgbouncer, NATS, etcd, and memcached. Do the following:
Install the prerequisite packages as such:
sudo apt update
sudo apt install -y postgresql-10 nats-server etcd memcached pgbouncer
Open the
/etc/memcached.conf
configuration file. Set the maximum memory in megabytes to use formemcached
items:-m 1024
. Set the maximum item size:-I 16m
. If one or both of these parameters are absent, add them to the file.sudo vi /etc/memcached.conf -m 1024 -I 16m
Give a strong password to the
ntech
user (9T3g1nXy9yx3y8MIGm9fbef3dia8UTc3
in the example below). Output the credentials to thepgbouncer
user list.echo '"ntech" "9T3g1nXy9yx3y8MIGm9fbef3dia8UTc3"' | sudo tee -a /etc/pgbouncer/userlist.txt
Configure
pgbouncer
. In/etc/pgbouncer/pgbouncer.ini
, add theffsecurity
database to thedatabases
section. Configure named parameters, as shown in the example below. Parameters other than those must be commented out.sudo vi /etc/pgbouncer/pgbouncer.ini [databases] ffsecurity = dbname=ffsecurity host=localhost port=5432 user=ntech [pgbouncer] pidfile = /var/run/postgresql/pgbouncer.pid listen_addr = 127.0.0.1 listen_port = 5439 unix_socket_dir = /var/run/postgresql auth_type = plain auth_file = /etc/pgbouncer/userlist.txt pool_mode = transaction server_reset_query = DISCARD ALL max_client_conn = 16384 default_pool_size = 20 syslog = 1
Enable the prerequisite services autostart on boot and re-launch the services:
sudo systemctl enable postgresql@10-main.service nats-server etcd.service memcached.service pgbouncer.service
sudo systemctl restart postgresql@10-main.service nats-server etcd.service memcached.service pgbouncer.service
Provide Licensing
Important
See Licensing Info to learn about the NtechLab licensing policy.
To provide the FindFace Multi licensing, deploy findface-ntls
, license server in the FindFace core.
Important
There must be only one
findface-ntls
instance in eachFindFace Multi
installation.
sudo apt update
sudo apt install -y findface-ntls
sudo systemctl enable findface-ntls.service && sudo systemctl start findface-ntls.service
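A quick sanity check that the license server is up and listening on its ports (port numbers from the Architecture section):
sudo systemctl status findface-ntls.service --no-pager
sudo ss -lntp | grep -E ':3133|:3185'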
Deploy Main Database
In FindFace Multi, the main system database is based on PostgreSQL. To deploy the main database, do the following:
Open the
pgbouncer
list of users/etc/pgbouncer/userlist.txt
. Copy thentech
user’s password (9T3g1nXy9yx3y8MIGm9fbef3dia8UTc3
in the example below).sudo cat /etc/pgbouncer/userlist.txt "ntech" "9T3g1nXy9yx3y8MIGm9fbef3dia8UTc3"
Using the PostgreSQL console, create a new user
ntech
with the copied password, and databasesffsecurity
andffcounter
in PostgreSQL.sudo -u postgres psql postgres=# CREATE ROLE ntech WITH LOGIN PASSWORD '9T3g1nXy9yx3y8MIGm9fbef3dia8UTc3'; postgres=# CREATE DATABASE ffsecurity WITH OWNER ntech ENCODING 'UTF-8' LC_COLLATE='en_US.UTF-8' LC_CTYPE='en_US.UTF-8' TEMPLATE template0; postgres=# CREATE DATABASE ffcounter WITH OWNER ntech ENCODING 'UTF-8' LC_COLLATE='C.UTF-8' LC_CTYPE='C.UTF-8' TEMPLATE template0;
Tip
To quit from the PostgreSQL console, type
\q
and press Enter.Allow authentication by UID of a socket client in PostgreSQL. Restart PostgreSQL.
echo 'local all ntech peer' | sudo tee -a /etc/postgresql/10/main/pg_hba.conf
sudo systemctl restart postgresql@10-main.service
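To confirm that the role and both databases exist before moving on, you can list them from the PostgreSQL console:
sudo -u postgres psql -c '\l' | grep -E 'ffsecurity|ffcounter'
sudo -u postgres psql -c '\du' | grep ntech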
Deploy FindFace Core
To deploy the FindFace core, do the following:
Tip
You can find the description of the FindFace core components and their configuration parameters in Architecture and Components in Depth.
For FindFace Multi on GPU, install NVIDIA drivers.
Important
Be sure to restart the server after the NVIDIA drivers installation is complete. Otherwise, the subsequent installation of the GPU-based components will fail.
Install the FindFace core components:
On CPU:
sudo apt update
sudo apt install -y findface-tarantool-server findface-extraction-api findface-sf-api findface-upload findface-video-manager findface-video-worker-cpu findface-liveness-api
On GPU:
sudo apt update
sudo apt install -y findface-tarantool-server findface-extraction-api-gpu findface-sf-api findface-upload findface-video-manager findface-video-worker-gpu findface-liveness-api
Note
If you have several video cards on your server, see Multiple Video Cards Usage.
Important
Be sure to manually install neural network models on the host(s) with
findface-extraction-api
.In the
/etc/findface-sf-api.ini
configuration file, enable theallow-return-facen
parameter.sudo vi /etc/findface-sf-api.ini ... limits: ... allow-return-facen: true ...
Open the
/etc/findface-video-worker-cpu.ini
(/etc/findface-video-worker-gpu.ini
) configuration file. Specify the following parameters:In the
mgr-static
parameter, specify thefindface-video-manager
host IP address, which providesfindface-video-worker
with settings and the video stream list.In the
capacity
parameter, specify the maximum number of video streams to be processed byfindface-video-worker
.In the
streamer
section, specify the IP address and port to access the Video Wall. The streamerport
must be set to18999
. Settracks = true
to improve how the object bboxes are displayed on the Video Wall.
sudo vi /etc/findface-video-worker-cpu.ini
sudo vi /etc/findface-video-worker-gpu.ini

mgr-static=127.0.0.1:18811
capacity=10
[streamer]
#------------------------------
## streamer/shots webserver port, 0=disabled
## type:number env:CFG_STREAMER_PORT longopt:--streamer-port
port = 18999
## streamer url - how to access this worker on streamer_port
## type:string env:CFG_STREAMER_URL longopt:--streamer-url
url = 127.0.0.1:18999
## use tracks instead detects for streamer
## type:bool env:CFG_STREAMER_TRACKS longopt:--streamer-tracks
tracks = true
Enable the FindFace core services autostart and launch the services.
On CPU:
sudo systemctl enable findface-extraction-api findface-sf-api findface-video-manager findface-video-worker-cpu findface-liveness-api
sudo systemctl start findface-extraction-api findface-sf-api findface-video-manager findface-video-worker-cpu findface-liveness-api
On GPU:
sudo systemctl enable findface-extraction-api findface-sf-api findface-video-manager findface-video-worker-gpu findface-liveness-api
sudo systemctl start findface-extraction-api findface-sf-api findface-video-manager findface-video-worker-gpu findface-liveness-api
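After launching the services, a quick status check (use the worker name, -cpu or -gpu, that matches your installation):
systemctl is-active findface-extraction-api findface-sf-api findface-video-manager findface-liveness-api
systemctl is-active findface-video-worker-cpu    # or findface-video-worker-gpu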
Deploy FindFace Multi Application Module and Feature Vector Database
To deploy the FindFace Multi application module, do the following:
Install the
findface-security
,findface-security-ui
, andfindface-counter
components. Enable thefindface-counter
autostart and launch the service.sudo apt update sudo apt install -y findface-security findface-security-ui findface-counter sudo systemctl enable findface-counter && sudo systemctl start findface-counter
Migrate the FindFace Multi database schema to PostgreSQL, create predefined user roles, and create the first administrator (a.k.a. Super Administrator).
Important
The Super Administrator cannot be deprived of its rights, whatever the role.
sudo findface-security migrate
sudo findface-security create_groups
sudo findface-security create_default_user
Create a structure of the Tarantool-based feature vector database by executing the command below.
sudo findface-security make_tnt_schema | sudo tee /etc/findface-security/tnt_schema.lua
Open the
/etc/tarantool/instances.available/FindFace.lua
configuration file. Check whether it contains thedofile
command,meta_indexes
andmeta_scheme
definitions, as in the example below.sudo vi /etc/tarantool/instances.available/FindFace.lua dofile("/etc/findface-security/tnt_schema.lua") -- host:port to bind, HTTP API FindFace = require("FindFace") FindFace.start("127.0.0.1", 8101, { license_ntls_server="127.0.0.1:3133", meta_indexes=meta_indexes, meta_scheme=meta_scheme })
Important
The IP address and port number specified in the
shards
section of the/etc/findface-sf-api.ini
configuration file must be identical to those in theFindFace.start
section.sudo vi /etc/tarantool/instances.available/FindFace.lua ... FindFace.start("127.0.0.1", 8101...)
sudo vi /etc/findface-sf-api.ini

storage-api:
  ...
  shards:
  - master: http://127.0.0.1:8101/v2/
    ...
Important
If you change the
/etc/findface-sf-api.ini
configuration file, be sure to restart thefindface-sf-api
service:sudo systemctl restart findface-sf-api.service
Enable the
findface-tarantool-server
service autostart and launch the service.sudo systemctl enable tarantool@FindFace.service && sudo systemctl start tarantool@FindFace.service
Open the
/etc/findface-security/config.py
configuration file. Specify the following parameters:Tip
You can find the
/etc/findface-security/config.py
default versionhere
.SERVICE_EXTERNAL_ADDRESS
: FindFace Multi IP address or URL prioritized for the Genetec integration and webhooks. If this parameter is not specified, the system will be usingEXTERNAL_ADDRESS
for these purposes.Important
To use Genetec and webhooks, be sure to specify at least one of these parameters:
SERVICE_EXTERNAL_ADDRESS
/EXTERNAL_ADDRESS
.EXTERNAL_ADDRESS
: (Optional) IP address or URL used to access the FindFace Multi web interface. If this parameter is not manually set, the system auto-detects it as the external IP address of the host.Note
To access FindFace Multi, you can use both the auto-detected and manually set IP addresses.
VIDEO_DETECTOR_TOKEN
: to authorize the video object detection module, come up with a token and specify it here.Tip
It’s a good idea to generate a token by executing:
pwgen -sncy 50 1|tr "'" "."
VIDEO_MANAGER_ADDRESS
: IP address of thefindface-video-manager
host.NTLS_HTTP_URL
: IP address of thefindface-ntls
host.ROUTER_URL
: IP address of thefindface-security
host that will receive detected objects from thefindface-video-worker
instance(s). Specify either external or internal IP address, subject to the network through whichfindface-video-worker
interacts withfindface-security
. Change the default port, subject to the redirect settings from HTTP to HTTPS, or omit it leaving only the IP address.SF_API_ADDRESS
: IP address of thefindface-sf-api
host.DATABASES
(section): fill it in as such:'PORT': 5439, 'USER': 'ntech', 'PASSWORD': '<password from /etc/pgbouncer/userlist.txt>'
(see Prerequisites).
Tip
If necessary, ensure data security by enabling SSL.
Tip
If necessary, set
’IGNORE_UNMATCHED’: True
to disable logging events for the objects that have no match with the cards. Enable this option if the system has to process a large number of objects.Generate a signature key for the session encryption (used by Django) by executing the command below. Specify this key as
SECRET_KEY
.
pwgen -sncy 50 1|tr "'" "."
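A consolidated sketch of the parameters discussed above, written as comments because the exact nesting and defaults are defined by the default config.py referenced earlier; the addresses are illustrative and the port numbers are taken from the Architecture section, so verify them against your actual file:
sudo vi /etc/findface-security/config.py
# 'EXTERNAL_ADDRESS': 'http://192.168.0.12',
# 'VIDEO_DETECTOR_TOKEN': '<token generated with pwgen>',
# 'VIDEO_MANAGER_ADDRESS': 'http://127.0.0.1:18810',
# 'NTLS_HTTP_URL': 'http://127.0.0.1:3185',
# 'ROUTER_URL': 'http://192.168.0.12',
# 'SF_API_ADDRESS': 'http://127.0.0.1:18411',
# 'SECRET_KEY': '<50-character key generated with pwgen>',
# DATABASES: ... 'PORT': 5439, 'USER': 'ntech', 'PASSWORD': '<password from /etc/pgbouncer/userlist.txt>'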
Start the services.
sudo systemctl enable findface-security
sudo systemctl start findface-security
Disable the default nginx server and add the
findface-security
server to the list of enabled servers. Restart nginx.sudo rm /etc/nginx/sites-enabled/default sudo ln -s /etc/nginx/sites-available/ffsecurity-nginx.conf /etc/nginx/sites-enabled/ sudo nginx -s reload
Provide licensing:
Use the FindFace Multi main web interface to upload the license file you have prior received from your manager ( ).
For the on-premise licensing via a USB dongle, insert it into a USB port.
For the on-premise licensing via hardware fingerprint, refer to Offline Licensing via Hardware Fingerprint.
Important
To log in for the first time, use the default Super Administrator account
admin:admin
.
Note
To create more users or change the Super Administrator password, refer to User Management.
Important
To preserve the FindFace Multi compatibility with the installation environment, we highly recommend you to disable the Ubuntu automatic update. In this case, you will be able to update your OS manually, fully controlling which packages to update.
To disable the Ubuntu automatic update, execute the following commands:
sudo apt-get remove unattended-upgrades
sudo systemctl stop apt-daily.timer
sudo systemctl disable apt-daily.timer
sudo systemctl disable apt-daily.service
sudo systemctl daemon-reload
Important
The FindFace Multi services log a large amount of data, which can eventually lead to disc overload. To prevent this from happening, we advise you to disable rsyslog
due to its suboptimal log rotation scheme and use the appropriately configured systemd-journal
service instead. See Service Logs for the step-by-step instructions.
Additional findface-video-worker
Deployment on Remote Hosts
Important
Before deploying findface-video-worker
instances on remote hosts, do the following:
Allow accessing the
findface-ntls
license server from any IP address. To do so, open the/etc/findface-ntls.cfg
configuration file on the server withfindface-ntls
and setlisten = 0.0.0.0:3133
. Restart thefindface-ntls
service.sudo vi /etc/findface-ntls.cfg ## Address to accept incoming client connections (IP:PORT) ## type:string env:CFG_LISTEN longopt:--listen listen = 0.0.0.0:3133
sudo systemctl restart findface-ntls.service
Allow accessing the
findface-video-manager
service from any IP address. To do so, open the/etc/findface-video-manager.conf
configuration file on the server withfindface-video-manager
and setlisten: 0.0.0.0:18810
andrpc:listen: 0.0.0.0:18811
. Restart thefindface-video-manager
service.sudo vi /etc/findface-video-manager.conf listen: 0.0.0.0:18810 ... rpc: listen: 0.0.0.0:18811
sudo systemctl restart findface-video-manager.service
On the FindFace Multi server, open the
/etc/findface-security/config.py
configuration file and make sure that theROUTER_URL
parameter contains the external IP address of the FindFace Multi server and not the localhost. Thefindface-video-worker
instances on the remote hosts will be using this address for posting objects.sudo vi /etc/findface-security/config.py ... 'ROUTER_URL': 'http://192.168.0.12', ...
To install only a findface-video-worker
service, do the following:
Tip
Before deployment, be sure to consult the system requirements.
Tip
If you have several video cards on your server, see Multiple Video Cards Usage before deploying findface-video-worker-gpu
.
Download the installer file
findface-multi-1.2-and-server-5.2.run
.Put the
.run
file into some directory on the designated host (for example,/home/username
).From this directory, make the
.run
file executable.
chmod +x findface-multi-1.2-and-server-5.2.run
Execute the
.run
file.
sudo ./findface-multi-1.2-and-server-5.2.run
The installer will ask you a few questions and perform several automated checks to ensure that the host meets the system requirements. Fill out the prompts appropriately once requested. The questions and answers are the following:
Product to install: FindFace Video Worker.
Type of
findface-video-worker
package: CPU or GPU.IP address of the
findface-security
host.
After that, the installation process will automatically begin.
Note
If you chose to install findface-ntls
and/or findface-video-manager
on different hosts than that with findface-security
, specify their IP addresses in the /etc/findface-video-worker-cpu.ini
(/etc/findface-video-worker-gpu.ini
) configuration file after the installation.
sudo vi /etc/findface-video-worker-cpu.ini
sudo vi /etc/findface-video-worker-gpu.ini
In the ntls-addr
parameter, specify the findface-ntls
host IP address.
ntls-addr=127.0.0.1:3133
In the mgr-static
parameter, specify the findface-video-manager
host IP address, which provides findface-video-worker
with settings and the video stream list.
mgr-static=127.0.0.1:18811
Tip
To automatically install findface-video-worker
on another host without answering the installation questions, use the /tmp/<findface-installer-*>.json
file. Execute:
sudo ./findface-multi-1.2-and-server-5.2.run -f /tmp/<findface-installer-*>.json
You can find an example of the installation file in Installation File.
Important
To preserve the FindFace Multi compatibility with the installation environment, we highly recommend you to disable the Ubuntu automatic update. In this case, you will be able to update your OS manually, fully controlling which packages to update.
To disable the Ubuntu automatic update, execute the following commands:
sudo apt-get remove unattended-upgrades
sudo systemctl stop apt-daily.timer
sudo systemctl disable apt-daily.timer
sudo systemctl disable apt-daily.service
sudo systemctl daemon-reload
Important
The FindFace Multi services log a large amount of data, which can eventually lead to disc overload. To prevent this from happening, we advise you to disable rsyslog
due to its suboptimal log rotation scheme and use the appropriately configured systemd-journal
service instead. See Service Logs for the step-by-step instructions.
Installation of Neural Network Models
To detect and recognize objects and object attributes, findface-extraction-api
uses neural networks.
If you want to manually initiate the installation of neural network models, use the console installer as follows:
Execute the prepared
findface-multi-1.2-and-server-5.2.run
file.sudo ./findface-multi-1.2-and-server-5.2.run
Select the installation type:
Fully customized installation
.Select a FindFace Multi component to install:
findface-data
. To do so, first, deselect all the listed components by entering-*
in the command line, then select the required component by entering its sequence number (keyword). Enterdone
to save your selection and proceed to another step.In the same manner, select models to install. After that, the installation process will automatically begin.
You can find installed models for the object and object attribute recognition at /usr/share/findface-data/models/
. See Neural Network Models.
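To see which models actually ended up on the host, simply list that directory:
ls /usr/share/findface-data/models/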
Fully Customized Installation
The FindFace Multi developer-friendly installer provides you with quite a few installation options, including the fully customized installation. This option is mostly used when deploying FindFace Multi in a highly distributed environment.
To initiate the fully customized installation, answer the installer questions as follows:
Product to install:
FindFace Multi
.Installation type:
Fully customized installation
.FindFace Multi components to install: whenever you have to make a selection, first, deselect all the listed components by entering
-*
in the command line, then select required components by entering their sequence number (keyword), for example:1 7 13
, etc. Enterdone
to save your selection and proceed to another step.Related questions such as about the acceleration type: CPU or GPU.
Guide to Typical Multi-Host Deployment
This section is all about deploying FindFace Multi in a multi-host environment.
Tip
If after having read this section, you still have questions, do not hesitate to contact our experts by support@ntechlab.com.
Important
This section doesn’t cover the Video Recorder deployment. You can find the step-by-step instruction on this subject here.
The reasons for deploying FindFace Multi in a multi-host environment are the following:
The necessity to distribute the video processing high load.
The necessity to process video streams from a group of cameras in the place of their physical location.
Note
The most common use cases where such need comes to the fore are hotel chains, chain stores, several security checkpoints in the same building, etc.
The necessity to distribute the feature vector extraction high load.
A large number of objects to search through, which requires a distributed object database.
Before you start the deployment, outline your system architecture, depending on its load and allotted resources (see Requirements). The most common distributed scheme is as follows:
One principal server with the following components:
findface-ntls
,findface-security
,findface-sf-api
,findface-video-manager
,findface-upload
,findface-video-worker
,findface-extraction-api
,findface-tarantool-server
, and third-parties.Several additional video processing servers with installed
findface-video-worker
.(If needed) Several additional extraction servers with installed
findface-extraction-api
.(If needed) Additional database servers with multiple Tarantool shards.
This section describes the most common distributed deployment. In high load systems, it may also be necessary to distribute the API processing (findface-sf-api
and findface-video-manager
) across several additional servers. This procedure requires a high level of expertise and some extra coding. Please do not hesitate to contact our experts for help (support@ntechlab.com).
To deploy FindFace Multi in a multi-host environment, follow the steps below:
Deploy Principal Server
To deploy the principal server as part of a distributed architecture, do the following:
On the designated physical server, install FindFace Multi from installer as follows:
Product to install:
FindFace Multi
.Installation type:
Single server, multiple video workers
. In this case, FindFace Multi will be installed and configured to interact with additional remotefindface-video-worker
instances.Type of the
findface-video-worker
acceleration (on the principal server): CPU or GPU, subject to your hardware configuration.Type of the
findface-extraction-api
acceleration (on the principal server): CPU or GPU, subject to your hardware configuration.
After the installation is complete, the following output will be shown on the console:
#############################################################################
#                          Installation is complete                         #
#############################################################################
- upload your license to http://172.20.77.17/#/license/
- user interface: http://172.20.77.17/
  superuser: admin
  password: admin
  documentation: http://172.20.77.17/doc/
Upload the FindFace Multi license file via the main web interface
http://<Host_IP_address>/#/license
. To access the web interface, use the providedsuperuser
credentials.Note
The host IP address is shown in the links to FindFace web services in the following way: as an external IP address if the host belongs to a network, or
127.0.0.1
otherwise.Important
Do not disclose the
superuser
(Super Administrator) credentials to others. To administer the system, create a new user with administrator privileges. Whatever the role, the Super Administrator cannot be deprived of its rights.
Allow the licensable services to access the
findface-ntls
license server from any IP address, To do so, open the/etc/findface-ntls.cfg
configuration file and setlisten = 0.0.0.0:3133
. Restartfindface-ntls.service
sudo vi /etc/findface-ntls.cfg

## Address to accept incoming client connections (IP:PORT)
## type:string env:CFG_LISTEN longopt:--listen
listen = 0.0.0.0:3133
sudo systemctl restart findface-ntls.service
Allow accessing the findface-video-manager service from any IP address. To do so, open the /etc/findface-video-manager.conf configuration file and set listen: 0.0.0.0:18810 and rpc: listen: 0.0.0.0:18811. Restart the findface-video-manager service.
sudo vi /etc/findface-video-manager.conf
listen: 0.0.0.0:18810
...
rpc:
  listen: 0.0.0.0:18811
sudo systemctl restart findface-video-manager.service
Deploy Video Processing Servers
On an additional video processing server, install only a findface-video-worker instance, following the step-by-step instructions. Answer the installer questions as follows:
Product to install: FindFace Video Worker.
Type of the findface-video-worker acceleration: CPU or GPU, subject to your hardware configuration.
FindFace Multi IP address: IP address of the principal server.
After that, the installation process will automatically begin. The answers will be saved to a file /tmp/<findface-installer-*>.json. Use this file to install FindFace Video Worker on other hosts without having to answer the questions again, by executing:
sudo ./findface-multi-1.2-and-server-5.2.run -f /tmp/<findface-installer-*>.json
Note
If findface-ntls and/or findface-video-manager are installed on a different host than that with findface-security, specify their IP addresses in the /etc/findface-video-worker-gpu.ini (/etc/findface-video-worker-cpu.ini) configuration file after the installation.
sudo vi /etc/findface-video-worker-cpu.ini
sudo vi /etc/findface-video-worker-gpu.ini
In the ntls-addr parameter, specify the findface-ntls host IP address.
ntls-addr=127.0.0.1:3133
In the mgr-static parameter, specify the findface-video-manager host IP address, which provides findface-video-worker with settings and the video stream list.
mgr-static=127.0.0.1:18811
Deploy Extraction Servers
On an additional extraction server, install only a findface-extraction-api instance from the console installer. Answer the installer questions as follows:
Product to install: FindFace Multi.
Installation type: Fully customized installation.
FindFace Multi components to install: findface-extraction-api and findface-data. To make a selection, first, deselect all the listed components by entering -* in the command line, then select findface-extraction-api and findface-data by entering their sequence number (keyword). Enter done to save your selection and proceed to the next step.
Type of findface-extraction-api acceleration: CPU or GPU.
Modification of the /etc/findface-extraction-api.ini configuration file: specify the IP address of the findface-ntls server.
Neural network models to install: CPU or GPU model for face biometrics (mandatory), and (optional) CPU/GPU models to recognize face attributes, cars and car attributes, and bodies and body attributes. Be sure to choose the right acceleration type for each model, matching the acceleration type of findface-extraction-api: CPU or GPU. Be aware that findface-extraction-api on CPU can work only with CPU-models, while findface-extraction-api on GPU supports both CPU- and GPU-models.
After that, the installation process will automatically begin. The answers will be saved to a file /tmp/<findface-installer-*>.json. Use this file to install findface-extraction-api on other hosts without having to answer the questions again.
sudo ./findface-multi-1.2-and-server-5.2.run -f /tmp/<findface-installer-*>.json
After all the extraction servers are deployed, distribute load across them by using a load balancer.
Distribute Load across Extraction Servers
To distribute load across several extraction servers, you need to set up load balancing. The following step-by-step instructions demonstrate how to set up nginx load balancing in a round-robin fashion for 3 findface-extraction-api instances located on different physical hosts: one on the FindFace Multi principal server (172.168.1.9), and 2 on additional remote servers (172.168.1.10, 172.168.1.11). Should you have more extraction servers in your system, load-balance them by analogy.
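As a rough illustration of what round-robin balancing means, the sketch below simply cycles through the three exemplary extraction servers. It is a conceptual Python illustration only; the actual balancing is performed by nginx as configured later in this section.
# Conceptual illustration of round-robin selection; nginx does this for you.
from itertools import cycle

extraction_servers = ["172.168.1.9:18666", "172.168.1.10:18666", "172.168.1.11:18666"]
next_backend = cycle(extraction_servers)
for _ in range(6):
    print(next(next_backend))  # requests are spread evenly across the three hosts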
Tip
You can use any load balancer according to your preference. Please refer to the relevant official documentation for guidance.
To set up load balancing, do the following:
Designate the FindFace Multi principal server (recommended) or any other server with nginx as a gateway to all the extraction servers.
Important
You will have to specify the gateway server IP address when configuring the FindFace Multi network.
Tip
You can install nginx as follows:
sudo apt update
sudo apt install nginx
On the gateway server, create a new nginx configuration file.
sudo vi /etc/nginx/sites-available/extapi
Insert the following entry into the just created configuration file. In the upstream directive (upstream extapibackends), substitute the exemplary IP addresses with the actual IP addresses of the extraction servers. In the server directive, specify the gateway server listening port as listen. You will have to enter this port when configuring the FindFace Multi network.
upstream extapibackends {
    server 172.168.1.9:18666; ## ``findface-extraction-api`` on principal server
    server 172.168.1.10:18666; ## 1st additional extraction server
    server 172.168.1.11:18666; ## 2nd additional extraction server
}
server {
    listen 18667;
    server_name extapi;
    client_max_body_size 64m;
    location / {
        proxy_pass http://extapibackends;
        proxy_next_upstream error;
    }
    access_log /var/log/nginx/extapi.access_log;
    error_log /var/log/nginx/extapi.error_log;
}
Enable the load balancer in nginx.
sudo ln -s /etc/nginx/sites-available/extapi /etc/nginx/sites-enabled/
Restart nginx.
sudo service nginx restart
On the principal server and each additional extraction server, open the /etc/findface-extraction-api.ini configuration file. Substitute localhost in the listen parameter with the relevant server address that you have specified in upstream extapibackends (/etc/nginx/sites-available/extapi) before. In our example, the address of the 1st additional extraction server has to be substituted as such:
sudo vi /etc/findface-extraction-api.ini
listen: 172.168.1.10:18666
Restart findface-extraction-api on the principal server and each additional extraction server.
sudo systemctl restart findface-extraction-api.service
The load balancing is now successfully set up. Be sure to specify the actual gateway server IP address and listening port, when configuring the FindFace Multi network.
Deploy Additional Database Servers
The findface-tarantool-server component connects the Tarantool-based feature vector database and the findface-sf-api component, transferring search results from the database to findface-sf-api for further processing.
To increase search speed, you can allocate several additional servers to the feature vector database and create multiple findface-tarantool-server shards on each additional server. The concurrent functioning of multiple shards will lead to a remarkable increase in performance, as each shard can handle up to approximately 10,000,000 feature vectors.
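If you want a rough idea of how many shards (and thus how much database hardware) your installation may need, the sizing sketch below uses the approximate figure of 10,000,000 feature vectors per shard quoted above. Treat it as an assumption; actual capacity depends on your hardware and load.
# Rough capacity estimate; assumes ~10,000,000 feature vectors per shard.
import math

def shards_needed(expected_objects, per_shard=10_000_000):
    return max(1, math.ceil(expected_objects / per_shard))

print(shards_needed(25_000_000))  # -> 3 shards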
To deploy additional database servers, do the following:
Install the findface-tarantool-server component on the first designated server. Answer the installer questions as follows:
Product to install: FindFace Multi.
Installation type: Fully customized installation.
FindFace Multi components to install: findface-tarantool-server. To make a selection, first, deselect all the listed components by entering -* in the command line, then select findface-tarantool-server by entering its sequence number (keyword). Enter done to save your selection and proceed to the next step.
After that, the installation process will automatically begin.
As a result of the installation, the findface-tarantool-server shards will be automatically installed in the amount of N = min(max(min(mem_mb // 2000, cpu_cores), 1), 16 * cpu_cores). That is, N is the smaller of the RAM size in MB divided by 2000 and the number of physical CPU cores, taken as at least one shard and capped at 16 shards per physical CPU core.
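If you want to predict the shard count for a given host before running the installer, the same formula can be reproduced with a few lines of Python (a minimal sketch; mem_mb and cpu_cores are values you supply for your own hardware):
# Reproduces the installer's shard-count formula for a given host.
def tarantool_shard_count(mem_mb, cpu_cores):
    return min(max(min(mem_mb // 2000, cpu_cores), 1), 16 * cpu_cores)

print(tarantool_shard_count(64_000, 16))  # e.g., 64 GB RAM and 16 cores -> 16 shards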
Use the created /tmp/<findface-installer-*>.json file to install findface-tarantool-server on other servers without answering the questions again. To do so, execute:
sudo ./findface-multi-1.2-and-server-5.2.run -f /tmp/<findface-installer-*>.json
Be sure to specify the IP addresses and ports of the shards later on when configuring the FindFace Multi network. To learn the port numbers, execute on each database server:
sudo cat /etc/tarantool/instances.enabled/*shard* | grep -E ".start|(listen =)"
You will get the following result:
listen = '127.0.0.1:33001',
FindFace.start("127.0.0.1", 8101, {
listen = '127.0.0.1:33002',
FindFace.start("127.0.0.1", 8102, {
You can find the port number in the FindFace.start section, for example, 8101, 8102, etc.
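If you have many shards, you may prefer to extract the ports programmatically. The helper below is a hypothetical convenience script (not shipped with FindFace) that parses the grep output shown above and prints ready-to-paste shard entries; <database_server_IP> is a placeholder you must replace with the actual server address.
# Hypothetical helper: turn the grep output into shard entries for findface-sf-api.ini.
import re

grep_output = """
listen = '127.0.0.1:33001',
FindFace.start("127.0.0.1", 8101, {
listen = '127.0.0.1:33002',
FindFace.start("127.0.0.1", 8102, {
"""

ports = re.findall(r'FindFace\.start\("[\d.]+",\s*(\d+)', grep_output)
for port in ports:
    print(f"- master: http://<database_server_IP>:{port}/v2/")
    print("  slave: ''")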
Configure Network
After all the FindFace Multi components are deployed, configure their interaction over the network. Do the following:
Open the /etc/findface-sf-api.ini configuration file:
sudo vi /etc/findface-sf-api.ini
Specify the following parameters:
extraction-api -> extraction-api: IP address and listening port of the gateway extraction server with load balancing set up.
storage-api -> shards -> master: IP address and port of the findface-tarantool-server master shard. Specify each shard by analogy.
upload_url: WebDAV NginX path to send original images, thumbnails, and normalized object images to the findface-upload service.
...
extraction-api:
  extraction-api: http://172.168.1.9:18667
...
webdav:
  upload-url: http://127.0.0.1:3333/uploads/
...
storage-api:
  ...
  shards:
  - master: http://172.168.1.9:8101/v2/
    slave: ''
  - master: http://172.168.1.9:8102/v2/
    slave: ''
  - master: http://172.168.1.12:8101/v2/
    slave: ''
  - master: http://172.168.1.12:8102/v2/
    slave: ''
  - master: http://172.168.1.13:8101/v2/
    slave: ''
  - master: http://172.168.1.13:8102/v2/
    slave: ''
Open the /etc/findface-security/config.py configuration file.
sudo vi /etc/findface-security/config.py
Specify the following parameters:
SERVICE_EXTERNAL_ADDRESS: FindFace Multi IP address or URL prioritized for the Genetec integration and webhooks. If this parameter is not specified, the system uses EXTERNAL_ADDRESS for these purposes. To use Genetec and webhooks, be sure to specify at least one of these parameters: SERVICE_EXTERNAL_ADDRESS, EXTERNAL_ADDRESS.
EXTERNAL_ADDRESS: (Optional) IP address or URL that can be used to access the FindFace Multi web interface. If this parameter is not specified, the system auto-detects it as the external IP address. To access FindFace Multi, you can use both the auto-detected and specified IP addresses.
VIDEO_DETECTOR_TOKEN: to authorize the video object detection module, come up with a token and specify it here (one possible way to generate it is sketched after the configuration example below).
VIDEO_MANAGER_ADDRESS: IP address of the findface-video-manager host.
NTLS_HTTP_URL: IP address of the findface-ntls host.
ROUTER_URL: external IP address of the findface-security host that will receive detected objects from the findface-video-worker instance(s).
SF_API_ADDRESS: IP address of the findface-sf-api host.
sudo vi /etc/findface-security/config.py
...
# SERVICE_EXTERNAL_ADDRESS prioritized for webhooks and genetec
SERVICE_EXTERNAL_ADDRESS = 'http://localhost'
EXTERNAL_ADDRESS = 'http://127.0.0.1'
...
FFSECURITY = {
    'VIDEO_DETECTOR_TOKEN': '7ce2679adfc4d74edcf508bea4d67208',
    ...
    'VIDEO_MANAGER_ADDRESS': 'http://127.0.0.1:18810',
    ...
    'NTLS_HTTP_URL': 'http://127.0.0.1:3185',
    'ROUTER_URL': 'http://172.168.1.9',
    ...
    'SF_API_ADDRESS': 'http://127.0.0.1:18411',
    ...
}
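If you need a quick way to come up with a random value for VIDEO_DETECTOR_TOKEN, one possible approach is to generate it with Python's standard secrets module; the 32-character hexadecimal format matches the example above, although any sufficiently random string will do.
# One possible way to generate a token for VIDEO_DETECTOR_TOKEN.
import secrets

print(secrets.token_hex(16))  # prints a 32-character hexadecimal token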
The FindFace Multi components interaction is now set up.
Important
To preserve the FindFace Multi compatibility with the installation environment, we highly recommend that you disable the Ubuntu automatic update. In this case, you will be able to update your OS manually, fully controlling which packages to update.
To disable the Ubuntu automatic update, execute the following commands:
sudo apt-get remove unattended-upgrades
sudo systemctl stop apt-daily.timer
sudo systemctl disable apt-daily.timer
sudo systemctl disable apt-daily.service
sudo systemctl daemon-reload
Important
The FindFace Multi services log a large amount of data, which can eventually lead to disc overload. To prevent this from happening, we advise you to disable rsyslog due to its suboptimal log rotation scheme and use the appropriately configured systemd-journal service instead. See Service Logs for the step-by-step instructions.
Deploy Video Recorder Step-by-Step
The FindFace Core internal architecture allows for embedding Video Recorder, an additional functionality that records, stores, and plays back video data from cameras.
There are the following ways to deploy Video Recorder:
Automatically during the FindFace Multi deployment from a console installer. See Deploy from Console Installer for details.
Step-by-step. Recommended in a multi-host environment.
This section will guide you through the Video Recorder step-by-step deployment.
Important
If you are located in Russia or Belarus and intend to use Video Recorder, before you start deploying, you need to contact support@ntechlab.com to get the MongoDB distribution package. Then follow the installation instructions in the section Troubleshoot MongoDB distribution. After you’ve done that, follow the instructions below.
Tip
Be sure to learn the FindFace Multi architecture first.
In this section:
Install APT Repository
First of all, install the FindFace apt repository as follows:
Download the installer file findface-multi-1.2-and-server-5.2.run.
Put the .run file into some directory on the designated host (for example, /home/username).
From this directory, make the .run file executable.
chmod +x findface-multi-1.2-and-server-5.2.run
Execute the .run file.
sudo ./findface-multi-1.2-and-server-5.2.run
The installer will ask you a few questions and perform several automated checks to ensure that the host meets the system requirements. Fill out the prompts appropriately once requested. The questions and answers are the following:
Product to install: FindFace Multi.
Installation type: repo: Don't install anything, just set up the APT repository.
APT repository doesn't include recognition models. Do you want to install them now?: no.
After that, the FindFace apt repository will be automatically installed.
Install Prerequisite, MongoDB
The Video Recorder operation requires third-party software, MongoDB. Install it as follows:
sudo apt update
sudo apt install -y mongodb
After the installation, the mongodb service will be launched and enabled to auto-start on boot. You can check its status by executing the command below:
sudo systemctl status mongodb
● mongodb.service - An object/document-oriented database
Loaded: loaded (/lib/systemd/system/mongodb.service; enabled; vendor preset: enabled)
Active: active (running) since Mon 2022-05-16 10:23:04 UTC; 46s ago
...
Install Main Components
To install the Video Recorder components, do the following:
Install the findface-video-storage and findface-video-streamer-cpu services from the previously installed APT repository.
sudo apt install -y findface-video-storage findface-video-streamer-cpu
After the installation, findface-video-storage will be launched and enabled to auto-start on boot, while the findface-video-streamer-cpu service will remain disabled and inactive.
Open the /etc/findface-video-storage.conf configuration file. Set webdav for the chunk-storage -> type parameter.
sudo vi /etc/findface-video-storage.conf
...
chunk-storage:
  type: webdav
Restart the findface-video-storage service after saving the changes.
sudo systemctl restart findface-video-storage
Launch the findface-video-streamer-cpu service and enable it to auto-start on boot.
sudo systemctl enable findface-video-streamer-cpu && sudo systemctl start findface-video-streamer-cpu
Video Recorder is now successfully deployed. To configure Video Recorder after the deployment, follow these instructions.
Add NVIDIA Repository and Install Drivers (GPU only)
FindFace Multi on GPU requires the prior installation of NVIDIA drivers.
To add the NVIDIA repository and install the drivers, do the following:
Important
You will need a stable Internet connection, as the driver packages will be downloaded from the NVIDIA web resource.
Download the installer file findface-multi-1.1-and-server-5.1.run.
Put the .run file into some directory on the designated host (for example, /home/username).
From this directory, make the .run file executable.
chmod +x findface-multi-1.1-and-server-5.1.run
Execute the .run file.
sudo ./findface-multi-1.1-and-server-5.1.run
Choose the product to install: NVIDIA CUDA drivers.
After the NVIDIA drivers installation is complete, restart the server.
General Preferences
The FindFace Multi general preferences determine your system functioning and resource consumption. Here they are:
generic confidence thresholds for face, body, car recognition (subject to the enabled objects)
thumbnail JPEG quality
schedule for automatic events/episodes cleanup
To configure the general preferences, navigate to the Preferences tab and click General. After you are finished with adjustments, click Update. Find the detailed explanation of each setting below.
In this section:
Generic Confidence Threshold
FindFace Multi verifies that a detected face and some face from the card index belong to the same person (i.e., the faces match), based on the pre-defined similarity threshold. The default threshold is set to the optimum value. If necessary, you can change it.
Note
The higher the threshold, the lower the chance that a wrong person will be positively verified; however, some valid photos may also fail verification.
The same principle applies to body and car recognition, should you have these objects enabled. Based on the pre-defined threshold, FindFace Multi verifies that a detected body and a body from the card index belong to the same person. Likewise, a detected car matches a car from the card index.
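Conceptually, the verification boils down to comparing a similarity score against the configured threshold. The sketch below is an illustration only; the comparison happens inside FindFace Multi, and the threshold itself is set in the web interface, not in code.
# Illustration of threshold-based verification; assumes a similarity score in [0, 1].
def is_match(similarity, threshold):
    return similarity >= threshold

print(is_match(0.81, 0.75))  # True: treated as the same person/car
print(is_match(0.62, 0.75))  # False: not verified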
Tip
You can configure the confidence thresholds individually for each camera group and watch list.
Important
The default generic confidence threshold is optimal for the majority of recognition cases. We do not recommend changing it on your own. Be sure to consult our technical experts first (support@ntechlab.com).
Thumbnail JPEG Quality
Subject to JPEG quality, thumbnails may take up a significant amount of disc volume. Use the General tab to configure the parameter.
Automatic Event And Episode Cleanup
Use the same tab to schedule automatically purging old events and related episodes from the database. For example, you can purge matched and unmatched events/episodes on different schedules and purge only full frames. You can also separately purge events with faces, cars, and bodies.
User Management and System Security
Important
Although FindFace Multi provides tools to ensure its protection from unauthorized access, they do not replace a properly configured firewall. Be sure to use a firewall to heighten the FindFace Multi network protection.
User Management
In this chapter:
Predefined Roles
FindFace Multi provides the following predefined roles:
Administrator is granted full access to the FindFace Multi functionality, integrative and administrative tools.
Important
Whatever the role, the first administrator (Super Administrator) cannot be deprived of its rights.
Operator is granted full access to the FindFace Multi functionality.
User is granted rights to receive and acknowledge events and episodes, search for objects on the event list, and modify their profile data. The other functions are available read-only.
You can change the privileges of the predefined roles, as well as create various custom roles.
Create Custom Role
To create a custom role, do the following:
Navigate to the Preferences tab. Click Roles.
Click +.
On the Information tab, specify the role name.
Click Save. You will see additional tabs appear next to the Information tab. You can use these tabs to assign the role privileges for specific watch lists (the Watch Lists tab) and camera groups (Camera Groups), as well as for entire system functions and entities (Permissions).
Note
For example, if you set None for a certain camera group on the Camera Groups tab, users with this role won't be able to work with this very group of cameras. Setting None for cameragroup on the Permissions tab will prevent users from viewing and working with all camera groups.
Note
The right for an event consists of the rights for a corresponding camera and watch list. To see unmatched events, you only need the rights for a camera.
The full list of the FindFace Multi entities is as follows:
area: area
faceevent: face recognition event
faceobject: face photo in a card
carevent: car recognition event
carobject: car photo in a card
bodyevent: body recognition event
bodyobject: full-length photo in a card
deviceblacklistrecord: blocklist
watchlist: watch list
cameragroup: camera group
camera: camera
uploadlist: list of photos in batch upload
upload: item (photo) in batch photo upload
user: user
webhook: webhook
videoarchive: object identification in offline video
report: report
all_own_sessions: all sessions of the current user on different devices
Note
If relevant permissions for this entity are set, users will be able to view (view) and close (delete) all their sessions on different devices. Otherwise, users will only be allowed to view and close their session on the current device. Working with sessions takes place on the Sessions tab (Preferences).
humancard: person card
carcard: car card
relation: card-to-card relation
humanepisode: person-related episode
carepisode: car-related episode
facecluster: cluster of faces
bodycluster: cluster of bodies
carcluster: cluster of cars
You can also enable and disable rights for the following functionality:
configure_ntls: configuration of the findface-ntls license server
batchupload_cards: batch photo upload
view_runtimesetting: viewing the FindFace Multi general preferences
change_runtimesetting: changing the FindFace Multi general preferences
view_auditlog: viewing and working with the audit logs.
Primary and Additional User Privileges
You assign privileges to a user by using roles:
Primary role: main user role, mandatory for assignment. You can assign only one primary role to a user.
Role: additional user role, optional for assignment. You can assign several roles to one user. The rights associated with the additional roles will be added to the primary privileges.
All users belonging to a particular primary role automatically get access to camera groups (and cameras within the group) and watch lists (and cards in the watchlist) created by a user with the same primary role, subject to the privileges defined by their additional role(s).
Create User
To create a user, do the following:
Navigate to the Preferences tab. Click Users.
Click +.
Specify such user data as name, login and password. If necessary, add a comment. Attach the user’s photo.
Important
A face in the photo must be of high quality, i.e. close to a frontal position. Distance between pupils: 60 px. Supported formats: WEBP, JPG, BMP, PNG. Photos that do not meet the requirements will be rejected with a detailed error description.
Tip
The photo can be used for biometric authentication.
From the Roles drop-down menu, select one or several user roles. Set one of them as the Primary role.
Check Active.
Click Create.
Deactivate or Delete User
In order to deactivate a user, uncheck Active on the user list.
To delete a user from FindFace Multi, click on the user login on the list. Click Delete.
To filter the user list, use the following parameters:
Active: user status
Primary role: one or several primary roles
Enable Administrator Privileges for System Plugins
The FindFace Multi package incorporates an extensive set of system plugins that provide the following functionality:
log-in through a crypto certificate (contact your manager for details)
Note
You have to manually enable the system plugins via the /etc/findface-security/config.py configuration file.
By default, the Administrator role is granted no privileges for any of the plugins. To assign relevant privileges to Administrator, do the following:
Enable a system plugin in the /etc/findface-security/config.py configuration file, following the step-by-step instructions provided by our team.
Re-migrate the main database architecture from FindFace Multi to PostgreSQL.
sudo findface-security migrate
Re-create user groups in the main database.
sudo findface-security create_groups
Restart the findface-security service.
sudo systemctl restart findface-security.service
Authentication and Session Monitoring
In this section:
Authentication Types
FindFace Multi provides the following authentication types:
password: standard login/password authentication. Enabled by default.
face: authentication is possible only by the user's face.
face_or_password: authentication is possible using either a face or login/password.
face_and_password: two-factor authentication. After a face is successfully recognized, the user must enter their credentials.
Important
For all the authentication types based on face recognition, you need the following configuration:
standalone liveness service (findface-liveness-api)
Important
Before using face recognition for authentication, you need to attach photos to users’ profiles and equip their workplaces with webcams.
Note
You can enable work session monitoring for the authentication types face and face_or_password. In this case, the system will periodically renew the session after verifying that the face of the person at the workplace matches the user's face that has logged in (see Configure Authentication and Session Renewal for details).
Tip
FindFace Multi also provides a certificate-based authentication that is configured independently. Contact our support team for details (support@ntechlab.com).
Configure Authentication and Session Renewal
To configure authentication and session monitoring, do the following:
Open the /etc/findface-security/config.py configuration file. Find the FFSECURITY and FFSECURITY_AUTH_CONFIG sections.
sudo vi /etc/findface-security/config.py
FFSECURITY = {
    # auth config
    # available options: face, password, face_and_password, face_or_password
    'AUTH_TYPE': 'face_or_password',
    # 180 days by default
    'MAXIMUM_SESSION_LENGTH': 15552000,
    ...
}
...
# - FindFace Security authorization configuration dictionary -
FFSECURITY_AUTH_CONFIG = {
    'FACE_AUTH_CONFIDENCE': 0.740,  # FAR = 2.5E-09  # model: [kiwi_320]
    # 3 settings below are for front-end only
    # session renew works only with face or face_or_password authorization type
    'NEED_SESSION_RENEW': False,
    'RENEW_SESSION_INTERVAL': 0,
    'MAXIMUM_RENEW_ATTEMPTS': 2,
}
In the FFSECURITY section, set the following authentication parameters:
AUTH_TYPE: authentication type. Available options: face, password, face_and_password, face_or_password.
MAXIMUM_SESSION_LENGTH: the maximum session length, in seconds. After a session expires, the user will be automatically logged out unless the session is renewed.
In the FFSECURITY_AUTH_CONFIG section, set the following authentication and session monitoring parameters:
FACE_AUTH_CONFIDENCE: after a face in the webcam video is detected as alive, the system checks this face against the database of user photos with this confidence threshold.
NEED_SESSION_RENEW: if True, a session can be renewed and prolonged by the time equal to MAXIMUM_SESSION_LENGTH, after verifying that the face of a person at the workplace matches the user's face that has logged in.
RENEW_SESSION_INTERVAL: period in seconds before the expected time of the session expiry, during which the system will attempt to renew the session by enabling the webcam and verifying the user's face.
MAXIMUM_RENEW_ATTEMPTS: the number of user verification attempts. The attempts occur in a row during the renewal interval.
Note
A verification attempt takes about 3 seconds to complete.
Tip
We recommend setting the monitoring parameters so that MAXIMUM_RENEW_ATTEMPTS multiplied by the attempt duration is less than RENEW_SESSION_INTERVAL. Otherwise, the system will extend the renewal interval x2, x3, and so on, subject to the number of attempts (a quick arithmetic check is sketched below).
Restart findface-security.
sudo systemctl restart findface-security.service
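A quick arithmetic check of the renewal settings, assuming each verification attempt takes about 3 seconds as noted above (an illustration only, not part of FindFace Multi):
# Checks that all renewal attempts fit into the renewal interval.
ATTEMPT_DURATION_S = 3  # approximate attempt duration, per the note above

def renewal_settings_ok(renew_session_interval, maximum_renew_attempts):
    return maximum_renew_attempts * ATTEMPT_DURATION_S < renew_session_interval

print(renewal_settings_ok(10, 2))  # 2 * ~3 s = 6 s < 10 s -> True
print(renewal_settings_ok(5, 2))   # 6 s >= 5 s -> False: the interval would be extended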
Log out All Users
To automatically log out all users, execute the following command on the FindFace Multi principal server console:
sudo findface-security logout_all_users
Tip
This command comes in handy when switching to a different authentication type.
Enable Data Encryption
To ensure data security, we recommend enabling SSL encryption. Do the following:
Under the nginx configuration directory, create a directory that will be used to hold all of the SSL data:
sudo mkdir /etc/nginx/ssl
Create the SSL key and certificate files:
sudo openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout /etc/nginx/ssl/my-example-domain.com.key -out /etc/nginx/ssl/my-example-domain.com.crt
You will be asked a few questions about your server in order to embed the information correctly in the certificate. Fill out the prompts appropriately. The most important line is the one that requests the Common Name. You need to enter the domain name or public IP address that you want to be associated with your server. Both of the files you created (my-example-domain.com.key and my-example-domain.com.crt) will be placed in the /etc/nginx/ssl directory.
Configure nginx to use SSL. Open the nginx configuration file /etc/nginx/sites-available/ffsecurity-nginx.conf. Apply the following modifications to the file:
Add the new server {...} section that contains the URL replacement rule:
server {
    listen 80;
    server_name my-example-domain.com www.my-example-domain.com;
    rewrite ^(.*) https://my-example-domain.com$1 permanent;
    access_log off;
}
Comment out the following lines in the existing server {...} section:
# listen 80 default_server;
# listen [::]:80 default_server;
Add the following lines, including the paths to the certificate and the key, to the existing server {...} section:
listen 443 ssl;
ssl_certificate /etc/nginx/ssl/my-example-domain.com.crt;
ssl_certificate_key /etc/nginx/ssl/my-example-domain.com.key;
In the generic nginx configuration file /etc/nginx/nginx.conf, find the SSL Settings section and append the following lines:
ssl_session_cache shared:SSL:10m;
ssl_session_timeout 1h;
The example of the configuration file /etc/nginx/sites-available/ffsecurity-nginx.conf with correctly configured SSL settings is shown below:
upstream ffsecurity {
    server 127.0.0.1:8002;
}
upstream ffsecurity-ws {
    server 127.0.0.1:8003;
}
map $http_upgrade $ffsec_upstream {
    default "http://ffsecurity-ws";
    "" "http://ffsecurity";
}
server {
    listen 80;
    server_name my-example-domain.com www.my-example-domain.com;
    rewrite ^(.*) https://my-example-domain.com$1 permanent;
    access_log off;
}
server {
    # listen 80 default_server;
    # listen [::]:80 default_server;
    listen 443 ssl;
    ssl_certificate /etc/nginx/ssl/my-example-domain.com.crt;
    ssl_certificate_key /etc/nginx/ssl/my-example-domain.com.key;
    root /var/lib/findface-security;
    autoindex off;
    server_name _;
    location = / {
        alias /usr/share/findface-security-ui/;
        try_files /index.html =404;
    }
    location /static/ {
    }
    location /uploads/ {
        add_header 'Access-Control-Allow-Origin' '*';
        add_header 'Access-Control-Allow-Methods' 'GET';
        add_header 'Access-Control-Allow-Headers' 'DNT,User-Agent,X-Requested-With,If-Modified-Since,Cache-Control,Content-Type,Range,Authorization';
        add_header 'Access-Control-Expose-Headers' 'Content-Length,Content-Range';
        add_header 'Access-Control-Max-Age' 2592000;
    }
    location /ui-static/ {
        alias /usr/share/findface-security-ui/ui-static/;
    }
    location /doc/ {
        alias /opt/findface-security/doc/;
    }
    location ~ /videos/(?<video_id>[0-9]+)/upload/(.*)$ {
        if ($request_method = 'OPTIONS') {
            add_header 'Content-Type' 'text/plain; charset=utf-8';
            add_header 'Content-Length' 0;
            return 204;
        }
        set $auth_request_uri "http://ffsecurity/videos/$video_id/auth-upload/";
        auth_request /video-upload-auth/;
        alias "/var/lib/findface-security/uploads/videos/$video_id.bin";
        client_max_body_size 15g;
        dav_access user:rw group:rw all:rw;
        dav_methods PUT;
        create_full_put_path on;
        autoindex off;
        autoindex_exact_size off;
        autoindex_localtime on;
        charset utf-8;
        add_header 'Access-Control-Allow-Origin' '*';
        add_header 'Access-Control-Allow-Methods' 'PUT, OPTIONS';
        add_header 'Access-Control-Allow-Headers' 'authorization';
    }
    location = /video-upload-auth/ {
        internal;
        client_max_body_size 15g;
        proxy_set_header Content-Length "";
        proxy_set_header Host $http_host;
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_pass_request_body off;
        proxy_pass $auth_request_uri;
    }
    location / {
        client_max_body_size 300m;
        proxy_set_header Host $http_host;
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_pass $ffsec_upstream;
        proxy_read_timeout 5m;
        location ~ ^/(cameras|videos)/([0-9]+)/stream/?$ {
            proxy_set_header Host $http_host;
            proxy_set_header X-Forwarded-For $remote_addr;
            proxy_set_header X-Forwarded-Proto $scheme;
            proxy_pass http://ffsecurity;
        }
        location ~ ^/streams/(.*)$ {
            internal;
            proxy_pass $1;
        }
    }
}
Restart nginx.
sudo systemctl restart nginx.service
Edit the /etc/findface-security/config.py configuration file. In the EXTERNAL_ADDRESS and ROUTER_URL parameters, substitute the http:// prefix with https://.
sudo vi /etc/findface-security/config.py
...
EXTERNAL_ADDRESS="https://my-example-domain.com"
...
ROUTER_URL="https://IP_address"
Restart findface-security.
sudo systemctl restart findface-security
If there are running findface-video-worker services in the system, you need to either recreate cameras in the web interface, or change the router_url parameter in the relevant video processing jobs, substituting the http:// prefix with https://. This can be done with the following command:
curl -s localhost:18810/jobs | jq -r '.[]["id"]' | xargs -I {} curl -X PATCH -d '{"router_url": "https://my-example-domain.com/video-detector/frame"}' http://localhost:18810/job/{}
Enable Card Index Protection
If the card index security is disabled, the card photos and attachments will be available by direct link regardless of the user rights. To increase the card index security, configure FindFace Multi to run all media requests through the Django application for ACL checks.
Important
Enable the card media security only if you need it, as this setting significantly degrades the system performance.
Important
For the ACL checks to work properly, you must set the view permission for the card photos of faces, bodies, and cars. To do so, navigate to Preferences → Roles → specific role → Permissions and set View for the faceobject, bodyobject, and carobject entities, subject to the object types in the cards. See Create Custom Role for details.
To enable card index security, do the following:
Open the /etc/findface-security/config.py configuration file.
sudo vi /etc/findface-security/config.py
Uncomment OVERPROTECT_MEDIA and set it to True.
...
'OVERPROTECT_MEDIA': True,
Open the nginx configuration file /etc/nginx/sites-available/ffsecurity-nginx.conf. Uncomment internal in the location /uploads section.
location /uploads/ {
    internal; # Uncomment if you intend to enable OVERPROTECT_MEDIA
    ...
}
Restart findface-security and nginx.
sudo systemctl restart findface-security.service
sudo systemctl restart nginx.service
After the new security policy is applied, logged-in users must re-authenticate. To make the users do so, execute the logout-all command:
sudo findface-security logout_all_users
Disable ACL
You can turn off FindFace Multi ACL if you do not need it, as the constant permission checks consume a significant amount of system resources.
Do the following:
Open the /etc/findface-security/config.py configuration file.
sudo vi /etc/findface-security/config.py
Set ENABLE_ACL = False.
...
ENABLE_ACL = False
Restart findface-security.
sudo systemctl restart findface-security.service
Audit Logs
The FindFace Multi comprehensive and searchable audit logs are an excellent complementary tool for user management that provides you with a thorough audit of the user actions and strengthens your system protection. You can access this functionality on the Audit Logs tab.
Each record provides the following data:
username of the user who performed the action
IP address where the request came from
device id: the unique identifier of the client device
action type such as authorization, search, object modification, restart, and so on
object type to which the action applies, for example, a card or a camera
object identifier
details, subject to the action type
timestamp
Use the filter panel to the right to set up the search conditions.
List of User Sessions. Blocklist
In this chapter:
FindFace Multi allows you to monitor user sessions and learn associated data, such as the connected device UUID, type of user interface (mobile app or web interface), IP address, last ping time, and so on.
If necessary, you can add a device to the blocklist without deactivating the user account. The device block may come in handy in various situations. For example, if you want users to access the system only from their workplaces. Use the blocklist functionality to take your system safety to the next level.
Grant Permissions to Work with Sessions
A user’s access to the list of sessions depends on the granted permissions:
Administrator: can view and close sessions of all users
User with the all_own_sessions permissions: can view/close all sessions initiated with their username
User without the all_own_sessions permissions: can only view/close their current session
View User Sessions
To view the list of user sessions, navigate to Preferences -> Sessions.
Each session record provides the following data:
device UUID
username
device information
type of the user interface (mobile/web)
IP address
status (online, offline, blocked)
last ping time
Use the filter panel to the right to set up the search conditions.
To close a session, select it in the list and click Close.
Block Device
The list of blocked devices is available on the Blocklist Records tab (Preferences).
You can add a device to the blocklist on both the Sessions and Blocklist Records tabs. Blocking a device leads to the user’s automatic log-out.
To block a device on the Sessions tab, do the following:
Select the relevant session record(s).
Click Block.
Specify the reason for the device to be blocked (mandatory) and the block expiry date (optional). If no date is specified, the block will be permanent.
Click Save.
Blocking a device on the Blocklist Records tab is similar:
Click +.
Manually enter the device UUID.
Specify the reason and the block expiry date.
Click Save.
Allowed File Extensions in Cards
By default, you can attach a file of any extension to a card. It is possible to strengthen your system safety by creating an allowlist of file extensions. It will prevent your users from uploading files of unwanted formats, including those that might contain hidden malicious code, such as .js, .swf, and the like.
To create the allowlist of file extensions, do the following:
Open the /etc/findface-security/config.py configuration file.
sudo vi /etc/findface-security/config.py
In the FFSECURITY section, find the CARD_ATTACHMENTS_FILENAME_REGEXP parameter. Set an expression with the allowed file extensions. Any valid Python regular expression will do.
Examples:
r'.*\.png': allows only files with the .png extension
r'.*\.(png|jpg)': allows the .png and .jpg extensions
r'.*': allows all file extensions
None: allows all file extensions
'XXXXXX': uploading files of any extension is prohibited
FFSECURITY = {
    ...
    'CARD_ATTACHMENTS_FILENAME_REGEXP': r'.*\.txt',
    ...
}
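To preview which file names a candidate regular expression would allow before restarting the service, you can test it locally with Python's re module. This is a quick approximation; whether FindFace matches the whole name or only its beginning is an implementation detail not covered here.
# Quick local test of a candidate CARD_ATTACHMENTS_FILENAME_REGEXP value.
import re

pattern = re.compile(r'.*\.(png|jpg)')
for name in ('photo.png', 'scan.jpg', 'payload.js'):
    print(name, bool(pattern.fullmatch(name)))
# photo.png True / scan.jpg True / payload.js False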
Tip
Commenting out the CARD_ATTACHMENTS_FILENAME_REGEXP parameter also allows all file extensions.
Restart the findface-security service.
sudo systemctl restart findface-security.service
Advanced Functionality Setup
Enable Face Attribute Recognition
Subject to your needs, you can enable automatic recognition of such face attributes as gender, age, emotions, glasses, beard, and face mask. This functionality can be activated on both GPU- and CPU-accelerated video object detectors.
To enable automatic recognition of face attributes, do the following:
Open the /etc/findface-extraction-api.ini configuration file.
sudo vi /etc/findface-extraction-api.ini
Specify the relevant recognition models in the extractors section, as shown in the example below. Be sure to indicate the right acceleration type for each model, matching the acceleration type of findface-extraction-api: CPU or GPU. Be aware that findface-extraction-api on CPU can work only with CPU-models, while findface-extraction-api on GPU supports both CPU- and GPU-models.
extractors:
  face_age: faceattr/age.v2.gpu.fnk
  face_beard: faceattr/beard.v0.gpu.fnk
  face_emotions: faceattr/emotions.v1.gpu.fnk
  face_gender: faceattr/gender.v2.gpu.fnk
  face_glasses3: faceattr/glasses3.v0.gpu.fnk
  face_medmask3: faceattr/medmask3.v2.gpu.fnk
The following models are available:
age (CPU): face_age: faceattr/age.v2.cpu.fnk
age (GPU): face_age: faceattr/age.v2.gpu.fnk
gender (CPU): face_gender: faceattr/gender.v2.cpu.fnk
gender (GPU): face_gender: faceattr/gender.v2.gpu.fnk
emotions (CPU): face_emotions: faceattr/emotions.v1.cpu.fnk
emotions (GPU): face_emotions: faceattr/emotions.v1.gpu.fnk
glasses (CPU): face_glasses3: faceattr/glasses3.v0.cpu.fnk
glasses (GPU): face_glasses3: faceattr/glasses3.v0.gpu.fnk
beard (CPU): face_beard: faceattr/beard.v0.cpu.fnk
beard (GPU): face_beard: faceattr/beard.v0.gpu.fnk
face mask (CPU): face_medmask3: faceattr/medmask3.v2.cpu.fnk
face mask (GPU): face_medmask3: faceattr/medmask3.v2.gpu.fnk
Tip
To leave a recognition model disabled, pass the empty value "" to the relevant parameter. Do not remove the parameter itself. Otherwise, the system will be searching for the default model.
extractors:
  face_age: ""
  face_beard: ""
  face_emotions: ""
  face_gender: ""
  face_glasses3: ""
  face_medmask3: ""
Note
You can find face attribute recognition models at /usr/share/findface-data/models/faceattr/.
ls /usr/share/findface-data/models/faceattr/
age.v2.cpu.fnk  age.v2.gpu.fnk  beard.v0.cpu.fnk  beard.v0.gpu.fnk
emotions.v1.cpu.fnk  emotions.v1.gpu.fnk  gender.v2.cpu.fnk  gender.v2.gpu.fnk
glasses3.v0.cpu.fnk  glasses3.v0.gpu.fnk  medmask3.v2.cpu.fnk  medmask3.v2.gpu.fnk
liveness.colombo.cpu.fnk  liveness.colombo.gpu.fnk  liveness.pacs.v0.cpu.fnk  liveness.pacs.v0.gpu.fnk
quality.v1.cpu.fnk  quality.v1.gpu.fnk
Restart findface-extraction-api.
sudo systemctl restart findface-extraction-api
To display the face attribute recognition results in the event list, open the /etc/findface-security/config.py configuration file.
sudo vi /etc/findface-security/config.py
Specify the required models in the following line of the FFSECURITY section, subject to the list of enabled models:
FFSECURITY = {
    ...
    'FACE_EVENTS_FEATURES': ['gender', 'age', 'emotions', 'beard', 'glasses', 'medmask'],
    ...
}
Restart findface-security.
sudo systemctl restart findface-security
Enable Car and Car Attribute Recognition
FindFace Multi allows you to recognize individual cars and car attributes.
The car attributes are the following:
license plate number (for selected countries),
color,
make,
model,
car body style,
whether a car is a special vehicle (police, ambulance, rescue service, or taxi).
Important
Recognition of individual cars is an experimental feature. Therefore, we highly recommend enabling the additional attribute analysis to improve the recognition quality. In this case, the system compares not only the feature vectors of two cars but also their attributes such as color, body style, make, model, and belonging to a special vehicle type. A conclusion about the cars’ match is only made if both the feature vectors and attributes of the cars coincide.
See the detailed description of how to enable the additional attribute analysis in the step-by-step instructions below.
To enable recognition of cars and their attributes, do the following:
Specify neural network models for individual car recognition and car attribute recognition in the /etc/findface-extraction-api.ini configuration file. Do the following:
Important
Be sure to choose the right acceleration type for each model, matching the acceleration type of findface-extraction-api: CPU or GPU. Be aware that findface-extraction-api on CPU can work only with CPU-models, while findface-extraction-api on GPU supports both CPU- and GPU-models.
Open the /etc/findface-extraction-api.ini configuration file.
sudo vi /etc/findface-extraction-api.ini
Specify the car detector model in the detectors -> models section by pasting the following code:
GPU
detectors:
  ...
  models:
    ...
    efreitor:
      aliases:
        - car
      model: cadet/efreitor.gpu.fnk
      options:
        min_object_size: 32
        resolutions: [256x256, 384x384, 512x512, 768x768, 1024x1024, 1536x1536, 2048x2048]
  ...
CPU
detectors:
  ...
  models:
    ...
    efreitor:
      aliases:
        - car
      model: cadet/efreitor.cpu.fnk
      options:
        min_object_size: 32
        resolutions: [256x256, 384x384, 512x512, 768x768, 1024x1024, 1536x1536, 2048x2048]
  ...
Specify the extraction models in the extractors -> models section, subject to the extractors you want to enable:
GPU
extractors:
  ...
  models:
    car_color: ''
    car_description: carattr/description.v0.gpu.fnk
    car_emben: carrec/alonso.gpu.fnk
    car_license_plate: carattr/carattr.license_plate.v4.gpu.fnk
    car_license_plate_quality: carattr/carattr.license_plate_quality.v0.gpu.fnk
    car_make: ''
    car_quality: carattr/carattr.quality.v0.gpu.fnk
    car_special_types: carattr/carattr.special_types.v0.gpu.fnk
CPU
extractors:
  ...
  models:
    car_color: ''
    car_description: carattr/description.v0.cpu.fnk
    car_emben: carrec/alonso.cpu.fnk
    car_license_plate: carattr/carattr.license_plate.v4.cpu.fnk
    car_license_plate_quality: carattr/carattr.license_plate_quality.v0.cpu.fnk
    car_make: ''
    car_quality: carattr/carattr.quality.v0.cpu.fnk
    car_special_types: carattr/carattr.special_types.v0.cpu.fnk
The following extractors are available:
individual car object (CPU): car_emben: carrec/alonso.cpu.fnk
individual car object (GPU): car_emben: carrec/alonso.gpu.fnk
license plate number (CPU): car_license_plate: carattr/carattr.license_plate.v4.cpu.fnk and car_license_plate_quality: carattr/carattr.license_plate_quality.v0.cpu.fnk
license plate number (GPU): car_license_plate: carattr/carattr.license_plate.v4.gpu.fnk and car_license_plate_quality: carattr/carattr.license_plate_quality.v0.gpu.fnk
set of attributes (make / color / model / body style) (CPU): car_description: carattr/description.v0.cpu.fnk
set of attributes (make / color / model / body style) (GPU): car_description: carattr/description.v0.gpu.fnk
car image quality (CPU): car_quality: carattr/carattr.quality.v0.cpu.fnk
car image quality (GPU): car_quality: carattr/carattr.quality.v0.gpu.fnk
special vehicle (CPU): car_special_types: carattr/carattr.special_types.v0.cpu.fnk
special vehicle (GPU): car_special_types: carattr/carattr.special_types.v0.gpu.fnk
Tip
To leave a model disabled, pass the empty value '' to the relevant parameter. Do not remove the parameter itself. Otherwise, the system will be searching for the default model.
extractors:
  ...
  models:
    car_color: ""
    car_description: ""
    car_emben: ""
    car_license_plate: ""
    car_license_plate_quality: ""
    car_make: ""
    car_quality: ""
    car_special_types: ""
Specify the normalizers required for the extractors specified in the previous step. For example, if you need license plate recognition, specify the carlicplate normalizer.
The normalizers are the following:
carlicplate (carnorm/anaferon.v3.gpu.fnk or carnorm/anaferon.v3.cpu.fnk): used for the car_license_plate extractor.
cropbbox (facenorm/cropbbox.v2.gpu.fnk or facenorm/cropbbox.v2.cpu.fnk): used for the car_license_plate_quality, car_description, car_quality, and car_special_types extractors.
GPU
normalizers:
  ...
  models:
    carlicplate:
      model: carnorm/anaferon.v3.gpu.fnk
    ...
    cropbbox:
      model: facenorm/cropbbox.v2.gpu.fnk
CPU
normalizers:
  ...
  models:
    carlicplate:
      model: carnorm/anaferon.v3.cpu.fnk
    ...
    cropbbox:
      model: facenorm/cropbbox.v2.cpu.fnk
Make sure that the objects -> car section contains quality_attribute: car_quality:
GPU
objects:
  ...
  car:
    base_normalizer: facenorm/cropbbox.v2.gpu.fnk
    quality_attribute: car_quality
  ...
CPU
objects:
  ...
  car:
    base_normalizer: facenorm/cropbbox.v2.cpu.fnk
    quality_attribute: car_quality
Restart findface-extraction-api.
sudo systemctl restart findface-extraction-api
Modify the /etc/findface-video-worker-gpu.ini (/etc/findface-video-worker-cpu.ini) configuration file. In the car section, specify the neural network models by analogy with the example below. Restart findface-video-worker-gpu (findface-video-worker-cpu).
GPU
sudo vi /etc/findface-video-worker-gpu.ini

#------------------------------
[car]
#------------------------------
## detector param
## type:number env:CFG_CAR_MIN_SIZE longopt:--car-min-size
min_size = 60

## path to car detector
## type:string env:CFG_CAR_DETECTOR longopt:--car-detector
detector = /usr/share/findface-data/models/cadet/efreitor.gpu.fnk

## path to normalizer (usually crop2x)
## type:string env:CFG_CAR_NORM longopt:--car-norm
norm = /usr/share/findface-data/models/facenorm/cropbbox.v2.gpu.fnk

## path to car quality extractor
## type:string env:CFG_CAR_QUALITY longopt:--car-quality
quality = /usr/share/findface-data/models/carattr/carattr.quality.v0.gpu.fnk

## path to car quality normalizer
## type:string env:CFG_CAR_NORM_QUALITY longopt:--car-norm-quality
norm_quality = /usr/share/findface-data/models/facenorm/cropbbox.v2.gpu.fnk

## path to car track features extractor
## type:string env:CFG_CAR_TRACK_FEATURES longopt:--car-track-features
track_features =

## path to car track features normalizer
## type:string env:CFG_CAR_TRACK_FEATURES_NORM longopt:--car-track-features-norm
track_features_norm =
sudo systemctl restart findface-video-worker-gpu.service
CPU
sudo vi /etc/findface-video-worker-cpu.ini

#------------------------------
[car]
#------------------------------
## detector param
## type:number env:CFG_CAR_MIN_SIZE longopt:--car-min-size
min_size = 60

## path to car detector
## type:string env:CFG_CAR_DETECTOR longopt:--car-detector
detector = /usr/share/findface-data/models/cadet/efreitor.cpu.fnk

## path to normalizer (usually crop2x)
## type:string env:CFG_CAR_NORM longopt:--car-norm
norm = /usr/share/findface-data/models/facenorm/cropbbox.v2.cpu.fnk

## path to car quality extractor
## type:string env:CFG_CAR_QUALITY longopt:--car-quality
quality = /usr/share/findface-data/models/carattr/carattr.quality.v0.cpu.fnk

## path to car quality normalizer
## type:string env:CFG_CAR_NORM_QUALITY longopt:--car-norm-quality
norm_quality = /usr/share/findface-data/models/facenorm/cropbbox.v2.cpu.fnk
sudo systemctl restart findface-video-worker-cpu.service
Open the /etc/findface-video-manager.conf configuration file and make sure it contains the car section in detectors that looks similar to the example below.
Tip
As a reference value for the filter_min_quality parameter, you can take the MINIMUM_CAR_QUALITY parameter value from the /etc/findface-security/config.py configuration file.
sudo vi /etc/findface-video-manager.conf
detectors:
  ...
  car:
    filter_min_quality: 0.65
    filter_min_size: 1
    filter_max_size: 8192
    roi: ""
    fullframe_crop_rot: false
    fullframe_use_png: false
    jpeg_quality: 95
    overall_only: false
    realtime_post_first_immediately: false
    realtime_post_interval: 1
    realtime_post_every_interval: false
    track_interpolate_bboxes: true
    track_miss_interval: 1
    track_overlap_threshold: 0.25
    track_max_duration_frames: 0
    track_send_history: false
    post_best_track_frame: true
    post_best_track_normalize: true
    post_first_track_frame: false
    post_last_track_frame: false
    tracker_type: simple_iou
    track_deep_sort_matching_threshold: 0.65
    track_deep_sort_filter_unconfirmed_tracks: true
Enable the recognition of cars and car attributes in the /etc/findface-security/config.py configuration file. Do the following:
In the FFSECURITY section, set 'ENABLE_CARS': True.
sudo vi /etc/findface-security/config.py
FFSECURITY = {
    ...
    # optional objects to detect
    'ENABLE_CARS': True,
    ...
In the same section, specify the car attributes you want to display for the car recognition events.
# available features are: description, license_plate, special_vehicle_type
'CAR_EVENTS_FEATURES': ['description', 'license_plate', 'special_vehicle_type'],
To improve the quality of individual car recognition, we highly recommend you enable the additional attribute analysis. In this case, the system compares not only the feature vectors of two cars but also their attributes. A conclusion about the cars’ match is only made if both the feature vectors and attributes of the cars coincide.
You can use the following attributes for additional analysis:
color: car color,
body: body style,
make: make,
model: model,
special_vehicle_type: belonging to a special vehicle type.
To enable the additional attribute analysis, set True in the FFSECURITY -> EXTRA_CAR_MATCHING section for the attributes that you want to compare.
FFSECURITY = {
    # use additional features for extra confidence when matching cars by emben
    'EXTRA_CAR_MATCHING': {
        'color': {'enabled': False, 'min_confidence': 0},
        'body': {'enabled': False, 'min_confidence': 0},
        'make': {'enabled': False, 'min_confidence': 0},
        'model': {'enabled': False, 'min_confidence': 0},
        'special_vehicle_type': {'enabled': False, 'min_confidence': 0}
    },
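For example, one possible configuration that additionally requires the car color and make to coincide, while keeping the default min_confidence values, could look as follows (a sketch of the config.py fragment, not a recommendation for your specific installation):
FFSECURITY = {
    'EXTRA_CAR_MATCHING': {
        'color': {'enabled': True, 'min_confidence': 0},
        'body': {'enabled': False, 'min_confidence': 0},
        'make': {'enabled': True, 'min_confidence': 0},
        'model': {'enabled': False, 'min_confidence': 0},
        'special_vehicle_type': {'enabled': False, 'min_confidence': 0}
    },
    ...
}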
Important
For the attribute analysis to function, the description model must be enabled in the /etc/findface-extraction-api.ini and /etc/findface-security/config.py configuration files (see above).
Note
Enabling the additional attribute analysis reduces the number of false positives. However, the system might miss out on some real matches as well.
Warning
Do not change the default values of min_confidence without consulting with our technical experts (support@ntechlab.com).
Restart the findface-security service.
sudo systemctl restart findface-security.service
Enable Body and Body Attribute Recognition
FindFace Multi allows you to recognize individual human bodies and body attributes.
The body attributes are the following:
clothing type:
generalized category of upper body wear: long sleeves, short sleeves, no sleeve
specific type of upper body wear: jacket, coat, sleeveless vest, sweatshirt, T-shirt, shirt, dress
type of lower body wear: pants, skirt, shorts, obscured
type of headgear: hat/cap, hood/headscarf, none
clothing color (top/bottom)
To enable recognition of human bodies and their attributes, do the following:
Specify neural network models for body object and body attribute recognition in the /etc/findface-extraction-api.ini configuration file. Do the following:
Important
Be sure to choose the right acceleration type for each model, matching the acceleration type of findface-extraction-api: CPU or GPU. Be aware that findface-extraction-api on CPU can work only with CPU-models, while findface-extraction-api on GPU supports both CPU- and GPU-models.
Open the /etc/findface-extraction-api.ini configuration file.
sudo vi /etc/findface-extraction-api.ini
Specify the body detector model in the detectors -> models section by pasting the following code:
GPU
detectors:
  ...
  models:
    ...
    glenn:
      aliases:
        - body
        - silhouette
      model: pedet/glenn_005.gpu.fnk
      options:
        min_object_size: 32
        resolutions: [256x256, 384x384, 512x512, 768x768, 1024x1024, 1536x1536, 2048x2048]
  ...
CPU
detectors:
  ...
  models:
    ...
    glenn:
      aliases:
        - body
        - silhouette
      model: pedet/glenn_005.cpu.fnk
      options:
        min_object_size: 32
        resolutions: [256x256, 384x384, 512x512, 768x768, 1024x1024, 1536x1536, 2048x2048]
  ...
Specify the extraction models in the extractors -> models section, subject to the extractors you want to enable:
GPU
extractors:
  ...
  models:
    body_clothes: pedattr/pedattr.clothes_type.v0.gpu.fnk
    body_color: pedattr/pedattr.color.v1.gpu.fnk
    body_emben: pedrec/andariel.gpu.fnk
    body_quality: pedattr/pedattr.quality.v0.gpu.fnk
CPU
extractors:
  ...
  models:
    body_clothes: pedattr/pedattr.clothes_type.v0.cpu.fnk
    body_color: pedattr/pedattr.color.v1.cpu.fnk
    body_emben: pedrec/andariel.cpu.fnk
    body_quality: pedattr/pedattr.quality.v0.cpu.fnk
The following extractors are available:
clothing type (GPU): body_clothes: pedattr/pedattr.clothes_type.v0.gpu.fnk
clothing type (CPU): body_clothes: pedattr/pedattr.clothes_type.v0.cpu.fnk
clothing color (GPU): body_color: pedattr/pedattr.color.v1.gpu.fnk
clothing color (CPU): body_color: pedattr/pedattr.color.v1.cpu.fnk
individual body object (GPU): body_emben: pedrec/andariel.gpu.fnk
individual body object (CPU): body_emben: pedrec/andariel.cpu.fnk
body quality (GPU): body_quality: pedattr/pedattr.quality.v0.gpu.fnk
body quality (CPU): body_quality: pedattr/pedattr.quality.v0.cpu.fnk
Tip
To leave a model disabled, pass the empty value '' to the relevant parameter. Do not remove the parameter itself. Otherwise, the system will be searching for the default model.
extractors:
  ...
  models:
    body_clothes: ''
    body_color: ''
    body_emben: ''
    body_quality: ''
Make sure that the
normalizers
section contains a model for thecropbbox
normalizer, as shown in the example below. This normalizer is required for the extractors specified in the previous step.GPU
normalizers: ... models: ... cropbbox: model: facenorm/cropbbox.v2.gpu.fnk
CPU
normalizers: ... models: ... cropbbox: model: facenorm/cropbbox.v2.cpu.fnk
Make sure that the
objects -> body
section contains thequality_attribute: body_quality
:GPU
objects: ... body: base_normalizer: facenorm/cropbbox.v2.gpu.fnk quality_attribute: body_quality ...
CPU
objects: ... body: base_normalizer: facenorm/cropbbox.v2.cpu.fnk quality_attribute: body_quality
Restart
findface-extraction-api
.sudo systemctl restart findface-extraction-api
Modify the /etc/findface-video-worker-gpu.ini (/etc/findface-video-worker-cpu.ini) configuration file. In the body section, specify the neural network models by analogy with the example below. Restart findface-video-worker-gpu (findface-video-worker-cpu).
GPU
sudo vi /etc/findface-video-worker-gpu.ini

#------------------------------
[body]
#------------------------------
## detector param
## type:number env:CFG_BODY_MIN_SIZE longopt:--body-min-size
min_size = 60

## path to body detector
## type:string env:CFG_BODY_DETECTOR longopt:--body-detector
detector = /usr/share/findface-data/models/pedet/glenny_005_fast.gpu.fnk

## path to normalizer (usually crop2x)
## type:string env:CFG_BODY_NORM longopt:--body-norm
norm = /usr/share/findface-data/models/facenorm/cropbbox.v2.gpu.fnk

## path to body quality extractor
## type:string env:CFG_BODY_QUALITY longopt:--body-quality
quality = /usr/share/findface-data/models/pedattr/pedattr.quality.v0.gpu.fnk

## path to body quality normalizer
## type:string env:CFG_BODY_NORM_QUALITY longopt:--body-norm-quality
norm_quality = /usr/share/findface-data/models/facenorm/cropbbox.v2.gpu.fnk
sudo systemctl restart findface-video-worker-gpu.service
CPU
sudo vi /etc/findface-video-worker-cpu.ini

#------------------------------
[body]
#------------------------------
## detector param
## type:number env:CFG_BODY_MIN_SIZE longopt:--body-min-size
min_size = 60

## path to body detector
## type:string env:CFG_BODY_DETECTOR longopt:--body-detector
detector = /usr/share/findface-data/models/pedet/glenny_005_fast.cpu.fnk

## path to normalizer (usually crop2x)
## type:string env:CFG_BODY_NORM longopt:--body-norm
norm = /usr/share/findface-data/models/facenorm/cropbbox.v2.cpu.fnk

## path to body quality extractor
## type:string env:CFG_BODY_QUALITY longopt:--body-quality
quality = /usr/share/findface-data/models/pedattr/pedattr.quality.v0.cpu.fnk

## path to body quality normalizer
## type:string env:CFG_BODY_NORM_QUALITY longopt:--body-norm-quality
norm_quality = /usr/share/findface-data/models/facenorm/cropbbox.v2.cpu.fnk

## path to body track features extractor
## type:string env:CFG_BODY_TRACK_FEATURES longopt:--body-track-features
track_features =

## path to body track features normalizer
## type:string env:CFG_BODY_TRACK_FEATURES_NORM longopt:--body-track-features-norm
track_features_norm =
sudo systemctl restart findface-video-worker-cpu.service
Open the /etc/findface-video-manager.conf configuration file and make sure it contains the body section in detectors that looks similar to the example below.
Tip
As a reference value for the filter_min_quality parameter, you can take the MINIMUM_BODY_QUALITY parameter value from the /etc/findface-security/config.py configuration file.
sudo vi /etc/findface-video-manager.conf

detectors:
  ...
  body:
    filter_min_quality: 0.65
    filter_min_size: 1
    filter_max_size: 8192
    roi: ""
    fullframe_crop_rot: false
    fullframe_use_png: false
    jpeg_quality: 95
    overall_only: false
    realtime_post_first_immediately: false
    realtime_post_interval: 1
    realtime_post_every_interval: false
    track_interpolate_bboxes: true
    track_miss_interval: 1
    track_overlap_threshold: 0.25
    track_max_duration_frames: 0
    track_send_history: false
    post_best_track_frame: true
    post_best_track_normalize: true
    post_first_track_frame: false
    post_last_track_frame: false
    tracker_type: simple_iou
    track_deep_sort_matching_threshold: 0.65
    track_deep_sort_filter_unconfirmed_tracks: true
Enable the recognition of bodies and body attributes in the /etc/findface-security/config.py configuration file. Do the following:
In the FFSECURITY section, set 'ENABLE_BODIES': True.
sudo vi /etc/findface-security/config.py

FFSECURITY = {
    ...
    # optional objects to detect
    'ENABLE_BODIES': True,
    ...
In the same section, specify the body attributes that you want to display for the body recognition events.
# available features are: color, clothes
'BODY_EVENTS_FEATURES': ['color', 'clothes'],
Restart the findface-security service.
sudo systemctl restart findface-security.service
Real-time Face Liveness Detection
Note
The liveness detector is much slower on CPU than on GPU.
To spot fake faces and prevent photo attacks, use the integrated 2D anti-spoofing system that distinguishes a live face from a face image. Due to the analysis of not one, but a number of frames, the algorithm captures any changes in a facial expression and skin texture. This ensures that it is a live person in front of a camera and eliminates the possibility of fraud using images on paper or mobile device screens.
The liveness detector estimates face liveness with a certain level of confidence and returns the confidence score along with a binary result (real or fake), depending on the predefined liveness threshold.
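As an illustration of that decision logic only (not the detector's internal implementation), the mapping from a confidence score to the binary label can be sketched as follows; the threshold value here is a placeholder:
# illustrative sketch: map a liveness confidence score to the binary result
LIVENESS_THRESHOLD = 0.75  # placeholder; the real threshold is set in /etc/findface-security/config.py

def liveness_label(confidence):
    """Return the binary liveness result for a given confidence score."""
    return "real" if confidence >= LIVENESS_THRESHOLD else "fake"

print(liveness_label(0.91))  # real
print(liveness_label(0.42))  # fake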
In this section:
Enable Face Liveness Detector
To enable the face liveness detector, do the following:
Open the /etc/findface-video-worker-gpu.ini (/etc/findface-video-worker-cpu.ini) configuration file. In the liveness section, specify the path to the neural network model (fnk) used in the face liveness detector.
sudo vi /etc/findface-video-worker-gpu.ini

#------------------------------
[liveness]
#------------------------------
## path to liveness fnk
## type:string env:CFG_LIVENESS_FNK longopt:--liveness-fnk
fnk = /usr/share/findface-data/models/faceattr/liveness.pacs.v0.gpu.fnk

sudo vi /etc/findface-video-worker-cpu.ini

#------------------------------
[liveness]
#------------------------------
## path to liveness fnk
## type:string env:CFG_LIVENESS_FNK longopt:--liveness-fnk
fnk = /usr/share/findface-data/models/faceattr/liveness.pacs.v0.cpu.fnk
Restart findface-video-worker.
sudo systemctl restart findface-video-worker-gpu
sudo systemctl restart findface-video-worker-cpu
Configure Liveness Threshold
If necessary, you can adjust the liveness threshold
in the /etc/findface-security/config.py
configuration file. The liveness detector will estimate a face liveness with a certain level of confidence. Depending on the threshold value, it will return a binary result real
or fake
.
Note
The default value is optimal. Before changing the threshold, we recommend that you seek advice from our experts at support@ntechlab.com.
sudo vi /etc/findface-security/config.py
'LIVENESS_THRESHOLD': 0.85,
Face Liveness in Web Interface
Once the face liveness detector is configured, you will see the liveness estimation for each event.
Note
The liveness score is null
when the liveness detector is unable to estimate the face liveness in the provided image.
Use the Liveness filter to display only real or only fake faces in the event list.
Liveness Detection as Standalone Service
See also
Besides the integrated anti-spoofing system that distinguishes a live face from a face image, FindFace Multi provides an API-based face liveness detection service findface-liveness-api
.
The findface-liveness-api
service takes a specific number of frames from a provided video fragment and returns the best quality face, along with a decimal liveness result for it, averaged across the taken frames. If configured, the service can also return full-frame and normalized face images and save the detection result in the findface-sf-api
cache, returning detection_id
.
The findface-liveness-api
service is automatically installed and activated, as FindFace Multi requires it for face-based authentication.
You can install and use the findface-liveness-api
service standalone, apart from FindFace Multi. This is what this section is about.
In this section:
Install and Configure findface-liveness-api
To install the service standalone, install the FindFace Multi APT repository and execute the following commands:
sudo apt update
sudo apt install findface-liveness-api
You can configure the findface-liveness-api
parameters in the /etc/findface-liveness-api.ini
configuration file:
sudo vi /etc/findface-liveness-api.ini
listen: :18301
liveness-threshold: 0.95
fullframe-jpeg-quality: 75
max-decoded-frames: 30
min-selected-frames: 10
mf-selector: reject
extraction-api:
request-batch-size: 16
timeouts:
connect: 5s
response_header: 30s
overall: 35s
idle_connection: 10s
extraction-api: http://127.0.0.1:18666
sf-api:
timeouts:
connect: 5s
response_header: 30s
overall: 35s
idle_connection: 10s
sf-api: http://127.0.0.1:18411
limits:
video-size: 10485760
video-length-sec: 60
video-fps: 30
video-width-px: 1920
video-height-px: 1080
Parameter | Description
---|---
fullframe-jpeg-quality | JPEG quality of full frames in the photo field.
max-decoded-frames | Finish decoding after reaching the specified number of frames.
min-selected-frames | The minimum number of final frames that successfully passed through decoding and liveness extraction. Must be equal to or less than max-decoded-frames.
mf-selector | Service behavior upon having multiple faces in the video frame.
request-batch-size | Batch size for liveness extraction.
video-size | Maximum video size, bytes.
video-length-sec | Maximum video length, seconds.
video-fps | Maximum video FPS.
video-width-px | Maximum video width, pixels.
video-height-px | Maximum video height, pixels.
To start the findface-liveness-api
service and enable its autostart, execute:
sudo systemctl start findface-liveness-api.service && sudo systemctl enable findface-liveness-api.service
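To verify that the service is up and listening on its port (18301 in the sample configuration above), you can check, for example:
sudo systemctl status findface-liveness-api.service
sudo ss -tlnp | grep 18301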
HTTP API Requests to findface-liveness-api
To interact with the findface-liveness-api
service, use HTTP API requests. In the example below, the POST request is sent with the following optional parameters:
return_detection (default=False): save the best face in the findface-sf-api cache and return its detection_id.
return_normalized (default=False): return the face normalized image in the normalized field.
return_photo (default=False): return the full frame in the photo field.
Example
Request
curl -i -X POST \
'http://127.0.0.1:18301/v1/video-liveness?return_detection=true&return_normalized=true&return_photo=true' \
-H 'Content-Type: video/mp4' \
--data-binary '@/home/my_video.mp4'
Response
HTTP/1.1 100 Continue
HTTP/1.1 200 OK
Content-Type: application/json
X-Request-Id: LA:WSP2NcHc
Date: Mon, 07 Sep 2020 15:30:05 GMT
Transfer-Encoding: chunked
{
"alive": true,
"average_liveness": 0.9706386,
"best_face": {
"liveness": 0.97768883,
"quality": 0.89638597,
"bbox": {
"left": 0,
"top": 578,
"right": 307,
"bottom": 1154
},
"detection_id": "btb53vbp688s1njt3bv0",
"photo": "/9j/2wCEAAgGBgcGBQgHBwcJ...",
"normalized": "iVBORw0KGgoAAAANSUhEU...",
"frame_no": 1,
"frame_ts": 0.033275817
}
}
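If you script against the service, the binary decision and the averaged score can be extracted from the JSON response. A minimal example assuming the jq utility is installed:
curl -s -X POST \
  'http://127.0.0.1:18301/v1/video-liveness' \
  -H 'Content-Type: video/mp4' \
  --data-binary '@/home/my_video.mp4' | jq '{alive: .alive, average_liveness: .average_liveness}'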
Allocate findface-video-worker
to Camera Group
In a distributed architecture, it is often necessary that video streams from a group of cameras be processed in situ, without being redistributed across remote findface-video-worker
instances by the principal server.
Note
Among typical use cases are hotel chains, chain stores, several security checkpoints in the same building, etc.
In this case, allocate the local findface-video-worker
to the camera group.
Do the following:
Navigate to the Preferences tab. Click Camera Groups.
Open the camera group settings.
In the Labels field, create or select one or several allocation labels. Save the changes.
Open the /etc/findface-video-worker-cpu.ini (/etc/findface-video-worker-gpu.ini) configuration file and specify the allocation labels in the following format: label_name=true (label terminal_1 in the example below).
sudo vi /etc/findface-video-worker-cpu.ini
sudo vi /etc/findface-video-worker-gpu.ini

labels = terminal_1=true
Restart findface-video-worker.
sudo systemctl restart findface-video-worker-cpu.service
sudo systemctl restart findface-video-worker-gpu.service
Note
If a camera is assigned an allocation label, its video stream can be processed by a findface-video-worker
instance with the same label, as well as by all unlabeled findface-video-worker
instances.
Warning
If a labeled camera is processed by an unlabeled findface-video-worker
instance and a free similar-labeled instance appears, the camera won’t automatically switch to the latter. To switch the camera, restart the similar-labeled findface-video-worker
instance.
Card Custom Tabs, Fields, and Filters
See also
To create custom fields in the feature vector database, refer to Custom Metadata in Tarantool.
To add custom tabs and fields to the person and car cards, do the following:
Prepare the list of custom tabs and fields you want to add to your cards.
Open the /etc/findface-security/config.py configuration file.
sudo vi /etc/findface-security/config.py
Customize the person cards. To do so, uncomment the FFSECURITY -> CUSTOM_FIELDS -> human_card section and modify the exemplary content, considering the following:
'items': the list of fields in a card. Describe each field with the following parameters:
'name': field's internal name, string.
'default': field's default value. If a default value exceeds '1e14 – 1', use a string data type to specify it, for example, "123123.." instead of 123123..
'label': field's label in a card, string.
'tab': tab that features the field. If not specified, the field appears on the main card page (the one with a photograph).
'display': display format (form or list), string or array.
'description': field's description, string.
'editable': field's editability, boolean.
'type': field data type, string. Possible values:
list: requires items, an additional parameter for lists (see below); expects objects {id, name} in dictionaries.
valuelist: expects elements of primitive types.
objectlist: allows for creating arrays of objects of required types.
datetime: primitive data type displayed as a datetime list.
date: primitive data type displayed as a date picker.
boolean: primitive data type displayed as a checkbox.
string: primitive data type string.
additional parameters for lists (type=list, type=valuelist):
multiple: possibility of selecting several items in the list, boolean.
items: dictionary used as a data source for the list.
allow_create: possibility of adding new items to the list.
custom_id: custom field for id (type=list).
additional parameters for object lists (type=objectlist):
object: objects used as a data source for the object list.
simple: indicator that the field expects data of a primitive type instead of objects, for example, strings with phone numbers.
'filters': the list of search filters associated with the custom fields. Parameters:
'name': filter's internal name.
'label': filter's label in the web interface.
'field': associated field in the format [field name].
'tabs': the list of tabs in a card. The first listed tab corresponds to the main card page.
FFSECURITY = {
    ...
    # -- Custom model fields --
    # Edit CUSTOM_FIELDS -> `human_card` section to customize human card fields.
    # Edit CUSTOM_FIELDS -> `car_card` section to customize car card fields.
    ...
    'CUSTOM_FIELDS': {
        'human_card': {
            'items': [
                {
                    'name': 'personid',
                    'default': '',
                    'label': 'PersonID',
                    'display': ['list', 'form'],
                    'description': 'Sigur person ID',
                    'editable': False
                },
                {
                    'name': 'firstname',
                    'default': '',
                    'label': 'First Name',
                    'display': ['list', 'form'],
                    'description': 'Sigur first name',
                    'editable': False
                },
                {
                    'name': 'lastname',
                    'default': '',
                    'label': 'Last Name',
                    'display': ['list', 'form'],
                    'description': 'Sigur last name',
                    'editable': False
                },
                {
                    'name': 'version',
                    'default': '',
                    'label': 'Version',
                    'display': ['list', 'form'],
                    'description': 'Sigur photo version',
                    'editable': False
                }
            ],
            'filters': [
                {
                    'name': 'personid',
                    'label': 'Sigur person ID filter',
                    'field': 'personid'
                }
            ]
        },
        'car_card': {},  # same fields are available
    },
}
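For reference, a field of the list type would additionally use the list-specific parameters described above. The sketch below is illustrative only and uses a hypothetical department field; it is not part of the default configuration:
'CUSTOM_FIELDS': {
    'human_card': {
        'items': [
            {
                'name': 'department',  # hypothetical field
                'default': '',
                'label': 'Department',
                'display': ['list', 'form'],
                'description': 'Department the person belongs to',
                'editable': True,
                'type': 'list',
                'multiple': False,
                'allow_create': True,
                'items': [
                    {'id': 1, 'name': 'Sales'},
                    {'id': 2, 'name': 'Security'},
                ],
            },
        ],
        'filters': [
            {
                'name': 'department',
                'label': 'Department filter',
                'field': 'department'
            }
        ]
    },
},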
Customize the car cards. To do so, duplicate the human_card section content into the car_card section and modify it by analogy.
Restart the findface-security service.
sudo systemctl restart findface-security.service
You will see the custom content appear in the cards.
Custom Metadata in Tarantool
It is often necessary to assign additional metadata to the objects extracted from images uploaded to the card index and now stored in the feature vector database.
In this section:
Customize Meta Fields of Face Objects
To assign custom meta fields to the face objects, do the following:
Prepare the list of custom meta fields to assign.
Open the /etc/findface-security/config.py configuration file.
sudo vi /etc/findface-security/config.py
In the FFSECURITY section, uncomment the CUSTOM_FIELDS -> face_object section and modify the exemplary content, considering the following:
field_name: field's name;
type: data type;
default: field's default value. If a default value exceeds '1e14 – 1', use a string data type to specify it, for example, "123123.." instead of 123123..
FFSECURITY = {
    ...
    # -- Custom model fields --
    ...
    # Edit CUSTOM_FIELDS -> `face_object` section to customize face object fields.
    ...
    # 'CUSTOM_FIELDS': {
    ...
        'face_object': {
            'items': [
                {
                    "field_name": "tag_name_1",
                    "type": "string",
                    "default": "change_me"
                },
                {
                    "field_name": "tag_name_2",
                    "type": "uint",
                    "default": 123
                },
                {
                    "field_name": "tag_name_3",
                    "type": "bool",
                    "default": True
                },
            ]
        }
    },
}
Add the new meta fields to the feature vector database structure.
Restart the findface-security service.
sudo systemctl restart findface-security.service
You can work with the new meta fields through HTTP API using the
objects/faces/
methods.
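A purely illustrative request is sketched below; the endpoint path, authentication scheme, and payload format are assumptions and must be checked against the HTTP API documentation for your version:
# hypothetical example: update a custom meta field of a face object by its id
curl -X PATCH 'http://127.0.0.1/objects/faces/<face_object_id>/' \
  -H 'Authorization: Token <your_token>' \
  -H 'Content-Type: application/json' \
  -d '{"tag_name_1": "vip"}'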
Customize Meta Fields of Body and Car Objects
Assigning custom meta fields to the body and car objects is similar to doing so for the face objects. The only difference is that you will need to manually create the CUSTOM_FIELDS -> body_object and CUSTOM_FIELDS -> car_object sections. It is a good idea to duplicate the CUSTOM_FIELDS -> face_object section and use it as a starting point for further modifications.
FFSECURITY = {
...
# -- Custom model fields --
...
# Edit CUSTOM_FIELDS -> `face_object` section to customize face object fields.
...
# 'CUSTOM_FIELDS': {
...
'body_object': {
'items': [
{
"field_name": "tag_name_1",
"type": "string",
"default": "change_me"
},
{
"field_name": "tag_name_2",
"type": "uint",
"default": 123
},
{
"field_name": "tag_name_3",
"type": "bool",
"default": True
},
]
},
'car_object': {
'items': [
{
"field_name": "tag_name_1",
"type": "string",
"default": "change_me"
},
{
"field_name": "tag_name_2",
"type": "uint",
"default": 123
},
{
"field_name": "tag_name_3",
"type": "bool",
"default": True
},
]
}
},
}
Similarly, you can work with the new meta fields using the objects/bodies/
and objects/cars/
methods of HTTP API.
See also
To create custom tabs, fields, and filters in cards, refer to Card Custom Tabs, Fields, and Filters.
Console Bulk Photo Upload
In addition to the web interface upload, you can bulk-upload photos to the card index via the findface-security-uploader console utility. We recommend preferring this utility over the web interface if the number of uploaded photos is more than 10,000.
Warning
In the current version, the findface-security-uploader utility does not support cars and bodies, only faces.
Tip
To view the findface-security-uploader help, execute:
findface-security-uploader --help
Usage: findface-security-uploader [OPTIONS] COMMAND [ARGS]...
Options:
--job PATH Job file (default: enroll-job.db)
--log-level TEXT Log level
--fsync BOOLEAN Call fsync() to prevent data loss on power failure
--help Show this message and exit.
Commands:
add Add items from CSV or TSV file to job
print Print contents of job file as JSON
run Run upload job
findface-security-uploader add --help
Usage: findface-security-uploader add [OPTIONS] FILES...
Options:
--format [csv|tsv] Input file format - CSV or TSV
--delimiter TEXT Field delimiter - by default it's "\t" for TSV and ","
for CSV
--help Show this message and exit.
findface-security-uploader print --help
Usage: findface-security-uploader print [OPTIONS]
Print contents of job file as JSON
Options:
--failed Show only failed images
--noface Show only images without detection
--help Show this message and exit.
findface-security-uploader run --help
Usage: uploader.py run [OPTIONS]
Run upload job
Options:
--parallel INTEGER Number of enroll threads (default: 10)
--api TEXT API url (default: http://127.0.0.1:80/) [required]
--user TEXT API username [required]
--password TEXT API password [required]
--watch-lists TEXT Comma-separated list of card list ids [required]
--inactive Mark new cards as inactive
--failed Include failed images
--noface Include images without detection
--all-faces Enroll all found faces on each image
--logging-delta INTEGER Logging period delta
--help Show this message and exit.
Do the following:
Write the list of photos and metastrings to a CSV or TSV file.
Important
The file used as a metadata source must have the following format: path to photo | metastring.
To prepare a TSV file, use either a script or the find command (a sketch of such a script follows the find example below).
Note
Both the script and the command in the examples below create the images.tsv file. Each image in the list will be associated with a metastring coinciding with the image file name, in the format path to photo | metastring.
To build a TSV file listing photos from a specified directory (/home/user/25_celeb/ in the example below), run the following command:
python3 tsv_builder.py /home/user/25_celeb/
The find usage example:
find photos/ -type f -iname '*g' | while read x; do y="${x%.*}"; printf "%s\t%s\n" "$x" "${y##*/}"; done
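The tsv_builder.py script itself is not reproduced in this document. A minimal sketch of what such a script might look like, assuming the same behavior as the find example above (the metastring is the file name without its extension), is shown below; verify it against the script supplied with your distribution:
#!/usr/bin/env python3
# Minimal TSV builder sketch: writes "path<TAB>metastring" pairs to images.tsv,
# where the metastring is the image file name without its extension.
import os
import sys

def build_tsv(photo_dir, output="images.tsv"):
    extensions = (".jpg", ".jpeg", ".png")
    with open(output, "w", encoding="utf-8") as out:
        for root, _, files in os.walk(photo_dir):
            for name in files:
                if name.lower().endswith(extensions):
                    path = os.path.join(root, name)
                    metastring = os.path.splitext(name)[0]
                    out.write("%s\t%s\n" % (path, metastring))

if __name__ == "__main__":
    build_tsv(sys.argv[1])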
Create a job file out of a CSV or TSV file by using add. As a result, a file enroll-job.db will be created and saved in the current directory.
findface-security-uploader add images.tsv
The add options:
--format: input file format, tsv by default.
--delimiter: field delimiter, "\t" by default for TSV and "," for CSV.
Note
A job file is an SQLite database that can be opened in the sqlite3 console, for example, as shown below.
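For example, you can inspect the job file as follows (the exact table layout may differ between versions):
sqlite3 enroll-job.db
sqlite> .tables
sqlite> .schema
sqlite> .quit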
Process the job file by using run.
findface-security-uploader run --watch-lists 2 --api http://127.0.0.1:80 --user admin --password password
The important run options:
--parallel: the number of photo upload threads, 10 by default. The more threads you use, the faster the bulk upload is completed; however, it requires more resources, too.
--all-faces: upload all faces from a photo if it features several faces.
--api: findface-security API URL, http://127.0.0.1:80/ by default. Mandatory option.
--user: login. Mandatory option.
--password: password. Mandatory option.
--watch-lists: comma-separated list of the watch list ids. Mandatory option.
--failed: should an error occur during the job file processing, correct the mistake and try again with this option.
--inactive: mark new cards as inactive.
--noface: by default, images classified as having no faces will be assigned the NOFACE status and automatically excluded from the upload. To attempt re-detecting faces in such images, re-run the job file with this option. If the re-detection gives a negative result again, the image will be skipped and a relevant record will appear in the upload log.
(Optional) Print the job processing results as JSON. If necessary, you can print only failed images or images without detected faces.
findface-security-uploader print --failed
findface-security-uploader print --noface
Deduplicate Events
In this section:
Consider enabling deduplication to exclude coinciding object recognition events within one camera group.
Enable Deduplication
To enable event deduplication, do the following:
Enable the offline video detection mode for each camera in the group. See Add Camera for details.
Navigate to the Preferences tab. Click Camera Groups.
Open the camera group settings.
Check Deduplicate Events and specify the deduplication interval in seconds.
How It Works
The deduplication algorithm works as follows. In the offline mode, the server receives one best object snapshot per tracking session on a camera.
Note
A tracking session continues until an object disappears from the camera field.
If there are several tracking sessions on the cameras of a camera group within the specified deduplication interval, FindFace Multi handles the received snapshots in the following way:
If there is a match with a card within the preceding deduplication interval, FindFace Multi drops a newly acquired snapshot. Otherwise, it saves the snapshot to the database.
For unmatched objects, FindFace Multi considers both the similarity between objects and snapshot quality when performing deduplication. As a result, FindFace Multi drops all snapshots within the deduplication interval unless a new object snapshot is of higher quality. Thus, it guarantees the system deduplicates events without skipping high-quality objects, which are essential for further video analytics.
Enable Personal Data Protection
FindFace Multi supports laws related to the processing of personal data of individuals (GDPR and similar).
To apply personal data protection to your system, do the following:
Open the /etc/findface-security/config.py configuration file.
sudo vi /etc/findface-security/config.py
Disable saving unmatched events by setting 'IGNORE_UNMATCHED': True.
...
FFSECURITY = {
    ...
    # do not save unmatched events (GDPR support)
    'IGNORE_UNMATCHED': False,
    ...
}
For events with matches, enable blurring all unmatched objects in full frames. To do so, set 'BLUR_UNMATCHED_OBJECTS': True. Optionally, you can modify the default JPEG quality of those frames.
...
FFSECURITY = {
    ...
    # blur all unmatched objects on the full frame of the matched event (GDPR support)
    'BLUR_UNMATCHED_OBJECTS': False,
    # full frame jpeg quality when `BLUR_UNMATCHED_OBJECTS` is enabled
    'BLURRED_FULLFRAME_JPEG_QUALITY': 85,
    ...
}
Enable blurring all unmatched objects on the Video Wall. To do so, set "gdpr": True in the FFSECURITY_UI_CONFIG -> available_video_wall_features section.
FFSECURITY_UI_CONFIG = {
    ...
    "available_video_wall_features": {
        ...
        "gdpr": True
    }
}
Restart findface-security.
sudo systemctl restart findface-security.service
See also
Configure Video Recorder
The FindFace Core internal architecture allows for embedding Video Recorder, an additional functionality that records, stores, and plays back video data from cameras.
This section is about the Video Recorder configuration.
Note
If you haven’t deployed it yet, do so by using the FindFace Multi console installer or step-by-step instructions.
In this section:
Enable Video Recorder
To enable Video Recorder, do the following:
Enable the findface-video-worker service to transfer video chunks to the findface-video-storage service.
Important
This setting makes the Video Recorder functionality available over HTTP API. It's also obligatory if you want to harness Video Recorder as part of the FindFace Multi web interface.
Do the following:
Open the /etc/findface-video-worker-cpu.ini (/etc/findface-video-worker-gpu.ini) configuration file for each findface-video-worker instance that will supply Video Recorder with video.
sudo vi /etc/findface-video-worker-cpu.ini
sudo vi /etc/findface-video-worker-gpu.ini
Set enabled = true in the recorder section.
#------------------------------
[recorder]
#------------------------------
## video recording enabled
## type:bool env:CFG_RECORDER_ENABLED longopt:--recorder-enabled
enabled = true
Restart each findface-video-worker instance involved.
sudo systemctl restart findface-video-worker*
Configure Video Recorder to work as part of the FindFace Multi web interface.
Note
Omit the following steps if you do not need the Video Recorder tools to appear in the FindFace Multi web interface.
Do the following:
Open the /etc/findface-security/config.py configuration file.
sudo vi /etc/findface-security/config.py
Find the FFSECURITY_UI_CONFIG -> vms section. The list of parameters to configure is the following:
Parameter
Description
"enabled"
Set
True
to add the Enable recording checkbox to camera settings."video_player"
Set
True
to enable the video player to open in event notifications and camera preview (instead of static frames). This setting requires"enabled": True
."timeline"
->"min_zoom"
Defines the largest possible scale on the video player timeline, seconds per pixel.
"timeline"
->"max_zoom"
Defines the smallest possible scale on the timeline, seconds per pixel.
"timeline"
->"objects"
->"faces"
->"enabled"
Set
True
to enable marking face events on the timeline."timeline"
->"objects"
->"faces"
->"limit"
The maximum number of face events simultaneously marked on the timeline. If there is more face events than that, you will be asked to zoom in.
"timeline"
->"objects"
->"cars"
->"enabled"
Set
True
to enable marking car events on the timeline."timeline"
->"objects"
->"cars"
->"limit"
The maximum number of car events simultaneously marked on the timeline. If there is more car events than that, you will be asked to zoom in.
FFSECURITY_UI_CONFIG = { "vms": { "enabled": True, "video_player": True, "timeline": { "min_zoom": 0.2, "max_zoom": 200, "objects": { "events": { "faces": { "enabled": True, "limit": 500, }, "cars": { "enabled": False, "limit": 500, }, }, }, }, }, }
Set the maximum storage time for the recorded video chunks in the VMS_VIDEOS_MAX_AGE parameter. Video chunks older than the given number of days will be automatically deleted.
Tip
To roughly estimate the required storage, consider that a 1920×1080 camera produces around 60 GB of video per day. We recommend that you measure the exact video volume from your cameras and calculate the storage time based on that volume and the available disk space (see the worked example after the configuration snippet below).
Tip
If you are going to keep the video chunks for a long time, be sure to monitor the remaining free space using Zabbix, Prometheus, or similar software. This will prevent the video from occupying the entire disk space and blocking the server operation.
# VMS video chunks will be deleted older than VMS_VIDEOS_MAX_AGE
'VMS_VIDEOS_MAX_AGE': 7,
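For example, under the rough estimate above, 10 cameras kept for 7 days (the default VMS_VIDEOS_MAX_AGE) need about 60 GB × 10 × 7 = 4200 GB, i.e., roughly 4.2 TB. A quick way to check such a figure against the actual free space on the video storage volume (the camera count here is an assumption for illustration):
# required space for 10 cameras at ~60 GB/day kept for 7 days, in GB
echo $((60 * 10 * 7))
# compare with the free space on the volume that holds /var/lib/ffupload
df -h /var/lib/ffupload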
Restart the findface-security service.
sudo systemctl restart findface-security.service
Nuances of Disabling Video Recorder
If Video Recorder is running and recording video on selected cameras, and you need to disable it, be sure to disable video recording on the cameras first, before proceeding with the /etc/findface-security/config.py
and /etc/findface-video-worker-*.ini
configuration files.
Remove Video Manually
You can manually remove video chunks older than the given number of days by executing the command below. To set the number of days, use the --vms-videos-max-age
argument.
sudo findface-security cleanup_vms --vms-videos-max-age=30
Maintenance and Troubleshooting
Update to FindFace Multi 1.2
Tip
If you use our product FindFace Security deployed on Ubuntu 18.04, upgrade it to FindFace Multi 1.0 and then update it to FindFace Multi 1.2.
To update FindFace Multi from earlier versions to 1.2, do the following:
Open the /etc/findface-security/config.py (/etc/ffsecurity/config.py) configuration file. Save the values of the following parameters for later use: EXTERNAL_ADDRESS, SECRET_KEY, VIDEO_DETECTOR_TOKEN, ROUTER_URL.
sudo vi /etc/findface-security/config.py

EXTERNAL_ADDRESS = "http://172.20.77.58"
...
# use pwgen -sncy 50 1|tr "'" "." to generate your own unique key
SECRET_KEY = 'c8b533847bbf7142102de1349d33a1f6'
FFSECURITY = {
    'VIDEO_DETECTOR_TOKEN': '381b0f4a20495227d04185ab02f5085f',
    ...
    'ROUTER_URL': 'http://172.20.77.58',
    ...
}
In FindFace Multi 1.2, we significantly refined the internal structure of the report entity. However, it is not backward compatible with earlier versions of the product. Before the update, be sure to download all the reports that you need from the Reports tab. They can later be used as samples to recreate the previously existing reports in the updated system.
Stop the findface-security service.
sudo systemctl stop findface-security*.service
Create a backup of the Tarantool-based feature vector database in any directory of your choice, for example, /etc/findface_dump.
Tip
See Back Up and Recover FindFace Multi and Its Data for details.
sudo mkdir -p /etc/findface_dump
cd /etc/findface_dump
sudo findface-storage-api-dump -config /etc/findface-sf-api.ini
Install the apt repository with FindFace Multi, using the console installer as described in this section.
Tip
At the stage of installing neural network models (APT repository doesn't include recognition models. Do you want to install them now?), use one of the following options:
Choose No.
Choose Yes and leave all the neural network models selected. Enter Done.
Take a look at the step #9 comments to decide which option suits you best.
Install NATS, enable its autostart, and launch the service.
Important
Omit this step if updating from FindFace Multi 1.1.
sudo apt install -y nats-server
sudo systemctl enable nats-server
sudo systemctl restart nats-server
Install the FindFace Multi services from the repository, following your architecture outline.
CPU-version:
sudo apt update
sudo apt install findface-security findface-security-ui findface-extraction-api findface-ntls findface-sf-api findface-tarantool-server findface-upload findface-video-manager findface-video-worker-cpu findface-counter findface-liveness-api
GPU-version:
sudo apt update
sudo apt install findface-security findface-security-ui findface-extraction-api-gpu findface-ntls findface-sf-api findface-tarantool-server findface-upload findface-video-manager findface-video-worker-gpu findface-counter findface-liveness-api
Important
FindFace Multi on GPU requires the prior installation of NVIDIA drivers.
Important
At some point, you will be prompted to choose which version of the findface-security configuration file to keep. Opt for Install the package maintainer's version. Do the same if such a message appears for another component (findface-extraction-api, findface-video-worker, etc.). Always prefer the newer version of a configuration file.
Open the /etc/findface-security/config.py configuration file and paste the saved EXTERNAL_ADDRESS, SECRET_KEY, VIDEO_DETECTOR_TOKEN, and ROUTER_URL into it. Fill in the DATABASES section by analogy: 'PORT': 5439, 'USER': 'ntech', 'PASSWORD': '9T3g1nXy9yx3y8MIGm9fbef3dia8UTc3' (password from /etc/pgbouncer/userlist.txt).
sudo vi /etc/findface-security/config.py

...
# Database is used by FindFace Security to store cameras,
# camera groups, watchlists and so on. Only PostgreSQL is supported.
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql',
        'DISABLE_SERVER_SIDE_CURSORS': True,
        'NAME': 'ffsecurity',
        'PORT': 5439,
        'USER': 'ntech',
        'PASSWORD': '9T3g1nXy9yx3y8MIGm9fbef3dia8UTc3',
    }
}
...
# Use pwgen -sncy 50 1|tr "'" "." to generate your own unique key
SECRET_KEY = '002231ccb690586f4d33e98322c591bb'
...
SERVICE_EXTERNAL_ADDRESS = 'http://172.20.77.58'
# EXTERNAL_ADDRESS is used to access objects created inside FFSecurity via external links.
EXTERNAL_ADDRESS = 'http://172.20.77.58'
...
# findface-video-worker authorization token
'VIDEO_DETECTOR_TOKEN': '8977e1b0067d43f6c908d0bf60363255',
...
# findface-video-worker face posting address,
# it must be set to either FFSecurity EXTERNAL_ADDRESS (by default)
# or findface-facerouter url (in some specific cases)
'ROUTER_URL': 'http://127.0.0.1:80',
Open the old version of the findface-ntls configuration file available at /etc/findface-ntls.cfg.dpkg-old and check it against the new version /etc/findface-ntls.cfg. Move all the custom parameters from the old version to the new one. Do the same for other components, e.g., for findface-extraction-api, check /etc/findface-extraction-api.ini.ucf-old against /etc/findface-extraction-api.ini, etc.
sudo vi /etc/findface-ntls.cfg.dpkg-old
sudo vi /etc/findface-ntls.cfg
sudo vi /etc/findface-extraction-api.ini.ucf-old
sudo vi /etc/findface-extraction-api.ini
...
Important
Be aware that the new version of the findface-extraction-api configuration file by default contains the liveness.colombo model, which was absent in older versions of the product. This can potentially cause the system to fail. Use one of the following ways to prevent the failure:
Revert the face_liveness parameter to the previous value: liveness.alleyn.v2.cpu.fnk / liveness.alleyn.v2.gpu.fnk.
Install liveness.colombo.cpu.fnk / liveness.colombo.gpu.fnk at step #5.
Open the new version of the /etc/findface-video-worker-cpu.ini (/etc/findface-video-worker-gpu.ini) configuration file. In the streamer section, set tracks = true and tracks_last = true to ensure bboxes on the Video Wall are appropriately displayed.
sudo vi /etc/findface-video-worker-cpu.ini
sudo vi /etc/findface-video-worker-gpu.ini

[streamer]
#------------------------------
...
## use tracks instead detects for streamer
## type:bool env:CFG_STREAMER_TRACKS longopt:--streamer-tracks
tracks = true

## use tracks with lastFrameId=currentFrameId (.tracks must be true)
## type:bool env:CFG_STREAMER_TRACKS_LAST longopt:--streamer-tracks-last
tracks_last = true
Restart the services.
sudo systemctl restart findface-ntls findface-extraction-api findface-video-worker* findface-video-manager findface-sf-api findface-counter findface-liveness-api
Modify the Tarantool database structure by applying the tnt_schema.lua file from FindFace Multi.
sudo findface-security make_tnt_schema | sudo tee /etc/findface-security/tnt_schema.lua
Stop the findface-tarantool-server shards. Purge data from all the directories relevant to the active shards.
sudo systemctl stop 'tarantool@*'
sudo rm /opt/ntech/var/lib/tarantool/shard-*/{index,snapshots,xlogs}/*
Navigate to the directory with the Tarantool configuration file(s), /etc/tarantool/instances.available/. Check whether each configuration file shard-*.lua contains the dofile command and the meta_indexes and meta_scheme definitions, as in the example below.
sudo vi /etc/tarantool/instances.available/shard-*.lua

...
dofile("/etc/findface-security/tnt_schema.lua")
...
FindFace.start("127.0.0.1", 8101, {
    license_ntls_server="127.0.0.1:3133",
    meta_indexes=meta_indexes,
    meta_scheme = meta_scheme
})
Restart the findface-tarantool-server shards.
TNT=$(ls /etc/tarantool/instances.enabled/ | cut -c 7,8,9)
for i in $TNT; do sudo systemctl restart tarantool@shard-$i.service ; done
Restore the Tarantool database from the backup.
cd /etc/findface_dump
for x in *.json; do sudo findface-storage-api-restore -config /etc/findface-sf-api.ini < "$x"; done
Migrate the main database architecture from FindFace Multi to PostgreSQL, re-create predefined user roles and the first administrator.
sudo findface-security migrate
sudo findface-security create_groups
sudo findface-security create_default_user
Restart the findface-security and nginx services.
sudo systemctl restart findface-security.service nginx.service
Recreate the previously existing reports.
Important
The following tabs were reworked and renamed in FindFace Multi 1.2:
Dossiers evolved into Cards
Episodes became Episodes -> Humans
Persons evolved into Clusters
Important
We highly recommend disabling the Ubuntu automatic update to preserve the FindFace Multi compatibility with the installation environment. In this case, you will be able to update your OS manually, fully controlling which packages to update.
To disable the Ubuntu automatic update, execute the following commands:
sudo apt-get remove unattended-upgrades
sudo systemctl stop apt-daily.timer
sudo systemctl disable apt-daily.timer
sudo systemctl disable apt-daily.service
sudo systemctl daemon-reload
Note
You can additionally deploy Video Recorder. See Deploy Video Recorder Step-by-Step.
Back Up and Recover FindFace Multi and Its Data
This section is all about the backup and recovery of the FindFace Multi data storages and your system as a whole.
In this section:
List of Data Storages
FindFace Multi uses the following data storages:
Tarantool-based feature vector database that stores object feature vectors and events.
PostgreSQL-based main system database ffsecurity that stores internal system data, cards, user accounts, and camera settings.
The /var/lib/findface-security/uploads directory that stores uploaded card photos, video files, full frames of events and counters, and object thumbnails.
The /var/lib/ffupload/ directory that stores such event artifacts as normalized object images.
Note
It also stores video chunks if you are using Video Recorder.
(Only with Video Recorder) MongoDB-based database that stores meta-information of the video chunks, including their exact location in the storage, /var/lib/ffupload/.
Feature Vector Database Backup and Recovery (Tarantool)
There are the following galleries in the Tarantool-based feature vector database:
ffsec_body_events: feature vectors extracted from bodies detected in the video.
ffsec_body_objects: feature vectors extracted from body images uploaded to the card index.
ffsec_body_clusters: centroids of body clusters.
ffsec_car_events: feature vectors extracted from cars detected in the video.
ffsec_car_objects: feature vectors extracted from car images uploaded to the card index.
ffsec_car_clusters: centroids of car clusters.
ffsec_face_events: feature vectors extracted from faces detected in the video.
ffsec_face_objects: feature vectors extracted from face images uploaded to the card index.
ffsec_face_clusters: centroids of face clusters.
ffsec_user_face: feature vectors extracted from the FindFace Multi users' photos for face-based authentication.
The database backup/recovery functionality allows you to fully restore all the galleries when needed.
To avoid data loss, we recommend that you back up the feature vector database at least once a week. Overall, the backup frequency depends on the number of cards and object recognition events, and on the available disk space.
Be sure to back up the database before migrating your system to another neural network model.
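For regular backups, you can schedule the findface-storage-api-dump utility (described below) via cron. A minimal sketch, assuming a weekly schedule, the /etc/findface_dump target directory, and the default /usr/bin install path:
# /etc/cron.d/findface-tarantool-backup (hypothetical file name)
# Dump all Tarantool galleries every Sunday at 02:30.
30 2 * * 0 root /usr/bin/findface-storage-api-dump -output-dir=/etc/findface_dump -config /etc/findface-sf-api.ini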
Utilities
To back up and recover the FindFace Multi feature vector database, the following utilities are needed:
backup: findface-storage-api-dump,
recovery: findface-storage-api-restore.
These utilities are automatically installed along with findface-sf-api
.
Back Up Database
To back up the feature vector database, use the findface-storage-api-dump
utility as follows:
Important
The following services must be active: findface-tarantool-server
, findface-sf-api
.
Note
The backup functionality can be applied to a distributed database. In this case, the findface-storage-api-dump
utility will back up galleries on all the shards specified in /etc/findface-sf-api.ini
.
On the server with findface-sf-api, create a directory to store the backup files (/etc/findface_dump in the example below).
Launch the findface-storage-api-dump utility by executing:
sudo findface-storage-api-dump -output-dir=/etc/findface_dump -config /etc/findface-sf-api.ini
The utility will back up all the galleries at once into files with corresponding names (ffsec_body_events.json, ffsec_face_events.json, etc.) and save them into the directory. These files contain all the data needed to restore the entire database.
Recover Database
To recover the feature vector database from the backup, launch the findface-storage-api-restore
utility for all the files in the backup folder:
sudo findface-storage-api-restore -config /etc/findface-sf-api.ini /etc/findface_dump/*.json
The recovery process can be interrupted and resumed whenever necessary. To resume the process after the interruption, launch the findface-storage-api-restore
utility again.
See also
Main Database Backup and Recovery (PostgreSQL)
To back up the main database ffsecurity
based on PostgreSQL, execute:
sudo -u postgres pg_dump ffsecurity > ffsecurity_postgres_backup.sql
To recover the main database, do the following:
Stop the findface-security service.
sudo systemctl stop findface-security.service
Stop the pgbouncer service to delete its active sessions with the ffsecurity database.
sudo systemctl stop pgbouncer.service
Open the PostgreSQL interactive terminal.
sudo -u postgres psql
Remove the old ffsecurity database.
DROP DATABASE ffsecurity;
Create a new ffsecurity database. Leave the PostgreSQL interactive terminal.
CREATE DATABASE ffsecurity WITH OWNER ntech ENCODING 'UTF-8' LC_COLLATE='C.UTF-8' LC_CTYPE='C.UTF-8' TEMPLATE template0;
Start the pgbouncer service.
sudo systemctl start pgbouncer.service
Recover the database content from the backup.
sudo -u postgres psql -d ffsecurity -f ffsecurity_postgres_backup.sql
Migrate the database architecture from FindFace Multi to PostgreSQL, re-create user groups with predefined rights and the first user with administrator rights.
sudo findface-security migrate
sudo findface-security create_groups
sudo findface-security create_default_user
Start the findface-security service.
sudo systemctl start findface-security.service
Artifacts Backup and Recovery (findface-upload
)
The FindFace Multi artifacts, such as photos uploaded to the card index, video files, and such event artifacts as full frames, object thumbnails, and normalized object images, are stored in the following directories:
/var/lib/findface-security/uploads
/var/lib/ffupload/
Note
The /var/lib/ffupload/
directory also stores video chunks if there is a running instance of Video Recorder in your system.
Note
Both directories are operated by the findface-upload
component.
To back up the artifacts, execute:
sudo tar -cvzf /home/some_directory/var_lib_ffsecurity_uploads.tar.gz /var/lib/findface-security/uploads/
sudo tar -cvzf /home/some_directory/var_lib_ffupload.tar.gz /var/lib/ffupload/
To recover the artifacts, execute the following commands from the root directory:
cd /
sudo tar -xvf /home/some_directory/var_lib_ffsecurity_uploads.tar.gz
sudo tar -xvf /home/some_directory/var_lib_ffupload.tar.gz
Video Recorder Database Backup and Recovery (MongoDB)
MongoDB is a third-party software that implements the Video Recorder database. This database stores meta-information of the video chunks, including their exact location in the storage, /var/lib/ffupload/
.
To back up MongoDB, execute:
sudo mongodump -o=/home/some_directory/mongodb_backup
To recover MongoDB, execute:
sudo mongorestore /home/some_directory/mongodb_backup
Settings Backup and Recovery
The entire set of FindFace Multi configuration files including the Tarantool structural schema is automatically backed up during the instance removal. It is saved to the ~/ffmulti_bak_${datetime}/etc/
directory.
When re-installing FindFace Multi, recover the settings after completing the console installation, or right after installing services from the APT repository.
The entire set of backed up files is the following:
ls -R -p
.:
findface-counter.ini.bak findface-liveness-api.ini.bak findface-security/ findface-video-manager.conf.bak findface-video-streamer-cpu.ini.bak nginx/
findface-extraction-api.ini.bak findface-ntls.cfg.bak findface-sf-api.ini.bak findface-video-storage.conf.bak findface-video-worker-cpu(gpu).ini.bak
./findface-security:
config.py tnt_schema.lua
./nginx:
sites-enabled/
./nginx/sites-enabled:
ffsecurity-nginx.conf
To recover the FindFace Multi settings, do the following:
Open the /etc/findface-security/config.py configuration file of the fresh FindFace Multi instance. Find the DATABASES -> default -> PASSWORD parameter that stores the ntech user password from the findface-security database of PostgreSQL. Copy/paste it to the ~/ffmulti_bak_${datetime}/etc/findface-security/config.py backup.
sudo vi /etc/findface-security/config.py

DATABASES = {
    'default': {
        ...
        'PASSWORD': 'some_pass',
In the ~/ffmulti_bak_${datetime}/etc/ directory, use any method to eliminate the .bak extension from the files. For example, you can execute the following command:
sudo rename 's/.ini.bak/.ini/' * && sudo rename 's/.conf.bak/.conf/' * && sudo rename 's/.cfg.bak/.cfg/' *
Tip
If the rename command is absent in your system, you can install it as follows:
sudo apt install rename
Recursively copy the backup files to the /etc directory.
sudo cp -r * /etc
Modify the database structure by applying the initial tnt_schema.lua file.
sudo findface-security make_tnt_schema | sudo tee /etc/findface-security/tnt_schema.lua
Restart the services.
On CPU:
sudo systemctl restart findface-counter findface-liveness-api findface-video-manager findface-extraction-api findface-ntls findface-sf-api findface-video-worker-cpu findface-security
On GPU:
sudo systemctl restart findface-counter findface-liveness-api findface-video-manager findface-extraction-api findface-ntls findface-sf-api findface-video-worker-gpu findface-security
Video Recorder services:
sudo systemctl restart findface-video-streamer-cpu findface-video-storage
Back Up and Recover Entire System
If you intend to back up FindFace Multi before uninstalling it, it will be sufficient to follow the step-by-step instructions in the Remove FindFace Multi Instance section. The provided ffmulti_uninstall.sh
script can automatically back up the FindFace Multi configuration files and all data storages to the ~/ffmulti_bak_${datetime}/
directory.
To recover FindFace Multi after uninstalling it, use the following algorithm:
Recover the settings from the backed up configuration files.
Recover MongoDB (if you are using Video Recorder).
Add Custom Language
By default, only English and Russian languages are available in the web interface. Other languages are provided by request (support@ntechlab.com) and added to the system via the /etc/findface-security/config.py
configuration file.
To add a custom language to the system, do the following:
Download the localization file provided by our experts into the /usr/share/ffsecurity-ui/ui-static/ directory on the FindFace Multi principal server.
Open the /etc/findface-security/config.py configuration file.
sudo vi /etc/findface-security/config.py
Into the FFSECURITY_UI_CONFIG section, insert the languages section and fill it by analogy with the example below.
FFSECURITY_UI_CONFIG = {
    ...
    "languages": {
        "items": [
            {
                "name": "es",
                "label": "Español",
                "url": "/ui-static/es_i18n_ffsec.po"
            },
        ]
    },
Restart the findface-security service.
sudo systemctl restart findface-security.service
The new language will be automatically applied to the system and will from now on be available on the tab. You can switch between it and other available languages at any time.
See also
Multiple Video Cards Usage
Should you have several video cards installed on a physical server, you can create additional findface-extraction-api-gpu
or findface-video-worker-gpu
instances and distribute them across the video cards, one instance per card.
In this section:
Distribute findface-extraction-api-gpu
Instances Across Several Video Cards
To distribute the findface-extraction-api-gpu
instances across several video cards, do the following:
Prepare the initial configuration file of findface-extraction-api-gpu for future copying. Open /etc/findface-extraction-api.ini. Limit the number of findface-extraction-api-gpu instances to 1 per video card.
sudo vi /etc/findface-extraction-api.ini

...
extractors:
  instances: 1
Tip
If necessary, make other adjustments. For example, you can enable other attribute recognition. See Enable Face Attribute Recognition, Enable Car and Car Attribute Recognition, Enable Body and Body Attribute Recognition.
Create several copies of the /etc/findface-extraction-api.ini configuration file, subject to how many video cards you are going to use for feature vector extraction. Append the numbers of the GPU devices that will be running the instances to the new names (GPU devices #0 and #7 in the example below).
Note
By default, GPU device numeration in a system starts from #0. To list the video cards in use, execute:
nvidia-smi
cp findface-extraction-api.ini findface-extraction-api-0.ini
cp findface-extraction-api.ini findface-extraction-api-7.ini
Open the new configuration files. Specify the relevant GPU device numbers and adjust the listening ports. Be sure to associate each instance with a unique port.
sudo vi /etc/findface-extraction-api-0.ini

listen: 127.0.0.1:18667
...
gpu_device: 0
...
sudo vi /etc/findface-extraction-api-7.ini

listen: 127.0.0.1:18668
...
gpu_device: 7
...
For the findface-extraction-api-gpu instances to work within one system, bind them via a load balancer, e.g., nginx. For simplicity, we recommend enabling nginx on the standard findface-extraction-api port 18666 (see details below).
To set up load balancing, do the following:
Create a new configuration file for nginx.
sudo vi /etc/nginx/sites-available/lb_extractions
Insert the following entry into the newly created file. Be sure to specify the actual listening ports of the findface-extraction-api-gpu instances in the server directives (18667, 18668 in our example).
upstream extractions {
    server 127.0.0.1:18667 max_fails=3 fail_timeout=30s;
    server 127.0.0.1:18668 max_fails=3 fail_timeout=30s;
}

server {
    listen 18666 default_server;
    server_name _;

    location / {
        client_max_body_size 100m;
        proxy_pass http://extractions;
    }
}
Start the load balancer.
ln -s /etc/nginx/sites-available/lb_extractions /etc/nginx/sites-enabled/lb_extractions
nginx -t
nginx -s reload
Stop the initial findface-extraction-api-gpu service and disable its autostart.
systemctl stop findface-extraction-api.service
systemctl disable findface-extraction-api.service
Create a new service for the findface-extraction-api-gpu instances.
/etc/systemd/system/findface-extraction-api@.service
Insert the following entry:
[Unit]
Description=Findface Extraction API service %i instance

[Service]
User=ntech
Restart=always
RestartSec=5
Type=notify
ExecStart=/usr/bin/findface-extraction-api -config="/etc/findface-extraction-api-%i.ini"
TimeoutStartSec=60m

[Install]
WantedBy=multi-user.target
Start the findface-extraction-api-gpu instances. The first start might take up to 10 minutes.
systemctl daemon-reload
systemctl enable findface-extraction-api@{0,7}
systemctl start findface-extraction-api@{0,7}
Tip
To check the status, use the following command:
sudo systemctl list-units 'findface*' -a
Allocate findface-video-worker-gpu
to Additional Video Card
To create an additional findface-video-worker-gpu
instance and allocate it to a different video card, do the following:
Display the status of the findface-video-worker-gpu primary service by executing:
sudo systemctl status findface-video-worker-gpu.service
Find the full path to the service in the following line:
Loaded: loaded (/usr/lib/systemd/system/findface-video-worker-gpu.service); enabled; vendor preset: enabled
It is findface-video-worker-gpu.service in our example (the name may vary). Create a copy of the service under a new name.
sudo cp /usr/lib/systemd/system/findface-video-worker-gpu.service /usr/lib/systemd/system/findface-video-worker-gpu2.service
In the same manner, create a copy of the primary service configuration file under a new name.
sudo cp /etc/findface-video-worker-gpu.ini /etc/findface-video-worker-gpu2.ini
Open the newly created configuration file and actualize the GPU device number to use. Modify the streamer port number by the following formula: 18999 (port number for GPU #0) - GPU device number, i.e., for GPU #1, port = 18998, for GPU #2, port = 18997, and so on.
sudo vi /etc/findface-video-worker-gpu2.ini

## cuda device number
device_number = 1
...
#------------------------------
[streamer]
#------------------------------
## streamer/shots webserver port, 0=disabled
## type:number env:CFG_STREAMER_PORT longopt:--streamer-port
port = 18998
...
Open the new service file and specify the newly created configuration file.
sudo vi /usr/lib/systemd/system/findface-video-worker-gpu2.service

ExecStart=/usr/bin/findface-video-worker-gpu --config /etc/findface-video-worker-gpu2.ini
Reload the systemd daemon to apply the changes.
sudo systemctl daemon-reload
Enable the new service autostart.
sudo systemctl enable findface-video-worker-gpu2.service
Created symlink from /etc/systemd/system/multi-user.target.wants/findface-video-worker-gpu2.service to /usr/lib/systemd/system/findface-video-worker-gpu2.service
Launch the new service.
sudo systemctl start findface-video-worker-gpu2.service
Check the status of both findface-video-worker-gpu services.
sudo systemctl status findface-video-worker-* | grep -i 'Active:' -B 3

● findface-video-worker-gpu2.service - findface-video-worker-gpu daemon
   Loaded: loaded (/usr/lib/systemd/system/findface-video-worker-gpu2.service; enabled; vendor preset: enabled)
   Active: active (running) since Thu 2019-07-18 10:32:02 MSK; 1min 11s ago
...
● findface-video-worker-gpu.service - findface-video-worker-gpu daemon
   Loaded: loaded (/usr/lib/systemd/system/findface-video-worker-gpu.service; enabled; vendor preset: enabled)
   Active: active (running) since Mon 2019-07-15 15:18:33 MSK; 2 days ago
Migrate Feature Vectors to Different Neural Network Model
Tip
Do not hesitate to contact our experts on migration by support@ntechlab.com.
Important
If you are doing the migration as part of the system update to a newer version, complete the update first. Only after that can you proceed to the migration.
This section is about how to migrate object feature vectors to another neural network model.
Do the following:
Create a backup of the Tarantool-based feature vector database in any directory of your choice, for example, /etc/findface_dump.
Tip
See Back Up and Recover FindFace Multi and Its Data for details.
sudo mkdir -p /etc/findface_dump
cd /etc/findface_dump
sudo findface-storage-api-dump -config /etc/findface-sf-api.ini
Stop the findface-sf-api service.
sudo systemctl stop findface-sf-api.service
Create new shards that will host the regenerated feature vectors.
Open the /etc/tarantool/instances.available/ directory and find out the number of shards by counting the shard-*.lua configuration files.
Note
There are four shards in the example below.
cd /etc/tarantool/instances.available/
ls -l

shard-001.lua
shard-002.lua
shard-003.lua
shard-004.lua
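If there are many shards, you can count them directly instead of inspecting the listing by hand; a minimal sketch:
# Count the shard configuration files.
ls /etc/tarantool/instances.available/shard-*.lua | wc -l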
Create the same number of new shards by copying the shard-*.lua configuration files.
Note
For convenience, the second digit in the new names is 1: shard-01*.lua.
sudo cp shard-001.lua shard-011.lua
sudo cp shard-002.lua shard-012.lua
sudo cp shard-003.lua shard-013.lua
sudo cp shard-004.lua shard-014.lua
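The same can be done in a loop, which is convenient for a larger number of shards. A sketch, assuming the four default shard names used above:
# Copy shard-001.lua..shard-004.lua to shard-011.lua..shard-014.lua.
for i in 1 2 3 4; do sudo cp shard-00$i.lua shard-01$i.lua; done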
Modify the following lines in each new shard's configuration file, subject to its name (shard-011, shard-012, etc., in our example):

Old value | New value
---|---
listen = '127.0.0.1:32001' | listen = '127.0.0.1:32011'
vinyl_dir = '/opt/ntech/var/lib/tarantool/shard-001' | vinyl_dir = '/opt/ntech/var/lib/tarantool/shard-011'
work_dir = '/opt/ntech/var/lib/tarantool/shard-001' | work_dir = '/opt/ntech/var/lib/tarantool/shard-011'
memtx_dir = '/opt/ntech/var/lib/tarantool/shard-001/snapshots' | memtx_dir = '/opt/ntech/var/lib/tarantool/shard-011/snapshots'
wal_dir = '/opt/ntech/var/lib/tarantool/shard-001/xlogs' | wal_dir = '/opt/ntech/var/lib/tarantool/shard-011/xlogs'
FindFace.start("127.0.0.1", 8101, { | FindFace.start("127.0.0.1", 8111, {
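Instead of editing each file by hand, you can script the substitutions. This is only a sketch that assumes the default ports and paths shown above and that you run it from /etc/tarantool/instances.available/; review the resulting files before starting the shards.
# Rewrite the listen port, data paths, and FindFace.start port in each new shard config.
for i in 1 2 3 4; do
  sudo sed -i \
    -e "s/3200$i/3201$i/" \
    -e "s/shard-00$i/shard-01$i/g" \
    -e "s/810$i/811$i/" \
    shard-01$i.lua
done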
Create symbolic links to the new shards.
cd /etc/tarantool/instances.enabled/ sudo ln -s /etc/tarantool/instances.available/shard-01*.lua /etc/tarantool/instances.enabled/
Create directories that will host files of the new shards. Assign permissions for the created directories.
cd /opt/ntech/var/lib/tarantool/ mkdir -p shard-01{1..4}/{index,snapshots,xlogs} chown tarantool:tarantool shard-01* shard-01*/*
Open the /etc/findface-extraction-api.ini configuration file and replace the extraction models with the new ones in the body_emben, car_emben, and face_emben parameters, depending on the object types to migrate.
sudo vi /etc/findface-extraction-api.ini

extractors:
  ...
  models:
    ...
    body_emben: pedrec/<new_model_body>.gpu.fnk
    ...
    car_emben: carrec/<new_model_car>.gpu.fnk
    ...
    face_emben: face/<new_model_face>.gpu.fnk
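To see which model files are available on the host, you can list the model directories. This sketch assumes the typical model location /usr/share/findface-data/models/; adjust the path if your installation stores models elsewhere.
# List the face, body, and car recognition models available on this host.
ls /usr/share/findface-data/models/face/ /usr/share/findface-data/models/pedrec/ /usr/share/findface-data/models/carrec/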
Restart the findface-extraction-api service.
sudo systemctl restart findface-extraction-api.service
Start the new shards.
for i in {11..14}; do sudo systemctl start tarantool@shard-0$i; done
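You can then verify that the new shards have started successfully; a quick check:
# Confirm that shards 011-014 are active.
for i in {11..14}; do sudo systemctl status tarantool@shard-0$i | grep 'Active:'; done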
Create a configuration file with migration settings <migration.ini> based on the example below.

extraction-api:
  timeouts:
    connect: 5s
    response_header: 30s
    overall: 35s
    idle_connection: 0s
  extraction-api: http://127.0.0.1:18666
storage-api-from: # current location of the gallery
  timeouts:
    connect: 5s
    response_header: 30s
    overall: 35s
    idle_connection: 10s
  max-idle-conns-per-host: 20
  shards:
    - master: http://127.0.0.1:8101/v2/
      slave: ""
    - master: http://127.0.0.1:8102/v2/
      slave: ""
    - master: http://127.0.0.1:8103/v2/
      slave: ""
    - master: http://127.0.0.1:8104/v2/
      slave: ""
storage-api-to:
  timeouts:
    connect: 5s
    response_header: 30s
    overall: 35s
    idle_connection: 10s
  max-idle-conns-per-host: 20
  shards:
    - master: http://127.0.0.1:8111/v2/
      slave: ""
    - master: http://127.0.0.1:8112/v2/
      slave: ""
    - master: http://127.0.0.1:8113/v2/
      slave: ""
    - master: http://127.0.0.1:8114/v2/
      slave: ""
workers_num: 3
faces_limit: 100
extraction_batch_size: 8
normalized_storage:
  type: webdav
  enabled: True
  webdav:
    upload-url: http://127.0.0.1:3333/uploads/
  s3:
    endpoint: ""
    bucket-name: ""
    access-key: ""
    secret-access-key: ""
    secure: False
    region: ""
    public-url: ""
    operation-timeout: 30
In the storage-api-from section, specify the old shards to migrate the data from.

storage-api-from: # current location of the gallery
  ...
  shards:
    - master: http://127.0.0.1:8101/v2/
      slave: ""
    - master: http://127.0.0.1:8102/v2/
      slave: ""
    - master: http://127.0.0.1:8103/v2/
      slave: ""
    - master: http://127.0.0.1:8104/v2/
  ...

In the storage-api-to section, specify the new shards that will host the migrated data.

storage-api-to:
  ...
  shards:
    - master: http://127.0.0.1:8111/v2/
      slave: ""
    - master: http://127.0.0.1:8112/v2/
      slave: ""
    - master: http://127.0.0.1:8113/v2/
      slave: ""
    - master: http://127.0.0.1:8114/v2/
      slave: ""
  ...
Launch the findface-sf-api-migrate utility with the -config option and provide the <migration.ini> configuration file.
findface-sf-api-migrate -config migration.ini
Note
The migration process can take a significant amount of time if there are many events and cards in the system.
After the migration is complete, stop the old shards and disable their autostart (do not remove them).
for i in {01..04}; do sudo systemctl stop tarantool@shard-0$i.service ; done for i in {01..04}; do sudo systemctl disable tarantool@shard-0$i.service ; done
Open the /etc/findface-sf-api.ini configuration file and adjust the shard ports according to the new shard settings. Then start the findface-sf-api service.
sudo vi /etc/findface-sf-api.ini

shards:
  - master: http://127.0.0.1:8111/v2/
    slave: ""
  - master: http://127.0.0.1:8112/v2/
    slave: ""
  - master: http://127.0.0.1:8113/v2/
    slave: ""
  - master: http://127.0.0.1:8114/v2/
    slave: ""

sudo systemctl start findface-sf-api.service
Import the new database structure from the tnt_schema.lua file.
sudo findface-security make_tnt_schema | sudo tee /etc/findface-security/tnt_schema.lua
Migrate the main PostgreSQL database architecture of FindFace Multi, re-create the predefined user roles, and create the first administrator.
sudo findface-security migrate sudo findface-security create_groups sudo findface-security create_default_user
Restart the services.
sudo systemctl restart findface-security.service sudo systemctl restart findface-extraction-api findface-video-worker* findface-video-manager findface-sf-api
Migrate clusters as well if this functionality is enabled in your system. To do so, execute the following command:
Note
List the object types to migrate as the command options: --face, --body, --car.
sudo findface-security migrate_clusters --face --body --car --use-best-event --use-thumbnail --force-clustering
As a result, the system will regenerate feature vectors for the existing cluster events and automatically launch the scheduled clustering to rebuild clusters.
Modify Feature Vector Database Structure
Sometimes it may be necessary to apply a new structural schema to your Tarantool-based feature vector database, for example, when updating to the latest version of the product, or when you want to enhance the default database structure with additional parameters, advanced object metadata, and so on.
In this section:
About Database Structure
In FindFace Multi, the database structure is set via the /etc/findface-security/tnt_schema.lua
file.
The structure is created as a set of fields. Each field is described with the following parameters:
id: field id;
name: field name, must be the same as the name of a relevant object parameter;
field_type: data type;
default: field default value. If a default value exceeds '1e14 – 1', use a string data type to specify it, for example, "123123.." instead of 123123..
You can find the default tnt_schema.lua file here.
Structure Modification
To modify the database structure, do the following:
Stop the findface-security service.
sudo systemctl stop findface-security.service
Create a backup of the Tarantool-based feature vector database in any directory of your choice, for example, /etc/findface_dump.
Tip
See Back Up and Recover FindFace Multi and Its Data for details.
mkdir -p /etc/findface_dump
cd /etc/findface_dump
sudo findface-storage-api-dump -config /etc/findface-sf-api.ini
Prepare the tnt_schema.lua file containing the new database structure.
Modify the database structure by applying the new tnt_schema.lua file.
sudo findface-security make_tnt_schema | sudo tee /etc/findface-security/tnt_schema.lua
Navigate to the directory with the Tarantool configuration file(s) /etc/tarantool/instances.available/. For each shard, make sure that there is a line dofile("/etc/findface-security/tnt_schema.lua") before the FindFace.start section, and that meta_scheme and meta_indexes are defined in the FindFace.start parameters.
sudo vi /etc/tarantool/instances.available/<shard_00N>.lua

dofile("/etc/findface-security/tnt_schema.lua")
FindFace.start("127.0.0.1", 8101, {
    license_ntls_server="127.0.0.1:3133",
    meta_indexes=meta_indexes,
    meta_scheme = meta_scheme
})
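A quick way to list the shard configurations that already include the dofile line (a sketch; adjust the path if your shards live elsewhere):
# Show which shard configs reference the new schema file.
grep -l 'dofile("/etc/findface-security/tnt_schema.lua")' /etc/tarantool/instances.available/shard-*.lua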
Stop the findface-tarantool-server shards. Purge the data from all the directories relevant to the active shards.
sudo systemctl stop 'tarantool@*'
sudo rm /opt/ntech/var/lib/tarantool/shard-*/{index,snapshots,xlogs}/*
Restart the findface-tarantool-server shards.
TNT=$(ls /etc/tarantool/instances.enabled/ | cut -c 7,8,9)
for i in $TNT; do sudo systemctl restart tarantool@shard-$i.service ; done
Restore the Tarantool database from the backup.
Important
If some fields were removed from the new database structure, you have to first manually delete the corresponding data from the backup copy.
cd /etc/findface_dump for x in *.json; do curl -X POST "http://127.0.0.1:18411/v2/galleries/${x%%.json}"; done for x in *.json; do sudo findface-storage-api-restore -config /etc/findface-sf-api.ini < "$x"; done
Start the findface-security service.
sudo systemctl start findface-security.service
Remove FindFace Multi Instance
You can automatically remove FindFace Multi along with the data storages by using the ffmulti_uninstall.sh
script. The FindFace Multi configuration files and data storages will be backed up if you request it.
Important
Regarding directories with the FindFace Multi artifacts, the script will back up and clean only /var/lib/findface-security/uploads
and /var/lib/ffupload/
(set by default). If you modified the artifacts’ location, you will have to create a backup and purge them by yourself. See Back Up and Recover FindFace Multi and Its Data for reference.
Do the following:
Download the ffmulti_uninstall.sh script to some directory on a designated host (for example, to /home/username/).
From this directory, make the script executable.
sudo chmod +x ffmulti_uninstall.sh
Run the script.
sudo ./ffmulti_uninstall.sh
Answer backup to create a backup and then remove FindFace Multi along with the data storages. Answer all to completely wipe out FindFace Multi and the data storages without a backup.
The script purges all content from the /var/lib/ffupload directory. However, the directory itself will remain in your file system. Remove /var/lib/ffupload manually by executing:
sudo rmdir /var/lib/ffupload
Important
To recover FindFace Multi from the backup, see Back Up and Recover FindFace Multi and Its Data.
Check Component Status
Check the status of components once you have encountered a system problem.
Component | Command to view service status
---|---
findface-extraction-api | sudo systemctl status findface-extraction-api.service
findface-sf-api | sudo systemctl status findface-sf-api.service
Tarantool | sudo systemctl status tarantool.service
findface-tarantool-server shards | sudo systemctl status tarantool@shard-00*
findface-video-manager | sudo systemctl status findface-video-manager.service
findface-video-worker | sudo systemctl status findface-video-worker*.service
findface-ntls | sudo systemctl status findface-ntls
findface-security | sudo systemctl status findface-security.service
findface-counter | sudo systemctl status findface-counter.service
findface-liveness-api | sudo systemctl status findface-liveness-api.service
etcd | sudo systemctl status etcd.service
nginx | sudo systemctl status nginx.service
memcached | sudo systemctl status memcached.service
PostgreSQL | sudo systemctl status postgresql*
NATS | sudo systemctl status nats.service
pgbouncer | sudo systemctl status pgbouncer.service
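To get a quick overview of all FindFace-related units at once instead of checking them one by one, you can use a pattern query (a sketch):
# List every FindFace and Tarantool unit with its current state.
sudo systemctl list-units 'findface*' 'tarantool*' -a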
Service Logs
Service logs provide a complete record of each FindFace Multi component activity. Consulting logs is one of the first things you should do to identify a cause for any system problem.
In this section:
Configure Logging
The FindFace Multi services log a large amount of data, which can eventually lead to disk overload. To prevent this from happening, we advise you to disable rsyslog due to its suboptimal log rotation scheme and use an appropriately configured systemd-journal service instead.
Do the following:
Check whether the /var/log/journal directory already exists. If not, create it by executing the following commands:
sudo mkdir /var/log/journal
sudo chmod 2755 /var/log/journal
Open the /etc/systemd/journald.conf configuration file. Enable saving journald logs to your hard drive by uncommenting the Storage parameter and changing its value to persistent. Disable rate limiting in systemd-journal as well:
sudo vi /etc/systemd/journald.conf

[Journal]
...
Storage=persistent
...
RateLimitInterval=0
RateLimitBurst=0
...
If necessary, uncomment and edit the SystemMaxUse parameter. This parameter determines the maximum volume of log files on your hard drive. Specify its value in bytes or use K, M, G, T, P, E as units (multiples of 1024: K = 1024 bytes, M = 1024² bytes, and so on).
...
SystemMaxUse=3G
Restart the journald service.
sudo systemctl restart systemd-journald.service
Stop and disable the syslog service.
sudo systemctl stop syslog.socket rsyslog.service
sudo systemctl disable syslog.socket rsyslog.service
If necessary, delete the existing log files created through syslog, as well as the kernel logs.
sudo rm /var/log/syslog*
sudo rm /var/log/kern.log*
Consult Service Logs
Use the journalctl -u <component>
command to consult a component log, for example as follows:
journalctl -u findface-extraction-api
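You can also narrow the output to a recent time window or follow the log in real time; a common variation:
# Show the last hour of findface-security logs and keep following new entries.
journalctl -u findface-security --since "1 hour ago" -f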
Troubleshoot Licensing and findface-ntls
When troubleshooting licensing and findface-ntls (see Licensing Info), the first step is to retrieve the licensing information and the findface-ntls status. You can do so by sending an API request to findface-ntls. Then take the necessary actions, based on the response content.
Tip
Please do not hesitate to contact our experts on troubleshooting at support@ntechlab.com.
Note
Online licensing is performed via the NtechLab Global License Manager license.ntechlab.com. Check its availability. A stable internet connection and DNS resolution are required.
To retrieve the FindFace Multi licensing information and findface-ntls
status, execute on the findface-ntls
host console:
curl http://localhost:3185/license.json -s | jq
The response will be given in JSON. One of the most significant parameters is last_updated. It indicates how many seconds ago the local license was last checked.
Interpret the last_updated value as follows:
[0, 5] — everything is alright.
(5, 30] — there may be some problems with the connection, or with the local drive where the license file is stored.
(30, 120] — almost certainly something bad happened.
(120, ∞) — the licensing source response has timed out. Take action.
"valid" -> "value": false
: connection with the licensing source was never established.
curl http://localhost:3185/license.json -s | jq
{
"name": "NTLS",
"time": 1565186356,
"type": "online",
"license_id": "61063ce4b86945e1b70c3bdbedea453b",
"generated": 1514467939,
"last_updated": 5,
"valid": {
"value": true,
"description": ""
},
"source": "/opt/ntech/license/import_b68d7b7ec9a7310d18832035318cff0c9ddf11e3a9ab0ae962fbe48645e196d1.lic",
"limits": [
{
"type": "time",
"name": "end",
"value": 1609161621
},
{
"type": "number",
"name": "faces",
"value": 9007199254740991,
"current": 0
},
{
"type": "number",
"name": "cameras",
"value": 4294967295,
"current": 0
},
{
"type": "number",
"name": "extraction_api",
"value": 256,
"current": 0
},
{
"type": "boolean",
"name": "gender",
"value": true
},
{
"type": "boolean",
"name": "age",
"value": true
},
{
"type": "boolean",
"name": "emotions",
"value": true
},
{
"type": "boolean",
"name": "fast-index",
"value": true
},
{
"type": "boolean",
"name": "sec-genetec",
"value": false
},
{
"type": "boolean",
"name": "beard",
"value": false
},
{
"type": "boolean",
"name": "glasses",
"value": false
},
{
"type": "boolean",
"name": "liveness",
"value": false
}
],
"services": [
{
"name": "video-worker",
"ip": "127.0.0.1:53276"
},
{
"name": "FindFace-tarantool",
"ip": "127.0.0.1:53284"
},
{
"name": "FindFace-tarantool",
"ip": "127.0.0.1:53288"
}
]
}
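When you only need the key fields rather than the full response, you can filter the output; for example:
# Print only the licensing status fields that matter for a quick check.
curl -s http://localhost:3185/license.json | jq '{type, last_updated, valid}'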
Automatic Tarantool Recovery
Important
We highly recommend that you turn off your Tarantool servers properly in every unstable situation. This measure will prevent data corruption, so there will be no need for the functionality described in this section.
Warning
Be extremely careful with enabling this functionality as it may lead to silent data loss.
If your system architecture doesn’t imply uninterrupted availability of Tarantool servers, it is recommended to enable automatic database recovery. In this case, each time an error occurs while reading a snapshot or xlog file, Tarantool will skip invalid records, read as much data as possible, and re-build the file.
To enable automatic database recovery, do the following:
For each Tarantool shard, open the /etc/tarantool/instances.available/shard-*.lua configuration file and uncomment force_recovery = true.
sudo vi /etc/tarantool/instances.available/shard-*.lua

box.cfg{
    force_recovery = true,
}
Restart the shards.
sudo systemctl restart 'tarantool@shard-*'
Manually Purge Old Data from Database
Tip
To schedule automatic database cleanup, see Automatic Event And Episode Cleanup.
To manually remove old data from the FindFace Multi database, use the cleanup
utility. You can separately remove the following data:
matched events (faces, bodies, cars) and related episodes,
unmatched events (faces, bodies, cars) and related episodes,
full frames of matched events (faces, bodies, cars),
full frames of unmatched events (faces, bodies, cars),
counter records,
cluster events (faces, bodies, cars),
audit-logs,
area activations.
To invoke the cleanup
help message, execute:
sudo findface-security cleanup --help
usage: findface-security cleanup [-h] [--as-configured]
[--face-events-max-fullframe-unmatched-age FACE_EVENTS_MAX_FULLFRAME_UNMATCHED_AGE]
[--face-events-max-fullframe-matched-age FACE_EVENTS_MAX_FULLFRAME_MATCHED_AGE]
[--face-events-max-unmatched-age FACE_EVENTS_MAX_UNMATCHED_AGE]
[--face-events-max-matched-age FACE_EVENTS_MAX_MATCHED_AGE]
[--body-events-max-fullframe-unmatched-age BODY_EVENTS_MAX_FULLFRAME_UNMATCHED_AGE]
[--body-events-max-fullframe-matched-age BODY_EVENTS_MAX_FULLFRAME_MATCHED_AGE]
[--body-events-max-unmatched-age BODY_EVENTS_MAX_UNMATCHED_AGE]
[--body-events-max-matched-age BODY_EVENTS_MAX_MATCHED_AGE]
[--car-events-max-fullframe-unmatched-age CAR_EVENTS_MAX_FULLFRAME_UNMATCHED_AGE]
[--car-events-max-fullframe-matched-age CAR_EVENTS_MAX_FULLFRAME_MATCHED_AGE]
[--car-events-max-unmatched-age CAR_EVENTS_MAX_UNMATCHED_AGE]
[--car-events-max-matched-age CAR_EVENTS_MAX_MATCHED_AGE]
[--car-cluster-events-max-age CAR_CLUSTER_EVENTS_MAX_AGE]
[--body-cluster-events-max-age BODY_CLUSTER_EVENTS_MAX_AGE]
[--face-cluster-events-max-age FACE_CLUSTER_EVENTS_MAX_AGE]
[--area-activations-max-age AREA_ACTIVATIONS_MAX_AGE]
[--audit-logs-max-age AUDIT_LOGS_MAX_AGE]
[--counter-records-max-age COUNTER_RECORDS_MAX_AGE]
[--configuration CONFIGURATION] [--version]
[-v {0,1,2,3}] [--settings SETTINGS]
[--pythonpath PYTHONPATH] [--traceback]
[--no-color] [--force-color] [--skip-checks]
Delete FFSecurity entities
optional arguments:
-h, --help show this help message and exit
--as-configured Apply config age options for events, counter records
and clusters. Can't be used with other arguments.
--face-events-max-fullframe-unmatched-age FACE_EVENTS_MAX_FULLFRAME_UNMATCHED_AGE
face events max fullframe unmatched age to clean up
(in days)
--face-events-max-fullframe-matched-age FACE_EVENTS_MAX_FULLFRAME_MATCHED_AGE
face events max fullframe matched age to clean up (in
days)
--face-events-max-unmatched-age FACE_EVENTS_MAX_UNMATCHED_AGE
face events max unmatched age to clean up (in days)
--face-events-max-matched-age FACE_EVENTS_MAX_MATCHED_AGE
face events max matched age to clean up (in days)
--body-events-max-fullframe-unmatched-age BODY_EVENTS_MAX_FULLFRAME_UNMATCHED_AGE
body events max fullframe unmatched age to clean up
(in days)
--body-events-max-fullframe-matched-age BODY_EVENTS_MAX_FULLFRAME_MATCHED_AGE
body events max fullframe matched age to clean up (in
days)
--body-events-max-unmatched-age BODY_EVENTS_MAX_UNMATCHED_AGE
body events max unmatched age to clean up (in days)
--body-events-max-matched-age BODY_EVENTS_MAX_MATCHED_AGE
body events max matched age to clean up (in days)
--car-events-max-fullframe-unmatched-age CAR_EVENTS_MAX_FULLFRAME_UNMATCHED_AGE
car events max fullframe unmatched age to clean up (in
days)
--car-events-max-fullframe-matched-age CAR_EVENTS_MAX_FULLFRAME_MATCHED_AGE
car events max fullframe matched age to clean up (in
days)
--car-events-max-unmatched-age CAR_EVENTS_MAX_UNMATCHED_AGE
car events max unmatched age to clean up (in days)
--car-events-max-matched-age CAR_EVENTS_MAX_MATCHED_AGE
car events max matched age to clean up (in days)
--car-cluster-events-max-age CAR_CLUSTER_EVENTS_MAX_AGE
car cluster events max age to clean up (in days)
--body-cluster-events-max-age BODY_CLUSTER_EVENTS_MAX_AGE
body cluster events max age to clean up (in days)
--face-cluster-events-max-age FACE_CLUSTER_EVENTS_MAX_AGE
face cluster events max age to clean up (in days)
--area-activations-max-age AREA_ACTIVATIONS_MAX_AGE
area activations max age to clean up (in days)
--audit-logs-max-age AUDIT_LOGS_MAX_AGE
audit logs max age to clean up (in days)
--counter-records-max-age COUNTER_RECORDS_MAX_AGE
counter records max age to clean up (in days)
--configuration CONFIGURATION
The name of the configuration class to load, e.g.
"Development". If this isn't provided, the
DJANGO_CONFIGURATION environment variable will be
used.
--version show program's version number and exit
-v {0,1,2,3}, --verbosity {0,1,2,3}
Verbosity level; 0=minimal output, 1=normal output,
2=verbose output, 3=very verbose output
--settings SETTINGS The Python path to a settings module, e.g.
"myproject.settings.main". If this isn't provided, the
DJANGO_SETTINGS_MODULE environment variable will be
used.
--pythonpath PYTHONPATH
A directory to add to the Python path, e.g.
"/home/djangoprojects/myproject".
--traceback Raise on CommandError exceptions
--no-color Don't colorize the command output.
--force-color Force colorization of the command output.
--skip-checks Skip system checks.
To entirely remove events and episodes older than a given number of days, use the --*-events-max-matched-age
/--*-events-max-unmatched-age
options, subject to the object type. For example, to remove unmatched car events older than 5 days, execute:
sudo findface-security cleanup --car-events-max-unmatched-age 5
To remove only matched car events older than 5 days, execute:
sudo findface-security cleanup --car-events-max-matched-age 5
The following commands remove only full frames of matched/unmatched body events:
sudo findface-security cleanup --body-events-max-fullframe-matched-age 5
sudo findface-security cleanup --body-events-max-fullframe-unmatched-age 5
To remove only counter records, execute:
sudo findface-security cleanup --counter-records-max-age 5
To remove only cluster events with faces, execute:
sudo findface-security cleanup --face-cluster-events-max-age 5
To remove only audit logs, execute:
sudo findface-security cleanup --audit-logs-max-age 5
To remove only area activations, execute:
sudo findface-security cleanup --area-activations-max-age 5
Important
You must provide at least one of the mentioned arguments.
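The options can be combined in a single run; for example, the following sketch keeps matched face events for 30 days and unmatched ones for 7 days (adjust the retention periods to your policy):
sudo findface-security cleanup --face-events-max-matched-age 30 --face-events-max-unmatched-age 7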
Disable Services
You can disable the following FindFace Multi services should you no longer need them:
episodes
video archive queue manager
webhooks
To do so, open the /etc/findface-security/config.py
configuration file and modify the SERVICES
section, setting False
for the services that are no longer in use. Restart the findface-security
service.
sudo vi /etc/findface-security/config.py
# disable unused services to increase
# overall system performance in some cases.
SERVICES = {
"ffsecurity": {
"episodes": True,
"webhooks": True,
# use queue manager to prevent drops of video archive events
"video_archive_events_manager": True,
}
}
sudo systemctl restart findface-security.service
After that, the corresponding tabs will disappear from the web interface.
Note
A tab will remain if there are some entities on it (for example, webhooks on the Webhooks tab). However, the service will stop working and generating data.
Reset Password
To reset a user password to the FindFace Multi web interface, execute the following command:
sudo findface-security changepassword <username>
Migrate Data to Another Disk
High disk load may lead to delays in event arrivals. In severe cases, it might result in complete inoperability of FindFace Multi. One of the means for reducing the disk load is to migrate the FindFace Multi data storages to another disk.
In this section:
Prepare Disk
To prepare a disk for the data migration, do the following:
Create a new mount point (/mnt/ffdata in our example).
sudo mkdir /mnt/ffdata
sudo chown ntech:ntech /mnt/ffdata
Create a partition (on /dev/sdb in our example).
sudo parted /dev/sdb
mklabel gpt
mkpart primary ext4 1MiB 100%
q
sudo mkfs.ext4 /dev/sdb1
Find out the UUID of the partition (sdb1 in our example).
sudo blkid | grep sdb1

/dev/sdb1: LABEL="data" UUID="0638ebe0-853e-43ea-8f35-bfae305695d1" TYPE="ext4" PARTUUID="8cebaacc-77d7-4757-b4c6-14147e92646c"
Add the partition to /etc/fstab to make it mount automatically on boot.
sudo vi /etc/fstab

#DATA mount
UUID=0638ebe0-853e-43ea-8f35-bfae305695d1 /mnt/ffdata/ ext4 auto,user,rw 0 2
Mount all the filesystems.
sudo mount -a
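You can verify that the new partition is mounted where you expect before moving any data:
# Confirm the mount point and available space on the new disk.
df -h /mnt/ffdata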
Migrate Photo Storage
To migrate the FindFace Multi photo storage, do the following:
Stop the findface-security service to prevent data loss.
sudo systemctl stop findface-security
By default, the photo data are stored at /var/lib/. Migrate the photo storage to the new disk.
sudo cp -ax /var/lib/findface-security/ -R /mnt/ffdata/
sudo rm -r /var/lib/findface-security/
sudo cp -ax /var/lib/ffupload/ -R /mnt/ffdata/
sudo rm -r /var/lib/ffupload/
Create symbolic links for the new directories.
sudo ln -s /mnt/ffdata/findface-security/ /var/lib/ sudo ln -s /mnt/ffdata/ffupload/ /var/lib/
Ensure that the rights are correctly assigned.
sudo chown ntech:ntech /mnt/ffdata/findface-security/
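A quick check that the symbolic links point to the new locations before starting the services:
# Both paths should now resolve to directories under /mnt/ffdata.
ls -ld /var/lib/findface-security /var/lib/ffupload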
Start the findface-security service.
sudo systemctl start findface-security
Migrate Main Database (PostgreSQL)
To migrate the PostgreSQL database, do the following:
Stop the findface-security, pgbouncer, and PostgreSQL services.
sudo systemctl stop findface-security.service pgbouncer.service postgresql.service postgresql@10-main.service
On the new disk, create a directory for the database.
sudo mkdir -p /mnt/ffdata/some_directory/db
Migrate the database to the new directory.
sudo mv /var/lib/postgresql /mnt/ffdata/some_directory/db
Create a symlink to the new directory.
sudo ln -s /mnt/ffdata/some_directory/db/postgresql /var/lib/postgresql
Start the PostgreSQL, pgbouncer, and findface-security services.
sudo systemctl start postgresql.service postgresql@10-main.service pgbouncer.service
sudo systemctl start findface-security.service
Deactivate findface-liveness-api Installed with FindFace Multi
If you don't use the findface-liveness-api service installed with FindFace Multi and face-based authentication does not apply to your system, we recommend deactivating findface-liveness-api.
Do the following:
Stop the findface-liveness-api service and disable its autostart by executing:
sudo systemctl stop findface-liveness-api.service && sudo systemctl disable findface-liveness-api.service
Open the /etc/findface-extraction-api.ini configuration file.
sudo vi /etc/findface-extraction-api.ini
Disable the neural network model used by the findface-liveness-api service. To do so, pass the empty value '' to the face_liveness parameter.
Note
Do not remove the parameter itself. Otherwise, the system will search for the default model.
extractors:
  models:
    ...
    face_liveness: ''
    ...
Restart findface-extraction-api.
sudo systemctl restart findface-extraction-api
Troubleshoot MongoDB distribution
Tip
Please do not hesitate to contact our experts on troubleshooting at support@ntechlab.com.
The Video Recorder operation requires third-party software: MongoDB.
Note
MongoDB is a document-oriented database management system.
Currently, it is not possible to install MongoDB from a public repository in Russia and Belarus. If you are located in these countries and intend to use the Video Recorder functionality, you need to manually install the MongoDB database from a deb package before performing any FindFace Multi deployment procedure.
Do the following:
Install the MongoDB distribution package provided by technical support using the command:
sudo dpkg -i <mongodb-repo-bionic-4.4_2022.06.10_amd64>.deb
Deploy FindFace Multi, including the Video Recorder, in one of the following ways:
When deploying from the Console Installer, answer "No" to the question "Do you want to install NtechLab VMS?". After you've done that, Deploy Video Recorder step-by-step.
When deploying from the Console Installer, answer "Yes" to the question "Do you want to install NtechLab VMS?" to install the Video Recorder.
Do Basic Step-by-Step Deployment from Repository and Deploy Video Recorder step-by-step.
Appendices
Components in Depth
findface-extraction-api
The findface-extraction-api
service uses neural networks to detect an object in an image, extract the object feature vector, and recognize object attributes (for example, the clothing color for bodies).
It interfaces with the findface-sf-api
service as follows:
Gets original images with objects and normalized object images.
Returns the object bounding box coordinates and, if requested by
findface-sf-api
, feature vector and object attribute data.
Functionality:
object detection in an original image (with a return of the bbox coordinates),
object normalization,
feature vector extraction from a normalized image,
object attribute recognition (a person’s gender, age, emotions; clothing color; car color, car model, etc.).
The findface-extraction-api
service can be based on CPU (installed from the findface-extraction-api
package) or GPU (installed from the findface-extraction-api-gpu
package). For both CPU- and GPU-accelerated services, configuration is done through the /etc/findface-extraction-api.ini
configuration file. You can find its default content here for CPU
and here for GPU
.
When configuring findface-extraction-api
(on CPU or GPU), refer to the following parameters:
Parameter |
Description |
---|---|
|
The minimum size of a face (bbox) guaranteed to be detected. The larger the value, the less resources required for face detection. |
|
(Only for GPU) The number of the GPU device used by |
|
The |
If necessary, you can also enable recognition models for face attributes, body and body attributes, car and car attributes, and liveness detection. You can find the detailed step-by-step instructions in the following sections:
Important
The acceleration type for each model must match the acceleration type of findface-extraction-api
: CPU or GPU. Note that findface-extraction-api
on CPU can work only with CPU-models, while findface-extraction-api
on GPU supports both CPU- and GPU-models.
Tip
To disable an extractor model, simply pass an empty value to the relevant parameter. Do not remove the parameter itself; otherwise, the system will search for the default model.
...
extractors:
...
models:
body_color: ''
body_emben: ''
body_quality: ''
car_color: ''
car_description: ''
car_emben: ''
...
findface-sf-api
The findface-sf-api
service implements HTTP API to access the FindFace core functions such as object detection and object recognition.
Note
The mentioned functions themselves are provided by findface-extraction-api
.
The findface-sf-api
interfaces with the following FindFace core components:
feature vector database powered by Tarantool via the
findface-tarantool-server
servicefindface-extraction-api
that provides object detection and object recognitionfindface-upload
that provides a storage for original images and FindFace core artifacts
To detect an object in an image, you need to send the image in an API request to findface-sf-api
. The findface-sf-api
will then redirect the request to findface-extraction-api
for object detection and recognition.
If there is a configured video object detection module in the system (like in FindFace Multi), findface-sf-api
also interfaces with the findface-facerouter
service. It receives data of detected objects and processing directives from findface-facerouter
and executes the received directives (for example, saves objects into a specific database gallery).
Note
In FindFace Multi, findface-facerouter
functions are performed by findface-security
.
Functionality:
HTTP API implementation (object detection and object recognition methods, performed via findface-extraction-api),
saving object data to the feature vector database (performed via findface-tarantool-server),
saving original images, object thumbnails, and normalized object images to an NginX-powered web server (via findface-upload),
providing interaction between all the FindFace core components.
The findface-sf-api
configuration is done through the /etc/findface-sf-api.ini
configuration file. You can find its default content here
.
When configuring findface-sf-api
, refer to the following parameters:
Parameter |
Description |
---|---|
|
IP address of the |
|
The maximum size of an image in an API request, bytes. |
|
WebDAV NginX path to send original images, thumbnails and normalized object images to the
|
|
IP address of the |
|
IP address of the |
findface-tarantool-server
The findface-tarantool-server
service provides interaction between the findface-sf-api
service and the Tarantool-based feature vector database in the following way:
Tip
See Tarantool official documentation for details.
From findface-sf-api, findface-tarantool-server receives data, such as information about detected objects, to write into the feature vector database.
By request from findface-sf-api, findface-tarantool-server performs database searches and returns search results.
Multiple findface-tarantool-server
shards can be created on each Tarantool host to increase search speed. Running them concurrently leads to a remarkable increase in performance (70x–100x).
Functionality:
saving object data to the feature vector database,
database search,
implementation of direct API requests to the database (see Direct API Requests to Tarantool).
The findface-tarantool-server
configuration is done through the /etc/tarantool/instances.available/<shard-*>.lua
configuration file. You can find its default content here
.
Important
In a multi-shard environment, the configuration has to be done for each shard.
When configuring findface-tarantool-server
, refer to the following parameters:
Parameter |
Description |
---|---|
|
Enables automatic database recovery. In this case, each time an error occurs while reading a snapshot or xlog file, Tarantool will skip invalid records, read as much data as possible, and re-build the file. |
|
IP address and port of the |
|
Maximum RAM that can be used by a Tarantool shard. Set in bytes, depending on the number of objects the shard handles. Consult our experts at support@ntechlab.com before setting this parameter. |
|
A database structure to store the object recognition results. The structure is created as a set of fields.
Describe each field with the following parameters: |
The default database structure is passed from /etc/findface-security/tnt_schema.lua
to the meta_scheme
parameter. See Modify Feature Vector Database Structure for details.
findface-upload
The findface-upload
component is an NginX-based web server used as a storage for original images, thumbnails and normalized object images which it receives from the findface-sf-api
component.
By default the original images, thumbnails and normalized images are stored at /var/lib/ffupload/uploads/
.
The findface-upload
component is automatically configured upon installation. Custom configuration is not supported.
Video Object Detection: findface-video-manager and findface-video-worker
Note
The findface-video-worker
is delivered in a CPU-accelerated (findface-video-worker-cpu
) and a GPU-accelerated (findface-video-worker-gpu
) packages.
In this section:
Functions of findface-video-manager
The findface-video-manager
service is the part of the video object detection module that is used for managing the video object detection functionality.
The findface-video-manager
service interfaces with findface-video-worker
as follows:
It supplies findface-video-worker with settings and the list of to-be-processed video streams. To do so, it issues a so-called job, a video processing task that contains configuration settings and stream data.
In a distributed system, it distributes video streams (jobs) across vacant findface-video-worker instances.
Note
The configuration settings passed via jobs have priority over the /etc/findface-video-manager.conf
configuration file.
The findface-video-manager service requires ETCD, third-party software that implements a distributed key-value store for findface-video-manager. In the FindFace core, ETCD is used as a coordination service, providing the video object detector with fault tolerance.
Functionality:
allows for configuring video object detection parameters
allows for managing the list of to-be-processed video streams
Functions of findface-video-worker
The findface-video-worker
service (on CPU/GPU) is the part of the video object detection module, that recognizes objects in the video. It can work with both live streams and files, and supports most video formats and codecs that can be decoded by FFmpeg.
The findface-video-worker
service interfaces with the findface-video-manager
and findface-facerouter
services as follows:
By request, findface-video-worker gets a job with settings and the list of to-be-processed video streams from findface-video-manager.
The findface-video-worker posts extracted normalized object images, along with the full frames and metadata (such as bbox, camera ID, and detection time), to the findface-facerouter service for further processing.
Note
In FindFace Multi, the findface-facerouter
functions are performed by findface-security
.
Functionality:
detects objects in the video,
extracts normalized object images,
searches for the best object snapshot,
snapshot deduplication (only one snapshot per object detection event).
When processing a video, findface-video-worker
consequently uses the following algorithms:
Motion detection. Used to reduce resource consumption. The object tracker is triggered only when the motion detector recognizes motion of a certain intensity.
Object tracking. The object tracker traces, detects, and captures objects in the video. It can simultaneously be working with several objects. It also searches for the best object snapshot using the embedded neural network. After the best object snapshot is found, it is posted to
findface-facerouter
.
The best object snapshot can be found in one of the following modes:
Real-time
Offline
Real-Time Mode
In the real-time mode, findface-video-worker
posts an object on-the-fly after it appears in the camera field. The following posting options are available:
If realtime_post_every_interval: true, the object tracker searches for the best object snapshot within each time period equal to realtime_post_interval and posts it to findface-facerouter.
If realtime_post_every_interval: false, the object tracker searches for the best object snapshot dynamically:
First, the object tracker estimates whether the quality of an object snapshot exceeds a pre-defined internal threshold. If so, the snapshot is posted to findface-facerouter.
The threshold value increases after each post. Each time the object tracker gets a higher-quality snapshot of the same object, it is posted.
When the object disappears from the camera field, the threshold value resets to default.
If realtime_post_first_immediately: true, the object tracker doesn't wait for the first realtime_post_interval to complete and posts the first object from a track immediately after it passes through the quality, size, and ROI filters. The way the subsequent postings are sent depends on the realtime_post_every_interval value. If realtime_post_first_immediately: false, the object tracker posts the first object after the first realtime_post_interval completes.
Offline Mode
The offline mode is less storage intensive than the real-time one as in this mode findface-video-worker
posts only one snapshot per track but of the highest quality. In this mode, the object tracker buffers a video stream with an object until the object disappears from the camera field. Then the object tracker picks up the best object snapshot from the buffered video and posts it to findface-facerouter
.
Configure Video Object Detection
The video object detector configuration is done through the following configuration files:
The
findface-video-manager
configuration file/etc/findface-video-manager.conf
. You can find its default contenthere
When configuring
findface-video-manager
, refer to the following parameters:Option
Description
etcd
->endpoints
IP address and port of the
etcd
service. Default value:127.0.0.1:2379
.ntls
->enabled
If true,
findface-video-manager
will send a job tofindface-video-worker
only if the total number of processed cameras does not exceed the allowed number of cameras from the license. Default value: false.ntls
->url
IP address and port of the
findface-ntls
host. Default value:http://127.0.0.1:3185/
.router_url
IP address and port of the
findface-facerouter
host to receive detected faces fromfindface-video-worker
. In FindFace Multi,findface-facerouter
functions are performed byfindface-security
. Default value:http://127.0.0.1:18820/v0/frame
.play_speed
If less than zero, the speed is not limited. In other cases, the stream is read with the given
play_speed
. Not applicable for live streams.disable_drops
Enables posting all appropriate objects without drops. By default, if
findface-video-worker
does not have enough resources to process all frames with objects, it drops some of them. If this option is active,findface-video-worker
puts odd frames on the waiting list to process them later. Default value: false.imotion_threshold
Minimum motion intensity to be detected by the motion detector. The threshold value is to be fitted empirically. Empirical units: zero and positive rational numbers. Milestones: 0 = detector disabled, 0.002 = default value, 0.05 = minimum intensity is too high to detect motion.
router_timeout_ms
Timeout for a
findface-facerouter
(orfindface-security
in the standard FindFace Multi configuration) response to afindface-video-worker
API request, in milliseconds. If the timeout has expired, the system will log an error. Default value: 15000.router_verify_ssl
Enables a https certificate verification when
findface-video-worker
andfindface-facerouter
(orfindface-security
in the standard FindFace Multi configuration) interact over https. Default value: true. If false, a self-signed certificate can be accepted.router_headers
Additional header fields in a request when posting an object: [“key = value”]. Default value: headers not specified.
router_body
Additional body fields in a request body when posting an object: [“key = value”]. Default value: body fields not specified.
ffmpeg_params
List of video stream FFmpeg options with their values as a key=value array: ["rtsp_transport=tcp", .., "ss=00:20:00"]. Check out the FFmpeg web site for the full list of options. Default value: options not specified.
ffmpeg_format
Pass FFMPEG format (mxg, flv, etc.) if it cannot be detected automatically.
use_stream_timestamp
If true, retrieve and post timestamps from a video stream. If false, post the actual date and time.
start_stream_timestamp
Add the specified number of seconds to timestamps from a stream.
rot
Enables detecting and tracking objects only inside a clipping rectangle WxH+X+Y. You can use this option to reduce
findface-video-worker
load. Default value: rectangle not specified.video_transform
Change a video frame orientation right after decoding. Values (case insensitive, JPEG Exif Orientation Tag in brackets): None (1), FlipHorizontal (2), Rotate180 (3), FlipVertical (4), Transpose (5), Rotate90 (6), Transverse (7), Rotate270 (8). Default value: not specified.
enable_recorder
Enables video recording for Video Recorder (must be installed).
The following parameters are available for configuration for each detector type (face, body, car):
Option
Description
filter_min_quality
Minimum threshold value for an object image quality. Default value: subject to the object type. Do not change the default value without consulting with our technical experts (support@ntechlab.com).
filter_min_size
Minimum size of an object in pixels. Calculated as the square root of the relevant bbox area. Undersized objects are not posted. Default value: 1.
filter_max_size
Maximum size of an object in pixels. Calculated as the square root of the relevant bbox area. Oversized objects are not posted. Default value: 8192.
roi
Enable posting objects detected only inside a region of interest WxH+X+Y. Default value: region not specified.
fullframe_crop_rot
Crop posted full frames by ROT. Default value: false.
fullframe_use_png
Send full frames in PNG and not in JPEG as set by default. Do not enable this parameter without supervision from our team as it can affect the entire system functioning. Default value: false (send in JPEG).
jpeg_quality
Quality of an original frame JPEG compression, in percents. Default value: 95%.
overall_only
Enables the offline mode for the best object search. Default value: true (CPU), false (GPU).
realtime_post_first_immediately
Enable posting an object image right after it appears in a camera field of view (real-time mode). Default value: false.
realtime_post_interval
Only for the real-time mode. Defines the time period in seconds within which the object tracker picks up the best snapshot and posts it to
findface-facerouter
. Default value: 1.realtime_post_every_interval
Only for the realtime mode. Post best snapshots obtained within each
realtime_post_interval
time period. If false, search for the best snapshot dynamically and send snapshots in order of increasing quality. Default value: false.track_interpolate_bboxes
Interpolate missed bboxes of objects in track. For example, if frames #1 and #4 have bboxes and #2 and #3 do not, the system will reconstruct the absent bboxes #2 and #3 based on the #1 and #4 data. Enabling this option allows you to increase the detection quality on account of performance. Default value: true.
track_miss_interval
The system closes a track if there has been no new object in the track within the specified time (seconds). Default value: 1.
track_max_duration_frames
The maximum approximate number of frames in a track after which the track is forcefully completed. Enable it to forcefully complete “eternal tracks,” for example, tracks with objects from advertisement media. The default value: 0 (option disabled).
track_send_history
Send track history. Default value: false.
post_best_track_frame
Send full frames of detected objects. Default value: true.
post_best_track_normalize
Send normalized images for detected objects. Default value: true.
post_first_track_frame
Post the first frame of a track. Default value: false.
post_last_track_frame
Post the last frame of a track. Default value: false.
The
findface-video-worker
configuration file/etc/findface-video-worker-cpu.ini
or/etc/findface-video-worker-gpu.ini
, subject to the acceleration type in use.When configuring
findface-video-worker
(on CPU/GPU), refer to the following parameters:CPU
GPU
Description
batch_size
Post faces in batches of the given size.
capacity
Maximum number of video streams to be processed by
findface-video-worker
.N/a
cpu
If necessary, decode video on CPU.
N/a
device_number
GPU device number to use.
exit_on_first_finished
(Only if
input
is specified) Exit on the first finished job.input
Process streams from file, ignoring stream data from
findface-video-manager
.labels
Labels used to allocate a video object detector instance to a certain group of cameras. See Allocate findface-video-worker to Camera Group.
mgr-cmd
(Optional, instead of the
mgr-static
parameter) A command to obtain the IP address of thefindface-video-manager
host.mgr-static
IP address of the
findface-video-manager
host to providefindface-video-worker
with settings and the list of to-be-processed streams.metrics_port
HTTP server port to send metrics. If 0, the metrics are not sent.
min_size
Minimum object size to be detected.
ntls-addr
IP address and port of the
findface-ntls
host.resize_scale
Rescale video frames with the given coefficient.
resolutions
Preinitialize
findface-video-worker
to work with specified resolutions. Example: “640x480;1920x1080”.save_dir
(For debug) Save detected objects to the given directory.
streamer
->port
,url
IP address and port to access the video wall.
use_time_from_sei
(For MPEG-2) Use SEI (supplemental enhancement information) timestamps.
If necessary, you can also enable neural network models and normalizers to detect bodies, cars, and liveness. You can find the detailed step-by-step instructions in the following sections:
Jobs
The findface-video-manager
service provides findface-video-worker
with a so-called job, a video processing task that contains configuration settings and stream data.
You can find a job example here
.
Each job has the following parameters:
id: job id.
enabled: active status.
url: URL/address of the video stream/file to process.
labels: tag(s) that will be used by the findface-facerouter component (findface-security in the standard FindFace Multi configuration) to find processing directives for faces detected in this stream.
single_pass: if true, disable restarting video processing upon error (by default, false).
stream_settings: video stream settings that duplicate those in the /etc/findface-video-manager.conf configuration file (while having priority over them).
status: job status.
msg: additional job status info.
statistic: job progress statistics (progress duration, number of posted and not posted objects, processing fps, the number of processed and dropped frames, job start time, etc.).
service and reference job information.
findface-ntls
The findface-ntls
service is to be installed on a designated host to verify the FindFace license. For verification purposes, findface-ntls
uses one of the following sources:
Ntech Lab global license center if you opt for the online licensing, direct or via a proxy server.
USB dongle if you opt for the on-premise licensing.
Use the FindFace Multi web interface (Preferences -> License) to manage findface-ntls
in the following way:
view the list of purchased features,
view license limitations,
upload a license file,
view the list of currently active components.
The following components are licensable:
findface-tarantool-server
,findface-extraction-api
,findface-video-manager
,findface-video-worker
.
Important
After the connection between findface-ntls and a licensable component, or between findface-ntls and the global license server, is broken, you will have 4 hours to restore it before the licensable components are automatically stopped. It is possible to prolong the off-grid period for up to 2 days. Inform your manager if you need that.
The findface-ntls
configuration is done through a configuration file /etc/findface-ntls.cfg
. You can find its default content here
.
When configuring findface-ntls
, refer to the following parameters:
Parameter |
Description |
---|---|
|
Directory to store a license file. |
|
IP address from which licensable services access |
|
(Optional) IP address and port of your proxy server. |
|
IP address from which accessing the |
findface-security
The findface-security
component serves as a gateway to the FindFace core. It provides interaction between the FindFace Core and the web interface, the system functioning as a whole.
The findface-security
component also performs the functions of findface-facerouter
(part of the FindFace Core), setting processing directives for detected objects. It accepts an object bbox and normalized image along with the original image and other data (for example, the detection date and time) from the findface-video-worker
service and redirects them to findface-sf-api
for further processing.
The findface-security
configuration is done through the /etc/findface-security/config.py
configuration file. You can find its default content here
.
The /etc/findface-security/config.py
file has detailed comments for each setting. Refer to them when configuring findface-security
.
findface-facerouter and Custom Plugins
Important
The findface-facerouter
is not included in the FindFace Multi standard configuration. Use it for integration if necessary.
The findface-facerouter
service sets processing directives for objects detected in the video. The directives are set through custom plugins.
The findface-facerouter
service accepts an object bbox and normalized image along with the original image and other data (for example, the detection date and time) from the findface-video-worker
service.
In general, findface-facerouter
allows you to apply arbitrary processing directives to the received objects, including directly sending objects to a partner application. In FindFace Multi, the findface-facerouter
functions are performed by the findface-security
service that redirects the objects to findface-sf-api
.
Functionality:
sets processing directives for objects detected in the video,
redirects objects detected in the video to
findface-sf-api
or other service (including a third-party application) for further processing.
The findface-facerouter
configuration is done through a configuration file /etc/findface-facerouter.py
. You can find its default content here
.
When configuring findface-facerouter
, refer to the following parameters:
Parameter | Description
---|---
sfapi_url | IP address and port of the findface-sf-api host.
plugin_dir | Directory with plugins to define object processing directives.
Deploy findface-facerouter
in FindFace Multi
To deploy the findface-facerouter
component, do the following:
Install findface-facerouter either from the console installer or from the apt repository as follows:
sudo apt update
sudo apt install -y findface-facerouter
Open the /etc/findface-facerouter.py configuration file.
sudo vi /etc/findface-facerouter.py
If the findface-facerouter and findface-sf-api components are installed on different hosts, uncomment the sfapi_url parameter and specify the findface-sf-api host IP address.
sfapi_url = 'http://localhost:18411'
Open the /etc/findface-security/config.py configuration file. In the ROUTER_URL parameter, update the findface-facerouter IP address and port (18820 by default). Specify either an external or internal IP address, subject to the network through which findface-video-worker interacts with findface-facerouter.
sudo vi /etc/findface-security/config.py

...
FFSECURITY = {
    'ROUTER_URL': 'http://127.0.0.1:18820/v0/frame?',
Enable the findface-facerouter service autostart and launch the service.
sudo systemctl enable findface-facerouter.service && sudo systemctl start findface-facerouter.service
Restart the findface-security service.
sudo systemctl restart findface-security.service
Configure findface-facerouter
to Use Plugins
Important
Be sure to change the Tarantool database structure beforehand, according to the processing directives in the plugin.
Important
The findface-facerouter
component must be installed and configured.
To configure findface-facerouter
to use plugins, do the following:
Put a plugin into a directory of your choice. All plugins in use have to be in the same directory.
Open the /etc/findface-facerouter.py configuration file. Uncomment the plugin_dir parameter and specify the plugin directory.
Warning
The findface-facerouter.py content must be correct Python code.
sudo vi /etc/findface-facerouter.py

plugin_dir = '/etc/findface/plugins/'
Restart findface-facerouter.
sudo systemctl restart findface-facerouter.service
Plugin Basics
In this section:
After the findface-video-worker
component detects a face, the face is posted to the findface-facerouter
component via an HTTP API request. To process this request, each findface-facerouter
plugin must export the activate(app, ctx, plugin_name, plugin_source)
function.
The activate
function has the following parameters:
app: a tornado.web.Application instance of the findface-facerouter component.

ctx: data context to be passed to a plugin upon activation.

plugin_name: the name of the plugin to be activated.

plugin_source: source object to load the plugin from.
Upon activation, a plugin is passed the following data context:

request.ctx.sfapi: a ready-to-use ntech.sfapi_client.Client instance that can be invoked directly to process the result of video face detection (for example, to create a new gallery, add a face to a gallery, etc.).

plugins: an OrderedDict with all the plugins (key: plugin name, value: the result returned by the activate function).

idgen: an id generator that can be invoked as ctx.idgen().
The activate(app, ctx, plugin_name, plugin_source) function must return an object with the following methods: preprocess, process, and, optionally, shutdown.
preprocess method

In this method, a findface-facerouter plugin decides whether it is interested in the face received from the findface-video-worker component. If so, it returns a tuple or a list containing one or several of the strings 'facen', 'gender', 'age', 'emotions', meaning that it is necessary to extract a feature vector and recognize gender, age, and emotions, respectively. If the returned tuple/list is non-empty, findface-facerouter redirects the face to findface-sf-api in a /detect POST request with the relevant query string parameters (facen=on, gender=on, age=on, emotions=on).
The basic preprocess
method to inherit from has the following syntax (see the Plugin
class):
- preprocess(self, request: FrHTTPRequest, labels: Mapping[str, str]) -> Tuple[str]
- Parameters

FrHTTPRequest (tornado.httpserver.HTTPRequest) – an HTTP API request that includes an extra argument params

labels (dictionary) – a custom set of frame labels, initially specified in the job parameters for findface-video-worker and then assigned to the frame

The params argument of FrHTTPRequest includes the following fields:
- Parameters

photo (bytes) – JPEG video frame featuring a detected face

face0 (bytes) – normalized face image

bbox (list of integers [[x1,y1,x2,y2]], where x1: x coordinate of the top-left corner, y1: y coordinate of the top-left corner, x2: x coordinate of the bottom-right corner, y2: y coordinate of the bottom-right corner) – coordinates of the face region in the video frame

cam_id (string) – camera id

timestamp (datetime.datetime) – video frame timestamp

detectorParams (dictionary) – debug information from the video face detector

bs_type (string) – best face search mode. Available options: overall (findface-video-worker posts only one snapshot per track, but of the highest quality), realtime (findface-video-worker posts the best snapshot within each of the consecutive time intervals)

labels (dictionary) – (duplicates params.labels) a custom set of frame labels, which are specified in the job parameters for findface-video-worker and then assigned to the frame
The decision about face processing is made based on the data in request.params, including the custom set of labels, or for any other reason.
process method

This method is called if the preprocess method returns a non-empty tuple or list (i.e. containing the 'facen', 'gender', 'age', and/or 'emotions' strings). After findface-sf-api returns a response with the result of face detection (see the /detect POST request) with all the requested face features, the findface-facerouter component calls the process method of the plugin to perform the face processing itself.

To process a face, a plugin uses request.ctx.sfapi.
The basic process method to inherit from has the following syntax (see the Plugin class):

- process(self, request: FrHTTPRequest, photo: bytes, bbox: List[int], event_id: int, detection: DetectFace)
shutdown method

This method is only called before the findface-facerouter shutdown.
The basic shutdown
method to inherit from has the following syntax (see the Plugin
class):
- shutdown(self)
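Putting the basics together, a complete plugin might be organized as in the sketch below. It is illustrative only: the gallery name, label key, and metadata are made up, attribute-style access to request.params is assumed, the import path follows the class name documented in the next section, and the sfapi and process calls are assumed to be coroutines inside the tornado-based findface-facerouter.

from typing import Mapping

from facerouter.plugin import Plugin


class ExamplePlugin(Plugin):
    def __init__(self, ctx):
        self.ctx = ctx

    def preprocess(self, request, labels: Mapping[str, str]):
        # Process only frames whose job carries an illustrative 'location' label.
        if labels.get('location') != 'entrance':
            return ()
        # Ask to extract a feature vector and recognize gender for this face.
        return ('facen', 'gender')

    async def process(self, request, photo, bbox, event_id, detection):
        # Store the detection result in an illustrative gallery, generating the
        # face id with the generator delivered in the plugin context.
        gallery = request.ctx.sfapi.gallery('example_gallery')
        await gallery.add(
            self.ctx.idgen(),
            detection,
            meta={'cam_id': request.params.cam_id},
        )

    def shutdown(self):
        # Called only before the findface-facerouter shutdown; release resources here.
        pass


def activate(app, ctx, plugin_name, plugin_source):
    # findface-facerouter calls this function when it loads the plugin; the returned
    # object ends up in the plugins dictionary of the context.
    return ExamplePlugin(ctx)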
Plugin Classes and Methods
In this section:
- class facerouter.plugin.Plugin
Provides the basic methods for writing a plugin (see Plugin Basics). A custom class that wraps a plugin must inherit from the Plugin class.

- preprocess(self, request: FrHTTPRequest, labels: Mapping[str, str]) -> Tuple[str]

Returns a tuple that contains one or several of the strings 'facen', 'gender', 'age', 'emotions'. This means that findface-facerouter must request findface-extraction-api to extract a biometric sample and recognize gender, age, and emotions, respectively.

- Parameters

FrHTTPRequest (tornado.httpserver.HTTPRequest) – an HTTP API request that includes an extra argument params

labels (dictionary) – a custom set of frame labels from request.params

- Returns

one or several of the strings 'facen', 'gender', 'age', 'emotions'

- Return type

Tuple[str]

The params argument of FrHTTPRequest includes the following fields:

- Parameters
photo (bytes) – JPEG video frame featuring a detected face

face0 (bytes) – normalized face image

bbox (list of integers [[x1,y1,x2,y2]], where x1: x coordinate of the top-left corner, y1: y coordinate of the top-left corner, x2: x coordinate of the bottom-right corner, y2: y coordinate of the bottom-right corner) – coordinates of the face region in the video frame

cam_id (string) – camera id

timestamp (datetime.datetime) – video frame timestamp

detectorParams (dictionary) – debug information from the video face detector

bs_type (string) – best face search mode. Available options: overall (findface-video-worker posts only one snapshot per track, but of the highest quality), realtime (findface-video-worker posts the best snapshot within each of the consecutive time intervals)

labels (dictionary) – (duplicates params.labels) a custom set of frame labels, which are specified in the job parameters for findface-video-worker and then assigned to the frame
- process(self, request: FrHTTPRequest, photo: bytes, bbox: List[int], event_id: int, detection: DetectFace)
Accepts the detected face features.
- Parameters
request (tornado.httpserver.HTTPRequest) – an HTTP API request from findface-video-worker
photo (bytes) – JPEG video frame featuring a detected face, from
request.params
bbox (list of integers [[x1,y1,x2,y2]], where
x1
: x coordinate of the top-left corner,y1
: y coordinate of the top-left corner,x2
: x coordinate of the bottom-right corner,y2
: y coordinate of the bottom-right corner) – coordinates of the face region in the video frame, fromrequest.params
event_id (uint64) – id of the face automatically set by findface-facerouter upon receiving it from findface-video-worker. Can be used as a custom face identifier in the biometric database.

detection (objects.DetectFace) – detection result received from findface-sf-api that contains the requested face features such as facen, gender, age, and emotions.
- Returns
n/a
- Return type
n/a
- shutdown(self)
This method is invoked before the
findface-facerouter
shutdown.- Parameters
n/a
- Returns
n/a
- class objects.BBox
Represents coordinates of the rectangle around a face.
- class objects.DetectFace
Represents a detection result with the following fields:
- Parameters
id (string) – id of the detection result in memcached
bbox (objects.Bbox) – coordinates of the rectangle around a face
features (dictionary) – (optional) information about gender, age and emotions
- class objects.DetectResponse
Represents a list of objects.DetectFace objects with an additional field orientation featuring information about the face EXIF orientation in the image.

- Parameters
orientation (EXIF orientation) – orientation of a detected face
- class objects.FaceId(namedtuple('FaceId', ('gallery', 'face')))
Represents a custom face identifier object in the gallery.
- Parameters
gallery (string) – gallery name
face (integer) – custom face identifier in the gallery
- class objects.Face
Represents a result of database search by biometric sample
- Parameters
id (objects.FaceId) – FaceId object.
features (dictionary) – information about gender, age and emotions
meta (dictionary) – face meta data
confidence (float) – similarity between the biometric sample and a face in the search result
- class objects.ListResponse
Represents a list of
objects.Face
objects (i.e. a list of biometric sample search results) with an additional fieldnext_page
featuring the cursor for the next page with search results.- Parameters
next_page (string) – cursor for the next page with search results
- class ntech.sfapi_client.client.Client
Represents basic methods to detect faces in images and work with galleries.
- detect(self, *, url=None, image=None, facen=False, gender=False, age=False, emotions=False, return_facen=False, autorotate=False, detector: str = None, timeout=None) -> DetectResponse
Detects a face and returns the result of detection.
- Parameters
url (URL) – image URL if you pass an image that is publicly accessible on the internet
image (bytes) – PNG/JPG/WEBP image file if you pass an image as a file

facen (boolean) – extract a biometric sample from the detected face. To save the detection result in memcached, pass facen=True
gender (boolean) – extract and return information about gender
age (boolean) – extract and return information about age
emotions (boolean) – extract and return information about emotions
return_facen (boolean) – return facen in the method result
autorotate (boolean) – automatically rotate the image in 4 different orientations to detect faces in each of them. Overlapping detections with IOU > 0.5 will be merged
detector (string) – nnd or normalized. The normalized detector is used to process normalized images, for example, those received from fkvideo_worker.

timeout (number) – FindFace core response timeout, in seconds (if none, the default value is used)
- Returns
Detection result
- Return type
DetectResponse object.
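For illustration, a plugin could call this method on the client available as request.ctx.sfapi roughly as follows. This is a sketch: the call is assumed to be a coroutine, and the frame bytes are taken from request.params.photo.

async def detect_in_frame(request):
    # Detect faces in the posted frame and request gender and age recognition.
    response = await request.ctx.sfapi.detect(
        image=request.params.photo,  # JPEG frame received from findface-video-worker
        facen=True,                  # keep the detection result in memcached
        gender=True,
        age=True,
    )
    # DetectResponse behaves as a list of DetectFace objects.
    for face in response:
        print(face.id, face.bbox, face.features)
    return response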
- gallery(self, name)
Returns a gallery object
sfapi_client.Gallery
to refer to it later (for example, to list gallery faces).- Parameters
name (string) – gallery name
- Returns
a gallery object
- Return type
sfapi_client.Gallery
- list_galleries(self, timeout=None):
Returns the list of galleries.
- Parameters
timeout (number) – FindFace core response timeout, in seconds (if
none
, the default value is used)- Returns
list of galleries with the fields
name
(a gallery name, string) andnumber
(the number of faces in the gallery, number)- Return type
list of GalleryListItem
- class ntech.sfapi_client.gallery.Gallery
Provides methods to work with galleries and faces.
- list(self, *, filters: Iterable[filters.Filter] = None, limit: int = 1000, sort: str = '', page=None, ignore_errors=False, timeout=None) ListResponse
Returns a list-like object with faces from the gallery, that match the given filters. The returned list-like object has an additional property
next_page
which can be used as a value for thepage
parameter in next requests.- Parameters
filters (sfapi_client.filters.Filter) – list of filters
limit (integer) – maximum number of returned faces
sort (string) – sorting order. Pass one of the following values:
id
: increasing order by id,-id
: decreasing order by id (sorting by id is used if you have NOT specified a feature vector to search for),-confidence
: decreasing order by face similarity (only if you have specified a feature vector to search for). By default, the method uses theid
order (no feature vector specified), or-confidence
(with feature vector).page – cursor of the next page with search results. The
page
value is returned in the response in thenext_page
parameter along with the previous page results.ignore_errors (boolean) – By default, if one or several
findface-tarantool-server
shards are out of service during face identification,findface-sf-api
returns an error. Enable this Boolean parameter to use availablefindface-tarantool-server
shards to obtain face identification results.timeout (number) – FindFace core response timeout, in seconds (if
none
, the default value is used)
- Returns
list with faces from the gallery, that match the given filters.
- Return type
ListResponse object
- add(self, new_id: Union[int, Callable], source: Union[DetectFace, Face, str], *, meta: Dict[str, Union[int, str, List[str]]] = None, regenerate_attempts=None, timeout=None) Face
Creates a face in the gallery.
- Parameters
new_id (integer or callable) – custom face identifier (Face ID) in the database gallery. May be an (async) callable which returns the id. To generate the id, you can use the ctx.idgen() function delivered with the context.

source (sfapi_client.DetectFace, sfapi_client.Face, sfapi_client.FaceId, or string) – face source: create a face using another face in the database or a detection result as a source.
meta (dictionary) – face metadata. Keys must be strings and values must be either ints, strings or lists of strings. Metadata keys and types must be previously specified in the storage configuration files (
/etc/tarantool/instances.available/*.lua
).regenerate_attempts – number of attempts to regenerate a unique Face ID with the
ctx.idgen()
function if new_id is callabletimeout (number) – FindFace core response timeout, in seconds (if
none
, the default value is used)
- Returns
representation of the newly created face
- Return type
Face object
- delete(self, face: Union[Face, int], timeout=None) None
Removes a face from the gallery.
- Parameters
face (sfapi_client.Face, sfapi_client.FaceId or id in integer) – face to be removed
timeout (number) – FindFace core response timeout, in seconds (if
none
, the default value is used)
- Returns
None
- get(self, face: Union[Face, int], timeout=None) Face
Retrieves a face from the gallery.
- Parameters
face (sfapi_client.Face, sfapi_client.FaceId or id in integer) – face to be retrieved
timeout (number) – FindFace core response timeout, in seconds (if
none
, the default value is used)
- Returns
representation of the face
- Return type
Face object
- create(self, timeout=None) None
Creates a gallery in findface-sf-api as a sfapi_client.Gallery object. Being a proxy object, sfapi_client.Gallery does not require the gallery to already exist on the server.

- Parameters
timeout (number) – FindFace core response timeout, in seconds (if
none
, the default value is used)- Returns
None
- drop(self, timeout=None) None:
Removes a gallery from
findface-sf-api
.- Parameters
timeout (number) – FindFace core response timeout, in seconds (if
none
, the default value is used)- Returns
None
- update(self, face: Union[Face, str], *, meta: Dict[str, Union[int, str, List[str]]] = None, timeout=None) Face
Updates face metadata in the gallery.
- Parameters
face (sfapi_client.Face, sfapi_client.FaceId or id in integer) – face to be updated
meta (dictionary) – face meta data to be updated. Keys must be strings and values must be either ints, strings or lists of strings. If a meta string is not passed or passed as null, it won’t be updated in the database.
timeout (number) – FindFace core response timeout, in seconds (if
none
, the default value is used)
- Returns
representation of the updated face
- Return type
Face object
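As an illustration, the gallery methods above can be combined into a simple lifecycle as sketched below. The gallery name and metadata key are made up, the metadata key is assumed to be declared in the Tarantool storage configuration, and the calls are assumed to be coroutines.

async def face_lifecycle(ctx, sfapi, detection):
    # Gallery objects are proxies; see the create() method for creating the gallery itself.
    gallery = sfapi.gallery('example_gallery')

    # Create a face from a detection result, generating its id with ctx.idgen().
    face = await gallery.add(ctx.idgen(), detection, meta={'person_name': 'Mary Ostin'})

    # Update its metadata later on.
    face = await gallery.update(face, meta={'person_name': 'Mary Ostin-Smith'})

    # Retrieve and finally remove it from the gallery.
    face = await gallery.get(face)
    await gallery.delete(face)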
- class ntech.sfapi_client.filters.Filter
Generic class. Represents a list of filters (with assigned values) that have to be applied to the gallery content.
- class ntech.sfapi_client.filters.Id
Represents methods for filtering gallery content by id. Don’t instantiate, use relevant classmethods to call a filter.
- classmethod lte(cls, value: int) Filter
LTE filter. Select all faces with
id
less or equal tovalue
.- Parameters
value (integer) – id value
- Returns
filter name (LTE) and its value.
- Return type
object of
Filter
class.
Example:
Id.lte(1234)
selects faces with id less or equal to1234
.
- classmethod gte(cls, value: int) Filter
GTE filter. Select all faces with
id
greater or equal tovalue
.- Parameters
value (integer) – id value
- Returns
filter name (GTE) and its value.
- Return type
object of
Filter
class.
Example:
Id.gte(1234) selects faces with id greater than or equal to 1234.
- classmethod oneof(cls, *value: Union[int]) Filter
IN filter. Select a face(s) with id from a given set.
- Parameters
value (list of integers) – list of id values
- Returns
filter name (IN) and its value.
- Return type
object of
Filter
class.
Example:
Id.oneof(1234, 5678)
selects a face(s) with id1234
and/or5678
.
- class ntech.sfapi_client.filters.Meta
Represents methods for filtering gallery content by metadata. Don’t instantiate, use relevant classmethods to call a filter.
- classmethod lte(self, value: Union[str, int]) Filter
LTE filter. Select all faces with a metastring less or equal to
value
- Parameters
value (string or integer) – metastring value
- Returns
filter name (LTE) and its value.
- Return type
object of
Filter
class.
Example:
Meta('foo').lte(1234)
selects faces with a metastringfoo
less or equal to1234
.
- classmethod gte(self, value: Union[str, int]) Filter
GTE filter. Select all faces with a metastring greater or equal to
value
- Parameters
value (string or integer) – metastring value
- Returns
filter name (GTE) and its value.
- Return type
object of
Filter
class.
Example:
Meta('foo').gte(1234)
selects faces with a metastringfoo
greater or equal to1234
.
- classmethod oneof(self, *value: Union[str, int]) Filter
IN filter. Select a face(s) with a metastring from a given set.
- Parameters
value (list of strings or integers) – list of metastring values
- Returns
filter name (IN) and its value.
- Return type
object of
Filter
class.
Example:
Meta('foo').oneof(1234, 5678) selects a face(s) with the metastring foo equal to 1234 and/or 5678.
- classmethod subset(self, *value: str) Filter
SUBSET filter. Select all faces with a metastring featuring all values from a given set.
- Parameters
value (list of strings or integers) – list of metastring values
- Returns
filter name (SUBSET) and its value.
- Return type
object of
Filter
class.
Example:
Meta('foo').subset("male", "angry")
selects face with a metastringfoo
featuring all values from the set [“male”, “angry”].
- class ntech.sfapi_client.filters.Detection(Filter)
Represents a method that identifies a detected face (searches the database for similar faces).
- __init__(self, id: Union[str, objects.DetectFace], threshold: float)
- Parameters
id (
objects.DetectFace
or temporary face id in memcached returned by sfapi_client.Client.detect(), string) – face (detection result) to be identifiedthreshold (float) – identification threshold similarity between faces from 0 to 1.
Example:
Detection(det1, 0.77)
selects faces similar to the detection resultdet1
with similarity greater or equal to0.77
.
- class ntech.sfapi_client.filters.Face(Filter)
Represents a method that searches the database for faces similar to a given face from a gallery.
- __init__(self, id: Union[str, objects.Face], threshold: float)
- Parameters
id (
objects.Face
,objects.FaceId
or custom face id in the gallery, string) – face from a gallery to be identifiedthreshold (float) – identification threshold similarity between faces from 0 to 1.
Example: Face(FaceId("gal1", 1234), 0.77) selects faces similar to the face 1234 from the gal1 gallery with similarity greater than or equal to 0.77.
Several Filters Usage Example
filters=[filters.Id.gte(123456), filters.Meta('age').gte(45), filters.Meta('camera').oneof('abc', 'def')]
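Such a filter list is typically passed to the list method of a gallery. A sketch, assuming the client is available as sfapi, the call is a coroutine, and the gallery name is illustrative:

from ntech.sfapi_client import filters


async def list_matching_faces(sfapi):
    faces = await sfapi.gallery('example_gallery').list(
        filters=[
            filters.Id.gte(123456),
            filters.Meta('age').gte(45),
            filters.Meta('camera').oneof('abc', 'def'),
        ],
        limit=10,
        sort='-id',
    )
    print(faces.next_page)  # cursor for the next page of results
    return faces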
- class sfapi_client.SFApiRemoteError
This error message appears if the error occurred for a reason other than a network failure.
The error body always includes at least two fields:

code is a short string in CAPS_AND_UNDERSCORES, usable for automatic decoding.

reason is a human-readable description of the error and should not be interpreted automatically.
Common Error Codes
Error code | Description
---|---
 | Error with unknown origin.
 | The request can be read, however, some method parameters are invalid. This response type contains additional attributes.
 | Conflict.
 | Error upon a face feature vector extraction.
 | The system configuration does not match the license.
 | The request is malformed and cannot be read.
 | The …
 | The face in the …
 | The gallery in the …
 | The biometric database is not available.
 | Memcached is not available.
 | Matching faces not found.
 | This functionality is not implemented.
 | Matching galleries not found.
- class sfapi_client.SFApiMalformedResponseError
This error message appears if the error occurred due to a network failure, or if Client was unable to read an API response from
findface-sf-api
.
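A plugin that needs to distinguish these two failure modes could wrap its calls roughly as in the sketch below. The import path follows the class names above, and attribute access to the code and reason fields on the exception object is an assumption.

from sfapi_client import SFApiMalformedResponseError, SFApiRemoteError


async def safe_add(gallery, new_id, detection):
    try:
        return await gallery.add(new_id, detection)
    except SFApiRemoteError as e:
        # code is machine-readable (CAPS_AND_UNDERSCORES); reason is for humans only.
        print('findface-sf-api rejected the request: code=%s reason=%s' % (e.code, e.reason))
    except SFApiMalformedResponseError:
        # Network failure or an unreadable response from findface-sf-api.
        print('could not read the findface-sf-api response')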
Video Recorder: findface-video-storage
and findface-video-streamer-cpu
Video Recorder is an optional part of the FindFace Core. It works in the following way:
The findface-video-storage service implements video chunk management. It takes video chunks from the findface-video-worker component, puts them into the storage (findface-upload), and writes their meta-information and whereabouts to the Video Recorder database (MongoDB). By request from findface-security, it issues information about existing video chunks in the form of WebSocket links to the corresponding streams. These links are further used by findface-video-streamer-cpu to deliver the video to a user for viewing and downloading.

By request from findface-security-ui, the findface-video-streamer-cpu service extracts the requested video chunks from findface-video-storage and findface-video-worker (only the last chunk, if it is not yet recorded to the storage). It then merges the video chunks into a single video and delivers it to a user for viewing and downloading.
The findface-video-storage
configuration is done through the /etc/findface-video-storage.conf
configuration file. You can find its default content here
.
The findface-video-streamer-cpu
configuration is done through the /etc/findface-video-streamer-cpu.ini
configuration file. You can find its default content here
.
To configure Video Recorder, refer to this section.
Installation File
FindFace Multi installation configuration is automatically saved to a file /tmp/<findface-installer-*>.json
. You can edit this file and use it to install FindFace Multi on other hosts without having to answer the installation questions again.
Tip
See Deploy from Console Installer to learn more about the FindFace Multi installer.
Important
Be sure to remove fields *.config
, exp_ip
, and int_ip
before installing FindFace Multi on a host with a different IP address.
Here is an example of the installation file
.
Neural Network Models
Here you can see a summary for neural network models created by our Lab and used in FindFace Multi.
You can find installed models at /usr/share/findface-data/models/
.
Important
The default face biometrics model upon a clean install is kiwi_320
.
Face detection
ls /usr/share/findface-data/models/facedet/
cheetah.cpu.fnk cheetah_fast.cpu.fnk cheetah_fast.gpu.fnk cheetah.gpu.fnk
Face and body image normalization
ls /usr/share/findface-data/models/facenorm/
bee_fast.cpu.fnk bee.v2.gpu.fnk crop2x.v2_maxsize400.cpu.fnk crop2x.v2_no_maxsize.gpu.fnk
bee_fast.gpu.fnk crop1x.v2_maxsize400.cpu.fnk crop2x.v2_maxsize400.gpu.fnk cropbbox.v2.cpu.fnk
bee.v2.cpu.fnk crop1x.v2_maxsize400.gpu.fnk crop2x.v2_no_maxsize.cpu.fnk cropbbox.v2.gpu.fnk
Face recognition
ls /usr/share/findface-data/models/face/
kiwi_160.cpu.fnk kiwi_320.cpu.fnk
kiwi_160.gpu.fnk kiwi_320.gpu.fnk
Face attribute recognition
ls /usr/share/findface-data/models/faceattr/
age.v2.cpu.fnk emotions.v1.cpu.fnk glasses3.v0.cpu.fnk liveness.pacs.v0.cpu.fnk quality.v1.cpu.fnk
age.v2.gpu.fnk emotions.v1.gpu.fnk glasses3.v0.gpu.fnk liveness.pacs.v0.gpu.fnk quality.v1.gpu.fnk
beard.v0.cpu.fnk gender.v2.cpu.fnk liveness.colombo.cpu.fnk medmask3.v2.cpu.fnk
beard.v0.gpu.fnk gender.v2.gpu.fnk liveness.colombo.gpu.fnk medmask3.v2.gpu.fnk
Car detection
ls /usr/share/findface-data/models/cadet/
efreitor.cpu.fnk efreitor.gpu.fnk
Car image normalization
ls /usr/share/findface-data/models/carnorm/
anaferon.v3.cpu.fnk anaferon.v3.gpu.fnk
Car recognition
ls /usr/share/findface-data/models/carrec/
alonso.cpu.fnk alonso.gpu.fnk
Car attribute recognition
ls /usr/share/findface-data/models/carattr/
carattr.license_plate_quality.v0.cpu.fnk carattr.quality.v0.cpu.fnk description.v0.cpu.fnk
carattr.license_plate_quality.v0.gpu.fnk carattr.quality.v0.gpu.fnk description.v0.gpu.fnk
carattr.license_plate.v4.cpu.fnk carattr.special_types.v0.cpu.fnk
carattr.license_plate.v4.gpu.fnk carattr.special_types.v0.gpu.fnk
Body detection
ls /usr/share/findface-data/models/pedet/
glenn_005.cpu.fnk glenn_005.gpu.fnk glenny_005_fast.cpu.fnk glenny_005_fast.gpu.fnk
Body recognition
ls /usr/share/findface-data/models/pedrec/
andariel.cpu.fnk andariel.gpu.fnk
Body attribute recognition
ls /usr/share/findface-data/models/pedattr/
pedattr.clothes_type.v0.cpu.fnk pedattr.color.v1.cpu.fnk pedattr.quality.v0.cpu.fnk
pedattr.clothes_type.v0.gpu.fnk pedattr.color.v1.gpu.fnk pedattr.quality.v0.gpu.fnk
FindFace Multi Data Storages
In this section:
List of Storages
FindFace Multi uses the following data storages:
Tarantool-based feature vector database that stores object feature vectors and events.

PostgreSQL-based main system database ffsecurity that stores internal system data, cards, user accounts, and camera settings.

The /var/lib/findface-security/uploads directory that stores uploaded card photos, video files, full frames of events and counters, and object thumbnails.

The /var/lib/ffupload/ directory that stores such event artifacts as normalized object images.

Note

It also stores video chunks if you are using Video Recorder.

(Only with Video Recorder) MongoDB-based database that stores meta-information of the video chunks, including their exact location in the storage, /var/lib/ffupload/.
Feature Vector Database Galleries
There are the following galleries in the Tarantool-based feature vector database:
ffsec_body_events: feature vectors extracted from bodies detected in the video.

ffsec_body_objects: feature vectors extracted from body images uploaded to the card index.

ffsec_body_clusters: centroids of body clusters.

ffsec_car_events: feature vectors extracted from cars detected in the video.

ffsec_car_objects: feature vectors extracted from car images uploaded to the card index.

ffsec_car_clusters: centroids of car clusters.

ffsec_face_events: feature vectors extracted from faces detected in the video.

ffsec_face_objects: feature vectors extracted from face images uploaded to the card index.

ffsec_face_clusters: centroids of face clusters.

ffsec_user_face: feature vectors extracted from the FindFace Multi users’ photos for face-based authentication.
Backup Options
To back up the feature vector database, you need the findface-storage-api-dump utility. It can be launched with the following options:
Note
You can find the detailed information on the findface-storage-api-dump
usage in Back Up and Recover FindFace Multi and Its Data.
findface-storage-api-dump --help
...
Command line flags:
-cache string
Cache type: inmemory, redis or memcache (default "memcache")
-cache-inmemory-size int
Maximum number of items in ARC cache (default 16384)
-cache-memcache-dns-cache-timeout duration
DNS cache timeout (default 1m0s)
-cache-memcache-nodes value
Comma-separated list of memcache shards (default 127.0.0.1:11211)
-cache-memcache-timeout duration
Specifies read/write timeout (default 100ms)
-cache-redis-db int
Database to be selected after connecting to the server.
-cache-redis-network string
Network type, either tcp or unix (default "tcp")
-cache-redis-nodes value
Array of Host:Port addresses (default localhost:6379)
-cache-redis-password string
Optional password. Must match the password specified in the requirepass server configuration option.
-cache-redis-timeout duration
Specifies dial/read/write timeout (default 5s)
-config string
Path to config file
-config-template
Output config template and exit
-continue-on-errors
Continue on errors instead of exiting
-cpu-profile string
Enable CPU profile and set output file
-debug
Enable debug logging
-extraction-api-extraction-api string
Extraction API address (default "http://127.0.0.1:18666")
-extraction-api-keepalive duration
keep-alive connection timeout (default 24h0m0s)
-extraction-api-max-idle-conns-per-host int
max idle keep-alive connections per host (default 20)
-extraction-api-timeouts-connect duration
extraction-api-timeouts-connect (default 5s)
-extraction-api-timeouts-idle-connection duration
extraction-api-timeouts-idle-connection (default 10s)
-extraction-api-timeouts-overall duration
extraction-api-timeouts-overall (default 35s)
-extraction-api-timeouts-response-header duration
extraction-api-timeouts-response-header (default 30s)
-extraction-api-trace
Enable HTTP tracing (extremely verbose, slows everything down considerably)
-help
Print help information
-limits-allow-return-facen
Allow returning raw feature vectors to detect responses if ?return_facen=true (v2) or ?return_emben=true (v3)
-limits-body-image-length int
Maximum length of image supplied in request body (default 33554432)
-limits-deny-networks string
Comma-separated list of subnets that are not allowed to fetch from (default "127.0.0.0/8,192.168.0.0/16,10.0.0.0/8,::1/128,fe00::/8")
-limits-url-length int
Maximum supported url length in bytes (default 4096)
-listen string
IP:port to listen on (default ":18411")
-normalized-storage-enabled
Enables normalize saving (default true)
-normalized-storage-s3-access-key string
Access key for the object storage
-normalized-storage-s3-bucket-name string
S3 storage bucket name
-normalized-storage-s3-endpoint string
S3 compatible object storage endpoint
-normalized-storage-s3-operation-timeout int
Storage operations (Get,Put,Delete) timeout in seconds (default 30)
-normalized-storage-s3-public-url string
Storage public url
-normalized-storage-s3-region string
Storage region
-normalized-storage-s3-secret-access-key string
Secret key for the object storage
-normalized-storage-s3-secure
If 'true' API requests will be secure (HTTPS), and insecure (HTTP) otherwise (default true)
-normalized-storage-webdav-keepalive duration
keep-alive connection timeout (default 24h0m0s)
-normalized-storage-webdav-max-idle-conns-per-host int
max idle keep-alive connections per host (default 20)
-normalized-storage-webdav-timeouts-connect duration
normalized-storage-webdav-timeouts-connect (default 5s)
-normalized-storage-webdav-timeouts-idle-connection duration
normalized-storage-webdav-timeouts-idle-connection (default 10s)
-normalized-storage-webdav-timeouts-overall duration
normalized-storage-webdav-timeouts-overall (default 35s)
-normalized-storage-webdav-timeouts-response-header duration
normalized-storage-webdav-timeouts-response-header (default 30s)
-normalized-storage-webdav-trace
Enable HTTP tracing (extremely verbose, slows everything down considerably)
-normalized-storage-webdav-upload-url string
webdav storage for normalized, disable normalized if empty string (default "http://127.0.0.1:3333/uploads/")
-normalized_storage string
Normalized storage type: webdav, s3 (default "webdav")
-objects value
Supported object types (default face,body,car)
-output-dir string
Output directory (default ".")
-storage-api-cooldown duration
Cooldown timeout after communication error (default 2s)
-storage-api-galleries-read-slave-first
Prefer slaves over master for get/list galleries requests
-storage-api-keepalive duration
keep-alive connection timeout (default 24h0m0s)
-storage-api-max-idle-conns-per-host int
max idle keep-alive connections per host (default 20)
-storage-api-max-slave-attempts int
Give up after trying to read from max_slave_attempts slaves (default 2)
-storage-api-read-slave-first
Prefer slaves over master for requests
-storage-api-read-slave-only
Ignore master on read requests. If true: ReadSlaveFirst will be ignored
-storage-api-timeouts-connect duration
storage-api-timeouts-connect (default 5s)
-storage-api-timeouts-idle-connection duration
storage-api-timeouts-idle-connection (default 10s)
-storage-api-timeouts-overall duration
storage-api-timeouts-overall (default 35s)
-storage-api-timeouts-response-header duration
storage-api-timeouts-response-header (default 30s)
-storage-api-trace
Enable HTTP tracing (extremely verbose, slows everything down considerably)
...
Restore Options
To restore the feature vector database from a backup, you need the findface-storage-api-restore
utility. It can be launched with the following options:
Note
You can find the detailed information on the findface-storage-api-restore
usage in Back Up and Recover FindFace Multi and Its Data.
findface-storage-api-restore --help
...
Command line flags:
-cache string
Cache type: inmemory, redis or memcache (default "memcache")
-cache-inmemory-size int
Maximum number of items in ARC cache (default 16384)
-cache-memcache-dns-cache-timeout duration
DNS cache timeout (default 1m0s)
-cache-memcache-nodes value
Comma-separated list of memcache shards (default 127.0.0.1:11211)
-cache-memcache-timeout duration
Specifies read/write timeout (default 100ms)
-cache-redis-db int
Database to be selected after connecting to the server.
-cache-redis-network string
Network type, either tcp or unix (default "tcp")
-cache-redis-nodes value
Array of Host:Port addresses (default localhost:6379)
-cache-redis-password string
Optional password. Must match the password specified in the requirepass server configuration option.
-cache-redis-timeout duration
Specifies dial/read/write timeout (default 5s)
-config string
Path to config file
-config-template
Output config template and exit
-cpu-profile string
Enable CPU profile and set output file
-debug
Enable debug logging
-dont-create-gallery
Don't create gallery, fail if doesn't exist
-extraction-api-extraction-api string
Extraction API address (default "http://127.0.0.1:18666")
-extraction-api-keepalive duration
keep-alive connection timeout (default 24h0m0s)
-extraction-api-max-idle-conns-per-host int
max idle keep-alive connections per host (default 20)
-extraction-api-timeouts-connect duration
extraction-api-timeouts-connect (default 5s)
-extraction-api-timeouts-idle-connection duration
extraction-api-timeouts-idle-connection (default 10s)
-extraction-api-timeouts-overall duration
extraction-api-timeouts-overall (default 35s)
-extraction-api-timeouts-response-header duration
extraction-api-timeouts-response-header (default 30s)
-extraction-api-trace
Enable HTTP tracing (extremely verbose, slows everything down considerably)
-help
Print help information
-limits-allow-return-facen
Allow returning raw feature vectors to detect responses if ?return_facen=true (v2) or ?return_emben=true (v3)
-limits-body-image-length int
Maximum length of image supplied in request body (default 33554432)
-limits-deny-networks string
Comma-separated list of subnets that are not allowed to fetch from (default "127.0.0.0/8,192.168.0.0/16,10.0.0.0/8,::1/128,fe00::/8")
-limits-url-length int
Maximum supported url length in bytes (default 4096)
-listen string
IP:port to listen on (default ":18411")
-normalized-storage-enabled
Enables normalize saving (default true)
-normalized-storage-s3-access-key string
Access key for the object storage
-normalized-storage-s3-bucket-name string
S3 storage bucket name
-normalized-storage-s3-endpoint string
S3 compatible object storage endpoint
-normalized-storage-s3-operation-timeout int
Storage operations (Get,Put,Delete) timeout in seconds (default 30)
-normalized-storage-s3-public-url string
Storage public url
-normalized-storage-s3-region string
Storage region
-normalized-storage-s3-secret-access-key string
Secret key for the object storage
-normalized-storage-s3-secure
If 'true' API requests will be secure (HTTPS), and insecure (HTTP) otherwise (default true)
-normalized-storage-webdav-keepalive duration
keep-alive connection timeout (default 24h0m0s)
-normalized-storage-webdav-max-idle-conns-per-host int
max idle keep-alive connections per host (default 20)
-normalized-storage-webdav-timeouts-connect duration
normalized-storage-webdav-timeouts-connect (default 5s)
-normalized-storage-webdav-timeouts-idle-connection duration
normalized-storage-webdav-timeouts-idle-connection (default 10s)
-normalized-storage-webdav-timeouts-overall duration
normalized-storage-webdav-timeouts-overall (default 35s)
-normalized-storage-webdav-timeouts-response-header duration
normalized-storage-webdav-timeouts-response-header (default 30s)
-normalized-storage-webdav-trace
Enable HTTP tracing (extremely verbose, slows everything down considerably)
-normalized-storage-webdav-upload-url string
webdav storage for normalized, disable normalized if empty string (default "http://127.0.0.1:3333/uploads/")
-normalized_storage string
Normalized storage type: webdav, s3 (default "webdav")
-objects value
Supported object types (default face,body,car)
-rename string
Ignore dump header and use this string as gallery name
-storage-api-cooldown duration
Cooldown timeout after communication error (default 2s)
-storage-api-galleries-read-slave-first
Prefer slaves over master for get/list galleries requests
-storage-api-keepalive duration
keep-alive connection timeout (default 24h0m0s)
-storage-api-max-idle-conns-per-host int
max idle keep-alive connections per host (default 20)
-storage-api-max-slave-attempts int
Give up after trying to read from max_slave_attempts slaves (default 2)
-storage-api-read-slave-first
Prefer slaves over master for requests
-storage-api-read-slave-only
Ignore master on read requests. If true: ReadSlaveFirst will be ignored
-storage-api-timeouts-connect duration
storage-api-timeouts-connect (default 5s)
-storage-api-timeouts-idle-connection duration
storage-api-timeouts-idle-connection (default 10s)
-storage-api-timeouts-overall duration
storage-api-timeouts-overall (default 35s)
-storage-api-timeouts-response-header duration
storage-api-timeouts-response-header (default 30s)
-storage-api-trace
Enable HTTP tracing (extremely verbose, slows everything down considerably)
...
Direct API Requests to Tarantool
You can use HTTP API to extract object data (faces, bodies, cars) directly from the Tarantool Database.
Note
In the current implementation, Tarantool treats all objects as faces. For example, to add an object, send POST /:ver/faces/add/:name.
In this section:
General Information
API requests to Tarantool are to be sent to http://<tarantool_host_ip:port>
.
Tip
The port for API requests can be found in the FindFace.start
section of the Tarantool configuration file /etc/tarantool/instances.available/*.lua
:
cat /etc/tarantool/instances.available/*.lua
##8101:
FindFace.start("127.0.0.1", 8101)
Note
In the case of the standalone deployment, you can access Tarantool by default only locally (127.0.0.1). If you want to access Tarantool remotely, alter the Tarantool configuration file (/etc/tarantool/instances.available/*.lua
).
API requests to Tarantool may contain the following parameters in path segments:
:ver: API version (v2 at the moment).

:name: gallery name.
By default, there are the following galleries in the Tarantool database:
ffsec_body_events: feature vectors extracted from bodies detected in the video.

ffsec_body_objects: feature vectors extracted from body images uploaded to the card index.

ffsec_body_clusters: centroids of body clusters.

ffsec_car_events: feature vectors extracted from cars detected in the video.

ffsec_car_objects: feature vectors extracted from car images uploaded to the card index.

ffsec_car_clusters: centroids of car clusters.

ffsec_face_events: feature vectors extracted from faces detected in the video.

ffsec_face_objects: feature vectors extracted from face images uploaded to the card index.

ffsec_face_clusters: centroids of face clusters.

ffsec_user_face: feature vectors extracted from the FindFace Multi users’ photos for face-based authentication.
Tip
To list gallery names on a shard, type in the following command in the address bar of your browser:
http://<tarantool_host_ip:shard_port>/stat/list/1/99
The same command on the console is as such:
curl <tarantool_host_ip:shard_port>/stat/list/1/99 | jq
You can also list gallery names by using a direct request to Tarantool:
echo 'box.space.galleries:select()' | tarantoolctl connect <tarantool_host_ip:shard_port>
Note that if there is a large number of shards in the system, chances are that a randomly taken shard does not contain all the existing galleries. In this case, just list galleries on several shards.
Add Object
POST /:ver/faces/add/:name
Parameters in body:
JSON-encoded array of objects with the following fields:
"id"
: object id in the gallery, uint64_t,"facen"
: raw feature vector, base64,"meta"
: object metadata, dictionary.
Returns:
HTTP 200 and empty body on success.
HTTP 404 if a gallery with the given name doesn’t exist.
HTTP with a status other than 200 and error description in the body on failure.
Example
Request
curl -D - -s 'http://localhost:8101/v2/faces/add/testgal' --data '
[
{
"id": 9223372036854776000,
"facen": "qgI3vZRv/z…NpO9MdHavW1WuT0=",
"meta": {
"cam_id": "223900",
"person_name": "Mary Ostin",
}
}
]
Response
HTTP/1.1 200 Ok
Content-length: 1234
Server: Tarantool http (tarantool v1.7.3-673-g23cc4dc)
Connection: keep-alive
Remove Object
POST /v2/faces/delete/:name
Parameters in body:
JSON-encoded array of object ids to be removed
Returns:
HTTP 200 and empty body on success.
HTTP 404 if an object with the given id is not found in the gallery.
HTTP with a status other than 200 and error description in the body on failure.
Example
Request
curl -D - -s 'http://localhost:8101/v2/faces/delete/testgal' --data '[1, 4, 922, 3]'
Response
HTTP/1.1 200 Ok
Content-length: 111
Server: Tarantool http (tarantool v1.7.3-673-g23cc4dc)
Connection: keep-alive
Object Search
POST /v2/faces/search/:name
Parameters in body:
JSON-encoded search request with the following fields:
limit: maximum number of objects in the response.

sort: sorting order. Pass one of the following values: id: increasing order by id, -id: decreasing order by id, -score: decreasing order by object similarity (only if you search for objects with similar feature vectors).

filter (filters):

facen: (optional) search for objects with similar feature vectors. Pass a dictionary with the following fields: data: raw feature vector, base64; score: range of similarity between objects [threshold similarity; 1], where 1 is a 100% match.

id and meta/<meta_key>: search by object id and metastring content. To set this filter, use the following operators:

range: range of values, only for numbers.

set: id or metastring must contain at least one value from a given set, for numbers and strings.

subset: id or metastring must include all values from a given subset, for numbers and strings.

like: by analogy with like in SQL requests: only ‘aa%’, ‘%aa’, and ‘%aa%’ are supported. Only for strings and set[string]. In the case of set[string], the filter will return a result if at least one value meets the filter condition.

ilike: by analogy with like but case-insensitive, only for strings and set[string].
Returns:
JSON-encoded array with objects on success. The value in the X-search-stat header indicates whether the fast index was used for the search: with_index or without_index.

Note

Fast index is not used in API v2.
HTTP with a status other than 200 and error description in the body on failure.
Example
Request
curl -D - -s 'http://localhost:8101/v2/faces/search/testgal' --data '
{
"limit": 2,
"sort": {
"score": -1
},
"filter": {
"facen": {
"data": "qgI3vZRv/z0BQTk9rcirOyZrNpO9MdHavW1WuT0=",
"score": [0.75, 1]
},
"id": {
"range": [9223372036854000000, 9223372036854999000]
},
"meta": {
"person_id": {
"range": [444, 999]
},
"cam_id": {
"set": ["12767", "8632", "23989"]
}
}
}
}'
Response
HTTP/1.1 200 Ok
Content-length: 1234
X-search-stat: without_index
Server: Tarantool http (tarantool v1.7.3-673-g23cc4dc)
Connection: keep-alive
{
"results": [
{
"facen": " qgI3vZRv/z0BQTk9rcirOyZrNpO9MdHavW1WuT0=",
"meta": {
"timestamp": 0,
"photo_hash": "",
"person_id": 777,
"cam_id": "8632"
},
"score": 0.9964,
"id": 9223372036854776000
}
]
}
Edit Object Metadata and Feature Vector
POST /v2/faces/update/:name
Parameters in body:
JSON-encoded array with objects with the following fields:
"id"
: object id, uint64_t."facen"
: (optional) new feature vector, base64. If omitted or passed asnull
, the relevant field in the database won’t be updated."meta"
: dictionary with metadata to be updated. If some metastring is omitted or passed asnull
, the relevant field in the database won’t be updated.
Returns:
HTTP 200 and dictionary with all object parameters, including not updated, on success.
HTTP 404 and error description if an object with the given id doesn’t exist.
HTTP with a status other than 200 and error description in the body on failure.
Example
Request
curl -D - -s 'http://localhost:8101/v2/faces/update/sandbox' --data '[{"id":1,"facen":null,"meta":{"m:timestamp":1848}}]'
Response
HTTP/1.1 200 Ok
Content-length: 151
Server: Tarantool http (tarantool v1.7.3-673-g23cc4dc)
Connection: keep-alive
{"meta":{"m:timestamp":1848,"normalized_id":"1_b9pkrf00mjt6h1vmq1kg.png","m:cam_id":"a9f7a973-f07e-469d-a3bd-41ddd510b26f","feat":"{\"score\":0.123}"}, "id":1, ... }
List Galleries
POST /v2/galleries/list
Returns:
JSON-encoded array with galleries with the following fields: name: gallery name, faces: number of objects in a gallery.
Example
Request
curl -D - -s -X POST http://localhost:8101/v2/galleries/list
Response
HTTP/1.1 200 Ok
Content-length: 42
Server: Tarantool http (tarantool v1.7.3-673-g23cc4dc)
Connection: keep-alive
{
"results": [
{
"name": "testgal",
"faces": 2
}
]
}
Get Gallery Info
POST /v2/galleries/get/:name
Returns:
HTTP 200 and dictionary with gallery parameters on success.
HTTP 404 and error description if a gallery with the given name doesn’t exist.
HTTP with a status other than 200 and error description in the body on failure.
Example
Request
curl -D - -s -X POST http://localhost:8101/v2/galleries/get/testgal
HTTP/1.1 200 Ok
Content-length: 11
Server: Tarantool http (tarantool v1.7.3-673-g23cc4dc)
Connection: keep-alive
{"faces":2}
Create Gallery
POST /v2/galleries/add/:name
Returns:
HTTP 200 and empty body on success.
HTTP with a status other than 200 and error description in the body on failure.
Example
Request
curl -D - -X POST -s 'http://localhost:8101/v2/galleries/add/123'
Response
HTTP/1.1 200 Ok
Content-length: 111
Server: Tarantool http (tarantool v1.7.3-673-g23cc4dc)
Connection: keep-alive
Remove Gallery
POST /v2/galleries/delete/:name
Returns:
HTTP 200 and empty body on success.
HTTP with a status other than 200 and error description in the body on failure.
Example
Request
curl -D - -X POST -s 'http://localhost:8101/v2/galleries/delete/123'
Response
HTTP/1.1 204 No content
Content-length: 0
Server: Tarantool http (tarantool v1.7.3-673-g23cc4dc)
Connection: keep-alive
User’s Guide
This chapter describes how to work with the FindFace Multi web interface, including its advanced possibilities, and will be of interest to administrators, analysts, operators, and other users.
Getting Started
Once you have successfully deployed and configured FindFace Multi, it’s time to open the web interface, and get started. In this chapter, you can find a recommended sequence of steps that will help you harness your system’s complete functionality.
In this chapter:
Gear Up for Work
Perform the primary configuration of your system:
Organize Cameras
Create a new camera group or use the default one. A camera group is an entity that allows you to group cameras subject to their physical location. For example, cameras at the same entrance to a building can be combined into one camera group.
Add cameras to the camera group and check their statuses.
You may also need:
Configure your system to process video from the group of cameras at their physical location. It may come in handy in a distributed architecture. Learn more.
Consider enabling event deduplication if observation scenes of cameras within the group overlap. This feature allows you to exclude coinciding object recognition events among cameras belonging to the same group. Learn more.
Enable video recording for selected cameras (available with Video Recorder).
Organize Watch Lists and Card Index
Create a new watch list or use the default one. A watch list is an entity that allows you to classify objects (faces, bodies, cars) by arbitrary criteria, e.g., persona non grata, wanted, VIP, staff, etc.
Upload cards and add them to the watch list either manually or in bulk via the web interface, or use the console bulk upload function.
You may also need:
Customize card content. Create additional fields, tabs, and search filters.
Create relations between cards to improve your system's analytical capacity.
Start Monitoring Objects
By default, FindFace Multi monitors only unmatched objects. To enable monitoring of a watch list, make this list active. You can also turn on sound notifications and request manual acknowledgment for the events associated with the list.
You may also need:
Enable automatic clustering of objects of the same origin: face/body images belonging to the same person, images of the same car.
Support laws related to the processing of personal data of individuals (GDPR and similar). Learn more.
Organize Video Surveillance
Create a camera layout for essential video surveillance.
You may also need:
For advanced video surveillance, enable Video Recorder.
Count People and Cars. Measure Distance between People
Set up counters to count faces, bodies, and cars on connected cameras. You can also configure counters to measure distance between bodies. Counters can be applied to a wide range of situations, such as people counting in queues and waiting areas, monitoring public gatherings, crowding prevention, health protocol enforcement, traffic jam detection, and more.
Manage Areas
Create areas to monitor the presence of people there, using specific rules and schedules. Possible use cases: long queues prevention at retail outlets, theft prevention at enterprises during non-working hours, hazardous area control, work time tracking.
Analyze People
FindFace Multi provides a set of people-related analytical tools:
Analyze interactions. Examine a circle of people with whom a person has previously been in contact.
View ‘know your customer’ (KYC) analytics: data on the number of visitors, their gender, average age, most frequently visited zones, and the character of visits (first-timers or returners). Learn more.
FindFace Multi in Action
Automatically identify objects (faces, bodies, cars) in live video and check them against watch lists. Work with the event history by using various filters.
Harness the episodes. An episode is a set of identification events that feature objects of the same origin (face and body images of the same person and images of the same car) detected within a certain period. As events on the Events tab show up in an arbitrary order, a large number of miscellaneous events can make the work challenging and unproductive. With the Episodes, the system uses AI to organize incoming events based on the objects similarity and detection time. This allows for the effortless processing of diverse events, even in large numbers.
Search for objects in the database of detected objects and card index. Learn more.
Search archived videos for objects under monitoring.
Manually compare two objects and verify that they match.
Build detailed reports on object recognition events, episodes, search events, clusters, counters, cameras, card index, KYC analytics, audit logs, and areas.
Basic Maintenance
Go Further
Set up webhooks to automatically send notifications about specific events, episodes, counter records, and areas to a given URL. In this case, when such an event occurs, FindFace Multi will send an HTTP request to the URL configured for the webhook. You can use webhooks for various purposes, for example, to notify a user about a particular event, invoke required behavior on a target website, and solve security tasks such as automated access control. Learn more.
Harness the FindFace Multi functions through HTTP API.
Check out the list of our partner integrations.
Integrate an edge device.
See also
Web Interface Basics
Use the web interface to interact with FindFace Multi. To open the web interface, enter its basic address in the address bar of your browser, and log in.
Note
The basic address is set during deployment.
Important
To log in for the first time, use the admin account created during deployment. To create more users, refer to User Management.
Tip
Take your system security up a notch with face-based authentication.
Web Interface Language
You can change the system language on the … tab.

Note
By default, only English and Russian languages are available in the web interface. Other languages are provided by request (support@ntechlab.com) and added to the system via the /etc/findface-security/config.py
configuration file.
See also
Camera Management and Video Recording
To configure video-based object monitoring, add cameras to FindFace Multi, grouping them subject to their location.
Note
Privileges to create camera groups and cameras are managed in user’s permissions (see User Management).
In this chapter:
Create Camera Group
Tip
A default preconfigured camera group is available in the system.
To create a group of cameras, do the following:
Navigate to the Preferences tab. Click Camera Groups.
Click +.
On the Information tab, specify the group name. Add a comment if needed.
If you want to allocate a certain findface-video-worker instance to process video streams from the group, create or select one or several allocation labels.

Note

To complete the allocation, list the labels in the /etc/findface-video-worker-gpu.ini (/etc/findface-video-worker-cpu.ini) configuration file. See Allocate findface-video-worker to Camera Group for details.

If you want to deduplicate events from cameras that belong to the same group, i.e. exclude coinciding events, check Deduplicate Events and specify the deduplication interval (interval between 2 consecutive checks for event uniqueness).
Warning
Use deduplication with extreme caution. If cameras within a group observe different scenes, some objects may be skipped. See Deduplicate Events for details.
By default, video from all camera groups is processed using the generic confidence threshold. To set an individual threshold for the camera group, check Confidence Threshold and specify the threshold value.
Important
The default generic confidence threshold is optimal for the majority of recognition cases. We do not recommend changing it on your own. Be sure to consult with our technical experts prior (support@ntechlab.com).
Check Active.
Click Save.
On the Permissions tab, assign privileges on the camera group, specifying which user roles are allowed to change/view the camera group settings.
Click Save.
Add Camera
To add a camera, do the following:
Navigate to the Cameras tab.
Click +.
On the Settings -> Main tab, enter the general camera information:
Specify the camera name.
Add the camera to a camera group.
Specify the camera URL (Stream). If the camera is ONVIF, select it from the list of detected devices to automatically load available settings and streams.
Note
A camera object can also be used for integrating an edge device. Learn more.
If necessary, add a comment.
Check Active.
On the Settings -> Advanced tab, fine-tune the camera:
Check detectors that you want to enable on this camera: faces, bodies, cars.
If needed, change the video orientation.
Important
Be aware that the findface-security server rotates the video using post-processing tools. It can negatively affect performance. Rotate the video via the camera functionality wherever possible.

Timeout in ms: Specify the timeout in milliseconds for posting detected objects.
Retrieve timestamps from stream: Check to retrieve and post timestamps from the video stream. Uncheck the option to post the current date and time.
Add to timestamps: Add the specified number of seconds to timestamps from the stream.
FFMPEG format: Pass FFMPEG format (mxg, flv, etc.) if it cannot be detected automatically.
FFMPEG parameters: FFMPEG options for the video stream in the key-value format, for example, [“rtsp_transport=tcp”, “ss=00:20:00”].
Imotion threshold: Minimum motion intensity to be detected by the motion detector.
Verify SSL: Check to enable verification of the server SSL certificate when the object tracker posts objects to the server over https. Uncheck the option if you use a self-signed certificate.
(Optional) On the GEO tab, specify the camera geographical location.
On the Analytics tab, specify settings for each object type detector.
Minimum object snapshot quality: Minimum quality of an object image to detect. Do not change the default value without consulting with our technical experts (support@ntechlab.com).
Minimum object size: Minimum object size in pixels to post.
Maximum object size: Maximum object size in pixels to post.
Compression quality: Full frame compression quality.
Offline mode: Offline mode. Enable posting one snapshot of the best quality per entire track for each object.
Time interval: Time interval in seconds (integer or decimal) within which the object tracker picks up the best snapshot in the real-time mode.
Post first object immediately: Check to post the first object snapshot from a track immediately after it passes through the quality, size, and ROI filters, without waiting for the first Time interval to complete. The way the subsequent snapshots are posted will depend on the Post best snapshot value. Uncheck the option to post the first object snapshot only after the first Time interval completes.
Post best snapshot: Check to post the best snapshot obtained within each Time interval in the real-time mode, regardless of its quality. Uncheck the option to post the best snapshot only if its quality has improved compared to the previously posted snapshot.
Click Apply changes.
Specify the region of tracking within the camera field (General -> Region of Tracking) and detection zones (Regions of Interest) for each object type detector if necessary. Click Apply changes.
Monitor Camera Operation
To monitor the operation of cameras, navigate to the Cameras tab.
Camera statuses:
Green: the video stream is being processed without errors.
Yellow: the video stream is being processed for less than 30 seconds, or one or more errors occurred when posting an object.
Red: the video stream cannot be processed.
Grey: camera disabled.
Tip
You can configure the yellow and red statuses based on the portion of dropped frames and failed object postings. To do so, modify the following parameters in the /etc/findface-security/config.py
configuration file:
sudo vi /etc/findface-security/config.py
FFSECURITY = {
...
# max camera frames_dropped percent
'MAX_CAMERA_DROPPED_FRAMES': {'yellow': 0.1, 'red': 0.3},
# max camera objects_failed percent
'MAX_CAMERA_FAILED_FACES': {'yellow': 0.1, 'red': 0.3},
...
}
Each created camera is associated with a so-called job, a video processing task that contains configuration settings and stream data and is assigned to findface-video-worker. This task can be restarted.
To restart a job, click in the Action column. In this case, the number of errors will be reset to 0.
With a large number of cameras in the system, use the following filters:
Camera groups,
Active,
Status.
For each camera, you will be provided with complete statistics such as current session duration, the number of successfully posted objects, the number of objects processed with errors after the last job restart, the number of frame drops, and other data. To consult these data, click the camera and go to the Camera Info tab.
Video Recorder
Enable Video Recording
If Video Recorder is deployed and configured, you will see the Enable recording checkbox appear in the main settings of cameras. Check it to enable recording video chunks from a camera and sending them over to Video Recorder for further processing.
View Camera Video in Video Player
Under specific settings, clicking a camera preview opens not a static frame but a video player streaming live video from the camera.
The video player has a highly intuitive design. It provides the following possibilities:
Stream live video from the camera.
Watch video chunks recorded from the camera. They will be marked in purple on the timeline. To quickly switch from archived video to the camera live stream, click the Live button.
Visual indication of moments corresponding to face or car events (if enabled) as colorful markers on the timeline. The unmatched events are marked in pink, while the matched ones are light green.
Note
This feature is optional and must be pre-configured. See Configure Video Recorder.
Important
The time on the timeline is displayed in the spectator’s time zone. For example, if an event happened at 2 p.m. in Abu Dhabi, its marker would be at 12 p.m. for a spectator in Paris.
Navigate backward and forward along the timeline, with the possibility of zooming it in and out.
Tip
You can navigate along the timeline by using the < / > buttons, or by moving the cursor along the timeline with the right mouse button held down.
Tip
To zoom in and out, use the + / - buttons, or the mouse wheel.
Export selected clips.
Set Up Object Monitoring and Card Index
FindFace Multi provides video monitoring of the following objects:
human face
human body (silhouette)
car/vehicle
Object monitoring is implemented using a set of default and custom watch lists, e.g., wanted, VIP, etc., and a card index. You can create as many custom watch lists as necessary.
The card index contains two types of cards:
person card: aggregates data about a person, such as photos of their face and body, their name, and associated files.
car card: contains car-related data, such as car photos and license plate number information.
Each card is allocated to one or several watch lists. To put a person/car on video monitoring, you need to make a relevant watch list active.
To create cards in bulk, use the batch photo upload functionality.
In this section:
Monitoring Unmatched Objects
FindFace Multi features a special pre-configured watch list used for monitoring only unmatched objects (objects that do not match any card). This watch list cannot be removed from the system. To edit its settings, navigate to the Preferences tab. Click Watch Lists and then click Unmatched in the table.
Create Watch List
You can create a custom watch list. Do the following:
Navigate to the Preferences tab. Click Watch Lists.
Click +.
From the Label palette, select a color which will be shown in event notifications for this list. Keep in mind that the right color makes for a quicker response from the person on duty.
Specify the watch list name. Add a comment if needed.
Select the camera group(s) that will be used to monitor the watch list. If no groups are specified, the watch list will be monitored by all active cameras in the system.
Check Require acknowledgment if it is mandatory that events associated with the list be manually acknowledged.
Check Enable sound alert to turn on sound notifications for the list if needed.
By default, all watch lists in the system are applied the generic confidence threshold. To set an individual threshold for the watch list, check Confidence Threshold and specify the threshold value.
Important
The default generic confidence threshold is optimal for the majority of recognition cases. We do not recommend changing it on your own. Be sure to consult our technical experts (support@ntechlab.com) beforehand.
Check Active.
Click Save.
On the Permissions tab, assign privileges on the watch list, specifying which user roles are allowed to change/view the watch list settings.
Click Save.
Create Person Card
To create a person card manually, do the following:
Important
To be able to attach body images to a person card, you must enable the body detection functionality.
Navigate to the Cards tab.
Click Humans.
Click +.
Attach images of the person’s face/body. Supported formats: WEBP, JPG, BMP, PNG.
Important
A face/body in the images must be close to a frontal position. Distance between pupils in the face image: 60 px. Images that do not meet the requirements will be rejected with a detailed error description.
Attach related files.
Specify the card name. If necessary, add a comment.
From the Watch lists drop-down menu, select a watch list for the card (or several watch lists, one by one).
Check Active. If a card is inactive, it is excluded from the monitoring.
Click Save.
Note
If a similar card already exists in the card index, you will be given the following options:
add objects from the new card to the already existing one and cancel the new card creation
create the new card anyway
cancel creation
Create Car Card
Important
For car cards to be available for creation, you must enable the car detection functionality.
To create a car card manually, do the following:
Navigate to the Cards tab.
Click Cars.
Click +.
Attach the car images. Supported formats: WEBP, JPG, BMP, PNG.
Attach related files.
Specify the card name. If necessary, add a comment.
Specify the car’s license plate number if applicable.
From the Watch lists drop-down menu, select a watch list for the card (or several lists, one by one).
Check Active. If a card is inactive, it is excluded from the monitoring.
Click Save.
Note
If a similar card already exists in the card index, you will be given the following options:
add objects from the new card to the already existing one and cancel the new card creation
create the new card anyway
cancel creation
View Matched Clusters
Important
To be able to view matched clusters, you must enable clustering of relevant object types.
In the course of operation, the system may supplement cards with matched clusters.
To view a cluster matched with an exact person or car, open the corresponding card and navigate to the Cluster images tab.
In a person card, you will see a matched face cluster and a body cluster. A relevant car cluster will be shown in a car card.
Card Relations
You can set up relations between cards by creating a common category they belong to. For example, the Household category can include all family members and their vehicles.
A card can be related to as many categories as needed.
To set up a card-to-category relation, do the following:
Open the card for editing by clicking it on the list. Navigate to the Connections tab.
Click Create connection.
In the Relation name field, specify the category name. If there are already existing matching categories in the system, you will be able to see them and select one of them shortly after you start typing the name.
In the Link name field, describe the link between the card and the category.
If there is another card related to the same category, you can promptly add it right here. Do the following:
Check Connect with another card.
Select the card type: Person or Car.
Start typing the card name in the field of the same name. As a result, the system will automatically suggest cards with relevant names. Select one of them.
In the Link name field, describe the link between the card and the category.
Click Save. The created links will be shown on the Connections tab and on the card list.
Note
If there are several cards related to the same category, repeat these steps for all the cards involved.
Batch Card Upload
To create cards in bulk, use the batch card upload. Do the following:
Tip
If the number of photos you need to upload exceeds 10,000, use the console bulk upload. In the current version, it does not support cars and bodies, only faces.
Important
Supported formats: WEBP, JPG, BMP, PNG.
Important
A face/body in the images must be close to a frontal position. Distance between pupils in face images: 60 px. Images that do not meet the requirements will be rejected with a detailed error description.
Navigate to the Cards tab.
Click Humans or Cars, subject to the card type being uploaded.
Click +.
Click Try batch card upload.
For person cards, specify the objects to detect: faces or bodies.
Select multiple image files, or a directory.
You can use image file names as a basis for names and/or comments in cards to be created. Select the necessary option(s). Then configure the automatic name/comment generation rule by appending a custom prefix and/or postfix to the file name.
Tip
To avoid merging the prefix, the file name, and the postfix into a single word, use an underscore or another separator in the prefix and postfix.
From the Watch lists drop-down menu, select a watch list for the cards.
Use the Parallel Upload option to specify the number of photo upload streams. The more streams you use, the faster the upload completes; however, it also requires more resources.
From the Group Photo drop-down menu, select the system behavior upon detecting several objects in a photo: reject the photo, upload the biggest object, or upload all objects.
Click Start to launch the photo upload.
Filter Cards
You can find all cards created in FindFace Multi on the Cards tab. Use the following filters to narrow down the card list:
On the Humans tab (people cards):
Card name: filter cards by name.
Watch lists: display cards included into selected watch lists.
Faces: filter cards by presence of a face image in a card attachment.
Bodies: filter cards by presence of a body image in a card attachment.
Filled: display cards subject to whether they are auto-generated (not filled) or completed (filled) cards. See Auto-Generated and Completed Cards for a detailed explanation.
ID: display a card with a given ID.
On the Cars tab (car cards):
Card name: filter cards by name.
License plate: filter cards by license plate number.
Watch lists: display cards included into selected watch lists.
Cars: filter cards by presence of a car image in a card attachment.
Filled: display cards subject to whether they are auto-generated (not filled) or completed (filled) cards.
ID: display a card with a given ID.
You can sort the cards in the list by ID.
Purge Card Index
You can purge the entire card index in one click. To do so, navigate to the Preferences tab. Click Watch Lists. Click Delete All Cards.
Disable Event Creation for Specific Objects
Sometimes, it is necessary to omit certain objects during monitoring. One of the most common cases is objects in advertisement media located in the camera's field of view. Being detected continuously by your system, they can easily overflow the event feed and the database.
To prevent this from happening, do the following:
Create a watch list that will store the objects excluded from detection. In its settings, check Do not create events.
For each excluded object, create a card and add it to the watch list.
Auto-Generated and Completed Cards
Depending on the way a card is created, there are auto-generated (“auto”) cards and completed cards.
Auto cards
Auto cards are created automatically by the system for clusters that have no matches in the card index.
Note
By default, the creation of auto cards is disabled. To enable it, set 'CREATE_AUTO_CARDS': True in /etc/findface-security/config.py and restart findface-security.
sudo vi /etc/findface-security/config.py
# create auto cards for unmatched clusters
'CREATE_AUTO_CARDS': True,
sudo systemctl restart findface-security.service
There are no uploaded images on the General tab in the auto cards, so such cards are also called blank.
The following filters do not apply to the blank cards:
Faces -> Only with faces
Bodies -> Only with bodies
Cars -> Only with cars
Completed cards
Unlike auto cards, completed cards have uploaded images of a person’s face and body or a car image.
To separate auto cards and completed cards in the card index, use the Filled filter.
To turn an auto card into a completed card, upload the same object’s image to it and modify the card name.
Face, Body, Car Counters. Distance Measurement
Important
To be able to count human bodies (silhouettes) or cars, you first have to enable body detection or car detection.
FindFace Multi allows you to count faces, bodies and cars on connected cameras. This functionality can apply to a wide range of situations, such as people counting in queues and waiting areas, monitoring public gatherings, crowding prevention, traffic jam detection, and more.
The counting method is based on time slices, which means that the system counts faces, bodies and cars in static screenshots taken at a given count interval.
You can opt for counting faces/bodies/cars either on each camera independently, or collectively on all selected cameras.
In this section:
Create Counter
To set up a counter, do the following:
Navigate to the Counters tab.
Click +.
Specify the counter name.
Select one or several camera groups for counting.
Tip
By default, counting will apply to all cameras within the groups. If necessary, you can disable counting on specific cameras.
Specify the interval in seconds between two consecutive screenshots used for counting.
Check Detect Faces to count faces.
Check Detect Bodies to count bodies. Body detection has to be enabled.
Check Detect Cars to count cars. Car detection has to be enabled.
Check Distance measurement to measure the distance between bodies. This option becomes available only if the Detect Bodies box is checked.
Make sure that the counter is Active.
Click Save.
(Optional) Navigate to the Region of Interest tab to specify the face/body/car tracking region within the camera(s) field.
Calibrate Counter for Distance Measurement
If Distance measurement is enabled, you should perform the counter calibration.
To calibrate the counter, do the following:
Ask a person to stand in front of the selected camera at full height.
Ask the person to walk through the area visible by the camera, along which the distances will be calculated. To achieve the best quality, the person must walk on a flat surface.
Enter the counter calibration timeframe (from 15s to 300s) and click Calibrate counter.
If you need to calibrate the counter for several cameras, perform steps 1-3 for each camera selected in the Cameras field.
If you are unsatisfied with the result, click Delete calibration and perform the calibration once more.
Counter Chart
To see the counter chart for the last hour, 24 hours, or week, navigate to the Chart tab in the counter settings.
Monitor Counter Operation
To monitor the operation of counters, navigate to the Counters tab.
Counter statuses:
Green: the counter is running without errors, or the number of errors doesn't exceed the acceptable threshold.
Yellow: the number of errors exceeds the threshold.
Red: the number of errors is critical.
Grey: the counter is disabled.
Tip
You can configure the yellow and red statuses based on the portion of failed counter records and change the time window duration between two consecutive checks of the counter health status. To do so, modify the following parameters in the /etc/findface-security/config.py
configuration file:
sudo vi /etc/findface-security/config.py
FFSECURITY = {
...
# Counter health status config:
# max percent of camera records with errors
'MAX_COUNTER_ERROR_RECORDS': {'yellow': 0.3, 'red': 0.5},
# time window for computing health status (in seconds)
'COUNTER_HEALTH_STATUS_TIME_WINDOW': 30,
...
}
Work with Counter Records
Static screenshots taken by a counter, with the number of faces, bodies and cars in them, are saved as counter records. If you have enabled the distance measurement, each record will also contain a minimum, average, and maximum detected distance in meters.
If the counter is running with errors, the system will create blank records with an error message.
To see the counter records, navigate to the Counters tab. Click any column for the counter, except ID (clicking the ID leads to the counter settings).
You can scale the screenshots by clicking on them. If distance measurement is enabled, the screenshot will contain boxes around bodies and the distance between them.
To work with counter records, use the following filters:
Counter
Cameras
Camera groups
Time
Number of faces in record
Number of bodies in record
Number of cars in record
Record id
Minimum distance
Maximum distance
Average distance
Set Webhook for Counter
To take it up a notch, configure a webhook for counter records with a specific number of faces, bodies and cars.
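If you want to prototype the receiving side, below is a minimal sketch of a generic endpoint, assuming the webhook delivers counter records as a JSON payload over HTTP POST; the actual payload structure and webhook configuration are described in the webhooks documentation, and the port 8888 is an arbitrary example.
# A minimal sketch of a generic webhook receiver (assumption: JSON over HTTP POST).
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

class CounterWebhookHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        body = self.rfile.read(length)
        payload = json.loads(body) if body else {}
        # React to the delivered record(s) here, e.g. log them or trigger an alert.
        print("Counter webhook received:", payload)
        self.send_response(200)
        self.end_headers()

if __name__ == "__main__":
    # Port 8888 is an illustrative value; register http://<host>:8888/ as the webhook URL.
    HTTPServer(("0.0.0.0", 8888), CounterWebhookHandler).serve_forever()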
Configure Counters
To configure counters, open the /etc/findface-security/config.py
configuration file and modify the following parameters:
COUNTERS_DEDUP_OPTIONS: a set of options that help avoid object duplication in multi-camera counters, i.e., the situation when the same object is counted on several cameras at the same time, leading to an overly increased counting result. Two options are available for each object type (face, body, car):
enabled: enables object deduplication.
threshold: defines the minimum level of similarity between objects for the system to consider them duplicates.
COUNTERS_SAVE_FULLFRAME: determines the saving options of full frames in counters: always, detect (only save if faces, bodies, or cars have been detected), never.
COUNTERS_FULLFRAME_JPEG_QUALITY: JPEG quality of full frames.
COUNTERS_THUMBNAIL_JPEG_QUALITY: JPEG quality of thumbnails.
COUNTERS_ROI_INTERSECTION_THRESHOLD: required percentage of intersection between the rectangle around the object (a.k.a. bbox) and the region of interest defined for the counter.
sudo vi /etc/findface-security/config.py
# deduplicate objects on multi-camera counters
'COUNTERS_DEDUP_OPTIONS': {
'face': {
'enabled': False,
'threshold': 0.723, # model: [kiwi_320]
},
'body': {
'enabled': False,
'threshold': 0.65, # model: [andariel]
},
'car': {
'enabled': False,
'threshold': 0.65, # model: [alonso]
},
},
# counters full frame saving options:
# `always` - save always
# `detect` - save only if faces or bodies have been detected
# `never` - never save full frames
'COUNTERS_SAVE_FULLFRAME': 'always',
'COUNTERS_FULLFRAME_JPEG_QUALITY': 75,
'COUNTERS_THUMBNAIL_JPEG_QUALITY': 75,
# required percentage of bbox intersection with ROI
'COUNTERS_ROI_INTERSECTION_THRESHOLD': 0.75,
...
Be sure to restart the findface-security
service after making changes.
sudo systemctl restart findface-security.service
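To illustrate what COUNTERS_ROI_INTERSECTION_THRESHOLD measures, here is a conceptual sketch (not FindFace code) that computes the fraction of an object's bbox falling inside a rectangular region of interest; with the default threshold of 0.75, an object would be counted only if at least three quarters of its bbox lies inside the ROI. The rectangle coordinates are hypothetical pixel values.
# Conceptual illustration only: fraction of a bbox inside a rectangular ROI.
def bbox_roi_intersection_ratio(bbox, roi):
    """bbox and roi are (x1, y1, x2, y2) rectangles in pixels."""
    ix1, iy1 = max(bbox[0], roi[0]), max(bbox[1], roi[1])
    ix2, iy2 = min(bbox[2], roi[2]), min(bbox[3], roi[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    bbox_area = (bbox[2] - bbox[0]) * (bbox[3] - bbox[1])
    return inter / bbox_area if bbox_area else 0.0

# A bbox three quarters inside the ROI just passes a 0.75 threshold.
print(bbox_roi_intersection_ratio((0, 0, 100, 100), (25, 0, 200, 200)))  # 0.75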
Area Management
Important
To be able to use the Areas functionality, you first have to enable body detection.
FindFace Multi allows you to monitor the presence of people on cameras in given areas, using rules and schedules. Once the system detects that the situation in an area fits the rule, it creates a so-called area activation.
The Areas functionality is an excellent tool for solving such problems as long queues prevention at retail outlets, theft prevention at enterprises during non-working hours, hazardous area control, work time tracking, and many others.
In this section:
Create Area
To create an area, do the following:
Navigate to the Areas tab.
Click +.
Specify the area name.
Select one or several cameras to monitor the area.
Tip
You can create multiple areas on the same camera to monitor the physical area using different rules and schedules.
Specify the region of interest within the camera(s) field.
Configure the area activation trigger (rule):
Less than: the area will be activated if the number of people has been less than the threshold value during the specified period.
More than: the area will be activated if the number of people has been more than the threshold value during the specified period.
Number of people: threshold number of people.
Duration: minimum period of trigger duration in seconds to activate the area. During this period, the number of people must remain less/more than the threshold value.
Note
For example, the rule More than 5 with Duration 30 applied to a cashier desk results in the area activation if more than five customers have waited longer than 30 seconds in line.
Schedule: create one or several monitoring schedules. Specify the day(s) of the week and the period during which the area will be under monitoring. If no schedule is specified, the area will be permanently monitored.
Note
If a situation in the area continues to fit the activation rule after the scheduled monitoring period, the area will remain activated until this situation is over.
Click Save.
Edit Area
To open area settings for editing, click on the ID column in the area list.
Work with Area Activations
Once the situation in an area matches the triggering rule, the system creates an area activation. To see the list of area activations, navigate to the Areas tab. There, click any column for the area, except its ID (clicking the ID leads to the area settings).
Apply the following filters to the activation list if needed:
The number of people detected in the area (from/to)
Trigger duration (from/to)
Creation date and time of an activation (from/till)
Click an activation to view its records. The records are screenshots from the area-monitoring camera, with the detected number of people and the timestamp.
Note
The records are taken at a 10-second interval throughout the entire trigger duration.
Filter the record list using the following parameters:
The number of people in the record (from/to)
Record timestamp (from/till)
Work Time Reports on Areas
See Reports.
Set Webhook for Area
To take it up a notch and automate your work with an area, configure a webhook.
Configure Areas
You can configure how the screenshots (full frames) that you see in area activation records, and their thumbnails, are saved in the system. You can also enable body deduplication in multi-camera areas to avoid the situation when the same person is counted on several cameras at the same time. Such a situation can lead to an inflated count and, as a result, to a false area activation or a missing one.
To configure areas, open the /etc/findface-security/config.py
configuration file and modify the following parameters:
AREA_DEDUP_OPTIONS: a set of options that help avoid body duplication in multi-camera areas. Two options are available:
enabled: enables body deduplication.
threshold: defines the minimum level of similarity between bodies for the system to consider them duplicates.
AREAS_SAVE_FULLFRAME: determines the saving options of full frames in activation records: always, never.
AREAS_THUMBNAIL_JPEG_QUALITY: JPEG quality of thumbnails.
AREAS_FULLFRAME_JPEG_QUALITY: JPEG quality of full frames.
sudo vi /etc/findface-security/config.py
...
# deduplicate objects on multi-camera areas
'AREA_DEDUP_OPTIONS': {
'enabled': False,
'threshold': 0.65, # model: [andariel]
},
# areas full frame saving options:
# `always` - save always
# `never` - never save full frames
'AREAS_SAVE_FULLFRAME': 'always',
'AREAS_THUMBNAIL_JPEG_QUALITY': 75,
'AREAS_FULLFRAME_JPEG_QUALITY': 75,
...
Be sure to restart the findface-security
service after making changes.
sudo systemctl restart findface-security.service
Events and Episodes of Object Recognition
To monitor the real-time object identification in live videos, use the Events and Episodes tabs. Besides monitoring, both tabs allow you to access the history of identification events.
Tip
Search for objects through the event database and card index on the Search tab.
Tip
To perform the object identification in archived videos, see Object Identification in Offline Videos.
Work with Events
This section is about the Events tab.
Important
You can enable sound notifications for events related to specific watch lists. In some browsers, the tab with events has to remain in focus for the sound to play. To put a tab in focus, open it, and click anywhere on the page.
In this chapter:
View Events
Once an object is detected, you will see a notification on one of the event lists: Faces, Bodies, or Cars, subject to the object type.
A notification can feature different pieces of information, depending on whether a detected object has a match in the card index:
Match not found: the normalized object image, detection date and time, camera group name.
Match found: the normalized object image, reference object photo from the related card, name, the similarity between the matched objects, comment from the card, watch list, detection date and time, camera group.
Important
In order to pause the notifications thread, click above the list of events.
When working with events, the following filters may come in handy:
Note
Some filters from the list below may be hidden, subject to enabled recognition features.
Card: display events only for a selected card.
Watch lists: display events only for a selected watch list.
Matches: display events only with/without matches, or all events.
Acknowledged: display only acknowledged/unacknowledged events, or all events.
Cameras: display only events from a selected camera.
Camera groups: display only events from a selected group of cameras.
Start, End: display only events that occurred within a certain period.
Video Archive ID: display events from the video archive with a given ID.
Event ID: display an event with a given ID.
Episode ID: display events from the episode with a given ID.
Limit: display a given number of most relevant events.
Specific filters for faces
Event best shot: display all events of a track, only events with real-time snapshots, or only one event with the best snapshot at the end of a track.
Age: display events with people of a given age.
Gender: display events with people of a given gender.
Emotions: display events with given emotions.
Glasses: filter events by the fact of wearing glasses.
Beard: filter events by the fact of having a beard.
Liveness: filter events by face liveness.
Face mask: filter events by the fact of wearing a face mask.
Specific filters for bodies
Headwear: display only events with a person wearing headgear of a given type: hat/cap, hood/headscarf, none.
Upper body clothes: display only events with a person wearing upper body wear of a given generalized category: long sleeves, short sleeves, no sleeve.
Upper clothes type: display only events with a person wearing upper body wear of a given specific type: jacket, coat, sleeveless vest, sweatshirt, T-shirt, shirt, dress.
Upper clothes color: display only events with a person wearing a top of a given color.
Lower body clothes: display only events with a person wearing lower body wear of a given type: pants, skirt, shorts, obscured.
Lower clothes color: display only events with a person wearing a bottom of a given color.
Specific filters for cars
Car body style: display only events with cars of a given body style.
Car color: display only events with cars of a given color.
Country of the license plate: display only events with cars registered in a given country.
Number plate: display an event with a given plate number.
Region of the license plate: display only events with cars registered in a given region.
Make: filter car events by car make.
Model: filter car events by car model.
Special vehicle: display only events with vehicles belonging to a given type: police, ambulance, rescue service, taxi, other.
Event Ticket. Acknowledging Event
In order to navigate to an event ticket from the list of events, click on the recognition result in a notification (No matches or the name from the corresponding card).
An event ticket contains the same data as a relevant notification. It also allows for acknowledging the event. To do so, click Not accepted to change the event acknowledgment status. Click Save.
Tip
If a detected object has a match in the card index, you can navigate to the relevant card by clicking on the name in the event ticket.
Tip
In order to acknowledge all events, click above the list of events.
Note
Event acknowledgment can be automated for selected watch lists.
Event Ticket. Object Search
FindFace Multi allows you to search detected objects through the list of events and card index. To navigate from an event ticket to the search tab, click Events or Cards respectively.
See also
Work Time Reports on Events
See Reports.
Video Player in Event Notification
Important
Only with Video Recorder.
If Video Recorder is deployed and properly configured, clicking on an event's thumbnail plays the event video in the video player that appears.
The video player has a highly intuitive design. See View Camera Video in Video Player for details.
Organize Events with Episodes
This section is about the Episodes tab.
See also
An episode is a set of identification events that feature objects of the same origin (face and body images of the same person and images of the same car) detected within a specific period. As events on the Events tab show up in an arbitrary order, a large number of miscellaneous events can make the work difficult and unproductive. With the episodes, the system uses AI to organize incoming events based on the objects’ similarity and detection time. This allows for easy processing of diverse events, even in large numbers.
In this chapter:
About Episodes
There are two stages of an episode lifecycle:
LIVE: an episode is currently active, with more events to be possibly added.
Closed: an episode is closed, no events can be added.
Episodes are classified as people episodes and car episodes. People episodes feature face and body images of people, detected within a specific period, while car episodes contain car images.
Grant Rights for Episodes
A user receives a notification of a new episode if they have rights for the first event. Viewing new events in the episode also requires proper rights.
The right for an event consists of the rights for a corresponding camera and watch list.
Note
To see unmatched events, you only need the rights for a camera.
To manage the rights of a role for the entire Episode entity, open the permissions for this role and adjust the following permissions:
humanepisode: people episodes
carepisode: car episodes
Tip
See User Management.
View Episodes
You can find the list of episodes with filters and statistics on the Episodes tab. Episodes are sorted subject to the object’s origin: there are people episodes and car episodes.
Once an object is detected, it is either added to an existing LIVE episode or used as a starting point for a new episode. Each episode is assigned an identifier which can be later used to filter events and episodes.
When working with the episodes, the following default filters may come in handy:
Note
Some filters from the list below may be hidden, subject to enabled recognition features.
Card: display episodes only for a selected card.
Watch lists: display episodes only for a selected watch list.
Matches: display episodes only with/without matches, or all episodes.
Acknowledged: display only acknowledged/unacknowledged episodes, or all episodes.
Cameras: display only episodes from a selected camera.
Camera groups: display only episodes from a selected group of cameras.
Start, End: display only episodes that occurred within a certain period.
Count from: display only episodes with a given number of events.
Video Archive ID: display episodes related to the video archive with a given ID.
Episode ID: display an episode with a given ID.
Specific filters for people episodes
Age: display episodes with people of a given age.
Gender: display episodes with people of a given gender.
Emotions: display episodes with given emotions.
Glasses: filter episodes by the fact of wearing glasses.
Beard: filter episodes by the fact of having a beard.
Liveness: filter episodes by face liveness.
Face mask: filter episodes by the fact of wearing a face mask.
Headwear: display only episodes with a person wearing headgear of a given type: hat/cap, hood/headscarf, none.
Upper body clothes: display only episodes with a person wearing upper body wear of a given generalized category: long sleeves, short sleeves, no sleeve.
Upper clothes type: display only episodes with a person wearing upper body wear of a given specific type: jacket, coat, sleeveless vest, sweatshirt, T-shirt, shirt, dress.
Upper clothes color: display only episodes with a person wearing a top of a given color.
Lower body clothes: display only episodes with a person wearing lower body wear of a given type: pants, skirt, shorts, obscured.
Lower clothes color: display only episodes with a person wearing a bottom of a given color.
Specific filters for car episodes
Car body style: display only episodes with cars of a given body style.
Car color: display only episodes with cars of a given color.
Country of the license plate: display only episodes with cars registered in a given country.
Number plate: display an episode with a given plate number.
Region of the license plate: display only episodes with cars registered in a given region.
Make: filter car episodes by car make.
Model: filter car episodes by car model.
Special vehicle: display only episodes with vehicles belonging to a given type: police, ambulance, rescue service, taxi, other.
To view episode events, click the episode on the list. You will be redirected to the Events tab with the corresponding episode ID set in the Episode filter:
Work with the Events tab as described in Work with Events.
Event and Episode Acknowledging
To acknowledge an entire episode, click Not accepted for this episode on the list. As a result, all events in the episode will be automatically acknowledged, including those that are yet-to-appear (in the case of a LIVE episode).
An episode is also automatically acknowledged after acknowledging all its events one by one.
Filter Events by Episode ID
To display events by episode ID, either use the id filter on the Episodes tab or the Episode ID filter on the Events tab.
Episode Settings
To configure the episodes, use the /etc/findface-security/config.py
configuration file. Find the following parameters in the FFSECURITY
section:
FACE_EPISODES_THRESHOLD, BODY_EPISODES_THRESHOLD, CAR_EPISODES_THRESHOLD: similarity thresholds that the system uses when searching for recent events in order to construct an episode. The default values are set to the optimum. If necessary, you can change them; be sure to consult our technical experts (support@ntechlab.com) beforehand.
FACE_EPISODE_SEARCH_INTERVAL, BODY_EPISODE_SEARCH_INTERVAL, CAR_EPISODE_SEARCH_INTERVAL: the period preceding an event, for which the system searches the feature vector database for already existing events with similar objects. If no such event is found, the system creates a new episode. Otherwise, it sorts out the 100 most recent similar objects and picks the most relevant event from a LIVE episode.
EPISODE_MAX_DURATION: the maximum episode duration in seconds. After this time, an episode automatically closes.
EPISODE_EVENT_TIMEOUT: the maximum time in seconds since the last event was added to an episode. After this time, an episode automatically closes.
EPISODE_KEEP_ONLY_BEST_EVENT: when closing an episode, delete all events in it, except the one with the best object. Use this option to save disk space.
sudo vi /etc/findface-security/config.py
...
FFSECURITY = {
...
# episodes specific matching threshold that is used to join faces in an episode
'FACE_EPISODES_THRESHOLD': 0.688, # FAR = 1.00E-06 # model: [kiwi_320]
'BODY_EPISODES_THRESHOLD': 0.61, # model: [andariel]
'CAR_EPISODES_THRESHOLD': 0.61, # model: [alonso]
...
'EPISODE_KEEP_ONLY_BEST_EVENT': True,
...
# maximum event age in seconds than could be added to an episode.
'FACE_EPISODE_SEARCH_INTERVAL': 60,
'BODY_EPISODE_SEARCH_INTERVAL': 60,
'CAR_EPISODE_SEARCH_INTERVAL': 60,
# maximum episode duration (episode is closed after)
'EPISODE_MAX_DURATION': 300,
# if no new event added to an episode during this timeout, episode will be closed.
'EPISODE_EVENT_TIMEOUT': 30,
...
}
...
Object Identification in Offline Videos
Besides real-time object identification, FindFace Multi allows for offline video processing. This functionality has a wide range of possible applications, among which the most common case is object detection and recognition in archived videos.
In this chapter:
Video File Formats
FindFace Multi supports a wide variety of file formats depending on the acceleration type (CPU or GPU).
Both CPU- and GPU-accelerated instances support all FFMPEG codecs. In addition to that, the following codecs are supported:
CPU-based acceleration: flv (both as a codec and as a container), h263, h264, h265, mjpeg, vp8, vp9, mpeg1video, mpeg2video, msmpeg4v2, msmpeg4v3
GPU-based acceleration: mjpeg, h264, h265, vp9 and others, depending on the list of codecs supported by the used video card. Apart from that, for GPU-accelerated instances the CPU-based acceleration can be enabled, thus adding flv support, which is not available by default.
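If you are unsure which codec a particular file uses, you can check it with ffprobe, which ships with FFmpeg; the file path below is an example:
ffprobe -v error -select_streams v:0 -show_entries stream=codec_name -of default=noprint_wrappers=1:nokey=1 /path/to/video.mp4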
Process Video File
Important
Before the video processing, the following prep work is required:
Designate a camera group to which the system will attribute the object recognition events from the video. The Video archive default camera group is perfect for this task. You can also create a new camera group with basic settings specifically for this video file.
Assign the designated camera group to all watch lists you want to monitor when processing the video.
To identify objects in an offline video, do the following:
Create a video in FindFace Multi by uploading it from a file or online storage/cloud. To do so, navigate to the Videos tab.
Click +.
Specify the video name.
Specify a URL in online storage, or select a file.
Click Upload.
After the video is uploaded, navigate to the Parameters tab. Specify parameters of video processing:
Indicate the camera group you have designated prior. (Optional) Select a camera within that camera group to tag the object recognition events from this video more precisely.
Configure the timestamps for object recognition events.
On the Advanced tab, fine-tune the video processing:
Check detectors that you want to enable for this video: faces, bodies, cars & ALPR.
If needed, change the video orientation.
Timeout in ms: Specify the timeout in milliseconds for posting detected objects.
FFMPEG format: Pass FFMPEG format (mxg, flv, etc.) if it cannot be detected automatically.
FFMPEG parameters: FFMPEG options for the video in the key-value format, for example, ["rtsp_transport=tcp", "ss=00:20:00"].
Play speed limit: If less than zero, the speed is not limited. In other cases, the stream is read with the given play_speed.
Imotion threshold: Minimum motion intensity to be detected by the motion detector.
Verify SSL: Check to enable verification of the server SSL certificate when the object tracker posts objects to the server over https. Uncheck the option if you use a self-signed certificate.
On the Analytics tab, specify settings for each object type detector.
Minimum object snapshot quality: Minimum quality of an object snapshot to post. Do not change the default value without consulting with our technical experts (support@ntechlab.com).
Minimum object size: Minimum object size in pixels to post.
Maximum object size: Maximum object size in pixels to post.
Compression quality: Full frame compression quality.
Offline mode: Enable posting one snapshot of the best quality per entire track for each object, in addition to the sequence of snapshots taken per track in the real-time mode (enabled by default).
Time interval: Time interval in seconds (integer or decimal) within which the object tracker picks up the best snapshot in the real-time mode.
Post first object immediately: Check to post the first object from a track immediately after it passes through the quality, size, and ROI filters, without waiting for the first Time interval to complete. The way the subsequent snapshots are posted will depend on the Post best snapshot value. Uncheck the option to post the first object only after the first Time interval completes.
Post best snapshot: Check to post the best snapshot obtained within each Time interval in the real-time mode, regardless of its quality. Uncheck the option to post the best snapshot only if its quality has improved compared to the previously posted snapshot.
(Optional) On the Region of Tracking and Regions of Interest tabs, specify the region of tracking within the camera field and detection zones for each object type detector.
Navigate to the Processing tab. Click Process to start object identification.
You can view object identification events right here, as well as on the Events and Episodes tabs by filtering the list of events by the camera group/camera associated with the video.
Configure Offline Video Processing
By default, video files are processed in a queued mode to prevent event drops due to resource overconsumption. You can modify the default number of simultaneously processed video files. To do so, open the /etc/findface-security/config.py
configuration file and change the MAX_VIDEO_ARCHIVE_JOBS
parameter. Please contact our experts (support@ntechlab.com) beforehand to make sure your resources are sufficient.
sudo vi /etc/findface-security/config.py
...
FFSECURITY = {
...
# maximum concurrent video manager jobs for video archives processing
'MAX_VIDEO_ARCHIVE_JOBS': 3,
...
}
...
Be sure to restart the findface-security
service after making changes.
sudo systemctl restart findface-security.service
Search Objects in Databases
FindFace Multi allows you to search for objects in the following databases:
Database of detected objects: the Events tab.
Card index: the Cards tab.
To find an object in a database, do the following:
Navigate to the Search tab.
Select the object type you are searching for: Faces, Bodies, Cars.
Specify the database to search in: In events, In cards.
Specify the exact object to search for in one of the following ways:
By event ID.
By card ID. Should the card contain multiple photos, select some of them to use in the search.
By uploading a photo. It will be displayed in the Photo area. If there are multiple objects in the image, select the one of your interest.
Specify the maximum number of search results.
If you are searching in events, you can narrow down your search by specifying a camera, camera group, and time period of your interest.
Click Search. You will see the search results appear below. For each object found, the matching confidence level is provided.
Compare Two Objects
FindFace Multi allows you to compare two objects and verify that they match. Do the following:
Navigate to the Preferences tab. Click Verify. Choose the proper tab, depending on the type of objects you are going to compare: Faces, Bodies, Cars.
Specify the IDs of events that feature the objects you want to compare, and/or upload photos with the objects.
Tip
You can find event IDs on the Events tab.
Click Verify. You will see the probability that the objects match.
Face, Body, Car Clusters
FindFace Multi supports automatic clustering of objects of the same origin:
Face images belonging to the same person form a face cluster.
Body images belonging to the same person form a body cluster.
Images of the same car constitute a car cluster.
Aggregated cluster galleries of faces, bodies, and cars are available on the Clusters tab.
Note
If a face cluster or a body cluster matches with a person card, such a cluster will automatically appear in that card. Similarly, a car cluster will be saved to the corresponding car card.
Important
By default, the object clustering is disabled. Enable and configure it via the /etc/findface-security/config.py
configuration file.
In this section:
Clustering Algorithms
FindFace Multi uses the following object clustering algorithms:
Real-time clustering. This clustering algorithm processes episodes to select suitable object images and clusters the selected ones. It works on the fly after an episode is closed. The real-time clustering results are dynamically displayed on the Clusters tab and in the relevant card.
Not all episodes are used for clustering. If an episode meets all requirements (see the details below), the system forms a cluster in the following way:
Selects the best quality event.
Creates a new entity, cluster event, in the main system database PostgreSQL. The entity contains the selected event metadata, a link to the parent episode, the object feature vector, and a thumbnail.
Searches for a matching object centroid in the cluster_events gallery of the Tarantool feature vector database. An object centroid is a virtual feature vector averaged across all twin objects that have been detected so far (for example, a face centroid is a feature vector averaged across all face images of the same person); a conceptual sketch of this averaging follows the algorithm descriptions below. The system updates the matching centroid using the new event if such a centroid was found, or creates a new centroid otherwise.
Scheduled clustering. This clustering algorithm goes over and revises the cluster events created during the real-time clustering. It improves the cluster centroid quality, as the centroid is averaged across a more extensive array of accumulated feature vectors. The results of the scheduled clustering are displayed on the Clusters tab and in the relevant card after each scheduled iteration.
Use the RRULE format to define the schedule in the CLUSTERIZATION_SCHEDULE parameter at /etc/findface-security/config.py. Late-night hours are preferred, as the scheduled clustering takes up a lot of CPU resources and time.
Important
The scheduled clustering completely overwrites the cluster galleries, including IDs, unless you pin specific clusters by enabling the CLUSTERS_AUTO_PIN_HEURISTICS and PIN_MATCHED_CLUSTERS parameters (see below).
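The following is a conceptual sketch of centroid averaging only, using made-up 4-dimensional vectors; it is not the actual FindFace implementation, which operates on full-size feature vectors inside the Tarantool gallery.
# Conceptual sketch: a centroid as the L2-normalized mean of the feature
# vectors of all cluster events belonging to the same person (hypothetical data).
import math

def centroid(feature_vectors):
    dim = len(feature_vectors[0])
    mean = [sum(v[i] for v in feature_vectors) / len(feature_vectors) for i in range(dim)]
    norm = math.sqrt(sum(x * x for x in mean))
    return [x / norm for x in mean]

# Three hypothetical vectors of the same person produce one averaged centroid.
print(centroid([
    [0.10, 0.90, 0.20, 0.40],
    [0.20, 0.80, 0.10, 0.50],
    [0.15, 0.85, 0.25, 0.45],
]))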
Enable and Configure Clustering
By default, the clustering is disabled. To enable and configure it, do the following:
Open the /etc/findface-security/config.py configuration file. Find the Clusters configuration section.
sudo vi /etc/findface-security/config.py

# -- Clusters configuration --
'ENABLE_NIGHT_CLUSTERIZATION': True,
'ENABLE_REALTIME_CLUSTERIZATION': True,
# rrule (recurrence rule) for scheduling clusters clusterization
'CLUSTERIZATION_SCHEDULE': 'RRULE:FREQ=DAILY;INTERVAL=1;WKST=MO;BYHOUR=0;BYMINUTE=0',
# clusterize only selected objects types (for realtime and nightly clusterization)
# available are: face, body, car
'CLUSTERIZE_OBJECT_TYPES': ['face','body','car'],
# skip clusterization if unpinned cluster events count is greater than this value
'CLUSTERIZATION_MAX_CLUSTER_EVENTS': None,
# cluster event to cluster matching confidence threshold
'FACE_CLUSTER_CONFIDENCE_THRESHOLD': 0.723,  # model: [kiwi_320]
'BODY_CLUSTER_CONFIDENCE_THRESHOLD': 0.65,  # model: [andariel]
'CAR_CLUSTER_CONFIDENCE_THRESHOLD': 0.65,  # model: [alonso]
# minimum required event quality for cluster creation
'FACE_CLUSTER_EVENT_MIN_QUALITY': 0.45,  # model: [quality.v1]
'BODY_CLUSTER_EVENT_MIN_QUALITY': 0.6,  # model: [pedattr.quality.v0]
'CAR_CLUSTER_EVENT_MIN_QUALITY': 0.73,  # model: [carattr.quality.v0]
# minimum required number events in episode for cluster creation
'FACE_CLUSTER_EVENT_MIN_EPISODE_EVENTS': 1,
'BODY_CLUSTER_EVENT_MIN_EPISODE_EVENTS': 1,
'CAR_CLUSTER_EVENT_MIN_EPISODE_EVENTS': 1,
# coefficient of dependence of the clustering threshold on the person event's quality
'FACE_CLUSTER_SOFT_CLUSTERIZATION_COEFFICIENT': 0.1,
'BODY_CLUSTER_SOFT_CLUSTERIZATION_COEFFICIENT': 0.1,
'CAR_CLUSTER_SOFT_CLUSTERIZATION_COEFFICIENT': 0.1,
# match with clusters when no card objects matches found
'MATCH_CLUSTERS': False,
# create auto cards for unmatched clusters
'CREATE_AUTO_CARDS': False,
# pinned clusters keep their id and events after reclusterization
'CLUSTERS_AUTO_PIN_HEURISTICS': {
    'face': {
        # pin clusters with `value` minimum cluster events
        'min_events': {'enabled': True, 'value': 10},
        # cluster's centroid similarity confidence is less then
        'max_centroid_similarity_threshold': {'enabled': True, 'value': 0.55},
        # minimum average event's quality
        'min_average_events_quality': {'enabled': True, 'value': 0.45},
    },
    'body': {},
    'car': {},
},
# always pin clusters with matched events (not affected by heuristics above)
'PIN_MATCHED_CLUSTERS': False,
Enable the real-time clustering by setting 'ENABLE_REALTIME_CLUSTERIZATION': True.
If necessary, enable the scheduled clustering by setting 'ENABLE_NIGHT_CLUSTERIZATION': True.
Important
Enabling the scheduled clustering only makes sense if the real-time clustering is enabled. Otherwise, the system won't form any new clusters, since only the real-time clustering produces unique cluster events.
...
# -- Clusters configuration --
'ENABLE_NIGHT_CLUSTERIZATION': True,
'ENABLE_REALTIME_CLUSTERIZATION': True,
...
If necessary, specify a recurrence rule (RRULE) for the scheduled clustering. If the RRULE is not specified, the clustering automatically starts at 00:00 GMT.
Tip
See the RRULE calculator here.
# rrule (recurrence rule) for scheduling clusters clusterization
'CLUSTERIZATION_SCHEDULE': 'RRULE:FREQ=DAILY;INTERVAL=1;WKST=MO;BYHOUR=0;BYMINUTE=0',
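For instance, assuming you want the scheduled clustering to run on weekday nights at 02:30 server time, the rule could look as follows (an illustrative value, not a recommended default):
'CLUSTERIZATION_SCHEDULE': 'RRULE:FREQ=WEEKLY;WKST=MO;BYDAY=MO,TU,WE,TH,FR;BYHOUR=2;BYMINUTE=30',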
By default, the system forms only face clusters. To enable cluster formation of bodies and cars, add relevant object types to the following line:
# available are: face, body, car
'CLUSTERIZE_OBJECT_TYPES': ['face','body','car'],
If necessary, modify the minimum number of events in the episodes used for clustering. It’s 1 by default. Do so separately for each object type.
# minimum required number events in episode for cluster creation
'FACE_CLUSTER_EVENT_MIN_EPISODE_EVENTS': 3,
'BODY_CLUSTER_EVENT_MIN_EPISODE_EVENTS': 3,
'CAR_CLUSTER_EVENT_MIN_EPISODE_EVENTS': 2,
If necessary, modify the minimum quality of the object images used for clustering. Do so separately for each object type.
Note
As this setting requires a high level of expertise and knowledge, we highly recommend consulting our technical experts beforehand.
# minimum required event quality for cluster creation
'FACE_CLUSTER_EVENT_MIN_QUALITY': 0.45,  # model: [quality.v1]
'BODY_CLUSTER_EVENT_MIN_QUALITY': 0.6,  # model: [pedattr.quality.v0]
'CAR_CLUSTER_EVENT_MIN_QUALITY': 0.73,  # model: [carattr.quality.v0]
If necessary, modify the confidence threshold for matching a cluster event and a cluster.
Warning
Be sure to consult our experts at support@ntechlab.com before altering this parameter.
# cluster event to cluster matching confidence threshold
'FACE_CLUSTER_CONFIDENCE_THRESHOLD': 0.723,  # model: [kiwi_320]
'BODY_CLUSTER_CONFIDENCE_THRESHOLD': 0.65,  # model: [andariel]
'CAR_CLUSTER_CONFIDENCE_THRESHOLD': 0.65,  # model: [alonso]
After an object is detected in the video, the system searches for its matches through the card index. Set 'MATCH_CLUSTERS': True to enable the system to also search for matches through the cluster gallery if it didn't find a matching card in the card index. In this case, the system will notify you whenever an object matches a previously created cluster.
# match with clusters when no card objects matches found
'MATCH_CLUSTERS': True,
Set 'CREATE_AUTO_CARDS': True to automatically create a new card for a cluster that has no matches in the card index. See Auto-Generated and Completed Cards for details.
# create auto cards for unmatched clusters
'CREATE_AUTO_CARDS': True,
The scheduled clustering completely overwrites all created clusters. You can “pin” specific clusters, i.e. keep them and associated cluster events, including their IDs, intact. To do so, use the following settings:
Note
These settings are independent. Apply both if necessary.
Note
These settings do not affect the real-time clustering. It will continue to create new cluster events for the pinned clusters.
CLUSTERS_AUTO_PIN_HEURISTICS: set True or False for the following options and specify the corresponding values:
Note
Do so for each object type if applicable.
min_events: pin a cluster when the number of associated cluster events exceeds the given minimum value.
max_centroid_similarity_threshold: pin a cluster if the similarity between its centroid and the centroids of other clusters is less than the given threshold. If a cluster looks similar to some other clusters, chances are these clusters belong to the same person/car. In this case, the system won't pin such a cluster, so that it can be re-clustered later. On the contrary, dissimilar clusters will be pinned.
min_average_events_quality: pin a cluster if the average quality of associated cluster events is greater than the given minimum value.
# pinned clusters keep their id and events after reclusterization
'CLUSTERS_AUTO_PIN_HEURISTICS': {
    'face': {
        # pin clusters with `value` minimum cluster events
        'min_events': {'enabled': True, 'value': 10},
        # cluster's centroid similarity confidence is less then
        'max_centroid_similarity_threshold': {'enabled': True, 'value': 0.55},
        # minimum average event's quality
        'min_average_events_quality': {'enabled': True, 'value': 0.45},
    },
    'body': {},
    'car': {},
},
Enable the PIN_MATCHED_CLUSTERS parameter to pin matched clusters and the associated cluster events.
# always pin clusters with matched events (not affected by heuristics above)
'PIN_MATCHED_CLUSTERS': True,
If necessary, specify the maximum number of cluster events in the clusters that remain non-pinned. After this number is reached, the scheduled clustering will be automatically disabled.
# skip clusterization if unpinned cluster events count is greater than this value
'CLUSTERIZATION_MAX_CLUSTER_EVENTS': None,
Restart the findface-security service. You will see the Clusters tab appear in the FindFace Multi web interface.
sudo systemctl restart findface-security.service
Work with Cluster Galleries
View and Filter Clusters
To view the cluster galleries, navigate to the Clusters tab.
When working with the cluster galleries, the following filters may come in handy:
Note
Some filters from the list below may be hidden, depending on which recognition features are enabled.
Card: display clusters only for a selected card.
Cameras: display only clusters from a selected camera.
Camera groups: display only clusters from a selected group of cameras.
Watch lists: display only clusters for a selected watch list.
Start, End: display only clusters formed within a certain period.
ID: display a cluster with a given ID.
Has card: display clusters subject to whether they have an associated card, auto-generated or completed one (see the Auto-Generated and Completed Cards section for a detailed explanation).
Specific filters for face clusters
Age: display clusters with people of a given age.
Gender: display clusters with people of a given gender.
Emotions: display clusters with given emotions.
Glasses: filter clusters by the fact of wearing glasses.
Beard: filter clusters by the fact of having a beard.
Liveness: filter clusters by face liveness.
Face mask: filter clusters by the fact of wearing a face mask.
Specific filters for body clusters
Headwear: display only clusters with a person wearing headgear of a given type: hat/cap, hood/headscarf, none.
Upper body clothes: display only clusters with a person wearing upper body wear of a given generalized category: long sleeves, short sleeves, no sleeve.
Upper clothes type: display only clusters with a person wearing upper body wear of a given specific type: jacket, coat, sleeveless vest, sweatshirt, T-shirt, shirt, dress.
Upper clothes color: display only clusters with a person wearing a top of a given color.
Lower body clothes: display only clusters with a person wearing lower body wear of a given type: pants, skirt, shorts, obscured.
Lower clothes color: display only clusters with a person wearing a bottom of a given color.
Specific filters for car clusters
Car body style: display only clusters with cars of a given body style.
Car color: display only clusters with cars of a given color.
Make: filter clusters by car make.
Model: filter clusters by car model.
Special vehicle: display only clusters with cars belonging to a given type: police, ambulance, rescue service, taxi, other.
Merge and Delete Clusters
To manually merge several clusters, select them one by one and click Merge.
To delete a cluster, select it and click Delete.
View and Filter Associated Cluster Events
Click a cluster to see the associated cluster events. You will be redirected to the Cluster events page with the corresponding cluster identifier set in the Cluster ID filter.
The set of filters for working with cluster events is similar to that for clusters.
Make Cluster Gallery Static
Sometimes it’s necessary to finalize the object clustering at a certain point in time and then operate with a static gallery of formed clusters.
To keep the Clusters tab displayed after clustering has been disabled, do the following:
Open the /etc/findface-security/config.py configuration file.

sudo vi /etc/findface-security/config.py
Add the line "clusters": True to the SERVICES section, as shown in the example below:

...
SERVICES = {
    "ffsecurity": {
        ...
        "clusters": True,
    }
...
Disable the real-time and scheduled clustering processes.
...
'ENABLE_NIGHT_CLUSTERIZATION': False,
'ENABLE_REALTIME_CLUSTERIZATION': False,
Restart the findface-security service.

sudo systemctl restart findface-security.service
Manual Clustering
To manually launch the clustering process, use the run_clusterization utility.
You can invoke the run_clusterization help message by executing:
sudo findface-security run_clusterization --help
usage: findface-security run_clusterization [-h]
[--object-types OBJECT_TYPES [OBJECT_TYPES ...]]
[--force]
[--configuration CONFIGURATION]
[--version] [-v {0,1,2,3}]
[--settings SETTINGS]
[--pythonpath PYTHONPATH]
[--traceback] [--no-color]
[--force-color] [--skip-checks]
optional arguments:
-h, --help show this help message and exit
--object-types OBJECT_TYPES [OBJECT_TYPES ...]
Clusterize selected object types. Uses
CLUSTERIZE_OBJECT_TYPES from config if not provided.
Allowed types: face, body, car
--force Force clusterization even if
CLUSTERIZATION_MAX_CLUSTER_EVENTS condition is met
--configuration CONFIGURATION
The name of the configuration class to load, e.g.
"Development". If this isn't provided, the
DJANGO_CONFIGURATION environment variable will be
used.
--version show program's version number and exit
-v {0,1,2,3}, --verbosity {0,1,2,3}
Verbosity level; 0=minimal output, 1=normal output,
2=verbose output, 3=very verbose output
--settings SETTINGS The Python path to a settings module, e.g.
"myproject.settings.main". If this isn't provided, the
DJANGO_SETTINGS_MODULE environment variable will be
used.
--pythonpath PYTHONPATH
A directory to add to the Python path, e.g.
"/home/djangoprojects/myproject".
--traceback Raise on CommandError exceptions
--no-color Don't colorize the command output.
--force-color Force colorization of the command output.
--skip-checks Skip system checks.
With this utility, you can launch clustering separately for faces, bodies, and cars, and force clustering even when the number of cluster events exceeds the value of the CLUSTERIZATION_MAX_CLUSTER_EVENTS parameter (see Enable and Configure Clustering). For example, to force start face clustering, execute:
sudo findface-security run_clusterization --object-types face --force
Reports
In this chapter:
Report Types
The following reports are available:
Standard reports on the system entities:
areas, area activations, area activation records
Work time reports that allow you to track the exact moments when staff enter and exit an enterprise or a designated area and to calculate the total time spent in it. They can be built on either the Events or Areas tab.
Build Standard Report
To build a standard report on a system entity, do the following:
Navigate to the tab associated with the required entity: Events, Episodes, Search, Clusters, Cameras, Cards, Analytics, Audit Logs, Areas.
Set filters for the report.
Click Create Report.
Specify the report name.
If you are on the Events or Areas tab, select the report type Standard report (as there are two types of reports available).
If applicable, choose whether to save the report images as links, thumbnails, or full frames.
Click Create. The report will be available for download on the Reports tab.
Build Work Time Report
A work time report can be built on the following tabs:
Events: a personalized general work time report. Requires specifying the entrance and exit cameras.
Areas: a non-personalized work time report on the usage of a specific area. It considers the moments of entering and exiting the area with a given ID.
Events
To build a work time report on the Events tab, do the following:
Set filters for the report.
Click Create Report.
Specify the report name.
Select the report type: Work time report.
Select the entrance camera.
Select the exit camera.
If applicable, choose whether to save the report images as links, thumbnails, or full frames.
Click Create. The report will be available for download on the Reports tab.
Areas
To build a work time report on the Areas tab, do the following:
Set filters for the report.
Click Create Report.
Specify the report name.
Select the report type: Work time report.
Specify the area ID.
Click Create. The report will be available for download on the Reports tab.
Work with Reports
You can access reports previously created in the system on the Reports tab. The following operations are available:
Download selected reports in XLSX format.
Update selected reports.
Delete selected reports.
Configure Saving Images in Reports
When building certain reports, you can choose to save the report images as links, thumbnails, or full frames. It is possible to configure the image parameters. To do so, open the /etc/findface-security/config.py configuration file and alter the default JPEG quality and the maximum height of thumbnails and full frames, depending on your free disk space.
sudo vi /etc/findface-security/config.py
# reports image saving options
'REPORT_THUMBNAIL_JPEG_QUALITY': 75,
'REPORT_THUMBNAIL_MAX_HEIGHT': 100,
'REPORT_FULLFRAME_JPEG_QUALITY': 75,
'REPORT_FULLFRAME_MAX_HEIGHT': 250,
Video Wall
The FindFace Multi Video Wall allows for basic video surveillance. Use it to display the video image from cameras and video files.
Important
Advanced video surveillance is available with Video Recorder.
In this chapter:
Display Video
The Video Wall offers two modes, each with four predefined layouts:
video streaming
video streaming with object detection and episode feed
To display video on the Video Wall, do the following:
Navigate to the Video Wall tab.
Select a mode and camera layout.
Drag-n-drop cameras of your choice to the Video Wall.
You can work with the episode feed on the Video Wall in the same manner as with the Episodes tab, including the following basic filters:
Watch Lists
Matches
Acknowledged
Configure Video Wall
You can set up the Video Wall features that determine how it highlights objects in the video:
turn on/off the bbox and attribute data display for faces, cars, and bodies
Note
The attribute data are displayed if you have previously enabled neural networks for attribute recognition. See Enable Face Attribute Recognition, Enable Car and Car Attribute Recognition, Enable Body and Body Attribute Recognition.
enable blurring unmatched objects to comply with personal data protection laws (GDPR and similar)
To configure the Video Wall, do the following:
Open the /etc/findface-security/config.py configuration file.

sudo vi /etc/findface-security/config.py
Find the FFSECURITY_UI_CONFIG -> available_video_wall_features section. Set "faces", "cars", "bodies" to True or False to enable/disable the bbox and attribute data display for the relevant objects.

FFSECURITY_UI_CONFIG = {
    ...
    "available_video_wall_features": {
        "faces": True,
        "cars": False,
        "bodies": True,
        ...
    }
}
Set "gdpr": True to enable blurring of all unmatched objects displayed on the Video Wall.

Tip

To fully comply with personal data protection laws, follow this guide.

FFSECURITY_UI_CONFIG = {
    ...
    "available_video_wall_features": {
        ...
        "gdpr": True
    }
}
Restart findface-security.

sudo systemctl restart findface-security.service
Integrations
This chapter is all about integration with FindFace Multi. Integrate your system through HTTP API and webhooks, or check out our turnkey partner integrations.
HTTP API
Detailed interactive documentation on the FindFace Multi HTTP API is available after installation at http://<findface-security-ip:port>/api-docs. Learn and try it out.
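If you prefer to script against the API, the following minimal Python sketch shows the general request pattern. The endpoint path and the token-based authorization header in it are assumptions for illustration only; take the exact endpoints, authentication scheme, and payloads from the interactive documentation.

# Hypothetical sketch of a FindFace Multi HTTP API call from Python.
# The /events/faces/ path and the "Token" authorization scheme below are
# assumptions -- verify them at http://<findface-security-ip:port>/api-docs.
import requests

BASE_URL = "http://127.0.0.1"   # FindFace Multi address (adjust to your deployment)
TOKEN = "<your API token>"      # obtain it as described in the interactive documentation

response = requests.get(
    f"{BASE_URL}/events/faces/",                   # hypothetical endpoint
    headers={"Authorization": f"Token {TOKEN}"},   # assumed authorization header
    timeout=10,
)
response.raise_for_status()
print(response.json())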
Tip

You can also find it by navigating to
in the web interface.

Webhooks
You can set up FindFace Multi to automatically send notifications about specific events, episodes, counter records, and area activations to a given URL. To do so, create and configure a webhook. In this case, when such an event, episode, counter record, or an area activation occurs, FindFace Multi will send an HTTP request to the URL configured for the webhook.
You can use webhooks for various purposes, for instance, to notify a user about a specific event, invoke required behavior on a target website, and solve security tasks such as automated access control.
In this section:
Configure Webhook
Important
You need Administrator privileges to create a webhook.
Note
To use the webhooks, make sure that at least one of the following parameters is specified in /etc/findface-security/config.py: SERVICE_EXTERNAL_ADDRESS or EXTERNAL_ADDRESS.
To create and configure a webhook, do the following:
Navigate to the Preferences tab. Click Webhooks.
Click +.
Specify the webhook title.
Specify URL to automatically send notifications to.
You can send notifications in batches. Specify the maximum number of notifications in a webhook batch. The actual number may be less.
Specify the maximum number of attempts to send a notification. The interval between attempts increases exponentially with a maximum of 100 seconds.
Important

To receive all messages accumulated since a connection loss, should it happen, set 0. Set 1 to omit old messages.

FindFace Multi will automatically send notifications on events, episodes, counter records, and area activations that match the given filters. The following filters are available:
Recognition events (face, body, car):
Sections: face_events, body_events, car_events.

allowed_bs_types: object tracking mode, possible values: overall, realtime.
camera_group_in: camera group id, number.
camera_in: camera id, number.
matched_lists_in: watch list id, number.
matched_card_in: matched card id, number.
matched: the matched status, boolean. Set true or false if only matched or unmatched events, respectively, must trigger sending a notification.
confidence_gte: minimum confidence, number.
Episodes with people:
Section: human_episodes.

allowed_types: episode status, possible values: an episode opening (episode_open), adding a new event into an episode (episode_event), an episode closing (episode_close).
camera_group_in: camera group id, number.
camera_in: camera id, number.
matched_lists_in: watch list id, number.
face_matched: the matched status of face events in an episode, boolean. Set true if only matched faces or false if only unmatched faces must trigger sending a notification.
body_matched: the matched status of body events in an episode, boolean. Set true if only matched bodies or false if only unmatched bodies must trigger sending a notification.
events_count_gte: minimum number of events in an episode, number.
events_count_lte: maximum number of events in an episode, number.
Episodes with cars:
Section: car_episodes.

allowed_types: episode status, possible values: an episode opening (episode_open), adding a new event into an episode (episode_event), an episode closing (episode_close).
camera_group_in: camera group id, number.
camera_in: camera id, number.
matched_lists_in: watch list id, number.
car_matched: the matched status of car events in an episode, boolean. Set true if only matched cars or false if only unmatched cars must trigger sending a notification.
events_count_gte: minimum number of events in an episode, number.
events_count_lte: maximum number of events in an episode, number.
Counter records:
Section: counters.

counter_in: counter id, number.
camera_group_in: camera group id, number.
camera_in: camera id, number.
faces_gte: minimum number of faces in a counter record, number.
faces_lte: maximum number of faces in a counter record, number.
silhouettes_gte: minimum number of bodies in a counter record, number.
silhouettes_lte: maximum number of bodies in a counter record, number.
cars_gte: minimum number of cars in a counter record, number.
cars_lte: maximum number of cars in a counter record, number.
proximity_min_lte: send a notification if the minimum detected distance in meters is less than this value, number.
proximity_min_gte: send a notification if the minimum detected distance in meters is greater than this value, number.
proximity_avg_lte: send a notification if the average detected distance in meters is less than this value, number.
proximity_avg_gte: send a notification if the average detected distance in meters is greater than this value, number.
proximity_max_lte: send a notification if the maximum detected distance in meters is less than this value, number.
proximity_max_gte: send a notification if the maximum detected distance in meters is greater than this value, number.
Area activations:
Section: areas.

area_in: area id, number.
camera_group_in: camera group id, number.
camera_in: camera id, number.
{ "face_events": { "allowed_bs_types": [ "overall", "realtime" ], "camera_group_in": [], "camera_in": [], "matched_lists_in": [], "matched_card_in": [], "matched": true, "confidence_gte": 0.75 }, "body_events": { "allowed_bs_types": [ "overall", "realtime" ], "camera_group_in": [], "camera_in": [], "matched_lists_in": [], "matched_card_in": [], "matched": true, "confidence_gte": 0.75 }, "car_events": { "allowed_bs_types": [ "overall", "realtime" ], "camera_group_in": [], "camera_in": [], "matched_lists_in": [], "matched_card_in": [], "matched": true, "confidence_gte": 0.75 }, "human_episodes": { "allowed_types": [ "episode_open", "episode_event", "episode_close" ], "camera_group_in": [], "camera_in": [], "matched_lists_in": [], "face_matched": true, "body_matched": true, "events_count_gte": 0, "events_count_lte": 999 }, "car_episodes": { "allowed_types": [ "episode_open", "episode_event", "episode_close" ], "camera_group_in": [], "camera_in": [], "matched_lists_in": [], "car_matched": true, "events_count_gte": 0, "events_count_lte": 999 }, "counters": { "counter_in": [], "camera_group_in": [], "camera_in": [], "faces_gte": 1, "faces_lte": 100, "silhouettes_gte": 1, "silhouettes_lte": 100, "cars_gte": 1, "cars_lte": 100, "proximity_min_lte": 100, "proximity_min_gte": 0, "proximity_avg_lte": 100, "proximity_avg_gte": 0, "proximity_max_lte": 100, "proximity_max_gte": 0 }, "areas": { "area_in": [], "camera_group_in": [], "camera_in": [] } }
Important

Use only the filters that match your search needs. To turn off a filter, remove it from the webhook. Do not leave a filter empty ([]), as in this case the filtration result will be empty as well.

Note

To get all notifications, pass only curly braces without any enclosed filters:

{}
Tip
Example #1. Get notifications about all car events:
{ "car_events": {} }
Example #2. Get notifications of the opening of matched episodes that feature human faces and bodies:
{ "human_episodes": { "allowed_types": ["episode_open"], "face_matched": true, "body_matched": true }}
Note

You can specify several values for a filter with square brackets. In this case, the webhook will be triggered once one of the values from this filter is matched. In the example below, you will get a body event from camera group 1 or 3 if the matched card is 12 or 25.

{
  "body_events": {
    "camera_group_in": [1, 3],
    "matched_card_in": [12, 25]
  }
}
Check Active.
Click Save.
Webhook in Action
Try out a webhook by capturing event notifications with a simple web server in Python:
from pprint import pprint
from aiohttp import web
async def handle(request):
pprint(await request.json())
return web.Response(status=200)
app = web.Application()
# for aiohttp v 3.x
# app.add_routes([web.post('/', handle)])
# for aiohttp v 2.x
app.router.add_post('/', handle)
web.run_app(app, port=8888)
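If you run this script on the FindFace Multi host itself, specify http://127.0.0.1:8888/ as the webhook URL; the port must match the one passed to web.run_app above.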
Important

The webhook catcher that you use must return an HTTP 200 response after receiving the webhook request from FindFace Multi, as in the example above.

If no filters are configured for a webhook, this web server will receive notifications about all events, episodes, counter records, and area activations that occur in the system. The notifications have the following formats:
To view the webhook delivery status, execute:
sudo journalctl -u findface-security.service | grep 'Webhook'
Success:
May 30 14:13:43 ffsecurity[12441]: INFO [Webhook(id=6) worker(type=face_events] <queue: 0> Sent batch(len-1, type-"face_events"): ['4355024961160384430']
May 30 14:13:43 ffsecurity[12441]: INFO [SC:OQSrsPV9] [Webhooks manager-38bc5] Processing message(type="face_events:event_created"). Consumer reception delta: 0.003450
May 30 14:13:43 ffsecurity[12441]: INFO [Webhook(id=6) worker(type=face_events] <queue: 0> Sent batch(len-1, type-"face_events"): ['4355024961658847580']
May 30 14:13:44 ffsecurity[12441]: INFO [SC:JtRz2Vuo] [Webhooks manager-38bc5] Processing message(type="face_events:event_created"). Consumer reception delta: 0.001263
May 30 14:13:44 ffsecurity[12441]: INFO [Webhook(id=6) worker(type=face_events] <queue: 0> Sent batch(len-1, type-"face_events"): ['4355024962087522421']
May 30 14:13:44 ffsecurity[12441]: INFO [SC:9AnzRJwU] [Webhooks manager-38bc5] Processing message(type="face_events:event_created"). Consumer reception delta: 0.001691
May 30 14:13:44 ffsecurity[12441]: INFO [Webhook(id=6) worker(type=face_events] <queue: 0> Sent batch(len-1, type-"face_events"): ['4355024962355957878']
Failure:
May 30 14:18:49 ffsecurity[12441]: INFO [SC:sp34rVQR] [Webhooks manager-38bc5] Processing message(type="face_events:event_created"). Consumer reception delta: 0.001376
May 30 14:18:49 ffsecurity[12441]: WARNING [Webhook(id=6) worker(type=face_events] <queue: 0> Error sending webhook: Cannot connect to host 127.0.0.1:8888 ssl:None [Connection refused]. Attempt 1 out of 10. Next attempt in 0.270 seconds.
May 30 14:18:50 ffsecurity[12441]: WARNING [Webhook(id=6) worker(type=face_events] <queue: 0> Error sending webhook: Cannot connect to host 127.0.0.1:8888 ssl:None [Connection refused]. Attempt 2 out of 10. Next attempt in 0.729 seconds.
May 30 14:18:50 ffsecurity[12441]: INFO [SC:zUhLHNxN] [Webhooks manager-38bc5] Processing message(type="face_events:event_created"). Consumer reception delta: 0.001368
May 30 14:18:50 ffsecurity[12441]: INFO [SC:1Q66tcUS] [Webhooks manager-38bc5] Processing message(type="face_events:event_created"). Consumer reception delta: 0.001386
May 30 14:18:50 ffsecurity[12441]: WARNING [Webhook(id=6) worker(type=face_events] <queue: 2> Error sending webhook: Cannot connect to host 127.0.0.1:8888 ssl:None [Connection refused]. Attempt 3 out of 10. Next attempt in 1.968 seconds.
May 30 14:18:52 ffsecurity[12441]: WARNING [Webhook(id=6) worker(type=face_events] <queue: 2> Error sending webhook: Cannot connect to host 127.0.0.1:8888 ssl:None [Connection refused]. Attempt 4 out of 10. Next attempt in 5.314 seconds.
May 30 14:18:55 ffsecurity[12441]: INFO [SC:5kl6zGrF] [Webhooks manager-38bc5] Processing message(type="face_events:event_created"). Consumer reception delta: 0.001542
May 30 14:18:58 ffsecurity[12441]: WARNING [Webhook(id=6) worker(type=face_events] <queue: 3> Error sending webhook: Cannot connect to host 127.0.0.1:8888 ssl:None [Connection refused]. Attempt 5 out of 10. Next attempt in 14.349 seconds.
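The retry delays in the log above grow roughly geometrically until the documented 100-second cap is reached. The following small sketch reproduces such a schedule; the base delay and growth factor are assumptions chosen for illustration, not documented values.

# Illustration of an exponentially growing retry schedule capped at 100 seconds.
# BASE_DELAY and FACTOR are assumptions for demonstration purposes only.
BASE_DELAY = 0.1   # seconds (assumption)
FACTOR = 2.7       # per-attempt growth factor (assumption)
MAX_DELAY = 100.0  # documented upper bound on the retry interval

def retry_delay(attempt: int) -> float:
    """Delay before the next attempt, for a 1-based attempt number."""
    return min(BASE_DELAY * FACTOR ** attempt, MAX_DELAY)

for attempt in range(1, 11):
    print(f"Attempt {attempt}: next attempt in {retry_delay(attempt):.3f} seconds")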
Verbose Webhooks
By default, webhook notifications contain only ids of such entities as cards, watch lists, cameras, and camera groups. It is possible to get the full content of the mentioned entities by switching webhooks to the verbose mode.
To do so, open the /etc/findface-security/config.py configuration file and set 'VERBOSE_WEBHOOKS': True:
sudo vi /etc/findface-security/config.py
...
FFSECURITY = {
...
# send serialized cards, card-lists, camera and camera groups in webhooks
'VERBOSE_WEBHOOKS': True,
...
}
...
Be sure to restart the findface-security service after making changes.
sudo systemctl restart findface-security.service
In the verbose mode, webhook notifications have the following formats:
Partner Integrations
Genetec Security Center
FindFace Multi integration with Genetec Security Center allows you to expand the capabilities of your Genetec-based security system with face recognition functionality.
Configure Integration
Integration with Genetec Security Center is implemented via the findface-genetec plugin. By default, the plugin is disabled.
Before getting started with the integration on the FindFace Multi side, deploy the Genetec Web SDK and Media Gateway packages, and create an Alarm entity that will be triggered in Genetec Security Center when a face recognition event occurs in FindFace Multi.
Important

For the Genetec-FindFace integration to work, you also need to purchase a proper license from Genetec (license part number GSC-1SDK-Ntech-FindFace) and activate it in Genetec Security Center.
In this chapter:
Configure Genetec Web SDK and Media Gateway
To enable and configure Web SDK, use Genetec Config Tool. For details, refer to
.
When enabling and configuring Media Gateway in Genetec Config Tool, refer to
.

Important

Make sure that the firewall is configured so that the ports for the Web SDK and Media Gateway are open.
Create Alarm in Genetec Security Center
Create and configure a new Alarm entity in Genetec Config Tool. Refer to
for details.
Tip
On the Properties tab, select the Video display option that suits your needs the best. Available options are Live, Playback, etc.
Tip
To enable alarm procedures and auto rotation of video right within the alarm pop-up window, enable Content cycling.
When configuring the integration in FindFace Multi, you will have to enter the alarm logical id that is specified on the Identity tab.
Enable Genetec Integration in FindFace Multi
To enable the Genetec integration in FindFace Multi, do the following:
Enable the findface-genetec plugin. To do so, open the /etc/findface-security/config.py configuration file and uncomment the INSTALLED_APPS.append('ffsecurity_genetec') line. Make sure that at least one of the following parameters is specified: SERVICE_EXTERNAL_ADDRESS or EXTERNAL_ADDRESS.

sudo vi /etc/findface-security/config.py

...
# SERVICE_EXTERNAL_ADDRESS is prioritized for FFSecurity webhooks and Genetec plugin.
# EXTERNAL_ADDRESS is used instead if SERVICE_EXTERNAL_ADDRESS is not provided.
# You must provide either SERVICE_EXTERNAL_ADDRESS or EXTERNAL_ADDRESS in order
# to be able to work with FFSecurity webhooks and Genetec plugin.
SERVICE_EXTERNAL_ADDRESS = 'http://127.0.0.1'
# EXTERNAL_ADDRESS is used to access objects created inside FFSecurity via external links.
EXTERNAL_ADDRESS = ''
...
# FINDFACE SECURITY PLUGINS
# ==============================================================================
# Uncomment lines below to enable plugins. Please consult documentation for
# a plugin specific settings.
...
# =============== Genetec ================
INSTALLED_APPS.append('ffsecurity_genetec')
Migrate the main database architecture from FindFace Multi to PostgreSQL and re-create user groups with predefined rights.
sudo findface-security migrate
sudo findface-security create_groups
Restart findface-security.

sudo systemctl restart findface-security.service
Configure Endpoints in FindFace Multi
To establish connection between FindFace Multi and Genetec Security Center, do the following:
Navigate to the Preferences tab. Click Genetec.
In the Server and Media sections, specify settings of the Web SDK and Media Gateway endpoints.
Important
The ports for the WebSDK and Media Gateway need to be open.
In the Ids section, specify the logical id of the Alarm entity that will be triggered in Genetec Security Center when a face recognition event occurs in FindFace Multi.

Click Save. If the connection to Genetec Security Center is successfully established, you will see the State change to Configured.
Import Cameras from Genetec Security Center
Once the connection to Genetec Security Center is established, import cameras. To do so, click Cameras on the Genetec tab. Click Import.
This action will create a camera group named Genetec, listing all the cameras from Genetec Security Center.
To view this list of cameras, navigate to the Cameras tab on the FindFace Multi navigation bar. If you want to exclude a camera from face recognition, simply deactivate it in the list.
Create Watch Lists and Cards in FindFace Multi
After you have configured the endpoints and camera settings, finish the integration by creating a card index. Notifications about face recognition events will be automatically sent to Genetec Security Center. See Notifications in Genetec Security Center.
Notifications in Genetec Security Center
Each face recognition event from a Genetec camera that matches a card triggers a relevant alarm in Genetec Security Center. Every alarm triggered by FindFace Multi is associated with the relevant camera (the source of the face recognition event), so you can instantly watch live or playback video within the Alarm Monitoring task in Genetec Security Desk. FindFace Multi also utilizes Alarm Procedures to provide a user with additional content related to the alarm, such as:
face detected in the video
matching face from the card index
person’s name and comment from the card
matching confidence
watch list’s name
full frame
After you receive a face recognition alarm, process it as you usually do with other alarms in Genetec Security Center.
Axxon Next
FindFace Multi integration with Axxon Next allows you to detect and identify objects in video streams from an Axxon-based security system.
Integration with Axxon Next is implemented via the ffsecurity_axxon plugin.
To configure the FindFace Multi integration with Axxon Next in Ubuntu, do the following:
Activate the plugin by uncommenting the INSTALLED_APPS.append('ffsecurity_axxon') line in the /etc/findface-security/config.py configuration file.

sudo vi /etc/findface-security/config.py

...
# ==============================================================================
# FINDFACE SECURITY PLUGINS
# ==============================================================================
# Uncomment lines below to enable plugins. Please consult documentation for
# a plugin specific settings.
# =============== Axxon ================
INSTALLED_APPS.append('ffsecurity_axxon')
Uncomment the AXXON section in the configuration file and fill it out by analogy with the example below. In the api parameter, specify the IP address and API port of the Axxon Next server that will provide FindFace Multi with the Axxon API and HLS-archive streams. In the rtsp parameter, specify the common segment of the Axxon video stream addresses. In name, user, and password, specify the Axxon Next server name and the credentials to access it.

# =============== Axxon ================
INSTALLED_APPS.append('ffsecurity_axxon')

AXXON = [
    {
        'name': 'my_axxon',
        'api': 'http://192.168.0.2:8000',
        'rtsp': 'rtsp://192.168.0.2:50554/',
        'user': 'cinderella',
        'password': '1@love$ocean8',
    }
]
Note
You can establish connections to several Axxon Next servers by multiplying the AXXON section content as follows:

AXXON = [
    {
        'name': 'my_axxon1',
        'api': 'http://192.168.0.2:8000',
        'rtsp': 'rtsp://192.168.0.2:50554/',
        'user': 'user1',
        'password': 'password1',
    },
    {
        'name': 'my_axxon2',
        'api': 'http://192.168.0.3:8000',
        'rtsp': 'rtsp://192.168.0.3:50554/',
        'user': 'user2',
        'password': 'password2',
    },
    {
        'name': 'my_axxon3',
        'api': 'http://192.168.0.4:8000',
        'rtsp': 'rtsp://192.168.0.4:50554/',
        'user': 'user3',
        'password': 'password3',
    }
]
Create representations of the Axxon Next cameras in FindFace Multi (see Camera Management and Video Recording). A camera representation URL must be specified in the format <server>:<friendlyNameLong>, where server is the Axxon Next server name specified in the name parameter in the previous step, and friendlyNameLong is the camera name on the Axxon Next server. You can find out the camera name in the Axxon user interface, or via the Axxon API by executing:

curl http://user:password@127.0.0.1/video-origins/

{
   "OLOLOE-DEV/DeviceIpint.vhod_1/SourceEndpoint.video:0:0" : {
      "friendlyNameLong" : "vhod_1.Vhod_1",
      "friendlyNameShort" : "Vhod_1",
      "origin" : "OLOLOE-DEV/DeviceIpint.vhod_1/SourceEndpoint.video:0:0",
      "state" : "signal_restored"
   }
}
For the camera from our example, the URL will be specified as my_axxon:vhod_1.Vhod_1.
The configuration is now finished. If the integration is properly configured, FindFace Multi will detect and identify objects in the Axxon Next video streams, and recognition events will feature video clips from Axxon Next (given the relevant settings).
Edge Devices
It is possible to integrate FindFace Multi with the edge devices that can source frames for object recognition, e.g., Access Control terminals. In this case, once FindFace Multi receives a frame from an edge device, it will initiate the extraction of the object’s feature vector and event creation. You can work with these events by analogy with the events from CCTV cameras.
The edge device integration is done through HTTP API. After the primary configuration, FindFace Multi will issue a token. Specify this token in every API request sent by the edge device to FindFace Multi to authorize the device.
To integrate an edge device with FindFace Multi, do the following:
Navigate to the Cameras tab.
Click +.
On the Settings -> Main tab, enter the general edge device information:
Connection type: choose Edge device.
Specify the edge device name.
Add the edge device to a camera group, so it will be more convenient to filter events from this device later.
Tip
You can allot a separate camera group specifically to edge devices.
If necessary, add a comment.
Check Active.
On the Camera info tab, copy the provided edge device token to the clipboard.
Specify this token in every API request that the edge device sends to FindFace Multi to create an event. As a result, frames passed in the requests will be associated with the edge device’s relevant camera and processed by analogy with the frames from CCTV cameras.
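As an illustration, a minimal Python sketch of an edge device pushing a captured frame is shown below. The endpoint path, the form field name, and the way the token is passed are assumptions only; take the exact request format from the interactive documentation referenced below.

# Hypothetical sketch of an edge device sending a frame to FindFace Multi.
# The endpoint, the "fullframe" field name, and the token header are assumptions --
# check http://<findface-security-ip:port>/api-docs for the actual request format.
import requests

FFMULTI_URL = "http://192.168.0.10"                    # FindFace Multi address (assumption)
EDGE_DEVICE_TOKEN = "<token from the Camera info tab>"

def push_frame(jpeg_path: str) -> None:
    """Send one captured frame so that FindFace Multi extracts objects and creates an event."""
    with open(jpeg_path, "rb") as frame:
        response = requests.post(
            f"{FFMULTI_URL}/events/add/",                                # hypothetical endpoint
            headers={"Authorization": f"Token {EDGE_DEVICE_TOKEN}"},     # assumed auth scheme
            files={"fullframe": frame},                                  # assumed field name
            timeout=10,
        )
    response.raise_for_status()

push_frame("frame.jpg")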
Detailed interactive documentation on the FindFace Multi HTTP API is available after installation at http://<findface-security-ip:port>/api-docs. Learn and try it out.

Tip

You can also find it by navigating to
in the web interface.