Welcome to FindFace Lite’s documentation!

The documentation consists of blocks. Use them to get onboard with FindFace Lite and to find information fast.

Below is an overview of the blocks:

  • The Should be useful block contains additional docs that can help you in your work with FindFace Lite.

  • The Scenarios and features block describes FindFace Lite usage cases and the features that can improve the service performance.

  • The Getting started block leads you through the steps of preparing for and installing FindFace Lite.

  • The Settings block contains articles describing the settings you can configure in FindFace Lite.

  • The Integration documentation block includes articles about all integration methods and their descriptions.

Glossary

FindFace Lite

FindFace Lite is a lightweight version of FindFace Multi.

FindFace Lite Installer

FindFace Lite Installer is a file containing a set of configs that installs the FindFace Lite service.

Identity authentication terminal

Identity authentication terminal is an access control device that supports face authentication.

VideoWorker

VideoWorker is an interface object for tracking, processing and matching faces on multiple video streams.

Liveness detection

Liveness detection is a technique where an algorithm securely detects whether the source of a biometric sample is a fake representation or a live human being. The biometric sample is a facial photo taken by a user.

Event

Event is a representation of an object (face or car) occurrence in the Camera frame. With an active Camera, an Event is automatically created from VideoWorker detection data.

Card

Card is a profile of a real person or a car. Card can be one of two types: face or car.

Camera

Camera is a representation of any video stream that can also be a file.

Object

Object is a representation of a particular face or car. To create it, you have to add an image and link it to a Card.

Webhook

Webhooks are user-defined HTTP callbacks triggered by an event in a web app.

PACS

PACS (or ACS) is a particular type of access control system used as an electronic security countermeasure. PACS can be used to control employee and visitor access to a facility and within controlled interior areas.

Deduplication

Deduplication is a feature that prevents one person or car from being recognized as several different Events within a set period of time.

Antispam events

Antispam is a feature used to distinguish a real Event to be processed from a so-called “spam” Event captured by the system accidentally.

Edge device

Edge device is a physical device (e.g., an identity authentication terminal) that can connect to FindFace Lite and send it images for object recognition.

Usage scenarios

FindFace Lite easily integrates with enterprise systems and sends them the processed data from connected cameras and access terminals.

The business logic of those systems remains unchanged while being enriched with the necessary video analytics data.

FindFace Lite usage scenarios are divided into two main categories:

  • Vehicle scenarios

  • Face scenarios

Tip

All scenarios can be improved by integrated FindFace Lite features, which are described in the Features article.

Vehicle scenarios

FindFace Lite processes the video stream, recognizes vehicle parameters, and sends an event to the Physical Access Control System (PACS) through a webhook in JSON format. Based on the information from FindFace Lite, the PACS performs the further access control scenario.

FindFace Lite supports two options for interaction with a PACS:

License plate recognition only

Use this option if the PACS grants access based only on a license plate number and doesn’t use other vehicle parameters.

FindFace Lite sends an event to the PACS with a license plate number, a vehicle photo with the license plate number, the date, the time, and the camera ID (used by the PACS).

Recognition of multiple vehicle parameters

This is a complex scenario which uses multiple parameters besides the license plate number:

  • Vehicle front and rear parts – to avoid false alarms when the camera sees the license plate on the car rear after it has passed the gate, and to avoid false alarms when a vehicle reverses.

  • Vehicle type – to meet municipal requirements for special vehicle access (e.g., an ambulance or a fire truck).

  • License plate number visibility – to avoid false alarms when the vehicle is too far away or standing aside.

Face scenarios

FindFace Lite rules out fraud and false alarm cases using modern features such as Liveness and Headpose.

Moreover, FindFace Lite can check for medical mask presence and can be easily integrated with biometric access terminals.

Scenarios with PACS

FindFace Lite processes the video stream, identifies person parameters, and sends an event to the PACS through a webhook in JSON format.

Based on the information from FindFace Lite, the PACS performs the further access control scenario.

Time tracking

FindFace Lite allows you to collect working time data from the Chrome application on different devices and easily export it to working hours accounting systems (ERP, WFM).

Workers can easily mark their start and finish times by just pressing a button and looking at the camera. The advanced anti-spoofing feature Liveness prevents false alarms.

Black and white lists scenarios

FindFace Lite can be integrated with CRM and security systems in order to perform access control scenarios:

  • When a client or a person from a white list arrives, FindFace Lite can interact with the customer loyalty program and send to the CRM (or any other system) an event with the person ID and all the other parameters from the person’s Card.

  • When a person from a black list arrives, FindFace Lite sends an event to the external system, which performs the necessary action.

  • FindFace Lite can check for medical mask presence among the staff and register an event with all the related staff data (an image without a mask, the staff ID, etc.).

Features

Event deduplication (car, face)

Deduplication of events is a feature that prevents one person or car from being recognized as several different Events within a set period of time; it also simplifies integration with a PACS that does not support deduplication.

Note

Deduplication works for events from all connected cameras.

It means that in FindFace Lite you can set a period of time during which Events with the same person or car are considered duplicates: the system reacts to and processes only the first one, and the following duplicate Events are ignored.

Possible scenarios of duplication

  • Events with a Car can be duplicated if the vehicle stops for a while in front of the barrier, or if the rear license plate of a vehicle that has already passed through the barrier is recognized by the exit camera.

  • Events with a Face can be duplicated if the camera captures the face of a person who has already passed through the PACS and shows their face to the camera again for any reason.

More scenarios are described in the FindFace Lite scenarios article.

How to configure event deduplication

  1. Open the configuration file located at FFlite -> api_config.yml using a text editor (e.g., nano).

  2. Enable the feature by setting the dedup_enabled parameter to true.

  3. Configure the saving of duplicates in the save_dedup_event parameter: choose true to save duplicates and false not to save them.

  4. Set the parameters for car and face recognition separately.

  • face_dedup_confidence (default 0.9) – Confidence of matching between two face Events for them to be considered duplicates. If the matching score is equal to or higher than the set value, the Event is labeled as a duplicate.

  • car_dedup_confidence (default 0.9) – The same confidence threshold for car Events.

  • face_dedup_interval (default 5) – Time interval in seconds during which Events with the same face are considered duplicates.

  • car_dedup_interval (default 5) – The same interval for car Events.

  5. Save the changes in the file and close the editor.

  6. Apply the new settings by restarting the api service with the following command:

docker compose restart api
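
For orientation, here is a minimal sketch of how these parameters could look in the app block of api_config.yml; the exact layout of the file in your installation may differ, and the values below are simply the defaults from the table above:

app:
  dedup_enabled: true          # enable event deduplication
  save_dedup_event: false      # do not store duplicate events
  face_dedup_confidence: 0.9   # face events matching at this score or higher are duplicates
  face_dedup_interval: 5       # within a 5-second window
  car_dedup_confidence: 0.9
  car_dedup_interval: 5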

Tip

Read the detailed instructions about all configuration settings in the Config Settings article.

Spam events filtering (car, face)

Spam events filtering is a feature used to distinguish a real Event to be processed from a so-called “spam” Event captured by the system accidentally.

It means that, at the moment of creating an Event from an added Camera or sending an Event from an edge device via a POST request, you can set the image detection area and omit its spam-generating part.

Possible scenarios of spam events

  • Events with a Car can be spam events: if a vehicle is parked near the barrier and its license plate falls into the field of view of the video camera, multiple events will be generated.

    If the license plate is found in the PACS database, then the barrier will be opened.

  • Events with a Face can be spam events if a person stays near the camera but is not going to pass through the PACS. If the person is found in the PACS database, the barrier will be opened.

More scenarios are described in the FindFace Lite scenarios article.

How to configure spam filtering

The frame capture area is configured in the roi parameter, where you specify the offsets from each side of the frame or image.

Tip

Currently these settings are available only via API requests.

  • To configure the roi setting for video streams, use the following requests and the WxH+X+Y format for the roi parameter:

    • POST /v1/cameras/ to create the Camera object.

    • PATCH /v1/cameras/{camera_id} to update the Camera object.

{
  "name": "test cam",
  "url": "rtmp://example.com/test_cam",
  "active": true,
  "single_pass": false,
  "stream_settings": {
    "rot": "",
    "play_speed": -1,
    "disable_drops": false,
    "ffmpeg_format": "",
    "ffmpeg_params": [],
    "video_transform": "",
    "use_stream_timestamp": false,
    "start_stream_timestamp": 0,
    "detectors": {
      "face": {
        "roi": "1740x915+76+88", <<-- roi
        "jpeg_quality": 95,
        "overall_only": false,
        "filter_max_size": 8192,
        "filter_min_size": 1,
        "fullframe_use_png": false,
        "filter_min_quality": 0.45,
        "fullframe_crop_rot": false,
        "track_send_history": false,
        "track_miss_interval": 1,
        "post_best_track_frame": true,
        "post_last_track_frame": false,
        "post_first_track_frame": false,
        "realtime_post_interval": 1,
        "track_overlap_threshold": 0.25,
        "track_interpolate_bboxes": true,
        "post_best_track_normalize": true,
        "track_max_duration_frames": 0,
        "realtime_post_every_interval": false,
        "realtime_post_first_immediately": false
      }
    }
  }
}
  • To configure the roi setting for images received from an edge device, use the following request and the [left, top, right, bottom] format for the roi parameter:

    • POST /v1/events/{object_type}/add to post the Event object.

{
  "object_type": "face",
  "token": "change_me",
  "camera": 2,
  "fullframe": "somehash.jpg",
  "rotate": true,
  "timestamp": "2000-10-31T01:30:00.000-05:00",
  "mf_selector": "biggest",
  "roi": 15,20,12,14 <<-- roi
}
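
As an illustration only, here is a sketch of how the roi of an existing Camera could be changed from the command line; the host name, camera ID, and token are placeholders, and it is assumed that the PATCH request accepts a partial body containing just the fields to update:

curl -X PATCH "http://<your_hostname>/v1/cameras/1" \
  -H "Authorization: JWT <token>" \
  -H "Content-Type: application/json" \
  -d '{"stream_settings": {"detectors": {"face": {"roi": "1740x915+76+88"}}}}'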

Liveness (face)

Liveness is a technology used with CCTV cameras to determine whether the biometric trait presented to the system belongs to a live person or is a spoof attempt such as a photograph, a video, or a mask.

FindFace Lite uses a passive method of liveness detection, which applies algorithms to analyze various features of a person’s face, such as pupil movement, facial expressions, or the presence of micro-movements, to determine whether the person is alive.

This method is less invasive than other liveness detection methods, and:

  • it does not require any action from the user, which can lead to higher user acceptance rates and fewer errors due to user discomfort or mistakes;

  • it works in real time: it can quickly and accurately authenticate a user without causing any delay or disruption to the authentication process;

  • it does not require any additional hardware or sensors;

  • it is more difficult to spoof than other types of liveness detection, such as those that require the user to perform a specific action.

Possible scenarios of liveness detection

Liveness detection can be used in many scenarios; here are several of them:

  • Banking – Liveness detection may be used in banking to verify the identity of customers.

    For example, a customer may be required to present their face to a camera during a video call with a bank representative, and the system can use liveness detection to ensure that the customer is alive and not presenting a fake photo.

  • Employee Access Control: Liveness detection can be used to control employee access to secure areas in the workplace.

    For example, if an employee attempts to enter a secure area by presenting a photograph or a mask, the CCTV camera with liveness detection can deny access and notify the security team.

How to configure liveness

  1. Open the configuration file located at FFlite -> api_config.yml using a text editor (e.g., nano).

  2. Enable the feature by adding the liveness value to the face_features parameter.

  3. Configure the liveness detection source in the liveness_source parameter:

  • If the detection source is going to be an image (Events will be created via POST /{object_type}/add), set the eapi value.

  • If the detection source is going to be a video stream (Events will be created in FindFace Lite automatically after recognition from a video stream), set the vw value.

  4. Save the changes in the file and close the editor.

  5. Apply the new settings by restarting the api service with the following command:

docker compose restart api
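
A minimal sketch of the corresponding lines in the app block of api_config.yml (the exact layout of the file in your installation may differ):

app:
  face_features:
    - liveness           # enable liveness detection for face Events
  liveness_source: eapi  # use vw instead for Events created from video streams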

Tip

Read the detailed instructions about all configuration settings in the Config Settings article.

Headpose (face)

The Headpose feature refers to the ability of the camera to detect and track the orientation and movement of a person’s head relative to the CCTV camera in real time.

Warning

The headpose feature does not work if the person wears a medical mask.

To detect the headpose in two-dimensional space, FindFace Lite uses pitch and yaw.

  • Pitch refers to the rotation of the head around its horizontal axis, which runs from ear to ear. Positive pitch indicates that the head is tilted forward, while negative pitch indicates that the head is tilted backward.

  • Yaw refers to the rotation of the head around its vertical axis, which runs from top to bottom. Positive yaw indicates that the head is turned to the right, while negative yaw indicates that the head is turned to the left.

_images/headpose.jpg

Possible scenarios of headpose detection

Headpose detection can be used in various scenarios where face recognition is used, to improve accuracy and security; here are several of them:

  • Improving employee access control systems by ensuring that the face of the employee matches the expected orientation.

    It means that if a person stays near the camera and turns their head towards the camera, but is not going to pass through the PACS, access will not be granted because of the detected headpose.

  • Improving comfort while using the PACS system. If an employee approaches a security checkpoint at an awkward angle, the CCTV camera with headpose detection can trigger an alert to the access control system to reposition the camera and ensure the proper orientation of the face.

How to configure headpose

  1. Open the configuration file located at FFlite -> api_config.yml using a text editor (e.g., nano).

  2. Enable the feature by adding the headpose value to the face_features parameter.

  3. Save the changes in the file and close the editor.

  4. Apply new settings by restarting the api service with the following command:

docker compose restart api
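
A minimal sketch of the corresponding lines in the app block of api_config.yml, here with liveness kept alongside headpose to show that face_features accepts several values (the exact layout of the file in your installation may differ):

app:
  face_features:
    - headpose   # enable head pose detection for face Events
    - liveness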

Tip

Read the detailed instructions about all configuration settings in the Config Settings article.

Introduction to Getting Started

The Getting Started block contains 5 steps; going through them, you will easily install FindFace Lite.

For the service to work correctly, you have to:

  • Prepare a CCTV camera according to the recommendations (STEP 1).

  • Prepare a server according to the necessary characteristics (STEP 2).

  • Prepare it by installing Docker Engine and Docker Compose (STEP 3).

After all preparation steps are done:

  • Upload the FindFace Lite installer and the service license to the server (STEP 4).

  • Install FindFace Lite (STEP 5).

Good luck!

If you have questions, please do not hesitate to write to us at support@ntechlab.com.

STEP 1. CCTV Camera Requirements: characteristics and installation

Face recognition

CCTV Camera characteristics

  1. The minimum pixel density required for identification is 500 pixels/m (roughly corresponds to a face width of 80 pixels).

    cctv_minimum_pix_en

  2. Select a focal length of the camera lens that provides the required pixel density at the predetermined distance to the recognition objects. The picture below demonstrates how to calculate the focal length subject to the distance between the camera and the recognition objects. Estimating the focal length for a particular camera requires either calculators or a methodology provided by the camera manufacturer.

    cctv_focus_en

  3. The exposure must be adjusted so that the face images are sharp (“in focus”), non-blurred, and evenly lit (not overlit or too dark).

    cctv_exposure_en

  4. For imperfect lighting conditions such as flare, too bright or too dim illumination, choose cameras with WDR hardware (Wide Dynamic Range) or other technologies that provide compensation for backlight and low illumination. Consider BLC, HLC, DNR, high optical sensitivity, Smart infrared backlight, AGC, and such.

    cctv_light_en

  5. Video compression: most video formats and codecs that FFmpeg can decode.

  6. Video stream delivery protocols: RTSP, HTTP.

Tip

To calculate the precise hardware configuration tailored to your purposes, contact our experts at support@ntechlab.com.

CCTV Camera installation

  1. For correct face detection in a video stream, mount the camera so that the face of each individual entering the monitored area surely appears in the camera field of view.

  2. The vertical tilt angle of the camera should not exceed 15°. The vertical tilt is a deviation of the camera’s optical axis from the horizontal plane, positioned at the face center’s level for an average height person (160 cm).

    cctv_man_en

  3. The horizontal deflection angle should not exceed 30°. The horizontal deflection is a deviation of the camera’s optical axis from the motion vector of the main flow of objects subject to recognition.

    cctv_angle_en

Vehicle recognition

CCTV Camera characteristics

General characteristics

FindFace Lite requires the configuration described in the tables below.

Object in frame requirements

  • Object size: vehicle width – minimal >= 80 px, recommended >= 120 px.

  • Object size: license plate number width – minimal >= 100 px, recommended >= 150 px.

  • Object size: LPN + vehicle – minimal >= 340 px, recommended >= 340 px.

  • Object allowable overlap – minimal <= 30%, recommended <= 15%.

Camera requirements (for a digital image)

  • Matrix size – minimal >= 1/2.8, recommended >= 1/1.8.

  • Focal length – minimal >= 1.5 mm, recommended >= 4 mm.

  • Light sensitivity (color) – minimal <= 0.1 lux, recommended <= 0.05 lux.

  • TCP protocol – required in both cases.

  • Broadcast resolution – minimal >= 720x576, recommended >= 1920x1080.

  • Broadcast quality – minimal 3000-4000 kb/s, recommended >= 4000 kb/s.

  • Frame rate – minimal >= 15, recommended >= 50-60.

  • Shutter speed – minimal up to 1/100, recommended up to 1/500.

  • H.264 support – minimal H.264, recommended H.264 and H.265.

  • Keyframe frequency adjustment – required in both cases.

  • WDR support – minimal yes, recommended yes (up to 120 dB).

  • Aperture adjustment – minimal not required, recommended yes.

  • Focal length adjustment – minimal not required, recommended yes.

  • Mechanical IR filter – minimal not required, recommended yes.

  • ONVIF support – minimal not required, recommended yes.

Camera mounting (object in a frame allowable rotation)

  • Camera vertical angle – minimal <= 45°, recommended <= 30°.

  • Camera horizontal tilt angle (vehicle) – does not matter in either case.

  • Camera horizontal tilt angle (LPN) – minimal <= 30°, recommended <= 15°.

Lighting requirements

  • Illumination in the recognition zone – minimal >= 150 lux, recommended >= 200 lux.

  • Backlight compensation – minimal <= 200 lux, recommended <= 100 lux.

Broadcast camera settings requirements

  • Exposition: iris mode – auto; auto iris level – 50; exposure time – 1/200; gain – 25.

  • Camera focus – should be configured manually for a specific scene.

  • Backlight settings: BLC – OFF; WDR – OFF; HLC – ON (for barrier cameras, if collecting vehicle attributes is not required).

Additional characteristics
IR illumination

A highly recommended option for the camera. A license plate effectively reflects light, so in low-light conditions the camera's IR illumination will clearly highlight the car number.

Note

The IR illumination range stated by the equipment manufacturer (e.g., 10, 20, 50 meters) is the range at which the illumination fades out completely.

So the effective range is usually 30-40% lower. Please keep that in mind.

SmartIR

If there is an IR illuminator in the camera, the SmartIR option can greatly improve the image quality in low-light conditions.

SmartIR, or smart IR illumination control, allows you to reduce the backlight intensity if the subject is too close and the frame is overexposed.

With Smart IR

_images/smart.png

Without Smart IR

_images/nosmart.png
WDR (Wide Dynamic Range)

Use the WDR setting to balance the exposure of overly bright and dark areas of the frame.

_images/wdr.png
BLC (Backlight Compensation)

Use the BLC setting to correct backlight problems.

HLC (Highlight Compensation)

Use the HLC setting to compensate for overexposed areas. HLC automatically detects redundant light sources and reduces flare, greatly improving the clarity of bright areas.

When HLC is activated, the camera will process bright areas such as a spotlight and adjust the exposure accordingly.

With HLC the camera will try to expose the whole scene correctly, reducing the brightness of overlit areas.

_images/hls.png

CCTV camera installation

Objects in the frame should be detailed, in focus, not blurred, and have high contrast.

For proper analytics, follow the mounting guidelines below.

General installation recommendations
  • The distance from the camera to the recognition zone is arbitrary; cameras with appropriate lenses are selected depending on the distance.

  • The camera should be mounted on a fixed, rigid construction.

  • Avoid sunlight or excess light hitting the camera lens, as it can lead to image flare.

  • The object in the frame should be completely visible. The central camera axis should point at the center of the recognition zone, so that the object itself is in the center of the frame.

  • The image and camera lens should be clear and not blurry, without any visible distortion.

_images/general_mounting.png
Installation on barriers

  • Installation height (H) – 1.3-1.5 m.

  • Distance from camera to image corner (D) – 2.5-3 m.

  • Visible frame distance (L) – 2.5-5 m.

_images/barriers_gen.png

barriers_side_en

STEP 2. Server and Software Requirements

Server Requirements

FindFace Lite requires a different server configuration depending on the type of processed information (image or stream) and on the quantity of cameras and identity authentication terminals used.

Image processing

For image processing, when an identity authentication terminal detects a face and sends the result to FindFace Lite, the server requirements are the following:

  • 1-8 devices: 4 physical Intel cores >2.4 GHz, 6 GB RAM.

  • 8-16 devices: 4-6 physical Intel cores >2.4 GHz, 8 GB RAM.

  • 16-24 devices: 4 physical Intel cores >2.4 GHz, 8 GB RAM.

HD (720p) live streams processing

For detection while processing HD (720p) live streams (20 FPS), the server requirements are the following:

  • 1 device: 4 physical Intel cores >2.4 GHz, 8 GB RAM.

  • 5 devices: 8 physical Intel cores >2.4 GHz, 10 GB RAM.

  • 10 devices: 14 physical Intel cores >2.4 GHz, 16 GB RAM.

FHD (1080p) live streams processing

For detection while processing FHD (1080p) live streams (20-25 FPS), the server requirements are the following:

  • 1 device: 6 physical Intel cores >2.4 GHz, 8 GB RAM.

  • 5 devices: 16 physical Intel cores >2.4 GHz, 10 GB RAM.

  • 10 devices: 24 physical Intel cores >2.4 GHz, 16 GB RAM.

Software Requirements

  • Operating System – Ubuntu 18.04 x64, CentOS 7, and other similar OS.

  • Command Line – Linux command line only.

  • Docker Engine – version 19.03+.

  • Docker Compose – version 2.2.3+. To set up the Docker software correctly, please read STEP 3.

  • NVIDIA Container Toolkit – only for a GPU server; version 1.7.0+ (nvidia-docker2 >= 2.8.0).

STEP 3. Server preparation

CPU server preparation

To prepare a CPU server for FindFace Lite, please install Docker Engine (19.03+) and Docker Compose (2.2.3+).

Before you install Docker Engine (19.03+) and Docker Compose (2.2.3+) for the first time on a new host machine, you need to set up the Docker repository.

Steps from setting up the Docker repository to Docker and Docker Compose installation are described in the guides for Ubuntu OS and CentOS below.

Ubuntu OS

  1. Update apt and install packages for data encryption to use the repository over HTTPS:

sudo apt-get update
sudo apt-get install \
    ca-certificates \
    curl \
    gnupg \
    lsb-release
  2. Add the GPG key given by Docker. Use the command below:

sudo mkdir -p /etc/apt/keyrings
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg
  3. Add the Docker repository using the following command:

echo \
"deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu \
$(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
  4. Update apt:

sudo apt-get update
  5. Install Docker and the Docker Compose Plugin:

sudo apt-get install docker-ce docker-ce-cli containerd.io docker-compose-plugin
  6. Check that Docker was installed correctly using the command below, which downloads a test image and runs it in a container:

sudo docker run hello-world

The container will start, you will see the operation success message, and then the container will automatically stop.

_images/helloworld.png
  7. Make sure the Docker Compose Plugin is also installed correctly. Use the following command to check:

docker compose version
_images/docker_compose_v.png

Now all the requirements for FindFace Lite are met. Please go to the next STEP to upload the FindFace Lite installer and license to the server.

CentOS

  1. Install the yum-utils package and set up the repository. Use the command below:

sudo yum install -y yum-utils
sudo yum-config-manager \
    --add-repo \
    https://download.docker.com/linux/centos/docker-ce.repo
  2. Install Docker and the Docker Compose Plugin:

sudo yum install docker-ce docker-ce-cli containerd.io docker-compose-plugin

Note

If prompted to accept the GPG key, verify that the fingerprint matches 060A 61C5 1B55 8A7F 742B 77AA C52F EB6B 621E 9F35 value, and if so, accept it.

  3. Start Docker using the following command:

sudo systemctl start docker
  4. Check that Docker was installed correctly using the command below, which downloads a test image and runs it in a container:

sudo docker run hello-world

The container will start, you will see the operation success message, and then the container will automatically stop.

_images/helloworld.png
  5. Make sure the Docker Compose Plugin is also installed correctly. Use the following command to check:

docker compose version
_images/docker_compose_v.png

Now all the requirements for FindFace Lite are met. Please go to the next STEP to upload the FindFace Lite installer and license to the server.

GPU server preparation

To prepare a GPU server for FindFace Lite, please install the NVIDIA Container Toolkit.

Before you install the NVIDIA Container Toolkit for the first time on a new host machine, you need to prepare the server according to the requirements:

  • NVIDIA Linux drivers >= 418.81.07 (note that older driver releases or branches are unsupported, to install drivers go to the official NVIDIA guide);

  • NVIDIA GPU with Architecture >= Kepler;

  • Docker >= 19.03.

Steps from checking GPU server configuration to the Docker software and NVIDIA Container Toolkit installation are described in the guides for Ubuntu OS and CentOS below.

Ubuntu OS

  1. Check the GPU driver version using the command below:

nvidia-smi

Driver Version: should be >= 418.81.07.

_images/nvidia-smi.png

If it is not, please go to the official NVIDIA guide to install the drivers.

  2. Check the graphics card model using the command below:

nvidia-smi -L
  3. Verify that the graphics card architecture is >= Kepler. Find your graphics card model in the list below and check:

Architecture (from the oldest to the newest) – series:

  • Fermi – GeForce 400 and 500: GTX 480, GTX 470, GTX 580, GTX 570.

  • Kepler – GeForce 600 and 700: GTX 680, GTX 670, GTX 660, GTX 780, GTX 770.

  • Maxwell – GeForce 900: GTX 960, GTX 970, GTX 980.

  • Pascal – GeForce 1000: GTX 1050, 1050 Ti, 1060, 1080.

  • Turing – GeForce RTX 2000 and GTX 1600: GTX 1660, GTX 1650, RTX 2060, RTX 2080.

  • Ampere – GeForce RTX 3080, RTX 3090, RTX 3070, etc.

  4. Install the latest version of Docker using the command below:

curl https://get.docker.com | sh \
  && sudo systemctl --now enable docker

Warning

If the command doesn’t work, please follow all the steps of Docker installation for a CPU server.

  5. Set up the NVIDIA package repository and the GPG key:

distribution=$(. /etc/os-release;echo $ID$VERSION_ID) \
  && curl -fsSL https://nvidia.github.io/libnvidia-container/gpgkey | sudo gpg --dearmor -o /usr/share/keyrings/nvidia-container-toolkit-keyring.gpg \
  && curl -s -L https://nvidia.github.io/libnvidia-container/$distribution/libnvidia-container.list | \
        sed 's#deb https://#deb [signed-by=/usr/share/keyrings/nvidia-container-toolkit-keyring.gpg] https://#g' | \
        sudo tee /etc/apt/sources.list.d/nvidia-container-toolkit.list
  6. Update the apt package index:

sudo apt-get update
  7. Install the nvidia-docker2 package and dependencies:

sudo apt-get install -y nvidia-docker2
  8. Restart the Docker daemon to complete the installation after setting the default runtime:

sudo systemctl restart docker
  9. Check the runtime in the config:

cat /etc/docker/daemon.json

If nvidia-container-runtime is present in the output, the installation is successful.

_images/nvidia-deamon.png

Now all the requirements for FindFace Lite are met. Please go to the next STEP to upload the FindFace Lite installer and license to the server.

CentOS

  1. Check the GPU driver version using the command below:

nvidia-smi

Driver Version: should be >= 418.81.07.

_images/nvidia-smi.png

If it is not, please go to the official NVIDIA guide to install the drivers.

  2. Check the graphics card model using the command below:

nvidia-smi -L
  3. Verify that the graphics card architecture is >= Kepler. Find your graphics card model in the list below and check:

Architecture (from the oldest to the newest) – series:

  • Fermi – GeForce 400 and 500: GTX 480, GTX 470, GTX 580, GTX 570.

  • Kepler – GeForce 600 and 700: GTX 680, GTX 670, GTX 660, GTX 780, GTX 770.

  • Maxwell – GeForce 900: GTX 960, GTX 970, GTX 980.

  • Pascal – GeForce 1000: GTX 1050, 1050 Ti, 1060, 1080.

  • Turing – GeForce RTX 2000 and GTX 1600: GTX 1660, GTX 1650, RTX 2060, RTX 2080.

  • Ampere – GeForce RTX 3080, RTX 3090, RTX 3070, etc.

  4. Set up the official Docker CE repository:

sudo yum-config-manager --add-repo=https://download.docker.com/linux/centos/docker-ce.repo
  5. Install the containerd.io package:

sudo yum install -y https://download.docker.com/linux/centos/7/x86_64/stable/Packages/containerd.io-1.4.3-3.1.el7.x86_64.rpm
  6. Install the latest version of Docker using the command below:

sudo yum install docker-ce -y
  7. Ensure the Docker service is running with the following command:

sudo systemctl --now enable docker
  8. Test your Docker installation by running the hello-world container:

sudo docker run --rm hello-world

The container will start, you will see the operation success message, and then the container will automatically stop.

_images/helloworld.png
  9. After Docker is installed, continue with the NVIDIA installation. Set up the repository and the GPG key using the command below:

distribution=$(. /etc/os-release;echo $ID$VERSION_ID) \
    && curl -s -L https://nvidia.github.io/libnvidia-container/$distribution/libnvidia-container.repo | sudo tee /etc/yum.repos.d/nvidia-container-toolkit.repo
  10. Refresh the yum cache:

sudo yum clean expire-cache
  11. Install the nvidia-docker2 package and dependencies:

sudo yum install -y nvidia-docker2
  12. Restart the Docker daemon to complete the installation after setting the default runtime:

sudo systemctl restart docker
  13. Check the runtime in the config:

cat /etc/docker/daemon.json

If nvidia-container-runtime is present in the output, the installation is successful.

_images/nvidia-deamon.png

Now all the requirements for FindFace Lite are met. Please go to the next STEP to upload the FindFace Lite installer and license to the server.

STEP 4. FindFace Lite installer and the license uploading to the server

In this step we ask you to put the FindFace Lite installer and the product license on the machine you are going to use for FindFace Lite. It can be either a virtual or a physical server, or a PC meeting the requirements.

To upload the installer and the license from the local machine to the server, use one of the options: the scp command or an SFTP file client.

Warning

It is important to put the FindFace Lite installer and the license in the same directory.

Use scp command

Scp allows you to securely copy files between two locations using SSH for encryption.

Warning

To use the scp command, you need to have a few things in place:

  • SSH installed on both the source and the target machines.

  • Root access to both source and target machines.

The scp syntax differs depending on the authentication you use for the SSH connection to the target machine.

Below you will find descriptions of both variants.

General scp syntax

scp [[user@]src_host:]file1 [[user@]dest_host:]file2

Where:

  • scp initializes the command and ensures a secure shell is in place.

  • src_host is the source host, where the file is located.

  • dest_host is the target host, where the file will be copied to.

Scp syntax for login and password SSH authentication

The format of scp command for uploading from the local machine to the server is the following:

scp location/file_name.ext username@destination_host:/location

Where:

  • scp initializes the command and ensures a secure shell is in place.

  • location/file_name.ext is the path to the file, which you want to copy, and its full name.

  • username@destination_host are the connection credentials: username is the username, destination_host is the server IP address.

  • /location is the path where to put the file copy.

Command example:

scp NtechLab_license_512b25bdcd334b44b87ccf5f089215b9.lic azureuser@00.00.000.000:home/azureuser

After you run the command, you will be asked for a password. Enter it, but note that it won’t be displayed.

Command result:

azureuser@00.00.000.000’s password:

NtechLab_license_512b25bdcd334b44b87ccf5f089215b9.lic …………………..100% 3672KB 126.6KB/s 00:29

Scp syntax for secret and public SSH keys authentication

The format of scp command for uploading from the local machine to the server is the following:

scp -i ~/.ssh/private_key_name location/file_name.ext username@destination_host:/location

Where:

  • scp initializes the command and ensures a secure shell is in place.

  • ~/.ssh/private_key_name is the path to the private key in ssh folder. This part of the command is responsible for the authentication to the server.

  • location/file_name.ext is the path to the file, which you want to copy, and its full name.

  • username@destination_host are the connection credentials: username is the username, destination_host is the server IP address.

  • /location is the path where to put the file copy.

Command example:

scp -i ~/.ssh/private NtechLab_license_512b25bdcd334b44b87ccf5f089215b9.lic azureuser@00.00.000.000:/home/azureuser

Command result:

azureuser@00.00.000.000:

NtechLab_license_512b25bdcd334b44b87ccf5f089215b9.lic ………………….100% 3672KB 126.6KB/s 00:29

Use an SFTP file manager

  1. Install one of the file managers, for example FileZilla Client, which is available for Windows, Linux, and macOS.

  2. Connect to the server with a username and a server IP address.

  3. Move the files you need to the server using the GUI.

STEP 5. FindFace Lite installation

Installation

  1. Change the FindFace Lite installer file mode to executable using the command below:

chmod +x fflite-{cpu|gpu}-master-g{git_hash}.run

Where fflite-{cpu|gpu}-master-g{git_hash}.run is the FindFace Lite installer name.

  2. Run the installer file:

sudo  ./fflite-{cpu|gpu}-master-g{git_hash}.run
  3. The installer interface will open within the command line. Press [Next].

_images/screen1.png
  4. Wait until the validator checks the software settings and press [Next].

_images/screen2.png
  5. After the status check, the installer will start the component installation. Wait until the installation is completed and press Enter.

_images/screen3.png
  6. You will see your personal authorization information.

Warning

Save the displayed information for future use.

_images/screen4.png
  7. Installation is finished.

Press the [Exit] button; the path to the installation log file will be displayed.

In this log file you can find the credentials for FindFace Lite.

When the installation is finished

Try the FindFace Lite service using the API and UI:

  • The interactive API documentation is located at http://<your_hostname>/api-docs.

_images/api.png
  • The UI is located at http://<your_hostname>.

_images/UI.png

Config Settings

You can fine-tune FindFace Lite by making changes in the configuration file located at FFlite -> api_config.yml.

The configuration file includes blocks with information about:

  • app — API configuration;

  • eapi — eapi address;

  • eapi_license_plate – license plate eapi address;

  • vm — vm address and credentials;

  • db — db address and credentials.

How to work with config settings

Open the file located at FFlite -> api_config.yml using any text editor (nano, vim, etc.) and change the necessary settings.

In the sections below you will find full information about each block and the possible setting values.

Warning

Please read the descriptions of the settings below to be sure of the result.

Apply new settings by restarting the api service with the following command:

docker compose restart api
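
For orientation, here is a sketch of what the overall structure of api_config.yml might look like, filled with the default values listed in the tables below; the exact layout and the full set of keys in your installation may differ:

app:
  host: 0.0.0.0
  port: 8000
  debug: false
  secret_key: change_me
  face_confidence_threshold: 0.714
  car_confidence_threshold: 0.65
  face_features:
    - headpose
    - liveness
  dedup_enabled: true
  save_dedup_event: false

eapi:
  host: eapi
  port: 18666

eapi_license_plate:
  host: eapi-license-plate
  port: 18667

vm:
  host: vm
  port: 18810
  token: GOOD_TOKEN

db:
  host: postgres
  port: 5432
  user: fflite
  password: fflite
  database: fflite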

App configuration

Settings possible values

Settings

Possible values

host

0.0.0.0 – default value.

port

8000 – default value.

debug

false (default) – debug mode is disabled;

true – debug mode is enabled.

router_base_url

http://nginx – default value.

media_root

/uploads – default value.

fullframe_root

/fullframe – default value.

normalized_root

/normalized – default value.

save_fullframe

false – fullframe images will not be saved on the disk.

true (default) — fullframe images will be saved on the disk.

save_normalized

false (default) – normalized images will not be saved on the disk.

true — normalized images will be saved on the disk.

secret_key

change_me – default value.

max_event_age_days

20 – default value.

face_confidence_threshold

0.714 – default value.

car_confidence_threshold

0.65 – default value.

webhook_workers_num

10 – default value.

exit_on_availability_check_fail

false – API service will retry to reach necessary resources until success with exponential timeouts;

true (default) – API service will exit if any of necessary resources are not available.

event_creation_token

change_me – default value.

event_creation_response_type

serialized (default) – the response will contain full information about a created Event, including matched Card, path to the fullframe image, etc.

id – the response will contain only ID of a created Event.

serialized_verbose – the response will contain full information about a created Event (including a matched Card, path to the fullframe image etc) and full information about the Card.

face_features

headpose – position of a head;

medmask – detection of a medmask;

liveness – liveness detection.

car_features

orientation – recognition of the car orientation (front or rear part).

special_types – recognition of the car type.

license_plate_visibility – recognition of license plate visibility.

liveness_source

eapi – liveness detection from an image. This value must be set if you want to create Events via POST /{object_type}/add.

vw – liveness detection from a video stream.

auth_enabled

false – authorization is disabled;

true (default) – authorization is enabled.

access_token_expire_minutes

43200 – default value.

dedup_enabled

false – deduplication of Events is disabled;

true (default) – deduplication of Event is enabled.

save_dedup_events

false (default) – Event duplicates will not be saved.

true – Event duplicates will be saved.

face_dedup_interval

5 – default value.

face_dedup_confidence

0.9 – default value.

car_dedup_interval

5 – default value.

car_dedup_confidence

0.9 – default value.

Settings description

Settings

Description

host

Host information

port

Port information

debug

Debug mode. Currently manage only debug logs.

router_base_url

Router base URL for VM. Please, change only if you are sure.

media_root

Root directory for media files, which stores the Objects images.

fullframe_root

Root directory for fullframe files, which stores frame images from VideoWorker (vw).

normalized_root

Root directory for normalized files, which stores files used for migrations between models in database.

save_fullframe

Saving settings of fullframe images.

save_normalized

Saving settings of normalized images.

secret_key

A secret key, needed for security operations.

max_event_age_days

Maximum time, which Event is stored. After the expiration of the set time period, the Event is deleted.

face_confidence_threshold

Value of confidence threshold, according to which an Event matches or not with a Card during the face matching. If matching score is more than set value, Event is matched with a Card.

car_confidence_threshold

Value of confidence threshold, according to which an Event matches or not with a Card during the car matching. If matching score is more than set value, Event is matched with a Card.

webhook_workers_num

Number of concurrent webhook workers sending requests to webhook targets.

exit_on_availability_check_fail

Behaviour of API service in case of unavailability of necessary resources.

event_creation_token

A token used for external detector authentication during Event creation (/{object_type}/add). The JWT token does not apply to this request.

event_creation_response_type

Response verbosity of Event creation (/{object_type}/add) request.

face_features

Features of FindFace Lite for face recognition. A feature is activated if it is listed in the value. After feature activation freshly created events will be populated with corresponding feature.

car_features

Features of FindFace Lite for car recognition. A feature is activated if it is listed in the value. After feature activation freshly created events will be populated with corresponding feature.

liveness_source

Source of liveness detection.

auth_enabled

Authorization management. Note that all API calls (with some exceptions) will require the Authorization header with JWT <token>.

access_token_expire_minutes

Time of access token expiration interval.

dedup_enabled

Events deduplication managing.

save_dedup_event

Events duplicates saving settings.

face_dedup_interval

A time interval in seconds during which Events with the same person will be considered as duplicates.

face_dedup_confidence

Confidence of matching between 2 Events to be considered duplicates.

car_dedup_interval

Time interval in seconds during which Events with the same car will be considered as duplicates.

car_dedup_confidence

Confidence of matching between 2 Events to be considered duplicates.

EAPI configuration

Settings

Possible values

Description

host

eapi – default value.

Host information

port

18666 – default value.

Port information

License plate EAPI configuration

Settings

Possible values

Description

host

eapi-license-plate – default value.

Host information

port

18667 – default value.

Port information

VM configuration

Settings

Possible values

Description

host

vm – default value.

Host information

port

18810 – default value.

Port information

token

GOOD_TOKEN – default value.

Token for processes connected with vm. Should be the same as token in vm.conf file.

DB configuration

Settings

Possible values

Description

host

postgres – default value.

Host information

port

5432 – default value.

Port information

user

fflite – default value.

Credentials to access database with the name from database setting.

password

fflite – default value.

database

fflite – default value.

Database name.

API

The FindFace Lite API is located at http://<your_hostname>/api-docs. It is interactive, which means that you can make requests and get responses right on this page.

The API documentation allows you to read, create, update, and delete all entities and provides descriptions for all methods and parameters.

In this article we give an overview of the blocks of FindFace Lite functionality accessible via the API and of how to use the interactive API.

Preparation to API usage

Before using the FindFace Lite API, please authenticate yourself by creating a JWT token in the AUTHENTICATION section.

Enter the username and password from STEP 5 of the Getting started block into the form and click the SET TOKEN button.

_images/authentification.png

After authentication you can use the interactive FindFace Lite API.

Note

If you need to use API requests outside the interactive API, please use the created token.
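
For example, from the command line the token could be obtained and used roughly like this; the host name and credentials are placeholders, and the GET request on /v1/cards/ is shown only as an assumed illustration of passing the Authorization header:

curl -X POST "http://<your_hostname>/v1/auth/login" \
  -H "Content-Type: application/json" \
  -d '{"username": "login", "password": "password"}'
# the response contains the token: {"access_token": "<token>"}

curl "http://<your_hostname>/v1/cards/" \
  -H "Authorization: JWT <token>"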

API usage overview

The API page is divided into two parts: the left part is a list of operations and the right part is the operation execution field.

_images/api.png

Each operation consists of a Request and a Response part:

  • The Request part describes the operation, including the request schema with the interpretation of each parameter, a request example, and the TRY button, which sends the request.

_images/request.png
  • The Response part describes the schema and examples of each response variant for the given operation.

_images/response_ex.png

After the TRY button is clicked, the request is sent and you will see the response block with the response status and detailed information.

_images/response_status.png

FindFace Lite API functionality

The API documentation can be divided into semantic blocks, which contain all requests (to get, add, edit, and delete entities):

  1. Recognition flow

    • The Camera block manages the Camera object, which is a representation of any video stream (it can also be a file). An active Camera receives detection data from VideoWorker and converts it to Events.

    • The Event block manages the Event object, which is a representation of an object (face or car) occurrence in the camera frame. With an active Camera, an Event is automatically created from VideoWorker detection data. You or any 3rd-party system can also create it outside the main flow using a POST request.

    • The Card block manages the Card object, which is a profile of a real person or a car. A Card can be of one of two types: face or car.

    • The Object block manages the Object, which is a representation of a particular face or car. To create it, you have to add the image and link it to the Card.

    flow_full

  2. External system interaction

    • The Webhook block can be used to notify external systems about Events and matches.

  3. Authentication and user management

    • The Auth block describes methods used for authentication.

    • The User block manages FindFace Lite users.

  4. System operations

    • The Misc block contains miscellaneous requests connected with service needs.

    • The Pipeline block contains internal methods for VideoWorker and is normally not used in the usual flow.

Edge Devices

What are edge devices

Edge devices are physical devices (e.g., identity authentication terminals) that can connect to FindFace Lite and send it images for object recognition. In order for these devices to get the recognition results, webhooks should be set up.

The recognition process for edge devices is the same as for CCTV cameras; the only difference is that Events are created directly by an edge device via a POST request rather than by the FindFace Lite system.

Preparation to recognition process

Before edge device integration, authenticate a device and create Objects and Cards, which will be compared with the created Events.

Authenticate a device

To execute all operations (except those connected with Events), authenticate the device in the system using the /v1/auth/login POST request. For Events, the token from the config file is used instead.

For the username and password parameters, use the data you got in STEP 5.

Request example:

{
  "username": "login",
  "password": "password"
}

Successful response example:

{
   "access_token": "token"
}

Create a Card

A Card is used for keeping several Objects of a person or a car under one profile. In the recognition process, the Card is treated as the result.

To create a Card, use the /v1/cards/ POST request. All parameters are described below.

  • name (string) – the name of a Card.

  • active (true or false) – true: the Card is enabled; false: the Card is disabled.

  • type (face or car) – face: the Card is created for face recognition; car: the Card is created for car recognition.

  • wiegand (string) – Wiegand code.

Request example:

{
  "name": "test card",
  "active": true,
  "type": "face",
  "wiegand": "test wiegand code"
}

Successful response example:

{
  "name": "test card2",
  "active": true,
  "type": "face",
  "wiegand": "test wiegand cod2e",
  "id": 2,
  "objects": []
}
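
For reference, the same Card creation request could be sent from the command line roughly as follows; the host name and token are placeholders, and the Authorization header format follows the auth_enabled setting description:

curl -X POST "http://<your_hostname>/v1/cards/" \
  -H "Authorization: JWT <token>" \
  -H "Content-Type: application/json" \
  -d '{"name": "test card", "active": true, "type": "face", "wiegand": "test wiegand code"}'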

Create an Object

Objects represent a particular face or car. To create one, add the image and link it to a Card using the /v1/objects/ POST request.

All parameters are described below.

  • card_id (number) – the ID of the Card you want to connect with the Object. Several Objects can be connected with one Card.

  • type (enum) – the type of the Object you want to create: car is for a car image, face is for a face image, and license_plate is for a license plate image.

  • input_file (string) – the name of the file which contains the face or car you would like to add to the database of Objects.

Request example:

{
  "card_id": "2",
  "type": "face",
  "input_file": "somehash.jpg"
}

Successful response example:

{
  "id": 1,
  "emben": "vV1yPfc2izy...de8vY/bvNXLfDw=",
  "type": "face",
  "card": 4,
  "filename": "somehash.jpg"
}

Edge Devices integration

To integrate an edge device with FindFace Lite, use the API. All operations are described below.

Create a Camera

Create a Camera object using the /v1/cameras/ POST request.

Note

The Camera object is only needed for further creation of Event objects. It is not used in the recognition flow via edge devices.

Here is the description of the needed parameters. The parameters that are not included describe settings for streams; as you do not have any streams, you can ignore them.

  • name – the name for the Camera object. You can choose any.

  • url – the URL of an added stream. As you don’t have a stream, set any value beginning with rtmp://.

  • active – the Camera object status. Set it to disabled – false.

  • stream_settings – settings for streams from the CCTV camera. You have to fill in only the mandatory parameter – detectors.

  • detectors – detector settings. You have to include this parameter in the request, but you can leave it empty.

Request example:

{
  "name": "Edge device",
  "url": "rtmp://none",
  "active": false,
  "stream_settings": {
      "detectors": {
      }
  }
}

Successful response example:

{
  "name": "Edge device",
  "url": "rtmp://none",
  "active": false,
  "single_pass": false,
  "stream_settings": {
      "rot": "",
      "play_speed": -1,
      "disable_drops": false,
      "ffmpeg_format": "",
      "ffmpeg_params": [],
      "video_transform": "",
      "use_stream_timestamp": false,
      "start_stream_timestamp": 0,
      "detectors": {}
  },
  "id": 2,
  "status": "UNKNOWN"
}

Configure the Edge Device

Warning

After this step the edge device will be able to send data to FindFace Lite for recognition. To get the results back, you should set up a Webhook.

Configure the edge device to send Events via /v1/events/{object_type}/add POST requests. An Event, in the case of an edge device, is a representation of an object (face or car) occurrence in the edge device zone, which is sent as a static file to FindFace Lite via the API.

Here is the description of the parameters the edge device should send to the FindFace Lite API.

  • object_type (car, face, or license_plate) – the path parameter in /v1/events/{object_type}/add, which specifies the object of recognition.

  • token (string) – authorization via the event_creation_token set in the configuration file.

  • camera (number) – the Camera ID with which the Event will be connected.

  • fullframe (binary) – an image in any format suitable for static content (jpeg, png, etc.).

  • rotate (true or false) – true: the image rotation technology is enabled; the system checks the objects’ position and tries to rotate the image if the objects are upside down. false: the technology is disabled.

  • timestamp (date-time) – the date-time in the ISO format yyyy-MM-dd’T’HH:mm:ss.SSSXXX, for example: 2000-10-31T01:30:00.000-05:00.

  • mf_selector (all or biggest) – all: the multiface selector is enabled and all objects in the Event are detected; biggest: the multiface selector is disabled and only the biggest object in the image is detected.

  • roi (numbers) – the region of interest, i.e., the image detection area. Specify the value in the [left, top, right, bottom] format, where the values in the brackets are numbers.

Request example:

{
    "object_type": "face",
    "token": "change_me",
    "camera": 2,
    "fullframe": "somehash.jpg",
    "rotate": true,
    "timestamp": "2000-10-31T01:30:00.000-05:00",
    "mf_selector": "biggest",
    "roi": 15,20,12,14
}

Successful response example:

The response view will differ depending on the value of the event_creation_response_type setting in the config file.

  • If id is set, you will see only the IDs of the created Events.

{
  "events": [
    "cc04cc9c-f355-4121-80c4-94a02eec652a",
    "c7d51db3-5b52-4318-9565-e2651308c1a6"
  ]
}
  • If serialized is set, you will see full information about a created Event, including matched Card, path to the fullframe image, etc.

{
  "events": [
    {
      "bbox_bottom": 97,
      "bbox_left": 170,
      "bbox_right": 214,
      "bbox_top": 39,
      "bs_type": "realtime",
      "camera": 1,
      "card": null,
      "confidence": null,
      "created_date": "2022-12-29 13:02:07.910724+00:00",
      "emben": "bmY3Pff9Grt1Ah09lp8kvn+a6Tw8SZs8K5xtvLOjtrxFtJ+9d5WIPH3PHL39acg9oNWhu4Mv2j2VjPo8QqDjubiFkz05Bou9SywUvMZ39bxYIhs9ucWxPTbApD3n8468/aQBvfdqFD2/woc9j03iO5U3vT1P6ya9BfNyPUCBkz1Smmm8CIPvPRPxWTzWXxo8DwRGvMxfp7zRhGw8KyZzPtwoCT0Bx7C9AcKWvflgUb2NLWQ9KmmjPUJ83D3XFVY9wdO7vX7/BD7OU8M9grEbPVTZCb3mgXg+LxfEvdm6uL2wLh08BU6yPQhREz0kj1M98tY+PNbA9D1MDC07Tp4dPjh7n7zZAQS9/JFzPWEJCb1CTwU+3deFvQb/hz2YaAa+Qbjpvd0UFb7HVtG9NEhLPMfS1rouTk49f6DTOy9r/j373aG7hBa3PW/eJT5Zuz08cO+2PZXVDL3hemE8sYa0PY7Xtj06NAG+Stw4vrhQhb2KtJe9J8hCvTM/Gr4vr0m9S3jJvfnBHz2QVgk97HuzvH2V/71lQsk9l4UTvqE3XD1ssos9ErvcvY5CCr6ftJ+9EOarPOKvKr0JFa69sdyCvQPcuj2nLtO89I+iuzS+aj1bTgY+qT4dPtREVz2iwLu9pM45PTjxe7yOyOm88t74PXgj3b0FDKA9uiaRvcs1Ez5dchS+5lTpO7D3Tb3CVLw8zrQwvczzq7wPVEo9OeEgvORUOz1x2Qk9pWKDPTdDjT24pXQ9MCWJPRe8iD1Vxse7iNmivY+coT2VRrG93XugPeZlrr3LSoU9iwpcPTF2uT2Bwei8ffa+u6JwLTwAk8M8NfAaveDZqT3AchA9Y/rVPA==",
      "features": {
        "headpose": {
          "pitch": 11.207756,
          "roll": 3.1429868,
          "yaw": 4.652887
        },
        "liveness": 0.6584981
      },
      "fullframe": "2022/12/29/13/c436c2d92c4c627d5c6d13f9f1d9555a.jpg",
      "type": "face",
      "uuid": "2f25dd19-d0cd-4b44-9147-69a7dc57450e"
    },
    {
      "bbox_bottom": 75,
      "bbox_left": 57,
      "bbox_right": 100,
      "bbox_top": 15,
      "bs_type": "realtime",
      "camera": 1,
      "card": 1,
      "confidence": 0.8306973576545715,
      "created_date": "2022-12-29 13:02:07.917896+00:00",
      "emben": "yRukvSOo1r2Bcm699SGTO/SPBz64MvK9xINqPYDge708nSW9ba4BvAVgLz77ctw7A2OhvZ5LRD2rA1488DLnvSKQXT0ER367zf20vdypqb2Lhog8nIxjPa7E9rwqCuI9+lHAPDjvWLzxJko+aXkRPBgWfr1u9pi982G8PW4FEr3vlqM91VW5vXRxoz3B4OE9kLLPPfznu71po4w9sPVFPR4tH73Qe4+9/wKlvVlwr724Ll09LdqYPeOq170bDua8Zn2lPNy8dr0TF9G7VhMXPT6yQz0aCI49h8OJPQVTKb25oB+9x9++PEKCFj4uq0i8bBhoPbJPVjxkCgO+drV/PR3mrbxs+rW7TQqNO4QcFz2oI407H4nfvY/nQD57Y2m9ItFMPZQKibzobRi6cf9wPT1itz0lkw89qUv0vS9RVDxjGoC9E3SiOxqSsbzjnyc+P4ZnPpFjEz5XMZE8IuILvYvgQjwYu/A9waicOx9On7z0kW+9k7EmvIxuuLwdPPo9t5H3vQETLz6FcGQ+1fOqvfkwKz0rfSO9ckoivV65k70xw5296raIvYnnE70gaYa8IE1pvQZ+tr0VpJM8oAAWO8lU8zxlaai9WbnJvYPgHL29ouy8GyALvsoj1D1BiHg4+F2xPZlVhTyiiIS8eZFCPvUfTD4UcNQ9j2bqvUjSlbySk1+9q4ljvdcqyTzCnLa7hDgLvcb1oL2ScLo8GSYqvZW82b1Ppma9Ni2ePcrWQj34xxs8WU+2vbGUuD1+r0S9wieuPUfEkLthyCE9iZ3oOy9TED2RAVW93nXMvcVoAz1plvK9n8UOPUV3grvH+yO9DEohPAMYDD0hd+698x6TvQ==",
      "features": {
        "headpose": {
          "pitch": -2.1808214,
          "roll": -0.5856089,
          "yaw": -4.5041146
        },
        "liveness": 0.5574532
      },
      "fullframe": "2022/12/29/13/4ef96c620d738d87c00aaaaa12fccca2.jpg",
      "type": "face",
      "uuid": "9b718d45-919a-490f-9fe6-b2af58cbf83a"
    }
  ]
}
  • If serialized_verbose is set, you will see full information about each created Event (the path to the fullframe image, etc.) together with full information about the related Camera and Card.

{
  "events": [
    {
      "bbox_bottom": 97,
      "bbox_left": 170,
      "bbox_right": 214,
      "bbox_top": 39,
      "bs_type": "realtime",
      "camera": {
        "active": false,
        "id": 1,
        "name": "test camera",
        "single_pass": false,
        "status": "DISABLED",
        "url": "rtmp://test"
      },
      "card": null,
      "confidence": null,
      "created_date": "2022-12-29 13:48:34.624541+00:00",
      "emben": "bmY3Pff9Grt1Ah09lp8kvn+a6Tw8SZs8K5xtvLOjtrxFtJ+9d5WIPH3PHL39acg9oNWhu4Mv2j2VjPo8QqDjubiFkz05Bou9SywUvMZ39bxYIhs9ucWxPTbApD3n8468/aQBvfdqFD2/woc9j03iO5U3vT1P6ya9BfNyPUCBkz1Smmm8CIPvPRPxWTzWXxo8DwRGvMxfp7zRhGw8KyZzPtwoCT0Bx7C9AcKWvflgUb2NLWQ9KmmjPUJ83D3XFVY9wdO7vX7/BD7OU8M9grEbPVTZCb3mgXg+LxfEvdm6uL2wLh08BU6yPQhREz0kj1M98tY+PNbA9D1MDC07Tp4dPjh7n7zZAQS9/JFzPWEJCb1CTwU+3deFvQb/hz2YaAa+Qbjpvd0UFb7HVtG9NEhLPMfS1rouTk49f6DTOy9r/j373aG7hBa3PW/eJT5Zuz08cO+2PZXVDL3hemE8sYa0PY7Xtj06NAG+Stw4vrhQhb2KtJe9J8hCvTM/Gr4vr0m9S3jJvfnBHz2QVgk97HuzvH2V/71lQsk9l4UTvqE3XD1ssos9ErvcvY5CCr6ftJ+9EOarPOKvKr0JFa69sdyCvQPcuj2nLtO89I+iuzS+aj1bTgY+qT4dPtREVz2iwLu9pM45PTjxe7yOyOm88t74PXgj3b0FDKA9uiaRvcs1Ez5dchS+5lTpO7D3Tb3CVLw8zrQwvczzq7wPVEo9OeEgvORUOz1x2Qk9pWKDPTdDjT24pXQ9MCWJPRe8iD1Vxse7iNmivY+coT2VRrG93XugPeZlrr3LSoU9iwpcPTF2uT2Bwei8ffa+u6JwLTwAk8M8NfAaveDZqT3AchA9Y/rVPA==",
      "features": {
        "headpose": {
          "pitch": 11.207756,
          "roll": 3.1429868,
          "yaw": 4.652887
        },
        "liveness": 0.6584981
      },
      "fullframe": "2022/12/29/13/5e870f4f9dbd1e27652f6384663b8cab.jpg",
      "type": "face",
      "uuid": "df0821b4-6e52-4b66-abd2-0f642e2a090a"
    },
    {
      "bbox_bottom": 75,
      "bbox_left": 57,
      "bbox_right": 100,
      "bbox_top": 15,
      "bs_type": "realtime",
      "camera": {
        "active": false,
        "id": 1,
        "name": "test camera",
        "single_pass": false,
        "status": "DISABLED",
        "url": "rtmp://test"
      },
      "card": {
        "active": true,
        "id": 1,
        "name": "test card",
        "objects": [],
        "type": "face",
        "wiegand": "test wiegand code"
      },
      "confidence": 0.8306973576545715,
      "created_date": "2022-12-29 13:48:34.633562+00:00",
      "emben": "yRukvSOo1r2Bcm699SGTO/SPBz64MvK9xINqPYDge708nSW9ba4BvAVgLz77ctw7A2OhvZ5LRD2rA1488DLnvSKQXT0ER367zf20vdypqb2Lhog8nIxjPa7E9rwqCuI9+lHAPDjvWLzxJko+aXkRPBgWfr1u9pi982G8PW4FEr3vlqM91VW5vXRxoz3B4OE9kLLPPfznu71po4w9sPVFPR4tH73Qe4+9/wKlvVlwr724Ll09LdqYPeOq170bDua8Zn2lPNy8dr0TF9G7VhMXPT6yQz0aCI49h8OJPQVTKb25oB+9x9++PEKCFj4uq0i8bBhoPbJPVjxkCgO+drV/PR3mrbxs+rW7TQqNO4QcFz2oI407H4nfvY/nQD57Y2m9ItFMPZQKibzobRi6cf9wPT1itz0lkw89qUv0vS9RVDxjGoC9E3SiOxqSsbzjnyc+P4ZnPpFjEz5XMZE8IuILvYvgQjwYu/A9waicOx9On7z0kW+9k7EmvIxuuLwdPPo9t5H3vQETLz6FcGQ+1fOqvfkwKz0rfSO9ckoivV65k70xw5296raIvYnnE70gaYa8IE1pvQZ+tr0VpJM8oAAWO8lU8zxlaai9WbnJvYPgHL29ouy8GyALvsoj1D1BiHg4+F2xPZlVhTyiiIS8eZFCPvUfTD4UcNQ9j2bqvUjSlbySk1+9q4ljvdcqyTzCnLa7hDgLvcb1oL2ScLo8GSYqvZW82b1Ppma9Ni2ePcrWQj34xxs8WU+2vbGUuD1+r0S9wieuPUfEkLthyCE9iZ3oOy9TED2RAVW93nXMvcVoAz1plvK9n8UOPUV3grvH+yO9DEohPAMYDD0hd+698x6TvQ==",
      "features": {
        "headpose": {
          "pitch": -2.1808214,
          "roll": -0.5856089,
          "yaw": -4.5041146
        },
        "liveness": 0.5574532
      },
      "fullframe": "2022/12/29/13/6041c2a71f4e2020d4cbaa52ce9b41f8.jpg",
      "type": "face",
      "uuid": "4dad4c16-f1cd-4ff1-a18a-268b71c1dbec"
    }
  ]
}
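
For reference, below is a minimal Python sketch that sends the event creation request shown above. The base URL and the endpoint path are placeholders (the actual event creation endpoint is described earlier in this document); the token and fullframe values are the sample values from the request example.

import requests

# Placeholders: substitute your FindFace Lite address and the event creation
# endpoint described earlier in this document.
BASE_URL = "http://localhost"
EVENT_ENDPOINT = "/v1/events/"  # hypothetical path, shown only for illustration

payload = {
    "object_type": "face",
    "token": "change_me",                           # edge device token
    "camera": 2,
    "fullframe": "somehash.jpg",                    # previously uploaded fullframe image
    "rotate": True,
    "timestamp": "2000-10-31T01:30:00.000-05:00",
    "mf_selector": "biggest",
    "roi": [15, 20, 12, 14],                        # [left, top, right, bottom]
}

response = requests.post(BASE_URL + EVENT_ENDPOINT, json=payload, timeout=10)
response.raise_for_status()

# The structure of the response depends on the event_creation_response_type
# setting (ID, serialized or serialized_verbose), see the examples above.
print(response.json())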

Webhooks

A webhook is a user-defined HTTP callback triggered by an event in a web app.

You can use webhooks for various purposes, for instance, to notify a user about a specific Event, invoke required behavior on a target website, and solve security tasks such as automated access control.

For example, you may set up an edge device and want to send the result of recognition back to it to proceed with object verification.

To make FindFace Lite send an HTTP request to a URL when a required event occurs, configure a webhook.
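
Below is a minimal sketch of a webhook receiver, assuming FindFace Lite delivers the Event as a JSON body in an HTTP POST to the target URL. The host, port and route are arbitrary examples, not values required by FindFace Lite.

# Minimal webhook receiver sketch (assumption: the Event arrives as a JSON body
# in an HTTP POST to the target URL).
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

class WebhookHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        event = json.loads(self.rfile.read(length) or "{}")
        # Put your own logic here, e.g. open a door in your PACS
        # when a matched face Event arrives.
        print("Received event:", event.get("uuid"), event.get("type"))
        self.send_response(200)
        self.end_headers()

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8080), WebhookHandler).serve_forever()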

Authenticate a device

To authenticate your device in the system, use the /v1/auth/login POST request.

For the username and password parameters, use the data you got at STEP 5.

Request example:

{
  "username": "login",
  "password": "password"
}

Successful response example:

{
  "access_token": "token"
}
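
A minimal Python sketch of the authentication step is shown below. The base URL is a placeholder for your FindFace Lite address; the username and password are the sample values from the request example.

import requests

BASE_URL = "http://localhost"  # placeholder: your FindFace Lite address

# Authenticate the device and obtain an access token.
resp = requests.post(
    BASE_URL + "/v1/auth/login",
    json={"username": "login", "password": "password"},  # data from STEP 5
    timeout=10,
)
resp.raise_for_status()
access_token = resp.json()["access_token"]
print(access_token)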

Create a Webhook

To create a webhook, use the /v1/webhooks/ POST request. All parameters and value types are described below:

Parameters

Value types

Description

name

string

The name of a webhook.

active

true

The default value, which means that the webhook is enabled.

false

The possible value, which means that the webhook is disabled.

target

string

Target URL to call when an event happens.

filters

object [filters]

A set of filters that determines whether a message is sent to the target URL. Only the specified filters influence the webhook's behaviour. If empty, all Events are sent to the target URL.

type_in

string

If the created Event has one of the specified types (face, car or license_plate), a message is sent to the target URL.

camera_in

a number or several numbers

If the created Event is connected with the specified Camera ID, a message is sent to the target URL.

card_in

a number or several numbers

If the created Event is matched with the specified Card ID, a message is sent to the target URL.

confidence_gte

a number from 0 to 1

If the result of recognition is greater than or equal to the specified value, a message is sent to the target URL.

confidence_lte

a number from 0 to 1

If the result of recognition is less than or equal to the specified value, a message is sent to the target URL.

matched

true

Only matched Events trigger sending a message to the target URL.

false

Only unmatched Events trigger sending a message to the target URL.

bs_type_in

overall

Only the best result of recognition for a particular period of time triggers sending a message to the target URL.

realtime

Every result of recognition triggers sending a message to the target URL.

Yaw, Pitch, Roll

These parameters mean the angle of rotation in degrees. They can only be applied if the headpose value is specified for face_features in the configuration file.

yaw_lte

a number

If the headpose yaw is less than or equal to the specified value, a message is sent to the target URL.

yaw_gte

a number

If the headpose yaw is greater than or equal to the specified value, a message is sent to the target URL.

pitch_lte

a number

If the headpose pitch is less than or equal to the specified value, a message is sent to the target URL.

pitch_gte

a number

If the headpose pitch is greater than or equal to the specified value, a message is sent to the target URL.

roll_lte

a number

If the headpose roll is less than or equal to the specified value, a message is sent to the target URL.

roll_gte

a number

If the headpose roll is greater than or equal to the specified value, a message is sent to the target URL.

liveness_lte

a number from 0 to 1

If the liveness level is less than or equal to the specified value, a message is sent to the target URL.

liveness_gte

a number from 0 to 1

If the liveness level is greater than or equal to the specified value, a message is sent to the target URL.

medmask

object

If the medmask feature is specified in the configuration file, you can filter messages sent to the target URL by the result of the medmask analysis during face recognition.

name

enum

Available options according to which events can be filtered:

none – there is no medmask on the face;

correct – medmask is put on correctly;

incorrect – medmask is put on incorrectly.

confidence_lte

a number from 0 to 1

If the confidence value of medmask recognition result is less than or equal to the specified value, a message is sent to the target URL.

confidence_gte

a number from 0 to 1

If the confidence value of medmask recognition result is greater than or equal to the specified value, a message is sent to the target URL.

orientation

object

If the orientation feature is specified in the configuration file, you can filter webhooks by the result of the car orientation analysis during car recognition.

name

enum

Available recognition results according to which events can be filtered:

back – vehicle rear part;

side – vehicle side part;

front – vehicle front part.

confidence_lte

a number from 0 to 1

If the confidence value of car orientation recognition result is less than or equal to the specified value, a message is sent to the target URL.

confidence_gte

a number from 0 to 1

If the confidence value of the car orientation recognition result is greater than or equal to the specified value, a message is sent to the target URL.

special_type

object

If the special_type feature is specified in the configuration file, you can filter webhooks by the result of the car type analysis during car recognition.

name

enum

Available recognition results according to which events can be filtered:

not_special – ordinary vehicle,

police,

ambulance,

road_service,

gas_service,

rescue_service,

other_special – all other special vehicles that are not specified as a separated group,

taxi,

route_transport,

car_sharing,

military.

confidence_lte

a number from 0 to 1

If the confidence value of car type recognition result is less than or equal to the specified value, a message is sent to the target URL.

confidence_gte

a number from 0 to 1

If the confidence value of car type recognition result is greater than or equal to the specified value, a message is sent to the target URL.

license_plate_number

string

If the recognised car license plate number is equal to the specified value, a message is sent to the target URL.

license_plate_visibility

object

license_plate_visibility is a mandatory feature and is enabled implicitly in the configuration file. You can filter webhooks by the result of the car license plate visibility analysis during car recognition.

name

enum

Available recognition results according to which events can be filtered:

partly_visible_no_text – license plate is without text and partly visible,

fully_visible_no_text – license plate is without text and fully visible,

invisible – license plate is invisible,

partly_visible – license plate is with text and partly visible,

fully_visible – license plate is with text and fully visible.

confidence_lte

a number from 0 to 1

If the confidence value of the license plate visibility recognition result is less than or equal to the specified value, a message is sent to the target URL.

confidence_gte

a number from 0 to 1

If the confidence value of the license plate visibility recognition result is greater than or equal to the specified value, a message is sent to the target URL.

license_plate_event_number

string

This filter can only be applied to the license_plate Event type. If the recognised license plate number in the Event is equal to the specified value, a message is sent to the target URL.

send_attempts

a number

The number of attempts to send a message to the target URL. If 0 is set, the attempts are unlimited.

Request example:

{
  "name": "test webhook",
  "active": true,
  "target": "http://localhost/webhok_test",
  "filters": {
    "camera_in": [
      1,
      2
    ],
    "card_in": [
      4,
      5
    ],
    "confidence_gte": 0.75,
    "confidence_lte": 0.79,
    "matched": true,
    "bs_type_in": [
      "overall",
      "realtime"
    ],
    "yaw_lte": 3.5,
    "yaw_gte": 3.5,
    "pitch_lte": -4.2,
    "pitch_gte": -4.2,
    "roll_lte": 1.8,
    "roll_gte": 1.8,
    "liveness_lte": 0.44,
    "liveness_gte": 0.44
  },
  "send_attempts": 3
}

Successful response example:

{
  "name": "test webhook",
  "active": true,
  "target": "http://localhost/webhok_test",
  "filters": {
    "camera_in": [
      1,
      2
    ],
    "card_in": [
      4,
      5
    ],
    "confidence_gte": 0.75,
    "confidence_lte": 0.79,
    "matched": true,
    "bs_type_in": [
      "overall",
      "realtime"
    ],
    "yaw_lte": 3.5,
    "yaw_gte": 3.5,
    "pitch_lte": -4.2,
    "pitch_gte": -4.2,
    "roll_lte": 1.8,
    "roll_gte": 1.8,
    "liveness_lte": 0.44,
    "liveness_gte": 0.44
  },
  "send_attempts": 3,
  "id": 1
}
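
To tie the steps together, below is a minimal Python sketch that creates a webhook with a small subset of the filters described above, using the access token obtained at the authentication step. The base URL is a placeholder, and passing the token as a Bearer authorization header is an assumption; check the API reference for the exact authentication scheme.

import requests

BASE_URL = "http://localhost"   # placeholder: your FindFace Lite address
ACCESS_TOKEN = "token"          # value returned by /v1/auth/login

webhook = {
    "name": "test webhook",
    "active": True,
    "target": "http://localhost:8080/",  # e.g. the receiver sketch above
    "filters": {
        "camera_in": [1, 2],
        "matched": True,
        "confidence_gte": 0.75,
    },
    "send_attempts": 3,
}

# Assumption: the token is passed as a Bearer authorization header.
resp = requests.post(
    BASE_URL + "/v1/webhooks/",
    json=webhook,
    headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
    timeout=10,
)
resp.raise_for_status()
print(resp.json()["id"])  # ID of the created webhook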