Enable Face Liveness Detection
The FindFace Multi face liveness detector tells apart live faces from face representations, such as printed face images, videos, or masks. The liveness detector estimates face liveness with a certain level of confidence and returns the confidence score along with a binary result, real or fake, depending on the pre-defined liveness threshold.
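The score-to-verdict mapping described above can be sketched as follows. This is an illustration only: `decide_liveness` is a hypothetical helper, not part of the FindFace API, and the behavior at exactly the threshold value is an assumption. The default value 0.885 is taken from the `LIVENESS_THRESHOLD` setting shown later in this section.

```python
# Illustrative sketch of how a liveness confidence score is mapped to a
# binary real/fake verdict. decide_liveness is a hypothetical helper,
# not part of the FindFace API; the >= comparison at the exact threshold
# is an assumption. 0.885 matches the default LIVENESS_THRESHOLD in
# findface-multi-legacy.py.
def decide_liveness(confidence: float, threshold: float = 0.885) -> str:
    """Return 'real' if the detector's confidence meets the threshold."""
    return "real" if confidence >= threshold else "fake"

print(decide_liveness(0.97))  # high confidence -> real
print(decide_liveness(0.42))  # low confidence  -> fake
```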
On video streams from cameras and in uploaded video files, liveness detection is performed by the findface-video-worker service. You can enable liveness detection during the FindFace Multi standard installation by answering y to the question Enable liveness and attempt to continue installation?(y/n). If you skipped this step, you can enable liveness manually later by following the instructions below.
Note
The face liveness detector works with both CPU and GPU acceleration. However, it is much slower on CPU.
In addition to liveness detection performed by the findface-video-worker service, FindFace Multi provides an API-based face liveness detection service, findface-liveness-api. The findface-liveness-api service is automatically installed and enabled during the standard deployment, as FindFace Multi uses it for authentication by face. The service supports CPU and GPU decoding.
Liveness detection can also be enabled on external detectors. Frames from external detectors are processed by the findface-extraction-api service, which supports CPU and GPU acceleration. The liveness extraction model faceattr.liveness_web.v1, which comes in the default findface-extraction-api.yaml configuration, is used for authentication by face. If your use case implies liveness detection on an external detector, you may want to replace the default liveness recognition model in the findface-extraction-api.yaml file with the model recommended for detecting liveness spoofing attacks on PACS cameras. Read more in the Configure Liveness Recognition Model section.
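Such a model swap might look like the fragment below. The surrounding key layout of findface-extraction-api.yaml varies between versions and is an assumption here; only the model names faceattr.liveness_web.v1 and faceattr.liveness_pacs.v3 come from this guide, so verify the structure against your own configuration file.

```yaml
# Hypothetical fragment of
# /opt/findface-multi/configs/findface-extraction-api/findface-extraction-api.yaml.
# The key names around the model path are assumptions; only the model
# file names are taken from this guide.
extractors:
  models:
    # default model, tuned for authentication by face:
    # face_liveness: faceattr/faceattr.liveness_web.v1.cpu.fnk
    # PACS-oriented model for spoofing attack detection:
    face_liveness: faceattr/faceattr.liveness_pacs.v3.cpu.fnk
```

For GPU acceleration, the .gpu.fnk variant of the model would be used instead.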
In this section:
Enable Face Liveness Detector
Configure Liveness Threshold
Enable Face Liveness Detector
To enable the face liveness detector on video streams and uploaded video files, do the following:
Open the /opt/findface-multi/configs/findface-video-worker/findface-video-worker.yaml configuration file. In the liveness section, specify the neural network models as shown in the example:

GPU

sudo vi /opt/findface-multi/configs/findface-video-worker/findface-video-worker.yaml

liveness:
  fnk: /usr/share/findface-data/models/faceattr/faceattr.liveness_pacs.v3.gpu.fnk
  norm: /usr/share/findface-data/models/facenorm/crop2x.v2_maxsize400.gpu.fnk
  ...
CPU

sudo vi /opt/findface-multi/configs/findface-video-worker/findface-video-worker.yaml

liveness:
  fnk: /usr/share/findface-data/models/faceattr/faceattr.liveness_pacs.v3.cpu.fnk
  norm: /usr/share/findface-data/models/facenorm/crop2x.v2_maxsize400.cpu.fnk
  ...
Restart the findface-multi-findface-video-worker-1 container:

sudo docker restart findface-multi-findface-video-worker-1
Configure Liveness Threshold
If necessary, you can adjust the liveness threshold in the /opt/findface-multi/configs/findface-multi-legacy/findface-multi-legacy.py configuration file. The liveness detector estimates face liveness with a certain level of confidence. Depending on how the confidence score compares to the threshold, it returns a binary result: real or fake.
Note
The default value is optimal. Before changing the threshold, we recommend that you seek advice from our experts at support@ntechlab.com.
sudo vi /opt/findface-multi/configs/findface-multi-legacy/findface-multi-legacy.py
FFSECURITY = {
    ...
    # feature specific confidence thresholds
    'LIVENESS_THRESHOLD': 0.885,  # model: [faceattr.liveness_pacs.v3]
    ...
}
Restart all FindFace Multi containers:
cd /opt/findface-multi/
sudo docker-compose restart