How do I configure Amazon Rekognition Face Liveness for mobile applications?

I want to use Amazon Rekognition Face Liveness in my mobile application to verify that users are real and reduce fraud during authentication.

Resolution

Configure the Face Liveness AWS SDKs

Install and configure the AWS SDK for Python (Boto3) and the AWS Amplify UI React Liveness library. The SDK for Python interacts with Amazon Rekognition from your backend API and the Amplify library renders the Face Liveness frontend in your React application.
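
For example, a typical setup installs the backend dependency with pip and the frontend libraries with npm. The following commands are a sketch that assumes the npm package names used in the React example later in this article:

pip install boto3
npm install aws-amplify @aws-amplify/ui-react @aws-amplify/ui-react-liveness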

Note: The AWS Identity and Access Management (IAM) user or role that calls Face Liveness must have the AmazonRekognitionFullAccess and AmazonS3FullAccess AWS managed policies attached.

Create a backend API

To manage the session and securely return results, create a backend API that acts as a bridge between your frontend application and Amazon Rekognition. The API makes sure that your AWS credentials aren't exposed to the client.

To build the backend API, use AWS Lambda with Amazon API Gateway, a Node.js or Python REST API, or a server-side framework.

Configure the API to call the CreateFaceLivenessSession operation to initiate a Face Liveness session and return a Session ID. The frontend uses the Session ID to start the Face Liveness video stream and retrieve results. The API must also call the GetFaceLivenessSessionResults operation to retrieve the Face Liveness results.

Example code snippet to initiate a Face Liveness session:

import boto3

# Create a Rekognition client and start a new Face Liveness session
rek_client = boto3.client('rekognition')
response = rek_client.create_face_liveness_session()
session_id = response.get("SessionId")
print(session_id)

Example code snippet to retrieve the results of the Face Liveness session:

import base64
import io

import boto3

# Create a Rekognition client and retrieve the session results
rek_client = boto3.client('rekognition')
response = rek_client.get_face_liveness_session_results(SessionId=session_id)

# Base64-encode the reference image bytes so that they can be returned in a JSON response
image_stream = io.BytesIO(response['ReferenceImage']['Bytes'])
reference_image = base64.b64encode(image_stream.getvalue())
response['ReferenceImage']['Bytes'] = reference_image
print(response)

Note: Replace session_id with the Session ID that the CreateFaceLivenessSession API call returned.
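
The following minimal sketch shows one way to wire both operations into a backend API. It assumes Flask, which isn't required, and uses the createfacelivenesssession and getfacelivenesssessionresults routes and the sessionId, sessionid, and body field names that the React example later in this article expects:

import base64

import boto3
from flask import Flask, jsonify, request

app = Flask(__name__)
rek_client = boto3.client('rekognition')


@app.route('/createfacelivenesssession')
def create_face_liveness_session():
    # Start a new Face Liveness session and return its Session ID
    response = rek_client.create_face_liveness_session()
    return jsonify({'sessionId': response['SessionId']})


@app.route('/getfacelivenesssessionresults', methods=['POST'])
def get_face_liveness_session_results():
    # Look up the results for the Session ID that the frontend sends
    session_id = request.get_json()['sessionid']
    response = rek_client.get_face_liveness_session_results(SessionId=session_id)

    # Base64-encode the reference image bytes so that the payload is JSON serializable
    reference_image = response.get('ReferenceImage', {})
    if 'Bytes' in reference_image:
        reference_image['Bytes'] = base64.b64encode(reference_image['Bytes']).decode('utf-8')

    return jsonify({'body': {
        'confidence': response.get('Confidence'),
        'status': response.get('Status'),
        'referenceImage': reference_image
    }})

The body key in the response matches the data.body access in the React component's handleAnalysisComplete callback.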

Configure a React frontend

After the Face Liveness session completes, compare the confidence score that Amazon Rekognition returns against the threshold that you configured. If the score is lower than the threshold, then prompt the user to try again. If the score is higher than the threshold, then continue your verification workflow.
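
A backend-side check might look like the following minimal sketch. The threshold of 80 and the session_id variable are illustrative; tune the threshold to your application's risk tolerance:

import boto3

rek_client = boto3.client('rekognition')

LIVENESS_THRESHOLD = 80  # illustrative value; tune to your risk tolerance

# session_id is the Session ID that CreateFaceLivenessSession returned
response = rek_client.get_face_liveness_session_results(SessionId=session_id)

if response['Status'] == 'SUCCEEDED' and response['Confidence'] >= LIVENESS_THRESHOLD:
    # Treat the user as live and continue your verification workflow
    is_live = True
else:
    # Prompt the user to retry the Face Liveness check
    is_live = False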

The FaceLivenessDetector component handles the user interface, displays challenges such as color sequences or face movements, records the video, and streams the video to Amazon Rekognition.

To configure the React application to use the FaceLivenessDetector component, complete the following steps:

  1. Configure Amplify in your application with your AWS Region and other details.
  2. After the backend returns a Session ID, pass the Session ID to the FaceLivenessDetector component in React. The component automatically calls the StartFaceLivenessSession operation when it receives the Session ID.
  3. Configure the onAnalysisComplete and onError callbacks to handle the liveness flow and any errors that occur. The onAnalysisComplete callback runs when the liveness check completes, and the onError callback runs when an error occurs during the liveness flow.
  4. In the onAnalysisComplete callback, make a POST request to your backend API with the Session ID to retrieve the session results. Your backend then calls GetFaceLivenessSessionResults and returns the confidence score to your frontend.
  5. Grant or deny access based on the confidence score and your application's requirements.

The following example React component creates a Face Liveness session, renders the FaceLivenessDetector component, and retrieves the session results when the liveness check completes:

import React, { useEffect } from "react";
import { Loader } from '@aws-amplify/ui-react';
import '@aws-amplify/ui-react/styles.css';
import { FaceLivenessDetector } from '@aws-amplify/ui-react-liveness';

function FaceLiveness({ faceLivenessAnalysis }) {
    const [loading, setLoading] = React.useState(true);
    const [sessionId, setSessionId] = React.useState(null);
    const endpoint = process.env.REACT_APP_ENV_API_URL ? process.env.REACT_APP_ENV_API_URL : '';

    useEffect(() => {
        /*
         * API call to create the Face Liveness Session
         */
        const fetchCreateLiveness = async () => {
            const response = await fetch(endpoint + 'createfacelivenesssession');
            const data = await response.json();
            setSessionId(data.sessionId);
            setLoading(false);
        };
        fetchCreateLiveness();
    }, []);

    /*
     * Get the Face Liveness session result
     */
    const handleAnalysisComplete = async () => {
        /*
         * API call to get the Face Liveness Session result
         */
        const response = await fetch(endpoint + 'getfacelivenesssessionresults',
            {
                method: 'POST',
                headers: {
                    'Accept': 'application/json',
                    'Content-Type': 'application/json'
                },
                body: JSON.stringify({ sessionid: sessionId })
            }

        );
        const data = await response.json();
        faceLivenessAnalysis(data.body);
    };

    return (
        <>
            {loading ? (
                <Loader />
            ) : (
                <FaceLivenessDetector
                    sessionId={sessionId}
                    region="us-east-1"
                    onAnalysisComplete={handleAnalysisComplete}
                    onError={(error) => {
                        console.error(error);
                    }}
                />
            )}
        </>
    );
}

export default FaceLiveness;

Related information

Amazon Rekognition Face Liveness on the GitHub website

Detecting face liveness

Recommendations for Usage of Face Liveness
