Face Detection and Recognition

SDK.detect_faces(self: tfsdk.SDK) -> List[tfsdk.FaceBoxAndLandmarks]

Detect all the faces in the image and return their bounding boxes and facial landmarks. This method has a small false positive rate. To reduce the false positive rate to near zero, filter out faces with a score lower than 0.90. Alternatively, you can use the FACEDETECTIONFILTER configuration option to filter the detected faces. The face detector has a detection scale range of about 5 octaves. tfsdk.ConfigurationOptions.smallest_face_height determines the lower bound of the detection scale range. For example, setting tfsdk.ConfigurationOptions.smallest_face_height to 40 pixels yields a detection scale range of ~40 pixels to 1280 (= 40 x 2^5) pixels.

Returns

A list of FaceBoxAndLandmarks representing each of the detected faces. If no faces are found, the list will be empty. The detected faces are sorted in descending order of face score.

The recall and precision of the face detection algorithm on the WIDER FACE dataset:

_images/face_detection_roc.png

The effect of face height on similarity score:

_images/face_height_match_score_FULL_model.png
_images/face_height_match_score_LITE_model.png
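
A minimal sketch of calling detect_faces and filtering low-confidence detections. The SDK construction and image-loading calls below are assumptions standing in for the setup documented elsewhere:

    import tfsdk

    # Assumed setup (illustrative): construct the SDK and load an image.
    options = tfsdk.ConfigurationOptions()
    options.smallest_face_height = 40   # lower bound of the detection scale range
    sdk = tfsdk.SDK(options)            # constructor arguments assumed
    sdk.set_image("group_photo.jpg")    # hypothetical image-loading call, see the image docs

    faces = sdk.detect_faces()
    # Keep only detections with a score of at least 0.90 to suppress false positives.
    confident_faces = [f for f in faces if f.score >= 0.90]
    for face in confident_faces:
        print(face.top_left.x, face.top_left.y, face.bottom_right.x, face.bottom_right.y)
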
SDK.detect_largest_face(self: tfsdk.SDK) -> Tuple[bool, tfsdk.FaceBoxAndLandmarks]

Detect the largest face in the image. This method has a small false positive rate. To reduce the false positive rate to near zero, filter out faces with score lower than 0.90. Alternatively, you can use the FACEDETECTIONFILTER configuration option to filter the detected faces. See tfsdk.SDK.detect_faces() for detection range.

Returns

A bool indicating if a face was detected and the corresponding tfsdk.FaceBoxAndLandmarks, in that order.
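
A short sketch of unpacking the return value; it assumes an image has already been loaded into the sdk object as in the detect_faces example above:

    found, face = sdk.detect_largest_face()
    if found and face.score >= 0.90:
        width = face.bottom_right.x - face.top_left.x
        print("Largest face width in pixels:", width)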

Deprecated since version 0.17: Use tfsdk.SDK.detect_faces() or tfsdk.SDK.detect_largest_face() instead.

SDK.get_face_landmarks(self: tfsdk.SDK, face_box_and_landmarks: tfsdk.FaceBoxAndLandmarks) -> Tuple[tfsdk.ERRORCODE, List[Trueface::Point<int>]]

Obtain the 106 face landmarks.

Parameters

face_box_and_landmarks - tfsdk.FaceBoxAndLandmarks returned by tfsdk.SDK.detect_faces() or tfsdk.SDK.detect_largest_face().

Returns

The tfsdk.ERRORCODE and list of the 106 face landmark points, returned in that order.

The order of the face landmarks:

_images/landmarks.png
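
A brief sketch of retrieving the 106 landmarks for the largest detected face; treating NO_ERROR as the success value of tfsdk.ERRORCODE is an assumption:

    found, face = sdk.detect_largest_face()
    if found:
        err, landmarks = sdk.get_face_landmarks(face)
        if err == tfsdk.ERRORCODE.NO_ERROR:   # success value assumed to be NO_ERROR
            print("Number of landmark points:", len(landmarks))   # expected: 106
            print("First point:", landmarks[0].x, landmarks[0].y)
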
SDK.extract_aligned_face(*args, **kwargs)

Overloaded function.

  1. extract_aligned_face(self: tfsdk.SDK, face_box_and_landmarks: tfsdk.FaceBoxAndLandmarks, margin_left: int = 0, margin_top: int = 0, margin_right: int = 0, margin_bottom: int = 0, scale: float = 1.0) -> numpy.ndarray[uint8]

Extract the aligned face chip as a numpy array. Changing the margins and scale will change the face chip size. If using the face chip with Trueface algorithms (e.g., face recognition), do not change the default margin and scale values.

    Parameters
    • face_box_and_landmarks - the tfsdk.FaceBoxAndLandmarks returned by tfsdk.SDK.detect_largest_face() or tfsdk.SDK.detect_faces().

    • margin_left - adds a margin to the left side of the face chip (default = 0).

    • margin_top - adds a margin to the top side of the face chip (default = 0).

    • margin_right - adds a margin to the right side of the face chip (default = 0).

    • margin_bottom - adds a margin to the bottom side of the face chip (default = 0).

    • scale - changes the scale of the face chip (default = 1).

    Returns

A numpy array containing the face chip.

  2. extract_aligned_face(self: tfsdk.SDK, buffer_pointer: int, face_box_and_landmarks: tfsdk.FaceBoxAndLandmarks, margin_left: int = 0, margin_top: int = 0, margin_right: int = 0, margin_bottom: int = 0, scale: float = 1.0) -> tfsdk.ERRORCODE

    Extract the aligned face chip into a caller-allocated buffer. Changing the margins and scale will change the face chip size. If using the face chip with Trueface algorithms (e.g., face recognition), do not change the default margin and scale values. This overload requires the caller to allocate the memory for the face chip. The buffer size can be computed as follows: width = int((112 + margin_left + margin_right) * scale), height = int((112 + margin_top + margin_bottom) * scale), and therefore the buffer size is width * height * 3 bytes.

    Parameters
    • buffer_pointer - a pointer to a buffer allocated by the caller into which the face chip will be written.

    • face_box_and_landmarks - the tfsdk.FaceBoxAndLandmarks returned by tfsdk.SDK.detect_largest_face() or tfsdk.SDK.detect_faces().

    • margin_left - adds a margin to the left side of the face chip (default = 0).

    • margin_top - adds a margin to the top side of the face chip (default = 0).

    • margin_right - adds a margin to the right side of the face chip (default = 0).

    • margin_bottom - adds a margin to the bottom side of the face chip (default = 0).

    • scale - changes the scale of the face chip (default = 1).

    Returns

    The tfsdk.ERRORCODE.
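
A minimal sketch of the first overload, keeping the default margins and scale so the chip stays compatible with the recognition functions; the image is assumed to be loaded already:

    found, face = sdk.detect_largest_face()
    if found:
        chip = sdk.extract_aligned_face(face)
        print(chip.shape)   # expected (112, 112, 3) with the default margins and scale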

SDK.save_face_image(self: tfsdk.SDK, face_image: numpy.ndarray[uint8], filepath: str) -> None

Store the face image in a JPEG file.

Parameters
  • face_image - the face chip as a numpy array, must be 112x112 pixels, therefore use the default margin and scale parameters when calling tfsdk.SDK.extract_aligned_face().

  • filepath - relative or absolute file path without a file extension.

Returns

Error code, see ERRORCODE
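
A short sketch, building on the extract_aligned_face example above; the output filename is illustrative:

    chip = sdk.extract_aligned_face(face)
    sdk.save_face_image(chip, "./face_chip")   # saved as a JPEG; note: no file extension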

SDK.get_face_feature_vector(*args, **kwargs)

Overloaded function.

  1. get_face_feature_vector(self: tfsdk.SDK, aligned_face_image: numpy.ndarray[uint8]) -> Tuple[tfsdk.ERRORCODE, tfsdk.Faceprint]

    Return the corresponding feature vector for the given aligned face image.

    Parameters

    aligned_face_image - aligned face chip as numpy array. Face chip must have size of 112x112 pixels, therefore use the default margin and scale parameters when calling tfsdk.SDK.extract_aligned_face().

    Returns

    The ERRORCODE and tfsdk.Faceprint, in that order.

  2. get_face_feature_vector(self: tfsdk.SDK, face_box_and_landmarks: tfsdk.FaceBoxAndLandmarks) -> Tuple[tfsdk.ERRORCODE, tfsdk.Faceprint]

    Extract the face feature vector from the face box.

    Parameters

    face_box_and_landmarks - face box and landmarks returned by tfsdk.SDK.detect_faces() or tfsdk.SDK.detect_largest_face().

    Returns

    The ERRORCODE and tfsdk.Faceprint, in that order.
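
A sketch of the second overload, which performs alignment internally from the detection result; treating NO_ERROR as the success value is an assumption:

    found, face = sdk.detect_largest_face()
    if found:
        err, faceprint = sdk.get_face_feature_vector(face)
        if err == tfsdk.ERRORCODE.NO_ERROR:
            print("Feature vector length:", len(faceprint.feature_vector))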

SDK.get_face_feature_vectors(*args, **kwargs)

Overloaded function.

  1. get_face_feature_vectors(self: tfsdk.SDK, aligned_face_images: List[numpy.ndarray[uint8]]) -> Tuple[tfsdk.ERRORCODE, List[tfsdk.Faceprint]]

    Extract the face feature vectors from the aligned face images.

    Parameters

    aligned_face_images - a list of aligned face image buffers as numpy arrays.

    Returns

    The ERRORCODE and a list of tfsdk.Faceprint, in that order.

  2. get_face_feature_vectors(self: tfsdk.SDK, aligned_face_images: List[int]) -> Tuple[tfsdk.ERRORCODE, List[tfsdk.Faceprint]]

    Extract the face feature vectors from the aligned face images.

    Parameters

    aligned_face_images - a list of pointers to caller-allocated aligned face image buffers.

    Returns

    The ERRORCODE and a list of tfsdk.Faceprint, in that order.
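
A sketch of the first overload, batching several pre-aligned chips into one call; it reuses the confident_faces list from the detect_faces example above and assumes the chips were produced with the default margin and scale:

    chips = [sdk.extract_aligned_face(f) for f in confident_faces]
    err, faceprints = sdk.get_face_feature_vectors(chips)
    if err == tfsdk.ERRORCODE.NO_ERROR:
        print("Generated", len(faceprints), "faceprints")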

SDK.get_largest_face_feature_vector(self: tfsdk.SDK) -> Tuple[tfsdk.ERRORCODE, tfsdk.Faceprint, bool]

Detect the largest face in the image and return the corresponding feature vector.

Returns

The ERRORCODE, tfsdk.Faceprint, and a bool indicating if a face was detected, in that order. If no face was detected in the image, the Faceprint will be empty.
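
A minimal sketch combining detection and feature extraction in one call; image loading is assumed to have happened already:

    err, faceprint, found = sdk.get_largest_face_feature_vector()
    if err == tfsdk.ERRORCODE.NO_ERROR and found:
        print("Model used:", faceprint.model_name)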

static SDK.faceprint_to_json(faceprint: tfsdk.Faceprint) -> str

Convert a tfsdk.Faceprint into a json string.

Parameters

faceprint – the tfsdk.Faceprint to convert to a string.

Returns

The string representation of the tfsdk.Faceprint

static SDK.json_to_faceprint(json_string: str) -> Tuple[tfsdk.ERRORCODE, tfsdk.Faceprint]

Create a tfsdk.Faceprint from a json string.

Parameters

json_string – the json string representation of a tfsdk.Faceprint, generated by the tfsdk.SDK.faceprint_to_json() function.

Returns

The tfsdk.Faceprint generated from the json string.
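
A round-trip sketch, useful when persisting a faceprint to disk or a database; the faceprint variable is assumed to come from one of the feature-vector calls above:

    json_str = tfsdk.SDK.faceprint_to_json(faceprint)
    err, restored = tfsdk.SDK.json_to_faceprint(json_str)
    if err == tfsdk.ERRORCODE.NO_ERROR:
        print("Round trip preserved model:", restored.model_name == faceprint.model_name)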

SDK.get_similarity(self: tfsdk.SDK, feature_vector_1: tfsdk.Faceprint, feature_vector_2: tfsdk.Faceprint) -> Tuple[tfsdk.ERRORCODE, float, float]

Compute the similarity of the given feature vectors.

Parameters
  • feature_vector_1 – the first Faceprint to be compared.

  • feature_vector_2 – the second Faceprint to be compared.

Returns

The ERRORCODE, match probability, and similarity score, in that order. The match probability is the probability that the two feature vectors belong to the same identity, while the similarity score is the raw similarity computed between the two feature vectors.
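
A sketch comparing two faceprints; faceprint_1 and faceprint_2 are assumed to come from earlier get_face_feature_vector calls:

    err, match_probability, similarity = sdk.get_similarity(faceprint_1, faceprint_2)
    if err == tfsdk.ERRORCODE.NO_ERROR:
        print("Match probability:", match_probability)
        print("Similarity score:", similarity)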

SDK.estimate_face_image_quality(self: tfsdk.SDK, aligned_face_image: numpy.ndarray[uint8]) -> Tuple[tfsdk.ERRORCODE, float]

Estimate the image quality for face recognition.

Parameters

aligned_face_image - the numpy aligned face chip image returned by tfsdk.SDK.extract_aligned_face().

Returns

The ERRORCODE and face quality score, in that order. The quality score is between 0 and 1, 1 being perfect quality.
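
A sketch of gating enrollment on image quality; the 0.8 threshold is an illustrative assumption, not an SDK recommendation:

    chip = sdk.extract_aligned_face(face)
    err, quality = sdk.estimate_face_image_quality(chip)
    if err == tfsdk.ERRORCODE.NO_ERROR and quality < 0.8:   # illustrative threshold
        print("Image quality too low for enrollment:", quality)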

SDK.estimate_head_orientation(self: tfsdk.SDK, face_box_and_landmarks: tfsdk.FaceBoxAndLandmarks) -> Tuple[tfsdk.ERRORCODE, float, float, float]

Estimate the head orientation using the detected facial landmarks.

Parameters

face_box_and_landmarks - the tfsdk.FaceBoxAndLandmarks returned by tfsdk.SDK.detect_largest_face() or tfsdk.SDK.detect_faces().

Returns

The ERRORCODE, yaw, pitch, roll, in that order. Angles are in radians.
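
A sketch converting the returned angles from radians to degrees for readability; the face detection result is assumed from the examples above:

    import math

    err, yaw, pitch, roll = sdk.estimate_head_orientation(face)
    if err == tfsdk.ERRORCODE.NO_ERROR:
        print("Yaw (deg):", math.degrees(yaw))
        print("Pitch (deg):", math.degrees(pitch))
        print("Roll (deg):", math.degrees(roll))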

The accuracy of this method is estimated using 1920x1080 pixel test images. A test image:

_images/yaw_positive_20.jpg

The accuracy of the head orientation estimation:

_images/yaw_estimation_accuracy.png

The effect of the face yaw angle on match similarity can be seen in the following figure:

_images/yaw_vs_sim_score.png

The effect of the face pitch angle on match similarity can be seen in the following figure:

_images/pitch_vs_sim_score.png
class tfsdk.FaceBoxAndLandmarks
property bottom_right

The bottom-right corner Point of the bounding box.

property landmarks

The list of facial landmark points (Point) in this order: left eye, right eye, nose, left mouth corner, right mouth corner.

property score

Likelihood of this being a true positive; a value lower than 0.85 indicates a high chance of being a false positive.

property top_left

The top-left corner Point of the bounding box.

class tfsdk.Faceprint
compare(self: tfsdk.Faceprint, fp: tfsdk.Faceprint) -> Tuple[tfsdk.ERRORCODE, float, float]

Compare the similarity between two tfsdk.Faceprint objects. The same as tfsdk.SDK.get_similarity().

Parameters

fp - the tfsdk.Faceprint to compare against.

Returns

The ERRORCODE, match probability, and similarity score, in that order.

property feature_vector

Vector of floats which describe the face.

get_quantized_vector(self: tfsdk.Faceprint) -> numpy.ndarray[int16]

Return the feature vector as a list of 16-bit integers. This is useful when the tfsdk.ModelOptions.fr_vector_compression option is enabled and you require an integer representation of the quantized feature vector.

Returns

A 16-bit numpy array representation of the tfsdk.Faceprint.feature_vector.

property model_name

Name of model used to generate feature vector.

property model_options

Additional options used for generating the feature vector.

property sdk_version

SDK version used to generate feature vector.

set_quantized_vector(self: tfsdk.Faceprint, quantized_feature_vector: numpy.ndarray[int16]) -> None

Populate tfsdk.Faceprint.feature_vector from a quantized 16-bit numpy array generated by tfsdk.Faceprint.get_quantized_vector().

Parameters

quantized_feature_vector - the 16-bit numpy array feature vector.
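
A sketch of transferring a quantized vector between two Faceprint objects, assuming the source feature vector was generated with fr_vector_compression enabled; the default tfsdk.Faceprint() constructor used here is an assumption:

    quantized = faceprint.get_quantized_vector()   # numpy int16 array

    restored = tfsdk.Faceprint()                   # default constructor assumed to exist
    restored.set_quantized_vector(quantized)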

class tfsdk.ModelOptions
property fr_vector_compression

Indicates if the tfsdk.ConfigurationOptions.fr_vector_compression option was enabled when generating the feature vector.

class tfsdk.Point
property x

Coordinate along the horizontal axis, or pixel column.

property y

Coordinate along the vertical axis, or pixel row.