Face Detection

class tfsdk.TFFacechip
TFFacechip.save_image(self: tfsdk.TFFacechip, filepath: str) → None

Save the face chip to disk.

Parameters

filepath - the filepath where the face chip should be saved, including the image extension.

TFFacechip.load_image(self: tfsdk.TFFacechip, filepath: str, gpu_memory: bool, gpu_index: int) → tfsdk.ERRORCODE

Load the face chip from disk.

Parameters

filepath - the filepath of the face chip to load.

gpu_memory - read the image into GPU memory. This should be set to true when running GPU inference.

gpu_index - the GPU index.

TFFacechip.get_height(self: tfsdk.TFFacechip) → int

Get the face chip height in pixels.

Returns

Returns the face chip height in pixels.

TFFacechip.get_width(self: tfsdk.TFFacechip) → int

Get the face chip width in pixels.

Returns

Returns the face chip width in pixels.
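
A minimal sketch of working with a TFFacechip is shown below. The chip is obtained from tfsdk.SDK.extract_aligned_face() (documented later in this section); the SDK construction, the return shape of preprocess_image(), and the default-constructed TFFacechip are assumptions for illustration, not part of the documented API above.

import tfsdk

# Assumed setup: the ConfigurationOptions/SDK construction and the
# (error_code, image) return shape of preprocess_image() are not
# documented in this section.
options = tfsdk.ConfigurationOptions()
sdk = tfsdk.SDK(options)
error_code, img = sdk.preprocess_image("./person.jpg")

found, fb = sdk.detect_largest_face(img)
if found:
    chip = sdk.extract_aligned_face(img, fb)

    # Documented TFFacechip methods
    chip.save_image("./face_chip.png")
    print(chip.get_width(), chip.get_height())   # chip size in pixels

    # Reload the chip from disk into CPU memory (gpu_memory=False, gpu_index=0).
    # Assumption: TFFacechip can be default-constructed before load_image().
    reloaded = tfsdk.TFFacechip()
    err = reloaded.load_image("./face_chip.png", False, 0)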

SDK.detect_faces(self: tfsdk.SDK, tf_image: tfsdk.TFImage) → List[tfsdk.FaceBoxAndLandmarks]

Detect all the faces in the image and return the bounding boxes and facial landmarks. This method has a small false positive rate. To reduce the false positive rate to near zero, filter out faces with a score lower than 0.90. Alternatively, you can use the FACEDETECTIONFILTER configuration option to filter the detected faces. The face detector has a detection scale range of about 5 octaves; tfsdk.ConfigurationOptions.smallest_face_height determines the lower bound of that range. For example, setting tfsdk.ConfigurationOptions.smallest_face_height to 40 pixels yields a detection scale range of roughly 40 to 1280 (= 40 x 2^5) pixels.

Parameters

tf_image – the input tfsdk.TFImage, returned by tfsdk.SDK.preprocess_image().

Returns

A list of FaceBoxAndLandmarks representing each of the detected faces. If no faces are found, the list will be empty. The detected faces are sorted in order of descending face score.

The recall and precision of the face detection algorithm on the WIDER FACE dataset:

_images/face_detection_roc.png

The effect of face height on similarity score:

_images/face_height_match_score_FULL_model.png
_images/face_height_match_score_LITE_model.png
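
As a concrete illustration of the 0.90 score filter and the smallest_face_height option described above, a minimal sketch follows; the SDK construction and the return shape of preprocess_image() are assumptions, as in the earlier sketch.

import tfsdk

options = tfsdk.ConfigurationOptions()
options.smallest_face_height = 40   # detection range becomes ~40 to 1280 (= 40 x 2^5) pixels
sdk = tfsdk.SDK(options)

error_code, img = sdk.preprocess_image("./group_photo.jpg")

faces = sdk.detect_faces(img)

# Drop low-score detections to reduce the false positive rate to near zero
confident_faces = [f for f in faces if f.score >= 0.90]

# Detections are sorted by descending score, so the first entry is the most confident
for f in confident_faces:
    print(f.score, f.top_left.x, f.top_left.y, f.bottom_right.x, f.bottom_right.y)
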
SDK.detect_largest_face(self: tfsdk.SDK, tf_image: tfsdk.TFImage) → Tuple[bool, tfsdk.FaceBoxAndLandmarks]

Detect the largest face in the image. This method has a small false positive rate. To reduce the false positive rate to near zero, filter out faces with a score lower than 0.90. Alternatively, you can use the FACEDETECTIONFILTER configuration option to filter the detected faces. See tfsdk.SDK.detect_faces() for the detection scale range.

Parameters

tf_image – the input tfsdk.TFImage, returned by tfsdk.SDK.preprocess_image().

Returns

A bool indicating if a face was detected and the corresponding tfsdk.FaceBoxAndLandmarks, in that order.
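
The returned tuple can be unpacked directly; a short sketch, reusing sdk and img from the sketches above:

found, fb = sdk.detect_largest_face(img)
if not found:
    print("No face detected")
else:
    print("Largest face score:", fb.score)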

SDK.get_face_landmarks(self: tfsdk.SDK, tf_image: tfsdk.TFImage, face_box_and_landmarks: tfsdk.FaceBoxAndLandmarks) → Tuple[tfsdk.ERRORCODE, List[tfsdk.Point[106]]]

Obtain the 106 face landmarks.

Parameters

tf_image – the input tfsdk.TFImage, returned by tfsdk.SDK.preprocess_image().

face_box_and_landmarks – the face returned by tfsdk.SDK.detect_faces() or tfsdk.SDK.detect_largest_face().

Returns

The tfsdk.ERRORCODE and list of the 106 face landmark points, returned in that order.


The order of the face landmarks:

_images/landmarks.png
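
A short sketch of retrieving the 106 landmarks, reusing sdk, img, and a detected face fb from the sketches above; the NO_ERROR member of tfsdk.ERRORCODE is an assumption.

err, landmarks = sdk.get_face_landmarks(img, fb)
if err == tfsdk.ERRORCODE.NO_ERROR:     # assumed success value of the error enum
    print(len(landmarks))               # 106 points
    for pt in landmarks[:5]:
        print(pt.x, pt.y)
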
SDK.extract_aligned_face(self: tfsdk.SDK, tf_image: tfsdk.TFImage, face_box_and_landmarks: tfsdk.FaceBoxAndLandmarks, margin_left: int = 0, margin_top: int = 0, margin_right: int = 0, margin_bottom: int = 0, scale: float = 1.0) → tfsdk.TFFacechip

Extract the aligned face chip. Changing the margins and scale will change the face chip size. If using the face chip with Trueface algorithms (e.g. face recognition), do not change the default margin and scale values.

Parameters

tf_image – the input tfsdk.TFImage, returned by tfsdk.SDK.preprocess_image().

face_box_and_landmarks – the face returned by tfsdk.SDK.detect_faces() or tfsdk.SDK.detect_largest_face().

margin_left, margin_top, margin_right, margin_bottom – additional margins to add to the corresponding sides of the face chip.

scale – scale factor to apply to the face chip.

Returns

Returns tfsdk.TFFacechip.
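
A short sketch contrasting the default chip (for use with Trueface algorithms) with an enlarged chip intended for display only, reusing sdk, img, and fb from the sketches above:

# Default margins and scale: use this chip with Trueface algorithms such as face recognition
chip = sdk.extract_aligned_face(img, fb)

# Enlarged chip for display purposes only; do not feed this to Trueface algorithms
display_chip = sdk.extract_aligned_face(
    img, fb,
    margin_left=20, margin_top=20, margin_right=20, margin_bottom=20,
    scale=1.5)
display_chip.save_image("./display_chip.png")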

SDK.estimate_head_orientation(self: tfsdk.SDK, tf_image: tfsdk.TFImage, face_box_and_landmarks: tfsdk.FaceBoxAndLandmarks) → Tuple[tfsdk.ERRORCODE, float, float, float]

Estimate the head orientation using the detected facial landmarks. This method only works with images where the face is at least 100 pixels in height; otherwise it returns ERRORCODE.FACE_TOO_SMALL.

Parameters

tf_image – the input tfsdk.TFImage, returned by tfsdk.SDK.preprocess_image().

face_box_and_landmarks – the face returned by tfsdk.SDK.detect_faces() or tfsdk.SDK.detect_largest_face().
Returns

The ERRORCODE, yaw, pitch, roll, in that order. Angles are in radians.
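
Since the angles are returned in radians, they can be converted with math.degrees(); a short sketch, reusing sdk, img, and fb from the sketches above (the NO_ERROR member of tfsdk.ERRORCODE is an assumption):

import math

err, yaw, pitch, roll = sdk.estimate_head_orientation(img, fb)
if err == tfsdk.ERRORCODE.NO_ERROR:              # assumed success value of the error enum
    print("yaw:", math.degrees(yaw), "pitch:", math.degrees(pitch), "roll:", math.degrees(roll))
elif err == tfsdk.ERRORCODE.FACE_TOO_SMALL:      # face is under 100 pixels in height
    print("Face too small for head orientation estimation")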

The accuracy of this method is estimated using 1920x1080 pixel test images. A test image:

_images/yaw_positive_20.jpg

The accuracy of the head orientation estimation:

_images/yaw_estimation_accuracy.png

The effect of the face yaw angle on match similarity can be seen in the following figure:

_images/yaw_vs_sim_score.png

The effect of the face pitch angle on match similarity can be seen in the following figure:

_images/pitch_vs_sim_score.png
class tfsdk.Point
to_dict(self: tfsdk.Point) → dict
property x

Coordinate along the horizontal axis, or pixel column.

property y

Coordinate along the vertical axis, or pixel row.

class tfsdk.FaceBoxAndLandmarks
property bottom_right

The bottom-right corner Point of the bounding box.

property landmarks

The list of facial landmark points (Point) in this order: subject right eye, subject left eye, nose, subject right mouth corner, subject left mouth corner.

property score

Likelihood of this being a true positive; a value lower than 0.85 indicates a high chance of being a false positive.

to_dict(self: tfsdk.FaceBoxAndLandmarks) → dict
property top_left

The top-left corner Point of the bounding box.

The order of the face landmarks:

_images/landmarks2.jpg
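
A short sketch of reading the bounding box and the five landmark points from a FaceBoxAndLandmarks returned by tfsdk.SDK.detect_faces() or tfsdk.SDK.detect_largest_face():

# fb is a tfsdk.FaceBoxAndLandmarks from detect_faces() or detect_largest_face()
face_width = fb.bottom_right.x - fb.top_left.x
face_height = fb.bottom_right.y - fb.top_left.y

# Landmarks are ordered: right eye, left eye, nose, right mouth corner, left mouth corner
right_eye, left_eye, nose, right_mouth, left_mouth = fb.landmarks

print(face_width, face_height, fb.score, nose.x, nose.y)
print(fb.to_dict())   # dictionary form of the detection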