Welcome to Trueface’s documentation!

trueface.access_control

trueface.age

module to handle age detection

trueface.face_attributes

module to handle face attributes like emotion

trueface.heartbeat

Heartbeat detector

trueface.helper

helper methods

trueface.motion

Motion Detector module

trueface.object_detection

Object Detection Module

trueface.recognition

Recognition Module

trueface.searching

Search Module

trueface.spoof

Spoof detection module

trueface.tracking

Tracking module

trueface.utils

utility methods

trueface.video

Video module

module to handle age detection

class trueface.age.AgeDetector(model_path=None, params_path=None, license=None, ctx='cpu', set_weights=True)

Age Detector class

Constructor

Parameters
  • model_path (str) – path to model file

  • params_path (str) – path to params file

  • license (str) – the license key you received from Trueface

  • ctx (str) – context (either ‘cpu’ or ‘gpu’)

  • set_weights (bool) – whether to set weights or not

predict(chip)

Take a chip and predict the age

Parameters

chip (opencv image) – the chip to predict the age for

Returns

the predicted age

Return type

float
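
A minimal usage sketch; the model/params paths and the license key below are placeholders, not values shipped with the SDK.

    import cv2
    from trueface.age import AgeDetector

    # model_path, params_path and license are placeholders for your own values
    age_detector = AgeDetector(
        model_path="models/age-model.json",
        params_path="models/age-params.params",
        license="YOUR_LICENSE_KEY",
        ctx="cpu",
    )

    chip = cv2.imread("face_chip.jpg")   # a cropped face chip as an OpenCV image
    age = age_detector.predict(chip)
    print("estimated age: %.1f" % age)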

module to handle face attributes like emotion

class trueface.face_attributes.FaceAttributes(model, params, labels)

Face attributes class

initialize the face attributes class

get_attributes(chip)

gets the emotion for a face chip
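
A minimal sketch; the model and params paths are placeholders, and the labels argument is assumed here to be the list of emotion labels the model was trained on.

    import cv2
    from trueface.face_attributes import FaceAttributes

    attributes = FaceAttributes(
        model="models/emotion.json",                   # placeholder path
        params="models/emotion.params",                # placeholder path
        labels=["angry", "happy", "neutral", "sad"],   # assumed label set
    )

    chip = cv2.imread("face_chip.jpg")   # a cropped face chip
    print(attributes.get_attributes(chip))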

Heartbeat detector

class trueface.heartbeat.HeartbeatDetector(buffer_size=250)

Heartbeat detector class

helper methods

trueface.helper.adjust_input(in_data)

adjust the input from (h, w, c) to (1, c, h, w) for network input

in_data: numpy array of shape (h, w, c)

input data

out_data: numpy array of shape (1, c, h, w)

reshaped array
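
The shape contract, illustrated with plain NumPy (a sketch of the documented behaviour, not the library implementation itself):

    import numpy as np

    in_data = np.zeros((480, 640, 3), dtype=np.float32)          # (h, w, c)
    out_data = np.transpose(in_data, (2, 0, 1))[np.newaxis, :]   # (1, c, h, w)
    assert out_data.shape == (1, 3, 480, 640)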

trueface.helper.detect_first_stage(img, net, scale, threshold)

run PNet for first stage

img: numpy array, BGR order

input image

net: PNet

the PNet worker that runs this stage

scale: float number

how much the input image should be scaled

threshold: float number

detection threshold for the PNet scores

total_boxes: the detected bounding boxes

trueface.helper.generate_bbox(map, reg, scale, threshold)

generate bbox from feature map

map: numpy array, n x m x 1

detect score for each position

reg: numpy array, n x m x 4

bounding box regression values

scale: float number

scale of this detection

threshold: float number

detect threshold

the generated bbox array

trueface.helper.nms(boxes, overlap_threshold, mode='Union')

non-maximum suppression

boxes: numpy array, n x 5

input bbox array

overlap_threshold: float number

threshold of overlap

mode: str

how to compute the overlap ratio, 'Union' or 'Min'

index array of the selected bboxes
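
A hypothetical call; the boxes array uses the documented n x 5 layout, assumed here to be [x1, y1, x2, y2, score].

    import numpy as np
    from trueface.helper import nms

    boxes = np.array([
        [10, 10, 60, 60, 0.95],
        [12, 12, 62, 62, 0.90],      # heavily overlaps the first box
        [200, 200, 260, 260, 0.80],
    ], dtype=np.float32)

    keep = nms(boxes, overlap_threshold=0.5, mode='Union')   # indices of surviving boxes
    print(boxes[keep])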

Motion Detector module

class trueface.motion.MotionDetector(frame, threshold=2, max_value=2)

MotionDetector class

Object Detection Module

class trueface.object_detection.ObjectRecognizer(ctx='cpu', model_path=None, params_path=None, classes=None, license=None, conf_threshold=0.5, nms_threshold=0.4, dims=(None, None))

TF Local Object Detector

Parameters
  • ctx – ‘cpu’ or ‘gpu’

  • model_path – path to model

  • params_path – path to params file

  • license – your license token

  • method – ‘ssd’ or ‘yolo’

  • conf_threshold – set the conf_threshold

  • nms_threshold – set the nms_threshold

  • dims

batch_predict(images)

Detect objects in a batch of images

Parameters

images – list of input images

Returns

list of results

compute_resize_scale(image_shape, min_side=512, max_side=700)

Compute an image scale such that the image size is constrained to min_side and max_side.

min_side: the image’s min side will be equal to min_side after resizing.

max_side: if after resizing the image’s max side is above max_side, resize until the max side is equal to max_side.

Returns

A resizing scale.
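
One common way to compute such a constrained scale (a sketch of the documented behaviour, not necessarily the library’s exact implementation):

    def compute_resize_scale(image_shape, min_side=512, max_side=700):
        h, w = image_shape[:2]
        scale = min_side / min(h, w)       # bring the smaller side to min_side
        if max(h, w) * scale > max_side:   # cap the larger side at max_side
            scale = max_side / max(h, w)
        return scale

    print(compute_resize_scale((1080, 1920)))   # ~0.36 for a 1080p frame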

predict(image)

Detect objects in a single image

Parameters

image – input image

Returns

list of results
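
A minimal sketch; the model/params paths, the classes list and the license key are placeholders.

    import cv2
    from trueface.object_detection import ObjectRecognizer

    detector = ObjectRecognizer(
        ctx="cpu",
        model_path="models/detector.json",      # placeholder paths
        params_path="models/detector.params",
        classes=["person", "car"],              # placeholder class list
        license="YOUR_LICENSE_KEY",
        conf_threshold=0.5,
        nms_threshold=0.4,
    )

    image = cv2.imread("street.jpg")
    results = detector.predict(image)                        # single image
    batch_results = detector.batch_predict([image, image])   # several images at once
    print(results)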

resize_image(img, min_side=512, max_side=700)

Resize an image such that the size is constrained to min_side and max_side.

min_side: the image’s min side will be equal to min_side after resizing.

max_side: if after resizing the image’s max side is above max_side, resize until the max side is equal to max_side.

Returns

A resized image.

Recognition Module

class trueface.recognition.BaseRecognizer(license, ctx='cpu', gpu=0)

BaseRecognizer class

batch_make_mx_input(images, dims=(512, 512))

Prepare a list of images for MXNet input

Parameters
  • images

  • dims

Returns

list with mx.nd.array elements

batch_preprocess_image(images, dims=(512, 512))

Takes a list of images and preprocesses them (resize and BGR2RGB conversion)

Parameters
  • images

  • dims

Returns

list of preprocessed images

blur_region(region, frame)

blurs a region in the image

Parameters
  • region – (leftx, topy, rightx, bottomy)

  • frame

cosine_sim(feature, collection_features, length=512)

Cosine Similarity with mxnet

Parameters
  • feature – the source feature

  • collection_features – a list of features

  • length

draw_box(img, box)

draws a box on the image

Parameters
  • img (str or binary) – image path, base64 encoded image, numpy array or OpenCV image

  • box – (pt1, pt2, pt3, pt4)

  • pt1 – (x coordinate of vertex, y coordinate of vertex)

  • pt2 – (x coordinate of point opposite vertex, y coordinate of vertex)

draw_label(image, point, label, font=0, font_scale=0.5, thickness=1)

Draw label on the image

Parameters
  • image (str or binary) – image path, base64 encoded image, numpy array or OpenCV image

  • point (tuple) – (x_label, y_label)

  • label (str) – The label you want to write

  • font (int) – your preferred font

  • font_scale (float) – scaling factor for the font

  • thickness (int) – thickness of text

get_image(_bin, rgb=True)

gets an image from a path or from a base64 string

Parameters
  • _bin (str) – filesystem path or base64 string

  • rgb (bool) – whether to perform BGR2RGB conversion

Returns

a pre-handled image ready for further processing

get_string_from_cv2(image, encode=False)

Gets a string from a cv2 image

Parameters
  • image

  • encode

Returns

string representation of image

make_cv_input(image)

Create OpenCV blob from an image

Parameters

image

Returns

cv2.dnn.blob

make_mx_input(image, dims=(512, 512))

prepare image for MXNet input

Parameters
  • image

  • dims

Returns

mx.nd.array

preprocess_image(image, dims=(512, 512))

resize image to dims and perform a BGR2RGB conversion

Parameters
  • image (opencv image) – Image

  • dims – Dimensions

Returns

the preprocessed image

class trueface.recognition.ColorRecognizer(n_clusters=10)

Color Recognizer class

Parameters

n_clusters

centroid_histogram()

grab the number of different clusters and create a histogram based on the number of pixels assigned to each cluster

Returns

a histogram of the number of pixels assigned to each cluster

Return type

histogram

detect(img)

detect colors present in an image

Parameters

img – the input image

Returns

list of colors along with percentages
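
A hypothetical usage sketch; detect() clusters the pixel colours and reports each cluster’s share of the image.

    import cv2
    from trueface.recognition import ColorRecognizer

    color_recognizer = ColorRecognizer(n_clusters=10)
    img = cv2.imread("car.jpg")              # placeholder image path
    colors = color_recognizer.detect(img)    # list of colors along with percentages
    print(colors)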

class trueface.recognition.FaceRecognizer(ctx='cpu', min_face=40, accurate_landmark=False, fd_model_path=None, fr_model_path=None, params_path=None, license=None, gpu=0)

TF Local Face Detector

static average_features(features_lists)

return the mean of all features in the list

This allows us to directly input a list of tracked objects where some of them will have a list of features and some will not

Parameters

features_lists – list of features lists or None

Returns

the averaged feature

batch_get_features(image_array, batch_size=24, progress_bar=True)

returns face features for a list of images

this function does not perform any face detection but feeds the provided image directly to the model for batch feature extraction

Parameters
  • image_array (list) –

  • batch_size (int) –

  • progress_bar (bool) –

batch_identify(chips, collection='collection.npz', features=None, labels=None, threshold=0.25, db=None, return_features=False, length=512)

identify a batch of face chips by comparing them to a collection npz file or MemSQL table

Parameters
  • chips (list) – a list of face chips. (opencv images or numpy arrays)

  • collection (str) – path to a collection npz file

  • threshold (float) – similarity threshold over which to call it a match

  • return_features (bool) – whether to return the features

  • labels (list) – list of labels

  • features (list) – list of corresponding features; this gets overridden if you pass a collection

Returns

List of dictionaries with predicted_label and confidence as well as features if the return_features param was set to True

create_collection(folder=None, output=None, images=None, labels=None, return_features=False, batch_size=8, db=False, mp=False)

creates a collection from a folder

Parameters
  • folder (str) – path to folder holding images

  • images (list) – alternatively, you can pass a list of images

  • labels (list) – labels corresponding to the images list; you must pass both images and labels or neither

  • output (str) – location for the generated npz collection file, or the db name if a DBService object is passed as the db parameter. If this is None, the folder parameter will be used

  • db (DBService) – DBService object that connects to a database

  • return_features (bool) – whether to return the extracted features

  • batch_size (int) – size of the batch to break the passed list into for processing
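
A minimal enrollment sketch; the model/params paths, the license key, the folder layout (one sub-folder of images per identity is assumed) and the output file are placeholders.

    from trueface.recognition import FaceRecognizer

    recognizer = FaceRecognizer(
        ctx="cpu",
        fd_model_path="models/fd.json",    # placeholder model paths
        fr_model_path="models/fr.json",
        params_path="models/fr.params",
        license="YOUR_LICENSE_KEY",
    )

    # enrollment_images/ is assumed to hold one sub-folder of face images per identity
    recognizer.create_collection(
        folder="enrollment_images/",
        output="collection.npz",
        batch_size=8,
    )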

create_collection_directory(folder, batch_size, mp)

delete_feature_from_collection(collection_filename, feature)

Remove an individual face feature from a collection

Parameters
  • collection_filename (str) – Path to an existing collection npz file

  • feature (numpy array) – the feature, as returned by identify for example

Returns

updated collection

delete_from_collection(collection_filename, labels)

Remove a label and the corresponding features from a collection

Parameters
  • collection_filename (str) – Path to an existing collection npz file

  • labels (list) – list of labels to delete

Returns

updated collection

find_biggest_face(img, return_chips=False, chip_size=112, padding=0.2, return_binary=False)

finds the biggest face in the image

Parameters
  • img (image path, base64 encoded image, numpy array or OpenCV image) – image

  • return_chips (bool) –

  • chip_size (int) – size of face chip

  • padding (float) –

  • return_binary (bool) –

Returns

the detected box, points and chips if the return_binary param was set to True, or a json dict with the above data otherwise. The chip is only returned if the return_chips param was set to True

find_faces(img, return_chips=False, chip_size=112, padding=0.2, return_binary=False)

finds all faces and returns chips

Parameters
  • img (image path, base64 encoded image, numpy array or OpenCV image) – image

  • return_chips (bool) –

  • chip_size (int) – size of face chip

  • padding (float) –

  • return_binary (bool) –

Returns

the detected box, points and chips if the return_binary param was set to True, or a json dict with the above data otherwise. The chip is only returned if the return_chips param was set to True

get_features(img, b64=False)

features of the biggest face found in the provided image

Parameters
  • img (image path, base64 encoded image, numpy array or OpenCV image) – image

  • b64 (bool) – whether to return embedding b64 encoded

get_match(score, threshold, use_sim)

Return True or False based on the score and threshold

Parameters
  • score

  • threshold

  • use_sim – if True, return True when score >= threshold; if False, return True when score <= threshold

Returns

boolean

Return type

match

identify(chip, collection=None, threshold=0.25, return_features=False, labels=None, features=None, length=512, db=False)

identify a face chip by comparing it to a collection npz file or database table

Parameters
  • chip (opencv image or numpy array) – face chip (padded image array)

  • collection (str) – path to a collection npz file

  • features (list) – list of features; this gets overridden if you pass a collection

  • threshold (float) – similarity threshold over which to call it a match

  • return_features (bool) – whether to return the features

  • labels (list) – list of labels corresponding to the features

  • db (DBService) – pass a DBService object if you want to use the DB backend to manage your collection

Returns

A dictionary with predicted_label and confidence as well as features if the return_features param was set to True
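
An end-to-end sketch tying find_faces and identify together, reusing a recognizer constructed as in the create_collection sketch above; the image path and collection file are placeholders, and the unpacking of the find_faces result assumes the documented (box, points, chips) order.

    import cv2

    # recognizer: a FaceRecognizer constructed as in the create_collection sketch above
    frame = cv2.imread("group_photo.jpg")
    boxes, points, chips = recognizer.find_faces(   # assumed (box, points, chips) order
        frame, return_chips=True, chip_size=112, padding=0.2, return_binary=True
    )

    for chip in chips:
        result = recognizer.identify(chip, collection="collection.npz", threshold=0.25)
        print(result["predicted_label"], result["confidence"])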

impl_get_features(img)

returns face features for an image

this function does not perform any face detection but feeds the provided image directly to the model for feature extraction

Parameters

img (opencv image or numpy array) –

Returns

feature found in image

match_two_features(source, target, use_sim=False, threshold=1.5)

Match two features

Parameters
  • source – base64 encoded source feature

  • target – base64 encoded target feature

  • use_sim – if use_sim = True, we recommend a threshold of 0.25

  • threshold

match_two_images(source, target, use_sim=False, threshold=1.5)

matches two images

Parameters
  • source (path, base64, binary, OpenCV or numpy image) – source image to match

  • target (path, base64, binary, OpenCV or numpy image) – target image to match

  • use_sim (bool) – if use_sim = True, we recommend a threshold of 0.25

  • threshold (float) –
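
A quick one-to-one comparison sketch, again reusing a previously constructed recognizer; with use_sim=True the recommended threshold of 0.25 is used.

    # recognizer: a FaceRecognizer constructed as in the earlier sketches
    match = recognizer.match_two_images(
        "id_card.jpg", "selfie.jpg",    # any path, base64, binary, OpenCV or numpy image
        use_sim=True,
        threshold=0.25,
    )
    print(match)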

read_collection_dir_parallel(folder)

Load an image collection from disk using all available CPUs

Parameters

folder (str) – path to collection folder

Returns

features (list): list of extracted features

labels (list): list of corresponding labels

read_collection_dir_sequential(folder)

Load an image collection from disk in sequence on one CPU

Parameters

folder (str) – path to collection folder

Returns

features (list): list of extracted features

labels (list): list of corresponding labels

update_collection(input_folder, collection_filename=None, return_features=False)

Parameters
  • input_folder (str) – Path to a folder with updated images

  • collection_filename (str) – Path to an existing collection npz file

  • return_features (bool) – Whether to return features from input_folder

Returns

json response with the path to the updated collection filename

class trueface.recognition.ObjectRecognizer(ctx='cpu', model_path=None, params_path=None, license=None, method='ssd', conf_threshold=0.5, nms_threshold=0.4, dims=(None, None))

TF Local Object Detector

Parameters
  • ctx – ‘cpu’ or ‘gpu’

  • model_path – path to model

  • params_path – path to params file

  • license – your license token

  • method – ‘ssd’ or ‘yolo’

  • conf_threshold – set the conf_threshold

  • nms_threshold – set the nms_threshold

  • dims

detect(input, dims=None)

Detect objects in the given input image

Parameters
  • input

  • dims

Returns

list of results

non_max_suppression(boxes, confidences)

Remove the bounding boxes with low confidence using non-max suppression

Parameters
  • boxes – list of boxes

  • confidences – list of corresponding confidence scores

Returns:

postprocess_mx_output(outs, conf_threshold, nms_threshold=None, dims=None)

Scan through all the bounding boxes output from the network and keep only the ones with high confidence scores. Assign the box’s class label as the class with the highest score.

Parameters
  • outs – list of outs

  • conf_threshold – confidence threshold

  • nms_threshold – NMS threshold

  • dims – dimensions

Returns: filtered results

trueface.recognition.response(message, data)

return a json object

Parameters
  • message

  • data

Returns

a json object containing a message and a data field

Search Module

class trueface.searching.VideoSearch(index='index.npy', video='video.avi', output=None, out_filename='results.avi', save_photos=False, random_name_len=5, recognizer=None, similarity_threshold=0.25)

VideoSearch class

Spoof detection module

class trueface.spoof.SpoofDetector(model_path, params_path, token, ctx='cpu')

The spoof detector class

Parameters
  • model_path – filesystem path to model

  • params_path – filesystem path to params

  • token – your token from creds.json

  • ctx – the mxnet context, “cpu” or “gpu”

is_spoof(image, threshold)

Returns true or false, indicating whether the image is a spoof, based on the threshold

Parameters
  • image (numpy array or str) – if image is a string it will be read as a filepath

  • threshold (float) – threshold between 0 and 1 below which we call it a spoof

Returns

true or false whether this is a spoof image

Return type

bool

spoof_probability(image)

Returns the probability that the passed image is a spoof

Parameters

image (numpy array or str) – if image is a string it will be read as a filepath

Returns

Probability this is a spoof from 0 to 1

Return type

float
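
A minimal sketch; the model/params paths and the token are placeholders taken from your own creds.json.

    import cv2
    from trueface.spoof import SpoofDetector

    spoof_detector = SpoofDetector(
        model_path="models/spoof.json",      # placeholder paths
        params_path="models/spoof.params",
        token="YOUR_TOKEN",
        ctx="cpu",
    )

    image = cv2.imread("webcam_frame.jpg")
    print("spoof probability:", spoof_detector.spoof_probability(image))   # 0.0 to 1.0
    print("is spoof:", spoof_detector.is_spoof(image, threshold=0.5))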

Tracking module

class trueface.tracking.BaseTracker(threshold=10.0, min_feats=1, track_movements=0, max_steps=20)

BaseTracker class

Parameters
  • threshold (float) –

  • min_feats

  • track_movements

  • max_steps

clean()

clean tracked objects

draw_motion_tracks(frame)

draw motion tracks on frame

Parameters

frame

abstract find_tracked_object(bbox, image)

Abstract method

remove_unknown_identities()

remove unknown identities

abstract track(object_to_track, image, identity, chip=None, related_object=None)

Abstract method

abstract update(bboxes, chips, frame, features=None)

Abstract method

class trueface.tracking.COObjectTracker(threshold, min_feats=1, track_movements=0, max_steps=20)

find_tracked_object(obj, frame)

Matches passed obj to a tracked obj

update(bboxes, frame, chips=None, features=None)

update_trackers(frame)

update trackers
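
A hypothetical per-frame loop showing how the tracker might be fed; the video path is a placeholder, detect_boxes is a stand-in for your own detector, and calling update_trackers on every frame is an assumption about the intended call pattern.

    import cv2
    from trueface.tracking import COObjectTracker

    def detect_boxes(frame):
        # stand-in for a real detector, e.g. FaceRecognizer.find_faces or ObjectRecognizer.detect
        return []

    tracker = COObjectTracker(threshold=10.0, track_movements=1)
    capture = cv2.VideoCapture("video.avi")   # placeholder video file

    while True:
        ok, frame = capture.read()
        if not ok:
            break
        bboxes = detect_boxes(frame)
        tracker.update(bboxes, frame)         # associate detections with existing tracks
        tracker.update_trackers(frame)
        tracker.draw_motion_tracks(frame)     # inherited from BaseTracker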

class trueface.tracking.CVObjectTracker(threshold, min_feats=1, track_movements=0, max_steps=20, tracker_type='KCF')

find_tracked_object(obj, frame)

Matches passed obj to a tracked obj

track(object_to_track, image, identity, chip=None, features=None, related_object=None)

track object

update(bboxes, chips, frame, features=[])

update tracked objects with the given bboxes and chips for the current frame

utility methods

class trueface.utils.RedisQueue(name, namespace='queue', **redis_kwargs)

Simple Queue with Redis Backend

The default connection parameters are: host=’localhost’, port=6379, db=0

empty()

Return True if the queue is empty, False otherwise.

get(block=True, timeout=None)

Remove and return an item from the queue.

If optional args block is true and timeout is None (the default), block if necessary until an item is available.

get_nowait()

Equivalent to get(False).

put(item)

Put item into the queue.

qsize()

Return the approximate size of the queue.
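
Basic producer/consumer usage; this assumes a Redis server reachable with the default connection parameters above.

    from trueface.utils import RedisQueue

    queue = RedisQueue("frames", namespace="queue")   # extra kwargs go to the Redis client

    queue.put("frame-0001")
    print(queue.qsize())           # approximate number of queued items
    item = queue.get(block=True)   # blocks until an item is available
    print(item, queue.empty())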

Video module

class trueface.video.QVideoStream(src=0, queue_size=128)

QVideoStream class

initialize the file video stream along with the boolean used to indicate if the thread should be stopped or not

Parameters
  • src

  • queue_size

start()

start a thread to read frames from the file video stream

Returns

QVideoStream object

update()

read and update the video stream

Returns: loops infinitely
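
A sketch of starting a threaded capture from the default webcam; how frames are consumed afterwards is not documented in this section, so that part is left as a placeholder comment.

    from trueface.video import QVideoStream

    stream = QVideoStream(src=0, queue_size=128).start()
    # ... consume frames from the stream's internal queue, then release the source when done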
