Class: exports

exports()

This is the main TFSDK class. In order for the SDK to work, first download the model files; only download the models that you require.

Constructor

new exports()

Examples

Download blink detector model

curl -O -L https://storage.googleapis.com/sdk-models/enc/blink/blink_detector_v1.trueface.enc

Download body pose estimator model

test -e body_pose_estimator_v1.trueface.enc || curl -O -L https://storage.googleapis.com/sdk-models/enc/body_pose_estimator/v1/body_pose_estimator_v1.trueface.enc

Download face landmarks v2 model

curl -O -L https://storage.googleapis.com/sdk-models/enc/landmark_detection/face_landmark_detector_v2.trueface.enc

Download face recognition lite v2 model

curl -O -L https://storage.googleapis.com/sdk-models/enc/face_recognition/cpu/face_recognition_cpu_lite_v2.trueface.enc

Download face recognition tfv4 cpu model

curl -O -L https://storage.googleapis.com/sdk-models/enc/face_recognition/cpu/face_recognition_cpu_v4.trueface.enc

Download face recognition tfv5 cpu model

curl -O -L https://storage.googleapis.com/sdk-models/enc/face_recognition/cpu/face_recognition_cpu_v5.trueface.enc

Download face recognition tfv4 gpu model

curl -O -L https://storage.googleapis.com/sdk-models/enc/face_recognition/gpu/face_recognition_gpu_v4.trueface.enc

Download face recognition tfv5 gpu model

curl -O -L https://storage.googleapis.com/sdk-models/enc/face_recognition/gpu/face_recognition_gpu_v5.trueface.enc

Download object detector v1 model

curl -O -L https://storage.googleapis.com/sdk-models/enc/object_detection/object_detector_v1.trueface.enc

Download spoof model

curl -O -L https://storage.googleapis.com/sdk-models/enc/spoof/v5/spoof_v5.trueface.enc

Save the commands above to a script (e.g. model_file.sh), then in your command line run sh model_file.sh and place the model files in the desired location. Be sure to set the correct modelsPath in your SDK initialization.
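
The snippet below is a minimal initialization sketch. The package name, the constructor export, and the shape of the options object are assumptions for illustration (the class is documented here simply as exports); consult your distribution for the exact names. The later examples on this page reuse this sdk instance.

// NOTE: package name and configuration option shape are illustrative assumptions.
const TFSDK = require('trueface-sdk');

// modelsPath must point at the directory containing the downloaded .trueface.enc model files.
const sdk = new TFSDK({ modelsPath: '/opt/trueface/models' });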

Methods

checkSpoofImageFaceSize(faceBoxAndLandmarks, imageProperties, activeSpoofStage) → {object}

Ensures that the face size meets the requirements for active spoof. You must check the return value of this function! Active spoof works by analyzing the way a person's face changes as they move closer to the camera. The active spoof solution therefore expects the face to be at a certain distance from the camera. **In the far image, the face should be about 18 inches from the camera, while in the near image, the face should be 7-8 inches from the camera.** This function must be called before calling detectActiveSpoof().
Parameters:
Name Type Description
faceBoxAndLandmarks object The face on which to run active spoof detection.
imageProperties object The properties of the image, obtained from getImageProperties().
activeSpoofStage object The stage of the image, either near stage or far stage.
Returns:
error code, see ErrorCode. If `ErrorCode::NO_ERROR` is returned, then the image is eligible for active spoof detection. If `ErrorCode::FACE_TOO_CLOSE` or `ErrorCode::FACE_TOO_FAR` is returned, the image is not eligible for active spoof detection.
Type
object
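
A usage sketch, reusing the sdk instance from the constructor example; the ActiveSpoofStage enum name and the return field names (which mirror the documented return values) are assumptions:

// Far image already set via setImage()/setImageFromFile(); face roughly 18 inches from the camera.
const det = sdk.detectLargestFace();
const props = sdk.getImageProperties();
const err = sdk.checkSpoofImageFaceSize(det.faceBoxAndLandmarks, props, TFSDK.ActiveSpoofStage.FAR);
// Proceed to detectActiveSpoof() only if err indicates NO_ERROR;
// FACE_TOO_CLOSE or FACE_TOO_FAR means this image is not eligible.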

createDatabaseConnection(databaseConnectionString) → {object}

Create a connection to a new or existing database. If the database does not exist, a new one will be created with the provided name. If the `Trueface::DatabaseManagementSystem::NONE` (memory only) configuration option is selected, this function does not need to be called (and is a harmless no-op).
Parameters:
Name Type Description
databaseConnectionString string If `Trueface::DatabaseManagementSystem::SQLITE` is selected, this should be the filepath to the database, ex. "/myPath/myDatabase.db". If `Trueface::DatabaseManagementSystem::POSTGRESQL` is selected, this should be a database connection string; refer to the PostgreSQL documentation for the full list of supported connection parameters. ex. "hostaddr=192.168.1.0 port=5432 dbname=face_recognition user=postgres password=my_password". ex. "host=localhost port=5432 dbname=face_recognition user=postgres password=my_password". To enable SSL, add "sslmode=require" to the connection string.
Returns:
error code, see ErrorCode.
Type
object
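
For example, reusing the sdk instance from the constructor example and assuming the SQLITE or POSTGRESQL database management system was selected in the configuration options (connection values are placeholders):

// SQLITE: pass the filepath of the database.
let err = sdk.createDatabaseConnection('/myPath/myDatabase.db');

// POSTGRESQL: pass a connection string; append "sslmode=require" to enable SSL.
err = sdk.createDatabaseConnection('host=localhost port=5432 dbname=face_recognition user=postgres password=my_password');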

createLoadCollection(collectionName) → {object}

Create a new collection, or load data from an existing collection into memory (RAM) if one with the provided name already exists in the database. Equivalent to calling createCollection() then loadCollection().
Parameters:
Name Type Description
collectionName string the name of the collection.
Returns:
error code, see ErrorCode.
Type
object
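
For example, reusing the sdk instance from the constructor example (the collection name is illustrative; call this after createDatabaseConnection() unless the memory-only configuration is used):

// Creates the collection "employees" if it does not exist, then loads its Faceprints into RAM.
const err = sdk.createLoadCollection('employees');
// Check err against ErrorCode before enrolling or identifying Faceprints.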

detectActiveSpoof(nearFaceLandmarks, farFaceLandmarks) → {number|boolean|object}

Detect if there is a presentation attack attempt. Must call checkSpoofImageFaceSize() on both input faces before calling this function.
Parameters:
Name Type Description
nearFaceLandmarks object The face landmarks of the near face, obtained by calling getFaceLandmarks().
farFaceLandmarks object The face landmarks of the far face, obtained by calling getFaceLandmarks().
Returns:
  • spoofScore The output spoof score. If the spoof score is above the threshold, then it is classified as a real face. If the spoof score is below the threshold, then it is classified as a fake face.
    Type
    number
  • spoofPrediction The predicted spoof result, using a spoofScore threshold of 1.05.
    Type
    boolean
  • error code, see ErrorCode.
    Type
    object
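
A sketch of the full two-image active spoof flow, reusing the sdk instance from the constructor example; the image paths, the ActiveSpoofStage enum name, and the return field names (which mirror the documented return values) are assumptions, and error checks are omitted for brevity:

// Near image: face about 7-8 inches from the camera.
sdk.setImageFromFile('near.jpg');
const nearFace = sdk.detectLargestFace();
sdk.checkSpoofImageFaceSize(nearFace.faceBoxAndLandmarks, sdk.getImageProperties(), TFSDK.ActiveSpoofStage.NEAR);
const near = sdk.getFaceLandmarks(nearFace.faceBoxAndLandmarks);

// Far image: face about 18 inches from the camera.
sdk.setImageFromFile('far.jpg');
const farFace = sdk.detectLargestFace();
sdk.checkSpoofImageFaceSize(farFace.faceBoxAndLandmarks, sdk.getImageProperties(), TFSDK.ActiveSpoofStage.FAR);
const far = sdk.getFaceLandmarks(farFace.faceBoxAndLandmarks);

// A spoofScore above the 1.05 threshold is classified as a real face.
const spoof = sdk.detectActiveSpoof(near.landmarks, far.landmarks);
console.log(spoof.spoofScore, spoof.spoofPrediction);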

detectFaces() → {object|object}

Detect all the faces in the image. This method has a small false positive rate. To reduce the false positive rate to near zero, filter out faces with a score lower than 0.90. Alternatively, you can use the `Trueface::FaceDetectionFilter` configuration option to filter the detected faces. The face detector has a detection scale range of about 5 octaves. ConfigurationOptions.smallestFaceHeight determines the lower end of the detection scale range. E.g., setting ConfigurationOptions.smallestFaceHeight to 40 pixels yields a detection scale range of ~40 pixels to 1280 (= 40 x 2^5) pixels.
Returns:
  • faceBoxAndLandmarks a vector of FaceBoxAndLandmarks representing each of the detected faces. If no faces are found, the vector will be empty. The detected faces are sorted in order of descending face score.
    Type
    object
  • error code, see ErrorCode.
    Type
    object
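
For example, filtering detections to push the false positive rate toward zero (reusing the sdk instance from the constructor example; the return field names mirror the documented values and are assumptions about the exact shape):

const res = sdk.detectFaces();
// Keep only detections with a face score of at least 0.90.
const confidentFaces = res.faceBoxAndLandmarks.filter((fb) => fb.score >= 0.90);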

detectGlasses(faceBoxAndLandmarks, result, glassesScore) → {object}

Detect whether the face in the image is wearing any type of eyeglasses or not.
Parameters:
Name Type Description
faceBoxAndLandmarks object FaceBoxAndLandmarks returned by detectFaces() or detectLargestFace().
result boolean The predicted GlassesLabel for the face image.
glassesScore number The glasses score for this image. This can be used to set custom thresholds that work better for your use case. By default, a glasses score greater than 0.0 indicates that glasses were detected.
Returns:
error code, see ErrorCode.
Type
object

detectLargestFace() → {object|boolean|object}

Detect the largest face in the image. This method has a small false positive rate. To reduce the false positive rate to near zero, filter out faces with score lower than 0.90. Alternatively, you can use the `Trueface::FaceDetectionFilter` configuration option to filter the detected faces. See detectFaces() for the detection scale range.
Returns:
  • faceBoxAndLandmarks the FaceBoxAndLandmarks containing the landmarks and bounding box of the largest detected face in the image.
    Type
    object
  • found whether a face was found in the image.
    Type
    boolean
  • error code, see ErrorCode.
    Type
    object

detectMask(faceBoxAndLandmarks) → {boolean|object}

Detect whether the face in the image is wearing a mask or not
Parameters:
Name Type Description
faceBoxAndLandmarks object FaceBoxAndLandmarks returned by detectFaces() or detectLargestFace().
Returns:
  • result The predicted MaskLabel for the face image.
    Type
    boolean
  • error code, see ErrorCode.
    Type
    object
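
For example, reusing the sdk instance from the constructor example (the destructured field names mirror the documented return values and are assumptions):

const det = sdk.detectLargestFace();
if (det.found) {
  const mask = sdk.detectMask(det.faceBoxAndLandmarks);
  // mask is assumed to expose the predicted MaskLabel (result) and an error code.
}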

enrollFaceprint(faceprint, identity) → {string|object}

Enroll a Faceprint for a new or existing identity in the collection.
Parameters:
Name Type Description
faceprint object the Faceprint to enroll in the collection.
identity string the identity corresponding to the Faceprint.
Returns:
  • UUID universally unique identifier corresponding to the Faceprint.
    Type
    string
  • error code, see ErrorCode.
    Type
    object
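
An enrollment sketch, assuming a database connection and collection have already been set up as shown above and reusing the sdk instance from the constructor example (the file name and identity are illustrative, and the return field names are assumptions):

sdk.setImageFromFile('enrollment_photo.jpg');
const fv = sdk.getLargestFaceFeatureVector();
if (fv.foundFace) {
  const res = sdk.enrollFaceprint(fv.faceprint, 'employee_42');
  // res is assumed to include the generated UUID and an error code.
}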

estimateFaceImageQuality(alignedFaceImage) → {number|object}

Estimate the quality of the face image for recognition.
Parameters:
Name Type Description
alignedFaceImage array The array returned by extractAlignedFace().
Returns:
  • quality a value between 0 and 1, with 1 being perfect quality for recognition. We suggest using a threshold of 0.999 as a filter for enrollment images.
    Type
    number
  • error code, see ErrorCode.
    Type
    object
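
For example, gating an enrollment image on quality (reusing the sdk instance from the constructor example; default margins and scale are passed explicitly, and the return field names are assumptions):

const det = sdk.detectLargestFace();
// Default margins (0) and scale (1.0) yield the 112x112 chip expected for recognition.
const chip = sdk.extractAlignedFace(det.faceBoxAndLandmarks, 0, 0, 0, 0, 1.0);
const q = sdk.estimateFaceImageQuality(chip.faceImage);
// Only enroll images meeting the suggested 0.999 quality threshold.
const okForEnrollment = q.quality >= 0.999;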

estimateHeadOrientation(faceBoxAndLandmarks) → {object|object}

Estimate the head pose.
Parameters:
Name Type Description
faceBoxAndLandmarks object FaceBoxAndLandmarks returned by detectFaces() or detectLargestFace().
Returns:
  • yaw the rotation angle around the image's vertical axis, in radians; pitch the rotation angle around the image's transverse axis, in radians; roll the rotation angle around the image's longitudinal axis, in radians.
    Type
    object
  • error code, see ErrorCode.
    Type
    object
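
For example, converting the returned angles from radians to degrees (reusing the sdk instance from the constructor example; the return field names are assumptions):

const det = sdk.detectLargestFace();
const pose = sdk.estimateHeadOrientation(det.faceBoxAndLandmarks);
const toDegrees = (radians) => (radians * 180) / Math.PI;
console.log(toDegrees(pose.yaw), toDegrees(pose.pitch), toDegrees(pose.roll));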

extractAlignedFace(faceBoxAndLandmarks, marginLeft, marginTop, marginRight, marginBottom, scale) → {object}

Align the detected face so it is optimized for passing to feature extraction. If using the face chip with Trueface algorithms (e.g. face recognition), do not change the default margin and scale values.
Parameters:
Name Type Description
faceBoxAndLandmarks object the FaceBoxAndLandmarks returned by detectLargestFace() or detectFaces().
marginLeft number adds a margin to the left side of the face chip.
marginTop number adds a margin to the top side of the face chip.
marginRight number adds a margin to the right side of the face chip.
marginBottom number adds a margin to the bottom side of the face chip.
scale number changes the scale of the face chip.
Returns:
  • faceImage the pointer to a uint8_t buffer of 112x112x3 = 37632 bytes (when using default margins and scale). The aligned face image is stored in this buffer. The memory must be allocated by the user. If using non-default margin and scale (again, non-standard face chip sizes will not work with Trueface algorithms), the faceImage will be of size: width = int((112+marginLeft+marginRight)*scale), height = int((112+marginTop+marginBottom)*scale), and therefore the buffer size is computed as: width * height * 3
    Type
    object
  • error code, see ErrorCode.
    Type
    object

getFaceFeatureVector(alignedFaceImage, faceprint) → {object}

Extract the face feature vector from an aligned face image.
Parameters:
Name Type Description
alignedFaceImage object buffer returned by extractAlignedFace(). The face image must have a size of 112x112 pixels (default extractAlignedFace() margin and scale values).
faceprint object a Faceprint object which will contain the face feature vector.
Returns:
error code, see ErrorCode.
Type
object

getFaceLandmarks(faceBoxAndLandmarks) → {object|object}

Obtain the 106 face landmarks.
Parameters:
Name Type Description
faceBoxAndLandmarks object FaceBoxAndLandmarks returned by detectFaces() or detectLargestFace().
Returns:
  • landmarks an array of 106 face landmark points.
    Type
    object
  • error code, see ErrorCode.
    Type
    object

getImageProperties() → {object}

Get properties of the image set by setImage().
Returns:
imageProperties the image properties
Type
object

getLargestFaceFeatureVector() → {object|boolean|object}

Detect the largest face in the image and return its feature vector.
Returns:
  • faceprint a Faceprint object which will contain the face feature vector.
    Type
    object
  • foundFace indicates if a face was detected in the image. If no face was detected, then the faceprint will be empty.
    Type
    boolean
  • error code, see ErrorCode.
    Type
    object

getSimilarity(faceprint1, faceprint2) → {number|number}

Compute the similarity between two feature vectors, or how similar two faces are.
Parameters:
Name Type Description
faceprint1 object the first Faceprint to be compared.
faceprint2 object the second Faceprint to be compared.
Returns:
  • matchProbability the probability the two face feature vectors are a match.
    Type
    number
  • similarityMeasure the computed similarity measure.
    Type
    number
  • error code, see ErrorCode.
    Type
    object
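
A 1:1 comparison sketch, reusing the sdk instance from the constructor example (the file names and return field names are assumptions):

sdk.setImageFromFile('person_a.jpg');
const a = sdk.getLargestFaceFeatureVector();
sdk.setImageFromFile('person_b.jpg');
const b = sdk.getLargestFaceFeatureVector();

const cmp = sdk.getSimilarity(a.faceprint, b.faceprint);
// cmp is assumed to expose matchProbability and similarityMeasure as documented.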

getVersion() → {string}

Gets the version-build number of the SDK.
Returns:
Version Number.
Type
string

identifyTopCandidate(faceprint, thresholdopt) → {string|boolean|object}

Get the top match Candidate in the collection and the corresponding similarity score and match probability.
Parameters:
Name Type Attributes Description
faceprint object the Faceprint to be identified.
threshold number <optional>
the similarity score threshold above which it is considered a match. Higher thresholds may result in faster queries. Refer to our ROC curves when selecting a threshold.
Returns:
  • candidate the top match Candidate.
    Type
    string
  • found set to true if a match is found.
    Type
    boolean
  • error code, see ErrorCode.
    Type
    object
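
A 1:N identification sketch, assuming a populated collection has been loaded and reusing the sdk instance from the constructor example (the probe file name, the 0.3 threshold, and the return field names are illustrative assumptions):

sdk.setImageFromFile('probe.jpg');
const probe = sdk.getLargestFaceFeatureVector();
const match = sdk.identifyTopCandidate(probe.faceprint, 0.3);
if (match.found) {
  // match.candidate is the top match Candidate from the collection.
}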

isLicensed() → {boolean}

Checks whether the given license token is valid and you can use the SDK.
Returns:
Whether the given license token is valid.
Type
boolean

removeByIdentity(identity) → {number|object}

Remove all Faceprints in the collection corresponding to the identity.
Parameters:
Name Type Description
identity string the identity to remove from the collection.
Returns:
  • numFaceprintsRemoved the number of Faceprints which were removed for that identity.
    Type
    number
  • error code, see ErrorCode.
    Type
    object

removeByUUID(UUID) → {object}

Remove a Faceprint from the collection using the UUID.
Parameters:
Name Type Description
UUID string the universally unique identifier returned by enrollFaceprint().
Returns:
error code, see ErrorCode.
Type
object

setImage(image, width, height, color) → {object}

Set the image that is processed by the other methods.
Parameters:
Name Type Description
image Buffer an 8-bit decoded image array, in CPU or GPU memory.
width number the image width.
height number the image height.
color number the image color model, see ColorCode.
Returns:
error code, see ErrorCode. Note, it is highly encouraged to check the return value from setImage before proceeding. If the license is invalid, the INVALID_LICENSE error will be returned.
Type
object

setImageFromFile(path) → {object}

Set the image from an image file.
Parameters:
Name Type Description
path string the path to the image file.
Returns:
error code, see ErrorCode.
Type
object
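
For example, reusing the sdk instance from the constructor example (the path is a placeholder; always check the returned error code before calling detection or recognition methods):

const err = sdk.setImageFromFile('/path/to/image.jpg');
// err is the error code; compare it against ErrorCode (e.g. NO_ERROR) before proceeding.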

setLicense(token) → {boolean}

Sets and validates the given license token. This method must be called before the SDK can be used.
Parameters:
Name Type Description
token string the license token (if you do not have one, contact support@trueface.ai).
Returns:
Whether the given license token is valid.
Type
boolean
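
A typical startup sketch, reusing the sdk instance from the constructor example (supplying the token via an environment variable is just one option):

const valid = sdk.setLicense(process.env.TRUEFACE_TOKEN);
if (!valid || !sdk.isLicensed()) {
  throw new Error('Invalid or missing Trueface license token');
}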

Documentation generated by JSDoc 3.6.7 on Mon Oct 18 2021 18:45:00 GMT+0000 (Coordinated Universal Time)