Global

Methods

checkSpoofImageFaceSize(image, faceBoxAndLandmarks, imageProperties, activeSpoofStage) → {object}

Ensures that the face size meets the requirements for active spoof. The return value of this function must be checked! Active spoof works by analyzing the way a person's face changes as it moves closer to a camera. The active spoof solution therefore expects the face to be a certain distance from the camera. In the far image, the face should be about 18 inches from the camera, while in the near image, the face should be 7-8 inches from the camera. This function must be called before calling detectActiveSpoof().

Parameters:
Name Type Description
image buffer

the pre-processed image buffer.

faceBoxAndLandmarks object

The face on which to run active spoof detection.

imageProperties object

The properties of the image, obtained from getImageProperties().

activeSpoofStage object

The stage of the image, either near stage or far stage.

Source:
Returns:

error code, see ErrorCode. If ErrorCode::NO_ERROR is returned, then the image is eligible for active spoof detection. If ErrorCode::FACE_TOO_CLOSE or ErrorCode::FACE_TOO_FAR is returned, the image is not eligible for active spoof detection.

Type
object
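
The distance checks above can be wired together as follows. This is a hypothetical sketch: `sdk` and `ErrorCode` stand in for the real bindings, and while the method names follow this page, the exact argument and return shapes are assumptions.

```javascript
// Verify that both the near and far captures are eligible before calling
// detectActiveSpoof(). Always inspect the return code: FACE_TOO_CLOSE or
// FACE_TOO_FAR means the capture must be retaken at the correct distance.
function checkActiveSpoofEligibility(sdk, ErrorCode, nearImage, farImage, nearStage, farStage) {
  const results = {};
  for (const [label, image, stage] of [
    ['near', nearImage, nearStage],
    ['far', farImage, farStage],
  ]) {
    const { faceBoxAndLandmarks } = sdk.detectLargestFace(image);
    const properties = sdk.getImageProperties(image);
    const code = sdk.checkSpoofImageFaceSize(image, faceBoxAndLandmarks, properties, stage);
    results[label] = code === ErrorCode.NO_ERROR;
  }
  return results;
}
```

Only when both stages report `NO_ERROR` should the two landmark sets be passed on to detectActiveSpoof().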

createDatabaseConnection(databaseConnectionString) → {object}

Create a connection to a new or existing database. If the database does not exist, a new one will be created with the provided name. If the Trueface::DatabaseManagementSystem::NONE memory-only configuration option is selected, this function does not need to be called (and is a harmless no-op).

If Trueface::DatabaseManagementSystem::SQLITE is selected, this should be the filepath to the database, e.g. "/myPath/myDatabase.db". If Trueface::DatabaseManagementSystem::POSTGRESQL is selected, this should be a database connection string. See the list of all supported PostgreSQL connection parameters.

Parameters:
Name Type Description
databaseConnectionString string
Source:
Returns:

error code, see ErrorCode.

Type
object
Examples
"hostaddr=192.168.1.0 port=5432 dbname=face_recognition user=postgres password=my_password"
"host=localhost port=5432 dbname=face_recognition user=postgres password=my_password"
To enable ssl, add "sslmode=require" to the connection string.
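
A connection string like the examples above can be assembled from an options object. `buildPgConnString` is an illustrative helper, not part of the SDK; the parameter names mirror the examples.

```javascript
// Build a space-separated "key=value" PostgreSQL connection string.
function buildPgConnString(opts) {
  return Object.entries(opts)
    .map(([key, value]) => `${key}=${value}`)
    .join(' ');
}

const connString = buildPgConnString({
  hostaddr: '192.168.1.0',
  port: 5432,
  dbname: 'face_recognition',
  user: 'postgres',
  password: 'my_password',
  sslmode: 'require', // add this entry to enable SSL
});
```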

createLoadCollection(collectionName) → {object}

Create a new collection, or load data from an existing collection into memory (RAM) if one with the provided name already exists in the database. Equivalent to calling createCollection() then loadCollection().

Parameters:
Name Type Description
collectionName string

the name of the collection.

Source:
Returns:

error code, see ErrorCode.

Type
object

destroyImage(image)

Destroy an image that is no longer needed by the SDK, in order to conserve memory.

Parameters:
Name Type Description
image buffer

the image pointer

Source:

detectActiveSpoof(nearFaceLandmarks, farFaceLandmarks) → {number|boolean|object}

Detect if there is a presentation attack attempt. Must call checkSpoofImageFaceSize() on both input faces before calling this function.

Parameters:
Name Type Description
nearFaceLandmarks object

The face landmarks of the near face, obtained by calling getFaceLandmarks().

farFaceLandmarks object

The face landmarks of the far face, obtained by calling getFaceLandmarks().

Source:
Returns:
  • spoofScore The output spoof score. If the spoof score is above the threshold, then it is classified as a real face. If the spoof score is below the threshold, then it is classified as a fake face.

    Type
    number
  • spoofPrediction The predicted spoof result, using a spoofScore threshold of 1.05.

    Type
    boolean
  • error code, see ErrorCode.

    Type
    object

detectFaces(image) → {object|object}

Detect all the faces in the image. This method has a small false positive rate. To reduce the false positive rate to near zero, filter out faces with a score lower than 0.90. Alternatively, you can use the Trueface::FaceDetectionFilter configuration option to filter the detected faces.

The face detector has a detection scale range of about 5 octaves. ConfigurationOptions.smallestFaceHeight determines the lower end of the detection scale range. For example, setting ConfigurationOptions.smallestFaceHeight to 40 pixels yields a detection scale range of ~40 pixels to 1280 (= 40 x 2^5) pixels.

Parameters:
Name Type Description
image buffer

the pre-processed image buffer.

Source:
Returns:
  • faceBoxAndLandmarks a vector of FaceBoxAndLandmarks representing each of the detected faces. If no faces are found, the vector will be empty. The detected faces are sorted in order of descending face score.

    Type
    object
  • error code, see ErrorCode.

    Type
    object
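
The score-based filtering suggested above can be sketched as a one-liner. The `score` field on each detection is an assumption about the shape of the returned face objects.

```javascript
// Drop low-confidence detections to push the false positive rate toward zero.
function filterFacesByScore(faces, minScore = 0.9) {
  return faces.filter((face) => face.score >= minScore);
}
```

Because the detections are sorted by descending score, the filtered array preserves that ordering.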

detectGlasses(image, faceBoxAndLandmarks, result, glassesScore) → {object}

Detect whether the face in the image is wearing any type of eyeglasses.

Parameters:
Name Type Description
image buffer

the pre-processed image buffer.

faceBoxAndLandmarks object

FaceBoxAndLandmarks returned by detectFaces() or detectLargestFace().

result boolean

The predicted GlassesLabel for the face image.

glassesScore number

The glasses score for this image. This can be used for setting custom thresholds that work better for the use case. By default, we use a glasses score greater than 0.0 to determine that glasses were detected.

Source:
Returns:

error code, see ErrorCode.

Type
object

detectLargestFace(image) → {object|boolean|object}

Detect the largest face in the image. This method has a small false positive rate. To reduce the false positive rate to near zero, filter out faces with score lower than 0.90. Alternatively, you can use the Trueface::FaceDetectionFilter configuration option to filter the detected faces. See detectFaces() for the detection scale range.

Parameters:
Name Type Description
image buffer

the pre-processed image buffer.

Source:
Returns:
  • faceBoxAndLandmarks the FaceBoxAndLandmarks containing the landmarks and bounding box of the largest detected face in the image.

    Type
    object
  • found whether a face was found in the image.

    Type
    boolean
  • error code, see ErrorCode.

    Type
    object

detectMask(image, faceBoxAndLandmarks) → {boolean|object}

Detect whether the face in the image is wearing a mask.

Parameters:
Name Type Description
image buffer

the pre-processed image buffer.

faceBoxAndLandmarks object

FaceBoxAndLandmarks returned by detectFaces() or detectLargestFace().

Source:
Returns:
  • result The predicted MaskLabel for the face image.

    Type
    boolean
  • error code, see ErrorCode.

    Type
    object

detectObjects(image) → {object|object}

Detect and identify all the objects in the image.

Parameters:
Name Type Description
image buffer

the pre-processed image buffer.

Source:
Returns:
  • errorCode, see ErrorCode.

    Type
    object
  • an array of the detected objects, each with its description and location in the image.

    Type
    object

enrollFaceprint(faceprint, identity) → {string|object}

Enroll a Faceprint for a new or existing identity in the collection.

Parameters:
Name Type Description
faceprint object

the Faceprint to enroll in the collection.

identity string

the identity corresponding to the Faceprint.

Source:
Returns:
  • UUID universally unique identifier corresponding to the Faceprint.

    Type
    string
  • error code, see ErrorCode.

    Type
    object

estimateHeadOrientation(image, faceBoxAndLandmarks) → {object|object}

Estimate the head pose.

Parameters:
Name Type Description
image buffer

the pre-processed image buffer.

faceBoxAndLandmarks object

FaceBoxAndLandmarks returned by detectFaces() or detectLargestFace().

Source:
Returns:
  • yaw the rotation angle around the image's vertical axis, in radians.
    pitch the rotation angle around the image's transverse axis, in radians.
    roll the rotation angle around the image's longitudinal axis, in radians.

    Type
    object
  • error code, see ErrorCode.

    Type
    object
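
Since the angles are returned in radians, a small conversion helper is often useful for display or threshold checks. This is an illustrative sketch; the `{ yaw, pitch, roll }` result shape is an assumption based on the description above.

```javascript
// Convert the returned yaw/pitch/roll angles from radians to degrees.
function headOrientationToDegrees({ yaw, pitch, roll }) {
  const toDegrees = (radians) => (radians * 180) / Math.PI;
  return { yaw: toDegrees(yaw), pitch: toDegrees(pitch), roll: toDegrees(roll) };
}
```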

extractAlignedFace(image, faceBoxAndLandmarks, marginLeft, marginTop, marginRight, marginBottom, scale) → {object}

Align the detected face to be optimized for passing to feature extraction. If using the face chip with Trueface algorithms (e.g. face recognition), do not change the default margin and scale values.

Parameters:
Name Type Description
image buffer

the pre-processed image buffer.

faceBoxAndLandmarks object

the FaceBoxAndLandmarks returned by detectLargestFace() or detectFaces().

marginLeft number

adds a margin to the left side of the face chip.

marginTop number

adds a margin to the top side of the face chip.

marginRight number

adds a margin to the right side of the face chip.

marginBottom number

adds a margin to the bottom side of the face chip.

scale number

changes the scale of the face chip.

Source:
Returns:
  • faceImage the pointer to a uint8_t buffer of 112x112x3 = 37632 bytes (when using default margins and scale). The aligned face image is stored in this buffer. The memory must be allocated by the user. If using non-default margin and scale (again, non-standard face chip sizes will not work with Trueface algorithms), the faceImage will be of size: width = int((112+marginLeft+marginRight)*scale), height = int((112+marginTop+marginBottom)*scale), and therefore the buffer size is computed as: width * height * 3

    Type
    object
  • error code, see ErrorCode.

    Type
    object
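
The buffer-size formula above can be sketched directly. With the default margins (0) and scale (1.0) the aligned face chip is 112 x 112 x 3 = 37632 bytes; `alignedFaceBufferSize` is an illustrative helper, not part of the SDK.

```javascript
// Compute the byte size of the buffer the caller must allocate for the
// aligned face chip: width * height * 3 (RGB), per the formula above.
function alignedFaceBufferSize({ marginLeft = 0, marginTop = 0, marginRight = 0, marginBottom = 0, scale = 1.0 } = {}) {
  const width = Math.trunc((112 + marginLeft + marginRight) * scale);
  const height = Math.trunc((112 + marginTop + marginBottom) * scale);
  return width * height * 3; // 3 bytes per pixel (RGB)
}
```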

getError(error) → {object}

Return the text description of the error code

Parameters:
Name Type Description
error number

the error code, as a number.

Source:
Returns:

the text description of the error code, see ErrorCode.

Type
object

getFaceFeatureVector(image, alignedFaceImage, faceprint) → {object}

Extract the face feature vector from an aligned face image.

Parameters:
Name Type Description
image buffer

the pre-processed image buffer.

alignedFaceImage object

buffer returned by extractAlignedFace(). The face image must have a size of 112x112 pixels (default extractAlignedFace() margin and scale values).

faceprint object

a Faceprint object which will contain the face feature vector.

Source:
Returns:

error code, see ErrorCode.

Type
object

getFaceLandmarks(image, faceBoxAndLandmarks) → {object|object}

Obtain the 106 face landmarks.

Parameters:
Name Type Description
image buffer

the pre-processed image buffer.

faceBoxAndLandmarks object

FaceBoxAndLandmarks returned by detectFaces() or detectLargestFace().

Source:
Returns:
  • landmarks an array of 106 face landmark points.

    Type
    object
  • error code, see ErrorCode.

    Type
    object

getLargestFaceFeatureVector(image) → {object|boolean|object}

Detect the largest face in the image and return its feature vector.

Parameters:
Name Type Description
image buffer

the pre-processed image buffer.

Source:
Returns:
  • faceprint a Faceprint object which will contain the face feature vector.

    Type
    object
  • foundFace indicates if a face was detected in the image. If no face was detected, then the faceprint will be empty.

    Type
    boolean
  • error code, see ErrorCode.

    Type
    object

getSimilarity(faceprint1, faceprint2) → {number|number}

Compute the similarity between two feature vectors, or how similar two faces are.

Parameters:
Name Type Description
faceprint1 object

the first Faceprint to be compared.

faceprint2 object

the second Faceprint to be compared.

Source:
Returns:
  • matchProbability the probability the two face feature vectors are a match.

    Type
    number
  • similarityMeasure the computed similarity measure.

    Type
    number
  • error code, see ErrorCode.

    Type
    object
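
A match decision is then a simple threshold on the output. This is an illustrative helper; the 0.5 match probability cut-off is an arbitrary example, not an SDK default, and should be tuned against your own ROC data.

```javascript
// Decide whether two faces match using the matchProbability returned by
// getSimilarity().
function isMatch({ matchProbability }, minMatchProbability = 0.5) {
  return matchProbability >= minMatchProbability;
}
```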

getVersion() → {string}

Gets the version-build number of the SDK.

Source:
Returns:

Version Number.

Type
string

identifyTopCandidate(faceprint, thresholdopt) → {string|boolean|object}

Get the top match Candidate in the collection and the corresponding similarity score and match probability.

Parameters:
Name Type Attributes Description
faceprint object

the Faceprint to be identified.

threshold number <optional>

the similarity score threshold above which a candidate is considered a match. Higher thresholds may result in faster queries. Refer to our ROC curves when selecting a threshold.

Source:
Returns:
  • candidate the top match Candidate.

    Type
    string
  • found set to true if a match is found.

    Type
    boolean
  • error code, see ErrorCode.

    Type
    object
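
Enrollment and identification typically go together. This hypothetical sketch ties enrollFaceprint() and identifyTopCandidate() into one flow; `sdk` stands in for the real bindings, and while the method names follow this page, the exact return shapes are assumptions.

```javascript
// Enroll a faceprint under an identity, then look up the top candidate.
function enrollAndIdentify(sdk, faceprint, identity, threshold) {
  const { UUID } = sdk.enrollFaceprint(faceprint, identity);
  const { candidate, found } = sdk.identifyTopCandidate(faceprint, threshold);
  return { UUID, candidate, found };
}
```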

isLicensed() → {boolean}

Checks whether the license token set via setLicense() is valid and the SDK can be used.

Source:
Returns:

Whether the given license token is valid.

Type
boolean

preprocessImageFromFile(path) → {object|object}

Pre-process an image for use with the SDK.

Parameters:
Name Type Description
path string

the image file path

Source:
Returns:
  • errorCode the error code from pre-processing the image; see ErrorCode.

    Type
    object
  • the pre-processed image buffer.

    Type
    object
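
The methods above compose into a simple end-to-end pipeline: license the SDK, pre-process an image from disk, extract the largest face's feature vector, and free the image. This is a hypothetical sketch; `sdk` stands in for the real bindings, and the return shapes follow this page but are assumptions.

```javascript
// License the SDK, load and pre-process an image, and return the faceprint
// of the largest detected face (or null if no face was found).
function extractFaceprintFromFile(sdk, licenseToken, path) {
  if (!sdk.setLicense(licenseToken)) {
    throw new Error('Invalid license token');
  }
  const { image } = sdk.preprocessImageFromFile(path);
  const { faceprint, foundFace } = sdk.getLargestFaceFeatureVector(image);
  sdk.destroyImage(image); // release the image once it is no longer needed
  return foundFace ? faceprint : null;
}
```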

removeByIdentity(identity) → {number|object}

Remove all Faceprints in the collection corresponding to the identity.

Parameters:
Name Type Description
identity string

the identity to remove from the collection.

Source:
Returns:
  • numFaceprintsRemoved the number of Faceprints removed for that identity.

    Type
    number
  • error code, see ErrorCode.

    Type
    object

removeByUUID(UUID) → {object}

Remove a Faceprint from the collection using the UUID.

Parameters:
Name Type Description
UUID string

the universally unique identifier returned by enrollFaceprint().

Source:
Returns:

error code, see ErrorCode.

Type
object

rotateImage(image, rotAngleDegrees)

Rotate the pre-processed image by the specified angle, in degrees.

Parameters:
Name Type Description
image buffer

the pre-processed image buffer.

rotAngleDegrees number

the angle, in degrees, by which to rotate the image.

Source:

saveImage(image, path)

Save the image buffer to disk.

Parameters:
Name Type Description
image buffer

the pre-processed image buffer.

path string

the destination file path on disk

Source:

setLicense(token) → {boolean}

Sets and validates the given license token. This method must be called before the SDK can be used.

Parameters:
Name Type Description
token string

the license token (if you do not have this talk to support@trueface.ai).

Source:
Returns:

Whether the given license token is valid.

Type
boolean

Documentation generated by JSDoc 3.6.10 on Fri Jun 10 2022 21:02:25 GMT+0000 (Coordinated Universal Time)