1 to N Identification

To avoid loading the same collection into memory multiple times (which becomes an issue as collection sizes grow very large), instances of the SDK created within the same process share the same in-memory (RAM) collection. This means that when you enroll a template into the collection using one instance of the SDK, it will be available in all other instances of the SDK in the same process. For this same reason, applications that have multiple instances of the SDK in a single process only need to call create_database_connection and create_load_collection on a single instance; all other instances will automatically be connected to the same database and collection, as shown in the sketch below.
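A minimal sketch of this shared-collection behavior. It assumes two SDK instances, sdk_a and sdk_b, have already been constructed in the same process with identical configuration options (construction details omitted), and that the faceprint has been generated elsewhere by the SDK's face recognition functions.

# Only one instance per process needs to connect and load the collection;
# sdk_b automatically shares the same database connection and in-memory collection.
sdk_a.create_database_connection("/myPath/myDatabase.db")
sdk_a.create_load_collection("my_collection")

# A template enrolled through sdk_a is immediately visible to sdk_b,
# since both instances share the same collection in RAM.
res, uuid = sdk_a.enroll_template(faceprint, "person_1")        # faceprint generated elsewhere
res, found, candidate = sdk_b.identify_top_candidate(faceprint)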

The PostgreSQL backend option also has built-in synchronization across multiple processes. Take an example where you have two processes on different machines, A and B, connected to the same PostgreSQL backend. Each of these processes will initially connect to the same database and collection and therefore load all the templates from the database into memory (RAM). If process A then enrolls a template into the collection, this will both add the template to the in-memory (RAM) collection of process A and update the PostgreSQL database. In doing so, it will also automatically push a notification to all subscribed processes connected to the same database and collection. Any process connected to the same database and collection is automatically subscribed to updates; no additional action is required from the developer. Process B will therefore receive a notification that an update was made and will automatically enroll the same template into its in-memory (RAM) collection. Processes A and B therefore have synchronized collections. Note that it can take up to 30 seconds for subscribed processes to receive the notification.

This sort of multi-process synchronization is not supported by the SQLite backend. With the SQLite backend, if process A makes a change to the database, process B will not know of the change. Process B must re-call create_load_collection in order to register the changes that were made to the database by process A, as shown in the sketch below. Note that doing so does not perform an incremental update; it discards and then re-loads all the data into memory, which can be slow if the collection is large. This is why the SQLite backend option is advised only for use cases in which a single process connects to the database. If multiple processes need to connect to a database (and require synchronization), it is advised to use the PostgreSQL backend.
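A minimal sketch of the workaround in process B, assuming an sdk instance already configured for the SQLITE backend and connected to the same database file as process A:

# Process B: reload the collection to pick up changes made by process A.
# This discards the in-memory copy and re-loads everything from the database,
# so avoid calling it frequently when the collection is large.
res = sdk.create_load_collection("my_collection")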

With the fr_vector_compression flag enabled, each face template and its corresponding metadata is roughly 750 bytes in size as a conservative average, though this ultimately depends on the length of the identity string you choose. You can therefore estimate how much RAM is required for various collection sizes: a collection of 1 million templates will require about 750 bytes * 1,000,000 templates = 750 MB of RAM, a collection of 10 million templates about 7.5 GB of RAM, and so on (see the calculation sketch below). For most use cases, even embedded devices have enough RAM to search through collections of medium to large sizes (ex. an RPI 4 can handle a few million templates).

However, when running 1 to N identification on massive collections (tens or hundreds of millions of templates) on a lightweight embedded device, you may find the device does not have sufficient RAM to store the entire collection in memory. In these situations, you will want to run the actual 1 to N search on a beefy server with sufficient RAM. Process the video streams on the embedded devices at the edge to generate feature vectors for the detected faces, then send these feature vectors to the server (or cluster of servers) to run the actual 1 to N identification functions (ex. identify_top_candidate). The server should also handle enrolling and deleting templates from the collection as required (these functions can also be exposed to the edge devices as REST API endpoints). Hence, the edge devices only generate feature vectors, while only the servers are connected to the database and perform the searches. To simplify things (and avoid having to write your own REST API server), you can have your edge devices send the feature vectors to an instance of the Trueface Visionbox running on your server to perform the matching.
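A quick back-of-the-envelope calculation of these RAM requirements, using the ~750 bytes per template figure above:

BYTES_PER_TEMPLATE = 750  # conservative average with fr_vector_compression enabled

for num_templates in (1_000_000, 10_000_000, 100_000_000):
    ram_gb = num_templates * BYTES_PER_TEMPLATE / 1e9
    print(f"{num_templates:>12,} templates -> ~{ram_gb:g} GB of RAM")
# Prints roughly 0.75 GB, 7.5 GB, and 75 GB respectively.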

SDK.create_database_connection(self: tfsdk.SDK, database_connection_string: str) → Trueface::ErrorCode

Create a connection to a new or existing database. If the database does not exist, a new one will be created with the provided name. If the NONE DatabaseManagementSystem (memory only) configuration option is selected, this function does not need to be called (and is a harmless no-op). If the SQLITE DatabaseManagementSystem is selected, the connection string should be the filepath to the database, ex. “/myPath/myDatabase.db”. If the POSTGRESQL DatabaseManagementSystem is selected, this should be a database connection string; for a list of PostgreSQL connection parameters, visit https://www.postgresql.org/docs/current/libpq-connect.html#LIBPQ-PARAMKEYWORDS. ex. “hostaddr=192.168.1.0 port=5432 dbname=face_recognition user=postgres password=my_password” or “host=localhost port=5432 dbname=face_recognition user=postgres password=my_password”.
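Example calls for the two persistent backends (use whichever matches the DatabaseManagementSystem selected in your configuration options; the sdk instance and credentials are illustrative):

# SQLITE backend: pass the path to the database file.
res = sdk.create_database_connection("/myPath/myDatabase.db")

# POSTGRESQL backend: pass a libpq-style connection string.
res = sdk.create_database_connection(
    "host=localhost port=5432 dbname=face_recognition "
    "user=postgres password=my_password")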

SDK.create_load_collection(self: tfsdk.SDK, collection_name: str) → Trueface::ErrorCode

Create a new collection, or load data from an existing collection into memory (RAM) if one with the provided name already exists in the database.
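For example, with an sdk instance already connected to a database:

# Creates the collection if it does not exist; otherwise loads all of its
# templates from the database into memory (RAM).
res = sdk.create_load_collection("my_collection")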

SDK.enroll_template(self: tfsdk.SDK, faceprint: Trueface::Faceprint, identity: str) → Tuple[Trueface::ErrorCode, str]

Enroll a template for a new or existing identity in the collection. Returns the UUID for the template.
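For example, assuming a faceprint generated elsewhere by the SDK's face recognition functions:

# Enroll the template under the identity "alice"; the returned UUID can be
# stored and later passed to remove_by_UUID.
res, uuid = sdk.enroll_template(faceprint, "alice")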

SDK.remove_by_UUID(self: tfsdk.SDK, UUID: str) → Trueface::ErrorCode

Remove a template from the collection using the UUID.
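For example, using the UUID returned by enroll_template above:

# Removes the single template that was assigned this UUID at enrollment time.
res = sdk.remove_by_UUID(uuid)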

SDK.remove_by_identity(self: tfsdk.SDK, identity: str) → Trueface::ErrorCode

Remove all templates in the collection corresponding to the identity.
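For example:

# Removes every template enrolled under the identity "alice".
res = sdk.remove_by_identity("alice")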

SDK.identify_top_candidate(self: tfsdk.SDK, faceprint: Trueface::Faceprint, threshold: float = 0.3) → Tuple[Trueface::ErrorCode, bool, Trueface::Candidate]

Get the top candidate identity in the collection and the corresponding similarity score and match probability. Returns true if a match is found.
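For example, with a probe faceprint generated elsewhere:

res, found, candidate = sdk.identify_top_candidate(faceprint, threshold=0.3)
if found:
    print("Match:", candidate.identity,
          "probability:", candidate.match_probability,
          "similarity:", candidate.similarity_measure)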

SDK.batch_identify_top_candidate(self: tfsdk.SDK, faceprints: List[Trueface::Faceprint], threshold: float = 0.3) → Tuple[Trueface::ErrorCode, List[bool], List[Trueface::Candidate]]

Get the top candidate identity in the collection and the corresponding similarity score and match probability for each probe faceprint. Like identify_top_candidate, but runs the search queries in parallel, improving throughput.
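For example, with a list of probe faceprints generated elsewhere:

res, found_list, candidates = sdk.batch_identify_top_candidate(faceprints, threshold=0.3)
for found, candidate in zip(found_list, candidates):
    if found:
        print(candidate.identity, candidate.match_probability)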

SDK.identify_top_candidates(self: tfsdk.SDK, faceprint: Trueface::Faceprint, num_candidates: int, threshold: float = 0.3) → Tuple[Trueface::ErrorCode, bool, List[Trueface::Candidate]]

Get a list of the top n candidate identities in the collection and their corresponding similarity scores and match probabilities. Returns true if at least one match is found.
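For example, requesting the top 5 candidates:

res, found, candidates = sdk.identify_top_candidates(faceprint, 5, threshold=0.3)
if found:
    for candidate in candidates:
        print(candidate.identity, candidate.similarity_measure)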

class tfsdk.Candidate
property UUID

The UUID of the match.

property identity

The identity of the match.

property match_probability

The probability that the two face feature vectors are a match.

property similarity_measure

The computed similarity measure.