Session

Interface for managing face recognition sessions. Provides functionality for face tracking, feature extraction, and analysis.

Methods

setTrackPreviewSize

Set the track preview size in the session.

setTrackPreviewSize(size: number): void

Parameters

  • size (number) – Size in pixels for the tracking preview. Defaults to 192.

Returns

  • void

setFaceDetectThreshold

Set the face detect threshold in the session.

setFaceDetectThreshold(threshold: number): void

Parameters

  • threshold (number) – Detection threshold value between 0 and 1.

Returns

  • void

setFilterMinimumFacePixelSize

Set the minimum face size, in pixels, that the detector will report. Faces smaller than this are filtered out.

setFilterMinimumFacePixelSize(size: number): void

Parameters

  • size (number) – Minimum size in pixels. Defaults to 0.

Returns

  • void

setTrackModeSmoothRatio

Set the track mode smooth ratio in the session.

setTrackModeSmoothRatio(ratio: number): void

Parameters

  • ratio (number) – Smoothing ratio between 0 and 1. Defaults to 0.05.

Returns

  • void

setTrackModeNumSmoothCacheFrame

Set the number of frames cached for track-mode smoothing.

setTrackModeNumSmoothCacheFrame(num: number): void

Parameters

  • num (number) – Number of frames to cache for smoothing. Defaults to 5.

Returns

  • void

setTrackModeDetectInterval

Set the track mode detect interval in the session.

setTrackModeDetectInterval(num: number): void

Parameters

  • num (number) – Interval between detections. Defaults to 20.

Returns

  • void
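
The setters above accept plain numbers, so it can be worth clamping values into their documented ranges before they reach the session. A minimal sketch; the `SessionSetters` interface below is a stand-in for the real `Session` type, covering only the two setters it uses:

```typescript
// Subset of the Session setters used in this sketch.
interface SessionSetters {
  setFaceDetectThreshold(threshold: number): void;
  setTrackModeSmoothRatio(ratio: number): void;
}

// Clamp a value into [0, 1] so out-of-range input never reaches the SDK.
function clamp01(value: number): number {
  return Math.min(1, Math.max(0, value));
}

function configureTracking(
  session: SessionSetters,
  threshold: number,
  smoothRatio: number
): void {
  session.setFaceDetectThreshold(clamp01(threshold));
  session.setTrackModeSmoothRatio(clamp01(smoothRatio));
}

// Recording stub standing in for a real Session, for demonstration only.
const applied: Record<string, number> = {};
const setterStub: SessionSetters = {
  setFaceDetectThreshold: (t) => { applied.threshold = t; },
  setTrackModeSmoothRatio: (r) => { applied.ratio = r; },
};
configureTracking(setterStub, 1.7, -0.2); // out-of-range inputs get clamped
```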

executeFaceTrack

Run face tracking in the session.

executeFaceTrack(imageStream: ImageStream): FaceData[]

Parameters

  • imageStream (ImageStream) – Input image stream to process.

Returns

  • FaceData[] – Array of detected face data objects.
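
In a typical per-frame loop, the returned array is inspected on every frame. A sketch under assumptions: the `FaceData` fields used here (`trackId`, `rect`) are illustrative, and the stub session returns canned results in place of real detections:

```typescript
// Assumed shape of a tracked face; consult the real FaceData type.
interface TrackedFace {
  trackId: number;
  rect: { x: number; y: number; w: number; h: number };
}

interface TrackingSession {
  executeFaceTrack(imageStream: unknown): TrackedFace[];
}

// Count how many distinct track IDs appear across a sequence of frames.
function countDistinctTracks(session: TrackingSession, frames: unknown[]): number {
  const ids = new Set<number>();
  for (const frame of frames) {
    for (const face of session.executeFaceTrack(frame)) {
      ids.add(face.trackId);
    }
  }
  return ids.size;
}

// Stub session: the same single face is "tracked" on every frame.
const trackStub: TrackingSession = {
  executeFaceTrack: () => [{ trackId: 1, rect: { x: 0, y: 0, w: 64, h: 64 } }],
};
const distinct = countDistinctTracks(trackStub, [null, null, null]);
```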

extractFaceFeature

Extract a face feature from a given face.

extractFaceFeature(
imageStream: ImageStream,
faceToken: ArrayBuffer
): ArrayBuffer

Parameters

  • imageStream (ImageStream) – Input image stream to process.
  • faceToken (ArrayBuffer) – Face token from a previous detection.

Returns

  • ArrayBuffer – Face feature vector representing the detected face.
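
Feature vectors are returned as raw ArrayBuffers, and two of them are commonly compared with cosine similarity. A sketch: interpreting the buffer as a Float32Array is an assumption about the buffer layout, not something this page specifies:

```typescript
// Cosine similarity between two feature buffers, assuming float32 contents.
function cosineSimilarity(a: ArrayBuffer, b: ArrayBuffer): number {
  const va = new Float32Array(a);
  const vb = new Float32Array(b);
  let dot = 0;
  let normA = 0;
  let normB = 0;
  for (let i = 0; i < va.length; i++) {
    dot += va[i] * vb[i];
    normA += va[i] * va[i];
    normB += vb[i] * vb[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

// Demonstration with synthetic vectors; real features would come from
// extractFaceFeature.
const featureA = Float32Array.from([0.1, 0.8, 0.3]).buffer;
const featureB = Float32Array.from([0.1, 0.8, 0.3]).buffer;
const similarity = cosineSimilarity(featureA, featureB);
```

Identical vectors score 1.0; a recognition threshold on this score is application-specific.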

getFaceAlignmentImage

Get the face alignment image.

getFaceAlignmentImage(
imageStream: ImageStream,
faceToken: ArrayBuffer
): ImageBitmap

Parameters

  • imageStream (ImageStream) – Input image stream to process.
  • faceToken (ArrayBuffer) – Face token from a previous detection.

Returns

  • ImageBitmap – Aligned face image from the detection.

multipleFacePipelineProcess

Process multiple faces in a pipeline.

multipleFacePipelineProcess(
imageStream: ImageStream,
multipleFaceData: FaceData[],
parameter: SessionCustomParameter
): boolean

Parameters

  • imageStream (ImageStream) – Input image stream to process.
  • multipleFaceData (FaceData[]) – Array of face data objects to process.
  • parameter (SessionCustomParameter) – Configuration for enabling or disabling pipeline features.

Returns

  • boolean – Returns true if the pipeline processing completed successfully; otherwise false.
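
A typical flow runs the pipeline and then reads the per-face result arrays. A sketch with a stubbed session; the `enableLiveness` flag name is hypothetical, so check the SessionCustomParameter reference for the real field names:

```typescript
// Subset of the Session used by this sketch.
interface PipelineSession {
  multipleFacePipelineProcess(img: unknown, faces: unknown[], param: object): boolean;
  getRGBLivenessConfidence(): number[];
}

// Run the pipeline, then count faces whose liveness score clears a threshold.
function liveFaceCount(
  session: PipelineSession,
  img: unknown,
  faces: unknown[],
  threshold = 0.5
): number {
  // The parameter flag name below is an assumption for illustration.
  if (!session.multipleFacePipelineProcess(img, faces, { enableLiveness: true })) {
    return 0; // pipeline failed; there are no scores to read
  }
  return session.getRGBLivenessConfidence().filter((s) => s >= threshold).length;
}

// Stub session returning two canned scores.
const pipelineStub: PipelineSession = {
  multipleFacePipelineProcess: () => true,
  getRGBLivenessConfidence: () => [0.92, 0.31],
};
const live = liveFaceCount(pipelineStub, null, [{}, {}]);
```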

getRGBLivenessConfidence

Get the RGB liveness confidence.

getRGBLivenessConfidence(): number[]

Returns

  • number[] – Confidence scores (0-1) per face.

getFaceQualityConfidence

Get the face quality predict confidence.

getFaceQualityConfidence(): number[]

Returns

  • number[] – Quality scores (0-1) per face.

getFaceMaskConfidence

Get the face mask confidence.

getFaceMaskConfidence(): number[]

Returns

  • number[] – Mask detection scores (0-1) per face.
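
The three confidence getters above each return one score per face. Assuming the arrays are index-aligned with the faces passed to the pipeline (worth verifying against the SDK), pairing them up is a one-liner:

```typescript
// Pair each face with its score by index; assumes index alignment between
// the face array and the per-face score array.
function zipScores<T>(faces: T[], scores: number[]): Array<{ face: T; score: number }> {
  return faces.map((face, i) => ({ face, score: scores[i] }));
}

// Demonstration with placeholder face labels and canned scores.
const paired = zipScores(["faceA", "faceB"], [0.9, 0.4]);
```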

getFaceInteractionState

Get the prediction results of face interaction.

getFaceInteractionState(): FaceInteractionState[]

Returns

  • FaceInteractionState[] – Interaction state per tracked face.

getFaceInteractionActionsResult

Get the prediction results of face interaction actions.

getFaceInteractionActionsResult(): FaceInteractionsAction[]

Returns

  • FaceInteractionsAction[] – Detected interaction actions per face.

getFaceAttributeResult

Get the face attribute results.

getFaceAttributeResult(): FaceAttributeResult[]

Returns

  • FaceAttributeResult[] – Attribute results per face.

getFaceEmotionResult

Get the face emotion recognition results.

getFaceEmotionResult(): FaceEmotionResult[]

Returns

  • FaceEmotionResult[] – Emotion recognition results per face.

clearTrackingFace

Clear all currently tracked faces. Useful for resetting tracking state.

clearTrackingFace(): void

Returns

  • void

setTrackLostRecoveryMode

Set the track lost recovery mode (only for LightTrack mode).

setTrackLostRecoveryMode(enable: boolean): void

Parameters

  • enable (boolean) – Whether to enable track lost recovery. Defaults to false.

Returns

  • void

setLightTrackConfidenceThreshold

Set the light track confidence threshold (only for LightTrack mode).

setLightTrackConfidenceThreshold(value: number): void

Parameters

  • value (number) – Confidence threshold value. Defaults to 0.1.

Returns

  • void

reconfigure

Reconfigure the session with new parameters. Internally destroys and recreates the underlying session handle. The JS object reference remains stable.

reconfigure(
parameter: SessionCustomParameter,
detectMode: DetectMode,
maxDetectFaceNum: number,
detectPixelLevel: number,
trackByDetectModeFPS: number
): void

Parameters

  • parameter (SessionCustomParameter) – Custom parameters for the session.
  • detectMode (DetectMode) – Face detection mode.
  • maxDetectFaceNum (number) – Maximum number of faces to detect.
  • detectPixelLevel (number) – Detection resolution level (-1 for the default, 320).
  • trackByDetectModeFPS (number) – Frame rate for tracking mode (-1 for the default, 30).

Returns

  • void

getTrackPreviewSize

Get the current track preview size.

getTrackPreviewSize(): number

Returns

  • number – Current preview size in pixels.

faceQualityDetect

Detect the quality of a single face without running the full pipeline. Useful for quick quality checks without enabling all pipeline features.

faceQualityDetect(faceToken: ArrayBuffer): number

Parameters

  • faceToken (ArrayBuffer) – Face token data.

Returns

  • number – Quality confidence score (0-1).
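
A common use is gating expensive work, such as feature extraction, on a quick quality check. A sketch with a stubbed session; the 0.7 threshold is an arbitrary example value, not an SDK recommendation:

```typescript
// Subset of the Session used by this sketch.
interface QualitySession {
  faceQualityDetect(faceToken: ArrayBuffer): number;
}

// Return true only when the face clears the quality bar.
function isGoodEnough(
  session: QualitySession,
  token: ArrayBuffer,
  minQuality = 0.7
): boolean {
  return session.faceQualityDetect(token) >= minQuality;
}

// Stub session reporting a fixed quality score.
const qualityStub: QualitySession = { faceQualityDetect: () => 0.85 };
const ok = isGoodEnough(qualityStub, new ArrayBuffer(8));
```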

extractFaceFeatureFromAlignmentImage

Extract face features from an already-aligned face image. Use after calling getFaceAlignmentImage() to avoid redundant re-alignment.

extractFaceFeatureFromAlignmentImage(imageStream: ImageStream): ArrayBuffer

Parameters

  • imageStream (ImageStream) – Image stream of the aligned face.

Returns

  • ArrayBuffer – Extracted face feature vector.
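
The align-once, extract-once flow described above can be sketched as follows. Both session methods are stubbed here; in real use, the ImageStream for the second call would be built from the ImageBitmap returned by getFaceAlignmentImage, and that conversion is outside the scope of this page:

```typescript
// Subset of the Session used by this sketch. The aligned image is typed as
// `unknown` because converting ImageBitmap to ImageStream is SDK-specific.
interface AlignExtractSession {
  getFaceAlignmentImage(img: unknown, token: ArrayBuffer): unknown;
  extractFaceFeatureFromAlignmentImage(alignedStream: unknown): ArrayBuffer;
}

// Align the face once, then extract the feature from the aligned image,
// avoiding the redundant re-alignment extractFaceFeature would perform.
function extractViaAlignment(
  session: AlignExtractSession,
  img: unknown,
  token: ArrayBuffer
): ArrayBuffer {
  const aligned = session.getFaceAlignmentImage(img, token);
  return session.extractFaceFeatureFromAlignmentImage(aligned);
}

// Stub session returning a canned two-float feature (8 bytes).
const alignStub: AlignExtractSession = {
  getFaceAlignmentImage: () => ({}),
  extractFaceFeatureFromAlignmentImage: () => new Float32Array([1, 2]).buffer,
};
const feature = extractViaAlignment(alignStub, null, new ArrayBuffer(4));
```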