The packet contains JSON data with the following fields:
{
    "average_temperature_measured": 36.98200225830078,
    "median_temperature_measured": 36.98200225830078,
    "camera_id": "192.168.0.22",
    "eyeduct_bounding_box": {
        "bottom_right": {
            "x": 941,
            "y": 762
        },
        "top_left": {
            "x": 864,
            "y": 686
        }
    },
    "eyeduct_visible": true,
    "face_bounding_box": {
        "bottom_right": {
            "x": 1060,
            "y": 948
        },
        "top_left": {
            "x": 785,
            "y": 592
        }
    },
    "face_chip_aligned": "...",
    "snapshot": "...",
    "face_detected": true,
    "temperature_measured": 36.96999740600586,
    "temp_unit": "F",
    "mask_label": "mask",
    "mask_score": "7.145",
    "eyeglasses_label": "glasses",
    "eyeglasses_score": "1.771",
    "timestamp": "21-07-2020 14:34:19.728",
    "liveness_check_passed": true
}
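As a minimal sketch, a client could parse a received message in Python and strip the two large base64 image fields before logging it (the raw_message argument is assumed to be the text of one websocket message):

import json

def summarize_packet(raw_message: str) -> dict:
    # Parse one websocket message and drop the two large base64 image
    # fields so the remaining values can be logged or inspected easily.
    packet = json.loads(raw_message)
    return {k: v for k, v in packet.items()
            if k not in ("face_chip_aligned", "snapshot")}
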
face_detected
Type: bool
Indicates if a face is detected in the most recent frame. If this is set to true, then the face_bounding_box coordinates will be populated.
eyeduct_visible
Type: bool
Indicates if the eye duct is visible to the camera in the most recent frame. This is determined using the yaw and pitch of the face. If the face is oriented at an extreme angle (e.g. a side profile), the eye duct may not be visible, and the temperature will therefore not be reported. Additionally, if the detected face is outside the thermal frame but still in the visible frame (refer to the image below), this will also be set to false (face detection is performed on the 16:9 aspect ratio stream). In that case, the user should be prompted to face the camera directly and move to the center of the frame. If this is set to true, then the temperature_measured, average_temperature_measured, and eyeduct_bounding_box fields will also be populated.
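As an illustrative sketch, a client could combine face_detected and eyeduct_visible to decide whether a reading is available or the user should be prompted (the prompt strings are only examples):

def evaluate_packet(packet: dict) -> str:
    # No face in the frame yet: nothing to report.
    if not packet.get("face_detected"):
        return "Waiting for a face..."
    # Face found, but the eye duct is not visible to the thermal sensor,
    # e.g. extreme yaw/pitch or the face is outside the thermal frame.
    if not packet.get("eyeduct_visible"):
        return "Please face the camera directly and move to the center of the frame."
    # Eye duct visible: the temperature fields are populated.
    return f'{packet["average_temperature_measured"]:.2f} {packet["temp_unit"]}'
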
temperature_measured
Type: float
The measured temperature of the eye duct region in the most recent frame, reported in the chosen unit.
average_temperature_measured
Type: float
The average measured temperature of the eye duct region over the last n frames, reported in the chosen unit. If a face is not detected in a frame, this average will be reset.
median_temperature_measured
Type: float
The median measured temperature of the eye duct region over the last n frames, reported in the chosen unit. If a face is not detected in a frame, this median will be reset.
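The window size n is not specified here; if similar smoothing needs to be reproduced on the client side, one possible sketch (the window length of 10 is an assumption):

from collections import deque
from statistics import median

class TemperatureSmoother:
    # Rolling window of per-frame readings, cleared whenever a frame
    # arrives without a usable reading (mirrors the reset behaviour above).
    def __init__(self, window: int = 10):  # window size is an assumption
        self.readings = deque(maxlen=window)

    def update(self, packet: dict):
        if not (packet.get("face_detected") and packet.get("eyeduct_visible")):
            self.readings.clear()
            return None, None
        self.readings.append(packet["temperature_measured"])
        values = list(self.readings)
        return sum(values) / len(values), median(values)
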
camera_id
Type: string
The local IP address of the camera.
face_bounding_box
Type: object
The bounding box pixel coordinates for the face detected in the most recent frame, in the coordinate system of the selected stream.
eyeduct_bounding_box
Type: object
The bounding box pixel coordinates for the eye duct detected in the most recent frame, in the coordinate system of the selected stream. The left or right eye duct may be used, depending on the orientation of the face.
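Both bounding box objects share the same structure, so a small helper can flatten either one into a (left, top, right, bottom) tuple for cropping or drawing; for example:

def box_to_tuple(box: dict) -> tuple:
    # Flatten a bounding box object into (left, top, right, bottom) pixels.
    return (box["top_left"]["x"], box["top_left"]["y"],
            box["bottom_right"]["x"], box["bottom_right"]["y"])

# With the example packet above:
# box_to_tuple(packet["eyeduct_bounding_box"])  -> (864, 686, 941, 762)
# box_to_tuple(packet["face_bounding_box"])     -> (785, 592, 1060, 948)
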
face_chip_aligned
Type: string
Base64 encoded RGB cropped and aligned face chip (112x112 JPEG). Will only be set if a face is detected in the frame. This face chip can then be used to generate a face recognition template using this function.
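A sketch of decoding the field back into an image, using Pillow purely for illustration:

import base64
import io

from PIL import Image

def decode_face_chip(packet: dict) -> Image.Image:
    # The field holds a base64 string; decode it and load the JPEG bytes.
    jpg_bytes = base64.b64decode(packet["face_chip_aligned"])
    return Image.open(io.BytesIO(jpg_bytes))  # 112x112 RGB face chip
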
snapshot
Type: string
Base64 encoded RGB JPEG frame (640x360 px). If a face is detected, the face bounding box is drawn on the frame. This frame is in sync with the other information streamed in the websocket packet; it was added because the RTSP stream can lag the websocket stream by a second or two.
Note: by default, this field will not be sent. To enable it, use the /config endpoint and set the enable_snapshot payload field appropriately. Enabling this can slow things down, so only do so if absolutely required.
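A possible sketch of enabling it, assuming the /config endpoint accepts a JSON POST (the exact method and payload shape should be confirmed against the /config documentation):

import requests

# camera_id from the packet is the camera's local IP address.
camera_ip = "192.168.0.22"

# Assumed request shape; check the /config endpoint documentation for
# the exact method and payload.
requests.post(f"http://{camera_ip}/config", json={"enable_snapshot": True})
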
temp_unit
Type: string
The temperature unit, either F or C.
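If readings need to be shown in the other unit, the standard conversion applies, for example:

def to_celsius(temp: float, unit: str) -> float:
    # Convert a reading to Celsius, regardless of the configured unit.
    return (temp - 32.0) * 5.0 / 9.0 if unit == "F" else temp
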
mask_label
Type: string
Either mask or no_mask. This field will only be set if a face is detected in the frame (indicated by the face_detected field).
mask_score
Type: float
The mask score for this image. This can be used for setting custom thresholds that work better for the use case. By default, we use a mask score greater than 3.0 to determine if a mask was detected. This field will only be set if a face is detected in the frame (indicated by the face_detected field).
eyeglasses_label
Type: string
Either glasses or no_glasses. This field will only be set if a face is detected in the frame (indicated by the face_detected field).
eyeglasses_score
Type: float
The glasses score for this image. This can be used for setting custom thresholds that work better for the use case. By default, we use a glasses score greater than 0.0 to determine if glasses were detected. This field will only be set if a face is detected in the frame (indicated by the face_detected field).
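A sketch of applying custom thresholds directly to the scores instead of relying on the labels (the thresholds shown are the defaults mentioned above; the example packet serializes the scores as strings, so they are cast to float here):

MASK_THRESHOLD = 3.0      # default mask threshold mentioned above
GLASSES_THRESHOLD = 0.0   # default glasses threshold mentioned above

def classify(packet: dict) -> dict:
    # Scores are only present when a face is detected.
    if not packet.get("face_detected"):
        return {}
    return {
        "mask": float(packet["mask_score"]) > MASK_THRESHOLD,
        "glasses": float(packet["eyeglasses_score"]) > GLASSES_THRESHOLD,
    }
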
timestamp
Type: string
The timestamp at the time of sending the websocket packet.
Format: day-month-year hour:minute:second.millisecond
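This format can be parsed in Python with, for example:

from datetime import datetime

ts = datetime.strptime("21-07-2020 14:34:19.728", "%d-%m-%Y %H:%M:%S.%f")
# ts -> datetime(2020, 7, 21, 14, 34, 19, 728000)
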
liveness_check_passed
Type: bool
Indicates whether the detected face passed the liveness test, based on thermal data. This is used to prevent spoof attempts such as holding up a phone or image to the camera. This field will only be set if a face is first detected. If the liveness_check configuration option is set to false (the default), then this field will not be sent as part of the response.
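Putting the pieces together, a minimal client sketch using the third-party websockets package (the websocket URL and port are assumptions; substitute the address documented for your camera):

import asyncio
import json

import websockets

CAMERA_WS_URL = "ws://192.168.0.22:8080"  # assumed address; use your camera's actual websocket URL

async def main() -> None:
    async with websockets.connect(CAMERA_WS_URL) as ws:
        async for message in ws:
            packet = json.loads(message)
            if not packet.get("face_detected"):
                continue
            # Only sent when the liveness_check configuration option is enabled.
            if packet.get("liveness_check_passed") is False:
                print("Liveness check failed - possible spoof attempt")
                continue
            if packet.get("eyeduct_visible"):
                print(packet["average_temperature_measured"], packet["temp_unit"])

asyncio.run(main())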