The Trueface Developer Hub

Welcome to the Trueface developer hub. You'll find comprehensive guides and documentation to help you start working with Trueface as quickly as possible, as well as support if you get stuck. Let's jump right in!

Live Emotion Analysis in 90 Seconds

0.4 Update

As of the 0.4 release, fr.draw_face_box() has been renamed to fr.draw_box(); update any older scripts accordingly.

Step by Step Tutorial

Step 1 - Install

pip install wheel_url_for_your_platform

Step 2 - Download Models

Download the Emotion and FR (face recognition) models from:
https://github.com/getchui/offline_sdk/releases/tag/models-latest

Step 3 - Import Modules

from trueface.recognition import FaceRecognizer
from trueface.face_attributes import FaceAttributes
from trueface.video import VideoStream, QVideoStream
import cv2

Step 4 - Init FaceRecognizer & FaceAttributes Classes

fr = FaceRecognizer(
    ctx='cpu',
    fd_model_path='fd_model',
    fr_model_path='model-lite/model.trueface', 
    params_path='model-lite/model.params',
    license='your_token'
    )

fa = FaceAttributes(
    labels='emotion_model/labels.pickle',
    model='emotion_model/model.trueface',
    params='emotion_model/model.params',
    )
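The labels argument points at a pickled index-to-label mapping shipped alongside the emotion model. For illustration, a hypothetical file of the same general shape can be written and read back like this (the real file's classes and container type may differ):

```python
import os
import pickle
import tempfile

# Hypothetical label map; the shipped labels.pickle may use different
# classes or a different structure.
labels = {0: 'angry', 1: 'happy', 2: 'neutral', 3: 'sad', 4: 'surprise'}

path = os.path.join(tempfile.mkdtemp(), 'labels.pickle')
with open(path, 'wb') as f:
    pickle.dump(labels, f)

with open(path, 'rb') as f:
    loaded = pickle.load(f)

print(loaded[1])  # -> happy
```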

Step 5 - Start Video Capture Session

vcap = VideoStream(src=0).start()

Step 6 - Construct Main Loop

while True:
    # read a frame from the stream
    frame = vcap.read()

    # resize if desired
    frame = cv2.resize(frame, (640, 480))

    # run face detection, returning aligned chips as binary data
    bounding_boxes, points, chips = fr.find_faces(
        frame, return_chips=True, return_binary=True, chip_size=112)

    # if no faces are in the frame, grab the next one
    if bounding_boxes is None:
        continue

    # loop over the extracted face chips
    for i, chip in enumerate(chips):

        # resize to fit the network input; alternatively, extract at
        # this size directly by passing chip_size=96 to find_faces
        chip = cv2.resize(chip, (96, 96))

        # classify the chip's emotion attributes
        results = fa.get_attributes(chip)

        if results:
            # get the highest-scoring emotion
            m = max(results, key=results.get)

            # draw its label at the top of the face box
            fr.draw_label(
                frame,
                (int(bounding_boxes[i][0]),
                 int(bounding_boxes[i][1])),
                '%s %f' % (m, results[m]))

            # sort all emotion scores, highest first
            ranked = sorted(results.items(), key=lambda x: x[1], reverse=True)

            # draw the full ranking down the left edge of the frame
            for f, (label, score) in enumerate(ranked):
                cv2.putText(frame, '%s %f' % (label, score),
                            (10, (f * 30) + 25),
                            cv2.FONT_HERSHEY_SIMPLEX, 0.7, (0, 255, 0), 2)

        # draw the face bounding box
        fr.draw_box(frame, bounding_boxes[i])

    # show the annotated frame; press 'q' to quit
    cv2.imshow('Trueface.ai', frame)
    if cv2.waitKey(33) == ord('q'):
        break
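The ranking logic in the loop, a max() over the scores plus a score-sorted list, can be exercised on its own with a made-up results dict of the kind get_attributes() is expected to return (the exact keys and score scale here are assumptions):

```python
# Hypothetical scores; get_attributes() is assumed to return a
# label -> confidence mapping of this shape.
results = {'happy': 0.71, 'neutral': 0.18, 'sad': 0.08, 'angry': 0.03}

# Highest-scoring emotion, as drawn above the face box.
top = max(results, key=results.get)
print(top)  # -> happy

# Full ranking, highest first, as drawn down the left edge.
ranked = sorted(results.items(), key=lambda x: x[1], reverse=True)
print(ranked[0])  # -> ('happy', 0.71)
```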
