Version 1.1 supports armeabi-v7a (without NEON), arm64-v8a, and x86, and requires Android minSdkVersion 24.
Note: as of version 0.8, we have introduced breaking changes to improve the performance of the SDK; please make sure to test your code and migrate to the Frame helper.
License
To obtain a license key, please contact our sales team and provide your applicationId.
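If you are unsure of your applicationId, the snippet below is a minimal sketch for looking it up at runtime (Context.getPackageName() reports the applicationId of the running build variant); the logApplicationId helper name is only illustrative.
// Illustrative helper: logs the applicationId so you can include it in your license request
void logApplicationId(Context context) {
    Log.d(TAG, "applicationId: " + context.getPackageName());
}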
To set up the Trueface mobile SDK in Android Studio, import trueface-*.aar as a new module (File -> New -> New Module), then add it as a dependency via the module settings.
To construct a FaceRecognizer object, provide an application instance and a valid license key as follows:
FaceRecognizer fr = new FaceRecognizer(getApplication());
fr.setLicense("<your license key>");
if (fr.isLicensed()) {
Log.d(TAG, "the SDK is licensed");
}
This method finds faces in a given frame; the second parameter limits the search to the first face found.
// holds a reference to bmp, to avoid the overhead of passing the data
Frame frame = new Frame(getApplication(), bmp);
List<Fbox> faces = fr.getFaces(frame, false);
Log.d(TAG, "number of faces found is " + faces.size());
for (Fbox face: faces) {
    Log.d(TAG, String.format("top/left %f/%f bottom/right %f/%f",
            face.x1, face.y1, face.x2, face.y2));
}
// make sure to release the frame after use;
// otherwise the frame reference is only deleted on app exit
frame.release();
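To make use of the Fbox coordinates, the sketch below draws the detected boxes onto a mutable copy of the original bitmap; it assumes x1/y1/x2/y2 are pixel coordinates in the bitmap's coordinate space.
// Sketch: draw the detected face boxes onto a copy of the original bitmap
// (assumes Fbox coordinates are pixels in the bitmap's coordinate space)
Bitmap annotated = bmp.copy(Bitmap.Config.ARGB_8888, true);
Canvas canvas = new Canvas(annotated);
Paint paint = new Paint();
paint.setStyle(Paint.Style.STROKE);
paint.setStrokeWidth(4f);
paint.setColor(Color.GREEN);
for (Fbox face : faces) {
    canvas.drawRect(face.x1, face.y1, face.x2, face.y2, paint);
}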
This method returns a cropped and aligned face from a given frame, as annotated by an Fbox object.
Bitmap face = fr.getFace(frame, faces.get(0));
ImageView faceView = findViewById(R.id.face);
faceView.setImageBitmap(face);
This method returns the face features for a face in a given frame, as annotated by an Fbox object.
float[] features = fr.getFaceFeatures(frame, faces.get(0));
String featuresCSV = fr.getFaceFeaturesAsCSV(frame, faces.get(0));
This method returns the calculated similarity between two sets of face features; the score is between 0 and 1.
float[] features1 = fr.getFaceFeatures(frame, faces.get(0));
float[] features2 = fr.getFaceFeatures(frame, faces.get(1));
double similarity = fr.getSimilarity(features1, features2);
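How you act on the score depends on your use case; the snippet below is a minimal sketch of turning it into a match decision, where the 0.6 threshold is an illustrative assumption to be tuned on your own data rather than an SDK recommendation.
// Illustrative only: 0.6 is an assumed threshold, not an SDK-recommended value
double threshold = 0.6;
boolean sameIdentity = similarity >= threshold;
Log.d(TAG, "same identity: " + sameIdentity + " (similarity " + similarity + ")");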
To estimate head pose, first detect a face in the given photo, then call estimateHeadPose as in the example:
Bitmap mrBean = getBitmapFromAsset(this, "mr_bean.jpg");
Frame bean = new Frame(getApplication(), mrBean);
Fbox beanFaceBox = fr.getFaces(bean, true).get(0);
bean.release();
HeadPose hp = fr.estimateHeadPose(mrBean, beanFaceBox);
if (hp != null) {
    assert (hp.yaw > 0.95f);
    assert (hp.yaw < 1.02f);
    assert (hp.pitch > 0.15f);
    assert (hp.pitch < 0.20f);
    assert (hp.roll > 0.10f);
    assert (hp.roll < 0.17f);
}
This method estimates the probability of an eye blink in a given frame and returns a float array: the first value represents the left eye and the second the right eye. A value of -0.1f indicates an error, such as no face being detected.
boolean isClosed(float[] args) {
    return args[0] >= 0.999 && args[1] >= 0.999;
}
Bitmap close = getBitmapFromAsset(this, "close.png");
Bitmap open = getBitmapFromAsset(this, "open.png");
Frame f = new Frame(getApplication(), close);
float[] isClosed = fr.detectBlinks(f);
f.release();
f = new Frame(getApplication(), open);
float[] isOpened = fr.detectBlinks(f);
f.release();
d(TAG, "isClosed " + isClosed[0] + " - " + isClosed[1] + " - " + isClosed(isClosed));
d(TAG, "isOpened " + isOpened[0] + " - " + isOpened[1] + " - " + isClosed(isOpened));