Face Recognition

This service spots a given set of people in images or videos.

Before it can recognize people, the service must first be provided with a few pictures of each person's face.

How to prepare face samples?

Here are a few tips to make sure you get the most out of the Angus face_recognition service:

  • make sure the resolution of these samples is high enough;
  • make sure each sample shows a single face only, to avoid any ambiguity;
  • the service performs better if you provide more than one sample per person, with different facial expressions.
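The tips above can be sketched as a small pre-flight check on each candidate sample. This helper is our own illustration, not part of the service: the 200 px threshold is an assumption, and `n_faces` stands for the face count reported by whatever face detector you already have at hand.

```python
def is_valid_sample(width, height, n_faces, min_side=200):
    """Check a candidate face sample against the tips above.

    width, height -- resolution of the sample image in pixels
    n_faces       -- number of faces detected in the image
    min_side      -- minimal acceptable size of the shorter side
                     (200 px is an assumption, not an official limit)
    """
    if min(width, height) < min_side:
        return False  # resolution too low
    if n_faces != 1:
        return False  # ambiguous: zero or several faces
    return True

print(is_valid_sample(640, 480, 1))  # True: good sample
print(is_valid_sample(120, 90, 1))   # False: too small
print(is_valid_sample(640, 480, 2))  # False: two faces visible
```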

For example, the code sample shown below makes use of the following face samples (only one sample per person is used in that case).

(Sample images: aurelien.jpg, gwenn.jpg, sylvain.jpg)

Getting Started

Using the Angus python SDK:

# -*- coding: utf-8 -*-
import angus.client
from pprint import pprint

conn = angus.client.connect()
service = conn.services.get_service('face_recognition', version=1)

PATH = "/path/to/your/face/samples/"

w1_s1 = conn.blobs.create(open(PATH + "jamel/1.jpeg", 'rb'))
w1_s2 = conn.blobs.create(open(PATH + "jamel/2.jpg", 'rb'))
w1_s3 = conn.blobs.create(open(PATH + "jamel/3.jpg", 'rb'))
w1_s4 = conn.blobs.create(open(PATH + "jamel/4.jpg", 'rb'))

w2_s1 = conn.blobs.create(open(PATH + "melissa/1.jpg", 'rb'))
w2_s2 = conn.blobs.create(open(PATH + "melissa/2.jpg", 'rb'))
w2_s3 = conn.blobs.create(open(PATH + "melissa/3.jpg", 'rb'))
w2_s4 = conn.blobs.create(open(PATH + "melissa/4.jpg", 'rb'))

album = {'jamel': [w1_s1, w1_s2, w1_s3, w1_s4], 'melissa': [w2_s1, w2_s2, w2_s3, w2_s4]}

job = service.process({'image': open(PATH + "melissa/5.jpg", 'rb'), 'album': album})
pprint(job.result)


The API takes a stream of 2D still images as input, in JPEG or PNG format, with no constraint on resolution.

Note, however, that the higher the resolution, the longer the API takes to process it and return a result.
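Since processing time grows with resolution, downscaling frames before uploading them is a cheap optimization. A minimal sketch computing the target size (the 640 px cap and the helper name are our own choices, not service requirements):

```python
def upload_size(width, height, max_side=640):
    """Return (width, height) scaled so the longer side is at most
    max_side pixels, preserving the aspect ratio. Images already
    small enough are returned unchanged."""
    scale = max_side / float(max(width, height))
    if scale >= 1.0:
        return width, height
    return int(width * scale), int(height * scale)

print(upload_size(1920, 1080))  # (640, 360)
print(upload_size(320, 240))    # (320, 240): untouched
```

The resulting size can then be passed to `cv2.resize()` before JPEG-encoding the frame, as done in the webcam sample below.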

The function process() takes a dictionary as input formatted as follows:

{
  'image' : file,
  'album' : {"people1": [sample_1, sample_2], "people2": [sample_1, sample_2]}
}
  • image: a Python file object, as returned for example by open(), or a StringIO buffer.
  • album: a dictionary containing samples of the faces to be spotted. Samples must first be uploaded to the service using blobs.create(), as in the example above. The more samples the better, although one sample per person is enough.
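If your samples follow the one-directory-per-person layout used in the example above, the album can be assembled programmatically. A sketch under that assumption; `build_album` and its `create_blob` parameter are our own helpers, and in real code you would pass `lambda p: conn.blobs.create(open(p, 'rb'))`:

```python
import os
import tempfile

def build_album(root, create_blob):
    """Build the 'album' dict from a layout where each sub-directory
    of `root` is named after a person and holds that person's samples."""
    album = {}
    for person in sorted(os.listdir(root)):
        person_dir = os.path.join(root, person)
        if not os.path.isdir(person_dir):
            continue
        album[person] = [
            create_blob(os.path.join(person_dir, f))
            for f in sorted(os.listdir(person_dir))
            if f.lower().endswith((".jpg", ".jpeg", ".png"))
        ]
    return album

# Demo with a throwaway layout and a dummy blob factory.
root = tempfile.mkdtemp()
for person, files in {"jamel": ["1.jpg", "2.jpg"], "melissa": ["1.jpg"]}.items():
    os.makedirs(os.path.join(root, person))
    for f in files:
        open(os.path.join(root, person, f), "w").close()

album = build_album(root, create_blob=lambda path: path)
print(sorted(album))        # ['jamel', 'melissa']
print(len(album["jamel"]))  # 2
```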


Events will be pushed to your client in the following format:

  "input_size" : [480, 640],
  "nb_faces" : 1,
  "faces" : [
                "roi" : [345, 223, 34, 54],
                "roi_confidence" : 0.89,
                "names" : [
                              "key" : "jamel",
                              "confidence" : 0.75
                              "key" : "melissa",
                              "confidence" : 0.10
  • input_size: width and height of the input image, in pixels (to be used as a reference for the roi output).
  • nb_faces: number of faces detected in the given image.
  • roi: Region Of Interest, [pt.x, pt.y, width, height], where pt is the upper-left corner of the rectangle outlining the detected face.
  • roi_confidence: probability that a real face is indeed located at the given roi.
  • key: the key identifying a given group of samples (as specified in the album input).
  • confidence: probability that the corresponding person was spotted in the image / video stream.
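A client typically wants one label per detected face rather than the full list of candidates. A minimal sketch of picking the best match from such an event; the `best_match` helper and its 0.5 confidence cut-off are our own illustration, not part of the API:

```python
def best_match(face, min_confidence=0.5):
    """Return the most likely name for a detected face, or None if
    no candidate reaches min_confidence (the 0.5 cut-off is ours)."""
    names = face.get("names", [])
    if not names:
        return None
    top = max(names, key=lambda n: n["confidence"])
    return top["key"] if top["confidence"] >= min_confidence else None

# Event payload shaped like the example above.
event = {
    "input_size": [480, 640],
    "nb_faces": 1,
    "faces": [{
        "roi": [345, 223, 34, 54],
        "roi_confidence": 0.89,
        "names": [
            {"key": "jamel", "confidence": 0.75},
            {"key": "melissa", "confidence": 0.10},
        ],
    }],
}

for face in event["faces"]:
    print(best_match(face))  # jamel
```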

Code Sample

requirements: OpenCV 2 and its Python bindings

This code sample captures the stream of a web cam and displays the result of the face_recognition service in a GUI.

# -*- coding: utf-8 -*-
import StringIO
import cv2
import numpy as np

import angus.client

def main(stream_index):
    camera = cv2.VideoCapture(stream_index)
    camera.set(cv2.cv.CV_CAP_PROP_FRAME_WIDTH, 640)
    camera.set(cv2.cv.CV_CAP_PROP_FRAME_HEIGHT, 480)
    camera.set(cv2.cv.CV_CAP_PROP_FPS, 10)

    if not camera.isOpened():
        print("Cannot open stream of index {}".format(stream_index))
        return

    print("Input stream is of resolution: {} x {}".format(camera.get(3), camera.get(4)))

    conn = angus.client.connect()
    service = conn.services.get_service("face_recognition", version=1)

    ### Choose here the appropriate pictures.
    ### Pictures given as samples for the album should only contain 1 visible face.
    ### You can provide the API with more than 1 photo for a given person.
    w1_s1 = conn.blobs.create(open("./images/gwenn.jpg", 'rb'))
    w2_s1 = conn.blobs.create(open("./images/aurelien.jpg", 'rb'))
    w3_s1 = conn.blobs.create(open("./images/sylvain.jpg", 'rb'))

    album = {'gwenn': [w1_s1], 'aurelien': [w2_s1], 'sylvain': [w3_s1]}

    service.enable_session({"album" : album})

    while camera.isOpened():
        ret, frame = camera.read()
        if not ret:
            break

        # JPEG-encode the frame and wrap it in a file-like buffer.
        ret, buff = cv2.imencode(".jpg", frame, [cv2.IMWRITE_JPEG_QUALITY, 80])
        buff = StringIO.StringIO(np.array(buff).tostring())

        job = service.process({"image": buff})
        res = job.result

        for face in res['faces']:
            x, y, dx, dy = face['roi']
            cv2.rectangle(frame, (x, y), (x+dx, y+dy), (0, 255, 0))

            if len(face['names']) > 0:
                name = face['names'][0]['key']
                cv2.putText(frame, "Name = {}".format(name), (x, y),
                            cv2.FONT_HERSHEY_SIMPLEX, 1, (255, 255, 255))

        cv2.imshow('original', frame)
        if cv2.waitKey(1) & 0xFF == ord('q'):
            break

    camera.release()
    cv2.destroyAllWindows()


if __name__ == '__main__':
    ### Web cam index might be different from 0 on your setup.
    ### To grab a given video file instead of the host computer cam, try:
    ### main("/path/to/myvideo.avi")
    main(0)