Face Analytics

Zia Face Analytics detects faces in images and analyzes their facial features to provide information such as the gender, age, and emotion of each detected face.

You must provide a .webp, .jpeg, or .png file as the input. Pass the file path to the open() method, which opens the file and returns a file object; this file object is then passed to the Face Analytics operation.
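As a minimal sketch of this step (the file name "sample.webp" is a placeholder, and a dummy file is created here only so the snippet is self-contained):

```python
# Create a placeholder file so the example runs anywhere.
# In real use, this would already be a .webp/.jpeg/.png image on disk.
with open("sample.webp", "wb") as f:
    f.write(b"\x00" * 10)  # dummy bytes, not a real image

# Open the input image in binary mode ("rb"): the analysis call
# expects raw bytes, not decoded text.
img = open("sample.webp", "rb")
print(img.mode)  # -> rb
img.close()
```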

The analyse_face() method accepts the input image as its argument. You can optionally specify the analysis mode as basic, moderate, or advanced, and set the attributes age, emotion, and gender to true or false to enable or disable their detection. By default, all attributes are detected and the advanced mode is used.

Refer to the API documentation for the request and response formats.

To learn more about the component instance zia used below, refer to this help section.

The response returns the prediction of the enabled attributes, the coordinates and landmarks of facial features of each face, and the confidence score of each analysis.

Parameters Used

Parameter Name | Data Type | Definition
img | Image | A mandatory parameter. Stores the image of the face to be analyzed.
mode | String | An optional parameter. Stores the analysis mode: "basic", "moderate", or "advanced".
age | Boolean | An optional parameter. Determines whether age is detected. Accepted values: true, false.
emotion | Boolean | An optional parameter. Determines whether emotion is detected. Accepted values: true, false.
gender | Boolean | An optional parameter. Determines whether gender is detected. Accepted values: true, false.
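Since mode and the three Boolean attributes all have defaults, it can help to assemble the options dictionary in one place. The helper below is hypothetical (not part of the SDK); it only applies the documented defaults and validates the accepted values:

```python
# Hypothetical helper (not an SDK function) that builds the options dict
# passed as the second argument to analyse_face(), using the documented
# defaults: advanced mode with all attributes enabled.
def build_face_options(mode="advanced", age=True, emotion=True, gender=True):
    valid_modes = ("basic", "moderate", "advanced")
    if mode not in valid_modes:
        raise ValueError(f"mode must be one of {valid_modes}, got {mode!r}")
    for name, value in (("age", age), ("emotion", emotion), ("gender", gender)):
        if not isinstance(value, bool):
            raise TypeError(f"{name} must be a Boolean, got {type(value).__name__}")
    return {"mode": mode, "age": age, "emotion": emotion, "gender": gender}

# e.g. moderate mode with gender detection disabled:
options = build_face_options(mode="moderate", gender=False)
print(options)  # -> {'mode': 'moderate', 'age': True, 'emotion': True, 'gender': False}
```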
# Face Analytics implementation
zia = app.zia()
img = open("sample.webp", "rb")
result = zia.analyse_face(
    img,
    {"mode": "moderate", "age": True, "emotion": True, "gender": False}
)

A sample response is shown below:

{
  "faces_count": 1,
  "faces": [
    {
      "co_ordinates": ["401", "193", "494", "313"],
      "emotion": {
        "confidence": {
          "smiling": "0.75",
          "not_smiling": "0.25"
        },
        "prediction": "smiling"
      },
      "gender": {},
      "confidence": 1,
      "id": "0",
      "landmarks": {
        "right_eye": [["467", "230"]],
        "nose": [["451", "264"]],
        "mouth_right": [["474", "278"]],
        "left_eye": [["426", "239"]],
        "mouth_left": [["434", "283"]]
      },
      "age": {
        "confidence": {
          "20-29": "0.73",
          "30-39": "0.08",
          "0-2": "0.0",
          "40-49": "0.0",
          "50-59": "0.0",
          ">70": "0.0",
          "60-69": "0.0",
          "10-19": "0.17",
          "3-9": "0.0"
        },
        "prediction": "20-29"
      }
    }
  ]
}
Info: Refer to the SDK Scopes table to determine the required permission level for performing the above operation.

Last Updated 2025-03-28 18:24:49 +0530
