Image Moderation

Image Moderation detects inappropriate or unsafe content in images. The criteria include suggestive or explicit racy content, nudity, violence, gore, bloodshed, and the presence of weapons or drugs.

You can provide a .webp, .jpeg, or .png file as the input. Open the file with the open() method, which returns the image file object to be passed for analysis.

You can optionally set the moderation mode to BASIC, MODERATE, or ADVANCED. By default, the image is processed in ADVANCED mode.

The response returns a probability score for each criterion, an overall confidence score, and a prediction of whether the image is safe_to_use or unsafe_to_use.

To learn more about the component instance zia used below, refer to this help section.

Parameters Used

Parameter Name | Data Type | Definition
img | Image | A mandatory parameter. Holds the image to be analyzed.
options | Dict | An optional parameter. Holds the analysis mode value: "basic", "moderate", or "advanced".
zia = app.zia()
img = open("sample.webp", "rb")
result = zia.moderate_image(img, options={"mode": "moderate"})

A sample response is shown below:

{
  "probability": {
    "racy": "0.09",
    "nudity": "0.06"
  },
  "confidence": "0.85",
  "prediction": "safe_to_use"
}
Info: Refer to the SDK Scopes table to determine the required permission level for performing the above operation.

