Before you learn about the use cases and implementation of Image Moderation, it’s important to understand its fundamental concepts in detail.
Image Moderation enables you to select specific criteria to detect and flag during the moderation process. You can do this by specifying one of the three moderation modes in the input along with the image file.
The three moderation modes available are:
- Basic: Detects only nudity in an image.
- Moderate: Detects nudity and racy content in an image.
- Advanced: Detects all the supported criteria: nudity, racy content, gore, drugs, and weapons.
The accuracy of the moderation process varies with each moderation mode, as follows:
- Basic: Can detect unsafe content with 98% accuracy
- Moderate: Can detect unsafe content with 96% accuracy
- Advanced: Can detect unsafe content with 93-95% accuracy
Consider these accuracy levels and choose the moderation mode that detects only the criteria relevant to your use case.
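The mode-to-criteria mapping described above can be sketched as a small helper. This is illustrative Python, not part of the official SDK; the criterion keys assume the names used in the JSON response (racy, weapon, nudity, gore, and drug), and the helper name is hypothetical:

```python
# The criteria covered by each moderation mode, per the list above.
# (Illustrative sketch; not the official SDK.)
MODE_CRITERIA = {
    "basic": {"nudity"},
    "moderate": {"nudity", "racy"},
    "advanced": {"nudity", "racy", "gore", "drug", "weapon"},
}

def pick_mode(required_criteria):
    """Return the least extensive mode that covers every required criterion."""
    for mode in ("basic", "moderate", "advanced"):
        if set(required_criteria) <= MODE_CRITERIA[mode]:
            return mode
    raise ValueError(f"Unsupported criteria: {required_criteria}")
```

For example, if you only need to flag racy content, this picks the moderate mode, which offers higher accuracy than advanced.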
Zia Image Moderation moderates image files of the following input file formats:
You can implement Image Moderation in your application and enable input as you require, based on your use case. For example, you can automatically moderate image files uploaded by the end users of your application, and delete unwanted images in real-time.
Zia can detect instances of unsafe content more reliably when they are visible and distinct in the image, and when they are not obstructed by textual content or watermarks.
The input provided in the API request contains the image file and the value for the moderation mode: basic, moderate, or advanced. If you don't specify the moderation mode, the advanced mode is used by default. The file size must not exceed 10 MB.
You can check the request format from the API documentation.
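The input rules above (mode defaults to advanced, file capped at 10 MB) can be sketched as a small pre-flight check. This is a hypothetical helper, not the official SDK; the field names "image" and "mode" are assumptions, so check the API documentation for the exact parameter names:

```python
import os

MAX_FILE_SIZE = 10 * 1024 * 1024  # input images must not exceed 10 MB
VALID_MODES = {"basic", "moderate", "advanced"}

def build_request(image_path, mode=None):
    """Validate the input and assemble the request fields.

    The field names ("image", "mode") are illustrative assumptions;
    the real parameter names are in the API documentation.
    """
    mode = mode or "advanced"  # advanced is the default mode
    if mode not in VALID_MODES:
        raise ValueError(f"Unknown moderation mode: {mode}")
    if os.path.getsize(image_path) > MAX_FILE_SIZE:
        raise ValueError("Image file exceeds the 10 MB limit")
    return {"image": image_path, "mode": mode}
```

Validating locally like this avoids a round trip to the API for inputs that would be rejected anyway.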
Zia Image Moderation returns the response in the following ways:
- In the console: When you upload a sample image in the console, the decoded data is returned in two response formats:
- Textual format:
The textual response contains a list of the detected unsafe content, with the confidence levels of the detection as percentage values. It provides the prediction as Safe to Use or Unsafe to Use with a confidence percentage, based on the detected content. In the textual response, the supported criteria are grouped under the following categories:
    - Violence: Weapons
    - Suggestive: Explicit nudity, Revealing clothes
    - Substance Abuse: Drugs
    - Visually Disturbing: Blood
- JSON format: The JSON response contains the probability of each criterion in the moderation mode as a value between 0 and 1, based on the detected content. The criteria in the JSON response are: racy, weapon, nudity, gore, and drug. It provides the prediction as safe_to_use or unsafe_to_use, with a confidence score between 0 and 1, based on the probabilities of all the criteria. The confidence score maps directly to the confidence level in percentage; for example, a score of 0.95 corresponds to a confidence level of 95%.
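Interpreting such a JSON response might look like the sketch below. The response structure, field names, and the 0.5 flagging threshold are all assumptions for illustration; the actual format is in the API documentation:

```python
def summarize(response):
    """Convert a moderation JSON response into a readable summary.

    `response` is assumed (illustratively) to look like:
    {"prediction": "unsafe_to_use", "confidence": 0.93,
     "probability": {"racy": 0.91, "weapon": 0.02, "nudity": 0.88,
                     "gore": 0.01, "drug": 0.0}}
    """
    label = "Unsafe to Use" if response["prediction"] == "unsafe_to_use" else "Safe to Use"
    percent = round(response["confidence"] * 100)  # score 0-1 -> percentage
    # Flag criteria above an assumed 0.5 probability, highest first.
    flagged = sorted(
        (c for c, p in response["probability"].items() if p >= 0.5),
        key=lambda c: -response["probability"][c],
    )
    return {"label": label, "confidence_percent": percent, "flagged": flagged}
```

A summary like this could drive the real-time deletion workflow mentioned earlier: delete the upload when the label is "Unsafe to Use".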
- Using the SDKs: When you send an image file through an API request, you receive only a JSON response containing the results in the format specified above. You can check the JSON response format in the API documentation.
Last Updated 2023-05-09 17:03:08 +0530