--------------------------------------------------------------------------------
title: "Introduction"
description: "Image Moderation is a Zia AI-driven, readily-available image monitoring service that recognizes NSFW content in images, which you can implement in your Catalyst application."
last_updated: "2026-03-18T07:41:08.701Z"
source: "https://docs.catalyst.zoho.com/en/zia-services/help/image-moderation/introduction/"
service: "Zia Services"
--------------------------------------------------------------------------------

# Image Moderation

## Introduction

Image Moderation is a Catalyst Zia Services component that monitors and recognizes inappropriate and unsafe content in images. It is a sub-feature of content moderation that decides whether a particular image is safe for work, based on a pre-determined set of rules.

Zia Image Moderation monitors for and recognizes the following criteria in an image:

* Explicit nudity
* Racy or suggestive content
* Bloodshed, gore, and violence
* Drugs and substances
* Weapons

Image Moderation helps ensure that the user-generated content in your Catalyst applications does not violate your application's standards and guidelines. You can maintain your brand reputation and general decorum by flagging, filtering, or automatically deleting the detected inappropriate content.

Catalyst provides Image Moderation in the {{%bold%}}Java{{%/bold%}}, {{%bold%}}Node.js{{%/bold%}}, and {{%bold%}}Python{{%/bold%}} SDK packages, which you can integrate in your Catalyst web or Android application. The Catalyst console provides easy access to code templates for these environments that you can implement in your application's code. You can also test Image Moderation using sample images in the console, and obtain the moderation results based on the attributes mentioned above.
Image Moderation also provides a confidence score for each result that enables you to verify its accuracy and make informed decisions on the next steps to be taken.

You can refer to the {{%link href="/en/sdk/java/v1/zia-services/image-moderation/" %}}Java SDK documentation{{%/link%}}, {{%link href="/en/sdk/nodejs/v2/zia-services/image-moderation/" %}}Node.js SDK documentation{{%/link%}}, and {{%link href="/en/sdk/python/v1/zia-services/image-moderation/" %}}Python SDK documentation{{%/link%}} for code samples of Image Moderation. Refer to the {{%link href="/en/api/code-reference/zia-services/image-moderation/#ImageModeration" %}}API documentation{{%/link%}} to learn about the API available for Image Moderation. You can learn more about the other components of Catalyst Zia Services from {{%link href="/en/zia-services" %}}this page{{%/link%}}.

--------------------------------------------------------------------------------
title: "Key Concepts"
description: "Image Moderation is a Zia AI-driven, readily-available image monitoring service that recognizes NSFW content in images, which you can implement in your Catalyst application."
last_updated: "2026-03-18T07:41:08.701Z"
source: "https://docs.catalyst.zoho.com/en/zia-services/help/image-moderation/key-concepts/"
service: "Zia Services"
--------------------------------------------------------------------------------

# Key Concepts

Before you learn about the use cases and implementation of Image Moderation, it's important to understand its fundamental concepts in detail.

### Moderation Modes

Image Moderation enables you to select specific criteria to detect and flag during the moderation process. You can do this by specifying one of the three moderation modes in the input along with the image file. The three moderation modes available are:

* {{%bold%}}Basic:{{%/bold%}} Detects nudity alone in an image.
* {{%bold%}}Moderate:{{%/bold%}} Detects nudity and racy content in an image.
* {{%bold%}}Advanced:{{%/bold%}} Detects all the supported criteria: nudity, racy content, gore, drugs, and weapons.

The accuracy of the moderation process varies with each moderation mode, as follows:

* Basic: Can detect unsafe content with 98% accuracy
* Moderate: Can detect unsafe content with 96% accuracy
* Advanced: Can detect unsafe content with 93-95% accuracy

You can consider these accuracy levels, choose a moderation mode based on your use case, and enable moderation for those criteria alone.

### Input Format

Zia Image Moderation moderates image files of the following input file formats:

* ._jpg_/._jpeg_
* ._png_

You can implement Image Moderation in your application and enable input as you require, based on your use case. For example, you can automatically moderate image files uploaded by the end users of your application, and delete unwanted images in real time. Zia can detect instances of unsafe content better if they are visible and distinct in the image, and not obstructed by textual content or watermarks.

The input provided using the API request contains the input image file and the value for the moderation mode as {{%badge%}}basic{{%/badge%}}, {{%badge%}}moderate{{%/badge%}}, or {{%badge%}}advanced{{%/badge%}}. If you don't specify the moderation mode, the advanced mode is applied by default. The file size must not exceed 10 MB. You can check the request format from the {{%link href="/en/api/code-reference/zia-services/image-moderation/#ImageModeration" %}}API documentation{{%/link%}}.
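The input constraints described above (._jpg_/._jpeg_ or ._png_ format, a 10 MB size limit, and a moderation mode that defaults to advanced) can be pre-checked in application code before an image is ever sent to the API. The following is a minimal, hedged sketch of such a pre-check; it is plain Python and does not use or represent the Catalyst SDK itself:

```python
import os

# Constraints stated in the documentation
ALLOWED_EXTENSIONS = {".jpg", ".jpeg", ".png"}
MAX_FILE_SIZE = 10 * 1024 * 1024  # 10 MB
VALID_MODES = {"basic", "moderate", "advanced"}

def validate_moderation_input(path, mode=None):
    """Check a local file against Image Moderation's input constraints.

    Returns the effective moderation mode ("advanced" when none is
    specified, per the documented default), or raises ValueError for
    an unsupported file format, oversized file, or unknown mode.
    """
    ext = os.path.splitext(path)[1].lower()
    if ext not in ALLOWED_EXTENSIONS:
        raise ValueError(f"Unsupported format {ext!r}: use .jpg/.jpeg or .png")
    if os.path.getsize(path) > MAX_FILE_SIZE:
        raise ValueError("File exceeds the 10 MB size limit")
    if mode is None:
        return "advanced"  # documented default mode
    if mode not in VALID_MODES:
        raise ValueError(f"Unknown moderation mode {mode!r}")
    return mode
```

Running this check client-side avoids a round trip to the API for inputs that would be rejected anyway.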
### Response Format

Zia Image Moderation returns the response in the following ways:

* {{%bold%}}In the console:{{%/bold%}} When you upload a sample image in the console, it will return the decoded data in two response formats:<br />
  * {{%bold%}}Textual format:{{%/bold%}}<br /> The {{%link href="/en/zia-services/help/image-moderation/implementation/#test-image-moderation-in-the-catalyst-console" %}}textual response{{%/link%}} contains a list of the detected unsafe content, with the confidence levels of the detection as percentage values. It provides the prediction as _Safe to Use_ or _Unsafe to Use_ with a confidence percentage, based on the detected content. In the textual response, the supported criteria are grouped under the following categories:<br />
    * Violence: Weapons
    * Suggestive: Explicit nudity, Revealing clothes
    * Substance Abuse: Drugs
    * Visually Disturbing: Blood
  * {{%bold%}}JSON format:{{%/bold%}} The JSON response contains the probability of each criterion of the moderation mode as a value between 0 and 1, based on the detected content. The criteria in the JSON response are: {{%badge%}}racy{{%/badge%}}, {{%badge%}}weapon{{%/badge%}}, {{%badge%}}nudity{{%/badge%}}, {{%badge%}}gore{{%/badge%}}, and {{%badge%}}drug{{%/badge%}}. It provides the prediction as {{%badge%}}safe\_to\_use{{%/badge%}} or {{%badge%}}unsafe\_to\_use{{%/badge%}}, with a confidence score between 0 and 1, based on the probabilities of all the criteria.
The confidence score between 0 and 1 can be equated to percentage values as follows:<br />
<table class="content-table">
<thead>
<tr> <th>Confidence level (percentage)</th> <th>Confidence score (0 to 1)</th> </tr>
</thead>
<tbody>
<tr> <td>0-9</td> <td>0.0</td> </tr>
<tr> <td>10-19</td> <td>0.0</td> </tr>
<tr> <td>20-29</td> <td>0.0</td> </tr>
<tr> <td>30-39</td> <td>0.0</td> </tr>
<tr> <td>40-49</td> <td>0.01</td> </tr>
<tr> <td>50-59</td> <td>0.12</td> </tr>
<tr> <td>60-69</td> <td>0.23</td> </tr>
<tr> <td>70 and above</td> <td>0.63</td> </tr>
</tbody>
</table>

* {{%bold%}}Using the SDKs:{{%/bold%}} When you send an image file using an API request, you will only receive a JSON response containing the results in the format specified above. You can check the JSON response format from the {{%link href="/en/api/code-reference/zia-services/image-moderation/#ImageModeration" %}}API documentation{{%/link%}}.

--------------------------------------------------------------------------------
title: "Benefits"
description: "Image Moderation is a Zia AI-driven, readily-available image monitoring service that recognizes NSFW content in images, which you can implement in your Catalyst application."
last_updated: "2026-03-18T07:41:08.701Z"
source: "https://docs.catalyst.zoho.com/en/zia-services/help/image-moderation/benefits/"
service: "Zia Services"
--------------------------------------------------------------------------------

# Benefits

1. {{%bold%}}Protect Community Users{{%/bold%}}<br /><br /> Image Moderation assists you in providing a safe and protected environment for your customers and application users. It helps you enforce compliance with legal standards, company policies, and general decorum.
Catalyst enables you to maintain your brand and customer reputation by ensuring that images containing disturbing content like gore, substance abuse, pornography, and graphic adult content are not circulated on your application's platform.

2. {{%bold%}}Customized and Accurate Results{{%/bold%}}<br /><br /> The moderation modes in Image Moderation provide flexibility in detecting instances of specific categories of unsafe content, based on your requirements. The results are also generated with low error margins, as Zia's model is refined through repeated, systematic training using various machine learning techniques. Zia studies and analyzes large volumes of data to perform complex analysis, ensuring that the results generated are precise, accurate, and reliable.

3. {{%bold%}}Automatic Real-Time Monitoring{{%/bold%}}<br /><br /> Image Moderation enables you to perform real-time monitoring of the user-generated content in your application. Catalyst saves the time and effort required to moderate content manually, by limiting or preventing human review. You can also process images with unsafe content in any way you need. For example, you can implement an additional manual review process, or code your application to delete the detected content automatically and issue warnings to, or terminate the accounts of, the users who violate guidelines. This ensures that your application is monitored 24/7.

4. {{%bold%}}Rapid Performance{{%/bold%}}<br /><br /> Image Moderation generates results instantaneously with a short turnaround time, as soon as the user-generated content is uploaded to your platform. Catalyst ensures a high throughput of data transmission and minimal latency in serving requests. The fast response time, state-of-the-art infrastructure, and scalable resources ensure that it handles unanticipated spikes and provides superior performance.

5.
{{%bold%}}Seamless Integration{{%/bold%}}<br /><br /> You can easily implement Image Moderation in your application without having to learn the complex processing of the algorithms or the backend setup. You can implement the ready-made code templates provided for the Java, Node.js, and Python platforms in any of your Catalyst applications that require Image Moderation.

6. {{%bold%}}Testing in the Console{{%/bold%}}<br /><br /> The testing feature in the console enables you to verify the efficiency of Image Moderation. You can upload sample images and view the results. This gives you an idea of the format and accuracy of the response that will be generated when you implement it in your application.

--------------------------------------------------------------------------------
title: "Use Cases"
description: "Image Moderation is a Zia AI-driven, readily-available image monitoring service that recognizes NSFW content in images, which you can implement in your Catalyst application."
last_updated: "2026-03-18T07:41:08.701Z"
source: "https://docs.catalyst.zoho.com/en/zia-services/help/image-moderation/use-cases/"
service: "Zia Services"
--------------------------------------------------------------------------------

# Use Cases

Image Moderation is essential in applications that allow user-generated content to be freely circulated. The following are some use cases for Zia Image Moderation:

* A social media application that enables users to post pictures in their profiles implements the {{%link href="/en/zia-services/help/image-moderation/key-concepts/#moderation-modes" %}}moderate mode{{%/link%}} of Image Moderation to monitor the images about to be published, as soon as the users click "Upload". This monitoring process happens in the background, and Zia instantly detects images containing explicit nudity and racy content.
The application's logic is coded to delete them automatically, and issue warnings to the users who uploaded the inappropriate content.

* A website implements a child-friendly version, and requires the content distributed in it to be strictly monitored for all instances of gore, nudity, weapons, drugs, and racy content. It uses the advanced mode of Image Moderation to monitor and automatically delete all inappropriate content, to ensure a safe space for minors to use the website freely without parental guidance.

Image Moderation can also be implemented in the following scenarios:

* Applications that implement graphic warnings and image covering for instances that contain gore or nudity.
* Applications that enforce respect towards cultural and religious beliefs by preventing offensive images from being published.
* Apps created for professional and educational environments.
* Child-friendly apps and websites.
* Blogging or social media applications that enable users to upload content without admin review.
* Apps that work to prevent cyberbullying and the distribution of pornographic material.

--------------------------------------------------------------------------------
title: "Implementation"
description: "Image Moderation is a Zia AI-driven, readily-available image monitoring service that recognizes NSFW content in images, which you can implement in your Catalyst application."
last_updated: "2026-03-18T07:41:08.702Z"
source: "https://docs.catalyst.zoho.com/en/zia-services/help/image-moderation/implementation/"
service: "Zia Services"
--------------------------------------------------------------------------------

# Implementation

This section only covers working with Image Moderation in the Catalyst console. Refer to the {{%link href="/en/sdk/java/v1/zia-services/image-moderation" %}}SDK{{%/link%}} and {{%link href="/en/api/code-reference/zia-services/image-moderation" %}}API{{%/link%}} documentation sections for implementing Image Moderation in your application's code.
As mentioned earlier, you can access the code templates that enable you to integrate Image Moderation in your Catalyst application from the console, and also test the feature by uploading images and obtaining the results.

### Access Image Moderation

To access Image Moderation in your Catalyst console:

1. Navigate to {{%bold%}}Zia Services{{%/bold%}} in the left pane of the Catalyst console and click {{%bold%}}Image Moderation.{{%/bold%}}<br />
2. Click {{%bold%}}Try a Demo{{%/bold%}} in the Image Moderation feature page.<br /><br /> This will open the Image Moderation feature.<br /><br />

### Test Image Moderation in the Catalyst Console

You can test Image Moderation either by selecting a sample image from Catalyst or by uploading your own image.

To scan a sample image and view the result:

1. Click {{%bold%}}Select a Sample Image{{%/bold%}} in the box.<br />
2. Select an image from the samples provided.<br /><br /> Image Moderation will scan the image for inappropriate content of all criteria in the {{%link href="/en/zia-services/help/image-moderation/key-concepts/#moderation-modes" %}}advanced mode{{%/link%}}, and display the probability of each detected criterion as a percentage value.<br /><br /> The colors in the response bars indicate the safety of the image in the following way: red indicates that the image is unsafe to use, orange indicates that the image is partially safe to use, and green indicates that the image is safe to use.<br /> You can also view the complete JSON response, which includes the probability of each detected criterion, the prediction, and its confidence score.
Click {{%bold%}}View Response{{%/bold%}} to view the {{%link href="/en/zia-services/help/image-moderation/key-concepts/#response-format" %}}JSON response{{%/link%}}.<br /><br /> You can refer to the {{%link href="/en/api/code-reference/zia-services/image-moderation/#ImageModeration" %}}API documentation{{%/link%}} to view a complete sample JSON response structure for each moderation mode.

To upload your own image and test Image Moderation:

1. Click {{%bold%}}Upload{{%/bold%}} under the _Result_ section.<br /><br /> If you're opening Image Moderation after you have closed it, click {{%bold%}}Browse Files{{%/bold%}} in this box.<br />
2. Upload a file from your local system.<br /> {{%note%}}{{%bold%}}Note:{{%/bold%}} The file must be in ._jpg_/._jpeg_ or ._png_ format. The file size must not exceed 10 MB.{{%/note%}}

The console will scan the image for inappropriate content and display the results.<br /><br /> You can click {{%bold%}}View Photo{{%/bold%}} to uncover the image.<br /><br /> You can view the JSON response in the same way.<br />

### Access Code Templates for Image Moderation

You can implement Image Moderation in your Catalyst application using the code templates provided by Catalyst for the {{%link href="/en/sdk/java/v1/zia-services/image-moderation/" %}}Java{{%/link%}}, {{%link href="/en/sdk/nodejs/v2/zia-services/image-moderation/" %}}Node.js{{%/link%}}, and {{%link href="/en/sdk/python/v1/zia-services/image-moderation/" %}}Python{{%/link%}} platforms. You can access them from the section below the test window. Click either the {{%bold%}}Java SDK{{%/bold%}} or {{%bold%}}NodeJS SDK{{%/bold%}} tab, and copy the code using the copy icon. You can paste this code in your web or Android application's code wherever you require.

You can process the input file as a new {{%badge%}}File{{%/badge%}} in Java.
The {{%badge%}}ZCImageModerationOptions{{%/badge%}} module enables you to set the {{%link href="/en/zia-services/help/image-moderation/key-concepts/#moderation-modes" %}}moderation mode{{%/link%}} as {{%badge%}}BASIC{{%/badge%}}, {{%badge%}}MODERATE{{%/badge%}}, or {{%badge%}}ADVANCED{{%/badge%}} using {{%badge%}}setAnalyseMode{{%/badge%}}.

In Node.js, the {{%badge%}}imPromise{{%/badge%}} object is used to hold the input image file and the moderation mode set for it. You can specify the {{%badge%}}mode{{%/badge%}} as {{%badge%}}basic{{%/badge%}}, {{%badge%}}moderate{{%/badge%}}, or {{%badge%}}advanced{{%/badge%}} to process the image in the required mode.

In Python, you can provide a ._jpg_/._jpeg_ or ._png_ file as the input to the open() method. This method returns the image file object. In the {{%badge%}}zia.moderate_image{{%/badge%}} method, you can pass the image file object, and optionally set the moderation mode as {{%badge%}}BASIC{{%/badge%}}, {{%badge%}}MODERATE{{%/badge%}}, or {{%badge%}}ADVANCED{{%/badge%}}. The image is processed in the {{%badge%}}ADVANCED{{%/badge%}} mode by default.

The response returns the probability of each criterion, and the prediction of the image being {{%badge%}}safe_to_use{{%/badge%}} or {{%badge%}}unsafe_to_use{{%/badge%}} with its confidence score.
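Once the moderation response arrives, your application still has to decide what to do with the image, as in the auto-delete and manual-review scenarios described in the Use Cases section. The sketch below is plain Python post-processing of a result; the flat dictionary shape and the `handle_moderation_result` helper are illustrative assumptions, not part of the Catalyst SDK, and the real field layout should be taken from the API documentation:

```python
def handle_moderation_result(result, confidence_threshold=0.8):
    """Decide what to do with an image based on a moderation result.

    `result` is assumed (hypothetically) to look like:
      {"nudity": 0.9, "racy": 0.3, "gore": 0.0, "drug": 0.0,
       "weapon": 0.0, "prediction": "unsafe_to_use", "confidence": 0.95}
    Returns an (action, flagged_criteria) pair.
    """
    criteria = ("nudity", "racy", "gore", "drug", "weapon")
    # Collect the criteria whose probability crosses a simple 0.5 cutoff
    flagged = [c for c in criteria if result.get(c, 0.0) >= 0.5]
    if (result["prediction"] == "unsafe_to_use"
            and result["confidence"] >= confidence_threshold):
        return ("delete", flagged)   # confident: auto-delete, warn uploader
    if flagged:
        return ("review", flagged)   # borderline: queue for manual review
    return ("publish", flagged)      # clean: publish as-is
```

For example, a result with `nudity` at 0.9 and a confident `unsafe_to_use` prediction routes to the delete branch, while a mildly racy image with a `safe_to_use` prediction lands in the manual-review queue. The thresholds here are placeholders to tune for your application.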