# Churn Prediction -------------------------------------------------------------------------------- title: "Introduction" description: "churn prediction, Create a powerful ML pipeline that can be used to predict the churn using the Catalyst QuickML service." last_updated: "2026-03-18T07:41:08.674Z" source: "https://docs.catalyst.zoho.com/en/tutorials/churn-prediction/introduction/" service: "All Services" -------------------------------------------------------------------------------- # Churn Prediction # Introduction In this tutorial, we will guide you through the process of building a powerful machine learning model using {{%link href="/en/quickml/getting-started/introduction/" %}}Catalyst QuickML{{%/link%}} to predict whether or not a client will leave. We'll first {{%link href="/en/quickml/help/data-preprocessing/data-cleaning/" %}}preprocess the datasets{{%/link%}} to make sure they're tidy and prepared for training. A {{%link href="/en/quickml/help/create-data-pipeline/" %}}data pipeline{{%/link%}} will be built next to handle data transformation, and an {{%link href="/en/quickml/help/create-ml-pipeline/" %}}ML pipeline{{%/link%}} will be built to train and test the model. Finally, we'll provide an {{%link href="/en/quickml/help/pipeline-endpoints/" %}}endpoint{{%/link%}} for the trained model that enables interaction with external apps and provides churn predictions. The Churn Prediction ML model is built using the following Catalyst service: **{{%link href="/en/quickml/getting-started/introduction/" %}}Catalyst QuickML{{%/link%}}** : Using this service, we will first pre-process the sample datasets by applying {{%link href="/en/quickml/help/data-preprocessing/data-cleaning/" %}}node operations{{%/link%}} to them and constructing the {{%link href="/en/quickml/help/create-data-pipeline/" %}}data pipeline{{%/link%}}.
This pre-processed data will be used to create an ML model by executing {{%link href="/en/quickml/help/ml-algorithms/classification-algorithms/" %}}ML algorithms{{%/link%}}. Finally, this Churn Prediction ML model can be accessed by external applications using the {{%link href="/en/quickml/help/pipeline-endpoints/" %}}endpoint URL{{%/link%}} generated in QuickML. The final output, after creating all the required data and ML pipelines in the {{%link href="https://console.catalyst.zoho.com/baas/index" %}}Catalyst console{{%/link%}}, will look like this: -------------------------------------------------------------------------------- title: "Prerequisites" description: "churn prediction, Create a powerful ML pipeline that can be used to predict the churn using the Catalyst QuickML service." last_updated: "2026-03-18T07:41:08.675Z" source: "https://docs.catalyst.zoho.com/en/tutorials/churn-prediction/prerequisites/" service: "All Services" related: - Machine Learning Algorithms (/en/quickml/help/ml-algorithms/classification-algorithms/) -------------------------------------------------------------------------------- # Prerequisites Since this tutorial involves only {{%link href="/en/quickml/getting-started/introduction/" %}}Catalyst QuickML{{%/link%}}, we will be working entirely in the {{%link href="https://console.catalyst.zoho.com/baas/index" %}}Catalyst console{{%/link%}} to build data and {{%link href="/en/quickml/help/create-ml-pipeline/" %}}ML pipelines{{%/link%}}, create ML models, and train the models to predict outcomes. 
Before you begin working on this tutorial, please download the datasets below: - {{%link href="https://workdrive.zohoexternal.com/external/0107330f706ec1b02997f01fce855067dce3efbbaac7b96bf2fba8682e873a09" %}}Churn_1{{%/link%}} - {{%link href="https://workdrive.zohoexternal.com/external/1876fbcdd058652c2c512390cac5821e690f75522612bb862fc1297503a41850" %}}Churn_2{{%/link%}} This tutorial aims to implement cleaning, refining, and pre-processing operations on the datasets, and then use them to train ML models. We will be uploading the datasets to Catalyst QuickML in the later sections of this tutorial. -------------------------------------------------------------------------------- title: "Create a Project" description: "Churn prediction, Create a powerful ML pipeline that can be used to predict the churn using the Catalyst QuickML service." last_updated: "2026-03-18T07:41:08.680Z" source: "https://docs.catalyst.zoho.com/en/tutorials/churn-prediction/create-a-project/" service: "All Services" related: - Catalyst Projects (/en/getting-started/catalyst-projects) -------------------------------------------------------------------------------- # Create a Project Let's {{%link href="/en/getting-started/catalyst-projects" %}}create a Catalyst project{{%/link%}} from the Catalyst console. 1. Log in to the {{%link href="https://console.catalyst.zoho.com/baas/index" %}}Catalyst console{{%/link%}}, then click {{%badge%}}Create a new Project{{%/badge%}}. <br /> 2. Enter the project’s name as "**ChurnPrediction**" (or a name you wish to give the project) in the pop-up window that appears. <br /> 3. Click the {{%badge%}}Create{{%/badge%}} button. Your project will be created and automatically opened. To access your project later, simply click the {{%badge%}}Access Project{{%/badge%}} button.
<br /> -------------------------------------------------------------------------------- title: "Upload Dataset" description: "churn prediction, Create a powerful ML pipeline that can be used to predict the churn using the Catalyst QuickML service." last_updated: "2026-03-18T07:41:08.681Z" source: "https://docs.catalyst.zoho.com/en/tutorials/churn-prediction/upload-dataset/" service: "All Services" related: - Create Your First pipeline (/en/quickml/help/create-ml-pipeline) -------------------------------------------------------------------------------- # Upload the Dataset Let's begin by uploading the dataset to Catalyst QuickML using the available {{%link href="/en/quickml/help/data-connectors/zoho-apps/" %}}dataset connectors{{%/link%}}. 1. Navigate to the QuickML service in the Catalyst console and click {{%badge%}}Start Exploring{{%/badge%}}. <br /> 2. Navigate to the {{%badge%}}Datasets{{%/badge%}} component and click {{%badge%}}Import Dataset{{%/badge%}}. <br /> 3. An Import Dataset pop-up will be displayed. In the **Data Sources** step, navigate to File Upload and click {{%badge%}}Upload File{{%/badge%}}. <br /> Upload the **Churn_1** dataset that you downloaded earlier. Set the Quotes Type to "**Double Quotes(")**" and the Escape Character to "**Backslash(\)**", then click {{%badge%}}Next{{%/badge%}}. <br /> The name of the dataset will be auto-populated based on the uploaded file. You can edit it, if required, and click {{%badge%}}Upload{{%/badge%}}. <br /> Once the dataset is uploaded successfully, click the {{%badge%}}Done{{%/badge%}} button to proceed. <br /> The dataset will be displayed in the **All Datasets** section. You can click on the dataset name to view the dataset's details.
<br /> Once you click on the **Churn_1** dataset in the list, you'll be redirected to the **Dataset Details** page where you can view the {{%link href="/en/quickml/help/data-profiler-and-viewer/#what-is-data-profiling" %}}profiling, data preview{{%/link%}} and {{%link href="/en/quickml/help/data-visualization/overview/" %}}visualization chart{{%/link%}} of the dataset. <br /> Now, you can proceed to upload the other dataset, **Churn_2**, by repeating the steps mentioned above. -------------------------------------------------------------------------------- title: "Create Data Pipeline" description: "churn prediction, Create a powerful ML pipeline that can be used to predict the churn using the Catalyst QuickML service." last_updated: "2026-03-18T07:41:08.681Z" source: "https://docs.catalyst.zoho.com/en/tutorials/churn-prediction/create-data-pipeline/" service: "All Services" related: - Data Cleaning (/en/quickml/help/data-preprocessing/data-cleaning) - Data Transformation (/en/quickml/help/data-preprocessing/data-transformation) - Data Profiler and Viewer (/en/quickml/help/data-profiler-and-viewer/) -------------------------------------------------------------------------------- # Create a data pipeline Now that we have uploaded the datasets, we will proceed with creating a {{%link href="/en/quickml/help/create-data-pipeline/"%}}data pipeline{{%/link%}} with the dataset. 1. Navigate to the **Datasets** component in the left menu. There are two ways to create a data pipeline: - You can click on the dataset and then click {{%badge%}}Create Pipeline{{%/badge%}} in the top-right corner of the page. <br /> - You can click on the pen icon located to the left of the dataset name, as shown in the image below. <br /> Here, we are selecting the **Churn_1** dataset for preprocessing. **Churn_2** will be added to this dataset in the upcoming preprocessing steps. 2. Name the pipeline "**Churn Prediction Data Pipeline**" and click {{%badge%}}Create Pipeline{{%/badge%}}.
<br /> The {{%link href="/en/quickml/help/pipeline-builder-interface/walkthrough/#pipeline-builder-interface-1" %}}pipeline builder interface{{%/link%}} will open as shown in the screenshot below. <br /> We will be performing the following set of data preprocessing operations in order to clean, refine, and transform the datasets, and then execute the data pipeline. Each of these operations involves individual {{%link href="/en/quickml/help/data-preprocessing/data-cleaning/" %}}data nodes{{%/link%}} that are used to construct the pipeline. ### Data preprocessing with QuickML 1. #### Combining two datasets With the aid of the **Add Dataset** node in QuickML, we can add a new dataset (please note that you must first upload the dataset you wish to add). Here, we are adding the **Churn_2** dataset to merge with the existing dataset. A custom name for the node can be given in the {{%badge%}}Custom Name{{%/badge%}} section; here, we have kept the default name, **Add dataset**. Then click the {{%badge%}}Save{{%/badge%}} button. <br /> - Click {{%badge%}}Data Extraction{{%/badge%}} in the left panel and choose the {{%badge%}}Add Dataset{{%/badge%}} [node](/en/quickml/help/data-preprocessing/data-extraction/#add-dataset). This will help you add a new dataset (**Churn_2**) to the pipeline. - Select {{%badge%}}Data Transformation{{%/badge%}} in the left panel and choose the {{%badge%}}Union{{%/badge%}} [node](/en/quickml/help/data-preprocessing/data-transformation/#union). Then connect the two nodes by joining their links. This will combine the two supplied datasets, **Churn_1** and **Churn_2**, into a single dataset. - If any duplicate records exist in either dataset, be sure to tick the box labeled "**Drop Duplicate Records**" while performing the operation. Then click the {{%badge%}}Save{{%/badge%}} button. This will remove the duplicate records from both datasets. <br /> 2.
#### Select/drop columns Selecting or dropping columns from a dataset is a common data preprocessing step in data analysis and machine learning. The choice to select or drop columns depends on the specific objectives and requirements of your analysis or modelling task. The columns we don't need for our model training are "**security_no**", "**joining_date**", "**avg_frequency_login_days**", "**last_visit_time**", and "**referral_id**" in the provided datasets. Using QuickML, you can quickly choose the fields needed for model training using the **Select/Drop** [node](/en/quickml/help/data-preprocessing/data-cleaning/#select-or-drop) from the Data Cleaning component. <br /> 3. #### Filling dataset columns with values Using the {{%badge%}}Fill Columns{{%/badge%}} [node](/en/quickml/help/data-preprocessing/data-cleaning/#fill-columns) in QuickML, we can easily fill column values based on a given condition. We can fill the null values or non-null values based on our requirements. Here, we are filling the columns "**joined_through_referral**" and "**medium_of_operation**" with a custom value of "**Not mentioned**" for the rows containing "**?**". For the column "**points_in_wallet**", we are replacing the empty values with a custom value of "**0**". <br /> From the drop-down menu, choose the appropriate **data type** for the column. <br /> Click the {{%badge%}}"+"{{%/badge%}} button to add multiple criteria, then click the {{%badge%}}Save{{%/badge%}} button once the criteria are set. <br /> 4. #### Filter Data Filtering a dataset typically involves selecting a subset of rows from a DataFrame that meet certain criteria or conditions.
Here, we are using the {{%badge%}}Filter{{%/badge%}} [node](/en/quickml/help/data-preprocessing/data-cleaning/#filter) from the Data Cleaning section to keep rows where the "**days_since_last_login**", "**avg_time_spent**", and "**points_in_wallet**" columns have values greater than or equal to "**0**", and where the "**preferred_offer_types**" and "**region_category**" columns have non-empty values. <br /> 5. #### Sentiment Analysis Sentiment Analysis is a technique used to determine the sentiment or emotional tone expressed in a piece of text, such as customer feedback or reviews. The goal of sentiment analysis is to classify the text as positive, negative, or neutral based on the emotions or opinions it conveys. Here, we have a column named "**feedback**" which contains feedback about the product. We can classify the values of this column as positive, negative, or neutral using the **Sentiment Analysis** [node](/en/quickml/help/zia-features/#zia-sentiment-analysis) under **Zia Features**. Mark the checkbox next to {{%badge%}}Replace in place{{%/badge%}} if you want to replace the value of the "**feedback**" column with the result of the {{%badge%}}Sentiment Analysis{{%/badge%}} node. <br /> 6. #### Save and Execute Now, connect the {{%badge%}}Sentiment Analysis{{%/badge%}} [node](/en/quickml/help/zia-features/#zia-sentiment-analysis) to the {{%badge%}}Destination{{%/badge%}} node. Once all the nodes are connected, click the {{%badge%}}Save{{%/badge%}} button to save the pipeline. Then click the {{%badge%}}Execute{{%/badge%}} button to execute the pipeline. <br /> You'll be redirected to the page below, which shows the executed pipeline with the execution status. We can see here that the pipeline execution was successful.
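To make the preprocessing steps concrete, here is a rough pandas equivalent of the pipeline built above. This is only an illustrative sketch, not QuickML's actual implementation: the toy dataframes and values are invented, only a few of the tutorial's columns are shown, and the Zia Sentiment Analysis step is omitted because it has no simple open-source counterpart.

```python
import pandas as pd

# Toy stand-ins for the two files; the real datasets have more columns and rows.
churn_1 = pd.DataFrame({
    "security_no": ["XW0DQ7H", "5K0N3X1"],
    "days_since_last_login": [5, -999],
    "avg_time_spent": [30.5, 12.0],
    "points_in_wallet": [None, 200.0],
    "joined_through_referral": ["?", "Yes"],
    "medium_of_operation": ["Desktop", "?"],
    "preferred_offer_types": ["Gift Vouchers/Coupons", "Credit/Debit Card Offers"],
    "region_category": ["Town", "City"],
})
churn_2 = churn_1.copy()  # second file with overlapping records

# 1. Union node: stack both datasets and drop duplicate records.
df = pd.concat([churn_1, churn_2], ignore_index=True).drop_duplicates()

# 2. Select/Drop node: remove columns not needed for training
#    (only security_no here; also joining_date, referral_id, etc. in the real data).
df = df.drop(columns=["security_no"])

# 3. Fill Columns node: replace "?" placeholders and empty values.
for col in ["joined_through_referral", "medium_of_operation"]:
    df[col] = df[col].replace("?", "Not mentioned")
df["points_in_wallet"] = df["points_in_wallet"].fillna(0)

# 4. Filter node: keep rows with non-negative numeric values and
#    non-empty category values.
df = df[
    (df["days_since_last_login"] >= 0)
    & (df["avg_time_spent"] >= 0)
    & (df["points_in_wallet"] >= 0)
    & df["preferred_offer_types"].notna()
    & df["region_category"].notna()
]
print(df.shape)
```

Running this on the toy data leaves a single clean row: the duplicate copies are collapsed by `drop_duplicates`, and the record with a negative `days_since_last_login` is filtered out, mirroring what the QuickML nodes do at scale.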
<br /> Click on {{%badge%}}Execution Stats{{%/badge%}} to access more details regarding the compute usage, as shown below. <br /> In this part, we've looked at how to process data using QuickML, giving you a variety of effective ways to get your data ready for building machine learning models. This data pipeline can be reused to create multiple ML experiments for varied use cases within your Catalyst project. -------------------------------------------------------------------------------- title: "Create ML Pipeline" description: "churn prediction, Create a powerful ML pipeline that can be used to predict the churn using the Catalyst QuickML service." last_updated: "2026-03-18T07:41:08.681Z" source: "https://docs.catalyst.zoho.com/en/tutorials/churn-prediction/create-ml-pipeline/" service: "All Services" related: - ML Algorithms in QuickML (/en/quickml/help/ml-algorithms/classification-algorithms) - Operations in QuickML (/en/quickml/help/operations-in-quickml/encoding) -------------------------------------------------------------------------------- # Create an ML pipeline To build the prediction model, we will use the preprocessed dataset in the {{%link href="/en/quickml/help/create-ml-pipeline/"%}}ML Pipeline Builder{{%/link%}}. The initial step in building the ML pipeline involves selecting the **target column**, which is the column that we are trying to predict. To create an ML pipeline, first navigate to the **Pipelines** component and click the {{%badge%}}Create Pipeline{{%/badge%}} option. <br /> In the Create Pipeline pop-up that appears, provide the pipeline name; we'll name the pipeline **Churn Prediction** and the model **Churn Prediction Model**. Then, select the appropriate dataset and the name of the target column. <br /> We need to select the source dataset that was chosen for building the data pipeline, as the preprocessed data is reflected in the source dataset.
In our case, we will be importing the **Churn_1** dataset, as we have selected it for preprocessing and cleaning, and our target is the column named **churn_risk_score**. 1. ### Encoding categorical columns Encoders are used in various data preprocessing and machine learning tasks to convert categorical or non-numeric data into a numerical format that machine learning algorithms can work with effectively. - #### Ordinal encoding Here, we are using ordinal encoding to encode the following categorical features: "**membership_category**", "**preferred_offer_types**", "**medium_of_operation**", "**internet_option**", "**gender**", "**used_special_discount**", "**past_complaints**", "**complaint_status**", and "**feedback**". It assigns integers to the categories based on their order, making it possible for machine learning algorithms to capture the ordinal nature of the data. We'll use the {{%badge%}}Ordinal Encoder{{%/badge%}} [node](/en/quickml/help/operations-in-quickml/encoding/#ordinal-encoder) in QuickML by navigating to **ML Operations** -> **Encoding** -> **Ordinal Encoder** to turn the selected categorical columns into numerical columns. <br /> - #### One-hot encoder {{%link href="/en/quickml/help/operations-in-quickml/encoding/#one-hot-encoding"%}}One-hot encoding{{%/link%}} is typically applied to categorical columns in a dataset, where each category represents a distinct class or group. This method typically increases the dimensionality of the dataset because it creates a new binary column for each unique category. The number of binary columns is equal to the number of unique categories minus one, as you can infer the presence of the last category from the absence of all others. Here, we are using the {{%badge%}}One-Hot Encoder{{%/badge%}} node to encode the following columns: "**region_category**", "**joined_through_referral**", and "**offer_application_preference**".
We'll use the {{%badge%}}One-Hot Encoder{{%/badge%}} node in QuickML by navigating to **ML Operations** -> **Encoding** -> **One-Hot Encoder** to turn the selected categorical columns into numerical columns. <br /> 2. ### Feature Engineering: {{%link href="/en/quickml/help/operations-in-quickml/feature-engineering/#feature-selection"%}}Feature selection{{%/link%}} is the process of choosing a subset of the most relevant and important features (variables or columns) from the dataset to use in model training and analysis. The goal of feature selection is to improve the performance, efficiency, and interpretability of machine learning models. Feature selection is particularly crucial when dealing with high-dimensional datasets, as it can help reduce overfitting, reduce computation time, and enhance model interpretability. Here, we are using the **redundancy elimination** feature selection technique to generate the features. This method will identify and remove redundant features from a dataset. Redundant features provide duplicate or highly correlated information, and they don't contribute significantly to improving the performance of machine learning models. Select the **Redundancy Elimination** node by navigating to **ML Operations** -> **Feature Engineering** -> **Feature Selection** -> **Redundancy Elimination**. <br /> 3. ### ML Algorithm: The next step in ML pipeline building is selecting the appropriate algorithm for training the preprocessed data. Here, we'll use the {{%link href="/en/quickml/help/ml-algorithms/classification-algorithms/#xgb-classification"%}}XGBoost classification algorithm{{%/link%}} to train the data. **XGBoost** (Extreme Gradient Boosting) is a popular and powerful machine learning algorithm commonly used for classification tasks. It's an ensemble learning method that combines the predictions of multiple decision trees to create a strong predictive model.
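Before moving on, here is an illustrative sketch of the encoding and feature-selection steps described above, using pandas and scikit-learn on an invented toy dataframe. QuickML performs these operations inside its nodes, and its exact redundancy-elimination algorithm is not documented here, so the correlation-threshold approach below is one common technique, not necessarily the one QuickML uses.

```python
import numpy as np
import pandas as pd
from sklearn.preprocessing import OrdinalEncoder

# Invented toy data; the real dataset has many more columns and rows.
df = pd.DataFrame({
    "membership_category": ["Basic", "Gold", "Platinum", "Gold"],
    "region_category": ["Town", "City", "Village", "City"],
    "points_in_wallet": [100.0, 250.0, 150.0, 90.0],
    "wallet_points_dup": [100.0, 250.0, 150.0, 90.0],  # deliberately redundant
})

# Ordinal encoding: integers that preserve the category order.
order = [["Basic", "Gold", "Platinum"]]
df["membership_category"] = OrdinalEncoder(categories=order).fit_transform(
    df[["membership_category"]]
)

# One-hot encoding: one binary column per region category.
df = pd.get_dummies(df, columns=["region_category"], dtype=float)

# Redundancy elimination (one common approach): drop any feature whose
# absolute correlation with an earlier feature exceeds a threshold.
corr = df.corr().abs()
upper = corr.where(np.triu(np.ones(corr.shape, dtype=bool), k=1))
to_drop = [col for col in upper.columns if (upper[col] > 0.95).any()]
df = df.drop(columns=to_drop)
print(to_drop, list(df.columns))
```

On this toy data, the perfectly duplicated `wallet_points_dup` column is detected and removed, while the encoded membership and region columns survive.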
XGBoost is known for its speed, scalability, and ability to handle complex datasets. We can quickly add the XGBoost classification algorithm in QuickML's ML Pipeline Builder by dragging and dropping the {{%badge%}}XGBoost Classification{{%/badge%}} node from **ML Operations** -> **Algorithm** -> **Classification** -> **XGBoost Classification**. To make sure the model is optimized for our particular dataset, we may also adjust the tuning parameters; in our case, we can stick with the default settings. When everything is configured, we can save the pipeline for further testing and deployment. <br /> Once we drag-and-drop the algorithm node, its end node will be automatically connected to the destination node. Click {{%badge%}}Save{{%/badge%}} to save the pipeline and execute it by clicking the {{%badge%}}Execute{{%/badge%}} button at the top-right corner of the pipeline builder page. This will redirect you to the page below, which shows the executed pipeline with the execution status. We can clearly see here that the pipeline execution was successful. <br /> Click {{%badge%}}Execution Stats{{%/badge%}} to view more compute details about each stage of the model execution. <br /> The prediction model is created and can be examined under the Model section (click on **Churn Prediction Model**) following the successful completion of the ML workflow. <br /> This offers useful insights into the efficiency and performance of the model while making predictions based on the data. <br /> -------------------------------------------------------------------------------- title: "Create Endpoint" description: "churn prediction, Create a powerful ML pipeline that can be used to predict the churn using the Catalyst QuickML service."
last_updated: "2026-03-18T07:41:08.682Z" source: "https://docs.catalyst.zoho.com/en/tutorials/churn-prediction/create-endpoint/" service: "All Services" related: - Pipeline Endpoints (/en/quickml/help/pipeline-endpoints) -------------------------------------------------------------------------------- # Create an endpoint We will now create an endpoint for the above Churn Prediction model to allow external applications to interact with the model seamlessly and get predictions. 1. Navigate to the **Endpoints** component in the left menu and click {{%badge%}}Create Endpoint{{%/badge%}}. <br /> 2. Provide a name for the endpoint in the **Endpoint Name** field (we'll name it **Churn Prediction**), and select the model pipeline name from the dropdown values of the **Choose Model** field. Then click {{%badge%}}Create Endpoint{{%/badge%}}. <br /> 3. Once the endpoint is created, you can view the endpoint's details page, as shown below. You can test the model by providing a sample request in the Request column and clicking the {{%badge%}}Predict{{%/badge%}} button. This will generate the predicted value in the Response column. <br /> 4. Click {{%badge%}}Publish{{%/badge%}} and use the endpoint URL to integrate the ML model with any other applications. <br /> {{%note%}}{{%bold%}}Note:{{%/bold%}} You can also check out {{%link href="/en/quickml/help/pipeline-endpoints/#external-oauth2-authentication" %}}this document{{%/link%}} to implement pipeline authentication. This ensures secured access to endpoints, the ML models, and datasets.{{%/note%}}
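As a closing illustration, the published endpoint can be invoked over HTTP from any application. The sketch below uses only Python's standard library; the URL, auth header scheme, and payload field names are placeholders — use the actual endpoint URL, credentials, and request format shown on the endpoint details page in the Catalyst console.

```python
import json
import urllib.request

def build_prediction_request(endpoint_url: str, token: str, record: dict):
    """Build a POST request carrying one record to score.

    The URL, auth header, and payload shape are placeholders -- substitute
    the values shown on your endpoint's details page.
    """
    return urllib.request.Request(
        endpoint_url,
        data=json.dumps(record).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": "Zoho-oauthtoken " + token,  # placeholder scheme
        },
        method="POST",
    )

# Hypothetical feature values for one customer.
record = {"membership_category": "Gold", "points_in_wallet": 250.0}
req = build_prediction_request(
    "https://example.invalid/quickml/endpoint", "<your-token>", record
)
# With a real published endpoint URL and credentials you would send it:
# with urllib.request.urlopen(req) as resp:
#     print(json.loads(resp.read()))
```

The same pattern applies in any language: POST the feature values as JSON to the published endpoint URL with the required authentication, and read the predicted value from the response body.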