Create a data pipeline

Now that the dataset has been uploaded, we can create a data pipeline from it.

  1. Navigate to the Datasets component in the left menu. There are two ways to create a data pipeline:
  • You can click on the dataset and then click Create Pipeline in the top-right corner of the page. create-pipeline
  • You can click on the pen icon located to the left of the dataset name, as shown in the image below. pen-icon
  2. Name the pipeline “Fraud Detection Data Pipeline” and click Create Pipeline. Pipeline Name

The pipeline builder interface will open as shown in the screenshot below. Initial Pipeline

We will perform the following data preprocessing operations to clean, refine, and transform the dataset, and then execute the data pipeline. Each operation involves an individual data node that is used to construct the pipeline.

Data preprocessing with QuickML

  1. Select/drop columns

    Selecting or dropping columns from a dataset is a common preprocessing step in data analysis and machine learning. The choice of which columns to select or drop depends on the specific objectives and requirements of your analysis or modelling task. The columns we don’t need for model training are “ID”, “trans_date_trans_time”, “cc_num”, “Merchant”, “First”, “Last”, “Street”, “City”, “Zip”, “Lat”, “Long”, “job”, “dob”, “Trans_num”, “Unix_Time”, “merch_lat”, and “merch_long”. In QuickML, use the Select/Drop node from the Data Cleaning component to choose the fields required for model training. required-field-selection

  2. Filter Data

    Filtering a dataset typically involves selecting a subset of rows that meet certain criteria or conditions. Here we use the Filter node from the Data Cleaning section to keep only the rows whose “state” column is non-empty. Data Filter

  3. Save and Execute

    Now, connect the Filter node to the Destination node. Once all the nodes are connected, click the Save button to save the pipeline. Then click the Execute button to execute the pipeline. Completed data pipeline
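For reference, the two preprocessing steps above can be sketched outside QuickML with pandas. This is a minimal sketch, not QuickML's implementation: the `preprocess` function name is our own, and the column names are taken from the dataset as listed in step 1.

```python
import pandas as pd

# Columns excluded from model training, as listed in step 1
DROP_COLUMNS = [
    "ID", "trans_date_trans_time", "cc_num", "Merchant", "First", "Last",
    "Street", "City", "Zip", "Lat", "Long", "job", "dob", "Trans_num",
    "Unix_Time", "merch_lat", "merch_long",
]

def preprocess(df: pd.DataFrame) -> pd.DataFrame:
    """Mirror the pipeline: drop unused columns, then filter on 'state'."""
    # Select/Drop node: errors="ignore" skips columns absent from an upload
    df = df.drop(columns=DROP_COLUMNS, errors="ignore")
    # Filter node: keep only rows whose 'state' value is present and non-blank
    mask = df["state"].notna() & (df["state"].astype(str).str.strip() != "")
    return df[mask]
```

Chaining the drop before the filter matches the node order in the pipeline builder; each QuickML node corresponds to one transformation of the intermediate DataFrame.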

You’ll be redirected to the page below, which shows the executed pipeline with the execution status. We can see here that the pipeline execution was successful.

Executed data pipeline

Click on Execution Stats to access more details regarding the compute usage, as shown below.

Execution stats for data pipeline

In this part, we’ve looked at how to process data using QuickML, which offers a variety of effective ways to prepare your data for building machine learning models. This data pipeline can be reused to create multiple ML experiments for varied use cases within your Catalyst project.

Last Updated 2024-10-10 12:38:19 +0530