Bulk Read Rows
Catalyst allows you to perform bulk read jobs on a specific table in the Data Store.
In the SDK snippet below, the bulk read job reads records from a specific table and, if the job succeeds, generates a CSV file containing the results of the read operation. The table is referred to by its unique Table ID or table name; the snippet below uses the table name.
To know more about the component instance datastore_service used below, refer to this help section.
Parameter Name | Data Type | Definition |
---|---|---|
criteria | JSON Object | A mandatory parameter. Holds the conditions based on which the rows are to be read. |
page | Numeric | A mandatory parameter. Holds the page of rows to be read; a single page returns up to 200,000 rows. |
select_columns | Array | A mandatory parameter. Holds the specific columns to be read. |
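For instance, criteria groups one or more conditions under a group_operator. Below is a minimal sketch of a standalone criteria object, assuming the "and" operator is supported alongside the "or" used in the snippet further down; verify the supported operators against the Catalyst API reference.

# Illustrative criteria object; "and" as a group_operator is an assumption
criteria = {
    "group_operator": "and",  # assumed: every condition in "group" must match
    "group": [
        {
            "column_name": "Department",  # column to filter on
            "comparator": "equal",        # comparison to apply
            "value": "Marketing"          # value to compare against
        },
        {
            "column_name": "EmpId",
            "comparator": "greater_than",
            "value": "1000"
        }
    ]
}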
Copy the SDK snippet below to perform a bulk read job on a particular table.
# Bulk read
# 'app' is the initialized Catalyst SDK instance; see the help section linked above
datastore_service = app.datastore()
bulk_read = datastore_service.table("sampleTable").bulk_read()

# Create the bulk read job
bulk_read_job = bulk_read.create_job({
    "criteria": {
        "group_operator": "or",  # match rows that satisfy any of the conditions below
        "group": [
            {
                "column_name": "Department",
                "comparator": "equal",
                "value": "Marketing"
            },
            {
                "column_name": "EmpId",
                "comparator": "greater_than",
                "value": "1000"
            },
            {
                "column_name": "EmpName",
                "comparator": "starts_with",
                "value": "S"
            }
        ]
    },
    "page": 1,
    "select_columns": ["EmpId", "EmpName", "Department"]
})

# Get the bulk read job status
status = bulk_read.get_status(bulk_read_job["job_id"])

# Get the bulk read result
result = bulk_read.get_result(bulk_read_job["job_id"])
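Bulk read jobs run asynchronously, so the result is meaningful only after the job finishes. Below is a minimal polling sketch; the "status" key and the "Completed"/"Failed" values it checks are assumptions, so verify them against the actual get_status() response before relying on them.

import time

job_id = bulk_read_job["job_id"]
while True:
    status = bulk_read.get_status(job_id)
    # assumed: the response is a dict whose "status" field reports job state
    if status.get("status") in ("Completed", "Failed"):
        break
    time.sleep(5)  # wait a few seconds before polling again

if status.get("status") == "Completed":
    result = bulk_read.get_result(job_id)  # details of the generated CSV file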
Note: A maximum of 200,000 rows can be read in a single bulk read job (one page).
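Since a single job reads at most 200,000 rows, larger tables can be read page by page by incrementing the page parameter. A hedged sketch, reusing the bulk_read instance from the snippet above; the three-page range is an arbitrary illustration, and the completion of each job still has to be tracked as shown earlier.

# Queue one bulk read job per page; pages 1-3 cover up to 600,000 rows
page_jobs = []
for page_number in range(1, 4):
    job = bulk_read.create_job({
        "criteria": {
            "group_operator": "or",
            "group": [
                {"column_name": "Department", "comparator": "equal", "value": "Marketing"}
            ]
        },
        "page": page_number,
        "select_columns": ["EmpId", "EmpName", "Department"]
    })
    page_jobs.append(job)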
Info: Refer to the SDK Scopes table to determine the required permission level for performing the above operation.
Last Updated 2025-03-28 18:24:49 +0530