Upload Object

The SDK methods listed in this section allow you to upload objects to the bucket in various ways. You can upload objects as a string or as a stream. The bucket reference used in the code snippets below is the component instance.

If Versioning is not enabled for your bucket and Stratus receives multiple write requests for the same object, the object is overwritten each time. Only the latest upload of the object is stored.

However, with Versioning enabled, each upload is treated as a version of the object, and all versions are stored in the bucket, each with a unique version_id.
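
For instance, with Versioning enabled, two consecutive uploads to the same key are both retained. The snippet below is a minimal sketch; it assumes the bucket instance and the put_object() method described later in this section, and that the upload response carries the version details:

    # With Versioning enabled, both uploads are retained as separate versions of the object
    res1 = bucket.put_object("sam/out/sample.txt", 'first draft')
    res2 = bucket.put_object("sam/out/sample.txt", 'second draft')
    # Each stored version is identified by its own unique version_id
    print(res1)
    print(res2)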

Note: The following characters, including the space character, are not supported when you create a path or an object: double quote ("), angle brackets (< and >), hash (#), backslash (\), and pipe (|).
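
Since such keys are rejected, it can help to validate an object name before attempting an upload. The helper below is purely illustrative and not part of the SDK:

    # Hypothetical helper: characters Stratus does not support in paths and object names
    UNSUPPORTED_CHARS = set(' "<>#\\|')

    def is_valid_key(key: str) -> bool:
        # Reject the key if it contains any unsupported character
        return not any(ch in UNSUPPORTED_CHARS for ch in key)

    print(is_valid_key('sam/out/sample.txt'))    # True
    print(is_valid_key('sam/out/bad name.txt'))  # False (contains a space)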

Upload Object as a Stream

Using this SDK method, you can upload objects to a bucket as a stream. Store the stream in a variable and pass that variable to the upload method.

    
    file = open('file_path', 'rb')
    res = bucket.put_object("sam/out/sample.txt", file)
    print(res)

Upload Object as a String

Using this SDK method, you can upload the object as a string. Pass the object name and the data to be stored in the object, in string format, to the upload method, put_object().

    
    res = bucket.put_object("sam/out/sample.txt", 'content of the file')
    print(res)

Upload Object with Options

Using this SDK method, you can set the following options while you upload an object.

  • overwrite: An option you can use if Versioning is not enabled for your bucket. Without versioning, you need to set this option to 'true' if you wish to overwrite an existing object. The default value is 'false'.

  • ttl: An option you can use to set a Time-to-Live (TTL) in seconds for an object. The value should be greater than or equal to 60 seconds.

  • meta_data: An option you can use to attach meta details to the object that is being uploaded.

    
    options = {
        'overwrite': 'true',
        'ttl': '300',
        'meta_data': {
            'author': 'John'
        }
    }
    file = open('file_path', 'rb')
    res = bucket.put_object("sam/out/sample.txt", file, options)
    print(res)

Upload Object Using Multipart

This section covers the SDK methods that allow you to upload a large object to a bucket in Stratus.

The multipart upload feature uploads a large file to the bucket in multiple HTTPS requests. The individual parts are combined into a single object once all of them have been uploaded.

Note: Multipart upload is the recommended method for uploading objects that are 100 MB or larger.
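
Accordingly, you may want to branch on the file size before choosing an upload path. The sketch below reuses the bucket instance and the multipart methods documented in this section; the 50 MB part size is illustrative:

    import os

    MULTIPART_THRESHOLD = 100 * 1024 * 1024  # 100 MB, per the note above
    PART_SIZE = 50 * 1024 * 1024             # 50 MB parts (illustrative)

    def upload(bucket, key, file_path):
        # Small objects: a single put_object call is enough
        if os.path.getsize(file_path) < MULTIPART_THRESHOLD:
            with open(file_path, 'rb') as file:
                return bucket.put_object(key, file)
        # Large objects: initiate the upload, send the parts, then complete it
        init_res = bucket.initiate_multipart_upload(key)
        with open(file_path, 'rb') as file:
            part_number = 1
            while True:
                chunk = file.read(PART_SIZE)
                if not chunk:
                    break
                bucket.upload_part(key, init_res['upload_id'], chunk, part_number)
                part_number += 1
        return bucket.complete_multipart_upload(key, init_res['upload_id'])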

Initiate Multipart Upload

To perform multipart operations, you need to get a multipart object instance. This component instance is used in the various code snippets below that perform multipart operations on objects stored in a bucket in Stratus.

Parameter Used

bucket: The bucket instance you initialized earlier using this SDK method.

    
    init_res = bucket.initiate_multipart_upload(key="")
    print(init_res)

Example Response

    
    {
      bucket: 'zcstratus123-development',
      key: 'objectName.txt',
      upload_id: '01j7xbm4vm5750zbedxqgc4q6m',
      status: 'PENDING'
    }

Upload Parts of the Object

The following SDK method uploads the individual parts of the object. Each part must have a distinct part_number between 1 and 1000. The part number determines the ordering of the parts, but the parts need not be uploaded in sequence; they are combined in order once the upload of all the parts is complete.

    
    upload_res = bucket.upload_part(key="", upload_id="", part_number=3, body=open('file_path', 'rb'))
    print(upload_res)

Get Multipart Upload Summary

The following SDK method can be used to obtain an operational summary of all the uploaded parts. To view the summary, we will use the get_multipart_upload_summary() method.

    
    summary_res = bucket.get_multipart_upload_summary(key="", upload_id="")
    print(summary_res)

Example Response

    
{ "bucket": "zcstratus12345-development", "key": "sasm.txt", "upload_id": "01hyfyeazrrstmt7k5fa7ej726", "status": "PENDING", "parts": [ { "part_number": 1, "size": 0, "uploaded_at": 1716374678999 }, { "part_number": 2, "size": 2797094, "uploaded_at": 1716374678576 }, { "part_number": 4, "size": 0, "uploaded_at": 1716374679136 } ] }

Complete Multipart Upload of the Object

The following method completes the multipart process once all the parts have been successfully uploaded. To complete the process, pass the upload_id to the complete_multipart_upload() method.

    
    complete_res = bucket.complete_multipart_upload(key="", upload_id="")
    print(complete_res)

Example SDK Implementation

    
    from concurrent.futures import ThreadPoolExecutor

    # Request and make_response are assumed to come from flask,
    # which backs Catalyst Advanced I/O functions
    from flask import Request, make_response

    import zcatalyst_sdk

    def handler(request: Request):
        app = zcatalyst_sdk.initialize()
        if request.path == "/":
            # stratus instance
            stratus = app.stratus()
            # bucket instance
            bucket = stratus.bucket('bucket_name')
            # multipart upload
            key = "sam/out/sample.txt"
            file_path = '/sam/sample.mp4'
            initiate_res = bucket.initiate_multipart_upload(key)
            part_number = 1
            part_size = 50 * 1024 * 1024  # 50 MB per part
            futures = []
            try:
                with open(file_path, 'rb') as file:
                    with ThreadPoolExecutor(max_workers=3) as executor:
                        while True:
                            chunk = file.read(part_size)
                            if not chunk:
                                break
                            # Parts are uploaded concurrently; part_number fixes their order
                            futures.append(executor.submit(
                                bucket.upload_part,
                                key,
                                initiate_res['upload_id'],
                                chunk,
                                part_number
                            ))
                            part_number += 1
                        for future in futures:
                            future.result()
            except Exception as err:
                raise err
            multipart_upload_res = bucket.complete_multipart_upload(key, initiate_res['upload_id'])
            return multipart_upload_res
        else:
            response = make_response('Unknown path')
            response.status_code = 400
            return response

Upload an Object Using Transfer Manager

When the object you need to upload is very large, you can perform a Transfer Manager operation. The Transfer Manager operation splits the object into multiple parts and performs a quicker upload. This section covers the SDK methods that are available for uploading objects to Stratus using Transfer Manager.

Get Transfer Manager Instance

To perform Transfer Manager operations, you need to get a Transfer Manager object instance. This component instance is used in the various code snippets below that perform Transfer Manager operations on objects stored in a bucket in Stratus.

Parameter Used

bucket: The bucket instance you initialized earlier using this SDK method.

Ensure that the following package is imported:

    
    from zcatalyst_sdk.stratus.transfer_manager import TransferManager

    transfer_manager = TransferManager(bucket)


Create Multipart Instance

Using the following SDK method, we generate an upload_id. Using this ID, the method creates and returns an instance that allows you to perform multipart operations on the object.

    
    init_ins = transfer_manager.create_multipart_instance(key="")

If you need to create an instance for an already initiated multipart upload operation, use the code snippet given below.

    
    init_ins = transfer_manager.create_multipart_instance(key="", upload_id="")
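
This also gives you a way to resume an interrupted upload: recreate the instance with the saved upload_id, use the get_upload_summary() method described below to see which parts already went through, and upload only the rest. A minimal sketch under those assumptions, with an illustrative part size:

    # Hypothetical resume flow for a previously initiated multipart upload
    init_ins = transfer_manager.create_multipart_instance(key="sam/out/sample.txt", upload_id="saved_upload_id")
    uploaded = {part['part_number'] for part in init_ins.get_upload_summary()['parts']}

    part_size = 50 * 1024 * 1024  # 50 MB parts (illustrative)
    with open('file_path', 'rb') as file:
        part_number = 1
        while True:
            chunk = file.read(part_size)
            if not chunk:
                break
            if part_number not in uploaded:
                init_ins.upload_part(body=chunk, part_number=part_number)
            part_number += 1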

Perform Multipart Upload for Parts of the Object

The following SDK method uploads the individual parts of the object. Each part must have a distinct part_number between 1 and 1000. The part number determines the ordering of the parts, but the parts need not be uploaded in sequence; they are combined in order once the upload of all the parts is complete.

Parameters Used

  • part_number: The ordering of the parts that are being uploaded.
  • body: The data/content of the object.
    
    upload_res = init_ins.upload_part(body=open('file_path', 'rb'), part_number=3)
    print(upload_res)

Get Multipart Upload Summary

The following SDK method can be used to obtain an operational summary of all the uploaded parts. To view the summary, we will use the get_upload_summary() method.

    
    summary_res = init_ins.get_upload_summary()
    print(summary_res)

Complete Multipart Upload of the Object

The following method completes the multipart process once all the parts have been successfully uploaded. To complete the process, call the complete_upload() method on the multipart instance; the upload_id is already associated with the instance.

    
    complete_res = init_ins.complete_upload()
    print(complete_res)
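
Putting the previous steps together, an end-to-end Transfer Manager flow might look like the following minimal sketch; the file path, part size, and worker count are illustrative:

    from concurrent.futures import ThreadPoolExecutor
    from zcatalyst_sdk.stratus.transfer_manager import TransferManager

    transfer_manager = TransferManager(bucket)
    init_ins = transfer_manager.create_multipart_instance(key="sam/out/sample.txt")

    part_size = 50 * 1024 * 1024  # 50 MB parts
    futures = []
    with open('file_path', 'rb') as file:
        with ThreadPoolExecutor(max_workers=3) as executor:
            part_number = 1
            while True:
                chunk = file.read(part_size)
                if not chunk:
                    break
                # Parts can be uploaded concurrently; ordering comes from part_number
                futures.append(executor.submit(init_ins.upload_part, body=chunk, part_number=part_number))
                part_number += 1
            for future in futures:
                future.result()

    complete_res = init_ins.complete_upload()
    print(complete_res)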

Upload an Object Wrapping all the Transfer Manager Functionality

The following SDK method acts as a wrapper that carries out the entire Transfer Manager upload operation without employing multiple steps. Using this method, the object is split into multiple parts, uploaded to the bucket, and then combined once all the parts are uploaded.

Note: For objects that are larger than 2 GB, we recommend that you use the individual SDK methods to carry out the multipart upload operation.

    
    upload_res = transfer_manager.put_object_as_parts(key='', body=open('file_path', 'rb'), part_size=50)
    print(upload_res)

Generate Presigned URL to Upload an Object

Presigned URLs are secure URLs that authenticated users can share with non-authenticated users. A presigned URL grants non-authenticated users temporary authorization to access an object. The bucket reference used in the code snippet below is the component instance.

Info: To use this SDK method, you need to initialize the SDK with Admin scope. You can learn more about this requirement from this section.

Parameters Used

Parameter Name | Data Type | Definition
key | String | A mandatory parameter. Holds the complete name of the object along with its path.
url_action | Request Method | A mandatory parameter. Allows you to generate a presigned URL for an upload (PUT) action.
  • PUT: To upload an object
expiry | String | An optional parameter. The URL validity time in seconds.
  • Default value: 3600 seconds
  • Minimum value: 30 seconds
  • Maximum value: 7 days
active_from | String | An optional parameter. The time after which the URL becomes valid. The maximum value is 7 days. By default, URLs are active as soon as they are generated.
    
    pre_signed_url_res = bucket.generate_presigned_url("sam/out/sample.txt", url_action='PUT', expiry_in_sec='300', active_from='1023453725828')
    print(pre_signed_url_res)
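
The active_from value above is an epoch timestamp, in milliseconds judging by the example values. For instance, to generate a URL that becomes valid ten minutes from now (a sketch under that assumption):

    import time

    # Epoch time in milliseconds, ten minutes from now (assumes milliseconds are expected)
    active_from = str(int(time.time() * 1000) + 10 * 60 * 1000)
    pre_signed_url_res = bucket.generate_presigned_url("sam/out/sample.txt", url_action='PUT', expiry_in_sec='300', active_from=active_from)
    print(pre_signed_url_res)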

Example Response for Generating a Presigned URL for Upload

    
{ "signature": "https://sadi-development.zohostratus.com/_signed/code.txt?organizationId=96862383&stsCredential=96858154-96862383&stsDate=1747899245773&stsExpiresAfter=300&stsSignedHeaders=host&stsSignature=YBPoNE9txCIUWWX3ntdgVd95VTt1jGFlSuvnTRFbCMQ" file_path = "/Users/ranjitha-18338/Documents/Pyhton-SDK/filestore2.0/output.txt", "expiry_in_seconds": "300", "active_from": "1726492859577" }

Example Snippet Illustrating Usage of Presigned URL to Upload an Object

    
    import requests

    # Replace this with your actual pre-signed URL
    url = "https://sadi-development.zohostratus.com/_signed/code.txt?organizationId=96862383&stsCredential=96858154-96862383&stsDate=1747899245773&stsExpiresAfter=300&stsSignedHeaders=host&stsSignature=YBPoNE9txCIUWWX3ntdgVd95VTt1jGFlSuvnTRFbCMQ"

    # Path to the local file that you want to upload
    file_path = "file_path"

    # Set required headers
    headers = {
        # 'Content-Type': 'text/plain',  # Specify content type of the file
        # 'overwrite': 'true',           # Optional custom header to indicate overwrite (if required by server)
    }

    # Open the file in binary read mode and send a PUT request to upload it
    with open(file_path, 'rb') as f:
        files = {'file': f}  # Create a file payload
        response = requests.put(url, headers=headers, files=files)

    # Check the response status
    if response.status_code == 200:
        print('Object uploaded successfully')
    else:
        print('Object upload failed')

Last Updated 2025-06-03 12:50:19 +0530