Introduction

In an earlier post, I reviewed how you can automate CPQ Data Table exports using the CPQ REST APIs. In this post, I’ll walk you through how to script data imports that include data updates and deletes.

Let’s get started.

Formatting Import Files

CPQ accepts data table imports as CSV files with a particular format: a metadata envelope and a multi-line column header, followed by the actual rows to process. The name of your file becomes the name of your data table when it is imported into CPQ. If a table of the same name already exists, the data is appended to the existing table.

Here are the parts of the file:

  • Metadata block: These lines mark the beginning and end of the metadata/header section.
  • Field names: Defines the column names in your data table.
    • The _update_action column is a special control column indicating what to do with each row (values include modify or delete).
  • Data types: Provides the expected data type for each column.
    • Data types must be String, Integer, or Float. Please note that the first letter of the data type must be capitalized.
  • Human-readable description: Optionally adds friendly descriptions of the fields.

Here’s an example file where you can see that the field names are Item, Description, and LeadTime, plus the special _update_action column. Each of the columns is a string, except LeadTime, which is an integer.

_start meta data,,,,
_update_action,Item,Description,LeadTime
String,String,String,Integer
,Item number,Item description,Item lead time
_end meta data,,,,
modify,genhsctrl101,Item description for genhsctrl101,21
delete,genhsctrl102,Item description for genhsctrl102,34
modify,genhsctrl103,Item description for genhsctrl103,50

You’ll also notice that the row describing genhsctrl102 is marked to delete. Use the _update_action column to specify instructions when you work with data tables with natural keys.

Natural Key can be a single column or a combination of several columns that produce a unique identifier for each record (row)… Users no longer need to purge and reload all Data Table rows to edit a subset of rows.
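The envelope described above is straightforward to generate programmatically. Here’s a minimal sketch in Python that assembles an import file from a list of rows; the `build_import_csv` helper and the sample item data are illustrative, not part of the CPQ API.

```python
import csv
import io

def build_import_csv(columns, types, descriptions, rows):
    """Assemble a CPQ data table import file: metadata envelope,
    field names, data types, descriptions, then the data rows."""
    buf = io.StringIO()
    writer = csv.writer(buf, lineterminator="\n")
    width = len(columns)
    # Pad the metadata markers with empty fields to match the column count.
    writer.writerow(["_start meta data"] + [""] * (width - 1))
    writer.writerow(columns)
    writer.writerow(types)
    writer.writerow(descriptions)
    writer.writerow(["_end meta data"] + [""] * (width - 1))
    writer.writerows(rows)
    return buf.getvalue()

columns = ["_update_action", "Item", "Description", "LeadTime"]
types = ["String", "String", "String", "Integer"]  # first letter capitalized
descriptions = ["", "Item number", "Item description", "Item lead time"]
rows = [
    ["modify", "genhsctrl101", "Item description for genhsctrl101", 21],
    ["delete", "genhsctrl102", "Item description for genhsctrl102", 34],
]

content = build_import_csv(columns, types, descriptions, rows)
# The file name becomes the data table name, e.g. SampleDataTable.
with open("SampleDataTable.csv", "w", newline="") as f:
    f.write(content)
```

Because the file name becomes the table name, name the output file after the data table you want to create or update.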

Importing Files into Data Tables

Once you’ve created your data table import files, you can begin the import process using the Import Data Tables API. Here’s an example request payload which uses a multipart/form-data request body.

POST /rest/{{version}}/datatables/actions/import HTTP/1.1
Content-Type: multipart/form-data
Authorization: Basic xxxxxxxx
 
file=@{{filename}}
columnDelimiter=","
dataHasDelimiter="true"

As you can see above, the request is a POST that uses basic authentication. Make sure you base64 encode your login credentials and include them in the Authorization header.

Request details

The request accepts a multipart/form-data request body with the following accepted values:

  • file: The file to be imported. (required)
  • columnDelimiter: The column delimiter. Optional, defaults to “,” [comma]
  • dataHasDelimiter: Set to ‘true’ to indicate that the data itself contains the delimiter character (e.g., numeric values like 1,000).
  • folderVarName: The variable name of the parent data table folder. If this value is not specified, the new data tables will be imported into the ‘_default’ folder.
  • rowDelimiter: The row delimiter.

Python example

Here’s an example in Python that uses the Requests library to handle basic authorization, set the multipart form file upload, and send the request.

import requests

# --- Config ---
host = "yourhostname"
version = "v19"  
endpoint = f"https://{host}/rest/{version}/datatables/actions/import"

filename = "/path/to/yourfile.csv"  
username = "your_user"
password = "your_password"

# --- Multipart form body ---
# NOTE: Do NOT set Content-Type manually for multipart; requests sets boundary correctly.
# 'files' creates the multipart "file" part; 'data' adds the other form fields.
with open(filename, "rb") as f:
    files = {
        # field name must match what the API expects: "file"
        "file": (filename.split("/")[-1], f, "text/csv"),
    }
    data = {
        "columnDelimiter": ",",
        "dataHasDelimiter": "true",
    }

    response = requests.post(
        endpoint,
        auth=(username, password),  # requests will create the Basic Authorization header
        files=files,
        data=data,
        timeout=120,
    )

print("Status:", response.status_code)
print("Body:", response.text)
response.raise_for_status()

Both the export and import actions kick off a long-running task and respond with information about the task created. Here’s a sample response payload.

{
  "links": [{
     "rel": "related",
     "href": "https://{{host}}/rest/{{version}}/tasks/123456"
  }],
  "taskId": 123456
}
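In a Python script, you can pull the task identifier (and the related task-status URL) straight out of that response. Here’s a small sketch; the `parse_import_response` helper is illustrative and assumes the response shape shown above.

```python
def parse_import_response(payload):
    """Extract the task id and the related task-status URL from the
    Import Data Tables response payload (a parsed JSON dict)."""
    task_id = payload["taskId"]
    # The "related" link points at the Get Task Status endpoint.
    task_url = next(
        (link["href"] for link in payload.get("links", [])
         if link.get("rel") == "related"),
        None,
    )
    return task_id, task_url

# Example using the sample payload from above:
payload = {
    "links": [{"rel": "related",
               "href": "https://host/rest/v19/tasks/123456"}],
    "taskId": 123456,
}
task_id, task_url = parse_import_response(payload)
# task_id -> 123456
```

With the Requests example earlier, you would call this as `parse_import_response(response.json())`.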

Checking Task Status

A task is initiated when you import or export data tables. After importing the data tables, you can view the status of the import. Please note that the Task APIs are only available to admin users.

As you can see from above, the new task identifier is included in the data table import response. You’ll use that identifier to check the status and download your files. If you’re using Postman, here’s sample code that checks the response and saves the task identifier.

var data = JSON.parse(pm.response.text());

pm.test("Task Id present", function () {
    pm.expect(data.taskId).to.not.eql(null);
    // Set Task Id variable
    pm.collectionVariables.set('taskId', data.taskId);
});

Once you’ve saved the task identifier, you use that in a request to Get Task Status. Here’s how that looks.

GET /rest/{{version}}/tasks/{{taskId}} HTTP/1.1
Authorization: Basic xxxx
 
HTTP/1.1 200 OK
Content-Type: application/json
 
{
    "name": "SampleDataTable",
    "category": {
        "lookupCode": "13",
        "displayValue": "Data Table Upload"
    },
    "status": "Completed",
    "result": "https://{{host}}/admin/bulkservices/view_error_log.jsp?file_name={{taskId}}.log&company_id={{companyId}}",
    "executionTime": "02/23/2026 5:39 PM",
    "dateAdded": "02/23/2026 5:39 PM",
    "id": {{taskId}},
    "dateModified": "02/23/2026 5:40 PM"
}

When the task status value is “Completed”, you can move on to the next step – getting the task file.
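Outside Postman, the status check is easy to run in a polling loop. Here’s a hedged sketch: `fetch_status` stands in for whatever function issues the GET request (for example, Requests with basic auth), and the terminal status values assume the "Completed" value from the sample response above.

```python
import time

def wait_for_completion(fetch_status, poll_seconds=5, max_attempts=60):
    """Poll a task until it reports a terminal status.

    fetch_status: a callable returning the parsed Get Task Status payload,
    e.g. lambda: requests.get(task_url, auth=(user, pwd)).json()
    """
    for _ in range(max_attempts):
        payload = fetch_status()
        status = payload.get("status")
        if status == "Completed":
            return payload
        if status == "Failed":
            raise RuntimeError(f"Task failed: {payload}")
        time.sleep(poll_seconds)
    raise TimeoutError("Task did not complete in time")
```

With Requests, `fetch_status` could be `lambda: requests.get(f"https://{host}/rest/{version}/tasks/{task_id}", auth=(username, password)).json()`.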

Getting the Task File

After the task has completed, call the Get Task File List API using the task id from the Import Data Tables API response. Here’s how that looks.

GET /rest/{{version}}/tasks/{{taskId}}/files HTTP/1.1
Authorization: Basic xxxx
HTTP/1.1 200 OK
Content-Type: application/json

{
  "items": [{
    "links": [{
      "rel": "related",
      "href": "https://{{host}}/rest/{{version}}/tasks/{{taskId}}/files/{{taskFile}}"
    }],
   "name": "{{taskFile}}",
   "type": "application/zip"
  }]
}

Using Postman, you can check the response payload and save a variable for the task file. Here’s sample code to do so.

var data = JSON.parse(responseBody)

pm.test("Task file items present", function () {
    pm.expect(data.items.length).to.be.above(0);    
});

pm.test("Task file links present", function () {
    console.log(data.items[0].links.length);
    pm.expect(data.items[0].links.length).to.be.above(0);
    // Get task file name from links href variable
    pm.collectionVariables.set('taskFile',data.items[0].links[0].href)
    console.log(pm.collectionVariables.get('taskFile'))
});
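The same extraction works in a Python script. Here’s a sketch; the `parse_task_file_list` helper is illustrative and assumes the file-list response shape shown above.

```python
def parse_task_file_list(payload):
    """Return (name, href) pairs for each file in a
    Get Task File List response payload (a parsed JSON dict)."""
    results = []
    for item in payload.get("items", []):
        href = next(
            (link["href"] for link in item.get("links", [])
             if link.get("rel") == "related"),
            None,
        )
        results.append((item.get("name"), href))
    return results

# Example using a payload shaped like the sample above:
payload = {
    "items": [{
        "links": [{
            "rel": "related",
            "href": "https://host/rest/v19/tasks/123456/files/123456.zip",
        }],
        "name": "123456.zip",
        "type": "application/zip",
    }]
}
files = parse_task_file_list(payload)
# files -> [("123456.zip", "https://host/rest/v19/tasks/123456/files/123456.zip")]
```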

Downloading the Task File

Once you’ve stored the task file full path in the taskFile variable, you can download the task file directly. The task file is a zip archive that includes a separate CSV file for each data table you specified in the Import Data Tables request. Here’s an example of the downloaded log contents.

Mon Feb 23 17:40:00 CST 2026 - Started upload of data table for {{host}}
**********
Mon Feb 23 17:40:00 CST 2026 - Completed processing data table {{dataTable}} for {{host}}. Number of records processed: 3 out of 3

Total Number of Records Uploaded : 3
Number of Records Failed : 0
Number of Records Deleted : 0

Total Processing Time : 0 hours 0 minutes 0 seconds 24 milliseconds
Average Processing Time Per 1000 Records : 0 hours 0 minutes 8 seconds 0 milliseconds
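In Python, the download-and-unpack step can be sketched like this. The `extract_task_files` helper is illustrative; the commented Requests call is an assumption about how you would fetch the archive, not verified CPQ behavior.

```python
import io
import zipfile

def extract_task_files(zip_bytes):
    """Unpack a task-file zip archive into a {filename: text} dict."""
    archive = zipfile.ZipFile(io.BytesIO(zip_bytes))
    return {name: archive.read(name).decode("utf-8")
            for name in archive.namelist()}

# With Requests, the download itself might look like (assumption):
#   resp = requests.get(task_file_url, auth=(username, password))
#   logs = extract_task_files(resp.content)
#   for name, text in logs.items():
#       print(name, "->", text.splitlines()[0])
```

Each extracted file carries the per-table status lines shown above, so a script can grep them for "Number of Records Failed" to decide whether the import succeeded.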

Summary

Importing data tables is easy to script using the Import Data Tables API. Once you upload your CSV file to Import Data Tables as part of the multipart/form-data request, a long-running task is initiated. To view the status of the data table import, call Get Task Status using the task identifier. Once the task has completed, call the Get Task File List API to get the task file’s full path. Finally, download the task file to review the status of the import.

Happy hunting!

~sn