REST

The REST Task in Gaio DataOS allows you to connect to any external API — public or private — directly within your data flow. You can use it to fetch data, post information, automate API integrations, convert JSON responses into usable tables inside your project, and integrate AI, analytics, and workflows with external services.
Supported HTTP Methods
The REST Task supports the following HTTP methods:
GET → Retrieve data
POST → Send data / create resources
PUT → Replace resources
PATCH → Partially update resources
DELETE → Remove resources
The behavior of PUT, PATCH, and DELETE follows standard REST semantics.
How to Use the REST Task
1. Open the REST Task
In the Studio, go to the Tasks panel.
Under the ETL section, select REST.
2. Choose Method and Configure URL
Select the HTTP method: GET, POST, PUT, DELETE, etc.
In the URL field, type the API endpoint.
You can use dynamic parameters in the URL:
Example:
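A minimal sketch (the host, path, and parameter name below are illustrative; only the {{param.…}} placeholder syntax comes from this guide):

```
https://api.example.com/users/{{param.user_id}}
```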
URL parameters are resolved automatically at runtime.
3. Add Parameters
In the Parameters tab, you can define URL parameters:
URL Parameter: Name of the parameter (e.g., userId).
Value: Static or dynamic value (e.g., {{param.user_id}}).
Click + Add Parameter to add more.
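For instance, the same task could combine a dynamic and a static parameter (names and values are illustrative):

```
URL Parameter: userId     Value: {{param.user_id}}   (dynamic)
URL Parameter: pageSize   Value: 100                 (static)
```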
4. Define the Request Body (if needed)
In the Body tab, you can:
Enter JSON, XML, or plain text content for the request body (used mainly for POST, PUT, and PATCH requests).
Use dynamic variables inside the content.
Example body using table data:
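A minimal sketch, assuming a source table named sales_rest (the same table used in the Bulk Mode section below); the data wrapper and the exact placeholder spelling are illustrative and depend on what the target API expects:

```json
{
  "data": {{table.sales_rest}}
}
```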
The Body tab also includes the "Send all table data in a single batch" option:
Enabled → Sends the entire table as one request
Disabled → Sends one request per row
This is useful for:
Bulk inserts
Batch updates
High-performance integrations
5. Headers and Authorization
Headers tab: add custom headers such as Authorization (e.g., Bearer tokens), Content-Type, and Accept.
Authorization tab: configure Basic Auth.
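For a JSON API secured with a bearer token, a typical header set might look like this (the token is a placeholder):

```
Content-Type: application/json
Accept: application/json
Authorization: Bearer <your-api-token>
```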
6. Result Tab
This tab controls how the API response is interpreted and converted into a table.
Structure Identification: Lets Gaio automatically detect column names and data types. Set to “Automatic” by default.
Object Property: Define the path to the array in the JSON response, if it’s nested. Example: results.items.
Always Drop Table: If enabled, the existing result table will be replaced each time the task runs.
Input Columns to Keep: Select which columns from the response should be stored.
Object Property That Holds Results: Define the JSON property that contains the result list.
📌 Behavior:
If the API returns a JSON object, one row will be created.
If it returns a JSON array, each item will be inserted as a separate row.
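As an illustration, suppose the API returns the nested response below (field names are invented); setting Object Property to results.items makes each element of that array become one row in the result table:

```json
{
  "results": {
    "items": [
      { "id": 1, "name": "Alice" },
      { "id": 2, "name": "Bob" }
    ]
  }
}
```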
7. Error Log Tab
This tab allows you to capture and store any errors encountered during execution.
Log Table: The name of the table that will receive error logs from the REST Task (e.g., log_api_errors).
This makes debugging and monitoring easier in production.

REST Task · Bulk Mode
The REST Bulk mode extends the standard REST Task by enabling batch-oriented API requests. Instead of sending one request per row, the task can aggregate table data and send it in batches to an external API. This mode is ideal for high-volume integrations, bulk inserts, and performance-optimized API workflows.

When to Use REST Bulk
Use REST Bulk when:
The API supports bulk payloads.
You want to reduce the number of HTTP requests.
You need to send large datasets efficiently.
You are synchronizing tables with external systems.
REST Bulk is especially useful for POST, PUT, and PATCH operations.
Bulk Configuration Overview
Bulk-specific settings are configured in the Body tab and include:
Batch execution control
Pagination over table data
Optional schema definition for batch payloads
Enable Bulk Mode
Enable “Send all table data in a single batch”.
When enabled:
The REST Task sends multiple rows together
Payload is constructed from the source table
Execution is optimized for throughput
Batch Size (Table Pagination)
Define how many rows are sent per request.
Numeric value: Number of rows per batch.
0: Load and send the full table in a single request.
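For example, with a batch size of 500 and a source table of 1,200 rows, the task would send three requests: two with 500 rows each and a final one with the remaining 200 rows.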
Body Payload (Bulk)
The Body editor defines how batch data is sent.
Example:
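A minimal sketch, assuming a source table named sales_rest and the { "data": {{ table }} } template from the mode comparison at the end of this page (the wrapper property is illustrative):

```json
{
  "data": {{table.sales_rest}}
}
```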
In this example:
table.sales_rest resolves to an array of rows.
Each batch injects a subset of the table.
The API receives a structured list of records
This pattern is common for APIs expecting payloads like:
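APIs like that typically expect a wrapped list of records, for example (field names and values are illustrative):

```json
{
  "data": [
    { "product": "A", "amount": 100 },
    { "product": "B", "amount": 250 }
  ]
}
```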
Runtime Behavior (Bulk Mode)
At execution time:
The task reads the source table
Splits data according to batch size
Injects each batch into the request body
Sends one request per batch
Processes API responses
Writes results to the configured result table
Relationship with HTTP Methods
POST → Bulk create
PUT → Bulk replace
PATCH → Bulk update
DELETE → Bulk delete (API-dependent)
Bulk mode affects how data is sent, not how results are parsed.
REST Task · Bulk at Root
The REST Bulk at Root mode is a variation of the REST Bulk configuration where table data is sent directly at the root of the request body, instead of being wrapped inside an enclosing object (such as { data: [...] }).
This mode is designed for APIs that expect a raw array or object at the root level of the request payload.
When to Use Bulk at Root
Use Bulk at Root when:
The API expects a JSON array at the root.
No wrapper property (e.g., data, items, payload) is allowed.
You are integrating with strict REST or ingestion APIs.
The API specification explicitly defines the body as a root collection.
Example API expectation:
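For instance, an ingestion endpoint of this kind might require a raw JSON array at the root of the body (records are illustrative):

```json
[
  { "product": "A", "amount": 100 },
  { "product": "B", "amount": 250 }
]
```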
Key Difference from Standard REST Bulk
REST Bulk → { "data": [ ... ] }
REST Bulk at Root → [ ... ]
Body Payload (Root)
In Bulk at Root, the body contains only the table reference, without any wrapper.
Example Body:
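A minimal sketch, assuming the same sales_rest source table used earlier; the body is just the table reference, with no wrapper object:

```
{{table.sales_rest}}
```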
REST Task · Bulk With Schema

The REST Bulk With Schema mode extends REST Bulk by allowing you to explicitly define the structure of each record before sending data to an external API.
Instead of sending raw table rows, you define a custom schema template that controls:
Which fields are sent
How fields are named
How table columns are mapped into the payload
This mode is ideal when APIs require strict payload contracts or custom field mappings.
When to Use Bulk With Schema
Use Bulk With Schema when:
The API requires a specific JSON structure.
Table column names do not match API field names.
You need to omit or rename fields.
You want full control over the outgoing payload.
You are integrating with strict or enterprise APIs.
How Bulk With Schema Works
Bulk With Schema introduces a two-layer payload definition:
Schema definition → how a single record should look
Body template → how records are grouped and sent
The system generates the batch dynamically by applying the schema to each row.
Bulk Execution Settings
Enable batch-based execution.
Define how many rows are sent per request.
Enable “Define the table data schema”.
Example schema:
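A hypothetical sketch: each key is the field name the target API expects, and each value maps a column of sales_rest into that field. The column names, field names, and exact placeholder syntax below are assumptions, so check your Gaio version for the precise format:

```
// Hypothetical mapping: keys are API field names, values reference
// columns of the sales_rest table. The placeholder syntax is illustrative.
{
  "customerId": "{{customer_id}}",
  "total": "{{sale_value}}",
  "soldAt": "{{sale_date}}"
}
```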
This means:
Each row in sales_rest is transformed into this structure.
Field names are controlled explicitly.
Only mapped fields are sent.
The schema is applied per row, even when sending data in bulk.
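With a schema like the sketch above, the batch sent to the API would look roughly like this (values are invented), matching the { "data": {{ schema }} } template in the comparison below:

```json
{
  "data": [
    { "customerId": 17, "total": 349.9, "soldAt": "2024-05-02" },
    { "customerId": 23, "total": 120.0, "soldAt": "2024-05-03" }
  ]
}
```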
Mode Comparison
REST Bulk → payload control: Low → body template: { "data": {{ table }} }
REST Bulk at Root → payload control: Medium → body template: {{ table }}
REST Bulk With Schema → payload control: High → body template: { "data": {{ schema }} }
✅ Best Practices
Always run a test before finalizing the configuration.
Use POST/PUT/PATCH for mutations.
Use dynamic parameters to reuse the same task with different inputs.
Use batch mode for high-volume APIs.
When using nested JSON, define the Object Property path precisely.
Enable the Error Log tab to trace failed API calls and improve reliability.
Validate API responses before downstream use.