API

The first time interacting with an API can feel daunting. Each API is unique and requires different settings, but most APIs follow common conventions that make understanding and connecting to them accessible.

To learn how to best use APIs in Parabola, check out our video guides.

Types of APIs

Parabola works best with two types of APIs. The most common API type to connect to is a REST API. Another API type rising in popularity is a GraphQL API. Parabola may be able to connect to a SOAP API, but it is unlikely due to how they are structured.

To evaluate if Parabola can connect with an API, reference this flow chart.

REST API

A REST API is an API that can return data by making a request to a specific URL. Each request is sent to a specific resource of an API using a unique Endpoint URL. A resource is an object that contains the data being requested. Common examples of a resource include Orders, Customers, Transactions, and Events.

To receive a list of orders in Squarespace, the Pull from an API step will make a request to Squarespace's Orders resource using an Endpoint URL:

https://api.squarespace.com/{api-version}/commerce/orders
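Under the hood, the step issues a plain HTTP GET to that Endpoint URL. Here is a minimal sketch using Python's standard library; the "1.0" version segment and the token are placeholders, and the request is built but never actually sent:

```python
import urllib.request

# Build (but do not send) a GET request to the Orders resource.
# "1.0" stands in for the {api-version} placeholder above.
url = "https://api.squarespace.com/1.0/commerce/orders"
req = urllib.request.Request(url, method="GET")
req.add_header("Authorization", "Bearer YOUR_API_KEY")  # placeholder token

print(req.get_method())  # GET
print(req.full_url)
```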

GraphQL API

GraphQL is a new type of API that allows Parabola to specify the exact data it needs from an API resource through a request syntax known as a GraphQL query. To get started with this type of API call in Parabola, set the request type to "POST" in any API step, then select "GraphQL" as the Protocol of the request body.

Once your request type is set, you can enter your query directly into the request body. When forming your query, it can be helpful to use a formatting tool to ensure correct syntax.

Our GraphQL implementation currently supports Offset Limit pagination, using variables inserted directly into the query. Variables can be created by inserting any single word between the brackets '<%%>'. Once created, variables will appear in the dropdown list in the "Pagination" section. One of these variables should correspond to your "limit", and the other should correspond to your "offset".

The limit field is static; it represents the number of results returned in each API request. The offset field is incremented in each subsequent request based on the "Increment each page by" value. The exact implementation will be specific to your API docs.
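As a sketch of how an Offset Limit query fits together, the snippet below builds the POST body for two consecutive pages. The `orders` field and its sub-fields are hypothetical, and the `<%%>` substitution is done by hand here; in Parabola, the step performs it for you:

```python
import json

# Hypothetical GraphQL query using offset/limit variables.
# In Parabola, <%limit%> and <%offset%> are replaced per page;
# here we substitute them manually to show the shape of each request.
query_template = """
query {
  orders(limit: <%limit%>, offset: <%offset%>) {
    id
    total
  }
}
"""

def page_body(limit, offset):
    query = (query_template
             .replace("<%limit%>", str(limit))
             .replace("<%offset%>", str(offset)))
    return json.dumps({"query": query})

# First two requests: the limit stays fixed, the offset advances by 10.
first = page_body(limit=10, offset=0)
second = page_body(limit=10, offset=10)
```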

After configuring your pagination settings, be sure to adjust the "Maximum pages to fetch" setting in the "Rate Limiting" section to retrieve more or fewer results.

GraphQL can be used for data mutations in addition to queries, as specified by the operation type at the start of your request body. For additional information on GraphQL queries and mutations, please reference GraphQL's official documentation.

Reading API Documentation

The first step to connect to an API is to read the documentation that the service provides. This documentation is commonly referred to as the API Reference, or something similar. These pages tend to feature URL and code block content.

The API Reference always provides at least two points of instruction. The first point outlines how to Authenticate a request to give a user or application permission to access the data. The second point outlines the API resources and Endpoint URLs, or where a request can be sent.

Authentication

Most APIs require authentication to access their data. This is likely the first part of their documentation. Try searching for the word "Authentication" in their documentation.

The most common types of authentication are Bearer Tokens, Username/Password (also referred to as Basic), and OAuth2.0.

Bearer Token

This method requires you to send your API Key or API Token as a bearer token. Take a look at this example below:

The part that indicates it is a bearer token is this:

-H "Authorization: Bearer sk_test_WiyegCaE6iGr8eSucOHitqFF"
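That -H flag is just a request header. For reference, the same header could be attached in Python's standard library like so (using the sample key from the example above, which is not a real secret):

```python
import urllib.request

# Build a request and attach the bearer token as a header,
# exactly what the -H flag does in the cURL example.
req = urllib.request.Request("https://api.stripe.com/v1/charges")
req.add_header("Authorization", "Bearer sk_test_WiyegCaE6iGr8eSucOHitqFF")

print(req.get_header("Authorization"))
```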

Username/Password (Basic)

This method is also referred to as Basic Authorization or simply Basic. Most often, the username and password used to sign into the service can be entered here.

However, some APIs require an API key to be used as a username, password, or both. If that's the case, insert the API key into the respective field noted in the documentation.

The example below demonstrates how to connect to Stripe's API using the Basic Authorization method.

https://assets.website-files.com/5d9bdcad630fbe7a7468a9d8/5df043083024f8acf53e7729_Screen Shot 2019-12-10 at 5.14.37 PM.png

The Endpoint URL shows a request being made to a resource called "customers".  The authorization type can be identified as Basic for two reasons:

  1. The -u indicates Basic Authorization.
  2. Most APIs reference the username and password formatted as username:password. Here, there is a colon with no string following it, indicating that only a username is required for authentication.
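For the curious, the -u flag is shorthand for a Basic Authorization header: the username and password are joined with a colon and base64-encoded. A sketch with a placeholder key as the username and an empty password, mirroring the trailing colon:

```python
import base64

# Placeholder API key used as the username; password left empty,
# mirroring the trailing colon in the Stripe example.
username = "sk_test_EXAMPLE_KEY"
password = ""
token = base64.b64encode(f"{username}:{password}".encode()).decode()
header = f"Basic {token}"

print(header.startswith("Basic "))  # True
```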

OAuth2.0

This method is an authorization protocol that allows users to sign into a platform using a third-party account. OAuth2.0 allows a user to selectively grant access for various applications they may want to use.

Authenticating via OAuth2.0 does require more time to configure. For more details on how to authorize using this method, read our guide Using OAuth2.0 in Parabola.

Expiring Access Token

Some APIs will require users to generate access tokens that have short expirations. Generally, any token that expires in less than 1 day is considered to be "short-lived" and may be using this type of authentication. This type of authentication in Parabola serves a grouping of related authentication styles that generally follow the same pattern.

One very specific type of authentication that is served by this option in Parabola is called OAuth2.0 Client Credentials. This differs from our standard OAuth2.0 support, which is built specifically for OAuth2.0 Authorization Code. Both Client Credentials and Authorization Code are part of the OAuth2.0 spec, but represent different Grant Types.

Authenticating with the Expiring Access Token option is more complex than options like Bearer Token, but less complex than OAuth2.0. For more details on how to use this option, read our guide Using Expiring Access Tokens in Parabola.

Resources

A resource is a specific category or type of data that can be queried using a unique Endpoint URL. For example, to get a list of customers, you might use the Customer resource. To add emails to a campaign, use the Campaign resource.

Each resource has a variety of Endpoint URLs that instruct you how to structure a URL to make a request to a resource.  Stripe has a list of resources including "Balance", "Charges", "Events", "Payouts", and "Refunds".

HTTP Methods

HTTP methods, or verbs, are a specific type of action to make when sending a request to a resource. The primary verbs are GET, POST, PUT, PATCH, and DELETE.

  • The GET verb is used to receive data.
  • The POST verb is used to create new data.
  • The PUT verb is used to update existing data.
  • The PATCH verb is used to modify a specific portion of the data.
  • The DELETE verb is used to delete data.

Custom Headers

A header is a piece of additional information to be sent with the request to an API. If an API requires additional headers, it is commonly noted in their documentation as -H.

Remember the authentication methods above? Some APIs list the authentication type to be sent as a header. Since Parabola has specific fields for authentication, those headers can typically be ignored.

Taking a look at Webflow's API, they show two headers are required:

The first -H header is linked to a key called Authorization. Parabola takes care of that. It does not need to be added as a header. The second -H header is linked to a key called accept-version. The value of the header is 1.0.0. This likely indicates which version of Webflow's API will be used.

JSON

JavaScript Object Notation, or more commonly JSON, is a way for an API to exchange data between you and a third party. JSON follows a specific set of syntax rules.

An object is a set of key:value pairs and is wrapped in curly brackets {}. An array is a list of values, wrapped in square brackets [], that is typically linked to a single key.

JSON in API documentation may look like this:

https://assets.website-files.com/5d9bdcad630fbe7a7468a9d8/5df049f8fc0b1a217704f242_Screen Shot 2019-12-10 at 5.44.15 PM.png
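As a hands-on complement, this snippet round-trips a small JSON document through Python's standard json module; the keys and values are made up:

```python
import json

# An object holds key:value pairs; the "tags" key holds an array.
text = '{"name": "Ada", "tags": ["admin", "beta"]}'
doc = json.loads(text)    # object -> dict

print(doc["name"])        # Ada
print(doc["tags"][1])     # array -> list; prints beta
print(json.dumps(doc))    # serialize back to a JSON string
```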

Interpreting cURL

Most documentation will use cURL to demonstrate how to make a request using an API.

Let's take a look at this cURL example referenced in Spotify's API:

curl -X GET "https://api.spotify.com/v1/artists?ids=0oSGxfWSnnOXhD2fKuz2Gy"
-H "Authorization: Bearer {your access token}"

We can extract the following information:

  • Method: GET
  • Resource: artists
  • Endpoint URL:
https://api.spotify.com/v1/artists?ids=0oSGxfWSnnOXhD2fKuz2Gy
  • Authorization: Bearer token
  • Headers: "Authorization: Bearer {your access token}"

Because Parabola handles Authorization separately, the bearer token does not need to be passed as a header.

Here's another example of a cURL request in Squarespace:

This is what we can extract:

  • Method: POST
  • Resource: products
  • Endpoint URL:
https://api.squarespace.com/1.0/commerce/products/
  • Authorization: Bearer token
  • Headers:
"Authorization: Bearer YOUR_API_KEY_OR_OAUTH_TOKEN", "User-Agent: YOUR_CUSTOM_APP_DESCRIPTION"
  • Content-Type: application/json

Parabola also passes Content-Type: application/json as a header automatically. That does not need to be added.

Error Codes

Check out this guide to learn more about troubleshooting common API errors.

The Pull from an API step sends a request to an API to return specific data. In order for Parabola to receive this data, it must be returned in a CSV, JSON, or XML format. This step allows Parabola to connect to a third-party to import data from another service, platform, or account.

You might wonder when it is best to use the Pull from API step vs Enrich with API step. If you need to take existing data and pass it through an API, we recommend you use Enrich with API in the middle of the Flow. Enrich with API makes requests row by row. If you just need to fetch data and join it into the middle of a Flow, you could use the “Pull from API” step and then a join step.

Basic Settings

To use the Pull from an API step, the "Request Type" and "API Endpoint URL" fields are required.

Request Type

There are two ways to request data from an API: using a GET request or using a POST request. These are also referred to as verbs, and are standardized throughout REST APIs.

The most common request for this step is a GET request. A GET request is a simple way to ask for existing data from an API.

"Hey API, can you GET me data from the server?"

To receive all artists from Spotify, their documentation outlines using a GET request to the Artist resource with this Endpoint URL:

Some APIs will require a POST request to import data; however, this is uncommon. A POST request is a simple way to send new data to the server, such as adding a new user to a table.

The request information is sent to the API in the JSON body of the request. The JSON body is a block that outlines the data that will be added.

Hey API, can you POST my new data to the server? The new data is in the JSON body.

API Endpoint URL

Similar to typical websites, APIs use URLs to request or modify data. More specifically, an API Endpoint URL is used to determine where to request data from or where to send new data to. Below is an example of an API Endpoint URL.

To add your API Endpoint URL, click the API Endpoint URL field to open the editor. You can add URL parameters by clicking the +Add icon under the "URL Parameters" text in that editor. The endpoint dynamically changes based on the key/value pairs entered into this field.

Authentication

Most APIs require authentication to access their data. This is likely the first part of their documentation. Try searching for the word Authentication in their documentation.

Here are the Authentication types available in Parabola:

The most common types of authentication are Bearer Tokens, Username/Password (also referred to as Basic), and OAuth2.0. Parabola has integrated these authentication types directly into this step.

Bearer Token

This method requires you to send your API Key or API Token as a Bearer Token. Take a look at this example below:

The part that indicates it is a bearer token is this:

-H "Authorization: Bearer sk_test_WiyegCaE6iGr8eSucOHitqFF"

To add this specific token in Parabola, select Bearer Token from the Authorization menu and add "sk_test_WiyegCaE6iGr8eSucOHitqFF" as the value.

Username/Password (Basic)

This method is also referred to as Basic Authorization or simply Basic. Most often, the username and password used to sign into the service can be entered here.

However, some APIs require an API key to be used as a username, password, or both. If that's the case, insert the API key into the respective field noted in the documentation.

The example below demonstrates how to connect to Stripe's API using the Basic Authorization method.

https://assets.website-files.com/5d9bdcad630fbe7a7468a9d8/5df043083024f8acf53e7729_Screen Shot 2019-12-10 at 5.14.37 PM.png

The Endpoint URL shows a request being made to a resource called customers. The authorization type can be identified as Basic for two reasons:

  1. The -u indicates a username for Basic Authorization.
  2. Most APIs reference the username and password formatted as username:password. Here, there is a colon with no string following it, indicating that only a username is required for authentication.

To authorize this API in Parabola, fill in the fields below:

OAuth2.0

This method is an authorization protocol that allows users to sign into a platform using a third-party account. OAuth2.0 allows a user to selectively grant access for various applications they may want to use.

Authenticating via OAuth2.0 does require more time to configure. For more details on how to authorize using this method, read our guide Using OAuth2.0 in Parabola.

Expiring Access Token

Some APIs will require users to generate access tokens that have short expirations. Generally, any token that expires in less than 1 day is considered to be "short-lived" and may be using this type of authentication. This type of authentication in Parabola serves a grouping of related authentication styles that generally follow the same pattern.

One very specific type of authentication that is served by this option in Parabola is called OAuth2.0 Client Credentials. This differs from our standard OAuth2.0 support, which is built specifically for OAuth2.0 Authorization Code. Both Client Credentials and Authorization Code are part of the OAuth2.0 spec, but represent different Grant Types.

Authenticating with the Expiring Access Token option is more complex than options like Bearer Token, but less complex than OAuth2.0. For more details on how to use this option, read our guide Using Expiring Access Tokens in Parabola.

Request Headers

A header is a piece of additional information to be sent with the request to an API. If an API requires additional headers, it is commonly noted in their documentation as -H.

Remember the authentication methods above? Some APIs list the authentication type to be sent as a header. Since Parabola has specific fields for authentication, those headers can typically be ignored.

Taking a look at Webflow's API, they show two headers are required.

The first -H header is linked to a key called Authorization. Parabola takes care of that. It does not need to be added as a header. The second -H header is linked to a key called accept-version. The value of the header is 1.0.0. This likely indicates which version of Webflow's API will be used.

Response JSON

APIs typically structure data as nested objects. This means data can exist inside data. To extract that data into separate columns and rows, use the Output section to select a top-level column.

For example, a character can exist as a data object. Inside the result object, additional data is included, such as their name, date of birth, and location.

This API shows a data column linked to a results object. To expand all of the data in the results object into neatly displayed columns, select results as the top-level column in the Output section.

If you only want to expand some of the columns, choose to keep specific columns and select the columns that you want to expand from the dropdown list.
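Conceptually, selecting a top-level column is a flattening step. The sketch below shows what "expand the results object into columns" means, using a made-up response shape:

```python
import json

# Hypothetical nested API response: rows live inside a top-level "results" key.
response = json.loads("""
{"results": [
  {"name": "Arya", "location": {"city": "Braavos"}},
  {"name": "Jon",  "location": {"city": "Winterfell"}}
]}
""")

# Selecting "results" as the top-level column yields one row per item,
# with nested objects flattened into dotted column names.
rows = [
    {"name": r["name"], "location.city": r["location"]["city"]}
    for r in response["results"]
]

print(rows[0]["location.city"])  # Braavos
```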

Pagination

APIs return data in pages. This might not be noticeable for small requests, but larger requests will not show all results; by default, an API returns one page of results. To view the other pages, pagination settings must be configured.

Each API has different Pagination settings which can be searched in their documentation. The three main types of pagination are Page, Offset and Limit, and Cursor based pagination.

Page Based Pagination

APIs that use Page based pagination make it easy to request more pages. Documentation will refer to a specific parameter key for each request to return additional pages.

Intercom uses this style of pagination. Notice they reference the specific parameter key of page:

Parabola refers to this parameter as the Pagination Key. To request additional pages from Intercom's API, set the Pagination Key to page.

The Starting page is the first page to be requested. Most often, that value will be set to 0, since for most pagination settings, page numbering begins at 0. The Increment by value is the number of pages to advance by on each request: a value of 1 will fetch every page in order, while a value of 10 will fetch every tenth page.
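Mechanically, Page based pagination just appends the Pagination Key as a URL parameter and steps its value on each request. A sketch with a hypothetical endpoint:

```python
# Hypothetical endpoint; Parabola does this URL arithmetic for you.
base_url = "https://api.example.com/conversations"
pagination_key = "page"
starting_page = 0
increment_by = 1
max_pages = 3

urls = [
    f"{base_url}?{pagination_key}={starting_page + i * increment_by}"
    for i in range(max_pages)
]

print(urls[1])  # https://api.example.com/conversations?page=1
```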

Offset and Limit Based Pagination

APIs that use Offset and Limit based pagination require each request to limit the number of items per page. Once that limit is reached, an offset is used to cycle through the remaining pages.

Spotify refers to this type of pagination in their documentation:

To configure these pagination settings in Parabola, set the Pagination style to offset and limit.

The Starting Value is set to 0 to request the first page of results. The Increment by value is set to 10, so each subsequent request advances the offset by 10 items.

The Limit Key is set to limit to tell the API to limit the amount of items. The Limit Value is set to 10 to define the number of items to return.
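The arithmetic behind those settings looks roughly like this; the endpoint is hypothetical, and Parabola performs the stepping for you:

```python
# Offset/limit pagination: the limit stays fixed, while the offset
# advances by the "Increment by" value on every request.
base_url = "https://api.example.com/artists"  # hypothetical endpoint
limit = 10
max_pages = 3

offsets = [page * limit for page in range(max_pages)]  # 0, 10, 20
urls = [f"{base_url}?limit={limit}&offset={o}" for o in offsets]

print(urls[2])  # https://api.example.com/artists?limit=10&offset=20
```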

Cursor Based Pagination

Otherwise known as the bookmark of APIs, Cursor based pagination will mark a specific item with a cursor. To return additional pages, the API looks for a specific Cursor Key linked to a unique value or URL.

Squarespace uses cursor based pagination. Their documentation states that two Cursor Keys can be used. The first one is called nextPageCursor and has a unique value:

"nextPageCursor": "b342f5367c664d3c99aa56f44f95ab0a"

The second one is called nextPageUrl and has a URL value:

"nextPageUrl": "<https://api.squarespace.com/1.0/commerce/inventory?cursor=b342f5367c664d3c99aa56f44f95ab0a>"

To configure cursor based pagination using Squarespace, use these values in Parabola:

Replace the Cursor path in response with pagination.nextPageUrl to use the URL as the value. The API should return the same results.
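The overall loop an API client runs for cursor pagination can be sketched like this; the responses are canned stand-ins rather than real Squarespace replies:

```python
# Canned responses standing in for real API replies; the last page
# omits nextPageCursor, which ends the loop.
fake_pages = {
    None:  {"items": [1, 2], "pagination": {"nextPageCursor": "abc"}},
    "abc": {"items": [3, 4], "pagination": {"nextPageCursor": "def"}},
    "def": {"items": [5],    "pagination": {}},
}

def fetch(cursor):
    """Stand-in for an HTTP request that passes the cursor along."""
    return fake_pages[cursor]

items, cursor = [], None
while True:
    page = fetch(cursor)
    items.extend(page["items"])
    cursor = page["pagination"].get("nextPageCursor")
    if cursor is None:  # no cursor returned: we've reached the last page
        break

print(items)  # [1, 2, 3, 4, 5]
```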

Rate Limiting

Imagine someone asking thousands of questions all at once. Before the first question can be answered thousands of new questions are coming in. That can become overwhelming.

Servers are no different. Making paginated API calls requires a separate request for each page. To avoid this, APIs have rate limiting rules to protect their servers from being overwhelmed with requests. Parabola can adjust the Max Requests per Minute to avoid rate limiting.

By default, this value is set to 60 requests per minute. That's 1 request per second. The Max Requests per Minute does not set how many requests are made per minute. Instead, it sets a ceiling, ensuring that Parabola never sends requests faster than that rate.

Lowering the requests per minute will avoid rate limiting but will run the Flow more slowly. Note that Parabola will stop calculating a Flow after 60 minutes.
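The relationship between the rate limit and run time is simple arithmetic:

```python
# 60 requests per minute works out to one request per second.
max_requests_per_minute = 60
delay_seconds = 60 / max_requests_per_minute
print(delay_seconds)  # 1.0

# Fetching 300 pages at this rate takes at least 300 seconds (5 minutes),
# well within the 60-minute calculation limit.
pages = 300
print(pages * delay_seconds)  # 300.0
```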

Max Pages to Fetch

To limit the number of pages to fetch, use this field to set the value. Lower values will return data faster; higher values will take longer to return data.

The default value in Parabola is 5 pages. Just note, this value needs to be larger than the expected number of pages to be returned; this prevents any data from being omitted.

If you are pulling a large amount of data and want to limit how much is being pulled in while building, you can set the step to pull a lower number of pages while editing the Flow than while running the Flow.

Note, there is a 1000 page limit when building vs. running flows.

Encode URLs

URLs tend to break when there are special characters like spaces, accented characters, or even other URLs. Most often, this occurs when using {text merge} values to dynamically insert data into a URL.

Check the "Encode URLs" box to prevent the URL from breaking if special characters need to be passed.
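To see what encoding does, Python's urllib.parse.quote applies the same percent-encoding to a value containing a space and an accented character; the endpoint is hypothetical:

```python
from urllib.parse import quote

# A merged value with a space and an accent would break a raw URL.
value = "café latte"
encoded = quote(value)
print(encoded)  # caf%C3%A9%20latte

url = f"https://api.example.com/search?q={encoded}"  # hypothetical endpoint
print(url)
```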

Response type

By default, this step will parse the data sent back to Parabola from the API in the format indicated by the content-type header received. Sometimes, APIs will send a content-type that Parabola does not know how to parse. In these cases, adjust this setting from auto-detect to a different setting, to force the step to parse the data in a specific way.

Use the gzip option when the data is returned in a gzip format, but can be unzipped into csv, xml, or JSON data. If you enable gzip parsing, you must also specify a response type option.
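Gzip is only a compression wrapper around the real payload; once unzipped, the bytes parse as ordinary JSON (or CSV/XML). A quick illustration:

```python
import gzip
import json

# Simulate a gzip-compressed JSON response body.
payload = json.dumps({"ok": True}).encode()
compressed = gzip.compress(payload)

# What the gzip option does conceptually: unzip first, then parse.
data = json.loads(gzip.decompress(compressed))
print(data["ok"])  # True
```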

Tips and troubleshooting

  • Please note that the Pull from API step cannot extract dynamic ranges, such as date. We suggest taking existing data—even just a Start with date & time step—and using an Enrich with API step to create a Flow whose parameters update on each Flow run.
  • Parabola will never limit API calls according to a user’s plan—rate limiting is at the discretion of the user, and may be restricted natively by the API.
  • We recommend using an API key that is unique to Parabola. This is not strictly necessary, but it helps with troubleshooting and debugging!

Something not right? Check out this guide to learn more about troubleshooting common API errors.

The Send to an API step sends a request to an API to export specific data. Data must be sent in JSON format in the body of the request. This step can send data only when a Flow is published.

Input

This table shows the product information for new products to be added to a store. It shows common columns like "My Product Title", "My Product Description", "My Product Vendor", "My Product Tags".

These values can be used to create products in bulk via the Send to an API step.

Basic Settings

To use the Send to an API step, a Request Type, API Endpoint URL, and Authentication are required. Some APIs require Custom Headers while other APIs nest their data into a single cell that requires a Top Level Key to format into rows and columns.

Request Type

There are four ways to send data with an API using POST, PUT, PATCH, and DELETE requests. These methods are also known as verbs.

The POST verb is used to create new data. The DELETE verb is used to delete data. The PUT verb is used to update existing data, and the PATCH verb is used to modify a specific portion of the data.

Hey API, can you POST new data to the server?  The new data is in the JSON body.

API Endpoint URL

The API Endpoint URL is the specific location where data will be sent. Each API Endpoint URL belongs to a specific resource. A resource is the broader category to be targeted when sending data.

To create a new product in Shopify, use their Products resource. Their documentation specifies making a POST request to that resource using this Endpoint URL:

Your Shopify store domain will need to be prepended to each Endpoint URL:

https://your-shop-name.myshopify.com/admin/api/2020-10/products.json

The request information is sent to the API in the JSON body of the request. The JSON body is a block that outlines the data that will be added.

Body

The body of each request is where the data to be sent through the API is added. The body must be in raw JSON format using key:value pairs. The JSON below shows common attributes of a Shopify product.

{
 "product": {
   "title": "Baseball Hat",
   "body_html": "<strong>Awesome hat!</strong>",
   "vendor": "Parabola Cantina",
   "product_type": "Hat",
   "tags": [
     "Unisex",
     "Salsa",
     "Hat"
   ]
 }
}

Notice the title, body_html, vendor, product_type, and tags can be generated when sending this data to an API.

Since each product exists per row, {text merge} values can be used to dynamically pass the data in the JSON body.

This will create 3 products: White Tee, Pink Pants, and Sport Sunglasses with their respective product attributes.
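Conceptually, the step renders the JSON body once per row, substituting each {text merge} tag with that row's value. A rough sketch with made-up rows and a trimmed-down body:

```python
import json

# Sample rows standing in for the input table.
rows = [
    {"My Product Title": "White Tee",  "My Product Vendor": "Parabola Cantina"},
    {"My Product Title": "Pink Pants", "My Product Vendor": "Parabola Cantina"},
]

def render_body(row):
    # Each {text merge} tag in the JSON body becomes the row's value.
    return json.dumps({
        "product": {
            "title": row["My Product Title"],
            "vendor": row["My Product Vendor"],
        }
    })

# One request body per row, so one product is created per row.
bodies = [render_body(r) for r in rows]
print(json.loads(bodies[0])["product"]["title"])  # White Tee
```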

Authentication

Most APIs require authentication to access their data. This is likely the first part of their documentation. Try searching for the word Authentication in their documentation. Below are the authentication types supported on Parabola:

The most common types of authentication are Bearer Tokens, Username/Password (also referred to as Basic), and OAuth 2.0. Parabola has integrated these authentication types directly into this step.

Bearer Token

This method requires you to send your API Key or API Token as a bearer token. Take a look at this example below:

The part that indicates it is a bearer token is this:

-H "Authorization: Bearer sk_test_WiyegCaE6iGr8eSucOHitqFF"

To add this specific token in Parabola, select Bearer Token from the Authorization menu and add sk_test_WiyegCaE6iGr8eSucOHitqFF as the value.

Username/Password (Basic)

This method is also referred to as Basic Authorization or simply Basic. Most often, the username and password used to sign into the service can be entered here.

However, some APIs require an API key to be used as a username, password, or both. If that's the case, insert the API key into the respective field noted in the documentation.

The example below demonstrates how to connect to Stripe's API using the Basic Authorization method.

The Endpoint URL shows a DELETE request being made to a resource called customers.  The authorization type can be identified as Basic for two reasons:

  1. The -u indicates a username for Basic Authorization.
  2. Most APIs reference the username and password formatted as username:password. Here, there is a colon with no string following it, indicating that only a username is required for authentication.

To delete this customer using Parabola, fill in the fields below:

OAuth2.0

This method is an authorization protocol that allows users to sign into a platform using a third-party account. OAuth2.0 allows a user to selectively grant access for various applications they may want to use.

Authenticating via OAuth2.0 does require more time to configure. For more details on how to authorize using this method, read our guide Using OAuth2.0 in Parabola.

Expiring Access Token

Some APIs will require users to generate access tokens that have short expirations. Generally, any token that expires in less than 1 day is considered to be "short-lived" and may be using this type of authentication. This type of authentication in Parabola serves a grouping of related authentication styles that generally follow the same pattern.

One very specific type of authentication that is served by this option in Parabola is called OAuth2.0 Client Credentials. This differs from our standard OAuth2.0 support, which is built specifically for OAuth2.0 Authorization Code. Both Client Credentials and Authorization Code are part of the OAuth2.0 spec, but represent different Grant Types.

Authenticating with the Expiring Access Token option is more complex than options like Bearer Token, but less complex than OAuth2.0. For more details on how to use this option, read our guide Using Expiring Access Tokens in Parabola.

Custom Headers

A header is a piece of additional information to be sent with the request to an API. If an API requires additional headers, it is commonly noted in their documentation as -H.

Remember the authentication methods above? Some APIs list the authentication type to be sent as a header. Since Parabola has specific fields for authentication, those headers can typically be ignored.

Taking a look at Webflow's API, they show two headers are required.

The first -H header is linked to a key called Authorization. Parabola takes care of that. It does not need to be added as a header. The second -H header is linked to a key called accept-version. The value of the header is 1.0.0. This likely indicates which version of Webflow's API will be used.

Advanced Settings

Encode URLs

URLs tend to break when there are special characters like spaces, accented characters, or even other URLs. Most often, this occurs when using {text merge} values to dynamically insert data into a URL.

Check the "Encode URLs" box to prevent the URL from breaking if special characters need to be passed.

See sent request

If you would like to see the request that was sent to the API during the Flow run, you can do this from the API step. Click the square button next to the Request Settings section in the step to see more detailed information.

Reading API Errors

Check out this guide to learn more about troubleshooting common API errors.

Use the Enrich with API step to make API requests using a list of data, enriching each row with data from an external API endpoint.

Input/output

Our input data has two columns: "data.id" and "data.employee_name".

Our output data, after using this step, has three new columns appended to it: "api.status", "api.data.id", and "api.data.employee_name". This data was appended to each row that made the call to the API.

Custom settings

First, decide if your data needs a GET or POST operation, or the less common PUT or PATCH, and select it in the Type dropdown. A GET operation is the most common way to request data from an API. A POST is another way to request data, though it is more commonly used to make changes, like adding a new user to a table. PUT and PATCH make updates to data, and sometimes return a new value that can be useful.

Insert your API endpoint URL in the text field.

Sending a body in your API request

  • A GET cannot send a body in its request. A POST can send a Body in its request. In Parabola, the Body of the request will always be sent in JSON.
  • Simple JSON looks like this:
{ "key1":"value1", "key2":"value2", "key3":"value3" }

Using merge tags

  • Merge tags can be added to the API Endpoint URL or the Body of a request. For example, if you have a column named "data.id", you could use it in the API Endpoint URL by including {data.id} in it. Your URL would look like this:
http://third-party-api-goes-here.com/users/{data.id}
  • Similarly, you can add merge tags to the body.
{
"key1": "{data.id}",
"key2": "{data.employee_name}",
"key3": "{Type}"
}
  • For this GET example, your API endpoint URL will require an ID or some sort of unique identifier required by the API to match your data request with the data available. Append that ID column to your API endpoint URL. In this case, we use {data.id}.
  • Important Note: If the column referenced in the API endpoint URL is named "api", the enrichment step will remove the column after the calculation. Use the Edit Columns step to change the column name to anything besides "api", such as "api.id".
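Row by row, the enrichment step fills the merge tag into the Endpoint URL before each call, along these lines (the endpoint is the placeholder from the example above; the rows and the template's placeholder name are made up):

```python
# {data_id} stands in for the {data.id} merge tag; Python's str.format
# cannot use dots in placeholder names, so we rename it for this sketch.
template = "http://third-party-api-goes-here.com/users/{data_id}"

rows = [{"data.id": "101"}, {"data.id": "102"}]

# One request URL per row, each filled with that row's unique identifier.
urls = [template.format(data_id=row["data.id"]) for row in rows]
print(urls[0])  # http://third-party-api-goes-here.com/users/101
```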

Authentication

Most APIs require authentication to access their data. This is likely the first part of their documentation. Try searching for the word "authentication" in their documentation.

Here are the authentication types available in Parabola:

The most common types of authentication are 'Bearer Token', 'Username/Password' (also referred to as Basic), and 'OAuth2.0'. Parabola has integrated these authentication types directly into this step.

Bearer Token

This method requires you to send your API key or API token as a bearer token. In an example cURL request, the part that indicates it is a bearer token is this:

-H "Authorization: Bearer sk_test_WiyegCaE6iGr8eSucOHitqFF"

To add this specific token in Parabola, select 'Bearer Token' from the 'Authorization' menu and add "sk_test_WiyegCaE6iGr8eSucOHitqFF" as the value.
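For reference, sending that same bearer token yourself looks like this in Python (the endpoint URL is illustrative):

```python
import urllib.request

# The bearer token from the example above, sent as an Authorization header.
token = "sk_test_WiyegCaE6iGr8eSucOHitqFF"
req = urllib.request.Request(
    "https://api.stripe.com/v1/charges",  # example endpoint
    headers={"Authorization": f"Bearer {token}"},
)
```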

Username and Password (Basic)

This method is also referred to as "basic authorization" or simply "basic". Most often, the username and password used to sign into the service can be entered here.

However, some APIs require an API key to be used as a username, password, or both. If that's the case, insert the API key into the respective field noted in the documentation.

The example below demonstrates how to connect to Stripe's API using the basic authorization method.

(Screenshot: a Stripe cURL request using basic authorization)

The endpoint URL shows a request being made to a resource called customers. The authorization type can be identified as basic for two reasons:

  1. The -u indicates a username.
  2. Most APIs reference the username and password formatted as username:password. Here, there is a colon with no string following, indicating that only a username is required for authentication.

To authorize this API in Parabola, fill in the fields below:
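Under the hood, basic authorization base64-encodes username:password into an Authorization header. A sketch of what gets sent in this Stripe-style case (API key as the username, blank password):

```python
import base64
import urllib.request

# Stripe-style basic auth: the API key is the username, the password is blank.
api_key = "sk_test_WiyegCaE6iGr8eSucOHitqFF"
credentials = base64.b64encode(f"{api_key}:".encode()).decode()
req = urllib.request.Request(
    "https://api.stripe.com/v1/customers",  # example endpoint
    headers={"Authorization": f"Basic {credentials}"},
)
```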

OAuth2.0

This method is an authorization protocol that allows users to sign into a platform using a third-party account. OAuth2.0 allows a user to selectively grant access for various applications they may want to use.

Authenticating via OAuth2.0 does require more time to configure. For more details on how to authorize using this method, read our guide Using OAuth2.0 in Parabola.

Expiring Access Token

Some APIs will require users to generate access tokens that have short expirations. Generally, any token that expires in less than 1 day is considered "short-lived" and may be using this type of authentication. In Parabola, this option covers a group of related authentication styles that generally follow the same pattern.

One very specific type of authentication that is served by this option in Parabola is called "OAuth2.0 Client Credentials". This differs from our standard OAuth2.0 support, which is built specifically for "OAuth2.0 Authorization Code". Both methods are part of the OAuth2.0 spec, but represent different grant types.

Authenticating with an expiring access token is more complex than using a bearer token, but less complex than OAuth2.0. For more details on how to use this option, read our guide Using Expiring Access Tokens in Parabola.
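The general pattern can be sketched as follows. The token URL below is hypothetical and the field names vary by API; real APIs may use JSON bodies, different parameter names, or extra headers:

```python
import urllib.parse
import urllib.request

# OAuth2.0 Client Credentials sketch: exchange a client ID/secret for a
# short-lived access token, then use it as a bearer token until it expires.
token_request = urllib.request.Request(
    "https://auth.example.com/oauth/token",  # hypothetical token endpoint
    data=urllib.parse.urlencode({
        "grant_type": "client_credentials",
        "client_id": "YOUR_CLIENT_ID",
        "client_secret": "YOUR_CLIENT_SECRET",
    }).encode(),
    method="POST",
)
# A successful response typically contains something like
# {"access_token": "...", "expires_in": 3600}; the access token is then sent
# as "Authorization: Bearer <access_token>" on subsequent requests.
```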

How to work with errors when you expect them in your API calls

Enabling Error Handling

In the Enrich with an API step and the Send to an API step, enable Error Handling to allow your API steps to pass through data even if one or more API requests fail. Modifying this setting adds new error handling columns to your dataset that report on the status of those API calls.

By default, this section will show that the step will stop running when 1 row fails. This has always been the standard behavior of our API steps. Remember, each row of data is a separate API call. With this default setting enabled, you will never see any error handling columns.

Update that setting, and you will see that new columns are set to be added to your data. These new columns are:

  • API Success Status
  • API Error Code
  • API Error Message

API Success Status will print out a true or false value to show if that row's API call succeeded or failed.

API Error Code will have an error code for that row if the API call failed, and will be blank if the API call succeeded.

API Error Message will display the error message associated with any API call that failed, if the API did in fact send us back a message.

Unless you are using the default setting, these columns will be included even if every row succeeds. In that case, you will see the API Success Status column with all true values, and the other two columns with all blank values.


Using the error handling settings

It is smart to set a threshold where the step will still fail if enough rows have failed. Usually, if enough rows fail to make successful API calls, there may be a problem with your step settings, the data you are merging into those calls, or the API itself. In these cases, it is a good idea to ensure that the step can fully stop without needing to run through every row.

Choose to stop running this step if either a static number of rows fail, or if a percentage of rows fail.

You must choose a number greater than 0.

When using a percentage, Parabola will always round up to the next row if the percentage of the current set of rows results in a partial row.
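For example, the round-up rule works like this:

```python
import math

# A 5% threshold on 1,030 rows works out to 51.5 rows, which Parabola rounds
# up to the next whole row: the step stops once 52 rows fail.
rows, threshold_pct = 1030, 5
failing_rows_to_stop = math.ceil(rows * threshold_pct / 100)
# failing_rows_to_stop == 52
```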

Prevent the step from ever stopping

In rare cases, you may want to ensure that your step never stops running, even if every row results in a failed API call. In that case, set your error handling threshold to any number greater than 100%, such as 101% or 200%.

What to do with these new error handling columns

Once you have enabled this setting, use these new columns to create a branch to deal with errors. The most common use case will be to use a Filter Rows step to filter down to just the rows that have failed, and then send those to a Google Sheet for someone to check on and make adjustments accordingly.
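The filtering logic is equivalent to this small sketch, with plain Python standing in for the Filter Rows step (the row values are illustrative):

```python
# Keep only the rows whose API call failed, using the error handling columns.
rows = [
    {"order_id": 1, "API Success Status": "true",  "API Error Code": "",    "API Error Message": ""},
    {"order_id": 2, "API Success Status": "false", "API Error Code": "429", "API Error Message": "Rate limited"},
]
failed = [r for r in rows if r["API Success Status"] == "false"]
# The failed rows can then be exported (e.g. to a Google Sheet) for review.
```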

Error handling in the Live flow Run logs

If you have a flow that is utilizing these error handling columns, the run logs on the live view of the flow will not indicate if any rows were recorded as failed. The run logs will only show a failure if the step was forced to stop by exceeding the threshold of acceptable errors. It is highly advisable that you set up your flow to create a CSV or a Google Sheet of these errors so that you have a record of them from each run.

Integration: 

Amazon Seller Central

Use the Pull from Amazon Seller Central step to import Amazon reports into your flow.

Set up the step

  1. Drag the Pull from Amazon Seller Central step onto the canvas.
  2. Click "Authorize Amazon Seller".
  3. In the pop-up that appears, log in to your Amazon Seller Central account to connect it to Parabola.

Configure your settings

  • Report category: Select the type of report you want to pull. Descriptions for categories are available in Amazon’s developer documentation.
  • Report type: Options vary based on the selected category.
  • Timeframe: Defaults to the last month. To speed up report delivery, select the shortest timeframe that meets your needs.
  • Report options: Some reports allow for extra configuration.
  • Troubleshooting "fatal error" response from Amazon

    Potential reasons for fatal errors from Amazon

    • No data is available for the date range specified.
    • The date range doesn’t follow Amazon’s specifications. Some report types have minimum and maximum date range limits. Check the description of your selected report type for details.
    • The connected seller account doesn’t sell in the marketplace(s) specified in the request.
    • You’ve made the same exact report request too many times in a row. In rare cases, this can result in a fatal error response from Amazon.

    Potential solutions

    • Try adjusting your date range to a smaller or different window.
    • Manually confirm in Amazon Seller Central that data exists for the date range you’re requesting.
    • While this behavior isn’t confirmed in Amazon’s official documentation, several users have observed it. Try spacing out repeated report requests (try again in 24 hours) if you suspect that this might be the cause of the fatal errors.

    Helpful tips

    • This step pulls from Amazon’s Reporting API. If you need data from the Orders or Customers APIs, look for reports that already contain that information.
    • At this time, we are unable to pull reports from the Easy Ship and Orders report categories. (Last update: October 7, 2025)
    • There are two types of inventory reports: Inventory and Fulfillment by Amazon (FBA) Inventory. Check both if you’re unsure where your dataset lives. Inventory reports cover products you fulfill directly, while FBA Inventory reports cover products Amazon fulfills on your behalf.
    • Amazon’s API can take up to an hour to return report results. Limit the timeframe or data size when possible to reduce wait times.
    • The default timezone matches your browser. You can adjust this if needed. Parabola converts your timeframe and timezone to UTC when requesting the report.
    • If a report exists in Amazon Seller Central but isn’t available in Parabola, contact us at help@parabola.io.

    Integration: 

    CSV file

    The Use CSV file step enables you to pull in tabular data from a CSV, TSV, or a semicolon delimited file.

    Custom Settings

    The first thing to do when using this step is to either drag a file into the outlined box or select "Click to upload a file".

    Once the file is uploaded and displayed in the Results tab, you'll see two settings on the lefthand side: File and Delimiter. You can click File to upload a different file. Parabola defaults to a comma delimiter, but you can select the appropriate delimiter for your file from the Delimiter dropdown. Comma (,), tab (\t), and semicolon (;) are the three delimiter types we support.

    In the "Advanced Settings", you can set a number of rows and a number of columns to skip when importing your data. This will skip rows from top-down and columns from left-to-right. You can also select a Quote Character which will help make sure data with commas in the values/cells don’t disrupt the CSV structure.
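For reference, the step's delimiter and Quote Character settings map directly onto standard CSV parsing. A quick Python sketch of reading a semicolon-delimited file whose values contain commas:

```python
import csv
import io

# Semicolon delimiter plus a double-quote quote character keeps the comma
# inside "widget, large" from splitting the cell.
raw = 'sku;description\nA1;"widget, large"\n'
reader = csv.reader(io.StringIO(raw), delimiter=";", quotechar='"')
rows = list(reader)
# rows == [["sku", "description"], ["A1", "widget, large"]]
```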

    Helpful Tips

    • Security: the files you upload through this step are stored by Parabola. We store the data as a convenience, so that the next time you open the flow, the data is still loaded into it. Your data is stored securely in an Amazon S3 Bucket, and all connections are established over SSL and encrypted.
    • Limitations: Parabola can't pull in updates to this file from your computer automatically, so you must manually upload the file's updates if you change the original file. Formatting and formulas from a file will not be preserved. When you upload this file, all formulas are converted to their value and formatting is stripped.

    The "Generate CSV file" step allows you to export tabular data as a CSV file. You can use it to create custom datasets from various sources within your Flow. Once the Flow run is complete, the CSV file can be downloaded from the Flow’s Run History. You can also configure the step to email a download link to the Flow owner.

    Custom Settings

    Once you connect your Flow to this export step, it will display a preview of the tabular data to be exported.

    The name of the generated file will match the step’s title. To rename your custom dataset file, simply double-click the step title and enter a new name.

    After publishing and running your Flow, you can download the generated CSV file from the Flow’s Run History panel. Past CSVs created by this step are also accessible there.

    You can optionally configure the step to email a download link to the Flow owner when the run is complete. Please note that this link will expire after 24 hours.

    If the step receives zero rows of data as input, no CSV file will be generated and no download link will be emailed.

    Helpful Tips

    Security

    Files generated by this step are stored by Parabola for your convenience. This allows the data to be reloaded the next time you open the Flow. Your data is stored securely in an Amazon S3 bucket, with all connections established over SSL and encrypted.

    Limitations

    This step supports only one input source at a time.
    If your Flow includes multiple branches or datasets, you'll need to connect each one to its own Generate CSV file step to export them separately.

    Alternatively, consider using the "Generate Excel file" step, which allows multiple inputs and creates a single Excel file with each input as a separate tab.

    Integration: 

    DHL

    The DHL Shipment Tracking API is used to provide up-to-the-minute shipment status reports by retrieving tracking information for shipments, identifying DHL service providers, and verifying DHL delivery addresses.

    DHL is a beta integration which requires a slightly more involved setup process than our native integrations. Following the guidance in this document should help even those without technical experience pull data from DHL. If you run into any questions, shoot our team an email at support@parabola.io.

    Use Cases

    Use Case: Track DHL Shipments
    Description: Generate status reports by retrieving tracking information for shipments, identifying DHL service providers, and verifying DHL delivery addresses.

    🤝 DHL | Integration configuration

    📖 DHL Reference docs:

    https://developer.dhl.com/api-reference/shipment-tracking#reference-docs-section

    🔐 DHL Authentication doc links:

    https://developer.dhl.com/api-reference/shipment-tracking#get-started-section/user-guide

    Instructions

    1. Click My Apps on the portal website.

    2. Click the + Add App button.

    3. The “Add App” form appears.

    4. Complete the Add App form.

    5. You can select the APIs you want to access.

    6. When you have completed the form, click the Add App button.

    7. From the My Apps screen, click on the name of your app. The Details screen appears.

    8. If you have access to more than one API, click the name of the relevant API.

    ⚠️ Note: The APIs are listed under the Credentials section.

    9. Click the Show link below the asterisk that is hiding the Consumer Key.

    🔐 Parabola | Authentication configuration

    1. Add an Enrich tracking from DHL step template to your canvas.

    2. Click into the Enrich with API: DHL Tracking step to configure your authentication.

    3. Under the Authentication Type, select None.

    4. Click into the Request Settings to configure your request using the format below:

    Request Headers

    Header Key: DHL-API-Key
    Header Value: <Consumer Key>

    Example Screenshot

    🌐 DHL | Sample API Requests

    Track DHL Shipment Statuses by tracking number

    Get started with this template.

    Test URL

    https://api-test.dhl.com/track/

    Production URL

    https://api-eu.dhl.com/track/

    1. Add a Use sample data step to your Flow. You can also import a dataset with tracking numbers into your Flow. (Pull from Excel File, Pull from Google Drive, Pull from API, Use sample data, etc.)

    💡 Tip: When using your own data, use the Edit columns step to rename the tracking column in your source data to Tracking Number.

    2. Connect it to the Enrich with API: DHL Tracking step.

    3. Under Authentication Type, select None.

    4. Click into the Request Settings to configure your request using the format below:

    API Endpoint URL

    Method: GET
    API Endpoint URL: https://api-eu.dhl.com/track/shipments?trackingNumber={Tracking Number}
    💡 Tip: The Enrich with API step makes dynamic requests for each row in the table by inserting the tracking number in the API Endpoint URL.

    The example above assumes there is a Tracking Number column, referenced using curly brackets: {Tracking Number}.
    Enclose the column header containing your tracking numbers in curly brackets to dynamically reference the tracking numbers in your table.

    Request Headers

    Header Key: DHL-API-Key
    Header Value: <Consumer Key>

    5. Click Refresh data to display the results.

    Example Screenshot
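The request the step makes for each row can be sketched in Python. The tracking number and Consumer Key below are placeholders:

```python
import urllib.parse
import urllib.request

# One GET per tracking number, with the Consumer Key in the DHL-API-Key header.
tracking_number = "00340434292135100186"  # placeholder value
url = "https://api-eu.dhl.com/track/shipments?" + urllib.parse.urlencode(
    {"trackingNumber": tracking_number}
)
req = urllib.request.Request(url, headers={"DHL-API-Key": "YOUR_CONSUMER_KEY"})
```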

    📣 Callouts

    ⚠️ Note: Rate limits protect the DHL infrastructure from suspicious requests that exceed defined thresholds.

    When you first request access to the API, you will get the initial service level which allows 250 calls per day with a maximum of 1 call every 5 seconds.
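If you are calling the API yourself outside Parabola, a simple sketch of staying under the 1-call-every-5-seconds limit is to pause between consecutive requests:

```python
import time

def throttled_calls(tracking_numbers, make_request, min_interval=5.0):
    """Call make_request for each tracking number, waiting between calls."""
    for i, number in enumerate(tracking_numbers):
        if i > 0:
            time.sleep(min_interval)  # respect the 1-call-per-5-seconds limit
        make_request(number)
```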

    Additional rate limits are available and are granted according to your specific use case. To request higher limits, follow these steps:

    1. Create an app as described under the Get Access section.
    2. Click My Apps on the portal website.
    3. Click on the App you created.
    4. Scroll down to the APIs list and click on the "Request Upgrade" button.

    Integration: 

    DocSpring

    Use the Send to DocSpring step to automatically create submissions for your DocSpring PDF Templates.

    Connect your DocSpring account

    To connect to your DocSpring account, you'll first need to click the blue "Authorize" button.

    You'll need your DocSpring API Token ID and your DocSpring API Token Secret to proceed. You can find or create these in your API Token settings in DocSpring.

    Reminder: if you're creating a new API Token, the Token Secret will only be revealed immediately after creating the new Token. Be sure to copy and paste or write it down in a secure location. Once you've created or copied your API Token ID and Secret, come back to Parabola and paste them into the correct fields.

    Custom settings

    To pull in the correct DocSpring Template, you'll need to locate the Template ID. Open the Template you want to connect in DocSpring and locate the URL.  The Template ID is the string of characters following templates/ in the URL:

    https://app.docspring.com/templates/{Template ID}

    Paste the ID from the URL in the Template ID field.

    Helpful tips

    • Your PDF templates in DocSpring can accept a variety of data types to fill in their fields; however, there are no column mapping options in Parabola. Make sure your column headers exactly match the names of the fields in DocSpring to ensure your data fills in the correct predefined fields in the PDF.

    Integration: 

    Email a file attachment

    The Email a file attachment step gives you the ability to send an email to a list of recipients with a custom message and an attached file (CSV or Excel) of your transformed data.

    Setup

    1. Connect your flow to Email a file attachment.
    2. Select your Email template:
      1. Parabola branded (default): sends a branded email.
      2. Plain text without branding: sends a simple plaintext email.
      3. Delivery details: Regardless of template, emails are sent from the Parabola domain. The sender address is team@parabolamail.com. Set ‘Reply-to’ if you want responses to go elsewhere.
    3. In the step, fill out all required fields:
      1. Email recipients: enter up to ten email addresses.
      2. Email subject: enter your subject line.
      3. Email body: write your message.
    4. In Advanced settings, set Reply-to to direct recipient replies to the right inbox.

    Use merge tags

    You can insert dynamic values in Email recipients, Email subject, Email body, File name, and Reply-to by wrapping a column name in {}.


    Example: {Name} inserts the value from the first row of the Name column from the first connected input.

    • To automatically remove any columns used as merge tags from the attachment, open Advanced settings and turn on Remove merge columns from output file.

    Multiple inputs (Excel only)

    If File format = Excel, the step can accept multiple inputs. Each input becomes a separate tab in the generated file. Give each tab a unique name.

    Security

    Parabola stores files you send through this step so your flow can reload results next time. We store data securely in Amazon S3, and all connections use SSL with encryption.

    Limitations

    • Maximum email size: 30 MB.
    • The step does not send an email if the input contains zero rows.
    • You can email up to 10 recipients.
    • This step sends an attachment to your specified recipients. By contrast, the Generate CSV file and Generate Excel file steps create a downloadable file and email a link to the flow owner. Files from the Generate steps are also available in the flow’s Run history.

    Integration: 

    Excel file

    The Use Excel file step enables you to pull in tabular data from an Excel file.

    Custom settings

    First, select Click to upload a file.

    If your Excel file has multiple sheets, select which one you'd like to use in the dropdown menu for Sheet.

    In the Advanced Settings, you may also select to skip rows or columns. This will skip rows from top-down and columns from left-to-right.

    Formatted data

    Cell data is imported as formatted values from Excel. Dates, numbers, and currencies will be represented as they appear in the Excel workbook, as opposed to their true underlying value.

    Enabling unformatted values will import the underlying data from Excel. Most notably, this will show raw numbers without any rounding applied, and will convert dates to Excel's native date format (the number of days since 1900-01-01).
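To convert that native date format back into a calendar date, this sketch uses the commonly cited 1899-12-30 base date, which compensates for Excel's historical 1900 leap-year quirk (accurate for dates from March 1900 onward):

```python
from datetime import datetime, timedelta

def excel_serial_to_date(serial: float) -> datetime:
    # Excel stores dates as a day count; 1899-12-30 is the effective epoch
    # once the 1900 leap-year bug is accounted for.
    return datetime(1899, 12, 30) + timedelta(days=serial)

excel_serial_to_date(44927)  # -> 2023-01-01 00:00:00
```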

    Helpful tips

    Limitations

    This step can't pull in file updates from your computer, so if you make dataset changes and wish to bring them into Parabola, this requires manually uploading the updated Excel file. When you upload an Excel file, all formulas are converted to their value, and formatting is stripped (formatting or formulas are not preserved). If you want to pull in live updates on each run without having to upload a file manually, you can use a step like Pull from SharePoint, OneDrive, or Google Drive.

    Security

    The files you upload through this step are stored by Parabola. We store the data as a convenience, so that the next time you open the flow, the data is still loaded into it. Your data is stored securely in an Amazon S3 Bucket, and all connections are established over SSL and encrypted.

    Custom Settings

    Once you connect your Flow to this export step, it will show a preview of the tabular data to be sent.

    The step will automatically send this downloadable Excel file link to the email address of the Flow owner.

    By default, the name of the file will be ‘Parabola Excel File’—if you'd like to rename your dataset, click the box under ‘Download a Excel file named’ and type your new filename.

    Note that the Generate Excel file step can take multiple inputs. Each input step will send data to a separate sheet, and the names of these sheets can be customized. 'Input 1' will map to 'Sheet 1' by default, and so forth. Refer to the 'Input' tabs at the top of your step window to ensure your step is sending your data to the desired source.

    Once you publish and run your Flow, the emailed Excel file link will expire after 24 hours.

    If the step has no data in it (0 rows), then even after running your Flow, an email with an Excel file won't be sent.

    You can download past Excel files that were generated with this step by opening the “Run History” panel, navigating to the Flow run, and clicking Download Excel.

    Helpful Tips

    Security

    The files you send through this step are stored by Parabola. We store the data as a convenience, so that the next time you open the Flow, the data is still loaded into it. Your data is stored securely in an Amazon S3 Bucket, and all connections are established over SSL and encrypted.

    Limitations

    All sheet names must be less than or equal to 31 characters, or the Flow will fail.
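A quick sketch of guarding against that limit before exporting (an illustrative helper, not a Parabola feature):

```python
def safe_sheet_name(name: str, max_len: int = 31) -> str:
    """Truncate a sheet name to Excel's 31-character maximum."""
    return name if len(name) <= max_len else name[:max_len]

safe_sheet_name("Quarterly Inventory Reconciliation Report")  # truncated to 31 chars
```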

    Integration: 

    Extract from PDF

    You can import PDF files in a few different ways:

    • Upload a file directly using the Extract from PDF file step
    • Pull PDFs from inbound email using the Extract from email step
    • Bulk process PDF files using the Pull from file queue step

    Check out this Parabola University video for a quick intro to our PDF parsing capabilities, and see below for an overview of how to read and configure your PDF data in Parabola.

    Understanding your PDF data

    Parabola’s Pull from PDF file step can be configured to return Columns or Keys:

    • Columns are parts of tables that are likely to have more than one row associated with them
    • Keys are single pieces of data that apply to the entire document. For example, “Total” rows, or fields like dates that appear only once at the top of a document, are best expressed as keys
    • Sometimes AI can interpret something as a column or a key that a human might consider the other. If the tool is not correctly pulling a piece of information, you might try experimenting with columns versus keys for that data point
    • Both columns and keys can be given additional information from you to ensure the tool is identifying and returning the correct information - more on that below!

    Step Configuration

    You can use Extract from PDF, Extract from email, and Pull from file queue to parse PDFs. Once you have a PDF file uploaded into your Flow, the configuration settings are uniform.

    Extract a table

    1. Auto-detected Table (default)
    Parabola scans your PDF, detects possible tables, and labels the most likely columns. This option uses LLM technology and works exceptionally well if the PDF document has a clear, structured table. All detected tables will be available in the sub-dropdown under the "Use an auto-detected table" dropdown.

    • Quickest setup
    • Works best when your table has headers
    • You can manually add more columns or keys after

    2. Define a Custom Table
    Manually define the structure of your table if the AI didn’t pick it up. You can name the table and define the columns that you want to extract from the PDF by clicking on the + Add Column button.

    • Good for multi-table documents
    • Works well with tables spread across multiple pages
    • Requires a bit more setup

    3. Extract All Data (OCR-first mode)
    Use OCR to return all text from the PDF — helpful if the structure is complex or you're feeding the result into an AI step later. We only recommend this option if the first two extraction methods aren't yielding the desired results.

    Return formats:

    • All data → Every value, one per row
    • Table data → Tables split by page, each with a table ID
    • Key-value pairs → Labeled items like SKU: 12345
    • Raw text → One cell per page, useful for follow-up AI parsing

    Extract values

    If there are document-level values like invoice date and PO number that you want to extract, add them as keys in this section. You can add this by clicking on the “+ Add key” button. Each key that you configure will be represented as its own column and the value will be repeated across all the rows of the resulting data set.

    • Column and key names can be descriptive or instructive, and do not need to match exactly what the PDF says. However, you should try to ensure the name is something that the underlying AI can associate with the desired column of data
    • Providing examples is the best way to increase the accuracy of column (or key) parsing
    • The “Additional instructions to find this value” field is not required; however, here you can input further instructions on how to identify a value, as well as instructions on how to manipulate that value. For example, in a scenario where you want to make two distinct columns out of a single value in the file, say an order number in the format “ABC:123”, you might use the prompt: “Take the order ID and extract all of the characters before the ‘:’ into a new column”

    See below how in this case with handwriting, with more instructions the tool is able to determine if there is writing next to the word “YES” or “NO”.

    Fine Tuning

    You can give the AI more context by typing additional context and instructions into this text box. Try using specific examples, or explain the situation and the specific desired outcome. Consult the chat interface on the lefthand side to help you write clear instructions.

    Advanced Settings

    1. Text parsing approach
    You can specify the text parsing approach if necessary. The default setting is “Auto” and we recommend keeping it this way if possible. If it’s not properly parsing your PDF, you can choose between “OCR” and “Markdown”.

    • OCR - This will use a more sophisticated version of OCR text extraction that can be helpful for complex documents such as those with handwriting. This more advanced model may, however, result in the tool running slower.
    • Markdown - This will use Markdown for parsing. It is generally faster for parsing and may work better for certain documents, like pdfs that have nested columns and rows.

    2. Retry step on error
    The checkbox will be checked by default. LLMs can occasionally return unexpected errors and oftentimes, re-running the step will resolve the issue. When checked, this step will automatically attempt to re-run one time when encountering an unexpected error.

    3. Auto-update prompt versions
    The checkbox will be unchecked by default. Occasionally, Parabola updates step prompts to make parsing results more accurate and reliable. These updates may change output results, so auto-updating is turned off by default. Enable this setting to always use the most recent prompt versions.

    4. Page filtering
    The checkbox will be unchecked by default. This setting allows users to define specific pages of a document to parse. If you only need specific values that are consistently on the same page(s), this can drastically improve run time. If you check this box, make sure to complete the dropdown settings that appear below.

    • Keep, Remove, or Autodetect
      • The Autodetect option will allow the parser to choose what pages to use.
    • The first, the last, or these
      • If you select “the first”, input a number in the “#” box to instruct how many pages from the beginning of the file should be parsed.
      • If you select “the last”, input a number in the “#” box to instruct how many pages from the end of the file should be parsed.
      • If you select “these”, input a comma-separated list of numbers in the blank box to specify which pages. For example, if you put “1, 10, 16”, the step will parse the first, tenth, and sixteenth page only of the file.

    Usage tips & Other Notes

    • The more document pages that are needed for parsing, the longer it may take. To expedite this process, you can configure the step to only review certain pages from your file. The fewer the pages, the faster the results!
    • If you need to pull data across multiple tables (from a single file), you will likely need multiple steps – one per table.
    • File size: PDF files must be under 500 MB and no more than 30 pages
    • PDFs cannot be password protected
    • We recommend always auditing the results returned in Parabola to ensure that they’re complete

    Using child columns

    Mark columns as “Child columns” if they contain rows that have values unique from the parent columns:

    Before:

    After marking “Size” as a child column:

    Use Extract from PDF to work with a single PDF file. Upload a file by either dragging a PDF file anywhere onto the canvas, or click "Click to upload a file" to select a file from your file picker.

    Step configuration instructions can be found here.

    Alternative option: Extract from email

    Extract from email can pull in data from a number of filetypes, including attached PDF files. Once configured, Parabola can be set to parse PDFs anytime the relevant email receives a PDF file.

    Step configuration instructions can be found here.

    Alternative option: Pull from file queue - PDF files

    Pull from file queue can receive PDF files and parse the relevant data. The file queue is a way to enqueue a Flow to run with a series of metadata + a file that is accessible via URL.

    Runs can be added to the file queue via API (webhook) or via Run another Parabola Flow.

    Integration: 

    Extract from email

    The Extract from email step gives you the ability to receive file attachments (CSV, XLS, PDF, or JSON files) from an incoming email and pass them to the next step (e.g., combining email data with PDF or Google Sheets data). The step also gives you the ability to pull an email subject and body into a Parabola Flow. Use this unique step to trigger Flows using content from the email itself.

    Watch the Parabola University video below to see this data pull in action.

    Default attachment settings

    To begin, take note of the generated email address that is unique to this specific flow. Copy the email address to your clipboard to start using this dedicated email address yourself or to share with others.

    The File Type is set to CSV / TSV, though you can also receive XLS / XLSX, PDF, or JSON files.

    The Delimiter is set to comma (,), but can also be adjusted to tab (\t) and semicolon (;). If needed, the default of Quote Character set to Double quote ( " " ) can be changed to single quote ( ' ' ).
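To see how these settings interact, here is a plain-Python sketch (using the standard csv module, not Parabola's parser) of how one line of an attachment splits under different delimiter and quote character settings:

```python
import csv
import io

def parse_line(raw, delimiter=",", quotechar='"'):
    """Split one line of text using the given delimiter and quote character."""
    return next(csv.reader(io.StringIO(raw), delimiter=delimiter, quotechar=quotechar))

# Comma-delimited with double quotes (the default settings):
parse_line('sku,"blue, large",19.99')
# → ['sku', 'blue, large', '19.99']

# Semicolon-delimited with single quotes:
parse_line("sku;'blue; large';19.99", delimiter=";", quotechar="'")
# → ['sku', 'blue; large', '19.99']
```

Note how the quote character keeps a delimiter inside a field from splitting that field.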

    Custom settings

    This step contains optional Advanced settings, where you can tell Parabola to skip a certain number of rows or columns when receiving the attached file.

    Auto-forwarding emails into a Parabola flow

    To auto-forward a CSV attachment to an email outside of your domain, you may need to verify the @inbound.parabola.io email address. The below example shows how to set this up in Gmail.

    Video overview

    Step-by-step instructions

    1. Prepare Your Extract from Email Step in Parabola

    1. In your Parabola Flow, drag in a new Extract from Email step.
    2. Configure it to pull in email content, not just attachments.
    3. Click Update Results to save this configuration.

    💡 You’ll use this address to forward emails into your Parabola Flow. Don't forget to copy this email address.

    2. Set Up Forwarding in Gmail

    1. Go to Gmail → click the gear icon → See all settings.
    2. Navigate to the Forwarding and POP/IMAP tab.
    3. Click “Add a forwarding address.”
    4. Paste the email address from your Parabola step and click Next → Proceed.

    3. Confirm the Gmail Forwarding Request via Parabola

    1. Back in Parabola, wait for the Flow to run when it receives the verification email.
    2. Click "View email content" and click on the auto-forwarding link.
    3. Follow Parabola's prompt to view an external URL.
    4. Once you're on the confirmation URL, click "Confirm".

    ✅ Gmail will now recognize the Parabola address as a valid forwarding destination.

    4. Create a Gmail Filter to Automatically Forward Specific Emails

    1. In Gmail, go to Settings → Filters and Blocked Addresses → Create a new filter.
    2. Set criteria such as:
      • From: nycwarehouse@gmail.com
      • Subject: New York City Warehouse Inventory
      • Has attachment: ✅
    3. Click Create filter, then:
      • Check Forward it to and select your verified Parabola email address.
      • Click Create filter.

    5. Clean up your Flow (If necessary)

    1. If you created a temporary Extract from Email step just for the verification, you can now delete it.
    2. Your Parabola Flow will continue to receive the filtered, auto-forwarded emails daily.

    Other troubleshooting tips

    • If you do not see the email content come into the Flow after a few minutes, double-check the email settings on that step/Flow. Click on the gear icon on the left-hand side of the step where it says "View all Flow email settings". Make sure the checkbox "Reject emails that do not contain valid attachments" is unchecked.
    • If it is already checked, check your email inbox for an email with the subject line, "Sorry, we were unable to process your email attachment". The verification link from Gmail should be available in the content of this email. Click on the verification link and you will have successfully verified this forwarding address!

    Pull multiple file attachments

    By default, Flows will run with the first valid attached file. If you want the Flow to run through multiple attached files (multiple attachments on one email), open the “Email trigger settings” modal and change the setting to “Run the Flow once per attachment:”

    (Access these settings from the Extract from email step, or from the Flow trigger settings on the published Flow page.)

    For emails with multiple files attached, the Flow will run once per file received, sequentially.

    • Files must be of the same type (CSV, XLS, PDF, or JSON) for the runs to process.
    • The file type is defined in the initial step settings (“File type” dropdown).
    • Any files received that are of a different type will cause a Flow run error.

    Pull from email content

    We also support the ability to pull in additional information about an email. The default behavior pulls:

    • Subject
    • Body (plain text)
    • CC
    • From
    • Attached file name

    Additional fields:

    • Body (HTML)
    • Body (all URLs)
    • Attached file URL

    To access these fields, you can toggle the “Pull data from” field to ‘Email content’. If you'd like to pull both an attachment and the subject and body, select ‘Email content and attachment’.

    Extract data from the body of an email with AI

    Use the “Extract data with AI” option to automatically extract tables and key values from email bodies to create structured output.

    Enable this option under "Parsing settings" when pulling in the “Email content”.

    Pull a sheet from an Excel file based on file position

    Use the "position is" option when pulling in an attached Excel document to specify which sheet to pull data from by its position, rather than its name. This is great for files that have key data in consistent sheet positions, but may not always have consistent sheet names.

    When using this option, only the number of sheets that are in the last emailed file will show in the dropdown. If a Flow using these settings is run and there is no sheet in the specified position, the step will error.

    Helpful tips

    • This step will run every time the dedicated email address receives a new attached file. This is useful for triggering your flow to run automatically, outside of a dedicated schedule or webhook.
    • If your XLS file has multiple sheets, this step auto-selects the first sheet but can be set to look for a specific sheet.
    • This step can handle attached files that are up to 5MB.
    • Each run of a Flow uses one file. If your Flow has multiple Extract from email steps, they will all access the same email / file.
    • What happens when multiple emails are received by your flow: If your flow is processing and another email (or multiple) comes in, then they will queue up to be pulled into your flow in the order they were received. All emails sent to a flow (up to 1,000 total) will be queued up and processed.
    • By default, emails that are sent to Flow email addresses must have a valid attachment. You can disable that, and allow emails without attachments, by accessing the email trigger management modal and disabling the checkbox.
    • This step can only ingest data from an email, not download a file. To generate and download a CSV from a link in an email, take the following steps:
      • Extract the CSV’s URL from the email content using Extract from email
      • Pass the URL into a Run another Flow step at the end of the Flow
      • Begin your destination Flow with Pull from file queue
      • End the destination Flow with a Generate CSV file step
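As a rough sketch of the first step above, the URLs pulled from an email body can be filtered down to CSV links with a few lines of Python (the URL and helper name here are illustrative, not part of Parabola):

```python
import re

def extract_csv_urls(body_text):
    """Find links in an email body that point at CSV files (hypothetical helper)."""
    urls = re.findall(r'https?://[^\s"\'<>]+', body_text)
    return [u for u in urls if ".csv" in u.lower()]

body = "Your report is ready: https://example.com/exports/daily.csv?token=abc"
extract_csv_urls(body)
# → ['https://example.com/exports/daily.csv?token=abc']
```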

    Use the "Extract data with AI" option to extract tables of data and individual values from messy and difficult excel files.

    Understanding your Excel data

    When extracting data from an Excel file, use the settings to extract a table, individual values, or both.

    • Tables should be composed of columns and rows, with a row representing the names of the columns
    • Individual values are single pieces of data that are applicable to the entire document. For example, a date at the top of a document or an invoice number
    • Columns and individual values can be given additional information to ensure the tool is identifying and returning the correct information - more on that below!

    Step Configuration

    Selecting Excel extraction

    Once you have an Excel file in your flow, select "Extract data with AI". You will see options to add details to "Extract a table" and/or "Extract individual values".

    Clicking on either of those will show additional fields to fill out. Each step can extract 1 table and any number of individual values.

    Extract a table

    Once you enable table extraction, do the following:

    1. Give your table a description - this is used by AI to find the table so it's important to be clear and precise, especially if many tables are present.
    2. Define your columns - each column can be named, given example values, and additional instructions. If a column is conceptually clear (i.e. "Item description") then a name might be all you need. But if the name of the column is ambiguous, or its values are ambiguous, it is best practice to add example cell values, as well as additional instructions describing what this column represents and how an AI should find it.

    Extract individual values

    Once you enable individual value extraction, do the following:

    1. Define your value - each value can be named, given example values, and additional instructions. If a value is conceptually clear (i.e. "Port of entry") then a name might be all you need. But if the name of the column is ambiguous, or its values are ambiguous, it is best practice to add example cell values, as well as additional instructions describing what this value represents and how an AI should find it.

    Choosing the "type" for a column or individual value

    Columns and individual values are Text by default. But you can change that to improve accuracy:

    • Text - anything
    • True / False - results in either "True" or "False", can be used to detect checkmarks and other indicators
    • Number - will remove trailing zeros on any number
    • Currency - converts the currency to a number
    • Date - uses "2022-09-27T18:00:00.000" format
    • Signature - converts signatures to text
    • List of options - chooses from a list of possible options you provide
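As a rough illustration of how these types behave (an approximation in plain Python, not Parabola's actual implementation):

```python
from datetime import datetime

def coerce(value, type_name):
    """Approximate the column/value type coercions described above."""
    if type_name == "True / False":
        # Detect checkmarks and similar indicators
        return "True" if value.strip().lower() in ("x", "yes", "true", "✓") else "False"
    if type_name == "Number":
        return float(value)  # "7.50" -> 7.5 (trailing zeros dropped)
    if type_name == "Currency":
        return float(value.replace("$", "").replace(",", ""))  # "$1,299.00" -> 1299.0
    if type_name == "Date":
        # Normalize to the ISO-style format shown above
        return datetime.strptime(value, "%m/%d/%Y").strftime("%Y-%m-%dT%H:%M:%S.000")
    return value  # Text: pass through unchanged

coerce("$1,299.00", "Currency")   # → 1299.0
coerce("09/27/2022", "Date")      # → '2022-09-27T00:00:00.000'
```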

    Check out this Parabola University video for a quick intro to our PDF parsing capabilities, and see below for an overview of how to read and configure your PDF data in Parabola.

    Understanding your PDF data

    Parabola’s Pull from PDF file step can be configured to return Columns or Keys

    • Columns are parts of tables that are likely to have more than one row associated with them
    • Keys are single pieces of data that are applicable to the entire document. As an example - “Total” rows or fields like dates that only appear once at the top of a document are best expressed as keys
    • Sometimes AI can interpret something as a column or a key that a human might consider the other. If the tool is not correctly pulling a piece of information, you might try experimenting with columns versus keys for that data point
    • Both columns and keys can be given additional information from you to ensure the tool is identifying and returning the correct information - more on that below!

    Step Configuration

    You can use Extract from PDF, Extract from email, and Pull from file queue to parse PDFs. Once you have a PDF file uploaded into your Flow, the configuration settings are uniform.

    Extract a table

    1. Auto-detected Table (default)
    Parabola scans your PDF, detects possible tables, and labels the most likely columns. This option uses LLM technology and works exceptionally well if the PDF document has a clear, structured table. All detected tables will be available in the sub-dropdown under the "Use an auto-detected table" dropdown.

    • Quickest setup
    • Works best when your table has headers
    • You can manually add more columns or keys after

    2. Define a Custom Table
    Manually define the structure of your table if the AI didn’t pick it up. You can name the table and define the columns that you want to extract from the PDF by clicking on the + Add Column button.

    • Good for multi-table documents
    • Works well with tables spread across multiple pages
    • Requires a bit more setup

    3. Extract All Data (OCR-first mode)
    Use OCR to return all text from the PDF — helpful if the structure is complex or you're feeding the result into an AI step later. We only recommend this option if the first two extraction methods aren't yielding the desired results.

    Return formats:

    • All data → Every value, one per row
    • Table data → Tables split by page, each with a table ID
    • Key-value pairs → Labeled items like SKU: 12345
    • Raw text → One cell per page, useful for follow-up AI parsing
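For a sense of what the "Key-value pairs" format captures, here is a minimal sketch that pulls `Label: value` lines out of raw OCR text (illustrative only, not Parabola's parser):

```python
import re

def key_value_pairs(raw_text):
    """Pull 'Label: value' pairs out of raw OCR text, one pair per matching line."""
    pairs = {}
    for line in raw_text.splitlines():
        m = re.match(r"\s*([A-Za-z][\w ]*?)\s*:\s*(.+)", line)
        if m:
            pairs[m.group(1)] = m.group(2).strip()
    return pairs

page = """Invoice 2041
SKU: 12345
Ship Date: 09/27/2022"""
key_value_pairs(page)
# → {'SKU': '12345', 'Ship Date': '09/27/2022'}
```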

    Extract values

    If there are document-level values like invoice date and PO number that you want to extract, add them as keys in this section. You can add this by clicking on the “+ Add key” button. Each key that you configure will be represented as its own column and the value will be repeated across all the rows of the resulting data set.

    • Column and key names can be descriptive or instructive, and do not need to match exactly what the PDF says. However, you should try to ensure the name is something that the underlying AI can associate with the desired column of data
    • Providing examples is the best way to increase the accuracy of column (or key) parsing
    • The “Additional instructions to find this value” field is not required; however, here you can input further instructions on how to identify a value, as well as instructions on how to manipulate that value. For example, in a scenario where you want to make two distinct columns out of a singular value in the file, say an order number in the format “ABC:123”, you might use the prompt: “Take the order ID and extract all of the characters before the ‘:’ into a new column”
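The order-number instruction above amounts to splitting the value at the colon. In plain Python terms (illustrative, not Parabola's internals):

```python
def split_order_id(value, separator=":"):
    """Split an order ID like 'ABC:123' into the parts before and after the separator."""
    prefix, _, suffix = value.partition(separator)
    return prefix, suffix

split_order_id("ABC:123")   # → ('ABC', '123')
```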

    See below how in this case with handwriting, with more instructions the tool is able to determine if there is writing next to the word “YES” or “NO”.

    Fine Tuning

    You can give the AI more context by typing additional context and instructions into this text box. Try using specific examples, or explain the situation and the specific desired outcome. Consult the chat interface on the left-hand side to help you write clear instructions.

    Advanced Settings

    1. Text parsing approach
    You can specify the text parsing approach if necessary. The default setting is “Auto” and we recommend keeping it this way if possible. If it’s not properly parsing your PDF, you can choose between “OCR” and “Markdown”.

    • OCR - This will use a more sophisticated version of OCR text extraction that can be helpful for complex documents such as those with handwriting. This more advanced model may, however, result in the tool running slower.
    • Markdown - This will use Markdown for parsing. It is generally faster for parsing and may work better for certain documents, like PDFs that have nested columns and rows.

    2. Retry step on error
    The checkbox will be checked by default. LLMs can occasionally return unexpected errors and oftentimes, re-running the step will resolve the issue. When checked, this step will automatically attempt to re-run one time when encountering an unexpected error.

    3. Auto-update prompt versions
    The checkbox will be unchecked by default. Occasionally Parabola updates step prompts in order to make parsing results more accurate/reliable. These updates may change output results, and as a result, auto-updating is turned off by default. Enable this setting to always use the most recent prompt versions.

    4. Page filtering
    The checkbox will be unchecked by default. This setting allows users to define specific pages of a document to parse. If you only need specific values that are consistently on the same page(s), this can drastically improve run time. If you check this box, please make sure to complete the dropdown settings that appear below.

    • Keep, Remove, or Autodetect
      • The Autodetect option will allow the parser to choose what pages to use.
    • The first, the last, or these
      • If you select “the first”, input a number in the “#” box to instruct how many pages from the beginning of the file should be parsed.
      • If you select “the last”, input a number in the “#” box to instruct how many pages from the end of the file should be parsed.
      • If you select “these”, input a comma-separated list of numbers in the blank box to specify which pages. For example, if you put “1, 10, 16”, the step will parse the first, tenth, and sixteenth page only of the file.
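The page-filtering options above can be summarized as a small selection function (a sketch, not Parabola's implementation; the "Autodetect" case is decided by the parser itself):

```python
def pages_to_parse(total_pages, mode, selector, count_or_list):
    """Resolve the Keep/Remove page-filtering settings into a list of page numbers.

    mode:     "Keep" or "Remove"
    selector: "the first", "the last", or "these"
    """
    if selector == "the first":
        chosen = set(range(1, count_or_list + 1))
    elif selector == "the last":
        chosen = set(range(total_pages - count_or_list + 1, total_pages + 1))
    else:  # "these": a comma-separated list such as "1, 10, 16"
        chosen = {int(p) for p in count_or_list.split(",")}
    if mode == "Remove":
        chosen = set(range(1, total_pages + 1)) - chosen
    return sorted(chosen)

pages_to_parse(20, "Keep", "these", "1, 10, 16")   # → [1, 10, 16]
pages_to_parse(20, "Keep", "the last", 3)          # → [18, 19, 20]
```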

    Usage tips & Other Notes

    • The more document pages that are needed for parsing, the longer it may take. To expedite this process, you can configure the step to only review certain pages from your file. The fewer the pages, the faster the results!
    • If you need to pull data across multiple tables (from a single file), you will likely need multiple steps – one per table.
    • File size: PDF files must be <500 MB and 30 pages
    • PDFs cannot be password protected
    • We recommend always auditing the results returned in Parabola to ensure that they’re complete

    Using child columns

    Mark columns as “Child columns” if they contain rows that have values unique from the parent columns:

    Before:

    After marking “Size” as a child column:

    Extract from email can pull in data from a number of filetypes, including attached PDF files. Once configured, Parabola can be set to parse PDFs anytime the relevant email receives a PDF file.

    Step configuration instructions can be found here.

    Alternative option: Pull from file queue - PDF files

    Pull from file queue can receive PDF files and parse the relevant data. The file queue is a way to enqueue a Flow run with a set of metadata plus a file that is accessible via URL.

    Runs can be added to the file queue via API (webhook) or via Run another Parabola Flow.

    Integration: 

    FedEx

    The FedEx integration allows operators to automate custom shipping alerts, integrations, and reports using live data from FedEx.

    How to authenticate

    FedEx uses token client credentials for authentication. To connect FedEx to Parabola:

    1. Go to the FedEx Developer Portal and create an application to obtain your Client ID and Client Secret. See below for in-depth instructions.
    2. In Parabola, open the FedEx integration step and click Authorize. From there, you can enter your FedEx Client ID/Secret (and optionally Child Key/Secret).

    Parabola will securely store your credentials and use them to authenticate each request to FedEx.
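Under the hood this is a standard OAuth2 client-credentials exchange. A minimal sketch of the token request Parabola makes on your behalf (the endpoint shown is from FedEx's developer documentation; verify it against the current docs before relying on it):

```python
import urllib.parse

# FedEx OAuth token endpoint (per FedEx developer docs; confirm before use)
TOKEN_URL = "https://apis.fedex.com/oauth/token"

def token_request(client_id, client_secret):
    """Build the form-encoded body for the OAuth2 client-credentials grant."""
    return urllib.parse.urlencode({
        "grant_type": "client_credentials",
        "client_id": client_id,
        "client_secret": client_secret,
    })

body = token_request("my-client-id", "my-client-secret")
# POST this body to TOKEN_URL with Content-Type: application/x-www-form-urlencoded.
# The JSON response contains an "access_token", which is then sent on each
# FedEx API request as "Authorization: Bearer <token>".
```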

    Creating an application in the FedEx Developer Portal

    1. Navigate to the FedEx Developer Portal.
    2. Click Login to access your FedEx account.
    3. In the side-menu, select My Projects.
    4. Click + CREATE API PROJECT.

    5. Complete the modal by selecting the option that best identifies your business needs for integrating with FedEx APIs.
    6. Navigate to the Select API(s) tab.
    7. Select the API(s) you want to include in your project. Based on the API(s) you select, you may need to make some additional selections.

    ⚠️ Note: If you select Track API, complete the additional steps below:
    1. Select an account number to associate with your production key.
    2. Review the Track API quotas, rate limits, and certification details.
    3. Choose whether or not you want to opt-in to emails that will notify you if you exceed your quota.

    8. Navigate to the Configure project tab.
    9. Configure your project settings with name, shipping location, and notification preferences.

    10. Navigate to the Confirm details tab.
    11. Review your project details, then accept the terms and conditions.

    12. On the Project overview page, retrieve your Client ID and Client Secret.

    💡 Tip: Use Production Keys to connect to live production data in Parabola. Use Test Keys to review the request and response formats from the documentation.

    Available data

    Using the FedEx integration in Parabola, you can bring in:

    • Shipment tracking details: Tracking numbers, carrier codes, and shipment identifiers.
    • Shipment events and scan history: Time-stamped location scans, event types, and delivery milestones.
    • Package details: Weight, dimensions, package counts, and service descriptions.
    • Origin and destination data: Full address information including city, state, postal code, and country.
    • Status information: Current status codes and descriptions such as “In Transit,” “Delivered,” or “Exception.”
    • Associated shipments: Multi-piece shipment data linked to a master tracking number.
    • Reference-based tracking: Shipments tied to PO numbers, invoices, or customer references.

    Common use cases

    • Monitor delivery performance across regions and carriers.
    • Reconcile proof-of-delivery documents with invoicing or ERP systems.
    • Identify delayed or lost shipments and alert customer service teams.
    • Consolidate tracking data from multiple warehouses or fulfillment centers.
    • Generate daily shipment dashboards showing delivery statuses and exceptions.
    • Audit carrier billing against actual delivery and weight data.

    Tips for using Parabola with FedEx

    • Schedule your flow daily to automatically refresh shipment data and stay ahead of delivery delays.
    • Use Filters to flag shipments stuck in “In Transit” status for more than a set number of days.
    • Combine with other systems (like Shopify, Netsuite, or your warehouse management system) to create end-to-end logistics visibility.
    • Export tracking documents into cloud storage (Google Drive, OneDrive, etc.) or attach them to customer records automatically.
    • Add Alerts in Parabola to notify your operations team when exceptions or delivery failures occur through Slack messages or email.
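The second tip above, flagging shipments stuck in "In Transit", boils down to a date comparison. A minimal sketch with illustrative field names (not Parabola's or FedEx's schema):

```python
from datetime import datetime, timedelta, timezone

def stuck_in_transit(shipments, max_days=5, now=None):
    """Flag shipments still 'In Transit' whose last scan is older than max_days."""
    now = now or datetime.now(timezone.utc)
    cutoff = now - timedelta(days=max_days)
    return [s for s in shipments
            if s["status"] == "In Transit"
            and datetime.fromisoformat(s["last_scan"]) < cutoff]

shipments = [
    {"tracking": "794000000001", "status": "In Transit", "last_scan": "2024-05-01T08:00:00+00:00"},
    {"tracking": "794000000002", "status": "Delivered",  "last_scan": "2024-05-09T08:00:00+00:00"},
]
stuck_in_transit(shipments, now=datetime(2024, 5, 10, tzinfo=timezone.utc))
# → only the first shipment (5+ days without a scan)
```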

    Integration: 

    Flexport

    How to authenticate

    Flexport uses OAuth 2.0 Client Credentials for secure API access. To connect Flexport to Parabola:

    1. Get credentials from Flexport: Ask your Flexport account administrators to enable API access. You’ll receive client credentials (Client ID and Client Secret) or an access token. Your administrator will:
      1. Log into the Flexport Developer Portal at https://developers.flexport.com
      2. Navigate to API Credentials
      3. Click "Create Credentials"
      4. Select the appropriate resources (endpoints) you need access to and enable them
      5. Click "Create" to generate your credentials
      6. Copy your Client ID and Client Secret
    2. Add the Pull from Flexport step to your flow
      1. In your Parabola flow, add a Pull from Flexport step.
      2. Click Authorize, then select the expiring access token option. Enter your Client ID and Client Secret when prompted. Parabola will automatically handle token exchange and renewal with this option; alternatively, you can use a manual Bearer token.
      3. Once authenticated, select the Flexport resource you want to pull (Shipments, Bookings, Invoices, etc).
      4. Configure any filters such as date ranges, statuses, or specific identifiers.

    Available data

    Using the Flexport integration in Parabola, you can bring in a comprehensive range of logistics and freight data.

    • Shipments: The core movement record, including key dates (ETD/ETA/actuals), milestones, buyers/consignees, related bookings, documents, customs entries, calculated weight/volume, dangerous goods, costs, and statuses.
    • Bookings: Space reservations with carriers, including service details and requested dates. Includes associated Booking Line Items that describe booked goods and quantities.
    • Shipment Containers (ocean): Container‑level details for ocean moves, including container numbers and attributes.
    • Commercial Invoices: Commercial invoice headers and values associated to shipments or orders.
    • Invoices: Flexport billing invoices you receive for logistics services and charges.
    • Customs Entries: Declarations and entry details tied to shipments for brokerage and compliance.
    • Documents: Files and metadata (e.g., BOL, packing list, invoice PDFs) attached to shipments and other records.
    • Products: Catalog items, descriptions, and classification data used on documents and entries.
    • Network – Companies, Company Entities, Contacts, Locations: Your trading partner graph (buyers, shippers, consignees), organizational entities, people, and addresses/locations.

    Common use cases

    • Shipment tracking
      • Inbound freight visibility & tracking
      • ERP shipment reconciliation
      • Cross‑carrier consolidation
    • Cost tracking
      • Cost reporting by carrier and/or lane
      • Centralized invoice database
      • Reconciling shipment costs with POs and quotes
    • ISF monitoring and reporting
      • Master dashboard for tracking filing statuses
      • Alerts for missing or delayed filings
    • Shipping milestone alerts
      • Slack or email alerts based on key milestone changes (e.g., ETA shifts, CPC departures)

    Tips for using Parabola with Flexport

    • Use transformation steps such as Expand JSON to re-format data to match your other systems or reporting dashboards.
    • Schedule your flow to run daily (or more frequently) so your operations team has current ETAs, milestones, and charges.
    • Use incremental pulls: Store the last successful run time and filter on updated dates to avoid reprocessing historical data.
    • Alert on exceptions: Add conditional steps to flag missing documents, large ETA deltas, or invoice variances and send Slack/email alerts.
    • Document handling: Pull document metadata first, filter to the file types you need, then retrieve and archive files to your DMS with consistent naming.
    • Map business keys: Keep a crosswalk of Flexport IDs (shipment, booking, container) to your ERP/warehouse references for reliable joins.
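The incremental-pull tip can be sketched as a small helper that remembers the last successful run time and turns it into a filter (parameter and field names here are illustrative, not Flexport's API):

```python
from datetime import datetime, timezone

def incremental_filter(state, now=None):
    """Return query params that skip already-pulled records, then advance the cutoff.

    `state` is any dict-like store persisted between runs (hypothetical).
    """
    now = (now or datetime.now(timezone.utc)).isoformat()
    cutoff = state.get("last_run")  # None on the very first run → full pull
    state["last_run"] = now
    return {"updated_after": cutoff} if cutoff else {}

state = {}
incremental_filter(state, now=datetime(2024, 5, 1, tzinfo=timezone.utc))
# → {} (first run pulls everything)
incremental_filter(state, now=datetime(2024, 5, 2, tzinfo=timezone.utc))
# → {'updated_after': '2024-05-01T00:00:00+00:00'}
```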

    Integration: 

    Frate

    How to authenticate

    1. Get your API token in Frate Returns
      1. Open the Frate app and navigate to Settings.
      2. Under API Tokens, generate a new token.
      3. Once you create it, it will only be available to copy one time, so make sure to save it somewhere safe right away.
    2. Connect in Parabola
      1. Add a Pull from Frate step.
      2. Click Authorize and paste your Frate API Token when prompted.

    Available data

    Parabola can import the following from Frate Returns:

    • Return groups: Grouped return records with IDs, associated order IDs/names, customer email, creation timestamps, and overall status. Filter by created/updated/shipped date ranges, order identifiers, and specific return group IDs.
    • Shipments: Shipment details for returns, including shipment ID, carrier, tracking number, label URL, and current status.
    • Allowlist items: Policy exception/allowlist entries with descriptions, creation timestamps, and permitted actions such as exchange, refund to original payment method, or refund to store credit.

    Common use cases

    • Pull return groups (and item-level context when available) to analyze patterns and trends by SKU, reason, and time window.
    • Detect spikes in return rate, repeat-return customers, or high-defect SKUs and trigger Slack/Email alerts for rapid CX intervention.
    • Combine Frate with sources like NetSuite and your WMS to produce cross-platform reports instantly.
    • Track SLAs by measuring time from return creation → shipment → delivery to monitor processing efficiency and surface bottlenecks.
    • Reconcile return and shipment data to confirm every return was shipped and received; auto-flag missing or duplicate records.
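The SLA-tracking idea above (time from return creation to shipment to delivery) reduces to timestamp arithmetic. A minimal sketch:

```python
from datetime import datetime

def sla_hours(created, shipped, delivered):
    """Hours between each return milestone (timestamps as ISO 8601 strings)."""
    c, s, d = (datetime.fromisoformat(t) for t in (created, shipped, delivered))
    return {
        "creation_to_shipment": (s - c).total_seconds() / 3600,
        "shipment_to_delivery": (d - s).total_seconds() / 3600,
    }

sla_hours("2024-05-01T09:00:00", "2024-05-02T09:00:00", "2024-05-04T21:00:00")
# → {'creation_to_shipment': 24.0, 'shipment_to_delivery': 60.0}
```

Flag any return whose totals exceed your target thresholds to surface bottlenecks.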

    Tips for using Parabola with Frate Returns

    • Filter first, then join Return Groups by date windows (created/updated/shipped) or specific orders to shrink payloads before joining to orders, CX, or ERP data.
    • Create Slack/Email alerts for anomalies (e.g., spike in “pending” returns, repeat-return customers, or SKUs with high defect reasons).
    • Track SLAs with timestamps to monitor processing speed and pinpoint slowdowns.
    • Reconcile shipments automatically by flagging missing tracking numbers, duplicates, or statuses stuck in transit.
    • Join returns data with NetSuite and your warehouse/WMS to power cross-platform reporting—no manual CSV exports.
    • Schedule your flow to run hourly or daily so finance, CX, and ops dashboards stay current.

    With Parabola + Frate, anything that used to start with a CSV export can now run hands-free.

    Integration: 

    Fulfil

    Use the Pull from Fulfil integration to bring key Fulfil data into Parabola — allowing you to transform your Fulfil data for more granular visibility, blend Fulfil data with information from other systems, and trigger alerts based on custom logic.

    How to authenticate

    Fulfil uses API Key authentication for secure access.

    1. Go to your Fulfil site and generate an API key. See below for in-depth instructions.
    2. In Parabola, add the Fulfil integration step.
    3. Click Authenticate and add your API key.
    4. Note: you will also need to update the tenant field to be your organization’s name. This can be found in the URL when accessing the Fulfil website (e.g., for https://acme.fulfil.io, use acme).
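The tenant is simply the subdomain of your Fulfil URL. A quick way to extract it (illustrative helper, not part of Parabola):

```python
from urllib.parse import urlparse

def tenant_from_url(url):
    """Pull the tenant name out of a Fulfil URL like https://acme.fulfil.io."""
    host = urlparse(url).hostname or ""
    return host.split(".")[0]

tenant_from_url("https://acme.fulfil.io")   # → 'acme'
```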

    Once connected, you can select from Fulfil’s available endpoints to bring live data into your flow.

    Creating a Fulfil API Key

    1. Navigate to the main page of your ERP by swapping your {tenant} in the URL: https://{tenant}.fulfil.app/client/#/

    2. Click on your username in the top right, then select Preferences.

    3. Select Manage personal access tokens.

    4. In the upper right-hand corner, click the Generate Personal access token button.

    5. Enter a helpful token description and click the Generate button.

    6. Copy the API Key and store it somewhere safe.

    Available data

    Using the Fulfil integration, you can pull in a wide range of operational data, including:

    • Sales Orders and Lines: Order headers and line details, including customers, products, quantities, prices, and fulfillment states.
    • Products: Product master data, SKUs, pricing, and inventory attributes.
    • Customers and Suppliers (Parties): Contact details, account information, and classifications for customers and vendors.
    • Invoices: Sales and purchase invoices, including totals, taxes, and payment status.
    • Shipments (Outbound and Internal): Shipment records with itemized contents, destinations, and fulfillment status.
    • Stock Moves: Detailed inventory movement logs across warehouses and transactions.
    • Purchase Orders and Lines: Order headers and line-level details, including suppliers, costs, and received quantities.
    • Production BOMs: Bill of materials for manufactured products.
    • EDI Documents: Transactional documents exchanged with trading partners.
    • Automation Rules: Rules that drive workflow automation within Fulfil.

    Common use cases

    • Consolidate sales data across channels with data from Shopify, Amazon, or other platforms.
    • Reconcile shipments and invoices to confirm fulfillment accuracy.
    • Track purchasing and supplier performance by comparing purchase orders against receipts and lead times.
    • Monitor inventory levels and turn valuation reports into dashboards for real-time visibility.
    • Automate accounting workflows with invoice and payment data to reduce manual entry.

    Tips for using Parabola with Fulfil

    • Schedule your flow to run daily or hourly to keep reports, dashboards, and reconciliations up to date.
    • Filter by state or date to limit imports and sync only recent or relevant data.
    • Join related data like Orders, Lines, and Products to create unified operational views.
    • Add validation steps to flag data mismatches (e.g., invoices without shipments).
    • Document your logic with step notes so your team can easily maintain and audit your flows.

    By connecting Fulfil with Parabola, you turn your ERP data into actionable automation, powering real-time visibility, faster reconciliations, and smarter operations across your business.

    Integration: 

    Google Sheets


    Integration: 

    Looker

    Use the Pull from Looker step to run Looks and pull in that data from Looker.

    Connect your Looker account

    To connect to Looker, you’ll need to enter your Looker Client ID and your Looker API Host URL before authenticating:

    Finding your Client ID and Looker API Host URL

    These steps only need to be followed once per Looker instance! If someone else on your team has done this, you can use the same Client ID that they have set up.

    Your Looker permissions in Parabola will match the permissions of your connected Looker account. So you will only be able to view Looks that your connected Looker account can access.

    1. Create a new user in Looker dedicated to authenticating with Parabola. You can skip this step if you are going to use an existing user.
    2. The user will need to have User or Admin permissions in Looker in order to be able to find Looks and run them.
    3. Click on the Edit button next to the user entry and click on Edit Keys next to the API3 Keys header to generate credentials.
    4. Copy the Client ID, and go to the API Explorer in the Applications section of your Looker sidebar.
    5. In the API Explorer, search for the Register OAuth App API call, click on it, and then click Run it.
    6. In the API call run section, paste your Client ID in the first field, then set "redirect_uri" to the Parabola Redirect URI from this screen and “enabled” to true. It should look like this:
      {
        "redirect_uri": "https://parabola.io/api/auth/looker/callback",
        "display_name": "Parabola OAuth Connection",
        "description": "",
        "enabled": true,
        "group_id": ""
      }
    7. Run the call, and it should return a 200 OK response.
    8. Paste your Client ID into the modal in Parabola.
    9. Find and paste your Looker API Host URL into Parabola. This is usually the base URL that you see when accessing Looker, such as: https://company.cloud.looker.com
    10. Click Submit and you will see a modal asking you to log in to your Looker account and authenticate the connection to Parabola.

    Custom settings

    Once your step is set up, you can choose the Look that you want to run from the Run this Look dropdown:

    There are also Cache settings that you can adjust:

    1. Ignore cache (default) - Ignores the cache that Looker has and asks for new data every time.
    2. Use cache if available - Looker returns cached data if it is recent enough; if the data seems stale, it runs the Look to fetch fresh data.
    3. Only pull from cache - Looker only gives data back from their cache even if the data is out of date.

    There are also additional settings that you can adjust within the step:

    Perform table calculations: Some columns in Looker are generated from user-entered Excel-like formulas. Those calculations are not run by default in the API, but are run by default within Looker. This setting tells Looker to run those calculations.

    Apply visualization options: Enable if you want things like the column names to match the names given in the Look, as opposed to the actual names of the columns in the source data.

    Apply model-specific formatting: Requests the data in a way that respects any formatting rules applied to the data model. This can be things like date and time formats.

    Common issues and how to troubleshoot

    You may sometimes see a 404 error from the Pull from Looker step. Some common reasons for that error are:

    1. The Look may not exist in the Production environment and needs to be pushed to production.
    2. The authenticated user may not have the right permissions to run the Look and needs to get access in Looker.
    3. The Look may have been deleted.

    Integration: 

    NetSuite

    The Pull from NetSuite integration enables users to connect to any NetSuite account and pull in saved search results that have been built in the NetSuite UI. Multiple saved searches, across varying search types, can be configured in a single flow.

    The following document outlines the configuration requirements in NetSuite for creating the integration credentials, defining relevant role permissions, and running the integration in Parabola.

    NetSuite configuration process

    The following configuration steps are required in NetSuite prior to leveraging the Parabola integration:

    • Create or select a web services only role that can be used by Parabola
    • Create or select a user that will be used for the integration in NetSuite. Ensure the role from the step above is applied to this user record
    • Create a new integration in NetSuite
    • This will result in the creation of your consumer key and consumer secret
    • Create a new set of access tokens that reference the user, role, and integration specified above
    • This will result in the creation of your token id and token secret

    Once complete, you will enter the unique credentials generated in the steps above into the Pull from NetSuite step in Parabola. This will also require your account id, which is obtained from your NetSuite account’s url. Ex: https://ACCOUNTID.app.netsuite.com/
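For context on what those four credentials do: NetSuite token-based authentication signs each request with an OAuth 1.0a header using HMAC-SHA256. Parabola handles this signing for you once the credentials are entered; the sketch below (the function name and endpoint URL are illustrative) just shows how the pieces fit together:

```python
import base64
import hashlib
import hmac
import secrets
import time
from urllib.parse import quote

def tba_auth_header(method, url, account_id, consumer_key, consumer_secret,
                    token_id, token_secret, nonce=None, timestamp=None):
    # OAuth 1.0a parameters required by token-based authentication
    params = {
        "oauth_consumer_key": consumer_key,
        "oauth_token": token_id,
        "oauth_nonce": nonce or secrets.token_hex(16),
        "oauth_timestamp": str(timestamp or int(time.time())),
        "oauth_signature_method": "HMAC-SHA256",
        "oauth_version": "1.0",
    }
    # Signature base string: METHOD & encoded-URL & encoded, sorted parameter string
    param_str = "&".join(f"{k}={quote(v, safe='')}" for k, v in sorted(params.items()))
    base = "&".join(quote(part, safe="") for part in (method.upper(), url, param_str))
    # Signing key combines the consumer secret and token secret
    signing_key = f"{quote(consumer_secret, safe='')}&{quote(token_secret, safe='')}"
    digest = hmac.new(signing_key.encode(), base.encode(), hashlib.sha256).digest()
    params["oauth_signature"] = quote(base64.b64encode(digest).decode(), safe="")
    fields = ", ".join(f'{k}="{v}"' for k, v in sorted(params.items()))
    return f'OAuth realm="{account_id}", {fields}'

# Hypothetical endpoint for account id 123456
header = tba_auth_header("POST", "https://123456.suitetalk.api.netsuite.com/example",
                         "123456", "consumer-key", "consumer-secret",
                         "token-id", "token-secret")
```

The consumer key/secret identify the integration record, while the token id/secret identify the specific user and role pairing.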

    The following document will review how to create each of the items above.

    Creating a NetSuite role

    The permissions specified on the role applied to your integration will determine which saved searches, transactions, lists, and results you’ll be able to access in Parabola. It is important for you to confirm that the role you plan to use has access to all of the relevant objects as required.

    The following permissions are recommended, in addition to any specific transaction/list/report permissions you may require.

    In addition to the below permissions, we also recommend adding the permissions listed in this document.

    Transactions

    • Any specific transaction types required: sales orders, purchase orders, transfer orders, etc.
    • Find transaction

    Reports

    • Any specific report types required

    Lists

    • Any specific lists required: items, locations, companies, customers, etc.
    • Perform search, persist search, and publish search

    Setup

    • Log in using Access Tokens
    • SOAP Web Services

    Custom Records:

    • Any specific custom record objects required

    Ensure the checkbox for the web services only role is selected.

    Creating a NetSuite integration

    Video walk-through of the setup process:

    Follow the path below in the NetSuite UI to create a new integration record.

    1. Setup > Integration > Manage Integrations > New
    2. Specify an integration name, ensure the status is set to active, and select the token-based authentication option.
    3. Uncheck the TBA: Authorization Role and Authorization Code Grant checkboxes.
    4. Save the record.

    A consumer key and consumer secret will be generated upon saving the record. Record these items, as they will disappear once you leave this page.

    Creating a new access token

    Once the role, user, and integration have been created, you’ll need to generate the tokens which are required for authentication in Parabola.

    Follow the path below in the NetSuite UI to create a new token record.

    1. Setup > Users/Roles > Access Tokens > New Access Tokens
    2. Specify the integration created previously, along with the desired user and role, and click Save.
    3. The newly created token id and token secret will appear at the bottom of the page. Record these credentials, as they will disappear once you leave this page.

    Configure your settings in Parabola

    1. Gather the credentials created from each step earlier in the process and navigate to the Pull from NetSuite step in Parabola.
    2. Open the Pull from NetSuite step and click Authorize or Edit Accounts.
    3. Enter each applicable token and consumer key/secret and click Authorize.

    Once authorized, you’ll be prompted to select a search type and specific saved search to run. Click refresh and observe your results!

    The Return only columns specified in the search checkbox enables a user to determine if all available columns, or only the columns included in the original search, should be returned. This setting is helpful if you’d like to return additional data elements for filtered records without having to update your search in NetSuite.

    Helpful Tips

    • The Pull from NetSuite step integrates directly with the saved search function. Based on permissions, users have the ability to access all saved searches from the NetSuite UI within Parabola.
    • If no saved search options are returned for a specific transaction type, please validate your user and role have access to the specific object you’re attempting to access.
    • Users will need permissions within NetSuite to create new integrations, manage access tokens, edit roles, etc. in order to generate the credentials required for this integration
    • Formula fields within saved searches will not be returned
    • Saved searches which include summary results are not supported
    • Ensure the user/role configured for the integration has sufficient permissions to access all necessary saved searches and results

    By default, the NetSuite API will only return the full data results from the underlying search record type (item, customer, transaction, etc) and only the internal ids of related record types (vendors, locations, etc) in a search.

    For example, running the following search in Parabola would return all of the information as expected from the base record type (item in this scenario), and the internal id of the related object (vendor).

    The best way to return additional details from related objects (vendor in this scenario) is by adding joined fields within the search. Multiple joined fields can be added to a single search to return data as necessary.

    Alternatively, another solution would be running separate searches and joining the results by using a Combine Tables step within the flow. This is demonstrated below.
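If you take the separate-searches route, the join that a Combine Tables step performs is conceptually a lookup on the shared internal id. A minimal sketch with hypothetical rows:

```python
# Hypothetical rows from two saved searches, keyed on the vendor internal id
items = [
    {"Internal ID": "101", "Name": "Widget", "Vendor": "55"},
    {"Internal ID": "102", "Name": "Gadget", "Vendor": "56"},
]
vendors = [
    {"Internal ID": "55", "Vendor Name": "Acme Supply"},
    {"Internal ID": "56", "Vendor Name": "Globex"},
]

# Index the vendor search by internal id, then enrich each item row
vendor_by_id = {v["Internal ID"]: v for v in vendors}
combined = [
    {**item, "Vendor Name": vendor_by_id[item["Vendor"]]["Vendor Name"]}
    for item in items
]
```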

    Connection

    The same credentials and role configured for pulling data from NetSuite can be leveraged within Parabola’s Send to NetSuite step.

    One key difference for posting data to NetSuite is ensuring the role has full access to REST Web Services.

    It also is important to confirm that the role has sufficient permissions enabled to create and/or update the relevant objects that are in scope for your team’s use cases.

    This is completed by selecting the relevant permission and updating the access level to Full.

    As an example, the following permissions need to be enabled for use cases that involve creating & updating sales orders:

    • Transactions > Sales Orders > Full Access

    Using the step

    Creating and updating fields within NetSuite requires providing the internal IDs for relevant objects (items, sales orders, customers, subsidiaries, etc) as opposed to providing the human readable names you’re familiar with (SKUs, order numbers, customer names, etc).

    It is a best practice to use a Pull from NetSuite step or a reference file within your flows to gather the internal IDs before using the Send to NetSuite step, which helps prevent errors.

    Use case example: Creating new sales orders based on a PDF Purchase Order from a customer

    Flow inputs:

    • An extract from email step that parses a new purchase order PDF from a customer
      • This step extracts the order number, customer name & address, each ordered SKU, the quantity, and the relevant order/ship dates
    • Pull from NetSuite steps that list master data references including name and internal ID:
      • Customers
      • Items

    Transformation logic:

    • Combine the parsed PDF data with the NetSuite reference searches based on the SKU name & Customer Name
    • Your dataset should now include the relevant internal ID for each object (Item, Customer, Location)

    Flow Outputs:

    • Connect your data to a Send to NetSuite step
      • Select the action “Create” and the object “Sales Order”
      • Configure the mandatory fields for entity ID (customer), item internal ID, status, etc.

    Record statuses

    NetSuite requires data to be imported using the specific internal status codes. Specify a status by inserting a custom value with the Status Internal Identifier from the table below.

    Bulk Creation

    A single flow run can create multiple records within NetSuite. It’s important to leverage the “grouping” function on the item-level mapping to ensure sub-items are consolidated into the relevant parent-level record.

    An example is grouping sales order items by the sales order number to ensure each item is associated with the corresponding sales order.
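Conceptually, the grouping function collects every row that shares a sales order number into one parent record with nested item lines. A rough sketch with hypothetical rows:

```python
# Hypothetical parsed rows: one row per ordered item, keyed by order number
rows = [
    {"Order Number": "SO-1001", "SKU": "A-1", "Qty": 2},
    {"Order Number": "SO-1001", "SKU": "B-2", "Qty": 1},
    {"Order Number": "SO-1002", "SKU": "A-1", "Qty": 5},
]

# Consolidate item rows under their parent sales order number
orders = {}
for row in rows:
    orders.setdefault(row["Order Number"], []).append(
        {"item": row["SKU"], "quantity": row["Qty"]}
    )
```

Here the three item rows collapse into two sales orders, each carrying its own list of item lines.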

    Tips:

    • Use Pull from NetSuite steps with saved searches for items, customers, vendors, or locations and reference them in flows to look up internal IDs
    • The NetSuite Entity field relates to Vendor on POs and Customer on SOs

    Integration: 

    Parabola Flows

    The Run another Parabola Flow step gives you the ability to trigger runs of other Parabola flows within a flow.

    Running other Flows

    Select the flow you want to trigger during your current flow's run. No data will pass through this step. It's strictly a trigger to automatically begin a consecutive run of a secondary flow.

    However, if you choose “Run once per row with a file URL”, data will be passed to the second Flow, which can be read using the Pull from file queue step.

    Use the Run behavior setting to indicate how the other Flow should run. The options that include wait will cause the step to wait until the second Flow has finished before it completes its calculation. The other options will not wait.

    Using this step in a Flow

    This step can be used with or without input arrows. If you place this step into a Flow without input arrows, it will be the first step to run. If it does have input arrows, then it will run according to the normal sequence of the Flow. Any per row options require input arrows.

    Helpful tips

    • This step is only available on our Advanced plan.
    • It can be beneficial or necessary to split large and complex Parabola flows into multiple pieces. In order for your data to be processed correctly, you may need a flow to run exactly after another flow. In this case, you can add the Run another Parabola Flow step after the last step of your flow, and have it trigger the next flow in your sequence.
    • The flow that you're trying to trigger must be published. If you're unable to find a flow in the drop-down list, make sure it is published (ie. a live version of the flow exists).

    Integration: 

    Send emails by rows

    The Send emails by row step sends one email per row in your dataset using the email address listed in a specific column. This is useful for sending personalized messages to a list of recipients. The step supports up to 75 emails per run and all messages are sent from team@parabolamail.io, with a footer that says "Powered by Parabola."

    Setting Up the Step

    1. Add the step to your Flow by dragging it onto the canvas.
    2. Connect it to the last step that contains your column of email addresses.
    3. Open the step to configure its settings.
    4. Recipients: Choose the column with the email addresses.
    5. Body Format: Choose between plain text and HTML.
    6. Subject and Body: These are required fields. You can personalize them by merging values from other columns using {curly braces}.
    7. Reply To: Enter the email address where replies should be sent.
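The {curly braces} merge works like a per-row find-and-replace: for each row, every {Column Name} placeholder in the subject or body is swapped for that row's value. A small sketch (the column names are hypothetical):

```python
# Hypothetical rows: one email will be sent per row
recipients = [
    {"Email": "ada@example.com", "Name": "Ada", "Balance": "$12.00"},
    {"Email": "sam@example.com", "Name": "Sam", "Balance": "$0.00"},
]

def merge(template, row):
    # Swap each {Column Name} placeholder for the row's value
    for column, value in row.items():
        template = template.replace("{" + column + "}", str(value))
    return template

emails = [
    {"to": row["Email"], "subject": merge("Hi {Name}, your balance is {Balance}", row)}
    for row in recipients
]
```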

    Helpful tips

    • Use HTML formatting in the Body field by selecting HTML as the format.
    • Common HTML tags like <br>, <b>, and <a> are supported.
    • Avoid exceeding the 75-recipient limit per run to prevent errors.
    • If you need to send a single email with a file attached, use the “Email a file attachment” step instead. Unlike “Send emails by row,” which sends one email per row, the "Email a file attachment" step sends one email total with a file attachment—ideal for sharing reports or exports with a fixed list of recipients.

    Integration: 

    ShipHero

    Pull data from ShipHero to create custom reports, alerts, and processes to track key metrics and provide a great customer experience.

    ShipHero is a beta integration which requires a more involved setup process than our native integrations (like Shopify and Google Analytics). Following the guidance in this doc (along with our video walkthrough) should help even those without technical experience pull data from ShipHero.

    If you run into any questions, feel free to reach out to support@parabola.io.

    Access the ShipHero integration

    Inside your flow, search for "ShipHero" in the right sidebar. When you drag the step onto the canvas, a card containing 'snippets' will appear on the canvas. To start pulling in data from ShipHero, copy a snippet and paste it onto the canvas (how to paste a snippet).

    Connect your ShipHero account

    We must start by authorizing ShipHero's API. In the "Pull from ShipHero" step's Authentication section, select "Expiring Access Token". For the Access Token Request URL, you can paste: https://public-api.shiphero.com/auth/token

    In the Request Body Parameters section, click "+add" to enter username and password with your ShipHero login credentials. A second Request Header called "Accept" will exist by default – this can be deleted. Once completed, the step's authorization window should look like this:
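Under the hood, that authorization is just a POST of your credentials to the token URL, which returns an expiring access token that the step attaches to later requests. A sketch of the equivalent request (the credentials are placeholders, and Parabola sends this for you):

```python
import json
from urllib import request

TOKEN_URL = "https://public-api.shiphero.com/auth/token"

def build_token_request(username, password):
    # JSON body carrying the ShipHero login credentials
    body = json.dumps({"username": username, "password": password}).encode()
    return request.Request(
        TOKEN_URL,
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

# Placeholder credentials; actually sending this request would return the token
req = build_token_request("you@example.com", "your-password")
# response = json.load(request.urlopen(req))  # contains the expiring token
```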

    Custom Settings

    When you drag the ShipHero step onto the canvas, there will be 5 pre-built snippets available:

    • Shipments
    • Orders
    • Returns
    • Purchase Orders
    • Products

    For everything besides Products, it's common to pull in data for a specific date range (ex. previous day or week). This is why the card begins with steps that specify a dynamic date range. For example, if you put -2 as the Start Date and -1 as the End Date, you will pull orders from the previous full day.
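The dynamic date range is simply the run date plus each offset in days. A minimal sketch of that arithmetic, using a fixed run date for illustration:

```python
from datetime import date, timedelta

def relative_range(start_offset, end_offset, today):
    # Offsets are in days relative to the run date
    start = today + timedelta(days=start_offset)
    end = today + timedelta(days=end_offset)
    return start.isoformat(), end.isoformat()

# A flow run on 2024-05-10 with offsets -2 and -1 covers the previous full day
start, end = relative_range(-2, -1, date(2024, 5, 10))  # "2024-05-08", "2024-05-09"
```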

    If you want to pull data from ShipHero that is not captured by these pre-built connections, you can modify the GraphQL Query and/or add Mutations by referencing ShipHero's GraphQL Primer.
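If you add pagination variables to your query, any <%name%> placeholder in the query body is swapped for a value before the request is sent. A sketch of that substitution (the query's field names are illustrative, not ShipHero's exact schema):

```python
# Field names here are illustrative, not ShipHero's exact schema
query = "query { orders(limit: <%limit%>, offset: <%offset%>) { id } }"

def fill(template, variables):
    # Approximates how a <%name%> placeholder is swapped before sending
    for name, value in variables.items():
        template = template.replace(f"<%{name}%>", str(value))
    return template

first_page = fill(query, {"limit": 100, "offset": 0})
```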

    Troubleshooting missing data

    By default, we pull in 20 pages of data (2,000 records). To increase this value, visit the "Pull from ShipHero" step, go to "Rate Limiting" --> "Maximum pages to fetch", and increase the value until all of your data is pulled in.

    Helpful Tips

    • Calculation Errors: The more complex your query is, the more likely the request is to fail. If you're receiving a "Calculation error", this is likely because of the complexity of your query. These results can be unstable once you begin hitting that error. To reduce the complexity of your query, eliminate any columns that you don't need from your request body, and check out ShipHero's documentation.
    • GraphQL: To learn more about making GraphQL API calls in Parabola, check out our API docs

    Integration: 

    Shopify

    The Pull from Shopify step can connect directly to your Shopify store and pull in order, line item, customer, and product data, and much more!

    This step can pull in the following information from Shopify:

    • A list of Orders with the following details: Orders, a list of Line Items sold for each order, with refunds and fulfillment included, a list of Shipping Lines for each order, and Discount Applications that have been applied to your orders
    • Your Shop Balance
    • A list of your shop Customers
    • A list of shop Disputes
    • Your Product Inventory Levels per location
    • A list of Location Details associated with your shop
    • A list of shop Payouts
    • A list of all Collections with their associated products
    • A list of Products

    Connect your Shopify account

    Select the blue Authorize button. If you're coming to Parabola from the Shopify App Store, you should see an already-connected Pull from Shopify step on your flow.  

    Default settings

    By default, once you connect your Shopify account, we'll import your Orders data with Line Items detail for the last day. From here, you can customize the settings based on the data you'd like to access within Parabola.

    Custom settings

    This section will explain all the different ways you can customize the data being pulled in from Shopify. To customize these settings, start by clicking the dropdown in part 2 of the step.

    Pulling your Orders

    Shopify orders contain all of the information about each order that your shop has received. You can see totals associated with an order, as well as customer information and more. The default settings will pull in, using the Orders detail, any order that happened in the last day. This will include information like the order total, customer information, and even the inventory location the order is being shipped from.

    If you need more granular information about what products were sold, fulfilled, or returned, view your Orders with Line Items detail. This can be useful if you want relevant product data associated with each line item in the order. 

    Available filters for orders, line items, shipping lines, and discount applications

    • Choose to include the default columns (most commonly used) or include all columns (every field that your orders contain).
    • Choose to include or not include test orders
    • Filter by order status: any, cancelled, closed, open, open and closed
    • Filter by financial status: any, authorized, paid, partially_paid, partially_refunded, pending, refunded, unpaid, voided
    • Filter by fulfillment status: any, shipped, partial, unshipped, unfulfilled (partial + unshipped)

    Date filters for orders, line items, shipping lines, and discount applications

    • Choose to filter your data by order processed date or refund processed date
    • within the previous # day, hour, week, or month
    • based on when the flow is run or the most recently completed day
    • You can also add an offset to the previous period or previous year. We have a handy helper in the step to confirm the date range we'll use to filter.
    • within the current day to date, week to date, month to date, year to date
    • You can add an offset to the previous period or previous year.
    • after x date
    • between x and y dates

    Pulling your Line Items, with refunds and fulfillments

    Each order placed with your shop contains line items - products that were purchased. Each order could have many line items included in it. Each row of pulled data will represent a single item from an order, so you may see that orders span across many rows, since they may have many line items.

    There are 4 types of columns that show up in this pull: "Orders", "Line Items", "Refunds", and "Fulfillment". When looking at a single line item (a single row), you can scroll left and right to see information about the line item, about its parent order, refund information if it was refunded, and fulfillment information if that line item was fulfilled.
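In other words, the step flattens each order's nested line items into rows, repeating the parent order's columns on every row. A sketch with a hypothetical payload:

```python
# Hypothetical order payload: one order containing several line items
order = {
    "id": 5001,
    "total_price": "45.00",
    "line_items": [
        {"sku": "TEE-S", "quantity": 1},
        {"sku": "MUG-1", "quantity": 2},
    ],
}

# One output row per line item, repeating the parent order's columns
rows = [
    {
        "Order: id": order["id"],
        "Order: total_price": order["total_price"],
        "Line Item: sku": item["sku"],
        "Line Item: quantity": item["quantity"],
    }
    for item in order["line_items"]
]
```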

    Pulling your Shipping Lines

    As your orders are fulfilled, shipments are created and sent out. Each shipment for an order is represented as a row in this pull. Because an order may be spread across a few shipments, each order may show up more than one time in this pull. There are columns referring to information about the order, and columns referring to information about the shipment that the row represents.

    Pulling your Discounts

    Every order that passes through your shop may have discounts associated with it. A shopper may use a few discount codes on their order. Since each order can have any number of discount codes applied, each row in this pull represents a discount applied to an order. Orders with no discounts will not show up in this table, and orders with several may show up a few times! There are columns referring to information about the order, and columns referring to information about the discount that was applied.

    Pulling your Shop Balance

    This is a simple option that pulls in 1 row, containing the balance of your shop, and the currency that it is set to.

    Pulling your Customers

    This option will pull in 1 row for every customer that you have in your Shopify store records.

    Available filters:

    • Choose to include the default columns (most commonly used) or include all columns (every field that your customers contain).
    • By default, we will only pull in the default address for each customer. Because customers may have more than one address, you can select the checkbox to "Expand rows to include all addresses". If you select this option, any customer with more than a single address will show up on multiple rows. For example, if your customer Juanita has 3 addresses in your system, then you will see 3 rows for Juanita, with the address information being the only data that is different for each of her rows.

    Date filters for customer data:

    • Choose to filter your data by order processed date or refund processed date
    • Within the previous # day, hour, week, or month
    • Based on when the flow is run or the most recently completed day
    • You can also add an offset to the previous period or previous year. We have a handy helper in the step to confirm the date range we'll use to filter.
    • Within the current day to date, week to date, month to date, year to date
    • You can add an offset to the previous period or previous year.
    • After x date
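The "Expand rows to include all addresses" option behaves like the sketch below: one output row per address, with the customer columns repeated on each row (the data here is hypothetical):

```python
# Hypothetical customer record with three saved addresses
customer = {
    "id": 9,
    "name": "Juanita",
    "addresses": [{"city": "Austin"}, {"city": "Boise"}, {"city": "Reno"}],
}

# One output row per address; the customer columns repeat on each row
rows = [
    {"id": customer["id"], "name": customer["name"], **address}
    for address in customer["addresses"]
]
```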

    Pulling your Disputes

    Retrieve all disputes ordered by the date they were initiated, with the most recent first. Disputes occur when a buyer questions the legitimacy of a charge with their financial institution. Each row will represent 1 dispute.

    Pulling your Product Inventory

    An inventory level represents the available quantity of an inventory item at a specific location. Each inventory level belongs to one inventory item and has one location. For every location where an inventory item is available, there's an inventory level that represents the inventory item's quantity at that location.

    This includes product inventory item information as well, such as the cost field.

    You can choose any combination of locations to pull the inventory for, but you must choose at least one. Each row will contain a product that exists in a location, along with its quantity.

    Toggle "with product information" to see relevant product data in the same view as the Product Inventory.

    Pulling your Location Details

    This is a simple option that will pull in all of your locations for this shop. The data is formatted as one row per location.

    Pulling your Payouts

    Payouts represent the movement of money between a Shopify Payments account balance and a connected bank account. You can use this pull option to pull a list of those payouts, with each row representing a single payout.

    Pulling your Collections

    Pull the name, details, and products associated with each of your collections. By default, each row returns the basic details of each collection. You can also pull the associated products with each collection. 

    Available filters:

    • You can pull in a list of your manual collections. A manual collection contains a list of products that are manually added to the collection. They may have no relation to each other.
    • You can pull in a list of your smart collections. A smart collection contains a list of products that are automatically added to the collection based on a set of shared conditions like the product title or product tags.

    Pulling your Products

    This pulls in a list of your products. Each row represents a product variant, since a product can have any number of variants. You may see that a product is repeated across many rows, with one row for each of its variants. When you set up a product, it is created as a variant, so products cannot exist without having at least one variant, even if it is the only one.

    Available filters:

    • Choose to include the default columns (most commonly used) or include all columns (every field that your products contain).
    • By default, we will only pull in one image per variant. Because you may have multiple images per variant, you can select the checkbox to "Expand rows to include all images". If you select this option, for product variants with many images, each image will be added to a new row, so product variant XYZ may show up on 3 rows if there are 3 images pulled for it.
    • You can also filter down your products by a few attributes: collection_id, handle, product_type, published status, title, and vendor.

    The Send to Shopify step can connect directly to your Shopify store and automatically update information in your store.

    This step can perform the following actions in Shopify:

    • Create new customers
    • Update existing Customers
    • Delete existing Customers
    • Add products to collections
    • Delete product-collection relationships
    • Update existing inventory items
    • Adjust existing inventory levels
    • Reset inventory levels
    • Issue refunds by line items

    Connect your Shopify account

    To connect your Shopify account from within Parabola, click on the blue "Authorize" button. For more help on connecting your Shopify account, jump to the section: Authorizing the Shopify integration and managing multiple stores.

    Custom settings

    Once you connect a step into the Send to Shopify step, you'll be asked to choose an export option.

    The first selection you'll make is whether this step is enabled (will export all data) or disabled (will not export any data). By default, this step will be enabled, but you can always disable the export if you need to.

    Then you can tell the step what to do by selecting an option from the menu dropdown.

    Create New Customers

    When using this option, every row in your input data will be used to create a new customer, so be sure that your data is filtered down to the point that every row represents a new customer to create.

    When using this step, any field that is not mapped in the settings table in the step will not be sent. Only the mapped fields will be sent to Shopify.

    Every customer must have either a unique Phone Number or Email set (or both), so be sure those fields are present, filled in, and have a mapping.

    If you create customers with tags that do not already exist in your shop, the tags will still be added to the customer.

    The address fields in this step will be set as the primary address for the customer.

    Update Existing Customers

    When using this option, every row in your input data will be used to update an existing customer, so be sure that your data is filtered down to the point that every row represents a customer to update.

    When using this step, any field that is not mapped in the settings table in the step will not be sent. Only the mapped fields will be sent to Shopify.

    Every customer must have a Shopify customer ID present in order to update successfully, so be sure that column is present, has no blanks, and is mapped to the id field in the settings.

    The address fields in this step will edit the primary address for the customer.

    Delete Existing Customers

    When using this option, every row in the step will be used to delete an existing customer, so be sure that your data is filtered down to the point that every row represents a customer to delete.

    This step only requires a single field to be mapped - a column of Shopify customer IDs to delete. Make sure your data has a column of those IDs without any blanks. You can find the IDs by using the Pull from Shopify step.

    Add Products to Collection

    Collections allow shops to organize products in interesting ways! When using this option, every row in the step will be used to add a product to a collection, so be sure that your data is filtered down to the point that every row represents a product to add to a collection.

    When using this option, any field that is not mapped in the settings table in the step will not be sent. Only the mapped fields will be sent to Shopify.

    You only need two mapped fields for this option to work - a Shopify product ID and a Shopify Collection ID. Each row will essentially say, "Add this product to this collection".

    Delete Product-Collection Relationships

    Why is this option not called "Remove products from collections" if that is what it does? Great question. Products are kept in collections by creating a relationship between a product ID and a Collection ID. That relationship exists, and has its own ID! Imagine a spreadsheet full of rows that have product IDs and Collection IDs specifying which product belongs to which collections - each of those rows needs its own ID too. That ID represents the relationship. In fact, you don't need to imagine. Use the Pull from Shopify step to pull in Product-Collection Relationships. Notice there is an ID for each entry that is not the ID of the product or the collection. That ID is what you need to use in this step.

    When using this option, every row in the step will be used to delete a product from a collection, so be sure that your data is filtered down to the point that every row represents a product-collection relationship that you want to remove.

    This step does not delete the product or the collection! It just removes the product from the collection.

    When using this step, any field that is not mapped in the settings table in the step will not be sent. Only the mapped fields will be sent to Shopify.

    You need 1 field mapped for this step to work - it is the ID of the product-collection relationship, which you can find by Pulling those relationships in the Pull from Shopify step. In the step, it is called a "collect_id", and it is the "ID" column when you pull the product-collection relationships table.

    Update Existing Inventory Items

    What's an inventory item? Well, it represents the goods available to be shipped to a customer. Inventory items exist in locations, have SKUs, costs and information about how they ship.

    There are a few aspects of an inventory item that you can update:

    • Cost: The unit cost associated with the inventory item - should be a number, such as 10 or 10.50
    • SKU: Any string of characters that you want to use as the SKU for this inventory item
    • Tracked: Whether the inventory item is tracked. Set this to true or false
    • Requires Shipping: Whether a customer needs to provide a shipping address when placing an order containing the inventory item. Set this to true or false

    When using this step, you need to provide an Inventory Item ID so that the step knows which Item you are trying to update. Remember, any field that is not mapped in the settings table in the step will not be sent. Only the mapped fields will be sent to Shopify.

    “Update” option behavior

    When using the “Update” option in the Send to Shopify step, Parabola will overwrite all existing values for any fields that are mapped in the step’s settings table. This behavior is standard for update requests and ensures that Shopify reflects the exact data provided in your flow.

    Any fields not mapped will remain unchanged in Shopify. To avoid unintended data loss or partial updates, make sure to explicitly map all fields you want to update and double-check your input data before running the flow.

    Adjust Existing Inventory Levels

    When using this option, every row in the step will be used to adjust an existing item's inventory level, so be sure that your data is filtered down to the point that every row represents an item to adjust.

    When using this step, any field that is not mapped in the settings table in the step will not be sent. Only the mapped fields will be sent to Shopify.

    Every item must have a Shopify inventory item ID present in order to adjust successfully, so be sure that column is present, has no blanks, and is mapped to the id field in the settings.

    You must provide the inventory item ID, the location ID where you want to adjust the inventory level, and the available adjustment number. That available adjustment number will be added to the inventory level that exists. So if you want to decrease the inventory level of an item by 2, set this value to -2. Similarly, use 5 to increase the inventory level by 5 units.

    Reset Inventory Levels

    When using this option, every row in the step will be used to reset an existing item's inventory level, so be sure that your data is filtered down to the point that every row represents an item to reset.

    When using this step, any field that is not mapped in the settings table in the step will not be sent. Only the mapped fields will be sent to Shopify.

    Every item must have a Shopify inventory item ID present in order to reset successfully, so be sure that column is present, has no blanks, and is mapped to the id field in the settings.

    You must provide the inventory item ID, the location ID where you want to adjust the inventory level, and the available number. That available number will be used to overwrite any existing inventory level that exists. So if you want to change an item's inventory from 10 to 102, then set this number to 102.
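    The two inventory options differ only in how the provided number is applied. A quick sketch of the arithmetic:

```python
# "Adjust" treats the provided number as a delta; "Reset" treats it as
# an absolute value that overwrites the existing level.

def adjust_inventory(current_level, available_adjustment):
    """Adjust: add the (possibly negative) number to the existing level."""
    return current_level + available_adjustment

def reset_inventory(current_level, available):
    """Reset: overwrite the existing level with the provided number."""
    return available
```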

    To use the Pull from Shopify or Send to Shopify steps, you'll need to first authorize Parabola to connect to your Shopify store.

    To start, you will need your Shopify shop URL. Take a look at your Shopify store, and you may see something like this: awesome-socks.myshopify.com - from that you would just need to copy awesome-socks to put into the first authorization prompt:

    https://assets.website-files.com/5d9bdcad630fbe7a7468a9d8/5f3c7f397823199d8485e990_Screen_Shot_2020-08-17_at_8.34.59_PM.png

    After that, you will be shown a window from Shopify, asking for you to authorize Parabola to access your Shopify store. If you have done this before, and/or if you are logged into Shopify in your browser, this step may be done automatically.

    Parabola handles authorization on the flow-level. Once you authorize your Shopify store on a flow, subsequent Shopify steps you use on the same flow will be automatically connected to the same Shopify store. For any new flows you create, you'll be asked to authorize your Shopify store again.

    Editing your authorization

    You can edit your authorizations at any time by doing the following:

    • Open your Pull from Shopify or Send to Shopify step.
    • Click on the authorization dropdown near the top of the result view.
    • Click on "Edit accounts" at the bottom of the dropdown.
    • Click the three dots next to the Shopify Auth that you are currently using or want to edit.
    • We recommend that you rename your Account Name so you can easily keep track of which Shopify store you're connected to.

    Managing multiple Shopify stores in a single flow

    If you manage multiple Shopify stores, you can connect to as many separate Shopify stores in a single flow as you need. This is really useful because you can combine data from across your Shopify stores and create holistic custom reports that provide a full picture of how your business is performing.

    Adding an authorization for another Shopify Store

    • Open your Pull from Shopify or Send to Shopify step.
    • Click on the authorization dropdown at the top of the result view.
    • Click on "Add new account" in the dropdown.
    • Another authorization window will appear for you to authorize to a different store. Don't worry, connecting to a different store in one Shopify step will not impact the already-connected Shopify steps already on your flow.
    • The "Edit Accounts" menu is how you can switch which account a step is pulling from or pushing to. We recommend renaming the Account Name of your various Shopify accounts so it's easier to toggle in between your different accounts.

    Deleting a Shopify account from authorization

    Please note that deleting a Shopify account from authorization will remove it from the entire flow, including any published versions.

    • Open your Pull from Shopify or Send to Shopify step.
    • Click on the authorization dropdown near the top of the result view.
    • Click Edit accounts.
    • Click on the three dots next to the Shopify account that you'd like to remove authorization for and choose the delete option.

    This article goes over the date filters available in the Pull from Shopify step.

    The Orders and Customers pulls in the Pull from Shopify step have the most complex date filters. We wanted to provide lots of options for filtering your data from within the step, so you can reduce the size of your initial import and pull exactly the data you want to see.

    Date filters can be a little confusing though, so here's a more detailed explanation of how we've built our most complex date filters.

    The date filters in the Pull from Shopify step, when available, can be found at the bottom of the lefthand side, right above the "Show Updated Results" button.

    The first date filter you can set is:

    • within the previous # day, hour, week, or month
    • based on when the flow is run or the most recently completed day
    • You can also add an offset to the previous period or previous year
    • Example 1: If today is August 17, 2020, and I select within the previous 1 day based on the most recently completed day with no offset, the date range used would be August 16, 2020 12:00am PDT - August 17, 2020 12:00am PDT. Since August 16, 2020 was the most recently completed day, it's pulling in data from that day.
    • Example 2: If today is August 17, 2020, and I select within the previous 1 week based on when the flow is run offset to the previous period, the date range used would be August 3, 2020 - August 10, 2020. This is temporarily calculated based on the assumption that I'll run my flow soon. It will be automatically recalculated at the time I actually run my flow. The previous one week from today would be August 10, 2020 - August 17, 2020. Since I'm offsetting to the previous period (one week), the date range is pulling data from the week prior.
    • Example 3: If today is August 17, 2020, and I select within the previous 1 month based on the most recently completed month offset to the previous year, the date range used is July 1, 2019 12:00am PDT - August 1, 2019 12:00am PDT. The most recently completed month would be July 2020, and I want to pull data from that month. By offsetting to the previous year, I see data from July 2019.

    The second date filter you can set is:

    • within the current day to date, week to date, month to date, year to date
    • You can add an offset to the previous period or previous year.
    • Example 1: If today is August 17, 2020, and I select within the current month to date with no offset, the date range used will be August 1, 2020-August 17, 2020.
    • Example 2: If today is August 17, 2020, and I select within the current year to date with offset to the previous period, the date range used will be January 1, 2019-August 17, 2019. The previous period in this situation is the same time frame, just the year before.
    • Example 3: If today is Tuesday, August 17, 2020 and I select within the current week to date with offset to the previous year, the date range used will be August 16, 2019 - August 17, 2019. Week to date is calculated with Sunday as the first day of the week. Offsetting to the previous year will take the same dates, but pull data from those dates in the previous year.
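    The period-to-date examples above boil down to simple date arithmetic. A hedged Python sketch of the month-to-date case with an optional previous-year offset (an illustration only, not Parabola's internal implementation):

```python
from datetime import date

# Sketch of "current period to date" with an optional previous-year
# offset, matching the worked examples above. Illustration only.

def month_to_date(today, offset_years=0):
    """Return a (start, end) date range for month-to-date."""
    start = date(today.year - offset_years, today.month, 1)
    end = date(today.year - offset_years, today.month, today.day)
    return start, end
```

    For example, month_to_date(date(2020, 8, 17)) covers August 1-17, 2020, and passing offset_years=1 shifts the same range back to 2019.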

    The third date filter you can set is:

    • after x date
    • Example: after January 1, 2020

    The fourth and last date filter you can set is:

    • between x and y dates
    • Example: between January 1, 2020 and June 30, 2020

    Time zones

    In this step, we indicate what time zone we're using to pull your data. This time zone matches the time zone selected for your Shopify store.

    Confirming the date range

    At the bottom of the lefthand panel of your step, if you're still uncertain if you've configured the date filters correctly, we have a handy helper to confirm the date range we'll use to filter in the step:

    https://assets.website-files.com/5d9bdcad630fbe7a7468a9d8/5f3c7e42e112a613b2937519_Screen_Shot_2020-08-17_at_8.09.45_PM.png

    This article explains how to reproduce the most commonly-used Shopify metrics. If you don't see the metric(s) you're trying to replicate, send us a note and we can look into it for you.

    The Shopify Overview dashboard is full of useful metrics. One problem is that it doesn't let you drill into the data to understand how it's being calculated. A benefit of using Parabola to work with your Shopify data is that you can easily replicate most Shopify metrics and see exactly how the raw data is used to calculate these overview metrics.

    Total Sales by line items

    This formula will show you the total sales per line item by multiplying the price and quantity of the line items sold.

    Import Orders with Line Items details

    {Line Items: Quantity} * {Line Items: Price}

    Total Refunds by line items

    This formula will show you the total refund per line item by multiplying the refunded amount and refunded quantity. In this formula, we multiply by -1 to turn it into a negative number. If you'd like to display your refunds by line items as a positive number, just don't multiply by -1.

    Import Orders with Line Items details

    {Refunds: Refund Line Items: Quantity} * {Refunds: Refund Line Items: Subtotal}*-1

    Net quantity

    This formula will show you the net quantity of items sold, taking into account and removing the items that were refunded.

    Import Orders with Line Items details

    First, use the Sum by group step to sum "Line Items: Quantity" and "Refunds: Refund Line Items: Quantity".

    Then, use the newly generated "sum" columns for your formula.

    {Line Items: Quantity (sum)}-{Refunds: Refund Line Items: Quantity (sum)}
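    The two steps above can be sketched in Python. This illustration groups rows by a hypothetical "Order ID" column (use whatever column you group by in Sum by group), sums the sold and refunded quantities, and subtracts:

```python
from collections import defaultdict

# Sketch of the net quantity calculation: sum sold and refunded
# quantities per group, then subtract. "Order ID" is a hypothetical
# grouping key, not a guaranteed Shopify column name.

def net_quantity(rows):
    sold = defaultdict(int)
    refunded = defaultdict(int)
    for r in rows:
        key = r["Order ID"]
        sold[key] += r["Line Items: Quantity"]
        refunded[key] += r["Refunds: Refund Line Items: Quantity"]
    return {k: sold[k] - refunded[k] for k in sold}

rows = [
    {"Order ID": 1001, "Line Items: Quantity": 3, "Refunds: Refund Line Items: Quantity": 1},
    {"Order ID": 1001, "Line Items: Quantity": 2, "Refunds: Refund Line Items: Quantity": 0},
]
```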

    Gross sales

    Import Orders with Orders details.

    Add a Sum by group step. Sum the "Total Line Items Price" column.

    Net sales

    Import Orders with Orders details.

    To calculate net sales, you'll want to get gross sales - refunds - discounts. This will require two steps:

    1. Add a Sum by group step and sum the following columns: "Total Line Items Price", "Total Refunded Amount", and "Total Discounts".
    2. Add an Insert Math Column step and add in the following equation:
    {Total Line Items Price (sum)}-{Total Refunded Amount (sum)}-{Total Discounts (sum)}
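    The math in step 2 is a straightforward subtraction. As a worked sketch, with each argument standing in for one of the summed columns (the figures are made up):

```python
# Net sales = gross sales - refunds - discounts, mirroring the
# Insert Math Column equation above.

def net_sales(line_items_price_sum, refunded_sum, discounts_sum):
    return line_items_price_sum - refunded_sum - discounts_sum
```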

    Total sales

    Import Orders with Line Items details.

    To calculate total sales, you'll want to get gross sales + taxes - refunds - discounts. This will require three steps:

    1. Add an Insert math column step and add in the following equation to get gross sales. Call the column "Sales":
    {Line Items: Quantity} * {Line Items: Price}
    2. Add in a Sum by group step and sum the following columns: {Sales}, {Line Items: Total Discount Allocations}, {Refunds: Refund Line Items: Subtotal}, {Line Items: Total Tax Lines}, and {Refunds: Refund Line Items: Total Tax}.
    3. Add in an Insert math column step with the following equation:
    {Sales (sum)} + ({Refunds: Refund Line Items: Subtotal (sum)}*-1) - {Line Items: Total Discount Allocations (sum)} + ({Line Items: Total Tax Lines (sum)} - {Refunds: Refund Line Items: Total Tax (sum)})
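    The equation in the final step can be sanity-checked with plain arithmetic. A sketch where each argument stands in for one of the summed columns (the figures are made up):

```python
# Total sales = sales - refunds - discounts + net tax, mirroring the
# final Insert math column equation above.

def total_sales(sales, refund_subtotals, discounts, tax_lines, refund_tax):
    return sales - refund_subtotals - discounts + (tax_lines - refund_tax)
```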

    Total refunds

    Import Orders with Orders details.

    • Add a Sum by group step and sum the following column: "Total Refunded Amount".

    Total discounts

    Import Orders with Orders details.

    • Add a Sum by group step and sum the following column: "Total Discounts".

    Total tax

    Import Orders with Orders details.

    • Add a Sum by group step and sum the following column: "Total Tax".

    Average order value

    Import Customers. This table will give us Total Spent per customer as well as the # of Orders by customer.

    • Add a Sum by group step and sum the columns: {Orders Count} and {Total Spent}.
    • Add an Insert math column step and use the following calculation:
    {Total Spent (sum)} / {Orders Count (sum)}.

    Alternatively, import Orders.

    • Add an Insert math column step and create a new column called Orders with the following calculation: =1
    • Add a Sum by group step and sum the columns: Orders and Total Price
    • Add an Insert math column step and create a new column called Average Order Value and use the following calculation:
    {Total Price (sum)} / {Orders (sum)}
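    The alternate calculation works because summing a constant column of 1s yields the order count. A small sketch:

```python
# Why the "=1" helper column works: summing a column of 1s per group
# gives the order count, so AOV is revenue divided by that count.

def average_order_value(order_totals):
    orders = [1 for _ in order_totals]  # the "=1" column from the step above
    return sum(order_totals) / sum(orders)
```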

    Number of orders

    Use the Count by group step after pulling in orders.

    Integration: 

    Slack

    Use the Send to Slack step to automatically post messages from your Parabola flow into a Slack channel or DM.

    Setup & Authentication

    The first person to install the Parabola Slack app in your workspace may need admin permissions. Once installed, all workspace members can use the app.

    Your authentication process depends on your Slack workspace settings:

    • If your workspace allows app installs, the Parabola app installs during authentication of the Send to Slack step.
    • If not, you may need an admin to install it. Some workspaces provide an option to submit a request to your admin for approval.

    To connect a Send to Slack step:

    1. Drag a Send to Slack step onto your canvas.
    2. Click the blue Connect to Slack button.
      If you’re connecting for the first time, select + Add new account. To update an existing account, click Edit Accounts.
    3. If you’ve connected before, you’ll see available options in the dropdown for quick setup.
    4. Review the permissions in the pop-up window and click Allow. If no window appears, check for a pop-up blocker.
    5. If you’re already logged in to Slack, the step connects automatically. Otherwise, follow the login instructions to connect.

    Message Settings

    • Message type:
      • Send a single message sends one message with your configured text.
      • Send one message per row sends a separate message for each row of data.
    • Message destination:
      • Select Channel message and choose a channel, or
      • Select Message to user and choose a user.
        • Both channel messages and DMs send from the “Parabola app”, not your own Slack profile.
        • To direct message multiple users, duplicate the Send to Slack step, filter rows for each user, and configure each step separately.
        • Direct Messages sent with this integration will appear under the “Apps” section in your Slack sidebar.
    • Message text
      • Write plain text or Slack markdown.
      • Reference column values dynamically with curly braces. For example: {SKU}.
      • When sending a single message, values from the first row fill in the curly-braced references.
    • Message Settings Gear icon (all on by default):
      • Include a link to this flow (requires appropriate flow permissions for recipients).
      • Expand URLs and images in Slack
      • Link usernames and channels. Channels can be referenced as “#general” and users as @alex
      • Send messages when input data has at least 1 row (default). You can update this to send messages when input data has any number of rows (including 0).
    • Sending test messages:
      • Click Send test message to send test messages to yourself without running the full flow. Test messages do not use Parabola credits.
    • Attached file:
      • Do not attach anything is the default option. This means that the recipient will only receive the content configured in the Message Text box, plus a link to the flow if you kept that enabled in your Message Settings.
      • Attach entire table as a CSV - you’ll name the CSV file that will be sent via Slack. The individual file size limit is 1 GB, based on Slack’s file upload limit.
      • Attach a file by URL - Use this setting when you have a column that contains file URLs. Merge in that column’s value by wrapping the column name in curly braces. You can also enter a file URL manually if you have one stored elsewhere.
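    The curly-brace column references described under Message text behave like simple string templating. Python's str.format_map gives equivalent behavior for illustration; the template and column names here are examples, not a Parabola API:

```python
# How curly-brace column references behave: {SKU} is replaced with that
# row's value. Template and column names are illustrative examples.

template = "Low stock alert: {SKU} has {Quantity} units left"
row = {"SKU": "SOCK-BLU-M", "Quantity": 3}
message = template.format_map(row)
```

    With "Send one message per row", this substitution runs once per row; with a single message, the first row's values are used.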

    Finding Channel names and IDs

    If you are using a version of this step that does not show a list of channels to send messages to, and requires you to type in the location of the channel, use this guide to find those names and IDs.

    Channel names are the same as they appear in Slack, e.g. #general or #we-love-parabola, but they can only be used if you are not attaching files of data. Always include the # symbol.

    When attaching files, indicate the channel using the ID (B07F36JHD), not the name (#general).

    The channel ID can be found by right clicking on the channel name in Slack, clicking “Copy link”, and taking the ID from the end of the link. For example, use the channel ID of B07F36JHD from this link: https://parabolaio.slack.com/archives/B07F36JHD
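    Taking the ID off the end of the copied link can be scripted. A trivial sketch:

```python
# Sketch: extract the channel ID from the end of a copied Slack
# channel link, as described above.

def channel_id_from_link(link):
    return link.rstrip("/").rsplit("/", 1)[-1]

cid = channel_id_from_link("https://parabolaio.slack.com/archives/B07F36JHD")
```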

    Formatting messages with markdown 

    Basics

    _italic_ will produce italicized text

    *bold* will produce bold text

    ~strike~ will produce strikethrough text

    Line breaks 

    You can write multi-line text by typing a new line, or insert a newline by including the string “\n” in your text.

    Block quotes 

    You can highlight text as a block quote by using the > character at the beginning of one or more lines.

    Code blocks 

    If you have text that you want to be highlighted like code, surround it with back-tick (`) characters. For example:

    `This is a code block`

    You can also highlight larger, multi-line code blocks by placing 3 back-ticks before and after the block. For example: 

    ```This is a code block\nAnd it's multi-line```

    Lists 

    Create lists by using a - character followed by a space. For example:

    - This

    - is

    - a list

    Links 

    URLs will automatically work. Spaces in URLs will break the URL, so we recommend that you remove any spaces from your URL links.

    You can also use markdown to adjust the text that appears as the link from the URL to something else: For example:

    <http://www.example.com|This message *is* a link>

    And create email links:

    <mailto:bob@example.com|Email Bob Roberts>

    Emoji 

    Emoji can be included in their full-color, fully-illustrated form directly in text. Once published, Slack will then convert the emoji into their common 'colon' format. For example, a message published like this:

    It's Friday 😄

    will be converted into colon format:

    It's Friday :smile:

    If you're publishing text with emoji, you don't need to worry about converting them, just include them as-is.

    The compatible emoji formats are the Unicode Unified format (used by OSX 10.7+ and iOS 6+), the Softbank format (used by iOS 5) and the Google format (used by some Android devices). These will be converted into their colon-format equivalents. The list of supported emoji is taken from https://github.com/iamcal/emoji-data.

    Helpful Information

    • You can preview messages by sending yourself a test DM before running the Flow.
    • Slack messages have a 40k character limit.
    • Some Slack features (like @here) aren’t supported.
    • If you’re posting into a channel, make sure the Parabola app has been added to that channel first (or ask a Channel Manager to add it).
    • You can post to private Slack channels as long as you have access to that channel.
    • If you want to still send a Slack message even when there are 0 rows in the input data, update the dropdown found in the Message Settings to "Send messages when input data has any number of rows (including 0)".

    Integration: 

    Twilio

    The Pull from Twilio step pulls messages and phone numbers from Twilio.

    Connect your Twilio account

    The first thing you'll need to do to start using the Pull from Twilio step is to authorize the step to access the data in your Twilio account.

    Double-click on the step and click "Authorize." This window will appear where you'll need to provide the Account SID and Auth Token from your Twilio account.

    To locate this information on your Twilio account, click on the blue link to Lookup Twilio Account Info. This will take you to https://www.twilio.com/console. You'll see your Account SID and Auth Token that you can copy and paste from your account to Parabola.

    Custom settings

    Once you're connected, you'll have the following data types to select from:

    • Outbound Messages
    • Inbound Messages
    • Phone Numbers

    Outbound Messages

    This option pulls logs of all outbound messages you sent from your Twilio account. The returned columns are: To (phone number), From (phone number), Status, Price, Date Sent, Body (of message).

    You have optional fields you can set to filter the data. Leaving the Date Sent field blank will simply pull in the most recent 100k messages.

    Inbound Messages

    This option pulls logs of any responses or inbound messages you've received to the phone numbers associated with your Twilio account. The returned columns are: To (phone number), From (phone number), Status, Price, Date Sent, Body (of message).

    You have optional fields you can set to filter data. Leaving the Date Received field blank will simply pull in the most recent 100k messages.

    Phone Numbers

    This option pulls in phone numbers that are associated with your account. The returned columns are: Number ID, Phone Number, Friendly Name, SMS Enabled, MMS Enabled, Voice Enabled, Date Created, Date Updated.

    The Send to Twilio step triggers dynamic SMS messages sent via Twilio using data transformed in your Parabola flow. You can use Parabola to dictate who should receive your SMS messages, what message they should receive, and trigger Twilio to send them.

    Connect your Twilio account

    The first thing you'll need to do to start using the Send to Twilio step is to authorize the step to send data to your Twilio account.

    Double-click on the step and click on the blue button to Authorize. This window will appear where you'll need to provide the Account SID and Auth Token from your Twilio account.

    To locate this information on your Twilio account, click on the blue link to Lookup Twilio Account Info. This will take you to https://www.twilio.com/console. You'll see your Account SID and Auth Token that you can copy and paste from your account to Parabola.

    Custom settings

    By default, this step will be configured to Send text messages to recipients when the flow runs. If for whatever reason you need to disable this temporarily, you can select to not send text messages when the flow runs.

    Then, you'll select the following columns that contain the data for phone numbers you'd like to Send To, phone numbers you'd like to Send From, and text you'd like Twilio to send as Message Content.

    Please make sure that the phone numbers you'd like to Send From are valid Twilio phone numbers that your Twilio account is authorized to send from. Verified Caller ID phone numbers cannot be used to send outbound SMS messages.

    For Message Content, you have the option to use content from an existing column or a custom message. Select the Custom option from the dropdown if you'd like to type in a custom message. While the custom message is a great, easy option, it means that all of your recipients will receive the same message. If you'd like your messages to be customized, create your dynamic messages in a column beforehand. The Insert text column step can be particularly useful here for creating dynamic text content.

    Each row will represent a single SMS. If your data contains 50 rows that means 50 SMS messages will be sent.

    Helpful tips

    • Twilio will charge your account per message, according to your plan. You can monitor your Twilio usage by heading to Twilio's Console page.
    • Twilio has a rate limit on sending messages. They will only send as fast as one per second, or 60 per minute. If your flow is attempting to send a large number of messages, be aware that it may run for a long time to comply with this limit.
    • Parabola doesn’t automatically run the flow each time a text arrives, but you can pull in texts based on time/date parameters if you choose to schedule the flow. It’s also possible, on the Twilio side, to set up a webhook that fires every time a text is sent, which can then be set to trigger the flow via Parabola.
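    For illustration, here is a hedged Python sketch of what a per-row send with throttling looks like against Twilio's Messages API, using only the standard library. The credentials and the "Send To" / "Send From" / "Message Content" column names are placeholders, and calling send_rows would actually fire SMS messages:

```python
import base64
import time
import urllib.parse
import urllib.request

# Hedged sketch: one POST to Twilio's Messages API per row, throttled
# to roughly one message per second to respect the rate limit noted
# above. Credentials and column names are placeholders.

ACCOUNT_SID = "ACXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX"  # placeholder
AUTH_TOKEN = "your_auth_token"                       # placeholder

def build_payload(row):
    """Each row becomes one SMS: map the mapped columns to Twilio's params."""
    return {"To": row["Send To"], "From": row["Send From"], "Body": row["Message Content"]}

def send_rows(rows):
    url = f"https://api.twilio.com/2010-04-01/Accounts/{ACCOUNT_SID}/Messages.json"
    creds = base64.b64encode(f"{ACCOUNT_SID}:{AUTH_TOKEN}".encode()).decode()
    for row in rows:
        body = urllib.parse.urlencode(build_payload(row)).encode()
        req = urllib.request.Request(url, data=body,
                                     headers={"Authorization": f"Basic {creds}"})
        urllib.request.urlopen(req)  # fires the SMS
        time.sleep(1)  # stay under Twilio's ~1 message/second limit
```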

    Integration: 

    UPS

    The UPS integration is used by operators to integrate UPS’s shipping, tracking, and logistics services into their platforms and workflows.

    How to authenticate

    UPS uses OAuth 2.0 Client Credentials for secure API access.

    1. Go to your UPS Developer Portal and register an app. See below for in-depth instructions.
    2. Locate your Client ID and Client Secret from the app info page.
    3. Use these credentials in Parabola’s UPS integration setup form:
      • Enter your Client ID and Client Secret.
      • x-merchant-id is your 6-digit UPS account number.
      • Parabola will automatically use UPS’s token endpoint to request an access token.
    4. Once authenticated, you can begin importing tracking data by inquiry number or reference number.
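    For reference, the client-credentials exchange looks roughly like the sketch below. The token URL shown is UPS's documented production OAuth endpoint at the time of writing; treat it and the placeholder credentials as assumptions to verify against the UPS Developer Portal:

```python
import base64
import urllib.parse
import urllib.request

# Hedged sketch of an OAuth 2.0 client-credentials token request.
# The endpoint URL and credentials are assumptions to verify against
# the UPS Developer Portal documentation.

CLIENT_ID = "your_client_id"          # from the app info page
CLIENT_SECRET = "your_client_secret"  # from the app info page

def basic_auth_header(client_id, client_secret):
    token = base64.b64encode(f"{client_id}:{client_secret}".encode()).decode()
    return {"Authorization": f"Basic {token}",
            "Content-Type": "application/x-www-form-urlencoded"}

def fetch_access_token():
    url = "https://onlinetools.ups.com/security/v1/oauth/token"
    body = urllib.parse.urlencode({"grant_type": "client_credentials"}).encode()
    req = urllib.request.Request(url, data=body,
                                 headers=basic_auth_header(CLIENT_ID, CLIENT_SECRET))
    with urllib.request.urlopen(req) as resp:
        return resp.read()  # JSON containing the access token and expiry
```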

    Tips for using Parabola with UPS

    • Schedule your flow to run automatically for continuous visibility into shipments.
    • Use Filters to flag delayed or “Exception” status shipments for faster response.
    • Join UPS data with your order records to automate delivery confirmations.
    • Combine with other systems (like Shopify, NetSuite, or your warehouse management system) to create end-to-end logistics visibility.
    • Create email or Slack alerts in Parabola for critical milestones like “Out for Delivery” or “Delivered.”
    • Leverage proof-of-delivery images to verify receipt for high-value or time-sensitive orders.

    Creating an application in the UPS Developer Portal

    1. Navigate to the UPS Developer Portal.

    2. Click Login to access your UPS account.

    3. Click Create Application to make a new application and generate your credentials.

    ⚠️ Note: This application will be linked to the shipper account(s) and email address associated with your UPS.com ID.

    4. Select your use case, shipper account, and accept the agreement.

    5. Enter your contact information.

    💡 Tip: Consider using a group inbox that is accessible to others on your development team. You cannot change this email once the credentials are created, and losing access to it means losing access to your application.

    6. Define your application details, including the name, associated billing account number, and custom products.

    ⚠️ Note: In the Callback URL field, add the following URL: https://parabola.io/api/steps/generic_api/callback

    7. Once saved, your Client ID and Client Secret are generated.

    💡 Tip: Click Add Products to enable additional products like the Tracking and Time in Transit APIs if they have not been added to your application.

    Available data

    Using the UPS TrackService API in Parabola, you can pull in:

    • Shipments: Overview of each tracked shipment, including inquiry numbers and user relationships.
    • Packages: Details for every package in a shipment, including tracking number, weight, dimensions, and delivery details.
    • Activities: Historical scan events with timestamps and locations.
    • Status: Current status and simplified text description of the shipment (e.g., “In Transit,” “Delivered”).
    • Milestones: Key progress checkpoints for the package’s journey.
    • Delivery Information: Delivery confirmation data such as location, proof of delivery, and signature image.
    • Addresses: Origin, destination, and delivery addresses for each shipment.
    • Payment Information: Collect-on-delivery or other payment records tied to a shipment.
    • Service details: The service level used (e.g., “UPS Ground,” “UPS Next Day Air”).

    Common use cases

    • Integrate real-time delivery tracking updates into your applications or websites, allowing your business to monitor the status of shipments.
    • Reconcile delivery confirmations with order records to mark items as delivered automatically.
    • Track and analyze delivery performance by service type, carrier zone, or region.
    • Calculate shipping costs for domestic and international shipments in real-time and provide accurate shipping costs to customers at checkout.
    • Calculate estimated delivery times for packages based on UPS’s delivery schedules and provide customers accurate delivery windows during checkout.
    • Identify delayed or stalled shipments and trigger alerts or workflows.
    • Sync shipment milestones (like “Out for Delivery” or “Delivered”) to internal dashboards or CRMs.

    Integration: 

    Visualize

    The Visualize step is a destination step used to display data as charts, styled tables, or key metrics. These visualizations can optionally be shown on the Flow canvas or on the Flow dashboard.

    Set up

    When first added to your Flow and connected to a step, the Visualize step will expand. Data flowing into the Visualize step will be shown as a table on the canvas.

    To customize this visualization and create new views, open the Visualize step by clicking "Edit this View."

    Configuring views

    Visualize steps can be configured with any number of views. Every view in a single Visualize step will use the same input data, but each view can be customized to display data in a different way.

    Syncing views to the Flow dashboard

    The Visualize step is also used to sync views to your Flow dashboard tab. When the “Show on dashboard” step option is enabled, that visualization will also appear in your Flow dashboard.

    Views in the Visualize step will be shown on your Flow dashboard by default. Uncheck the dashboard setting within the Visualize step to remove any views from the dashboard.

    Resizing, expanding and collapsing

    Visualize steps can be collapsed into normal-sized steps by clicking the collapse button, located in the top right of the expanded visualization. Similarly, collapsed Visualize steps can be expanded by clicking on the expand button under the step.

    Expanded Visualize steps can be resized using the handle in the bottom right of the step.

    Flow dashboards enable your team to easily view, share, and analyze the data that your Flows create. Use the Visualize step to create interactive reports that are shareable with your entire team. Visualizations can be powered by any step in your Flow or by Parabola Tables for historic reporting.

    Check out this Parabola University video for a brief intro to tables.

    How it works

    The Visualize step is a tool for creating tables, charts, and metrics from the output of your Flows. These views of data can be arranged and shared directly in Parabola from the Flow dashboard page.

    To create a Visualization, connect any step in your flow to a Visualize step:

    Data connected to a Visualize step will be usable to create any number of views. Those views are automatically added to your Flow dashboard, where they can be arranged and customized.

    Once you’ve added views to your Flow dashboard, you can:

    • Visualize your data in the form of tables, featured metrics, charts, and graphs.
    • Arrange a dashboard of multiple views, utilizing a tabbed or tiled layout.
    • Analyze the entire page of views using quick filters.

    Sharing tables with teammates

    Anyone with access to your Flow will be able to see the Flow dashboard:

    • "Can edit": any teammate with edit permissions can create and edit data views. Any changes to views will be visible immediately to anyone else who has access to the Flow.
    • "Can view": teammates with view permissions can see all data views, but cannot make changes.

    To share a view, you can either share the entire dashboard with your teammate (see instructions here), or click “Share” from a specific table view. Sharing the view will give your teammate access to the Flow (and its dashboard), and link them directly to that specific view.

    Sharing dashboards outside your team (external sharing)

    Any visualization can be exported as a CSV. Simply click on the "Export to CSV" button at the top right of your table or chart.

    Views are individual visualizations, accessible from the Visualize step, or on the Flow dashboard. The data connected to a Visualize step acts as a base dataset, which you can customize using views. Views can be visualized as tables, featured metrics, charts, and graphs.

    Ready for a deeper dive? This Parabola University video will walk you through some of the configurations available to fine-tune how you see your data.

    Page layout

    Arrange data views on the page with either a tab or tile layout.

    Tabs will appear like traditional spreadsheet tabs, which you can navigate through. Drag to rearrange their order.

    Tiles enable you to see all views simultaneously. You can completely customize the page by changing view height and width, and drag-and-drop to rearrange.

    Helpful tips:

    • Views will refresh their results if: the Flow runs, the base data is updated, and/or settings are changed
    • Click the overflow menu next to the name of a view to move, rename, duplicate, or delete it. Use the same menu to switch the page layout between tabs and tiles
    • Add new views by clicking the plus icon to the right of the last tab view, or by clicking the large “Add view” button below the last tile view. If you have too many tab views to see the icon, use the tab list menu on the right side of the table
    • Duplicated and new tab views will show up in the private views section, so you may need to scroll down to see your new view

    From the “Table/chart options” menu, you can select from several types of visualizations.

    Tables

    By default, visualizations display as tables. This format works well to show rows of data that are styled, calculated, grouped, sorted, or filtered.

    In the below image, the table options menu is at the top left, below the "All Inventory" tab. This is where you can access options to format and style columns, or to add aggregation calculations.

    Featured metrics

    Featured metrics allow you to display specific column calculations from the underlying table.

    Metrics can be renamed, given a color theme, and formatted (date, number, percent, currency, or accounting). The metrics options menu is in the same placement as above, represented with a '#' symbol.

    Charts and graphs

    Parabola supports several chart types:

    • Column chart
    • Line chart
    • Area chart
    • Scatter chart
    • Mixed chart (multiple types combined)

    Within the chart options menu, represented below as a mini bar graph, you can customize chart labels, color themes, gridlines, and legend placement.

    X axis

    Charts have a single value plotted on the horizontal X axis, along the bottom of the chart. Date or category values are commonly used for the X axis.

    Use the grouping option on the X axis control to aggregate values plotted in the chart. For example, if you have a week's worth of transactions and you want to see the total transaction amount per day, you would set your X axis to the day of the week and group your data to find the sum. Ungrouped values will be plotted exactly as they appear in your dataset.
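The grouping behavior described above can be sketched in plain Python. This toy example (the transaction data is illustrative only) groups a series by its X-axis value and sums the amounts, producing one plotted point per group:

```python
from collections import defaultdict

# Toy data: (day of week, transaction amount) -- illustrative only.
transactions = [
    ("Mon", 10), ("Mon", 5), ("Tue", 7), ("Wed", 3), ("Wed", 4),
]

def group_and_sum(rows):
    """Group rows by their X-axis value and sum the series values."""
    totals = defaultdict(float)
    for day, amount in rows:
        totals[day] += amount
    return dict(totals)

print(group_and_sum(transactions))  # {'Mon': 15.0, 'Tue': 7.0, 'Wed': 7.0}
```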

    Use the X axis options dropdown within the chart options menu to further fine-tune your formatting.

    Y axis

    Charts can have up to two Y axes, on the left, right, or both. Additionally, each Y axis can key to any number of data values, called series.

    Adding multiple series will show multiple bars, lines, or dots, depending on which chart you are using. The above image shows a chart using one Y axis, but several series with stacking enabled under the "Categories / stacking" dropdown.

    When you add a second Y axis, it will add a scale to the right side of the graph. Any series that are plotted in the second Y axis will adhere to that scale, whereas any series on the first Y axis will adhere to the first scale. Your charts are limited to two scales, but each series can be aggregated individually, so you can compare the mean of one data point with the sum of another, and the median of a third.

    Imagine using multiple Y axes to plot two sets of data that are related, but exist on different numerical scales, such as total revenue in one axis, and website conversion rate in another axis.

    Categories and stacking

    Many charts and graphs have category and stacking options. Depending on your previous selections with the X and Y axes, and the chart type, some options will be available in this menu.

    • “Categorize by …” will allow you to further split a Y axis value according to a subcategory that exists in your dataset. For example, you could categorize total revenue by store location to see a bar of total revenue for each store location.
    • The “Categorize and stack by …” option will function as above, and further stack your subcategories into a single bar – i.e. producing an overall column showing the total revenue, but with different colored segments for each store location.
    • The “Stack series” option will take multiple series on the X axis and stack them into a single bar, so that you can aggregate multiple columns together.

    Helpful tips

    • Add a title to charts and graphs from the “Table/chart options” menu
    • Clicking on an item in the legend will temporarily hide the series on the graph. Click again to make it reappear
    • All charts and graphs will export as CSV files that mirror the base table data

    View controls can be selected from the icons in the control bar on any view.

    Column calculations

    You can perform the following calculations on a column:

    • Count all: Counts the number of rows in the entire table, and for any groups
    • Count unique: Counts the number of unique values in the specified column for the entire table, and for any groups. Unique values are case-sensitive and space-sensitive
    • Count empty: Counts the number of blank cells in the specified column for the entire table, and for any groups. Cells with just a space character, or other invisible characters, are not considered empty or blank
    • Count not empty: Counts the number of cells that are not blank in the specified column for the entire table, and for any groups
    • Sum: Totals all numeric values in the specified column for the entire table, and for any groups. Cells that are blank or contain non-numeric values are skipped. If no result can be produced, a "--" value will be shown
    • Average: Creates an average by totaling all numeric values in the specified column for the entire table, and for any groups, and dividing the total by the total number of values used. Cells that are blank or contain non-numeric values are skipped. If no result can be produced, a "--" value will be shown
    • Median: Finds the value where one half the values are greater and half are less in the specified column for the entire table, and for any groups. Cells that are blank or contain non-numeric values are skipped. If no result can be produced, a "--" value will be shown
    • Minimum (Min): Finds the smallest value in the specified column for the entire table, and for any groups. Cells that are blank or contain non-numeric values are skipped. If no result can be produced, a "--" value will be shown
    • Maximum (Max): Finds the largest value in the specified column for the entire table, and for any groups. Cells that are blank or contain non-numeric values are skipped. If no result can be produced, a "--" value will be shown

    Only one metric can be calculated per column.
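The skip-blanks behavior shared by Sum and Average can be sketched as follows (a simplified model, not Parabola's implementation; the "--" string stands in for the dash placeholder shown when no result can be produced):

```python
def _numeric(values):
    """Extract the numeric cells, skipping blanks and non-numeric values."""
    out = []
    for v in values:
        try:
            out.append(float(v))
        except (TypeError, ValueError):
            pass  # blank or non-numeric cell: skipped
    return out

def column_sum(values):
    nums = _numeric(values)
    return sum(nums) if nums else "--"

def column_average(values):
    nums = _numeric(values)
    # Divide by the count of values actually used, not the row count.
    return sum(nums) / len(nums) if nums else "--"
```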

    Grouping

    Tables can be grouped up to 6 times. (After 6 groups, the '+ Add grouping' option will be disabled.) Groups are applied in a nested order, starting at the first group, and creating subgroups with each subsequent rule.

    Use the sort options within the group rules to determine what order the groups are shown in. Normal sort rules will be used to sort the rows within the groups.
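The nested ordering of group rules can be modeled as recursive grouping: the first rule partitions the rows, and each subsequent rule subdivides those partitions. A minimal sketch (toy row data, not Parabola's implementation):

```python
def nested_group(rows, keys):
    """Group rows by the first key, then recursively subgroup by the
    remaining keys (Parabola allows up to six levels)."""
    if not keys:
        return rows
    first, rest = keys[0], keys[1:]
    groups = {}
    for row in rows:
        groups.setdefault(row[first], []).append(row)
    return {value: nested_group(members, rest) for value, members in groups.items()}

rows = [
    {"region": "West", "store": "A"},
    {"region": "West", "store": "B"},
    {"region": "East", "store": "C"},
]
grouped = nested_group(rows, ["region", "store"])  # region groups, then store subgroups
```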

    Sorts

    Click the “Sort” button (or use the view options menu) to quickly add a new sort rule. These sorts define how rows are arranged in the view.

    Filters

    Click the “Filter” button (or use the view options menu) to quickly add a new filter rule. These filters define which rows are kept in the view.

    Filters also work with dates: select the “Filter dates to…” option and use either relative ranges (e.g. “Last 7 days”) or exact dates.

    Data formatting

    Columns, metrics, and axes can be formatted to change how their data is displayed and interpreted. Click the left-most of your configuration buttons, the "Table/Chart Options" button, to apply formatting to any column, metric, or axis. You can select auto-format, or choose from a list of categories and formats within those categories.

    In charts, the X-axis will be auto-formatted, and you can change the format as needed. All series in each Y-axis will share the same format. Axis formatting can be adjusted by clicking the gear icon next to the axis name.

    Formats will be used to adjust how data is displayed in the columns of a table, in the aggregations applied to groups and in the grand total row, and to featured metrics. When grouping a formatted column, the underlying, unformatted value will be used to determine which row goes in which group.

    When working with dates, the format is autodetected by default. If your date is not successfully detected, click the 3 dots next to the output format field and enter a custom starting format.

    Valid options are:

    If the output format uses a token that is not found in the input (e.g. converting MM-DD to MM-DD-YYYY), certain values will be assumed:

    • Day - 1
    • Month - January
    • Year - 2000

    Dates that do not adhere to the starting format will remain unformatted in your table.
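The default-filling rule above can be sketched as a simple merge of parsed date parts over the assumed defaults (a simplified model, not Parabola's implementation):

```python
from datetime import date

# Defaults assumed for tokens missing from the input format, as listed above.
DEFAULTS = {"year": 2000, "month": 1, "day": 1}  # month 1 = January

def complete_date(parts):
    """Fill in any date part the input format did not supply."""
    merged = {**DEFAULTS, **parts}
    return date(merged["year"], merged["month"], merged["day"])

# Input format MM-DD supplies only month and day; the year defaults to 2000.
print(complete_date({"month": 3, "day": 15}).strftime("%m-%d-%Y"))  # 03-15-2000
```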

    Hiding Columns

    Use the "Table/Chart Options" to hide specific columns from your table view.

    Columns can be used for sorting, grouping, and filtering even when hidden. Those settings are applied before the columns are hidden, giving you even more control over your final table.

    Hidden columns will not show up in search results, unless the option for “Display all columns” is enabled.

    Hidden columns can be filtered by quick filters.

    Hidden columns will be present in CSV exports downloaded from the view.
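The order of operations described above (settings applied before columns are hidden) can be sketched as: filter and sort on the full rows first, then drop the hidden columns from the output. A simplified model, not Parabola's implementation:

```python
def render_view(rows, sort_key, keep, hidden):
    """Filter and sort on the full rows, then drop hidden columns --
    so hidden columns can still drive sorting and filtering."""
    kept = [row for row in rows if keep(row)]
    kept.sort(key=sort_key)
    return [
        {col: val for col, val in row.items() if col not in hidden}
        for row in kept
    ]

rows = [
    {"name": "b", "score": 2},
    {"name": "a", "score": 1},
    {"name": "c", "score": 0},
]
# Sort and filter by "score" even though it is hidden from the final view.
view = render_view(rows, sort_key=lambda r: r["score"],
                   keep=lambda r: r["score"] > 0, hidden={"score"})
```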

    Freezing Columns and Rows

    Use the "Table/Chart Options" to freeze the first (left-most) column or the first row by using the checkboxes at the top. A frozen column or row will “stick,” and other columns and rows will scroll behind them.

    Quick filters

    Click "Quick Filter" in the top right corner of the dashboard to toggle the filter bar pictured below. Using "Add quick filter" or "Add date filter," you can filter data in specific columns across every view on the page. These filters are only applied for you, and will not affect how other users see this Flow. Refreshing the page will reset all quick filters.

    After 8 seconds, the combination of quick filters will be saved in the “Recents” drawer on the right side of the filter bar. Your recent filters are only visible to you, and can be reapplied with a click.

    Quick filters can only be used if you have at least one table on your Flow. Above the first table on your published Flow page, click to add a filter. The filter bar will then follow you as you scroll.

    Multiple quick filters are combined using a logical “and” statement. These filters are applied in conjunction with any filters set on individual views.

    Use the clear filters icon to remove all currently applied filters.
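The "and" combination of quick filters amounts to keeping only rows that pass every active filter. A minimal sketch with toy data:

```python
def apply_quick_filters(rows, filters):
    """Keep rows that satisfy every active quick filter (logical AND)."""
    return [row for row in rows if all(f(row) for f in filters)]

tickets = [
    {"status": "open", "priority": "high"},
    {"status": "open", "priority": "low"},
    {"status": "closed", "priority": "high"},
]
# Two quick filters combined with AND: only open, high-priority rows remain.
active = [lambda r: r["status"] == "open", lambda r: r["priority"] == "high"]
filtered = apply_quick_filters(tickets, active)
```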

    Conditional formatting

    From the Table Options menu, use the “add color rule” button to apply formatting to the columns of your Table view.

    There are 3 types of formatting that can be added:

    • Set color
    • Color rule
    • Color scale

    (The same menu can be used to remove any existing colors applied to a column.)

    Set color

    Applies a chosen color to the entire column. All cells will have a color applied.

    Color rule

    Uses a conditional rule to color specific cells. The following operators are supported:

    Color scale

    Applies a 2 color or 3 color scale to every cell in the column. All cells will have a color applied.

    When using two colors, by default the first color will be applied to the minimum value and the second color will be applied to the maximum value. When using three colors, by default, the middle color will be applied to the value 50% between the smallest and largest value in the column.

    Cells with values between the minimum, maximum, and middle value (if using 3 colors) will blend the colors they are between, creating a smooth gradient.

    When setting a custom value for the maximum or minimum on a color scale, any value in the table that is larger than the maximum or smaller than the minimum will have the maximum color or minimum color applied, respectively.
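The two-color blending and clamping behavior described above amounts to linear interpolation between RGB endpoints, with out-of-range values pinned to the endpoint colors. A simplified sketch, not Parabola's implementation:

```python
def scale_color(value, vmin, vmax, low, high):
    """Linearly blend between two RGB colors; values outside the
    [vmin, vmax] range clamp to the endpoint color."""
    if vmax == vmin:
        t = 0.0  # degenerate range: use the minimum color
    else:
        t = max(0.0, min(1.0, (value - vmin) / (vmax - vmin)))
    return tuple(round(lo + (hi - lo) * t) for lo, hi in zip(low, high))

# Red at the minimum, blue at the maximum; in-between values blend.
scale_color(5, 0, 10, (255, 0, 0), (0, 0, 255))
```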

    Click the ellipsis menu next to the format dropdown to access controls to adjust how the scale is applied.

    Switch each breakpoint to use a number, percent, or the default min/max value.

    Scales can be applied to columns containing dates, numbers, currency, etc.

    Applying multiple rules

    Multiple rules can be applied to the same column. They will be evaluated top down, starting with the first rule. Any cells that are not colored as a result of that rule move on to the next rule, until all rules have been evaluated, or all cells have been assigned a color. A cell will show the color of the first rule that evaluates to true for the value in that cell.

    After a set color or color scale is applied, no further rules will be evaluated, as all cells will have an assigned color after those rules.
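The top-down, first-match-wins evaluation described above can be sketched as a simple loop over (predicate, color) pairs (a simplified model of the behavior, not Parabola's implementation):

```python
def color_for(value, rules):
    """Evaluate color rules top-down; the first matching rule wins."""
    for predicate, color in rules:
        if predicate(value):
            return color
    return None  # no rule matched: the cell stays uncolored

# Rule order matters: 150 matches the first rule, 75 falls through to the second.
rules = [
    (lambda v: v > 100, "red"),
    (lambda v: v > 50, "yellow"),
]
```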

    Migration from “Column Emphasis”

    Existing table views may have columns with column emphasis applied. Those columns will be migrated automatically to use a set color formatting rule.

    Integration: 

    Zendesk

    How to authenticate

    Zendesk uses basic authentication with an API token. Here's how to get your credentials and connect them in Parabola:

    1. Getting your Zendesk API token
      1. Log into your Zendesk account as an administrator.
      2. Navigate to Admin Center in the sidebar and go to Apps and integrations → APIs → Zendesk API.
      3. Click Add API Token in the API tokens section.
      4. Add a Description (e.g., "Parabola Integration") to identify the token.
      5. Copy the token immediately and store it somewhere secure. Once you close the window, the full token will never be displayed again.
    2. Connecting in Parabola
      1. In your Parabola flow, add a Pull from Zendesk step.
      2. Click Authorize and add your credentials when prompted.
      3. You will also need your Zendesk subdomain (e.g., if your URL is acme.zendesk.com, then the subdomain you enter is acme).
      4. Select the Zendesk resource you want to pull (Tickets, Users, Organizations, etc.) and configure any filters like date ranges or statuses.
    Once connected, Parabola will securely use your credentials to pull data from Zendesk into your flows.
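For reference, Zendesk's token-based basic auth sends `{email}/token` as the username and the API token as the password. The sketch below shows the shape of the authenticated request made on your behalf; the subdomain, email, and token values are placeholders, and the resource path follows Zendesk's REST API conventions:

```python
import base64
import urllib.request

def build_zendesk_request(subdomain: str, email: str, api_token: str,
                          resource: str = "tickets") -> urllib.request.Request:
    """Build (but do not send) an authenticated Zendesk API request."""
    url = f"https://{subdomain}.zendesk.com/api/v2/{resource}.json"
    # Token-based basic auth: username is "{email}/token", password is the token.
    creds = base64.b64encode(f"{email}/token:{api_token}".encode()).decode()
    return urllib.request.Request(url, headers={"Authorization": f"Basic {creds}"})

# Placeholder values -- substitute your own subdomain, email, and API token.
req = build_zendesk_request("acme", "admin@acme.com", "YOUR_API_TOKEN")
# urllib.request.urlopen(req)  # response JSON contains a "tickets" array
```

Parabola handles this for you once you enter your credentials; the sketch only shows why the subdomain, email, and token are all required.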

    Available data

    Using the Zendesk integration in Parabola, you can pull in a wide range of customer service and support data, including:

    • Tickets: Support requests with details like status, priority, assignee, requester, description, tags, custom fields, timestamps, and satisfaction ratings
    • Users: Customer and agent profiles including names, emails, roles, organizations, time zones, and custom user fields
    • Organizations: Company profiles with domains, notes, tags, and custom organization fields
    • Groups: Agent teams and their memberships for routing and assignment tracking
    • Ticket Comments: All public and private comments on tickets, including attachments and author details
    • Ticket Audits: Complete change history for tickets showing what changed, when, and by whom
    • Ticket Forms: Custom forms used for different ticket types
    • Tags: Labels applied to tickets, users, and organizations for categorization
    • Triggers: Automated rules that run when ticket conditions are met
    • Automations: Time-based rules that execute on schedules
    • Macros: Pre-built responses and ticket actions for agents
    • Views: Saved ticket filters used by support teams

    Common use cases

    • Track team performance and SLA compliance including resolution times, first response times, and assignee information to monitor agent performance, identify bottlenecks, and ensure SLA targets are being met across teams and brands.
    • Analyze customer satisfaction trends with ticket details, product tags, and agent assignments to identify which issues, products, or agents drive positive or negative CSAT scores.
    • Reconcile support data with sales and billing systems in your CRM, ERP, or billing system to create unified customer views and ensure support interactions are tied to the right accounts.
    • Automate recurring support reports that pull ticket volumes, resolution metrics, and satisfaction data daily or weekly, then push formatted reports to Google Sheets, Slack, or email for stakeholders.
    • Monitor and escalate critical tickets by priority, status, or custom fields to flag urgent or stalled issues, then automatically send alerts to managers or create follow-up tasks in project management tools.
    • Audit agent activity and ticket changes for quality assurance, compliance, or training purposes.

    Tips for using Parabola with Zendesk

    • Schedule your flows to run automatically on the cadence your team needs. Run hourly for real-time dashboards, daily for operational reports, or weekly for trend analysis.
    • Use date filters when pulling tickets or audits to limit the data to recent records and keep your flows fast and focused.
    • Combine Zendesk data with other sources like Shopify orders, Stripe payments, or Salesforce accounts to create cross-system reports that show the full customer journey.
    • Normalize IDs early in your flow (ticket IDs, user IDs, organization IDs) so downstream joins and lookups work smoothly across systems.
    • Add Checks and Alerts to flag exceptions: tickets missing assignees, satisfaction ratings below a threshold, or SLA breaches.
    • Filter by status, priority, or tags to isolate specific ticket segments (e.g., only open tickets, high-priority issues, or tickets tagged "billing").
    • Use custom fields strategically: If your Zendesk instance uses custom fields extensively, map them clearly in your flow and document what each field represents for your team.
    • Set up Alerts via Slack or email in Parabola to notify your support team when critical conditions are met, like a spike in unresolved tickets or a drop in CSAT scores.

    With Parabola and Zendesk, you can turn your support data into automated workflows that save hours, improve visibility, and help your team deliver better customer experiences.