Interacting with an API for the first time can feel daunting. Each API is unique and requires different settings, but most follow conventions that make understanding and connecting to them accessible.
To learn how to best use APIs in Parabola, check out our video guides.
Parabola works best with two types of APIs. The most common API type to connect to is a REST API. Another API type rising in popularity is a GraphQL API. Parabola may be able to connect to a SOAP API, but it is unlikely due to how they are structured.
To evaluate if Parabola can connect with an API, reference this flow chart.
A REST API is an API that can return data by making a request to a specific URL. Each request is sent to a specific resource of an API using a unique Endpoint URL. A resource is an object that contains the data being requested. Common examples of a resource include Orders, Customers, Transactions, and Events.
To receive a list of orders in Squarespace, the Pull from an API step will make a request to Squarespace's Orders resource using an Endpoint URL:
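As a rough sketch of what such a request looks like under the hood, the snippet below builds a GET request to an Orders resource with a bearer token. The host and path are assumptions for illustration; always confirm the exact Endpoint URL in the API's own reference.

```python
from urllib.request import Request

# Hypothetical Endpoint URL for an Orders resource -- check the API
# reference for the real path and version segment.
ENDPOINT = "https://api.squarespace.com/1.0/commerce/orders"

def build_orders_request(api_key: str) -> Request:
    """Return a GET request for the Orders resource with a bearer token."""
    return Request(
        ENDPOINT,
        headers={"Authorization": f"Bearer {api_key}"},
        method="GET",
    )

req = build_orders_request("YOUR_API_KEY")
print(req.get_method(), req.full_url)
```

In Parabola you never write this code yourself; the Pull from an API step assembles the same request from the fields you fill in.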
GraphQL is a new type of API that allows Parabola to specify the exact data it needs from an API resource through a request syntax known as a GraphQL query. To get started with this type of API call in Parabola, set the request type to "POST" in any API step, then select "GraphQL" as the Protocol of the request body.
Once your request type is set, you can enter your query directly into the request body. When forming your query, it can be helpful to use a formatting tool to ensure correct syntax.
Our GraphQL implementation currently supports Offset Limit pagination, using variables inserted directly into the query. Variables can be created by inserting any single word between the brackets '<%%>'. Once created, variables will appear in the dropdown list in the "Pagination" section. One of these variables should correspond to your "limit", and the other should correspond to your "offset."

The limit field is static; it represents the number of results returned in each API request. The offset field is incremented in each subsequent request based on the "Increment each page by" value. The exact implementation will be specific to your API docs.
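To make the mechanics concrete, here is a sketch of how '<%%>' variables could be substituted into a GraphQL query for each page. The query and field names are hypothetical; Parabola performs this substitution for you inside the step.

```python
# Hypothetical GraphQL query using the <%%> variable syntax described above.
QUERY = """
query {
  orders(limit: <%limit%>, offset: <%offset%>) {
    id
    total
  }
}
"""

def fill_page(query: str, limit: int, page: int, increment: int) -> str:
    """Replace the limit/offset variables for a given page number."""
    offset = page * increment  # offset advances by "Increment each page by"
    return query.replace("<%limit%>", str(limit)).replace("<%offset%>", str(offset))

print(fill_page(QUERY, 10, 0, 10))  # first request: limit 10, offset 0
print(fill_page(QUERY, 10, 1, 10))  # second request: limit 10, offset 10
```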
After configuring your pagination settings, be sure to adjust the "Maximum pages to fetch" setting in the "Rate Limiting" section to retrieve more or fewer results.
GraphQL can be used for data mutations in addition to queries, as specified by the operation type at the start of your request body. For additional information on GraphQL queries and mutations, please reference GraphQL's official documentation.
The first step to connect to an API is to read the documentation that the service provides. The documentation is commonly referred to as the API Reference, or something similar. These pages tend to feature URL and code block content.
The API Reference always provides at least two points of instruction. The first point outlines how to Authenticate a request to give a user or application permission to access the data. The second point outlines the API resources and Endpoint URLs, or where a request can be sent.

Most APIs require authentication to access their data. This is likely the first part of their documentation. Try searching for the word "Authentication" in their documentation.

The most common types of authentication are Bearer Tokens, Username/Password (also referred to as Basic), and OAuth2.0.
This method requires you to send your API Key or API Token as a bearer token. Take a look at this example below:

The part that indicates it is a bearer token is this:
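Stripped of everything else, the bearer-token pattern comes down to a single request header. The token below is the placeholder value used in this guide's examples, not a real key.

```python
# The "Bearer " prefix on the Authorization header is what marks this
# as bearer-token authentication.
token = "sk_test_WiyegCaE6iGr8eSucOHitqFF"  # placeholder example token
headers = {"Authorization": f"Bearer {token}"}
print(headers["Authorization"])
```

In Parabola, you only supply the token itself; the step adds the `Bearer ` prefix and the header for you.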
This method is also referred to as Basic Authorization or simply Basic. Most often, the username and password used to sign into the service can be entered here.
However, some APIs require an API key to be used as a username, password, or both. If that's the case, insert the API key into the respective field noted in the documentation.
The example below demonstrates how to connect to Stripe's API using the Basic Authorization method.

The Endpoint URL shows a request being made to a resource called "customers". The authorization type can be identified as Basic for two reasons:
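Mechanically, Basic Authorization joins the username and password with a colon and Base64-encodes the result into an Authorization header. The sketch below shows the Stripe-style variant where the API key acts as the username and the password is left blank; the key is a placeholder.

```python
import base64

# Stripe-style Basic auth: API key as username, blank password.
# The key below is a placeholder, not a real credential.
api_key = "sk_test_WiyegCaE6iGr8eSucOHitqFF"
credentials = base64.b64encode(f"{api_key}:".encode()).decode()
headers = {"Authorization": f"Basic {credentials}"}
print(headers["Authorization"])
```

Parabola's Username/Password fields handle this encoding automatically.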
This method is an authorization protocol that allows users to sign into a platform using a third-party account. OAuth2.0 allows a user to selectively grant access for various applications they may want to use.
Authenticating via OAuth2.0 does require more time to configure. For more details on how to authorize using this method, read our guide Using OAuth2.0 in Parabola.
Some APIs will require users to generate access tokens that have short expirations. Generally, any token that expires in less than 1 day is considered to be "short-lived" and may be using this type of authentication. This type of authentication in Parabola serves a grouping of related authentication styles that generally follow the same pattern.
One very specific type of authentication that is served by this option in Parabola is called OAuth2.0 Client Credentials. This differs from our standard OAuth2.0 support, which is built specifically for OAuth2.0 Authorization Code. Both Client Credentials and Authorization Code are part of the OAuth2.0 spec, but represent different Grant Types.
Authenticating with the Expiring Access Token option is more complex than options like Bearer Token, but less complex than OAuth2.0. For more details on how to use this option, read our guide Using Expiring Access Tokens in Parabola.
A resource is a specific category or type of data that can be queried using a unique Endpoint URL. For example, to get a list of customers, you might use the Customer resource. To add emails to a campaign, use the Campaign resource.
Each resource has a variety of Endpoint URLs that instruct you how to structure a URL to make a request to a resource. Stripe has a list of resources including "Balance", "Charges", "Events", "Payouts", and "Refunds".

HTTP methods, or verbs, are a specific type of action to make when sending a request to a resource. The primary verbs are GET, POST, PUT, PATCH, and DELETE.
A header is a piece of additional information to be sent with the request to an API. If an API requires additional headers, it is commonly noted in their documentation as -H.
Remember the authentication methods above? Some APIs list the authentication type to be sent as a header. Since Parabola has specific fields for authentication, those headers can typically be ignored.
Taking a look at Webflow's API, they show two headers are required:

The first -H header is linked to a key called Authorization. Parabola takes care of that. It does not need to be added as a header. The second -H header is linked to a key called accept-version. The value of the header is 1.0.0. This likely indicates which version of Webflow's API will be used.
JavaScript Object Notation, or more commonly JSON, is a way for an API to exchange data between you and a third party. JSON follows a specific set of syntax rules.
An object is a set of key:value pairs and is wrapped in curly brackets {}. An array is a list of values wrapped in square brackets [] and is typically linked to a single key; the values in an array can themselves be objects.
JSON in API documentation may look like this:
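For instance, a small response body might contain an object of key:value pairs with an array nested inside it. The field names below are illustrative.

```python
import json

# A minimal JSON structure: an object in {} holding key:value pairs,
# with an array in [] linked to the "orders" key.
payload = {
    "customer": {
        "name": "Ada Lovelace",
        "orders": [1001, 1002, 1003],
    }
}
print(json.dumps(payload, indent=2))
```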

Most documentation will use cURL to demonstrate how to make a request using an API.
Let's take a look at this cURL example referenced in Spotify's API:
We can extract the following information:
Because Parabola handles Authorization separately, the bearer token does not need to be passed as a header.
Here's another example of a cURL request in Squarespace:

This is what we can extract:
Parabola also passes Content-Type: application/json as a header automatically. That does not need to be added.
Check out this guide to learn more about troubleshooting common API errors.
The Pull from an API step sends a request to an API to return specific data. In order for Parabola to receive this data, it must be returned in a CSV, JSON, or XML format. This step allows Parabola to connect to a third-party to import data from another service, platform, or account.
You might wonder when it is best to use the Pull from API step vs Enrich with API step. If you need to take existing data and pass it through an API, we recommend you use Enrich with API in the middle of the Flow. Enrich with API makes requests row by row. If you just need to fetch data and join it into the middle of a Flow, you could use the “Pull from API” step and then a join step.
To use the Pull from an API step, the "Request Type" and "API Endpoint URL" fields are required.

There are two ways to request data from an API: using a GET request or using a POST request. These are also referred to as verbs, and are standardized throughout REST APIs.

The most common request for this step is a GET request. A GET request is a simple way to ask for existing data from an API.
"Hey API, can you GET me data from the server?"
To receive all artists from Spotify, their documentation outlines using a GET request to the Artists resource using this Endpoint URL:

Some APIs will require a POST request to import data; however, this is uncommon. A POST request is a simple way to make changes to existing data, such as adding a new user to a table.
The request information is sent to the API in the JSON body of the request. The JSON body is a block that outlines the data that will be added.
"Hey API, can you POST my new data to the server? The new data is in the JSON body."
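Put together, a POST request pairs an Endpoint URL with a JSON body describing the data to add. The endpoint and fields below are hypothetical, for illustration only.

```python
import json
from urllib.request import Request

# Hypothetical POST request: the JSON body describes the record to create.
body = {"name": "New User", "email": "new.user@example.com"}
req = Request(
    "https://api.example.com/v1/users",
    data=json.dumps(body).encode(),
    headers={"Content-Type": "application/json"},
    method="POST",
)
print(req.get_method(), req.data.decode())
```

In Parabola, you enter the JSON body directly in the step; the step sends it with the request.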
Similar to typical websites, APIs use URLs to request or modify data. More specifically, an API Endpoint URL is used to determine where to request data from or where to send new data to. Below is an example of an API Endpoint URL.

To add your API Endpoint URL, click the API Endpoint URL field to open the editor. You can add URL parameters by clicking the +Add icon under the "URL Parameters" text in that editor. The endpoint dynamically changes based on the key/value pairs entered into this field.
Most APIs require authentication to access their data. This is likely the first part of their documentation. Try searching for the word Authentication in their documentation.
Here are the Authentication types available in Parabola:

The most common types of authentication are Bearer Tokens, Username/Password (also referred to as Basic), and OAuth2.0. Parabola has integrated these authentication types directly into this step.
This method requires you to send your API Key or API Token as a Bearer Token. Take a look at this example below:

The part that indicates it is a bearer token is this:
To add this specific token in Parabola, select Bearer Token from the Authorization menu and add "sk_test_WiyegCaE6iGr8eSucOHitqFF" as the value.
This method is also referred to as Basic Authorization or simply Basic. Most often, the username and password used to sign into the service can be entered here.
However, some APIs require an API key to be used as a username, password, or both. If that's the case, insert the API key into the respective field noted in the documentation.
The example below demonstrates how to connect to Stripe's API using the Basic Authorization method.

The Endpoint URL shows a request being made to a resource called customers. The authorization type can be identified as Basic for two reasons:
To authorize this API in Parabola, fill in the fields below:

This method is an authorization protocol that allows users to sign into a platform using a third-party account. OAuth2.0 allows a user to selectively grant access for various applications they may want to use.
Authenticating via OAuth2.0 does require more time to configure. For more details on how to authorize using this method, read our guide Using OAuth2.0 in Parabola.
Some APIs will require users to generate access tokens that have short expirations. Generally, any token that expires in less than 1 day is considered to be "short-lived" and may be using this type of authentication. This type of authentication in Parabola serves a grouping of related authentication styles that generally follow the same pattern.
One very specific type of authentication that is served by this option in Parabola is called OAuth2.0 Client Credentials. This differs from our standard OAuth2.0 support, which is built specifically for OAuth2.0 Authorization Code. Both Client Credentials and Authorization Code are part of the OAuth2.0 spec, but represent different Grant Types.
Authenticating with the Expiring Access Token option is more complex than options like Bearer Token, but less complex than OAuth2.0. For more details on how to use this option, read our guide Using Expiring Access Tokens in Parabola.
A header is a piece of additional information to be sent with the request to an API. If an API requires additional headers, it is commonly noted in their documentation as -H.
Remember the authentication methods above? Some APIs list the authentication type to be sent as a header. Since Parabola has specific fields for authentication, those headers can typically be ignored.
Taking a look at Webflow's API, they show two headers are required.

The first -H header is linked to a key called Authorization. Parabola takes care of that. It does not need to be added as a header. The second -H header is linked to a key called accept-version. The value of the header is 1.0.0. This likely indicates which version of Webflow's API will be used.

APIs typically structure data as nested objects. This means data can exist inside data. To extract that data into separate columns and rows, use the Output section to select a top-level column.
For example, a character can exist as a data object. Inside the result object, additional data is included such as their name, date of birth, and location.

This API shows a data column linked to results. To expand all of the data in the results object into neatly displayed columns, select results as the top-level column in the Output section.

If you only want to expand some of the columns, choose to keep specific columns and select the columns that you want to expand from the dropdown list.

APIs return data in pages. This might not be noticeable for small requests, but larger requests will not show all results. By default, APIs return 1 page of results. To view the other pages, pagination settings must be configured.
Each API has different Pagination settings which can be searched in their documentation. The three main types of pagination are Page, Offset and Limit, and Cursor based pagination.
APIs that use Page based pagination make it easy to request more pages. Documentation will refer to a specific parameter key for each request to return additional pages.
Intercom uses this style of pagination. Notice they reference the specific parameter key of page:

Parabola refers to this parameter as the Pagination Key. To request additional pages from Intercom's API, set the Pagination Key to page.

The Starting page is the first page to be requested. Most often, that value will be set to 0, since for most pagination settings, 0 is the first page. The Increment by value is the number of pages to advance by on each request. A value of 1 will fetch the next page. A value of 10 will fetch every tenth page.
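The settings above can be sketched as a simple loop: each request reuses the same URL with the page parameter advanced by the increment. The base URL and parameter name are illustrative.

```python
# Sketch of Page based pagination: a "page" parameter is incremented on
# each request, mirroring the Starting page / Increment by settings.
def page_urls(base_url: str, start: int, increment: int, max_pages: int):
    """Yield one request URL per page."""
    page = start
    for _ in range(max_pages):
        yield f"{base_url}?page={page}"
        page += increment

urls = list(page_urls("https://api.example.com/contacts",
                      start=0, increment=1, max_pages=3))
print(urls)
```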
APIs that use Offset and Limit based pagination require each request to limit the amount of items per page. Once that limit is reached, an offset is used to cycle through those pages.
Spotify refers to this type of pagination in their documentation:

To configure these pagination settings in Parabola, set the Pagination style to offset and limit.

The Starting Value is set to 0 to request the first page. The Increment by value is set to 10, so the first request starts at 0 and each subsequent request advances the offset by 10.
The Limit Key is set to limit to tell the API to limit the amount of items. The Limit Value is set to 10 to define the number of items to return.
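In other words, the limit stays fixed while the offset steps forward on each request. A sketch with illustrative values:

```python
# Sketch of Offset and Limit pagination: limit is static at 10 while the
# offset advances by the increment on each request.
def offset_urls(base_url: str, limit: int, increment: int, max_pages: int):
    return [
        f"{base_url}?limit={limit}&offset={page * increment}"
        for page in range(max_pages)
    ]

urls = offset_urls("https://api.example.com/albums",
                   limit=10, increment=10, max_pages=3)
print(urls)
```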
Otherwise known as the bookmark of APIs, Cursor based pagination will mark a specific item with a cursor. To return additional pages, the API looks for a specific Cursor Key linked to a unique value or URL.
Squarespace uses cursor based pagination. Their documentation states that two Cursor Keys can be used. The first one is called nextPageCursor and has a unique value:
The second one is called nextPageUrl and has a URL value:


To configure cursor based pagination using Squarespace, use these values in Parabola:

Replace the Cursor path in response with pagination.nextPageUrl to use the URL as the value. The API should return the same results.
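Conceptually, cursor pagination works like following bookmarks: each response hands back a pointer to the next page until none remains. The snippet below simulates this with an in-memory dict standing in for API responses; the key name matches Squarespace's nextPageCursor described above.

```python
# Simulated responses keyed by cursor; a real step would make one HTTP
# request per page instead of a dict lookup.
pages = {
    None:  {"items": [1, 2], "pagination": {"nextPageCursor": "abc"}},
    "abc": {"items": [3, 4], "pagination": {"nextPageCursor": "def"}},
    "def": {"items": [5],    "pagination": {"nextPageCursor": None}},
}

def fetch_all():
    """Follow nextPageCursor until the API stops returning one."""
    items, cursor = [], None
    while True:
        response = pages[cursor]  # stands in for a real request
        items.extend(response["items"])
        cursor = response["pagination"]["nextPageCursor"]
        if cursor is None:
            return items

print(fetch_all())
```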
Imagine someone asking thousands of questions all at once. Before the first question can be answered, thousands of new questions are coming in. That can become overwhelming.
Servers are no different. Making paginated API calls requires a separate request for each page. To avoid this, APIs have rate limiting rules to protect their servers from being overwhelmed with requests. Parabola can adjust the Max Requests per Minute to avoid rate limiting.

By default, this value is set to 60 requests per minute. That's 1 request per second. The Max Requests per Minute does not set how many requests are made per minute. Instead, it sets a ceiling so Parabola never asks too many questions at once.
Lowering the requests will avoid rate limiting, but the Flow will calculate much more slowly. Parabola will stop calculating a Flow after 60 minutes.
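The setting maps directly to a minimum delay between requests, which is worth keeping in mind when estimating run times:

```python
# Max Requests per Minute translates to a minimum delay between requests.
max_requests_per_minute = 60       # the default setting
delay_seconds = 60 / max_requests_per_minute
print(delay_seconds)               # 1.0 -- one request per second
```

At 30 requests per minute, for example, a 100-page pull would spend at least 200 seconds on requests alone.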
To limit the number of pages to fetch, use this field to set the value. Lower values will return data much faster. Higher values will take longer to return data.

The default value in Parabola is 5 pages. Just note, this value needs to be larger than the expected number of pages to be returned. This prevents any data from being omitted.
If you are pulling a large amount of data and want to limit how much is being pulled in while building, you can set the step to pull a lower number of pages while editing the Flow than while running the Flow.
Note, there is a 1000 page limit when building vs. running flows.
URLs tend to break when there are special characters like spaces, accented characters, or even other URLs. Most often, this occurs when using {text merge} values to dynamically insert data into a URL.
Check the "Encode URLs" box to prevent the URL from breaking if special characters need to be passed.
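To see what encoding does, here is how a merged-in value with spaces and accented characters is percent-escaped so the URL stays valid. The value is illustrative.

```python
from urllib.parse import quote

# Spaces and accented characters become %-escapes, keeping the URL intact.
merged_value = "café au lait"
print(quote(merged_value))  # caf%C3%A9%20au%20lait
```

Checking "Encode URLs" applies this kind of escaping to the merged values for you.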

By default, this step will parse the data sent back to Parabola from the API in the format indicated by the content-type header received. Sometimes, APIs will send a content-type that Parabola does not know how to parse. In these cases, adjust this setting from auto-detect to a different setting, to force the step to parse the data in a specific way.

Use the gzip option when the data is returned in a gzip format, but can be unzipped into CSV, XML, or JSON data. If you enable gzip parsing, you must also specify a response type option.
Something not right? Check out this guide to learn more about troubleshooting common API errors.
The Send to an API step sends a request to an API to export specific data. Data must be sent through the API using JSON formatted in the body of the request. This step can send data only when a flow is published.
This table shows the product information for new products to be added to a store. It shows common columns like "My Product Title", "My Product Description", "My Product Vendor", and "My Product Tags".
These values can be used to create products in bulk via the Send to an API step.

To use the Send to an API step, a Request Type, API Endpoint URL, and Authentication are required. Some APIs require Custom Headers while other APIs nest their data into a single cell that requires a Top Level Key to format into rows and columns.

There are four ways to send data with an API using POST, PUT, PATCH, and DELETE requests. These methods are also known as verbs.

The POST verb is used to create new data. The DELETE verb is used to delete data. The PUT verb is used to update existing data, and the PATCH verb is used to modify a specific portion of the data.
"Hey API, can you POST new data to the server? The new data is in the JSON body."
The API Endpoint URL is the specific location where data will be sent. Each API Endpoint URL belongs to a specific resource. A resource is the broader category to be targeted when sending data.
To create a new product in Shopify, use their Products resource. Their documentation specifies making a POST request to that resource using this Endpoint URL:

Your Shopify store domain will need to be prepended to each Endpoint URL:
The request information is sent to the API in the JSON body of the request. The JSON body is a block that outlines the data that will be added.
The body of each request is where data that will be sent through the API is added. The body must be in raw JSON format using key:value pairs. The JSON below shows common attributes of a Shopify product.
Notice the title, body_html, vendor, product_type, and tags can be generated when sending this data to an API.
Since each product exists per row, {text merge} values can be used to dynamically pass the data in the JSON body.

This will create 3 products: White Tee, Pink Pants, and Sport Sunglasses with their respective product attributes.
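As a sketch of how {text merge} values expand into one request body per row, the column and field names below follow the product example above (the exact Shopify payload shape should be confirmed in their documentation):

```python
import json

# One input row per product; {text merge} works like the f-string-style
# substitution below, producing one JSON body per row.
rows = [
    {"My Product Title": "White Tee",        "My Product Vendor": "Acme"},
    {"My Product Title": "Pink Pants",       "My Product Vendor": "Acme"},
    {"My Product Title": "Sport Sunglasses", "My Product Vendor": "Acme"},
]

bodies = [
    json.dumps({"product": {"title": row["My Product Title"],
                            "vendor": row["My Product Vendor"]}})
    for row in rows
]
for body in bodies:
    print(body)  # one POST request body per row
```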
Most APIs require authentication to access their data. This is likely the first part of their documentation. Try searching for the word Authentication in their documentation. Below are the authentication types supported on Parabola:

The most common types of authentication are Bearer Tokens, Username/Password (also referred to as Basic), and OAuth 2.0. Parabola has integrated these authentication types directly into this step.
This method requires you to send your API Key or API Token as a bearer token. Take a look at this example below:

The part that indicates it is a bearer token is this:
To add this specific token in Parabola, select Bearer Token from the Authorization menu and add sk_test_WiyegCaE6iGr8eSucOHitqFF as the value.
This method is also referred to as Basic Authorization or simply Basic. Most often, the username and password used to sign into the service can be entered here.
However, some APIs require an API key to be used as a username, password, or both. If that's the case, insert the API key into the respective field noted in the documentation.
The example below demonstrates how to connect to Stripe's API using the Basic Authorization method.

The Endpoint URL shows a DELETE request being made to a resource called customers. The authorization type can be identified as Basic for two reasons:
To delete this customer using Parabola, fill in the fields below:

This method is an authorization protocol that allows users to sign into a platform using a third-party account. OAuth2.0 allows a user to selectively grant access for various applications they may want to use.
Authenticating via OAuth2.0 does require more time to configure. For more details on how to authorize using this method, read our guide Using OAuth2.0 in Parabola.
Some APIs will require users to generate access tokens that have short expirations. Generally, any token that expires in less than 1 day is considered to be "short-lived" and may be using this type of authentication. This type of authentication in Parabola serves a grouping of related authentication styles that generally follow the same pattern.
One very specific type of authentication that is served by this option in Parabola is called OAuth2.0 Client Credentials. This differs from our standard OAuth2.0 support, which is built specifically for OAuth2.0 Authorization Code. Both Client Credentials and Authorization Code are part of the OAuth2.0 spec, but represent different Grant Types.
Authenticating with the Expiring Access Token option is more complex than options like Bearer Token, but less complex than OAuth2.0. For more details on how to use this option, read our guide Using Expiring Access Tokens in Parabola.
A header is a piece of additional information to be sent with the request to an API. If an API requires additional headers, it is commonly noted in their documentation as -H.
Remember the authentication methods above? Some APIs list the authentication type to be sent as a header. Since Parabola has specific fields for authentication, those headers can typically be ignored.
Taking a look at Webflow's API, they show two headers are required.

The first -H header is linked to a key called Authorization. Parabola takes care of that. It does not need to be added as a header. The second -H header is linked to a key called accept-version. The value of the header is 1.0.0. This likely indicates which version of Webflow's API will be used.

URLs tend to break when there are special characters like spaces, accented characters, or even other URLs. Most often, this occurs when using {text merge} values to dynamically insert data into a URL.
Check the "Encode URLs" box to prevent the URL from breaking if special characters need to be passed.

If you would like to see the request that was sent to the API during the Flow run, you can do this from the API step. To do this, click the square button next to the Request Settings section in the step to see more detailed information.

Check out this guide to learn more about troubleshooting common API errors.
Use the Enrich with API step to make API requests using a list of data, enriching each row with data from an external API endpoint.
Our input data has two columns: "data.id" and "data.employee_name".

Our output data, after using this step, has three new columns appended to it: "api.status", "api.data.id", and "api.data.employee_name". This data was appended to each row that made the call to the API.

First, decide if your data needs a GET or POST operation, or the less common PUT or PATCH, and select it in the Type dropdown. A GET operation is the most common way to request data from an API. A POST is another way to request data, though it is more commonly used to make changes, like adding a new user to a table. PUT and PATCH make updates to data, and sometimes return a new value that can be useful.

Insert your API endpoint URL in the text field.

Most APIs require authentication to access their data. This is likely the first part of their documentation. Try searching for the word "authentication" in their documentation.
Here are the authentication types available in Parabola:

The most common types of authentication are 'Bearer Token', 'Username/Password' (also referred to as Basic), and 'OAuth2.0'. Parabola has integrated these authentication types directly into this step.
This method requires you to send your API key or API token as a bearer token. Take a look at this example below:

The part that indicates it is a bearer token is this:
To add this specific token in Parabola, select 'Bearer Token' from the 'Authorization' menu and add "sk_test_WiyegCaE6iGr8eSucOHitqFF" as the value.
This method is also referred to as "basic authorization" or simply "basic". Most often, the username and password used to sign into the service can be entered here.
However, some APIs require an API key to be used as a username, password, or both. If that's the case, insert the API key into the respective field noted in the documentation.
The example below demonstrates how to connect to Stripe's API using the basic authorization method.

The endpoint URL shows a request being made to a resource called customers. The authorization type can be identified as basic for two reasons:
To authorize this API in Parabola, fill in the fields below:

This method is an authorization protocol that allows users to sign into a platform using a third-party account. OAuth2.0 allows a user to selectively grant access for various applications they may want to use.
Authenticating via OAuth2.0 does require more time to configure. For more details on how to authorize using this method, read our guide Using OAuth2.0 in Parabola.
Some APIs will require users to generate access tokens that have short expirations. Generally, any token that expires in less than 1 day is considered to be "short-lived" and may be using this type of authentication. This type of authentication in Parabola serves a grouping of related authentication styles that generally follow the same pattern.
One very specific type of authentication that is served by this option in Parabola is called "OAuth2.0 Client Credentials". This differs from our standard OAuth2.0 support, which is built specifically for "OAuth2.0 Authorization Code". Both methods are part of the OAuth2.0 spec, but represent different grant types.
Authenticating with an expiring access token is more complex than using a bearer token, but less complex than OAuth2.0. For more details on how to use this option, read our guide Using Expiring Access Tokens in Parabola.
How to work with errors when you expect them in your API calls
In the Enrich with an API step and the Send to an API step, enable Error Handling to allow your API steps to pass through data even if one or more API requests fail. Modifying this setting will add new error handling columns to your dataset reporting on the status of those API calls.
By default, this section will show that the step will stop running when 1 row fails. This has always been the standard behavior of our API steps. Remember, each row of data is a separate API call. With this default setting enabled, you will never see any error handling columns.

Update that setting, and you will see that new columns are set to be added to your data. These new columns are:
API Success Status will print out a true or false value to show if that row's API call succeeded or failed.
API Error Code will have an error code for that row if the API call failed, and will be blank if the API call succeeded.
API Error Message will display the error message associated with any API call that failed, if the API did in fact send us back a message.
Unless you are using the default setting, these columns will be included even if every row succeeded. In that case, you will see the API Success Status column with all true values, and the other two columns with all blank values.

It is smart to set a threshold where the step will still fail if enough rows have failed. Usually, if enough rows fail to make successful API calls, there may be a problem with your step settings, the data you are merging into those calls, or the API itself. In these cases, it is a good idea to ensure that the step can fully stop without needing to run through every row.
Choose to stop running this step if either a static number of rows fail, or if a percentage of rows fail.
You must choose a number greater than 0.
When using a percentage, Parabola will always round up to the next row if the percentage of the current set of rows results in a partial row.
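The rounding rule means a percentage threshold always maps to a whole number of rows, as sketched below:

```python
import math

# Parabola rounds partial rows up: 10% of 25 rows stops the step after
# 3 failed rows, not 2.5.
def failure_threshold(total_rows: int, percent: float) -> int:
    return math.ceil(total_rows * percent / 100)

print(failure_threshold(25, 10))
```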
In rare cases, you may want to ensure that your step never stops running, even if every row results in a failed API call. In that case, set your error handling threshold to any number greater than 100%, such as 101% or 200%.

Once you have enabled this setting, use these new columns to create a branch to deal with errors. The most common use case will be to use a Filter Rows step to filter down to just the rows that have failed, and then send those to a Google Sheet for someone to check on and make adjustments accordingly.
If you have a flow that is utilizing these error handling columns, the run logs on the live view of the flow will not indicate if any rows were recorded as failed. The run logs will only show a failure if the step was forced to stop by exceeding the threshold of acceptable errors. It is highly advisable that you set up your flow to create a CSV or a Google Sheet of these errors so that you have a record of them from each run.
Use the Pull from Amazon Seller Central step to import Amazon reports into your flow.

The Use CSV file step enables you to pull in tabular data from a CSV, TSV, or a semicolon delimited file.
The first thing to do when using this step is to either drag a file into the outlined box or select "Click to upload a file".
Once the file is uploaded and displayed in the Results tab, you'll see two settings on the lefthand side: File and Delimiter. You can click File to upload a different file. Parabola will default to using a comma delimiter, but you can always update the appropriate delimiter for your file by clicking on the Delimiter dropdown. Comma (,), tab (\t), and semicolon (;) are the three delimiter types we support.
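The delimiter only changes how each line is split into cells; the same row parses identically under any of the supported characters, as this small sketch shows:

```python
import csv
import io

# The same row expressed with two of the supported delimiters.
comma_data = io.StringIO("id,name\n1,Ada")
tab_data = io.StringIO("id\tname\n1\tAda")

print(next(csv.reader(comma_data)))                # ['id', 'name']
print(next(csv.reader(tab_data, delimiter="\t")))  # ['id', 'name']
```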


In the "Advanced Settings", you can set a number of rows and a number of columns to skip when importing your data. This will skip rows from top-down and columns from left-to-right. You can also select a Quote Character which will help make sure data with commas in the values/cells don’t disrupt the CSV structure.
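For intuition, here is a small sketch using Python's standard csv module (the sample data is hypothetical) showing how a quote character keeps embedded commas intact, and how skipping rows and columns works:

```python
import csv
import io

# Hypothetical raw file: the quote character (") keeps the comma inside
# "Brooklyn, NY" from splitting the row into an extra column.
raw = 'name,city\nAda,"Brooklyn, NY"\nGrace,Boston\n'

rows = list(csv.reader(io.StringIO(raw), delimiter=",", quotechar='"'))

# Skip rows top-down and columns left-to-right, as the Advanced Settings do.
skip_rows, skip_cols = 0, 0
rows = [r[skip_cols:] for r in rows[skip_rows:]]
```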

The "Generate CSV file" step allows you to export tabular data as a CSV file. You can use it to create custom datasets from various sources within your Flow. Once the Flow run is complete, the CSV file can be downloaded from the Flow’s Run History. You can also configure the step to email a download link to the Flow owner.
Once you connect your Flow to this export step, it will display a preview of the tabular data to be exported.

The name of the generated file will match the step’s title. To rename your custom dataset file, simply double-click the step title and enter a new name.

After publishing and running your Flow, you can download the generated CSV file from the Flow’s Run History panel. Past CSVs created by this step are also accessible there.
You can optionally configure the step to email a download link to the Flow owner when the run is complete. Please note that this link will expire after 24 hours.
If the step receives zero rows of data as input, no CSV file will be generated and no download link will be emailed.

Files generated by this step are stored by Parabola for your convenience. This allows the data to be reloaded the next time you open the Flow. Your data is stored securely in an Amazon S3 bucket, with all connections established over SSL and encrypted.
This step supports only one input source at a time.
If your Flow includes multiple branches or datasets, you'll need to connect each one to its own Generate CSV file step to export them separately.
Alternatively, consider using the "Generate Excel file" step, which allows multiple inputs and creates a single Excel file with each input as a separate tab.
The DHL Shipment Tracking API is used to provide up-to-the-minute shipment status reports by retrieving tracking information for shipments, identifying DHL service providers, and verifying DHL delivery addresses.
DHL is a beta integration which requires a slightly more involved setup process than our native integrations. Following the guidance in this document should help even those without technical experience pull data from DHL. If you run into any questions, shoot our team an email at support@parabola.io.
📖 DHL Reference docs:
https://developer.dhl.com/api-reference/shipment-tracking#reference-docs-section
🔐 DHL Authentication doc links:
https://developer.dhl.com/api-reference/shipment-tracking#get-started-section/user-guide
1. Click My Apps on the portal website.
2. Click the + Add App button.
3. The “Add App” form appears.
4. Complete the Add App form.
5. You can select the APIs you want to access.
6. When you have completed the form, click the Add App button.
7. From the My Apps screen, click on the name of your app. The Details screen appears.
8. If you have access to more than one API, click the name of the relevant API.
⚠️ Note: The APIs are listed under the Credentials section.
9. Click the Show link below the asterisk that is hiding the Consumer Key.
1. Add an Enrich tracking from DHL step template to your canvas.
2. Click into the Enrich with API: DHL Tracking step to configure your authentication.
3. Under the Authentication Type, select None.
4. Click into the Request Settings to configure your request using the format below:

Get started with this template.
Test URL
https://api-test.dhl.com/track/
Production URL
https://api-eu.dhl.com/track/
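As a rough sketch of the request the step sends (the `/shipments` path, `trackingNumber` parameter, and `DHL-API-Key` header follow DHL's reference docs linked above — verify there; this only builds the request, no network call):

```python
from urllib.parse import urlencode

# Production base URL; swap in api-test.dhl.com while testing.
BASE_URL = "https://api-eu.dhl.com/track/shipments"

def build_tracking_request(api_key: str, tracking_number: str):
    """Return the URL and headers for one tracking lookup."""
    url = f"{BASE_URL}?{urlencode({'trackingNumber': tracking_number})}"
    headers = {"DHL-API-Key": api_key}  # the Consumer Key from My Apps
    return url, headers
```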
1. Add a Use sample data step to your Flow. You can also import a dataset with tracking numbers into your Flow. (Pull from Excel File, Pull from Google Drive, Pull from API, Use sample data, etc.)
💡 Tip: When using your own data, use the Edit columns step to rename the tracking column in your source data to Tracking Number.
2. Connect it to the Enrich with API: DHL Tracking step.
3. Under Authentication Type, select None.
4. Click into the Request Settings to configure your request using the format below:
💡 Tip: The Enrich with API step makes dynamic requests for each row in the table by inserting the tracking number in the API Endpoint URL.
The example above assumes there is a Tracking Number column, which is referenced using curly brackets: {Tracking Number}
Enclose your column header containing tracking numbers with curly brackets to dynamically reference the tracking numbers in your table.
5. Click Refresh data to display the results.
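Conceptually, the per-row substitution works like a simple template fill. This Python sketch (the column name and URL are illustrative, not Parabola's internal code) mirrors the behavior:

```python
import re

def fill_endpoint(template: str, row: dict) -> str:
    """Replace each {Column Name} placeholder with that row's value."""
    return re.sub(r"\{([^{}]+)\}", lambda m: str(row[m.group(1)]), template)

# One request is built per row in the table.
row = {"Tracking Number": "00340434292135100186"}
url = fill_endpoint(
    "https://api-eu.dhl.com/track/shipments?trackingNumber={Tracking Number}", row
)
```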

⚠️ Note: Rate limits protect the DHL infrastructure from suspicious requests that exceed defined thresholds.
When you first request access to the API, you will get the initial service level which allows 250 calls per day with a maximum of 1 call every 5 seconds.
Additional rate limits are available and are granted according to your specific use case. If you would like to request additional limits, proceed with the following steps:
1. Create an app as described under the Get Access section.
2. Click My Apps on the portal website.
3. Click on the App you created.
4. Scroll down to the APIs list and click on the "Request Upgrade" button.
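If you script against this API outside Parabola, a minimal throttle matching the initial service level (1 call every 5 seconds, 250 per day) might look like this sketch; the function name and return convention are illustrative:

```python
MIN_INTERVAL = 5.0   # seconds between calls on DHL's initial service level
DAILY_LIMIT = 250    # calls per day on the initial service level

def seconds_to_wait(last_call_at: float, now: float, calls_today: int) -> float:
    """How long to wait before the next call; -1 once the daily quota is spent."""
    if calls_today >= DAILY_LIMIT:
        return -1.0
    return max(0.0, MIN_INTERVAL - (now - last_call_at))
```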

Use the Send to DocSpring step to automatically create submissions for your DocSpring PDF Templates.
To connect to your DocSpring account, you'll first need to click the blue "Authorize" button.

You'll need your DocSpring API Token ID and your DocSpring API Token Secret to proceed. You can find them in your API Token settings in DocSpring.
Reminder: if you're creating a new API Token, the Token Secret will only be revealed immediately after creating the new Token. Be sure to copy and paste or write it down in a secure location. Once you've created or copied your API Token ID and Secret, come back to Parabola and paste them into the correct fields.

To pull in the correct DocSpring Template, you'll need to locate the Template ID. Open the Template you want to connect in DocSpring and locate the URL. The Template ID is the string of characters following templates/ in the URL:

Paste the ID from the URL in the Template ID field.

The Email a file attachment step gives you the ability to send an email to a list of recipients with a custom message and an attached file (CSV or Excel) of your transformed data.
You can insert dynamic values in Email recipients, Email subject, Email body, File name, and Reply-to by wrapping a column name in {}.
Example: {Name} inserts the value from the first row of the {Name} column from the first connected input.
If File format = Excel, the step can accept multiple inputs. Each input becomes a separate tab in the generated file. Give each tab a unique name.
Parabola stores files you send through this step so your flow can reload results next time. We store data securely in Amazon S3, and all connections use SSL with encryption.
The Use Excel file step enables you to pull in tabular data from an Excel file.
First, select Click to upload a file.

If your Excel file has multiple sheets, select which one you'd like to use in the dropdown menu for Sheet.
In the Advanced Settings, you may also select to skip rows or columns. This will skip rows from top-down and columns from left-to-right.

Formatted data

Cell data is imported as formatted values from Excel. Dates, numbers, and currencies will be represented as they appear in the Excel workbook, as opposed to their true underlying value.
Enabling unformatted values will import the underlying data from Excel. Most notably, this will show raw numbers without any rounding applied, and will convert dates to Excel's native date format (the number of days since 1900-01-01).
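For reference, an Excel date serial can be converted back to a calendar date like this sketch. (The conventional 1899-12-30 epoch compensates for Excel wrongly treating 1900 as a leap year, so it is accurate for dates after February 1900.)

```python
from datetime import date, timedelta

# Epoch offset by two days: one because serial 1 = 1900-01-01, and one
# for Excel's phantom 1900-02-29. Correct for serials after Feb 1900.
EXCEL_EPOCH = date(1899, 12, 30)

def excel_serial_to_date(serial: int) -> date:
    """Convert an unformatted Excel date serial back to a calendar date."""
    return EXCEL_EPOCH + timedelta(days=serial)
```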
This step can't pull in file updates from your computer, so if you change your dataset and want those changes in Parabola, you must manually upload the updated Excel file. When you upload an Excel file, all formulas are converted to their values, and formatting is stripped (neither formatting nor formulas are preserved). If you want to pull in live updates on each run without uploading a file manually, use a step like Pull from SharePoint, OneDrive, or Google Drive.
The files you upload through this step are stored by Parabola. We store the data as a convenience, so that the next time you open the flow, the data is still loaded into it. Your data is stored securely in an Amazon S3 Bucket, and all connections are established over SSL and encrypted.
Once you connect your Flow to this export step, it will show a preview of the tabular data to be sent.

The step will automatically send this downloadable Excel file link to the email address of the Flow owner.
By default, the name of the file will be ‘Parabola Excel File’—if you'd like to rename your dataset, click the box under ‘Download a Excel file named’ and type your new filename.

Note that the Generate Excel file step can take multiple inputs. Each input step will send data to a separate sheet, and the names of these sheets can be customized. 'Input 1' will map to 'Sheet 1' by default, and so forth. Refer to the 'Input' tabs at the top of your step window to ensure your step is sending your data to the desired sheet.
Once you publish and run your Flow, the emailed Excel file link will expire after 24 hours.
If the step has no data in it (0 rows), then even after running your Flow, an email with an Excel file won't be sent.
You can download past Excel files that were generated with this step by opening the “Run History” panel, navigating to the Flow run, and clicking Download Excel.

The files you send through this step are stored by Parabola. We store the data as a convenience, so that the next time you open the Flow, the data is still loaded into it. Your data is stored securely in an Amazon S3 Bucket, and all connections are established over SSL and encrypted.
All sheet names must be less than or equal to 31 characters, or the Flow will fail.
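If you generate sheet names dynamically, a small guard like this sketch avoids failed runs. (The 31-character cap and the banned characters are Excel's own restrictions; the helper name is hypothetical.)

```python
import re

MAX_SHEET_NAME = 31  # Excel's hard limit; longer names make the Flow fail

def safe_sheet_name(name: str) -> str:
    """Strip characters Excel rejects in sheet names, then truncate to 31 chars."""
    cleaned = re.sub(r"[\[\]:*?/\\]", " ", name).strip()
    return cleaned[:MAX_SHEET_NAME]
```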
You can import PDF files in a few different ways:
Check out this Parabola University video for a quick intro to our PDF parsing capabilities, and see below for an overview of how to read and configure your PDF data in Parabola.
Parabola’s Pull from PDF file step can be configured to return Columns or Keys
You can use Extract from PDF, Extract from email, and Pull from file queue to parse PDFs. Once you have a PDF file uploaded into your Flow, the configuration settings are uniform.
1. Auto-detected Table (default)
Parabola scans your PDF, detects possible tables, and labels the most likely columns. This option uses LLM technology and works exceptionally well if the PDF document has a clear, structured table. All detected tables will be available in the sub-dropdown under the "Use an auto-detected table" dropdown.
2. Define a Custom Table
Manually define the structure of your table if the AI didn’t pick it up. You can name the table and define the columns that you want to extract from the PDF by clicking on the + Add Column button.
3. Extract All Data (OCR-first mode)
Use OCR to return all text from the PDF — helpful if the structure is complex or you're feeding the result into an AI step later. We only recommend this option if the first two extraction methods aren't yielding the desired results.
Return formats:
If there are document-level values like invoice date and PO number that you want to extract, add them as keys in this section. You can add this by clicking on the “+ Add key” button. Each key that you configure will be represented as its own column and the value will be repeated across all the rows of the resulting data set.
The handwriting example below shows how, with more instructions, the tool can determine whether there is writing next to the word “YES” or “NO”.

You can give the AI more context by typing additional context and instructions into this text box. Try using specific examples, or explain the situation and the specific desired outcome. Consult the chat interface on the left-hand side to help you write clear instructions.
1. Text parsing approach
You can specify the text parsing approach if necessary. The default setting is “Auto” and we recommend keeping it this way if possible. If it’s not properly parsing your PDF, you can choose between “OCR” and “Markdown”.
2. Retry step on error
The checkbox will be checked by default. LLMs can occasionally return unexpected errors and oftentimes, re-running the step will resolve the issue. When checked, this step will automatically attempt to re-run one time when encountering an unexpected error.
3. Auto-update prompt versions
The checkbox will be unchecked by default. Occasionally Parabola updates step prompts in order to make parsing results more accurate and reliable. These updates may change output results, so auto-updating is turned off by default. Enable this setting to always use the most recent prompt versions.
4. Page filtering
The checkbox will be unchecked by default. This setting allows users to define specific pages of a document to parse. If you only need specific values that are consistently on the same page(s), this can drastically improve run time. If you check this box, make sure to complete the dropdown settings that appear below.
Mark columns as “Child columns” if they contain rows that have values unique from the parent columns:
Before:

After marking “Size” as a child column:

Use Extract from PDF to work with a single PDF file. Upload a file by either dragging a PDF file anywhere onto the canvas, or click "Click to upload a file" to select a file from your file picker.
Step configuration instructions can be found here.

Extract from email can pull in data from a number of filetypes, including attached PDF files. Once configured, Parabola can be set to parse PDFs anytime the relevant email receives a PDF file.
Step configuration instructions can be found here.

Pull from file queue can receive PDF files and parse the relevant data. The file queue is a way to enqueue Flow runs, each with metadata and a file accessible via URL.
Runs can be added to the file queue via API (webhook) or via Run another Parabola Flow.
The Extract from email step gives you the ability to receive file attachments (CSV, XLS, PDF, or JSON files) from an incoming email and pass them to the next step (e.g., combining email data with PDF or Google Sheets data). The step also gives you the ability to pull an email subject and body into a Parabola Flow. Use this unique step to trigger Flows using content from the email itself.
Watch the Parabola University video below to see this data pull in action.
To begin, take note of the generated email address that is unique to this specific flow. Copy the email address to your clipboard to start using this dedicated email address yourself or to share with others.

The File Type is set to CSV / TSV, though you can also receive XLS / XLSX, PDF, or JSON files.
The Delimiter is set to comma (,), but can also be adjusted to tab (\t) and semicolon (;). If needed, the default of Quote Character set to Double quote ( " " ) can be changed to single quote ( ' ' ).

This step contains optional Advanced settings, where you can tell Parabola to skip a certain number of rows or columns when receiving the attached file.

To auto-forward a CSV attachment to an email outside of your domain, you may need to verify the @inbound.parabola.io email address. The below example shows how to set this up in Gmail.
💡 You’ll use this address to forward emails into your Parabola Flow. Don't forget to copy this email address.
✅ Gmail will now recognize the Parabola address as a valid forwarding destination.
By default, Flows will run with the first valid attached file. If you want the Flow to run through multiple attached files (multiple attachments on one email), open the “Email trigger settings” modal and change the setting to “Run the Flow once per attachment:”

(Access these settings from the Extract from email step, or from the Flow trigger settings on the published Flow page.)
For emails with multiple files attached, the Flow will run once per file received, sequentially.
We also support the ability to pull in additional information about an email. The default behavior pulls:
Additional fields:
To access these fields, you can toggle the “Pull data from" field to ‘Email content’. If you'd like to pull both an attachment and the subject and body, select ‘Email content and attachment’.

Use the “Extract data with AI” option to automatically extract tables and key values from email bodies to create structured output.
Enable this option under "Parsing settings" when pulling in the “Email content”.
Use the "position is" option when pulling in an attached Excel document to specify which sheet to pull data from by its position, rather than its name. This is great for files that have key data in consistent sheet positions, but may not always have consistent sheet names.
When using this option, only the number of sheets that are in the last emailed file will show in the dropdown. If a Flow using these settings is run and there is no sheet in the specified position, the step will error.

Use the "Extract data with AI" option to extract tables of data and individual values from messy and difficult excel files.
When extracting data from an Excel file, use the settings to extract a table, or individual values (or both)
Once you have an Excel file in your flow, select "Extract data with AI". You will see options to add details to "Extract a table" and/or "Extract individual values".
Clicking on either of those will show additional fields to fill out. Each step can extract 1 table and any number of individual values.

Once you enable table extraction, do the following:
Once you enable individual value extraction, do the following:
Columns and individual values are Text by default. But you can change that to improve accuracy:
The FedEx integration allows operators to automate custom shipping alerts, integrations, and reports using live data from FedEx.
FedEx uses token client credentials for authentication. To connect FedEx to Parabola:
Parabola will securely store your credentials and use them to authenticate each request to FedEx.
1. Navigate to the FedEx Developer Portal.
2. Click Login to access your FedEx account.
3. In the side-menu, select My Projects.
4. Click + CREATE API PROJECT.

5. Complete the modal by selecting the option that best identifies your business needs for integrating with FedEx APIs.
6. Navigate to the Select API(s) tab.
7. Select the API(s) you want to include in your project. Based on the API(s) you select, you may need to make some additional selections.

⚠️ Note: If you select Track API, complete the additional steps below:
1. Select an account number to associate with your production key.
2. Review the Track API quotas, rate limits, and certification details.
3. Choose whether or not you want to opt-in to emails that will notify you if you exceed your quota.
8. Navigate to the Configure project tab.
9. Configure your project settings with name, shipping location, and notification preferences.

10. Navigate to the Confirm details tab.
11. Review your project details, then accept the terms and conditions.

12. On the Project overview page, retrieve your Client ID and Client Secret.
💡 Tip: Use Production Keys to connect to live production data in Parabola. Use Test Keys to review the request and response formats in the documentation.
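For reference, FedEx's client-credentials token exchange is a form-encoded POST. This sketch only builds the request (no network call); verify the URL and field names against the FedEx developer docs linked from the portal:

```python
from urllib.parse import urlencode

TOKEN_URL = "https://apis.fedex.com/oauth/token"  # per FedEx's developer docs

def build_token_request(client_id: str, client_secret: str):
    """URL, headers, and form-encoded body for the client-credentials grant."""
    body = urlencode({
        "grant_type": "client_credentials",
        "client_id": client_id,          # Client ID from the Project overview page
        "client_secret": client_secret,  # Client Secret from the same page
    })
    headers = {"Content-Type": "application/x-www-form-urlencoded"}
    return TOKEN_URL, headers, body
```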
Using the FedEx integration in Parabola, you can bring in:
Flexport uses OAuth 2.0 Client Credentials for secure API access. To connect Flexport to Parabola:
Using the Flexport integration in Parabola, you can bring in a comprehensive range of logistics and freight data.
Parabola can import the following from Frate Returns:
With Parabola + Frate, anything that used to start with a CSV export can now run hands-free.
Use the Pull from Fulfil integration to bring key Fulfil data into Parabola — allowing you to transform your Fulfil data for more granular visibility, blend Fulfil data with information from other systems, and trigger alerts based on custom logic.
Fulfil uses API Key authentication for secure access.
Once connected, you can select from Fulfil’s available endpoints to bring live data into your flow.
1. Navigate to the main page of your ERP by swapping your {tenant} in the URL: https://{tenant}.fulfil.app/client/#/
2. Click on your username on the top right and then preferences
3. Select Manage personal access tokens.
4. In the upper right-hand corner, click the Generate Personal access token button.
5. Enter a helpful token description and click the Generate button.
6. Copy the API Key and store it somewhere safe.
Using the Fulfil integration, you can pull in a wide range of operational data, including:
By connecting Fulfil with Parabola, you turn your ERP data into actionable automation, powering real-time visibility, faster reconciliations, and smarter operations across your business.
Use the Pull from Looker step to run Looks and pull in that data from Looker.
To connect to Looker, you’ll need to enter your Looker Client ID and your Looker API Host URL before authenticating:

These steps only need to be followed once per Looker instance! If someone else on your team has done this, you can use the same Client ID that they have set up.
Your Looker permissions in Parabola will match the permissions of your connected Looker account. So you will only be able to view Looks that your connected Looker account can access.
Once your step is set up, you can choose the Look that you want to run from the Run this Look dropdown:

There are also Cache settings that you can adjust:

There are also additional settings that you can adjust within the step:

Perform table calculations: Some columns in Looker are generated from user-entered Excel-like formulas. Those calculations are not run by default in the API, but are run by default within Looker. This setting tells Looker to run those calculations.
Apply visualization options: Enable if you want things like the column names to match the names given in the Look, as opposed to the actual names of the columns in the source data.
Apply model-specific formatting: Requests the data in a way that respects any formatting rules applied to the data model. This can be things like date and time formats.
You may sometimes see a 404 error from the Pull from Looker step. Some common reasons for that error are:
The Pull from NetSuite integration enables users to connect to any NetSuite account and pull in saved search results that have been built in the NetSuite UI. Multiple saved searches, across varying search types, can be configured in a single flow.
The following document outlines the configuration requirements in NetSuite for creating the integration credentials, defining relevant role permissions, and running the integration in Parabola.
The following configuration steps are required in NetSuite prior to leveraging the Parabola integration:
Once complete, you will enter the unique credentials generated in the steps above into the Pull from NetSuite step in Parabola. This will also require your Account ID, which is obtained from your NetSuite account’s URL. Ex: https://ACCOUNTID.app.netsuite.com/
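As a convenience, the Account ID can be read straight off that URL. A small sketch (hypothetical helper name):

```python
from urllib.parse import urlparse

def netsuite_account_id(account_url: str) -> str:
    """Pull the Account ID from a URL like https://ACCOUNTID.app.netsuite.com/."""
    host = urlparse(account_url).netloc   # e.g. "1234567.app.netsuite.com"
    return host.split(".")[0]             # first label is the Account ID
```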
The following document will review how to create each of the items above.
The permissions specified on the role applied to your integration will determine which saved searches, transactions, lists, and results you’ll be able to access in Parabola. It is important for you to confirm that the role you plan to use has access to all of the relevant objects as required.
The following permissions are recommended, in addition to any transaction-, list-, or report-specific permissions you may require.
In addition to the below permissions, we also recommend adding the permissions listed in this document.
Custom Records:
Ensure the checkbox for the web services only role is selected.
Video walk-through of the setup process:
Follow the path below in the NetSuite UI to create a new integration record.

A consumer key and consumer secret will be generated upon saving the record. Record these items, as they will disappear once you leave this page.

Once the role, user, and integration have been created, you’ll need to generate the tokens which are required for authentication in Parabola.
Follow the path below in the NetSuite UI to create a new token record.


Once authorized, you’ll be prompted to select a search type and specific saved search to run. Click refresh and observe your results!

The Return only columns specified in the search checkbox enables a user to determine if all available columns, or only the columns included in the original search, should be returned. This setting is helpful if you’d like to return additional data elements for filtered records without having to update your search in NetSuite.
By default, the NetSuite API will only return the full data results from the underlying search record type (item, customer, transaction, etc) and only the internal ids of related record types (vendors, locations, etc) in a search.
For example, running the following search in Parabola would return all of the information as expected from the base record type (item in this scenario), and the internal id of the related object (vendor).

The best way to return additional details from related objects (vendor in this scenario) is by adding joined fields within the search. Multiple joined fields can be added to a single search to return data as necessary.

Alternatively, another solution would be running separate searches and joining the results by using a Combine Tables step within the flow. This is demonstrated below.

The same credentials and role configured for pulling data from NetSuite can be leveraged within Parabola’s Send to NetSuite step.
One key difference for posting data to NetSuite is ensuring the role has full access to REST Web Services.

It also is important to confirm that the role has sufficient permissions enabled to create and/or update the relevant objects that are in scope for your team’s use cases.
This is completed by selecting the relevant permission and updating the access level to Full

As an example, the following permissions need to be enabled for use cases that involve creating & updating sales orders:
Creating and updating fields within NetSuite requires providing the internal IDs for relevant objects (items, sales orders, customers, subsidiaries, etc) as opposed to providing the human readable names you’re familiar with (SKUs, order numbers, customer names, etc).
It is a best practice to leverage the Pull from NetSuite step or a reference file within your flows to gather the internal IDs before using the Send to NetSuite step, to prevent errors.
Use case example: Creating new sales orders based on a PDF Purchase Order from a customer
Flow inputs:
Transformation logic:
Flow Outputs:

Record statuses
NetSuite requires data to be imported using specific internal status codes. Specify a status by inserting a custom value with the Status Internal Identifier from the table below.

Bulk Creation
A single flow run can create multiple records within NetSuite. It’s important to leverage the “grouping” function on the item-level mapping to ensure sub-items are consolidated into the relevant parent-level record.
An example is grouping sales order items by the sales order number to ensure each item is associated with the corresponding sales order.
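Conceptually, the grouping behaves like collecting line-item rows under their parent order key. A Python sketch with hypothetical column names:

```python
from collections import defaultdict

# Hypothetical rows: one row per line item, keyed by its sales order number.
rows = [
    {"Order Number": "SO-1001", "SKU": "WIDGET-A", "Qty": 2},
    {"Order Number": "SO-1001", "SKU": "WIDGET-B", "Qty": 1},
    {"Order Number": "SO-1002", "SKU": "WIDGET-A", "Qty": 5},
]

# Group item rows under their parent sales order so each order is
# created once, carrying all of its items.
orders = defaultdict(list)
for row in rows:
    orders[row["Order Number"]].append({"SKU": row["SKU"], "Qty": row["Qty"]})
```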
Tips:
The Run another Parabola Flow step gives you the ability to trigger runs of other Parabola flows within a flow.
Select the flow you want to trigger during your current flow's run. No data will pass through this step. It's strictly a trigger to automatically begin a consecutive run of a secondary flow.
However, if you choose “Run once per row with a file URL”, data will be passed to the second Flow, which can be read using the Pull from file queue step.
Use the Run behavior setting to indicate how the other Flow should run. The options that include wait will cause the step to wait until the second Flow has finished before it can complete its calculation. The other options will not wait.
This step can be used with or without input arrows. If you place this step into a Flow without input arrows, it will be the first step to run. If it does have input arrows, then it will run according to the normal sequence of the Flow. Any per row options require input arrows.
The Send emails by row step sends one email per row in your dataset using the email address listed in a specific column. This is useful for sending personalized messages to a list of recipients. The step supports up to 75 emails per run and all messages are sent from team@parabolamail.io, with a footer that says "Powered by Parabola."
{curly braces}.
<br>, <b>, and <a> are supported.

Pull data from ShipHero to create custom reports, alerts, and processes to track key metrics and provide a great customer experience.
ShipHero is a beta integration which requires a more involved setup process than our native integrations (like Shopify and Google Analytics). Following the guidance in this doc (along with our video walkthrough) should help even those without technical experience pull data from ShipHero.
If you run into any questions, feel free to reach out to support@parabola.io.
Inside your flow, search for "ShipHero" in the right sidebar. When you drag the step onto the canvas, a card containing 'snippets' will appear on the canvas. To start pulling in data from ShipHero, copy a snippet and paste it onto the canvas (how to paste a snippet).
We must start by authorizing ShipHero's API. In the "Pull from ShipHero" step's Authentication section, select "Expiring Access Token". For the Access Token Request URL, you can paste: https://public-api.shiphero.com/auth/token
In the Request Body Parameters section, you can "+add" username and password, then enter your ShipHero login credentials. A second Request Header called "Accept" will exist by default – this can be deleted. Once completed, the step's authorization window should look like this:
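Under the hood, this authorization is a POST of your credentials to the token URL above. The sketch below shows the equivalent request in Python; the response field name (`access_token`) is an assumption based on how expiring-token APIs typically respond, so check ShipHero's docs if you call it directly:

```python
import json
from urllib import request

TOKEN_URL = "https://public-api.shiphero.com/auth/token"

def build_token_payload(username, password):
    # Mirrors the Request Body Parameters Parabola sends for an
    # Expiring Access Token.
    return {"username": username, "password": password}

def fetch_access_token(username, password):
    req = request.Request(
        TOKEN_URL,
        data=json.dumps(build_token_payload(username, password)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with request.urlopen(req) as resp:
        # Assumed response shape: {"access_token": "...", ...}
        return json.load(resp)["access_token"]
```

Parabola handles the token refresh for you; the sketch is only to show what "Expiring Access Token" authentication is doing.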

When you drag the ShipHero step onto the canvas, there will be 5 pre-built snippets available:
For everything besides Products, it's common to pull in data for a specific date range (ex. previous day or week). This is why the card begins with steps that specify a dynamic date range. For example, if you put -2 as the Start Date and -1 as the End Date, you will pull orders from the previous full day.
If you want to pull data from ShipHero that is not captured by these pre-built connections, you can modify the GraphQL Query and/or add Mutations by referencing ShipHero's GraphQL Primer.
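A modified query is just a POST body with a `query` string and a `variables` object. This sketch shows the general shape; the field names in the query are illustrative and should be taken from ShipHero's GraphQL Primer, not from here:

```python
import json

# Illustrative query only — confirm field names in ShipHero's GraphQL Primer.
ORDERS_QUERY = """
query ($from: ISODateTime, $to: ISODateTime) {
  orders(order_date_from: $from, order_date_to: $to) {
    data { edges { node { order_number fulfillment_status } } }
  }
}
"""

def build_graphql_payload(date_from, date_to):
    # GraphQL requests bundle the query text and its variables together.
    return {"query": ORDERS_QUERY,
            "variables": {"from": date_from, "to": date_to}}

payload = build_graphql_payload("2024-01-01", "2024-01-02")
body = json.dumps(payload)  # this JSON is what gets POSTed to the endpoint
```

In Parabola, the same idea applies: the query goes in the request body, and pagination variables are inserted where the dates appear above.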
By default, we pull in 20 pages of data (2,000 records). To increase this value, visit the "Pull from ShipHero" step and go to "Rate Limiting" --> "Maximum pages to fetch" and increase the value until all of your data is pulled in.
The Pull from Shopify step can connect directly to your Shopify store and pull in order, line item, customer, product data, and much more!
This step can pull in the following information from Shopify:

Select the blue Authorize button. If you're coming to Parabola from the Shopify App Store, you should see an already-connected Pull from Shopify step on your flow.
By default, once you connect your Shopify account, we'll import your Orders data with Line Items detail for the last day. From here, you can customize the settings based on the data you'd like to access within Parabola.
This section will explain all the different ways you can customize the data being pulled in from Shopify. To customize these settings, start by clicking the dropdown in part 2 of the step.

Shopify orders contain all of the information about each order that your shop has received. You can see totals associated with an order, as well as customer information and more. The default settings will pull in any order, with the Orders detail, from the last day. This will include information like the order total, customer information, and even the inventory location the order is being shipped from.
If you need more granular information about what products were sold, fulfilled, or returned, view your Orders with Line Items detail. This can be useful if you want relevant product data associated with each line item in the order.

Each order placed with your shop contains line items - products that were purchased. Each order could have many line items included in it. Each row of pulled data will represent a single item from an order, so you may see that orders span across many rows, since they may have many line items.
There are 4 types of columns that show up in this pull: "Orders", "Line Items", "Refunds", and "Fulfillment". When looking at a single line item (a single row), you can scroll left and right to see information about the line item, about its parent order, refund information if it was refunded, and fulfillment information if that line item was fulfilled.
As your orders are fulfilled, shipments are created and sent out. Each shipment for an order is represented as a row in this pull. Because an order may be spread across a few shipments, each order may show up more than one time in this pull. There are columns referring to information about the order, and columns referring to information about the shipment that the row represents.
Every order that passes through your shop may have some discounts associated with it. A shopper may use a few discount codes on their order. Since each order can have any number of discount codes applied to it, each row in this pull represents a discount applied to an order. Orders with no discounts will not show up in this table, while orders with several discounts will show up a few times! There are columns referring to information about the order, and columns referring to information about the discount that was applied.
This is a simple option that pulls in 1 row, containing the balance of your shop, and the currency that it is set to.
This option will pull in 1 row for every customer that you have in your Shopify store records.
Available filters:

Retrieve all disputes, ordered by the date they were initiated, with the most recent first. Disputes occur when a buyer questions the legitimacy of a charge with their financial institution. Each row will represent 1 dispute.
An inventory level represents the available quantity of an inventory item at a specific location. Each inventory level belongs to one inventory item and has one location. For every location where an inventory item is available, there's an inventory level that represents the inventory item's quantity at that location.
This includes product inventory item information as well, such as the cost field.
You can choose any combination of locations to pull the inventory for, but you must choose at least one. Each row will contain a product that exists in a location, along with its quantity.
Toggle "with product information" to see relevant product data in the same view as the Product Inventory.

This is a simple option that will pull in all of your locations for this shop. The data is formatted as one row per location.
Payouts represent the movement of money between a Shopify Payments account balance and a connected bank account. You can use this pull option to pull a list of those payouts, with each row representing a single payout.
Pull the name, details, and products associated with each of your collections. By default, each row returns the basic details of each collection. You can also pull the associated products with each collection.
Available filters:
This pulls in a list of your products. Each row represents a product variant, since a product can have any number of variants. You may see that a product is repeated across many rows, with one row for each of its variants. When you set up a product, it is created as a variant, so products cannot exist without having at least one variant, even if it is the only one.
Available filters:
The Send to Shopify step can connect directly to your Shopify store and automatically update information in your store.
This step can perform the following actions in Shopify:
To connect your Shopify account from within Parabola, click on the blue "Authorize" button. For more help on connecting your Shopify account, jump to the section: Authorizing the Shopify integration and managing multiple stores.
Once you connect a step into the Send to Shopify step, you'll be asked to choose an export option.
The first selection you'll make is whether this step is enabled and will export all data or disabled and will not export any data. By default, this step will be enabled, but you can always disable the export if you need to for whatever reason.
Then you can tell the step what to do by selecting an option from the menu dropdown.

When using this option, every row in your input data will be used to create a new customer, so be sure that your data is filtered down to the point that every row represents a new customer to create.
When using this step, any field that is not mapped in the settings table in the step will not be sent. Only the mapped fields will be sent to Shopify.
Every customer must have either a unique Phone Number or Email set (or both), so be sure those fields are present, filled in, and have a mapping.
If you create customers with tags that do not already exist in your shop, the tags will still be added to the customer.
The address fields in this step will be set as the primary address for the customer.
When using this option, every row in your input data will be used to update an existing customer, so be sure that your data is filtered down to the point that every row represents a customer to update.
When using this step, any field that is not mapped in the settings table in the step will not be sent. Only the mapped fields will be sent to Shopify.
Every customer must have a Shopify customer ID present in order to update successfully, so be sure that column is present, has no blanks, and is mapped to the id field in the settings.
The address fields in this step will edit the primary address for the customer.
When using this option, every row in the step will be used to delete an existing customer, so be sure that your data is filtered down to the point that every row represents a customer to delete.
This step only requires a single field to be mapped - a column of Shopify customer IDs to delete. Make sure your data has a column of those IDs without any blanks. You can find the IDs by using the Pull from Shopify step.
Collections allow shops to organize products in interesting ways! When using this option, every row in the step will be used to add a product to a collection, so be sure that your data is filtered down to the point that every row represents a product to add to a collection.
When using this option, any field that is not mapped in the settings table in the step will not be sent. Only the mapped fields will be sent to Shopify.
You only need two mapped fields for this option to work - a Shopify product ID and a Shopify Collection ID. Each row will essentially say, "Add this product to this collection".
Why is this option not called "Remove products from collections" if that is what it does? Great question. Products are kept in collections by creating a relationship between a product ID and a Collection ID. That relationship exists, and has its own ID! Imagine a spreadsheet full of rows that have product IDs and Collection IDs specifying which product belongs to which collections - each of those rows needs its own ID too. That ID represents the relationship. In fact, you don't need to imagine. Use the Pull from Shopify step to pull in Product-Collection Relationships. Notice there is an ID for each entry that is not the ID of the product or the collection. That ID is what you need to use in this step.
When using this option, every row in the step will be used to delete a product from a collection, so be sure that your data is filtered down to the point that every row represents a product-collection relationship that you want to remove.
This step does not delete the product or the collection! It just removes the product from the collection.
When using this step, any field that is not mapped in the settings table in the step will not be sent. Only the mapped fields will be sent to Shopify.
You need 1 field mapped for this step to work - it is the ID of the product-collection relationship, which you can find by Pulling those relationships in the Pull from Shopify step. In the step, it is called a "collect_id", and it is the "ID" column when you pull the product-collection relationships table.
What's an inventory item? Well, it represents the goods available to be shipped to a customer. Inventory items exist in locations, have SKUs, costs and information about how they ship.
There are a few aspects of an inventory item that you can update:
When using this step, you need to provide an Inventory Item ID so that the step knows which Item you are trying to update. Remember, any field that is not mapped in the settings table in the step will not be sent. Only the mapped fields will be sent to Shopify.
When using the “Update” option in the Send to Shopify step, Parabola will overwrite all existing values for any fields that are mapped in the step’s settings table. This behavior is standard for update requests and ensures that Shopify reflects the exact data provided in your flow.
Any fields not mapped will remain unchanged in Shopify. To avoid unintended data loss or partial updates, make sure to explicitly map all fields you want to update and double-check your input data before running the flow.
When using this option, every row in the step will be used to adjust an existing item's inventory level, so be sure that your data is filtered down to the point that every row represents an item to adjust.
When using this step, any field that is not mapped in the settings table in the step will not be sent. Only the mapped fields will be sent to Shopify.
Every item must have a Shopify inventory item ID present in order to adjust successfully, so be sure that column is present, has no blanks, and is mapped to the id field in the settings.
You must provide the inventory item ID, the location ID where you want to adjust the inventory level, and the available adjustment number. That available adjustment number will be added to the inventory level that exists. So if you want to decrease the inventory level of an item by 2, set this value to -2. Similarly, use 5 to increase the inventory level by 5 units.
When using this option, every row in the step will be used to reset an existing item's inventory level, so be sure that your data is filtered down to the point that every row represents an item to reset.
When using this step, any field that is not mapped in the settings table in the step will not be sent. Only the mapped fields will be sent to Shopify.
Every item must have a Shopify inventory item ID present in order to reset successfully, so be sure that column is present, has no blanks, and is mapped to the id field in the settings.
You must provide the inventory item ID, the location ID where you want to adjust the inventory level, and the available number. That available number will be used to overwrite any existing inventory level that exists. So if you want to change an item's inventory from 10 to 102, then set this number to 102.
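The difference between "adjust" and "reset" comes down to simple arithmetic, sketched below (this is just the math, not Shopify's API):

```python
def adjust_inventory(current_level, adjustment):
    # "Adjust" adds the signed adjustment to the existing level.
    return current_level + adjustment

def reset_inventory(current_level, available):
    # "Reset" ignores the existing level and overwrites it.
    return available

assert adjust_inventory(10, -2) == 8    # decrease by 2
assert adjust_inventory(10, 5) == 15    # increase by 5
assert reset_inventory(10, 102) == 102  # overwrite 10 -> 102
```

Use adjust when you know the delta (e.g. items sold elsewhere), and reset when you know the true on-hand count (e.g. after a stock take).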
To use the Pull from Shopify or Send to Shopify steps, you'll need to first authorize Parabola to connect to your Shopify store.
To start, you will need your Shopify shop URL. Take a look at your Shopify store, and you may see something like this: awesome-socks.myshopify.com - from that you would just need to copy awesome-socks to put into the first authorization prompt:

After that, you will be shown a window from Shopify, asking for you to authorize Parabola to access your Shopify store. If you have done this before, and/or if you are logged into Shopify in your browser, this step may be done automatically.
Parabola handles authorization on the flow-level. Once you authorize your Shopify store on a flow, subsequent Shopify steps you use on the same flow will be automatically connected to the same Shopify store. For any new flows you create, you'll be asked to authorize your Shopify store again.
You can edit your authorizations at any time by doing the following:

If you manage multiple Shopify stores, you can connect to as many separate Shopify stores in a single flow as you need. This is really useful because you can combine data from across your Shopify stores and create holistic custom reports that provide a full picture of how your business is performing.

Please note that deleting a Shopify account from authorization will remove it from the entire flow, including any published versions.
This article goes over the date filters available in the Pull from Shopify step.
The Orders and Customer pulls from the Pull from Shopify step have the most complex date filters. We wanted to provide lots of options for filtering your data from within the step to be able to reduce the size of your initial import and pull exactly the data you want to see.
Date filters can be a little confusing though, so here's a more detailed explanation of how we've built our most complex date filters.
The date filters in the Pull from Shopify step, when available, can be found at the bottom of the lefthand side, right above the "Show Updated Results" button.

In this step, we indicate what time zone we're using to pull your data. This time zone matches the time zone selected for your Shopify store.
At the bottom of the lefthand panel of your step, if you're still uncertain if you've configured the date filters correctly, we have a handy helper to confirm the date range we'll use to filter in the step:

This article explains how to reproduce the most commonly-used Shopify metrics. If you don't see the metric(s) you're trying to replicate, send us a note and we can look into it for you.
The Shopify Overview dashboard is full of useful metrics. One problem is that it doesn't let you drill into the data to understand how it's being calculated. A benefit of using Parabola to work with your Shopify data is that you can easily replicate most Shopify metrics and see exactly how the raw data is used to calculate these overview metrics.
This formula will show you the total sales per line item by multiplying the price and quantity of the line items sold.
Import Orders with Line Items details
This formula will show you the total refund per line item by multiplying the refunded amount and refunded quantity. In this formula, we multiply by -1 to turn it into a negative number. If you'd like to display your refunds by line items as a positive number, just don't multiply by -1.
Import Orders with Line Items details
This formula will show you the net quantity of items sold, taking into account and removing the items that were refunded.
Import Orders with Line Items details
First, use the Sum by group step to sum "Line Items: Quantity" and "Refunds: Refund Line Items: Quantity"
Then, use the newly generated "sum" columns for your formula.
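The sum-then-subtract logic above can be sketched in Python. The column names mirror the ones mentioned in this doc, and the row values are hypothetical:

```python
from collections import defaultdict

# Hypothetical rows pulled with "Orders with Line Items" detail.
rows = [
    {"Order": "1001", "Line Items: Quantity": 2,
     "Refunds: Refund Line Items: Quantity": 0},
    {"Order": "1001", "Line Items: Quantity": 1,
     "Refunds: Refund Line Items: Quantity": 1},
    {"Order": "1002", "Line Items: Quantity": 3,
     "Refunds: Refund Line Items: Quantity": 0},
]

# Step 1: sum quantities sold and refunded per order (Sum by group).
sold, refunded = defaultdict(int), defaultdict(int)
for r in rows:
    sold[r["Order"]] += r["Line Items: Quantity"]
    refunded[r["Order"]] += r["Refunds: Refund Line Items: Quantity"]

# Step 2: net quantity = sold - refunded.
net = {order: sold[order] - refunded[order] for order in sold}
# net == {"1001": 2, "1002": 3}
```

Order 1001 sold 3 items and refunded 1, so its net quantity is 2, which is exactly what the Sum by group step followed by a subtraction formula produces.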
Import Orders with Orders details.
Add a Sum by group step. Sum the "Total Line Items Price" column.
Import Orders with Orders details.
To calculate net sales, you'll want to get gross sales - refunds - discounts. This will require two steps:
Import Orders with Line Items details.
To calculate total sales, you'll want to get gross sales + taxes - refunds - discounts. This will require three steps:
Import Orders with Orders details.
Import Orders with Orders details.
Import Orders with Orders details.
Import Customers. This table will give us Total Spent per customer as well as the # of Orders by customer.
Alternatively, import Orders.
Use the Count by group step after pulling in orders.
Use the Send to Slack step to automatically post messages from your Parabola flow into a Slack channel or DM.

The first person to install the Parabola Slack app in your workspace may need admin permissions. Once installed, all workspace members can use the app.
Your authentication process depends on your Slack workspace settings:
If you are using a version of this step that does not show a list of channels to send messages to, and requires you to type in the location of the channel, use this guide to find those names and IDs.
Channel names are the same as they appear in Slack, e.g. #general or #we-love-parabola, but they can only be used if you are not attaching files of data. Always include the # symbol.
When attaching files, indicate the channel using the ID (B07F36JHD), not the name (#general).
The channel ID can be found by right clicking on the channel name in Slack, clicking “Copy link”, and taking the ID from the end of the link. For example, use the channel ID of B07F36JHD from this link: https://parabolaio.slack.com/archives/B07F36JHD
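Since the ID is always the last path segment of the "Copy link" URL, extracting it is straightforward. A small sketch, using the example link from above:

```python
def channel_id_from_link(link):
    # The channel ID is the final path segment of Slack's "Copy link" URL.
    return link.rstrip("/").rsplit("/", 1)[-1]

channel_id = channel_id_from_link(
    "https://parabolaio.slack.com/archives/B07F36JHD"
)
# channel_id == "B07F36JHD"
```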
_italic_ will produce italicized text
*bold* will produce bold text
~strike~ will produce strikethrough text
You can write multi-line text by typing a new line, or insert a newline by including the string “\n” in your text.
You can highlight text as a block quote by using the > character at the beginning of one or more lines.
If you have text that you want to be highlighted like code, surround it with back-tick (`) characters. For example:
`This is a code block`
You can also highlight larger, multi-line code blocks by placing 3 back-ticks before and after the block. For example:
```This is a code block\nAnd it's multi-line```
Create lists by using a - character followed by a space. For example:
- This
- is
- a list
URLs will automatically work. Spaces in URLs will break the URL, so we recommend that you remove any spaces from your URL links.
You can also use markdown to adjust the text that appears as the link from the URL to something else: For example:
<http://www.example.com|This message *is* a link>
And create email links:
<mailto:bob@example.com|Email Bob Roberts>
Emoji can be included in their full-color, fully-illustrated form directly in text. Once published, Slack will then convert the emoji into their common 'colon' format. For example, a message published like this:
It's Friday 😄
will be converted into colon format:
It's Friday :smile:
If you're publishing text with emoji, you don't need to worry about converting them, just include them as-is.
The compatible emoji formats are the Unicode Unified format (used by OSX 10.7+ and iOS 6+), the Softbank format (used by iOS 5) and the Google format (used by some Android devices). These will be converted into their colon-format equivalents. The list of supported emoji are taken from https://github.com/iamcal/emoji-data.
The Pull from Twilio step pulls messages and phone numbers from Twilio.
The first thing you'll need to do to start using the Pull from Twilio step is to authorize the step to access the data in your Twilio account.
Double-click on the step and click "Authorize." This window will appear where you'll need to provide the Account SID and Auth Token from your Twilio account.

To locate this information on your Twilio account, click on the blue link to Lookup Twilio Account Info. This will take you to https://www.twilio.com/console. You'll see your Account SID and Auth Token that you can copy and paste from your account to Parabola.
Once you're connected, you'll have the following data types to select from:
This option pulls logs of all outbound messages you sent from your Twilio account. The returned columns are: To (phone number), From (phone number), Status, Price, Date Sent, Body (of message).
You have optional fields you can set to filter the data. Leaving the Date Sent field blank will simply pull in the most recent 100k messages.
This option pulls logs of any responses or inbound messages you've received to the phone numbers associated with your Twilio account. The returned columns are: To (phone number), From (phone number), Status, Price, Date Sent, Body (of message).
You have optional fields you can set to filter data. Leaving the Date Received field blank will simply pull in the most recent 100k messages.
This option pulls in phone numbers that are associated with your account. The returned columns are: Number ID, Phone Number, Friendly Name, SMS Enabled, MMS Enabled, Voice Enabled, Date Created, Date Updated.
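For reference, the same data can be pulled with Twilio's official Python helper (`pip install twilio`); Parabola's step makes equivalent REST calls for you. The sketch separates the column mapping (which mirrors the columns listed above) from the network call:

```python
def message_row(msg):
    # Maps one Twilio message to the columns this step returns.
    return {
        "To": msg["to"], "From": msg["from"], "Status": msg["status"],
        "Price": msg["price"], "Date Sent": msg["date_sent"],
        "Body": msg["body"],
    }

def pull_sent_messages(account_sid, auth_token):
    # Uses Twilio's official helper library; requires network access.
    from twilio.rest import Client
    client = Client(account_sid, auth_token)
    return [message_row({
        "to": m.to, "from": m.from_, "status": m.status,
        "price": m.price, "date_sent": m.date_sent, "body": m.body,
    }) for m in client.messages.list()]
```

You never need to write this yourself to use the step; it's only to show what the authorization credentials are used for.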
The Send to Twilio step triggers dynamic SMS messages sent via Twilio using data transformed in your Parabola flow. You can use Parabola to dictate who should receive your SMS messages, what message they should receive, and trigger Twilio to send them.
The first thing you'll need to do to start using the Send to Twilio step is to authorize the step to send data to your Twilio account.
Double-click on the step and click on the blue button to Authorize. This window will appear where you'll need to provide the Account SID and Auth Token from your Twilio account.

To locate this information on your Twilio account, click on the blue link to Lookup Twilio Account Info. This will take you to https://www.twilio.com/console. You'll see your Account SID and Auth Token that you can copy and paste from your account to Parabola.
By default, this step will be configured to Send text messages to recipients when the flow runs. If for whatever reason you need to disable this temporarily, you can select to not send text messages when the flow runs.

Then, you'll select the following columns that contain the data for phone numbers you'd like to Send To, phone numbers you'd like to Send From, and text you'd like Twilio to send as Message Content.
Please make sure that the phone numbers you'd like to Send From are valid Twilio phone numbers that your Twilio account is authorized to send from. Verified Caller ID phone numbers cannot be used to send outbound SMS messages.
For Message Content, you have the option to use content from an existing column or a custom message. Select the Custom option from the dropdown if you'd like to type in a custom message. While the custom message is a great, easy option, this means that all of your recipients will receive the same message. If you'd like your messages to be customized at all, you should create your dynamic messages in a column beforehand. The Insert column can be particularly useful here for creating dynamic text content.
Each row will represent a single SMS. If your data contains 50 rows that means 50 SMS messages will be sent.
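The row-per-SMS behavior can be sketched with Twilio's Python helper. The column names ("Send To", "Send From", "Message Content") mirror the step's settings; the validation helper encodes the requirement that every row has all three values filled in:

```python
def validate_row(row):
    # Every row needs a recipient, a Twilio sender, and message text.
    return all(row.get(k) for k in ("Send To", "Send From", "Message Content"))

def send_rows(rows, account_sid, auth_token):
    # One SMS per row: 50 valid rows means 50 messages sent.
    from twilio.rest import Client  # pip install twilio; requires network
    client = Client(account_sid, auth_token)
    for row in rows:
        if not validate_row(row):
            continue  # skip incomplete rows rather than fail the run
        client.messages.create(
            to=row["Send To"],
            from_=row["Send From"],       # must be a Twilio-owned number
            body=row["Message Content"],
        )
```

The skip-on-invalid behavior here is an assumption for the sketch; in Parabola, filter out incomplete rows before this step instead.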
The UPS integration is used by operators to integrate UPS’s shipping, tracking, and logistics services into their platforms and workflows.
UPS uses OAuth 2.0 Client Credentials for secure API access.
1. Navigate to the UPS Developer Portal.
2. Click Login to access your UPS account.
3. Click Create Application to make a new application and generate your credentials.

⚠️ Note: This application will be linked to your shipper account(s) and the email address associated with your UPS.com ID.
4. Select your use case, shipper account, and accept the agreement.

5. Enter your contact information.

💡 Tip: Consider using a group inbox that is accessible to others on your development team. You cannot change this email once the credentials are created, and losing access to it means losing access to your application.
6. Define your application details, including the name, associated billing account number, and custom products.
⚠️ Note: In the Callback URL field, add the following URL: https://parabola.io/api/steps/generic_api/callback
7. Once saved, your Client ID and Client Secret are generated.

💡 Tip: Click Add Products to enable additional products like the Tracking and Time in Transit APIs if they have not been added to your application.
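With OAuth 2.0 Client Credentials, the Client ID and Client Secret are exchanged for a short-lived access token. The sketch below shows the general shape of that exchange; the token URL is taken from UPS's developer documentation, but verify it (and the sandbox variant) against the current docs before relying on it:

```python
import base64
import json
from urllib import parse, request

# Per UPS developer docs (verify current endpoint before use).
TOKEN_URL = "https://onlinetools.ups.com/security/v1/oauth/token"

def basic_auth_header(client_id, client_secret):
    # Client credentials are sent as HTTP Basic auth on the token request.
    creds = f"{client_id}:{client_secret}".encode()
    return "Basic " + base64.b64encode(creds).decode()

def fetch_token(client_id, client_secret):
    req = request.Request(
        TOKEN_URL,
        data=parse.urlencode({"grant_type": "client_credentials"}).encode(),
        headers={
            "Authorization": basic_auth_header(client_id, client_secret),
            "Content-Type": "application/x-www-form-urlencoded",
        },
    )
    with request.urlopen(req) as resp:
        return json.load(resp)["access_token"]
```

Parabola performs this exchange automatically once your credentials are entered; the sketch only illustrates what the Client ID and Secret are used for.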
Using the UPS TrackService API in Parabola, you can pull in:
The Visualize step is a destination step used to display data as charts, styled tables, or key metrics. These visualizations can optionally be shown on the Flow canvas or on the Flow dashboard.
When first added to your Flow and connected to a step, the Visualize step will expand. Data flowing into the Visualize step will be shown as a table on the canvas.
To customize this visualization and create new views, open the Visualize step by clicking "Edit this View."

Visualize steps can be configured with any number of views. Every view in a single Visualize step will use the same input data, but each view can be customized to display data in a different way.
The Visualize step is also used to sync views to your Flow dashboard tab. When the “Show on dashboard” step option is enabled, that visualization will also appear in your Flow dashboard.
Views in the Visualize step will be shown on your Flow dashboard by default. Uncheck the dashboard setting within the Visualize step to remove any views from the dashboard.
Visualize steps can be collapsed into normal-sized steps by clicking the collapse button, located in the top right of the expanded visualization. Similarly, collapsed Visualize steps can be expanded by clicking on the expand button under the step.
Expanded Visualize steps can be resized using the handle in the bottom right of the step.
Flow dashboards enable your team to easily view, share, and analyze the data that your Flows create. Use the Visualize step to create interactive reports that are shareable with your entire team. Visualizations can be powered by any step in your Flow or by Parabola Tables for historic reporting.
Check out this Parabola University video for a brief intro to tables.
The Visualize step is a tool for creating tables, charts, and metrics from the output of your Flows. These views of data can be arranged and shared directly in Parabola from the Flow dashboard page.
To create a Visualization, connect any step in your flow to a Visualize step:

Data connected to a Visualize step will be usable to create any number of views. Those views are automatically added to your Flow dashboard, where they can be arranged and customized.
Once you’ve added views to your Flow dashboard, you can:

Anyone with access to your Flow will be able to see the Flow dashboard:
To share a view, you can either share the entire dashboard with your teammate (see instructions here), or click “Share” from a specific table view. Sharing the view will give your teammate access to the Flow (and its dashboard), and link them directly to that specific view.

Any visualization can be exported as a CSV. Simply click on the "Export to CSV" button at the top right of your table or chart.

Views are individual visualizations, accessible from the Visualize step, or on the Flow dashboard. The data connected to a Visualize step acts as a base dataset, which you can customize using views. Views can be visualized as tables, featured metrics, charts, and graphs.
Ready for a deeper dive? This Parabola University video will walk you through some of the configurations available to fine-tune how you see your data.
Arrange data views on the page with either a tab or tile layout.
Tabs will appear like traditional spreadsheet tabs, which you can navigate through. Drag to rearrange their order.

Tiles enable you to see all views simultaneously. You can completely customize the page by changing view height and width, and drag-and-drop to rearrange.

From the “Table/chart options” menu, you can select from several types of visualizations.
By default, visualizations display as tables. This format works well to show rows of data that are styled, calculated, grouped, sorted, or filtered.
In the below image, the table options menu is at the top left, below the "All Inventory" tab. This is where you can access options to format and style columns, or to add aggregation calculations.

Featured metrics allow you to display specific column calculations from the underlying table.
Metrics can be renamed, given a color theme, and formatted (date, number, percent, currency, or accounting). The metrics options menu is in the same placement as above, represented with a '#' symbol.

Parabola supports several chart types:
Within the chart options menu, represented below as a mini bar graph, you can customize chart labels, color themes, gridlines, and legend placement.

Charts have a single value plotted on the horizontal X axis, along the bottom of the chart. Date or category values are commonly used for the X axis.
Use the grouping option on the X axis control to aggregate values plotted in the chart. For example, if you have a week's worth of transactions, and you want to see the total number of transactions per day, you would set your X axis to the day of the week, and group your data to find the sum. Ungrouped values will be plotted exactly as they appear in your dataset.
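The grouping behavior described above can be sketched in plain Python. This is an illustration only (the transaction data is hypothetical, and Parabola performs this aggregation for you): each distinct X-axis value becomes one group, and the grouped values are summed.

```python
from collections import defaultdict

# Hypothetical transactions: (day, amount) pairs for part of a week
transactions = [
    ("Mon", 20.00), ("Mon", 35.50),
    ("Tue", 12.25),
    ("Wed", 40.00), ("Wed", 8.75), ("Wed", 5.00),
]

# Group by the X-axis value (day) and sum the amounts,
# mirroring the chart's "group by sum" option
totals = defaultdict(float)
for day, amount in transactions:
    totals[day] += amount

print(dict(totals))  # {'Mon': 55.5, 'Tue': 12.25, 'Wed': 53.75}
```

Without grouping, each of the six transactions would be plotted as its own point, exactly as it appears in the dataset.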
Use the X axis options dropdown within the chart options menu to further fine-tune your formatting.
Charts can have up to two Y axes, on the left, right, or both. Additionally, each Y axis can key to any number of data values, called series.
Adding multiple series will show multiple bars, lines, or dots, depending on which chart you are using. The above image shows a chart using one Y axis, but several series with stacking enabled under the "Categories / stacking" dropdown.
When you add a second Y axis, it will add a scale to the right side of the graph. Any series plotted on the second Y axis will adhere to that scale, whereas any series on the first Y axis will adhere to the first scale. Your charts are limited to two scales, but each series can be aggregated individually, so you can compare the mean of one series with the sum of another, and the median of a third.
Imagine using multiple Y axes to plot two sets of data that are related, but exist on different numerical scales, such as total revenue in one axis, and website conversion rate in another axis.
Many charts and graphs have category and stacking options. Depending on your previous selections with the X and Y axes, and the chart type, some options will be available in this menu.
View controls can be selected from the icons in the control bar on any view.
You can perform the following calculations on a column:
Only one metric can be calculated per column.
Tables can be grouped up to 6 times. (After 6 groups, the '+ Add grouping' option will be disabled.) Groups are applied in a nested order, starting at the first group, and creating subgroups with each subsequent rule.
Use the sort options within the group rules to determine what order the groups are shown in. Normal sort rules will be used to sort the rows within the groups.
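The nested grouping described above can be sketched as follows. This is a hedged illustration with hypothetical rows and group keys (region, then status), not Parabola's implementation: each grouping rule subdivides the groups created by the rule before it.

```python
from itertools import groupby
from operator import itemgetter

# Hypothetical rows; the first rule groups by region,
# the second creates subgroups by status within each region
rows = [
    {"region": "East", "status": "Open"},
    {"region": "East", "status": "Closed"},
    {"region": "West", "status": "Open"},
]

# Sort by all group keys, then group level by level
rows.sort(key=itemgetter("region", "status"))
nested = {}
for region, group in groupby(rows, key=itemgetter("region")):
    nested[region] = {}
    for status, sub in groupby(list(group), key=itemgetter("status")):
        nested[region][status] = len(list(sub))

print(nested)  # {'East': {'Closed': 1, 'Open': 1}, 'West': {'Open': 1}}
```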
Click the “Sort” button (or use the view options menu) to quickly add a new sort rule. These sorts define how rows are arranged in the view.
Click the “Filter” button (or use the view options menu) to quickly add a new filter rule. These filters define which rows are kept in the view.
Filters also work with dates: select the “Filter dates to…” option, then use either relative ranges (e.g. “Last 7 days”) or specify exact dates.
Columns, metrics, and axes can be formatted to change how their data is displayed and interpreted. Click the left-most of your configuration buttons, the "Table/Chart Options" button, to apply formatting to any column, metric, or axis. You can select auto-format, or choose from a list of categories and formats within those categories.
In charts, the X-axis will be auto-formatted, and you can change the format as needed. All series in each Y-axis will share the same format. Axis formatting can be adjusted by clicking the gear icon next to the axis name.
Formats will be used to adjust how data is displayed in the columns of a table, in the aggregations applied to groups and in the grand total row, and to featured metrics. When grouping a formatted column, the underlying, unformatted value will be used to determine which row goes in which group.
When working with dates, the format is autodetected by default. If your date is not successfully detected, click the 3 dots next to the output format field and enter a custom starting format.
Valid options are:

If the output format uses a token that is not found in the input format, e.g. converting MM-DD to MM-DD-YYYY, then certain values will be assumed:
Dates that do not adhere to the starting format will remain unformatted in your table.
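As an illustration of how missing tokens get assumed values, here is the equivalent behavior in Python's standard library (Parabola's assumed defaults may differ from Python's, so treat this as a sketch):

```python
from datetime import datetime

# Parsing "03-15" with an input format of month-day leaves the year
# token unspecified; Python assumes 1900 for a missing year
parsed = datetime.strptime("03-15", "%m-%d")

# Outputting with a format that includes the missing token
print(parsed.strftime("%m-%d-%Y"))  # 03-15-1900
```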
Use the "Table/Chart Options" to hide specific columns from your table view.
Columns can be used for sorting, grouping, and filtering even when hidden. Those settings are applied before the columns are hidden, giving you even more control over your final table.
Hidden columns will not show up in search results, unless the option for “Display all columns” is enabled.
Hidden columns can be filtered by quick filters.
Hidden columns will be present in CSV exports downloaded from the view.
Use the "Table/Chart Options" to freeze the first (left-most) column or the first row by using the checkboxes at the top. A frozen column or row will “stick,” and other columns and rows will scroll behind them.
Click "Quick Filter" in the top right corner of the dashboard to toggle the filter bar pictured below. Using "Add quick filter" or "Add date filter," you can filter data in specific columns across every view on the page. These filters are only applied for you, and will not affect how other users see this Flow. Refreshing the page will reset all quick filters.
After 8 seconds, the combination of quick filters will be saved in the “Recents” drawer on the right side of the filter bar. Your recent filters are only visible to you, and can be reapplied with a click.
Quick filters can only be used if you have at least one table on your Flow. Above the first table on your published Flow page, click to add a filter. The filter bar will then follow you as you scroll.
Multiple quick filters are combined using a logical “and” statement. These filters are applied in conjunction with any filters set on individual views.
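The logical “and” combination works like this sketch, where every filter must match for a row to remain visible (the rows and filter predicates here are hypothetical):

```python
# Hypothetical quick filters: each is a (column, predicate) pair;
# a row stays visible only if every filter matches (logical "and")
rows = [
    {"status": "Shipped", "region": "East"},
    {"status": "Shipped", "region": "West"},
    {"status": "Pending", "region": "East"},
]

filters = [
    ("status", lambda v: v == "Shipped"),
    ("region", lambda v: v == "East"),
]

visible = [r for r in rows if all(pred(r[col]) for col, pred in filters)]
print(visible)  # [{'status': 'Shipped', 'region': 'East'}]
```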
Use the clear filters icon to remove all currently applied filters.

From the Table Options menu, use the “add color rule” button to apply formatting to the columns of your Table view.
There are 3 types of formatting that can be added:
(The same menu can be used to remove any existing colors applied to a column.)
Applies a chosen color to a column entirely. All cells will have a color applied.
Uses a conditional rule to color specific cells. The following operators are supported:
Applies a 2 color or 3 color scale to every cell in the column. All cells will have a color applied.
When using two colors, by default the first color will be applied to the minimum value and the second color will be applied to the maximum value. When using three colors, by default, the middle color will be applied to the value 50% between the smallest and largest value in the column.
Cells with values between the minimum, maximum, and middle value (if using 3 colors) will blend the colors they are between, creating a smooth gradient.
When setting a custom value for the maximum or minimum on a color scale, any value in the table that is larger than the maximum or smaller than the minimum will have the maximum color or minimum color applied, respectively.
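A two-color scale with clamping can be sketched as a simple linear interpolation. This is an illustrative model only (the colors and value range are hypothetical, and Parabola's exact blending may differ):

```python
def scale_position(value, min_val, max_val):
    """Map a cell value to a 0-1 position on the color scale,
    clamping values outside min/max to the endpoint colors."""
    if value <= min_val:
        return 0.0
    if value >= max_val:
        return 1.0
    return (value - min_val) / (max_val - min_val)

def blend(c1, c2, t):
    """Linearly blend two RGB colors; t=0 gives c1, t=1 gives c2."""
    return tuple(round(a + (b - a) * t) for a, b in zip(c1, c2))

red, green = (255, 0, 0), (0, 128, 0)
t = scale_position(75, 0, 100)  # a cell 75% of the way up the scale
print(blend(red, green, t))     # (64, 96, 0)
```

A three-color scale works the same way, except values below the midpoint blend between the first and middle colors, and values above it blend between the middle and last colors.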
Click the ellipsis menu next to the format dropdown to access controls to adjust how the scale is applied.
Switch each breakpoint to use a number, percent, or the default min/max value.
Scales can be applied to columns containing dates, numbers, currency, etc.
Multiple rules can be applied to the same column. They will be evaluated top down, starting with the first rule. Any cells that are not colored as a result of that rule move on to the next rule, until all rules have been evaluated, or all cells have been assigned a color. A cell will show the color of the first rule that evaluates to true for the value in that cell.
After a set color or color scale is applied, no further rules will be evaluated, as all cells will have an assigned color after those rules.
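The top-down evaluation above amounts to a first-match-wins loop. A minimal sketch, using hypothetical rules (the conditions and colors are made up for illustration):

```python
# Hypothetical color rules, evaluated top-down; a cell takes the
# color of the first rule whose condition it satisfies
rules = [
    (lambda v: v < 0, "red"),
    (lambda v: v > 100, "orange"),
    (lambda v: True, "gray"),  # a set color matches every cell
]

def cell_color(value):
    for condition, color in rules:
        if condition(value):
            return color

print([cell_color(v) for v in (-5, 250, 42)])  # ['red', 'orange', 'gray']
```

Because the last rule matches everything, no rule placed after it would ever be evaluated, which is why a set color or color scale ends evaluation.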
Existing table views may have columns with column emphasis applied. Those columns will be migrated automatically to use a set color formatting rule.
Zendesk uses basic authentication with an API token. Here's how to get your credentials and connect them in Parabola:
Once connected, Parabola will securely use your credentials to pull data from Zendesk into your flows.
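For reference, Zendesk's API-token flavor of basic authentication uses "{email}/token" as the username and the API token as the password. The sketch below builds the resulting Authorization header with hypothetical credentials; Parabola constructs this for you once your credentials are connected.

```python
import base64

# Hypothetical credentials; substitute your own email and API token
email = "agent@example.com"
api_token = "abc123"

# Zendesk basic auth: username is "{email}/token", password is the token
credentials = f"{email}/token:{api_token}"
auth_header = "Basic " + base64.b64encode(credentials.encode()).decode()
print(auth_header)
```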
Using the Zendesk integration in Parabola, you can pull in a wide range of customer service and support data, including:
With Parabola and Zendesk, you can turn your support data into automated workflows that save hours, improve visibility, and help your team deliver better customer experiences.