Interacting with an API for the first time can feel daunting. Each API is unique and requires different settings, but most APIs follow general standards that make understanding and connecting to them accessible.
To learn how to best use APIs in Parabola, check out our video guides.
Parabola works best with two types of APIs. The most common API type to connect to is a REST API. Another API type rising in popularity is a GraphQL API. Parabola may be able to connect to a SOAP API, but it is unlikely due to how they are structured.
To evaluate if Parabola can connect with an API, reference this flow chart.
A REST API is an API that can return data by making a request to a specific URL. Each request is sent to a specific resource of an API using a unique Endpoint URL. A resource is an object that contains the data being requested. Common examples of a resource include Orders, Customers, Transactions, and Events.
To receive a list of orders in Squarespace, the Pull from an API step will make a request to Squarespace's Orders resource using an Endpoint URL:
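A representative request might look like this (the version segment of the path may change over time; check Squarespace's API reference for the current one):

```
GET https://api.squarespace.com/1.0/commerce/orders
```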
GraphQL is a new type of API that allows Parabola to specify the exact data it needs from an API resource through a request syntax known as a GraphQL query. To get started with this type of API call in Parabola, set the request type to "POST" in any API step, then select "GraphQL" as the Protocol of the request body.
Once your request type is set, you can enter your query directly into the request body. When forming your query, it can be helpful to use a formatting tool to ensure correct syntax.
Our GraphQL implementation currently supports Offset Limit pagination, using variables inserted directly into the query. Variables can be created by wrapping any single word in the delimiters <%%>. Once created, variables will appear in the dropdown list in the "Pagination" section. One of these variables should correspond to your "limit", and the other should correspond to your "offset."
The limit field is static; it represents the number of results returned in each API request. The offset field is incremented in each subsequent request based on the "Increment each page by" value. The exact implementation will be specific to your API docs.
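As a sketch, a paginated query using those variables might look like this (the orders resource and its fields are hypothetical; substitute whatever your API exposes):

```graphql
query {
  orders(limit: <%limit%>, offset: <%offset%>) {
    id
    total
    createdAt
  }
}
```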
After configuring your pagination settings, be sure to adjust the "Maximum pages to fetch" setting in the "Rate Limiting" section as well to retrieve more or fewer results.
GraphQL can be used for data mutations in addition to queries, as specified by the operation type at the start of your request body. For additional information on GraphQL queries and mutations, please reference GraphQL's official documentation.
The first step to connecting to an API is to read the documentation that the service provides. The documentation is commonly referred to as the API Reference, or something similar. These pages tend to feature URL and code block content.
The API Reference always provides at least two points of instruction. The first point outlines how to Authenticate a request to give a user or application permission to access the data. The second point outlines the API resources and Endpoint URLs, or where a request can be sent.
Most APIs require authentication to access their data. This is likely the first part of their documentation. Try searching for the word "Authentication" in their documentation.
The most common types of authentication are Bearer Tokens, Username/Password (also referred to as Basic), and OAuth2.0.
This method requires you to send your API Key or API Token as a bearer token. Take a look at this example below:
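As an illustration, a Stripe-style request sending a bearer token might look like this (the sample key below is the same test value used throughout this guide):

```
curl https://api.stripe.com/v1/charges \
  -H "Authorization: Bearer sk_test_WiyegCaE6iGr8eSucOHitqFF"
```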
The part that indicates it is a bearer token is this:
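```
Authorization: Bearer sk_test_WiyegCaE6iGr8eSucOHitqFF
```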
This method is also referred to as Basic Authorization or simply Basic. Most often, the username and password used to sign into the service can be entered here.
However, some APIs require an API key to be used as a username, password, or both. If that's the case, insert the API key into the respective field noted in the documentation.
The example below demonstrates how to connect to Stripe's API using the Basic Authorization method.
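In cURL, that connection might look like the following sketch (Stripe's docs use the -u flag, which sends Basic credentials):

```
# -u sends Basic credentials: the API key is the username,
# and the trailing colon means the password is left blank
curl https://api.stripe.com/v1/customers \
  -u sk_test_WiyegCaE6iGr8eSucOHitqFF:
```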
The Endpoint URL shows a request being made to a resource called "customers". The authorization type can be identified as Basic for two reasons:
This method is an authorization protocol that allows users to sign into a platform using a third-party account. OAuth2.0 allows a user to selectively grant access for various applications they may want to use.
Authenticating via OAuth2.0 does require more time to configure. For more details on how to authorize using this method, read our guide Using OAuth2.0 in Parabola.
Some APIs will require users to generate access tokens that have short expirations. Generally, any token that expires in less than 1 day is considered to be "short-lived" and may be using this type of authentication. This type of authentication in Parabola serves a grouping of related authentication styles that generally follow the same pattern.
One very specific type of authentication that is served by this option in Parabola is called OAuth2.0 Client Credentials. This differs from our standard OAuth2.0 support, which is built specifically for OAuth2.0 Authorization Code. Both Client Credentials and Authorization Code are part of the OAuth2.0 spec, but represent different Grant Types.
Authenticating with the Expiring Access Token option is more complex than options like Bearer Token, but less complex than OAuth2.0. For more details on how to use this option, read our guide Using Expiring Access Tokens in Parabola.
A resource is a specific category or type of data that can be queried using a unique Endpoint URL. For example, to get a list of customers, you might use the Customer resource. To add emails to a campaign, use the Campaign resource.
Each resource has a variety of Endpoint URLs that instruct you how to structure a URL to make a request to a resource. Stripe has a list of resources including "Balance", "Charges", "Events", "Payouts", and "Refunds".
HTTP methods, or verbs, are a specific type of action to make when sending a request to a resource. The primary verbs are GET, POST, PUT, PATCH, and DELETE.
A header is a piece of additional information to be sent with the request to an API. If an API requires additional headers, it is commonly noted in their documentation as -H.
Remember the authentication methods above? Some APIs list the authentication type to be sent as a header. Since Parabola has specific fields for authentication, those headers can typically be ignored.
Taking a look at Webflow's API, they show two headers are required:
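In their cURL samples, those headers appear like this (the token placeholder is illustrative):

```
-H "Authorization: Bearer <your_api_token>" \
-H "accept-version: 1.0.0"
```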
The first -H header is linked to a key called Authorization. Parabola takes care of that. It does not need to be added as a header. The second -H header is linked to a key called accept-version. The value of the header is 1.0.0. This likely indicates which version of Webflow's API will be used.
JavaScript Object Notation, or more commonly JSON, is a way for an API to exchange data between you and a third-party. JSON follows a specific set of syntax rules.
An object is a set of key:value pairs and is wrapped in curly brackets {}. An array is a list of values wrapped in square brackets [] and linked to a single key; the values can be simple values or objects.
JSON in API documentation may look like this:
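Here is a small sketch (the field names are illustrative):

```json
{
  "results": [
    {
      "id": 1,
      "name": "Ada Lovelace",
      "tags": ["founder", "admin"]
    }
  ]
}
```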
Most documentation will use cURL to demonstrate how to make a request using an API.
Let's take a look at this cURL example referenced in Spotify's API:
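The snippet below is representative of the "Get an Artist" example in Spotify's docs (the artist ID and token are placeholders):

```
curl -X "GET" "https://api.spotify.com/v1/artists/0TnOYISbd1XYRBk9myaseg" \
  -H "Accept: application/json" \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer <your_access_token>"
```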
We can extract the following information:
Because Parabola handles Authorization separately, the bearer token does not need to be passed as a header.
Here's another example of a cURL request in Squarespace:
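A representative Squarespace request (the token and app name placeholders are illustrative):

```
curl "https://api.squarespace.com/1.0/commerce/orders" \
  -H "Authorization: Bearer <your_app_token>" \
  -H "User-Agent: <your_app_name>" \
  -H "Content-Type: application/json"
```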
This is what we can extract:
Parabola also passes Content-Type: application/json as a header automatically. That does not need to be added.
Check out this guide to learn more about troubleshooting common API errors.
The Pull from an API step sends a request to an API to return specific data. In order for Parabola to receive this data, it must be returned in a CSV, JSON, or XML format. This step allows Parabola to connect to a third-party to import data from another service, platform, or account.
To use the Pull from an API step, the "Request Type" and "API Endpoint URL" fields are required.
There are two ways to request data from an API: using a GET request or using a POST request. These are also referred to as verbs, and are standardized throughout REST APIs.
The most common request for this step is a GET request. A GET request is a simple way to ask for existing data from an API.
"Hey API, can you GET me data from the server?"
To receive artists from Spotify, their documentation outlines making a GET request to the Artists resource using this Endpoint URL:
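A representative form (Spotify's endpoint expects the artist IDs to request as a parameter):

```
GET https://api.spotify.com/v1/artists?ids=<comma-separated artist IDs>
```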
Some APIs will require a POST request to import data, though this is uncommon. A POST request is a simple way to make changes to data, such as adding a new user to a table.
The request information is sent to the API in the JSON body of the request. The JSON body is a block that outlines the data that will be added.
"Hey API, can you POST my new data to the server? The new data is in the JSON body."
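A sketch of such a request body (the resource and field names are hypothetical):

```json
{
  "user": {
    "name": "Ada Lovelace",
    "email": "ada@example.com"
  }
}
```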
Similar to typical websites, APIs use URLs to request or modify data. More specifically, an API Endpoint URL is used to determine where to request data from or where to send new data to. Below is an example of an API Endpoint URL.
To add your API Endpoint URL, click the API Endpoint URL field to open the editor. You can add URL parameters by clicking the +Add icon under the "URL Parameters" text in that editor. The endpoint dynamically changes based on the key/value pairs entered into this field.
Most APIs require authentication to access their data. This is likely the first part of their documentation. Try searching for the word Authentication in their documentation.
Here are the Authentication types available in Parabola:
The most common types of authentication are Bearer Tokens, Username/Password (also referred to as Basic), and OAuth2.0. Parabola has integrated these authentication types directly into this step.
This method requires you to send your API Key or API Token as a Bearer Token. Take a look at this example below:
The part that indicates it is a bearer token is this:
To add this specific token in Parabola, select Bearer Token from the Authorization menu and add "sk_test_WiyegCaE6iGr8eSucOHitqFF" as the value.
This method is also referred to as Basic Authorization or simply Basic. Most often, the username and password used to sign into the service can be entered here.
However, some APIs require an API key to be used as a username, password, or both. If that's the case, insert the API key into the respective field noted in the documentation.
The example below demonstrates how to connect to Stripe's API using the Basic Authorization method.
The Endpoint URL shows a request being made to a resource called customers. The authorization type can be identified as Basic for two reasons:
To authorize this API in Parabola, fill in the fields below:
This method is an authorization protocol that allows users to sign into a platform using a third-party account. OAuth2.0 allows a user to selectively grant access for various applications they may want to use.
Authenticating via OAuth2.0 does require more time to configure. For more details on how to authorize using this method, read our guide Using OAuth2.0 in Parabola.
Some APIs will require users to generate access tokens that have short expirations. Generally, any token that expires in less than 1 day is considered to be "short-lived" and may be using this type of authentication. This type of authentication in Parabola serves a grouping of related authentication styles that generally follow the same pattern.
One very specific type of authentication that is served by this option in Parabola is called OAuth2.0 Client Credentials. This differs from our standard OAuth2.0 support, which is built specifically for OAuth2.0 Authorization Code. Both Client Credentials and Authorization Code are part of the OAuth2.0 spec, but represent different Grant Types.
Authenticating with the Expiring Access Token option is more complex than options like Bearer Token, but less complex than OAuth2.0. For more details on how to use this option, read our guide Using Expiring Access Tokens in Parabola.
A header is a piece of additional information to be sent with the request to an API. If an API requires additional headers, it is commonly noted in their documentation as -H.
Remember the authentication methods above? Some APIs list the authentication type to be sent as a header. Since Parabola has specific fields for authentication, those headers can typically be ignored.
Taking a look at Webflow's API, they show two headers are required.
The first -H header is linked to a key called Authorization. Parabola takes care of that. It does not need to be added as a header. The second -H header is linked to a key called accept-version. The value of the header is 1.0.0. This likely indicates which version of Webflow's API will be used.
APIs typically structure data as nested objects. This means data can exist inside data. To extract that data into separate columns and rows, use the Output section to select a top-level column.
For example, a character can exist as a data object. Inside the result object, additional data is included such as their name, date of birth, and location.
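A hypothetical response of that shape:

```json
{
  "results": [
    {
      "name": "Jon Snow",
      "dateOfBirth": "281 AC",
      "location": "Winterfell"
    }
  ]
}
```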
This API shows a data column linked to results. To expand all of the data in the results object into neatly displayed columns, select results as the top-level column in the Output section.
If you only want to expand some of the columns, choose to keep specific columns and select the columns that you want to expand from the dropdown list.
APIs return data in pages. This might not be noticeable for small requests, but larger requests will not show all results. By default, APIs return 1 page of results. To view the other pages, pagination settings must be configured.
Each API has different Pagination settings which can be searched in their documentation. The three main types of pagination are Page, Offset and Limit, and Cursor based pagination.
APIs that use Page based pagination make it easy to request more pages. Documentation will refer to a specific parameter key for each request to return additional pages.
Intercom uses this style of pagination. Notice they reference the specific parameter key of page:
Parabola refers to this parameter as the Pagination Key. To request additional pages from Intercom's API, set the Pagination Key to page.
The Starting page is the first page to be requested. Most often, that value will be set to 0. For most pagination settings, 0 is the first page. The Increment by value is the number of pages to advance by. A value of 1 will fetch the next page. A value of 10 will fetch every tenth page.
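For example, with a Starting page of 1 and an Increment by of 1, successive requests might look like this (URLs are illustrative):

```
https://api.intercom.io/contacts?page=1
https://api.intercom.io/contacts?page=2
https://api.intercom.io/contacts?page=3
```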
APIs that use Offset and Limit based pagination require each request to limit the amount of items per page. Once that limit is reached, an offset is used to cycle through those pages.
Spotify refers to this type of pagination in their documentation:
To configure these pagination settings in Parabola, set the Pagination style to offset and limit.
The Starting Value is set to 0 to request the first page. The Increment by value is set to 10, so the first request starts at offset 0 and the next request skips ahead to offset 10.
The Limit Key is set to limit to tell the API to cap the number of items per page. The Limit Value is set to 10 to define the number of items to return in each request.
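Put together, successive requests might look like this (URLs are illustrative):

```
https://api.spotify.com/v1/me/tracks?limit=10&offset=0
https://api.spotify.com/v1/me/tracks?limit=10&offset=10
https://api.spotify.com/v1/me/tracks?limit=10&offset=20
```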
Otherwise known as the bookmark of APIs, Cursor based pagination will mark a specific item with a cursor. To return additional pages, the API looks for a specific Cursor Key linked to a unique value or URL.
Squarespace uses cursor based pagination. Their documentation states that two Cursor Keys can be used. The first one is called nextPageCursor and has a unique value:
The second one is called nextPageUrl and has a URL value:
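A representative pagination object in a Squarespace response (values are illustrative):

```json
{
  "pagination": {
    "hasNextPage": true,
    "nextPageCursor": "abc123",
    "nextPageUrl": "https://api.squarespace.com/1.0/commerce/orders?cursor=abc123"
  }
}
```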
To configure cursor based pagination using Squarespace, use these values in Parabola:
Replace the Cursor path in response with pagination.nextPageUrl to use the URL as a value. The API should return the same results.
Imagine someone asking thousands of questions all at once. Before the first question can be answered thousands of new questions are coming in. That can become overwhelming.
Servers are no different. Making paginated API calls requires a separate request for each page. To avoid this, APIs have rate limiting rules to protect their servers from being overwhelmed with requests. Parabola can adjust the Max Requests per Minute to avoid rate limiting.
By default, this value is set to 60 requests per minute. That's 1 request per second. The Max Requests per Minute does not set how many requests are made per minute. Instead, it caps the rate so that Parabola never asks too many questions at once.
Lowering the request rate will avoid rate limiting but will calculate data much more slowly. Note that Parabola will stop calculating a flow after 60 minutes.
To limit the number of pages to fetch, use this field to set the value. Lower values will return data much faster. Higher values will take longer to return data.
The default value in Parabola is 5 pages. Just note, this value needs to be larger than the expected number of pages to be returned. This prevents any data from being omitted.
If you are pulling a large amount of data and want to limit how much is being pulled in while building, you can set the step to pull a lower number of pages while editing the Flow than while running the Flow.
URLs tend to break when there are special characters like spaces, accented characters, or even other URLs. Most often, this occurs when using {text merge} values to dynamically insert data into a URL.
Check the "Encode URLs" box to prevent the URL from breaking if special characters need to be passed.
By default, this step will parse the data sent back to Parabola from the API in the format indicated by the content-type header received. Sometimes, APIs will send a content-type that Parabola does not know how to parse. In these cases, adjust this setting from auto-detect to a different setting to force the step to parse the data in a specific way.
Use the gzip option when the data is returned in a gzip format, but can be unzipped into CSV, XML, or JSON data. If you enable gzip parsing, you must also specify a response type option.
Something not right? Check out this guide to learn more about troubleshooting common API errors.
The Send to an API step sends a request to an API to export specific data. Data must be sent to the API as JSON formatted in the body of the request. This step can send data only when a flow is published.
This table shows the product information for new products to be added to a store. It shows common columns like "My Product Title", "My Product Description", "My Product Vendor", "My Product Tags".
These values can be used to create products in bulk via the Send to an API step.
To use the Send to an API step, a Request Type, API Endpoint URL, and Authentication are required. Some APIs require Custom Headers while other APIs nest their data into a single cell that requires a Top Level Key to format into rows and columns.
There are four ways to send data with an API using POST, PUT, PATCH, and DELETE requests. These methods are also known as verbs.
The POST verb is used to create new data. The DELETE verb is used to delete data. The PUT verb is used to update existing data, and the PATCH verb is used to modify a specific portion of the data.
"Hey API, can you POST new data to the server? The new data is in the JSON body."
The API Endpoint URL is the specific location where data will be sent. Each API Endpoint URL belongs to a specific resource. A resource is the broader category to be targeted when sending data.
To create a new product in Shopify, use their Products resource. Their documentation specifies making a POST request to that resource using this Endpoint URL:
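A representative form (the dated version segment in the path changes over time; use the version shown in Shopify's current docs):

```
POST /admin/api/2024-01/products.json
```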
Your Shopify store domain will need to be prepended to each Endpoint URL:
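For example, with a hypothetical store subdomain:

```
https://your-store.myshopify.com/admin/api/2024-01/products.json
```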
The request information is sent to the API in the JSON body of the request. The JSON body is a block that outlines the data that will be added.
The body of each request is where data that will be sent through the API is added. The body must be in raw JSON format using key:value pairs. The JSON below shows common attributes of a Shopify product.
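A sketch of that body, using {text merge} values that match the column names shown earlier (product_type is shown as a static value here, since the example table has no matching column):

```json
{
  "product": {
    "title": "{My Product Title}",
    "body_html": "{My Product Description}",
    "vendor": "{My Product Vendor}",
    "product_type": "Apparel",
    "tags": "{My Product Tags}"
  }
}
```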
Notice the title, body_html, vendor, product_type, and tags can be generated when sending this data to an API.
Since each product exists per row, {text merge} values can be used to dynamically pass the data in the JSON body.
This will create 3 products: White Tee, Pink Pants, and Sport Sunglasses with their respective product attributes.
Most APIs require authentication to access their data. This is likely the first part of their documentation. Try searching for the word Authentication in their documentation. Below are the authentication types supported on Parabola:
The most common types of authentication are Bearer Tokens, Username/Password (also referred to as Basic), and OAuth 2.0. Parabola has integrated these authentication types directly into this step.
This method requires you to send your API Key or API Token as a bearer token. Take a look at this example below:
The part that indicates it is a bearer token is this:
To add this specific token in Parabola, select Bearer Token from the Authorization menu and add sk_test_WiyegCaE6iGr8eSucOHitqFF as the value.
This method is also referred to as Basic Authorization or simply Basic. Most often, the username and password used to sign into the service can be entered here.
However, some APIs require an API key to be used as a username, password, or both. If that's the case, insert the API key into the respective field noted in the documentation.
The example below demonstrates how to connect to Stripe's API using the Basic Authorization method.
The Endpoint URL shows a DELETE request being made to a resource called customers. The authorization type can be identified as Basic for two reasons:
To delete this customer using Parabola, fill in the fields below:
This method is an authorization protocol that allows users to sign into a platform using a third-party account. OAuth2.0 allows a user to selectively grant access for various applications they may want to use.
Authenticating via OAuth2.0 does require more time to configure. For more details on how to authorize using this method, read our guide Using OAuth2.0 in Parabola.
Some APIs will require users to generate access tokens that have short expirations. Generally, any token that expires in less than 1 day is considered to be "short-lived" and may be using this type of authentication. This type of authentication in Parabola serves a grouping of related authentication styles that generally follow the same pattern.
One very specific type of authentication that is served by this option in Parabola is called OAuth2.0 Client Credentials. This differs from our standard OAuth2.0 support, which is built specifically for OAuth2.0 Authorization Code. Both Client Credentials and Authorization Code are part of the OAuth2.0 spec, but represent different Grant Types.
Authenticating with the Expiring Access Token option is more complex than options like Bearer Token, but less complex than OAuth2.0. For more details on how to use this option, read our guide Using Expiring Access Tokens in Parabola.
A header is a piece of additional information to be sent with the request to an API. If an API requires additional headers, it is commonly noted in their documentation as -H.
Remember the authentication methods above? Some APIs list the authentication type to be sent as a header. Since Parabola has specific fields for authentication, those headers can typically be ignored.
Taking a look at Webflow's API, they show two headers are required.
The first -H header is linked to a key called Authorization. Parabola takes care of that. It does not need to be added as a header. The second -H header is linked to a key called accept-version. The value of the header is 1.0.0. This likely indicates which version of Webflow's API will be used.
URLs tend to break when there are special characters like spaces, accented characters, or even other URLs. Most often, this occurs when using {text merge} values to dynamically insert data into a URL.
Check the "Encode URLs" box to prevent the URL from breaking if special characters need to be passed.
Check out this guide to learn more about troubleshooting common API errors.
Use the Enrich with API step to make API requests using a list of data, enriching each row with data from an external API endpoint.
Our input data has two columns: "data.id" and "data.employee_name".
Our output data, after using this step, has three new columns appended to it: "api.status", "api.data.id", and "api.data.employee_name". This data was appended to each row that made the call to the API.
First, decide if your data needs a GET or POST operation, or the less common PUT or PATCH, and select it in the Type dropdown. A GET operation is the most common way to request data from an API. A POST is another way to request data, though it is more commonly used to make changes, like adding a new user to a table. PUT and PATCH make updates to data, and sometimes return a new value that can be useful.
Insert your API endpoint URL in the text field.
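Merge tags make the request dynamic per row. For example, with the input columns above and a hypothetical endpoint:

```
https://api.example.com/v1/employees/{data.id}
```

Each row's value replaces {data.id}, so every row makes its own request.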
Most APIs require authentication to access their data. This is likely the first part of their documentation. Try searching for the word "authentication" in their documentation.
Here are the authentication types available in Parabola:
The most common types of authentication are 'Bearer Token', 'Username/Password' (also referred to as Basic), and 'OAuth2.0'. Parabola has integrated these authentication types directly into this step.
This method requires you to send your API key or API token as a bearer token. Take a look at this example below:
The part that indicates it is a bearer token is this:
To add this specific token in Parabola, select 'Bearer Token' from the 'Authorization' menu and add "sk_test_WiyegCaE6iGr8eSucOHitqFF" as the value.
This method is also referred to as "basic authorization" or simply "basic". Most often, the username and password used to sign into the service can be entered here.
However, some APIs require an API key to be used as a username, password, or both. If that's the case, insert the API key into the respective field noted in the documentation.
The example below demonstrates how to connect to Stripe's API using the basic authorization method.
The endpoint URL shows a request being made to a resource called customers. The authorization type can be identified as basic for two reasons:
To authorize this API in Parabola, fill in the fields below:
This method is an authorization protocol that allows users to sign into a platform using a third-party account. OAuth2.0 allows a user to selectively grant access for various applications they may want to use.
Authenticating via OAuth2.0 does require more time to configure. For more details on how to authorize using this method, read our guide Using OAuth2.0 in Parabola.
Some APIs will require users to generate access tokens that have short expirations. Generally, any token that expires in less than 1 day is considered to be "short-lived" and may be using this type of authentication. This type of authentication in Parabola serves a grouping of related authentication styles that generally follow the same pattern.
One very specific type of authentication that is served by this option in Parabola is called "OAuth2.0 Client Credentials". This differs from our standard OAuth2.0 support, which is built specifically for "OAuth2.0 Authorization Code". Both methods are part of the OAuth2.0 spec, but represent different grant types.
Authenticating with an expiring access token is more complex than using a bearer token, but less complex than OAuth2.0. For more details on how to use this option, read our guide Using Expiring Access Tokens in Parabola.
How to work with errors when you expect them in your API calls
In the Enrich with an API step and the Send to an API step, enable Error Handling to allow your API steps to pass through data even if one or more API requests fail. Modifying this setting will add new error handling columns to your dataset reporting on the status of those API calls.
By default, this section will show that the step will stop running when 1 row fails. This has always been the standard behavior of our API steps. Remember, each row of data is a separate API call. With this default setting enabled, you will never see any error handling columns.
Update that setting, and you will see that new columns are set to be added to your data. These new columns are:
API Success Status will print out a true or false value to show if that row's API call succeeded or failed.
API Error Code will have an error code for that row if the API call failed, and will be blank if the API call succeeded.
API Error Message will display the error message associated with any API call that failed, if the API did in fact send us back a message.
Unless you keep the default setting, these columns will be included even if every row succeeds. In that case, you will see the API Success Status column with all true values, and the other two columns with all blank values.
It is smart to set a threshold where the step will still fail if enough rows have failed. Usually, if enough rows fail to make successful API calls, there may be a problem with your step settings, the data you are merging into those calls, or the API itself. In these cases, it is a good idea to ensure that the step can fully stop without needing to run through every row.
Choose to stop running this step if either a static number of rows fail, or if a percentage of rows fail.
You must choose a number greater than 0.
When using a percentage, Parabola will always round up to the next row if the percentage of the current set of rows results in a partial row. For example, a 10% threshold on 25 rows works out to 2.5 rows, so the step will stop once 3 rows have failed.
In rare cases, you may want to ensure that your step never stops running, even if every row results in a failed API call. In that case, set your error handling threshold to any number greater than 100%, such as 101% or 200%.
Once you have enabled this setting, use these new columns to create a branch to deal with errors. The most common use case will be to use a Filter Rows step to filter down to just the rows that have failed, and then send those to a Google Sheet for someone to check on and make adjustments accordingly.
If you have a flow that is utilizing these error handling columns, the run logs on the live view of the flow will not indicate if any rows were recorded as failed. The run logs will only show a failure if the step was forced to stop by exceeding the threshold of acceptable errors. It is highly advisable that you set up your flow to create a CSV or a Google Sheet of these errors so that you have a record of them from each run.
Use the Pull from Airtable step to pull in your data from your Airtable databases.
On August 1, 2023, Airtable will no longer allow users to generate new API keys. If you have a Pull from Airtable step that was authorized before July 27th, 2023 (using an API key for authentication), it will continue to pull in data until February 1, 2024. After that date, the step will no longer function. To migrate your step to the new authentication method, open the step, click "Choose Accounts" -> "Add new account". Once that authentication has been added to one step in your Flow, you can switch other Airtable steps to use it as well.
To connect to your Airtable account, click the blue Authorize button.
Clicking Authorize will launch a window where you can sign in to Airtable and confirm which bases you would like Parabola to have access to. Any base that you do not select from this menu will not be available to pull data from.
Once connected, you can select the Base, Table and View from your Airtable bases. In the example below, we are pulling data from our Shopify Orders base and our Orders table using the Grid view.
You can also click Fetch new settings to reload any bases, tables, or views since your data was last imported.
If your base uses linked records to connect tables, those values will be pulled in as record ids. To get the complete data associated with those records, use another Pull from Airtable step to import the related table. Using the Combine tables step, you can merge the tables together based on a shared record id.
If a column has no values in it, that column will not be imported. There must be at least one value present in a row for the column itself to come through.
If a column has a duration in an h:mm format, Airtable exports the duration value in seconds. For example, Airtable sends 0:01 as 60.
Use the Send to Airtable step to create, update, or delete records in your Airtable base. Just map the fields in your Airtable base to the related columns in Parabola.
On August 1, 2023, Airtable will no longer allow users to generate new API keys. If you have a Send to Airtable step that was authorized before July 27th, 2023 (using an API key for authentication), it will continue to send data until February 1, 2024. After that date, the step will no longer function. To migrate your step to the new authentication method, open the step, click "Choose Accounts" -> "Add new account". Once that authentication has been added to one step in your Flow, you can switch other Airtable steps to use it as well.
To connect to your Airtable account, click the blue Authorize button.
Clicking Authorize will launch a window where you can sign in to Airtable and confirm which bases you would like Parabola to have access to. Any base that you do not select from this menu will not be available to send data to.
Once connected, you can choose to create records, update records, or delete records from the base and table of your choosing.
In the example below, we are adding order #2001 to our Orders table within our Shopify Orders base.
Note how the Airtable fields are displayed on the left-hand side. Each of the columns from your Airtable base appears. On the right-hand side, map the values from your Parabola data to be added into those fields.
You can also target a specific record to be updated. Map the Record ID* to the id column in Parabola that contains that data. You can also choose the specific fields you want to update.
In this example, we are updating the Order: Name of record recYmhxVBRqxWNT7N.
To delete a record, simply map the Record ID* to the id column in Parabola. In this example, we are deleting record recYmhxVBRqxWNT7N.
Convert your percentages to decimal values before sending data to Airtable. For example, if your data contains 0.51%, convert that to 0.0051 and adjust your precision values in Airtable. By default, Airtable may interpret that as 0.01%.
You can automatically pass the values of your select options to set those values in your Airtable base. If you enter a select option that does not exist, Airtable will automatically create a new select option for that value.
Airtable parses incoming duration values in seconds. For example, if you send a value of 60, Airtable will parse that value as a one-minute duration (1:00).
Set a value of true to toggle a checkbox in your table. Set a value of false to un-toggle a checkbox in your table.
When updating an Airtable column with the collaborator field type, you can pass in an id or email value. Passing a name value will return a "Cannot parse value" error.
Use the Pull from Amazon Seller Central step to pull in reports from Amazon Seller Central.
This step is currently offered to users on our Advanced Plan. Check out the Pricing Page for additional information.
In your builder, bring in the Amazon Seller Central step and click “Authorize Amazon Seller”. You will see a modal pop up prompting you to log in to your Amazon Seller Central account.
Select your Report Category to pull in the different categories. Additional details on Report Categories, such as descriptions, are located in Amazon’s Developer Documentation.
There will be different Types of reports you can pull in based on the input selected in Report Category.
Your timeframe will default to the previous month. We recommend pulling data for the shortest timeframe you need to minimize the time it takes to pull in the report.
Lastly, there are Report Options, which are available for some report types.
Our step currently pulls from the Reporting API. If you’re looking for data from the Orders or Customers API, we recommend pulling from a report where this data exists.
There are two inventory types - Inventory and Fulfillment By Amazon (FBA) Inventory. You may need to check both report types to find the dataset you need.
The Amazon Seller Central API can take up to an hour to return your report results. Unless necessary, we recommend setting the report to pull in the least amount of data needed for your Flow.
The timezone will be set to your browser’s timezone by default, and you can adjust the timezone if needed. Parabola will take your timeframe and timezone, then adjust it to UTC when requesting the report.
If you see a report that exists in your Amazon Seller Central that you don’t see in Parabola, let us know at help@parabola.io!
The Pull from Box step gives you the ability to pull a CSV or Excel file from your Box account.
To connect your Box account to Parabola, select Authorize and follow the prompt to grant Parabola access to your Box files.
Once you have authorized your Box account, select your file in the File dropdown.
Additionally, you can tell Parabola if you're using a different delimiter, such as tab (\t) or semicolon (;), by selecting in the Delimiter dropdown. By default, Parabola will use the standard comma (,) delimiter.
The Send to Box step gives you the ability to create a new or update an existing file in your Box account.
To connect your Box account to Parabola, select Authorize and follow the prompt to grant Parabola access to your Box files.
Select the File dropdown to choose if you want to overwrite an existing file or create a new file.
If creating a new file, give the file a name in the New File Name field.
You can also decide if this is a one-off creation, or if you'd like to create a new file every time your flow runs. If you choose to create a "New File Every Run", each new file will have a timestamp appended to the file name in Box.
Use the Pull from Bubble beta step to retrieve data from your Bubble app.
Parabola works through Bubble’s Data API, so make sure the Data API is enabled. You can do this in the API section of the settings tab in your Bubble app.
In the Pull from Bubble step, insert your App Name and Object Name in the API Endpoint URL field.
Let's say the thing you want to retrieve from Bubble is "Recipes" — you would replace OBJECTNAME with recipes. The general rule of thumb here: for the object you want to retrieve, remove the spaces in the name and use lowercase letters! It's also worth noting that the Endpoint URL is different if your app isn't live yet (Bubble provides a URL to hit if that's the case). If you need more results, open the advanced settings and increase Max Pages to Fetch. You'll also need to add your API Token to the Bearer Token section to authenticate.
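Under those rules, the Endpoint URL might look like this (the app name is hypothetical; Bubble's Data API paths follow the /api/1.1/obj/ pattern):

```
https://myrecipeapp.bubbleapps.io/api/1.1/obj/recipes
```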
To find your API Token, go to Bubble's Settings Tab and generate and copy the token. Paste your API Token into the Bearer Token field and click Show Updated Results.
The result of the API call is structured in JSON. To flatten it within this step, set your Top Level Key to response and your 2nd Level Key to results.
Use the Send to Bubble step to send your data to / update your data in your Bubble app.
Parabola works through Bubble’s Data API, so make sure the Data API is enabled. You can do this in the API section of the settings tab in your Bubble app.
To connect to your Bubble account, you'll need to do so through the Bubble API. The Send to Bubble step pre-fills much of the information you need!
When we send data to Bubble, we'll likely use a PATCH or a POST request, which we can use to send data to or update data in your Bubble app. Make sure to update the API Endpoint URL to include your app name and the object you'd like to work with. The general rule of thumb here: for the object you want to work with, remove the spaces in the name and use lowercase letters! Also, it's worth noting that the Endpoint URL is different if your app isn't live yet (Bubble provides a URL to hit if that's the case).
If you need to send data to your endpoint, use the Body field to build the JSON and merge in any cell values by referencing the column name in {curly braces}. In the below example, we show you what it might look like if you wanted to update product inventory in your Bubble app!
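A sketch of such a body (the field and column names are hypothetical and must match your Bubble data type and your Parabola columns):

```json
{
  "product-name": "{Product Name}",
  "inventory-count": "{Inventory Count}"
}
```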
You'll need to add your API Token to the Bearer Token section to authenticate. To find your API Token, go to Bubble's Settings Tab and generate and copy the token. Paste your API Token into the Bearer Token field.
The Use CSV file step enables you to pull in tabular data from a CSV, TSV, or a semicolon delimited file.
The first thing to do when using this step is to either drag a file into the outlined box or select "Click to upload a file".
Once the file is uploaded and displayed in the Results tab, you'll see two settings on the lefthand side: File and Delimiter. You can click File to upload a different file. Parabola will default to using a comma delimiter, but you can always update the appropriate delimiter for your file by clicking on the Delimiter dropdown. Comma (,), tab (\t), and semicolon (;) are the three delimiter types we support.
In the "Advanced Settings", you can set a number of rows and a number of columns to skip when importing your data. This will skip rows from top-down and columns from left-to-right. You can also select a Quote Character which will help make sure data with commas in the values/cells don’t disrupt the CSV structure.
The Generate CSV file step enables you to export tabular data in a CSV file. This lets you create custom datasets from a variety of sources in your flow and automatically send them to your email or your colleague's email.
Once you connect your flow to this export step, it will show a preview of the tabular data to be sent.
The step will automatically send this downloadable CSV file link to the email address used for your Parabola account.
The name of the file will be the step's title, so if you'd like to name your custom dataset file, you can double-click on it to write a new one.
Once you publish and run your flow, the emailed CSV file link will expire after 24 hours.
If the step has no data in it (0 rows), then even after running your flow, an email with a CSV file won't be sent.
The files you send through this step are stored by Parabola. We store the data as a convenience, so that the next time you open the flow, the data is still loaded into it. Your data is stored securely in an Amazon S3 Bucket, and all connections are established over SSL and encrypted.
Only one input source can be connected to this step at a time. If you have multiple flows on a canvas, or have a multi-branch flow, you can email CSVs of these other datasets by connecting them to individual Generate CSV file steps.
Write or paste a sheet of data by hand. Sheets are best used for small datasets like lookup tables or additional rows that can be fed to subsequent steps. This step is limited to 100 rows and 100 columns.
Create a sheet of data by typing in values, or copying and pasting from an existing spreadsheet. The sheet has 100 rows and 10 columns by default. Extra columns will be added automatically if the data you have pasted requires them. You can also use the "+ Column" button to add more columns manually.
Data can be highlighted across rows, columns, or cells to be edited or deleted. Use the "Clear sheet" button to clear out all data from the sheet, including the headers.
Updates to the dataset will only be saved to be used by other steps in your Flow once you click the "Save this sheet" button.
The DHL Shipment Tracking API is used to provide up-to-the-minute shipment status reports by retrieving tracking information for shipments, identifying DHL service providers, and verifying DHL delivery addresses.
DHL is a beta integration which requires a slightly more involved setup process than our native integrations. Following the guidance in this document should help even those without technical experience pull data from DHL. If you run into any questions, shoot our team an email at support@parabola.io.
📖 DHL Reference docs:
https://developer.dhl.com/api-reference/shipment-tracking#reference-docs-section
🔐 DHL Authentication doc links:
https://developer.dhl.com/api-reference/shipment-tracking#get-started-section/user-guide
1. Click My Apps on the portal website.
2. Click the + Add App button.
3. The “Add App” form appears.
4. Complete the Add App form.
5. You can select the APIs you want to access.
6. When you have completed the form, click the Add App button.
7. From the My Apps screen, click on the name of your app. The Details screen appears.
8. If you have access to more than one API, click the name of the relevant API.
⚠️ Note: The APIs are listed under the Credentials section.
9. Click the Show link below the asterisk that is hiding the Consumer Key.
1. Add an Enrich tracking from DHL step template to your canvas.
2. Click into the Enrich with API: DHL Tracking step to configure your authentication.
3. Under the Authentication Type, select None.
4. Click into the Request Settings to configure your request using the format below:
Get started with this template.
Test URL: https://api-test.dhl.com/track/
Production URL: https://api-eu.dhl.com/track/
1. Add a Use sample data step to your Flow. You can also import a dataset with tracking numbers into your Flow. (Pull from Excel File, Pull from Google Drive, Pull from API, Use sample data, etc.)
💡 Tip: When using your own data, use the Edit columns step to rename the tracking column in your source data to Tracking Number.
2. Connect it to the Enrich with API: DHL Tracking step.
3. Under Authentication Type, select None.
4. Click into the Request Settings to configure your request using the format below:
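A minimal sketch of that configuration (the shipments path comes from DHL's Shipment Tracking reference; sending your Consumer Key in a DHL-API-Key header is our reading of their auth guide — verify against the docs linked above):

```
GET https://api-eu.dhl.com/track/shipments?trackingNumber={Tracking Number}
Header: DHL-API-Key: <your Consumer Key>
```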
💡 Tip: The Enrich with API step makes dynamic requests for each row in the table by inserting the tracking number in the API Endpoint URL.
The example above assumes there is a Tracking Number column, which is referenced using curly brackets: {Tracking Number}
Enclose your column header containing tracking numbers with curly brackets to dynamically reference the tracking numbers in your table.
5. Click Refresh data to display the results.
⚠️ Note: Rate limits protect the DHL infrastructure from suspicious requests that exceed defined thresholds.
When you first request access to the API, you will get the initial service level which allows 250 calls per day with a maximum of 1 call every 5 seconds.
Additional rate limits are available and they are granted according to your specific use case. If you would like to request additional limits, please proceed with the following steps:
1. Create an app as described under the Get Access section.
2. Click My Apps on the portal website.
3. Click on the App you created.
4. Scroll down to the APIs list and click on the "Request Upgrade" button.
The Send to Databox step is a beta step. This means that while it's not a fully built-out integration, it's a preconfigured Send to an API step that makes it easy to get set up and send data to Databox using their API.
Databox is a business analytics platform that enables you to pull and analyze all of your data in one place.
The first thing to do is get an API Token from your Databox account. Click here for instructions on how to find your pre-assigned Databox token.
Once you have it, paste the API Token into the Username field on the Send to Databox step. Leave the Password field blank. This is all you need to do to authenticate with the Databox API.
Now, it's time to configure the rest of the Send to Databox step.
When sending data to the Databox API, you will be sending your data row-by-row. Whether you're sending 1 row of data or 500 rows of data, the way you set up the Body field will not change. You can consult Databox's full API documentation here.
When sending multiple metrics to Databox, the Body field of your Send to Databox step should look something like this:
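A sketch, assuming columns named {Clicks}, {Sales}, {Users}, and {Date} (rename the merge tags to match your own columns):

```json
{
  "data": [
    {
      "$clicks": "{Clicks}",
      "$sales": "{Sales}",
      "$users": "{Users}",
      "date": "{Date}"
    }
  ]
}
```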
The metrics and attributes wrapped in double quotes " " are the metrics and attributes in Databox. The values wrapped in double quotes and curly braces {} are the column names that store those values in Parabola.
In this example, 3 metrics are being sent: "clicks", "sales", and "users", with their corresponding values stored in the columns {Clicks}, {Sales}, and {Users}, respectively. "date" is the attribute we're sending for each metric.
The dollar sign $ before a metric name is mandatory. This character is used to differentiate a metric from its attributes.
When sending a single metric to Databox, the Body field of your Send to Databox step should look something like this:
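A sketch under the same column-name assumptions:

```json
{
  "data": [
    {
      "$sales": "{Sales}",
      "date": "{Date}"
    }
  ]
}
```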
In this example, 1 metric is being sent, "sales" with its corresponding value stored in the column {Sales}. "date" is the attribute we're sending for that metric.
When sending a metric with multiple attributes to Databox, the Body field of your Send to Databox step should look something like this:
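A sketch, additionally assuming a {Channel} column for the second attribute:

```json
{
  "data": [
    {
      "$sales": "{Sales}",
      "date": "{Date}",
      "channel": "{Channel}"
    }
  ]
}
```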
In this example, 1 metric is being sent with 2 attributes. "sales" is the metric being sent, with its corresponding value in the column {Sales}, and "date" and "channel" are the attributes with their corresponding values.
The Start with date & time row step creates a single row with the current date and time, with customizable offsets by day and timezones. As its name indicates, this step is a starting step so it does not accept any inputs. The current date and time will be determined automatically at the time the flow is run.
You would start your flow off with the Start with date & time row if you need relative date data as one of your data sources. The most common use for this step is if you need to provide date variables when working with APIs. Many APIs require dates to be sent in order to pull the information that you need. Since manually adjusting those dates before every flow run would defeat the purpose of an automation, this Start with date & time row solves for that.
You can add multiple rules to this step by clicking on the blue Add Date & Time Rule link. Each rule will be represented in a new column.
By default, the Days Offset field will be set to 0, meaning the date displayed is the current date and time. If you choose a positive value for the offset, it will display a future date, and if you choose a negative value, it will display a past date.
All date and time values created by this step look like this: 2019-09-18 12:33:09 which is a format of YYYY-MM-DD hh:mm:ss. If you prefer a different date format, connect a Format dates step right after this one to get the date values in your preferred format.
Use the Send to DocSpring step to automatically create submissions for your DocSpring PDF Templates.
To connect to your DocSpring account, you'll first need to click the blue "Authorize" button.
You'll need your DocSpring API Token ID and your DocSpring API Token Secret to proceed. To find them, visit your API Token settings in DocSpring.
Reminder: if you're creating a new API Token, the Token Secret will only be revealed immediately after creating the new Token. Be sure to copy and paste or write it down in a secure location. Once you've created or copied your API Token ID and Secret, come back to Parabola and paste them into the correct fields.
To pull in the correct DocSpring Template, you'll need to locate the Template ID. Open the Template you want to connect in DocSpring and locate the URL. The Template ID is the string of characters following templates/ in the URL:
Paste the ID from the URL in the Template ID field.
The Pull from Drip step is a beta step. This means that while it's not a fully built-out integration, it's a preconfigured Pull from an API step that makes it easy to get set up and pull data from Drip using their API.
Drip is a marketing automation platform built for ecommerce.
You will need the following 3 things to connect to the Drip API:
You should be able to locate your API Key from your User Settings page on Drip.
Once you've located this information from Drip:
By default, the Pull from Drip beta step is set up to pull data from the Subscribers API endpoint, which pulls a list of all subscribers.
You can update that endpoint URL in the API Endpoint URL field if you'd like to pull in other data from Drip's API. You can read their full API docs here.
The Pull from Dropbox step gives you the ability to pull in a spreadsheet from your Dropbox account. You can pull in CSV and XLS files from a personal or shared folder.
To connect to your Dropbox account, select Authorize to log in with your Dropbox credentials.
To pull a Dropbox file into Parabola, select it from the File dropdown. You will see all files that you have access to (for Dropbox Business customers, that means both personal and team files).
If your file is a CSV, you can then choose the Delimiter. By default, the delimiter is set to comma (,), but you can also select tab (\t) or semicolon (;) to match your data source.
The Send to Dropbox step gives you the ability to send CSV data to your Dropbox account. You can choose between creating a completely new file, once or every time the flow runs, or updating an existing file in Dropbox.
To connect to your Dropbox account, click Authorize to log in with your Dropbox credentials.
Under the File dropdown, decide if your data will create a brand new file or overwrite an existing file that already exists in Dropbox. When overwriting an existing file, you will see all files you have access to (for Dropbox Business customers, that means both personal and team files).
If you select to Create New File, you must also give your file a New File Name.
You can toggle New File Every Run: when turned off, the newly created file is a one-off; when turned on, Parabola will create a separate, new file in Dropbox each time the flow runs.
The Email a file attachment step gives you the ability to send an email to a list of recipients with a custom message and an attached file (CSV or Excel) of your transformed data.
After connecting your flow to this destination step, enter up to ten email addresses in the Email Recipients section. Enter what you'd like the email subject line to be in the Email Subject section. Enter your custom message in the Email Body section. Please note that all of these fields are required.
You can use merge tags {} to include dynamic values in any field (recipients, subject, body, file name, reply to). Those work as follows:
In the "Advanced Settings" dropdown, enter the email address you wish for recipients to reply to. This will ensure that any replies to these emails will go to the right place.
The step can accept multiple input arrows of data if it is set to generate an Excel file. Each input will be a new tab within the generated file, and each tab must be given a unique name.
The files you send through this step are stored by Parabola. We store the data as a convenience, so that the next time you open the flow, the data is still loaded into it. Your data is stored securely in an Amazon S3 Bucket, and all connections are established over SSL and encrypted.
Emails sent through this step must be 30MB or smaller.
The Pull from inbound email step gives you the ability to receive file attachments (CSV, XLS, PDF, or JSON files) from an incoming email and pass them to the next step. The step also gives you the ability to pull an email subject and body into a Parabola Flow. Use this unique step to trigger Flows, using content from the email itself.
Watch the Parabola University video below to see this data pull in action.
Note: PDF file support is currently offered to users on our Advanced Plan. Check out the Pricing Page for additional information.
To begin, take note of the generated email address that is unique to this specific flow. Copy the email address to your clipboard to start using this dedicated email address yourself or to share with others.
The File Type is set to CSV / TSV, though you can also receive XLS / XLSX, PDF, or JSON files.
The Delimiter is set to comma (,), but can also be adjusted to tab (\t) and semicolon (;). If needed, the default of Quote Character set to Double quote ( " " ) can be changed to single quote ( ' ' ).
This step contains optional Advanced settings, where you can tell Parabola to skip a certain number of rows or columns when receiving the attached file.
To auto-forward a CSV attachment to an email outside of your domain, you may need to verify the @inbound.parabola.io email address. The below example shows how to set this up in Gmail.
Auto-forwarding is now set up to trigger your flow! Please note, you will need to do this each time you create a new flow using this step.
By default, Flows will run with the first valid attached file. If you want the Flow to run through multiple attached files (multiple attachments on one email), open the “Email trigger settings” modal and change the setting to “Run the Flow once per attachment:”
(Access these settings from the Pull from Email attachment step, or from the Flow trigger settings on the published Flow page.)
For emails with multiple files attached, the Flow will run once per file received, sequentially.
We also support the ability to pull in additional information about an email, including:
To access these fields, you can toggle the "Pull data from" field to pull in Email subject and body. If you'd like to pull both an attachment and the subject and body, you can use two separate steps to pull in both of these datasets.
Use the "position is" option when pulling in an attached Excel document to specify which sheet to pull data from by its position, rather than its name. This is great for files that have key data in consistent sheet positions, but may not always have consistent sheet names.
When using this option, only the number of sheets that are in the last emailed file will show in the dropdown. If a Flow using these settings is run and there is no sheet in the specified position, the step will error.
Check out this Parabola University video for a quick intro to our PDF parsing capabilities, and see below for an overview of how to read and configure your PDF data in Parabola.
Parabola’s Pull from PDF file step can be configured to return Columns or Keys.
Once you have a PDF file in your flow, you will see a prompt for the second step, “Select table columns,” where you will provide information to the tool to determine what fields it should extract from the file. Parabola offers three methods for this configuration:
First, we’ll outline how these choices will impact your results, and then we will discuss tips and best practices for fine-tuning these results:
See below how, in this case with handwriting, more instructions enable the tool to determine whether there is writing next to the word “YES” or “NO”.
Parabola’s Pull from PDF step has four additional configurations:
Mark columns as “Child columns” if they contain rows that have values unique from the parent columns:
Before
After marking “Size” as a child column
Use the “Multiple Formats” option in the Pull from email attachment step to create multiple parsing methods (formats) for inbound documents. The step can be configured to automatically choose a format for each document sent to the Flow.
In the attachment settings, choose the option "PDFs (multiple formats, with AI)". Any existing formats will be consolidated into a card called "Format 1" (we recommend renaming this).
To add more formats, click “add a format” beneath the last format card. When working with multiple formats, consider enabling the option "Put the format name in a new column" for reference.
With multiple formats in the step, each format card will show additional fields, outlining when the format will be used. These settings are evaluated from top to bottom, using the first format where the rules match the current email.
If no format matches the current email, then the step will error.
Tip: To ensure that the step is always able to find a format, set your final format filter to “From” “is not blank”, as the “From” field is guaranteed to never show up blank.
Each time a format is used to parse a document, it will indicate that it was the latest format used by this step. This will always correspond to the data shown in the results to the right.
To access the parsing settings for a format, click the “PDF parsing settings” button in the format card.
Once inside of these parsing settings, you can test that format on a test file, or on the last file that was emailed to the Flow.
Clicking “Show updated results” here will skip the format routing process and instead will run the document through this format.
The Use Excel file step enables you to pull in tabular data from an Excel file.
First, select Click to upload a file.
If your Excel file has multiple sheets, select which one you'd like to use in the dropdown menu for Sheet.
In the Advanced Settings, you may also select to skip rows or columns. This will skip rows from top-down and columns from left-to-right.
Formatted data
Cell data is imported as formatted values from Excel. Dates, numbers, and currencies will be represented as they appear in the Excel workbook, as opposed to their true underlying value.
Enabling unformatted values will import the underlying data from Excel. Most notably, this will show raw numbers without any rounding applied, and will convert dates to Excel's native date format, the number of days since 1900-01-01 (for example, January 1, 2023 is represented as 44927).
This step can't pull in file updates from your computer, so if you make dataset changes and wish to bring them into Parabola, you must manually upload the updated Excel file. When you upload an Excel file, all formulas are converted to their values and all formatting is stripped; neither is preserved.
The files you upload through this step are stored by Parabola. We store the data as a convenience, so that the next time you open the flow, the data is still loaded into it. Your data is stored securely in an Amazon S3 Bucket, and all connections are established over SSL and encrypted.
Pull in files from an FTP, SFTP, or FTPS server. Target exact files, or find files based on rules within a folder. Supports CSV, TSV, Excel, XML, and JSON file parsing. Can parse EDI files as CSV.
The first thing that you need to do is connect to your server in order to pull in any files.
When you first add an FTP step to a flow, you can open it and will see an Authorize button.
Click Authorize, and you will see this form:
You will need to fill in each field in order to connect.
The Port can be manually set, or it will default to a port depending on which choice you have selected for the transfer protocol.
Using FTP (instead of SFTP or FTPS) is not recommended. Most FTP servers offer one of the other options.
If you are connecting via SFTP and are using a private key to connect, you will need to check the "Use public key authentication" box to see the option to upload that key and connect.
If you need to edit or add another connection, open your FTP step, click on "Select accounts", and then either click to add a new account, or edit the existing one.
After editing your connection settings, click the refresh button to have the step re-connect with the new settings.
The main option at the top of the step allows you to switch between pulling in a specific file and a file based on rules.
When pulling in a specific file, enter the path to that file. All paths should start with / and then progress through any folders until ending in the name of the file and its extension.
Click the 3-dot more menu to override how to parse a file. By default, this step will parse the file based on the extension of the file. But you can change that. For example, if you have a .txt file that is really a csv file inside, you can choose to parse that txt file as if it were a csv.
The main option at the top of the step allows you to switch between pulling in a specific file and a file based on rules.
When pulling a file based on rules, a new file will be pulled in every time the flow is run, or the step is refreshed.
A file can be selected based on:
First, choose between pulling the newest file or the oldest file, based on its last modified date.
Second, choose a file name pattern. If you select is anything, no filtering based on file name will be applied. You can select to filter for files that start with, end with, or contain a certain set of characters. This can also be used to match the file extension (.csv for example).
Third, choose a folder to find the file within. If you use / then it will search the root folder. Folders inside the folder you indicate will not be searched; they are ignored.
Finally, select a parsing option if you want to override the default.
Every time a file is pulled in from a rule, the name will be displayed in the step settings.
Enable the Archive file once processed setting to automatically move files from the target folder to a different folder.
Files will be moved immediately after the data from the file is fetched by the Pull from FTP step. If the step fails for some reason with an error, the file will not be moved.
If the file is pulled in successfully, but another step causes the Flow to fail, then the file will still be archived, even if the overall Flow failed to complete.
In the run history of the Flow, the names of any files pulled in from FTP will be listed to show what file was moved during successful or failed runs.
Use of this setting is best combined with the “Pull in a file based on rules” setting. With this combination, a Pull from FTP step can continuously cycle through a specific FTP folder and process any files available within it.
Sometimes XML files will not successfully pull into this step. In that case, it may be due to how the step is parsing the file by default. Use the Top Level Key field to indicate which key to expand into the table. This can help if there is a list of data surrounded by other keys, and you just need to get to that interior list. You can indicate a deeper key by placing dots between each key level. For example, if you have an object called Cars, and inside it is a list called Colors, which you want to expand, you would put Cars.Colors in the Top Level Key field.
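A minimal sketch of that shape, using the example names above (the inner element names are hypothetical):
<Cars>
  <Colors>
    <Color>Red</Color>
    <Color>Blue</Color>
  </Colors>
</Cars>
Setting the Top Level Key to Cars.Colors tells the step to expand the list inside Colors into rows.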
This FTP step can be used to pull in files up to 600MB. Contact us if you need larger files to be pulled in.
Global limits may stop your file before its size does, however. Steps can only run for 1 hour, and can only pull in 5 million rows and 5,000 columns.
Create or overwrite files in an FTP, SFTP, or FTPS server. Supports CSV, TSV, Excel, and JSON file creation and overwriting.
The first thing that you need to do is connect to your server in order to send any files.
When you first add an FTP step to a flow, you can open it and will see an Authorize button.
Click Authorize, and you will see this form:
You will need to fill in each field in order to connect.
The Port can be manually set, or it will default to a port depending on which choice you have selected for the transfer protocol.
Using FTP (instead of SFTP or FTPS) is not recommended. Most FTP servers offer one of the other options.
If you need to edit or add another connection, open your FTP step, click on "Select accounts", and then either click to add a new account, or edit the existing one.
After editing your connection settings, click the refresh button to have the step re-connect with the new settings.
The main option at the top of the step allows you to switch between creating a new file, and overwriting a file.
When creating a new file, you have a few settings to fill out:
JSON files generated have their array as the top level element. Each row will be converted into an object, and then each row-object will be comma separated in the top level array.
Given data in Parabola that looks like this:
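For illustration, suppose the table has two hypothetical columns, name and qty:
name | qty
Widget | 2
Gadget | 5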
You can expect JSON that looks like this:
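[
  {"name": "Widget", "qty": "2"},
  {"name": "Gadget", "qty": "5"}
]
(A hypothetical sketch: the keys come from your column headers, and each row becomes one object in the top-level array.)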
Excel files that are created by this step are in the .xlsx format. They will have no additional formatting applied.
In the field for the name of your file, you can type anything that you'd like to name your file. Do not include the extension, as one will be automatically added by the step, according to the format you have chosen.
If you put "my file.csv" in the file name field, and then have the step create a CSV file, it will ultimately be named "my file.csv.csv" in your FTP server.
Most servers will not be happy if you try to give a file a name that already exists in that folder. To get around this, you can use merge tags to add the date and time to your file name. Anywhere you place such a tag in the name field, the date of the run will be inserted in the following formats:
All dates and times are in UTC timezone.
The final setting is used to indicate where the file should go.
The root of your server will be at / and any other folder will start with / as well. If you have a folder named "reports" that is located in the Root folder, then you would use /reports in the folder field.
The main option at the top of the step allows you to switch between creating a new file or overwriting a file.
Overwriting a file is simple - enter the path to the file to overwrite each time, and the format for the new data inside that file.
It is best to select the format that the file's extension indicates. Because the data is fully replaced within the file, the format that Parabola sends does not strictly need to match the format that the name of the file indicates.
For example, you could send CSV data to a file named jobs.txt and it would work fine. But having an extension on a file that does not represent how it should be used or read can cause issues down the line.
The final setting is used to indicate the path to the file to overwrite.
Paths should always start with a / which is the root folder. From there, you can add more folders (or not), and end with the file name and its extension.
In the image above, we are targeting a file named customers.csv which is in the root folder. If that file was in a subfolder named crm, then the path would look like this:
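/crm/customers.csv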
Use the Pull from Facebook Ads step to connect to any Facebook Ads account and pull in custom reports or columns and breakdowns.
Double-click on the Pull from Facebook Ads step and click the blue button to "Login with Facebook". A pop-up window will appear asking you to log in to your Facebook account to connect your data to Parabola.
If you ever need to change the Facebook Ads account that your step is connected to, or connect to multiple Facebook Ads accounts within a single flow, click "Edit accounts" at the top of the step. Head here for more info.
The default settings for this flow will allow you to see data from your Facebook Ads account right away. If you have multiple Ads accounts, be sure to select the correct account here:
By default, the step will pull in insights for the last 7 days.
We've added a lot of standard reports that Facebook Ads shows in their Ads Manager page. Selecting a standard report will update your Columns and Breakdowns selection fields to show the columns that will be imported.
These standard reports can be used as-is, or can serve as a great starting point to further customize your report.
To further customize your Facebook Ads data being pulled into Parabola, you can select Columns and Breakdowns.
Each breakdown will also add its own column, and break each row into multiple rows. For example, you could look at your Reach column and break it down by Campaign to see the reach of each campaign.
You can either select a preset relative date or a custom date range in this step.
Select a preset relative date range, such as the Last 7 Days, to pull data from a range that will update every time this flow runs.
Select a custom period between, such as September 17, 2020 - September 24, 2020 to pull from a static date range that will always pull from that set range when the flow runs.
At the bottom of the step, we'll display the attribution window that is being used to produce your report:
Using a 28-day click and 1-day view attribution window in your Facebook account's time zone.
Your Facebook account time zone will be used to determine how to pull data from your selected date range.
Currently there is a known issue in the Facebook API that has not yet been resolved by their team. It causes certain requests to time out or error when they should work. Our team is keeping tabs on the issue and will remove this known issue when it has been fixed by Facebook. In the meantime, you may need to remove certain columns or breakdowns from your settings in order to get the step working and returning data.
The FedEx API is used by businesses, developers, and logistics managers to integrate FedEx's shipping, tracking, and logistics services into their platforms and operations.
FedEx is a beta integration which requires a slightly more involved setup process than our native integrations. Following the guidance in this document should help even those without technical experience to enrich data from FedEx. If you run into any questions, shoot our team an email at support@parabola.io.
📖 FedEx API Reference:
https://developer.fedex.com/api/en-us/get-started.html
🔐 FedEx Authentication Documentation:
https://developer.fedex.com/api/en-us/catalog/authorization/v1/docs.html
1. Navigate to the FedEx Developer Portal.
2. Click Login to access your FedEx account.
3. In the side-menu, select My Projects.
4. Click + CREATE API PROJECT.
5. Complete the modal by selecting the option that best identifies your business needs for integrating with FedEx APIs.
6. Navigate to the Select API(s) tab.
7. Select the API(s) you want to include in your project. Based on the API(s) you select, you may need to make some additional selections.
⚠️ Note: If you select Track API, complete the additional steps below:
1. Select an account number to associate with your production key.
2. Review the Track API quotas, rate limits, and certification details.
3. Choose whether or not you want to opt-in to emails that will notify you if you exceed your quota.
8. Navigate to the Configure project tab.
9. Configure your project settings with name, shipping location, and notification preferences.
10. Navigate to the Confirm details tab.
11. Review your project details, then accept the terms and conditions.
12. On the Project overview page, retrieve your API Key and Secret Key.
💡 Tip: Use Production Keys to connect to live production data in Parabola. Use Test Keys to review the request and response formats from the documentation.
1. Add an Enrich tracking from FedEx step template to your canvas.
2. Click into the Enrich with API: FedEx Tracking step to configure your authentication.
3. Under the Authentication Type, select Expiring Access Token before selecting Configure Auth.
4. Enter your credentials to make a request to the OAuth endpoint using the format below:
(POST)
Sandbox URL
https://apis-sandbox.fedex.com/oauth/token
Production URL
https://apis.fedex.com/oauth/token
⚠️ Note: Use your API Key in place of your Client ID. Use your Secret Key in place of your Client Secret.
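As a reference sketch (this is a standard OAuth 2.0 client-credentials request; confirm the exact body in FedEx's authorization docs linked above), the form-encoded request body looks like:
grant_type=client_credentials&client_id=YOUR_API_KEY&client_secret=YOUR_SECRET_KEY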
The response will return your token in the access_token field.
5. Click Advanced Options and set the Authorization header to Bearer {token}
6. Click Authorize
Get started with this template.
1. Add a Use sample data step to your Flow. You can also import a dataset with tracking numbers into your Flow (Pull from Excel File, Pull from Google Drive, Pull from API, etc.)
💡 Tip: When using your own data, use the Edit columns step to rename the tracking column in your source data to Tracking Number.
2. Connect it to the Enrich with API: FedEx Tracking step.
3. Under Authentication Type, select Expiring Access Token to use your authentication credentials.
4. Click into the Request Settings to configure your request using the format below:
💡 Tip: The Enrich with API step makes dynamic requests for each row in the table by inserting the tracking number in the Body field.
The example above assumes there is a Tracking_Number column, which is referenced using curly brackets: {Tracking_Number}
Enclose your column header containing tracking numbers with curly brackets to dynamically reference the tracking numbers in your table.
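For reference, a sketch of what the JSON body might look like (field names based on FedEx's Track API; confirm the exact schema in the API reference above):
{
  "includeDetailedScans": true,
  "trackingInfo": [
    { "trackingNumberInfo": { "trackingNumber": "{Tracking_Number}" } }
  ]
}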
5. Click Refresh data to display the results.
The Pull from file queue step receives a file URL (CSV, PDF, Excel) along with associated data. Use this step to trigger Flows that process a file via a URL sent to the Flow.
This step is currently offered to users on our Advanced Plan. Check out the Pricing Page for additional information.
The file queue processes files that are accessible via URL. To send a file to your Parabola Flow, make an API call to the file queue endpoint. The Pull from file queue step, once added and enabled, will show a modal containing the endpoint details. For example:
Any valid POST requests to that endpoint will trigger the Flow to run, processing the file using the file parsing settings within the step. Additional requests will be queued up to run one after another.
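As a rough sketch (your Flow's actual endpoint and payload format are shown in the step's modal; the fileUrl key below is hypothetical):
POST https://<your file queue endpoint>
{ "fileUrl": "https://example.com/orders.csv" }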
Alternatively, use the Run another Parabola Flow step with the following configuration to trigger runs of another Flow through the file queue:
The Pull from Front step pulls in data from your Front account so you can quickly make insightful reports on your team and customers.
To connect your Front account, select Authorize.
A new page will pop up asking you to Authorize again. After you do, it will return to your Parabola page.
Once you're back in the step's settings, in the Info Type dropdown menu, select the type of data you'd like to pull in.
Here are the Info Types that are available:
Instantly leverage AI-generated text in your Parabola Flows using our GPT-3 Beta integration. Build prompts to generate custom messaging, analyze content in emails and attachments, generate new product listings – the possibilities are endless.
If you're new to APIs, this integration guide should help you set up the API connection in minutes. If you run into any issues, reach out at support@parabola.io.
To get started, you'll need to add your OpenAI API key to the Parabola "Prompt GPT-3" step. To find your API key, follow these steps:
Please note that to use the GPT-3 API, you need to have an OpenAI API key, and you should also review OpenAI's pricing plans.
Here are 10 ChatGPT-generated use case examples:
Use the Send to Geckoboard step to send your data to Geckoboard's data visualization tool and automatically update the underlying data of your dashboards.
To connect your Geckoboard account, click Authorize.
Follow the link to look up the Geckoboard API Key, copy it from your Geckoboard account settings, and paste it into Parabola. Click Authorize to complete the connection.
First, choose a Dataset Name. This name will auto-format to remove capital letters and spaces, as required by Geckoboard.
Using the dropdowns, map your data's columns to the appropriate field data types available in Geckoboard. If you want to make a line chart with this dataset, you must have a "Date" column.
This step will only work with Google Analytics V4. If you have not yet migrated over to GA V4 and are using Google’s Universal Analytics, you will need to use the Pull from Google Analytics UA step to pull in your Google Analytics data. Google is deprecating Universal Analytics on July 1, 2023. Once you have moved your data over to Google Analytics 4, you will need to update your Flows to use this Parabola step to continue accessing your Google Analytics data. Read more about how Google is updating this here.
Use the Pull from Google Analytics 4 step to bring all of your Google Analytics data into Parabola in a familiar format. Choose a date range and which metrics and dimensions to pull in to create a report just like you are used to doing in Google Analytics.
Begin by authenticating your Google Analytics account by clicking Authorize.
First, select the Account and Property that you would like to pull data from.
Then, select which metrics to pull in. These are the same metrics that are available in Google Analytics. Every metric that you add will result in a column being added to your report. You can select as many metrics as necessary for your report, including New Users, Bounces, Sessions, and many more.
Use dimensions to group your metrics and break them into more rows. Each dimension adds a column to the front of your table, and often will change how many rows your report contains. Leaving the dimensions field blank will result in a single row of data.
The time frame can be updated to let you pull data from:
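1. Between two dates
2. Between a date and today
3. The previous X days/weeks/months/etc.
4. The current X day/week/month/year to date.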
You can also adjust when you'd like the timeframe calculation to run, giving you the ability to pick between when the Flow is run or the most recently completed month/week/day/hour. The latter option is great for running a report for the last month, on the 1st of the following month, while excluding any data collected so far that day.
Lastly, if you choose, you can add an offset to your date timeframe.
If you are looking to compare this data set to the same set from the previous period, a great way to do that is to pull in the two data sets and then use the Combine tables step to combine them, using their dimensions in the matching rules.
Google is deprecating Universal Analytics on July 1, 2023. To continue accessing your Google Analytics data, you will need to update to Google Analytics 4 as outlined by Google here. Once you’ve migrated over to GA4, you will need to use this new Pull from Google Analytics 4 step to pull in your data.
In any existing Flow that has a Pull from Google Analytics step, you will need to replace it with a new Pull from Google Analytics 4 step.
Setting your replacement steps up should be as easy as replicating the metrics and dimensions that you were pulling.
Keep in mind that combinations of metrics and dimensions that may have been valid in Google Analytics UA (the prior version) may no longer be valid in Google Analytics 4. Our new Pull from Google Analytics 4 step will only show you options for metrics and dimensions that are compatible (as defined by Google) with your current selection.
Google is deprecating Universal Analytics on July 1, 2023. To continue accessing your Google Analytics data, you will need to update to Google Analytics V4 as outlined by Google here. Once you’ve migrated over to GA4, you will need to use our new "Pull from Google Analytics 4" step to pull in your data.
New data will continue to be pulled in by your existing Pull from Google Analytics steps until July 1, 2023. After that date, existing data will continue to be accessible in Parabola for at least 6 months, until Google no longer allows access to that historic data.
If you have any questions, please reach out to help@parabola.io.
Begin by authenticating your Google Analytics account by clicking Authorize.
Make sure that you're pulling data from the correct property and site.
You can adjust which property or site's data is being pulled into Parabola by selecting from the dropdown.
By default, the Pull from Google Analytics step will bring in Users by Medium.
The timeframe is also set to the previous 1 week, based upon when the flow is run, without any date offset.
We offer a variety of preset reports that are the same as those in the Google Analytics sidebar. Selecting a preset report will update the columns in your Metrics to use and Dimensions to use to group the metrics selection fields.
Use these as-is, or as a base for building your own customized reports.
You will find the Metrics to use field shows the same metrics you'd see in Google Analytics. Every metric that you add will result in a column being added to your report. You can select as many metrics as necessary for your report, including New Users, Bounces, Sessions, and more.
You can use various Dimensions to use to group the metrics, including Medium, Source, Campaign, Social Media, and more. Each dimension also adds a column, usually to the front, and it also will change how many rows you see in your data. Leaving this field blank will result in a single row of data, which is not grouped by anything.
The time frame can be updated to let you pull data from:
1. Between two dates
2. Between a date and today
3. The previous X days/weeks/months/etc.
4. The current X day/week/month/year to date.
You can also adjust when you'd like the timeframe calculation to run, giving you the ability to pick between when the flow is run or the most recently completed month/week/day/hour. The latter option is great for running a report for the last month, on the 1st of the following month, while excluding any data collected so far that day.
Lastly, if you choose, you can add an offset to your date timeframe.
If you are looking to compare this data set to the same set from the previous period, a great way to do that is to pull in the two data sets and then use the Combine tables step to combine them, using their dimensions in the matching rules.
The Pull from Google Drive step gives you the ability to pull in CSV, Excel files, and Google Sheets from your Google Drive.
To connect your Google Drive account, click Authorize to login with your Google account credentials.
Use the file selector to select which file to pull data from.
If you have multiple dataset sheets (tabs) in a file, specify which one you'd like to pull in by clicking on the dropdown menu under the file name.
You can also select to skip rows or columns of your choosing. This will skip rows from top-down and columns from left-to-right.
The Send to Google Sheets step gives you the ability to automate sending custom datasets to Google Sheets. You can create a new Google Sheets file or update a specific sheet by having the dataset overwrite or append (add on) to an existing file.
To connect to your Google account, click Authorize to login with your Google account credentials.
Select how you want this step to export data:
Once you’ve selected a file to add data to, or have created a new file, select a sheet to send data to (one for each input to the Send to Google Sheets step).
When creating a new file or creating a new file on every run, you can select to create that file in the root of your Drive, or within a specific folder.
The Send to Google Drive step gives you the ability to export data to CSV, Excel, or Google Sheets files in your Google Drive.
To connect to your Google account, click Authorize to login with your Google account credentials.
Select how you want this step to export data:
Google Sheets files are the only file type that can have data appended.
For Excel and Google Sheets files, each input to the step can be used to populate data in a different tab of the file. CSV files may only accept a single input.
When creating a new file or creating a new file on every run, you can select to create that file in the root of your Drive, or within a specific folder.
Continually improve your customer experience by creating custom reports and processes based on your Gorgias tickets.
Gorgias is a beta integration which requires a slightly more involved setup process than our native integrations (like Facebook Ads and Google Analytics). Following the guidance in this doc should help even those without technical experience pull data from Gorgias. If you run into any questions, shoot our team an email at support@parabola.io.
To pull data from Gorgias, you'll need to start by accessing your Gorgias API Key. Here's a step-by-step:
Use the Pull from HubSpot step to pull in Contacts, Companies, Deals, and Engagements data from your HubSpot CRM.
To connect your HubSpot account, click Authorize.
Once you've logged in and authorized your HubSpot account, you can begin to pull in data from your Contacts, Companies, Deals, and Engagements records in your CRM by selecting a Data Type.
When selecting a Data Type, you'll see an additional Properties dropdown. Here, you can add or remove columns from your data set.
With the Contacts, Companies, and Deals datasets, you can also include historical data for all properties. This setting is not available for Engagements.
Use the Send to HubSpot step to send Contacts, Companies, and Deals data to your HubSpot CRM.
To connect your HubSpot account, click Authorize.
Select the Data Type you're looking to update in HubSpot.
All Data Types must include a column that maps to an ID. For Contacts, you may use the "Email" column as a unique identifier. For Companies, only a "companyId" property will suffice.
Similarly, for Deals, a "deal ID" will be required to correctly map your data to HubSpot's data.
Additionally, in order to send your data successfully to HubSpot, you will need to map every column of your dataset to a property that exists in HubSpot. If there are columns you do not want to be sent to HubSpot, try using our Select columns step to remove them prior to connecting to this Send to HubSpot step.
All other properties not mapped to your data's columns are optional.
The Use JSON file step enables you to pull in datasets from JSON files.
To get started, either drag a file into the outlined box or click on Click to upload a file.
After you upload the file, the step will automatically redirect you to the settings page with a display of your JSON blob data.
In the Advanced Settings, you can set a number of rows and a number of columns to skip when importing your data. This will skip rows from top-down and columns from left-to-right.
The files you upload through this step are stored by Parabola. We store the data as a convenience, so that the next time you open the flow, the data is still loaded into it. Your data is stored securely in an Amazon S3 Bucket, and all connections are established over SSL and encrypted.
Parabola can't pull in updates to this file from your computer automatically, so you must manually upload the file's updates if you change the original file. Formatting and formulas from a file will not be preserved. When you upload this file, all formulas are converted to their value and formatting is stripped.
Stay up-to-date on your marketing KPIs by pulling metrics from Klaviyo's API. When set up correctly, this data will match what you see in your Klaviyo Dashboard.
Klaviyo is a beta integration which requires a slightly more involved setup process than our native integrations (like Facebook Ads and Google Analytics). Following the guidance in this doc should help even those without technical experience pull data from Klaviyo. If you run into any questions, shoot our team an email at support@parabola.io.
Get started by fetching your API key from Klaviyo. From the Klaviyo dashboard, click the icon in the top right and navigate to "Account" --> "Settings" --> "API Keys" --> "Create Private API Key". Once you generate an API Key, copy it and head back over to Parabola.
After dragging in our "Pull from Klaviyo" step, open up the step and paste your API key into the empty box under URL Parameters, to the right of "api_key."
Regardless of the timezone that your Klaviyo account is set to, when you pull in data from Klaviyo's API, the timestamp is in UTC time. That means that if you don't adjust the timestamp, your metrics will not match what you see in Klaviyo.
For example, if my Klaviyo account is in PST, I would search for San Francisco to find that the time offset is -7 hours:
From there, I would multiply -7 by 3600 to get -25200. This value then goes in the "SET TIMEZONE" step, making that formula: {timestamp}-25200
If your offset is positive, the formula would be: {timestamp}+X, where X is your offset in hours multiplied by 3600.
By default, this flow will summarize metrics from the previous complete day. This time frame is set in the "SET DATE RANGE" filter step. In the step before (the "Compare Dates" step), we are finding the number of days since an event occurred (ex. if something happened yesterday, the value would be between -1 and -2).
Toggle the filter settings to pull in your specified date range.
Use the Pull from Looker step to run Looks and pull in that data from Looker.
This step is currently offered to users on our Advanced Plan. Check out the Pricing Page for additional information.
To connect to Looker, you’ll need to enter your Looker Client ID and your Looker API Host URL before authenticating:
These steps only need to be followed once per Looker instance! If someone else on your team has done this, you can use the same Client ID that they have set up.
Your Looker permissions in Parabola will match the permissions of your connected Looker account. So you will only be able to view Looks that your connected Looker account can access.
Once your step is set up, you can choose the Look that you want to run from the Run this Look dropdown:
There are also Cache settings that you can adjust:
There are also additional settings that you can adjust within the step:
Perform table calculations: Some columns in Looker are generated from user-entered Excel-like formulas. Those calculations are not run by default in the API, but are run by default within Looker. This setting tells Looker to run those calculations.
Apply visualization options: Enable if you want things like the column names to match the names given in the Look, as opposed to the actual names of the columns in the source data.
Apply model-specific formatting: Requests the data in a way that respects any formatting rules applied to the data model. This can be things like date and time formats.
You may sometimes see a 404 error from the Pull from Looker step. Some common reasons for that error are:
The Pull from MS SQL step connects to and pulls data from a remote Microsoft SQL server. MS SQL is a relational database management system developed by Microsoft.
Double-click on the Pull from MS SQL step and click Authorize. The following fields are required to connect:
You should be able to find these fields by viewing your MS SQL profile.
If no port is specified during authorization, this step will default to port 1433.
You can leave fields blank (like "Password") if they are not needed for the database to authorize connection.
Once you are successfully connected to your server, you'll first see a dropdown option to select a table from your server. By default, Parabola pulls the whole table using the query: select *.
If you'd like to be able to pull in more specific, relevant data, or reduce the size of your default import, you can do so by writing your own SQL statement to filter your table's data.
To do so, click into the step's Advanced Settings and input your query into the Query (optional) field.
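For example, a query like the following (table and column names are hypothetical) pulls only open orders instead of the entire table:
select * from orders where status = 'open'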
The Send to MS SQL step can insert and update rows in a remote Microsoft SQL server. MS SQL is a relational database management system developed by Microsoft.
Double-click on the Send to MS SQL step and click Authorize. The following fields are required to connect:
You should be able to find these fields by viewing your MS SQL profile.
If no port is specified during authorization, this step will default to port 1433.
You can leave fields blank (like "Password") if they are not needed for the database to authorize connection.
Once you are successfully connected to your server, you'll first see a dropdown option to select the table you'd like to send data to.
By default, the Maximum Connections field is set to 20, which should be safe for most databases. This setting controls how many connections Parabola generates to your database in order to write your changes faster. The number of rows you are trying to export will be divided across the connection pool, allowing for concurrent and fast updates.
Be aware, every database has its own maximum number of connections that it will accept. It is not advisable to set the Maximum Connections field in Parabola to the same number of connections that your database max is set at. If you do that, you will be using every connection when the flow runs, and nothing else will be able to connect to your database. 50% - 60% of the total available connections is as high as you should go. Talk to whoever runs, manages, or set up your database to find out how many connections it can handle.
If you set this field to less than 1 or more than the total number allowed by your database, the step will error.
Next, you'll select an Operation. The available Operation options are:
The Insert option will insert new rows in the database. Once you select the "Insert" operation, you'll be asked to map your columns in Parabola to columns from your selected MS SQL table. You can leave some column mappings blank. If you're using the Insert operation, make sure that it's okay for Parabola to create these new rows in your table. For example, you may want to check for possible duplicates.
The Upsert option will update rows if possible, and insert new rows if not. The Upsert operation requires you to specify the primary key of the database table ("Unique Identifier Column in Database") and the column that contains unique values in Parabola ("Unique Identifier Column in Results"). Mapping these columns is important so Parabola can figure out which rows to update versus which rows to insert. A primary key / unique identifier must be configured on the database table in order for this dropdown to show any options.
Then, you need to map your columns in Parabola to columns from your selected MS SQL table.
The Update option will only update rows. It will not insert any new rows. The Update operation requires you to specify the primary key of the database table ("Unique Identifier Column in Database") and the column that contains unique values in Parabola ("Unique Identifier Column in Results"). Mapping these columns is important so Parabola can figure out which rows to update. A primary key / unique identifier must be configured on the database table in order for this dropdown to show any options.
Then, you need to map your columns in Parabola to columns from your selected MS SQL table.
The Send to MS SQL step handles errors in a different way than other steps. When the flow runs, the step attempts to export each row, starting at the top of the table and processing down until every row has been attempted. Most exports in Parabola will halt all execution if a row fails, but this export will not halt. In the event that a row fails to export, this step will log the error, but will skip past the row and continue to attempt all rows. When the step finishes attempting every row, it will either be in a normal (successful) state if every row succeeded, or in an error (failure) state if at least 1 row was skipped due to errors.
The Pull from Magento step pulls in data from your Magento instance. Magento is a flexible and powerful eCommerce solution that enables anyone to build a fully custom eCommerce solution. Magento can scale up or down to fit the exact needs of any eCommerce retailer.
To connect your Magento account, you need to first create a SOAP/XML Role and then a SOAP/XML User.
In the Pull from Magento step, click the blue button to "Authorize". Provide the following:
1. Host
2. Port
3. API Path
4. API Username
5. API Key
Once you've populated the form, click on the blue button to "Authorize" to complete your connection.
The Pull from Magento step can bring in Sales Orders, Customers, and Products. Select the appropriate dataset from the Dataset dropdown and select the date range we should use from the Created dropdown. Click Show Updated Results to see your data from Magento.
Use the Pull from Mailchimp step to retrieve data from your Mailchimp account. You can use Parabola to pull in List and Campaign data.
To connect your Mailchimp account, click Authorize to login with your Mailchimp credentials.
You can retrieve two different Data Type options from Mailchimp: List and Campaign.
Once you select a data type, you'll be prompted to select your dataset, which can be either a List or a Campaign.
A List pull will provide details in columns like "Email Address", "First Name", "Last Name", and so on from your Mailchimp Audience.
A Campaign pull will provide detailed results of your email campaigns, such as the action taken and timestamp.
The Pull from OneDrive step gives you the ability to pull in datasets from your Microsoft OneDrive files.
To connect your OneDrive account, click Authorize to login with your Microsoft account credentials.
To select the specific file you want to work with:
The Send to Microsoft OneDrive step gives you the ability to automate sending custom datasets to OneDrive. You can create a new file or update a file by having the dataset overwrite or append (add on) to an existing file.
To connect your OneDrive account, click Authorize to login with your Microsoft account credentials.
First, select whether to create a new file, or update an existing file.
Select the file type, and enter a file name. Then, indicate which drive the file should be saved to. Within a drive, you can either save to the root of the drive (default), or search for a specific folder to save to.
First, choose the file you want to update by selecting a drive. Then, search for the file by name.
Once your file is selected, you can decide how to update it:
(Note, you can specify which sheet of an Excel file to update.)
The Pull from SharePoint step gives you the ability to pull in datasets from your Microsoft SharePoint files.
To connect your SharePoint account, click Authorize to login with your Microsoft account credentials.
Note: you may be asked to set up an authenticator app (for multi-factor authentication), or submit an authorization request to your IT administrator. This is dictated by your company’s Microsoft account settings.
To select the specific file you want to work with:
The Send to Microsoft SharePoint step gives you the ability to automate sending custom datasets to SharePoint drives. You can create a new file or update a file by having the dataset overwrite or append (add on) to an existing file.
To connect your SharePoint account, click Authorize to login with your Microsoft account credentials.
First, select whether to create a new file, or update an existing file.
Select the file type, and enter a file name. Then, indicate where the file should be saved: select a site, and drive. Within a drive, you can either save to the root of the drive (default), or search for a specific folder to save to.
First, choose the file you want to update by selecting a site, and drive. Then, search for the file by name.
Once your file is selected, you can decide how to update it:
(Note, you can specify which sheet of an Excel file to update.)
The Pull from MongoDB step enables you to connect to your MongoDB database and access your NoSQL data in Parabola. MongoDB is a document-oriented database platform, also classified as a NoSQL database program.
Double-click on the Pull from MongoDB step and click on the blue button to Authorize. The following fields are required to connect:
Once you are successfully connected to MongoDB, your first collection will be pulled in automatically. You can update the imported collection by clicking on the Collection dropdown.
Select your desired collection from the Collection dropdown menu.
The Pull from MySQL step connects to and pulls data from a remote MySQL server. MySQL is an open-source relational database management system developed by Oracle.
Double-click on the Pull from MySQL step and click Authorize. The following fields are required to connect:
You should be able to find these fields by viewing your MySQL profile.
If no port is specified during authorization, this step will default to port 3306.
You can leave fields blank (like "Password") if they are not needed for the database to authorize connection.
Once you are successfully connected to your server, you'll first see a dropdown option to select a table from your server. By default, Parabola pulls the whole table using the query: select *.
If you'd like to be able to pull in more specific, relevant data, or reduce the size of your default import, you can do so by writing your own SQL statement to filter your table's data.
To do so, click into the step's Advanced Settings and input your query into the Query (optional) field.
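For instance, a query along these lines (hypothetical table and column names) limits the import to recent rows rather than the full table:
select * from customers where created_at >= '2024-01-01' limit 1000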
The Send to MySQL step can insert and update rows in a remote MySQL database. MySQL is an open-source relational database management system developed by Oracle.
Double-click on the Send to MySQL step and click on the blue button to Authorize. The following fields are required to connect. You should be able to find these fields by viewing your MySQL profile.
If no port is specified during authorization, this step will default to port 3306.
You can leave fields blank (like password) if they are not needed for the database to authorize connection.
Once you are successfully connected to your server, you'll first see a dropdown option to select the table you'd like to send data to.
By default, the Maximum Connections field is set to 20, which should be safe for most databases. This setting controls how many connections Parabola generates to your database in order to write your changes faster. The number of rows you are trying to export will be divided across the connection pool, allowing for concurrent and fast updates.
Be aware, every database has its own maximum number of connections that it will accept. It is not advisable to set the Maximum Connections field in Parabola to the same number of connections that your database max is set at. If you do that, you will be using every connection when the flow runs, and nothing else will be able to connect to your database. 50% - 60% of the total available connections is as high as you should go. Talk to whoever runs, manages, or set up your database to find out how many connections it can handle.
If you set this field to less than 1 or more than the total number allowed by your database, the step will error.
Next, you'll select an Operation. The available Operation options are:
The Insert option will insert new rows in the database. Once you select the "Insert" operation, you'll be asked to map your columns in Parabola to columns from your selected MySQL table. You can leave some column mappings blank. If you're using the Insert operation, make sure that it's okay for Parabola to create these new rows in your table. For example, you may want to check for possible duplicates.
The Upsert option will update rows if possible, and insert new rows if not. The Upsert operation requires you to specify the primary key of the database table ("Unique Identifier Column in Database") and the column that contains unique values in Parabola ("Unique Identifier Column in Results"). Mapping these columns is important so Parabola can figure out which rows to update versus which rows to insert. A primary key / unique identifier must be configured on the database table in order for this dropdown to show any options.
Then, you need to map your columns in Parabola to columns from your selected MySQL table.
The Update option will only update rows. It will not insert any new rows. The Update operation requires you to specify the primary key of the database table ("Unique Identifier Column in Database") and the column that contains unique values in Parabola ("Unique Identifier Column in Results"). Mapping these columns is important so Parabola can figure out which rows to update. A primary key / unique identifier must be configured on the database table in order for this dropdown to show any options.
Then, you need to map your columns in Parabola to columns from your selected MySQL table.
The Send to MySQL step handles errors in a different way than other steps. When the flow runs, the step attempts to export each row, starting at the top of the table and processing down until every row has been attempted. Most exports in Parabola will halt all execution if a row fails, but this export will not halt. In the event that a row fails to export, this step will log the error, but will skip past the row and continue to attempt all rows. When the step finishes attempting every row, it will either be in a normal (successful) state if every row succeeded, or in an error (failure) state if at least 1 row was skipped due to errors.
The Pull from NetSuite integration enables users to connect to any NetSuite account and pull in saved search results that have been built in the NetSuite UI. Multiple saved searches, across varying search types, can be configured in a single flow.
This step is currently offered to users on our Advanced Plan. Check out the Pricing Page for additional information.
The following document outlines the configuration requirements in NetSuite for creating the integration credentials, defining relevant role permissions, and running the integration in Parabola.
The following configuration steps are required in NetSuite prior to leveraging the Parabola integration:
Once complete, you will enter the unique credentials generated in the steps above into the Pull from NetSuite step in Parabola. This will also require your account id, which is obtained from your NetSuite account’s url. Ex: https://ACCOUNTID.app.netsuite.com/
The following document will review how to create each of the items above.
The permissions specified on the role applied to your integration will determine which saved searches, transactions, lists, and results you’ll be able to access in Parabola. Confirm that the role you plan to use has access to all of the relevant objects.
The following permissions are recommended, in addition to any specific transaction, list, or report permissions you may require.
Custom Records:
Ensure the checkbox for the web services only role is selected.
Video walk-through of the setup process:
Follow the path below in the NetSuite UI to create a new integration record.
A consumer key and consumer secret will be generated upon saving the record. Record these items, as they will disappear once you leave this page.
Once the role, user, and integration have been created, you’ll need to generate the tokens which are required for authentication in Parabola.
Follow the path below in the NetSuite UI to create a new token record.
Once authorized, you’ll be prompted to select a search type and specific saved search to run. Click refresh and observe your results!
The Return only columns specified in the search checkbox enables a user to determine if all available columns, or only the columns included in the original search, should be returned. This setting is helpful if you’d like to return additional data elements for filtered records without having to update your search in NetSuite.
By default, the NetSuite API will only return the full data results from the underlying search record type (item, customer, transaction, etc) and only the internal ids of related record types (vendors, locations, etc) in a search.
For example, running the following search in Parabola would return all of the information as expected from the base record type (item in this scenario), and the internal id of the related object (vendor).
The best way to return additional details from related objects (vendor in this scenario) is by adding joined fields within the search. Multiple joined fields can be added to a single search to return data as necessary.
Alternatively, another solution would be running separate searches and joining the results by using a Combine Tables step within the flow. This is demonstrated below.
The NetSuite REST Web Services API is used to interact programmatically with NetSuite data, allowing developers to manage, retrieve, and manipulate data and execute business operations directly in NetSuite. SuiteQL is a query language that provides advanced query capabilities for accessing your NetSuite records and data.
📖 NetSuite API Reference docs:
https://docs.oracle.com/en/cloud/saas/netsuite/ns-online-help/chapter_1540811107.html
🔎 NetSuite SuiteQL Example docs:
🔐 NetSuite Authentication docs:
https://docs.oracle.com/en/cloud/saas/netsuite/ns-online-help/section_158074210415.html
Name of your role > Edit. Enter https://parabola.io/api/steps/generic_api/callback within the Redirect URI field, and give your authorization account an identifiable name.
https://<account-id>.app.netsuite.com/app/login/oauth2/authorize.nl
💡 Tip: Swap in your account-id into the Authorization Request URL.
URL Parameters
https://<account-id>.suitetalk.api.netsuite.com/services/rest/auth/oauth2/v1/token
💡 Tip: Swap in your account-id into the Authorization Request URL.
Request Body
https://<account-id>.suitetalk.api.netsuite.com/services/rest/auth/oauth2/v1/token
💡 Tip: Swap in your account-id into the Authorization Request URL.
Request Body
Run a SuiteQL query to retrieve data from a record. Get started with this template.
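For example, a simple SuiteQL query might look like this (a sketch; confirm table and field names for your account in the SuiteQL docs referenced above):
SELECT id, tranid, status FROM transaction WHERE type = 'SalesOrd'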
Update the properties of your sales orders by making API requests to NetSuite’s SuiteTalk REST web services. Get started with this template
⚠️ Note: An Internal Id associated with the sales order must be provided.
⚠️ Note: Depending on the property, custom fields can accept string values or Internal Ids. If a property is configured to accept an Id, using the property’s literal string value will throw an error.
How to best work with PDFs in Parabola
Parabola’s PDF parsing leverages both optical character recognition (OCR) and large language model (LLM) technology to extract information from PDF documents.
Each billing plan comes with a certain number of PDF pages included per month (see pricing). If your usage exceeds this threshold, we will reach out to discuss pricing for additional PDF parsing!
In building this step we’ve seen tens of thousands of PDF pages, and we know that these documents can be messy. Based on extensive testing and production usage of this step, we have identified a few common scenarios in which specific document features or components may impact the quality and/or consistency of the output. Many of these can be solved with additional configuration and rule setting - so we encourage you to use fine tuning or additional instructions before determining this step won’t work for your file!
Documents that tend to experience challenges or may not be parsable:
See below for the various ways you can bring a PDF file into Parabola
Use the Pull from PDF file step to work with a single PDF file. Upload a file by either dragging one into the outlined box, or select "Click to upload a file."
The Pull from inbound email step can pull in data from a number of filetypes, including attached PDF files. Once configured, Parabola can be set to parse PDFs anytime the relevant email receives a PDF file.
Check out this Parabola University video for a quick intro to our PDF parsing capabilities, and see below for an overview of how to read and configure your PDF data in Parabola.
Parabola’s Pull from PDF file step can be configured to return Columns or Keys
Once you have a PDF file in your flow, you will see a prompt for the second step - “Select table columns,” where you will provide information to the tool to determine what fields it should extract from the file. Parabola offers three methods for this configuration -
First, we’ll outline how these choices will impact your results and then we will discuss tips and best practices for fine tuning these results:
See below how, in this case with handwriting, more instructions enable the tool to determine whether there is writing next to the word “YES” or “NO”
Parabola’s Pull from PDF step has four additional configurations:
Mark columns as “Child columns” if they contain rows that have values unique from the parent columns:
Before
After marking “Size” as a child column
The Run another Parabola flow step gives you the ability to trigger runs of other Parabola flows within a flow.
Select the flow you want to trigger during your current flow's run. No data will pass through this step. It's strictly a trigger to automatically begin a consecutive run of a secondary flow.
However, if you choose “Run once per row with a file URL”, data will be passed to the second Flow, which can be read using the Pull from file queue step.
Use the Run behavior setting to indicate how the other Flow should run. The options that include wait will cause the step to wait until the second Flow has finished before it can complete its calculation. The other options will not wait.
This step can be used with or without input arrows. If you place this step into a Flow without input arrows, it will be the first step to run. If it does have input arrows, then it will run according to the normal sequence of the Flow. Any per row options require input arrows.
The Pull from Parabola Table step is a source step used to pull data from a Parabola Table that you have access to. If you are an Editor or Viewer on a Flow, any Parabola Tables on that Flow will be available to be pulled in as a data source using this step.
The dropdown options for Tables to import will be located on the left-hand side. Tables that you have access to will be listed in the dropdown options. This step can access any Table in your Parabola account that you are authorized to access (whether as Viewer, Editor or owner).
This step pulls the base data in your Parabola Table. Views applied on to your table, such as filters, sorts, aggregations, groups and visual formatting will not show up in this step.
If you do not see your Parabola Table in the dropdown, check to make sure the Allow other Flows to pull data from this table option is enabled on your Send to Parabola Table step.
If you need to bring in multiple Tables, use multiple Pull from Parabola Table steps to bring in the data. Then combine the dataset using a Stack tables or Combine tables step.
Limitations: when working across multiple Flows, the Pull from Tables step will only pull from a Table that has been published on a Flow with a successful run. When working within the same Flow, you can also pull from a draft (unpublished) table.
The Send to Parabola Table step is a destination step that lets you store your dataset in a Parabola Table. Data sent to that table will be visible to anyone with access to that Flow (Viewer, Editor or Owner).
When configured, the Send to Parabola Table step has two tabs - an "Input" tab, and an "Existing Table" tab.
Overwrite the table
Append new data
Update existing rows
Storing data
Once you run your Flow, the Table will populate and update. Tables are useful to store data that can be accessed between Flows or to create reports when used in conjunction with the Visualize step. More info on how to visualize here.
Use an arrow to connect this step to other steps in a sequence. For example, you can connect this step to the Run another Flow step to first send data to a Table and then run a Flow that pulls data from that Table.
Security: the data you send through this step is stored by Parabola. We store the data as a convenience, so that the next time you open the Flow, the data is loaded into it. Your data is stored securely in an Amazon S3 Bucket, and all connections are established over SSL and encrypted.
Your Table’s content is never discarded. To remove the data, you will need to delete the step from both Draft and Published versions of the flow (or delete the entire Flow).
Limitations: Parabola Tables will be limited to our current cell count limitation (described here).
At launch, you can use unlimited Parabola Tables at no extra charge to your team. After a beta period, we’ll move to a usage-based model for Parabola Tables data storage. (You will not be charged retroactively for usage during the beta period.)
Parashift is an Intelligent Document Processing platform that provides out-of-the-box solutions for data extraction from various types of documents, including PDFs. Parashift leverages proprietary AI-based technology to read and parse documents, resulting in cleaned data that is available via API.
Parabola’s beta integration with Parashift receives parsed PDF data in real time via Webhook and makes that data accessible along with any other data source or target within Parabola.
The following document outlines how to configure the required Webhook settings in Parashift and integration in Parabola.
The first step in the configuration process is generating a webhook URL in Parabola that can be added in Parashift. Review our Receive from webhook page for a detailed overview of how to create a webhook and retrieve the corresponding URL.
Navigate to the Webhooks page, listed under the </> Development section, within the side panel in your Parashift account.
Create a new webhook using the “+ New” icon in the top right of the screen. Give your newly created webhook a name and paste in the Parabola URL that was generated in the previous step.
Enable the Processing Finished checkbox within the Deliver Topic. This will ensure a message is posted to the Parabola webhook each time a document is uploaded to Parashift and finishes processing. Additional topics can be selected if you’d like to receive other types of notifications within Parabola. Click save once complete.
Parashift will send a message to the specified Parabola webhook for each event type specified in the section above. These messages will typically include a batch ID, document ID, status, and timestamp. An example of the Processing Finished message is below:
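Since exact payloads can vary, here is only an illustrative sketch of the shape, expressed as a Python dict using the fields named above; inspect a real message in your Parabola webhook step to confirm the structure:

# Hypothetical "Processing Finished" message -- field names are illustrative
message = {
    "batch_id": 123,
    "document_id": 456,
    "status": "done",
    "timestamp": "2024-01-01T12:00:00Z",
}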
Navigate to the API Keys page, listed under the </> Development section, within the side panel in your Parashift account.
Create a new API Key using the “+ New” icon in the top right of the screen. Give your newly created API Key a name and click save. Your API key will become visible and can be copied from this screen.
Once completed, this API key should be passed in all API requests to Parashift as a Bearer Token.
After receiving a message that a document has finished processing, the next step is to retrieve the document details. An API call can be made to the following endpoint to return the parsed attributes of a given document.
https://api.parashift.io/v2/documents/{document_id}/?include=document_fields&extra_fields[document_fields]=extraction_candidates
The API response will leverage the JSON:API specification, which will require expanding several JSON objects in Parabola in order to effectively work with the data. An example of this process is below and also included as part of the beta integration.
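A minimal Python sketch of that request, assuming a placeholder API key and a document_id taken from the webhook message:

import requests

API_KEY = "your-parashift-api-key"  # from the API Keys page above
document_id = "456"                 # sent in the webhook message

# Request the parsed fields for a finished document (JSON:API response)
response = requests.get(
    f"https://api.parashift.io/v2/documents/{document_id}/",
    params={
        "include": "document_fields",
        "extra_fields[document_fields]": "extraction_candidates",
    },
    headers={"Authorization": f"Bearer {API_KEY}"},
)
print(response.json()["data"])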
Use the Pull from ParseHub step to pull your web scraping data in from ParseHub.
To connect to your ParseHub account, select Authorize in the left-side toolbar.
You'll be prompted to insert your ParseHub API Key, which can be found on your account settings page.
Enter your API Key and select Authorize.
Select your Project from the dropdown in the settings bar.
Your data from the most recent web scrape will now be pulled into Parabola.
Use the Send to ParseHub step to send dynamic data to ParseHub to kick off a web scraping project.
To connect to your ParseHub account, select Authorize.
You'll be prompted to insert your ParseHub API Key, which can be found on your account settings page.
Enter your API Key and select Authorize.
Choose the Project you'd like ParseHub to run from the dropdown in the settings bar.
Choose the column that contains the values that your ParseHub project is expecting in its Start Values section.
If you already have the URLs to use defined in your ParseHub project, and would not like to send ParseHub any start URLs, then you can target a blank column to send.
The Pull from PostgreSQL step connects to and pulls from a PostgreSQL database. PostgreSQL is an open-source object-relational database management system.
Double-click on the Pull from PostgreSQL step and click Authorize. The following fields are required to connect:
If no port is specified during authorization, this step will default to port 5432.
You can leave fields blank (like password) if they are not needed for the database to authorize connection.
Once you are successfully connected to your PostgreSQL server, you'll first see a dropdown option to select a table from your server. By default, Parabola pulls the whole table using a query equivalent to SELECT * FROM "your_table".
If you'd like to pull in more specific, relevant data by writing your own SQL statement, you can do so by clicking into "Advanced Settings" and inputting your query into the Query (optional) field.
The Send to PostgreSQL step can insert and update rows in a remote PostgreSQL server. PostgreSQL is an open source relational database management system.
Double-click on the Send to PostgreSQL step and click Authorize. The following fields are required to connect:
You should be able to find these fields by viewing your PostgreSQL profile.
If no port is specified during authorization, this step will default to port 5432.
You can leave fields blank (like "Password") if they are not needed for the database to authorize connection.
Once you are successfully connected to your server, you'll first see a dropdown option to select the table you'd like to send data to.
By default, this field is set to 20, which should be safe for most databases. This setting controls how many connections Parabola generates to your database in order to write your changes faster. The number of rows you are trying to export will be divided across the connections pool, allowing for concurrent and fast updates.
Be aware that every database has its own maximum number of connections that it will accept. It is not advisable to set the Maximum Connections field in Parabola to your database's maximum. If you do that, the flow will use every connection when it runs, and nothing else will be able to connect to your database. 50% - 60% of the total available connections is as high as you should go; for example, if your database accepts 100 connections, set this field to no more than 50 or 60. Talk to whoever runs, manages, or set up your database to find out how many connections it can handle.
If you set this field to less than 1 or more than the total number allowed by your database, the step will error.
Next, you'll select an Operation. The available Operation options are:
The Insert option will insert new rows in the database. Once you select the "Insert" operation, you'll be asked to map your columns in Parabola to columns from your selected PostgreSQL table. You can leave some column mappings blank. If you're using the Insert operation, make sure that it's okay for Parabola to create these new rows in your table. For example, you may want to check for possible duplicates.
The Upsert option will update rows if possible, and insert new rows if not. The Upsert operation requires you to specify the primary key of the database table ("Unique Identifier Column in Database") and the column that contains unique values in Parabola ("Unique Identifier Column in Results"). Mapping these columns is important so Parabola can figure out which rows to update and which rows to insert. A primary key / unique identifier must be configured on the database table in order for this dropdown to show any options.
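If you think in SQL terms, this behavior is conceptually similar to PostgreSQL's INSERT ... ON CONFLICT (id) DO UPDATE statement, with your unique identifier column acting as the conflict target.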
Then, you need to map your columns in Parabola to columns from your selected PostgreSQL table.
The Update option will only update rows. It will not insert any new rows. The Update operation requires you to specify the primary key of the database table ("Unique Identifier Column in Database") and the column that contains unique values in Parabola ("Unique Identifier Column in Results"). Mapping these columns is important so Parabola can figure out which rows to update. A primary key / unique identifier must be configured on the database table in order for this dropdown to show any options.
Then, you need to map your columns in Parabola to columns from your selected PostgreSQL table.
The Send to PostgreSQL step handles errors in a different way than other steps. When the flow runs, the step attempts to export each row, starting at the top of the table and processing down until every row has been attempted. Most exports in Parabola will halt all execution if a row fails, but this export will not halt. In the event that a row fails to export, this step will log the error, skip past the row, and continue to attempt all rows. When the step finishes attempting every row, it will either be in a normal (successful) state if every row succeeded, or in an error (failure) state if at least 1 row was skipped due to errors.
Pull data on all of your subscription customers using our Recharge beta step. Track how many new customers subscribed or cancelled in a day, and report on order data passing through Recharge.
Recharge is a beta integration which requires a slightly more involved setup process than our native integrations (like Facebook Ads and Google Analytics). Following the guidance in this doc should help even those without technical experience pull data from Recharge. If you run into any questions, shoot our team an email at support@parabola.io.
Follow the guidance in this post from Recharge to secure your API key. In your "Pull from Recharge" step, this key will go in the "Request Header" section under "X-Recharge-Access-Token".
To specify a date range in your 'pull orders' step, visit the 'Set timeframe' step and modify the start and end dates.
The Pull from Redshift step connects to and pulls data that is stored in your Amazon Redshift database. Amazon Redshift is a data warehouse product within the AWS ecosystem.
Double-click on the Pull from Redshift step and click Authorize. The following fields are required to connect:
If no port is specified during authorization, this step will default to port 5439.
You can leave fields blank (like Password) if they are not needed for the database to authorize connection.
Once you are successfully connected to your database, you'll first see a dropdown option to select a table from your Redshift database. By default, Parabola pulls the whole table using a query equivalent to SELECT * FROM "your_table".
If you'd like to be able to pull in more specific, relevant data, or reduce the size of your default import, you can do so by writing your own SQL statement to filter your table's data.
To do so, click into "Advanced Settings" and input your query into the Query (optional) field.
The Pull from Salesforce step gives you the ability to bring data from your Salesforce CRM into Parabola by object type and fields. You can also filter your results by a selected View which can be set within Salesforce.
Connect your Salesforce account by clicking Authorize and following the prompts to log in with your Salesforce details.
When pulling in data from Salesforce, you can select the Object Type and View. Object Types are based upon the objects you would find in Salesforce, for example, Accounts, Opportunities, and Contacts.
Views give you the ability to trim down the results of your pull using Salesforce's native View criteria as a filter.
Your data will return in the same structure as your View within Salesforce. Under "Advanced Settings", you'll see the ability to choose the Fields that return with your data. This will override the default structure set from your selected View.
Use the Send to Salesforce step to add or update data from Parabola to your Salesforce CRM.
Connect your Salesforce account by clicking Authorize and following the prompts to log in with your Salesforce details.
The default Operation will be to Upsert, which will map your data to existing records and create new records if there is no match. You can also select Insert, which will only create new records, but this may create duplicate records.
Select the appropriate Object Type to ensure your records are correctly mapping to your CRM. These Object Types are similar to those in the Pull from Salesforce step, and include Accounts, Opportunities, and Contacts, among others.
All columns must be mapped to their corresponding Salesforce fields, and the Upsert operation requires a column to be mapped to "Id". This is the Id of the object you are targeting, such as a lead or contact. To map your columns, click the dropdown menus and select each matching Salesforce field. The names of your columns do not need to match the fields.
The Use sample data step allows you to quickly begin building a Flow leveraging sample datasets. This is particularly useful when you want to test Parabola’s data transformation and visualization features, but don’t necessarily want to integrate your live data sources yet.
This step provides both generic data, such as US census and stock market data, as well as data that resembles specific tools like Shopify, ShipHero, Salesforce, and NetSuite.
Simply drag a Use sample data step from the Integrations tab of the search bar onto the canvas to immediately begin seeing data in Parabola. Double-click the step to view and modify the sample data that you’re working with.
This step includes both generic datasets as well as tool-specific datasets.
Beyond generic datasets like census and stock market data, the step also includes datasets that resemble what the data will actually look like when you pull it from another system.
For instance, if you select the “Shopify: Orders” sample data, the table returned will actually resemble the Pull from Shopify step’s output.
Once you have your sample data loaded up, imagine what you might do if you were working with that data in a spreadsheet. Would you do any filtering? What math calculations might you apply? Do the dates need to be reformatted?
Once you know how you want the data to be transformed, then you can shift focus to what step you need to use to apply that transformation. Check out the Transformations section of the search bar (and search for keywords) to find the right step for the job.
The Send emails by row step enables you to send emails to email addresses listed in a specific column. For example, if you have a column with client email addresses, using this step sends one email per row (per email address) in that column. Please note that this step can only send emails to 75 recipients at one time. The emails will be sent from team@parabolamail.io and it will include text at the bottom that says "Powered by Parabola."
Drag and drop the step out onto your canvas. If you double-click on it to open the step's settings, this will be the default view:
Move this step to the right of the steps in your flow that produce the column containing your list of email addresses. Connect the flow's latest step to this one and double-click on it to open up the display window:
In the Recipients field, select the column where the rows of email addresses are.
Values in the fields Subject and Body are required in order to finish the step set up and see a display result window. If you don't enter values, the step will error and not finish. You can merge values from other columns in the Subject and Body fields for a more personalized, customized message. To do this, you'll wrap the column names in curly braces like {column name}.
In the Reply To field, enter an email address where you'd like your recipients' (clients, customers, or colleagues) replies to be sent.
The Send to SendGrid step gives you the ability to automatically send emails through SendGrid without code. Quickly build and iterate sales, marketing, and internal solutions without tying up engineering resources.
To begin, click Authorize and login with your credentials.
You will need your API Key to link your SendGrid account to Parabola. You can find that on your SendGrid account's Settings > API Keys page.
The API Key will be obfuscated by a row of dots, but you can simply select the dots, copy them, and paste them into Parabola; your key will be added.
First, select your column of recipient email addresses. Each row will receive one email. If you have duplicate email addresses, they will receive multiple emails. Try using our Remove duplicate rows step to remove duplicate addresses prior to connecting the data to this step.
Enter the email address that you'd like emails to be sent from in the Send From field. The Send to SendGrid step can only send from a single address.
Next, enter your Sender Name and Email Subject.
Now you can select your Email Content Type. You can choose between Text and HTML. If you choose Text, your email will appear in plain text, which can be written directly in the Email Body field. If you choose HTML, enter your formatted HTML in the Email Body field.
Enter your text in the Email Body field. You can reference column data to use as a mail merge in both the Email Subject and Email Body by wrapping the column names in {curly braces}. If the body of your email is already in a column, simply reference that column with a merge value. Be aware that if your email body column itself includes merge fields, those fields will need to be merged prior to this step. All merges used in the Email Body and Email Subject fields will appear in the email as they do in the column.
Pull data from ShipHero to create custom reports, alerts, and processes to track key metrics and provide a great customer experience.
ShipHero is a beta integration which requires a more involved setup process than our native integrations (like Shopify and Google Analytics). Following the guidance in this doc (along with our video walkthrough) should help even those without technical experience pull data from ShipHero.
If you run into any questions, feel free to reach out to support@parabola.io.
Inside your flow, search for "ShipHero" in the right sidebar. When you drag the step onto the canvas, a card containing 'snippets' will appear on the canvas. To start pulling in data from ShipHero, copy a snippet and paste it onto the canvas (how to paste a snippet).
We must start by authorizing ShipHero's API. In the "Pull from ShipHero" step's Authentication section, select "Expiring Access Token". For the Access Token Request URL, you can paste: https://public-api.shiphero.com/auth/token
In the Request Body Parameters section, you can "+add" username and password, then enter your ShipHero login credentials. A second Request Header called "Accept" will exist by default – this can be deleted. Once completed, the step's authorization window should look like this:
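Under the hood, that authorization amounts to a token request like the following Python sketch (the login credentials are placeholders):

import requests

# Placeholder ShipHero login credentials
payload = {"username": "you@example.com", "password": "your-password"}

response = requests.post(
    "https://public-api.shiphero.com/auth/token",
    json=payload,
)
print(response.json()["access_token"])  # an expiring access token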
When you drag the ShipHero step onto the canvas, there will be 5 pre-built snippets available:
For everything besides Products, it's common to pull in data for a specific date range (ex. previous day or week). This is why the card begins with steps that specify a dynamic date range. For example, if you put -2 as the Start Date and -1 as the End Date, you will pull orders from the previous full day.
If you want to pull data from ShipHero that is not captured by these pre-built connections, you can modify the GraphQL Query and/or add Mutations by referencing ShipHero's GraphQL Primer.
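As a rough sketch of what a modified query could look like, here is a Python request against ShipHero's GraphQL endpoint. Treat the field names in the query as assumptions to verify against the GraphQL Primer:

import requests

ACCESS_TOKEN = "your-access-token"  # from the auth request above

# Illustrative query -- confirm the exact schema in ShipHero's GraphQL Primer
query = """
query {
  orders(order_date_from: "2024-01-01") {
    data(first: 100) {
      edges {
        node {
          id
          order_number
        }
      }
    }
  }
}
"""

response = requests.post(
    "https://public-api.shiphero.com/graphql",
    headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
    json={"query": query},
)
print(response.json())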
By default, we pull in 20 pages of data (2,000 records). To increase this value, visit the "Pull from ShipHero" step, go to "Rate Limiting" --> "Maximum pages to fetch", and increase the value until all of your data is pulled in.
The Pull from ShipStation step allows you to pull in orders, shipments, and fulfillments from your ShipStation account.
After clicking Authorize, you'll need to get your API Key and Secret and add them, which will enable this flow to pull from your ShipStation account. You can find your API Key and Secret here: https://ship11.shipstation.com/settings/api.
By default, this step will pull in Orders that were created within the last week. The orders pull defaults to also pulling in the line items for each order. This means that each row represents an item in an order. You can also pull in Shipments and Fulfillments.
Across Orders, Shipments, and Fulfillments, you can modify the time frame, default or all columns, and you can filter based on things like status and carrier.
When pulling in orders, you can select to pull in the default column set or all columns. By default, the Orders pull includes line items. You can change this by updating the settings to show orders without line items.
Orders can be filtered in this step to only include those with a certain order status (e.g. Awaiting Shipment).
Orders can also be filtered down by the date they were created.
When pulling in shipments, you can select to pull in the default column set or all columns. By default, the Shipments pull includes line items. You can change this by updating the settings to show shipments without line items. Shipments can be filtered in this step to only include those sent via a certain carrier (i.e. UPS). Shipments can also be filtered down by the date they were created.
When pulling in fulfillments, you can select to pull in the default column set or all columns. Fulfillments can also be filtered down by the date they were created.
The ShipStation API is used for managing and automating shipping tasks, integrating with e-commerce platforms, and streamlining order fulfillment and shipment processes.
ShipStation is a beta integration which requires a slightly more involved setup process than our native integrations. Following the guidance in this document should help even those without technical experience pull data from ShipStation. If you run into any questions, shoot our team an email at support@parabola.io.
📖 ShipStation API reference docs:
https://www.shipstation.com/docs/api/
🔐 ShipStation Authentication docs:
https://www.shipstation.com/docs/api/requirements/#authentication
1. Navigate to your ShipStation settings in your account.
2. In the API Keys section, create or regenerate your API Key and API Secret.
3. Save your credentials before connecting to Parabola.
1. Add a Pull carrier rates from ShipStation step template to your canvas.
2. Click into the Pull from API: Carriers step to configure your authentication.
3. Under the Authentication Type, select None.
4. Click into the Request Settings to configure your request using the format below:
💡 Tip: You can configure an Authorization Header Value using a base-64 encoder. Encode your API Key and API Secret separated by a colon: API Key:API Secret.
In Parabola, use the Header Value field to type Basic followed by a space, then paste in your encoded credentials: Basic {encoded credentials here}.
5. Click into the Enrich with API: ShipStation Rates step and apply the same authentication settings used in steps 1-4.
⚠️ Note: In this example, the API Key is api_key. The API Secret is api_secret.
Base-64 encoding the API Key and API Secret, separated by a colon, generates the following string: YXBpX2tleTphcGlfc2VjcmV0
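If you'd rather generate the encoded credentials yourself, a couple of lines of Python reproduce the string above:

import base64

# The example API Key and API Secret from the note above
api_key, api_secret = "api_key", "api_secret"

# Join with a colon, then base-64 encode to build the Basic auth value
encoded = base64.b64encode(f"{api_key}:{api_secret}".encode()).decode()
print(encoded)             # YXBpX2tleTphcGlfc2VjcmV0
print(f"Basic {encoded}")  # value for the Header Value field in Parabola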
Get started with this template.
1. Add a Use sample data step to your canvas. You can also import a dataset with tracking numbers into your Flow (Pull from Excel File, Pull from Google Drive, Pull from API, etc.)
2. Select the Ecommerce: Orders dataset and click Refresh Data.
💡 Tip: Connect the sample data to a Limit rows step to get rates for 1 sample order.
3. Use an Add text columns step to generate a new column Merge with the value 1.
4. Add a Pull from API step beneath the Use sample data step.
5. Click into the step. Under Authentication Type, select None.
6. Click into the Request Settings and configure a request to list all carriers in your ShipStation account:
API Endpoint URL
Request Headers
7. Click Refresh data to display the results.
8. Select orders as a Nested Key.
9. Click Refresh data once more to expand the order data into a table.
10. Connect this step to an Edit columns step.
11. In the Edit columns step, keep the name and code columns.
12. Use an Add text columns step to generate a new column Merge with the value 1.
13. Use a Combine tables step and connect these steps:
14. Click into the step to configure the settings so that rows combine where the Merge column matches.
15. Copy and paste the Products - Weight and dimensions.csv file snippet into your flow: parabola:cb:86331de2-e00b-4634-b629-d37098bbbdfe
16. Use another Combine tables step and connect these steps:
17. Click into the step to configure the settings so that rows combine where the Product Title and Product columns match.
18. Connect the dataset to an Enrich with API step.
19. Click into the step. Under Authentication Type, select None.
20. Click into the Request Settings to configure a request to get shipping rates for the specified shipping details:
API Endpoint URL
Request Body
Request Headers
Note: The weight of the order must be provided in the API request. The dimensions are optional. Consider using an Add math column and Sum by group steps to calculate weight and dimension values by order and quantity.
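To make the two API calls in this walkthrough concrete, here is a rough Python sketch of the carriers request (step 6) and the rates request (step 20). The endpoint paths follow ShipStation's public API reference, but treat the body field names as assumptions to verify against the docs:

import base64
import requests

# Build the Basic auth header from placeholder credentials
api_key, api_secret = "api_key", "api_secret"
auth = base64.b64encode(f"{api_key}:{api_secret}".encode()).decode()
headers = {"Authorization": f"Basic {auth}"}

# Step 6: list all carriers on the account (returns names and carrier codes)
carriers = requests.get("https://ssapi.shipstation.com/carriers", headers=headers)
print(carriers.json())

# Step 20: get shipping rates for one shipment; weight is required,
# dimensions are optional
rate_request = {
    "carrierCode": "ups",
    "fromPostalCode": "78703",
    "toPostalCode": "94107",
    "toCountry": "US",
    "weight": {"value": 2, "units": "pounds"},
    "dimensions": {"units": "inches", "length": 10, "width": 6, "height": 4},
}
rates = requests.post(
    "https://ssapi.shipstation.com/shipments/getrates",
    headers=headers,
    json=rate_request,
)
print(rates.json())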
The Pull from Shopify step can connect directly to your Shopify store and pull in orders, line items, customers, product data, and much more!
This step can pull in the following information from Shopify:
Select the blue Authorize button. If you're coming to Parabola from the Shopify App Store, you should see an already-connected Pull from Shopify step on your flow.
By default, once you connect your Shopify account, we'll import your Orders data with Line Items detail for the last day. From here, you can customize the settings based on the data you'd like to access within Parabola.
This section will explain all the different ways you can customize the data being pulled in from Shopify. To customize these settings, start by clicking the blue dropdown after Show me all of the ____ with ____ detail.
Shopify orders contain all of the information about each order that your shop has received. You can see totals associated with an order, as well as customer information and more. The default settings will pull in any Order, with Orders detail, that happened in the last day. This will include information like the order total, customer information, and even the inventory location the order is being shipped from.
If you need more granular information about what products were sold, fulfilled, or returned, view your Orders with Line Items detail. This can be useful if you want relevant product data associated with each line item in the order.
Each order placed with your shop contains line items - products that were purchased. Each order could have many line items included in it. Each row of pulled data will represent a single item from an order, so you may see that orders span across many rows, since they may have many line items.
There are 4 types of columns that show up in this pull: "Orders", "Line Items", "Refunds", and "Fulfillment columns". When looking at a single line item (a single row), you can scroll left and right to see information about the line item, about its parent order, refund information if it was refunded, and fulfillment information if that line item was fulfilled.
As your orders are fulfilled, shipments are created and sent out. Each shipment for an order is represented as a row in this pull. Because an order may be spread across a few shipments, each order may show up more than one time in this pull. There are columns referring to information about the order, and columns referring to information about the shipment that the row represents.
Every order that passes through your shop may have some discounts associated with it. A shopper may use a few discount codes on their order. Since each order can have any number of discount codes applied to it, each row in this pull represents a discount applied to an order. Orders with no discounts will not show up in this table, and orders with several discounts may show up a few times! There are columns referring to information about the order, and columns referring to information about the discount that was applied.
This is a simple option that pulls in 1 row, containing the balance of your shop, and the currency that it is set to.
This option will pull in 1 row for every customer that you have in your Shopify store records.
Available filters:
Retrieve all disputes, ordered by the date they were initiated, with the most recent first. Disputes occur when a buyer questions the legitimacy of a charge with their financial institution. Each row will represent 1 dispute.
An inventory level represents the available quantity of an inventory item at a specific location. Each inventory level belongs to one inventory item and has one location. For every location where an inventory item is available, there's an inventory level that represents the inventory item's quantity at that location.
This includes product inventory item information as well, such as the cost field.
You can choose any combination of locations to pull the inventory for, but you must choose at least one. Each row will contain a product that exists in a location, along with its quantity.
Toggle "with product information" to see relevant product data in the same view as the Product Inventory.
This is a simple option that will pull in all of your locations for this shop. The data is formatted as one row per location.
Payouts represent the movement of money between a Shopify Payments account balance and a connected bank account. You can use this pull option to pull a list of those payouts, with each row representing a single payout.
Pull the name, details, and products associated with each of your collections. By default, each row returns the basic details of each collection. You can also pull the associated products with each collection.
Available filters:
This pulls in a list of your products. Each row represents a product variant, since a product can have any number of variants. You may see that a product is repeated across many rows, with one row for each of its variants. When you set up a product, it is created as a variant, so products cannot exist without having at least one variant, even if it is the only one.
Available filters:
The Send to Shopify step can connect directly to your Shopify store and automatically update information in your store.
This step can perform the following actions in Shopify:
To connect your Shopify account from within Parabola, click on the blue "Authorize" button. For more help on connecting your Shopify account, jump to the section: Authorizing the Shopify integration and managing multiple stores.
Once you connect a step into the Send to Shopify step, you'll be asked to choose an export option.
The first selection you'll make is whether this step is enabled and will export all data or disabled and will not export any data. By default, this step will be enabled, but you can always disable the export if you need to for whatever reason.
Then you can tell the step what to do by selecting an option from the menu dropdown.
When using this option, every row in your input data will be used to create a new customer, so be sure that your data is filtered down to the point that every row represents a new customer to create.
When using this step, any field that is not mapped in the settings table in the step will not be sent. Only the mapped fields will be sent to Shopify.
Every customer must have either a unique Phone Number or Email set (or both), so be sure those fields are present, filled in, and have a mapping.
If you create customers with tags that do not already exist in your shop, the tags will still be added to the customer.
The address fields in this step will be set as the primary address for the customer.
When using this option, every row in your input data will be used to update an existing customer, so be sure that your data is filtered down to the point that every row represents a customer to update.
When using this step, any field that is not mapped in the settings table in the step will not be sent. Only the mapped fields will be sent to Shopify.
Every customer must have a Shopify customer ID present in order to update successfully, so be sure that column is present, has no blanks, and is mapped to the id field in the settings.
The address fields in this step will edit the primary address for the customer.
When using this option, every row in the step will be used to delete an existing customer, so be sure that your data is filtered down to the point that every row represents a customer to delete.
This step only requires a single field to be mapped - a column of Shopify customer IDs to delete. Make sure your data has a column of those IDs without any blanks. You can find the IDs by using the Pull from Shopify step.
Collections allow shops to organize products in interesting ways! When using this option, every row in the step will be used to add a product to a collection, so be sure that your data is filtered down to the point that every row represents a product to add to a collection.
When using this option, any field that is not mapped in the settings table in the step will not be sent. Only the mapped fields will be sent to Shopify.
You only need two mapped fields for this option to work - a Shopify product ID and a Shopify Collection ID. Each row will essentially say, "Add this product to this collection".
Why is this option not called "Remove products from collections" if that is what it does? Great question. Products are kept in collections by creating a relationship between a product ID and a Collection ID. That relationship exists, and has its own ID! Imagine a spreadsheet full of rows that have product IDs and Collection IDs specifying which product belongs to which collections - each of those rows needs its own ID too. That ID represents the relationship. In fact, you don't need to imagine. Use the Pull from Shopify step to pull in Product-Collection Relationships. Notice there is an ID for each entry that is not the ID of the product or the collection. That ID is what you need to use in this step.
When using this option, every row in the step will be used to delete a product from a collection, so be sure that your data is filtered down to the point that every row represents a product-collection relationship that you want to remove.
This step does not delete the product or the collection! It just removes the product from the collection.
When using this step, any field that is not mapped in the settings table in the step will not be sent. Only the mapped fields will be sent to Shopify.
You need 1 field mapped for this step to work - it is the ID of the product-collection relationship, which you can find by Pulling those relationships in the Pull from Shopify step. In the step, it is called a "collect_id", and it is the "ID" column when you pull the product-collection relationships table.
What's an inventory item? Well, it represents the goods available to be shipped to a customer. Inventory items exist in locations, have SKUs, costs and information about how they ship.
There are a few aspects of an inventory item that you can update:
When using this step, you need to provide an Inventory Item ID so that the step knows which Item you are trying to update. Remember, any field that is not mapped in the settings table in the step will not be sent. Only the mapped fields will be sent to Shopify.
When using this option, every row in the step will be used to adjust an existing item's inventory level, so be sure that your data is filtered down to the point that every row represents an item to adjust.
When using this step, any field that is not mapped in the settings table in the step will not be sent. Only the mapped fields will be sent to Shopify.
Every item must have a Shopify inventory item ID present in order to adjust successfully, so be sure that column is present, has no blanks, and is mapped to the id field in the settings.
You must provide the inventory item ID, the location ID where you want to adjust the inventory level, and the available adjustment number. That available adjustment number will be added to the inventory level that exists. So if you want to decrease the inventory level of an item by 2, set this value to -2. Similarly, use 5 to increase the inventory level by 5 units.
When using this option, every row in the step will be used to reset an existing item's inventory level, so be sure that your data is filtered down to the point that every row represents an item to reset.
When using this step, any field that is not mapped in the settings table in the step will not be sent. Only the mapped fields will be sent to Shopify.
Every item must have a Shopify inventory item ID present in order to reset successfully, so be sure that column is present, has no blanks, and is mapped to the id field in the settings.
You must provide the inventory item ID, the location ID where you want to adjust the inventory level, and the available number. That available number will be used to overwrite any existing inventory level that exists. So if you want to change an item's inventory from 10 to 102, then set this number to 102.
To use the Pull from Shopify or Send to Shopify steps, you'll need to first authorize Parabola to connect to your Shopify store.
To start, you will need your Shopify shop URL. Take a look at your Shopify store, and you may see something like this: awesome-socks.myshopify.com - from that you would just need to copy awesome-socks to put into the first authorization prompt:
After that, you will be shown a window from Shopify, asking for you to authorize Parabola to access your Shopify store. If you have done this before, and/or if you are logged into Shopify in your browser, this step may be done automatically.
Parabola handles authorization on the flow-level. Once you authorize your Shopify store on a flow, subsequent Shopify steps you use on the same flow will be automatically connected to the same Shopify store. For any new flows you create, you'll be asked to authorize your Shopify store again.
You can edit your authorizations at any time by doing the following:
If you manage multiple Shopify stores, you can connect to as many separate Shopify stores in a single flow as you need. This is really useful because you can combine data from across your Shopify stores and create holistic custom reports that provide a full picture of how your business is performing.
Please note that deleting a Shopify account from authorization will remove it from the entire flow, including any published versions.
This article goes over the date filters available in the Pull from Shopify step.
The Orders and Customer pulls from the Pull from Shopify step have the most complex date filters. We wanted to provide lots of options for filtering your data from within the step to be able to reduce the size of your initial import and pull exactly the data you want to see.
Date filters can be a little confusing though, so here's a more detailed explanation of how we've built our most complex date filters.
The date filters in the Pull from Shopify step, when available, can be found at the bottom of the lefthand side, right above the "Show Updated Results" button.
In this step, we indicate what time zone we're using to pull your data. This time zone matches the time zone selected for your Shopify store.
At the bottom of the lefthand panel of your step, if you're still uncertain if you've configured the date filters correctly, we have a handy helper to confirm the date range we'll use to filter in the step:
This article explains how to reproduce the most commonly-used Shopify metrics. If you don't see the metric(s) you're trying to replicate, send us a note and we can look into it for you.
The Shopify Overview dashboard is full of useful metrics. One problem is that it doesn't let you drill into the data to understand how it's being calculated. A benefit of using Parabola to work with your Shopify data is that you can easily replicate most Shopify metrics and see exactly how the raw data is used to calculate these overview metrics.
This formula will show you the total sales per line item by multiplying the price and quantity of the line items sold.
Import Orders with Line Items details
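For example, assuming the default column names from the Orders with Line Items pull, the math column formula would look something like {Line Items: Price} * {Line Items: Quantity}.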
This formula will show you the total refund per line item by multiplying the refunded amount and refunded quantity. In this formula, we multiply by -1 to turn it into a negative number. If you'd like to display your refunds by line items as a positive number, just don't multiply by -1.
Import Orders with Line Items details
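For example, assuming illustrative column names (check your pulled data for the exact headers), the formula would look something like {Refunds: Refund Line Items: Price} * {Refunds: Refund Line Items: Quantity} * -1.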
This formula will show you the net quantity of items sold, taking into account and removing the items that were refunded.
Import Orders with Line Items details
First, use the Sum by group step to sum "Line Items: Quantity" and "Refunds: Refund Line Items: Quantity"
Then, use the newly generated "sum" columns for your formula.
Import Orders with Orders details.
Add a Sum by group step. Sum the "Total Line Items Price" column.
Import Orders with Orders details.
To calculate net sales, you'll want to get gross sales - refunds - discounts. This will require two steps:
Import Orders with Line Items details.
To calculate total sales, you'll want to get gross sales + taxes - refunds - discounts. This will require three steps:
Import Orders with Orders details.
Import Orders with Orders details.
Import Orders with Orders details.
Import Customers. This table will give us Total Spent per customer as well as the # of Orders by customer.
Alternatively, import Orders.
Use the Count by group step after pulling in orders.
Quickly and easily pipe data from your Parabola Flows into Slack. This integration is often used to notify your team about a process running successfully, or to report on specific updates or actions (e.g., "Gross sales from yesterday totaled X," or "a new opportunity was created in Salesforce").
Slack is a beta integration which requires a slightly more involved setup process than our native integrations. Following the guidance in this doc should help even those without technical experience send messages in Slack. If you run into any questions, shoot our team an email at support@parabola.io.
In order to send messages in Slack, you need to create an 'app':
The Pull from Smartsheet step enables you to pull in data from Smartsheet, a collaborative spreadsheet tool used for organizing and working with data. This way, you'll be able to view your data as a table, workflow, or timeline and automate the process of making reports. You may also combine it with other data sources.
To authorize your Smartsheet account in this step, select Authorize.
Then, a new webpage tab will open and redirect you to log into your Smartsheet account. Once you login, select Allow to finalize the authorization.
After this, your webpage will return to the tab with your Parabola flow on it and refresh the step automatically.
The step will automatically select and pull in the first Sheet listed in your Smartsheet account's Sheets section. To bring in a different Smartsheet Sheet with the dataset you'd like to work with, select the name of the sheet to pull it in and click on the circular arrow icon next to the step's name Pull from Smartsheet to refresh the display window.
After a dataset from your Smartsheet sheet is pulled in, select the blue "Show Updated Results" button to save these settings in the step.
The Send to Smartsheet step enables you to automate data entry in Smartsheet, automatically add new data into existing Sheets, and send reports to customers and clients.
To authorize your Smartsheet account in this step, select Authorize.
Then, a new webpage tab will open and redirect you to log into your Smartsheet account. Once you login, select Allow to finalize the authorization.
After this, your webpage will return to the tab with your Parabola flow on it and refresh the step automatically.
Select the Sheet you'd like to overwrite and update, or select Create New Sheet to make a new one in Smartsheet.
Select Show Updated Results to save the step settings and update the display window.
Under the Map "column name" to field type: settings section, you may also select one of 11 field types to customize a column's field type in Smartsheet.
Use the Pull from Snowflake step to pull in your data from your Snowflake database.
This step is currently offered to users on our Advanced Plan. Check out the Pricing Page for additional information.
Before you get started, check to see if your team has already set up their Client ID and Client Secret for Parabola. If you or someone else on your team has already set this up on the Snowflake side, you will not need to go through this process again and can jump straight to the Parabola Step Set Up section.
In order to perform these steps, you must have the right permission level in Snowflake to create a security integration.
Log in to your Snowflake account. Once you’re logged in, click on the “Worksheets” tab in the sidebar and click the + Worksheet button in the upper righthand corner.
In the worksheet, paste the query below into the worksheets query box. This will instantiate a custom client OAuth server on your Snowflake instance that Parabola will use to connect to.
create security integration oauth_parabola_prod
type = oauth
enabled = true
oauth_client = custom
oauth_client_type = 'CONFIDENTIAL'
oauth_redirect_uri = 'https://parabola.io/api/auth/snowflake/callback'
oauth_issue_refresh_tokens = true
oauth_refresh_token_validity = 10368000
The configuration above contains the basic default settings for the OAuth server setup, but it can be customized further for your needs. Additional information is located in the Snowflake documentation here.
Click the Run/Play button. If successful, you should see a notification on the lower portion of the screen confirming integration creation was successful.
Run the following query:
select system$show_oauth_client_secrets('OAUTH_PARABOLA_PROD');
Note: The name of your integration passed into this statement should be fully capitalized, e.g. “oauth_parabola_prod” should be entered as 'OAUTH_PARABOLA_PROD'.
Click on the result in the lower half of the page and copy the oauth_client_id and oauth_client_secret values in the resulting JSON.
In your Flow builder, bring in the Pull from Snowflake step and click on “Authorize Snowflake”. You will see a form asking for client_id, client_secret, and account_identifier. For client_id and client_secret, paste the values you received above.
For account_identifier, paste your Snowflake account id. Your account ID will be located in your URL:
<account_identifier>.snowflakecomputing.com
If your Snowflake URL has a region included in it, along with an account identifier, you may need to include that region as well in this step.
After you hit Submit, a module will pop up which will ask to authenticate. Login to your Snowflake account as you always would. After logging in, you should be taken back to Parabola. You will now be able to query data from Snowflake!
When a user authorizes our "Pull from Snowflake" step, their access to data within the Parabola step will be the same as their access to data within the Snowflake platform. If a user has granular permissions configured in Snowflake, their access will be gated in the same fashion within Parabola.
While credentials like Client ID and Client Secret are at the organization level, when a user actually authenticates the step through their Snowflake login, we ensure that the actual user account permissions are enforced within the step itself.
By default, Parabola will mimic the permissions you have within your Snowflake instance. The request will check the user's default role, warehouse, and database/schema. If these values are not set, or the user's default values are not sufficient to make a certain request, you will see an error message like the one below:
Settings Error: Error occurred with Snowflake API (status_code: 422, message: “SQL compilation error: Object ‘CUSTOMER’ does not exist or not authorized.”)
If this occurs, open up the settings on the left-hand side labeled Connection Options and manually enter the values you would like to use to make a query:
You can play around with these values in the Snowflake Worksheets section to find a configuration that works for you. Use the upper lefthand corner of the page to select the role or warehouse, and the sidebar to select the database or schema, respectively:
Use the Send to Snowflake step to insert, update, or merge data into your Snowflake database.
This step is currently offered to users on our Advanced Plan. Check out the Pricing Page for additional information.
Before you get started, check to see if your team has already set up their Client ID and Client Secret for Parabola. If you or someone else on your team has already set this up on the Snowflake side, you will not need to go through this process again and can jump straight to the Parabola Step Set Up section.
In order to perform these steps, you must have the right permission level in Snowflake to create a security integration.
Log in to your Snowflake account. Once you’re logged in, click on the “Worksheets” tab in the sidebar and click the + Worksheet button in the upper righthand corner.
In the worksheet, paste the query below into the worksheets query box. This will instantiate a custom client OAuth server on your Snowflake instance that Parabola will use to connect to.
create security integration oauth_parabola_prod
The configuration above contains the basic default settings for the OAuth server setup, but it can be customized further for your needs. Additional information is located in the Snowflake documentation here.
Click the Run/Play button. If successful, you should see a notification on the lower portion of the screen confirming integration creation was successful.
Run the following query:
select system$show_oauth_client_secrets('OAUTH_PARABOLA_PROD');
Note: The name of your integration passed into this statement should be fully capitalized, e.g. “oauth_parabola_prod” should be entered as 'OAUTH_PARABOLA_PROD'.
Click on the result in the lower half of the page and copy the oauth_client_id and oauth_client_secret values in the resulting JSON.
In your Flow builder, add the Send to Snowflake step and click on “Authorize Snowflake”. You will see a form asking for client_id, client_secret, and account_identifier. For client_id and client_secret, paste the values you received above.
For account_identifier, paste your Snowflake account id. Your account ID will be located in your URL:
<account_identifier>.snowflakecomputing.com
If your Snowflake URL has a region included in it, along with an account identifier, you may need to include that region as well in this step.
After you hit “Submit”, a window will pop up which will ask to authenticate. Login to your Snowflake account as you always would. After logging in, you should be taken back to Parabola. You will now be able to send data to Snowflake!
When a user authorizes our Send to Snowflake step, their access to data within the Parabola step will be the same as their access to data within the Snowflake platform. If a user has granular permissions configured in Snowflake, their access will be gated in the same fashion within Parabola.
While credentials like Client ID and Client Secret are at the organization level, when a user actually authenticates the step through their Snowflake login, we ensure that the actual user account permissions are enforced within the step itself.
This step can send data in 3 different ways: insert, update, or merge.
Both update and merge require a Snowflake column to be used as the unique identifier.
This step cannot create or remove tables within Snowflake. A database table must already exist in Snowflake, with a schema of columns, to use this step.
Any columns within Parabola that are not mapped to corresponding columns in Snowflake will not be sent. If any Snowflake columns do not have corresponding columns mapped within Parabola, the resulting new rows will have blank values in those columns.
⚠️ Note: when using the “update” option, Snowflake will not return an error if an update could not find a matching row. The Parabola Flow will indicate success and appear to have sent a number of rows, even if some of those rows did not match any rows in Snowflake. This is unfortunately a Snowflake limitation.
By default, Parabola will mimic the permissions you have within your Snowflake instance. The request will check the user's default role, warehouse, and database/schema. If these values are not set, or the user's default values are not sufficient to make a certain request, you will see an error message like the one below:
Settings Error: Error occurred with Snowflake API (status_code: 422, message: “SQL compilation error: Object ‘CUSTOMER’ does not exist or not authorized.”)
If this occurs, try updating the Role, Warehouse, Database, or Schema settings.
You can experiment with these values in the Snowflake worksheets section to find a configuration that works for you. Use the selector in the upper left-hand corner of the page for role or warehouse, and the sidebar for database or schema, respectively:
The Pull from Square step connects directly to your data in Square. Pull in data on transactions, refunds, customers, locations, inventory, and more.
To connect your Square account to Parabola, double-click on the Pull from Square step and click "Authorize." A window will pop up asking you to sign in to your Square account using your email and password. Once you complete the login, you'll see the step on Parabola connected and pulling in your data.
When you first connect to the Pull from Square step, it'll pull in Location Details, which is the first option in the data type dropdown.
If you click into "Advanced Settings," you can filter by location if you have multiple locations and want to see data for particular ones.
Here are the available data sets in the data type dropdown:
Pulling in Transactions data will return the following columns:
By default, this option will pull in all data for your selected time frame. However, you can filter for the following subsets of data: Tenders, Refunds, Line Items, Transactions Report, and Item Details Report.
The Timeframe will default to the Last 7 Days, but the following timeframe options are available: Last 24 Hours, Last 1 Day, Last 7 Days, Last 30 Days, Last Month, Last 3 Months, Last 6 Months, Last Year, This Year, and Custom Range.
If you select the Custom Range option, you can configure a Start Date and End Date. Please make sure to provide these dates in the following format: MM-DD-YYYY. So, February 28, 2020 will be indicated as 02-28-2020.
You should also set the appropriate Time Zone to use when filtering your dates. By default, the Africa/Abidjan time zone will be selected since that's the first time zone in our alphabetical list.
If you click into "Advanced Settings," you'll see an option to Filter Locations if it'd be useful to filter your data by one or many locations.
You can also adjust the offset of your relative timeframe by customizing how many days, weeks, or months ago we should start the timeframe from.
You can also specify a Day Start Time which will be 12:00AM as a default.
Pulling in Refunds data will return the following columns:
By default, this option will pull in all data for your selected time frame. However, you can filter for the following subsets of data: Original Transaction Tenders, Original Transaction Line Items, Refunds Report, Item Details Report.
The Timeframe, Time Zone, and Advanced Settings are all the same as the Transactions data type above.
Pulling in Category data will return your item catalog including items, variations, categories, discounts, taxes, modifiers, and more. A total of 92 columns are returned.
Pulling in Inventory data will return the following columns:
If you click into Advanced Settings, you can filter by location if you have multiple locations and want to see data for particular ones.
Pulling in Customers data will return the following columns:
Pulling in Employees data will return the following columns:
If you click into "Advanced Settings," you can filter by location if you have multiple locations and want to see data for particular ones.
The Pull from Squarespace step pulls data from your Squarespace account via their API.
The Pull from Squarespace step is a beta step. It is a Pull from an API step that has been pre-configured to work with the Squarespace API.
NOTE: Squarespace requires an "Advanced Commerce" plan to pull data from their Commerce API. For additional information, please visit their pricing page.
Connecting to the Squarespace API is straightforward. You will need to provide an API Key from your Squarespace account. Head here for instructions from Squarespace on generating an API key.
Once you have your API Key, add it to the step in the Bearer Token field.
If the pull does not bring back all of your data, increase the Max Requests field so that more pages are fetched.
This beta step is pre-configured to pull data in from the Squarespace Orders endpoint. You can update the URL in the API Endpoint URL field if you'd like to access data from a different endpoint. You can view all available endpoints from Squarespace's Commerce API here.
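If you'd like to see what the pre-configured request is doing, the sketch below shows a minimal equivalent call in Python. The endpoint URL and response shape are assumptions based on Squarespace's Commerce API docs; the step's API Endpoint URL field shows the exact URL it uses.

import requests

API_KEY = "your-squarespace-api-key"  # generated in your Squarespace settings

# Assumed Orders endpoint; check the step's API Endpoint URL field for the exact URL.
url = "https://api.squarespace.com/1.0/commerce/orders"
response = requests.get(url, headers={"Authorization": f"Bearer {API_KEY}"})
response.raise_for_status()

print(response.json())  # orders plus pagination info, returned as JSON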
The Pull from Stripe step connects to your Stripe account and pulls the following data types into Parabola in a familiar spreadsheet format:
Double-click on the Pull from Stripe step and click "Authorize." A pop-up window will appear asking you to log in to your Stripe account to connect your data to Parabola.
If you ever need to change the Stripe account that your Parabola flow is connected to, click "Edit accounts" at the top of the step and select to either "Edit" or "Add new account." Both options will prompt the same Stripe login window to update or add a new account.
The first thing you'll want to do is select a data type to pull in from Stripe. Below are the seven different data types available.
See data about coupons existing in your Stripe account. Please note that Stripe returns "Amount Off" with no decimals, so if you see 50000 in the "Amount Off" column, that will equal 500.00. You can connect our Insert math column and Format numbers steps to update this if you prefer.
See data about your customers in your Stripe account. The Created field displays when the customer was created in Stripe. The time is represented in Unix time. You can connect our Format dates step to update the date to your preferred format.
See data about invoices that exist in your Stripe account. The "Created" field displays when the invoices were created in Stripe. The time is represented in Unix time. You can connect our Format dates step to update the date to your preferred format.
See data about payments that exist in your Stripe account. The "Created" field displays when the payment was created in Stripe. The time is represented in Unix time. You can connect our Format dates step to update the date to your preferred format. Please note that Stripe returns Amount with no decimals, so if you see 50000 in the Amount column, that will equal 500.00. You can connect our Insert math column and Format numbers steps to update this if you prefer.
See data about plans that exist in your Stripe account. The "Created" field displays when the plan was created in Stripe. The time is represented in Unix time. You can connect our Format dates step to update the date to your preferred format. Please note that Stripe returns Amount with no decimals, so if you see 50000 in the Amount column, that will equal 500.00. You can connect our Insert math column and Format numbers steps to update this if you prefer.
See data about products that exist in your Stripe account. The "Created" field displays when the product was created in Stripe. The time is represented in Unix time. You can connect our Format dates step to update the date to your preferred format.
See data about subscriptions that exist in your Stripe account. The "Created" date field returned is represented in Unix time. You can connect our Format dates step to update the date to your preferred format.
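Parabola's Format dates, Insert math column, and Format numbers steps handle these conversions for you, but for reference, the underlying arithmetic is simple. A minimal sketch in Python:

from datetime import datetime, timezone

# Stripe amounts are integers in the smallest currency unit (cents for USD),
# so dividing by 100 yields dollars: 50000 -> 500.00.
amount = 50000
print(f"${amount / 100:.2f}")  # $500.00

# "Created" fields are Unix timestamps (seconds since 1970-01-01 UTC).
created = 1582848000
created_at = datetime.fromtimestamp(created, tz=timezone.utc)
print(f"{created_at:%m-%d-%Y}")  # 02-28-2020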
For every data type available in the Pull from Stripe step, we support the ability to customize the timeframe used to pull the relevant data, as well as the time zone applied to that timeframe. Parabola will retrieve rows of data that were created within your selected timeframe.
Get a full-picture view of your marketing performance across channels by adding TikTok data to your automated reports. Track key metrics like clicks, impressions, and payments, and combine your spend across platforms for a blended CAC metric.
TikTok is a beta integration which requires a more involved setup process than our native integrations (like Facebook Ads and Google Analytics). Following the guidance in this doc should help even those without technical experience pull data from TikTok. If you run into any questions, shoot our team an email at support@parabola.io.
To pull marketing data from TikTok, you must start by registering as a TikTok developer through their Marketing Portal.
Once registered, you can then 'Create a Developer App.' Heads up – TikTok says this app may take 2-3 business days for them to review and approve.
With your developer app approved, you'll be provided with an auth_token URL that generates your access token. If you click on this URL or paste it into a new browser tab, you'll see an access token appended to the resulting URL. That access token can be copied and inserted into the "Pull from TikTok" step in the "Request Header" section.
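Because the token is appended to the resulting URL as a query parameter, you can copy it by hand or pull it out with a small script. Below is a sketch; the access_token parameter name and URL are hypothetical, so match whatever actually appears in your browser's address bar.

from urllib.parse import parse_qs, urlparse

# Hypothetical example of the URL you land on after opening the auth_token link.
redirected_url = "https://example.com/callback?access_token=abc123&state=xyz"

params = parse_qs(urlparse(redirected_url).query)
print(params["access_token"][0])  # paste this value into the Request Header section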
You'll also need to acquire your "Advertiser ID", which can be pasted in the "Input Advertiser ID" card.
Our TikTok integration was built to support TikTok's Basic Reports and Audience Reports. To help you get started, we've brought in a list of all the Metrics (ex. Spend, CPM) and Dimensions (ex. group by Campaign and Day) supported in TikTok's reports.
To start outputting your data once you've successfully set up your TikTok Developer Account, you'll need to follow 4 steps:
The Pull from Twilio step pulls messages and phone numbers from Twilio.
The first thing you'll need to do to start using the Pull from Twilio step is to authorize the step to access the data in your Twilio account.
Double-click on the step and click "Authorize." This window will appear where you'll need to provide the Account SID and Auth Token from your Twilio account.
To locate this information on your Twilio account, click on the blue link to Lookup Twilio Account Info. This will take you to https://www.twilio.com/console. You'll see your Account SID and Auth Token that you can copy and paste from your account to Parabola.
Once you're connected, you'll have the following data types to select from:
This option pulls logs of all outbound messages you sent from your Twilio account. The returned columns are: To (phone number), From (phone number), Status, Price, Date Sent, Body (of message).
You have optional fields you can set to filter the data. Leaving the Date Sent field blank will simply pull in the most recent 100k messages.
This option pulls logs of any responses or inbound messages you've received to the phone numbers associated with your Twilio account. The returned columns are: To (phone number), From (phone number), Status, Price, Date Sent, Body (of message).
You have optional fields you can set to filter data. Leaving the Date Received field blank will simply pull in the most recent 100k messages.
This option pulls in phone numbers that are associated with your account. The returned columns are: Number ID, Phone Number, Friendly Name, SMS Enabled, MMS Enabled, Voice Enabled, Date Created, Date Updated.
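Under the hood, these data types come from Twilio's REST API, which treats your Account SID and Auth Token as HTTP Basic credentials. As a reference, a minimal request for the message logs looks roughly like this sketch:

import requests

ACCOUNT_SID = "ACxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"  # from twilio.com/console
AUTH_TOKEN = "your-auth-token"

# The Account SID and Auth Token act as the Basic auth username/password pair.
url = f"https://api.twilio.com/2010-04-01/Accounts/{ACCOUNT_SID}/Messages.json"
response = requests.get(url, auth=(ACCOUNT_SID, AUTH_TOKEN))
response.raise_for_status()

for message in response.json()["messages"]:
    print(message["to"], message["status"], message["date_sent"])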
The Send to Twilio step triggers dynamic SMS messages sent via Twilio using data transformed in your Parabola flow. You can use Parabola to dictate who should receive your SMS messages, what message they should receive, and trigger Twilio to send them.
The first thing you'll need to do to start using the Send to Twilio step is to authorize the step to send data to your Twilio account.
Double-click on the step and click on the blue button to Authorize. A window will appear where you'll need to provide the Account SID and Auth Token from your Twilio account.
To locate this information on your Twilio account, click on the blue link to Lookup Twilio Account Info. This will take you to https://www.twilio.com/console. You'll see your Account SID and Auth Token that you can copy and paste from your account to Parabola.
By default, this step will be configured to Send text messages to recipients when the flow runs. If for whatever reason you need to disable this temporarily, you can select to not send text messages when the flow runs.
Then, you'll select the following columns that contain the data for phone numbers you'd like to Send To, phone numbers you'd like to Send From, and text you'd like Twilio to send as Message Content.
Please make sure that the phone numbers you'd like to Send From are valid Twilio phone numbers that your Twilio account is authorized to send from. Verified Caller ID phone numbers cannot be used to send outbound SMS messages.
For Message Content, you have the option to use content from an existing column or a custom message. Select the Custom option from the dropdown if you'd like to type in a custom message. While the custom message is a great, easy option, it means that all of your recipients will receive the same message. If you'd like your messages to be personalized, you should create your dynamic messages in a column beforehand. The Insert column step can be particularly useful here for creating dynamic text content.
Each row will represent a single SMS. If your data contains 50 rows, that means 50 SMS messages will be sent.
The Pull from Typeform step enables you to connect to your Typeform account and pull response data from your Typeform forms into Parabola.
Double-click on the Pull from Typeform step and click the blue button to Authorize. A pop-up window will appear asking you to log in to your Typeform account and connect your data to Parabola.
If you ever need to change the Typeform account that your Parabola flow is connected to, click "Edit accounts" at the top of the step and select to either "Edit" or "Add new account." Both options will prompt the same Typeform login window to update or add a new account.
The first thing you'll be asked to do is select the relevant Typeform form you'd like to pull in. Click on the "Form" dropdown on the left-hand side and you'll see all of the forms you have created in Typeform.
By default, the checkbox below to Include metadata from responses will be unchecked. With this option unchecked, a column will be created for every survey question, and a row of answers will appear for every response you receive.
If you check the box to Include metadata from responses, Parabola will also pull in metadata about a client's HTTP request that Typeform collected along with their responses. The following columns will be pulled into Parabola in addition to the question columns:
- "landing_id"
- "token"
- "response_id"
- "landed_at"
- "submitted_at"
- "hidden"
- "calculated"
- "user_agent"
- "platform"
- "referrer"
- "network_id"
- "browser"
The UPS API is used by businesses and developers to integrate UPS’s shipping, tracking, and logistics services into their platforms and workflows.
UPS is a beta integration which requires a slightly more involved setup process than our native integrations. Following the guidance in this document should help even those without technical experience pull data from UPS. If you run into any questions, shoot our team an email at support@parabola.io.
📖 UPS API Reference:
https://developer.ups.com/catalog?loc=en_US
🔐 UPS Authentication Documentation:
https://developer.ups.com/api/reference?loc=en_US#tag/OAuth-Auth-Code
1. Navigate to the UPS Developer Portal.
2. Click Login to access your UPS account.
3. Click Create Application to make a new application and generate your credentials.
⚠️ Note: This application will be linked to the shipper account(s) and email address associated with your UPS.com ID.
4. Select your use case, shipper account, and accept the agreement.
5. Enter your contact information.
💡 Tip: Consider using a group inbox that is accessible to others on your development team. You are unable to change this email once the credentials are created, and losing access to it means losing access to your application.
6. Define your application details, including the name, associated billing account number, and custom products.
⚠️ Note: In the Callback URL field, add the following URL: https://parabola.io/api/steps/generic_api/callback
7. Once saved, your Client ID and Client Secret are generated.
💡 Tip: Click Add Products to enable additional products like the Tracking and Time in Transit APIs if they have not been added to your application.
8. Configure an OAuth 2.0 request to the OAuth Code endpoint in Parabola.
1. Add an Enrich tracking from UPS step template to your canvas.
2. Click into the Enrich with API: UPS Tracking step to configure your authentication.
3. Under the Authentication Type, select OAuth 2.0 before selecting Configure Auth.
4. Toggle on Switch to custom settings.
5. Enter your credentials to make a request to the OAuth Code endpoint using the format below:
Give your authorization account an identifiable name.
Authorization Request (GET)
Test URL
https://wwwcie.ups.com/security/v1/oauth/authorize
Production URL
https://onlinetools.ups.com/security/v1/oauth/authorize
URL Parameters
Token Request (POST)
Test URL
https://wwwcie.ups.com/security/v1/oauth/token
Production URL
https://onlinetools.ups.com/security/v1/oauth/token
Body Parameters
Request Headers
💡 Tip: You can configure an Authorization Header Value using a base-64 encoder. Encode your Client ID and Client Secret separated by a colon: Client ID:Client Secret.
In Parabola, use the Header Value field to type Basic followed by a space, then paste in your encoded credentials: Basic {encoded credentials here}.
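If you'd rather script the encoding than use a web-based encoder, the same header value can be produced like this (a sketch; substitute your real credentials):

import base64

client_id = "your-client-id"          # from your UPS application
client_secret = "your-client-secret"

# Base-64 encode "Client ID:Client Secret", then prefix with "Basic ".
encoded = base64.b64encode(f"{client_id}:{client_secret}".encode()).decode()
print(f"Basic {encoded}")  # paste this into the Header Value field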
Refresh Token Request (POST)
Test URL
https://wwwcie.ups.com/security/v1/oauth/refresh
Production URL
https://onlinetools.ups.com/security/v1/oauth/refresh
Body Parameters
Request Headers
6. Click Apply custom OAuth 2 settings and a new window will appear.
7. Enter your email address, authorize Parabola to access the data, and click Continue to complete the authorization process.
Get started with this template.
1. Add a Use sample data step to your Flow. You can also import a dataset with tracking numbers into your Flow (Pull from Excel File, Pull from Google Drive, Pull from API, etc.)
💡 Tip: When using your own data, use the Edit columns step to rename the tracking column in your source data to Tracking Number.
2. Connect it to an Enrich with API: UPS Tracking step.
3. Under Authentication Type, ensure OAuth 2.0 is selected to use your authentication credentials.
4. Click into the Request Settings to configure your request using the format below:
6. Click Refresh data to display the results.
The Visualize step is a destination step used to display data as charts, styled tables, or key metrics. These visualizations can optionally be shown on the Flow canvas or on the Flow dashboard.
When first added to your Flow and connected to a step, the Visualize step will expand. Data flowing into the Visualize step will be shown as a table on the canvas.
To customize this visualization and create new views, open the Visualize step by clicking "Edit this View."
Visualize steps can be configured with any number of views. Every view in a single Visualize step will use the same input data, but each view can be customized to display data in a different way.
The Visualize step is also used to sync views to your Flow dashboard tab. When the “Show on dashboard” step option is enabled, that visualization will also appear in your Flow dashboard.
Views in the Visualize step will be shown on your Flow dashboard by default. Uncheck the dashboard setting within the Visualize step to remove any views from the dashboard.
Visualize steps can be collapsed into normal-sized steps by clicking the collapse button, located in the top right of the expanded visualization. Similarly, collapsed Visualize steps can be expanded by clicking on the expand button under the step.
Expanded Visualize steps can be resized using the handle in the bottom right of the step.
Flow dashboards enable your team to easily view, share, and analyze the data that your Flows create. Use the Visualize step to create interactive reports that are shareable with your entire team. Visualizations can be powered by any step in your Flow or by Parabola Tables for historic reporting.
Check out this Parabola University video for a brief intro to tables.
The Visualize step is a tool for creating tables, charts, and metrics from the output of your Flows. These views of data can be arranged and shared directly in Parabola from the Flow dashboard page.
To create a Visualization, connect any step in your flow to a Visualize step:
Data connected to a Visualize step will be usable to create any number of views. Those views are automatically added to your Flow dashboard, where they can be arranged and customized.
Once you’ve added views to your Flow dashboard, you can:
Anyone with access to your Flow will be able to see the Flow dashboard:
To share a view, you can either share the entire dashboard with your teammate (see instructions here), or click “Share” from a specific table view. Sharing the view will give your teammate access to the Flow (and its dashboard), and link them directly to that specific view.
Any visualization can be exported as a CSV. Simply click on the "Export to CSV" button at the top right of your table or chart.
Views are individual visualizations, accessible from the Visualize step, or on the Flow dashboard. The data connected to a Visualize step acts as a base dataset, which you can customize using views. Views can be visualized as tables, featured metrics, charts, and graphs.
Ready for a deeper dive? This Parabola University video will walk you through some of the configurations available to fine-tune how you see your data.
Arrange data views on the page with either a tab or tile layout.
Tabs will appear like traditional spreadsheet tabs, which you can navigate through. Drag to rearrange their order.
Tiles enable you to see all views simultaneously. You can completely customize the page by changing view height and width, and drag-and-drop to rearrange.
From the “Table/chart options” menu, you can select from several types of visualizations.
By default, visualizations display as tables. This format works well to show rows of data that are styled, calculated, grouped, sorted, or filtered.
In the below image, the table options menu is at the top left, below the "All Inventory" tab. This is where you can access options to format and style columns, or to add aggregation calculations.
Featured metrics allow you to display specific column calculations from the underlying table.
Metrics can be renamed, given a color theme, and formatted (date, number, percent, currency, or accounting). The metrics options menu is in the same placement as above, represented with a '#' symbol.
Parabola supports several chart types:
Within the chart options menu, represented below as a mini bar graph, you can customize chart labels, color themes, gridlines, and legend placement.
Charts have a single value plotted on the horizontal X axis, along the bottom of the chart. Date or category values are commonly used for the X axis.
Use the grouping option on the X axis control to aggregate values plotted in the chart. For example, if you have a week's worth of transactions, and you want to see the total number of transactions per day, you would set your X axis to the day of the week, and group your data to find the sum. Ungrouped values will be plotted exactly as they appear in your dataset.
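Conceptually, grouping the X axis is a group-and-aggregate over your rows. A minimal illustration in Python (pandas is used here purely for demonstration; Parabola does this for you):

import pandas as pd

# A hypothetical week of transactions.
df = pd.DataFrame({
    "day": ["Mon", "Mon", "Tue", "Tue", "Tue", "Wed"],
    "transactions": [3, 2, 4, 1, 5, 6],
})

# Grouping by day with a "sum" aggregation plots one total per day.
print(df.groupby("day", sort=False)["transactions"].sum())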
Use the X axis options dropdown within the chart options menu to further fine-tune your formatting.
Charts can have up to two Y axes, on the left, right, or both. Additionally, each Y axis can key to any number of data values, called series.
Adding multiple series will show multiple bars, lines, or dots, depending on which chart you are using. The above image shows a chart using one Y axis, but several series with stacking enabled under the "Categories / stacking" dropdown.
When you add a second Y axis, it will add a scale to the right side of the graph. Any series that are plotted in the second Y axis will adhere to that scale, whereas any series on the first Y axis will adhere to the first scale. Your charts are limited to two scales, but each series can be aggregated individually, so you can compare the mean of one data point with the sum of another, and the median of a third.
Imagine using multiple Y axes to plot two sets of data that are related, but exist on different numerical scales, such as total revenue in one axis, and website conversion rate in another axis.
Many charts and graphs have category and stacking options. Depending on your previous selections with the X and Y axes, and the chart type, some options will be available in this menu.
View controls can be selected from the icons in the control bar on any view.
You can perform the following calculations on a column:
Only one metric can be calculated per column.
Tables can be grouped up to 6 times. (After 6 groups, the '+ Add grouping' option will be disabled.) Groups are applied in a nested order, starting at the first group, and creating subgroups with each subsequent rule.
Use the sort options within the group rules to determine what order the groups are shown in. Normal sort rules will be used to sort the rows within the groups.
Click the “Sort” button (or use the view options menu) to quickly add a new sort rule. These sorts define how rows are arranged in the view.
Click the “Filter” button (or use the view options menu) to quickly add a new filter rule. These filters define which rows are kept in the view.
Filters work with dates – select the “Filter dates to…” option, and utilize either relative ranges (e.g. “Last 7 days”) or specify exact ones.
Columns, metrics, and axes can be formatted to change how their data is displayed and interpreted. Click the left-most of your configuration buttons, the "Table/Chart Options" button, to apply formatting to any column, metric, or axis. You can select auto-format, or choose from a list of categories and formats within those categories.
In charts, the X-axis will be auto-formatted, and you can change the format as needed. All series in each Y-axis will share the same format. Axis formatting can be adjusted by clicking the gear icon next to the axis name.
Formats will be used to adjust how data is displayed in the columns of a table, in the aggregations applied to groups and in the grand total row, and to featured metrics. When grouping a formatted column, the underlying, unformatted value will be used to determine which row goes in which group.
When working with dates, the format is autodetected by default. If your date is not successfully detected, click the 3 dots next to the output format field and enter a custom starting format.
Valid options are:
If the output format uses a token that is not found in the input, e.g. converting MM-DD to MM-DD-YYYY, then certain values will be assumed:
Dates that do not adhere to the starting format will remain unformatted in your table.
Use the "Table/Chart Options" to hide specific columns from your table view.
Columns can be used for sorting, grouping, and filtering even when hidden. Those settings are applied before the columns are hidden for even more control over your final Table.
Hidden columns will not show up in search results, unless the option for “Display all columns” is enabled.
Hidden columns can be filtered by quick filters.
Hidden columns will be present in CSV exports downloaded from the view.
Use the "Table/Chart Options" to freeze the first (left-most) column or the first row by using the checkboxes at the top. A frozen column or row will “stick,” and other columns and rows will scroll behind them.
Click "Quick Filter" in the top right corner of the dashboard to toggle the filter bar pictured below. Using "Add quick filter" or "Add date filter," you can filter data in specific columns across every view on the page. These filters are only applied for you, and will not affect how other users see this Flow. Refreshing the page will reset all quick filters.
After 8 seconds, the combination of quick filters will be saved in the “Recents” drawer on the right side of the filter bar. Your recent filters are only visible to you, and can be reapplied with a click.
Quick filters can only be used if you have at least one table on your Flow. Above the first table on your published Flow page, click to add a filter. The filter bar will then follow you as you scroll.
Multiple quick filters are combined using a logical “and” statement. These filters are applied in conjunction with any filters set on individual views.
Use the clear filters icon to remove all currently applied filters.
From the Table Options menu, use the “add color rule” button to apply formatting to the columns of your Table view.
There are 3 types of formatting that can be added: a set color, a conditional color rule, and a color scale.
(The same menu can be used to remove any existing colors applied to a column.)
Set color: Applies a chosen color to a column entirely. All cells will have a color applied.
Conditional color: Uses a conditional rule to color specific cells. The following operators are supported:
Color scale: Applies a 2-color or 3-color scale to every cell in the column. All cells will have a color applied.
When using two colors, by default the first color will be applied to the minimum value and the second color will be applied to the maximum value. When using three colors, by default, the middle color will be applied to the value 50% between the smallest and largest value in the column.
Cells with values between the minimum, maximum, and middle value (if using 3 colors) will blend the colors they are between, creating a smooth gradient.
When setting a custom value for the maximum or minimum on a color scale, any value in the table that is larger than the maximum or smaller than the minimum will have the maximum color or minimum color applied, respectively.
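The blending is a straightforward linear interpolation between breakpoints. The sketch below illustrates how a two-color scale might map a value to a color; it is illustrative only, not Parabola's exact implementation.

def blend(color_min, color_max, value, vmin, vmax):
    """Linearly interpolate between two RGB colors for a given value."""
    t = (value - vmin) / (vmax - vmin)
    t = max(0.0, min(1.0, t))  # out-of-range values clamp to the end colors
    return tuple(round(a + (b - a) * t) for a, b in zip(color_min, color_max))

# A value halfway between the breakpoints gets an even mix of the two colors.
print(blend((255, 255, 255), (255, 0, 0), 50, 0, 100))  # (255, 128, 128)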
Click the ellipsis menu next to the format dropdown to access controls to adjust how the scale is applied.
Switch each breakpoint to use a number, percent, or the default min/max value.
Scales can be applied to columns containing dates, numbers, currency, etc.
Multiple rules can be applied to the same column. They will be evaluated top down, starting with the first rule. Any cells that are not colored as a result of that rule move on to the next rule, until all rules have been evaluated, or all cells have been assigned a color. A cell will show the color of the first rule that evaluates to true for the value in that cell.
After a set color or color scale is applied, no further rules will be evaluated, as all cells will have an assigned color after those rules.
Existing table views may have columns with column emphasis applied. Those columns will be migrated automatically to use a set color formatting rule.
The Walmart API is used to programmatically interact with Walmart's platform and provides access to various Walmart services, including order management, inventory and stock levels, product data, and customer insights.
Walmart is a beta integration which requires a slightly more involved setup process than our native integrations. Following the guidance in this document should help even those without technical experience pull data from Walmart. If you run into any questions, shoot our team an email at support@parabola.io.
📖 Walmart API Reference:
https://developer.walmart.com/home/us-mp/
🔐 Walmart Authentication Documentation:
https://developer.walmart.com/doc/us/us-mp/us-mp-auth/
1. Navigate to the Walmart Developer Portal.
2. Click My Account to log into your Marketplace.
3. Click Add New Key For A Solution Provider to set permissions for the provider to generate a Client ID and Client Secret.
💡 Tip: Use Production Keys to connect to live production data in Parabola. Use Sandbox Keys to review the request and response formats using mock data.
4. Select the Solution Provider from the drop-down list.
⚠️ Note: If your Solution Provider is not listed, contact Walmart. You need to have a contract with Walmart before you can delegate access to a Solution Provider.
5. Specify permissions, or click Submit to accept the defaults.
6. Configure an Expiring Access Token request to the Token API in Parabola.
1. Add a Pull orders from Walmart step template to your canvas.
2. Click into any of the Enrich with API steps to configure your authentication.
3. Under the Authentication Type, select Expiring Access Token before selecting Configure Auth.
4. Enter your credentials to make a request to the Token API using the format below:
Sandbox URL
https://sandbox.walmartapis.com/v3/token
Production URL
https://marketplace.walmartapis.com/v3/token
💡 Tip: You can configure an Authorization Header Value using a base-64 encoder. Encode your Client ID and Client Secret separated by a colon: Client ID:Client Secret.
In Parabola, use the Header Value field to type Basic followed by a space, then paste in your encoded credentials: Basic {encoded credentials here}.
💡 Tip: You can generate a WM_QOS.CORRELATION_ID Header Value using a GUID generator. Click Generate some GUIDS and copy the result to your clipboard.
In Parabola, paste the results in the WM_QOS.CORRELATION_ID Header Value.
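Putting those pieces together, the token request looks roughly like the sketch below. The WM_SVC.NAME header and the grant_type body field are assumptions drawn from Walmart's authentication docs, so verify them against the reference linked above.

import base64
import uuid

import requests

client_id = "your-client-id"
client_secret = "your-client-secret"

credentials = base64.b64encode(f"{client_id}:{client_secret}".encode()).decode()
headers = {
    "Authorization": f"Basic {credentials}",
    "WM_QOS.CORRELATION_ID": str(uuid.uuid4()),  # any freshly generated GUID
    "WM_SVC.NAME": "Walmart Marketplace",  # assumed; see Walmart's auth docs
    "Accept": "application/json",
}

# Assumed body per Walmart's documented token flow.
response = requests.post(
    "https://marketplace.walmartapis.com/v3/token",
    headers=headers,
    data={"grant_type": "client_credentials"},
)
response.raise_for_status()
print(response.json()["access_token"])  # expires, so Parabola refreshes it for you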
Parabola will read the token from the access_token field of the response.
5. Click Advanced Options and add a Request Header with the key WM_SEC.ACCESS_TOKEN and the value {token}.
6. Click Authorize.
7. Click into the other Enrich with API steps and select the Expiring Access Token as your Authentication Type to apply the same credentials.
Get started with this template.
1. Add a Start with date & time step to the canvas to define the earliest order date.
2. Connect it to a Format dates step to format the Current DateTime into yyyy-MM-dd.
3. Connect it to the Enrich with API step.
4. Under Authentication Type, ensure Expiring Access Token is selected to use your authentication credentials.
5. Click into the Request Settings to configure your request using the format below:
6. Click Refresh data to display the results.
⚠️ Note: Parabola cannot support the API’s cursor-style pagination at this time. We can import up to 200 records at a time. Configuring a smaller, dynamic date range with frequent Flow runs is highly recommended.
Webflow is currently only accessible via Parabola using an API step. Access the Webflow API docs here: https://developers.webflow.com/data/reference/rest-introduction
All API requests require authentication to access your Webflow data. The easiest way to connect Parabola to Webflow is through an authorization token.
To create and manage site tokens, see Webflow’s documentation.
Once you have a token, set your API step to use a “Bearer token”, and paste your Webflow site token into the bearer token field.
The most common data to pull from Webflow is a list of items in a specific collection. To do this with an API step, you will need to use the List Collection Items bulk API - docs here.
Using the API step in Parabola, configure a GET request to this endpoint:
https://api.webflow.com/v2/collections/:collection_id/items
Replace the :collection_id section of the URL with a collection ID from your Webflow site. Collection IDs can be found in the Webflow Designer, at the top of the settings panel for that specific collection:
Webflow APIs use Offset & Limit pagination - set both the offset and the limit to 100, and set the pages to fetch (each page will be 100 items) to an appropriate number.
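For reference, the sketch below shows what that offset & limit loop amounts to if you were to script it yourself. The items field in the response is an assumption based on Webflow's v2 API shape.

import requests

TOKEN = "your-webflow-site-token"
COLLECTION_ID = "your-collection-id"  # from the Designer settings panel

url = f"https://api.webflow.com/v2/collections/{COLLECTION_ID}/items"
headers = {"Authorization": f"Bearer {TOKEN}"}

items, offset, limit = [], 0, 100
while True:
    resp = requests.get(url, headers=headers, params={"offset": offset, "limit": limit})
    resp.raise_for_status()
    page = resp.json().get("items", [])  # "items" key assumed from Webflow's docs
    items.extend(page)
    if len(page) < limit:  # a short page means the last page was reached
        break
    offset += limit  # advance one page, mirroring the step's pagination settings

print(f"Pulled {len(items)} collection items")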
Use this snippet (copy it and paste it anywhere in any Flow) to see a step that is mostly set up to pull collection items: parabola:cb:be322aeb-6ef6-4eed-9153-aec3d82cb336
The API step can be used to keep specific columns and rename them.
The Pull from webhook step receives data that was sent to Parabola via an external service's webhook feature. It is a source step that brings in data triggered by an event on the external service, such as a customer purchasing an item on a platform like Shopify.
This step is currently offered to users on our Advanced Plan. Check out the Pricing Page for additional information.
First, set up an example flow with one import step (Pull from webhook) and one destination step of your choosing (for example, Send to Parabola Table). Once those steps are connected and configured, publish and run the Flow (see the button in the top right corner of a Flow canvas).
Once this Flow has been run with the Pull from webhook step, open the Schedules / Triggers pane from the published Flow screen: you’ll see a webhook trigger.
Click the pencil icon to copy, configure, and see the history of this webhook trigger.
Highlight and copy the webhook link to give to your external service in their webhooks section. (Be sure not to return to Draft mode yet; if you have, refrain from publishing that Draft and return to the published Flow view.)
After you've copied the Webhook link and entered it into your external tool's webhooks area, do a test initiation event to trigger this webhook (or wait for one to happen naturally, like a customer purchasing an item). Then, return to your Flow — it should have run automatically from this external event. Start a new Draft to open up the Flow builder again.
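If your external service doesn't offer a convenient test event, you can simulate one by POSTing sample JSON to the copied webhook URL yourself. A minimal sketch, where the URL format and payload are placeholders:

import requests

# Paste the webhook URL copied from the Schedules / Triggers pane.
webhook_url = "https://parabola.io/api/your-webhook-url"  # hypothetical placeholder

# Any JSON body works for a test; shape it like your service's real events.
sample_event = {"event": "order.created", "order_id": 12345}

response = requests.post(webhook_url, json=sample_event)
print(response.status_code)  # a 2xx status means the Flow was triggered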
The Flow will now have the test webhook data pulled into it. Double-click on the Pull from webhook step to view it. This way, you'll get an idea of what the service's hook data looks like when it's received, and you can build out a Flow that handles it the way you'd like. Please note that you must wait for a webhook to run at least once in order to go back to the Flow's editor mode and see displayed hook data; otherwise, the step's display will be blank.