
Redshift: Expected Data Loading Behavior

Important!
The info in this doc only applies to Redshift data warehouses.

Because data can come from a variety of integrations, each of which may structure or handle data differently, Stitch will encounter numerous scenarios when replicating and loading your data. In this doc, you'll find these scenarios broken out by type:

New Table Scenarios

Single Primary Key
Scenario: First batch of data arrives with a single Primary Key.
Expected behavior: Table is created without a Primary Key and without NOT NULL columns. Primary Key info is stored as a comment on the table.

Multiple Primary Keys
Scenario: First batch of data arrives with multiple Primary Keys.
Expected behavior: Table is created without Primary Keys and without NOT NULL columns. Primary Key info is stored as a comment on the table.

No Primary Keys
Scenario: First batch of data arrives without Primary Keys.
Expected behavior: Table is created without a Primary Key and without NOT NULL columns.

Replication Key contains NULLs
Scenario: First batch of data arrives where the Replication Key column contains NULLs.
Expected behavior: Table is created and rows with NULL Replication Keys are loaded. These rows will not be replicated after the initial sync.

Some empty columns
Scenario: First batch of data arrives with some columns that are completely empty.
Expected behavior: Only columns that are populated in at least one record are created.

ALL empty columns
Scenario: First batch of data arrives with columns that are all NULL.
Expected behavior: Table is created and contains only the Stitch replication (_sdc) columns.

Table name is too long
Scenario: Data arrives with a table name that exceeds Redshift's 127-character limit.
Expected behavior: Redshift will reject all data for the table.

Column name is too long
Scenario: Data arrives with a column name that exceeds 115 characters.
Expected behavior: Columns with names longer than 115 characters will be rejected; columns with shorter names will persist to Redshift. The 115-character limit leaves room for the suffixes appended during column splitting.

Too many columns
Scenario: A table arrives with more columns than Redshift's limit of 1,600.
Expected behavior: Redshift will reject all data for the table and surface a "too many columns" error.

Duplicate fields
Scenario: Data arrives with two column names that canonicalize to the same name.
Expected behavior: Redshift will reject the entire record containing the two columns. Note that Redshift is case-insensitive.

Negative/positive infinity, NaN
Scenario: Data arrives with these special FLOAT/DOUBLE values.
Expected behavior: The values will be converted into NULLs.

Mixed case column name
Scenario: Data arrives with columns named LiKeThIs.
Expected behavior: Column names will be converted to lowercase.

Column name with spaces
Scenario: Data arrives with column names containing spaces.
Expected behavior: Spaces are maintained.

Unsupported special characters in column name
Scenario: Data arrives with column names containing special characters that Redshift doesn't support.
Expected behavior: The special characters are removed.
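Several of the rejection rules above hinge on how column names are canonicalized. As an illustration only (Stitch's exact canonicalization logic isn't published in this doc), a sketch of the lowercasing, special-character removal, 115-character cap, and duplicate detection described above:

```python
import re

MAX_COL_NAME = 115  # leaves room for data-type suffixes added by column splitting

def canonicalize(name):
    """Approximate the canonicalization described above: lowercase the
    name and strip characters Redshift doesn't support in identifiers."""
    name = name.lower()                      # LiKeThIs -> likethis
    name = re.sub(r"[^a-z0-9_ ]", "", name)  # drop special chars; spaces are kept
    return name

def check_columns(names):
    """Flag columns that would be rejected (name too long) and pairs that
    collide after canonicalization (which reject the whole record)."""
    seen, ok, rejected, duplicates = {}, [], [], []
    for n in names:
        c = canonicalize(n)
        if len(c) > MAX_COL_NAME:
            rejected.append(n)
        elif c in seen:
            duplicates.append((seen[c], n))
        else:
            seen[c] = n
            ok.append(c)
    return ok, rejected, duplicates
```

For example, `UserID` and `userid` canonicalize to the same name and would be reported as a duplicate pair.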


Data Loading Scenarios

Data for an existing table version
Scenario: Data arrives that updates existing records in the data warehouse (matched on Primary Key).
Expected behavior: Also known as an upsert: the original row is deleted and a new row is created with the updated data.

Data with a new table version
Scenario: A full set of data arrives for a table set to Full Table Replication.
Expected behavior: The table is truncated and the new data is loaded in a single atomic action.

Replication Key contains NULLs (initial batch)
Scenario: Initial batch of data arrives where the Replication Key column contains NULLs.
Expected behavior: The table is created and all rows are loaded.

Replication Key contains NULLs (subsequent batch)
Scenario: A subsequent batch of data arrives where the Replication Key column contains NULLs.
Expected behavior: Stitch will not replicate rows where the Replication Key is NULL. Rows with values in the Replication Key column will persist to Redshift.

Timestamps out of range
Scenario: Data arrives with a date outside the range Redshift can handle (4713 BC to 294276 AD).
Expected behavior: Redshift will reject records that fall outside the supported range.

VARCHAR max width
Scenario: Data arrives in a text column that exceeds Redshift's maximum VARCHAR width of 65,535 bytes.
Expected behavior: Data will be truncated to the maximum width.

Decimal range
Scenario: Data arrives in a decimal column that exceeds Redshift's limit of 38 digits of precision.
Expected behavior: Redshift will reject the entire record.

Integer range
Scenario: Data arrives in an integer column that exceeds Redshift's size limit.
Expected behavior: Redshift will reject records that exceed the limit.
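Two of the behaviors above can be sketched together: skipping rows with NULL Replication Keys after the initial sync, and truncating over-wide strings to Redshift's 65,535-byte VARCHAR maximum. The `prepare_batch` helper is hypothetical, not Stitch's code:

```python
REDSHIFT_VARCHAR_MAX = 65535  # bytes, Redshift's maximum VARCHAR width

def prepare_batch(rows, replication_key, initial_sync=False):
    """Drop rows whose Replication Key is NULL (except on the initial
    sync) and truncate string values that exceed the VARCHAR maximum."""
    out = []
    for row in rows:
        if not initial_sync and row.get(replication_key) is None:
            continue  # NULL Replication Keys are not replicated after the first batch
        fixed = {}
        for col, val in row.items():
            if isinstance(val, str):
                # truncate by bytes; "ignore" drops a multi-byte char cut in half
                val = val.encode("utf-8")[:REDSHIFT_VARCHAR_MAX].decode("utf-8", "ignore")
            fixed[col] = val
        out.append(fixed)
    return out
```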


Data Typing Scenarios

Split data types (same batch)
Scenario: Initial batch of data contains mixed data types within a single column; for example, a column contains both string and INTEGER data.
Expected behavior: The column will be "split" and the data type will be appended to the column name. For example: age_fl, age_bi

Split data types (different batch)
Scenario: Initial batch of data contains single-data-type columns, but a subsequent batch contains mixed data types within a single column; for example, a column contains both string and INTEGER data.
Expected behavior: The column will be "split" and the data type will be appended to the column name. For example: age_fl, age_bi

VARCHARs (same batch)
Scenario: Initial data for a VARCHAR column arrives with mixed sizes; some records contain more than 128 characters.
Expected behavior: The column is created at a width large enough to contain the largest value.

VARCHARs (different batch)
Scenario: Data for an existing VARCHAR column arrives, and some records exceed the maximum size of the existing column.
Expected behavior: A new column is created at a size that accommodates the largest value. Data is copied into the new column, the old column is dropped, and the new column is renamed to replace it.

Dates with timezones
Scenario: Timestamp data arrives with timezone information.
Expected behavior: The data type is changed to TIMESTAMP WITHOUT TIME ZONE and the values are adjusted to UTC.

Decimals
Scenario: Data arrives in decimal format with an arbitrary amount of precision; Redshift limits decimals to 6 digits after the decimal point.
Expected behavior: Redshift will truncate the data to 6 places after the decimal point.
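Column splitting can be sketched as routing each value to a type-suffixed column. Only the `_fl` and `_bi` suffixes appear in the doc's example; the rest of the suffix map below is an assumption for illustration, not Stitch's full mapping:

```python
def split_column(name, values):
    """Split a mixed-type column into one column per observed type,
    appending a type suffix as in the doc's example (age_fl, age_bi).
    Suffixes other than _fl and _bi are illustrative assumptions."""
    suffix = {int: "_bi", float: "_fl", str: "_st", bool: "_bo"}
    columns = {}
    for v in values:
        if v is None:
            continue  # NULLs don't force a split
        col = name + suffix[type(v)]
        columns.setdefault(col, []).append(v)
    return columns
```

For example, a column named `age` holding both integers and floats would load as two columns, `age_bi` and `age_fl`.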


Schema Changing Scenarios

New column
Scenario: A new column is added to an existing table.
Expected behavior: The new column is appended to the end of the table.

Column removed
Scenario: A column is no longer present in the rows of data arriving for an existing table.
Expected behavior: Default NULL values are placed into the column.

Column added to nested table
Scenario: Data arrives with a nested table containing a field that did not previously exist in that nested table.
Expected behavior: The new field is added to the nested table.

Nested Data Scenarios

For a detailed explanation of how nested data structures are handled, refer to the Handling of Nested Data Structures doc.

Nested JSON objects
Scenario: Data arrives that is nested, with many top-level properties.
Expected behavior: Keys in the object are de-nested and become columns in the table, named following this convention: top_level_key__secondary__third__etc

Nested JSON top-level arrays
Scenario: Data arrives that is nested, and among the nested data are properties that contain arrays.
Expected behavior: A subtable is created. Subtables follow this naming convention: top_level_table__array_name

Nested JSON sub-level arrays
Scenario: Data arrives that is nested, and among the nested data are properties that contain arrays, which themselves contain properties that are arrays.
Expected behavior: See above.

Nested JSON new columns in arrays
Scenario: The sub-level arrays in nested JSON data have a varying set of columns.
Expected behavior: See above.
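The de-nesting convention above can be sketched as a recursive walk over a record. This hypothetical `denest` helper mirrors the naming rules (double underscores joining nested keys, arrays carved off into subtables); it is not Stitch's implementation:

```python
def denest(record, parent=""):
    """Flatten nested objects into columns named
    top_level_key__secondary__third__etc; arrays become subtables."""
    columns, subtables = {}, {}
    for key, value in record.items():
        name = f"{parent}__{key}" if parent else key
        if isinstance(value, dict):
            # nested object: its keys become more columns on the same table
            sub_cols, sub_tables = denest(value, name)
            columns.update(sub_cols)
            subtables.update(sub_tables)
        elif isinstance(value, list):
            # array: loaded as a subtable named parent__array_name
            subtables[name] = value
        else:
            columns[name] = value
    return columns, subtables
```

For example, a record with an `address` object containing a `city` string and a `tags` array would yield an `address__city` column plus an `address__tags` subtable.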


Direct Data Warehouse Changes

You add columns
Scenario: You add columns to a Stitch-generated table in Redshift.
Expected behavior: No issue, as long as the new columns are nullable.

You apply indexes
Scenario: You apply indexes to columns created by Stitch.
Expected behavior: No issue; Stitch will not remove them.

You apply Sort/Distribution Keys
Scenario: You apply Sort or Distribution Keys to columns created by Stitch.
Expected behavior: Stitch operations are unaffected as long as the Primary Key info is maintained via the table comment.

You remove Stitch (_sdc) columns from a table
Scenario: You remove one or more of the _sdc columns created by Stitch.
Expected behavior: Inserting and updating data will break, as Stitch requires the _sdc_table_version and _sdc_sequence columns.

You remove a data column from the data source
Scenario: You remove one or more of the data columns from the data source.
Expected behavior: Default NULL values are placed into the column.

You remove a data column from the data warehouse
Scenario: You remove one or more of the data columns created by Stitch from the data warehouse. Note: it may be possible to remove columns from database integrations only, and removing a column with a data type suffix (ex: __boolean) will break the table.
Expected behavior: If new data is detected for the deleted column, Stitch will recreate it in your data warehouse.

You revoke permissions to schemas and/or tables
Scenario: You remove Stitch's access to create schemas/tables or to write to Stitch-created tables.
Expected behavior: Data replication will stop until sufficient permissions are reinstated.

You revoke data warehouse permissions
Scenario: You remove access to the data warehouse as a whole.
Expected behavior: Data will fail to load and you will be notified.
