Snowflake Data Warehouse
JVM AGENT
Overview
The SQL connector enables Seeq to access data from Snowflake.
Prerequisites
You will need some information to configure a connection to your Snowflake Data Warehouse.
Account
Your account is the subdomain of your base URL. For example, if you access https://my-company.snowflakecomputing.com/, your account would be my-company.
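As a quick sanity check, the account identifier can be derived mechanically from the base URL. A minimal Python sketch (illustrative only; not part of the connector):

```python
from urllib.parse import urlparse

def account_from_url(base_url: str) -> str:
    """Extract the Snowflake account identifier (the subdomain) from a base URL."""
    host = urlparse(base_url).hostname  # e.g. "my-company.snowflakecomputing.com"
    suffix = ".snowflakecomputing.com"
    if not host or not host.endswith(suffix):
        raise ValueError(f"Not a snowflakecomputing.com URL: {base_url}")
    return host[: -len(suffix)]

print(account_from_url("https://my-company.snowflakecomputing.com/"))  # my-company
```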
Warehouse
Seeq will need an existing warehouse to perform any queries.
User Account
Seeq needs an account to make queries. This account should have its TIMEZONE parameter set to UTC to avoid having to set this parameter per session. This can be achieved with the following command:
ALTER USER SET TIMEZONE = 'UTC';
Further information can be found here: https://docs.snowflake.com/en/sql-reference/parameters#timezone.
Configuration
This example configuration template is displayed in the Additional Configuration box that appears when you click Configure for an existing datasource on the Datasources administration page. For a new datasource, it appears in the Create new datasource connection modal after you click Add Datasource and choose the Snowflake template.
{
"QueryDefinitions": null,
"Type": "SNOWFLAKE",
"Hostname": null,
"Port": 0,
"DatabaseName": null,
"Username": "",
"Password": null,
"InitialSql": null,
"TimeZone": null,
"PrintRows": false,
"UseWindowsAuth": false,
"SqlFetchBatchSize": 100000,
"Account": "",
"Warehouse": "",
"ExpandQueryRange": true,
"AlterSnowflakeSessionTimezone": true
}
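Before saving the configuration, it can help to sanity-check the JSON. A hedged Python sketch (the required-field list is an assumption based on the template above, not an official validation rule):

```python
import json

# Assumption: these fields must be non-empty for a Snowflake connection to work.
REQUIRED_NON_EMPTY = ["Account", "Warehouse", "Username"]

def check_config(raw: str) -> list:
    """Return a list of problems found in a Snowflake datasource config blob."""
    cfg = json.loads(raw)
    problems = [f"'{key}' must be non-empty" for key in REQUIRED_NON_EMPTY if not cfg.get(key)]
    if cfg.get("Type") != "SNOWFLAKE":
        problems.append("'Type' should be 'SNOWFLAKE'")
    return problems

# The unmodified template fails the check until Account/Warehouse/Username are filled in.
template = '{"Type": "SNOWFLAKE", "Account": "", "Warehouse": "", "Username": ""}'
print(check_config(template))
```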
Standard SQL Additional Configuration
Property Name | Default Value | Data Type | Description |
---|---|---|---|
| QueryDefinitions | null | Array[QueryDefinition] | The definition for how Seeq should query for data. |
| Hostname | null | String | The hostname of your datasource. |
|  | 1,000,000 | Integer | The maximum number of signals that can be indexed from a single query definition. This value is here to protect against incorrect query definitions producing many millions of invalid signals. |
| Port | 0 | Integer | The port for the JDBC connection. |
| DatabaseName | null | String | Optional: Can be defined here or as part of a fully qualified table name in the QueryDefinition. |
| Username | "" | String | The user name. |
| Password | null | String/SecretFile | The user password. |
| ConnectionString | null | String | Optional: Can be specified if you have a known, functioning JDBC connection string. |
| InitialSql | null | String | Optional: A SQL command to run once upon establishing a connection. |
| TimeZone | null | String | Optional: The time zone to use for timestamp or datetime columns. For example, to set this to US Pacific Time, you would use "America/Los_Angeles". |
| PrintRows | false | Boolean | If true, the rows from the SQL query will be printed to the jvm-link log. This is for debugging purposes only and should be set to false otherwise. |
| UseWindowsAuth | false | Boolean | Note: This is not available for all database types. If you are using a database type that supports Windows Authentication, you will need to ensure that the remote agent is running as the correct user. |
| ConnectionProperties | null | Map<String, String> | A map of key-value pairs that are configured as connection properties on the JDBC connection. |
Time Zone
Some SQL date and/or time column types have no zone information. The TimeZone field is available to specify the time zone that Seeq should use for data coming from column types that have no time zone information of their own. UTC offsets (+01:00, -10:30, etc.) and IANA regions (America/Los_Angeles) are accepted. If no time zone is specified, Seeq defaults to the local region of the Seeq server. If your data was stored in UTC time, set this field to "UTC" or "+00:00". If your data was entered using a "wall clock", set this to the IANA time region of the "wall clock". Note that offsets are constant throughout the year whereas a region may observe daylight savings time. If you used a wall clock in a location that observes daylight savings time, a region is a better choice than an offset for this field. A list of IANA regions (tz database time zones) can be found here.
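The difference between a fixed offset and an IANA region can be seen directly: America/Los_Angeles shifts between -08:00 and -07:00 across the daylight savings transition, while a fixed offset never changes. A small Python illustration using the standard zoneinfo module:

```python
from datetime import datetime, timezone, timedelta
from zoneinfo import ZoneInfo

la = ZoneInfo("America/Los_Angeles")   # IANA region: tracks daylight savings
fixed = timezone(timedelta(hours=-8))  # constant UTC offset: never changes

winter = datetime(2022, 1, 15, 12, 0)
summer = datetime(2022, 7, 15, 12, 0)

print(winter.replace(tzinfo=la).utcoffset())     # -1 day, 16:00:00 (i.e. -08:00)
print(summer.replace(tzinfo=la).utcoffset())     # -1 day, 17:00:00 (i.e. -07:00)
print(summer.replace(tzinfo=fixed).utcoffset())  # -1 day, 16:00:00 (always -08:00)
```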
Snowflake-Specific SQL Additional Configuration
Property Name | Default Value | Data Type | Description |
---|---|---|---|
| Account | "" | String | The account name to be used for the connection. |
| Warehouse | "" | String | The warehouse to use for the query. |
| ExpandQueryRange | true | Boolean | If true, the query range will be expanded to include the maximum interpolation range for a signal. This avoids costly LIMIT 1 queries. Only set to false if you have a large maximum interpolation range and high-frequency data. |
| AlterSnowflakeSessionTimezone | false | Boolean | If true, the connector will attempt to alter the TIMEZONE session parameter to UTC. Ideally, this parameter should be set as described in the Prerequisites section above. |
|  | false | Boolean | If true, turns off connection monitoring, which issues a request once every 5 seconds. This request does not use warehouse time, but does count against your cloud services credits. |
Known Issues
Using the :: cast operator is not supported; an explicit cast is needed instead. For example, instead of this:
SELECT '2022-04-01'::DATE;
You will need to use this:
SELECT CAST('2022-04-01' AS DATE);
--OR:
SELECT TO_DATE('2022-04-01');
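Existing queries that use the shorthand can be rewritten mechanically. The helper below is a hypothetical sketch: it only handles a quoted literal or bare identifier followed by ::TYPE, and anything more complex should be rewritten by hand.

```python
import re

def rewrite_casts(sql: str) -> str:
    """Rewrite simple value::TYPE casts as CAST(value AS TYPE).

    Only matches a single-quoted literal or a bare (possibly dotted)
    identifier directly followed by ::TYPE; nested expressions are
    left untouched.
    """
    pattern = r"('[^']*'|\b[\w.]+)::(\w+)"
    return re.sub(pattern, r"CAST(\1 AS \2)", sql)

print(rewrite_casts("SELECT '2022-04-01'::DATE;"))
# SELECT CAST('2022-04-01' AS DATE);
```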
Username and Password is the only currently supported authentication mechanism. Other mechanisms are on the roadmap.
Troubleshooting
A couple of errors that you may encounter are:
String-valued samples are prohibited in numeric-valued signal
If the y-axis value of the signal is a string, then the Value Unit Of Measure property is required and must be set to "string". See Example 2.
Since the Value Unit Of Measure is different for string and numeric signals, it may be easiest to write one query definition for the numeric signals and write another for the string signals. Alternatively, the Value Unit Of Measure property could be set according to an SQL IF statement similar to the technique used in Example 13.
Samples must be ordered by their keys
If this is occurring when trending near the daylight savings transition, this is an indication that the TimeZone is not configured properly. For example, if TimeZone is set to "America/Los_Angeles", this means that the timestamp data in the SQL table was recorded using "America/Los_Angeles" time (Pacific) which observes daylight savings. During the spring daylight savings transition, time skips from 01:59:59.9 to 03:00:00.0 which means that the 02:00 hour doesn't exist and therefore there should be no data in the SQL table during that 02:00 hour. Any data in the 02:00 hour is interpreted as being in the 03:00 hour. If there is also data in the 03:00 hour, the samples will be out of order.
Original data: 01:15, 01:45, 02:15, 02:45, 03:15
After accounting for the non-existent 02:00 hour: 01:15, 01:45, 03:15, 03:45, 03:15
If data exists in the 02:00 hour, it must mean it was either recorded in error or was recorded in a time zone that doesn't observe daylight savings such as UTC or a constant offset from UTC.
For more information, see the TimeZone field in the Configuration section above.
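The out-of-order effect above can be reproduced with Python's zoneinfo module: on the 2022 spring-forward date for America/Los_Angeles (March 13), wall-clock times in the non-existent 02:00 hour convert to the same UTC instants as times in the 03:00 hour, so the converted sequence is no longer monotonic. This illustrates the failure mode only, not the connector's internals:

```python
from datetime import datetime
from zoneinfo import ZoneInfo

la = ZoneInfo("America/Los_Angeles")
# Wall-clock samples straddling the 2022-03-13 spring-forward gap.
wall = [(1, 15), (1, 45), (2, 15), (2, 45), (3, 15)]
utc = [datetime(2022, 3, 13, h, m, tzinfo=la).astimezone(ZoneInfo("UTC"))
       for h, m in wall]

for t in utc:
    print(t.strftime("%H:%M"))
# 02:15 and 02:45 land at 10:15 and 10:45 UTC, colliding with 03:15's
# 10:15 UTC, so the converted sequence is out of order.
print(utc == sorted(utc))  # False
```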
If you are running into issues with connecting to or accessing data from Snowflake, view our guide for troubleshooting datasource issues.
Performance considerations
When using filter criteria in a where clause, you will want to ensure you have properly set up your clustering keys.
Evaluating execution times can be helpful when trying to determine the cause of a slow query.
Ensure you are using timestamp types for your time columns rather than storing timestamps as strings or the results of concatenations.
Querying data from a table will generally be much more performant than querying a stage.
View our guide on optimizing datasource performance for general guidance.