
Understanding Data Caching

Seeq can be configured to cache historian data on its local hard drive to improve data retrieval performance for historical data.

Overview

One of our core principles at Seeq is that data should not be duplicated. We feel strongly about this because the minute that data is duplicated, it is out of date and a host of issues and questions arise as to the integrity and legitimacy of the copy. At the same time, we strive to optimize the user experience of interacting with data in historians. Therefore, to enhance data access performance, Seeq can use a local cache on the Seeq Server's hard drive. It is a true cache, since its data may be discarded at any time without affecting the integrity of the analytics created in Seeq. Depending on the specifics of an installation, the performance benefit of enabling the cache on historian data can be dramatic. Please work with a Seeq team member to evaluate if your Seeq server is a good candidate for datasource caching.

How It Works

Reading/Writing

Data is written to the cache as it is accessed or computed. Data is read from the cache whenever cached data exists in the requested time range. For instance, if the user requests data from a series for January, and then requests data for March, the cache will look like this:

If the user then requests data from Jan 15th through March 15th, the cached data in January and March will be used and the missing February data will be computed or fetched and then written to the cache. The cache will then look like this:

Any subsequent requests for data that overlap January through March will source the overlapping data from the cache, piecing cached data together with new data as necessary. Any requests for data that fall entirely within previously cached regions will use just the cache.
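The range-stitching behavior described above can be sketched conceptually. This is not Seeq's actual implementation, just an illustration of how a range-based cache determines which sub-ranges of a request are already cached and which must still be fetched from the datasource:

```python
# Conceptual sketch (not Seeq's implementation) of range-based cache planning:
# given the ranges already cached, find the gaps that must be fetched.

def plan_request(cached_ranges, start, end):
    """Given sorted, non-overlapping cached (lo, hi) ranges, return the
    sub-ranges of [start, end] that must be fetched from the datasource."""
    missing = []
    cursor = start
    for lo, hi in cached_ranges:
        if hi <= cursor or lo >= end:
            continue  # this cached range does not overlap the request
        if lo > cursor:
            missing.append((cursor, lo))  # gap before this cached range
        cursor = max(cursor, hi)  # advance past the cached data
    if cursor < end:
        missing.append((cursor, end))  # tail of the request is uncached
    return missing

# Using day-of-year numbers for simplicity: January (days 1-31) and
# March (days 60-90) are cached; a request for Jan 15 - Mar 15 only
# needs the February gap fetched.
cached = [(1, 31), (60, 90)]
print(plan_request(cached, 15, 74))  # -> [(31, 60)]
```

A request falling entirely inside a cached range returns an empty "missing" list, which corresponds to the fully-cached case described above.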

Invalidation

Most time series data changes by appending new data, with very infrequent changes to past data. The cache is designed with this property in mind and only requires invalidation (purging of cached data) when past data is altered. While invalidation of data directly from a datasource is very rare, invalidation of calculated data is reasonably frequent, since a change to a calculation typically requires invalidating the entire signal and every signal that depends on it. This invalidation is handled by the system automatically and transparently.
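The cascading nature of calculated-data invalidation can be sketched as a walk over a dependency graph. This is a conceptual illustration only, with hypothetical item names, not Seeq's internal mechanism:

```python
# Conceptual sketch (not Seeq's implementation) of cascading invalidation:
# clearing one item's cache also clears every calculation that depends on
# it, directly or transitively.

def invalidate(item, dependents, cache):
    """Remove `item` from `cache`, then recursively remove everything
    recorded in `dependents` as depending on it."""
    cache.discard(item)
    for child in dependents.get(item, []):
        invalidate(child, dependents, cache)

# Hypothetical dependency graph: calc_a is built on raw_signal, and
# calc_b is built on calc_a. Invalidating raw_signal clears all three.
dependents = {"raw_signal": ["calc_a"], "calc_a": ["calc_b"]}
cache = {"raw_signal", "calc_a", "calc_b", "other_signal"}
invalidate("raw_signal", dependents, cache)
print(cache)  # -> {'other_signal'}
```

Unrelated items (here, `other_signal`) keep their cached data, which is why invalidation of one signal does not degrade performance across the whole system.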

In the rare situation that data must be manually invalidated, users can manage the caching state through the user interface. Users can enable/disable the cache and clear (invalidate) the cached data in the advanced options of an item's properties:

Note that an item's cache, and the cache of any calculations based on it, can also be cleared by modifying any of the following properties: Interpolation Method, Uncertainty Override, Request Interval Clamp, Key Unit Of Measure, Value Unit Of Measure, Source Maximum Interpolation, Override Maximum Interpolation, Unit Of Measure, or Maximum Duration.

Turning Caching On for a Datasource

Datasource caching is enabled by default for all newly-created datasources. Datasource caching can be modified or disabled with the Datasources/Cache/Enabled configuration flag.
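As a sketch of how the default might be changed, the snippet below assumes Seeq's `seeq config` administration command; the exact command name and flag syntax may differ by Seeq version, so treat this as an assumption and consult your installation's administration guide:

```shell
# Assumed syntax (verify against your Seeq version's admin documentation):
# disable the datasource cache default via the configuration flag named above.
seeq config set "Datasources/Cache/Enabled" false
```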

To enable caching for a particular datasource, navigate to the Datasources tab of the Administration page. Find the applicable datasource and click the Cache Enabled slider that is outlined in red in the following screenshot:
