Version: 5.5

"Snowflake AI Data Cloud" plugin

The Snowflake plugin enables bidirectional data exchange between OPC Router and the Snowflake cloud data platform: data can be both written to and read from Snowflake. Several methods are available to control costs, performance and latency in a targeted way.

Architecture and mode of operation

All operations are always executed via a Snowflake warehouse, which processes every SQL query and write operation.

warning

Cost note: every read or write operation via the warehouse consumes Snowflake credits. This also applies when Auto Resume/Suspend is enabled: costs accrue whenever the warehouse is active.

For pure ingest scenarios without queries, a cost-optimised alternative via Snowpipe is recommended.

Cost-optimised ingest variants (write-only)

For write-only scenarios, in which OPC Router only inserts data into Snowflake, two options are available:

1. REST plug-in → Snowpipe

The OPC Router can send JSON or CSV data to a Snowpipe endpoint via the REST plug-in. Snowpipe loads this data asynchronously into the target tables without keeping a warehouse active.

Advantages

  • No active warehouse required
  • High cost efficiency for continuous ingest
  • Supports event-driven transfers (e.g. from production systems)
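As a sketch of the payload side of this path: the snippet below builds the newline-delimited JSON that a Snowpipe-backed REST endpoint would typically receive. Field names are illustrative, and the actual POST (endpoint URL, authentication) is deployment-specific, so it is shown only as a comment.

```python
import json

def build_ndjson_payload(records):
    # Serialise records as newline-delimited JSON, a shape commonly
    # accepted by Snowpipe-backed ingest endpoints.
    return "\n".join(json.dumps(r, separators=(",", ":")) for r in records)

# Hypothetical machine data, as OPC Router might forward it:
records = [
    {"machine": "press-01", "temp_c": 71.4, "ts": "2024-01-01T00:00:00Z"},
    {"machine": "press-02", "temp_c": 69.8, "ts": "2024-01-01T00:00:05Z"},
]
payload = build_ndjson_payload(records)

# The actual POST is deployment-specific and not executed here:
# requests.post(INGEST_URL, data=payload, headers=auth_headers)
```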

2. Cloud File Access plug-in → S3 / Stage

Alternatively, OPC Router can write data directly to a cloud stage (e.g. Amazon S3). Snowpipe or a COPY INTO command in Snowflake then loads the data automatically.

Advantages

  • Batch processing of larger data volumes
  • Ideal for periodic or buffered writing
  • No permanent warehouse utilisation
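A minimal sketch of the batch side, assuming gzipped CSV is what the stage's Snowpipe or COPY configuration expects (column names and values are illustrative):

```python
import csv
import gzip
import io

def write_batch_csv(rows, header):
    # Render one buffered batch as a gzipped CSV blob, ready for upload
    # to a cloud stage (e.g. an S3 bucket watched by Snowpipe).
    buf = io.StringIO()
    writer = csv.writer(buf)
    writer.writerow(header)
    writer.writerows(rows)
    return gzip.compress(buf.getvalue().encode("utf-8"))

blob = write_batch_csv([["press-01", 71.4], ["press-02", 69.8]],
                       ["machine", "temp_c"])
# Upload 'blob' via the Cloud File Access plug-in (or any S3 client);
# Snowflake then ingests the file without a running warehouse.
```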

Bidirectional use

The Snowflake plug-in supports not only writing but also reading and returning data. This allows closed-loop scenarios (e.g. feeding results of cloud analyses back to machines) to be implemented.

Two common approaches are available:

1. Trigger via transfer state column

The Snowflake transfer status trigger initiates a transfer when a value in the monitored table is marked as not yet transferred. Once the transfer completes, the record is marked as transferred or failed. Failed transfers can be retried as often as required, tracked by an additional retry counter.

Process

  1. The trigger monitors a defined table in the database.
  2. A data record is inserted with, or set to, the not-transferred status (e.g. 0).
  3. As soon as such a data record is detected, a transfer is triggered.
  4. After successful transfer, the data record is marked as transferred (e.g. 1) or failed (e.g. 2).
  5. Failed transfers can optionally be retried multiple times, controlled by a retry column and a maximum retry count.

Advantages

  • Automated transfer process based on status values.
  • Repeat mechanism for failed transfers increases reliability.
  • Flexible configuration of status values and repeat logic.
  • Option to limit the transfer quantity per run.

Notes

  • The monitored table must have a single primary key.
  • The status column and repeat column must be initialised with 0 or the configured value – NULL values are not permitted.
  • Sorting and filtering of the data records to be transferred can be configured.
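The status workflow above can be sketched against a local SQLite stand-in (table name, column names and the status values 0/1/2 are illustrative; a real deployment targets the monitored Snowflake table):

```python
import sqlite3

# Status values mirror the example in the text:
# 0 = not transferred, 1 = transferred, 2 = failed.
NOT_TRANSFERRED, TRANSFERRED, FAILED = 0, 1, 2
MAX_RETRIES = 3

con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE measurements (
    id INTEGER PRIMARY KEY,                      -- single primary key, as required
    value REAL,
    transfer_state INTEGER NOT NULL DEFAULT 0,   -- initialised, never NULL
    retry_count INTEGER NOT NULL DEFAULT 0)""")
con.executemany("INSERT INTO measurements (value) VALUES (?)", [(1.0,), (2.0,)])

def poll_once(send):
    # One trigger cycle: select untransferred rows (sorted, with a
    # per-run limit), attempt the transfer, then mark each row as
    # transferred or retry it until the retry limit marks it failed.
    rows = con.execute(
        "SELECT id, value FROM measurements "
        "WHERE transfer_state = ? AND retry_count < ? ORDER BY id LIMIT 100",
        (NOT_TRANSFERRED, MAX_RETRIES)).fetchall()
    for row_id, value in rows:
        try:
            send(value)
            con.execute("UPDATE measurements SET transfer_state = ? WHERE id = ?",
                        (TRANSFERRED, row_id))
        except Exception:
            con.execute(
                "UPDATE measurements SET retry_count = retry_count + 1, "
                "transfer_state = CASE WHEN retry_count + 1 >= ? THEN ? ELSE ? END "
                "WHERE id = ?",
                (MAX_RETRIES, FAILED, NOT_TRANSFERRED, row_id))
    con.commit()

poll_once(lambda v: None)  # pretend every transfer succeeds
```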

2. Trigger via data change in table

Alternatively, the OPC Router can detect data changes by evaluating a column such as updated_at or sequence_id.

Process

  1. Periodic polling for changed data records.
  2. Processing of new or changed data records.
  3. Storage of the last checkpoint in the router.

Advantages

  • No additional status field required
  • Easy to integrate into existing structures

Notes

  • The monitored table must have a single primary key.
  • Unique sorting (e.g. ORDER BY updated_at, id) recommended.
  • Duplicate detection via hash or unique ID.
  • Select small batch sizes for high change rates.
  • Optionally, a transfer can be triggered directly when connecting to the database.
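A sketch of this checkpoint pattern, again using SQLite as a stand-in (the row-value comparison requires SQLite 3.15 or later; table and column names are illustrative):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE readings (
    id INTEGER PRIMARY KEY,
    updated_at TEXT NOT NULL)""")
con.executemany("INSERT INTO readings (updated_at) VALUES (?)",
                [("2024-01-01T00:00:01",), ("2024-01-01T00:00:02",)])

checkpoint = ("", 0)  # (last updated_at, last id) seen by the router

def poll_changes(batch_size=100):
    # Fetch rows past the checkpoint, using a unique sort
    # (updated_at, id) so no row is skipped or read twice.
    global checkpoint
    rows = con.execute(
        "SELECT id, updated_at FROM readings "
        "WHERE (updated_at, id) > (?, ?) "
        "ORDER BY updated_at, id LIMIT ?",
        (checkpoint[0], checkpoint[1], batch_size)).fetchall()
    if rows:
        checkpoint = (rows[-1][1], rows[-1][0])
    return rows

first_batch = poll_changes()  # picks up both existing rows
```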

Best practices

  • Choose warehouse size carefully: smaller clusters for continuous streams, larger ones only for batch loads.
  • Make active use of suspend: automatic deactivation significantly reduces running costs.
  • Offload ingest to asynchronous paths (Snowpipe, REST) when real-time delivery is not required.
  • Version schema changes to ensure stable transfers.
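To make the suspend recommendation concrete, a hedged sketch of the corresponding Snowflake DDL; the warehouse name INGEST_WH is a placeholder, and the statement would be executed with your Snowflake client of choice:

```python
def warehouse_tuning_ddl(name, auto_suspend_s=60):
    # Build the ALTER WAREHOUSE statement that enables aggressive
    # auto-suspend with auto-resume, per the best practice above.
    return (f"ALTER WAREHOUSE {name} SET "
            f"AUTO_SUSPEND = {auto_suspend_s} AUTO_RESUME = TRUE")

ddl = warehouse_tuning_ddl("INGEST_WH")  # INGEST_WH is a placeholder name
```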
| Scenario | Method | Warehouse required | Cost-optimised | Recommended for |
| --- | --- | --- | --- | --- |
| Synchronous inserts/selects | Warehouse connection | Yes | No | Validations, small amounts of data |
| Asynchronous ingest (REST) | REST plug-in → Snowpipe | No | Yes | Event/log data, edge → cloud |
| Batch ingest (file upload) | Cloud File Access → S3/Stage | No | Yes | Large data volumes, periodic uploads |
| Read data from Snowflake | SQL trigger, state or timestamp column | Yes | No | Retransfer or data feedback |

See also