Version: 5.4

OPC data transfer via multi-data change trigger

tip

In our performance benchmark, we were able to simultaneously transfer around 28,500 OPC data points per second to MQTT and InfluxDB. The transfer rate was limited by the processing capacity of the OPC server, which became the bottleneck at higher loads.

Systems

OS: Windows Server 2019 Standard 1809
RAM: 16GB
Processor: 64bit 2.6GHz (4 Cores)
OPC Router: 5.3.5008.157

OS: Windows Server 2019 Standard 1809
RAM: 12GB
Processor: 64bit 2.6GHz (4 Cores)
KEPServerEx: V6.16.203.0

OS: Windows 10 Pro N 22H2
RAM: 12GB
Processor: 64bit 2.6GHz (4 Cores)
KEPServerEx: V6.16.203.0

Projects

2 KEPServerEx projects

  • with one channel each (simulator)
  • with one device each (16-bit device)
  • with two tag groups each
  • with 40 sub-tag groups each
  • with 200 tags each (random 1000)

Total 32,000 tags
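The total follows from the nesting above, reading it as 40 sub-tag groups per tag group and 200 tags per sub-tag group. A quick sanity check (illustrative only):

```python
# Tag count per the KEPServerEx project structure described above.
projects = 2     # KEPServerEx projects
tag_groups = 2   # tag groups per project
sub_groups = 40  # sub-tag groups per tag group
tags = 200       # tags per sub-tag group

total_tags = projects * tag_groups * sub_groups * tags
print(total_tags)  # 32000
```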

OPC Router project

  • Four OPC UA client connections (two plug-in connections per KEPServerEx)
  • Each plug-in is used in 40 template instances in the multi-data change trigger
  • Each multi-data change trigger monitors a tag group (with 200 tags each)
  • The values read when a trigger fires are written to InfluxDB and published via MQTT

Summary: Monitoring and reading of 8,000 tags per connection (32,000 in total)

OPC UA client connection

Deviations from the default:

  • Subscription – Register OPC tags on start: true
  • Advanced – Sample rate (ms): 1000

Multi-Data Change Trigger

Deviations from the default:

  • Update items: false
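Conceptually, a multi-data change trigger watches an entire tag group and, whenever a monitored value changes, fires once with the current values of all tags in the group. The following Python sketch models that behaviour; the class and names are illustrative and are not the OPC Router API:

```python
from typing import Callable, Dict, Iterable, Optional

class MultiDataChangeTrigger:
    """Illustrative model of a multi-data change trigger: it monitors a
    tag group and, on each data change, hands a snapshot of the group's
    complete current value set to a callback (e.g. an InfluxDB/MQTT writer)."""

    def __init__(self,
                 tag_names: Iterable[str],
                 on_fire: Callable[[Dict[str, Optional[float]]], None]):
        # Tags that have not yet reported a value are None.
        self.values: Dict[str, Optional[float]] = {name: None for name in tag_names}
        self.on_fire = on_fire

    def data_change(self, tag: str, value: float) -> None:
        """Called for each incoming data-change notification."""
        self.values[tag] = value
        self.on_fire(dict(self.values))  # fire with all monitored values
```

In the benchmark project, one such trigger would correspond to one tag group of 200 tags, with 40 trigger instances per connection.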

Evaluation

The evaluation considered the average time between successive executions of each multi-data change trigger.
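This metric is simply the mean gap between consecutive trigger execution timestamps; a small helper (illustrative, not part of the product) shows the calculation:

```python
def avg_interval_ms(timestamps_ms):
    """Average time between consecutive trigger executions, in ms."""
    gaps = [later - earlier
            for earlier, later in zip(timestamps_ms, timestamps_ms[1:])]
    return sum(gaps) / len(gaps)

# Example: three intervals of 1100, 1050 and 1150 ms.
print(avg_interval_ms([0, 1100, 2150, 3300]))  # 1100.0
```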

Connections | Data points | Plug-ins | OPC Server | Ø Time bet. triggers | Plug-in
------------|-------------|----------|------------|----------------------|------------------------------
160         | 32000       | 4        | 2          | 1100 ms              | KEPServerEx 1, Connection #1
160         | 32000       | 4        | 2          | 1027 ms              | KEPServerEx 1, Connection #2
160         | 32000       | 4        | 2          | 1204 ms              | KEPServerEx 2, Connection #1
160         | 32000       | 4        | 2          | 1101 ms              | KEPServerEx 2, Connection #2
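The measured intervals are consistent with the roughly 28,500 data points per second quoted in the tip above, assuming each of the four connections delivers its 8,000 tag values (40 triggers × 200 tags) once per average trigger interval. An illustrative recomputation:

```python
# Average trigger intervals per connection, in seconds (from the table).
intervals_s = [1.100, 1.027, 1.204, 1.101]
tags_per_connection = 8000  # 40 triggers x 200 tags each

# Each connection delivers its full tag set once per trigger interval.
throughput = sum(tags_per_connection / t for t in intervals_s)
print(round(throughput))  # roughly 29,000 data points per second
```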

warning

The OPC server proved to be the limiting factor as the number of data points to be transferred increased: its processing capacity could not keep up with higher loads, which capped the overall transfer rate.

Project files

Download Benchmark_KepServer1.opf

Download Benchmark_Kepserver2.opf

Download Benchmark_Multidatachange.rpe