Version: 5.2

Highly available and redundant architecture

This section discusses the concepts and implementation strategies for highly available and redundant systems in the context of the OPC Router. High availability and redundancy are crucial to minimize downtime and ensure the reliability of data communication. There are various approaches to achieve these goals, including centralized and decentralized architectures, as well as combinations of both approaches.

Centralized architecture with high availability and redundancy

Cross-redundancy

  • Definition: Cross-redundancy not only involves operating two or more central OPC Router instances in parallel, but also designing the connected systems redundantly. Each instance is able to take over the tasks of the others, using redundant connections to the data sources.
  • Implementation:
    • Centralized servers: Two or more powerful servers in a centralized data center.
    • Redundant connection: Configuration of the OPC Router instances with redundant connections to OPC UA servers, databases, SAP systems, etc. For example, a connection is established to two redundant OPC UA servers to enable “hot standby”.
    • Synchronization: Regular data and configuration synchronization between the servers.
    • Failover mechanism: Automatic switchover to the redundant instance in the event of a server failure.
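The hot-standby switchover described above can be sketched in plain Python (standard library only). The `HotStandbyClient` class, the endpoint URLs, and the `probe` callback are hypothetical illustrations of the concept, not OPC Router APIs — in the product itself this switchover is configured, not hand-coded:

```python
class HotStandbyClient:
    """Keeps a connection to one of several redundant endpoints and
    switches to the next reachable endpoint when the active one fails."""

    def __init__(self, endpoints, connect):
        self.endpoints = list(endpoints)  # e.g. two redundant OPC UA servers
        self.connect = connect            # callable: endpoint -> connection, raises ConnectionError
        self.active = None

    def ensure_connected(self):
        # Prefer the currently active endpoint, then fall back to the others.
        candidates = ([self.active] if self.active else []) + [
            e for e in self.endpoints if e != self.active
        ]
        for endpoint in candidates:
            try:
                connection = self.connect(endpoint)
                self.active = endpoint
                return connection
            except ConnectionError:
                continue
        raise ConnectionError("no redundant endpoint reachable")


# Demo: the primary server is unreachable, the standby takes over.
# Endpoint URLs are illustrative, not real servers.
status = {"opc.tcp://plc-a:4840": False, "opc.tcp://plc-b:4840": True}

def probe(endpoint):
    if not status[endpoint]:
        raise ConnectionError(endpoint)
    return endpoint

client = HotStandbyClient(status, probe)
active = client.ensure_connected()
```

Because the active endpoint is tried first on subsequent calls, the client stays on the standby until it fails in turn, which avoids needless switching back and forth.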

Advantages of cross-redundancy

  • High availability: The architecture enables constant availability through parallel operation and fast recovery in the event of a fault.
  • Redundant connection: The use of redundant connections to critical systems such as OPC UA servers, databases and SAP systems increases reliability.
  • Fast switchover times: Hot standby allows the redundant instance to be switched over extremely quickly in the event of a failure, minimizing downtime.
  • Dynamic compensation: Multiply redundant designs can dynamically compensate for failing components anywhere in the chain.

Challenges of cross-redundancy

  • Increased costs due to multiple central servers.
  • Complexity in implementing and managing synchronization mechanisms.

Decentralized architecture

Close to the source in a fault zone

  • Definition: In a decentralized architecture, OPC Router instances are operated as close as possible to the data source (e.g. production line, machine hall). Each instance operates in a defined “fault zone” so that local failures do not affect other zones.
  • Implementation:
    • Local servers or edge devices: Use of local servers or edge devices in the vicinity of the data sources.
    • Local redundancy: Setting up redundancy at the local level, e.g. by using duplicate edge devices.
    • Store and Forward: Use of the Store and Forward add-on of the OPC Router to store data locally in case of connection interruptions and forward it later.
    • Local broker/database: Use of a local broker or database to buffer and synchronize data with central systems.
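The Store and Forward idea — buffer locally during an outage, deliver later in order — can be illustrated with a minimal standard-library sketch. The `StoreAndForwardBuffer` class and its schema are assumptions for illustration, not the actual add-on's implementation:

```python
import sqlite3

class StoreAndForwardBuffer:
    """Stores records locally while the central system is unreachable and
    forwards them in original order once the connection is restored."""

    def __init__(self, path=":memory:"):
        self.db = sqlite3.connect(path)
        self.db.execute(
            "CREATE TABLE IF NOT EXISTS buffer "
            "(id INTEGER PRIMARY KEY AUTOINCREMENT, payload TEXT)"
        )

    def store(self, payload):
        self.db.execute("INSERT INTO buffer (payload) VALUES (?)", (payload,))
        self.db.commit()

    def forward(self, send):
        """Deliver buffered records via `send`; stop at the first failure
        so order is preserved and undelivered records stay buffered."""
        rows = self.db.execute(
            "SELECT id, payload FROM buffer ORDER BY id"
        ).fetchall()
        delivered = 0
        for row_id, payload in rows:
            try:
                send(payload)
            except ConnectionError:
                break
            self.db.execute("DELETE FROM buffer WHERE id = ?", (row_id,))
            delivered += 1
        self.db.commit()
        return delivered


# Demo: buffer three readings during an outage, then flush them.
buffer = StoreAndForwardBuffer()
for value in ("42.0", "42.5", "43.1"):
    buffer.store(value)

received = []
sent = buffer.forward(received.append)
```

Using a file path instead of `:memory:` would make the buffer survive a restart of the edge device, which is the point of the pattern.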


Advantages of decentralized architecture

  • Reduced latency through local data processing.
  • Higher fault tolerance through isolated fault zones.
  • Flexibility and scalability.

Challenges of decentralized architecture

  • Increased complexity due to the management of multiple decentralized instances.
  • The need for robust synchronization mechanisms between local and central systems.

Combination of both architectures

Hybrid architecture

  • Definition: A hybrid architecture combines centralized and decentralized approaches to take advantage of both models.
  • Implementation:
    • Centralized servers: Setting up centralized servers with high availability and redundancy (e.g. cross-redundancy).
    • Decentralized edge devices: Use of edge devices close to the data source that are designed with local redundancy.
    • Data synchronization: Implementation of synchronization mechanisms between decentralized and centralized instances, e.g. through store and forward or local brokers/databases.
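One minimal sketch of such a synchronization mechanism, assuming records are keyed by source and timestamp so that overlapping deliveries from redundant edge instances are applied to the central store only once (all names are hypothetical, not an OPC Router API):

```python
def merge_edge_batches(central, batches):
    """Merge record batches forwarded by edge instances into the central
    store, dropping duplicates by (source, timestamp) key so overlapping
    deliveries from redundant edges are applied only once."""
    seen = {(r["source"], r["ts"]) for r in central}
    for batch in batches:
        for record in batch:
            key = (record["source"], record["ts"])
            if key not in seen:
                seen.add(key)
                central.append(record)
    return central


# Demo: two redundant edge devices forward overlapping readings.
edge_a = [{"source": "line1", "ts": 1, "value": 10},
          {"source": "line1", "ts": 2, "value": 11}]
edge_b = [{"source": "line1", "ts": 2, "value": 11},
          {"source": "line1", "ts": 3, "value": 12}]
central = merge_edge_batches([], [edge_a, edge_b])
```

Idempotent merging like this is what makes it safe for both Store and Forward and a local broker to redeliver data after an interruption.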

Advantages of the hybrid architecture

  • Maximized availability and fault tolerance.
  • Low latency through local processing combined with centralized administration.

Challenges of hybrid architecture

  • Complexity in the implementation and management of synchronized systems.
  • Increased costs due to the combination of multiple approaches.

Kubernetes and container orchestration

Definition

  • Kubernetes is an open-source platform for automating the deployment, scaling and management of containerized applications. Using Helm Charts, the OPC Router can be deployed and managed efficiently in a Kubernetes cluster.

Implementation

  • Deployment: The OPC Router Helm Chart is used to install the OPC Router on a Kubernetes cluster. The Helm Chart supports various configurations, including authentication and redundancy.
  • Configuration: Support for global parameters (e.g. Docker image registry), common parameters (e.g. service account settings) and specific OPC Router parameters (e.g. project repository, environment variables).
  • Redundancy: Configuration of redundancy for both the OPC Router and MongoDB. For the OPC Router, a second pod is provided that takes over if the main instance fails. For MongoDB, redundant database provisioning is enabled by increasing the number of pods.
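A hypothetical `values.yaml` fragment illustrating the kinds of parameters described above. The key names here are illustrative assumptions, not the chart's actual schema — consult the official OPC Router Helm Chart documentation for the real parameter names:

```yaml
# Illustrative only -- key names are assumptions, not the chart's schema.
global:
  imageRegistry: registry.example.com   # assumed private Docker image registry

opcrouter:
  replicaCount: 2        # second pod acts as standby for failover
  env:
    - name: PROJECT_REPOSITORY          # hypothetical variable name
      value: mongodb://mongodb:27017

mongodb:
  replicaCount: 3        # redundant database provisioning via more pods
```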

Advantages of Kubernetes

  • Scalability: Easy scaling of OPC Router instances according to requirements.
  • Flexibility: Ability to manage different configurations and implementations.
  • Automation: Automated management of deployments, updates and rollbacks.

Challenges of using Kubernetes

  • Complexity: Implementing and managing Kubernetes clusters can be complex and requires specialized expertise.
  • Resource requirements: Running a Kubernetes cluster requires significant hardware resources and can be costly.

For detailed implementation strategies, practical implementations and specific configuration examples, please visit the Kubernetes and Container Orchestration subpage.

Practical implementation

Centralized architecture

  1. Server provisioning: Set up multiple centralized servers.
  2. Configuration synchronization: Implement mechanisms to synchronize configuration and data between the servers.
  3. Failover test: Test the failover mechanism regularly to ensure availability.

Decentralized architecture

  1. Edge device deployment: Install and configure edge devices near the data sources.
  2. Local redundancy: Deploy redundant edge devices for local fault tolerance.
  3. Store and Forward: Activate and configure the Store and Forward add-on.
  4. Local broker/database: Set up a local broker or database for data caching and synchronization.

Hybrid architecture

  1. Centralized and decentralized components: Combine the steps from the centralized and decentralized architectures.
  2. Synchronization strategies: Implement robust synchronization strategies between the central and decentralized components.
  3. Monitoring: Monitor and maintain both the central and decentralized instances regularly.
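The monitoring step above can be sketched as a simple polling check across all instances (standard-library Python; instance names and the `probe` callback are illustrative assumptions, not an OPC Router API):

```python
def check_instances(instances, probe):
    """Probe every central and decentralized instance once; return the
    names of the instances that fail to respond."""
    unhealthy = []
    for name in instances:
        try:
            healthy = probe(name)
        except ConnectionError:
            healthy = False
        if not healthy:
            unhealthy.append(name)
    return unhealthy


# Demo with simulated instance states (names are illustrative).
state = {"central-1": True, "central-2": True, "edge-line1": False}
down = check_instances(state, state.get)
```

In practice such a check would run on a schedule and raise an alert when the unhealthy list is non-empty, so that a failed edge or central instance is noticed before its redundancy partner also fails.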

Summary

For highly available and redundant systems, the OPC Router offers both centralized and decentralized architecture options that can be combined as needed. A centralized architecture with cross-redundancy provides high availability and centralized management, while a decentralized architecture minimizes latency and increases fault tolerance. A hybrid architecture combines the advantages of both approaches to ensure maximum flexibility and reliability.