Version: 5.4

Highly available and redundant architecture

This section covers the concepts and implementation strategies for highly available and redundant systems in the context of the OPC Router. High availability and redundancy are crucial for minimizing downtime and ensuring the reliability of data communication. There are various approaches to achieving these goals, including centralized and decentralized architectures, as well as combinations of both approaches.

Centralized architecture with high availability and redundancy

Cross redundancy

Definition

Cross redundancy involves not only operating two or more centralized OPC Router instances in parallel, but also designing the connected systems to be redundant. Each instance is capable of taking over the tasks of the others, using redundant connections to the data sources.

Implementation

  • Central servers: Two or more powerful servers in a central data center.
  • Redundant connection: Configuration of the OPC Router instances with redundant connections to OPC UA servers, databases, SAP systems, etc. For example, a connection is established to two redundant OPC UA servers to enable "hot standby."
  • Synchronization: Regular data and configuration synchronization between the servers.
  • Failover mechanism: Automated switchover to the redundant instance in the event of a server instance failure.
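The failover mechanism described above can be sketched in a few lines. The following is an illustrative model only, not the OPC Router's actual implementation; the class and endpoint names (`Endpoint`, `HotStandbyClient`, `opcua-a`, `opcua-b`) are hypothetical.

```python
class Endpoint:
    """A stand-in for one of two redundant data sources (e.g. an OPC UA server)."""
    def __init__(self, name, healthy=True):
        self.name = name
        self.healthy = healthy

    def read(self, tag):
        if not self.healthy:
            raise ConnectionError(f"{self.name} is unreachable")
        return f"{tag}@{self.name}"


class HotStandbyClient:
    """Reads from the active endpoint; on failure, promotes the standby and retries."""
    def __init__(self, primary, standby):
        self.endpoints = [primary, standby]
        self.active = 0  # index of the currently active endpoint

    def read(self, tag):
        for _ in range(len(self.endpoints)):
            try:
                return self.endpoints[self.active].read(tag)
            except ConnectionError:
                # Failover: switch to the next redundant endpoint and retry.
                self.active = (self.active + 1) % len(self.endpoints)
        raise ConnectionError("all redundant endpoints are down")
```

With two healthy endpoints, reads go to the primary; if the primary becomes unreachable, the next read transparently comes from the standby, which is the "hot standby" behavior the configuration aims for.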

Advantages of cross redundancy

  • High availability: The architecture enables continuous availability through parallel operation and fast recovery in the event of a failure.
  • Redundant connection: The use of redundant connections to critical systems such as OPC UA servers, databases, and SAP systems increases reliability.
  • Fast switchover times: Hot standby allows very fast switchover to the redundant instance in the event of a failure, minimizing downtime.
  • Dynamic compensation: Multiple redundancy layers can react dynamically to the failure of individual components in the communication chain and compensate for them.

Challenges with cross redundancy

  • Increased costs due to multiple central servers.
  • Complexity in implementing and managing synchronization mechanisms.

Decentralized architecture

Close to the source in a fault zone

Definition

In a decentralized architecture, OPC Router instances are operated as close as possible to the data source (e.g., production line, machine hall). Each instance operates in a defined "fault zone" so that local failures have no impact on other zones.

Implementation

  • Local servers or edge devices: Use of local servers or edge devices close to the data sources.
  • Local redundancy: Establishment of redundancy at the local level, e.g., through duplicate edge devices.
  • Store and forward: Use of the OPC Router's store and forward add-on to store data locally in the event of connection interruptions and forward it later.
  • Local broker/database: Use of a local broker or database for temporary storage and synchronization of data with central systems.
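The store-and-forward behavior in the list above can be sketched as a small local buffer. This is an illustrative model, not the OPC Router store and forward add-on itself; the `StoreAndForward` class and the `sink` callable are hypothetical names.

```python
from collections import deque


class StoreAndForward:
    """Buffers records locally while the upstream sink is unreachable
    and flushes them, in order, once the connection returns."""
    def __init__(self, sink):
        self.sink = sink       # callable delivering one record; may raise ConnectionError
        self.buffer = deque()  # local store for records not yet delivered

    def send(self, record):
        self.buffer.append(record)
        self.flush()

    def flush(self):
        while self.buffer:
            try:
                self.sink(self.buffer[0])
            except ConnectionError:
                return             # sink is down; keep the data for later
            self.buffer.popleft()  # delivered: remove from the local store
```

During a connection interruption, records accumulate in the local buffer; a later `flush()` (e.g., triggered on reconnect) delivers them in their original order, so no data is lost in the fault zone.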


Advantages of decentralized architecture

  • Reduced latency through local data processing.
  • Higher fault tolerance through isolated fault zones.
  • Flexibility and scalability.

Challenges of decentralized architecture

  • Increased complexity due to the management of multiple decentralized instances.
  • Need for robust synchronization mechanisms between local and central systems.

Combination of both architectures

Hybrid architecture

Definition

A hybrid architecture combines centralized and decentralized approaches to leverage the advantages of both models.

Implementation

  • Central servers: Setting up central servers with high availability and redundancy (e.g., cross-redundancy).
  • Decentralized edge devices: Use of edge devices close to the data sources, which are designed to be locally redundant.
  • Data synchronization: Implementation of synchronization mechanisms between decentralized and centralized instances, e.g., through store and forward, local brokers, or databases.
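One way to make the synchronization between decentralized and centralized instances robust is to make it idempotent, so that records re-sent after a connection failure do no harm. The sketch below illustrates that idea under assumed names (`EdgeNode`, `CentralStore`, `line-1`); it is not the OPC Router's actual synchronization mechanism.

```python
class CentralStore:
    """Central instance: merges records from many edge zones, idempotently."""
    def __init__(self):
        self.records = {}  # keyed by (zone, sequence number), so replays are harmless

    def ingest(self, zone, seq, value):
        self.records.setdefault((zone, seq), value)


class EdgeNode:
    """Edge instance: numbers its records and re-sends everything not yet acknowledged."""
    def __init__(self, zone):
        self.zone = zone
        self.seq = 0
        self.unacked = []  # records produced locally, not yet confirmed centrally

    def record(self, value):
        self.unacked.append((self.seq, value))
        self.seq += 1

    def sync(self, central):
        for seq, value in self.unacked:
            central.ingest(self.zone, seq, value)  # safe to repeat after a failure
        self.unacked.clear()
```

Because the central store keys records by zone and sequence number, an edge node that loses its connection mid-sync can simply replay its whole unacknowledged backlog without creating duplicates.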

Advantages of hybrid architecture

  • Maximized availability and fault tolerance.
  • Optimized latency times through local processing and centralized management.

Challenges of hybrid architecture

  • Complexity of implementing and managing synchronized systems.
  • Increased costs due to the combination of multiple approaches.

Kubernetes and container orchestration

note

A basic understanding of Kubernetes is necessary to fully comprehend the following.

Definition

Kubernetes is an open-source platform for automating the deployment, scaling, and management of containerized applications. By using Helm Charts, OPC Router can be efficiently deployed and managed in a Kubernetes cluster.

Implementation

  • Deployment: The OPC Router is installed on a Kubernetes cluster using the OPC Router Helm Chart. The Helm Chart supports various configurations, including authentication and redundancy.
  • Configuration: Support for global parameters (e.g., Docker image registry), general parameters (e.g., service account settings), and specific OPC Router parameters (e.g., project repository, environment variables).
  • Redundancy: Configuration of redundancy for both OPC Router and MongoDB. For OPC Router redundancy, a second pod is provided that is activated in the event of a failure of the main instance. For MongoDB, increasing the number of pods enables redundant database provisioning.

Advantages of Kubernetes

  • Scalability: Easy scaling of OPC Router instances according to requirements.
  • Flexibility: Ability to manage different configurations and implementations.
  • Automation: Automated management of deployments, updates, and rollbacks.

Challenges of using Kubernetes

  • Complexity: Implementing and managing Kubernetes clusters can be complex and requires specialized expertise.
  • Resource requirements: Operating a Kubernetes cluster requires significant hardware resources and can be costly.

For detailed implementation strategies, practical implementations, and specific configuration examples, please visit the subpage Kubernetes and Container Orchestration.

Practical implementation

Centralized architecture

  1. Server provisioning: Set up multiple central servers.
  2. Configuration synchronization: Implement mechanisms to synchronize configuration and data between servers.
  3. Failover testing: Regularly test the failover mechanism to ensure availability.

Decentralized architecture

  1. Edge device provisioning: Install and configure edge devices near the data sources.
  2. Local redundancy: Implement redundant edge devices for local fault tolerance.
  3. Store and forward: Activate and configure the store and forward add-on.
  4. Local broker or database: Set up a local broker or database for data caching and synchronization.

Hybrid architecture

  1. Centralized and decentralized components: Combine the steps from the centralized and decentralized architectures.
  2. Synchronization strategies: Implement robust synchronization strategies between the central and decentralized components.
  3. Monitoring: Regularly monitor and maintain the central and decentralized instances.
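The monitoring step above can be as simple as periodically probing every central and edge instance and recording which are reachable. The following is a minimal sketch with hypothetical instance names; it is not a feature of the OPC Router itself.

```python
class HealthMonitor:
    """Polls a set of instances (central and edge) and records which are reachable."""
    def __init__(self, probes):
        self.probes = probes  # maps instance name -> callable returning True if healthy
        self.status = {}

    def poll(self):
        for name, probe in self.probes.items():
            try:
                self.status[name] = bool(probe())
            except Exception:
                # A probe that raises (timeout, connection refused) counts as unhealthy.
                self.status[name] = False
        return self.status

    def unhealthy(self):
        return sorted(name for name, ok in self.status.items() if not ok)
```

In practice the probe callables would issue real health checks (e.g., an HTTP or OPC UA request with a timeout), and the list of unhealthy instances would feed an alerting system.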

Summary

For highly available and redundant systems, OPC Router offers both centralized and decentralized architecture options that can be combined as required. A centralized architecture with cross-redundancy offers high availability and centralized management, while a decentralized architecture minimizes latency and increases fault tolerance. A hybrid architecture combines the advantages of both approaches to ensure maximum flexibility and reliability.