Version: 5.5

MQTT Performance

This page describes the performance characteristics of the MQTT plug-in in OPC Router based on real benchmark measurements (see benchmark details).

Factors influencing performance

Main factors

  • Round Trip Time (RTT) – Latency between OPC Router and broker. Has a direct impact on throughput and latency, especially with QoS 1 and QoS 2.
  • QoS Level – Determines the number of acknowledgment steps between client and broker.
  • Connection Pooling – Increases throughput for parallel transfers.
  • Broker Performance – Internal queues, topic filters, and CPU/I/O load.
  • Router Configuration – Number of transfers per second, activated options (deduplication, wait-for-transfer, etc.).

Quality of Service (QoS)

| QoS | Description | Communication phases | RTT cost per message | Relative speed | Typical local latency |
|-----|-------------|----------------------|----------------------|----------------|-----------------------|
| 0 | Fire & forget – no confirmation | 1 (Client → Broker) | 0.5 × RTT | 🟢 Very high | < 10 ms |
| 1 | Receipt confirmation (PUBACK) | 2 (Client ↔ Broker) | 1 × RTT | 🟡 Medium | 20–80 ms |
| 2 | 4-way handshake (PUBREC/PUBREL/PUBCOMP) | 4 (Client ↔ Broker) | 2 × RTT | 🔴 Low | 40–150 ms |
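
To make the multipliers concrete, the following sketch estimates per-message latency for each QoS level from a given RTT. The factors 0.5, 1, and 2 are the approximations from the table above; broker processing time is ignored and the 40 ms RTT is only an illustrative value.

```python
# Rough per-message latency estimate based on the RTT multipliers above.
# Broker processing time is ignored; the 40 ms RTT is just an example value.
QOS_RTT_FACTOR = {0: 0.5, 1: 1.0, 2: 2.0}

def estimated_latency_ms(rtt_ms: float, qos: int) -> float:
    """Approximate publish latency for one message at the given QoS level."""
    return rtt_ms * QOS_RTT_FACTOR[qos]

for qos in (0, 1, 2):
    print(f"QoS {qos} @ 40 ms RTT: ~{estimated_latency_ms(40, qos):.0f} ms per message")
```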

Network Round Trip Time (RTT)

RTT (Round Trip Time) is the time it takes for a data packet to travel from the OPC Router to the MQTT broker and back again. It depends heavily on the network environment and directly influences the performance of MQTT communication.

| Environment | Typical RTT | Description |
|-------------|-------------|-------------|
| Local (same host) | 0.5–2 ms | Local Mosquitto broker or Docker container on the same machine |
| Local area network (LAN) | 1–5 ms | Broker in the same network segment or data center |
| Corporate network (Corp-VPN) | 10–40 ms | VPN or MPLS-based site connection, additional encryption latency |
| Cloud (hyperscaler) | 30–100 ms | MQTT broker in Azure, AWS, or Google Cloud, depending on region and peering |
| Wide area network (WAN) | 80–250 ms | International or mobile connections, possibly via public networks |
Note

With QoS 1 or QoS 2, the effective transmission time increases linearly with the RTT, as each acknowledgment round requires a complete round-trip time.

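If the RTT to the broker is unknown, it can be approximated from the client side by timing a QoS 1 publish, which only completes once the broker's PUBACK has returned. The sketch below uses the paho-mqtt client purely as an illustration; the broker address and topic are placeholders, and it assumes the paho-mqtt 1.x constructor (with paho-mqtt 2.x, `mqtt.Client()` additionally expects a `CallbackAPIVersion`).

```python
import time
import paho.mqtt.client as mqtt

# Approximate the broker RTT by timing one QoS 1 publish:
# wait_for_publish() returns once the broker's PUBACK has been received.
client = mqtt.Client()                      # paho-mqtt 1.x style constructor
client.connect("localhost", 1883, keepalive=60)   # assumes a reachable broker
client.loop_start()                         # network loop in a background thread

start = time.perf_counter()
info = client.publish("bench/rtt-probe", b"ping", qos=1)
info.wait_for_publish()
rtt_ms = (time.perf_counter() - start) * 1000
print(f"Approximate broker round trip: {rtt_ms:.1f} ms")

client.loop_stop()
client.disconnect()
```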

Connection Pooling

Connection pooling distributes publish operations across multiple parallel MQTT connections, significantly increasing the throughput rate.

| Parameter | Description |
|-----------|-------------|
| Min. connections | Minimum number of simultaneously open connections |
| Max. connections | Upper limit for parallel MQTT connections (adjustable up to 100) |
| Idle timeout | Time until inactive connections are closed |
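
The principle can be illustrated outside of OPC Router with a handful of independent client connections and a round-robin dispatcher, as in the sketch below. This is not the plug-in's implementation; pool size, broker address, and topic are assumptions, and the same paho-mqtt 1.x constructor assumption as above applies.

```python
import itertools
import paho.mqtt.client as mqtt

POOL_SIZE = 5                               # corresponds to "Min. connections" above

# Open several independent connections so QoS 1/2 acknowledgment waits
# on one connection do not block publishes on the others.
pool = []
for i in range(POOL_SIZE):
    c = mqtt.Client(client_id=f"pool-demo-{i}")
    c.connect("localhost", 1883)            # assumes a reachable broker
    c.loop_start()
    pool.append(c)

round_robin = itertools.cycle(pool)

def publish_pooled(topic: str, payload: bytes, qos: int = 1):
    """Publish via the next connection in the pool (round-robin)."""
    return next(round_robin).publish(topic, payload, qos=qos)

for n in range(100):
    publish_pooled("plant/line1/value", str(n).encode())
```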

Advantages

  • Significantly higher throughput for many transfers
  • Load distribution across multiple threads
  • No blocking while waiting for QoS 1/2 acknowledgments

Recommendation

| Scenario | Recommended setting |
|----------|---------------------|
| Few transfers / simple telemetry | Disable pool (single connection) |
| Medium to high load | Min. = 5, Max. = 50 |
| Very high load or parallel topics | Min. = 10, Max. = 100 |

Real benchmark results (local)

Measured with Mosquitto 2.0.22, OPC Router 5.5, Windows Sandbox (i7-13700, 4 GB RAM), 100-byte payload. See the MQTT benchmark results.

| Transfer flows | QoS | Avg. throughput [msg/s] | Max. [msg/s] | Comment |
|----------------|-----|-------------------------|--------------|---------|
| 1 | 0 | 64 | 65 | Base load without pool |
| 1 | 1 | 63 | 65 | Hardly any difference to QoS 0 (local) |
| 10 | 0 | 640 | 650 | Linear scaling with number of connections |
| 10 | 1 | 616 | 648 | Slight overhead due to PUBACK |
| 50 | 0 | 2670 | 3117 | Strong parallelization, slight fluctuations |
| 100 | 0 | 4230 | 5286 | Maximum tested rate without pool |
| 50 (pool) | 0 | 3025 | 3230 | +13% increase due to pooling |
| 100 (pool) | 0 | 5556 | 6118 | Highest measured throughput |
Note

All values are from local benchmarks (RTT ≈ 1 ms). In real networks, QoS 1/2 throughput drops roughly in inverse proportion to the RTT, as each acknowledgment adds a full round trip.

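As a back-of-envelope check of this note: with QoS 1 and one in-flight message at a time per connection, a connection can complete at most roughly one publish per round trip, so an upper bound on throughput is the number of connections divided by the RTT. The model below only illustrates that scaling; the numbers are not measurements.

```python
def estimated_qos1_throughput(rtt_ms: float, connections: int = 1) -> float:
    """Rough upper bound in msg/s: one in-flight QoS 1 publish per RTT per connection."""
    return connections * 1000.0 / rtt_ms

for rtt in (1, 10, 50, 100):
    rate = estimated_qos1_throughput(rtt, connections=10)
    print(f"RTT {rtt:>3} ms, 10 connections: ~{rate:,.0f} msg/s")
```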

Optimization recommendations

  1. QoS 0 for fast, non-critical data (e.g., telemetry, measured values)
  2. QoS 1 as standard for delivery reliability with acceptable performance
  3. QoS 2 only if absolute accuracy is required
  4. Enable connection pooling for high parallelism or MQTT 5 batch transfers
  5. Enable deduplication if data often remains the same, so unchanged values are not republished
  6. Only reduce Cycle Time (expert setting) if CPU load remains stable
  7. Adjust Keep Alive / timeout values to the RTT of the network (see the sketch below)
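
For the last point, one simple approach is to derive the keep-alive interval and publish timeouts from a measured RTT instead of using fixed defaults. The multipliers and broker address below are illustrative assumptions, not OPC Router defaults, and the same paho-mqtt 1.x constructor assumption as above applies.

```python
import paho.mqtt.client as mqtt

measured_rtt_ms = 45                        # e.g. taken from the RTT probe shown earlier

# Scale keep-alive and timeouts from the RTT (multipliers are examples only).
keepalive_s = max(10, round(measured_rtt_ms / 1000 * 200))   # generous margin over the RTT
publish_timeout_s = max(1.0, measured_rtt_ms / 1000 * 20)    # budget for several QoS 1/2 round trips

client = mqtt.Client()
client.connect("broker.example.local", 1883, keepalive=keepalive_s)  # placeholder broker address
```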

See also