MQTT Performance
This page describes the performance characteristics of the MQTT plug-in in OPC Router based on real benchmark measurements (see benchmark details).
Factors influencing performance
Main factors
- Round Trip Time (RTT) – Network latency between OPC Router and the broker. It has a direct impact on throughput and transfer latency, especially with QoS 1 and QoS 2.
- QoS Level – Determines the number of acknowledgment steps between client and broker.
- Connection Pooling – Increases throughput for parallel transfers.
- Broker Performance – Internal queues, topic filters, and CPU/I/O load.
- Router Configuration – Number of transfers per second and enabled options (deduplication, wait-for-transfer, etc.).
Quality of Service (QoS)
| QoS | Description | Communication phases | Typical RTT | Relative speed | Typical local latency |
|---|---|---|---|---|---|
| 0 | Fire & forget – no confirmation | 1 (Client → Broker) | 0.5 × RTT | 🟢 Very high | < 10 ms |
| 1 | Receipt confirmation (PUBACK) | 2 (Client ↔ Broker) | 1 × RTT | 🟡 Medium | 20–80 ms |
| 2 | 4-way handshake (PUBREC/PUBREL/PUBCOMP) | 4 (Client ↔ Broker) | 2 × RTT | 🔴 Low | 40–150 ms |
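The difference between the levels is easiest to see in client code. The following is a minimal sketch using the generic paho-mqtt Python client, not the OPC Router plug-in itself; the broker address, topics, and payloads are placeholder assumptions, and paho-mqtt ≥ 2.0 is assumed for the constructor.

```python
import paho.mqtt.client as mqtt

# Generic MQTT client sketch; paho-mqtt >= 2.0 assumed, broker/topics are placeholders.
client = mqtt.Client(mqtt.CallbackAPIVersion.VERSION2)
client.connect("localhost", 1883)   # local Mosquitto broker assumed
client.loop_start()                 # network loop runs in a background thread

# QoS 0: fire & forget - the call returns as soon as the packet is queued
client.publish("plant/line1/temperature", b"21.7", qos=0)

# QoS 1: broker answers with PUBACK - completion takes roughly 1 x RTT
info = client.publish("plant/line1/pressure", b"1.03", qos=1)
info.wait_for_publish()

# QoS 2: PUBREC/PUBREL/PUBCOMP handshake - completion takes roughly 2 x RTT
info = client.publish("plant/line1/batch_id", b"B-4711", qos=2)
info.wait_for_publish()

client.loop_stop()
client.disconnect()
```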
Network Round Trip Time (RTT)
RTT (Round Trip Time) is the time it takes for a data packet to travel from the OPC Router to the MQTT broker and back again. It depends heavily on the network environment and directly influences the performance of MQTT communication.
| Environment | Typical RTT | Description |
|---|---|---|
| Local (same host) | 0.5–2 ms | Local Mosquitto broker or Docker container on the same machine |
| Local area network (LAN) | 1–5 ms | Broker in the same network segment or data center |
| Corporate network (Corp-VPN) | 10–40 ms | VPN or MPLS-based site connection, additional encryption latency |
| Cloud (hyperscaler) | 30–100 ms | MQTT broker in Azure, AWS, or Google Cloud, depending on region and peering |
| Wide area network (WAN) | 80–250 ms | International or mobile connections, possibly via public networks |
With QoS 1 or QoS 2, the effective transmission time increases linearly with the RTT, as each acknowledgment round requires a complete round-trip time.
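As a rough back-of-the-envelope model (ignoring broker processing time and message size), the per-connection throughput ceiling can be estimated from the RTT and the number of acknowledgment round trips per QoS level from the table above; the function below only illustrates that relationship.

```python
def max_msgs_per_second(rtt_ms: float, qos: int) -> float:
    """Rough per-connection ceiling: one message per QoS round-trip budget."""
    round_trips = {0: 0.5, 1: 1.0, 2: 2.0}[qos]   # factors from the QoS table
    return 1000.0 / (round_trips * rtt_ms)

print(max_msgs_per_second(1, 1))    # local broker, QoS 1 -> ~1000 msg/s ceiling
print(max_msgs_per_second(40, 1))   # Corp-VPN, QoS 1     -> ~25 msg/s ceiling
print(max_msgs_per_second(40, 2))   # Corp-VPN, QoS 2     -> ~12.5 msg/s ceiling
```

In practice, other factors such as the router's cycle time and broker load dominate on local links, so measured rates stay well below this theoretical ceiling; on slow links the RTT itself becomes the limit.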
Connection Pooling
Connection pooling distributes publish operations across multiple parallel MQTT connections, significantly increasing the throughput rate.
| Parameter | Description |
|---|---|
| Min. connections | Minimum number of simultaneously open connections |
| Max. connections | Upper limit for parallel MQTT connections (adjustable up to 100) |
| Idle timeout | Time until inactive connections are closed |
Advantages
- Significantly higher throughput for many transfers
- Load distribution across multiple threads
- Avoids blocking caused by QoS 1/2 acknowledgments
Recommendation
| Scenario | Recommended setting |
|---|---|
| Few transfers / simple telemetry | Disable pool (single connection) |
| Medium to high load | Min. = 5, Max. = 50 |
| Very high load or parallel topics | Min. = 10, Max. = 100 |
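To illustrate the idea behind pooling (not OPC Router's internal implementation), the following sketch fans publishes out over several generic paho-mqtt connections in round-robin fashion; the pool size, broker, and topic are assumptions.

```python
import itertools
import paho.mqtt.client as mqtt

POOL_SIZE = 5  # illustrative value, comparable to "Min. connections" above

# Open several independent connections to the same broker (paho-mqtt >= 2.0 assumed)
pool = []
for i in range(POOL_SIZE):
    c = mqtt.Client(mqtt.CallbackAPIVersion.VERSION2, client_id=f"pool-{i}")
    c.connect("localhost", 1883)
    c.loop_start()
    pool.append(c)

# Round-robin distribution: each connection keeps its own QoS 1/2 in-flight window,
# so acknowledgment latency on one connection does not throttle the others.
next_client = itertools.cycle(pool)
for n in range(1000):
    next(next_client).publish("plant/benchmark", f"msg-{n}".encode(), qos=1)

for c in pool:
    c.loop_stop()
    c.disconnect()
```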
Real benchmark results (local)
Measured with Mosquitto 2.0.22, OPC Router 5.5, Windows Sandbox (i7-13700, 4 GB RAM), 100-byte payload. See MQTT benchmark results.
| Transfer flows | QoS | Avg. throughput [msg/s] | Max. [msg/s] | Comment |
|---|---|---|---|---|
| 1 | 0 | 64 | 65 | Base load without pool |
| 1 | 1 | 63 | 65 | Hardly any difference from QoS 0 (local) |
| 10 | 0 | 640 | 650 | Linear scaling with the number of connections |
| 10 | 1 | 616 | 648 | Slight overhead due to PUBACK |
| 50 | 0 | 2670 | 3117 | Strong parallelization, slight fluctuations |
| 100 | 0 | 4230 | 5286 | Maximum tested rate without pool |
| 50 (pool) | 0 | 3025 | 3230 | +13% increase due to pooling |
| 100 (pool) | 0 | 5556 | 6118 | Highest measured throughput |
All values are from local benchmarks (RTT ≈ 1 ms). In real networks, throughput decreases proportionally with the RTT, as each QoS acknowledgment adds a full round trip.
Optimization recommendations
- QoS 0 for fast, non-critical data (e.g., telemetry, measured values)
- QoS 1 as the standard for reliable delivery with acceptable performance
- QoS 2 only if exactly-once delivery is required
- Enable connection pooling for high parallelism or MQTT 5 batch transfers
- Disable deduplication if data often remains the same
- Only reduce Cycle Time (expert setting) if CPU load remains stable
- Adjust Keep Alive / timeout values to the RTT of the network (see the sketch below)
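As a hedged illustration of the last point, a generic MQTT client can have its keep-alive and reconnect back-off widened for high-RTT links; the concrete values here are assumptions, not OPC Router defaults.

```python
import paho.mqtt.client as mqtt

client = mqtt.Client(mqtt.CallbackAPIVersion.VERSION2)

# A slower reconnect back-off avoids hammering the broker over a flaky WAN link,
# and a longer keep-alive tolerates higher RTT without spurious disconnects.
client.reconnect_delay_set(min_delay=1, max_delay=30)
client.connect("broker.example.com", 1883, keepalive=120)
client.loop_start()
```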