We are building an OT-to-cloud data pipeline for a client, from Ignition to Azure. Shopfloor data is published from Ignition as Sparkplug B (Protobuf), while the Azure-side consumers require JSON payloads on structured UNS topics. We are looking for guidance on the best approach for the Protobuf-to-JSON conversion layer.
We are evaluating three options and have concerns around message integrity, store-and-forward coverage, and scalability.
OPTION A — Protobuf-to-JSON conversion via Cirrus Link modules (current setup)
- Sparkplug Transmitter publishes shopfloor tags to the broker as Sparkplug B Protobuf.
- MQTT Engine subscribes to spBv1.0/# topics, decodes Protobuf, and creates UNS tags.
- UNS Transmitter monitors [MQTT Engine]/UNS tags and republishes each tag change as an individual JSON message on unsAv1.0/ topics.
- An enterprise broker bridges the unsAv1.0/ JSON topics to Azure.
Blocking issue: Source tags are UDT instances. With Convert UDTs enabled on the Sparkplug Transmitter, MQTT Engine still recreates them as UDT instances (not atomic leaf tags) in the UNS folder. The UNS Transmitter only discovers leaf tags, so it reports Tag Count = 0 and publishes nothing.
We need to understand:
- Does the UNS Transmitter support traversing UDT instances and publishing their leaf members? If not, is this a planned feature?
- Is there a configuration we are missing, or is this a known limitation?
- What is the recommended tag structure for UDT-based source tags that need to flow through the full Sparkplug → Engine → UNS Transmitter pipeline?
OPTION B — Protobuf-to-JSON conversion at the broker (HiveMQ Data Hub)
- Sparkplug Transmitter publishes shopfloor tags to HiveMQ as Sparkplug B Protobuf.
- HiveMQ Data Hub deserializes the Protobuf, transforms metrics to JSON, and fans out individual metrics to UNS topics.
- An enterprise broker bridges the unsAv1.0/ JSON topics to Azure.
This approach moves the conversion out of Ignition and eliminates the UDT leaf-tag issue entirely, since HiveMQ operates at the Sparkplug metric level, not the Ignition tag level.
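To make sure we understand the Data Hub approach correctly: the transformation we expect it to perform is, logically, the fan-out sketched below. This is Python purely for illustration (Data Hub uses its own transformation scripts); the `ddata` dict stands in for a decoded Sparkplug B DDATA Protobuf, and the unsAv1.0 topic and payload shapes are our assumptions, not the actual Cirrus Link format.

```python
import json

def fan_out(group, edge_node, device, ddata):
    """Fan one decoded Sparkplug B DDATA payload out to per-metric JSON messages.

    `ddata` stands in for the decoded Protobuf: a payload-level timestamp plus
    a list of metrics (name, value, optional per-metric timestamp). The topic
    and payload shapes below are assumptions, not the Cirrus Link format.
    """
    messages = []
    for metric in ddata["metrics"]:
        # Metric names like "Line1/Press/Temperature" map onto the UNS topic tree.
        topic = f"unsAv1.0/{group}/{edge_node}/{device}/{metric['name']}"
        payload = json.dumps({
            "value": metric["value"],
            "timestamp": metric.get("timestamp", ddata["timestamp"]),
        })
        messages.append((topic, payload))
    return messages

msgs = fan_out("Site1", "Edge1", "PLC1", {
    "timestamp": 1700000000000,
    "metrics": [
        {"name": "Line1/Press/Temperature", "value": 72.4},
        {"name": "Line1/Press/Speed", "value": 1450, "timestamp": 1700000000123},
    ],
})
for topic, payload in msgs:
    print(topic, payload)
```

If this matches what Data Hub does at the metric level, then the UDT question disappears in Option B, since Ignition's tag model never enters the picture.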
OPTION C — UNS Transmitter only, no Sparkplug (JSON from source)
- UNS Transmitter reads directly from the source tag provider and publishes each tag change as JSON to unsAv1.0/ topics on the broker. No Sparkplug Transmitter, no MQTT Engine in the data path.
- An enterprise broker bridges the unsAv1.0/ JSON topics to Azure.
This is the simplest architecture — JSON from the source, no Protobuf-to-JSON conversion needed, and the UDT leaf-tag issue is resolved because the UNS Transmitter reads the source provider directly and traverses into UDT members.
However, we lose all Sparkplug advantages:
- No compact Protobuf encoding — JSON payloads are significantly larger on the wire
- No Sparkplug birth/death session management — no automatic metric discovery, no stale quality on disconnect
- No Sparkplug sequence numbers — no built-in mechanism for message loss detection
- No batching — one MQTT message per tag change from the start, increasing broker load at scale
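To put a rough number on the payload-size point, a back-of-envelope comparison (the JSON shape is our guess at a per-tag unsAv1.0 message, and the binary side is a hand-packed stand-in, not real Sparkplug encoding):

```python
import json
import struct

# Hypothetical per-tag JSON message on the unsAv1.0 path (shape is our guess).
json_msg = json.dumps({
    "name": "Line1/Press/Temperature",
    "timestamp": 1700000000000,
    "value": 72.4,
    "quality": 192,
}).encode()

# Rough stand-in for the same metric inside a Sparkplug DDATA Protobuf:
# a small numeric alias, a millisecond timestamp, and a double (aliases
# replace metric names after NBIRTH). Not real Protobuf, just a size ballpark.
binary_msg = struct.pack("<HQd", 12, 1700000000000, 72.4)

print(len(json_msg), len(binary_msg))  # JSON comes out several times larger
```

At fewer than 50 tags this is noise; at thousands of tags with sub-second updates it compounds with the loss of DDATA batching.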
What we retain:
- QoS 1 (at least once delivery) on the UNS Transmitter MQTT client
- Store-and-forward via the UNS Transmitter History Store when the broker connection is lost
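For context on the buffering question below: if the UNS Transmitter's History Store turns out not to be sufficient, the fallback pattern we have been sketching is an application-level publish wrapper, outside the module, that queues on failure and drains on reconnect. All names here are hypothetical; this is a sketch of the pattern, not any Cirrus Link or HiveMQ API.

```python
from collections import deque

class BufferedPublisher:
    """Minimal store-and-forward wrapper around any MQTT-like client.

    `client.publish(topic, payload, qos)` is assumed to raise ConnectionError
    on failure; messages are kept in arrival order and drained on the next
    successful publish. A sketch of the pattern only, not a real module API.
    """

    def __init__(self, client, maxlen=100_000):
        self.client = client
        self.buffer = deque(maxlen=maxlen)  # oldest messages drop first when full

    def publish(self, topic, payload):
        self.buffer.append((topic, payload))
        self.flush()

    def flush(self):
        while self.buffer:
            topic, payload = self.buffer[0]
            try:
                self.client.publish(topic, payload, qos=1)
            except ConnectionError:
                return  # broker still down; keep buffering
            self.buffer.popleft()  # dequeue only after a successful hand-off
```

We would much rather rely on a supported History Store configuration than maintain something like this, hence the questions below.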
We need to understand:
- Is QoS 1 + History Store on the UNS Transmitter considered a reliable store-and-forward mechanism, comparable to the Sparkplug Transmitter’s History Store + Primary Host ID?
- Without Sparkplug sequence numbers, how should we validate end-to-end message integrity on the JSON-only path?
- At scale (thousands of tags, sub-second updates), is the UNS Transmitter designed to handle the message volume of being the sole publisher, or is it intended as a lightweight republisher sitting behind MQTT Engine?
CONCERNS FOR ALL OPTIONS
- Store-and-forward gap on the JSON path: The Sparkplug Transmitter has store-and-forward (History Store + Primary Host ID). The UNS Transmitter (Options A and C) is publish-only with no Sparkplug session management. If the downstream broker or Azure is temporarily unavailable, are JSON messages lost? The UNS Transmitter does support a History Store and QoS 1, but does this provide equivalent reliability to the Sparkplug path? Is there any way to add buffering to the UNS Transmitter path, or is this an accepted trade-off of the unsAv1.0 architecture?
- Fan-out impact on message volume: Sparkplug batches multiple metrics per DDATA message. The JSON layer fans this out to one MQTT message per metric per change. For a system scaling to thousands of tags with sub-second update rates, what are the practical throughput limits of the UNS Transmitter?
- End-to-end message integrity: Sparkplug provides sequence numbers for loss detection between Transmitter and Engine; there is no equivalent mechanism on the JSON path. How do you recommend validating that no metric changes are dropped across the full pipeline, from shopfloor tag change to Azure consumer receipt?
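One pattern we are considering for the integrity question, and would welcome feedback on: stamp each per-topic JSON message with an application-level monotonic sequence number at the publisher, and detect gaps at the Azure consumer. The field names below are our own invention, not part of any unsAv1.0 payload spec:

```python
import json
from collections import defaultdict
from itertools import count

class SeqPublisher:
    """Stamp each per-topic JSON message with a monotonic sequence number."""
    def __init__(self):
        self._seq = defaultdict(count)  # independent counter per topic

    def wrap(self, topic, value):
        # "seq" is an application-level field, not a Sparkplug sequence number.
        return json.dumps({"value": value, "seq": next(self._seq[topic])})

class GapDetector:
    """Consumer side: report any jump in the per-topic sequence."""
    def __init__(self):
        self._expected = {}
        self.gaps = []

    def receive(self, topic, payload):
        seq = json.loads(payload)["seq"]
        expected = self._expected.get(topic)
        if expected is not None and seq != expected:
            self.gaps.append((topic, expected, seq))  # (topic, expected, got)
        self._expected[topic] = seq + 1

pub, det = SeqPublisher(), GapDetector()
msgs = [pub.wrap("unsAv1.0/Site1/Temp", v) for v in (71.9, 72.1, 72.4, 72.6)]
for m in (msgs[0], msgs[1], msgs[3]):  # drop the third message to simulate loss
    det.receive("unsAv1.0/Site1/Temp", m)
print(det.gaps)  # one gap: expected seq 2, got seq 3
```

A side benefit: QoS 1 redeliveries show up as a sequence lower than expected, so the same field can drive consumer-side deduplication. The obvious cost is that every publisher in the path has to participate, which is why we are asking whether something supported already exists.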
ENVIRONMENT
- Ignition 8.3.3
- Cirrus Link modules v5.0.2
- Current MQTT brokers: MQTT Distributor and HiveMQ Enterprise
- Source tags: UDT instances
- Current scale: small (< 50 tags), expected to grow to thousands
- Target: Azure cloud services requiring JSON
We appreciate any guidance on the recommended architecture, and specifically:
- Whether the UDT-to-UNS Transmitter limitation in Option A has a known workaround
- Whether Option B (HiveMQ) or Option C (UNS-only) is the better path forward
- What trade-offs we should expect at scale for each option