I have Ignition tags of type Document containing pre-built JSON payloads that I want to publish to an MQTT broker so that IT-side consumers receive the raw JSON as-is.
Here’s an example of my tag tree structure:
Enterprise
└── Site
    └── M6
        └── F6A
            └── R-21130
                └── Equipment
                    └── Parameters
                        └── Temperature
                            ├── Control
                            └── Measurement
                                ├── Sensor 1   ← Document tag
                                └── Sensor 2   ← Document tag
Each sensor tag (type: Document) contains a pre-built JSON payload similar to this:
{
  "template": "PI_Value_update",
  "messageType": "add",
  "globalTimeStamp": "2026-03-04T16:06:45Z",
  "messageID": "dd21081f-ea44-4146-819e-f0a30c1e4bd6",
  "version": "1.1.0",
  "metadata": {
    "DataGroup": "PI Data",
    "Alias": null,
    "DefaultUOM": "UOM",
    "DataType": "Integer",
    "UNSPath": "Enterprise/Site/M6/F6A/R-21130/Equipment/Parameters/Temperature/Measurement/Sensor 1"
  },
  "data": [
    {
      "uom": "",
      "dataQuality": "Good",
      "questionable": false,
      "substituted": false,
      "source": "SINUSOID",
      "value": "25.1700172",
      "annotated": false,
      "timestamp": "2026-03-04T16:00:27Z",
      "dataValidation": {
        "result": "Failed",
        "reason": ["Missing mandatory field(s): uom"]
      }
    },
    {
      "uom": "",
      "dataQuality": "Good",
      "questionable": false,
      "substituted": false,
      "source": "SINUSOID",
      "value": 26.3145771,
      "annotated": false,
      "timestamp": "2026-03-04T16:03:27Z",
      "dataValidation": {
        "result": "Failed",
        "reason": ["Missing mandatory field(s): uom"]
      }
    }
  ]
}
From my evaluation:
- The UNS Transmitter maps Document tags to String, escaping my JSON inside its own JSON envelope (double-wrapped).
- The Sparkplug Transmitter encodes everything into Protobuf, so my JSON becomes a String metric inside a binary payload.
Neither approach preserves the original JSON structure for the consuming client.
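To make the double-wrapping concrete, here is a minimal sketch of what I mean. The envelope field names (`name`, `value`) are placeholders, not the transmitter's actual schema; the point is only that the consumer has to parse JSON twice to get back the original structure:

```python
import json

# Abbreviated stand-in for the original Document payload.
original = {"template": "PI_Value_update", "messageType": "add"}

# What effectively gets published when the Document is mapped to a String:
# the payload is serialized, then escaped inside the transmitter's own
# JSON envelope (field names here are illustrative only).
envelope = json.dumps({"name": "Sensor 1", "value": json.dumps(original)})

# The consumer therefore needs two parse steps instead of one:
outer = json.loads(envelope)        # first parse: the transmitter envelope
inner = json.loads(outer["value"])  # second parse: the escaped payload

assert inner == original
```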
I have a few questions on this:
- Is there a supported way within MQTT Transmission to publish raw JSON payloads without the additional encoding? Are there any configuration options I'm missing that would allow the original JSON to pass through cleanly?
- When sending encapsulated JSON payloads (i.e., a full JSON document as a single tag value), do consuming applications need to handle this explicitly, such as parsing through an additional wrapper layer, or does the Sparkplug specification account for nested/complex payloads in a way that preserves them transparently for the consumer?
- From a scalability and performance standpoint, is there a meaningful difference between publishing a single Document tag containing a complete JSON payload and breaking that same data out into individual atomic tags? Specifically, how do the two approaches compare in broker throughput, message overhead, and consumer-side processing efficiency at scale?
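For the third question, a back-of-envelope sketch of the payload-size tradeoff (field names mirror my example above; the sizes are purely illustrative, and this ignores per-message MQTT/broker overhead, which would widen the gap further):

```python
import json

# Shared metadata that either rides along once (batched) or is repeated
# per message (atomic). Abbreviated from the example payload.
metadata = {
    "template": "PI_Value_update",
    "messageType": "add",
    "version": "1.1.0",
    "metadata": {"DataGroup": "PI Data", "DataType": "Integer"},
}

# Six synthetic readings standing in for the "data" array entries.
readings = [
    {"value": 25.17 + i, "timestamp": "2026-03-04T16:0%d:27Z" % i,
     "dataQuality": "Good"}
    for i in range(6)
]

# Option A: one Document tag / one message, metadata sent once.
batched = dict(metadata, data=readings)
batched_bytes = len(json.dumps(batched).encode())

# Option B: one message per reading, metadata repeated every time.
atomic_bytes = sum(
    len(json.dumps(dict(metadata, data=[r])).encode()) for r in readings
)

print("batched:", batched_bytes, "bytes; atomic:", atomic_bytes, "bytes")
```

The batched form is smaller on the wire because the metadata block is not repeated, but the broker and consumer then handle one large message instead of many small ones, which shifts where the processing cost lands.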