MQTT Message Size

Hi, we are evaluating Ignition and the Cirrus Link MQTT Transmission module to bridge IoT data from OPC-UA to AWS IoT Core. We are hitting the issue documented here: Connections to AWS IoT oscillating - MQTT Modules for Ignition 8.x - Confluence.

The issue occurs with only about 20,000 tags imported into a tag provider: the DBIRTH message is simply too large, and AWS drops the MQTT connection with “Message payload exceeds size limit for message type”.

I understand we can group the tags to minimize the MQTT message size and enable compression of the payload. We tested this with one of the OPC-UA tree branches and it works OK. I suppose I can create multiple paths and transmitters based on the sub-branches to publish the grouped tags to AWS under various paths.

However, I am still concerned about the potential payload sizes being sent to AWS. One of the tag datatypes is dynamic and can potentially include thousands of data points per tag value.
Without a way to control the message size, we can only adjust the Tag Pacing Period to further reduce the batched payloads (as an attempt to keep the compressed payload small). For context, a full payload of such dynamic data can be ~100 KB uncompressed, and the maximum MQTT message size for AWS IoT Core is, as we all know, 128 KB.

Can anyone please advise what the maximum payload size is for MQTT messages sent from the MQTT Transmission module? Is there any way to tweak or limit the payload size, apart from developing payload-size-limiting logic in a gateway script and publishing with system.cirruslink.transmission.publish?
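
To make the question concrete, below is the sort of size-limiting logic I have in mind for the gateway-script route. This is only a sketch: the server name, topic, and function name are placeholders, the compression is plain zlib rather than whatever the Transmission module does internally, and it assumes a publish(serverName, topic, payload, qos, retained) signature for system.cirruslink.transmission.publish.

import zlib

MAX_PAYLOAD = 128 * 1024          # AWS IoT Core per-message size limit (bytes)
SERVER_NAME = "Chariot SCADA"     # placeholder: MQTT Transmission server name
TOPIC = "custom/dynamicData"      # placeholder topic

def publishLimited(records):
    # Compress a batch of records and publish it; if the compressed result
    # is still over the limit, split the batch in half and retry.
    # system.* is available in Ignition gateway scripting without imports.
    payload = zlib.compress(system.util.jsonEncode(records))
    if len(payload) <= MAX_PAYLOAD or len(records) <= 1:
        # The module may expect the payload as a byte array, so a conversion
        # step could be needed here depending on the module version.
        system.cirruslink.transmission.publish(SERVER_NAME, TOPIC, payload, 0, False)
    else:
        mid = len(records) // 2
        publishLimited(records[:mid])
        publishLimited(records[mid:])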

Thank you in advance.

It sounds like you understand the options well. There isn’t currently a hard limit on MQTT message size from Transmission. You will need to use the options you’ve mentioned to control the size. Is AWS IoT Core a hard requirement? It is really the culprit here in terms of limiting message size. The MQTT spec allows payloads of up to 256 MB, and the Chariot and MQTT Distributor MQTT servers both support message sizes up to that limit.

Hi Wes, the IoT data we are pulling out of our OT environment will be consumed and stored by existing solutions hosted in AWS. AWS IoT Core has native support for and better integration with those solutions. I am still keen to understand the options, though:

  1. If we go down the path of Ignition (MQTT Transmission module) → Chariot MQTT server, how can we forward the IoT data to AWS? The ultimate destination for us could be S3/Firehose or a Kinesis data stream, and each of those services has its own size limitations. (A rough idea of what such a bridge could look like is sketched after this list.)
  2. If we are to script the whole publishing process using a Tag Change script on the Ignition gateway, do you have any advice on how to build out the logic for batching as well as limiting the size of the payload? The latter should be quite straightforward, I think, but I have no idea how to batch the sets of changes over a set interval, say one second, and then publish them to MQTT (assuming I check the size limit in the same script). A batching sketch along those lines also follows the list below.
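
For option 1, one idea I have been toying with is a small standalone bridge that subscribes to the Chariot topics and forwards each message to Kinesis. The sketch below is just that idea written down, assuming the paho-mqtt and boto3 libraries and placeholder broker, topic, and stream names; it does not decode Sparkplug payloads, which we would still need to handle:

import boto3
import paho.mqtt.client as mqtt

KINESIS_STREAM = "iot-ingest"         # placeholder stream name
CHARIOT_HOST = "chariot.example.com"  # placeholder broker address
TOPIC_FILTER = "spBv1.0/#"            # Sparkplug namespace

kinesis = boto3.client("kinesis")

def on_message(client, userdata, msg):
    # Forward each MQTT message as one Kinesis record. Kinesis records are
    # limited to 1 MB, so oversized payloads would still need to be split
    # or trimmed before this call.
    kinesis.put_record(
        StreamName=KINESIS_STREAM,
        Data=msg.payload,
        PartitionKey=msg.topic,
    )

client = mqtt.Client()   # paho-mqtt 1.x constructor; 2.x also expects a callback API version
client.on_message = on_message
client.connect(CHARIOT_HOST, 1883)
client.subscribe(TOPIC_FILTER)
client.loop_forever()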
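
For option 2, the rough pattern I am picturing is a Gateway Tag Change script that pushes every change into a shared buffer, plus a Gateway Timer script running once per second that snapshots the buffer, encodes it, and publishes it. A minimal sketch under those assumptions (placeholder server and topic names, the event/newValue variables of a gateway Tag Change script, and the same publish call as above):

# --- Gateway Tag Change script: buffer each change ---
g = system.util.getGlobals()
if "mqttBatch" not in g:
    g["mqttBatch"] = []
g["mqttBatch"].append({
    "path": str(event.tagPath),
    "value": newValue.value,
    "ts": system.date.toMillis(newValue.timestamp),
})

# --- Gateway Timer script (fixed rate, 1000 ms): flush once per second ---
SERVER_NAME = "Chariot SCADA"   # placeholder MQTT Transmission server name
TOPIC = "custom/batched"        # placeholder topic

buf = system.util.getGlobals().get("mqttBatch")
if buf:
    records = list(buf)         # snapshot the current batch
    del buf[:]                  # and clear it for the next interval
    payload = system.util.jsonEncode(records)
    # The size check / splitting from the earlier sketch would slot in here
    # before publishing; a production version would also need to synchronize
    # access to the shared buffer between the two scripts.
    system.cirruslink.transmission.publish(SERVER_NAME, TOPIC, payload, 0, False)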

Thanks again for your quick response and any further advice.

If the end goal is to get data into Kinesis and you don’t have other MQTT subscribers, maybe this is a better option for you? AWS Injector - MQTT Modules for Ignition 8.x - Confluence

It also auto-divides messages to stay within the size limits Kinesis can handle.

Hi,
I tested the AWS Injector module a few weeks back and it wasn’t working then. I tested it again today; the connection to AWS shows as Connected, but there are errors in the log: Failed to initialize the Kinesis client.

I followed the instructions at this link: AWI: Configuration - MQTT Modules for Ignition 8.x - Confluence (chariot.io). The AWS Injector module version is 4.0.21 and Ignition is 8.1.37.

Testing with DynamoDB, I get a different error: failed to push the record to DDB.

Based on the error recorded during the Kinesis testing, I suspect it is something to do with the AWS credentials I provided, which is surprising. In my case, my logon is SSO via AWS SSO and Azure with the built-in PowerUserAccess policy attached, which should have sufficient permissions. I have since tested with a manually created IAM user with the following IAM policy attached:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "VisualEditor0",
      "Effect": "Allow",
      "Action": [
        "dynamodb:*",
        "kinesis:*"
      ],
      "Resource": "*"
    }
  ]
}

Still no luck. Let me know if you have any further pointers.

Thanks again.

Looking at the log a bit more, I found the following warnings about the AWS Secret Key:


Let me know what you think.

Thanks.

And it started working overnight. I can only assume some of the Injector connections still cached the first credentials after I updated to the IAM user. Using a custom IAM user with a restricted policy is confirmed working for us.

Thanks.

Internally, the application uses session tokens. Once it renewed the session token, it would’ve used the new policy attached to the user. For future reference, it would’ve picked up the new policy right away if the module had been restarted.