Unable to process NBIRTH message

I am attempting to birth an edge of network node, and am running into error messages that I don’t understand.

This is the message that shows up in Ignition’s logs.
ProtocolProcessor 28Feb2023 06:15:19 content <{"timestamp":"1601-01-02T22:35:56.4830889","metrics":[{"name":"bdSeq","timestamp":"1601-01-02T22:35:56.4830889","dataType":"Uint64","value":6},{"name":"Node Control/Rebirth","timestamp":"1601-01-02T22:35:56.4830889","dataType":"Boolean","value":false}],"seq":0}>

Shortly after, a warning and an error message appear in the logs.

The only variable with a value of 0 is the seq variable. Page 58 of the Sparkplug B Specification version 2.2 requires the seq value to be set to 0 on an NBIRTH message. Not finding the same wording anywhere in the Sparkplug B Specification version 3, I tried setting that value to 1 instead (valid range 0 through 255). That resulted in the same warning and a different error.

ProtocolProcessor 28Feb2023 06:35:03 content <{"timestamp":"1601-01-02T22:35:56.6101828","metrics":[{"name":"bdSeq","timestamp":"1601-01-02T22:35:56.6101828","dataType":"Uint64","value":7},{"name":"Node Control/Rebirth","timestamp":"1601-01-02T22:35:56.6101828","dataType":"Boolean","value":false}],"seq":1}>

Sorry, I’m new and only allowed to include one picture. The text of the second error message when the seq is set to 1 instead of 0 is:
com.google.protobuf.InvalidProtocolBufferException: While parsing a protocol message, the input ended unexpectedly in the middle of a field. This could mean either that the input has been truncated or that an embedded message misreported its own length.

Thank you for your interest in Sparkplug! We’ve been inundated with questions and support requests around Sparkplug implementations and are working to help foster the Sparkplug Working Group, now formed at the Eclipse Foundation. We recently released a new version of the Sparkplug Specification as well as a TCK (Technology Compatibility Kit), which can be used for free to test Sparkplug application conformance/compliance. We are asking that all Sparkplug-specific questions be directed there. There are a few options for interacting with the Working Group.

If you already have a custom Sparkplug implementation, the TCK (Technology Compatibility Kit) is ready to help you with conformance/compliance, and it is free: see sparkplug/README.md on the develop branch of the eclipse-sparkplug/sparkplug repository on GitHub.

You can post your question directly on the Sparkplug Working Group mailing list:

You can join the Sparkplug Slack channel(s):

Post an issue on the Eclipse Tahu project site (eclipse/tahu on GitHub). Eclipse Tahu addresses the existence of legacy SCADA/DCS/ICS protocols and infrastructures and provides a much-needed definition of how best to apply MQTT into these existing industrial operational environments.

If posting an issue on the Tahu GitHub page, it is highly recommended that you also post a link on the mailing list to encourage responses.

With all of that being said, it appears your protobuf-encoded message is not formatted correctly at a basic level, so I think your issue is more fundamental than a problem with the sequence number. I’d recommend looking at the Tahu code for some examples.
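
For instance, here is a rough sketch of building and encoding an NBIRTH payload with the Tahu Java client instead of assembling the protobuf bytes by hand. The class and method names follow Tahu’s own Java example code and may need adjusting for your Tahu version, so treat this as an illustration rather than a drop-in implementation.

    // Rough sketch only, based on Tahu's Java example code; names/signatures may vary by Tahu version.
    import java.util.Date;

    import org.eclipse.tahu.message.SparkplugBPayloadEncoder;
    import org.eclipse.tahu.message.model.Metric.MetricBuilder;
    import org.eclipse.tahu.message.model.MetricDataType;
    import org.eclipse.tahu.message.model.SparkplugBPayload;
    import org.eclipse.tahu.message.model.SparkplugBPayload.SparkplugBPayloadBuilder;

    public class NbirthSketch {

        // Builds the NBIRTH payload bytes for a given bdSeq value.
        public static byte[] buildNbirthBytes(long bdSeq) throws Exception {
            // Per [tck-id-topics-nbirth-seq-num], seq MUST be 0 in the NBIRTH payload.
            SparkplugBPayload payload = new SparkplugBPayloadBuilder(0)
                    .setTimestamp(new Date())
                    .createPayload();

            // Same metrics as in the logged message; Tahu's example code uses Int64 for bdSeq.
            payload.addMetric(new MetricBuilder("bdSeq", MetricDataType.Int64, bdSeq).createMetric());
            payload.addMetric(new MetricBuilder("Node Control/Rebirth", MetricDataType.Boolean, false).createMetric());

            // Let Tahu produce the protobuf encoding instead of assembling the bytes by hand.
            return new SparkplugBPayloadEncoder().getBytes(payload);
        }
    }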

Also, with regard to sequence numbers, they are still required. From the spec (https://www.eclipse.org/tahu/spec/sparkplug_spec.pdf):

[tck-id-topics-nbirth-seq-num] The NBIRTH MUST include a sequence number in the payload and it MUST have a value of 0.
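
As a quick way to chase down the InvalidProtocolBufferException itself, you could also parse the exact bytes you are about to publish before sending them. This is only a sketch and assumes Tahu’s generated protobuf classes are on your classpath (sparkplug_b.proto in Tahu generates org.eclipse.tahu.protobuf.SparkplugBProto); if the bytes are truncated or malformed, parseFrom will throw the same exception Ignition is logging.

    // Sketch of a local sanity check; assumes Tahu's generated protobuf classes are available.
    import com.google.protobuf.InvalidProtocolBufferException;

    import org.eclipse.tahu.protobuf.SparkplugBProto;

    public class NbirthCheck {

        // Throws InvalidProtocolBufferException if the bytes are truncated or malformed,
        // i.e. the same failure Ignition is reporting when it processes the NBIRTH.
        public static void verify(byte[] nbirthBytes) throws InvalidProtocolBufferException {
            SparkplugBProto.Payload payload = SparkplugBProto.Payload.parseFrom(nbirthBytes);

            // NBIRTH must carry seq = 0 ([tck-id-topics-nbirth-seq-num]).
            if (payload.getSeq() != 0) {
                throw new IllegalStateException("NBIRTH seq must be 0, got " + payload.getSeq());
            }

            System.out.println("Parsed OK: " + payload.getMetricsCount() + " metrics, seq=" + payload.getSeq());
        }
    }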