Store & Forward best practice when using multiple transmitters

Hello, I have a question about the store & forward mechanism when using the MQTT Transmission module (on an Edge gateway) in conjunction with the MQTT Engine module (on a main gateway). We have multiple Ignition Edge gateways, each with a redundant pair (backup). Communication to the main gateway is over MQTT (Distributor + Engine on the main gateway). Each Edge gateway has multiple pieces of equipment (PLCs) connected, and some result data must be sent from the Edge to the main gateway; this data must also be historized locally if the connection drops.

What would be the best approach, from the store & forward mechanism perspective, for implementing the Transmitters on the Edge gateway?

Scenario A: one Transmitter that collects MQTT data from multiple Edge Nodes (PLCs); I assume this will create an MQTT client for each node (PLC).

Scenario B: one Transmitter for each PLC (which will create a single client connection per PLC).

From the Transmitter's history store side, which scenario would be recommended? I ask because we currently have some issues with the redundant pair (mostly when the master Edge gateway reconnects after it was disconnected).
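To make the two scenarios concrete, here is roughly how I picture the tag layouts under the Transmitter tag path (all folder and PLC names are made up for illustration):

```python
# Rough sketch of the two Transmitter layouts I am considering.
# Folder and PLC names are invented just to illustrate the structure.

# Scenario A: one Transmitter whose tag path covers several PLC folders,
# so a single Transmitter (and a single history store) serves all PLCs.
scenario_a = {
    "Transmitter_1": {
        "PLC_1": ["Tag1", "Tag2"],
        "PLC_2": ["Tag1", "Tag2"],
        "PLC_3": ["Tag1", "Tag2"],
    }
}

# Scenario B: one Transmitter per PLC, so each PLC gets its own
# Transmitter and therefore its own history store.
scenario_b = {
    "Transmitter_1": {"PLC_1": ["Tag1", "Tag2"]},
    "Transmitter_2": {"PLC_2": ["Tag1", "Tag2"]},
    "Transmitter_3": {"PLC_3": ["Tag1", "Tag2"]},
}
```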

We have some docs that should answer much of what you are looking for. This doc shows the relationship between Transmitters and tag trees and how they relate to Sparkplug Edge Nodes and Devices (and in turn MQTT Clients): MQTT Transmission Transmitters and Tag Trees - MQTT Modules for Ignition 8.x - Confluence
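As a rough illustration of what that doc covers (this assumes the default Transmitter convention where the folder hierarchy under the configured tag path maps to Group ID / Edge Node ID / Device ID; the doc has the authoritative rules):

```python
# Illustrative sketch of how a tag folder hierarchy turns into Sparkplug
# topics. Assumes the default Group / Edge Node / Device convention;
# see the linked doc for the exact behavior.

def sparkplug_topic(group_id, edge_node_id, device_id=None, message_type="DDATA"):
    """Build a Sparkplug B topic for the given identifiers."""
    parts = ["spBv1.0", group_id, message_type, edge_node_id]
    if device_id:
        parts.append(device_id)
    return "/".join(parts)

# An MQTT client is created per Edge Node, so the number of clients depends
# on how many Edge Nodes your folder layout defines, not on how many
# Transmitters you configure.
print(sparkplug_topic("Site1", "EdgeGW01", "PLC_1"))
# -> spBv1.0/Site1/DDATA/EdgeGW01/PLC_1
```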

This doc shows how we recommend setting up redundancy: Advanced: MQTT Modules in Redundant Ignition Environment - MQTT Modules for Ignition 8.x - Confluence

We have additional docs on history store setup: Managing historic data with MQTT Modules - MQTT Modules for Ignition 8.x - Confluence

Hello Wes,
I think I have read all the docs you pointed to twice each. But regarding my actual issue with redundancy, what I observe is that, besides the timeout period mentioned in the docs (during which you can lose some data), most problems occur when the master gateway reconnects while the backup gateway is running. When that happens, the data gap is much larger than when the master disconnects and the backup takes over. The rolling buffer of the history store is enabled, the timeout is set to 5 seconds, and the redundancy mode is Warm. The behavior is not always the same: sometimes the switch from backup to master is almost seamless, sometimes it takes about 15-20 seconds. Any idea?
And back to my original question: from the docs I understand how the topics are built and how the MQTT clients are created, but I cannot get a clear picture of which approach is best (from the historization perspective) for handling the Transmitters. Each Transmitter has its own history store, but if 10 clients are created for a single Transmitter, how does that compare with using 10 Transmitters (each with its own history store)? Are the two use cases the same?

With regard to redundancy failover, I would direct you to Inductive Automation, as they can better answer questions about that feature. With regard to the rolling buffer, that value is adjustable, so if failover takes longer you can extend the buffer period to fit your needs.
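As a rough sizing sketch based on the failover gap you observed (the safety factor is just an illustrative assumption, not a product recommendation):

```python
# Rough sizing sketch for the rolling buffer period, based on the worst
# failover gap observed. The safety factor is an illustrative assumption.

observed_worst_failover_s = 20   # from the 15-20 second observation above
safety_factor = 2                # margin for slower-than-usual failovers

recommended_buffer_s = observed_worst_failover_s * safety_factor
print(f"Set the rolling buffer to at least {recommended_buffer_s} seconds")
# -> Set the rolling buffer to at least 40 seconds
```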

With regard to Transmitters and history stores, we generally recommend a one-to-one ratio. As you note, multiple Edge Nodes and Devices can be created from a single Transmitter, but this is fine: the DB schema accounts for this and handles table creation and rotation internally. So, you shouldn't have to adjust anything unless you are writing many millions of tag change events per hour per device. If you are, there are additional controls to increase the number of tables created, but that is very rarely required.
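A quick back-of-the-envelope check may help here; all inputs below are placeholder assumptions you would replace with your own figures, and the comparison point is just the "many millions per hour per device" mentioned above:

```python
# Back-of-the-envelope estimate of tag-change events per hour per device,
# to judge whether the default history-store table handling is enough.
# All inputs are placeholder assumptions; replace with your real figures.

tags_per_plc = 500                 # tags published for one PLC (device)
avg_changes_per_tag_per_sec = 0.5  # how often each tag actually changes

events_per_hour = tags_per_plc * avg_changes_per_tag_per_sec * 3600
print(f"~{events_per_hour:,.0f} tag change events/hour/device")
# -> ~900,000 events/hour/device: well under "many millions", so the
#    default one history store per Transmitter should be fine.
```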