MQTT has become the dominant messaging protocol for IoT. It is lightweight, runs on constrained hardware, and uses a clean publish/subscribe model. But MQTT itself is deliberately minimal — it defines how bytes move between a client and a broker, and nothing else. It has no opinion on what your topic strings should look like, what format your payloads should use, or how you should know whether a device is alive or dead.
For consumer IoT, that flexibility is fine. For a manufacturing plant with 200 PLCs from four different vendors, it is a problem. If every integrator invents their own topic structure, payload format, and state management scheme, you end up with dozens of incompatible MQTT implementations that cannot talk to each other without custom middleware.
Sparkplug B is the specification that solves this. It sits on top of standard MQTT v3.1.1 (or v5) and adds the three things industrial IoT actually needs: a defined topic namespace, a standardized binary payload encoding, and a complete device lifecycle model with birth and death certificates. The result is an interoperable, self-describing data layer that any compliant application can consume without prior knowledge of the devices publishing to it.
To understand why Sparkplug B exists, consider what happens when you deploy plain MQTT in a factory:
- **Topic anarchy.** One integrator uses factory/line1/plc3/temperature. Another uses site-cleveland/area-mixing/tag-TT101. A third uses data/json/12345. There is no way for a consuming application to parse all three without custom code for each.
- **Payload ambiguity.** One device publishes JSON, another CSV, another raw binary or XML, and nothing in the message tells the consumer which format to expect.
- **No state model.** Beyond the basic Last Will, a subscriber cannot tell whether a silent device is offline, misconfigured, or simply has not published yet.

Sparkplug B addresses every one of these issues with a tight, well-defined specification that any vendor can implement.
Every Sparkplug B message is published to a topic that follows this exact structure:
`spBv1.0/{group_id}/{message_type}/{edge_node_id}/{device_id}`

- `spBv1.0` — Fixed prefix identifying the Sparkplug B version. Every compliant message starts here.
- `group_id` — A logical grouping, typically a plant, site, or production area (e.g., Cleveland_Plant).
- `message_type` — One of eight defined types: NBIRTH, NDEATH, DBIRTH, DDEATH, NDATA, DDATA, NCMD, DCMD.
- `edge_node_id` — The identifier for the edge gateway or edge-of-network device (e.g., Line1_Gateway).
- `device_id` — (Optional) A specific device under the edge node (e.g., Saw_04). Omitted for node-level messages.

A real example: spBv1.0/Cleveland_Plant/DDATA/Line1_Gateway/Saw_04 — this is a data message from device Saw_04 on edge node Line1_Gateway in the Cleveland_Plant group.
Because the topic structure is fixed, any Sparkplug-aware application can parse the group, node, and device identity from any message without configuration. You can subscribe to spBv1.0/# and immediately know where every metric is coming from.
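Because the grammar is fixed, parsing can be done mechanically. A minimal Python sketch (the `SparkplugTopic` dataclass and `parse_topic` function are illustrative names, not part of the spec):

```python
from dataclasses import dataclass
from typing import Optional

MESSAGE_TYPES = {"NBIRTH", "NDEATH", "DBIRTH", "DDEATH",
                 "NDATA", "DDATA", "NCMD", "DCMD"}

@dataclass
class SparkplugTopic:
    group_id: str
    message_type: str
    edge_node_id: str
    device_id: Optional[str] = None  # absent on node-level messages

def parse_topic(topic: str) -> SparkplugTopic:
    """Split a Sparkplug B topic into its fixed components."""
    parts = topic.split("/")
    if len(parts) not in (4, 5) or parts[0] != "spBv1.0":
        raise ValueError(f"not a Sparkplug B topic: {topic!r}")
    if parts[2] not in MESSAGE_TYPES:
        raise ValueError(f"unknown message type: {parts[2]!r}")
    return SparkplugTopic(
        group_id=parts[1],
        message_type=parts[2],
        edge_node_id=parts[3],
        device_id=parts[4] if len(parts) == 5 else None,
    )
```

Fed the example topic above, `parse_topic` yields group `Cleveland_Plant`, type `DDATA`, node `Line1_Gateway`, device `Saw_04`.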
Sparkplug B defines exactly eight message types. Each serves a specific purpose in the device lifecycle:
| Message Type | Direction | Purpose |
|---|---|---|
| NBIRTH | Edge Node → Broker | Node birth certificate. Published when the edge node connects. Contains the full list of node-level metrics with current values, data types, and metadata. |
| NDEATH | Broker (LWT) | Node death certificate. Registered as the MQTT Last Will message. Published by the broker when the edge node disconnects unexpectedly. |
| DBIRTH | Edge Node → Broker | Device birth certificate. Published for each device connected through the edge node. Contains all device-level metrics. |
| DDEATH | Edge Node → Broker | Device death certificate. Published by the edge node when a downstream device disconnects or becomes unavailable. |
| NDATA | Edge Node → Broker | Node data. Publishes changed metric values at the node level. Only metrics that have changed since the last publish are included (report by exception). |
| DDATA | Edge Node → Broker | Device data. Publishes changed metric values for a specific device. This is the most frequent message type in a running system. |
| NCMD | Application → Edge Node | Node command. Sent by a host application (SCADA, MES) to write values to the edge node itself. |
| DCMD | Application → Edge Node | Device command. Sent by a host application to write values to a specific device through the edge node. Used for setpoint writes, recipe downloads, and control actions. |
The naming convention is consistent: N = Node, D = Device. BIRTH = came online, DEATH = went offline, DATA = metric values, CMD = write command.
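Because the convention is mechanical, a consumer can decompose any type name in two table lookups. A tiny illustrative helper (not part of any Sparkplug library):

```python
def classify(message_type: str) -> tuple[str, str]:
    """Split a Sparkplug B message type into its scope and event,
    per the N/D + BIRTH/DEATH/DATA/CMD naming convention."""
    scopes = {"N": "node", "D": "device"}
    events = {"BIRTH": "came online", "DEATH": "went offline",
              "DATA": "metric values", "CMD": "write command"}
    return scopes[message_type[0]], events[message_type[1:]]
```

So `classify("DDATA")` reports a device-scoped metric-values message, and `classify("NCMD")` a node-scoped write command.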
The birth/death mechanism is the most important innovation in Sparkplug B. It solves the "late joiner" problem that plagues plain MQTT deployments.
When an edge node connects to the broker, the first thing it does (after registering its NDEATH as the MQTT Last Will) is publish an NBIRTH message. This message contains every metric the node exposes: name, data type, current value, and optional metadata like engineering units or description. Immediately after, it publishes a DBIRTH for each device it manages.
The effect is powerful: any application that subscribes to spBv1.0/+/NBIRTH/# and spBv1.0/+/DBIRTH/# immediately receives the complete metric catalog and current state of every device on the network. No configuration files, no tag databases, no manual mapping. The network describes itself.
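The metric catalog a consumer builds from those BIRTH messages can be sketched in a few lines. This assumes the protobuf payloads have already been decoded into plain dicts with a `metrics` list; the shape shown here is a simplification, not the actual Sparkplug protobuf object:

```python
from typing import Optional

# (group, edge_node, device) -> {metric name -> type and last value}
catalog: dict[tuple[str, str, Optional[str]], dict[str, dict]] = {}

def on_birth(group: str, node: str, device: Optional[str], payload: dict) -> None:
    """Record every metric announced in an NBIRTH or DBIRTH message."""
    catalog[(group, node, device)] = {
        m["name"]: {"type": m["datatype"], "value": m["value"]}
        for m in payload["metrics"]
    }

# Example: a DBIRTH announces the device's full metric set.
on_birth("Cleveland_Plant", "Line1_Gateway", "Saw_04", {
    "metrics": [
        {"name": "Temperature", "datatype": "Float", "value": 71.5},
        {"name": "Running", "datatype": "Boolean", "value": True},
    ],
})
```

After processing all BIRTH messages, the catalog holds the name, type, and current value of every metric on the network, with zero manual configuration.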
The NDEATH message is registered as the MQTT Last Will when the edge node connects. If the TCP connection drops (power loss, network failure, crash), the broker automatically publishes the NDEATH on behalf of the dead node. The NDEATH includes a bdSeq (birth/death sequence number) that matches the corresponding NBIRTH, so consumers can correlate exactly which session ended.
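The bdSeq correlation a consumer performs can be sketched as follows, again assuming payloads pre-decoded into plain dicts (the spec defines the `bdSeq` metric; the function names here are illustrative):

```python
# edge_node_id -> bdSeq of the session currently believed live
live_sessions: dict[str, int] = {}

def get_bdseq(payload: dict) -> int:
    """Extract the bdSeq metric from a BIRTH or DEATH payload."""
    for m in payload["metrics"]:
        if m["name"] == "bdSeq":
            return m["value"]
    raise ValueError("payload has no bdSeq metric")

def on_nbirth(node: str, payload: dict) -> None:
    live_sessions[node] = get_bdseq(payload)

def on_ndeath(node: str, payload: dict) -> bool:
    """True if this NDEATH ends the session we believe is live.
    A stale NDEATH (old bdSeq arriving after a fast reconnect)
    does not match and can be ignored."""
    return live_sessions.get(node) == get_bdseq(payload)
```

The stale-NDEATH case is why bdSeq exists: after a quick disconnect and reconnect, the broker may deliver the old session's Last Will after the new NBIRTH, and the sequence number lets consumers discard it.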
For device-level deaths, the edge node itself publishes DDEATH when it detects a downstream device has gone offline (e.g., a PLC that stops responding to polls). This distinction matters: the edge node is still alive and reporting, but a specific device under it has failed.
In a plain MQTT system, if you subscribe to a topic and nothing arrives, you do not know if the device is offline, if the topic is wrong, or if the device simply has not published yet. With Sparkplug B, you always know. Every device is either in a BIRTH state (online, here are its metrics) or a DEATH state (offline, here is when it died). There is no ambiguity.
Sparkplug B payloads are encoded using Google Protocol Buffers (protobuf), a compact binary serialization format. This is a deliberate choice over JSON, and the reasons are practical:
Sparkplug B metrics carry explicit data types: Int8, Int16, Int32, Int64, UInt8, UInt16, UInt32, UInt64, Float, Double, Boolean, String, DateTime, Bytes, and more. The consumer does not have to guess whether "temperature": 72 is an integer or a float — the type is encoded in the payload.

The protobuf schema (the .proto file) is publicly available in the Eclipse Sparkplug specification. Any language with protobuf support — Python, Java, C, Go, JavaScript — can decode Sparkplug B payloads.
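To get a feel for the size difference, compare a single float sample as JSON text versus as a packed binary value. This is a deliberately simplified illustration, not the actual Sparkplug B wire encoding:

```python
import json
import struct

# One temperature sample as self-describing JSON text...
as_json = json.dumps({"name": "temperature", "value": 72.3}).encode()

# ...versus a hypothetical packed form: a 1-byte type tag
# plus an 8-byte IEEE 754 double.
as_binary = struct.pack("<Bd", 10, 72.3)

# The JSON text is several times larger than the 9-byte binary value.
print(len(as_json), len(as_binary))
```

Real Sparkplug payloads amplify the effect: protobuf field tags replace repeated JSON key strings, and metric names can be replaced by numeric aliases after the BIRTH message.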
Sparkplug B is the technical foundation for what the industry calls the Unified Namespace (UNS) — the idea that all operational data from an entire plant (or enterprise) is published to a single MQTT broker, and any authorized consumer can subscribe to any data.
This is a fundamental shift from traditional manufacturing IT architecture, where data moves through point-to-point integrations between individual systems, each link a custom interface to build and maintain.
Sparkplug B makes this feasible because of its self-describing nature. A new application that connects to the broker receives BIRTH messages for every online device, learns all available metrics and their types, and starts processing data immediately. No tag export, no CSV import, no manual configuration.
The UNS also eliminates the "data silo" problem. When your MES, your OEE calculator, and your quality system all read from the same data stream, there is one version of the truth. The machine state that the operator sees on the HMI is the same state that the MES uses to make scheduling decisions.
| Capability | Plain MQTT | Sparkplug B |
|---|---|---|
| Topic structure | Freeform — every vendor invents their own | Fixed: spBv1.0/group/type/node/device |
| Payload format | Unspecified (JSON, CSV, binary, XML) | Protocol Buffers with typed metrics |
| Device lifecycle | Basic Last Will only | Full birth/death certificates with sequence numbers |
| State awareness | None — late joiners get nothing until next publish | BIRTH messages provide full state on connect |
| Metric metadata | Not defined | Data type, timestamp, engineering units, description |
| Interoperability | Only between identical implementations | Any compliant client can consume any compliant publisher |
| Bandwidth efficiency | Depends on implementation | Report by exception + protobuf compression |
To be clear, Sparkplug B is not a replacement for MQTT. It is a layer on top of it. Every Sparkplug B message is a valid MQTT message. You can use a standard MQTT broker (Mosquitto, HiveMQ, EMQX) without modification — though some brokers offer Sparkplug-aware features like topic aliasing and metric indexing.
The Sparkplug B specification was originally created by Cirrus Link Solutions and is now maintained by the Eclipse Foundation as an open standard under the Eclipse Sparkplug project, and it has broad industry support.
The specification is open and royalty-free. Anyone can implement it. The protobuf definition files, the specification document, and reference implementations are all available on the Eclipse Sparkplug project page.
Sparkplug B uses a report by exception (RBE) model for data messages. After the initial BIRTH (which contains all metrics with current values), subsequent DDATA and NDATA messages only include metrics whose values have changed since the last publish.
This is critical for bandwidth efficiency. A PLC might expose 500 tags, but in any given scan cycle, only 10–20 might change. Sparkplug B sends only those 10–20 changed values, not all 500. The consuming application maintains a local cache of the last known values (seeded from the BIRTH message) and merges incoming DDATA updates into that cache.
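The cache-and-merge pattern is simple enough to sketch directly. As before, payloads are assumed to be pre-decoded dicts rather than raw protobuf:

```python
# Last-known-value cache for one device, seeded from its DBIRTH.
cache: dict[str, object] = {}

def on_dbirth(payload: dict) -> None:
    """Reset the cache to the full metric set announced at birth."""
    cache.clear()
    cache.update({m["name"]: m["value"] for m in payload["metrics"]})

def on_ddata(payload: dict) -> None:
    """Merge a sparse update: only changed metrics arrive,
    everything else keeps its cached value."""
    cache.update({m["name"]: m["value"] for m in payload["metrics"]})

on_dbirth({"metrics": [{"name": "Temperature", "value": 71.5},
                       {"name": "Running", "value": True}]})
on_ddata({"metrics": [{"name": "Temperature", "value": 72.1}]})
# cache now holds the updated Temperature and the unchanged Running flag
```

The key invariant: the cache is only valid if it was seeded from a BIRTH message, which is exactly why the rebirth mechanism below exists.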
If the edge node or a consuming application loses sync (e.g., after a network partition), the Sparkplug B spec defines a rebirth mechanism: the host application sends an NCMD requesting the edge node to re-publish its NBIRTH and all DBIRTHs. This resets the cache and restores full state awareness.
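The spec exposes rebirth as a writable "Node Control/Rebirth" metric on the edge node; a host application requests it with an NCMD that sets the metric true. A hedged sketch of building that request (the `rebirth_request` helper is illustrative, and the payload is shown as a dict rather than encoded protobuf):

```python
import time

def rebirth_request(group: str, node: str) -> tuple[str, dict]:
    """Build the topic and payload for a Sparkplug rebirth NCMD."""
    topic = f"spBv1.0/{group}/NCMD/{node}"
    payload = {
        "timestamp": int(time.time() * 1000),  # Sparkplug uses epoch millis
        "metrics": [{"name": "Node Control/Rebirth",
                     "datatype": "Boolean", "value": True}],
    }
    return topic, payload
```

On receipt, a compliant edge node re-publishes its NBIRTH followed by every DBIRTH, letting the host rebuild its cache from scratch.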
PulseMQ is built on Sparkplug B from the ground up. When a PLC edge node publishes an NBIRTH, PulseMQ automatically discovers the device and all its metrics — no manual tag configuration required. The birth/death lifecycle drives real-time device health tracking: the platform knows instantly when a machine goes offline and when it returns. DCMD messages flow back from the MES layer to the edge node for recipe downloads, job dispatches, and control setpoints. Learn more about the MQTT architecture or explore the full PulseMQ platform.
PulseMQ auto-discovers your devices, tracks machine state, and gives you a real-time unified namespace — no manual tag mapping required.
Contact Sales