Intro
Protobuf is a language-neutral, platform-neutral, extensible mechanism for serializing structured data that Google originally created internally in 2001; it is now on its third major revision (proto3). Since its public release, it has become a very popular serialization solution, including in the embedded world.
The most prominent library that implements protobuf encoding/decoding for embedded use is nanopb, an open-source ANSI C library developed specifically for embedded systems, with minimal requirements for RAM and code space. It's also included as a serialization option in Zephyr: https://docs.zephyrproject.org/latest/services/serialization/nanopb.html
Since this library is so widely used, it's worth diving into some of the configuration options available for it. Below are some pitfalls and configuration changes you should consider when using nanopb in your own projects.

Trap #1 Too Many Submessages
Depending on how protobufs and nanopb are used, a significant roadblock can be the speed of encoding protobufs to send over the wire. By default, nanopb sacrifices some speed for code size.
The easiest pitfall to run into is defining proto messages with lots of submessages. During encoding, nanopb actually encodes each submessage twice: the first pass calculates the submessage's size (needed for its length prefix), and the second pass writes it into the output buffer. It's easy to see how quickly this can inflate encoding time as more submessages are added. As always, it's a trade-off, this time between organization (submessages are a helpful way to group related data) and the time it takes to encode.
To avoid this, keep the number of submessages minimal in any protobuf messages that will be encoded or decoded on your embedded system. One or two submessages (or even a couple of nested levels) can be totally fine, depending on your system, but because the sizing pass for an outer message re-runs the encoding of everything inside it, getting 5 or 10 levels deep starts to seriously affect the encoding process.
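To make that concrete, here's a minimal sketch of what the encode call looks like for a nested message. The report.proto schema, the NestedReport message, and its fields are all hypothetical; NestedReport_fields follows nanopb's usual generated naming.

```c
/* report.proto (hypothetical), three levels deep:
 *
 *     message NestedReport {
 *       message Motor {
 *         message Telemetry { int32 rpm = 1; int32 current_ma = 2; }
 *         Telemetry telemetry = 1;
 *       }
 *       Motor motor     = 1;
 *       int64 uptime_ms = 2;
 *     }
 */
#include <pb_encode.h>
#include "report.pb.h"   /* header generated by the nanopb plugin */

bool encode_report(const NestedReport *msg,
                   uint8_t *buf, size_t len, size_t *written)
{
    pb_ostream_t stream = pb_ostream_from_buffer(buf, len);

    /* Each submessage field (motor, and telemetry inside it) is first run
     * through a sizing pass to compute its length prefix, then encoded for
     * real; every extra level of nesting repeats that double-encode on
     * everything beneath it. Hoisting rpm/current_ma up into NestedReport
     * would remove those extra passes. */
    if (!pb_encode(&stream, NestedReport_fields, msg)) {
        return false;
    }

    *written = stream.bytes_written;
    return true;
}
```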
Trap #2 Streams Abstraction
Another pitfall is that nanopb's default configuration builds its encoding around a stream abstraction. That's a bonus if you actually need to read from or write to streams, but if your design only ever has nanopb writing to memory buffers, the default setup leaves performance on the table.
Happily, the developer of nanopb recognized that trade-off and offers a compile-time flag to restrict nanopb to memory buffers only. The aptly named PB_BUFFER_ONLY define conditionally removes the extra callback logic, cutting down nanopb's final code size and speeding up encoding through fewer function calls. Adding this define to your compiler flags, or adding the #define to the top of your copy of the nanopb library, is a quick and easy change to speed up your in-memory protobuf encoding.
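As a sketch of one way to wire this in (the build-system lines are shown as comments; the #error guard is optional, just a way to catch a partially applied define):

```c
/* Enable buffer-only mode for the whole build, for example:
 *
 *     CFLAGS += -DPB_BUFFER_ONLY               # Makefile
 *     add_compile_definitions(PB_BUFFER_ONLY)  # CMake
 *
 * or add the #define at the top of pb.h in your copy of nanopb, as noted
 * above. The define has to be visible to nanopb's own sources (pb_encode.c,
 * pb_decode.c, pb_common.c) as well as your application code, since it
 * changes the stream structures. */
#include <pb.h>

/* Optional guard: fail the build loudly if some translation unit is compiled
 * without the flag, instead of silently mixing incompatible stream layouts. */
#ifndef PB_BUFFER_ONLY
#error "Build nanopb with PB_BUFFER_ONLY defined for buffer-only encoding"
#endif
```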
How avoiding these traps can help
The Issue
We were sending a series of small but deeply nested protobuf messages over to another module, so each message had to be encoded via nanopb before it was sent over the wire. The plan was to send a message every 100 milliseconds, but we were occasionally missing that deadline, and D5 was tasked with hunting down why things were taking so long.
Before
We could track this timing by toggling a GPIO on and off around the process that was taking so long and capturing it with a logic analyzer. Slowly drilling down into the slowest part of the process showed that encoding a message took roughly 1.5 ms, not including the time to calculate the data to send, which added another 0.5 ms to each message.
[Screenshot: a series of protobuf message encodings captured by PulseView]
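For reference, the instrumentation itself can be as simple as toggling a spare pin around the encode call. A sketch assuming a Zephyr target, a hypothetical probe0 devicetree alias for the spare pin, and the hypothetical NestedReport message from earlier:

```c
#include <zephyr/drivers/gpio.h>
#include <pb_encode.h>
#include "report.pb.h"   /* hypothetical generated header, as before */

/* Spare pin used purely as a timing probe for the logic analyzer.
 * "probe0" is a hypothetical devicetree alias; configure it as an output
 * (e.g. gpio_pin_configure_dt(&probe, GPIO_OUTPUT_INACTIVE)) during init. */
static const struct gpio_dt_spec probe =
    GPIO_DT_SPEC_GET(DT_ALIAS(probe0), gpios);

bool encode_report_timed(const NestedReport *msg,
                         uint8_t *buf, size_t len, size_t *written)
{
    pb_ostream_t stream = pb_ostream_from_buffer(buf, len);

    gpio_pin_set_dt(&probe, 1);   /* pin high: encode starts  */
    bool ok = pb_encode(&stream, NestedReport_fields, msg);
    gpio_pin_set_dt(&probe, 0);   /* pin low: encode finished */

    *written = stream.bytes_written;
    return ok;
}
```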
After
After removing most of the submessages and setting nanopb to buffer-only mode, encoding took roughly 0.6 ms, again as tracked by the logic analyzer.
[Screenshot: another PulseView capture, showing the reduced encoding times]
Cutting the encoding process down to 40% of the original time is a significant improvement, especially considering that one of the two changes was simply setting a single config value in nanopb! Reducing the number of submessages in a protobuf message may be more complicated depending on who is currently using the messages, but you can always deprecate fields and gradually move things over. If, of course, you know how deprecation support works…
Trap #3 Deprecation Support
The current protobuf version 3 implementation supports a deprecated option that can be added to a particular field to mark its status. However, Google's proto compiler does not do anything with that option unless it's generating Java or C++ code, and nanopb, as an ANSI C library, ignores the field option as well. So how can you enforce deprecated fields? Before 2024, if you wanted to deprecate a field for nanopb, you could set the FT_IGNORE option on it, which removes the field from the generated code entirely, making any use of it a hard failure rather than the warning Google recommends. A small improvement landed in early 2024 that added a discard_deprecated option to nanopb, which, when coupled with the official deprecated tag, does the same thing as the FT_IGNORE option: https://github.com/nanopb/nanopb/issues/997
So the deprecated flag is set up to be used in the future, but for now we still have to do a hard changeover and remove the field entirely. When doing this, though, Google's current documentation gives an important note on removing fields:
If the field is not used by anyone and you want to prevent new users from using it, consider replacing the field declaration with a reserved statement.
This prevents the field number from being re-used by future developers, which can cause serious issues later on.
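As an illustration of that whole lifecycle (the sensor.proto schema, its fields, and the .options entry are all hypothetical; the options-file syntax is as described in the nanopb reference documentation):

```c
/* sensor.proto (hypothetical):
 *
 *     message SensorReport {
 *       int32 temperature = 1;
 *       int32 raw_adc     = 2 [deprecated = true];  // slated for removal
 *       int32 humidity    = 3;
 *     }
 *
 * sensor.options (nanopb options file), forcing the field out of the
 * generated code:
 *
 *     SensorReport.raw_adc  type:FT_IGNORE
 *
 * Once nothing references the field any more, delete it from the .proto and
 * reserve its number and name so they can't be reused:
 *
 *     message SensorReport {
 *       reserved 2;
 *       reserved "raw_adc";
 *       int32 temperature = 1;
 *       int32 humidity    = 3;
 *     }
 */
#include "sensor.pb.h"   /* hypothetical header generated by nanopb */

void fill_report(SensorReport *report)
{
    /* The generated struct now contains only the surviving fields, so any
     * code still touching report->raw_adc fails to compile -- the "hard
     * failure" described above. */
    report->temperature = 23;
    report->humidity = 40;
}
```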
Conclusion
Nanopb is used in Zephyr and elsewhere for good reason: it's open source and a solid protobuf implementation. But without understanding the underlying system, it's easy to run into issues that slow down your development process and, ultimately, your product!
Interested in working with our embedded experts to discuss your development challenges? Schedule here.
Looking for a more productive CI build product? Sign up for EmbedOps.
References
- The main documentation for nanopb: https://jpa.kapsi.fi/nanopb/docs/reference.html
- The on-the-wire encoding process of protobuf messages: https://protobuf.dev/programming-guides/encoding/
- Recommendation from the creator of nanopb on speeding up nanopb encoding: https://stackoverflow.com/questions/73146576/nanopb-how-to-optimize-encoding-for-runtime-speed


