What does a modern pipeline look like for embedded?

A modern pipeline automates the tooling you need to ensure high software quality and consistent results. Although the up-front setup cost is higher, once a modern pipeline (and its corresponding practices) is in place, the development team will be more agile, more flexible, and more responsive to evolving customer needs.

Ask yourself how many of these problems are familiar:

Inconsistent Build Process

  • The wrong build process is followed or the wrong build artifacts are sent to the contract manufacturer (CM), resulting in hundreds or thousands of boards that need to be reloaded.
  • A release was issued from the wrong branch.
  • Developers are unable to identify the code commit corresponding to firmware running in a customer’s device or an RMA’d unit.
  • One or more stakeholders (manufacturing, management, QA, other developers) do not know where to find the build artifacts for a given release.

Bug Whack-a-Mole

  • A bug that would have been identified by static code analysis was caught only after widespread release.
  • A bug that could have been identified with the right tests was caught only after widespread release.
  • A bug was fixed only to silently introduce a new bug.

Lack of Automation

  • The build process is complex and requires tribal knowledge. The organization is dependent on one or two developers to be able to perform the build.
  • Testing is ad hoc and manual. Tests are not rerun as necessary, resulting in inconsistent QA practices.
  • Installing the build environment on a new system is a complex and time-consuming process.
  • The same build performed by two different developers results in differing binaries.
  • Static analysis and code formatting tools are not regularly used, leading to a lower overall quality of code and lax programming practices.

Modern pipelines are here to ease the pain. What do these pipelines bring to the table that didn’t exist before, and what value do they deliver?

Consistent Builds Through Containerized Build Environments

Making sure everyone has the same version of the build tools installed is a chore, and the more ancillary steps involved, the worse it gets. Minifying embedded webpages, autogenerating code from configuration files, building filesystem images, and assembling release packages are all examples of build steps that fall outside the standard compile and link stages. If these steps aren’t automated, with specific tool versions pinned, how do you know that Bob is creating the same release artifacts as Steve? The sad truth is, you don’t, and your organization is setting itself up for risk.

This is where containers come in. A container is a lightweight, isolated environment in which you can pin everything from the base OS image down to the version of every tool installed. Better yet, containers can be hosted in an online registry, meaning everyone on the team has access to the same image. The CI pipeline can use that same container as well, giving consistency across the board.

The industry standard for containerization is Docker. A Docker dev environment can help you synchronize your build process across your team.
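As a rough sketch, a Dockerfile for an embedded build environment might pin the base image and toolchain versions explicitly. The image, package, and tag names below are hypothetical examples, not recommendations:

```dockerfile
# Hypothetical build-environment image: pin the base image and every
# tool version so the environment is reproducible on any machine.
FROM ubuntu:22.04

RUN apt-get update && apt-get install -y --no-install-recommends \
        make \
        cmake \
        python3 \
        gcc-arm-none-eabi \
    && rm -rf /var/lib/apt/lists/*

WORKDIR /workspace
```

Once built and pushed to a registry (e.g. `docker push myregistry/fw-build:1.0`), every developer and every CI job pulls the identical image, so Bob and Steve build with exactly the same toolchain.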

Better Coordination Through Improved Cloud Services

Some of us are old enough to remember the days of locally hosted SVN servers. Or worse yet, Visual SourceSafe. Modern Git repos managed through SaaS providers such as GitLab and Bitbucket offer much more streamlined access. Developers can clone the repo locally and contribute to the online repo via SSH or access tokens. The organization no longer needs to host the source control server or worry about VPNs and domain management.

Better yet, these services offer tooling to trigger pipeline jobs on specific actions. For example, you might run unit tests and static analysis on every commit but perform release builds only when certain tags are added, as in the sketch below. Developers can get emails informing them when the build has been broken and when it’s been fixed. Automation has become easier and is now straightforward to apply consistently across the whole team.
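In GitLab CI syntax, for instance, that trigger logic is just a few lines of rules. This is a minimal sketch; the job names and script paths are hypothetical:

```yaml
unit-tests:
  script: ./scripts/run_unit_tests.sh     # hypothetical script path
  rules:
    - if: '$CI_COMMIT_BRANCH'             # run on every branch commit

release-build:
  script: ./scripts/release_build.sh      # hypothetical script path
  rules:
    - if: '$CI_COMMIT_TAG =~ /^v\d+/'     # run only when a version tag is pushed
```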

Better Automation Through Improved Scripting Capabilities

If you’re an old hand in the embedded industry, then you’re familiar with Keil and IAR and their lock-in to the Windows platform. These tools are designed to be license-locked, typically to a FlexLM license server with limited seats or to a USB dongle, and they are not conducive to online builds in a cloud pipeline. Although these vendors have since added support for CI, it may require additional license fees. IAR now offers a Linux release, largely due to the pressure to support CI.

Of particular note, however, is the tie-in to Windows. Until recently, Windows batch files were about your only option for scripting. Python has changed all that: with Python, it is possible to write truly platform-agnostic scripts. And the scene keeps improving. Node and Pipenv now also provide platform-agnostic scripting with Windows support, and Git Bash and WSL allow running .sh scripts on Windows.
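As a small, hedged illustration of what platform-agnostic means in practice, the script below packages a firmware build the same way on Windows, macOS, and Linux. The paths and file names are hypothetical:

```python
#!/usr/bin/env python3
"""Package a firmware build -- runs unchanged on Windows, macOS, and Linux."""
import pathlib
import shutil
import subprocess

BUILD_DIR = pathlib.Path("build")      # pathlib handles / vs \ separators
RELEASE_DIR = pathlib.Path("release")

def main() -> None:
    RELEASE_DIR.mkdir(exist_ok=True)
    # Invoke the build identically on every OS.
    subprocess.run(["cmake", "--build", str(BUILD_DIR)], check=True)
    # Copy the firmware image into the release package.
    shutil.copy2(BUILD_DIR / "firmware.bin", RELEASE_DIR / "firmware.bin")

if __name__ == "__main__":
    main()
```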

Stages in modern pipelines are based primarily on Bash and Python, and all the SaaS vendors fully support both. Docker containers (with a true Linux environment inside) can be run on Windows as well as Mac. Scripts can now run consistently regardless of a developer’s personal laptop configuration.

Enforcement of Consistent Practices via the Cloud

In the old days of locally hosted servers, even if an organization had developed scripts to automate its processes, it was still up to each individual developer to know about them and use them. Now these scripts can be triggered automatically when code is pushed, a pull request is completed, or a release tag is added. Which CI stages you define, and when they are triggered, is up to you, and all SaaS vendors provide some way of defining this. Having this in place ensures that unit tests, static analysis, and development and release builds are always performed, and performed consistently, any time any developer contributes code.

How does this work? Typically there is a YAML file that defines the CI stages (unit test, static analysis, build, etc.), the Docker image to use and the script to execute for each stage, the rules for when to execute each stage, and, where applicable, where to store artifacts.
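Using GitLab CI as one example, a minimal version of such a file might look like this. It is a sketch only; the image name, script paths, and stage layout are assumptions, not a prescription:

```yaml
# .gitlab-ci.yml -- hypothetical minimal embedded pipeline
image: myregistry/fw-build:1.0    # shared Docker build environment

stages:
  - static-analysis
  - unit-test
  - build

static-analysis:
  stage: static-analysis
  script: cppcheck --error-exitcode=1 src/

unit-test:
  stage: unit-test
  script: ./scripts/run_unit_tests.sh

build:
  stage: build
  script: ./scripts/build.sh
  artifacts:
    paths:
      - build/firmware.bin        # stored as a pipeline artifact
```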

Consistent, Well-Defined Access to Build Outputs

Perhaps one of the biggest benefits of using a modern pipeline is having your build artifacts in a consistent, well-defined location. Not only do the developers and manufacturing know where to grab the release package, but you can also rest assured those build outputs were built in a consistent, repeatable way following an official process.

All CI systems will allow you to define artifacts for any stage of the CI pipeline. Typically this is used for build outputs, but it can also be used to save test coverage reports and the like.

Artifact storage is either temporary or permanent. For daily pushes and merge requests, the artifacts are temporary and will disappear after a predefined amount of time. For tagged releases, you can specify that the CI system store these artifacts indefinitely.
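In GitLab CI, for instance, that distinction is a one-line setting per job (again a hedged sketch with hypothetical script paths):

```yaml
dev-build:                        # day-to-day builds: artifacts expire
  script: ./scripts/build.sh
  artifacts:
    paths: [build/firmware.bin]
    expire_in: 1 week

release-build:                    # tagged releases: keep artifacts forever
  script: ./scripts/release_build.sh
  rules:
    - if: '$CI_COMMIT_TAG'
  artifacts:
    paths: [release/]
    expire_in: never
```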

And finally, it may sound like something of a pipe dream for embedded, but these tools allow you to take things a step further and set up continuous deployment. Many devices are now designed to have a regular connection to a cloud service or mobile app, and official releases can be pushed to these services to automatically update the devices that connect to them.
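Continuing the GitLab sketch, a deployment stage is just one more job. The upload script and the $OTA_SERVICE_URL variable here are hypothetical stand-ins for whatever API your device-management service exposes:

```yaml
deploy:
  stage: deploy
  script:
    # Hypothetical upload to an OTA/device-management service.
    - ./scripts/upload_release.sh "$OTA_SERVICE_URL" release/firmware.bin
  rules:
    - if: '$CI_COMMIT_TAG'        # deploy only official, tagged releases
```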

So, What Should My Pipeline Look Like?

The truth is there is no one answer. Which CI system are you using: GitLab, Bitbucket, CircleCI, Jenkins? Which build system: Keil, IAR, SEGGER Embedded Studio, Make, CMake? Which unit testing framework: Unity, gtest, CppUnit? Which code analysis tools: Coverity, Cppcheck, Lizard, clang-format? Do you want to run any additional stages, such as document generation with Doxygen?

A modern pipeline, once set up, is not something developers should spend much (if any) time on in their day-to-day routine; the whole point is automation. Many organizations will argue they don’t have the time for such up-front costs. That thinking is often shortsighted, however: it is surprisingly easy for a project to become burdened by technical debt and siloed knowledge, at which point development slows down, switching developers is impossible, and the process is anything but agile. The truth is that high-quality code is much cheaper in the long term.

Does this all sound overwhelming? Setting up a CI pipeline can be time-consuming and sometimes frustrating, but worry no more: Dojo Five has you covered. Our Embedded CI Platform will do the dirty jobs for you, and our EmbedOps team is here to help. For more information, visit the Dojo Five website.