Using Docker for Embedded Development

By leveraging Docker images, development teams can completely specify and control the development environment without the overhead and bloat of an entire build machine, virtual or otherwise. These same images save time in the future by not having to perform Software Archaeology to reproduce the build environment to track down a bug, make an improvement, or add a new feature.


What is Docker?

Docker is an open source containerization platform which enables developers to build, run, and manage self-sufficient containers. Containers offer an abstraction over the host operating system, allowing you to specify the exact version of the operating system, tools, and libraries used to build and/or run an application.

What’s the value?

In the context of Embedded Software development, Docker’s primary value is in offering a repeatable build environment. Repeatability is important for several reasons, the most obvious being that regardless of the machine an artifact is built on, you’ll get the same output given the same input (e.g. your source code repository at a particular tag or commit hash).

Repeatability also means that recreating the build environment is trivial. This is thanks to the fact that Docker containers are created from a read-only template called an image, which can easily be built, published to a container registry, then pulled down by anyone with access to that container registry, all using the Docker CLI or Docker Desktop.

Not only does this benefit your team when onboarding a new developer or setting up a new build server, but also when you inevitably need to change something about your build environment. Rather than documenting the steps to update the environment and hoping that everyone on your team follows them (and ends up with the same result), you can update your docker image and publish a new version to your container registry. Your updates to the build environment are codified and carried out by a machine, rather than (lossily) translated into human instructions and carried out by a (potentially fallible) human.

Getting to know Docker

As opposed to a full virtual machine with its own OS kernel and hardware drivers, Docker containers have much less overhead, as they are just isolated processes that run on your host OS. The Docker engine (the thing that runs your containers) may itself run on top of a virtualization layer (depending on your host OS), but if you run multiple containers simultaneously, they will all use the same kernel.

As mentioned in the previous section, Docker containers are created from templates called images. This distinction is important: the image is used for creating containers, and containers are the things you actually run when using Docker. Keep this in mind when deciding what steps should be run during image creation and what steps should be run during container execution.

Let’s create a Docker image to build a simple embedded software application. This section assumes you have already installed Docker on your system; instructions for that can be found on Docker’s website. When it comes to simple embedded software applications, they don’t get much simpler than the application outlined in Freedom Embedded’s “Simplest bare metal program for ARM” article [1].

We’ll start by creating a new directory and placing the test.c, startup.s, and test.ld files from that article in the new directory. Here are the contents of those files for convenience:

// test.c
int c_entry() {
  return 0;
}
// startup.s
.section INTERRUPT_VECTOR, "x"
.global _Reset
_Reset:
  B Reset_Handler /* Reset */
  B . /* Undefined */
  B . /* SWI */
  B . /* Prefetch Abort */
  B . /* Data Abort */
  B . /* reserved */
  B . /* IRQ */
  B . /* FIQ */
 
Reset_Handler:
  LDR sp, =stack_top
  BL c_entry
  B .
/* test.ld */
ENTRY(_Reset)
SECTIONS
{
 . = 0x0;
 .text : {
 startup.o (INTERRUPT_VECTOR)
 *(.text)
 }
 .data : { *(.data) }
 .bss : { *(.bss COMMON) }
 . = ALIGN(8);
 . = . + 0x1000; /* 4kB of stack memory */
 stack_top = .;
}

We’ll also add a simple bash script named build.sh in order to run the build commands shown in that article from one shell command:

#!/bin/bash -e

arm-none-eabi-gcc -c -mcpu=arm926ej-s -g test.c -o test.o
arm-none-eabi-as -mcpu=arm926ej-s -g startup.s -o startup.o
arm-none-eabi-ld -T test.ld test.o startup.o -o test.elf

Now, we’ll add a file named Dockerfile to the project directory, with the following contents:

FROM alpine:3.14
RUN apk add --no-cache \
	gcc-arm-none-eabi=10.2.0-r2 \
	bash=5.1.16-r0
WORKDIR /src

This file is going to tell Docker how to build our image. The first line, FROM alpine:3.14, tells Docker that we want to base our image on the official Alpine Linux image, version 3.14. If you’re unfamiliar with Alpine Linux, all you need to know at this point is that it is a minimal Linux distribution with a package manager named apk.

This is a good point to introduce the concept of layers. All Docker images are built from layers. Each layer contains a set of filesystem changes to apply to the image, and layers roughly correspond 1:1 to the instructions in your Dockerfile (strictly speaking, only instructions that modify the filesystem, such as RUN, COPY, and ADD, produce new layers). Layers have the nice side effect of making it easy to compose Docker images, which is what we’re doing by basing our image on the Alpine Linux image. If you want to create a very minimal Docker image, you can start with FROM scratch, which is essentially a no-op and gives you a blank slate to work from.
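You can inspect the layers that make up an image with the docker history command. For example, once we’ve built the image later in this article (tagged myimage), its layers can be listed with:

```shell
# List each layer of the image, the instruction that created it, and its size
docker history myimage
```

Each row corresponds to an instruction in the Dockerfile (or in the base image’s Dockerfile), which makes this a handy way to see which steps contribute most to your image’s size.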

One drawback of this layer system is that subsequent layers cannot shrink previous layers. If you add a large file to your Docker image during one step, use it temporarily, then remove it in a later step, your resulting image won’t actually be any smaller. To work around this, if you want a large file to exist only temporarily in your Dockerfile, you’ll need to download it (e.g. with curl or wget), use it, and remove it all in one RUN command (see Docker’s Dockerfile reference for the full list of commands).
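As a sketch of that workaround (the URL and paths here are placeholders, not part of our example project), downloading, extracting, and deleting an archive within a single RUN instruction keeps the archive out of the final image:

```dockerfile
# All three steps happen in one layer, so the archive itself
# never appears in the finished image.
RUN wget https://example.com/toolchain.tar.gz \
	&& tar -xzf toolchain.tar.gz -C /opt \
	&& rm toolchain.tar.gz
```

If the same steps were split across three RUN instructions, the first layer would permanently contain the archive, even though a later layer deletes it.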

The second line of our Dockerfile uses Alpine Linux’s package manager to install the ARM GCC toolchain and bash (to run our build script). If you were using this on a real project, you’d probably need a few more things, such as proper build tools like CMake, ninja, and/or make.

The final line of our Dockerfile sets the working directory (for when we run the container) to /src. This is the directory where we’ll place our project sources, and the name is entirely arbitrary. 

Now that we have a Dockerfile defining our image, we can build it. Run the following command from the directory your Dockerfile is in:

docker build -t myimage .

The -t argument in the build command just tells Docker what we want to name our image, and the . at the end of the command tells Docker which directory our Dockerfile is in. If the command completed successfully, you should now be able to run docker image ls and see myimage listed.

Now that we have an image, we can create a container from the image and run it. From the directory containing your build script and sources, run:

docker run -it --rm -v .:/src myimage /bin/bash

Let’s break down the arguments used in that command. The first set of arguments, -it, tells Docker to keep STDIN open and allocate a pseudo-TTY for your container. We’re using those arguments because we want to run an interactive shell in the container. The second argument, --rm, tells Docker that we want to automatically remove the container when it exits.

The next argument, -v .:/src, tells Docker that we want to mount a volume in the container. In this case we want the current directory (.) to be present in the container at /src. This is why we set our working directory (the last line of our Dockerfile) to /src – when we run the container with this command, we’ll be in a directory containing all of our sources. The final two arguments tell Docker the name of the image we want to use to create the container, and the command we want to run within the container.
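One caveat: support for relative host paths like . in -v is relatively recent, so on older Docker versions you may need to pass an absolute path. An equivalent command that expands the current directory in the shell is:

```shell
# Same as before, but with the host path made absolute by the shell,
# which also works on older Docker releases
docker run -it --rm -v "$(pwd)":/src myimage /bin/bash
```

The quotes around "$(pwd)" keep the command working even if your project path contains spaces.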

You should be greeted with a prompt that looks something like bash-5.1#. If you run ls you should be able to see all of the source files, the build script, and the Dockerfile (we don’t need to access this from the container for any reason, it just happens to be in our source directory). Go ahead and run the build script with:

# You may need to make the script executable first
chmod +x ./build.sh

./build.sh

Run ls again and you should see the test.elf file (as well as the test.o and startup.o object files) output by our build. For good measure you can inspect the elf to make sure everything compiled and linked as expected:

arm-none-eabi-objdump -t test.elf

Here’s the expected output:

test.elf: 	file format elf32-littlearm

SYMBOL TABLE:
00000000 l	d  .text  00000000 .text
00000000 l	d  .ARM.attributes    	00000000 .ARM.attributes
00000000 l	d  .comment   	00000000 .comment
00000000 l	d  .debug_line	00000000 .debug_line
00000000 l	d  .debug_info	00000000 .debug_info
00000000 l	d  .debug_abbrev  00000000 .debug_abbrev
00000000 l	d  .debug_aranges 00000000 .debug_aranges
00000000 l	d  .debug_str 	00000000 .debug_str
00000000 l	d  .debug_frame   00000000 .debug_frame
00000000 l	df *ABS*  00000000 startup.o
00000020 l   	.text  00000000 Reset_Handler
00000000 l	df *ABS*  00000000 test.c
00001050 g   	.text  00000000 stack_top
00000030 g 	F .text  0000001c c_entry
00000000 g   	.text  00000000 _Reset

You can see our interrupt vector table at address 0 (the symbol _Reset marks the start of it), as well as the Reset_Handler we defined in the startup file and the c_entry function we defined in test.c.

Running an interactive shell within your Docker container can be useful for bring-up and troubleshooting, but typically you’ll want to just run a single command that starts the container and runs the build for you, e.g.:

docker run --rm -v .:/src myimage ./build.sh

The arguments should look pretty familiar by now. Notably, we removed the -it arguments since we won’t be running an interactive shell within the container. You’ll probably get tired of typing that command and will want to put it in something like a build_in_container.sh script, but we’ll leave that as an exercise for the reader (and the details of this will probably differ based on what build system you’re using and your personal preferences).

Next Steps

This is just a minimal example, and we’ve only scratched the surface of what Docker is, how to write Dockerfiles, and so on. If you want to take a deep dive, or just need to reference the documentation to figure out how to do something specific, Docker has excellent documentation on its website.

If you want to use Docker for your firmware builds, you’ll probably want to share built images across your development team (rather than requiring everyone to rebuild the Docker image). For information on that, take a look at the documentation for registries. There are many hosted container registry options, including Docker, Amazon, and Google. Most CI providers also offer container registries, such as Azure, GitLab, and GitHub.
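Sharing an image generally amounts to tagging it with your registry’s path and pushing it. As a sketch (registry.example.com/team/build-env is a placeholder; substitute your own registry and repository name):

```shell
# Authenticate, tag the local image for the registry, and push it
docker login registry.example.com
docker tag myimage registry.example.com/team/build-env:1.0
docker push registry.example.com/team/build-env:1.0

# Teammates and CI runners can then pull the exact same environment
docker pull registry.example.com/team/build-env:1.0
```

Using an explicit version tag (here, 1.0) rather than latest makes build-environment updates deliberate: a build only picks up a new environment when someone changes the tag it references.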

On the topic of CI, running a job inside a Docker container is typically as simple as giving your CI runner access to your container registry and specifying which Docker image to use. As an example, here’s what a minimal .yaml file for building in a container on GitLab looks like:

image: <image>

build:
  script:
    - <build script>
  artifacts:
    paths:
      - <output folder>

Of course, there can be some challenges when using Docker to containerize your firmware builds. Although Docker containers are generally more lightweight than a virtual machine, they can still be resource intensive (particularly in memory and storage). Also, if you’re trying to debug an application built within a Docker container, the paths in the debug information will probably not match the paths on your local machine (which might confuse your debugger).

Many debuggers give you the ability to provide mappings from the paths in debug info to the paths on your local machine. As an example, SEGGER Ozone provides the Project.AddPathSubstitute setting.
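For instance, a mapping added to an Ozone project file might look like the following (the local path is hypothetical; use the directory where your sources are checked out):

```
// Map the container build path (/src) to the local checkout,
// so the debugger can locate source files referenced in the debug info
Project.AddPathSubstitute ("/src", "C:/dev/my-project");
```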

Another assumption we made here is that your project is already being built (or is buildable) using command line tools. If you’re using a vendor IDE such as STM32CubeIDE or Silicon Labs’ Simplicity Studio, it can be a challenge to figure out how to run your build in headless mode from the command line. EmbedOps solves this problem for you by providing containerized build environments for many common vendor IDEs, as well as for many common testing and code quality tools.

Interested in learning more and ensuring your team is utilizing best practices for creating repeatable firmware builds? Give us a call–we have talented engineers on hand ready to help you with all aspects of your firmware development projects.

References

[1] Balducci, Francesco. “Simplest bare metal program for ARM.” Freedom Embedded, 14 Feb 2010.
