Using Docker for Embedded Development

By leveraging Docker images, development teams can completely specify and control the development environment without the overhead and bloat of an entire build machine, virtual or otherwise. These same images save time in the future by not having to perform Software Archaeology to reproduce the build environment to track down a bug, make an improvement, or add a new feature.

What is Docker?

Docker is an open-source containerization platform that enables developers to build, run, and manage self-sufficient containers on servers and in the cloud. A container delivers an application along with the configuration it needs, so the application behaves consistently regardless of the environment.

What’s the value?

Ultimately, even a basic setup saves a ton of time over the course of a project in the form of questions that simply no longer need to be asked. Putting in a little effort early gets every contributor on the same page. There's no more wondering whether a bug only one developer can reproduce is due to their environment, because everyone is using the same one, including the automated build server.

Take it a step further: suppose you discover you need another tool in the build process. Instead of documenting the update steps and getting everyone to run them in a timely fashion, all you have to do is update your Dockerfile and rebuild the image; CI can begin using it immediately.

Getting to know Docker

By using Docker, you can encapsulate your entire development environment in a lightweight object called an Image. Images are in some ways similar to full virtual machines, but by sharing some resources with the Docker host they are much more lightweight.

How good your image ends up being depends heavily on how much effort you put in when creating it. There are established best practices image creators should follow. Need help enforcing them? Try Hadolint, a Dockerfile linter.
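As a sketch of how that check fits into a workflow, Hadolint can be run against a Dockerfile directly, or from its own container image if you'd rather not install it; the file name below is just the conventional default:

```shell
# Lint a Dockerfile locally (assumes hadolint is installed)
hadolint Dockerfile

# Or run the linter from its published container image instead
docker run --rm -i hadolint/hadolint < Dockerfile
```

Either way, warnings are reported with rule IDs you can look up (and selectively ignore, if a violation is intentional).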

The first decision the image creator has to make is what to start from. The first instruction of a Dockerfile is always FROM ____, and the blank can be filled in with almost anything you could want. Want a completely blank slate? Use FROM scratch. Simply want to start from a minimal Linux-type environment? Perhaps FROM alpine:<tag> or FROM busybox would be more your style.

Depending on the situation, we here at Dojo Five tend to start from a version of either Buildpack-Deps (FROM buildpack-deps:<ver>) or Ubuntu (FROM ubuntu:<ver>). In the not-too-distant future, we’ll even have our own base image to start from.

Next, you add commands to install, import, or copy anything else you need to compile, debug, or flash your project that wasn't already included in the base image. Be specific here: pinning an exact version of gcc or of that Python module is what keeps the environment identical each time the image is built. How complicated you get is up to you. You can make a Swiss army knife image that handles everything, or you can make separate task-based images. You can even create a new baseline and use that as the starting point for your task-based images.
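As a minimal sketch, a Dockerfile for ARM cross-compilation might look like the following. The base image, package list, and pinned versions are illustrative assumptions, not a recommendation:

```dockerfile
# Illustrative only: base image, tools, and versions are assumptions
FROM ubuntu:22.04

# Pin exact package versions so rebuilding the image reproduces the
# same environment (the version string below is a placeholder)
RUN apt-get update && apt-get install -y --no-install-recommends \
        gcc-arm-none-eabi=15:10.3-2021.07-4 \
        make \
        python3 \
    && rm -rf /var/lib/apt/lists/*

# Default working directory where the project will be mounted
WORKDIR /src
```

Note the cleanup of the apt lists in the same RUN step: each instruction adds a layer to the image, so deleting files in a later step would not actually shrink it.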

Building the image is simple: just run the docker build command. However, there are a ton of options to explore once you get used to Docker.
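A minimal invocation looks like this; the tag name is just an example:

```shell
# Build an image from the Dockerfile in the current directory,
# tagging it with a name and version (name below is illustrative)
docker build -t my-team/embedded-build:1.0 .

# A couple of commonly useful variations:
docker build --no-cache -t my-team/embedded-build:1.0 .        # ignore cached layers
docker build -f docker/Dockerfile.ci -t my-team/embedded-build:ci .  # alternate Dockerfile path
```

Tagging with an explicit version rather than relying on latest makes it unambiguous which environment a given build used.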

Using your images

Okay, you’ve created your image and tested that it functions by running it interactively and using volumes to persist data on the host from within the container. Have you noticed that the docker commands are getting pretty lengthy? Can you remember all that every time you want to build your code? How about 3 months (or more…) after the project ends and something comes up that you need to address? You can?? Well we can’t (read: don’t want to), so we write small scripts to manage these tasks. We also capture the actual calls to build the code as a script like tools/build.sh.

# run -> Run a command in a new container
# [OPTIONS]
#   --rm    -> Automatically remove the container when it exits
#   --name  -> Assign a name to the container
#    -v     -> Bind mount a volume (current directory to /src)
#    -i     -> Keep STDIN open even if not attached
# IMAGE
#   Build image specified
# [COMMAND]
#   /bin/bash
#   -c      -> Run the string that follows and return its exit code
# [ARGS...]
#   cd /src
#   tools/build.sh $@   (forward this script's arguments to the build script)
docker run --rm --name build -v "$(pwd)":/src -i <image> /bin/bash -c "cd /src && tools/build.sh $@"

After updating <image> to a valid image name, this script creates a container from the image, mounts the current directory at /src, and builds the code by invoking the build script with bash.
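For one-off exploration (the interactive testing mentioned earlier), the same pieces can be combined with -t to get a shell inside the container; <image> is again a placeholder:

```shell
# Start an interactive shell in the container, with the current
# directory mounted at /src (exiting the shell stops the container)
docker run --rm -it -v "$(pwd)":/src <image> /bin/bash
```

Anything written under /src inside the container persists on the host, which is how build outputs survive the container's removal.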

CI/CD

If you're familiar with running Continuous Integration on your Git server, much of what went into the wrapper script above is exactly what you need to create a minimal .gitlab-ci.yml file for building on a system like GitLab.

image: <image>

variables:
  GIT_SUBMODULE_STRATEGY: recursive

build:
  script:
    # Run the build script
    - <build script>
  artifacts:
    paths:
      - <output folder>

The YAML file practically writes itself. Reference the same image from above, point to the build script (the job already starts at the repo root), and list the output folder so the build artifacts get collected.

This script gets you off the ground and building, but there are a ton of interesting things you can do with CI. We’ve already written some on the topic, but look forward to more in future posts.

Dojo Five brings modern tools, techniques, and best practices from the web and mobile development worlds, paired with leading-edge innovations in firmware, to help our customers build successful products. We have talented engineers on hand ready to help you with all aspects of your EmbedOps journey. Bring your interesting problems that need solving – we are always happy to help out. You can reach out at any time on LinkedIn or through email!