Testing Python Scripts in a Docker Container

Most Python scripts require external packages. Many require specific versions of those packages, or even of the Python interpreter itself. Instead of installing these dependencies globally on your machine, you can either create an isolated virtual environment using Python's venv module or create a Docker container. While there is a lot to like about virtual environments, in this blog post we will focus on using a Docker container. To learn more about Docker, check out our blog posts Docker: An ideal development environment and Controlled Development Environment.

This content assumes some basic experience with Docker and that the cross-platform Docker Desktop is installed and running.

Our Sample Code

Before we get to Docker-izing, some (simple) code to test is needed. We’ll create two files to verify our Docker test environment can:

  • Run a basic script
  • Mount and access multiple host volumes
  • Use separately-installed Python packages

In the root of our working directory, we’ll add the following directories and files:

src/hello.py (for a classic hello-world with a package and helper module dependency)

import distro
from helpers.module import ExtModule

print(f"Hello world, from inside {distro.name(pretty=True)}!")
mod = ExtModule()
mod.get_module_support()

helpers/module.py (for our helper module)

class ExtModule:
    def get_module_support(self):
        print("Your helper module is here, too.")

Docker Container

To begin creating a virtual runtime environment for our Python script, we need a base image with the Python environment. Docker Hub has a list of official Python Docker images that we can pull to our local machine. Instead of grabbing a fully featured (read: large) Python container, we recommend an image that is smaller in size, such as a slim bullseye release, for your Python script development and testing. Click here to learn more about the different types of Docker images.

An example of how an official Docker image looks on Docker Hub.

To run the Docker image from the example above, enter the following command in the terminal. If Docker can't find the image on the local machine, it will pull the image from Docker Hub automatically.

% docker run -it --rm python:3.13.0b1-slim-bullseye /bin/bash
Unable to find image 'python:3.13.0b1-slim-bullseye' locally
3.13.0b1-slim-bullseye: Pulling from library/python
728328ac3bde: Pull complete
1db7ac90e91b: Pull complete
fa67c4e1e796: Pull complete
67b38c82ef53: Pull complete
90ed103683eb: Pull complete
Digest: sha256:6efce108697ffabf20924c157d5f08bc41550aca27a04d5df871f8889d405262
Status: Downloaded newer image for python:3.13.0b1-slim-bullseye
root@8cc5b995446a:/#

Here is a quick explanation of what we just did: with the -it option, the container runs in interactive mode, meaning you get dropped into a terminal where you can run commands. The --rm flag removes the container once it exits, saving disk space. And by adding /bin/bash at the end of the command, the container starts in a bash shell session instead of the Python shell.

# Run the Python image without /bin/bash

% docker run -it --rm python:3.13.0b1-slim-bullseye
Python 3.13.0b1 (main, May 14 2024, 07:12:18) [GCC 10.2.1 20210110] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>>

How do you exit a container?

To exit a container, enter exit at the terminal if it is at a bash shell, or exit() if it is at a Python shell.

Docker Volumes

Now, we have a container running but the script we want to test is not in the container. This is where Docker “bind mounts” come into play. These allow us to mount a local directory that contains the script into the container. Any modification of the script locally or in the container will be reflected on both sides. Therefore, while developing the script in the container, the local script will always be up-to-date. Bind mounts can also be used for pulling in drivers, third-party libraries, or any other helper resources needed by the script-under-test.

% docker run -it --rm -v $(pwd)/<DIR>:/<DIR> python:3.13.0b1-slim-bullseye /bin/bash

Volumes and bind mounts are both specified by the -v flag. There are two parts after the flag which are separated by the colon :. The path to the local directory to be mounted is on the left side while the path of the directory in the container is on the right side.

# For example, there is a local directory called src and we want to
# mount it to the container with the same name

% docker run -it --rm -v $(pwd)/src:/src python:3.13.0b1-slim-bullseye /bin/bash
root@ccdd37996de5:/# ls
bin  boot  dev  etc  home  lib  lib64  media  mnt  opt  proc  root  run  sbin  src  srv  sys  tmp  usr  var


# We can mount more than one local directory to the container. 
# Now, we have two local directories, src and helpers

% docker run -it --rm -v $(pwd)/src:/src -v $(pwd)/helpers:/helpers python:3.13.0b1-slim-bullseye /bin/bash
root@2395ba51671f:/# ls
bin  boot  dev  etc  helpers  home  lib  lib64  media  mnt  opt  proc  root  run  sbin  src  srv  sys  tmp  usr  var

One important point is that these volumes are read-write by default. To demonstrate this, we can create a new file within our Docker container and view it on our local host.

From within the container:

root@2395ba51671f:/# echo "Test 1 Result: PASSED" > /src/results.txt

We can then see the file is viewable from our host. This allows container-generated files, such as test results, to be passed out of our isolated environment.
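The same round trip can also be reproduced non-interactively in a single command; a minimal sketch, assuming Docker is running and a local src directory exists (the file name from_container.txt is our own choice):

```shell
# Have the container write a file into the bind-mounted directory,
# then read it back from the host side
mkdir -p src
docker run --rm -v "$(pwd)/src":/src python:3.13.0b1-slim-bullseye \
    sh -c 'echo "written inside the container" > /src/from_container.txt'
cat src/from_container.txt
```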

Test Environment Setup

There are two ways to set up a suitable environment for testing the Python script.

Install Packages in the Python Docker Container

Once we are in the container, we can install the required Python packages manually using the pip3 installer. Alternatively, we can create a requirements.txt file with the desired packages and versions, put it in the folder that will be mounted into the container, and install from it. The following is a simple requirements.txt example for installing the distro package, along with the command to install from it.

distro==1.9.0

# To install packages from a requirements.txt file
% pip3 install -r /src/requirements.txt

We also need to set the PYTHONPATH environment variable in our shell to our current working directory (the container's root, where our volumes are mounted) in order for hello.py to be able to find our helpers/module.py file. There are multiple ways of handling this, but this one is used here for expediency.

% export PYTHONPATH=/
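With the package installed and PYTHONPATH set, the script can now be run from the container's shell. Inside the bullseye image, the first line of output should read something like "Hello world, from inside Debian GNU/Linux 11 (bullseye)!", followed by the helper module's message:

```shell
# Inside the container, after pip3 install and export PYTHONPATH=/
python3 /src/hello.py
```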

Create a Docker Image with Packages Installed

If the script will be tested more than once, it makes sense to create a customized Docker image with the required packages pre-installed. In the Dockerfile, set the official Python image as the base image using the FROM instruction, and install the packages with the pip3 installer using the RUN instruction. Then, build a customized Docker image from the Dockerfile. Click here to learn more about Dockerfiles. The following is a simple Dockerfile example that installs the distro package when building the Docker image and sets the PYTHONPATH environment variable.

FROM python:3.13.0b1-slim-bullseye

RUN pip3 install distro==1.9.0
ENV PYTHONPATH=/
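Before this custom image can be used, it has to be built. A minimal sketch, run from the directory containing the Dockerfile; the tag python-in-docker is our own choice of name:

```shell
# Build the custom image and tag it so it can be referenced by name later
docker build -t python-in-docker .
```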

Automating Testing in CI

This approach to testing Python scripts in Docker can be utilized beyond manual, iterative testing on local machines. A script of interest can be run directly by invoking our specific Docker image with the command itself, removing the need for Docker's interactive mode and allowing the process to be automated.

# To run the script directly
% docker run --rm -v $(pwd)/src:/src -v $(pwd)/helpers:/helpers python-in-docker:latest python /src/hello.py

The containerized Python script could be run automatically as part of a Continuous Integration pipeline, further easing the manual burden of verifying that our ever-growing repository of scripts behaves as we expect. This (ultimately trivial) example provides the foundation to support a great deal more complexity for automatically testing embedded systems with Python.
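As one illustration, a hypothetical GitHub Actions workflow could run the containerized script on every push. This is only a sketch; the workflow file name, job name, and image tag below are our own assumptions, carried over from the examples above:

```yaml
# .github/workflows/script-test.yml (hypothetical)
name: script-test
on: [push]
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      # Build the custom image with the dependencies baked in
      - run: docker build -t python-in-docker .
      # Run the script non-interactively, mounting the source directories
      - run: docker run --rm -v "$PWD/src":/src -v "$PWD/helpers":/helpers python-in-docker:latest python /src/hello.py
```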

Conclusion

Testing a script or application in a virtual environment like a Docker container not only prevents developers from installing stray packages on their local machines and having to manage correct package versions, but also provides an enclosed, controlled environment for verifying functionality. By utilizing Docker volumes, the testing process is streamlined without the developer copying files back and forth between the local machine and the Docker container.

And if you have questions about an embedded project you’re working on, Dojo Five can help you with all aspects of your EmbedOps journey! We are always happy to hear about cool projects or interesting problems to solve, so don’t hesitate to reach out and chat with us on LinkedIn or through email!
