Epic FE/BE build pipelines with Docker — part 1

Containers, but this time they are for building the… containers!

If you have a build server, you might have issues with it. It's hard to target 10k deployments a day per company, and you may not need to, but we can all agree that a healthy team with a handful of developers should be able to do 10+ deployments a day. So what do we expect and need from a good Continuous Delivery / Continuous Integration pipeline?


  • Build and tag master branch — this usually includes testing, building, and storing the artifact on nexus / artifactory.
  • Build and test any branch — so you can validate pull requests. It should run unit tests, and probably analyze code quality, for which we use Sonar.
  • Never break down — (ineffective work condition #2) it's very painful when you're about to finish a story / bug fix and you need to spend hours debugging the build agent. Especially if the infra is not fully yours and is managed by another team.
  • No external dependencies — all projects have dependencies. Maven, npm, anything shared between teams on the build server is a risk. If one team updates a version, it can break builds for other teams.
  • Fully deterministic — the same input should always produce the same output. Look at npm, where an update in a transitive dependency can make two builds differ on the same day. You should also be able to run the same build on your local machine!
  • Is generally quick — no one likes waiting hours. If you want to merge a pull request, waiting for the build to verify always takes time. This of course depends on the project's nature and size, but I personally prefer builds under 1 minute. To achieve this, you'll usually need to utilize multi-threading and run tests in parallel. On a bigger project you might separate tests by level of importance and defer some of them.
  • A build is followed by the automation tests (or at the very least triggers nightly automation tests with good notifications) — if you're building front-end, you cannot escape the benefits (like: did we break Safari, did we break any existing feature?) of running the whole code in a Selenium / WebDriver environment. You can only have real CI/CD (and its benefits) if you can trust a release without manual testing involved.
  • There are no manual steps or optimizations — we used to have asset generation as the designers' task. They usually optimized the PNGs once, and then forgot about it on the next change. If it's not part of the automation, it's a hazard, mostly of work being lost.
  • Provides measurements — it becomes very handy to track non-functional requirements, to highlight improvements and regressions, whether it's a back-end load test or a front-end performance or page-load test. You should already have lots of KPIs, so why not hook them into the build.
  • Portable — business is business. They always change their minds. If some higher management changes your CI tooling because a new one came bundled with your other Microsoft / Atlassian products, or because someone in finance ran the cost analysis on moving builds to the cloud, you might need to migrate everything you have. Also if you just want to run builds locally, or if you get new servers. So your agents and processes need to be as portable as they can be, therefore you need Config as Code.

Dockerise all the builds!

Now that we know the goals, we can start building. Let's start with having no external dependencies. This is really similar to most problems today, and the solution of course is containerization. Even if you've used Docker before, you might think of a Docker container as the output of the build, not the producer, but if we use Docker as the host of a build step, we gain the usual benefits: isolation, security, efficiency. So let's see it in action!

The following code block is our Docker image, which basically sets up the build environment and, depending on the input command, calls either the build or the cleanup shell script.

# docker debian
FROM debian:buster-slim

# node, npm, grunt, webpack
RUN apt update -y
RUN apt install build-essential -y --no-install-recommends
RUN apt install nodejs -y --no-install-recommends
RUN apt install npm -y --no-install-recommends
RUN npm install --global grunt
RUN npm install --global webpack
RUN npm install --global webpack-cli
RUN npm i npm@latest -g

# Install Maven
RUN apt update -y
RUN mkdir -p /usr/share/man/man1
RUN apt-get install maven -y --no-install-recommends

# maven config to local
COPY maven-settings.xml "/usr/share/maven/conf/settings.xml"

# git and openssh
RUN apt-get install git -y --no-install-recommends
RUN apt install openssh-client -y --no-install-recommends
RUN git config --global user.name "USERNAME"
RUN git config --global user.email "USEREMAIL"
# -- here you'll need to set up ssh credentials as well

# we set the working directory to the folder that will be mounted
WORKDIR /sharedfolder/

# we copy the runnable scripts
COPY cmd.sh /cmd.sh
COPY cleanup.sh /cleanup.sh

# if cleanup is 1, it will not build but clean up
ENV isCleanup=0

# actual build command
CMD echo isCleanup: ${isCleanup} && \
    if [ ${isCleanup} -eq 1 ]; then echo "CLEANUP"; /cleanup.sh; fi && \
    if [ ${isCleanup} -ne 1 ]; then echo "BUILD"; /cmd.sh; fi
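The routing done by that CMD line can be sketched as a standalone shell snippet, no Docker needed to try it. This is an illustration, not the article's script: the `route_build` function is hypothetical and only echoes which script would be called.

```shell
#!/bin/sh
# sketch of the CMD routing: isCleanup picks between cleanup and build.
# route_build is a hypothetical helper; it echoes the chosen path
# instead of invoking /cleanup.sh or /cmd.sh.
route_build() {
  isCleanup=${isCleanup:-0}     # defaults to 0, as the ENV line does
  if [ "${isCleanup}" -eq 1 ]; then
    echo "CLEANUP"              # would call /cleanup.sh
  else
    echo "BUILD"                # would call /cmd.sh
  fi
}
```

Passing `-e isCleanup=1` to `docker run` overrides the ENV default, which is how the cleanup path gets selected at run time.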

To build this image, you'll need to run "docker-compose up --build --no-start", which builds the Docker image; --no-start is needed because we didn't create a service in Docker for now.
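The docker-compose.yml that command reads isn't shown in the article; a minimal sketch could look like the following, where the service name and the build context are assumptions, and the image name matches the tag used in the docker run command later.

```yaml
# minimal docker-compose.yml sketch; service name and context are assumptions
version: "3"
services:
  fe-build:
    image: fe-build-script-docker-tag-name
    build:
      context: .
```

With this in place, docker-compose up --build --no-start builds and tags the image without starting a container from it.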

To recap, you set up the Docker image as you'd set up your build agent. RUN commands run at docker build time, and when we call docker with the command below, it triggers the CMD part, where we only do routing and call the actual shell script.

docker run -t --rm -v /pwd-path-to-project:/sharedfolder fe-build-script-docker-tag-name

So what this does: it runs the image's CMD part, mounting the host machine's /pwd-path-to-project/ folder into the container's /sharedfolder/ folder, ergo /sharedfolder/ will contain the project's code, and the script will work there, because the WORKDIR in the Dockerfile points to it. If you ever want to debug the script, you can run the same command with the -ti flag and with 'sh' at the end to start a shell inside the container.

And what's inside the cmd.sh build script?

#!/bin/bash
mvn -B release:clean resources:resources release:prepare release:perform;
ret=$? # STORE MAVEN RETURN VALUE
( rm -rf ./output || true );
exit $ret;

You can see this example build is pretty much just the Maven invocation, with some additional cleanup. We need the return value, so that the container's exit code reflects the actual build's result, not the cleanup's, ergo the build fails / succeeds at the correct times. You can use your generic build scripts (any npm / maven / grunt script will work), and if you need any prerequisites, be sure to set them up in the Dockerfile. If you want to distinguish tag and non-tag builds, you can add further shell scripts and make the CMD choose between them based on the input parameters. Also be sure to commit these scripts with the executable bit set.
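The exit-code preservation pattern can be sketched on its own. In this illustration (not the article's script), `false` stands in for a failing mvn run and the function name is hypothetical:

```shell
#!/bin/sh
# sketch: preserve the build's exit code across cleanup.
# run_build_with_cleanup is a hypothetical stand-in for cmd.sh;
# 'false' plays the role of a failing mvn build.
run_build_with_cleanup() {
  false                       # the "build" fails with exit code 1
  ret=$?                      # capture the result immediately
  rm -rf ./output || true     # cleanup succeeds regardless
  return $ret                 # report the build's status, not the cleanup's
}
```

Without the `ret=$?` capture, the function would return the cleanup's exit code (0) and a failed build would look green to the CI server.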

We need a cleanup.sh script, because Docker creates files under a different user than the host machine, ergo deleting them is much simpler from the same Docker container. This could be worked around, but we need a clean workspace after builds anyway, to save on storage. It's little more than:

echo "Cleaning up";
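In practice the script would also delete the build's leftovers. A slightly fuller sketch, where the exact paths are assumptions to adjust to whatever your build leaves behind:

```shell
#!/bin/sh
# cleanup.sh sketch; the paths below are assumptions, adjust them
# to the artifacts your own build produces
echo "Cleaning up"
rm -rf ./target ./node_modules ./output || true
```

Because this runs inside the same container that produced the files, it has the right user to delete them, and it is triggered by running the image with -e isCleanup=1.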

So now we can have this docker builder running anywhere, and we can call it with simple commands. How do we wire in the build agent to run this docker command?

In GoCD we can use YAML pipeline descriptors, so it's pretty much done like this:

The pipeline-name.gocd.yaml build script. The only step is to fire up the docker script.
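The YAML itself did not survive into this text; a minimal sketch in the gocd-yaml-config-plugin format might look like the following, where the pipeline name, group, and repository URL are all assumptions:

```yaml
# pipeline-name.gocd.yaml sketch; pipeline, group and repo URL are assumptions
format_version: 10
pipelines:
  fe-build:
    group: frontend
    materials:
      project-repo:
        git: ssh://git@your-git-host/team/project.git
    stages:
      - build:
          jobs:
            docker-build:
              tasks:
                - exec:
                    command: /bin/bash
                    arguments:
                      - -c
                      - docker run -t --rm -v $(pwd):/sharedfolder fe-build-script-docker-tag-name
```

The agent only needs Docker installed; everything else the build requires lives inside the image, which is exactly the portability we listed as a goal.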

#CI #CD #Devops #frontend #build pipelines #gocd #FrontEnd #FrontEndDevelopment #FullStack #code #docker #workfromhome

Special thanks to Mate Farkas, who guided me towards this path, and was helpful whenever I had (tons of) questions.

Please tune in and subscribe for part 2, where I’ll tackle resources handling and other goodies in a build pipeline! Any claps are appreciated :)

Tech Lead