Categories
IT

Gitlab CI, docker and gem testing

I am using the Gitlab-CI server to handle my CI needs, and sometimes I find workarounds that might be worth sharing.

Categories
devops docker IT

Automatic cache invalidation in Dockerfile

I have a utility image that my CI server rebuilds whenever I push updates to a locally developed gem. The CI server pushes the gem to the local gem server, but whenever the utility image is built, docker believes that it has already performed the step

RUN gem install gem_name

…and I will end up with a utility image containing an old version of the gem. As I do not want to build with the --no-cache option (that would force a full rebuild of the image), and I do not want to rely on manually updating any files ADDed in the Dockerfile, I ended up doing the following:

As part of the build step, my CI server now generates the file invalidate_cache with the following command:

head -n 500 /dev/urandom | md5 > invalidate_cache
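The md5 tool above is the BSD/macOS variant; on Linux CI runners the same trick can be sketched with md5sum from GNU coreutils (the byte count is arbitrary, and reading a fixed number of bytes avoids depending on where newlines happen to fall in the random stream):

```shell
# Generate a token that differs on every run; ADDing this file in the
# Dockerfile forces docker to rebuild everything after the ADD step.
head -c 4096 /dev/urandom | md5sum | cut -d ' ' -f 1 > invalidate_cache
```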

…and in my Dockerfile I do a

ADD invalidate_cache /invalidate_cache

just before the gem installation command. As invalidate_cache always has new content, docker invalidates the cache for that step and every step after it, and I get a properly updated image on each build.
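Putting the pieces together, the relevant part of the Dockerfile might look like this sketch (the base image and gem name are placeholders):

```dockerfile
FROM ruby:2.2

# Steps up to this point are cached as usual.

# invalidate_cache is regenerated by the CI server before every build,
# so this ADD always sees new content and busts the cache from here on.
ADD invalidate_cache /invalidate_cache

# Rebuilt on every run, so the latest gem version is always installed.
RUN gem install gem_name
```

Anything that should stay cached belongs above the ADD line; only the steps that must pick up fresh content go below it.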

Categories
IT

ansible, docker and ports

If you are using ansible to deploy docker containers, you might have run into a problem where, despite exposing ports via the ports: option, some of the ports are not exposed anyway. You think you have set up your services according to plan, but after pulling your hair out over communication errors, you bring out iptables – and lo and behold – only a subset of the ports reported as exposed by docker inspect are actually exposed.

It turns out that when starting docker images from ansible, only ports exposed directly from the image will actually be exposed. Why, oh why!

Luckily, you do not need to rebuild your images, as you can explicitly expose the extra ports from ansible through the expose: section.

So there you go – if you have a docker image exposing port 80 (via the EXPOSE keyword in the Dockerfile), but you want to expose 443 as well through ansible – the following will not work:

ports:
  - "{{ansible_default_ipv4.address}}:80:80"
  - "{{ansible_default_ipv4.address}}:443:443"

…as this will only expose port 80. However, the following will fix things up:

expose:
  - 443
ports:
  - "{{ansible_default_ipv4.address}}:80:80"
  - "{{ansible_default_ipv4.address}}:443:443"

This will expose both ports 80 and 443.
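For context, a complete task might look like the following sketch. The container name, image name and the parameters beyond expose/ports are assumptions for illustration, and this targets the classic docker module of the time – newer ansible releases ship docker_container instead:

```yaml
- name: Start web container, publishing both ports
  docker:
    name: web               # hypothetical container name
    image: example/web      # hypothetical image; its Dockerfile EXPOSEs only 80
    state: started
    # Without this, ansible silently skips publishing 443 below.
    expose:
      - 443
    ports:
      - "{{ ansible_default_ipv4.address }}:80:80"
      - "{{ ansible_default_ipv4.address }}:443:443"
```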

Mission accomplished, and a big %*$% to ansible for not documenting that the ports option does not work like the docker -p option (they are syntactically identical, but that's about it).