The Container Registry is a powerful GitLab feature that acts as a private Docker registry for passing images between jobs within a pipeline, between pipelines, or for use outside GitLab CI/CD. Note that GitLab CI/CD doesn’t use the Container Registry automatically; it has to be accessed explicitly from within jobs. Auto DevOps uses it somewhat, but not entirely as designed: the Docker CI/CD template pushes to it but doesn’t pull from it. Using it, or modifying the way it works in Auto DevOps, requires a little understanding of the functionality.

Requires a basic understanding of Docker terms and principles

Enabling the Container Registry

On a self-managed GitLab instance, the Container Registry must be explicitly enabled at the instance level in order to operate. That’s not an issue on GitLab.com, where it is always enabled. Also, make sure that Container Registry is switched on for the project under Settings > General > “Visibility, project features, permissions”.
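On an Omnibus install, enabling it at the instance level amounts to setting a registry URL in gitlab.rb and reconfiguring; the hostname below is a placeholder.

```ruby
# /etc/gitlab/gitlab.rb -- registry.example.com is a placeholder hostname
registry_external_url 'https://registry.example.com'
# then apply it with: sudo gitlab-ctl reconfigure
```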

Building and pushing images to the Container Registry

The simplest way to push images into the Container Registry is illustrated in the Docker CI template and is described below.

In order to execute Docker commands in the job, use Docker-in-Docker: a Docker image for the job plus the docker:dind service.

  image: docker:latest
  services:
    - docker:dind

Logging in to the registry is not automatic, but the required credentials are provided as predefined CI/CD variables.

  before_script:
    - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" $CI_REGISTRY
Then build and push the image. Note the use of $CI_REGISTRY_IMAGE to locate the exact address of the registry for this project, and $CI_COMMIT_REF_SLUG so that a separate image is created for each branch or tag.

    - docker build --pull -t "$CI_REGISTRY_IMAGE:$CI_COMMIT_REF_SLUG" .
    - docker push "$CI_REGISTRY_IMAGE:$CI_COMMIT_REF_SLUG"
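Putting those pieces together, a complete job might look like the following sketch (the job name and stage are illustrative):

```yaml
build-image:
  stage: build
  image: docker:latest
  services:
    - docker:dind
  before_script:
    # $CI_REGISTRY_USER, $CI_REGISTRY_PASSWORD, and $CI_REGISTRY are predefined by GitLab
    - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" $CI_REGISTRY
  script:
    - docker build --pull -t "$CI_REGISTRY_IMAGE:$CI_COMMIT_REF_SLUG" .
    - docker push "$CI_REGISTRY_IMAGE:$CI_COMMIT_REF_SLUG"
```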

Accessing Container Registry images

If the job completes successfully, click the “Registry” link in the left nav of the GitLab project and navigate to the newly created image.

Note that the Registry page lists images, which can be opened to list the Docker tags within each image.

To work with the image locally (this requires Docker installed on the local machine), use a GitLab personal access token. To create one, click the user icon in the top right, then choose Settings > Access Tokens.

Use your GitLab username and the token as the password when signing in to the registry (on GitLab.com the registry host is registry.gitlab.com; a self-managed instance has its own registry address):

docker login registry.gitlab.com

The UI provides the address of the images, and specific Docker tags, using the “copy” button next to the image or tag.

Using that address, run the image, for example (the path below is a placeholder):

docker run registry.gitlab.com/<group>/<project>:<tag>

Of course, other options might be required in order for docker run to perform as expected.

Accessing the image in later jobs

Images from the Container Registry can be referenced directly as the value for image: in a job, for example (placeholder address):

  image: registry.gitlab.com/mygroup/myproject:mytag
But obviously, it’s better to avoid hardcoding image addresses. We could use the same variables that we used in the docker push command:

  image: $CI_REGISTRY_IMAGE:$CI_COMMIT_REF_SLUG
… but that approach assumes we always want the most recently pushed image for that slug, which isn’t necessarily what we want: if, for example, we re-run a pipeline for a branch, we want the image created during the earlier job in that exact pipeline.

There seem to be a few opinions on what variable to use as a key into the Container Registry.

  • $CI_COMMIT_REF_SLUG, as pointed out above, doesn’t correlate with pipelines
  • $CI_BUILD_REF is present in some older examples but was deprecated in GitLab 9.0
  • $CI_CONCURRENT_PROJECT_ID allows for concurrency, and this might work; I’d like to investigate it further
  • $CI_PIPELINE_ID seems like a win to me, though I’ve not seen anyone using it before
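For example, keying the image by pipeline ID might look like the following sketch (job names, stages, and the test command are illustrative):

```yaml
build:
  stage: build
  image: docker:latest
  services:
    - docker:dind
  script:
    - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" $CI_REGISTRY
    - docker build --pull -t "$CI_REGISTRY_IMAGE:$CI_PIPELINE_ID" .
    - docker push "$CI_REGISTRY_IMAGE:$CI_PIPELINE_ID"

test:
  stage: test
  # A re-run of a job in the same pipeline resolves to the same image
  image: $CI_REGISTRY_IMAGE:$CI_PIPELINE_ID
  script:
    - echo "run tests here"   # placeholder test command
```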

Auto DevOps

The Auto DevOps CI build template follows the steps listed above, but it’s a little trickier to figure out.

First, some variables are defined in Build.gitlab-ci.yml (approximately as follows; the exact form varies between GitLab versions):

  export CI_APPLICATION_REPOSITORY=$CI_REGISTRY_IMAGE/$CI_COMMIT_REF_SLUG
  export CI_APPLICATION_TAG=$CI_COMMIT_SHA
Note that $CI_APPLICATION_REPOSITORY and $CI_APPLICATION_TAG are not predefined variables. And, as described above, the smallest unit of granularity is the ref.

The action takes place in the script, using the values created above. Note that I’ve omitted a lot of the detail.

docker build \
  -t "$CI_APPLICATION_REPOSITORY:$CI_APPLICATION_TAG" .
docker push "$CI_APPLICATION_REPOSITORY:$CI_APPLICATION_TAG"

What’s interesting is that the test job does not use the image created in the build job.

  image: gliderlabs/herokuish:latest
  script:
    - /bin/herokuish buildpack test

It runs the tests directly on the code base as pulled from Git. That means, for example, that bundler (in the case of Ruby applications) runs all over again. This seems inefficient, and I suspect there is a way to access the container that was built during the build job and run the tests there - especially considering that the Auto DevOps test job doesn’t work at all on Python projects at the moment.
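As an untested sketch of that idea, a custom test job could run inside the image the build job pushed, assuming the standard $CI_APPLICATION_REPOSITORY/$CI_APPLICATION_TAG addressing ($CI_REGISTRY_IMAGE/$CI_COMMIT_REF_SLUG tagged with $CI_COMMIT_SHA); the test command is a placeholder:

```yaml
test:
  stage: test
  # Use the image built and pushed by the Auto DevOps build job
  image: $CI_REGISTRY_IMAGE/$CI_COMMIT_REF_SLUG:$CI_COMMIT_SHA
  script:
    - run-tests   # placeholder: whatever test entrypoint the built image provides
```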

The deploy jobs, however, use the image by passing its info into the helm upgrade command.

function deploy() {
    helm upgrade --install \
      --set image.repository="$image_repository" \
      --set image.tag="$image_tag" \