
Kubernetes volumes

Merged Zeger-Jan van de Weg requested to merge kubernetes-volumes into master
All threads resolved!

Continuing the work of @jon-walton to support Kubernetes volumes.

What does this MR do?

This MR adds basic volume support to the Kubernetes runner. Kubernetes volumes work differently from Docker volumes in that many different types of storage can be mounted into the container. To reflect this, and to make it easy to implement other volume sources in the runner in the future, I've implemented the volume source and the container mount as separate config items (the structure is very similar to a Kubernetes config file).

For now, only HostPathVolumeSource is supported.

An example config.toml would be:

concurrent = 2

[[runners]]
  name = "Kubernetes Runner"
  url = "https://gitlab.com/ci"
  token = "TOKEN"
  executor = "kubernetes"
  [runners.kubernetes]
    namespace = "gitlab"
    privileged = true

    [[runners.kubernetes.mounts]]
      name = "docker"
      mount_path = "/var/run/docker.sock"

    [[runners.kubernetes.volumes.host_paths]]
      name = "docker"
      path = "/var/run/docker.sock"

    [[runners.kubernetes.mounts]]
      name = "cache"
      mount_path = "/cache/"

    [[runners.kubernetes.volumes.host_paths]]
      name = "cache"
      path = "/tmp/gitlab-ci-cache/"

  [runners.cache]
    Insecure = false

Any mounts configured must have a matching volume.

Why was this MR needed?

I have a requirement to bind the Docker socket into the build pods because I do not want to use docker:dind. Binding the socket gives me local image caching and no risk of name conflicts, because Kubernetes appends a random string to each pod's name.

Are there points in the code the reviewer needs to double check?

This is my first look into golang, so please let me know if I've done anything wrong. If the general direction is sound, I will go ahead and add documentation.

Does this MR meet the acceptance criteria?

  • Documentation created/updated
  • Tests
    • Added for this feature/bug
    • All builds are passing
  • Branch has no merge conflicts with master (if you do - rebase it please)

What are the relevant issue numbers?

Closes #1876 (closed), #1759 (closed), !331 (closed)

Edited by Zeger-Jan van de Weg

Merge request reports


Activity

  • Tomasz Maczukin mentioned in merge request !331 (closed)

  • Zeger-Jan van de Weg marked the task Added for this feature/bug as completed

  • Zeger-Jan van de Weg marked the task All builds are passing as completed

  • Author Developer

    /cc @ayufan

    I'm currently trying to understand what still needs to happen, apart from the docs.

  • Author Developer

    TODO:

    • Reach conclusion on config.toml structure
    • Secrets check: what if it's not there?
  • I don't like the current approach of manually specifying volumes. How about using template files that would be pre-filled with GitLab-specific data? This would allow us full flexibility in configuring volumes (of any type), labels (of any value) and anything else that is there:

     [runners.kubernetes]
        namespace = "gitlab"
        privileged = true
        pod_template_file = "pod.yaml"
        build_container_template_file = "build-container.yaml"
        helper_container_template_file = "helper-container.yml"
        service_container_template_file = "service-container.yml"

    Then we have pod_template_file:

    apiVersion: v1
    kind: Pod
    metadata:
      labels:
        mylabel: myvalue
    spec:
      volumes:
      - name: shared-data
        emptyDir: {}

    Then we have build_container_template_file:

    (This corresponds to an entry from the Pod's spec.containers.)

    volumeMounts:
    - name: shared-data
      mountPath: /shared-data

    We would load the pod template and fill it with our own data. For each container type we would load the corresponding container template, fill it with our own data, and add the resulting container to the pod's list of containers. The pod template could also declare additional containers that would not be created dynamically, but manually. A rough sketch of the merged result is shown below.
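
    To make the merge step concrete, a rough sketch of what the resulting pod could look like; the label and the volume come from pod_template_file, the volume mount comes from the build container template, and the container name and image shown here are only illustrative placeholders for what the runner would fill in:

    apiVersion: v1
    kind: Pod
    metadata:
      labels:
        mylabel: myvalue          # from pod_template_file
    spec:
      volumes:
      - name: shared-data         # from pod_template_file
        emptyDir: {}
      containers:
      - name: build               # injected by the runner
        image: ruby:2.3           # illustrative; taken from the job definition
        volumeMounts:
        - name: shared-data       # from the build container template
          mountPath: /shared-data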

  • @munnerz and @tmaczukin Can you review my proposal?

  • @ayufan I think this is the right way.

    I'm not 100% sure how these templates for containers would look. Would they be able to contain most of the configuration fields (like CPU/memory resources and limits) currently present in config.toml? If so, we could store even more there than just volume definitions. But even if not, I think this is the 'cleanest' configuration form that we've discussed here and in !331 (closed).

  • @ayufan sorry for the delay, it's been very busy in the run-up to KubeCon here.

    I'm not a big fan of this design for a few reasons:

    1. with this, there's no way for a user to mount a volume into a service container. Currently, volumes are mounted into every container within the pod, so if a user wants to configure/tweak parameters or some config in a service, they can.

    2. if the Kubernetes API changes, we now have to version gitlab-runner not only against GitLab and the helper Docker image, but also against the additional volume types.

    3. where does this kind of templating end? There's a complexity vs. flexibility tradeoff here, and I feel like the proposal favours flexibility at the cost of complexity, meaning it's more powerful but also has a steeper learning curve for users (having to understand the k8s manifest format, for one).

    Kubernetes itself already has abstractions for what we're trying to do here, in the form of PersistentVolumeClaims (https://kubernetes.io/docs/user-guide/persistent-volumes/#persistentvolumeclaims). If the number of different volume types being added here is a concern, we could support just PVCs as volumes. If users want to mount a volume into their build, they can create a PVC in the destination Kubernetes cluster (and namespace) and reference that from within their config. PVCs can be created from within the Kubernetes dashboard UI too, making it nice and simple for users who have no experience with writing Kubernetes manifests.
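
    For illustration, the kind of PersistentVolumeClaim a user could create in the destination cluster and namespace and then reference from the runner configuration might look like this (the claim name, namespace and requested size are placeholders):

    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: gitlab-ci-cache       # placeholder name the runner config would reference
      namespace: gitlab
    spec:
      accessModes:
      - ReadWriteOnce
      resources:
        requests:
          storage: 10Gi           # placeholder size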

    1. with this, there's no way for a user to mount a volume into a service container. Currently, volumes are mounted into every container within the pod, so if a user wants to configure/tweak parameters or some config in a service, they can.

    You have a separate definition for the service container, which can have different volume mounts. So this is not fully correct.

    2. if the Kubernetes API changes, we now have to version gitlab-runner not only against GitLab and the helper Docker image, but also against the additional volume types.

    It would just be a recompilation of the binary with a bump of the k8s integration. We do not implement new functionality, we only parse existing specifications, so this will not be the biggest problem. Since the API there doesn't change, you can be quite sure it will keep working as before.

    Kubernetes itself already has abstractions for what we're trying to do here, in the form of PersistentVolumeClaims (https://kubernetes.io/docs/user-guide/persistent-volumes/#persistentvolumeclaims). If the number of different volume types being added here is a concern, we could support just PVCs as volumes. If users want to mount a volume into their build, they can create a PVC in the destination Kubernetes cluster (and namespace) and reference that from within their config. PVCs can be created from within the Kubernetes dashboard UI too, making it nice and simple for users who have no experience with writing Kubernetes manifests.

    I want to avoid adding support for specific volume types, and instead either rely on parsing a template into structures that are provided by someone else, or keep things simple. So either we stick with templating, which is a very generic way of supporting everything, or we allow only: pvc, hostPath, configMap and secret (sketched below in plain Kubernetes API form). I don't expect to support more, but I still don't like that we would have to replicate and rewrite an existing API.
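
    For reference, a sketch of how these four volume sources look in a plain Kubernetes pod spec (the volume and object names are placeholders):

    volumes:
    - name: cache
      persistentVolumeClaim:
        claimName: gitlab-ci-cache   # placeholder claim name
    - name: docker
      hostPath:
        path: /var/run/docker.sock
    - name: app-config
      configMap:
        name: my-configmap           # placeholder ConfigMap name
    - name: app-secret
      secret:
        secretName: my-secret        # placeholder Secret name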

  • mentioned in issue #2258 (closed)

  • Kamil Trzciński changed milestone to %v9.1

  • assigned to @zj

  • Kamil Trzciński changed milestone to %v9.2

  • @zj This MR looks nice and it seems to fulfill most of the requirements; the rest can be done in another MR. Can we write documentation for it and merge it?

  • Author Developer

    No, as it's not ready yet.

  • Zeger-Jan van de Weg changed milestone to %9.3

  • Zeger-Jan van de Weg marked as a Work In Progress from 287865b41c9eaa142e56138ac9d3ac233f0dfd1e

  • Zeger-Jan van de Weg resolved all discussions

  • Zeger-Jan van de Weg unmarked as a Work In Progress

  • Kamil Trzciński approved this merge request

  • @zj Thanks. Could you create another MR to now expose PVCs and ConfigMaps, as I think these are the only things we're missing now?

    Edited by Kamil Trzciński
  • Will this also be available via a command-line parameter? For Docker we have, for example: --docker-volumes Bind mount a volumes [$DOCKER_VOLUMES]

    This would be highly appreciated. Any news on that?

  • Tomasz Maczukin mentioned in merge request !625 (merged)

  • The documentation in the description above is not correct. We had to fiddle around and read the related code for quite a while to get it running. In case someone wants to mount the Docker socket, this is what we configured:

    [[runners.kubernetes.volumes.host_path]]
    name = "docker"
    mount_path = "/var/run/docker.sock"

    Note that host_path is singular.

  • @Dbzman Thanks for sharing this. We're already working on some improvements (including documentation for volumes support) - see !625 (merged).

    @msvechla Please create a new issue with the feature/improvement proposal :)
