Kubernetes volumes
Continuing the work of @jon-walton, to support Kubernetes volumes.
What does this MR do?
This MR adds basic volume support to the Kubernetes runner. Kubernetes volumes work differently from Docker's in that many different types of storage can be mounted into the container. To reflect this, and to make it easy to implement other volume sources in the runner in the future, I've implemented the volume source and the container mount as separate config items (the structure is very similar to a Kubernetes config file).
For now, only `HostPathVolumeSource` is supported.
An example config.toml would be:
```toml
concurrent = 2

[[runners]]
  name = "Kubernetes Runner"
  url = "https://gitlab.com/ci"
  token = "TOKEN"
  executor = "kubernetes"

  [runners.kubernetes]
    namespace = "gitlab"
    privileged = true

    [[runners.kubernetes.mounts]]
      name = "docker"
      mount_path = "/var/run/docker.sock"

    [[runners.kubernetes.volumes.host_paths]]
      name = "docker"
      path = "/var/run/docker.sock"

    [[runners.kubernetes.mounts]]
      name = "cache"
      mount_path = "/cache/"

    [[runners.kubernetes.volumes.host_paths]]
      name = "cache"
      path = "/tmp/gitlab-ci-cache/"

  [runners.cache]
    Insecure = false
```
Any `mounts` configured must have a matching volume with the same `name`.
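The "matching volume" rule above could be enforced with a small validation pass over the parsed config. This is only a sketch; the struct and field names below (`Mount`, `HostPath`, `validateMounts`) are illustrative and may not match the actual types in this MR:

```go
package main

import "fmt"

// Mount and HostPath mirror the config items described in this MR
// (hypothetical names; the real struct fields may differ).
type Mount struct {
	Name      string
	MountPath string
}

type HostPath struct {
	Name string
	Path string
}

// validateMounts checks the rule stated above: every configured mount
// must reference a volume (here: a host_path) with a matching name.
func validateMounts(mounts []Mount, hostPaths []HostPath) error {
	volumes := make(map[string]bool, len(hostPaths))
	for _, hp := range hostPaths {
		volumes[hp.Name] = true
	}
	for _, m := range mounts {
		if !volumes[m.Name] {
			return fmt.Errorf("mount %q has no matching volume", m.Name)
		}
	}
	return nil
}

func main() {
	mounts := []Mount{{Name: "docker", MountPath: "/var/run/docker.sock"}}
	hostPaths := []HostPath{{Name: "docker", Path: "/var/run/docker.sock"}}
	fmt.Println(validateMounts(mounts, hostPaths)) // <nil>

	// A mount whose name has no volume definition fails validation.
	fmt.Println(validateMounts([]Mount{{Name: "cache", MountPath: "/cache/"}}, hostPaths))
}
```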
Why was this MR needed?
I have a requirement to bind the docker socket into the build pods because I do not want to use docker:dind. Binding the socket gains me local image caching and no risk of name conflicts because kubernetes appends a random string to each pod's name
Are there points in the code the reviewer needs to double check?
This is my first look into golang, so please let me know if I've done anything wrong. If the general direction is sound, I will go ahead and add documentation.
Does this MR meet the acceptance criteria?
- [ ] Documentation created/updated
- Tests
  - [ ] Added for this feature/bug
  - [ ] All builds are passing
- [ ] Branch has no merge conflicts with master (if you do - rebase it please)
What are the relevant issue numbers?
Closes #1876 (closed), #1759 (closed), !331 (closed)
Activity
mentioned in merge request !331 (closed)
/cc @ayufan
I'm currently trying to understand what needs to happen still, except for the docs.
I don't like the current approach of manually specifying volumes. How about using template files that would be pre-filled with GitLab-specific data? This gives us full flexibility in configuring volumes (of any type), labels (of any value), and anything else that is there:
```toml
[runners.kubernetes]
  namespace = "gitlab"
  privileged = true
  pod_template_file = "pod.yaml"
  build_container_template_file = "build-container.yaml"
  helper_container_template_file = "helper-container.yml"
  service_container_template_file = "service-container.yml"
```
Then we have `pod_template_file`:

```yaml
apiVersion: v1
kind: Pod
metadata:
  labels:
    mylabel: myvalue
spec:
  volumes:
    - name: shared-data
      emptyDir: {}
```
Then we have `build_container_template_file` (this is part of `Pod` => `spec.containers`):

```yaml
volumeMounts:
  - name: shared-data
    mountPath: /shared-data
```
We would load the pod template and fill it with our own data. For each container type we would load the container template, fill it with our own data, and add it to the pod's list of containers. The pod template could also define additional containers that would not be created dynamically, but manually.
@munnerz and @tmaczukin Can you review my proposal?
@ayufan I think this is the right way.
I'm not 100% sure how these templates for containers would look. Will they be able to contain most of the configuration fields (like CPU/memory resources and limits) currently present in `config.toml`? If yes, then we could store there even more than only volume definitions. But even if not, I think this is the cleanest configuration form that we have discussed here and in !331 (closed).

@ayufan sorry for the delay, it's been very busy in the run-up to KubeCon here.
I'm not a big fan of this design for a few reasons:
- with this, there's no way for a user to mount a volume into a service container. Currently, volumes are mounted into every container within the pod. So if a user wants to configure/tweak parameters or some config in a service, they can.
- if the Kubernetes API changes, we now have to version gitlab-runner against not only GitLab and the `helper` docker image, but also additional volume types.
- where does this kind of templating end? There's a complexity vs. flexibility tradeoff here, and I feel the proposal favours flexibility over complexity, meaning it's more powerful but also a steeper learning curve for users (having to understand the k8s manifest format, for one).
Kubernetes itself already has abstractions for what we're trying to do here, in the form of PersistentVolumeClaims (https://kubernetes.io/docs/user-guide/persistent-volumes/#persistentvolumeclaims). If the number of different volume types being added here is a concern, we could support just PVCs as volumes. If users want to mount a volume into their build, they can create a PVC in the destination Kubernetes cluster (and namespace) and reference it from within their config. PVCs can be created from the Kubernetes dashboard UI too, making it nice and simple for users who have zero experience with writing Kubernetes manifests.
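If only PVCs were supported, the runner config might look something like this, by analogy with the `host_paths` example in the description. This is purely a sketch of the proposal; the `pvcs` table name and `claim_name` key are hypothetical, not an existing config format:

```toml
[[runners.kubernetes.mounts]]
  name = "build-cache"
  mount_path = "/cache"

# Hypothetical: reference an existing PersistentVolumeClaim by name
[[runners.kubernetes.volumes.pvcs]]
  name = "build-cache"
  claim_name = "runner-cache-claim"
```

The runner would then only need to emit a `persistentVolumeClaim` volume source in the pod spec, leaving the actual storage type to whoever created the claim.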
> with this, there's no way for a user to mount a volume into a service container. Currently, volumes are mounted into every container within the pod. So if a user wants to configure/tweak parameters or some config in a service, they can.

You have a separate definition for the service container that can have different volume mounts, so this is not fully correct.

> if the Kubernetes API changes, we now have to version gitlab-runner against not only GitLab and the `helper` docker image, but also additional volume types.

It will just be a recompile of the binary with a bump of the k8s integration. As we do not implement new functionality, we only parse existing specifications, so this will not be the biggest problem. Since the API there doesn't change, you can be quite sure that it will work as before.

> Kubernetes itself already has abstractions for what we're trying to do here, in the form of PersistentVolumeClaims (https://kubernetes.io/docs/user-guide/persistent-volumes/#persistentvolumeclaims). If the number of different volume types being added here is a concern, we could support just PVCs as volumes. If users want to mount a volume into their build, they can create a PVC in the destination Kubernetes cluster (and namespace) and reference it from within their config. PVCs can be created from the Kubernetes dashboard UI too, making it nice and simple for users who have zero experience with writing Kubernetes manifests.
I want to avoid adding support for specific volume types, and either rely on parsing a template into structures that are provided by someone else, or simply support a small fixed set. So either we stick with templating, which is a very generic way of supporting everything, or we allow only `pvc`, `hostPath`, `config` and `secretMap`. I don't expect to support more, but I still don't like that we have to replicate and rewrite an API.

mentioned in issue #2258 (closed)
@zj @tmaczukin @munnerz Lets move design discussion to: https://gitlab.com/gitlab-org/gitlab-ci-multi-runner/issues/2258
changed milestone to %v9.1
added Doing, executor:kubernetes labels
assigned to @zj
changed milestone to %v9.2
@zj This MR looks nice and it seems to fulfill most of the requirements; the rest can be done in another MR. Can we write documentation for it and merge it?
@zj Why?
- Resolved by Zeger-Jan van de Weg
changed milestone to %9.3
assigned to @ayufan
@zj Thanks. Could you create another MR to now expose PVCs and ConfigMaps, as I think these are the only things we're missing now?
Edited by Kamil Trzciński

mentioned in merge request !625 (merged)
The documentation above in the description is not correct. We had to fiddle around and read the related code for quite a while to get it running. In case someone wants to mount the docker socket, this is what we configured:
```toml
[[runners.kubernetes.volumes.host_path]]
  name = "docker"
  mount_path = "/var/run/docker.sock"
```

Note that `host_path` is singular.
@Dbzman Thanks for sharing this. We're already working on some improvements (including documentation for volumes support) - see !625 (merged).
@msvechla Please create a new issue with the feature/improvement proposal :)