Kubernetes Executor Problems (AWS)
I am trying to use my AWS Kubernetes cluster as the place where GitLab builds and runs jobs, but I'm running into some problems. I stood up the runner with this manifest: https://gist.github.com/dts/cbb93aea7dbfec77ac70a8c24d436253. Registering the runner produced the following config:
concurrent = 1
check_interval = 0
[[runners]]
  name = "aws-kube-2"
  url = "https://gitlab.com/ci"
  token = "<snip>"
  executor = "kubernetes"
  [runners.ssh]
  [runners.docker]
    tls_verify = false
    image = ""
    privileged = false
    disable_cache = false
  [runners.parallels]
    base_name = ""
    disable_snapshots = false
  [runners.virtualbox]
    base_name = ""
    disable_snapshots = false
  [runners.cache]
  [runners.kubernetes]
    host = ""
    cert_file = ""
    key_file = ""
    ca_file = ""
    image = ""
    namespace = ""
    privileged = false
    cpus = ""
    memory = ""
    service_cpus = ""
    service_memory = ""
Which results in:
Running with gitlab-ci-multi-runner 1.6.1 (c52ad4f)
ERROR: Preparation failed: error connecting to Kubernetes: invalid configuration: no configuration has been provided
Will be retried in 3s ...
ERROR: Preparation failed: error connecting to Kubernetes: invalid configuration: no configuration has been provided
Will be retried in 3s ...
ERROR: Preparation failed: error connecting to Kubernetes: invalid configuration: no configuration has been provided
Will be retried in 3s ...
ERROR: Build failed (system failure): error connecting to Kubernetes: invalid configuration: no configuration has been provided
The docs seem to suggest that the Kubernetes executor should auto-discover the cluster configuration. Further, I want builds to run against the Docker socket on the host itself, and to pin the runner to a single AWS host (the same one the gitlab-runner pod is scheduled on) so that caching happens in a sensible way. I suspect this is all a pretty bad idea and I should just spin up a dedicated build host (which will be my next path), but I figured I'd try it since I already have a Kubernetes cluster up and running.
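One workaround I may try before moving to a dedicated host is bypassing auto-discovery and filling in the `[runners.kubernetes]` connection details explicitly. A sketch of what that section could look like (every value below is a placeholder, not something from my cluster):

```toml
[runners.kubernetes]
  # Explicit API-server connection instead of in-cluster auto-discovery.
  # Substitute your own endpoint and credential paths.
  host = "https://<your-apiserver-endpoint>:6443"
  cert_file = "/etc/gitlab-runner/k8s/client.crt"
  key_file = "/etc/gitlab-runner/k8s/client.key"
  ca_file = "/etc/gitlab-runner/k8s/ca.crt"
  image = "docker:latest"
  namespace = "gitlab"
```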
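For what it's worth, my reading (from client-go's behavior, not from the runner docs) is that in-cluster auto-discovery only succeeds when the pod has the API-server env vars set and a service-account token mounted; if either is missing you get exactly this "no configuration has been provided" failure. A stdlib-only sketch of that check (the function name and parameterization are mine, for illustration):

```python
import os

# Path where Kubernetes mounts the pod's service-account token.
SA_TOKEN = "/var/run/secrets/kubernetes.io/serviceaccount/token"

def in_cluster_config_available(env, token_exists):
    """Rough mirror of client-go's in-cluster detection: both API-server
    env vars must be set and the service-account token must be mounted.
    `env` is a mapping like os.environ; `token_exists` stands in for
    os.path.isfile(SA_TOKEN) so the check is easy to exercise."""
    return (
        bool(env.get("KUBERNETES_SERVICE_HOST"))
        and bool(env.get("KUBERNETES_SERVICE_PORT"))
        and token_exists
    )

# Inside a real pod you would call:
#   in_cluster_config_available(os.environ, os.path.isfile(SA_TOKEN))
```

If this returns false inside the gitlab-runner pod, the likely culprit is `automountServiceAccountToken: false` or a missing service account on the deployment.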