  • #1759
Closed
Open
Issue created Oct 04, 2016 by username-removed-755814@dts1

Kubernetes Executor Problems (AWS)

I am trying to use my AWS Kubernetes cluster as the place where GitLab builds and runs things, but I'm running into some problems. I stood up the runner with this manifest: https://gist.github.com/dts/cbb93aea7dbfec77ac70a8c24d436253. After registering the runner, its config.toml looks like this:

concurrent = 1
check_interval = 0

[[runners]]
  name = "aws-kube-2"
  url = "https://gitlab.com/ci"
  token = "<snip>"
  executor = "kubernetes"
  [runners.ssh]
  [runners.docker]
    tls_verify = false
    image = ""
    privileged = false
    disable_cache = false
  [runners.parallels]
    base_name = ""
    disable_snapshots = false
  [runners.virtualbox]
    base_name = ""
    disable_snapshots = false
  [runners.cache]
  [runners.kubernetes]
    host = ""
    cert_file = ""
    key_file = ""
    ca_file = ""
    image = ""
    namespace = ""
    privileged = false
    cpus = ""
    memory = ""
    service_cpus = ""
    service_memory = ""

Which results in:

Running with gitlab-ci-multi-runner 1.6.1 (c52ad4f)
ERROR: Preparation failed: error connecting to Kubernetes: invalid configuration: no configuration has been provided
Will be retried in 3s ...
ERROR: Preparation failed: error connecting to Kubernetes: invalid configuration: no configuration has been provided
Will be retried in 3s ...
ERROR: Preparation failed: error connecting to Kubernetes: invalid configuration: no configuration has been provided
Will be retried in 3s ...
ERROR: Build failed (system failure): error connecting to Kubernetes: invalid configuration: no configuration has been provided
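
For reference, everything under [runners.kubernetes] above is blank, so the runner apparently finds neither an explicit API endpoint nor a kubeconfig / in-cluster service account to fall back on. Here is a minimal sketch of what an explicitly configured block might look like; the endpoint, certificate paths, namespace, and image are placeholders, not values from my setup:

  [runners.kubernetes]
    # Hypothetical values - replace with the real API server endpoint and credentials.
    host = "https://kube-apiserver.example.com:443"
    ca_file = "/etc/gitlab-runner/kube-ca.crt"
    cert_file = "/etc/gitlab-runner/kube-client.crt"
    key_file = "/etc/gitlab-runner/kube-client.key"
    namespace = "gitlab"
    image = "docker:latest"

Alternatively, when the runner pod runs inside the cluster with a service account, the executor can supposedly pick up the in-cluster configuration automatically, which is presumably what the docs refer to below.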

The docs seem to suggest that the runner should auto-discover the cluster configuration when it runs inside Kubernetes. Beyond that, I want builds to use the Docker socket on the host itself, and to be limited to a single AWS host (the same host the gitlab-runner pod is scheduled on), so that caching works in a sensible way. I suspect this is all a pretty bad idea and that I should just spin up a dedicated build host (which will be my next path), but I figured I'd try this first since I already have a Kubernetes cluster up and running.
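
To illustrate that second part, this is a rough sketch of how the runner pod could be pinned to one node and given the host's Docker socket through its deployment manifest (node name, labels, and image tag are hypothetical, and this does not address the executor error above):

apiVersion: extensions/v1beta1   # Deployment API group current for Kubernetes at the time
kind: Deployment
metadata:
  name: gitlab-runner
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: gitlab-runner
    spec:
      # Pin the pod to a single node so caches stay on one host (hypothetical node name).
      nodeSelector:
        kubernetes.io/hostname: ip-10-0-0-12.ec2.internal
      containers:
      - name: gitlab-runner
        image: gitlab/gitlab-runner:v1.6.1
        volumeMounts:
        # Expose the host's Docker socket inside the runner pod.
        - name: docker-sock
          mountPath: /var/run/docker.sock
      volumes:
      - name: docker-sock
        hostPath:
          path: /var/run/docker.sock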
