Move Kubernetes from service to Clusters page
Description
We have a Kubernetes service integration, which lets a user provide credentials for a k8s cluster to a project. We have an issue elsewhere to provide this at the group level. We even have an issue to create a Kubernetes cluster on GKE for a GitLab group. But perhaps it's time to promote our Kubernetes integration from being just another service to being first-class.
For example, we could have a page under CI/CD at the group level to list your clusters and add new ones. When adding a new one, you could provide credentials for an existing cluster, or create a new one on GKE. If you create multiple clusters, they'll be shown in a list.
Each cluster could have an environment (or wildcard pattern) to be accessible on. e.g. there could be a `production` cluster for `production[/*]`, and a `dev` cluster for `*` (the rest). Projects could override which clusters they use (perhaps using the existing service, or a new clusters page there). Perhaps as an admin, I'd be able to see which projects are making use of which clusters. Perhaps as an admin I'd be able to assign clusters to specific projects, although we have no precedent for this kind of control. (Project-specific runners are assigned at the project level regardless of group, for example.)
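To make the wildcard matching concrete, here's a minimal sketch of how a deploy job's environment name could be matched against each cluster's environment scope. The cluster list and the `fnmatch`-style glob matching are assumptions for illustration, not the actual implementation:

```python
from fnmatch import fnmatchcase

# Hypothetical cluster records: (cluster name, environment scope pattern).
CLUSTERS = [
    ("production", "production/*"),
    ("dev", "*"),
]

def clusters_for(environment):
    """Return every cluster whose scope pattern matches the environment."""
    return [name for name, scope in CLUSTERS
            if fnmatchcase(environment, scope)]

print(clusters_for("production/eu"))     # ['production', 'dev']
print(clusters_for("review/my-branch"))  # ['dev']
```

Note that both clusters match `production/eu`; picking a single winner is the specificity rule described in the proposal below.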
For each cluster, we could make it easy to install runners, Prometheus, NGINX ingress, Helm tiller, etc. Everything you need to use Auto DevOps. Ideally with one click, but possibly a la carte. We should show the status of each of these services, and let you upgrade them easily as well. Or have them optionally auto-updated.
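As a sketch of what "one click" might drive under the hood, the install could shell out to the Helm CLI once Tiller is in place. The chart names, release names, and the `gitlab-managed-apps` namespace here are assumptions for illustration (Helm 2 syntax):

```python
import subprocess

# Hypothetical one-click installs: (release name, chart) pairs.
MANAGED_APPS = [
    ("runner", "gitlab/gitlab-runner"),
    ("prometheus", "stable/prometheus"),
    ("ingress", "stable/nginx-ingress"),
]

def install_app(release, chart, namespace="gitlab-managed-apps"):
    # Helm 2 syntax; assumes Tiller is already installed in the cluster.
    subprocess.run(
        ["helm", "install", chart, "--name", release, "--namespace", namespace],
        check=True,
    )

for release, chart in MANAGED_APPS:
    install_app(release, chart)
```

Upgrades could map to `helm upgrade` in the same way, which would make the "show status and upgrade easily" part mostly a UI over Helm releases.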
For each cluster, we could make it easy to monitor the cluster, see when the cluster needs to be scaled, and provide an easy way to scale it (if it was created on GKE).
There may be other admin functions as well. Perhaps we need to be able to clean up orphaned pods (although I'd rather that just not happen). I don't want to replace the Kubernetes dashboard exactly, but I do want to make it unnecessary to use the k8s dashboard except in rare circumstances.
To do this right, we really should use the master creds to create secrets for each project/environment rather than sharing the same creds. But as a first iteration, we can share the creds.
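A rough sketch of what the "right" approach could look like, using the official `kubernetes` Python client with the master credentials to provision a namespace and a scoped service account per project/environment. The naming scheme and overall flow are assumptions for illustration, not the planned implementation:

```python
from kubernetes import client, config

def provision_environment_creds(project, environment):
    # Authenticate with the master credentials (here, from a kubeconfig).
    config.load_kube_config()
    core = client.CoreV1Api()

    # One namespace per project/environment, e.g. "myproject-production".
    namespace = f"{project}-{environment}"
    core.create_namespace(
        client.V1Namespace(metadata=client.V1ObjectMeta(name=namespace))
    )

    # A service account scoped to that namespace; its token (a secret
    # generated by Kubernetes) becomes the deploy credential handed to
    # CI jobs, instead of sharing the master creds everywhere.
    core.create_namespaced_service_account(
        namespace,
        client.V1ServiceAccount(metadata=client.V1ObjectMeta(name="deploy")),
    )
    return namespace
```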
Proposal
- Add Clusters menu at the group, personal namespace, and project levels.
- Show list of clusters (with great no-data graphic).
  - When looking at a project, I should be able to see group clusters too.
  - Ideally, when looking at a group, I can see all the project clusters (that I have access to).
- Button to add existing k8s cluster credentials.
- Button to create cluster on GKE (via OAuth). (Or single dialog with both choices.)
- For either existing or new cluster, provide an environment pattern for each cluster, much like environment-specific variables.
  - Cluster variables will only be sent to deploy jobs with matching environments.
  - If there are multiple clusters that match a given environment, the most-specific one wins. e.g. `production/eu` wins over `production/*` (see the sketch after this list).
- Indicate if a cluster is attached to protected environment(s). Restrict access appropriately. (Don't let Devs see details or create/edit them. Seeing the existence of them is OK.)
- For either existing or new cluster, offer to:
  - Install Helm Tiller (required for all of the following options): #36629
  - Install NGINX ingress with Let's Encrypt
  - Install Runner: #32831 (moved)
  - Install Prometheus: #28916 (moved)
  - Declare base domain(s) for ingress on the cluster (needed by Auto Deploy)
  - For GKE, allow automatically configuring Google Cloud DNS to point the base domain at the ingress IP
- Hopefully we don't need to worry about "protected clusters" if we have "protected environments". Clusters attached to protected environments will automatically be protected. This may be more complicated if we let Developers view/edit cluster information.
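Here's the sketch referenced in the environment-pattern item above: one way "most-specific wins" could be decided is to score each matching pattern by how much literal (non-wildcard) text it contains. The scoring rule is an assumption for illustration:

```python
from fnmatch import fnmatchcase

def pick_cluster(environment, clusters):
    """Among clusters whose scope matches, return the most specific one.

    Specificity is approximated here as the count of literal (non-wildcard)
    characters in the pattern, so "production/eu" beats "production/*",
    which beats "*".
    """
    matches = [(name, scope) for name, scope in clusters
               if fnmatchcase(environment, scope)]
    if not matches:
        return None
    return max(matches, key=lambda m: sum(c not in "*?" for c in m[1]))[0]

CLUSTERS = [("prod-eu", "production/eu"), ("prod", "production/*"), ("dev", "*")]
print(pick_cluster("production/eu", CLUSTERS))  # prod-eu
print(pick_cluster("production/us", CLUSTERS))  # prod
print(pick_cluster("review/foo", CLUSTERS))     # dev
```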
Questions
- Should we have this available for personal namespaces as well?
- The proposal only includes the group level. Does that mean we won't let people create clusters at the project level anymore? I suspect we'll still need to, so we can either leave the existing service integration, or add a Clusters menu at the project level as well.
- If we offer Clusters at both group and project level, we should have visibility across both. i.e. when looking at a project, I should be able to see group clusters too, and ideally when looking at a group, I can see all the project clusters (that I have access to).
- Should we provide DNS for them? e.g. `*.$USER_OR_GROUP_PATH_SLUG.gitlab-apps.com`. Then providing their own DNS would be an option, but not required.
Links / references
- Monitoring of GKE cluster: https://gitlab.com/gitlab-org/gitlab-ce/issues/27890
Documentation blurb
Overview
What is it? Why should someone use this feature? What is the underlying (business) problem? How do you use this feature?
Use cases
Who is this for? Provide one or more use cases.
Feature checklist
Make sure these are completed before closing the issue, with a link to the relevant commit.
- Feature assurance
- Documentation
- Added to features.yml