Constant Deployment
Constant Deployment means that two deployments are never more than a certain period (for example one hour or one day) apart. If there are no changes to master within that period, the last commit is rebuilt, retested, and redeployed. When the same commit is deployed again the source code is unchanged, but the dependencies or the deployment environment might have changed. Think of it as a Time To Live (TTL) for your deployment process and upstream dependencies.
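In practice the TTL can be enforced by a small scheduled job that checks how old the most recent deployment is and, when it has expired, re-runs the pipeline for the current tip of master. Below is a minimal sketch using the GitLab REST API; the instance URL, project ID, token variable, and one-hour TTL are assumptions you would replace with your own values.

```python
import datetime
import os

import requests

GITLAB_API = "https://gitlab.example.com/api/v4"  # assumed instance URL
PROJECT_ID = 42                                   # assumed project ID
TOKEN = os.environ["GITLAB_TOKEN"]                # token allowed to run pipelines
TTL = datetime.timedelta(hours=1)                 # maximum allowed age of a deployment


def redeploy_if_expired():
    headers = {"PRIVATE-TOKEN": TOKEN}

    # Look up the most recent deployment of the project.
    deployments = requests.get(
        f"{GITLAB_API}/projects/{PROJECT_ID}/deployments",
        params={"order_by": "created_at", "sort": "desc", "per_page": 1},
        headers=headers,
    ).json()
    last_deployed_at = datetime.datetime.fromisoformat(
        deployments[0]["created_at"].replace("Z", "+00:00")
    )

    age = datetime.datetime.now(datetime.timezone.utc) - last_deployed_at
    if age >= TTL:
        # The TTL expired without a newer deployment: run the pipeline for the
        # tip of master again, so the same commit is rebuilt, retested, and
        # redeployed against the current dependencies and environment.
        requests.post(
            f"{GITLAB_API}/projects/{PROJECT_ID}/pipeline",
            params={"ref": "master"},
            headers=headers,
        )
```

Running this check every few minutes (for example from a scheduled pipeline or cron) gives every deployment an effective TTL of one hour.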
Deployments are hard; the usual advice is that if it hurts, do it more often. Because GitLab includes extensive testing, such as end-to-end and performance testing, as part of its scope, it can thoroughly evaluate a change before deploying it. As a last resort, and because GitLab also includes production monitoring in its scope, it can automatically revert a (canary) deployment based on Service Level Objectives (SLOs) set on system, application, and business metrics. This is what makes it possible to deploy constantly.
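The automatic revert in that last step comes down to watching the relevant metrics for a while after a canary deployment and rolling back as soon as an objective is violated. The sketch below shows the shape of such a check; the thresholds, the metric queries, and the rollback and promote commands are placeholders for whatever monitoring and deployment tooling you use.

```python
import subprocess
import time

# Assumed Service Level Objectives; replace with your own system, application,
# and business metric thresholds.
MAX_ERROR_RATE = 0.01           # at most 1% of requests may fail
MAX_P95_LATENCY_MS = 500        # 95th percentile latency budget in milliseconds
OBSERVATION_WINDOW_S = 10 * 60  # watch the canary for 10 minutes
POLL_INTERVAL_S = 30


def error_rate() -> float:
    """Placeholder: query your monitoring system (for example Prometheus)."""
    raise NotImplementedError


def p95_latency_ms() -> float:
    """Placeholder: query your monitoring system."""
    raise NotImplementedError


def watch_canary() -> bool:
    deadline = time.monotonic() + OBSERVATION_WINDOW_S
    while time.monotonic() < deadline:
        if error_rate() > MAX_ERROR_RATE or p95_latency_ms() > MAX_P95_LATENCY_MS:
            # An SLO is violated: revert the canary automatically instead of
            # paging an operator in the middle of the night.
            subprocess.run(["./rollback-canary.sh"], check=True)  # placeholder command
            return False
        time.sleep(POLL_INTERVAL_S)
    # All SLOs held for the whole window: promote the canary to the full fleet.
    subprocess.run(["./promote-canary.sh"], check=True)  # placeholder command
    return True
```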
Deployment methods will evolve from manual deployment to constant deployment:
- Manual deployment - every change to master is deployed together with other changes
- Continuous Delivery (CD) - every change to master can be deployed
- Continuous Deployment (CD) - every change to master is deployed once
- Constant Deployment (CD) - every change to master is deployed one or multiple times
Suppose it is the middle of the night and Debian releases a security fix. Because a new deployment is made every hour, your application will automatically pick up the fix within an hour.
Advantages:
- Detect problems with deployments sooner, make it easier to find their cause, and prevent them from being attributed to a code change.
- Detect problems with dependency updates sooner, make it easier to find their cause, and prevent them from being attributed to a code change.
- Automatically apply upstream security fixes. https://gitlab.com/gitlab-org/gitlab-ce/issues/28566
- Prevent stale containers. No need for a button to update all containers. https://gitlab.com/gitlab-org/gitlab-ee/issues/592
- Prevent a long list of findings from vulnerability scanning tools that you need to act on.
- Allow you to run vulnerability scanners as part of your regular pipeline (you no longer need a separate process to run them regularly).
Disadvantages (and their mitigations) are:
- More compute needed (but the savings in people's time are larger)
- More network bandwidth needed (cache aggressively)
- More storage needed (identify when artifacts are identical; see the sketch after this list)
- Unsupervised deployments (roll back automatically if a Service Level Objective is not met)
- Zero-downtime deployments needed (easy with cloud-native applications and Auto DevOps)
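One way to keep the extra storage in check is to address artifacts by a hash of their contents, so that redeploying the same commit with a byte-identical build result does not consume additional space. A minimal sketch, assuming a local content-addressed directory (the store path is a placeholder):

```python
import hashlib
import pathlib
import shutil

STORE = pathlib.Path("/var/artifacts/by-sha256")  # assumed artifact store location


def store_artifact(artifact: pathlib.Path) -> pathlib.Path:
    """Store an artifact once per unique content and return its canonical path."""
    digest = hashlib.sha256(artifact.read_bytes()).hexdigest()
    target = STORE / digest / artifact.name
    if target.exists():
        # An identical artifact is already stored (for example because a redeploy
        # of the same commit produced a byte-identical build): reuse it.
        return target
    target.parent.mkdir(parents=True, exist_ok=True)
    shutil.copy2(artifact, target)
    return target
```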
This concept is orthogonal to that of a merge train (https://gitlab.com/gitlab-org/gitlab-ce/issues/4176); you can use merge trains and constant deployment independently of each other.