The stages are already listed in the .gitlab-ci.yml file. We should show which commits are currently in which stage, what the status of each individual job is, and which steps require manual intervention.
Since we're GitLab and have issues and merge requests available we can go beyond CI/CD and show the complete pipeline, from issue to deploy. I explored this idea in gitlab-org/gitlab-ce#3743 (private issue) 7 months ago.
We would show:
Issue (link to issue, when was it made, how many emoji, how many comments, when closed)
Merge requests (link to merge request, when was it made, branch name, which commits, who reviewed it, how many emoji, how many comments, when closed)
Each of the CI/CD stages (name, success, failure, started, pending, when did it complete, how long did it take, which jobs, which build artifacts)
I discussed with @ayufan and there is little time for this. He can probably make something ugly. @ayufan please make this ugly thing early in the 8.4 cycle so we have time to make it beautiful with UX Designers and JS.
I don't want to be too biased by my previous work, but there's a good intro to Heroku's Pipelines on the Heroku Flow page, including a nice video.
For comparison, Heroku's Pipeline view focuses on visualizing the pipeline (e.g. staging and production) and the state of each app environment which spans multiple commits. When I think of a Pipeline view, this is what I think of:
In this case, CI status is just a pass/fail attribute of merge requests, since it doesn't cover the test part at all.
When I think of pipelines that span from test to production, I think of this:
But unfortunately, a more realistic flow requires rebuilding and retesting after merging to master in case something has changed. Even if nothing has changed, most (all?) merges generate a new SHA so if you reference builds by git SHA, you need to rebuild after the merge. Perhaps in the future, we could inspect the history and know if tests really need to be re-run or not, but that's an optimization that can be left for later.
We like to describe build as a separate stage, but in practice, most scripted languages (e.g. Ruby) end up doing a bundle install on each runner. This always feels like such a waste, but in practice, it's probably faster (for the user) than caching and sharing the build between test runners.
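For concreteness, the caching alternative mentioned here would look something like this in .gitlab-ci.yml (a sketch only; the paths and commands are typical Ruby-project choices, not prescriptive):

```yaml
# Cache installed gems on the runner between builds instead of
# sharing a built artifact between test runners
cache:
  paths:
    - vendor/ruby

test:
  script:
    - bundle install --path vendor/ruby --jobs 4
    - bundle exec rake test
```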
So you probably end up with something like:
This breaks one of the goals of good CI. We can at least try to avoid the final build before production and copy the build used in staging. A Docker registry makes that a bit easier, but even outside of Docker, we should be able to grab build artifacts from the staging deploy and re-deploy them to production. I'm not sure if GitLab CI currently supports that or makes it easy.
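For what it's worth, a rough sketch of what reusing a single build could look like using the artifacts and dependencies keys (the script names are placeholders, and the manual production step is the manual action proposed later in this discussion, not something available today):

```yaml
build:
  stage: build
  script: ./build.sh              # placeholder build command
  artifacts:
    paths:
      - dist/

deploy staging:
  stage: deploy
  dependencies:
    - build                       # fetch the dist/ produced by the one build
  script: ./deploy.sh staging dist/

deploy production:
  stage: deploy
  dependencies:
    - build                       # production reuses the identical artifact
  script: ./deploy.sh production dist/
  when: manual                    # proposed manual promotion step
```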
Side note 1: You'd think you'd want to use the same build from test and not rebuild for staging too, but in practice, the build used for test includes additional modules that you don't actually want in production. For example, with a Rails app, you might load a gem which stubs out mail functions and displays a web UI to see emails, but in production, you want to use Mailgun. Aside from the waste of including an unnecessary gem, the existence of the gem in the build could trigger different behavior. I think good use of environment variables should make that last part a non-issue, but it doesn't erase the bloat issue.
Part of the pitch for Docker is to create a single image and have the confidence that it's exactly the same in test and production, but I'm worried that given the above, it results in bloat. Maybe people are ignoring it for now, but they'll eventually get sick of it. One option that might work for some teams is to build a core production image, then using layers, add on the test-only modules (and remove any that can't be there?) for CI tests. Then at least when your tests are done and you ship the core image to production, you have some confidence that the production image has been tested. I'm not 100% positive about this, since, well, it wasn't exactly that image that was tested. But at least the common problems shouldn't come up (like compiler or module version changes in between builds).
Side note 2:
a) All of the above is for a topic-branch and release-tag type of flow, but should be relatively equivalent if you use a production or 2.2-stable branch rather than a tag.
b) GitHub flow would be a bit different though, since they actually deploy a topic branch directly to production using a manual trigger while it's still just a pull request, and then after some criteria, merge the pull request into master.
c) That brings up another common flow which is to not use tags or branches to mark what is in production, and simply have a manual trigger to promote from staging to production. Both of these add the need to support manual triggers outside of repository-related triggers.
d) A final flow which shouldn't be ignored is one that skips staging altogether and deploys all changes to production as soon as CI passes. That's really just a subset of the above.
e) A fork-based flow where merge requests come from people's private forks shouldn't affect this too much since the merge request itself lives on the parent repo.
[Sorry for all the rambling. Hopefully some of this makes sense and helps with the discussion. I'm keenly aware of derailing otherwise productive work.]
As a user, I care about a few things.
I want to enable the above type of testing and deployment. Meaning, it just needs to happen somehow. Ideally quickly and reliably. And configured by .gitlab-ci.yml. But enabling it is different than visualizing it.
I want to understand the current status of a given merge request, whether tests have passed, and whether it's been deployed anywhere.
I want to understand the status of builds/tests for a merge request, including the history of builds.
I want to have merge requests deployed to ephemeral environments created for the purpose of testing the merge request. Aka Review Apps.
After an MR has been merged, I want to know if and when it's deployed to staging and/or production. I have never seen an integration that provides this, but I dearly want it.
I want to understand the current status of the environments. e.g. What is currently running in production? What's the difference between staging and production?
I want to be able to control any manual parts of the flow. e.g. manually promote from staging to production.
I want to have best practices be encouraged, easy and default, but allow customization.
@markpundsack only problem I see with reusing the staging build for production is that some people use different environments (e.g. Rails environments/Bundler groups) for staging. So I think "Side note 1" is applicable to Staging => Production as well.
That may not be a best practice, but it makes sense in some situations; e.g. you wouldn't want real emails to go out in a staging environment due to a bug. (Actually, thinking about it more, wouldn't you never want to use the Staging build in Production, since that'd mean they'd have the same DB? So if you accidentally deleted records in staging, your production environment would be screwed as well. This is assuming the framework or what-have-you doesn't support runtime Environment Variables.)
What if I have a flow where I tag a release as 8.8.0-pre1 and then deploy that to staging, then if there are problems deploy 8.8.0-pre2, then when that's all clear I tag 8.8.0 and deploy to production? Is that possible with this setup?
I see you mentioned that this should be equivalent if the user has a flow wherein they use a branch instead of a tag, so that issue is covered.
This is very focused on web development, which is fair and probably a good idea for an MVP, but for the future would it be reasonably possible to route an iOS app build to Test Flight or an Android app to Google Play Beta from GitLab (assuming that's even possible)? Would I be able to push a gem to RubyGems? Are any of these setups things that we want to support at any point in the future, or are they more "private" (e.g. between a developer and RubyGems, with key signing) interactions that users can't/won't want to automate?
EDIT: Also, how did you make those flowcharts? They're quite pretty, and I kind of want to steal them.
Also, presumably any of the following could be used to trigger a push?
Merge Request is opened on the master branch
MR is opened on the develop branch
MR is merged into master branch
MR is merged into develop branch
A tag (any tag?) is added on master
A tag matching a given regex (we could probably provide regex for common formats, e.g. v1.0.0, 1.0.0, 2016-04-26, etc.) is added on master
Builds pass on a commit to master
Manual intervention by a user
Any MR is updated (e.g. not just opened, but subsequent code updates too)
Any code commit is pushed to master
Any code commit is pushed to develop
Any code commit is pushed to a topic branch
API trigger
Tell me if I'm missing anything.
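For what it's worth, several of these already map onto the existing only key; a minimal sketch, with hypothetical job names and a placeholder deploy script:

```yaml
deploy staging:
  stage: deploy
  script: ./deploy.sh staging        # placeholder deploy script
  only:
    - develop                        # any commit pushed to develop

deploy production:
  stage: deploy
  script: ./deploy.sh production
  only:
    - /^v\d+\.\d+\.\d+$/             # tags matching e.g. v1.0.0
```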
In case anyone's unfamiliar: develop is a branch running parallel to master, from which feature branches are branched. Instead of using MRs to show that something has been launched to staging, one would use develop. Once staging is tested and verified to be working, develop is merged into master and pushed to production. I've used this before and think it works relatively well for more frequently-released web apps. Not sure how common it is, though.
And then there's the "how does a project define its pipeline?". Most CI systems I know of use mostly/entirely a config file (Concourse, Circle, Travis), but since GitLab is more than a CI system, that makes things more complicated.
Current setup for a lot of projects:
The commit/PR goes from GitHub to Circle/Travis.
The build status is passed to GitHub or Bitbucket via webhook.
GitHub/Bitbucket signals that the PR can be merged into master.
That causes a webhook which Heroku uses to deploy a build to staging/production (automatically/manually, depending).
GitHub/Bitbucket and Heroku use interfaces, whereas Concourse, Travis, and Circle use config files.
Pros to a config file:
GitLab CI already uses one
Developers tend to like them
Portable across instances of GitLab, forks of a project (excluding environment variables)
More flexibility for users, easier/faster to develop
Doesn't need as many UI Designers/Developers
It's in version control with all the inherent benefits (history, repeatable, forkable)
Pros to a UI:
A config file may become grossly large and/or complex
A/B test-able, unlike a config file
No need to worry about different branches having de-synced config files (Feature or bug?)
Easier to redesign/refactor?
Easier to configure for less technical and/or newer users
Having deploy configuration in version control makes less sense for forks and open source projects (e.g. the fork user may not want a staging environment)
Sidenotes:
GitLab CI Runner isn't currently released in tandem with GitLab CE/EE as far as I know, and different instances may be on different GitLab versions, so if a user were to fork a project from GitLab.com (running a pre-release) to a GitLab EE instance (running a stable release) the config file may not work. This may be a problem we should fix anyway, since I've run into the CI Config Linter being out of sync with the runner GitLab.com was using.
Would it be confusing/possible to have both?
Should we be versioning the config files so we can make breaking changes?
only problem I see with reusing the staging build for production is that some people use different environments (e.g. Rails environments/Bundler groups) for staging. So I think "Side note 1" is applicable to Staging => Production as well.
Yeah, there are times when the staging build can't be used for production and we should support rebuilding. We just shouldn't require it for everyone. I'm not worried about the database though, since that's usually logically separate from the build. You may have to run migrations again after promoting a build, though.
What if I have a flow where I tag a release as 8.8.0-pre1 and then deploy that to staging, then if there are problems deploy 8.8.0-pre2, then when that's all clear I tag 8.8.0 and deploy to production? Is that possible with this setup?
That's a little complicated, in that we probably don't want to have to interpret version numbers and figure out which ones go to staging and which go to production. I'm imagining simpler, constant tags like production, or simply "every tag". :)
for the future would it be reasonably possible to route an iOS app build to Test Flight or an Android app to Google Play Beta from GitLab (assuming that's even possible)? Would I be able to push a gem to RubyGems? Are any of these setups things that we want to support at any point in the future, or are they more "private" (e.g. between a developer and RubyGems, with key signing) interactions that users can't/won't want to automate?
Yes, we absolutely should support those flows. I tend to think of web first and hope that it's extensible to mobile and others, but it's quite possible there's something in those flows that isn't covered. I can't think of anything off the top of my head, but I haven't given it as much thought.
Also, how did you make those flowcharts? They're quite pretty, and I kind of want to steal them.
Also, presumably any of the following could be used to trigger a push?
For completeness, add:
Any MR is updated (e.g. not just opened, but subsequent code updates too)
Any code commit is pushed to master
Any code commit is pushed to develop
Any code commit is pushed to a topic branch
API trigger
Pros to a config file:
Add:
It's in version control with all the inherent benefits (history, repeatable, forkable)
But then also add a con:
Having deploy configuration in version control makes less sense for forks and open source projects (e.g. the fork user may not want a staging environment)
Would it be confusing/possible to have both [config file and UI]?
Possibly. You could have a web UI that generates/edits .gitlab-ci.yml. You could also have a CLI that inspects your code and generates a reasonable .gitlab-ci.yml based on heuristics.
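As a rough illustration, a generator that detected a Gemfile might emit something like this (entirely hypothetical output, not an existing tool):

```yaml
# Hypothetical .gitlab-ci.yml emitted after detecting a Ruby/Rails project
image: ruby:2.2

before_script:
  - bundle install --jobs 4

rspec:
  stage: test
  script:
    - bundle exec rspec
```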
Alternatively, there may be portions of the config that are best suited to a web UI. For example, maybe the configuration of staging and production deployments shouldn't be in your source code, but in a web UI. That way each fork of a project could have their own deployment pipeline. Think of hubot. The maintainers of hubot probably have some deployment pipeline, but it's not likely to mesh with every single user of hubot's codebase. But maybe every single user should still run tests using a shared config file.
Should we be versioning the config files so we can make breaking changes?
Having deploy configuration in version control makes less sense for forks and open source projects (e.g. the fork user may not want a staging environment)
My main argument against this would be that it's either "User has to remove extra config cruft from the config file" or "User has to figure out how to deploy from scratch". Personally I'd prefer the former; maybe have the Project Settings default to ignoring Deploy configuration in the .yml file so the forked config file doesn't cause any errors, while still leaving the forked project with a "template" should they wish to deploy the project themselves.
I vote for this feature: GitLab supporting both CI & CD.
FYI: Amazon has a global build-and-deploy system called Apollo, which also has a Pipeline system within it.
Simply put, this Pipeline controls software delivery from Build->Alpha(dev)->Beta(test)->Gamma(pre-flight)->Prod.
Such a pipeline can also be fully automated: a full CD pipeline supports going all the way through the pipeline to Prod, unsupervised, triggered by a code commit.
Sometimes you want to retest after the merge to master, sometimes not; you can configure this in .gitlab-ci.yml by specifying what needs to happen for which branch.
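Concretely, that's the existing only/except keys; a minimal sketch of both choices:

```yaml
test:
  script: bundle exec rake test
  except:
    - master          # skip re-testing after the merge to master

test master:
  script: bundle exec rake test
  only:
    - master          # or keep a master-only job to re-test after merging
```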
People will have different ways to promote changes: automatic, manual action in the pipeline view, tag, and merge into a branch. We should support all of them. Currently we support everything except manual actions through our .gitlab-ci.yml format. We can add manual actions to the pipeline view easily in a later version of GitLab.
As far as I can tell our current architecture supports the different flows that people will have.
As far as I understand the disadvantage of the proposed pipeline view as proposed in https://gitlab.com/gitlab-org/gitlab-ce/merge_requests/3703#note_5030054 is that the 'merge request' and 'deploy after merge' are separate pipelines. I think this is acceptable for 8.8 and we can improve it later. We can also have a look at the feasibility of detecting a link between the MR commit and the merge commit and showing them in a single pipeline.
Thanks @markpundsack for all your ideas. The workflows make a lot of sense :) We did some thinking about the workflow and also the technical side of all of this and how we can actually implement it, while also looking at how we can solve our own case.
The Pipeline word is a little tricky, because you can describe multiple workflows with this term. I think that out in the world there are two common uses of this word:
Pipeline describing the CI/CD workflow:
The sequence of executed jobs, usually Build->Test->Deploy, though the actual stages are open to question. This approach focuses on the change; the application or applications are the outcome of the pipeline.
This term is commonly used by CI/CD solutions. You may say that this should be named Builds, but that is not accurate, because it would lock the concept to only a subset of the functionality it offers.
In the CI/CD world, Pipeline describes the complete cycle from building the application to deploying it somewhere. It concentrates on the process of releasing rather than on describing how the artifacts of applications are processed.
Pipeline describing the environments (Heroku-like):
We have different environments that are used to deploy the application; we focus on the application rather than on the change. You could think of this as using the concept of two Pipelines; it depends on the product:
If you have a Heroku-like product, you have different environments defined in a “Pipeline”. The “Pipeline” describes how the application can be moved between these environments. To actually execute or promote the move, you have to either:
Have full control of the environment and use the IaaS/CaaS/PaaS API to promote. This relies on the images being provided by someone else, possibly by the Pipeline describing the CI/CD workflow.
Have something different that runs things; possibly define a Pipeline describing the CI/CD workflow in that case. That pipeline could run Tests and, at the end, do the Deployment.
The Pipeline in this context is used only by IaaS/CaaS/PaaS products, which are for web applications. Some products, rather than using the Pipeline keyword, use Track (Google Play Publishing).
In the context of a repository representing a single application, it is hard to call this a Pipeline, because the application only has a set of environments. Pipeline makes sense when we look at it from the context of multiple repositories with dependent applications, or a group of applications.
Let's consider an example where deployment is part of your repository configuration. We have two applications or repositories:
API - contains our application and integration tests, and defines rules for deployment and for promoting the application between different environments,
Frontend - contains our application and integration tests, and defines rules for deployment and for promoting the application between different environments; it also depends on a specific version of API.
Since the Frontend always requires a matching API version, we can't simply deploy a new API: it may break our Frontend application. Since we have two repositories, this introduces a bi-directional dependency:
Frontend requires a matching API version,
API requires being deployed alongside Frontend (only if we are deploying Frontend and API together).
In that model it's hard to break this dependency cycle. We could think that:
If a developer updates API,
We run tests on API,
We can't deploy it to production, because we are blocked by Frontend,
We can think about using some trigger to run Frontend and bump the API dependency automatically,
We run tests on Frontend,
We can finally deploy our Frontend,
We then can somehow trigger deployment of API.
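The trigger in step 4 could be approximated today with the existing triggers API, called from a job in API's .gitlab-ci.yml; a sketch only, where the project ID, the token variable, and the idea of passing an API_VERSION variable along are all placeholders:

```yaml
trigger frontend:
  stage: deploy
  script:
    # Kick off Frontend's pipeline, passing along the API commit that just
    # passed tests (token variable, project ID, and API_VERSION are placeholders)
    - curl -X POST -F token=$FRONTEND_TRIGGER_TOKEN -F ref=master -F "variables[API_VERSION]=$CI_BUILD_REF" https://gitlab.example.com/api/v3/projects/42/trigger/builds
```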
The state of environments
An interesting thing to consider is knowing what is deployed where. I guess it depends. We depend on our internal (mostly CI) information, which may or may not be accurate. The problem is that GitLab is not an IaaS/PaaS/CaaS, and it doesn't appear that we have plans to become one. Since we don't have full control over our *aaS, we can't be sure that something that appears to be the latest (because it's saved in the GitLab database) is actually the latest deployed to the environment. That's why I have a problem with relying only on the internal state of CI: we don't actually control the environment to which we deploy. In order to be exact about what we have deployed, we should ask our *aaS for the application that is deployed and tie that to the information that can be taken from our repository. By integrating *aaS platforms more deeply (using their APIs) we could think about implementing additional metrics, health checks, monitoring error rates, etc.
The problem of GitLab
A pretty funny example is GitLab itself. At GitLab we have multiple repositories that serve different purposes, which also shows that projects are usually made from multiple building blocks.
We do have:
GitLab CE - our main development repository,
GitLab EE - our main development repository for EE that gets regular changes from CE,
Omnibus GitLab - our main repository used for building release packages and Docker builds,
Chef Repo - our main repository used for managing GitLab infrastructure,
GitLab EE Bosh Release - our repository used for building GitLab PCF image from Omnibus packages
Product GitLab Tile - our repository used for releasing CloudFoundry Tile from GitLab EE Bosh Release
GitLab CE repository on DEV - our internal development repository for security patches
GitLab EE repository on DEV - our internal development repository for security patches
We also have a number of smaller repositories that are components or gems of GitLab CE and GitLab EE: gitlab_git, gitlab-pages, gitlab_elasticsearch_git, gitlab-workhorse.
It's hard to describe the deployment workflow of the development server, and also the deployment of GitLab.com, with a single repository. It may be problematic to say where the responsibility for these is located. Trying to use the single-repository concept introduces circular dependencies between our repositories:
We need chef-repo to execute the deployment,
We need omnibus-gitlab to run the deployment from it.
Thinking differently about pipelines and deployments
I think that current solutions are limited because they are constrained by the single-repository concept. They have to build another abstraction on top of the repository, because it's really hard for them to split this responsibility and focus on making the workflow simpler.
Let’s consider the previously mentioned API and Frontend example where we introduce another repository. So now we have three repositories:
API - our application, with integration tests; as part of our process we release stable Docker images or other artifacts tied to a specific version,
Frontend - our application, with integration tests; we depend on a specific API version; as part of our process we release stable Docker images or other artifacts tied to a specific version of Frontend and API,
Deployment - this repository contains all the information about how to deploy our microservices, and how these microservices can be promoted between environments. This repository depends on versions of API and Frontend.
Previously we had the problem of bidirectional dependencies between API and Frontend. Now the dependencies point in one direction only:
Our Frontend depends on API (a specific version). Our Deployment depends on Frontend and API.
What happens when we push a new commit to API:
We run integration tests for API. They succeed.
We build a Docker image for API.
We trigger the pipelines defined for Deploy and for Frontend.
We run the Pipeline defined by Deploy (this is subject to optimisation, because it appears not to be required):
We fetch the API and Frontend,
We verify the constraints of modules,
Since the Frontend requires a different version of API we abort.
Deployment doesn’t happen.
We run Pipeline defined by Frontend:
The Frontend fetches a new version of API.
We run integration tests for Frontend with a new version of API.
We create a new release of Frontend.
We build a docker image for Frontend and push to registry.
We trigger pipeline for Deploy.
We run Pipeline defined by Deploy, but triggered by Frontend:
We fetch the API and Frontend,
We verify the constraints of modules,
We deploy to Staging.
We can see exactly what happened for each of the branches.
The above concept breaks the bidirectional dependency, which in most cases is a bad thing when developing software. This flow scales to applications containing any number of components, and to deployments happening across multiple environments, environments that can be managed by different teams.
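A hypothetical sketch of what the Deployment repository's own .gitlab-ci.yml could look like under this model (check-versions.sh and deploy.sh are invented helper scripts; the cross-project triggering itself is the missing piece discussed below):

```yaml
verify constraints:
  stage: test
  script:
    # VERSIONS pins which API and Frontend releases this deployment expects;
    # exit non-zero (aborting the pipeline) if they don't match what was built
    - ./check-versions.sh VERSIONS

deploy staging:
  stage: deploy
  environment: staging
  script:
    - ./deploy.sh staging "$(cat VERSIONS)"
```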
Seeing things
The above example shows that we actually have a pipeline defined per repository, but we introduce a concept where we allow pipelines of other repositories to be run automatically or semi-automatically. You may be wondering how to see the status of these in MRs, in Commits, and in other places.
The answer is really simple. We show the Pipeline for the current project, but we can also show all dependent pipelines created from this one in any other projects to which the user has access. Since we are hosting the repositories and have full control, we can traverse the pipeline graph forward, because we know about all the projects involved, and show all environments provided by them. This allows us to show the environments to which a specific change got deployed.
The interesting outcome of this is that anyone can contribute, and it is super flexible. In the end, a person who has access to a repository running later pipelines sees their status and sees the environments used by those projects. That's important to them, because they are a contributor to that part of the workflow. This allows DevOps to have insight into more stages than regular contributors. It's also nice because every contributor with their own deployment repository can have their own pipelines, which can be used for their own automated or semi-automated deployments.
This also makes it possible to show an overview of all pipelines that span different projects: see what was executed lately and which versions/artifacts/images were used recently.
The dependency graph for GitLab and on GitLab
Let's consider how this would look for GitLab.
GitLab is more complex because we have multiple repositories, since by default we promote a flow where we advise using functional repositories.
Let's consider a change being merged to master on GitLab CE:
We trigger a package build in the Omnibus GitLab repository (we run the pipeline defined in .gitlab-ci.yml)
Omnibus GitLab builds a GitLab CE package
This triggers the pipeline defined for the Dev GitLab repository:
  We deploy the previously built GitLab CE package
  We can run some other actions if required
We trigger an automatic merge in the GitLab EE repository (we run the pipeline defined in .gitlab-ci.yml)
We perform the automatic merge
We run integration tests
If tests pass we commit the merge
This triggers a pipeline for the change pushed to master of GitLab EE:
  We trigger the Omnibus GitLab repository to rebuild the latest GitLab EE package
  Anyone who has a repository which depends on Omnibus GitLab can create their own deployment of the GitLab EE package
It's important that Omnibus GitLab can then trigger all dependent projects that, for example, inherit from the built package, like creating a new Bosh Release and then automatically creating the GitLab PCF Tile.
It's not only deployment
The flow is not limited to Deployment; deployment is just one of the use cases.
You can have dependencies that do automatic merges, or that rebuild your images when an upstream project changes. By design it can be used by multiple projects.
How to achieve all of that
It's amazing how simple this is. We actually have most of the pieces. We are missing a few things to make the full flow possible:
Define dependencies between projects - the cross-project observer pattern,
Pass information between pipelines - this is simple,
Visualise pipelines from other projects - this is also really simple,
Show a list of environments with the latest change that was built for them,
Add manual actions to Pipelines.
How to configure that
Let's consider the GitLab example. I'll show the important parts of the .gitlab-ci.yml files.
Some simple examples we were thinking about (syntax to be defined; this is a concept example only):
```yaml
dependencies:
  - watch: gitlab-org/gitlab-ce
    when: tag
    entry: deploy

image: ruby:2.2

deploy:
  script:
    - git clone git@gitlab.com/gitlab-org/gitlab-ce gitlab-ce
    - cap staging deploy
```
The environment thing is just a hint for GitLab that this job actually did a deployment to this environment. By looking at jobs and environments, we can show which change was recently deployed where.
Allowing promotion. Let's consider the Dev GitLab example:
```yaml
production:
  stage: deploy
  script: run-deployment $OMNIBUS_GITLAB_PACKAGE
  environment: production
  when: manual
```
Since we know what was deployed to staging, we also know which pipeline was used to do that. We can mark some of the jobs as manual. A user going to the list of their environments would see the deployed staging, but would also have a button showing the other possible environments that can be deployed from this pipeline. In this case that would be production. It would not be executed by default, because it has a manual action set.
You could see the same by going to the Pipeline view. You would see that some jobs are manual. You could trigger them, or retry them if they were executed before. This would allow you to effectively deploy, and roll back, a specific environment.
Summary
Environments here are actually a supplementary thing that allows you to track deployments, where deployments are actually part of the Pipeline. Also, because we think of pipelines per project, we can also think about cross-project dependencies that allow you to connect any number of repositories, repositories whose purpose can be solely deployment. This concept fits quite nicely into any workflow and spans any number of applications. It allows implementing actions of any kind, and it also tries to reuse artifacts where possible. It's flexible, because the manual actions can also do merges, pushes to registries, image builds, and deployments.
I really like the design that introduces inversion of control. I think this may be something different from what you can find in existing solutions. Inversion of control in software development usually leads to better code in object-oriented programming. If we think about repositories as objects, then implementing the observer pattern in our pipeline seems like an interesting design.
Although we probably also want to support defining dependencies in the usual direction, the inverted direction may open infinite possibilities and promotes the main idea behind GitLab -- "Everyone can Contribute". Suppose someone wants to 'attach' their workflow to our pipeline. Having support for dependency inversion in our pipeline would make it possible for everyone to attach to a GitLab CE tag event and trigger a deploy to their servers according to a definition in their own repository (like some private project bofh/my-gitlab-deploy depending on gitlab-org/gitlab-ce to trigger its pipeline).
Such a solution would probably require some deeper thought about how to visualize this (e.g. should we visualize the pipeline within a group context only, or should admins see the entire pipeline, including repositories from namespaces that we do not control). Anyway, it is an appealing idea, also a challenging one, but it seems consistent with our current architecture/design. What do you think?
The .gitlab-ci.yml script that constructs everything presented above:

```yaml
teaspoon:
  stage: test
  script: ...

deploy to staging:
  stage: uat
  environment: staging
  script: ...

run smoke tests:
  stage: smoke tests
  script: ...

deploy to production:
  stage: deploy
  when: manual
  environment: production
  script: ...
```
The environment key is a hint for GitLab that this job performs an actual deployment to a specific environment.
Pipeline actions are based on the fact that we have jobs marked as manual; because of that, we can allow them to be executed easily from different places.
To prevent confusion we'll change the name from Pipeline to a tab called 'Stages' in the existing Builds tab. What is now in the Builds tab will be named 'Jobs'.
Don't use the word pipeline, and use the word builds only as it is currently defined in our API, and as little as possible.
@markpundsack please create issues for the following things:
Rename Builds tab into CI tab, that now will get Jobs & Stages, and in the future will add Projects & Environments (link to mockups here)
Create an issue for environments (link to mockups here)
Manual deployments
Make trigger first class
Add dependencies in .gitlab-ci.yml
Linking builds that are related (MR commit and merge commit)
The idea of creating a project to represent a production site (like Dev GitLab) is really interesting. I'm not 100% sold yet, but I like that it:
Allows for some complicated configurations
Decouples code testing from deployment-specific configuration
Allows different deployed instances of a codebase to have different deployment-specific configuration
Allows third-parties using the code to ignore deployment instructions that don't apply to them
It doesn't feel as elegant, and I wouldn't like to be forced to create a project for every deployed thing since it would feel like it clutters up the code repositories. Worth exploring further. Having said that, from what I can see, I could move the deploy steps for Dev GitLab and GitLab.com onto the Omnibus GitLab repo if I really wanted to.
It doesn't feel as elegant, and I wouldn't like to be forced to create a project for every deployed thing since it would feel like it clutters up the code repositories. Worth exploring further. Having said that, from what I can see, I could move the deploy steps for Dev GitLab and GitLab.com onto the Omnibus GitLab repo if I really wanted to.
It's tricky, because I agree with you :) This will not work for everyone. Most people will stick with a single repository, but that is fine; that model is a simple case of the above. It's also made to solve our own case. I have a problem with Omnibus GitLab, because this is also a thing that people can contribute to. I kind of feel that Deployment is a much bigger task than Omnibus GitLab can handle :)
Will there be a possibility to accept manual jobs, and to have multiple manual jobs per project?
Example: a project has a manual job "pushing to test environment". Testers trigger it and do manual testing there; if everything is okay they mark the job as successful/failed. In the meantime the pipeline is stopped from running the next stages until the manual one is marked as successful. If they mark the job as successful, the pipeline automatically triggers the next stage. If the job is marked as failed, it stops the pipeline from going any further. Then at the last stage, if everything succeeded, we can manually trigger "deploy to production".
@dariss6666 Not in this iteration, but there's a separate issue (#17010 (closed)) to track manual deployments. It's not exactly what you suggested, but might suffice.