Master canary deployment: two master application servers used by the GitLab (core) team
- Every time master is updated, we'll build packages and deploy them to the two master workers.
- Every GitLab team member and core team member gets a special HTTP header.
- HAProxy detects that header and only then routes the traffic to the master workers.
- This way we can quickly experience the state of master; this is especially important since we're not using dev.gitlab.org much anymore.
- When there is a database migration, this process will stop. Later we may automatically deploy migrations that can be performed online.
- We'll do this first, before diving into feature branches.
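The header-based routing described above could be sketched as an HAProxy ACL. This is a minimal illustration only; the header name, backend names, and addresses are hypothetical, not the actual production config:

```
frontend https_in
    bind *:443
    # Hypothetical header name: route requests that carry it
    # to the master canary workers instead of production.
    acl to_master req.hdr(Gitlab-Master-Canary) -m str true
    use_backend master_workers if to_master
    default_backend production

backend master_workers
    server master1 10.0.0.11:443 check
    server master2 10.0.0.12:443 check
```

With a setup like this, only requests from team members (whose clients attach the header) ever hit the canary backend; everyone else stays on production.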
Related - documenting how to build a custom package -> https://gitlab.com/gitlab-com/operations/issues/104
Old two: as proposed in https://gitlab.com/gitlab-com/operations/issues/98/#note_4027404
Old:
It would be great to have an instance of GitLab with at least 10% of the data that we have on GitLab.com. It can be actual data from GitLab.com with all sensitive information replaced. I also propose setting up a tool for stress testing (there are plenty of them). I see several advantages to having this:
- Every new release can be tested there first (instead of on GitLab.com).
- Not long ago, @yorickpeterse asked someone from the DevOps team to test some changes on GitLab.com because he could not create the proper conditions to test them locally. We could avoid this situation with staging.
- Recently I developed the Elasticsearch integration and tested it locally with nearly ten huge repositories, and it worked. After running it on GitLab.com it crashed for many reasons, such as invalid data in our database, strange repositories, and so on.