Commit 20146580 authored by Evan Read

Resolve Markdown ordered lists not conforming to styleguide

parent dbb342d4
Showing 141 additions and 135 deletions
@@ -6,7 +6,7 @@ Authentiq will generate a Client ID and the accompanying Client Secret for you t
 
1. Get your Client credentials (Client ID and Client Secret) at [Authentiq](https://www.authentiq.com/developers).
 
2. On your GitLab server, open the configuration file:
1. On your GitLab server, open the configuration file:
 
For omnibus installation
```sh
@@ -18,11 +18,11 @@ Authentiq will generate a Client ID and the accompanying Client Secret for you t
```sh
sudo -u git -H editor /home/git/gitlab/config/gitlab.yml
```
3. See [Initial OmniAuth Configuration](../../integration/omniauth.md#initial-omniauth-configuration) for initial settings to enable single sign-on and add Authentiq as an OAuth provider.
 
4. Add the provider configuration for Authentiq:
1. See [Initial OmniAuth Configuration](../../integration/omniauth.md#initial-omniauth-configuration) for initial settings to enable single sign-on and add Authentiq as an OAuth provider.
1. Add the provider configuration for Authentiq:
For Omnibus packages:
 
```ruby
@@ -31,15 +31,15 @@ Authentiq will generate a Client ID and the accompanying Client Secret for you t
"name" => "authentiq",
"app_id" => "YOUR_CLIENT_ID",
"app_secret" => "YOUR_CLIENT_SECRET",
"args" => {
"args" => {
"scope": 'aq:name email~rs address aq:push'
}
}
]
```
For installations from source:
```yaml
- { name: 'authentiq',
app_id: 'YOUR_CLIENT_ID',
@@ -49,20 +49,20 @@ Authentiq will generate a Client ID and the accompanying Client Secret for you t
}
}
```
5. The `scope` is set to request the user's name, email (required and signed), and permission to send push notifications to sign in on subsequent visits.
1. The `scope` is set to request the user's name, email (required and signed), and permission to send push notifications to sign in on subsequent visits.
See [OmniAuth Authentiq strategy](https://github.com/AuthentiqID/omniauth-authentiq/wiki/Scopes,-callback-url-configuration-and-responses) for more information on scopes and modifiers.
 
6. Change `YOUR_CLIENT_ID` and `YOUR_CLIENT_SECRET` to the Client credentials you received in step 1.
1. Change `YOUR_CLIENT_ID` and `YOUR_CLIENT_SECRET` to the Client credentials you received in step 1.
 
7. Save the configuration file.
1. Save the configuration file.
 
8. [Reconfigure](../restart_gitlab.md#omnibus-gitlab-reconfigure) or [restart GitLab](../restart_gitlab.md#installations-from-source) for the changes to take effect if you installed GitLab via Omnibus or from source respectively.
1. [Reconfigure](../restart_gitlab.md#omnibus-gitlab-reconfigure) or [restart GitLab](../restart_gitlab.md#installations-from-source) for the changes to take effect if you installed GitLab via Omnibus or from source respectively.
 
On the sign in page there should now be an Authentiq icon below the regular sign in form. Click the icon to begin the authentication process.
On the sign in page there should now be an Authentiq icon below the regular sign in form. Click the icon to begin the authentication process.
 
- If the user has the Authentiq ID app installed in their iOS or Android device, they can scan the QR code, decide what personal details to share and sign in to your GitLab installation.
- If not they will be prompted to download the app and then follow the procedure above.
- If the user has the Authentiq ID app installed in their iOS or Android device, they can scan the QR code, decide what personal details to share and sign in to your GitLab installation.
- If not they will be prompted to download the app and then follow the procedure above.
 
If everything goes right, the user will be returned to GitLab and will be signed in.
\ No newline at end of file
If everything goes right, the user will be returned to GitLab and will be signed in.
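
For orientation, since the hunks above show only fragments of the provider blocks, here is a hedged sketch (Omnibus install assumed; `YOUR_CLIENT_ID` and `YOUR_CLIENT_SECRET` stay as placeholders; this is not part of this commit's diff) of what the complete entry in `/etc/gitlab/gitlab.rb` would look like:

```ruby
# Illustrative sketch only, in /etc/gitlab/gitlab.rb; not part of this commit's diff.
gitlab_rails['omniauth_providers'] = [
  {
    "name" => "authentiq",
    "app_id" => "YOUR_CLIENT_ID",
    "app_secret" => "YOUR_CLIENT_SECRET",
    "args" => {
      "scope": 'aq:name email~rs address aq:push'
    }
  }
]
```

After saving, run `sudo gitlab-ctl reconfigure` as the final renumbered step above describes.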
@@ -59,7 +59,7 @@ on an Linux NFS server, do the following:
sysctl -w fs.leases-enable=0
```
 
2. Restart the NFS server process. For example, on CentOS run `service nfs restart`.
1. Restart the NFS server process. For example, on CentOS run `service nfs restart`.
 
## Avoid using AWS's Elastic File System (EFS)
 
@@ -87,12 +87,12 @@ this configuration.
Additionally, this configuration is specifically warned against in the
[Postgres Documentation](https://www.postgresql.org/docs/current/static/creating-cluster.html#CREATING-CLUSTER-NFS):
 
>PostgreSQL does nothing special for NFS file systems, meaning it assumes NFS behaves exactly like
>locally-connected drives. If the client or server NFS implementation does not provide standard file
>system semantics, this can cause reliability problems. Specifically, delayed (asynchronous) writes
>PostgreSQL does nothing special for NFS file systems, meaning it assumes NFS behaves exactly like
>locally-connected drives. If the client or server NFS implementation does not provide standard file
>system semantics, this can cause reliability problems. Specifically, delayed (asynchronous) writes
>to the NFS server can cause data corruption problems.
 
For supported database architecture, please see our documentation on
For supported database architecture, please see our documentation on
[Configuring a Database for GitLab HA](https://docs.gitlab.com/ee/administration/high_availability/database.html).
 
## NFS Client mount options
@@ -665,7 +665,7 @@ cache, queues, and shared_state. To make this work with Sentinel:
**Note**: Redis URLs should be in the format: `redis://:PASSWORD@SENTINEL_MASTER_NAME`
 
1. PASSWORD is the plaintext password for the Redis instance
2. SENTINEL_MASTER_NAME is the Sentinel master name (e.g. `gitlab-redis-cache`)
1. SENTINEL_MASTER_NAME is the Sentinel master name (e.g. `gitlab-redis-cache`)
1. Include an array of hashes with host/port combinations, such as the following:
 
```ruby
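# NOTE: the original example is truncated in this view. The lines below are an
# illustrative sketch only (hosts, ports, and the password are placeholders;
# verify the exact gitlab.rb keys against your GitLab version).
gitlab_rails['redis_cache_instance'] = "redis://:PASSWORD@gitlab-redis-cache"
gitlab_rails['redis_cache_sentinels'] = [
  { host: '10.0.0.1', port: 26379 },
  { host: '10.0.0.2', port: 26379 },
  { host: '10.0.0.3', port: 26379 }
]
```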
@@ -29,9 +29,9 @@ Each line contains a JSON line that can be ingested by Elasticsearch, Splunk, et
In this example, you can see this was a GET request for a specific issue. Notice each line also contains performance data:
 
1. `duration`: the total time taken to retrieve the request
2. `view`: total time taken inside the Rails views
3. `db`: total time to retrieve data from the database
4. `gitaly_calls`: total number of calls made to Gitaly
1. `view`: total time taken inside the Rails views
1. `db`: total time to retrieve data from the database
1. `gitaly_calls`: total number of calls made to Gitaly
 
User clone/fetch activity using http transport appears in this log as `action: git_upload_pack`.
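
For illustration only, a hedged Ruby sketch that surfaces those four fields from the log, assuming the usual Omnibus path `/var/log/gitlab/gitlab-rails/production_json.log` (adjust for installations from source):

```ruby
require 'json'

# Print the ten slowest requests recorded in production_json.log.
log_path = '/var/log/gitlab/gitlab-rails/production_json.log'
events = File.foreach(log_path).map { |line| JSON.parse(line) }

events.sort_by { |e| -e['duration'].to_f }.first(10).each do |e|
  puts "duration=#{e['duration']} view=#{e['view']} db=#{e['db']} gitaly_calls=#{e['gitaly_calls']}"
end
```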
 
@@ -119,7 +119,7 @@ This file lives in `/var/log/gitlab/gitlab-rails/integrations_json.log` for
Omnibus GitLab packages or in `/home/git/gitlab/log/integrations_json.log` for
installations from source.
 
It contains information about [integrations](../user/project/integrations/project_services.md) activities such as JIRA, Asana and Irker services. It uses JSON format like the example below:
It contains information about [integrations](../user/project/integrations/project_services.md) activities such as JIRA, Asana and Irker services. It uses JSON format like the example below:
 
``` json
{"severity":"ERROR","time":"2018-09-06T14:56:20.439Z","service_class":"JiraService","project_id":8,"project_path":"h5bp/html5-boilerplate","message":"Error sending message","client_url":"http://jira.gitlap.com:8080","error":"execution expired"}
@@ -257,8 +257,8 @@ importer. Future importers may use this file.
 
## Reconfigure Logs
 
Reconfigure log files live in `/var/log/gitlab/reconfigure` for Omnibus GitLab
packages. Installations from source don't have reconfigure logs. A reconfigure log
Reconfigure log files live in `/var/log/gitlab/reconfigure` for Omnibus GitLab
packages. Installations from source don't have reconfigure logs. A reconfigure log
is populated whenever `gitlab-ctl reconfigure` is run manually or as part of an upgrade.
 
Reconfigure logs files are named according to the UNIX timestamp of when the reconfigure
@@ -95,10 +95,10 @@ UDP can be done using the following settings:
This does the following:
 
1. Enable UDP and bind it to port 8089 for all addresses.
2. Store any data received in the "gitlab" database.
3. Define a batch of points to be 1000 points in size and allow a maximum of
1. Store any data received in the "gitlab" database.
1. Define a batch of points to be 1000 points in size and allow a maximum of
5 batches _or_ flush them automatically after 1 second.
4. Define a UDP read buffer size of 200 MB.
1. Define a UDP read buffer size of 200 MB.
 
One of the most important settings here is the UDP read buffer size as if this
value is set too low, packets will be dropped. You must also make sure the OS
@@ -5,7 +5,7 @@ NOTE: **Note:** This document describes a drop-in replacement for the
using [ssh certificates](ssh_certificates.md), they are even faster,
but are not a drop-in replacement.
 
> [Introduced](https://gitlab.com/gitlab-org/gitlab-ee/issues/1631) in
> [Introduced](https://gitlab.com/gitlab-org/gitlab-ee/issues/1631) in
> [GitLab Starter](https://about.gitlab.com/gitlab-ee) 9.3.
>
> [Available in](https://gitlab.com/gitlab-org/gitlab-ee/issues/3953) GitLab
@@ -109,7 +109,7 @@ the database. The following instructions can be used to build OpenSSH 7.5:
yum install rpm-build gcc make wget openssl-devel krb5-devel pam-devel libX11-devel xmkmf libXt-devel
```
 
3. Prepare the build by copying files to the right place:
1. Prepare the build by copying files to the right place:
 
```
mkdir -p /root/rpmbuild/{SOURCES,SPECS}
@@ -118,7 +118,7 @@ the database. The following instructions can be used to build OpenSSH 7.5:
cd /root/rpmbuild/SPECS
```
 
3. Next, set the spec settings properly:
1. Next, set the spec settings properly:
 
```
sed -i -e "s/%define no_gnome_askpass 0/%define no_gnome_askpass 1/g" openssh.spec
@@ -126,13 +126,13 @@ the database. The following instructions can be used to build OpenSSH 7.5:
sed -i -e "s/BuildPreReq/BuildRequires/g" openssh.spec
```
 
3. Build the RPMs:
1. Build the RPMs:
 
```
rpmbuild -bb openssh.spec
```
 
4. Ensure the RPMs were built:
1. Ensure the RPMs were built:
 
```
ls -al /root/rpmbuild/RPMS/x86_64/
@@ -150,7 +150,7 @@ the database. The following instructions can be used to build OpenSSH 7.5:
-rw-r--r--. 1 root root 367516 Jun 20 19:37 openssh-server-7.5p1-1.x86_64.rpm
```
 
5. Install the packages. OpenSSH packages will replace `/etc/pam.d/sshd`
1. Install the packages. OpenSSH packages will replace `/etc/pam.d/sshd`
with its own version, which may prevent users from logging in, so be sure
that the file is backed up and restored after installation:
 
@@ -161,7 +161,7 @@ the database. The following instructions can be used to build OpenSSH 7.5:
yes | cp pam-ssh-conf-$timestamp /etc/pam.d/sshd
```
 
6. Verify the installed version. In another window, attempt to login to the server:
1. Verify the installed version. In another window, attempt to login to the server:
 
```
ssh -v <your-centos-machine>
@@ -171,7 +171,7 @@ the database. The following instructions can be used to build OpenSSH 7.5:
 
If not, you may need to restart sshd (e.g. `systemctl restart sshd.service`).
 
7. *IMPORTANT!* Open a new SSH session to your server before exiting to make
1. *IMPORTANT!* Open a new SSH session to your server before exiting to make
sure everything is working! If you need to downgrade, simple install the
older package:
 
@@ -8,7 +8,7 @@ The instructions make the assumption that you will be using the email address `i
## Configure your server firewall
 
1. Open up port 25 on your server so that people can send email into the server over SMTP.
2. If the mail server is different from the server running GitLab, open up port 143 on your server so that GitLab can read email from the server over IMAP.
1. If the mail server is different from the server running GitLab, open up port 143 on your server so that GitLab can read email from the server over IMAP.
 
## Install packages
 
@@ -20,7 +20,7 @@ an SMTP server, but you're not seeing mail delivered. Here's how to check the se
bundle exec rails console production
```
 
2. Look at the ActionMailer `delivery_method` to make sure it matches what you
1. Look at the ActionMailer `delivery_method` to make sure it matches what you
intended. If you configured SMTP, it should say `:smtp`. If you're using
Sendmail, it should say `:sendmail`:
 
@@ -29,7 +29,7 @@ an SMTP server, but you're not seeing mail delivered. Here's how to check the se
=> :smtp
```
 
3. If you're using SMTP, check the mail settings:
1. If you're using SMTP, check the mail settings:
 
```ruby
irb(main):002:0> ActionMailer::Base.smtp_settings
@@ -39,7 +39,7 @@ an SMTP server, but you're not seeing mail delivered. Here's how to check the se
In the example above, the SMTP server is configured for the local machine. If this is intended, you may need to check your local mail
logs (e.g. `/var/log/mail.log`) for more details.
 
4. Send a test message via the console.
1. Send a test message via the console.
 
```ruby
irb(main):003:0> Notify.test_email('youremail@email.com', 'Hello World', 'This is a test message').deliver_now
@@ -23,9 +23,9 @@ You need to have the Google Cloud SDK installed. e.g.
On OSX, install [homebrew](https://brew.sh):
 
1. Install Brew Caskroom: `brew install caskroom/cask/brew-cask`
2. Install Google Cloud SDK: `brew cask install google-cloud-sdk`
3. Add `kubectl`: `gcloud components install kubectl`
4. Log in: `gcloud auth login`
1. Install Google Cloud SDK: `brew cask install google-cloud-sdk`
1. Add `kubectl`: `gcloud components install kubectl`
1. Log in: `gcloud auth login`
 
Now go back to the Google interface, find your cluster, and follow the instructions under `Connect to the cluster` and open the Kubernetes Dashboard. It will look something like `gcloud container clusters get-credentials ruby-autodeploy \ --zone europe-west2-c --project api-project-XXXXXXX` and then `kubectl proxy`.
 
@@ -46,18 +46,18 @@ GitLab Runner then executes job scripts as the `gitlab-runner` user.
--description "My Runner"
```
 
2. Install Docker Engine on server.
1. Install Docker Engine on server.
 
For more information how to install Docker Engine on different systems
checkout the [Supported installations](https://docs.docker.com/engine/installation/).
 
3. Add `gitlab-runner` user to `docker` group:
1. Add `gitlab-runner` user to `docker` group:
 
```bash
sudo usermod -aG docker gitlab-runner
```
 
4. Verify that `gitlab-runner` has access to Docker:
1. Verify that `gitlab-runner` has access to Docker:
 
```bash
sudo -u gitlab-runner -H docker info
@@ -75,7 +75,7 @@ GitLab Runner then executes job scripts as the `gitlab-runner` user.
- docker run my-docker-image /script/to/run/tests
```
 
5. You can now use `docker` command and install `docker-compose` if needed.
1. You can now use `docker` command and install `docker-compose` if needed.
 
NOTE: **Note:**
By adding `gitlab-runner` to the `docker` group you are effectively granting `gitlab-runner` full root permissions.
@@ -33,9 +33,9 @@ before_script:
In this particular case, the `npm deploy` script is a Gulp script that does the following:
 
1. Compile CSS & JS
2. Create sprites
3. Copy various assets (images, fonts) around
4. Replace some strings
1. Create sprites
1. Copy various assets (images, fonts) around
1. Replace some strings
 
All these operations will put all files into a `build` folder, which is ready to be deployed to a live server.
 
@@ -62,10 +62,10 @@ before_script:
 
In order, this means that:
 
1. We check if the `ssh-agent` is available and we install it if it's not;
2. We create the `~/.ssh` folder;
3. We make sure we're running bash;
4. We disable host checking (we don't ask for user accept when we first connect to a server; and since every job will equal a first connect, we kind of need this)
1. We check if the `ssh-agent` is available and we install it if it's not.
1. We create the `~/.ssh` folder.
1. We make sure we're running bash.
1. We disable host checking (we don't ask for user accept when we first connect to a server and since every job will equal a first connect, we kind of need this).
 
And this is basically all you need in the `before_script` section.
 
@@ -91,11 +91,11 @@ stage_deploy:
Here's the breakdown:
 
1. `only:dev` means that this build will run only when something is pushed to the `dev` branch. You can remove this block completely and have everything be ran on every push (but probably this is something you don't want)
2. `ssh-add ...` we will add that private key you added on the web UI to the docker container
3. We will connect via `ssh` and create a new `_tmp` folder
4. We will connect via `scp` and upload the `build` folder (which was generated by a `npm` script) to our previously created `_tmp` folder
5. We will connect again to `ssh` and move the `live` folder to an `_old` folder, then move `_tmp` to `live`.
6. We connect to ssh and remove the `_old` folder
1. `ssh-add ...` we will add that private key you added on the web UI to the docker container
1. We will connect via `ssh` and create a new `_tmp` folder
1. We will connect via `scp` and upload the `build` folder (which was generated by a `npm` script) to our previously created `_tmp` folder
1. We will connect again to `ssh` and move the `live` folder to an `_old` folder, then move `_tmp` to `live`.
1. We connect to ssh and remove the `_old` folder
 
What's the deal with the artifacts? We just tell GitLab CI to keep the `build` directory (later on, you can download that as needed).
 
@@ -40,15 +40,17 @@ production:
```
 
This project has three jobs:
1. `test` - used to test Django application,
2. `staging` - used to automatically deploy staging environment every push to `master` branch
3. `production` - used to automatically deploy production environment for every created tag
- `test` - used to test Django application,
- `staging` - used to automatically deploy staging environment every push to `master` branch
- `production` - used to automatically deploy production environment for every created tag
 
## Store API keys
 
You'll need to create two variables in `Settings > CI/CD > Variables` on your GitLab project settings:
1. `HEROKU_STAGING_API_KEY` - Heroku API key used to deploy staging app,
2. `HEROKU_PRODUCTION_API_KEY` - Heroku API key used to deploy production app.
- `HEROKU_STAGING_API_KEY` - Heroku API key used to deploy staging app.
- `HEROKU_PRODUCTION_API_KEY` - Heroku API key used to deploy production app.
 
Find your Heroku API key in [Manage Account](https://dashboard.heroku.com/account).
 
@@ -36,16 +36,17 @@ production:
```
 
This project has three jobs:
1. `test` - used to test Rails application,
2. `staging` - used to automatically deploy staging environment every push to `master` branch
3. `production` - used to automatically deploy production environment for every created tag
- `test` - used to test Rails application.
- `staging` - used to automatically deploy staging environment every push to `master` branch.
- `production` - used to automatically deploy production environment for every created tag.
 
## Store API keys
 
You'll need to create two variables in your project's **Settings > CI/CD > Variables**:
 
1. `HEROKU_STAGING_API_KEY` - Heroku API key used to deploy staging app,
2. `HEROKU_PRODUCTION_API_KEY` - Heroku API key used to deploy production app.
- `HEROKU_STAGING_API_KEY` - Heroku API key used to deploy staging app.
- `HEROKU_PRODUCTION_API_KEY` - Heroku API key used to deploy production app.
 
Find your Heroku API key in [Manage Account](https://dashboard.heroku.com/account).
 
@@ -168,7 +168,7 @@ can be found at <https://docs.gitlab.com/runner/>.
In order to have a functional Runner you need to follow two steps:
 
1. [Install it][runner-install]
2. [Configure it](../runners/README.md#registering-a-specific-runner)
1. [Configure it](../runners/README.md#registering-a-specific-runner)
 
Follow the links above to set up your own Runner or use a Shared Runner as
described in the next section.
@@ -138,9 +138,9 @@ project without requiring your authorization, so use it with caution.
An admin can enable/disable a specific Runner for projects:
 
1. Navigate to **Admin > Runners**
2. Find the Runner you wish to enable/disable
3. Click edit on the Runner
4. Click **Enable** or **Disable** on the project
1. Find the Runner you wish to enable/disable
1. Click edit on the Runner
1. Click **Enable** or **Disable** on the project
 
## Protected Runners
 
@@ -28,9 +28,9 @@ to filter data by. Instead one should ask themselves the following questions:
 
1. Can I write my query in such a way that it re-uses as many existing indexes
as possible?
2. Is the data going to be large enough that using an index will actually be
1. Is the data going to be large enough that using an index will actually be
faster than just iterating over the rows in the table?
3. Is the overhead of maintaining the index worth the reduction in query
1. Is the overhead of maintaining the index worth the reduction in query
timings?
 
We'll explore every question in detail below.
@@ -62,7 +62,7 @@ In short:
 
1. Try to write your query in such a way that it re-uses as many existing
indexes as possible.
2. Run the query using `EXPLAIN ANALYZE` and study the output to find the most
1. Run the query using `EXPLAIN ANALYZE` and study the output to find the most
ideal query.
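
As a hedged illustration of that second point (run in a Rails console; `Issue` and the filter below are stand-ins, not part of the source text), ActiveRecord can print the planner's output for a scope, and raw SQL can be used when `EXPLAIN ANALYZE` specifically is needed:

```ruby
# Illustrative only; the model and conditions are placeholders.
puts Issue.where(project_id: 1, state: 'opened').explain

# EXPLAIN ANALYZE actually executes the query, so run it against throwaway data.
result = ActiveRecord::Base.connection.execute(
  "EXPLAIN ANALYZE SELECT * FROM issues WHERE project_id = 1 AND state = 'opened'"
)
result.each { |row| puts row['QUERY PLAN'] }
```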
 
## Data Size
@@ -4,12 +4,13 @@ While developing a new feature or modifying an existing one, it is helpful if an
installable package (or a docker image) containing those changes is available
for testing. For this very purpose, a manual job is provided in the GitLab CI/CD
pipeline that can be used to trigger a pipeline in the omnibus-gitlab repository
that will create
1. A deb package for Ubuntu 16.04, available as a build artifact, and
2. A docker image, which is pushed to [Omnibus GitLab's container
registry](https://gitlab.com/gitlab-org/omnibus-gitlab/container_registry)
(images titled `gitlab-ce` and `gitlab-ee` respectively and image tag is the
commit which triggered the pipeline).
that will create:
- A deb package for Ubuntu 16.04, available as a build artifact, and
- A docker image, which is pushed to [Omnibus GitLab's container
registry](https://gitlab.com/gitlab-org/omnibus-gitlab/container_registry)
(images titled `gitlab-ce` and `gitlab-ee` respectively and image tag is the
commit which triggered the pipeline).
 
When you push a commit to either the gitlab-ce or gitlab-ee project, the
pipeline for that commit will have a `build-package` manual action you can
@@ -9,11 +9,11 @@ code is effective, understandable, and maintainable.
 
## Getting your merge request reviewed, approved, and merged
 
You are strongly encouraged to get your code **reviewed** by a
You are strongly encouraged to get your code **reviewed** by a
[reviewer](https://about.gitlab.com/handbook/engineering/#reviewer) as soon as
there is any code to review, to get a second opinion on the chosen solution and
implementation, and an extra pair of eyes looking for bugs, logic problems, or
uncovered edge cases. The reviewer can be from a different team, but it is
uncovered edge cases. The reviewer can be from a different team, but it is
recommended to pick someone who knows the domain well. You can read more about the
importance of involving reviewer(s) in the section on the responsibility of the author below.
 
@@ -23,7 +23,7 @@ one of the [Merge request coaches][team].
Depending on the areas your merge request touches, it must be **approved** by one
or more [maintainers](https://about.gitlab.com/handbook/engineering/#maintainer):
 
For approvals, we use the approval functionality found in the merge request
For approvals, we use the approval functionality found in the merge request
widget. Reviewers can add their approval by [approving additionally](https://docs.gitlab.com/ee/user/project/merge_requests/merge_request_approvals.html#adding-or-removing-an-approval).
 
1. If your merge request includes backend changes [^1], it must be
@@ -42,43 +42,43 @@ widget. Reviewers can add their approval by [approving additionally](https://doc
Getting your merge request **merged** also requires a maintainer. If it requires
more than one approval, the last maintainer to review and approve it will also merge it.
 
As described in the section on the responsibility of the maintainer below, you
are recommended to get your merge request approved and merged by maintainer(s)
As described in the section on the responsibility of the maintainer below, you
are recommended to get your merge request approved and merged by maintainer(s)
from other teams than your own.
 
### The responsibility of the merge request author
 
The responsibility to find the best solution and implement it lies with the
merge request author.
merge request author.
 
Before assigning a merge request to a maintainer for approval and merge, they
should be confident that it actually solves the problem it was meant to solve,
that it does so in the most appropriate way, that it satisfies all requirements,
and that there are no remaining bugs, logical problems, or uncovered edge cases.
The merge request should also have a completed task list in its description and
Before assigning a merge request to a maintainer for approval and merge, they
should be confident that it actually solves the problem it was meant to solve,
that it does so in the most appropriate way, that it satisfies all requirements,
and that there are no remaining bugs, logical problems, or uncovered edge cases.
The merge request should also have a completed task list in its description and
a passing CI pipeline to avoid unnecessary back and forth.
 
To reach the required level of confidence in their solution, an author is expected
to involve other people in the investigation and implementation processes as
to involve other people in the investigation and implementation processes as
appropriate.
 
They are encouraged to reach out to domain experts to discuss different solutions
or get an implementation reviewed, to product managers and UX designers to clear
up confusion or verify that the end result matches what they had in mind, to
They are encouraged to reach out to domain experts to discuss different solutions
or get an implementation reviewed, to product managers and UX designers to clear
up confusion or verify that the end result matches what they had in mind, to
database specialists to get input on the data model or specific queries, or to
any other developer to get an in-depth review of the solution.
 
If an author is unsure if a merge request needs a domain expert's opinion, that's
usually a pretty good sign that it does, since without it the required level of
usually a pretty good sign that it does, since without it the required level of
confidence in their solution will not have been reached.
 
### The responsibility of the maintainer
 
Maintainers are responsible for the overall health, quality, and consistency of
the GitLab codebase, across domains and product areas.
the GitLab codebase, across domains and product areas.
 
Consequently, their reviews will focus primarily on things like overall
architecture, code organization, separation of concerns, tests, DRYness,
Consequently, their reviews will focus primarily on things like overall
architecture, code organization, separation of concerns, tests, DRYness,
consistency, and readability.
 
Since a maintainer's job only depends on their knowledge of the overall GitLab
@@ -87,12 +87,12 @@ merge requests from any team and in any product area.
 
In fact, authors are recommended to get their merge requests merged by maintainers
from other teams than their own, to ensure that all code across GitLab is consistent
and can be easily understood by all contributors, from both inside and outside the
and can be easily understood by all contributors, from both inside and outside the
company, without requiring team-specific expertise.
 
Maintainers will do their best to also review the specifics of the chosen solution
before merging, but as they are not necessarily domain experts, they may be poorly
placed to do so without an unreasonable investment of time. In those cases, they
placed to do so without an unreasonable investment of time. In those cases, they
will defer to the judgment of the author and earlier reviewers and involved domain
experts, in favor of focusing on their primary responsibilities.
 
@@ -100,7 +100,7 @@ If a developer who happens to also be a maintainer was involved in a merge reque
as a domain expert and/or reviewer, it is recommended that they are not also picked
as the maintainer to ultimately approve and merge it.
 
Maintainers should check before merging if the merge request is approved by the
Maintainers should check before merging if the merge request is approved by the
required approvers.
 
## Best practices
@@ -230,41 +230,41 @@ Enterprise Edition instance. This has some implications:
1. **Query changes** should be tested to ensure that they don't result in worse
performance at the scale of GitLab.com:
1. Generating large quantities of data locally can help.
2. Asking for query plans from GitLab.com is the most reliable way to validate
1. Asking for query plans from GitLab.com is the most reliable way to validate
these.
2. **Database migrations** must be:
1. **Database migrations** must be:
1. Reversible.
2. Performant at the scale of GitLab.com - ask a maintainer to test the
1. Performant at the scale of GitLab.com - ask a maintainer to test the
migration on the staging environment if you aren't sure.
3. Categorised correctly:
1. Categorised correctly:
- Regular migrations run before the new code is running on the instance.
- [Post-deployment migrations](post_deployment_migrations.md) run _after_
the new code is deployed, when the instance is configured to do that.
- [Background migrations](background_migrations.md) run in Sidekiq, and
should only be done for migrations that would take an extreme amount of
time at GitLab.com scale.
3. **Sidekiq workers**
1. **Sidekiq workers**
[cannot change in a backwards-incompatible way](sidekiq_style_guide.md#removing-or-renaming-queues):
1. Sidekiq queues are not drained before a deploy happens, so there will be
workers in the queue from the previous version of GitLab.
2. If you need to change a method signature, try to do so across two releases,
1. If you need to change a method signature, try to do so across two releases,
and accept both the old and new arguments in the first of those.
3. Similarly, if you need to remove a worker, stop it from being scheduled in
1. Similarly, if you need to remove a worker, stop it from being scheduled in
one release, then remove it in the next. This will allow existing jobs to
execute.
4. Don't forget, not every instance will upgrade to every intermediate version
1. Don't forget, not every instance will upgrade to every intermediate version
(some people may go from X.1.0 to X.10.0, or even try bigger upgrades!), so
try to be liberal in accepting the old format if it is cheap to do so.
4. **Cached values** may persist across releases. If you are changing the type a
1. **Cached values** may persist across releases. If you are changing the type a
cached value returns (say, from a string or nil to an array), change the
cache key at the same time.
5. **Settings** should be added as a
1. **Settings** should be added as a
[last resort](https://about.gitlab.com/handbook/product/#convention-over-configuration).
If you're adding a new setting in `gitlab.yml`:
1. Try to avoid that, and add to `ApplicationSetting` instead.
2. Ensure that it is also
1. Ensure that it is also
[added to Omnibus](https://docs.gitlab.com/omnibus/settings/gitlab.yml.html#adding-a-new-setting-to-gitlab-yml).
6. **Filesystem access** can be slow, so try to avoid
1. **Filesystem access** can be slow, so try to avoid
[shared files](shared_files.md) when an alternative solution is available.
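
To make the migration bullets above concrete, here is a hedged sketch of a reversible migration that adds an index without long locks, assuming GitLab's `Gitlab::Database::MigrationHelpers` (the table, column, and Rails version are hypothetical):

```ruby
# Illustrative sketch; table, column, and Rails version are placeholders.
class AddIndexToIssuesOnClosedAt < ActiveRecord::Migration[5.2]
  include Gitlab::Database::MigrationHelpers

  DOWNTIME = false

  disable_ddl_transaction!

  def up
    add_concurrent_index :issues, :closed_at
  end

  def down
    remove_concurrent_index :issues, :closed_at
  end
end
```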
 
### Credits
@@ -17,20 +17,20 @@ The diffs fetching process _limits_ single file diff sizes and the overall size
then persisted on `merge_request_diff_files` table.
 
Even though diffs larger than 10% of the value of `ApplicationSettings#diff_max_patch_bytes` are collapsed,
we still keep them on Postgres. However, diff files larger than defined _safety limits_
we still keep them on Postgres. However, diff files larger than defined _safety limits_
(see the [Diff limits section](#diff-limits)) are _not_ persisted in the database.
 
In order to present diffs information on the Merge Request diffs page, we:
 
1. Fetch all diff files from database `merge_request_diff_files`
2. Fetch the _old_ and _new_ file blobs in batch to:
1. Highlight old and new file content
2. Know which viewer it should use for each file (text, image, deleted, etc)
3. Know if the file content changed
4. Know if it was stored externally
5. Know if it had storage errors
3. If the diff file is cacheable (text-based), it's cached on Redis
using `Gitlab::Diff::FileCollection::MergeRequestDiff`
1. Fetch the _old_ and _new_ file blobs in batch to:
- Highlight old and new file content
- Know which viewer it should use for each file (text, image, deleted, etc)
- Know if the file content changed
- Know if it was stored externally
- Know if it had storage errors
1. If the diff file is cacheable (text-based), it's cached on Redis
using `Gitlab::Diff::FileCollection::MergeRequestDiff`
 
### Note diffs
 
@@ -39,9 +39,9 @@ on `NoteDiffFile` (which is associated with the actual `DiffNote`). So instead
of hitting the repository every time we need the diff of the file, we:
 
1. Check whether we have the `NoteDiffFile#diff` persisted and use it
2. Otherwise, if it's a current MR revision, use the persisted
`MergeRequestDiffFile#diff`
3. In the last scenario, go the the repository and fetch the diff
1. Otherwise, if it's a current MR revision, use the persisted
`MergeRequestDiffFile#diff`
1. In the last scenario, go the repository and fetch the diff
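
As a hedged, purely illustrative sketch of that lookup order (the helper method names below are invented; only `NoteDiffFile#diff` and `MergeRequestDiffFile#diff` come from the text above):

```ruby
# Illustrative lookup order only; not the actual GitLab implementation.
def diff_for_note(note)
  note.note_diff_file&.diff ||                 # 1. persisted NoteDiffFile#diff
    current_revision_diff_file(note)&.diff ||  # 2. MergeRequestDiffFile#diff for a current MR revision
    fetch_diff_from_repository(note)           # 3. last resort: ask the repository
end
```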
 
## Diff limits
 
@@ -183,9 +183,11 @@ Don't use ID selectors in CSS.
```
 
### Variables
Before adding a new variable for a color or a size, guarantee:
1. There isn't already one
2. There isn't a similar one we can use instead.
- There isn't already one
- There isn't a similar one we can use instead.
 
## Linting
 