Commit e1282393 authored by Evan Read, committed by Achilleas Pipinellis

Add Markdown linting

Also adds one linting rule
and makes the project conform to it.
parent cf291a11
Showing with 142 additions and 114 deletions
@@ -66,6 +66,10 @@ docs lint:
- scripts/lint-changelog-yaml
- mv doc/ /tmp/gitlab-docs/content/$DOCS_GITLAB_REPO_SUFFIX
- cd /tmp/gitlab-docs
# Lint Markdown
# https://github.com/markdownlint/markdownlint/blob/master/docs/RULES.md
- bundle exec mdl content/$DOCS_GITLAB_REPO_SUFFIX/**/*.md --rules \
MD032
# Build HTML from Markdown
- bundle exec nanoc
# Check the internal links
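As an aside, `mdl` can also read its rule selection from a style file written in its Ruby DSL instead of the `--rules` flag; a minimal sketch (the filename is hypothetical) that would be equivalent to the command above:

```ruby
# .mdl_style.rb -- hypothetical style file enabling only MD032
# ("Lists should be surrounded by blank lines").
rule 'MD032'
```

It would then be invoked with `bundle exec mdl --style .mdl_style.rb content/$DOCS_GITLAB_REPO_SUFFIX/**/*.md`.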
@@ -144,20 +144,20 @@ for more details:
If you're having trouble, here are some tips:
 
1. Ensure `discovery` is set to `true`. Setting it to `false` requires
specifying all the URLs and keys required to make OpenID work.
 
1. Check your system clock to ensure the time is synchronized properly.
 
1. As mentioned in [the
documentation](https://github.com/m0n9oose/omniauth_openid_connect),
make sure `issuer` corresponds to the base URL of the Discovery URL. For
example, `https://accounts.google.com` is used for the URL
`https://accounts.google.com/.well-known/openid-configuration`.
 
1. The OpenID Connect client uses HTTP Basic Authentication to send the
OAuth2 access token. For example, if you are seeing 401 errors upon
retrieving the `userinfo` endpoint, you may want to check your OpenID
Web server configuration. For example, for
[oauth2-server-php](https://github.com/bshaffer/oauth2-server-php), you
may need to [add a configuration parameter to
Apache](https://github.com/bshaffer/oauth2-server-php/issues/926#issuecomment-387502778).
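To make the `discovery` and `issuer` points above concrete, here is a hedged sketch of an Omnibus `gitlab.rb` provider block; the client identifiers and URLs are placeholders, not a drop-in configuration:

```ruby
# Illustrative sketch only -- see the omniauth_openid_connect
# documentation for the full list of supported options.
gitlab_rails['omniauth_providers'] = [
  {
    'name' => 'openid_connect',
    'args' => {
      'name' => 'openid_connect',
      'scope' => %w[openid profile],
      'discovery' => true,
      # `issuer` must be the base URL of the Discovery URL, here
      # https://accounts.google.com/.well-known/openid-configuration
      'issuer' => 'https://accounts.google.com',
      'client_options' => {
        'identifier' => '<YOUR CLIENT ID>',
        'secret' => '<YOUR CLIENT SECRET>',
        'redirect_uri' => 'https://gitlab.example.com/users/auth/openid_connect/callback'
      }
    }
  }
]
```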
@@ -6,8 +6,8 @@ The requirements are listed [on the index page](index.md#requirements-for-runnin
 
## How does Geo know which projects to sync?
 
On each **secondary** node, there is a read-only replicated copy of the GitLab database.
A **secondary** node also has a tracking database where it stores which projects have been synced.
Geo compares the two databases to find projects that are not yet tracked.
 
At the start, this tracking database is empty, so Geo will start trying to update from every project that it can see in the GitLab database.
@@ -15,19 +15,19 @@ At the start, this tracking database is empty, so Geo will start trying to updat
For each project to sync:
 
1. Geo will issue a `git fetch geo --mirror` to get the latest information from the **primary** node.
If there are no changes, the sync will be fast and end quickly. Otherwise, it will pull the latest commits.
1. The **secondary** node will update the tracking database to store the fact that it has synced projects A, B, C, etc.
1. Repeat until all projects are synced.
 
When someone pushes a commit to the **primary** node, it generates an event in the GitLab database that the repository has changed.
The **secondary** node sees this event, marks the project in question as dirty, and schedules the project to be resynced.
 
To ensure that problems with pipelines (for example, syncs failing too many times or jobs being lost) don't permanently stop projects syncing, Geo also periodically checks the tracking database for projects that are marked as dirty. This check happens when
the number of concurrent syncs falls below `repos_max_capacity` and there are no new projects waiting to be synced.
 
Geo also has a checksum feature which runs a SHA256 sum across all the Git references to the SHA values.
If the refs don't match between the **primary** node and the **secondary** node, then the **secondary** node will mark that project as dirty and try to resync it.
So even if we have an outdated tracking database, the validation should activate and find discrepancies in the repository state and resync.
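Conceptually, the comparison is close to the following sketch; this is a deliberate simplification for illustration, not Geo's actual implementation:

```ruby
require 'digest'

# Hash every ref name together with the SHA it points to. If the digest
# computed on the primary differs from the one on the secondary, the
# project is marked as dirty and rescheduled for sync.
refs = `git for-each-ref --format='%(refname) %(objectname)'`
puts Digest::SHA256.hexdigest(refs)
```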
 
## Can I use Geo in a disaster recovery situation?
 
@@ -331,7 +331,7 @@ There are a few key points to remember:
 
1. The FDW settings are configured on the Geo **tracking** database.
1. The configured foreign server enables a login to the Geo
**secondary**, read-only database.
 
By default, the Geo secondary and tracking database are running on the
same host on different ports. That is, 5432 and 5431 respectively.
@@ -350,7 +350,7 @@ To check the configuration:
```
 
1. Check whether any tables are present. If everything is working, you
should see something like this:
 
```sql
gitlabhq_geo_production=# SELECT * from information_schema.foreign_tables;
@@ -83,7 +83,7 @@ deploy the bundled PostgreSQL.
plain text password. These will be necessary when configuring the GitLab
application servers later.
1. [Enable monitoring](#enable-monitoring)

Advanced configuration options are supported and can be added if
needed.
 
@@ -204,9 +204,9 @@ Few notes on the service itself:
 
- The service runs under a system account, by default `gitlab-consul`.
- If you are using a different username, you will have to specify it. We
will refer to it with `CONSUL_USERNAME`.
- There will be a database user created with read-only access to the repmgr
database.
- Passwords will be stored in the following locations:
- `/etc/gitlab/gitlab.rb`: hashed
- `/var/opt/gitlab/pgbouncer/pg_auth`: hashed
@@ -285,7 +285,7 @@ Example response:
 
### Scope: wiki_blobs **[STARTER]**
 
This scope is available only if [Elasticsearch](../integration/elasticsearch.md) is enabled.
 
```bash
curl --request GET --header "PRIVATE-TOKEN: <your_access_token>" https://gitlab.example.com/api/v4/search?scope=wiki_blobs&search=bye
@@ -346,6 +346,7 @@ Example response:
This scope is available only if [Elasticsearch](../integration/elasticsearch.md) is enabled.
 
Filters are available for this scope:

- filename
- path
- extension
@@ -679,6 +680,7 @@ Example response:
This scope is available only if [Elasticsearch](../integration/elasticsearch.md) is enabled.
 
Filters are available for this scope:

- filename
- path
- extension
@@ -489,6 +489,7 @@ it's provided as an environment variable. This is because GitLab Runner uses **
runtime.
 
### Using statically-defined credentials

As an example, let's assume that you want to use the `registry.example.com:5000/private/image:latest`
image, which is private and requires you to log in to a private container registry.
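As a hedged sketch of what that involves: Docker-style credentials are a base64-encoded `user:password` pair keyed by the registry host. The credentials below are placeholders:

```ruby
require 'base64'
require 'json'

# Build the JSON value that would be stored in DOCKER_AUTH_CONFIG.
auth = Base64.strict_encode64('my_username:my_password')
docker_auth_config = {
  'auths' => {
    'registry.example.com:5000' => { 'auth' => auth }
  }
}
puts JSON.pretty_generate(docker_auth_config)
```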
 
@@ -566,7 +567,6 @@ for the Runner to match the `DOCKER_AUTH_CONFIG`. For example, if
then the `DOCKER_AUTH_CONFIG` must also specify `registry.example.com:5000`.
Specifying only `registry.example.com` will not work.
 
### Using Credentials Store
 
> Support for using Credentials Store was added in GitLab Runner 9.5.
@@ -574,7 +574,7 @@ Specifying only `registry.example.com` will not work.
To configure the credentials store, follow these steps:
 
1. To use a credentials store, you need an external helper program to interact with a specific keychain or external store.
Make sure the helper program is available in GitLab Runner's `$PATH`.
 
1. Make GitLab Runner use it. There are two ways to accomplish this. Either:
- Create a
@@ -47,10 +47,10 @@ deploy:
In the above configuration:
 
- The `before_script` installs [SBT](http://www.scala-sbt.org/) and
displays the version that is being used.
- The `test` stage executes SBT to compile and test the project.
- [sbt-scoverage](https://github.com/scoverage/sbt-scoverage) is used as an SBT
plugin to measure test coverage.
- The `deploy` stage automatically deploys the project to Heroku using dpl.
 
You can use other versions of Scala and SBT by defining them in
@@ -339,7 +339,7 @@ Group-level variables can be added by:
 
1. Navigating to your group's **Settings > CI/CD** page.
1. Inputting variable types, keys, and values in the **Variables** section.
Any variables of [subgroups](../../user/group/subgroups/index.md) will be inherited recursively.
 
Once you set them, they will be available for all subsequent pipelines.
 
@@ -198,9 +198,9 @@ abilities as in the Rails app.
If the:
 
- Currently authenticated user fails the authorization, the authorized
resource will be returned as `null`.
- Resource is part of a collection, the collection will be filtered to
exclude the objects that the user's authorization checks failed against.
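For illustration, here is a hedged sketch of a field declaration that produces the first behaviour; the field name and permission are illustrative, not a prescribed API:

```ruby
# A nullable field guarded by a permission check: if the current user
# fails the :read_project check, the field resolves to null. For
# collections, unauthorized objects are filtered out instead.
field :project, Types::ProjectType,
      null: true,
      authorize: :read_project
```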
 
TIP: **Tip:**
Try to load only what the currently authenticated user is allowed to
@@ -496,4 +496,4 @@ it 'returns a successful response' do
expect(response).to have_gitlab_http_status(:success)
expect(graphql_mutation_response(:merge_request_set_wip)['errors']).to be_empty
end
```
@@ -126,16 +126,16 @@ When writing commit messages, please follow the guidelines below:
 
- The commit subject must contain at least 3 words.
- The commit subject should ideally contain up to 50 characters,
and must not be longer than 72 characters.
- The commit subject must start with a capital letter.
- The commit subject must not end with a period.
- The commit subject and body must be separated by a blank line.
- The commit body must not contain more than 72 characters per line.
- Commits that change 30 or more lines across at least 3 files must
describe these changes in the commit body.
- The commit subject or body must not contain Emojis.
- Use issues and merge requests' full URLs instead of short references,
as they are displayed as plain text outside of GitLab.
- The merge request must not contain more than 10 commit messages.
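For instance, a hypothetical message that satisfies these rules:

```
Add Markdown linting to the docs pipeline

Introduce mdl in the docs lint job and enable rule MD032 so that
lists are always surrounded by blank lines, then update the affected
documentation pages so they conform to the new rule.
```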
 
If the guidelines are not met, the MR will not pass the
@@ -76,10 +76,10 @@ After a given documentation path is aligned across CE and EE, all merge requests
affecting that path must be submitted to CE, regardless of the content it has.
This means that:
 
- For **EE-only docs changes**, you only have to submit a CE MR.
- For **EE-only features** that touch both the code and the docs, you have to submit
an EE MR containing all changes, and a CE MR containing only the docs changes
and without a changelog entry.
 
This might seem like a duplicate effort, but it's only for the short term.
A list of the already aligned docs can be found in
@@ -165,8 +165,8 @@ The table below shows what kind of documentation goes where.
`doc/topics/topic-name/subtopic-name/index.md` when subtopics become necessary.
General user- and admin-related documentation should be placed accordingly.
1. The directories `/workflow/`, `/university/`, and `/articles/` have
been **deprecated** and the majority of their docs have been moved to their correct location
in small iterations.
 
If you are unsure where a document or a content addition should live, this should
not stop you from authoring and contributing. You can use your best judgment and
@@ -909,11 +909,12 @@ import bundle from 'ee_else_ce/protected_branches/protected_branches_bundle.js';
See the frontend guide [performance section](fe_guide/performance.md) for
information on managing page-specific javascript within EE.
 
## Vue code in `assets/javascript`

### script tag
 
#### Child Component only used in EE

To separate Vue template differences we should [async import the components](https://vuejs.org/v2/guide/components-dynamic-async.html#Async-Components).
 
Doing this allows us to load the correct component in EE whilst in CE
@@ -937,10 +938,12 @@ export default {
```
 
#### For JS code that is EE only, like props, computed properties, methods, etc, we will keep the current approach
- Since we [can't async load a mixin](https://github.com/vuejs/vue-loader/issues/418#issuecomment-254032223) we will use the [`ee_else_ce`](../development/ee_features.md#javascript-code-in-assetsjavascripts) alias we already have for webpack.
- This means all the EE-specific props, computed properties, methods, etc. should be in a mixin in the `ee/` folder, and we need to create a CE counterpart of the mixin
 
##### Example:

```javascript
import mixin from 'ee_else_ce/path/mixin';
 
@@ -955,6 +958,7 @@ import mixin from 'ee_else_ce/path/mixin';
- You can see an MR with an example [here](https://gitlab.com/gitlab-org/gitlab-ee/merge_requests/9762)
 
#### `template` tag

- **EE Child components**
- Since we are using the async loading to check which component to load, we'd still use the component's name; check [this example](#child-component-only-used-in-ee).
 
@@ -962,11 +966,12 @@ import mixin from 'ee_else_ce/path/mixin';
- For the templates that have extra HTML in EE we should move it into a new component and use the `ee_else_ce` dynamic import
 
### Non Vue Files

For regular JS files, the approach is similar.
 
1. We will keep using the [`ee_else_ce`](../development/ee_features.md#javascript-code-in-assetsjavascripts) helper; this means that EE only code should be inside the `ee/` folder.
1. An EE file should be created with the EE only code, and it should extend the CE counterpart.
1. For code inside functions that can't be extended, the code should be moved into a new file and we should use `ee_else_ce` helper:
 
##### Example:
 
@@ -996,6 +1001,7 @@ to isolate such ruleset from rest of CE rules (along with adding comment describ
to avoid conflicts during CE to EE merge.
 
#### Bad

```scss
.section-body {
.section-title {
@@ -1011,6 +1017,7 @@ to avoid conflicts during CE to EE merge.
```
 
#### Good

```scss
.section-body {
.section-title {
@@ -64,20 +64,25 @@ All indexing after the initial one is done via `ElasticIndexerWorker` (sidekiq j
Search queries are generated by the concerns found in [ee/app/models/concerns/elastic](https://gitlab.com/gitlab-org/gitlab-ee/tree/master/ee/app/models/concerns/elastic). These concerns are also in charge of access control, and have been a historic source of security bugs so please pay close attention to them!
 
## Existing Analyzers/Tokenizers/Filters

These are all defined in https://gitlab.com/gitlab-org/gitlab-ee/blob/master/ee/lib/elasticsearch/git/model.rb
 
### Analyzers

#### `path_analyzer`

Used when indexing blobs' paths. Uses the `path_tokenizer` and the `lowercase` and `asciifolding` filters.
 
Please see the `path_tokenizer` explanation below for an example.
 
#### `sha_analyzer`

Used in blobs and commits. Uses the `sha_tokenizer` and the `lowercase` and `asciifolding` filters.
 
Please see the `sha_tokenizer` explanation below for an example.
 
#### `code_analyzer`

Used when indexing a blob's filename and content. Uses the `whitespace` tokenizer and the filters: `code`, `edgeNGram_filter`, `lowercase`, and `asciifolding`.
 
The `whitespace` tokenizer was selected in order to have more control over how tokens are split. For example, the string `Foo::bar(4)` needs to generate tokens like `Foo` and `bar(4)` in order to be properly searched.
@@ -85,15 +90,19 @@ The `whitespace` tokenizer was selected in order to have more control over how t
Please see the `code` filter for an explanation on how tokens are split.
 
#### `code_search_analyzer`

Not directly used for indexing, but rather used to transform a search input. Uses the `whitespace` tokenizer and the `lowercase` and `asciifolding` filters.
 
### Tokenizers

#### `sha_tokenizer`

This is a custom tokenizer that uses the [`edgeNGram` tokenizer](https://www.elastic.co/guide/en/elasticsearch/reference/5.5/analysis-edgengram-tokenizer.html) to allow SHAs to be searchable by any subset of it (minimum of 5 chars).
 
Example:
 
`240c29dc7e` becomes:

- `240c2`
- `240c29`
- `240c29d`
@@ -102,21 +111,26 @@ example:
- `240c29dc7e`
 
#### `path_tokenizer`

This is a custom tokenizer that uses the [`path_hierarchy` tokenizer](https://www.elastic.co/guide/en/elasticsearch/reference/5.5/analysis-pathhierarchy-tokenizer.html) with `reverse: true` in order to allow searches to find paths no matter how much or how little of the path is given as input.
 
Example:
 
`'/some/path/application.js'` becomes:

- `'/some/path/application.js'`
- `'some/path/application.js'`
- `'path/application.js'`
- `'application.js'`
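As a rough sketch of how these two custom tokenizers might be declared in the index settings (the gram sizes and token characters here are assumptions; see `model.rb` for the real definitions):

```ruby
# Illustrative Elasticsearch index settings for the custom tokenizers.
settings = {
  analysis: {
    tokenizer: {
      sha_tokenizer: {
        type: 'edgeNGram',
        min_gram: 5,                  # SHAs are searchable from 5 characters
        max_gram: 40,
        token_chars: %w[letter digit]
      },
      path_tokenizer: {
        type: 'path_hierarchy',
        reverse: true                 # emit suffixes so partial paths match
      }
    }
  }
}
```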
 
### Filters

#### `code`

Uses a [Pattern Capture token filter](https://www.elastic.co/guide/en/elasticsearch/reference/5.5/analysis-pattern-capture-tokenfilter.html) to split tokens into more easily searched versions of themselves.
 
Patterns:

- `"(\\p{Ll}+|\\p{Lu}\\p{Ll}+|\\p{Lu}+)"`: captures CamelCased and lowerCamelCased strings as separate tokens
- `"(\\d+)"`: extracts digits
- `"(?=([\\p{Lu}]+[\\p{L}]+))"`: captures CamelCased strings recursively. Ex: `ThisIsATest` => `[ThisIsATest, IsATest, ATest, Test]`
@@ -126,6 +140,7 @@ Patterns:
- `'\/?([^\/]+)(?=\/|\b)'`: separate path terms `like/this/one`
 
#### `edgeNGram_filter`

Uses an [Edge NGram token filter](https://www.elastic.co/guide/en/elasticsearch/reference/5.5/analysis-edgengram-tokenfilter.html) to allow inputs with only parts of a token to find the token. For example, it would turn `glasses` into permutations starting with `gl` and ending with `glasses`, which would allow a search for "`glass`" to find the original token `glasses`.
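A hedged sketch of such a filter definition, with assumed gram bounds:

```ruby
# Illustrative Edge NGram token filter: "glasses" would be expanded into
# "gl", "gla", ..., "glasses", so a query for "glass" still matches.
filter = {
  edgeNGram_filter: {
    type: 'edgeNGram',
    min_gram: 2,
    max_gram: 40
  }
}
```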
 
## Gotchas
@@ -140,13 +155,13 @@ Uses an [Edge NGram token filter](https://www.elastic.co/guide/en/elasticsearch/
You might get an error such as:
 
```
[2018-10-31T15:54:19,762][WARN ][o.e.c.r.a.DiskThresholdMonitor] [pval5Ct]
flood stage disk watermark [95%] exceeded on
[pval5Ct7SieH90t5MykM5w][pval5Ct][/usr/local/var/lib/elasticsearch/nodes/0] free: 56.2gb[3%],
all indices on this node will be marked read-only
```
 
This is because you've exceeded the disk space threshold - it thinks you don't have enough disk space left, based on the default 95% threshold.
 
In addition, the `read_only_allow_delete` setting will be set to `true`. It will block indexing, `forcemerge`, etc.
 
@@ -158,16 +173,16 @@ Add this to your `elasticsearch.yml` file:
 
```
# turn off the disk allocator
cluster.routing.allocation.disk.threshold_enabled: false
```
 
_or_
 
```
# set your own limits
cluster.routing.allocation.disk.threshold_enabled: true
cluster.routing.allocation.disk.watermark.flood_stage: 5gb # ES 6.x only
cluster.routing.allocation.disk.watermark.low: 15gb
cluster.routing.allocation.disk.watermark.high: 10gb
```
 
@@ -14,10 +14,10 @@ Geo handles replication for different components:
- [Database](#database-replication): includes the entire application, except cache and jobs.
- [Git repositories](#repository-replication): includes both projects and wikis.
- [Uploaded blobs](#uploads-replication): includes anything from images attached on issues
to raw logs and assets from CI.
 
With the exception of the Database replication, on a *secondary* node, everything is coordinated
by the [Geo Log Cursor](#geo-log-cursor).
 
### Geo Log Cursor daemon
 
@@ -31,8 +31,8 @@ picks the event up and schedules a `Geo::ProjectSyncWorker` job which will
use the `Geo::RepositorySyncService` and `Geo::WikiSyncService` classes
to update the repository and the wiki respectively.
 
The Geo Log Cursor daemon can operate in High Availability mode automatically.
The daemon will try to acquire a lock from time to time and once acquired, it
will behave as the *active* daemon.
 
Any additional running daemons on the same node will be in standby
@@ -164,20 +164,20 @@ The Git Push Proxy exists as a functionality built inside the `gitlab-shell` com
It is active on a **secondary** node only. It allows the user that has cloned a repository
from the secondary node to push to the same URL.
 
Git `push` requests directed to a **secondary** node will be sent over to the **primary** node,
while `pull` requests will continue to be served by the **secondary** node for maximum efficiency.
 
HTTPS and SSH requests are handled differently:
 
- With HTTPS, we will give the user a `HTTP 302 Redirect` pointing to the project on the **primary** node.
The git client is wise enough to understand that status code and process the redirection.
- With SSH, because there is no equivalent way to perform a redirect, we have to proxy the request.
This is done inside [`gitlab-shell`](https://gitlab.com/gitlab-org/gitlab-shell), by first translating the request
to the HTTP protocol, and then proxying it to the **primary** node.
 
The [`gitlab-shell`](https://gitlab.com/gitlab-org/gitlab-shell) daemon knows when to proxy based on the response
from `/api/v4/allowed`. A special `HTTP 300` status code is returned and we execute a "custom action",
specified in the response body. The response contains additional data that allows the proxied `push` operation
to happen on the **primary** node.
 
## Using the Tracking Database
@@ -229,17 +229,17 @@ named `gitlab_secondary`. This configuration exists within the database's user
context only. To access the `gitlab_secondary`, GitLab needs to use the
same database user that had previously been configured.
 
The Geo Tracking Database accesses the readonly database replica via FDW as a regular user,
limited by its own restrictions. The credentials are configured as a
`USER MAPPING` associated with the `SERVER` mapped previously
(`gitlab_secondary`).
 
FDW configuration and credentials definition are managed automatically by the
Omnibus GitLab `gitlab-ctl reconfigure` command.
 
#### Refreshing the Foreign Tables
 
Whenever a new Geo node is configured or the database schema changes on the
**primary** node, you must refresh the foreign tables on the **secondary** node
by running the following:
 
@@ -279,11 +279,11 @@ on the Tracking Database:
SELECT project_registry.*
FROM project_registry
JOIN gitlab_secondary.projects
ON (project_registry.project_id = gitlab_secondary.projects.id
AND gitlab_secondary.projects.archived IS FALSE)
```
 
At the ActiveRecord level, we have additional models that represent the
foreign tables. They must be mapped in a slightly different way, and they are read-only.
 
Check the existing FDW models in `ee/app/models/geo/fdw` for reference.
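A hedged sketch of what one of these read-only models might look like; this is simplified, and the real classes in `ee/app/models/geo/fdw` are the reference:

```ruby
# Sketch: an ActiveRecord model backed by a foreign table. The schema
# prefix points it at the FDW server rather than a local table.
module Geo
  module Fdw
    class Project < ApplicationRecord
      self.table_name = 'gitlab_secondary.projects'

      # Foreign tables are only ever read through FDW in this setup.
      def readonly?
        true
      end
    end
  end
end
```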
@@ -5,8 +5,8 @@ We devised a solution to solve common test automation problems such as the dread
Other problems that dynamic element validations solve are...
 
- When we perform an action with the mouse, we expect something to occur.
- When our test is navigating to (or from) a page, we ensure that we are on the page we expect before
the test continues.
 
## How it works
 
@@ -19,7 +19,7 @@ We interpret user actions on the page to have some sort of effect. These actions
 
When a page is navigated to, there are elements that will always appear on the page unconditionally.
 
Dynamic element validation is instituted when using
 
```ruby
Runtime::Browser.visit(:gitlab, Some::Page)
@@ -27,7 +27,7 @@ Runtime::Browser.visit(:gitlab, Some::Page)
 
### Clicks
 
When we perform a click within our tests, we expect something to occur. That something could be a component to now
appear on the webpage, or the test to navigate away from the page entirely.
 
Dynamic element validation is instituted when using
@@ -71,7 +71,7 @@ class MyPage < Page::Base
element :another_element, required: true
element :conditional_element
end

def open_layer
click_element :my_element, Layer::MyLayer
end
@@ -95,7 +95,7 @@ execute_stuff
```
 
will invoke GitLab QA to scan `MyPage` for `my_element` and `another_element` to be on the page before continuing to
`execute_stuff`.
 
### Clicking
 
@@ -82,7 +82,7 @@ module Page
end
 
# ...
end
end
end
```
@@ -134,7 +134,7 @@ for each element defined.
 
In our case, `qa-login-field`, `qa-password-field` and `qa-sign-in-button`.
 
**app/views/my/view.html.haml**
 
```haml
= f.text_field :login, class: "form-control top qa-login-field", autofocus: "autofocus", autocapitalize: "off", autocorrect: "off", required: true, title: "This field is required."
@@ -146,7 +146,7 @@ Things to note:
 
- The CSS class must be `kebab-cased` (separated with hyphens "`-`")
- If the element appears on the page unconditionally, add `required: true` to the element. See
[Dynamic element validation](dynamic_element_validation.md)
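Tying the Haml above back to its page object, a hedged sketch of the corresponding view block (module and element names are illustrative):

```ruby
# Sketch: each element maps to a `qa-` class in the view; `required: true`
# makes GitLab QA wait for that element when the page loads.
module Page
  module My
    class View < Page::Base
      view 'app/views/my/view.html.haml' do
        element :login_field, required: true
        element :password_field, required: true
        element :sign_in_button, required: true
      end
    end
  end
end
```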
 
## Running the test locally
 
@@ -25,22 +25,22 @@ and [Migrating from Jenkins to GitLab](https://www.youtube.com/watch?v=RlEVGOpYF
## Use cases
 
- Suppose you are new to GitLab, and want to keep using Jenkins until you prepare
your projects to build with [GitLab CI/CD](../ci/README.md). You set up the
integration between GitLab and Jenkins, then you migrate to GitLab CI later. While
you organize yourself and your team to onboard GitLab, you keep your pipelines
running with Jenkins, but view the results in your project's repository in GitLab.
- Your team uses [Jenkins Plugins](https://plugins.jenkins.io/) for other proceedings,
therefore, you opt to keep using Jenkins to build your apps. Show the results of your
pipelines directly in GitLab.
 
For a real use case, read the blog post [Continuous integration: From Jenkins to GitLab using Docker](https://about.gitlab.com/2017/07/27/docker-my-precious/).
 
## Requirements
 
- [Jenkins GitLab Plugin](https://wiki.jenkins.io/display/JENKINS/GitLab+Plugin)
- [Jenkins Git Plugin](https://wiki.jenkins.io/display/JENKINS/Git+Plugin)
- Git clone access for Jenkins from the GitLab repository
- GitLab API access to report build status
 
## Configure GitLab users
 
@@ -65,7 +65,7 @@ Go to Manage Jenkins -> Configure System and scroll down to the 'GitLab' section
Enter the GitLab server URL in the 'GitLab host URL' field and paste the API token
copied earlier in the 'API Token' field.
 
For more information, see GitLab Plugin documentation about
[Jenkins-to-GitLab authentication](https://github.com/jenkinsci/gitlab-plugin#jenkins-to-gitlab-authentication)
 
![Jenkins GitLab plugin configuration](img/jenkins_gitlab_plugin_config.png)
@@ -76,8 +76,8 @@ Follow the GitLab Plugin documentation about [Jenkins Job Configuration](https:/
 
NOTE: **Note:**
Be sure to include the steps about [Build status configuration](https://github.com/jenkinsci/gitlab-plugin#build-status-configuration).
The 'Publish build status to GitLab' post-build step is required to view
Jenkins build status in GitLab Merge Requests.
 
## Configure a GitLab project
 
@@ -114,21 +114,21 @@ and storing build status for Commits and Merge Requests.
All steps are implemented using AJAX requests on the merge request page.
 
1. In order to display the build status in a merge request you must create a project service in GitLab.
1. Your project service will do a (JSON) query to a URL of the CI tool with the SHA1 of the commit.
1. The project service builds this URL and payload based on project service settings and knowledge of the CI tool.
1. The response is parsed to give a response in GitLab (success/failed/pending).
 
## Troubleshooting
 
### Error in merge requests - "Could not connect to the CI server"
 
This integration relies on Jenkins reporting the build status back to GitLab via
the [Commit Status API](../api/commits.md#commit-status).
 
The error 'Could not connect to the CI server' usually means that GitLab did not
receive a build status update via the API. Either Jenkins was not properly
configured or there was an error reporting the status via the API.
 
1. [Configure the Jenkins server](#configure-the-jenkins-server) for GitLab API access
1. [Configure a Jenkins project](#configure-a-jenkins-project), including the
'Publish build status to GitLab' post-build action.
@@ -14,7 +14,7 @@ Learn how GitLab helps you in the stages of the DevOps lifecycle by learning mor
 
### Self-managed: Install GitLab
 
Take a look at [installing GitLab](https://about.gitlab.com/install/) and our [administrator documentation](../administration/index.md). Then, follow the instructions below under [Your subscription](#your-subscription) to apply your license file.
 
### GitLab.com: Create a user and group
 
@@ -74,11 +74,11 @@ Please note that you need to be a group owner to associate a group to your subsc
To see the status of your GitLab.com subscription, you can click on the Billings
section of the relevant namespace:
 
- For individuals, this is located at https://gitlab.com/profile/billings in your Settings.
- For groups, this is located under the group's Settings dropdown, under Billing.
 
For groups, you can see details of your subscription - including your current
plan - in the included table:
 
![Billing table](billing_table.png)
@@ -86,11 +86,11 @@ plan - in the included table:
| Field | Description |
| ------ | ------ |
| Seats in subscription | If this is a paid plan, this represents the number of seats you've paid to support in your group. |
| Seats currently in use | The number of active seats currently in use. |
| Max seats used | The highest number of seats you've used. If this exceeds the seats in subscription, you may owe an additional fee for the additional users. |
| Seats owed | If your max seats used exceeds the seats in your subscription, you'll owe an additional fee for the users you've added. |
| Subscription start date | The date your subscription started. If this is for a Free plan, this is the date you transitioned off your group's paid plan. |
| Subscription end date | The date your current subscription will end. This does not apply to Free plans. |
 
### Subscription changes and your data
 