Commit 00fa950a authored by GitLab Bot

Add latest changes from gitlab-org/gitlab@master

parent c36152ff
Showing with 94 additions and 43 deletions
8.24.0
8.25.0
@@ -5,6 +5,7 @@ class SearchController < ApplicationController
include SearchHelper
include RendersCommits
 
before_action :override_snippet_scope, only: :show
around_action :allow_gitaly_ref_name_caching
 
skip_before_action :authenticate_user!
@@ -103,4 +104,14 @@ class SearchController < ApplicationController
 
Gitlab::UsageDataCounters::SearchCounter.increment_navbar_searches_count
end
# Disallow web snippet_blobs search as we migrate snippet
# from database-backed storage to git repository-based,
# and searching across multiple git repositories is not feasible.
#
# TODO: after 13.0 refactor this into Search::SnippetService
# See https://gitlab.com/gitlab-org/gitlab/issues/208882
def override_snippet_scope
params[:scope] = 'snippet_titles' if params[:snippets] == 'true'
end
end
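Extracted from the controller, the override behaves as follows. This is a minimal sketch, using a plain Hash in place of `ActionController::Parameters`:

```ruby
# Sketch of the scope override added above: any snippet search is
# forced onto the titles-only scope, since snippet content search
# is being removed. Plain Hash stands in for request params.
def override_snippet_scope(params)
  params[:scope] = 'snippet_titles' if params[:snippets] == 'true'
  params
end

override_snippet_scope(snippets: 'true', scope: 'snippet_blobs')
# scope is forced to 'snippet_titles'
override_snippet_scope(snippets: 'false', scope: 'projects')
# non-snippet searches are left untouched
```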
@@ -24,7 +24,6 @@
= users
 
- elsif @show_snippets
= search_filter_link 'snippet_blobs', _("Snippet Contents"), search: { snippets: true, group_id: nil, project_id: nil }
= search_filter_link 'snippet_titles', _("Titles and Filenames"), search: { snippets: true, group_id: nil, project_id: nil }
- else
= search_filter_link 'projects', _("Projects"), data: { qa_selector: 'projects_tab' }
---
title: Remove and deprecate snippet content search
merge_request: 26359
author:
type: removed
---
title: Optimize Project related count service desk enabled
merge_request: 27115
author:
type: performance
---
title: Enable Workhorse upload acceleration for Project Import uploads via API
merge_request: 26914
author:
type: performance
# frozen_string_literal: true
class AddIndexOnIdAndServiceDeskEnabledToProjects < ActiveRecord::Migration[6.0]
include Gitlab::Database::MigrationHelpers
DOWNTIME = false
INDEX_NAME = 'index_projects_on_id_service_desk_enabled'
disable_ddl_transaction!
def up
add_concurrent_index :projects, :id, where: 'service_desk_enabled = true', name: INDEX_NAME
end
def down
remove_concurrent_index_by_name :projects, INDEX_NAME
end
end
@@ -10,7 +10,7 @@
#
# It's strongly recommended that you check this file into your version control system.
 
ActiveRecord::Schema.define(version: 2020_03_11_165635) do
ActiveRecord::Schema.define(version: 2020_03_12_163407) do
 
# These are extensions that must be enabled in order to support this database
enable_extension "pg_trgm"
@@ -3491,6 +3491,7 @@ ActiveRecord::Schema.define(version: 2020_03_11_165635) do
t.index ["id", "repository_storage", "last_repository_updated_at"], name: "idx_projects_on_repository_storage_last_repository_updated_at"
t.index ["id"], name: "index_on_id_partial_with_legacy_storage", where: "((storage_version < 2) OR (storage_version IS NULL))"
t.index ["id"], name: "index_projects_on_id_partial_for_visibility", unique: true, where: "(visibility_level = ANY (ARRAY[10, 20]))"
t.index ["id"], name: "index_projects_on_id_service_desk_enabled", where: "(service_desk_enabled = true)"
t.index ["id"], name: "index_projects_on_mirror_and_mirror_trigger_builds_both_true", where: "((mirror IS TRUE) AND (mirror_trigger_builds IS TRUE))"
t.index ["last_activity_at", "id"], name: "index_projects_api_last_activity_at_id_desc", order: { id: :desc }
t.index ["last_activity_at", "id"], name: "index_projects_api_vis20_last_activity_at", where: "(visibility_level = 20)"
@@ -163,17 +163,21 @@ Git operations in GitLab will result in an API error.
unicorn['enable'] = false
sidekiq['enable'] = false
gitlab_workhorse['enable'] = false
grafana['enable'] = false
# If you run a separate monitoring node you can disable these services
alertmanager['enable'] = false
prometheus['enable'] = false
# If you don't run a separate monitoring node you can
# Enable Prometheus access & disable these extra services
# This makes Prometheus listen on all interfaces. You must use firewalls to restrict access to this address/port.
# prometheus['listen_address'] = '0.0.0.0:9090'
# prometheus['monitor_kubernetes'] = false
 
# If you don't want to run monitoring services uncomment the following (not recommended)
# alertmanager['enable'] = false
# gitlab_exporter['enable'] = false
# grafana['enable'] = false
# node_exporter['enable'] = false
# prometheus['enable'] = false
# Enable prometheus monitoring - comment out if you disable monitoring services above.
# This makes Prometheus listen on all interfaces. You must use firewalls to restrict access to this address/port.
prometheus['listen_address'] = '0.0.0.0:9090'
 
# Prevent database connections during 'gitlab-ctl reconfigure'
gitlab_rails['rake_cache_clear'] = false
@@ -861,7 +865,7 @@ default level is `WARN`.
You can run a gRPC trace with:
 
```shell
GRPC_TRACE=all GRPC_VERBOSITY=DEBUG sudo gitlab-rake gitlab:gitaly:check
sudo GRPC_TRACE=all GRPC_VERBOSITY=DEBUG gitlab-rake gitlab:gitaly:check
```
 
### Observing `gitaly-ruby` traffic
@@ -255,6 +255,8 @@ Example response:
 
### Scope: snippet_blobs
 
This scope will be disabled after GitLab 13.0.

```shell
curl --header "PRIVATE-TOKEN: <your_access_token>" "https://gitlab.example.com/api/v4/search?scope=snippet_blobs&search=test"
```
@@ -6,7 +6,7 @@ NOTE: **Note:**
This API resource is renamed from Vulnerabilities to Vulnerability Findings because the Vulnerabilities are reserved
for serving the upcoming [Standalone Vulnerability objects](https://gitlab.com/gitlab-org/gitlab/issues/13561).
To fix any broken integrations with the former Vulnerabilities API, change the `vulnerabilities` URL part to be
`vulnerability_findings`.
`vulnerability_findings`.
 
Every API call to vulnerability findings must be [authenticated](README.md#authentication).
 
@@ -46,6 +46,9 @@ GET /projects/:id/vulnerability_findings?confidence=unknown,experimental
GET /projects/:id/vulnerability_findings?pipeline_id=42
```
 
CAUTION: **Deprecation:**
Beginning with GitLab 12.9, the `undefined` severity level is deprecated and the `undefined` confidence level isn't reported for new vulnerabilities.

| Attribute | Type | Required | Description |
| ------------- | -------------- | -------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| `id` | integer/string | yes | The ID or [URL-encoded path of the project](README.md#namespaced-path-encoding) which the authenticated user is a member of. |
@@ -81,7 +81,9 @@ There are some high level differences between the products worth mentioning:
container images to set up your build environment. For example, set up one pipeline that builds your build environment
itself and publish that to the container registry. Then, have your pipelines use this instead of each building their
own environment, which will be slower and may be less consistent. We have extensive docs on [how to use the Container Registry](../../user/packages/container_registry/index.md).
- Totally stuck and not sure where to turn for advice? The [GitLab community forum](https://forum.gitlab.com/) can be a great resource.
- A central utilities repository can be a great place to put assorted scheduled jobs
or other manual jobs that function like utilities. Jenkins installations tend to
have a few of these.
 
## Agents vs. Runners
 
@@ -9,7 +9,7 @@ importer and a parallel importer. The Rake task `import:github` uses the
sequential importer, while everything else uses the parallel importer. The
difference between these two importers is quite simple: the sequential importer
does all work in a single thread, making it more useful for debugging purposes
or Rake tasks. The parallel importer on the other hand uses Sidekiq.
or Rake tasks. The parallel importer, on the other hand, uses Sidekiq.
 
## Requirements
 
@@ -31,9 +31,9 @@ The importer's codebase is broken up into the following directories:
 
## Architecture overview
 
When a GitHub project is imported we schedule and execute a job for the
`RepositoryImportworker` worker as all other importers. However, unlike other
importers we don't immediately perform the work necessary. Instead work is
When a GitHub project is imported, we schedule and execute a job for the
`RepositoryImportWorker` worker as all other importers. However, unlike other
importers, we don't immediately perform the work necessary. Instead work is
divided into separate stages, with each stage consisting out of a set of Sidekiq
jobs that are executed. Between every stage a job is scheduled that periodically
checks if all work of the current stage is completed, advancing the import
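The staged flow described above can be sketched as a small state machine. The stage names and helper below are illustrative, not the actual worker classes:

```ruby
# Illustrative sketch of stage advancement: a stage only yields to the
# next one once the periodic check sees zero remaining jobs for it.
STAGES = %i[repository pull_requests issues_and_diff_notes notes finish].freeze

def advance(current_stage, remaining_jobs)
  return current_stage unless remaining_jobs.zero?

  idx = STAGES.index(current_stage)
  STAGES[idx + 1] || current_stage
end

advance(:repository, 3) # stays on :repository while jobs remain
advance(:repository, 0) # moves on to :pull_requests
```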
@@ -65,9 +65,9 @@ This worker will import all pull requests. For every pull request a job for the
 
### 5. Stage::ImportIssuesAndDiffNotesWorker
 
This worker will import all issues and pull request comments. For every issue we
This worker will import all issues and pull request comments. For every issue, we
schedule a job for the `Gitlab::GithubImport::ImportIssueWorker` worker. For
pull request comments we instead schedule jobs for the
pull request comments, we instead schedule jobs for the
`Gitlab::GithubImport::DiffNoteImporter` worker.
 
This worker processes both issues and diff notes in parallel so we don't need to
@@ -82,7 +82,7 @@ project.
### 6. Stage::ImportNotesWorker
 
This worker imports regular comments for both issues and pull requests. For
every comment we schedule a job for the
every comment, we schedule a job for the
`Gitlab::GithubImport::ImportNoteWorker` worker.
 
Regular comments have to be imported at the end since the GitHub API used
@@ -116,14 +116,14 @@ schedule the worker of the next stage.
 
To reduce the number of `AdvanceStageWorker` jobs scheduled this worker will
briefly wait for jobs to complete before deciding what the next action should
be. For small projects this may slow down the import process a bit, but it will
be. For small projects, this may slow down the import process a bit, but it will
also reduce pressure on the system as a whole.
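The "briefly wait, then decide" behaviour can be sketched as below. The method name, check count, and return symbols are illustrative, not the real `AdvanceStageWorker` API:

```ruby
# Sketch: poll the remaining-jobs count a few times before deciding
# whether to advance or to reschedule another check. In the real
# worker there is a sleep between checks; omitted here.
def wait_then_decide(remaining, max_checks: 3)
  max_checks.times do
    break if remaining.call.zero?
  end
  remaining.call.zero? ? :next_stage : :reschedule_self
end

counts = [2, 1, 0]
wait_then_decide(-> { counts.shift || 0 }) # jobs drain, so :next_stage
wait_then_decide(-> { 5 })                 # still busy, so :reschedule_self
```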
 
## Refreshing import JIDs
 
GitLab includes a worker called `StuckImportJobsWorker` that will periodically
run and mark project imports as failed if they have been running for more than
15 hours. For GitHub projects this poses a bit of a problem: importing large
15 hours. For GitHub projects, this poses a bit of a problem: importing large
projects could take several hours depending on how often we hit the GitHub rate
limit (more on this below), but we don't want `StuckImportJobsWorker` to mark
our import as failed because of this.
@@ -137,7 +137,7 @@ long we're still performing work.
 
## GitHub rate limit
 
GitHub has a rate limit of 5 000 API calls per hour. The number of requests
GitHub has a rate limit of 5,000 API calls per hour. The number of requests
necessary to import a project is largely dominated by the number of unique users
involved in a project (e.g. issue authors). Other data such as issue pages
and comments typically only requires a few dozen requests to import. This is
@@ -176,11 +176,11 @@ There are two types of lookups we cache:
in our GitLab database.
 
The expiration time of these keys is 24 hours. When retrieving the cache of a
positive lookups we refresh the TTL automatically. The TTL of false lookups is
positive lookup, we refresh the TTL automatically. The TTL of false lookups is
never refreshed.
 
Because of this caching layer it's possible newly registered GitLab accounts
won't be linked to their corresponding GitHub accounts. This however will sort
Because of this caching layer, it's possible newly registered GitLab accounts
won't be linked to their corresponding GitHub accounts. This, however, will sort
itself out once the cached keys expire.
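The refresh rule above (positive lookups get a sliding TTL, negative lookups expire for good) can be sketched with an in-memory cache. The real implementation uses Redis keys with a 24-hour expiry; the structure here is illustrative:

```ruby
# Illustrative cache honouring the TTL rule described above.
Entry = Struct.new(:value, :expires_at)
TTL = 24 * 60 * 60 # 24 hours, as in the importer

def fetch(cache, key, now: Time.now)
  entry = cache[key]
  return nil if entry.nil? || entry.expires_at <= now

  # Only positive (truthy) lookups get their TTL refreshed;
  # false lookups keep their original expiry.
  entry.expires_at = now + TTL if entry.value
  entry.value
end
```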
 
The user cache lookup is shared across projects. This means that the more
@@ -194,12 +194,12 @@ The code for this resides in:
## Mapping labels and milestones
 
To reduce pressure on the database we do not query it when setting labels and
milestones on issues and merge requests. Instead we cache this data when we
milestones on issues and merge requests. Instead, we cache this data when we
import labels and milestones, then we reuse this cache when assigning them to
issues/merge requests. Similar to the user lookups these cache keys are expired
automatically after 24 hours of not being used.
 
Unlike the user lookup caches these label and milestone caches are scoped to the
Unlike the user lookup caches, these label and milestone caches are scoped to the
project that is being imported.
 
The code for this resides in:
@@ -57,6 +57,7 @@ Libraries with the following licenses are acceptable for use:
- [Creative Commons Zero (CC0)][CC0]: A public domain dedication, recommended as a way to disclaim copyright on your work to the maximum extent possible.
- [Unlicense][UNLICENSE]: Another public domain dedication.
- [OWFa 1.0][OWFa1]: An open-source license and patent grant designed for specifications.
- [JSON License](https://www.json.org/license.html): Equivalent to the MIT license plus the statement, "The Software shall be used for Good, not Evil."
 
## Unacceptable Licenses
 
@@ -448,9 +448,12 @@ SOME_CONSTANT = 'bar'
 
You might want millions of project rows in your local database, for example,
in order to compare relative query performance, or to reproduce a bug. You could
do this by hand with SQL commands, but since you have ActiveRecord models, you
might find using these gems more convenient:
do this by hand with SQL commands or using [Mass Inserting Rails
Models](mass_insert.md) functionality.
 
Assuming you are working with ActiveRecord models, you might also find these links helpful:

- [Insert records in batches](insert_into_tables_in_batches.md)
- [BulkInsert gem](https://github.com/jamis/bulk_insert)
- [ActiveRecord::PgGenerateSeries gem](https://github.com/ryu39/active_record-pg_generate_series)
 
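The core idea behind those links can be shown with plain Ruby batching; the `insert_all` mention in the comment is an assumption about how it would map to ActiveRecord 6:

```ruby
# Sketch: group rows into batches so each batch becomes one INSERT
# statement instead of one statement per row.
rows = (1..10).map { |i| { name: "project-#{i}" } }

statements = rows.each_slice(4).map do |batch|
  # With ActiveRecord 6 this would roughly be Project.insert_all(batch)
  # (assumption for illustration; the linked gems differ in API).
  "INSERT ... #{batch.size} rows"
end

statements.size # 3 statements instead of 10
```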
doc/topics/web_application_firewall/img/guide_waf_ingress_installation.png

53.5 KiB

doc/topics/web_application_firewall/img/guide_waf_ingress_installation_v12_9.png

24.2 KiB

doc/topics/web_application_firewall/img/guide_waf_ingress_save_changes_v12_9.png

36.3 KiB

@@ -14,16 +14,6 @@ need to ensure your own [Runners are configured](../../ci/runners/README.md) and
**Note**: GitLab's Web Application Firewall is deployed with [Ingress](../../user/clusters/applications.md#Ingress),
so it will be available to your applications no matter how you deploy them to Kubernetes.
 
## Enable or disable ModSecurity
ModSecurity is enabled by default on GitLab.com. You can toggle the feature flag to false by running the following command in the Rails console:
```ruby
Feature.disable(:ingress_modsecurity)
```
Once disabled, you must uninstall and reinstall your Ingress application for the changes to take effect. See the [Feature Flag](../../user/project/operations/feature_flags.md) documentation for more information.
## Configuring your Google account
 
Before creating and connecting your Kubernetes cluster to your GitLab project,
@@ -112,10 +102,9 @@ Once it is installed, the other applications that rely on it will each have thei
 
For this guide, we need to install Ingress. Ingress provides load balancing,
SSL termination, and name-based virtual hosting, using NGINX behind
the scenes. Make sure that the **Enable Web Application Firewall** button is checked
before installing.
the scenes. Make sure to switch the toggle to the enabled position before installing.
 
![Cluster applications](./img/guide_waf_ingress_installation.png)
![Cluster applications](./img/guide_waf_ingress_installation_v12_9.png)
 
After Ingress is installed, wait a few seconds and copy the IP address that
is displayed in order to add in your base **Domain** at the top of the page. For
@@ -347,6 +347,9 @@ it highlighted:
}
```
 
CAUTION: **Deprecation:**
Beginning with GitLab 12.9, container scanning no longer reports `undefined` severity and confidence levels.

Here is the description of the report file structure nodes and their meaning. All fields are mandatory to be present in
the report JSON unless stated otherwise. Presence of optional fields depends on the underlying analyzers being used.
 