Commit 2fd92f2d authored by GitLab Bot

Add latest changes from gitlab-org/gitlab@master

parent 42ca24aa
Showing 364 additions and 42 deletions
@@ -79,27 +79,38 @@ subgraph "CNG-mirror pipeline"
**Additional notes:**
 
- If the `review-deploy` job keeps failing (note that we already retry it twice),
please post a message in the `#quality` channel and/or create a ~Quality ~bug
please post a message in the `#g_qe_engineering_productivity` channel and/or create a `~"Engineering Productivity"` `~"ep::review apps"` `~bug`
issue with a link to your merge request. Note that the deployment failure can
reveal an actual problem introduced in your merge request (i.e. this isn't
necessarily a transient failure)!
- If the `review-qa-smoke` job keep failing (note that we already retry it twice),
- If the `review-qa-smoke` job keeps failing (note that we already retry it twice),
please check the job's logs: you could discover an actual problem introduced in
your merge request. You can also download the artifacts to see screenshots of
the page at the time the failures occurred. If you don't find the cause of the
failure or if it seems unrelated to your change, please post a message in the
`#quality` channel and/or create a ~Quality ~bug issue with a link to your
merge request.
- The manual [`review-stop`][gitlab-ci-yml] in the `test` stage can be used to
- The manual `review-stop` can be used to
stop a Review App manually, and is also started by GitLab once a merge
request's branch is deleted after being merged.
- Review Apps are cleaned up regularly via a pipeline schedule that runs
the [`schedule:review-cleanup`][gitlab-ci-yml] job.
- The Kubernetes cluster is connected to the `gitlab-{ce,ee}` projects using
[GitLab's Kubernetes integration][gitlab-k8s-integration]. This gives you
a link to the Review App directly from the merge request widget.
 
### Auto-stopping of Review Apps

Review Apps are automatically stopped 2 days after the last deployment thanks to
the [Environment auto-stop](../../ci/environments.html#environments-auto-stop) feature.
If you need your Review App to stay up for a longer time, you can
[pin its environment](../../ci/environments.html#auto-stop-example).

The `review-cleanup` job that automatically runs in scheduled
pipelines (and can be run manually in merge request pipelines) stops stale Review Apps after 5 days,
deletes their environment after 6 days, and cleans up any dangling Helm releases
and Kubernetes resources after 7 days.
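
The staged thresholds can be pictured with a small Ruby sketch. The constant and method
names below are hypothetical; the real logic lives in [automated_cleanup.rb]:

```ruby
# Hypothetical sketch of the staged cleanup thresholds described above;
# not the actual automated_cleanup.rb implementation.
STOP_AFTER_DAYS   = 5 # stop stale Review Apps
DELETE_AFTER_DAYS = 6 # delete their environment
HELM_AFTER_DAYS   = 7 # clean up dangling Helm releases and Kubernetes resources

def cleanup_action_for(last_deployed_at, now: Time.now)
  age_in_days = (now - last_deployed_at) / (24 * 60 * 60)

  return :delete_helm_release if age_in_days > HELM_AFTER_DAYS
  return :delete_environment  if age_in_days > DELETE_AFTER_DAYS
  return :stop_environment    if age_in_days > STOP_AFTER_DAYS

  :keep
end
```
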
## QA runs
 
On every [pipeline][gitlab-pipeline] in the `qa` stage (which comes after the
@@ -206,7 +217,7 @@ aids in identifying load spikes on the cluster, and if nodes are problematic or
 
**Potential cause:**
 
That could be a sign that the [`schedule:review-cleanup`][gitlab-ci-yml] job is
That could be a sign that the `review-cleanup` job is
failing to cleanup stale Review Apps and Kubernetes resources.
 
**Where to look for further debugging:**
@@ -270,7 +281,7 @@ kubectl get cm --sort-by='{.metadata.creationTimestamp}' | grep 'review-' | grep
 
### Using K9s
 
[K9s] is a powerful command line dashboard which allows you to filter by labels. This can help identify trends with apps exceeding the [review-app resource requests](https://gitlab.com/gitlab-org/gitlab/blob/master/scripts/review_apps/base-config.yaml). Kubernetes will schedule pods to nodes based on resource requests and allow for CPU usage up to the limits.
[K9s] is a powerful command line dashboard which allows you to filter by labels. This can help identify trends with apps exceeding the [review-app resource requests](https://gitlab.com/gitlab-org/gitlab/-/blob/master/scripts/review_apps/base-config.yaml). Kubernetes will schedule pods to nodes based on resource requests and allow for CPU usage up to the limits.
 
- In K9s you can sort or add filters by typing the `/` character
- `-lrelease=<review-app-slug>` - filters down to all pods for a release. This aids in determining what is having issues in a single deployment
@@ -387,13 +398,11 @@ find a way to limit it to only us.**
[helm-chart]: https://gitlab.com/gitlab-org/charts/gitlab/
[review-apps-ce]: https://console.cloud.google.com/kubernetes/clusters/details/us-central1-a/review-apps-ce?project=gitlab-review-apps
[review-apps-ee]: https://console.cloud.google.com/kubernetes/clusters/details/us-central1-b/review-apps-ee?project=gitlab-review-apps
[review-apps.sh]: https://gitlab.com/gitlab-org/gitlab/blob/master/scripts/review_apps/review-apps.sh
[automated_cleanup.rb]: https://gitlab.com/gitlab-org/gitlab/blob/master/scripts/review_apps/automated_cleanup.rb
[Auto-DevOps.gitlab-ci.yml]: https://gitlab.com/gitlab-org/gitlab/blob/master/lib/gitlab/ci/templates/Auto-DevOps.gitlab-ci.yml
[gitlab-ci-yml]: https://gitlab.com/gitlab-org/gitlab/blob/master/.gitlab-ci.yml
[review-apps.sh]: https://gitlab.com/gitlab-org/gitlab/-/blob/master/scripts/review_apps/review-apps.sh
[automated_cleanup.rb]: https://gitlab.com/gitlab-org/gitlab/-/blob/master/scripts/review_apps/automated_cleanup.rb
[Auto-DevOps.gitlab-ci.yml]: https://gitlab.com/gitlab-org/gitlab/-/blob/master/lib/gitlab/ci/templates/Auto-DevOps.gitlab-ci.yml
[gitlab-k8s-integration]: ../../user/project/clusters/index.md
[K9s]: https://github.com/derailed/k9s
[password-bug]: https://gitlab.com/gitlab-org/gitlab-foss/issues/53621
 
---
 
@@ -260,11 +260,16 @@ GitLab.
 
## Code Quality reports
 
Once the Code Quality job has completed, GitLab:
Once the Code Quality job has completed:
 
- Checks the generated report.
- Compares the metrics between the source and target branches.
- Shows the information right on the merge request.
- The full list of code quality violations generated by a pipeline is available in the
Code Quality tab of the Pipeline Details page.
- Potential changes to code quality are shown directly in the merge request.
The Code Quality widget in the merge request compares the reports from the base and head of the branch,
then lists any violations that will be resolved or created when the branch is merged.
- The full JSON report is available as a
[downloadable artifact](../../project/pipelines/job_artifacts.html#downloading-artifacts)
for the `code_quality` job (see the API sketch after this list).
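
As a hedged sketch, the JSON report can be pulled with the jobs artifacts API. The project ID,
ref, and token below are placeholders, and `gl-code-quality-report.json` is assumed to be the
report file name produced by the Code Quality template:

```ruby
# Sketch: download the code_quality JSON report for a branch via the
# jobs artifacts API. Project ID, ref, and token are placeholders.
require 'net/http'
require 'json'
require 'uri'

project_id = 12345678
uri = URI("https://gitlab.com/api/v4/projects/#{project_id}/jobs/artifacts/master/raw/gl-code-quality-report.json?job=code_quality")

request = Net::HTTP::Get.new(uri)
request['PRIVATE-TOKEN'] = ENV.fetch('GITLAB_TOKEN')

response = Net::HTTP.start(uri.hostname, uri.port, use_ssl: true) { |http| http.request(request) }
violations = JSON.parse(response.body)

puts "#{violations.size} code quality violations on master"
```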
 
If multiple jobs in a pipeline generate a code quality artifact, only the artifact from
the last created job (the job with the largest job ID) is used. To avoid confusion,
@@ -276,6 +281,10 @@ Code Quality job in your `.gitlab-ci.yml` for the very first time.
Consecutive merge requests will have something to compare to and the Code Quality
report will be shown properly.
 
These reports will only be available as long as the Code Quality artifact(s) required to generate
them are also available. See
[`artifacts:expire_in`](../../../ci/yaml/README.md#artifactsexpire_in) for more details.
<!-- ## Troubleshooting
 
Include any troubleshooting steps that you can foresee. If you know beforehand what issues
@@ -4,6 +4,8 @@ module API
class Triggers < Grape::API
include PaginationParams
 
    HTTP_GITLAB_EVENT_HEADER = "HTTP_#{WebHookService::GITLAB_EVENT_HEADER}".underscore.upcase

    params do
      requires :id, type: String, desc: 'The ID of a project'
    end
@@ -19,6 +21,8 @@ module API
      post ":id/(ref/:ref/)trigger/pipeline", requirements: { ref: /.+/ } do
        Gitlab::QueryLimiting.whitelist('https://gitlab.com/gitlab-org/gitlab-foss/issues/42283')

        forbidden! if gitlab_pipeline_hook_request?

        # validate variables
        params[:variables] = params[:variables].to_h
        unless params[:variables].all? { |key, value| key.is_a?(String) && value.is_a?(String) }
@@ -128,5 +132,11 @@ module API
        destroy_conditionally!(trigger)
      end
    end

    helpers do
      def gitlab_pipeline_hook_request?
        request.get_header(HTTP_GITLAB_EVENT_HEADER) == WebHookService.hook_to_event(:pipeline_hooks)
      end
    end
  end
end
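
In plain terms, the new guard rejects trigger calls that arrive from GitLab's own pipeline
webhook, which prevents a pipeline-trigger loop. A minimal sketch of how the header constant
resolves, assuming `WebHookService::GITLAB_EVENT_HEADER` is `'X-Gitlab-Event'` (that value is
not shown in this diff):

```ruby
# Minimal sketch only; the real check is the gitlab_pipeline_hook_request?
# helper above. Assumes GITLAB_EVENT_HEADER == 'X-Gitlab-Event'.
require 'active_support/core_ext/string/inflections'

header_key = "HTTP_#{'X-Gitlab-Event'}".underscore.upcase
# => "HTTP_X_GITLAB_EVENT" -- the Rack env key the API reads

# A trigger request fired by a pipeline webhook carries this header, so the
# endpoint can answer 403 (forbidden!) instead of creating another pipeline.
puts header_key
```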
# frozen_string_literal: true

# This is based on https://github.com/jch/html-pipeline/blob/v2.12.2/lib/html/pipeline/camo_filter.rb
# and Banzai::Filter::AssetProxyFilter which we use to proxy images in Markdown
module Gitlab
  module AssetProxy
    class << self
      def proxy_url(url)
        return url unless Gitlab.config.asset_proxy.enabled
        return url if asset_host_whitelisted?(url)

        "#{Gitlab.config.asset_proxy.url}/#{asset_url_hash(url)}/#{hexencode(url)}"
      end

      private

      def asset_host_whitelisted?(url)
        parsed_url = URI.parse(url)

        Gitlab.config.asset_proxy.domain_regexp&.match?(parsed_url.host)
      end

      def asset_url_hash(url)
        OpenSSL::HMAC.hexdigest('sha1', Gitlab.config.asset_proxy.secret_key, url)
      end

      def hexencode(str)
        str.unpack1('H*')
      end
    end
  end
end
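
A hypothetical usage sketch, assuming the asset proxy is enabled and configured with a URL and
secret key (the values below are illustrative only):

```ruby
# Illustrative only -- assumes gitlab.yml has asset_proxy enabled with
# url: https://assets.example.com and a secret_key configured.
Gitlab::AssetProxy.proxy_url('http://images.example.org/cat.png')
# => "https://assets.example.com/<hmac-sha1-of-url>/<hex-encoded-url>"

Gitlab::AssetProxy.proxy_url('https://whitelisted.example.com/logo.png')
# => returned unchanged when the host matches asset_proxy.domain_regexp
```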
# frozen_string_literal: true

module Gitlab
  module BackgroundMigration
    # rubocop:disable Style/Documentation
    class RecalculateProjectAuthorizationsWithMinMaxUserId
      def perform(min_user_id, max_user_id)
        User.where(id: min_user_id..max_user_id).find_each do |user|
          service = Users::RefreshAuthorizedProjectsService.new(
            user,
            incorrect_auth_found_callback:
              ->(project_id, access_level) do
                logger.info(message: 'Removing ProjectAuthorizations',
                            user_id: user.id,
                            project_id: project_id,
                            access_level: access_level)
              end,
            missing_auth_found_callback:
              ->(project_id, access_level) do
                logger.info(message: 'Creating ProjectAuthorizations',
                            user_id: user.id,
                            project_id: project_id,
                            access_level: access_level)
              end
          )

          service.execute
        end
      end

      private

      def logger
        @logger ||= Gitlab::BackgroundMigration::Logger.build
      end
    end
  end
end
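
A hedged example of how a migration like this is typically enqueued in user-id batches; the
scheduling call itself is an assumption and is not part of this diff:

```ruby
# Assumption: scheduled from a post-deployment migration in user id batches.
BackgroundMigrationWorker.perform_async(
  'RecalculateProjectAuthorizationsWithMinMaxUserId',
  [1, 10_000] # min_user_id, max_user_id
)
```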
@@ -7,6 +7,8 @@ module Gitlab
GIT_INVALID_URL_REGEX = /^git\+#{URL_REGEX}/.freeze
REPO_REGEX = %r{[^/'" ]+/[^/'" ]+}.freeze
 
include ActionView::Helpers::SanitizeHelper
class_attribute :file_type
 
def self.support?(blob_name)
@@ -62,7 +64,10 @@ module Gitlab
end
 
def link_tag(name, url)
%{<a href="#{ERB::Util.html_escape_once(url)}" rel="nofollow noreferrer noopener" target="_blank">#{ERB::Util.html_escape_once(name)}</a>}.html_safe
sanitize(
%{<a href="#{ERB::Util.html_escape_once(url)}" rel="nofollow noreferrer noopener" target="_blank">#{ERB::Util.html_escape_once(name)}</a>},
attributes: %w[href rel target]
)
end
 
# Links package names based on regex.
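
Roughly, the extra `sanitize` pass in `link_tag` keeps only the whitelisted `href`, `rel`, and
`target` attributes on the generated link. A hypothetical call might look like this (exact
escaping depends on the sanitizer):

```ruby
# Hypothetical input and output for the sanitized link_tag above.
link_tag('nokogiri', 'https://rubygems.org/gems/nokogiri')
# => <a href="https://rubygems.org/gems/nokogiri" rel="nofollow noreferrer noopener" target="_blank">nokogiri</a>
```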
@@ -26,6 +26,7 @@ module Gitlab
ActiveRecord::Base.uncached do
ActiveRecord::Base.no_touching do
update_params!
update_relation_hashes!
create_relations!
end
end
@@ -217,6 +218,10 @@ module Gitlab
          excluded_keys: excluded_keys_for_relation(relation_key)
        }
      end

      def update_relation_hashes!
        @tree_hash['ci_pipelines']&.sort_by! { |hash| hash['id'] }
      end
    end
  end
end
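
The effect of the new `update_relation_hashes!` step on the import payload, shown in isolation
with a toy hash rather than real import data:

```ruby
# Toy example: pipelines are sorted in place by id before relations are built.
tree_hash = { 'ci_pipelines' => [{ 'id' => 7 }, { 'id' => 3 }] }
tree_hash['ci_pipelines']&.sort_by! { |hash| hash['id'] }

tree_hash['ci_pipelines'].map { |pipeline| pipeline['id'] }
# => [3, 7]
```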
@@ -62,6 +62,7 @@ module Gitlab
cte = Gitlab::SQL::RecursiveCTE.new(:namespaces_cte)
members = Member.arel_table
namespaces = Namespace.arel_table
group_group_links = GroupGroupLink.arel_table
 
# Namespaces the user is a member of.
cte << user.groups
@@ -69,7 +70,10 @@ module Gitlab
.except(:order)
 
# Namespaces shared with any of the group
cte << Group.select([namespaces[:id], 'group_group_links.group_access AS access_level'])
cte << Group.select([namespaces[:id],
least(members[:access_level],
group_group_links[:group_access],
'access_level')])
.joins(join_group_group_links)
.joins(join_members_on_group_group_links)
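
The `least(...)` call takes the lower of the member's access level and the group share's
`group_access`, so a share can only narrow, never widen, what the user already has. The helper
itself is not shown in this hunk; a sketch of what it might look like with Arel (an assumption,
the real definition may differ):

```ruby
# Assumed shape of the least(...) helper used above.
def least(left, right, column_alias)
  Arel::Nodes::NamedFunction.new('LEAST', [left, right]).as(column_alias)
end
```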
 
# frozen_string_literal: true
 
# This is needed for sidekiq-cluster
require 'json'

module Gitlab
  module SidekiqLogging
    class JSONFormatter
@@ -67,7 +67,13 @@ module Gitlab
return false unless can_access_git?
return false unless project
 
return false if !user.can?(:push_code, project) && !project.branch_allows_collaboration?(user, ref)
# Checking for an internal project to prevent an infinite loop:
# https://gitlab.com/gitlab-org/gitlab/issues/36805
if project.internal?
return false unless user.can?(:push_code, project)
else
return false if !user.can?(:push_code, project) && !project.branch_allows_collaboration?(user, ref)
end
 
if protected?(ProtectedBranch, project, ref)
protected_branch_accessible_to?(ref, action: :push)
@@ -146,5 +146,14 @@ module Gitlab
      IPAddr.new(str)
    rescue IPAddr::InvalidAddressError
    end

    # Converts a string to an Addressable::URI object.
    # If the string is not a valid URI, it returns nil.
    # Param uri_string should be a String object.
    # This method returns an Addressable::URI object or nil.
    def parse_url(uri_string)
      Addressable::URI.parse(uri_string)
    rescue Addressable::URI::InvalidURIError, TypeError
    end
  end
end
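
A hypothetical usage of the new helper; the surrounding module name is not visible in this
hunk, so `Gitlab::Utils` below is an assumption:

```ruby
# Assumes the helper lives in Gitlab::Utils, which this hunk does not show.
Gitlab::Utils.parse_url('https://gitlab.com/gitlab-org/gitlab')
# => #<Addressable::URI ...>

Gitlab::Utils.parse_url(123)
# => nil (TypeError is rescued)
```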
@@ -68,6 +68,11 @@ msgstr ""
msgid "\"%{path}\" did not exist on \"%{ref}\""
msgstr ""
 
msgid "%d code quality issue"
msgid_plural "%d code quality issues"
msgstr[0] ""
msgstr[1] ""

msgid "%d comment"
msgid_plural "%d comments"
msgstr[0] ""
@@ -4853,6 +4858,9 @@ msgstr ""
msgid "Code Owners to the merge request changes."
msgstr ""
 
msgid "Code Quality"
msgstr ""

msgid "Code Review"
msgstr ""
 
@@ -8378,6 +8386,9 @@ msgstr ""
msgid "False positive"
msgstr ""
 
msgid "Fast-forward merge is not possible. Rebase the source branch onto %{targetBranch} to allow this merge request to be merged."
msgstr ""
msgid "Fast-forward merge is not possible. Rebase the source branch onto the target branch or merge target branch into source branch to allow this merge request to be merged."
msgstr ""
 
@@ -8825,7 +8836,7 @@ msgstr ""
msgid "From %{providerTitle}"
msgstr ""
 
msgid "From %{source_title} into"
msgid "From <code>%{source_title}</code> into"
msgstr ""
 
msgid "From Bitbucket"
@@ -23188,6 +23199,9 @@ msgstr ""
msgid "ciReport|Fixed:"
msgstr ""
 
msgid "ciReport|Found %{issuesWithCount}"
msgstr ""

msgid "ciReport|Investigate this vulnerability by creating an issue"
msgstr ""
 
@@ -23206,6 +23220,9 @@ msgstr ""
msgid "ciReport|No changes to performance metrics"
msgstr ""
 
msgid "ciReport|No code quality issues found"
msgstr ""

msgid "ciReport|Performance metrics"
msgstr ""
 
@@ -23236,6 +23253,9 @@ msgstr ""
msgid "ciReport|There was an error dismissing the vulnerability. Please try again."
msgstr ""
 
msgid "ciReport|There was an error fetching the codequality report."
msgstr ""

msgid "ciReport|There was an error reverting the dismissal. Please try again."
msgstr ""
 
@@ -429,6 +429,7 @@ module QA
autoload :Gcloud, 'qa/service/cluster_provider/gcloud'
autoload :Minikube, 'qa/service/cluster_provider/minikube'
autoload :K3d, 'qa/service/cluster_provider/k3d'
autoload :K3s, 'qa/service/cluster_provider/k3s'
end
 
module DockerRun
@@ -440,6 +441,7 @@ module QA
autoload :GitlabRunner, 'qa/service/docker_run/gitlab_runner'
autoload :MailHog, 'qa/service/docker_run/mail_hog'
autoload :SamlIdp, 'qa/service/docker_run/saml_idp'
autoload :K3s, 'qa/service/docker_run/k3s'
end
end
 
@@ -11,7 +11,7 @@ module QA
element :api_url, 'url_field :api_url' # rubocop:disable QA/ElementWithPattern
element :ca_certificate, 'text_area :ca_cert' # rubocop:disable QA/ElementWithPattern
element :token, 'text_field :token' # rubocop:disable QA/ElementWithPattern
element :add_cluster_button, "submit s_('ClusterIntegration|Add Kubernetes cluster')" # rubocop:disable QA/ElementWithPattern
element :add_kubernetes_cluster_button
element :rbac_checkbox
end
 
@@ -32,7 +32,7 @@ module QA
end
 
def add_cluster!
click_on 'Add Kubernetes cluster'
click_element :add_kubernetes_cluster_button, Page::Project::Operations::Kubernetes::Show
end
 
def uncheck_rbac!
@@ -11,14 +11,20 @@ module QA
end
 
view 'app/views/clusters/clusters/_form.html.haml' do
element :base_domain
element :save_domain
element :integration_status_toggle, required: true
element :base_domain_field, required: true
element :save_changes_button, required: true
end
view 'app/assets/javascripts/clusters/components/application_row.vue' do
element :install_button
element :uninstall_button
end
 
def install!(application_name)
within_element(application_name) do
has_element?(:install_button, application: application_name, wait: 30)
click_on 'Install' # TODO replace with click_element
click_element :install_button
end
end
 
@@ -41,11 +47,11 @@ module QA
end
 
def set_domain(domain)
fill_element :base_domain, domain
fill_element :base_domain_field, domain
end
 
def save_domain
click_element :save_domain
click_element :save_changes_button, Page::Project::Operations::Kubernetes::Show
end
end
end
# frozen_string_literal: true

module QA
  module Service
    module ClusterProvider
      class K3s < Base
        def validate_dependencies
          Runtime::ApplicationSettings.set_application_settings(allow_local_requests_from_web_hooks_and_services: true)
        end

        def setup
          @k3s = Service::DockerRun::K3s.new.tap do |k3s|
            k3s.register!

            shell "kubectl config set-cluster k3s --server https://#{k3s.host_name}:6443 --insecure-skip-tls-verify"
            shell 'kubectl config set-credentials default --username=node --password=some-secret'
            shell 'kubectl config set-context k3s --cluster=k3s --user=default'
            shell 'kubectl config use-context k3s'

            wait_for_server(k3s.host_name) do
              shell 'kubectl version'

              wait_for_namespaces do
                # install local storage
                shell 'kubectl apply -f https://raw.githubusercontent.com/rancher/local-path-provisioner/master/deploy/local-path-storage.yaml'

                # patch local storage
                shell %(kubectl patch storageclass local-path -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}')
              end
            end
          end
        end

        def teardown
          Runtime::ApplicationSettings.set_application_settings(allow_local_requests_from_web_hooks_and_services: false)

          @k3s&.remove!
        end

        def set_credentials(admin_user)
        end

        # Fetch "real" certificate
        # See https://github.com/rancher/k3s/issues/27
        def filter_credentials(credentials)
          kubeconfig = YAML.safe_load(@k3s.kubeconfig)
          ca_certificate = kubeconfig.dig('clusters', 0, 'cluster', 'certificate-authority-data')

          credentials.merge('data' => credentials['data'].merge('ca.crt' => ca_certificate))
        end

        private

        def wait_for_server(host_name)
          print "Waiting for K3s server at `https://#{host_name}:6443` to become available "

          60.times do
            if service_available?('kubectl version')
              return yield if block_given?
              return true
            end

            sleep 1
            print '.'
          end

          raise 'K3s server never came up'
        end

        def wait_for_namespaces
          print 'Waiting for k8s namespaces to populate'

          60.times do
            if service_available?('kubectl get pods --all-namespaces | grep --silent "Running"')
              return yield if block_given?
              return true
            end

            sleep 1
            print '.'
          end

          raise 'K8s namespaces didnt populate correctly'
        end

        def service_available?(command)
          system("#{command} > /dev/null 2>&1")
        end
      end
    end
  end
end
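
Usage mirrors the spec change later in this commit: the QA scenario asks for a cluster backed
by the new provider.

```ruby
# Taken from the pattern used in the updated orchestrated spec below.
cluster = Service::KubernetesCluster.new(
  provider_class: Service::ClusterProvider::K3s
).create!
```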
@@ -37,6 +37,10 @@ module QA
        def running?
          `docker ps -f name=#{@name}`.include?(@name)
        end

        def read_file(file_path)
          `docker exec #{@name} /bin/cat #{file_path}`
        end
      end
    end
  end
# frozen_string_literal: true

module QA
  module Service
    module DockerRun
      class K3s < Base
        def initialize
          @image = 'registry.gitlab.com/gitlab-org/cluster-integration/test-utils/k3s-gitlab-ci/releases/v0.6.1'
          @name = 'k3s'
          super
        end

        def register!
          pull
          start_k3s
        end

        def host_name
          return 'localhost' unless Runtime::Env.running_in_ci?

          super
        end

        def kubeconfig
          read_file('/etc/rancher/k3s/k3s.yaml').chomp
        end

        def start_k3s
          command = <<~CMD.tr("\n", ' ')
            docker run -d --rm
            --network #{network}
            --hostname #{host_name}
            --name #{@name}
            --publish 6443:6443
            --privileged
            #{@image} server --cluster-secret some-secret
          CMD

          command.gsub!("--network #{network} ", '') unless QA::Runtime::Env.running_in_ci?

          shell command
        end
      end
    end
  end
end
@@ -2,10 +2,9 @@
 
module QA
context 'Configure' do
# This test requires GITLAB_QA_ADMIN_ACCESS_TOKEN to be specified
describe 'Kubernetes Cluster Integration', :orchestrated, :kubernetes, :requires_admin, :skip do
describe 'Kubernetes Cluster Integration', :orchestrated, :kubernetes, :requires_admin do
context 'Project Clusters' do
let(:cluster) { Service::KubernetesCluster.new(provider_class: Service::ClusterProvider::K3d).create! }
let(:cluster) { Service::KubernetesCluster.new(provider_class: Service::ClusterProvider::K3s).create! }
let(:project) do
Resource::Project.fabricate_via_api! do |project|
project.name = 'project-with-k8s'
@@ -35,18 +34,6 @@ module QA
expect(index).to have_cluster(cluster)
end
end
it 'installs helm and tiller on a gitlab managed app' do
Resource::KubernetesCluster.fabricate_via_browser_ui! do |k8s_cluster|
k8s_cluster.project = project
k8s_cluster.cluster = cluster
k8s_cluster.install_helm_tiller = true
end
Page::Project::Operations::Kubernetes::Show.perform do |show|
expect(show).to have_application_installed(:helm)
end
end
end
end
end
# frozen_string_literal: true

module QA
  describe Service::DockerRun::K3s do
    describe '#host_name' do
      context 'in CI' do
        let(:name) { 'k3s-12345' }
        let(:network) { 'thenet' }

        before do
          allow(Runtime::Env).to receive(:running_in_ci?).and_return(true)
          allow(subject).to receive(:network).and_return(network)
          subject.instance_variable_set(:@name, name)
        end

        it 'returns name.network' do
          expect(subject.host_name).to eq("#{name}.#{network}")
        end
      end

      context 'not in CI' do
        before do
          allow(Runtime::Env).to receive(:running_in_ci?).and_return(false)
        end

        it 'returns localhost if not running in a CI environment' do
          expect(subject.host_name).to eq('localhost')
        end
      end
    end
  end
end