Commit 7f3bff15 authored by GitLab Bot

Add latest changes from gitlab-org/gitlab@master

parent 8d0aed5e
Showing 555 additions and 62 deletions
---
title: Delete Kubernetes cluster association and resources
merge_request: 16954
author:
type: added
---
title: Upgrade to Gitaly v1.72.0
merge_request: 20313
author:
type: changed
---
title: Return 422 status code when comment submission fails
merge_request: 19276
author: raju249
type: added
@@ -85,7 +85,7 @@ in your repository map to paths of pages on your website using a Route Map.
Once set, GitLab will display **View on ...** buttons, which will take you
to the pages changed directly from merge requests.
 
-To set up a route map, add a a file inside the repository at `.gitlab/route-map.yml`,
+To set up a route map, add a file inside the repository at `.gitlab/route-map.yml`,
which contains a YAML array that maps `source` paths (in the repository) to `public`
paths (on the website).
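For reference, a route map is a list of `source`/`public` pairs, where `source` is either an exact repository path or a regular expression whose capture groups can be reused in `public`. A minimal sketch (the paths below are illustrative, not part of this commit):

```yaml
# .gitlab/route-map.yml -- example mappings only
- source: 'data/homepage.yml' # exact repository path
  public: 'index.html'        # corresponding page on the website
- source: /^posts\/(.+)\.md$/ # regex matching any Markdown post
  public: 'posts/\1.html'     # \1 reuses the captured file name
```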
 
@@ -50,14 +50,17 @@ Any issue that belongs to a project in the epic's group, or any of the epic's
subgroups, is eligible to be added. New issues appear at the top of the list of issues in the **Epics and Issues** tab.
 
An epic contains a list of issues and an issue can be associated with at most
-one epic. When you add an issue to an epic that is already associated with another epic,
-the issue is automatically removed from the previous epic.
+one epic. When you add an issue that is already linked to an epic,
+the issue is automatically unlinked from its current parent.
 
To add an issue to an epic:
 
1. Click **Add an issue**.
-1. Paste the link of the issue.
-   - Press <kbd>Spacebar</kbd> and repeat this step if there are multiple issues.
+1. Identify the issue to be added, using either of the following methods:
+   - Paste the link of the issue.
+   - Search for the desired issue by entering part of the issue's title, then selecting the desired match. ([From GitLab 12.5](https://gitlab.com/gitlab-org/gitlab/issues/9126))
+
+   If there are multiple issues to be added, press <kbd>Spacebar</kbd> and repeat this step.
1. Click **Add**.
 
To remove an issue from an epic:
@@ -72,17 +75,19 @@ To remove an issue from an epic:
Any epic that belongs to a group, or subgroup of the parent epic's group, is
eligible to be added. New child epics appear at the top of the list of epics in the **Epics and Issues** tab.
 
-When you add a child epic that is already associated with another epic,
-that epic is automatically removed from the previous epic.
+When you add an epic that is already linked to a parent epic, the link to its current parent is removed.
 
An epic can have multiple child epics with
the maximum depth being 5.
 
-To add a child epic:
+To add a child epic to an epic:
 
1. Click **Add an epic**.
-1. Paste the link of the epic.
-   - Press <kbd>Spacebar</kbd> and repeat this step if there are multiple issues.
+1. Identify the epic to be added, using either of the following methods:
+   - Paste the link of the epic.
+   - Search for the desired epic by entering part of the epic's title, then selecting the desired match. ([From GitLab 12.5](https://gitlab.com/gitlab-org/gitlab/issues/9126))
+
+   If there are multiple epics to be added, press <kbd>Spacebar</kbd> and repeat this step.
1. Click **Add**.
 
To remove a child epic from a parent epic:
@@ -262,55 +262,55 @@ new Kubernetes cluster to your project:
1. Click **Create Policy**, which will open a new window.
1. Select the **JSON** tab, and paste in the following snippet in place of the existing content:
 
```json
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"autoscaling:CreateAutoScalingGroup",
"autoscaling:DescribeAutoScalingGroups",
"autoscaling:DescribeScalingActivities",
"autoscaling:UpdateAutoScalingGroup",
"autoscaling:CreateLaunchConfiguration",
"autoscaling:DescribeLaunchConfigurations",
"cloudformation:CreateStack",
"cloudformation:DescribeStacks",
"ec2:AuthorizeSecurityGroupEgress",
"ec2:AuthorizeSecurityGroupIngress",
"ec2:RevokeSecurityGroupEgress",
"ec2:RevokeSecurityGroupIngress",
"ec2:CreateSecurityGroup",
"ec2:createTags",
"ec2:DescribeImages",
"ec2:DescribeKeyPairs",
"ec2:DescribeRegions",
"ec2:DescribeSecurityGroups",
"ec2:DescribeSubnets",
"ec2:DescribeVpcs",
"eks:CreateCluster",
"eks:DescribeCluster",
"iam:AddRoleToInstanceProfile",
"iam:AttachRolePolicy",
"iam:CreateRole",
"iam:CreateInstanceProfile",
"iam:GetRole",
"iam:ListRoles",
"iam:PassRole",
"ssm:GetParameters"
],
"Resource": "*"
}
]
}
```

NOTE: **Note:**
These permissions give GitLab the ability to create resources, but not delete them.
This means that if an error is encountered during the creation process, changes will
not be rolled back and you must remove resources manually. You can do this by deleting
the relevant [CloudFormation stack](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/cfn-console-delete-stack.html).
 
1. Click **Review policy**.
1. Enter a suitable name for this policy, and click **Create Policy**. You can now close this window.
@@ -157,6 +157,8 @@ The plain text title and description of the issue fill the top center of the iss
The description fully supports [GitLab Flavored Markdown](../../markdown.md#gitlab-flavored-markdown-gfm),
allowing many formatting options.
 
> [Since GitLab 12.5](https://gitlab.com/gitlab-org/gitlab/issues/10103), changes to an issue's description are listed in the [issue history](#23-issue-history). **(STARTER)**

#### 17. Mentions
 
You can mention a user or a group present in your GitLab instance with `@username` or
@@ -55,8 +55,7 @@ module QA
end
end
 
-# https://gitlab.com/gitlab-org/gitlab/issues/35156
-describe 'Auto DevOps support', :orchestrated, :kubernetes, :quarantine do
+describe 'Auto DevOps support', :orchestrated, :kubernetes do
context 'when rbac is enabled' do
before(:all) do
@cluster = Service::KubernetesCluster.new.create!
@@ -259,6 +259,17 @@ describe Projects::NotesController do
end
end
 
context 'the note does not have commands_only errors' do
  context 'for empty note' do
    let(:note_text) { '' }
    let(:extra_request_params) { { format: :json } }

    it "returns status 422 for json" do
      expect(response).to have_gitlab_http_status(422)
    end
  end
end

context 'the project is a private project' do
let(:project_visibility) { Gitlab::VisibilityLevel::PRIVATE }
 
@@ -90,6 +90,28 @@ FactoryBot.define do
  domain { 'example.com' }
end

trait :with_environments do
  transient do
    environments { %i(staging production) }
  end

  cluster_type { Clusters::Cluster.cluster_types[:project_type] }

  before(:create) do |cluster, evaluator|
    cluster_project = create(:cluster_project, cluster: cluster)

    evaluator.environments.each do |env_name|
      environment = create(:environment, name: env_name, project: cluster_project.project)

      cluster.kubernetes_namespaces << create(:cluster_kubernetes_namespace,
        cluster: cluster,
        cluster_project: cluster_project,
        project: cluster_project.project,
        environment: environment)
    end
  end
end

trait :not_managed do
  managed { false }
end
@@ -75,4 +75,21 @@ describe Projects::ErrorTrackingHelper do
end
end
end

  describe '#error_details_data' do
    let(:issue_id) { 1234 }
    let(:route_params) { [project.owner, project, issue_id, { format: :json }] }
    let(:details_path) { details_namespace_project_error_tracking_index_path(*route_params) }
    let(:stack_trace_path) { stack_trace_namespace_project_error_tracking_index_path(*route_params) }
    let(:result) { helper.error_details_data(project, issue_id) }

    it 'returns the correct details path' do
      expect(result['issue-details-path']).to eq details_path
    end

    it 'returns the correct stack trace path' do
      expect(result['issue-stack-trace-path']).to eq stack_trace_path
    end
  end
end
@@ -500,6 +500,48 @@ describe Clusters::Cluster, :use_clean_rails_memory_store_caching do
end
end
 
  describe '.with_persisted_applications' do
    let(:cluster) { create(:cluster) }
    let!(:helm) { create(:clusters_applications_helm, :installed, cluster: cluster) }

    it 'preloads persisted applications' do
      query_rec = ActiveRecord::QueryRecorder.new do
        described_class.with_persisted_applications.find_by_id(cluster.id).application_helm
      end

      expect(query_rec.count).to eq(1)
    end
  end

  describe '#persisted_applications' do
    let(:cluster) { create(:cluster) }

    subject { cluster.persisted_applications }

    context 'when all applications are created' do
      let!(:helm) { create(:clusters_applications_helm, cluster: cluster) }
      let!(:ingress) { create(:clusters_applications_ingress, cluster: cluster) }
      let!(:cert_manager) { create(:clusters_applications_cert_manager, cluster: cluster) }
      let!(:prometheus) { create(:clusters_applications_prometheus, cluster: cluster) }
      let!(:runner) { create(:clusters_applications_runner, cluster: cluster) }
      let!(:jupyter) { create(:clusters_applications_jupyter, cluster: cluster) }
      let!(:knative) { create(:clusters_applications_knative, cluster: cluster) }

      it 'returns a list of created applications' do
        is_expected.to contain_exactly(helm, ingress, cert_manager, prometheus, runner, jupyter, knative)
      end
    end

    context 'when not all were created' do
      let!(:helm) { create(:clusters_applications_helm, cluster: cluster) }
      let!(:ingress) { create(:clusters_applications_ingress, cluster: cluster) }

      it 'returns a list of created applications' do
        is_expected.to contain_exactly(helm, ingress)
      end
    end
  end

  describe '#applications' do
    set(:cluster) { create(:cluster) }
 
# frozen_string_literal: true

require 'spec_helper'

describe Clusters::Cleanup::AppService do
  describe '#execute' do
    let!(:cluster) { create(:cluster, :project, :cleanup_uninstalling_applications, provider_type: :gcp) }
    let(:service) { described_class.new(cluster) }
    let(:logger) { service.send(:logger) }
    let(:log_meta) do
      {
        service: described_class.name,
        cluster_id: cluster.id,
        execution_count: 0
      }
    end

    subject { service.execute }

    shared_examples 'does not reschedule itself' do
      it 'does not reschedule itself' do
        expect(Clusters::Cleanup::AppWorker).not_to receive(:perform_in)
      end
    end

    context 'when cluster has no applications available or transitioning applications' do
      it_behaves_like 'does not reschedule itself'

      it 'transitions cluster to cleanup_removing_project_namespaces' do
        expect { subject }
          .to change { cluster.reload.cleanup_status_name }
          .from(:cleanup_uninstalling_applications)
          .to(:cleanup_removing_project_namespaces)
      end

      it 'schedules Clusters::Cleanup::ProjectNamespaceWorker' do
        expect(Clusters::Cleanup::ProjectNamespaceWorker).to receive(:perform_async).with(cluster.id)

        subject
      end

      it 'logs all events' do
        expect(logger).to receive(:info)
          .with(log_meta.merge(event: :schedule_remove_project_namespaces))

        subject
      end
    end

    context 'when cluster has uninstallable applications' do
      shared_examples 'reschedules itself' do
        it 'reschedules itself' do
          expect(Clusters::Cleanup::AppWorker)
            .to receive(:perform_in)
            .with(1.minute, cluster.id, 1)

          subject
        end
      end

      context 'has applications with dependencies' do
        let!(:helm) { create(:clusters_applications_helm, :installed, cluster: cluster) }
        let!(:ingress) { create(:clusters_applications_ingress, :installed, cluster: cluster) }
        let!(:cert_manager) { create(:clusters_applications_cert_manager, :installed, cluster: cluster) }
        let!(:jupyter) { create(:clusters_applications_jupyter, :installed, cluster: cluster) }

        it_behaves_like 'reschedules itself'

        it 'only uninstalls apps that are not dependencies for other installed apps' do
          expect(Clusters::Applications::UninstallWorker)
            .not_to receive(:perform_async).with(helm.name, helm.id)
          expect(Clusters::Applications::UninstallWorker)
            .not_to receive(:perform_async).with(ingress.name, ingress.id)
          expect(Clusters::Applications::UninstallWorker)
            .to receive(:perform_async).with(cert_manager.name, cert_manager.id)
            .and_call_original
          expect(Clusters::Applications::UninstallWorker)
            .to receive(:perform_async).with(jupyter.name, jupyter.id)
            .and_call_original

          subject
        end

        it 'logs application uninstalls and next execution' do
          expect(logger).to receive(:info)
            .with(log_meta.merge(event: :uninstalling_app, application: kind_of(String))).exactly(2).times
          expect(logger).to receive(:info)
            .with(log_meta.merge(event: :scheduling_execution, next_execution: 1))

          subject
        end

        context 'cluster is not cleanup_uninstalling_applications' do
          let!(:cluster) { create(:cluster, :project, provider_type: :gcp) }

          it_behaves_like 'does not reschedule itself'
        end
      end

      context 'when applications are still uninstalling/scheduled/depending on others' do
        let!(:helm) { create(:clusters_applications_helm, :installed, cluster: cluster) }
        let!(:ingress) { create(:clusters_applications_ingress, :scheduled, cluster: cluster) }
        let!(:runner) { create(:clusters_applications_runner, :uninstalling, cluster: cluster) }

        it_behaves_like 'reschedules itself'

        it 'does not call the uninstallation service' do
          expect(Clusters::Applications::UninstallWorker).not_to receive(:new)

          subject
        end
      end
    end
  end
end

# frozen_string_literal: true

require 'spec_helper'

describe Clusters::Cleanup::ProjectNamespaceService do
  describe '#execute' do
    subject { service.execute }

    let!(:service) { described_class.new(cluster) }
    let!(:cluster) { create(:cluster, :with_environments, :cleanup_removing_project_namespaces) }
    let!(:logger) { service.send(:logger) }
    let(:log_meta) do
      {
        service: described_class.name,
        cluster_id: cluster.id,
        execution_count: 0
      }
    end
    let(:kubeclient_instance_double) do
      instance_double(Gitlab::Kubernetes::KubeClient, delete_namespace: nil, delete_service_account: nil)
    end

    before do
      allow_any_instance_of(Clusters::Cluster).to receive(:kubeclient).and_return(kubeclient_instance_double)
    end

    context 'when cluster has namespaces to be deleted' do
      it 'deletes namespaces from cluster' do
        expect(kubeclient_instance_double).to receive(:delete_namespace)
          .with(cluster.kubernetes_namespaces[0].namespace)
        expect(kubeclient_instance_double).to receive(:delete_namespace)
          .with(cluster.kubernetes_namespaces[1].namespace)

        subject
      end

      it 'deletes namespaces from database' do
        expect { subject }.to change { cluster.kubernetes_namespaces.exists? }.from(true).to(false)
      end

      it 'schedules ::ServiceAccountWorker' do
        expect(Clusters::Cleanup::ServiceAccountWorker).to receive(:perform_async).with(cluster.id)

        subject
      end

      it 'logs all events' do
        expect(logger).to receive(:info)
          .with(
            log_meta.merge(
              event: :deleting_project_namespace,
              namespace: cluster.kubernetes_namespaces[0].namespace))
        expect(logger).to receive(:info)
          .with(
            log_meta.merge(
              event: :deleting_project_namespace,
              namespace: cluster.kubernetes_namespaces[1].namespace))

        subject
      end
    end

    context 'when cluster has no namespaces' do
      let!(:cluster) { create(:cluster, :cleanup_removing_project_namespaces) }

      it 'schedules Clusters::Cleanup::ServiceAccountWorker' do
        expect(Clusters::Cleanup::ServiceAccountWorker).to receive(:perform_async).with(cluster.id)

        subject
      end

      it 'transitions to cleanup_removing_service_account' do
        expect { subject }
          .to change { cluster.reload.cleanup_status_name }
          .from(:cleanup_removing_project_namespaces)
          .to(:cleanup_removing_service_account)
      end

      it 'does not try to delete namespaces' do
        expect(kubeclient_instance_double).not_to receive(:delete_namespace)

        subject
      end
    end
  end
end

# frozen_string_literal: true

require 'spec_helper'

describe Clusters::Cleanup::ServiceAccountService do
  describe '#execute' do
    subject { service.execute }

    let!(:service) { described_class.new(cluster) }
    let!(:cluster) { create(:cluster, :cleanup_removing_service_account) }
    let!(:logger) { service.send(:logger) }
    let(:log_meta) do
      {
        service: described_class.name,
        cluster_id: cluster.id,
        execution_count: 0
      }
    end
    let(:kubeclient_instance_double) do
      instance_double(Gitlab::Kubernetes::KubeClient, delete_namespace: nil, delete_service_account: nil)
    end

    before do
      allow_any_instance_of(Clusters::Cluster).to receive(:kubeclient).and_return(kubeclient_instance_double)
    end

    it 'deletes gitlab service account' do
      expect(kubeclient_instance_double).to receive(:delete_service_account)
        .with(
          ::Clusters::Kubernetes::GITLAB_SERVICE_ACCOUNT_NAME,
          ::Clusters::Kubernetes::GITLAB_SERVICE_ACCOUNT_NAMESPACE)

      subject
    end

    it 'logs all events' do
      expect(logger).to receive(:info).with(log_meta.merge(event: :deleting_gitlab_service_account))
      expect(logger).to receive(:info).with(log_meta.merge(event: :destroying_cluster))

      subject
    end

    it 'deletes cluster' do
      expect { subject }.to change { Clusters::Cluster.where(id: cluster.id).exists? }.from(true).to(false)
    end
  end
end
@@ -45,7 +45,7 @@ describe Clusters::DestroyService do
expect(Clusters::Cluster.where(id: cluster.id).exists?).not_to be_falsey
end
 
-it 'transition cluster#cleanup_status from cleanup_not_started to uninstalling_applications' do
+it 'transitions cluster#cleanup_status from cleanup_not_started to cleanup_uninstalling_applications' do
expect { subject }.to change { cluster.cleanup_status_name }
.from(:cleanup_not_started)
.to(:cleanup_uninstalling_applications)

# frozen_string_literal: true

shared_examples 'cluster cleanup worker base specs' do
  it 'transitions to errored if sidekiq retries exhausted' do
    job = { 'args' => [cluster.id, 0], 'jid' => '123' }

    described_class.sidekiq_retries_exhausted_block.call(job)

    expect(cluster.reload.cleanup_status_name).to eq(:cleanup_errored)
  end
end

# frozen_string_literal: true

require 'spec_helper'

describe Clusters::Cleanup::AppWorker do
  describe '#perform' do
    subject { worker_instance.perform(cluster.id) }

    let!(:worker_instance) { described_class.new }
    let!(:cluster) { create(:cluster, :project, :cleanup_uninstalling_applications, provider_type: :gcp) }
    let!(:logger) { worker_instance.send(:logger) }

    it_behaves_like 'cluster cleanup worker base specs'

    context 'when exceeded the execution limit' do
      subject { worker_instance.perform(cluster.id, worker_instance.send(:execution_limit)) }

      let(:worker_instance) { described_class.new }
      let(:logger) { worker_instance.send(:logger) }
      let!(:helm) { create(:clusters_applications_helm, :installed, cluster: cluster) }
      let!(:ingress) { create(:clusters_applications_ingress, :scheduled, cluster: cluster) }

      it 'logs the error' do
        expect(logger).to receive(:error)
          .with(
            hash_including(
              exception: 'ClusterCleanupMethods::ExceededExecutionLimitError',
              cluster_id: kind_of(Integer),
              class_name: described_class.name,
              applications: "helm:installed,ingress:scheduled",
              cleanup_status: cluster.cleanup_status_name,
              event: :failed_to_remove_cluster_and_resources,
              message: "exceeded execution limit of 10 tries"
            )
          )

        subject
      end
    end
  end
end

# frozen_string_literal: true

require 'spec_helper'

describe Clusters::Cleanup::ProjectNamespaceWorker do
  describe '#perform' do
    context 'when cluster.cleanup_status is cleanup_removing_project_namespaces' do
      let!(:cluster) { create(:cluster, :with_environments, :cleanup_removing_project_namespaces) }
      let!(:worker_instance) { described_class.new }
      let!(:logger) { worker_instance.send(:logger) }

      it_behaves_like 'cluster cleanup worker base specs'

      it 'calls Clusters::Cleanup::ProjectNamespaceService' do
        expect_any_instance_of(Clusters::Cleanup::ProjectNamespaceService).to receive(:execute).once

        subject.perform(cluster.id)
      end

      context 'when exceeded the execution limit' do
        subject { worker_instance.perform(cluster.id, worker_instance.send(:execution_limit)) }

        it 'logs the error' do
          expect(logger).to receive(:error)
            .with(
              hash_including(
                exception: 'ClusterCleanupMethods::ExceededExecutionLimitError',
                cluster_id: kind_of(Integer),
                class_name: described_class.name,
                applications: "",
                cleanup_status: cluster.cleanup_status_name,
                event: :failed_to_remove_cluster_and_resources,
                message: "exceeded execution limit of 10 tries"
              )
            )

          subject
        end
      end
    end

    context 'when cluster.cleanup_status is not cleanup_removing_project_namespaces' do
      let!(:cluster) { create(:cluster, :with_environments) }

      it 'does not call Clusters::Cleanup::ProjectNamespaceService' do
        expect(Clusters::Cleanup::ProjectNamespaceService).not_to receive(:new)

        subject.perform(cluster.id)
      end
    end
  end
end

# frozen_string_literal: true

require 'spec_helper'

describe Clusters::Cleanup::ServiceAccountWorker do
  describe '#perform' do
    let!(:cluster) { create(:cluster, :cleanup_removing_service_account) }

    context 'when cluster.cleanup_status is cleanup_removing_service_account' do
      it 'calls Clusters::Cleanup::ServiceAccountService' do
        expect_any_instance_of(Clusters::Cleanup::ServiceAccountService).to receive(:execute).once

        subject.perform(cluster.id)
      end
    end

    context 'when cluster.cleanup_status is not cleanup_removing_service_account' do
      let!(:cluster) { create(:cluster, :with_environments) }

      it 'does not call Clusters::Cleanup::ServiceAccountService' do
        expect(Clusters::Cleanup::ServiceAccountService).not_to receive(:new)

        subject.perform(cluster.id)
      end
    end
  end
end