Commit 5781a496 authored by GitLab Bot

Add latest changes from gitlab-org/gitlab@master

parent 4d48b3cf
Showing 1007 additions and 163 deletions
@@ -311,11 +311,11 @@ If you experience a problem, you should contact GitLab support. Use the command
line at your own risk.
 
For debugging purposes, you can start extra Sidekiq processes by using the command
-`/opt/gitlab/embedded/service/gitlab-rails/ee/bin/sidekiq-cluster`. This command
+`/opt/gitlab/embedded/service/gitlab-rails/bin/sidekiq-cluster`. This command
takes arguments using the following syntax:
 
```shell
-/opt/gitlab/embedded/service/gitlab-rails/ee/bin/sidekiq-cluster [QUEUE,QUEUE,...] [QUEUE, ...]
+/opt/gitlab/embedded/service/gitlab-rails/bin/sidekiq-cluster [QUEUE,QUEUE,...] [QUEUE, ...]
```
 
Each separate argument denotes a group of queues that have to be processed by a
@@ -333,14 +333,14 @@ For example, say you want to start 2 extra processes: one to process the
done as follows:
 
```shell
-/opt/gitlab/embedded/service/gitlab-rails/ee/bin/sidekiq-cluster process_commit post_receive
+/opt/gitlab/embedded/service/gitlab-rails/bin/sidekiq-cluster process_commit post_receive
```
 
If you instead want to start one process processing both queues, you'd use the
following syntax:
 
```shell
-/opt/gitlab/embedded/service/gitlab-rails/ee/bin/sidekiq-cluster process_commit,post_receive
+/opt/gitlab/embedded/service/gitlab-rails/bin/sidekiq-cluster process_commit,post_receive
```
 
If you want to have one Sidekiq process dealing with the `process_commit` and
@@ -348,7 +348,7 @@ If you want to have one Sidekiq process dealing with the `process_commit` and
you'd use the following:
 
```shell
-/opt/gitlab/embedded/service/gitlab-rails/ee/bin/sidekiq-cluster process_commit,post_receive gitlab_shell
+/opt/gitlab/embedded/service/gitlab-rails/bin/sidekiq-cluster process_commit,post_receive gitlab_shell
```
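
The command also accepts the special `*` argument, which the CLI added in this commit expands to every queue in `sidekiq_queues.yml` (see the `bin/sidekiq-cluster` spec below). Quote it so the shell does not glob it:

```shell
/opt/gitlab/embedded/service/gitlab-rails/bin/sidekiq-cluster '*'
```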
 
### Monitoring the `sidekiq-cluster` command
@@ -380,7 +380,7 @@ file is written, but this can be changed by passing the `--pidfile` option to
`sidekiq-cluster`. For example:
 
```shell
-/opt/gitlab/embedded/service/gitlab-rails/ee/bin/sidekiq-cluster --pidfile /var/run/gitlab/sidekiq_cluster.pid process_commit
+/opt/gitlab/embedded/service/gitlab-rails/bin/sidekiq-cluster --pidfile /var/run/gitlab/sidekiq_cluster.pid process_commit
```
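
Because the master process traps `INT`/`TERM` and forwards `TTIN`, `USR1`, `USR2`, and `HUP` to its workers (see `TERMINATE_SIGNALS` and `FORWARD_SIGNALS` in the `Gitlab::SidekiqCluster` module below), the PID file is a convenient handle for controlling the whole cluster. A sketch, using the path from the example above:

```shell
# Ask the whole cluster to terminate cleanly; the master forwards TERM to the
# workers and waits up to 30 seconds before hard-killing any stragglers.
kill -TERM "$(cat /var/run/gitlab/sidekiq_cluster.pid)"

# Forwarded signals (for example USR1) are passed straight to every worker.
kill -USR1 "$(cat /var/run/gitlab/sidekiq_cluster.pid)"
```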
 
Keep in mind that the PID file will contain the PID of the `sidekiq-cluster`
@@ -5,12 +5,8 @@ type: howto
# User email confirmation at sign-up
 
GitLab can be configured to require confirmation of a user's email address when
-the user signs up. When this setting is enabled:
-
-- For GitLab 12.7 and earlier, the user is unable to sign in until they confirm their
-  email address.
-- For GitLab 12.8 and later, the user [has 30 days to confirm their email address](https://gitlab.com/gitlab-org/gitlab-foss/-/merge_requests/31245).
-  After 30 days, they will be unable to log in and access GitLab features.
+the user signs up. When this setting is enabled, the user is unable to sign in until
+they confirm their email address.
 
In **Admin Area > Settings** (`/admin/application_settings/general`), go to the section
**Sign-up Restrictions** and look for the **Send confirmation email on sign-up** option.
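
The same setting can also be toggled from the command line; a minimal sketch for an Omnibus installation, assuming `ApplicationSetting.current` is available in your version (the column name `send_user_confirmation_email` matches the application setting stubbed in the specs below):

```shell
sudo gitlab-rails runner 'ApplicationSetting.current.update!(send_user_confirmation_email: true)'
```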
@@ -666,6 +666,10 @@ To use Auto Deploy on a Kubernetes 1.16+ cluster, you must:
This will opt-in to using a version of the PostgreSQL chart that supports Kubernetes
1.16 and higher.
 
+CAUTION: **Caution:** Opting into `AUTO_DEVOPS_POSTGRES_CHANNEL` version `2` deletes
+the version `1` PostgreSQL database. Back up the contents of the PostgreSQL database
+before opting into version `2`, so that you can restore it into the version `2` database.
+
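For example, one way to take that backup is to run `pg_dump` against the channel `1` database before switching channels. This is only a sketch; the namespace, pod name, user, and database below are illustrative placeholders, not values defined by Auto DevOps:

```shell
# Dump the channel 1 database to a local file before opting into channel 2.
kubectl exec -n "<namespace>" "<postgres-pod>" -- \
  pg_dump -U "<postgres-user>" "<database>" > postgres-channel1-backup.sql
```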
#### Migrations
 
> [Introduced][ce-21955] in GitLab 11.4
@@ -37,12 +37,7 @@ email domains to prevent malicious users from creating accounts.
## Require email confirmation
 
You can send confirmation emails during sign-up and require that users confirm
-their email address. If this setting is selected:
-
-- For GitLab 12.7 and earlier, the user is unable to sign in until they confirm their
-  email address.
-- For GitLab 12.8 and later, the user [has 30 days to confirm their email address](https://gitlab.com/gitlab-org/gitlab-foss/-/merge_requests/31245).
-  After 30 days, they will be unable to log in and access GitLab features.
+their email address before they are allowed to sign in.
 
![Email confirmation](img/email_confirmation_v12_7.png)
 
@@ -62,6 +62,8 @@ However, users will not be prompted to log in via SSO on each visit. GitLab will ch
 
We intend to add a similar SSO requirement for [Git and API activity](https://gitlab.com/gitlab-org/gitlab/issues/9152) in the future.
 
+When SSO enforcement is enabled for a group, users cannot share a project in the group outside the top-level group, even if the project is forked.
+
#### Group-managed accounts
 
> [Introduced in GitLab 12.1](https://gitlab.com/groups/gitlab-org/-/epics/709).
@@ -74,6 +76,7 @@ When this option is enabled:
 
- All existing and new users in the group will be required to log in via the SSO URL associated with the group.
- After the group-managed account has been created, group activity will require the use of this user account.
+- Users can't share a project in the group outside the top-level group (also applies to forked projects).
 
Upon successful authentication, GitLab prompts the user with options, based on the email address received from the configured identity provider:
 
@@ -107,6 +110,16 @@ Groups with enabled group-managed accounts can allow or disallow forking of proj
by using a separate toggle. If forking is disallowed, any project of a given root group or its subgroups can be forked
only to a subgroup of the same root group.
 
+##### Other restrictions for Group-managed accounts
+
+> [Introduced in GitLab 12.9](https://gitlab.com/gitlab-org/gitlab/issues/12420)
+
+Projects within groups with group-managed accounts enabled cannot be shared with:
+
+- Groups outside of the parent group.
+- Members who are not users managed by this group.
+
+This restriction also applies to projects forked from or to those groups.
+
#### Assertions
 
When using group-managed accounts, the following user details need to be passed to GitLab as SAML
# frozen_string_literal: true

module Gitlab
  module SidekiqCluster
    # The signals that should terminate both the master and workers.
    TERMINATE_SIGNALS = %i(INT TERM).freeze

    # The signals that should simply be forwarded to the workers.
    FORWARD_SIGNALS = %i(TTIN USR1 USR2 HUP).freeze

    # Traps the given signals and yields the block whenever these signals are
    # received.
    #
    # The block is passed the name of the signal.
    #
    # Example:
    #
    #     trap_signals(%i(HUP TERM)) do |signal|
    #       ...
    #     end
    def self.trap_signals(signals)
      signals.each do |signal|
        trap(signal) do
          yield signal
        end
      end
    end

    def self.trap_terminate(&block)
      trap_signals(TERMINATE_SIGNALS, &block)
    end

    def self.trap_forward(&block)
      trap_signals(FORWARD_SIGNALS, &block)
    end

    def self.signal(pid, signal)
      Process.kill(signal, pid)
      true
    rescue Errno::ESRCH
      false
    end

    def self.signal_processes(pids, signal)
      pids.each { |pid| signal(pid, signal) }
    end

    # Starts Sidekiq workers for the pairs of processes.
    #
    # Example:
    #
    #     start([ ['foo'], ['bar', 'baz'] ], :production)
    #
    # This would start two Sidekiq processes: one processing "foo", and one
    # processing "bar" and "baz". Each one is placed in its own process group.
    #
    # queues - An Array containing Arrays. Each sub Array should specify the
    #          queues to use for a single process.
    #
    # directory - The directory of the Rails application.
    #
    # Returns an Array containing the PIDs of the started processes.
    def self.start(queues, env: :development, directory: Dir.pwd, max_concurrency: 50, min_concurrency: 0, dryrun: false)
      queues.map.with_index do |pair, index|
        start_sidekiq(pair, env: env, directory: directory, max_concurrency: max_concurrency, min_concurrency: min_concurrency, worker_id: index, dryrun: dryrun)
      end
    end

    # Starts a Sidekiq process that processes _only_ the given queues.
    #
    # Returns the PID of the started process.
    def self.start_sidekiq(queues, env:, directory:, max_concurrency:, min_concurrency:, worker_id:, dryrun:)
      counts = count_by_queue(queues)

      cmd = %w[bundle exec sidekiq]
      cmd << "-c #{self.concurrency(queues, min_concurrency, max_concurrency)}"
      cmd << "-e#{env}"
      cmd << "-gqueues: #{proc_details(counts)}"
      cmd << "-r#{directory}"

      counts.each do |queue, count|
        cmd << "-q#{queue},#{count}"
      end

      if dryrun
        puts "Sidekiq command: #{cmd}" # rubocop:disable Rails/Output
        return
      end

      pid = Process.spawn(
        { 'ENABLE_SIDEKIQ_CLUSTER' => '1',
          'SIDEKIQ_WORKER_ID' => worker_id.to_s },
        *cmd,
        pgroup: true,
        err: $stderr,
        out: $stdout
      )

      wait_async(pid)

      pid
    end

    def self.count_by_queue(queues)
      queues.each_with_object(Hash.new(0)) { |element, hash| hash[element] += 1 }
    end

    def self.proc_details(counts)
      counts.map do |queue, count|
        if count == 1
          queue
        else
          "#{queue} (#{count})"
        end
      end.join(', ')
    end

    def self.concurrency(queues, min_concurrency, max_concurrency)
      concurrency_from_queues = queues.length + 1
      max = max_concurrency.positive? ? max_concurrency : concurrency_from_queues
      min = [min_concurrency, max].min

      concurrency_from_queues.clamp(min, max)
    end

    # Waits for the given process to complete using a separate thread.
    def self.wait_async(pid)
      Thread.new do
        Process.wait(pid) rescue Errno::ECHILD
      end
    end

    # Returns true if all the processes are alive.
    def self.all_alive?(pids)
      pids.each do |pid|
        return false unless process_alive?(pid)
      end

      true
    end

    def self.any_alive?(pids)
      pids_alive(pids).any?
    end

    def self.pids_alive(pids)
      pids.select { |pid| process_alive?(pid) }
    end

    def self.process_alive?(pid)
      # Signal 0 tests whether the process exists and we have access to send
      # signals, but is otherwise a noop (doesn't actually send a signal to the
      # process).
      signal(pid, 0)
    end

    def self.write_pid(path)
      File.open(path, 'w') do |handle|
        handle.write(Process.pid.to_s)
      end
    end
  end
end
# frozen_string_literal: true

require 'optparse'
require 'logger'
require 'time'

module Gitlab
  module SidekiqCluster
    class CLI
      CHECK_TERMINATE_INTERVAL_SECONDS = 1

      # How long to wait in total when asking for a clean termination.
      # Sidekiq's default self-terminate timeout is 25s.
      TERMINATE_TIMEOUT_SECONDS = 30

      CommandError = Class.new(StandardError)

      def initialize(log_output = STDERR)
        require_relative '../../../lib/gitlab/sidekiq_logging/json_formatter'

        # As recommended by https://github.com/mperham/sidekiq/wiki/Advanced-Options#concurrency
        @max_concurrency = 50
        @min_concurrency = 0
        @environment = ENV['RAILS_ENV'] || 'development'
        @pid = nil
        @interval = 5
        @alive = true
        @processes = []
        @logger = Logger.new(log_output)
        @logger.formatter = ::Gitlab::SidekiqLogging::JSONFormatter.new
        @rails_path = Dir.pwd
        @dryrun = false
      end

      def run(argv = ARGV)
        if argv.empty?
          raise CommandError,
            'You must specify at least one queue to start a worker for'
        end

        option_parser.parse!(argv)

        all_queues = SidekiqConfig::CliMethods.all_queues(@rails_path)
        queue_names = SidekiqConfig::CliMethods.worker_queues(@rails_path)

        queue_groups = argv.map do |queues|
          next queue_names if queues == '*'

          # When using the experimental queue query syntax, we treat
          # each queue group as a worker attribute query, and resolve
          # the queues for the queue group using this query.
          if @experimental_queue_selector
            SidekiqConfig::CliMethods.query_workers(queues, all_queues)
          else
            SidekiqConfig::CliMethods.expand_queues(queues.split(','), queue_names)
          end
        end

        if @negate_queues
          queue_groups.map! { |queues| queue_names - queues }
        end

        if queue_groups.all?(&:empty?)
          raise CommandError,
            'No queues found, you must select at least one queue'
        end

        @logger.info("Starting cluster with #{queue_groups.length} processes")

        @processes = SidekiqCluster.start(
          queue_groups,
          env: @environment,
          directory: @rails_path,
          max_concurrency: @max_concurrency,
          min_concurrency: @min_concurrency,
          dryrun: @dryrun
        )

        return if @dryrun

        write_pid
        trap_signals
        start_loop
      end

      def write_pid
        SidekiqCluster.write_pid(@pid) if @pid
      end

      def monotonic_time
        Process.clock_gettime(Process::CLOCK_MONOTONIC, :float_second)
      end

      def continue_waiting?(deadline)
        SidekiqCluster.any_alive?(@processes) && monotonic_time < deadline
      end

      def hard_stop_stuck_pids
        SidekiqCluster.signal_processes(SidekiqCluster.pids_alive(@processes), :KILL)
      end

      def wait_for_termination
        deadline = monotonic_time + TERMINATE_TIMEOUT_SECONDS
        sleep(CHECK_TERMINATE_INTERVAL_SECONDS) while continue_waiting?(deadline)

        hard_stop_stuck_pids
      end

      def trap_signals
        SidekiqCluster.trap_terminate do |signal|
          @alive = false
          SidekiqCluster.signal_processes(@processes, signal)
          wait_for_termination
        end

        SidekiqCluster.trap_forward do |signal|
          SidekiqCluster.signal_processes(@processes, signal)
        end
      end

      def start_loop
        while @alive
          sleep(@interval)

          unless SidekiqCluster.all_alive?(@processes)
            # If a child process died we'll just terminate the whole cluster.
            # It's up to runit and such to then restart the cluster.
            @logger.info('A worker terminated, shutting down the cluster')

            SidekiqCluster.signal_processes(@processes, :TERM)
            break
          end
        end
      end

      def option_parser
        OptionParser.new do |opt|
          opt.banner = "#{File.basename(__FILE__)} [QUEUE,QUEUE] [QUEUE] ... [OPTIONS]"

          opt.separator "\nOptions:\n"

          opt.on('-h', '--help', 'Shows this help message') do
            abort opt.to_s
          end

          opt.on('-m', '--max-concurrency INT', 'Maximum threads to use with Sidekiq (default: 50, 0 to disable)') do |int|
            @max_concurrency = int.to_i
          end

          opt.on('--min-concurrency INT', 'Minimum threads to use with Sidekiq (default: 0)') do |int|
            @min_concurrency = int.to_i
          end

          opt.on('-e', '--environment ENV', 'The application environment') do |env|
            @environment = env
          end

          opt.on('-P', '--pidfile PATH', 'Path to the PID file') do |pid|
            @pid = pid
          end

          opt.on('-r', '--require PATH', 'Location of the Rails application') do |path|
            @rails_path = path
          end

          opt.on('--experimental-queue-selector', 'EXPERIMENTAL: Run workers based on the provided selector') do |experimental_queue_selector|
            @experimental_queue_selector = experimental_queue_selector
          end

          opt.on('-n', '--negate', 'Run workers for all queues in sidekiq_queues.yml except the given ones') do
            @negate_queues = true
          end

          opt.on('-i', '--interval INT', 'The number of seconds to wait between worker checks') do |int|
            @interval = int.to_i
          end

          opt.on('-d', '--dryrun', 'Print commands that would be run without this flag, and quit') do
            @dryrun = true
          end
        end
      end
    end
  end
end
# frozen_string_literal: true

require 'spec_helper'

describe 'bin/sidekiq-cluster' do
  using RSpec::Parameterized::TableSyntax

  context 'when selecting some queues and excluding others' do
    where(:args, :included, :excluded) do
      %w[--negate cronjob]                                    | '-qdefault,1' | '-qcronjob,1'
      %w[--experimental-queue-selector resource_boundary=cpu] | '-qupdate_merge_requests,1' | '-qdefault,1'
    end

    with_them do
      it 'runs successfully', :aggregate_failures do
        cmd = %w[bin/sidekiq-cluster --dryrun] + args

        output, status = Gitlab::Popen.popen(cmd, Rails.root.to_s)

        expect(status).to be(0)
        expect(output).to include('"bundle", "exec", "sidekiq"')
        expect(output).to include(included)
        expect(output).not_to include(excluded)
      end
    end
  end

  context 'when selecting all queues' do
    [
      %w[*],
      %w[--experimental-queue-selector *]
    ].each do |args|
      it "runs successfully with `#{args}`", :aggregate_failures do
        cmd = %w[bin/sidekiq-cluster --dryrun] + args

        output, status = Gitlab::Popen.popen(cmd, Rails.root.to_s)

        expect(status).to be(0)
        expect(output).to include('"bundle", "exec", "sidekiq"')
        expect(output).to include('-qdefault,1')
        expect(output).to include('-qcronjob:ci_archive_traces_cron,1')
      end
    end
  end
end
@@ -3,6 +3,10 @@
require 'spec_helper'
 
describe ConfirmEmailWarning do
+  before do
+    stub_feature_flags(soft_email_confirmation: true)
+  end
+
  controller(ApplicationController) do
    # `described_class` is not available in this context
    include ConfirmEmailWarning
@@ -79,29 +79,31 @@ describe RegistrationsController do
      stub_application_setting(send_user_confirmation_email: true)
    end

-    context 'when a grace period is active for confirming the email address' do
+    context 'when soft email confirmation is not enabled' do
      before do
-        allow(User).to receive(:allow_unconfirmed_access_for).and_return 2.days
+        stub_feature_flags(soft_email_confirmation: false)
+        allow(User).to receive(:allow_unconfirmed_access_for).and_return 0
      end

-      it 'sends a confirmation email and redirects to the dashboard' do
+      it 'does not authenticate the user and sends a confirmation email' do
        post(:create, params: user_params)

        expect(ActionMailer::Base.deliveries.last.to.first).to eq(user_params[:user][:email])
-        expect(response).to redirect_to(dashboard_projects_path)
+        expect(subject.current_user).to be_nil
      end
    end

-    context 'when no grace period is active for confirming the email address' do
+    context 'when soft email confirmation is enabled' do
      before do
-        allow(User).to receive(:allow_unconfirmed_access_for).and_return 0
+        stub_feature_flags(soft_email_confirmation: true)
+        allow(User).to receive(:allow_unconfirmed_access_for).and_return 2.days
      end

-      it 'sends a confirmation email and redirects to the almost there page' do
+      it 'authenticates the user and sends a confirmation email' do
        post(:create, params: user_params)

        expect(ActionMailer::Base.deliveries.last.to.first).to eq(user_params[:user][:email])
-        expect(response).to redirect_to(users_almost_there_path)
+        expect(response).to redirect_to(dashboard_projects_path)
      end
    end
  end
@@ -135,7 +135,9 @@ describe 'Invites' do
 
      expect(current_path).to eq(dashboard_projects_path)
      expect(page).to have_content(project.full_name)
+      visit group_path(group)
+      expect(page).to have_content(group.full_name)
    end
 
@@ -153,6 +155,25 @@ describe 'Invites' do
  context 'email confirmation enabled' do
    let(:send_email_confirmation) { true }

+    context 'when soft email confirmation is not enabled' do
+      before do
+        allow(User).to receive(:allow_unconfirmed_access_for).and_return 0
+      end
+
+      it 'signs up and redirects to root page with all the project/groups invitation automatically accepted' do
+        fill_in_sign_up_form(new_user)
+        confirm_email(new_user)
+        fill_in_sign_in_form(new_user)
+
+        expect(current_path).to eq(root_path)
+        expect(page).to have_content(project.full_name)
+        visit group_path(group)
+        expect(page).to have_content(group.full_name)
+      end
+    end
+
+    context 'when soft email confirmation is enabled' do
+      before do
+        allow(User).to receive(:allow_unconfirmed_access_for).and_return 2.days
@@ -164,7 +185,9 @@ describe 'Invites' do
 
        expect(current_path).to eq(root_path)
        expect(page).to have_content(project.full_name)
+        visit group_path(group)
+        expect(page).to have_content(group.full_name)
      end
    end
@@ -180,14 +203,32 @@ describe 'Invites' do
    context 'the user sign-up using a different email address' do
      let(:invite_email) { build_stubbed(:user).email }

-      before do
-        allow(User).to receive(:allow_unconfirmed_access_for).and_return 2.days
-      end
-
-      it 'signs up and redirects to the invitation page' do
-        fill_in_sign_up_form(new_user)
-
-        expect(current_path).to eq(invite_path(group_invite.raw_invite_token))
-      end
+      context 'when soft email confirmation is not enabled' do
+        before do
+          stub_feature_flags(soft_email_confirmation: false)
+          allow(User).to receive(:allow_unconfirmed_access_for).and_return 0
+        end
+
+        it 'signs up and redirects to the invitation page' do
+          fill_in_sign_up_form(new_user)
+          confirm_email(new_user)
+          fill_in_sign_in_form(new_user)
+
+          expect(current_path).to eq(invite_path(group_invite.raw_invite_token))
+        end
+      end
+
+      context 'when soft email confirmation is enabled' do
+        before do
+          stub_feature_flags(soft_email_confirmation: true)
+          allow(User).to receive(:allow_unconfirmed_access_for).and_return 2.days
+        end
+
+        it 'signs up and redirects to the invitation page' do
+          fill_in_sign_up_form(new_user)
+
+          expect(current_path).to eq(invite_path(group_invite.raw_invite_token))
+        end
+      end
    end
  end
@@ -61,10 +61,6 @@ describe 'Merge request > User sees diff', :js do
    let(:merge_request) { create(:merge_request_with_diffs, source_project: forked_project, target_project: project, author: author_user) }
    let(:changelog_id) { Digest::SHA1.hexdigest("CHANGELOG") }

-    before do
-      forked_project.repository.after_import
-    end
-
    context 'as author' do
      it 'shows direct edit link', :sidekiq_might_not_need_inline do
        sign_in(author_user)
@@ -797,6 +797,7 @@ describe 'Login' do
 
      before do
        stub_application_setting(send_user_confirmation_email: true)
+        stub_feature_flags(soft_email_confirmation: true)
        allow(User).to receive(:allow_unconfirmed_access_for).and_return grace_period
      end
 
@@ -129,29 +129,63 @@ shared_examples 'Signup' do
      stub_application_setting(send_user_confirmation_email: true)
    end

-    it 'creates the user account and sends a confirmation email' do
-      visit new_user_registration_path
-
-      fill_in 'new_user_username', with: new_user.username
-      fill_in 'new_user_email', with: new_user.email
-
-      if Gitlab::Experimentation.enabled?(:signup_flow)
-        fill_in 'new_user_first_name', with: new_user.first_name
-        fill_in 'new_user_last_name', with: new_user.last_name
-      else
-        fill_in 'new_user_name', with: new_user.name
-        fill_in 'new_user_email_confirmation', with: new_user.email
-      end
-
-      fill_in 'new_user_password', with: new_user.password
-
-      expect { click_button 'Register' }.to change { User.count }.by(1)
-
-      if Gitlab::Experimentation.enabled?(:signup_flow)
-        expect(current_path).to eq users_sign_up_welcome_path
-      else
-        expect(current_path).to eq dashboard_projects_path
-        expect(page).to have_content("Please check your email (#{new_user.email}) to verify that you own this address and unlock the power of CI/CD.")
-      end
-    end
+    context 'when soft email confirmation is not enabled' do
+      before do
+        stub_feature_flags(soft_email_confirmation: false)
+      end
+
+      it 'creates the user account and sends a confirmation email' do
+        visit new_user_registration_path
+
+        fill_in 'new_user_username', with: new_user.username
+        fill_in 'new_user_email', with: new_user.email
+
+        if Gitlab::Experimentation.enabled?(:signup_flow)
+          fill_in 'new_user_first_name', with: new_user.first_name
+          fill_in 'new_user_last_name', with: new_user.last_name
+        else
+          fill_in 'new_user_name', with: new_user.name
+          fill_in 'new_user_email_confirmation', with: new_user.email
+        end
+
+        fill_in 'new_user_password', with: new_user.password
+
+        expect { click_button 'Register' }.to change { User.count }.by(1)
+        expect(current_path).to eq users_almost_there_path
+        expect(page).to have_content('Please check your email to confirm your account')
+      end
+    end
+
+    context 'when soft email confirmation is enabled' do
+      before do
+        stub_feature_flags(soft_email_confirmation: true)
+      end
+
+      it 'creates the user account and sends a confirmation email' do
+        visit new_user_registration_path
+
+        fill_in 'new_user_username', with: new_user.username
+        fill_in 'new_user_email', with: new_user.email
+
+        if Gitlab::Experimentation.enabled?(:signup_flow)
+          fill_in 'new_user_first_name', with: new_user.first_name
+          fill_in 'new_user_last_name', with: new_user.last_name
+        else
+          fill_in 'new_user_name', with: new_user.name
+          fill_in 'new_user_email_confirmation', with: new_user.email
+        end
+
+        fill_in 'new_user_password', with: new_user.password
+
+        expect { click_button 'Register' }.to change { User.count }.by(1)
+
+        if Gitlab::Experimentation.enabled?(:signup_flow)
+          expect(current_path).to eq users_sign_up_welcome_path
+        else
+          expect(current_path).to eq dashboard_projects_path
+          expect(page).to have_content("Please check your email (#{new_user.email}) to verify that you own this address and unlock the power of CI/CD.")
+        end
+      end
+    end
  end
# frozen_string_literal: true

require 'fast_spec_helper'
require 'rspec-parameterized'

describe Gitlab::SidekiqCluster::CLI do
  let(:cli) { described_class.new('/dev/null') }
  let(:default_options) do
    { env: 'test', directory: Dir.pwd, max_concurrency: 50, min_concurrency: 0, dryrun: false }
  end

  before do
    stub_env('RAILS_ENV', 'test')
  end

  describe '#run' do
    context 'without any arguments' do
      it 'raises CommandError' do
        expect { cli.run([]) }.to raise_error(described_class::CommandError)
      end
    end

    context 'with arguments' do
      before do
        allow(cli).to receive(:write_pid)
        allow(cli).to receive(:trap_signals)
        allow(cli).to receive(:start_loop)
      end

      it 'starts the Sidekiq workers' do
        expect(Gitlab::SidekiqCluster).to receive(:start)
          .with([['foo']], default_options)
          .and_return([])

        cli.run(%w(foo))
      end

      it 'allows the special * selector' do
        expect(Gitlab::SidekiqCluster)
          .to receive(:start).with(
            [Gitlab::SidekiqConfig::CliMethods.worker_queues],
            default_options
          )

        cli.run(%w(*))
      end

      context 'with --negate flag' do
        it 'starts Sidekiq workers for all queues in all_queues.yml except the ones in argv' do
          expect(Gitlab::SidekiqConfig::CliMethods).to receive(:worker_queues).and_return(['baz'])
          expect(Gitlab::SidekiqCluster).to receive(:start)
            .with([['baz']], default_options)
            .and_return([])

          cli.run(%w(foo -n))
        end
      end

      context 'with --max-concurrency flag' do
        it 'starts Sidekiq workers for specified queues with a max concurrency' do
          expect(Gitlab::SidekiqConfig::CliMethods).to receive(:worker_queues).and_return(%w(foo bar baz))
          expect(Gitlab::SidekiqCluster).to receive(:start)
            .with([%w(foo bar baz), %w(solo)], default_options.merge(max_concurrency: 2))
            .and_return([])

          cli.run(%w(foo,bar,baz solo -m 2))
        end
      end

      context 'with --min-concurrency flag' do
        it 'starts Sidekiq workers for specified queues with a min concurrency' do
          expect(Gitlab::SidekiqConfig::CliMethods).to receive(:worker_queues).and_return(%w(foo bar baz))
          expect(Gitlab::SidekiqCluster).to receive(:start)
            .with([%w(foo bar baz), %w(solo)], default_options.merge(min_concurrency: 2))
            .and_return([])

          cli.run(%w(foo,bar,baz solo --min-concurrency 2))
        end
      end

      context 'queue namespace expansion' do
        it 'starts Sidekiq workers for all queues in all_queues.yml with a namespace in argv' do
          expect(Gitlab::SidekiqConfig::CliMethods).to receive(:worker_queues).and_return(['cronjob:foo', 'cronjob:bar'])
          expect(Gitlab::SidekiqCluster).to receive(:start)
            .with([['cronjob', 'cronjob:foo', 'cronjob:bar']], default_options)
            .and_return([])

          cli.run(%w(cronjob))
        end
      end

      context 'with --experimental-queue-selector' do
        where do
          {
            'memory-bound queues' => {
              query: 'resource_boundary=memory',
              included_queues: %w(project_export),
              excluded_queues: %w(merge)
            },
            'memory- or CPU-bound queues' => {
              query: 'resource_boundary=memory,cpu',
              included_queues: %w(auto_merge:auto_merge_process project_export),
              excluded_queues: %w(merge)
            },
            'high urgency CI queues' => {
              query: 'feature_category=continuous_integration&urgency=high',
              included_queues: %w(pipeline_cache:expire_job_cache pipeline_cache:expire_pipeline_cache),
              excluded_queues: %w(merge)
            },
            'CPU-bound high urgency CI queues' => {
              query: 'feature_category=continuous_integration&urgency=high&resource_boundary=cpu',
              included_queues: %w(pipeline_cache:expire_pipeline_cache),
              excluded_queues: %w(pipeline_cache:expire_job_cache merge)
            },
            'CPU-bound high urgency non-CI queues' => {
              query: 'feature_category!=continuous_integration&urgency=high&resource_boundary=cpu',
              included_queues: %w(new_issue),
              excluded_queues: %w(pipeline_cache:expire_pipeline_cache)
            },
            'CI and SCM queues' => {
              query: 'feature_category=continuous_integration|feature_category=source_code_management',
              included_queues: %w(pipeline_cache:expire_job_cache merge),
              excluded_queues: %w(mailers)
            }
          }
        end

        with_them do
          it 'expands queues by attributes' do
            expect(Gitlab::SidekiqCluster).to receive(:start) do |queues, opts|
              expect(opts).to eq(default_options)
              expect(queues.first).to include(*included_queues)
              expect(queues.first).not_to include(*excluded_queues)

              []
            end

            cli.run(%W(--experimental-queue-selector #{query}))
          end

          it 'works when negated' do
            expect(Gitlab::SidekiqCluster).to receive(:start) do |queues, opts|
              expect(opts).to eq(default_options)
              expect(queues.first).not_to include(*included_queues)
              expect(queues.first).to include(*excluded_queues)

              []
            end

            cli.run(%W(--negate --experimental-queue-selector #{query}))
          end
        end

        it 'expands multiple queue groups correctly' do
          expect(Gitlab::SidekiqCluster)
            .to receive(:start)
            .with([['chat_notification'], ['project_export']], default_options)
            .and_return([])

          cli.run(%w(--experimental-queue-selector feature_category=chatops&urgency=high resource_boundary=memory&feature_category=importers))
        end

        it 'allows the special * selector' do
          expect(Gitlab::SidekiqCluster)
            .to receive(:start).with(
              [Gitlab::SidekiqConfig::CliMethods.worker_queues],
              default_options
            )

          cli.run(%w(--experimental-queue-selector *))
        end

        it 'errors when the selector matches no queues' do
          expect(Gitlab::SidekiqCluster).not_to receive(:start)

          expect { cli.run(%w(--experimental-queue-selector has_external_dependencies=true&has_external_dependencies=false)) }
            .to raise_error(described_class::CommandError)
        end

        it 'errors on an invalid query' do
          expect(Gitlab::SidekiqCluster).not_to receive(:start)

          expect { cli.run(%w(--experimental-queue-selector unknown_field=chatops)) }
            .to raise_error(Gitlab::SidekiqConfig::CliMethods::QueryError)
        end
      end
    end
  end

  describe '#write_pid' do
    context 'when a PID is specified' do
      it 'writes the PID to a file' do
        expect(Gitlab::SidekiqCluster).to receive(:write_pid).with('/dev/null')

        cli.option_parser.parse!(%w(-P /dev/null))
        cli.write_pid
      end
    end

    context 'when no PID is specified' do
      it 'does not write a PID' do
        expect(Gitlab::SidekiqCluster).not_to receive(:write_pid)

        cli.write_pid
      end
    end
  end

  describe '#wait_for_termination' do
    it 'waits for termination of all sub-processes and succeeds after 3 checks' do
      expect(Gitlab::SidekiqCluster).to receive(:any_alive?)
        .with(an_instance_of(Array)).and_return(true, true, true, false)

      expect(Gitlab::SidekiqCluster).to receive(:pids_alive)
        .with([]).and_return([])

      expect(Gitlab::SidekiqCluster).to receive(:signal_processes)
        .with([], :KILL)

      stub_const("Gitlab::SidekiqCluster::CLI::CHECK_TERMINATE_INTERVAL_SECONDS", 0.1)
      stub_const("Gitlab::SidekiqCluster::CLI::TERMINATE_TIMEOUT_SECONDS", 1)

      cli.wait_for_termination
    end

    context 'with hanging workers' do
      before do
        expect(cli).to receive(:write_pid)
        expect(cli).to receive(:trap_signals)
        expect(cli).to receive(:start_loop)
      end

      it 'hard kills workers after timeout expires' do
        worker_pids = [101, 102, 103]
        expect(Gitlab::SidekiqCluster).to receive(:start)
          .with([['foo']], default_options)
          .and_return(worker_pids)

        expect(Gitlab::SidekiqCluster).to receive(:any_alive?)
          .with(worker_pids).and_return(true).at_least(10).times

        expect(Gitlab::SidekiqCluster).to receive(:pids_alive)
          .with(worker_pids).and_return([102])

        expect(Gitlab::SidekiqCluster).to receive(:signal_processes)
          .with([102], :KILL)

        cli.run(%w(foo))

        stub_const("Gitlab::SidekiqCluster::CLI::CHECK_TERMINATE_INTERVAL_SECONDS", 0.1)
        stub_const("Gitlab::SidekiqCluster::CLI::TERMINATE_TIMEOUT_SECONDS", 1)

        cli.wait_for_termination
      end
    end
  end

  describe '#trap_signals' do
    it 'traps the termination and forwarding signals' do
      expect(Gitlab::SidekiqCluster).to receive(:trap_terminate)
      expect(Gitlab::SidekiqCluster).to receive(:trap_forward)

      cli.trap_signals
    end
  end

  describe '#start_loop' do
    it 'runs until one of the processes has been terminated' do
      allow(cli).to receive(:sleep).with(a_kind_of(Numeric))

      expect(Gitlab::SidekiqCluster).to receive(:all_alive?)
        .with(an_instance_of(Array)).and_return(false)

      expect(Gitlab::SidekiqCluster).to receive(:signal_processes)
        .with(an_instance_of(Array), :TERM)

      cli.start_loop
    end
  end
end
# frozen_string_literal: true

require 'fast_spec_helper'
require 'rspec-parameterized'

describe Gitlab::SidekiqCluster do
  describe '.trap_signals' do
    it 'traps the given signals' do
      expect(described_class).to receive(:trap).ordered.with(:INT)
      expect(described_class).to receive(:trap).ordered.with(:HUP)

      described_class.trap_signals(%i(INT HUP))
    end
  end

  describe '.trap_terminate' do
    it 'traps the termination signals' do
      expect(described_class).to receive(:trap_signals)
        .with(described_class::TERMINATE_SIGNALS)

      described_class.trap_terminate { }
    end
  end

  describe '.trap_forward' do
    it 'traps the signals to forward' do
      expect(described_class).to receive(:trap_signals)
        .with(described_class::FORWARD_SIGNALS)

      described_class.trap_forward { }
    end
  end

  describe '.signal' do
    it 'sends a signal to the given process' do
      allow(Process).to receive(:kill).with(:INT, 4)

      expect(described_class.signal(4, :INT)).to eq(true)
    end

    it 'returns false when the process does not exist' do
      allow(Process).to receive(:kill).with(:INT, 4).and_raise(Errno::ESRCH)

      expect(described_class.signal(4, :INT)).to eq(false)
    end
  end

  describe '.signal_processes' do
    it 'sends a signal to every process' do
      expect(described_class).to receive(:signal).with(1, :INT)

      described_class.signal_processes([1], :INT)
    end
  end

  describe '.start' do
    it 'starts Sidekiq with the given queues, environment and options' do
      expected_options = {
        env: :production,
        directory: 'foo/bar',
        max_concurrency: 20,
        min_concurrency: 10,
        dryrun: true
      }

      expect(described_class).to receive(:start_sidekiq).ordered.with(%w(foo), expected_options.merge(worker_id: 0))
      expect(described_class).to receive(:start_sidekiq).ordered.with(%w(bar baz), expected_options.merge(worker_id: 1))

      described_class.start([%w(foo), %w(bar baz)], env: :production, directory: 'foo/bar', max_concurrency: 20, min_concurrency: 10, dryrun: true)
    end

    it 'starts Sidekiq with the given queues and sensible default options' do
      expected_options = {
        env: :development,
        directory: an_instance_of(String),
        max_concurrency: 50,
        min_concurrency: 0,
        worker_id: an_instance_of(Integer),
        dryrun: false
      }

      expect(described_class).to receive(:start_sidekiq).ordered.with(%w(foo bar baz), expected_options)
      expect(described_class).to receive(:start_sidekiq).ordered.with(%w(solo), expected_options)

      described_class.start([%w(foo bar baz), %w(solo)])
    end
  end

  describe '.start_sidekiq' do
    let(:first_worker_id) { 0 }
    let(:options) do
      { env: :production, directory: 'foo/bar', max_concurrency: 20, min_concurrency: 0, worker_id: first_worker_id, dryrun: false }
    end

    let(:env) { { "ENABLE_SIDEKIQ_CLUSTER" => "1", "SIDEKIQ_WORKER_ID" => first_worker_id.to_s } }
    let(:args) { ['bundle', 'exec', 'sidekiq', anything, '-eproduction', *([anything] * 5)] }

    it 'starts a Sidekiq process' do
      allow(Process).to receive(:spawn).and_return(1)

      expect(described_class).to receive(:wait_async).with(1)
      expect(described_class.start_sidekiq(%w(foo), options)).to eq(1)
    end

    it 'handles duplicate queue names' do
      allow(Process)
        .to receive(:spawn)
        .with(env, *args, anything)
        .and_return(1)

      expect(described_class).to receive(:wait_async).with(1)
      expect(described_class.start_sidekiq(%w(foo foo bar baz), options)).to eq(1)
    end

    it 'runs the sidekiq process in a new process group' do
      expect(Process)
        .to receive(:spawn)
        .with(anything, *args, a_hash_including(pgroup: true))
        .and_return(1)

      allow(described_class).to receive(:wait_async)
      expect(described_class.start_sidekiq(%w(foo bar baz), options)).to eq(1)
    end
  end

  describe '.concurrency' do
    using RSpec::Parameterized::TableSyntax

    where(:queue_count, :min, :max, :expected) do
      2 | 0 | 0 | 3 # No min or max specified
      2 | 0 | 9 | 3 # No min specified, value < max
      2 | 1 | 4 | 3 # Value between min and max
      2 | 4 | 5 | 4 # Value below range
      5 | 2 | 3 | 3 # Value above range
      2 | 1 | 1 | 1 # Value above explicit setting (min == max)
      0 | 3 | 3 | 3 # Value below explicit setting (min == max)
      1 | 4 | 3 | 3 # Min greater than max
    end

    with_them do
      let(:queues) { Array.new(queue_count) }

      it { expect(described_class.concurrency(queues, min, max)).to eq(expected) }
    end
  end

  describe '.wait_async' do
    it 'waits for a process in a separate thread' do
      thread = described_class.wait_async(Process.spawn('true'))

      # Upon success Process.wait just returns the PID.
      expect(thread.value).to be_a_kind_of(Numeric)
    end
  end

  # In the *_alive? checks below, we sometimes check negative PIDs as a simple
  # way to be sure the PIDs are definitely for non-existent processes.
  # Note that -1 is special, and sends the signal to every process we have
  # permission for, so we use -2, -3, etc.
  describe '.all_alive?' do
    it 'returns true if all processes are alive' do
      processes = [Process.pid]

      expect(described_class.all_alive?(processes)).to eq(true)
    end

    it 'returns false when a process is not alive' do
      processes = [-2]

      expect(described_class.all_alive?(processes)).to eq(false)
    end
  end

  describe '.any_alive?' do
    it 'returns true if at least one process is alive' do
      processes = [Process.pid, -2]

      expect(described_class.any_alive?(processes)).to eq(true)
    end

    it 'returns false when all processes are dead' do
      processes = [-2, -3]

      expect(described_class.any_alive?(processes)).to eq(false)
    end
  end

  describe '.write_pid' do
    it 'writes the PID of the current process to the given file' do
      handle = double(:handle)

      allow(File).to receive(:open).with('/dev/null', 'w').and_yield(handle)

      expect(handle).to receive(:write).with(Process.pid.to_s)

      described_class.write_pid('/dev/null')
    end
  end
end
@@ -444,61 +444,6 @@ eos
    it { is_expected.to respond_to(:id) }
  end

-  describe '#closes_issues' do
-    let(:issue) { create :issue, project: project }
-    let(:other_project) { create(:project, :public) }
-    let(:other_issue) { create :issue, project: other_project }
-    let(:committer) { create :user }
-
-    before do
-      project.add_developer(committer)
-      other_project.add_developer(committer)
-    end
-
-    it 'detects issues that this commit is marked as closing' do
-      ext_ref = "#{other_project.full_path}##{other_issue.iid}"
-
-      allow(commit).to receive_messages(
-        safe_message: "Fixes ##{issue.iid} and #{ext_ref}",
-        committer_email: committer.email
-      )
-
-      expect(commit.closes_issues).to include(issue)
-      expect(commit.closes_issues).to include(other_issue)
-    end
-
-    it 'ignores referenced issues when auto-close is disabled' do
-      project.update!(autoclose_referenced_issues: false)
-
-      allow(commit).to receive_messages(
-        safe_message: "Fixes ##{issue.iid}",
-        committer_email: committer.email
-      )
-
-      expect(commit.closes_issues).to be_empty
-    end
-
-    context 'with personal snippet' do
-      let(:commit) { personal_snippet.commit }
-
-      it 'does not call Gitlab::ClosingIssueExtractor' do
-        expect(Gitlab::ClosingIssueExtractor).not_to receive(:new)
-
-        commit.closes_issues
-      end
-    end
-
-    context 'with project snippet' do
-      let(:commit) { project_snippet.commit }
-
-      it 'does not call Gitlab::ClosingIssueExtractor' do
-        expect(Gitlab::ClosingIssueExtractor).not_to receive(:new)
-
-        commit.closes_issues
-      end
-    end
-  end
-
  it_behaves_like 'a mentionable' do
    subject { create(:project, :repository).commit }
 
@@ -775,32 +720,6 @@ eos
end
end
 
-  describe '#merge_requests' do
-    let!(:project) { create(:project, :repository) }
-    let!(:merge_request1) { create(:merge_request, source_project: project, source_branch: 'master', target_branch: 'feature') }
-    let!(:merge_request2) { create(:merge_request, source_project: project, source_branch: 'merged-target', target_branch: 'feature') }
-
-    let(:commit1) { merge_request1.merge_request_diff.commits.last }
-    let(:commit2) { merge_request1.merge_request_diff.commits.first }
-
-    it 'returns merge_requests that introduced that commit' do
-      expect(commit1.merge_requests).to contain_exactly(merge_request1, merge_request2)
-      expect(commit2.merge_requests).to contain_exactly(merge_request1)
-    end
-
-    context 'with personal snippet' do
-      it 'returns empty relation' do
-        expect(personal_snippet.repository.commit.merge_requests).to eq MergeRequest.none
-      end
-    end
-
-    context 'with project snippet' do
-      it 'returns empty relation' do
-        expect(project_snippet.project).not_to receive(:merge_requests)
-        expect(project_snippet.repository.commit.merge_requests).to eq MergeRequest.none
-      end
-    end
-  end
-
  describe 'signed commits' do
    let(:gpg_signed_commit) { project.commit_by(oid: '0b4bc9a49b562e85de7cc9e834518ea6828729b9') }
    let(:x509_signed_commit) { project.commit_by(oid: '189a6c924013fc3fe40d6f1ec1dc20214183bc97') }
@@ -23,8 +23,7 @@ describe ProjectImportState, type: :model do
 
      # Works around https://github.com/rspec/rspec-mocks/issues/910
      allow(Project).to receive(:find).with(project.id).and_return(project)

-      expect(project.repository).to receive(:after_import).and_call_original
-      expect(project.wiki.repository).to receive(:after_import).and_call_original
+      expect(project).to receive(:after_import).and_call_original
    end
 
it 'imports a project', :sidekiq_might_not_need_inline do
@@ -4618,13 +4618,14 @@ describe Project do
    let(:import_state) { create(:import_state, project: project) }

    it 'runs the correct hooks' do
-      expect(project.repository).to receive(:after_import)
-      expect(project.wiki.repository).to receive(:after_import)
+      expect(project.repository).to receive(:expire_content_cache)
+      expect(project.wiki.repository).to receive(:expire_content_cache)
      expect(import_state).to receive(:finish)
      expect(project).to receive(:update_project_counter_caches)
      expect(project).to receive(:after_create_default_branch)
      expect(project).to receive(:refresh_markdown_cache!)
      expect(InternalId).to receive(:flush_records!).with(project: project)
+      expect(DetectRepositoryLanguagesWorker).to receive(:perform_async).with(project.id)

      project.after_import
    end
@@ -1929,32 +1929,6 @@ describe Repository do
end
end
 
-  describe '#after_import' do
-    subject { repository.after_import }
-
-    it 'flushes and builds the cache' do
-      expect(repository).to receive(:expire_content_cache)
-
-      subject
-    end
-
-    it 'calls DetectRepositoryLanguagesWorker' do
-      expect(DetectRepositoryLanguagesWorker).to receive(:perform_async)
-
-      subject
-    end
-
-    context 'with a wiki repository' do
-      let(:repository) { project.wiki.repository }
-
-      it 'does not call DetectRepositoryLanguagesWorker' do
-        expect(DetectRepositoryLanguagesWorker).not_to receive(:perform_async)
-
-        subject
-      end
-    end
-  end
-
  describe '#after_push_commit' do
    it 'expires statistics caches' do
      expect(repository).to receive(:expire_statistics_caches)