- Feb 19, 2020
GitLab Bot authored
- Jan 16, 2020
GitLab Bot authored
- Nov 29, 2019
GitLab Bot authored
- Nov 26, 2019
GitLab Bot authored
- Nov 20, 2019
GitLab Bot authored
- Sep 24, 2019
GitLab Bot authored
- Sep 18, 2019
GitLab Bot authored
- Sep 10, 2019
Qingyu Zhao authored
Move Gitlab::SidekiqMonitor to namespace Gitlab::SidekiqDaemon::Monitor
- Class name and file name change
- File path change to lib/gitlab/sidekiq_daemon/monitor.rb
- Update class usage/reference in other files, including documentation
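The new layout given by the commit message:

```ruby
# lib/gitlab/sidekiq_daemon/monitor.rb
module Gitlab
  module SidekiqDaemon
    class Monitor
      # previously Gitlab::SidekiqMonitor
    end
  end
end
```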
- Aug 21, 2019
Kamil Trzciński authored
Transform `CancelledError` into `JobRetry::Skip`
Kamil Trzciński authored
This makes:
- `Middleware::Monitor` very shallow, only requesting tracking of Sidekiq jobs,
- `SidekiqStatus::Monitor` responsible for maintaining a persistent connection to receive messages,
- `SidekiqStatus::Monitor` always use structured logging and instance variables
This adds a middleware to track the threads of all running jobs. It makes Sidekiq watch for Redis-delivered notifications, which makes it possible to send a notification that interrupts a running Sidekiq job. This does not take into account any native code, as `Thread.raise` only generates the exception once control returns to Ruby. Separate measures must be taken to interrupt gRPC, shell-outs, or anything else that escapes Ruby.
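A minimal sketch of that mechanism, with invented names (`TrackingMiddleware`, `cancel_job`); the middleware follows Sidekiq's server-middleware interface:

```ruby
class JobCancelError < StandardError; end

# Server middleware: register the running job's thread under its jid.
class TrackingMiddleware
  JOBS = {}
  MUTEX = Mutex.new

  def call(_worker, job, _queue)
    MUTEX.synchronize { JOBS[job['jid']] = Thread.current }
    yield
  ensure
    MUTEX.synchronize { JOBS.delete(job['jid']) }
  end
end

# Invoked when a cancel notification arrives (e.g. over Redis pub/sub).
# Thread#raise only fires once control returns to Ruby, so gRPC calls,
# shell-outs, etc. need separate interruption measures.
def cancel_job(jid)
  thread = TrackingMiddleware::MUTEX.synchronize { TrackingMiddleware::JOBS[jid] }
  thread&.raise(JobCancelError, "job #{jid} cancelled")
end
```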
- Aug 09, 2019
Stan Hu authored
This will help identify Sidekiq jobs that perform an excessive number of filesystem accesses. The timing data is stored in `RequestStore`, but this is only active within the middleware and is not directly accessible to the Sidekiq logger. However, the middleware can modify the job hash to pass this data along to the logger.
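A sketch of the trick, with invented key names:

```ruby
require 'request_store'

# The Sidekiq logger sees the job hash but not RequestStore, so the
# middleware copies the timing data across before it goes away.
class FilesystemTimingMiddleware
  def call(_worker, job, _queue)
    yield
  ensure
    # Populated elsewhere by filesystem instrumentation during the job.
    job['fs_access_count'] = RequestStore[:fs_access_count] if RequestStore.exist?(:fs_access_count)
  end
end
```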
- Jul 29, 2019
Ryan Cobb authored
This adds direct monitoring for Sidekiq metrics. This is done via Sidekiq middleware and a sampler that pulls from Sidekiq's API.
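Roughly what such a sampler does, as a sketch (the real change exports through GitLab's Prometheus integration rather than printing):

```ruby
require 'sidekiq/api'

# Pull queue-level statistics from Sidekiq's API on an interval.
def sample_sidekiq_metrics
  stats = Sidekiq::Stats.new
  {
    enqueued: stats.enqueued,
    retries: stats.retry_size,
    dead: stats.dead_size,
    processed: stats.processed,
    failed: stats.failed
  }
end

Thread.new do
  loop do
    puts sample_sidekiq_metrics.inspect # export as gauges in practice
    sleep 30
  end
end
```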
- Jul 10, 2019
Mayra Cabrera authored
Suggests using a JSON structured log instead. Related to https://gitlab.com/gitlab-org/gitlab-ce/issues/54102
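For illustration, what "JSON structured log" means in practice, on Ruby's stdlib Logger:

```ruby
require 'json'
require 'logger'
require 'time'

# One machine-parseable object per line instead of free-form text.
logger = Logger.new($stdout)
logger.formatter = proc do |severity, time, _progname, message|
  JSON.dump(severity: severity, time: time.utc.iso8601, message: message) << "\n"
end

logger.info('job done') # => {"severity":"INFO","time":"...","message":"job done"}
```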
- Jul 08, 2019
Robert Speicher authored
- Apr 25, 2019
Valery Sizov authored
Valery Sizov authored
- Mar 04, 2019
Nick Thomas authored
Sidekiq jobs frequently spawn long-lived child processes to do work. In some circumstances, these can be reparented to init when sidekiq is terminated, leading to duplication of work and strange concurrency problems. This commit changes sidekiq so that, if run as a process group leader, it will forward `INT` and `TERM` signals to the whole process group. If the memory killer is active, it will also use the process group when resorting to `kill -9` to shut down. These changes mean that a naive `kill <pid-of-sidekiq>` will now do the right thing, killing any child processes spawned by sidekiq, as long as the process supervisor placed it in its own process group. If sidekiq isn't a process group leader, this new code is skipped.
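A sketch of the signal handling described above, assuming the supervisor started sidekiq as a process group leader:

```ruby
# Only forward signals when we lead our own process group.
if Process.getpgrp == Process.pid
  %w[INT TERM].each do |signal|
    Signal.trap(signal) do
      # Restore the default handler first so the re-sent signal terminates
      # this process rather than re-entering this trap.
      Signal.trap(signal, 'DEFAULT')
      Process.kill(signal, 0) # pid 0 == every member of our process group
    end
  end
end
```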
- Feb 28, 2019
Nick Thomas authored
This reverts commit 00675311.
- Feb 25, 2019
Thong Kuah authored
This enables easier debugging in GDK
- Dec 17, 2018
Valery Sizov authored
- Dec 06, 2018
Kamil Trzciński authored
The Correlation ID is taken from the received X-Request-ID header, or generated when absent. It is then passed to all executed services (Sidekiq workers or Gitaly calls). The Correlation ID is logged in all structured logs as `correlation_id`.
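A sketch of the propagation path with invented class names (a Rack middleware plus a Sidekiq client middleware):

```ruby
require 'securerandom'

class CorrelationIdRackMiddleware
  def initialize(app)
    @app = app
  end

  def call(env)
    # Take the ID from X-Request-ID, or generate one.
    Thread.current[:correlation_id] = env['HTTP_X_REQUEST_ID'] || SecureRandom.uuid
    @app.call(env)
  end
end

class CorrelationIdClientMiddleware
  # Stamp every scheduled job so Sidekiq-side structured logs can emit
  # the value as `correlation_id`.
  def call(_worker_class, job, _queue, _redis_pool)
    job['correlation_id'] = Thread.current[:correlation_id]
    yield
  end
end
```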
Stan Hu authored
The GitLab Development Kit initialization failed because the Sidekiq initializer was attempting to look up a feature flag when the `features` table hadn't been created yet. Closes https://gitlab.com/gitlab-org/gitlab-ce/issues/54718
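The guard presumably looks something like this sketch (flag and method names invented):

```ruby
# Don't consult feature flags before the `features` table exists,
# e.g. during first-time GDK setup.
def monitor_enabled?
  return false unless ActiveRecord::Base.connection.data_source_exists?('features')

  Feature.enabled?(:sidekiq_monitor)
rescue ActiveRecord::NoDatabaseError
  false
end
```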
- Nov 22, 2018
Douwe Maan authored
- Oct 25, 2018
Andrew Newdigate authored
This allows us (and others) to test-drive Puma without it affecting all users. Puma can be enabled by setting the environment variable `EXPERIMENTAL_PUMA` to a non-empty value.
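In practice the check is just an environment-variable test; a trivial sketch (variable names invented):

```ruby
# Opt in by running e.g. `EXPERIMENTAL_PUMA=1 bin/web`.
use_puma = !ENV['EXPERIMENTAL_PUMA'].to_s.empty?
server = use_puma ? 'puma' : 'unicorn'
```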
- Oct 03, 2018
Valery Sizov authored
- Sep 24, 2018
Valery Sizov authored
We remove this feature as it never worked properly
- Aug 31, 2018
Stan Hu authored
GitLab already has its own session store, so this extra Sidekiq session is unnecessary. In addition, the GitLab session store properly sets the Secure flag, unlike the default Rack session. CSRF protection in the Sidekiq /admin page continues to work with the existing GitLab session. See https://github.com/mperham/sidekiq/pull/3183 for more details. Part of #49120
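Since Sidekiq::Web was Sinatra-based at the time, disabling its own Rack session (GitLab's session store takes over) probably amounts to a one-liner along these lines:

```ruby
require 'sidekiq/web'

# GitLab's session store already provides Secure-flagged sessions and
# CSRF protection for the /admin page.
Sidekiq::Web.set :sessions, false
```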
- Jul 30, 2018
Stan Hu authored
- Apr 04, 2018
Stan Hu authored
Closes #20060
- Feb 26, 2018
- Dec 12, 2017
Douwe Maan authored
- Dec 05, 2017
Douwe Maan authored
- Jul 11, 2017
- Jun 28, 2017
DJ Mountney authored
- Mar 17, 2017
Yorick Peterse authored
This returns the ActiveRecord configuration for the current environment. While CE doesn't use this very often, EE will use it in a few places for the database load balancing code. I'm adding this to CE so we don't end up with merge conflicts in this file.
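Presumably a small helper along these lines (sketch; exact module and method names assumed):

```ruby
module Gitlab
  module Database
    # The ActiveRecord configuration for the current environment, e.g.
    # { "adapter" => "postgresql", "pool" => 10, ... }
    def self.config
      ActiveRecord::Base.configurations[Rails.env]
    end
  end
end
```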
- Mar 07, 2017
Yorick Peterse authored
This should ensure that connections obtained before starting Sidekiq are not leaked, leading to connection timeouts. Fixes gitlab-com/infrastructure#1139
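A sketch of the fix:

```ruby
# Return connections checked out while loading the app, so Sidekiq's
# worker threads start from a clean pool instead of leaking them.
Sidekiq.configure_server do |_config|
  ActiveRecord::Base.clear_all_connections!
end
```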
- Feb 06, 2017
Yorick Peterse authored
Adding two extra connections does nothing other than increase the number of idle database connections. Given Sidekiq uses N threads, it can never use more than N AR connections at a time, thus we don't need more. The initializer mentioned the Sidekiq upgrade guide stating this was required. This is false: the Sidekiq upgrade guide states this is necessary for Redis, not ActiveRecord. On GitLab.com this resulted in a reduction of about 80-100 PostgreSQL connections. Fixes #27713
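A sketch of sizing the pool to match Sidekiq's concurrency (uses the pre-7 `Sidekiq.options` API; illustrative, not the exact initializer):

```ruby
# With N worker threads, N connections suffice; re-establish the pool
# at exactly Sidekiq's concurrency instead of concurrency + 2.
Sidekiq.configure_server do |_config|
  db_config = ActiveRecord::Base.configurations[Rails.env]
                                .merge('pool' => Sidekiq.options[:concurrency])
  ActiveRecord::Base.establish_connection(db_config)
end
```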
- Jan 25, 2017
Yorick Peterse authored
There were two cases that could be problematic:

1. Because sometimes AuthorizedProjectsWorker would be scheduled in a transaction, it was possible for a job to run/complete before a COMMIT, resulting in it either producing an error or producing no new data.
2. When scheduling jobs the code would not wait until completion. This could lead to a user creating a project and then immediately trying to push to it. Usually this will work fine, but given enough load it might take a few seconds before a user has access.

The first one is problematic; the second one is mostly just annoying (but annoying enough to warrant a solution). This commit changes two things to deal with this:

1. Sidekiq scheduling now takes place after a COMMIT. This is ensured by scheduling using Rails' after_commit hook instead of doing so in an arbitrary method.
2. When scheduling jobs the calling thread now waits for all jobs to complete.

Solution 2 requires tracking of job completions. Sidekiq provides a way to find a job by its ID, but this involves scanning over the entire queue; something that is very inefficient for large queues. As such a more efficient solution is necessary. There are two main Gems that can do this in a more efficient manner:

* sidekiq-status
* sidekiq_status

No, this is not a joke. Both Gems do a similar thing (but slightly differently), and the only difference in their name is a dash vs an underscore. Both Gems however provide far more than just checking if a job has been completed, and both have their problems. sidekiq-status does not appear to be actively maintained, with the last release being in 2015. It also has some issues during testing, as API calls are not stubbed in any way. sidekiq_status on the other hand does not appear to be very popular, and introduces a similar amount of code.

Because of this I opted to write a simple home-grown solution. After all, all we need is storing a job ID somewhere so we can efficiently look it up; we don't need extra web UIs (as provided by sidekiq-status) or complex APIs to update progress, etc.

This is where Gitlab::SidekiqStatus comes in handy. This namespace contains some code used for tracking, removing, and looking up job IDs, all without having to scan over an entire queue. Data is removed explicitly, but also expires automatically just in case.

Using this API we can now schedule jobs in a fork-join like manner: we schedule the jobs in Sidekiq, process them in parallel, then wait for completion. By using Sidekiq we can leverage all its benefits, such as being able to scale across multiple cores and hosts, retrying failed jobs, etc.

The one downside is that we need to make sure we can deal with unexpected increases in job processing timings. To deal with this, the class Gitlab::JobWaiter (used for waiting for jobs to complete) will only wait a number of seconds (30 by default). Once this timeout is reached it will simply return.

For GitLab.com almost all AuthorizedProjectsWorker jobs complete in seconds; only very rarely do we spike to job timings of around a minute. These in turn seem to be the result of external factors (e.g. deploys), in which case a user is most likely not able to use the system anyway.

In short, this new solution should ensure that jobs are processed properly and that in almost all cases a user has access to their resources whenever they need to have access.
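A rough sketch of the fork-join idea behind Gitlab::SidekiqStatus and Gitlab::JobWaiter (simplified, with invented key names; the real classes live under lib/gitlab/):

```ruby
# One Redis key per job ID: set when the job is scheduled, deleted when it
# completes, expiring automatically as a safety net. Waiting is a bounded
# poll instead of a scan over the whole queue.
module SidekiqStatusSketch
  EXPIRY = 30 * 60 # seconds; auto-expire "just in case"

  def self.track(redis, jid)
    redis.set("job_status:#{jid}", '1', ex: EXPIRY)
  end

  def self.complete(redis, jid)
    redis.del("job_status:#{jid}")
  end

  # Fork-join: block until every tracked job finishes or the timeout passes,
  # then simply return (the jobs keep running either way).
  def self.wait(redis, jids, timeout: 30)
    deadline = Time.now + timeout
    sleep 0.1 while Time.now < deadline &&
                    jids.any? { |jid| redis.get("job_status:#{jid}") }
  end
end
```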
- Dec 16, 2016
Rydkin Maxim authored