- Feb 25, 2020: GitLab Bot authored
- Feb 11, 2020: GitLab Bot authored
- Feb 07, 2020: GitLab Bot authored (two commits)
- Feb 05, 2020: GitLab Bot authored
- Jan 31, 2020: GitLab Bot authored
- Jan 29, 2020: GitLab Bot authored
- Jan 28, 2020: GitLab Bot authored
- Jan 24, 2020: GitLab Bot authored
- Jan 21, 2020: GitLab Bot authored
- Jan 20, 2020: GitLab Bot authored
- Jan 16, 2020: GitLab Bot authored
- Jan 10, 2020: GitLab Bot authored
- Jan 07, 2020: GitLab Bot authored (two commits)
- Dec 17, 2019: GitLab Bot authored
- Dec 06, 2019: GitLab Bot authored
- Nov 11, 2019: GitLab Bot authored
- Oct 29, 2019: GitLab Bot authored
- Oct 23, 2019: GitLab Bot authored
- Oct 17, 2019: GitLab Bot authored (two commits)
- Oct 04, 2019: GitLab Bot authored
- Sep 23, 2019: GitLab Bot authored
- Sep 09, 2019: Mo Khan authored
- Sep 05, 2019: Fabio Pitino authored
  Detect if a pipeline runs for a GitHub pull request. When using a mirror for CI/CD only, we register a pull_request webhook. When a pull_request webhook is received, if the source branch SHA matches the actual head of the branch in the repository, we immediately create a new pipeline for the external pull request; otherwise we store the pull request info for when the push webhook is received. When using "only/except: external_pull_requests" we can detect whether the pipeline has an open pull request on GitHub and create the job or not based on that.
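  A minimal sketch of that decision, using assumed class and attribute names rather than GitLab's actual code: compare the webhook's source SHA against the mirrored branch head, and either start a pipeline right away or park the pull request until the push webhook arrives.

  ```ruby
  # Hypothetical sketch; class and attribute names are assumptions, not GitLab's code.
  ExternalPullRequest = Struct.new(:source_branch, :source_sha, keyword_init: true)

  class PullRequestWebhookHandler
    def initialize(branch_heads)
      @branch_heads = branch_heads # branch name => SHA currently present in the mirror
      @pending      = {}           # pull requests waiting for the push webhook
    end

    def handle(pull_request)
      if @branch_heads[pull_request.source_branch] == pull_request.source_sha
        :create_pipeline           # the commit is already mirrored: run immediately
      else
        @pending[pull_request.source_branch] = pull_request
        :stored_until_push_webhook # wait until the branch is mirrored
      end
    end
  end

  handler = PullRequestWebhookHandler.new("feature" => "abc123")
  pr = ExternalPullRequest.new(source_branch: "feature", source_sha: "abc123")
  puts handler.handle(pr) # => create_pipeline
  ```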
- Sep 04, 2019: Oswaldo Ferreira authored
- Aug 19, 2019
- Jul 18, 2019: Andrew Newdigate authored
  This allows the chaos endpoints to be invoked in Sidekiq so that this environment can be tested for resilience.
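  As a rough illustration (the worker name and chaos actions below are assumptions, not GitLab's actual chaos endpoints), a Sidekiq job can deliberately tie up or crash a worker thread so the background-processing tier can be exercised:

  ```ruby
  require "sidekiq"

  # Hypothetical chaos worker; action names are illustrative only.
  class ChaosWorker
    include Sidekiq::Worker

    def perform(action, duration_s = 30)
      case action
      when "sleep" then sleep(duration_s)              # occupy a Sidekiq thread
      when "raise" then raise "chaos: simulated crash" # exercise retry/error handling
      else Sidekiq.logger.warn("unknown chaos action: #{action}")
      end
    end
  end

  # ChaosWorker.perform_async("sleep", 60)
  ```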
- Jul 02, 2019: Mayra Cabrera authored
  - Add two new ActiveRecord models: Namespace::RootStorageStatistics will persist root namespace statistics, and Namespace::AggregationSchedule will save information when a new update to the namespace statistics needs to be scheduled.
  - Inject into the UpdateProjectStatistics concern a new callback that calls an async job to insert a new row into the Namespace::AggregationSchedule table.
  - When a new row is inserted, a new job is scheduled. This job calls a specific service to update the statistics and then deletes the aggregated schedule row.
  - The refresher service makes heavy use of Arel to build composable queries that update Namespace::RootStorageStatistics attributes.
  - Add an extra worker that traverses pending rows in the Namespace::AggregationSchedule table and schedules a worker for each of those rows.
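  A sketch of that scheduling flow under assumed worker names (not the actual GitLab code): a cron-style worker walks the pending aggregation rows and fans out one refresh job per root namespace, which updates the statistics and removes its schedule row.

  ```ruby
  require "sidekiq"

  # Hypothetical sketch; worker names and the pending-row source are assumptions.
  class RootStatisticsWorker
    include Sidekiq::Worker

    def perform(namespace_id)
      # Refresh Namespace::RootStorageStatistics for this namespace, then delete
      # its aggregation schedule row (the Arel-based refresh itself is omitted).
      Sidekiq.logger.info("refreshing root storage statistics for namespace ##{namespace_id}")
    end
  end

  class ScheduleAggregationWorker
    include Sidekiq::Worker

    def perform
      pending_namespace_ids.each { |id| RootStatisticsWorker.perform_async(id) }
    end

    private

    def pending_namespace_ids
      # In the real feature this would read pending Namespace::AggregationSchedule
      # rows; stubbed here so the sketch stays self-contained.
      []
    end
  end
  ```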
- Jun 24, 2019
  - Add index for Pages domain SSL auto-renewal
  - Add PagesDomain.needs_ssl_renewal scope
  - Add cron worker for SSL renewal
  - Add worker for SSL renewal
  - Add Pages SSL renewal worker queue settings
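  A rough sketch of what a needs_ssl_renewal-style scope could look like; the column names and the 30-day window here are assumptions, not the actual schema.

  ```ruby
  require "active_record"

  # Hypothetical model and scope; column names are assumed for illustration.
  class PagesDomain < ActiveRecord::Base
    scope :needs_ssl_renewal, -> {
      where(auto_ssl_enabled: true)
        .where("certificate_valid_not_after IS NULL OR certificate_valid_not_after < ?",
               Time.now + 30 * 24 * 60 * 60)
    }
  end

  # A cron worker could then enqueue one renewal job per domain, e.g.:
  # PagesDomain.needs_ssl_renewal.find_each { |d| SslRenewalWorker.perform_async(d.id) }
  ```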
- Jun 20, 2019: Yorick Peterse authored
- Jun 04, 2019: Shinya Maeda authored
  As we have a central domain for the auto merge process today, we should use a single worker for any auto merge process.
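  A minimal sketch of that consolidation, with assumed names (not necessarily the real worker): one worker entry point through which any auto merge strategy is dispatched.

  ```ruby
  require "sidekiq"

  # Hypothetical single entry point for all auto merge strategies.
  class AutoMergeProcessWorker
    include Sidekiq::Worker

    def perform(merge_request_id)
      # Look up the merge request and let the auto merge service pick whichever
      # strategy is configured (e.g. merge when pipeline succeeds).
      Sidekiq.logger.info("processing auto merge for merge request ##{merge_request_id}")
    end
  end

  # Any event that might unblock an auto merge enqueues the same worker:
  # AutoMergeProcessWorker.perform_async(merge_request.id)
  ```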
- May 31, 2019: Bob Van Landuyt authored
  This sets up all the basics for importing Phabricator tasks into GitLab issues. To import all tasks from a Phabricator instance into GitLab, we import them into a new project that has its repository disabled. The import is hooked into a regular ProjectImport setup but, similar to the GitHub parallel importer, takes care of all the imports itself. In this iteration, we import each page of tasks in a separate Sidekiq job. The first thing we do when requesting a new page of tasks is schedule the next page to be imported. To avoid deadlocks, we only allow a single job per worker type to run at the same time. For now we only import basic issue information; this should be extended to richer information.
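  A sketch of that page-by-page flow (class names and the API call are placeholders): each job schedules the next page before importing its own, so one failing page does not stall the rest.

  ```ruby
  require "sidekiq"

  # Hypothetical paged import worker; fetch_page/import_task are placeholders.
  class ImportTasksWorker
    include Sidekiq::Worker

    def perform(project_id, cursor = nil)
      tasks, next_cursor = fetch_page(cursor)

      # Schedule the next page first. In the real feature, only one job per
      # worker type is allowed to run at a time to avoid deadlocks.
      self.class.perform_async(project_id, next_cursor) if next_cursor

      tasks.each { |task| import_task(project_id, task) }
    end

    private

    def fetch_page(cursor)
      # Placeholder for a Phabricator API call; returns [tasks, next_cursor].
      [[], nil]
    end

    def import_task(project_id, task)
      # Create a GitLab issue from the task's basic fields (title, description).
    end
  end
  ```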
- Apr 04, 2019: Hiroyuki Sato authored
- Mar 27, 2019: Nick Thomas authored
  Since external diffs are likely to be a bit slower than in-database ones, add a mode that makes diffs external after they've been obsoleted by events. This should strike a balance between performance and disk space. A background cron drives the majority of migrations, since diffs become outdated through user actions.
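  As an illustration of the cron-driven migration (worker names assumed, selection logic stubbed): a scheduled job finds diffs that have become outdated and moves them out of the database in batches.

  ```ruby
  require "sidekiq"

  # Hypothetical workers; the selection of "outdated" diffs is stubbed out.
  class MigrateExternalDiffsBatchWorker
    include Sidekiq::Worker

    def perform(diff_ids)
      # Move each diff's data to external storage and clear the in-database copy.
    end
  end

  class ScheduleMigrateExternalDiffsWorker
    include Sidekiq::Worker

    def perform
      outdated_diff_ids.each_slice(1_000) do |batch|
        MigrateExternalDiffsBatchWorker.perform_async(batch)
      end
    end

    private

    def outdated_diff_ids
      # In the real feature: diffs that are no longer the latest version of a
      # merged or closed merge request; stubbed here.
      []
    end
  end
  ```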
- Mar 01, 2019: Gabriel Mazetto authored
  We are adding Sidekiq workers and service classes to allow rolling back a hashed storage migration. There is some refactoring involved as well, since part of the code can be reused by both the migration and the rollback logic.
- Feb 27, 2019: Jacopo authored
  The API GET projects/:id/traffic/fetches allows users with write access to the repository to get the number of clones for the last 30 days.
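  A usage sketch against the endpoint path quoted above; the host, project ID, and token are placeholders, and the response is assumed to be JSON.

  ```ruby
  require "net/http"
  require "json"
  require "uri"

  uri = URI("https://gitlab.example.com/api/v4/projects/42/traffic/fetches")
  request = Net::HTTP::Get.new(uri)
  request["PRIVATE-TOKEN"] = ENV.fetch("GITLAB_TOKEN") # token of a user with write access

  response = Net::HTTP.start(uri.host, uri.port, use_ssl: true) { |http| http.request(request) }

  puts JSON.pretty_generate(JSON.parse(response.body)) if response.is_a?(Net::HTTPSuccess)
  ```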
- Feb 20, 2019: James Fargher authored
  ChatOps used to be in the Ultimate tier.
- Jan 25, 2019: Gabriel Mazetto authored
  Specs were reviewed and improved to better cover the current behavior. There was some standardization done as well to facilitate the implementation of the rollback functionality. StorageMigratorWorker was extracted to the HashedStorage namespace, where RollbackerWorker will live as well.