- Jan 10, 2020
GitLab Bot authored
- Jan 07, 2020
GitLab Bot authored
- Dec 24, 2019
GitLab Bot authored
- Dec 10, 2019
GitLab Bot authored
- Nov 19, 2019
GitLab Bot authored
- Nov 11, 2019
GitLab Bot authored
- Oct 29, 2019
GitLab Bot authored
- Oct 17, 2019
GitLab Bot authored
GitLab Bot authored
- Sep 05, 2019
Fabio Pitino authored
Detect if pipeline runs for a GitHub pull request
When using a mirror for CI/CD only, we register a pull_request webhook. When a pull_request webhook is received, if the source branch SHA matches the actual head of the branch in the repository, we immediately create a new pipeline for the external pull request. Otherwise we store the pull request info for when the push webhook is received. When using `only/except: external_pull_requests` we can detect whether the pipeline has an open pull request on GitHub and create the job or not based on that.
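A minimal sketch of that decision, with illustrative names (the method names, attribute names, and `create_external_pull_request_pipeline` helper are assumptions, not the actual GitLab code):

```ruby
# Hypothetical sketch of the pull_request webhook handling described above.
def handle_pull_request_webhook(project, params)
  pull_request = project.external_pull_requests
    .find_or_initialize_by(pull_request_iid: params[:iid])
  pull_request.update!(
    source_branch: params[:source_branch],
    source_sha:    params[:source_sha],
    status:        params[:status]
  )

  # Only create a pipeline right away if the mirror already has the PR head;
  # otherwise the stored record is picked up when the push webhook arrives.
  head_sha = project.repository.commit(pull_request.source_branch)&.sha
  return unless head_sha == pull_request.source_sha

  create_external_pull_request_pipeline(project, pull_request)
end
```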
- Aug 13, 2019
Kamil Trzciński authored
We migrated all of the logic to `PipelineProcessWorker`, so this worker has become redundant.
- Aug 01, 2019
Kamil Trzciński authored
This implements support for the `needs:` keyword as part of GitLab CI. It allows some of the jobs to run out of order.
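For illustration only, a rough sketch of the out-of-order idea: a job is enqueued as soon as the builds it `needs:` have finished, rather than when its whole stage starts. The method and predicate names here are hypothetical, not the actual processing classes:

```ruby
# Hypothetical DAG-style processing: enqueue any created build whose needed
# builds have all completed successfully.
def process_dag_builds(pipeline)
  pipeline.builds.select(&:created?).each do |build|
    needed = pipeline.builds.select { |b| build.needs.include?(b.name) }
    build.enqueue! if needed.all?(&:success?)
  end
end
```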
- Jul 18, 2019
Andrew Newdigate authored
This allows the chaos endpoints to be invoked in Sidekiq so that this environment can be tested for resilience.
- Jul 02, 2019
Mayra Cabrera authored
- Add two new ActiveRecord models:
  - Namespace::RootStorageStatistics will persist root namespace statistics
  - Namespace::AggregationSchedule will save information when a new update to the namespace statistics needs to be scheduled
- Inject into the UpdateProjectStatistics concern a new callback that calls an async job to insert a new row into the Namespace::AggregationSchedule table
- When a new row is inserted, a new job is scheduled. This job calls a specific service to update the statistics and afterwards deletes the aggregated schedule row
- The refresher service makes heavy use of Arel to build composable queries to update Namespace::RootStorageStatistics attributes
- Add an extra worker to traverse pending rows in the Namespace::AggregationSchedule table and schedule a worker for each of these rows
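A condensed sketch of that flow under assumed class names (the real callback, worker, and service classes may differ):

```ruby
require 'sidekiq'

# Illustrative only: a statistics update schedules aggregation for the root
# namespace instead of refreshing it inline.
class ScheduleAggregationWorker
  include Sidekiq::Worker

  def perform(namespace_id)
    root = Namespace.find(namespace_id).root_ancestor

    # A unique index on namespace_id keeps this effectively idempotent.
    Namespace::AggregationSchedule.find_or_create_by!(namespace_id: root.id)
  end
end

class AggregationWorker
  include Sidekiq::Worker

  def perform(namespace_id)
    schedule = Namespace::AggregationSchedule.find_by(namespace_id: namespace_id)
    return unless schedule

    # Refresh Namespace::RootStorageStatistics, then drop the schedule row.
    StatisticsRefresherService.new(schedule.namespace).execute
    schedule.destroy
  end
end
```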
- Jun 24, 2019
Add index for pages domain ssl auto renewal
Add PagesDomain.needs_ssl_renewal scope
Add cron worker for ssl renewal
Add worker for ssl renewal
Add pages ssl renewal worker queue settings
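A rough sketch of how the scope and cron worker could fit together; the column names and renewal criteria below are assumptions, not the real schema:

```ruby
require 'sidekiq'

# Illustrative only: scope for domains whose certificates should be renewed,
# plus a cron worker that fans out one renewal job per domain.
class PagesDomain < ApplicationRecord
  scope :needs_ssl_renewal, -> {
    # Assumed criteria: auto SSL enabled and certificate close to expiry.
    where(auto_ssl_enabled: true)
      .where('certificate_valid_not_after < ?', 30.days.from_now)
  }
end

class PagesDomainSslRenewalCronWorker
  include Sidekiq::Worker

  def perform
    PagesDomain.needs_ssl_renewal.find_each do |domain|
      PagesDomainSslRenewalWorker.perform_async(domain.id)
    end
  end
end
```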
- Jun 04, 2019
Shinya Maeda authored
As we have a central domain for the auto merge process today, we should use a single worker for any auto merge process.
- Apr 30, 2019
Domains will be removed by the verification worker after 1 week of being disabled
Thong Kuah authored
Add endpoint to delete/uninstall a cluster application
Thong Kuah authored
+ to monitor progress of uninstallation pod
- Apr 26, 2019
Jason Goodman authored
This enables sending a chat message to Slack or Mattermost upon a successful, failed, or canceled deployment
- Apr 04, 2019
Hiroyuki Sato authored
- Mar 27, 2019
Nick Thomas authored
Since external diffs are likely to be a bit slower than in-database ones, add a mode that makes diffs external after they've been obsoleted by events. This should strike a balance between performance and disk space. A background cron drives the majority of migrations, since diffs become outdated through user actions.
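A hedged sketch of what the cron-driven migration could look like; the scopes and worker names are placeholders:

```ruby
require 'sidekiq'

# Illustrative only: periodically move outdated merge request diffs out of
# the database into external storage, one batch at a time.
class ScheduleMigrateExternalDiffsCronWorker
  include Sidekiq::Worker

  def perform
    # `outdated` and `stored_in_database` are assumed scopes for diffs that
    # have been superseded but still live in the database.
    MergeRequestDiff.outdated.stored_in_database.limit(1_000).pluck(:id).each do |id|
      MigrateExternalDiffWorker.perform_async(id)
    end
  end
end
```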
- Mar 20, 2019
Tiger Watson authored
Introduces the concept of Prerequisites for a CI build. If a build has unmet prerequisites it will go through the :preparing state before being made available to a runner. There are no actual prerequisites yet, so current behaviour is unchanged.
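A simplified sketch of the idea: a build with prerequisites passes through `:preparing` while they are fulfilled, before becoming available to runners. The worker and method names are illustrative:

```ruby
require 'sidekiq'

# Illustrative only: fulfil a build's prerequisites, then let it move on to
# :pending so a runner can pick it up.
class BuildPrepareWorker
  include Sidekiq::Worker

  def perform(build_id)
    build = Ci::Build.find(build_id)

    # `complete!` on each prerequisite is a hypothetical fulfilment step.
    build.prerequisites.each(&:complete!)
    build.enqueue!
  end
end
```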
- Mar 05, 2019
João Cunha authored
- This is to avoid collision with EE ClusterUpdateAppWorker
- Creates new route
- Creates new controller action
- Creates call stack:
  Clusters::ApplicationsController calls -->
  Clusters::Applications::UpdateService calls -->
  Clusters::Applications::ScheduleUpdateService calls -->
  ClusterUpdateAppWorker calls -->
  Clusters::Applications::PatchService -->
  ClusterWaitForAppInstallationWorker
DRY req params
Adds gcp_cluster:cluster_update_app queue
Schedule_update_service is unneeded
Extract common logic to a parent class (UpdateService will need it)
Introduce new UpdateService
Fix rescue class namespace
Fix RuboCop offenses
Adds BaseService for create and update services
Remove request_handler code duplication
Fixes update command
Move update_command to ApplicationCore so all apps can use it
Adds tests for Knative update_command
Adds specs for PatchService
Raise error if update receives an uninstalled app
Adds update_service spec
Fix RuboCop offense
Use subject in favor of go
Adds update endpoint specs for project namespace
Adds update endpoint specs for group namespace
- Mar 01, 2019
Gabriel Mazetto authored
Rollback is done similarly to migration for Hashed Storage. It also shares the same ExclusiveLease key to prevent both from happening at the same time. All Hashed Storage related workers now share the same queue namespace, which makes it easy to assign dedicated workers.
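For context, a sketch of the shared-lease idea using `Gitlab::ExclusiveLease`; the key format and timeout below are made up, but the point is that migration and rollback contend for the same lease per project:

```ruby
# Illustrative only: migration and rollback share one lease key per project,
# so they cannot run concurrently for the same project.
LEASE_TIMEOUT = 30 * 60 # seconds

def try_obtain_lease(project_id)
  Gitlab::ExclusiveLease
    .new("project_hashed_storage:#{project_id}", timeout: LEASE_TIMEOUT)
    .try_obtain
end

def migrate_or_rollback(project_id)
  return unless try_obtain_lease(project_id) # another operation is running

  # ... perform the migration or the rollback here ...
end
```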
Gabriel Mazetto authored
Moved to HashedStorage namespace, and added them to the `:hashed_storage` queue namespace
Gabriel Mazetto authored
We are adding Sidekiq workers and service classes to allow rolling back a hashed storage migration. There is some refactoring involved as well, as part of the code can be reused by both the migration and the rollback logic.
- Feb 27, 2019
Jacopo authored
The API `GET projects/:id/traffic/fetches` allows users with write access to the repository to get the number of clones for the last 30 days.
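As a hedged sketch, the endpoint's value could simply be a sum over a 30-day window; the association and column names here are assumptions, not the real schema:

```ruby
# Illustrative only: total fetch/clone count for the last 30 days, built from
# assumed per-day statistics rows.
def fetches_last_30_days(project)
  project.daily_fetch_statistics
         .where('date >= ?', Date.today - 30)
         .sum(:count)
end
```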
- Feb 20, 2019
James Fargher authored
ChatOps used to be in the Ultimate tier.
- Feb 07, 2019
Thong Kuah authored
- Jan 25, 2019
Gabriel Mazetto authored
Specs were reviewed and improved to better cover the current behavior. There was some standardization done as well to facilitate the implementation of the rollback functionality. StorageMigratorWorker was extracted to the HashedStorage namespace, where the RollbackerWorker will live as well.
Kamil Trzciński authored
This includes a set of APIs to manipulate the container registry. It also includes the ability to delete tags based on requested criteria, like keep-last-n, matching-name, and older-than.
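A rough sketch of applying those criteria when selecting tags to delete; only the criteria names come from the commit message, the helpers and attributes are hypothetical:

```ruby
# Illustrative only: pick container repository tags for deletion according to
# the requested criteria.
def tags_to_delete(repository, name_regex:, keep_n:, older_than:)
  tags = repository.tags.select { |t| t.name =~ Regexp.new(name_regex) } # matching-name
  tags = tags.sort_by(&:created_at).reverse.drop(keep_n)                 # keep-last-n
  tags.select { |t| t.created_at < Time.now - older_than }               # older-than
end
```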
- Jan 07, 2019
Heinrich Lee Yu authored
Process CSV uploads asynchronously using a worker, then email the results
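A minimal sketch of that pattern, assuming a hypothetical importer service and results mailer:

```ruby
require 'sidekiq'

# Illustrative only: parse the uploaded CSV in the background, then email a
# summary of created records and errors back to the user.
class ImportCsvWorker
  include Sidekiq::Worker

  def perform(user_id, project_id, upload_id)
    user    = User.find(user_id)
    project = Project.find(project_id)
    upload  = Upload.find(upload_id)

    results = ImportCsvService.new(user, project, upload).execute
    ResultsMailer.import_results_email(user, project, results).deliver_later
  end
end
```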
- Dec 21, 2018
George Tsiolis authored
- Dec 19, 2018
Zeger-Jan van de Weg authored
This action doesn't lean on reduplication, so a short call can be made to the Gitaly server to have the object pool remove its remote to the project pending deletion. https://gitlab.com/gitlab-org/gitaly/blob/f6cd55357/internal/git/objectpool/link.go#L58 When an object pool doesn't have any members, there is no longer a need for the pool. So when a project leaves the pool, the pool will be destroyed in the background. Fixes: https://gitlab.com/gitlab-org/gitaly/issues/1415
- Dec 12, 2018
Alejandro Rodríguez authored
An email containing the last mirror update error is sent to project maintainers. This will allow maintainers to set alarms and react accordingly.
- Dec 07, 2018
Zeger-Jan van de Weg authored
When a project is forked, the new repository used to be a deep copy of everything stored on disk, by leveraging `git clone`. This works well and makes isolation between repositories easy. However, the clone is at the start 100% the same as the origin repository, and in the case of the objects in the object directory this is almost always going to be a lot of duplication.

Object Pools are a way to create a third repository that essentially only exists for its 'objects' subdirectory. This third repository's object directory will be set as the alternate location for objects. This means that if an object is missing in the local repository, Git will look in another location: the object pool repository. When Git performs garbage collection, it's smart enough to check the alternate location. When objects are duplicated, it will allow Git to throw one copy away. This copy is in the local repository, while the pool remains as is.

These pools have an origin location, which for now will always be a repository that itself is not a fork. When the root of a fork network is forked by a user, the fork still clones the full repository. Asynchronously, the pool repository will be created. Either one of these processes can finish earlier than the other. To handle this race condition, the Join ObjectPool operation is idempotent. Given it's idempotent, we can schedule it twice with the same effect.

To accommodate the holding of state, two migrations have been added:
1. Added a state column to the pool_repositories table. This column is managed by the state machine, allowing for hooks on transitions.
2. pool_repositories now has a source_project_id. This column is convenient to have for multiple reasons: it has a unique index, allowing the database to handle race conditions when creating a new record, and it's nice to know who the host is, as that's a short link to the fork network's root.

Object pools are only available for public projects which use hashed storage, and only when forking from the root of the fork network. (That is, the project being forked from isn't itself a fork.)

In this commit message I use both ObjectPool and PoolRepository, which are alike but different from each other. ObjectPool refers to whatever is stored on disk and managed by Gitaly. PoolRepository is the record in the database.
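A sketch of the idempotent join step described above, simplified: the real linking happens in Gitaly through the repository's alternates mechanism, and the predicate and method names here are assumptions:

```ruby
require 'sidekiq'

# Illustrative only: joining a pool is safe to schedule more than once.
class ObjectPoolJoinWorker
  include Sidekiq::Worker

  def perform(pool_id, project_id)
    pool    = PoolRepository.find(pool_id)
    project = Project.find(project_id)

    return unless pool.ready?                      # pool may still be creating
    return if project.repository.linked_to?(pool)  # already joined: no-op

    # Ask Gitaly to point the project's alternates at the pool's objects.
    pool.link_repository(project.repository)
  end
end
```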
Douwe Maan authored
- Dec 06, 2018
Jan Provaznik authored
It gathers a list of file paths to delete before destroying the parent object. Then, after the parent object is destroyed, these paths are scheduled for deletion asynchronously. CarrierWave needed the associated model for deleting an upload file. To avoid this requirement, a simple Fog/File layer is used directly for file deletion, which allows us to work with just a simple list of paths.
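A small sketch of deleting uploads by path through Fog directly, without loading CarrierWave models; the provider, credentials, and bucket settings below are placeholders:

```ruby
require 'fog/aws'

# Illustrative only: delete a plain list of remote file paths directly via
# Fog, with no CarrierWave model involved.
def delete_remote_paths(paths)
  connection = Fog::Storage.new(
    provider:              'AWS',
    aws_access_key_id:     ENV['AWS_ACCESS_KEY_ID'],
    aws_secret_access_key: ENV['AWS_SECRET_ACCESS_KEY']
  )
  bucket = connection.directories.new(key: ENV['UPLOADS_BUCKET'])

  paths.each do |path|
    # Build a lightweight file handle for the key and issue a DELETE for it.
    bucket.files.new(key: path).destroy
  end
end
```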