  1. Sep 05, 2019
    • CE port for pipelines for external pull requests · ca6a1f33
      Fabio Pitino authored
      Detect if pipeline runs for a GitHub pull request
      
      When using a mirror for CI/CD only, we register a pull_request
      webhook. When a pull_request webhook is received, if the
      source branch SHA matches the actual head of the branch in the
      repository, we immediately create a new pipeline for the
      external pull request. Otherwise we store the
      pull request info for when the push webhook is received.
      
      When using "only/except: external_pull_requests" we can detect
      whether the pipeline has an open pull request on GitHub and
      create or skip the job based on that.
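      
      A minimal sketch in Ruby of the branch-head check described above; `ExternalPullRequest.create_or_update_from_params` and the `source_sha` field are assumptions modelled on the message, not necessarily the shipped code.
      
      ```ruby
      def handle_pull_request_webhook(project, params)
        # Hypothetical model storing the PR info from the webhook payload.
        pull_request = ExternalPullRequest.create_or_update_from_params(project, params)

        branch_head = project.repository.commit(pull_request.source_branch)&.sha

        if branch_head == pull_request.source_sha
          # The push already arrived: start the pipeline immediately.
          Ci::CreatePipelineService
            .new(project, pull_request.author, ref: pull_request.source_branch)
            .execute(:external_pull_request_event)
        end
        # Otherwise the stored record is picked up once the push webhook lands.
      end
      ```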
      ca6a1f33
  2. Aug 01, 2019
    • Add support for DAG · e7ee84aa
      Kamil Trzciński authored
      This implements support for the `needs:` keyword
      in GitLab CI, which allows some jobs to run
      out of stage order.
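      
      As a toy illustration (not GitLab's actual scheduler), the rule `needs:` introduces reduces to: a job is runnable once every job it needs has succeeded, regardless of its stage's position.
      
      ```ruby
      require 'set'

      # `jobs` maps each job name to the names listed under its `needs:`.
      def runnable_jobs(jobs, succeeded)
        jobs.keys
            .reject { |name| succeeded.include?(name) }
            .select { |name| jobs[name].all? { |need| succeeded.include?(need) } }
      end

      jobs = {
        'build'  => [],
        'test'   => ['build'],
        'deploy' => ['test'],
        'lint'   => []          # no needs: can start right away
      }

      runnable_jobs(jobs, Set.new(['build'])) # => ["test", "lint"]
      ```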
      e7ee84aa
  3. Jul 02, 2019
    • Includes logic to persist namespace statistics · dfdfa913
      Mayra Cabrera authored
      - Add two new ActiveRecord models:
        - Namespace::RootStorageStatistics will persist root namespace statistics
        - Namespace::AggregationSchedule will save information when a new update
      to the namespace statistics needs to be scheduled
      - Inject into the UpdateProjectStatistics concern a new callback that will
      call an async job to insert a new row into the Namespace::AggregationSchedule
      table (see the sketch below)
      - When a new row is inserted, a new job is scheduled. This job will
      call a specific service to update the statistics, and after that
      it will delete the aggregation schedule row
      - The refresher services make heavy use of Arel to build composable
      queries to update Namespace::RootStorageStatistics attributes.
      - Add an extra worker to traverse pending rows on the
      Namespace::AggregationSchedule table and schedule a worker for each of
      these rows
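      
      A rough sketch of the callback wiring from the first two bullets; the concern and worker names follow the commit message, while the batching and lease handling of the real code are omitted.
      
      ```ruby
      module UpdateProjectStatistics
        extend ActiveSupport::Concern

        included do
          after_save :schedule_namespace_aggregation
        end

        private

        def schedule_namespace_aggregation
          # Inserts a pending Namespace::AggregationSchedule row and lets a
          # background worker refresh Namespace::RootStorageStatistics later.
          Namespaces::ScheduleAggregationWorker.perform_async(project.namespace_id)
        end
      end
      ```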
      dfdfa913
  4. Jun 24, 2019
    • Renew Let's Encrypt certificates · a7764d0e
      vshushlin authored and Nick Thomas committed
      Add index for pages domain SSL auto renewal
      Add PagesDomain.needs_ssl_renewal scope
      Add cron worker for SSL renewal
      Add worker for SSL renewal
      Add Pages SSL renewal worker queue settings
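      
      A condensed sketch of how these pieces could fit together; the 30-day window and column names are assumptions, while the cron-worker/worker split mirrors the list above.
      
      ```ruby
      class PagesDomain < ApplicationRecord
        scope :needs_ssl_renewal, -> do
          where(auto_ssl_enabled: true)
            .where('certificate_valid_not_after IS NULL OR certificate_valid_not_after < ?',
                   30.days.from_now)
        end
      end

      class PagesDomainSslRenewalCronWorker
        include Sidekiq::Worker

        def perform
          # One job per domain so a single ACME failure cannot block the batch.
          PagesDomain.needs_ssl_renewal.find_each do |domain|
            PagesDomainSslRenewalWorker.perform_async(domain.id)
          end
        end
      end
      ```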
      a7764d0e
  5. Mar 27, 2019
    • Allow external diffs to be used conditionally · 0e831b0b
      Nick Thomas authored
      Since external diffs are likely to be a bit slower than in-database
      ones, add a mode that makes diffs external after they've been obsoleted
      by events. This should strike a balance between performance and disk
      space.
      
      A background cron drives the majority of migrations, since diffs become
      outdated through user actions.
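      
      A simplified sketch of the conditional mode; the `external_diffs_when: 'outdated'` setting name and the outdatedness predicate are illustrative assumptions, not the exact GitLab implementation.
      
      ```ruby
      def store_diff_externally?(diff)
        case Gitlab.config.external_diffs.when
        when 'always'
          true
        when 'outdated'
          # Migrate only diffs that newer pushes have obsoleted, keeping
          # the hot, frequently viewed diffs in the database.
          diff != diff.merge_request.latest_merge_request_diff
        else
          false
        end
      end
      ```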
      0e831b0b
  6. Mar 20, 2019
    • Create framework for build prerequisites · 00f0d356
      Tiger Watson authored
      Introduces the concept of Prerequisites for a CI build.
      If a build has unmet prerequisites it will go through the
      :preparing state before being made available to a runner.
      
      There are no actual prerequisites yet, so current
      behaviour is unchanged.
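      
      A minimal sketch of the framework's shape, with names assumed from the message: each prerequisite reports whether it is unmet, and a build with unmet prerequisites starts out :preparing instead of :pending.
      
      ```ruby
      class Prerequisite
        def unmet?
          raise NotImplementedError
        end
      end

      class Build
        attr_reader :prerequisites

        def initialize(prerequisites = [])
          @prerequisites = prerequisites
        end

        # Held in :preparing until every prerequisite is satisfied,
        # then handed to a runner as a normal :pending build.
        def initial_status
          prerequisites.any?(&:unmet?) ? :preparing : :pending
        end
      end
      ```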
      00f0d356
  7. Mar 05, 2019
    • Rename ClusterUpdateAppWorker to ClusterPatchAppWorker · 3bdff7aa
      João Cunha authored
      - This is to avoid collision with the EE ClusterUpdateAppWorker
      3bdff7aa
    • Creates Clusters::ApplicationsController update endpoint · f8234d9a
      João Cunha authored and Jacques Erasmus committed
      - Creates new route
      - Creates new controller action
      - Creates call stack:
        Clusters::ApplicationsController calls -->
        Clusters::Applications::UpdateService calls -->
        Clusters::Applications::ScheduleUpdateService calls -->
        ClusterUpdateAppWorker calls -->
        Clusters::Applications::PatchService calls -->
        ClusterWaitForAppInstallationWorker
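      
      A stripped-down sketch of the top of that call stack; in GitLab the controller actually lives under the project/group namespaces and carries authorization, so treat the names and signatures here as illustrative.
      
      ```ruby
      class Clusters::ApplicationsController < ApplicationController
        def update
          application = cluster.find_or_build_application(params[:application])

          # Delegates to the service chain listed above, which ends in
          # ClusterPatchAppWorker issuing the actual Helm patch.
          Clusters::Applications::UpdateService
            .new(application, current_user)
            .execute

          head :no_content
        end
      end
      ```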
      
      DRY req params
      
      Adds gcp_cluster:cluster_update_app queue
      
      Schedule_update_service is unneeded
      
      Extract common logic to a parent class (UpdateService will need it)
      
      Introduce new UpdateService
      
      Fix rescue class namespace
      
      Fix RuboCop offenses
      
      Adds BaseService for create and update services
      
      Remove request_handler code duplication
      
      Fixes update command
      
      Move update_command to ApplicationCore so all apps can use it
      
      Adds tests for Knative update_command
      
      Adds specs for PatchService
      
      Raise error if update receives an uninstalled app
      
      Adds update_service spec
      
      Fix RuboCop offense
      
      Use subject in favor of go
      
      Adds update endpoint specs for project namespace
      
      Adds update endpoint specs for group namespace
      f8234d9a
  8. Feb 27, 2019
    • Add project http fetch statistics API · 5ae9a44a
      Jacopo authored
      The API GET projects/:id/traffic/fetches allows users with write
      access to the repository to get the number of clones for the
      last 30 days.
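      
      A hedged sketch of such an endpoint in Grape, GitLab's API framework; the permission name, entity, and daily counter model are assumptions based on the message, not the shipped code.
      
      ```ruby
      module API
        class ProjectStatistics < Grape::API
          before { authorize! :read_statistics, user_project }

          resource :projects do
            get ':id/traffic/fetches' do
              # One counter row per day, covering the 30-day window.
              present user_project.daily_statistics.recent(30),
                      with: Entities::ProjectDailyStatistics
            end
          end
        end
      end
      ```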
      5ae9a44a
  9. Jan 25, 2019
    • Refactor Storage Migration · 7bc16889
      Gabriel Mazetto authored
      Specs were reviewed and improved to better cover the current behavior.
      There was some standardization done as well to facilitate the
      implementation of the rollback functionality.
      
      StorageMigratorWorker was extracted to the HashedStorage namespace,
      where the RollbackerWorker will live as well.
      7bc16889
    • Add Container Registry API · 045d07ba
      Kamil Trzciński authored
      This includes a set of APIs to manipulate the container registry.
      It also includes the ability to delete tags based on requested
      criteria, like keep-last-n, matching-name, older-than.
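      
      A simplified sketch of selecting tags for bulk deletion by those criteria; `older_than` is assumed to be an ActiveSupport duration (e.g. 30.days), and the chaining order is illustrative rather than the exact service logic.
      
      ```ruby
      def tags_to_delete(tags, matching_name:, keep_last_n:, older_than:)
        tags
          .select { |tag| tag.name.match?(matching_name) }
          .sort_by(&:created_at)
          .reverse                       # newest first
          .drop(keep_last_n)             # always keep the N most recent matches
          .select { |tag| tag.created_at < older_than.ago }
      end
      ```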
      045d07ba
  10. Dec 07, 2018
    • Allow public forks to be deduplicated · 896c0bdb
      Zeger-Jan van de Weg authored
      When a project is forked, the new repository used to be a deep copy of everything
      stored on disk, by leveraging `git clone`. This works well, and makes isolation
      between repositories easy. However, at the start the clone is 100% the same as the
      origin repository, and in the case of the objects in the object directory, this
      is almost always going to be a lot of duplication.
      
      Object Pools are a way to create a third repository that essentially only exists
      for its 'objects' subdirectory. This third repository's object directory will be
      set as alternate location for objects. This means that in the case an object is
      missing in the local repository, git will look in another location. This other
      location is the object pool repository.
      
      When Git performs garbage collection, it's smart enough to check the
      alternate location. When objects are duplicated, it will allow git to
      throw one copy away. This copy is in the local repository, while the pool
      remains as is.
      
      These pools have an origin location, which for now will always be a
      repository that itself is not a fork. When the root of a fork network is
      forked by a user, the fork still clones the full repository. Async, the
      pool repository will be created.
      
      Either one of these processes can be done earlier than the other. To
      handle this race condition, the Join ObjectPool operation is
      idempotent. Given it's idempotent, we can schedule it twice, with the
      same effect.
      
      To accommodate the holding of state, two migrations have been added.
      1. Added a state column to the pool_repositories table. This column is
      managed by the state machine, allowing for hooks on transitions.
      2. pool_repositories now has a source_project_id. This column is
      convenient to have for multiple reasons: it has a unique index, allowing
      the database to handle race conditions when creating a new record. Also,
      it's nice to know who the host is, as that's a short link to the fork
      network's root.
      
      Object pools are only available for public projects, which use hashed
      storage and when forking from the root of the fork network. (That is,
      the project being forked from isn't itself a fork.)
      
      In this commit message I use both ObjectPool and PoolRepository,
      which are alike, but different from each other. ObjectPool refers to
      whatever is stored on disk and managed by Gitaly. PoolRepository is
      the record in the database.
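      
      A bare-bones illustration of the alternates mechanism the pool relies on: the fork's object store points at the pool's objects directory, so git looks there for objects missing locally. The paths are invented examples; in reality Gitaly manages this layout.
      
      ```ruby
      require 'fileutils'

      fork_repo = '/repositories/@hashed/ab/cd/fork.git'
      pool_repo = '/repositories/@pools/12/34/pool.git'

      alternates = File.join(fork_repo, 'objects', 'info', 'alternates')
      FileUtils.mkdir_p(File.dirname(alternates))
      File.write(alternates, "#{File.join(pool_repo, 'objects')}\n")

      # After this, `git gc` in the fork may drop any object that is
      # already present in the pool, deduplicating the fork network.
      ```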
      896c0bdb
    • Remove RemoveOldWebHookLogsWorker · 536c1e40
      Douwe Maan authored
      536c1e40
  11. Dec 06, 2018
    • Use FastDestroy for deleting uploads · 239fdc78
      Jan Provaznik authored
      It gathers a list of file paths to delete before destroying
      the parent object. Then, after the parent object is destroyed,
      these paths are scheduled for deletion asynchronously.
      
      Carrierwave needed the associated model for deleting an upload file.
      To avoid this requirement, a simple Fog/File layer is used directly
      for file deletion, which allows us to use just a simple list of paths.
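      
      A condensed sketch of the pattern: snapshot paths inside the transaction, delete files after commit. The `begin_fast_destroy`/`finalize_fast_destroy` hooks follow GitLab's FastDestroyAll concern; the worker name is an assumption.
      
      ```ruby
      class Upload < ApplicationRecord
        # Called inside the transaction, while rows still exist.
        def self.begin_fast_destroy
          all.map(&:absolute_path)
        end

        # Called after commit with the snapshot taken above.
        def self.finalize_fast_destroy(paths)
          DeleteStoredFilesWorker.perform_async(paths)
        end
      end
      ```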
      239fdc78