  1. Aug 13, 2019
    • Rework retry strategy for remote mirrors · 452bc36d
      Bob Van Landuyt authored and Douwe Maan committed
      **Prevention of running 2 simultaneous updates**
      
      Instead of using `RemoteMirror#update_status` and raising an error when a
      mirror is already being updated, we now use `Gitlab::ExclusiveLease` to
      prevent two simultaneous updates of the same mirror.
      
      When we fail to obtain a lease in 3 tries, 30 seconds apart, we bail and
      reschedule. We reschedule faster for protected branches.
      
      If the mirror already ran since it was scheduled, the job will be
      skipped.
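
      A minimal sketch of this locking scheme, assuming `Gitlab::ExclusiveLease`'s
      `try_obtain`/`cancel` interface; the lease key format, the timings, and the
      `update_mirror`/`updated_since?`/worker names are illustrative rather than
      the exact implementation:

      ```ruby
      lease_key = "remote_mirror_update:#{remote_mirror.id}" # hypothetical key format

      uuid = nil
      3.times do
        uuid = Gitlab::ExclusiveLease.new(lease_key, timeout: 5.minutes.to_i).try_obtain
        break if uuid

        sleep 30 # wait 30 seconds before trying to obtain the lease again
      end

      if uuid
        begin
          # Skip the update entirely if the mirror already ran since it was scheduled.
          update_mirror(remote_mirror) unless remote_mirror.updated_since?(scheduled_time)
        ensure
          Gitlab::ExclusiveLease.cancel(lease_key, uuid)
        end
      else
        # Could not get a lease after 3 tries: bail out and reschedule the job.
        UpdateRemoteMirrorWorker.perform_in(1.minute, remote_mirror.id, scheduled_time)
      end
      ```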
      
      **Error handling: Remote side**
      
      When an update fails because of a `Gitlab::Git::CommandError`, the cause
      could be on the remote side: for example, when branches have diverged.
      
      In this case, we'll retry 3 times, scheduled 1 or 5 minutes apart.
      
      In between retries, the mirror is marked as "to_retry", and the error is
      visible to the user when they visit the settings page.
      
      After 3 tries we'll mark the mirror as failed and notify the user.
      
      We won't track this error in Sentry, as it's unlikely we could do
      anything about it.
      
      The next relevant event will trigger a new refresh.
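
      A sketch of how this remote-side failure path could look; the retry
      counter, status values, and notification hook below are assumptions for
      illustration, not the exact code:

      ```ruby
      # Illustrative only: handle a failure that is likely caused by the remote side.
      def handle_remote_failure(remote_mirror, error, scheduled_time)
        remote_mirror.increment!(:retry_count) # hypothetical counter column

        if remote_mirror.retry_count < 3
          # Mark as "to_retry" so the error is visible on the settings page,
          # then reschedule 1 minute out (5 minutes for later attempts).
          remote_mirror.update!(update_status: 'to_retry', last_error: error.message)
          delay = remote_mirror.retry_count == 1 ? 1.minute : 5.minutes
          UpdateRemoteMirrorWorker.perform_in(delay, remote_mirror.id, scheduled_time)
        else
          # After 3 tries: mark the mirror as failed and notify the user.
          remote_mirror.update!(update_status: 'failed', last_error: error.message)
          notify_mirror_failed(remote_mirror) # hypothetical notification hook
        end
      end
      ```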
      
      **Error handling: our side**
      
      If an unexpected error occurs, we mark the mirror as failed, but we still
      retry the job using the regular Sidekiq retries with backoff, the same as
      before.
      
      The error is reported in Sentry, since it's likely we need to do
      something about it.
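
      A minimal sketch of that worker-side handling, assuming Sidekiq's standard
      `sidekiq_options retry:` behaviour; the worker shape and the
      `update_mirror` helper are illustrative:

      ```ruby
      class UpdateRemoteMirrorWorker
        include ApplicationWorker

        sidekiq_options retry: 3 # regular Sidekiq retries with exponential backoff

        def perform(remote_mirror_id, scheduled_time)
          remote_mirror = RemoteMirror.find(remote_mirror_id)

          begin
            update_mirror(remote_mirror, scheduled_time) # hypothetical helper
          rescue StandardError => e
            # Unexpected error on our side: mark the mirror as failed and re-raise so
            # the error is reported to Sentry and Sidekiq retries the job with backoff.
            remote_mirror.update!(update_status: 'failed', last_error: e.message)
            raise
          end
        end
      end
      ```
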
  2. Jun 12, 2019
    • Expose ci_default_git_depth via project API · 3ac527b4
      Fabio Pitino authored
      Enable GET and update of ci_default_git_depth via the Project API.

      Rename Project#default_git_depth to :ci_default_git_depth to give more
      context when it is used through the API.
      
      Add API documentation
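
      For example, reading and updating the attribute through the Projects API
      could look like this (host, project ID, and token are placeholders):

      ```ruby
      require 'json'
      require 'net/http'
      require 'uri'

      api    = URI('https://gitlab.example.com/api/v4/projects/42')
      header = { 'PRIVATE-TOKEN' => '<your_access_token>' }

      # GET /projects/:id now exposes ci_default_git_depth
      get = Net::HTTP::Get.new(api, header)
      res = Net::HTTP.start(api.host, api.port, use_ssl: true) { |http| http.request(get) }
      puts JSON.parse(res.body)['ci_default_git_depth']

      # PUT /projects/:id can update it
      put = Net::HTTP::Put.new(api, header.merge('Content-Type' => 'application/json'))
      put.body = { ci_default_git_depth: 20 }.to_json
      Net::HTTP.start(api.host, api.port, use_ssl: true) { |http| http.request(put) }
      ```
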
  3. Jun 06, 2019
    • Comment why forks get default_git_depth of 0 instead of nil · b8704dce
      Krasimir Angelov authored
      and simplify ProjectCiCdSetting#set_default_git_depth
    • Forks get default_git_depth 0 if the origin is nil · 52673a91
      Krasimir Angelov authored
      If the origin project has no default_git_depth set (i.e. nil), set the
      fork's default_git_depth to 0.
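
      A minimal sketch of how `ProjectCiCdSetting#set_default_git_depth`
      (mentioned in the commit above) could implement this; the associations and
      callback wiring are assumptions:

      ```ruby
      class ProjectCiCdSetting < ApplicationRecord
        DEFAULT_GIT_DEPTH = 50

        belongs_to :project
        before_create :set_default_git_depth

        private

        def set_default_git_depth
          self.default_git_depth ||= depth_from_origin || DEFAULT_GIT_DEPTH
        end

        def depth_from_origin
          origin = project.forked_from_project&.ci_cd_settings
          return unless origin

          # A fork inherits the origin's depth; if the origin has none (nil),
          # the fork gets 0 instead of nil.
          origin.default_git_depth || 0
        end
      end
      ```
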
    • Add project level git depth setting · ad9ae16d
      Krasimir Angelov authored and Fabio Pitino committed
      Introduce default_git_depth in project's CI/CD settings and set it to
      50. Use it if there is no GIT_DEPTH variable specified. Apply this
      default only to newly created projects and keep it nil for old ones
      in order to not break pipelines that rely on non-shallow clones.
      
      default_git_depth can be updated from CI/CD Settings in the UI; it must be
      either nil or an integer between 0 and 1000 (inclusive).
      
      Inherit default_git_depth from the origin project when forking projects.
      
      MR pipelines run on an MR ref (refs/merge-requests/:iid/merge), which
      contains a unique commit (i.e. the merge commit) that doesn't exist in the
      other branch/tag refs. We need to add it, because otherwise it may break
      pipelines for old projects that have already enabled Pipelines for merge
      results and have git depth 0.
      
      Document new default_git_depth project CI/CD setting
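
      A sketch of the validation and the GIT_DEPTH fallback described above,
      assuming a standard Rails validation and a simple variable lookup; the
      `effective_git_depth` helper is illustrative:

      ```ruby
      class ProjectCiCdSetting < ApplicationRecord
        # Either nil (old projects, non-shallow clones) or an integer in 0..1000.
        validates :default_git_depth,
                  numericality: {
                    only_integer: true,
                    greater_than_or_equal_to: 0,
                    less_than_or_equal_to: 1000
                  },
                  allow_nil: true
      end

      # An explicit GIT_DEPTH variable wins; otherwise fall back to the
      # project-level default (nil for old projects means no shallow clone).
      def effective_git_depth(variables, project)
        variables['GIT_DEPTH'] || project.ci_cd_settings.default_git_depth
      end
      ```
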
  4. May 02, 2019
    • Use git_garbage_collect_worker to run pack_refs · d25239ee
      Jan Provaznik authored
      PackRefs is not an expensive Gitaly call. We want to call it more often
      (not only as part of a full `gc`) because it helps keep the number of ref
      files small; too many ref files may be a problem for deployments with
      slow storage.
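
      One way a housekeeping step could pick its task, running the cheap
      pack_refs by default; the thresholds and task names are assumptions for
      illustration, not GitLab's exact values:

      ```ruby
      # Illustrative only: choose progressively heavier tasks the longer it has
      # been since the last full gc, and pack refs on every other push.
      def housekeeping_task(pushes_since_gc)
        if pushes_since_gc % 200 == 0
          :gc                  # full garbage collection
        elsif pushes_since_gc % 50 == 0
          :full_repack
        elsif pushes_since_gc % 10 == 0
          :incremental_repack
        else
          :pack_refs           # cheap Gitaly call, keeps the number of ref files small
        end
      end
      ```
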
  5. Apr 01, 2019
    • Force a full GC after importing a project · d4c6a3af
      Stan Hu authored
      During a project import, it's possible that new branches are created by
      the importer to handle pull requests that have been created from forked
      projects, which would increment the `pushes_since_gc` value via
      `HousekeepingService.increment!` before a full garbage collection gets
      to run. This causes HousekeepingService to skip the full `git gc` and
      move to the incremental repack mode. To ensure that a garbage collection
      is run to pack refs and objects, explicitly execute the task.
      
      Closes https://gitlab.com/gitlab-org/gitlab-ce/issues/59477
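
      A minimal sketch of forcing the task after import, assuming a
      Projects::HousekeepingService that accepts an explicit task; the
      `after_import` hook name is illustrative:

      ```ruby
      # Illustrative only: run a full gc explicitly once the import finishes, so
      # refs and objects get packed even though the importer's branch pushes
      # already bumped pushes_since_gc and housekeeping would otherwise fall
      # back to an incremental repack.
      def after_import(project)
        Projects::HousekeepingService.new(project, :gc).execute
      end
      ```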