Rework retry strategy for remote mirrors · 452bc36d
Bob Van Landuyt authored and Douwe Maan committed · Aug 13, 2019
**Prevention of running two simultaneous updates**

Instead of using `RemoteMirror#update_status` and raising an error when an
update is already running, we now use `Gitlab::ExclusiveLease` to prevent the
same mirror from being updated twice at the same time.

When we fail to obtain the lease after 3 tries, 30 seconds apart, we bail and
reschedule the job. We reschedule faster for protected branches.

If the mirror has already been updated since the job was scheduled, the job is
skipped.
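
A rough sketch of this lease-based flow, assuming an illustrative lease key
and helper names such as `run_update` and `reschedule` (these are not the
exact worker code):

```ruby
LEASE_TIMEOUT = 5.minutes
LOCK_RETRIES  = 3
LOCK_WAIT     = 30.seconds

def update_mirror(remote_mirror, scheduled_time)
  lease_key = "remote_mirror_update:#{remote_mirror.id}" # illustrative key
  lease = Gitlab::ExclusiveLease.new(lease_key, timeout: LEASE_TIMEOUT)

  LOCK_RETRIES.times do
    uuid = lease.try_obtain

    if uuid
      begin
        # Skip the update if the mirror already ran since the job was scheduled.
        return if remote_mirror.last_update_started_at &&
            remote_mirror.last_update_started_at > scheduled_time

        run_update(remote_mirror) # illustrative: the actual update service call
      ensure
        Gitlab::ExclusiveLease.cancel(lease_key, uuid)
      end

      return
    end

    sleep LOCK_WAIT
  end

  # Could not obtain the lease after 3 tries: bail and reschedule.
  reschedule(remote_mirror) # illustrative helper
end
```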
      
**Error handling: Remote side**

When an update fails because of a `Gitlab::Git::CommandError`, the cause could
be on the remote side: for example, when branches have diverged.

In this case, we retry 3 times, scheduled 1 or 5 minutes apart.

In between retries, the mirror is marked as "to_retry", and the error is
visible to the user when they visit the settings page.

After 3 tries, we mark the mirror as failed and notify the user.

We don't track this error in Sentry, as it's unlikely we can do anything
about it.
      
The next event that would normally trigger a mirror update will trigger a new
refresh.
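
A sketch of how this remote-side handling could look; the retry counter,
status helpers, and notification call below are illustrative assumptions, not
the exact implementation:

```ruby
MAX_RETRIES = 3

def update_mirror(remote_mirror)
  run_update(remote_mirror) # illustrative: the actual update service call
rescue Gitlab::Git::CommandError => e
  # Likely a problem on the remote side (e.g. diverged branches),
  # so this error is not reported to Sentry.
  if remote_mirror.update_retry_count < MAX_RETRIES
    # Mark as "to_retry" so the error shows up on the settings page.
    remote_mirror.update_to_retry!(error_message: e.message) # illustrative helper

    # Reschedule 1 minute after the first failure, 5 minutes after later ones.
    delay = remote_mirror.update_retry_count > 1 ? 5.minutes : 1.minute
    self.class.perform_in(delay, remote_mirror.id)
  else
    # After 3 tries: mark the mirror as failed and notify the user.
    remote_mirror.update_failed!(error_message: e.message) # illustrative helper
    notify_mirror_failed(remote_mirror)                    # illustrative helper
  end
end
```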
      
**Error handling: Our side**

If an unexpected error occurs, we mark the mirror as failed, but we still
retry the job based on the regular Sidekiq retries with backoff, the same as
before.

The error is reported in Sentry, since it's likely we need to do something
about it.
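
A sketch of the handling on our side, where re-raising the error keeps the
regular Sidekiq retries with backoff and the Sentry reporting (method names
are again illustrative):

```ruby
def update_mirror(remote_mirror)
  run_update(remote_mirror) # illustrative: the actual update service call
rescue StandardError => e
  # Unexpected error on our side: mark the mirror as failed...
  remote_mirror.update_failed!(error_message: e.message) # illustrative helper

  # ...but re-raise so Sidekiq retries the job with its regular backoff
  # and the error is reported to Sentry.
  raise e
end
```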