  1. Mar 28, 2018
  2. Mar 20, 2018
  3. Mar 07, 2018
  4. Mar 01, 2018
  5. Feb 14, 2018
    • Simplify license generator error handling · 5b3b2b82
      Stan Hu authored
    • Fix Error 500s loading repositories with no master branch · 35b3a0b9
      Stan Hu authored
      We removed the exception handling for Rugged errors in !16770, which
      revealed that the licensee gem attempts to retrieve a license file
      via Rugged from `refs/heads/master` by default. If that branch
      did not exist, a Rugged::ReferenceError was raised.
      
      There were two issues:
      
      1. Not every project uses `master` as the default branch. This
      change uses the head commit to identify the license.
      
      2. Removing the exception handling caused such repositories to fail
      to load. We can safely rescue and ignore any Rugged error here,
      since it simply means we were unable to load a license file.
      
      Closes #43268
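The two fixes above can be sketched as follows. This is a minimal illustration, not GitLab's actual API: `LicenseDetector`, its `RepoError` class (a stand-in for Rugged errors), and the lookup block are all hypothetical names. The point is resolving the license from the head commit rather than `refs/heads/master`, and turning any repository-access error into "no license" instead of an Error 500.

```ruby
# Hypothetical sketch of the fix:
# 1. resolve the license from the head commit, not refs/heads/master;
# 2. rescue repository-access errors so the page still loads.
class LicenseDetector
  RepoError = Class.new(StandardError) # stand-in for Rugged errors

  def initialize(head_commit_id, &license_lookup)
    @head_commit_id = head_commit_id
    @license_lookup = license_lookup # e.g. a Rugged-backed file lookup
  end

  def license
    @license_lookup.call(@head_commit_id)
  rescue RepoError
    nil # no readable license file; don't fail the whole repository page
  end
end
```

A repository whose lookup raises (for example because the ref is missing) yields a `nil` license rather than an exception bubbling up to the controller.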
  6. Feb 07, 2018
  7. Feb 02, 2018
  8. Feb 01, 2018
    • Client changes for Tag,BranchNamesContainingCommit · 0a47d192
      Zeger-Jan van de Weg authored
      As part of gitlab-org/gitaly#884, this commit contains the client
      implementation for both TagNamesContainingCommit and
      BranchNamesContainingCommit. The interface in the Repository model stays
      the same, but the implementation on the server side, i.e. Gitaly, uses
      `for-each-ref`, as opposed to `branch` or `tag`, which both aren't
      plumbing commands. The result stays the same.
      
      On the server side, we have the opportunity to limit the number of names
      to return. However, this is not supported on the frontend yet; my
      proposal to use this ability is gitlab-org/gitlab-ce#42581. For now, the
      ability is unused, as using it would put more behaviour behind a feature
      flag, which might lead to unexpected changes on page refresh, for example.
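The server side's use of the plumbing command can be illustrated like this. The sketch below is not Gitaly's implementation: it only parses the output format of `git for-each-ref --contains <sha>` (shown here as a hard-coded sample, since no real repository is available) into branch or tag names.

```ruby
# Illustrative only: parse `git for-each-ref --contains <sha>` output
# (lines of "<sha> <type> <refname>") into short ref names.
def ref_names(for_each_ref_output, prefix)
  for_each_ref_output.lines.filter_map do |line|
    _sha, _type, ref = line.split
    ref.delete_prefix(prefix) if ref.start_with?(prefix)
  end
end

# Sample output standing in for a real `git for-each-ref` call.
SAMPLE = <<~OUT
  1eb2a862 commit refs/heads/master
  1eb2a862 commit refs/heads/feature/gitaly-884
  9f3c1d00 commit refs/tags/v10.4.0
OUT

ref_names(SAMPLE, "refs/heads/") # => ["master", "feature/gitaly-884"]
```

Filtering on the `refs/heads/` or `refs/tags/` prefix is what lets one plumbing command back both BranchNamesContainingCommit and TagNamesContainingCommit.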
  9. Jan 30, 2018
  10. Jan 29, 2018
  11. Jan 25, 2018
  12. Jan 23, 2018
  13. Jan 16, 2018
  14. Jan 15, 2018
  15. Jan 11, 2018
  16. Jan 10, 2018
  17. Jan 05, 2018
  18. Dec 20, 2017
  19. Dec 19, 2017
    • Load commit in batches for pipelines#index · c6edae38
      Zeger-Jan van de Weg authored
      Uses `list_commits_by_oid` on the CommitService to request the commits
      needed for pipelines. These commits are needed to display the user who
      created the commit and the commit title.
      
      This includes fixes for failing tests that depended on the commit
      being `nil`. Now that these are batch loaded, this doesn't happen
      anymore and the commits are instances of BatchLoader.
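The batching pattern is roughly the following. This is a simplified stand-in for the batch-loader gem, not its real interface: `CommitBatch` and its methods are hypothetical, and the fetcher lambda plays the role of a `list_commits_by_oid`-style RPC. Lookups are queued lazily and resolved with a single backend call on first access.

```ruby
# Sketch of lazy batch loading: queue oids, fetch them all at once
# the first time any result is actually needed.
class CommitBatch
  def initialize(fetcher)
    @fetcher = fetcher # e.g. ->(oids) { one list_commits_by_oid call }
    @pending = []
    @loaded = nil
  end

  # Returns a lazy handle; nothing is fetched until it is called.
  def find(oid)
    @pending << oid
    -> { load_all[oid] }
  end

  private

  def load_all
    # One backend round trip for every oid queued so far.
    @loaded ||= @fetcher.call(@pending.uniq)
  end
end
```

Queuing every row's commit oid first and resolving afterwards is what turns N per-commit lookups in pipelines#index into one batched call.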
  20. Dec 14, 2017
  21. Dec 13, 2017
  22. Dec 12, 2017
  23. Dec 08, 2017
    • Move the circuitbreaker check out in a separate process · f1ae1e39
      Bob Van Landuyt authored
      Moving the check out of the general requests makes sure we don't have
      any slowdown in the regular requests.
      
      To keep the process performing these checks small, the check is still
      performed inside a unicorn, but one that is called from a process
      running on the same server.
      
      Because the checks are now done outside normal requests, we can have a
      simpler failure strategy:
      
      The check is now performed in the background every
      `circuitbreaker_check_interval`. Failures are logged in redis. The
      failures are reset when the check succeeds. Per check we will try
      `circuitbreaker_access_retries` times within
      `circuitbreaker_storage_timeout` seconds.
      
      When the number of failures exceeds
      `circuitbreaker_failure_count_threshold`, we will block access to the
      storage.
      
      After `failure_reset_time` of no checks, we will clear the stored
      failures. This could happen when the process that performs the checks
      is not running.
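The failure strategy can be sketched like this. This is not GitLab's circuitbreaker code: the class and a plain Hash standing in for Redis are illustrative, and only the `circuitbreaker_failure_count_threshold` behaviour from the settings above is modelled. Failures accumulate across checks, a successful check resets them, and access is blocked once the threshold is exceeded.

```ruby
# Minimal sketch of the background-check failure strategy:
# count failures in a shared store, reset on success, trip on threshold.
class StorageCircuitBreaker
  def initialize(failure_count_threshold:, store: {})
    @threshold = failure_count_threshold
    @store = store # stand-in for Redis
  end

  # Called once per periodic background check.
  def record_check(success)
    if success
      @store[:failures] = 0 # failures are reset when the check succeeds
    else
      @store[:failures] = failures + 1
    end
  end

  def failures
    @store.fetch(:failures, 0)
  end

  def circuit_broken?
    failures > @threshold
  end
end
```

Request-handling code then only needs to read `circuit_broken?` from the shared store; it never performs the (potentially slow) storage check itself.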
  24. Dec 07, 2017
  25. Dec 05, 2017
  26. Dec 04, 2017
  27. Nov 23, 2017
  28. Nov 21, 2017
  29. Nov 03, 2017
  30. Oct 27, 2017
    • Fetch the merged branches at once · 57d7ed05
      Lin Jen-Shin (godfat) authored
    • Cache commits on the repository model · 3411fef1
      Zeger-Jan van de Weg authored
      Currently, when requesting a commit from the Repository model, the
      result is not cached. This means we're fetching the same commit by oid
      multiple times during the same request. To prevent us from doing this,
      we now cache results. Caching is done only based on object id (aka SHA).
      
      Given we cache on the Repository model, results are scoped to the
      associated project, even though the chance of two repositories having
      the same oids for different commits is small.
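The per-repository memoisation described above looks roughly like this. A hypothetical sketch, not the real model: the class name and the lookup block (standing in for the actual commit lookup) are assumptions. Commits are cached by SHA on the model instance, so repeated lookups within one request hit the backing store only once.

```ruby
# Sketch of caching commits by oid (SHA) on the repository model.
class Repository
  def initialize(&lookup)
    @lookup = lookup     # stand-in for the real commit lookup
    @commit_cache = {}   # sha => commit, scoped to this repository
  end

  def commit(sha)
    # Hash#fetch runs the block only on a cache miss; the block also
    # stores the result so the next lookup for this sha is free.
    @commit_cache.fetch(sha) { @commit_cache[sha] = @lookup.call(sha) }
  end
end
```

Because the cache lives on the Repository instance, two repositories never share entries, which is why keying on the oid alone is acceptable despite the (small) chance of oid reuse across repositories.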