  1. Sep 13, 2019
  2. Aug 24, 2019
  3. Aug 23, 2019
  4. Jul 25, 2019
  5. Jul 17, 2019
  6. Jun 28, 2019
  7. Jun 25, 2019
  8. May 21, 2019
  9. Apr 10, 2019
  10. Mar 19, 2019
    • Integrate Gitlab::Keys with Gitlab::Shell · 26dadbc9
      Patrick Bajao authored and Nick Thomas committed
      In this commit, some methods that aren't being used
      are removed from `Gitlab::Shell`. They are the following:
      - `#remove_keys_not_found_in_db`
      - `#batch_read_key_ids`
      - `#list_key_ids`
      
      The corresponding methods in `Gitlab::Keys` have been
      removed as well (see the sketch below).
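      A hypothetical sketch of the shape this integration could take:
      `Gitlab::Shell` delegating key management to `Gitlab::Keys` instead of
      duplicating it. Only the removed method names above come from the
      commit message; the delegation, method signatures, and constructor
      below are assumptions for illustration.

      ```ruby
      # Hypothetical sketch only; not the actual diff. Gitlab::Shell
      # forwards authorized_keys maintenance to Gitlab::Keys, so helpers
      # like #list_key_ids no longer need to live in both classes.
      module Gitlab
        class Shell
          def add_key(key_id, key_content)
            gitlab_keys.add_key(key_id, key_content)
          end

          def remove_key(key_id)
            gitlab_keys.remove_key(key_id)
          end

          private

          # Single point of delegation to the key-management class.
          def gitlab_keys
            @gitlab_keys ||= Gitlab::Keys.new
          end
        end
      end
      ```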
  11. Feb 05, 2019
  12. Dec 03, 2018
  13. Sep 11, 2018
  14. Aug 23, 2018
  15. Jul 23, 2018
  16. Jul 20, 2018
  17. Apr 23, 2018
  18. Apr 05, 2018
  19. Mar 21, 2018
  20. Dec 15, 2017
    • Don't use Markdown cache for stubbed settings in specs · 10885edf
      Sean McGivern authored
      The ApplicationSetting model uses the CacheMarkdownField concern, which updates
      the cached HTML when the field is updated in the database. However, in specs,
      when we want to test conditions using ApplicationSetting, we stub it, because
      this is accessed in different ways throughout the application.
      
      This means that if one spec caches one of the Markdown fields, and a
      later spec uses `stub_application_setting` to set the raw value of that
      field, the cached value will still be the original one. We can work
      around this by ignoring the Markdown cache in contexts where we're
      using `stub_application_setting`.
      
      We could be smarter and only do this for the Markdown fields of the
      model, but this is probably fine. A sketch of the workaround follows
      below.
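      A minimal sketch of this workaround, assuming an RSpec suite with
      rspec-mocks. The `cached_html_up_to_date?` predicate named below is an
      assumed name for the CacheMarkdownField freshness check, not a
      confirmed API; the helper name and file path are hypothetical.

      ```ruby
      # spec/support/stub_markdown_cache.rb -- hedged sketch, not GitLab's code.
      module StubMarkdownCache
        # Call this alongside stub_application_setting so a stubbed raw
        # value is re-rendered instead of being served from a stale
        # cached-HTML column. `cached_html_up_to_date?` is an assumed name
        # for the concern's cache-freshness predicate.
        def stub_markdown_cache_as_stale
          allow_any_instance_of(ApplicationSetting)
            .to receive(:cached_html_up_to_date?)
            .and_return(false)
        end
      end

      RSpec.configure do |config|
        config.include StubMarkdownCache
      end
      ```

      With this in place, a spec that stubs a Markdown-backed setting would
      call `stub_markdown_cache_as_stale` first, so any HTML cached by an
      earlier example is ignored rather than returned verbatim.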
  21. Dec 08, 2017
    • Move the circuitbreaker check out into a separate process · f1ae1e39
      Bob Van Landuyt authored
      Moving the check out of the general requests makes sure we don't
      cause any slowdown in the regular requests.
      
      To keep the process performing these checks small, the check is still
      performed inside a Unicorn worker, but it is called from a separate
      process running on the same server.
      
      Because the checks are now done outside normal requests, we can use a
      simpler failure strategy:
      
      The check is performed in the background every
      `circuitbreaker_check_interval`. Failures are logged in Redis and
      reset when a check succeeds. Each check tries up to
      `circuitbreaker_access_retries` times within
      `circuitbreaker_storage_timeout` seconds.
      
      When the number of failures exceeds
      `circuitbreaker_failure_count_threshold`, we will block access to the
      storage.
      
      After `failure_reset_time` without any checks, the stored failures are
      cleared. This can happen when the process that performs the checks is
      not running. A sketch of this failure-tracking logic follows below.
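      A simplified sketch of this failure strategy, using the setting names
      from the commit message. The Redis key layout, the
      `storage_available?` probe, and the `block_storage!` action are
      assumptions for illustration, not GitLab's actual implementation.

      ```ruby
      require 'redis'
      require 'timeout'

      # Simplified sketch of the background circuitbreaker check. Run
      # check! once every circuitbreaker_check_interval from a scheduler.
      class StorageCircuitbreakerCheck
        def initialize(storage_path, settings, redis: Redis.new)
          @storage_path = storage_path
          @settings = settings
          @redis = redis
        end

        def check!
          if accessible?
            @redis.del(failures_key) # a successful check resets the failures
          else
            record_failure
          end
        end

        private

        # Try up to circuitbreaker_access_retries times; each attempt must
        # answer within circuitbreaker_storage_timeout seconds.
        def accessible?
          @settings[:circuitbreaker_access_retries].times do
            begin
              Timeout.timeout(@settings[:circuitbreaker_storage_timeout]) do
                return true if storage_available?
              end
            rescue Timeout::Error
              next
            end
          end
          false
        end

        def record_failure
          failures = @redis.incr(failures_key)
          # Refresh the TTL on every failure: if the checker process dies,
          # the stored failures clear themselves after failure_reset_time.
          @redis.expire(failures_key, @settings[:failure_reset_time])
          block_storage! if failures > @settings[:circuitbreaker_failure_count_threshold]
        end

        # Hypothetical probe; the real check accesses the storage mount.
        def storage_available?
          File.directory?(@storage_path)
        end

        # Hypothetical action; the real system marks the storage as
        # circuit-broken so requests to it fail fast.
        def block_storage!
          warn "circuitbreaker: blocking access to #{@storage_path}"
        end

        def failures_key
          "circuitbreaker:#{@storage_path}:failures"
        end
      end
      ```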
  22. Nov 03, 2017
  23. Oct 17, 2017
  24. Sep 22, 2017
  25. Aug 16, 2017
  26. Aug 09, 2017
  27. Jul 27, 2017
  28. Jul 18, 2017
  29. Jul 07, 2017
  30. Jul 06, 2017
  31. Jun 21, 2017
  32. Jun 02, 2017
  33. Mar 28, 2017
  34. Feb 23, 2017
  35. Nov 30, 2016
  36. Sep 21, 2015
  37. Aug 20, 2015
  38. Jul 07, 2015