- Jul 19, 2019
-
-
Mayra Cabrera authored
Fix Gitaly auto-detection caching. Closes #64802. See merge request gitlab-org/gitlab-ce!30954 (cherry picked from commit eb3f465e).
-
- Jul 16, 2019
-
-
John Cai authored
Whenever we use the rugged implementation, we are going straight to disk so we want to bypass the disk access check.
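A minimal sketch of that idea, with hypothetical method names (not the exact GitLab API): when Rugged is explicitly enabled we skip the disk-access check entirely, since the Rugged code path reads the repository straight from disk.

```
def use_rugged?(repository, feature_key)
  # Explicit opt-in to Rugged: bypass the disk-access check, we go to disk anyway.
  return true if Feature.enabled?(feature_key)

  # Otherwise fall back to auto-detecting whether this node can reach the disk.
  can_use_disk?(repository.storage)
end
```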
-
- Jul 15, 2019
-
-
John Cai authored
-
- Jul 10, 2019
-
-
Mayra Cabrera authored
Suggests using a JSON structured log instead. Related to https://gitlab.com/gitlab-org/gitlab-ce/issues/54102
-
- Jul 09, 2019
-
-
John Cai authored
-
- Jul 05, 2019
-
-
John Cai authored
Add a module, used as a singleton, to determine whether or not Rails is able to access the disk.
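A minimal, hypothetical sketch of such a singleton module: it probes the storage path once and memoizes the result so the filesystem is not re-checked on every call. The module and method names are illustrative, not the actual GitLab API.

```
module DiskAccessible
  extend self

  def can_access_disk?(storage_path)
    @results ||= {}
    return @results[storage_path] if @results.key?(storage_path)

    # One probe per storage path; subsequent calls return the cached answer.
    @results[storage_path] = File.directory?(storage_path)
  end
end

DiskAccessible.can_access_disk?('/var/opt/gitlab/git-data/repositories')
```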
-
Zeger-Jan van de Weg authored
The metric was used to correlate Gitaly requests with the Rails controller and action combination. However, Kibana provides better observability for this specific metric and can handle high cardinality much better. No Grafana dashboard currently depends on this metric being exposed.
-
- Jun 18, 2019
-
-
Zeger-Jan van de Weg authored
The feature flag was introduced turned off by default; now it will default to being turned on. Users can still turn this feature off through the Rails console by running: `Feature.disable("gitaly_catfile-cache")` Another option is to manage the number of items the LRU cache will contain by updating the `config.toml` for Gitaly; the setting is `catfile_cache_size`: https://gitlab.com/gitlab-org/gitaly/blob/0dcb5c579e63754f557aef91a4fa7a00e5b8b127/config.toml.example#L27 Closes: https://gitlab.com/gitlab-org/gitaly/issues/1712
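A hedged sketch of what a default-enabled flag check typically looks like in the Rails codebase (the exact call site for the cat-file cache may differ), together with the console opt-out quoted above:

```
# Default-on flag: enabled unless an operator has explicitly disabled it.
if Feature.enabled?('gitaly_catfile-cache', default_enabled: true)
  # use the Gitaly cat-file LRU cache
end

# Opt out from the Rails console:
Feature.disable('gitaly_catfile-cache')
```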
-
Zeger-Jan van de Weg authored
The GitalyClient held a lot of logic which was all very tightly coupled. In this instance the feature logic was extracted to make it do just a little less and to create a bit more focus in the GitalyClient's responsibilities.
-
- Jun 03, 2019
-
-
Delta islands were implemented last release in https://gitlab.com/gitlab-org/gitaly/merge_requests/1110. The feature has been enabled on production and works as expected.
-
- May 07, 2019
-
-
Jacob Vosmaer (GitLab) authored
-
- May 05, 2019
-
-
Stan Hu authored
-
- Apr 29, 2019
-
-
- Apr 18, 2019
-
-
Andrew Newdigate authored
This change is a fairly straightforward refactor to extract the tracing and correlation-id code from the gitlab rails codebase into the new LabKit-Ruby project. The corresponding import into LabKit-Ruby was in https://gitlab.com/gitlab-org/labkit-ruby/merge_requests/1 The code itself remains very similar for now. Extracting it allows us to reuse it in other projects, such as Gitaly-Ruby. This will give us the advantages of correlation-ids and distributed tracing in that project too.
-
- Apr 17, 2019
-
-
Stan Hu authored
This adds the backtrace to a table to show exactly where the Gitaly call was made to make it easier to understand where the call originated. This change also collapses the details in the same row to improve the usability when there is a backtrace.
-
- Mar 28, 2019
-
-
John Cai authored
-
- Mar 27, 2019
-
-
Stan Hu authored
This avoids the case:

```
allow_ref_name_caching do
  allow_ref_name_caching do
    # using-feature
  end
end
```
-
Stan Hu authored
For a given merge request, it's quite common to see duplicate FindCommit Gitaly requests because the Gitaly CommitService caches the request by the commit SHA, not by the ref name. However, most of the duplicate requests use the ref name, so the cache is never actually used in practice. This leads to unnecessary requests that slow performance. This commit allows certain callers to bypass the ref name to OID conversion in the cache. We don't do this by default because it's possible the tip of the branch changes during the commit, which would cause the caller to get stale data. This commit also forces the Ci::Pipeline to use the full ref name so that caching can work for merge requests. Closes https://gitlab.com/gitlab-org/gitlab-ce/issues/57083
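A hedged sketch of how a caller opts in, using the `allow_ref_name_caching` helper shown in the entry above (the call inside the block is illustrative):

```
allow_ref_name_caching do
  # Inside this block, repeated FindCommit requests for the same ref name may
  # be served from the cache instead of re-resolving the ref each time.
  project.repository.commit('refs/heads/master')
end
```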
-
Stan Hu authored
This makes it easier to debug Gitaly performance issues in the field. This commit also makes the tracking of query time thread-safe via RequestStore.
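A minimal sketch of per-request, thread-safe accumulation with the RequestStore gem (the key name is illustrative):

```
require 'request_store'

def add_query_time(duration)
  RequestStore.store[:gitaly_query_time] ||= 0
  RequestStore.store[:gitaly_query_time] += duration
end

def query_time
  RequestStore.store[:gitaly_query_time] || 0
end
```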
-
- Mar 13, 2019
-
-
Nick Thomas authored
-
- Mar 11, 2019
-
-
Mark Lapierre authored
We typically don't want to enforce request limits in production. However, we have some production-like test environments, i.e., ones where `Rails.env.production?` returns `true`. We do want to be able to check whether the limit is being exceeded while testing in those environments.
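A hedged sketch of that behaviour; the environment variable used to opt in here is an assumption, not the actual toggle:

```
def enforce_gitaly_request_limits?
  # Dev and test environments always enforce the limit.
  return true unless Rails.env.production?

  # Production-like test environments can opt in explicitly.
  ENV['GITALY_ENFORCE_REQUEST_LIMITS'] == '1'
end
```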
-
- Mar 06, 2019
-
-
John Cai authored
-
Andrew Newdigate authored
This style change enforces `return if ...` instead of `return nil if ...` to save maintainers a few minor review points
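The rule in a nutshell (illustrative method):

```
def parse(blob)
  return if blob.nil? # preferred over `return nil if blob.nil?`

  blob.upcase
end
```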
-
- Mar 05, 2019
-
-
John Cai authored
-
- Feb 28, 2019
-
-
Nick Thomas authored
This reverts commit 00675311.
-
- Feb 22, 2019
-
-
Prior to this change, 35 Gitaly RPCs were allowed. But recently there's been a renewed interest in performance, and by lowering the number of allowed calls, new N+1's will pop up. Later commits will add blocks to ignore the raised errors, followed by an issue for each to be fixed.
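A hedged sketch of the block-based escape hatch those later commits add; the helper name follows the GitLab codebase, and the calls inside the block are illustrative:

```
Gitlab::GitalyClient.allow_n_plus_1_calls do
  # Known, temporarily tolerated N+1 Gitaly calls go here until the
  # corresponding issue is fixed.
  merge_requests.each { |mr| mr.target_project.repository.commit(mr.target_branch) }
end
```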
-
- Jan 25, 2019
-
-
Valery Sizov authored
Backport of https://gitlab.com/gitlab-org/gitlab-ee/merge_requests/7434
-
- Jan 22, 2019
-
-
Andrew Newdigate authored
This change allows the GitLab rails and sidekiq components to receive tracing spans from upstream services such as Workhorse and pass these spans on to downstream services including Gitaly and Sidekiq. This change will also emit traces for incoming and outgoing requests using the propagated trace information. This will allow operators and engineers to view traces across the Workhorse, GitLab Rails, Sidekiq and Gitaly components. Additional intra-service instrumentation will be added in future changes.
-
- Dec 20, 2018
-
-
Ahmad Hassan authored
-
- Dec 19, 2018
-
-
Ahmad Hassan authored
-
- Dec 17, 2018
-
-
Ahmad Hassan authored
-
- Dec 11, 2018
-
-
Ahmad Hassan authored
-
- Dec 07, 2018
-
-
Andrew Newdigate authored
-
- Dec 06, 2018
-
-
Kamil Trzciński authored
This reverts commit 3560b119.
-
Kamil Trzciński authored
This changes `correlation_id` to be `correlation-id` when passed via jobs
-
Kamil Trzciński authored
The Correlation ID is taken or generated from received X-Request-ID. Then it is being passed to all executed services (sidekiq workers or gitaly calls). The Correlation ID is logged in all structured logs as `correlation_id`.
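A minimal sketch of the propagation idea (not GitLab's actual middleware): take the correlation ID from the incoming `X-Request-ID` header or generate one, then include it in every structured log line and hand it to downstream work.

```
require 'securerandom'
require 'json'

def correlation_id(headers)
  headers['X-Request-ID'] || SecureRandom.uuid
end

id = correlation_id('X-Request-ID' => 'req-abc123')
puts({ severity: 'INFO', message: 'scheduling Gitaly call', correlation_id: id }.to_json)
```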
-
- Nov 27, 2018
-
-
Ahmad Hassan authored
-
- Nov 20, 2018
-
-
Zeger-Jan van de Weg authored
In HEAD~ we removed the ID from the class, which created a bug. Given we don't need the ID anymore, it has been removed and the code simplified.
-
Zeger-Jan van de Weg authored
This reverts merge request !23229
-
Sean McGivern authored
This reverts merge request !23140
-