- Sep 12, 2019
-
Peter Leitzen authored
Setup counter for Productivity Analytics. See merge request gitlab-org/gitlab-ce!32915 (cherry picked from commit 44dd3d2d).
-
- Sep 10, 2019
-
Alessio Caiazza authored
This cop prevents you from using `file` in the API; it points you to the development documentation about Workhorse file acceleration.
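A minimal sketch of what such a cop could look like; the class name, namespace, and matched pattern below are illustrative assumptions, not the actual cop:

```ruby
# frozen_string_literal: true

module RuboCop
  module Cop
    module Gitlab
      # Hypothetical sketch: flags bare `file` calls in API code and points
      # developers to the Workhorse file acceleration documentation.
      class AvoidFileInApi < RuboCop::Cop::Cop
        MSG = 'Do not handle file uploads directly in the API. ' \
              'See the Workhorse file acceleration development documentation.'

        def_node_matcher :file_call?, '(send nil? :file ...)'

        def on_send(node)
          add_offense(node) if file_call?(node)
        end
      end
    end
  end
end
```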
-
Signed-off-by: Dmitriy Zaporozhets <dmitriy.zaporozhets@gmail.com>
-
Qingyu Zhao authored
Move Gitlab::SidekiqMonitor to the Gitlab::SidekiqDaemon::Monitor namespace:
- Class name and file name change
- File path change to lib/gitlab/sidekiq_daemon/monitor.rb
- Update class usage/references in other files, including documentation
-
Nick Thomas authored
For zero-downtime deployments in a mixed code environment between 12.2 and 12.3, the branch and tag name cache is incorrectly invalidated: a push to an old machine will not clear the redis set version of the cache on the new machine. This commit ensures that, in 12.3, both the set and non-set versions of the cache are invalidated, but it does not write or consult the set version of the cache. In 12.4, it will be safe to switch branch and tag names to the redis set cache, as both it and the legacy cache will be invalidated appropriately in such a mixed code environment. This delays the full implementation of the feature by one release, but in the absence of a credible feature-flagging strategy, and amid an abundance of caution about the effects of too-eager cache expiration, I believe this is the best approach available to us.
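A hedged sketch of the transitional invalidation described above; the method and cache names are illustrative, not the exact GitLab implementation:

```ruby
# During 12.3, expire both cache flavours on push, but keep reading and
# writing only the legacy (non-set) cache.
def expire_branch_and_tag_name_caches
  # Legacy cache: still the source of truth in 12.3.
  expire_method_caches(%i[branch_names tag_names])

  # Redis set cache: expired for forward compatibility, but not yet written
  # to or consulted, so old and new nodes stay consistent during a
  # mixed-version deploy.
  redis_set_cache.expire(:branch_names, :tag_names)
end
```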
-
Markus Koller authored
We had similar code in a few places to redirect to the last page if the given page number is out of range. This unifies the handling in a new controller concern and adds usage of it in all snippet listings.
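A minimal sketch of such a concern, assuming a Kaminari-style paginated collection; the module and method names are assumptions:

```ruby
# Illustrative controller concern: redirect to the last page when the
# requested page is past the end of the collection.
module RedirectsOutOfRangePages
  extend ActiveSupport::Concern

  private

  # Returns true when a redirect was issued, so callers can bail out early.
  def redirect_out_of_range(collection)
    return false if collection.total_pages.zero?
    return false unless collection.current_page > collection.total_pages

    redirect_to url_for(page: collection.total_pages)
    true
  end
end
```

A listing action could then call `return if redirect_out_of_range(@snippets)` after loading the page.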
-
Markus Koller authored
- Avoid N+1 queries for authors and comment counts
- Avoid an additional snippet existence query
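A sketch of the kind of change described above, assuming `Snippet` has an `author` association and comments are stored as notes (names are assumptions):

```ruby
# Before: one query per snippet for its author, plus one COUNT per snippet.
snippets = Snippet.page(params[:page])

# After: preload authors in one query and fetch all comment counts at once.
snippets = Snippet.includes(:author).page(params[:page])
comment_counts = Note.where(noteable_type: 'Snippet', noteable_id: snippets.map(&:id))
                     .group(:noteable_id)
                     .count
```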
-
Nick Thomas authored
This reverts commit c6ccc07f.
-
Nick Thomas authored
-
This makes sure we build the correct variables for testing translations. When translating, we could be specifying the variables in different forms for each id: in the singular we could be using a `%{hash}` interpolation, while in the plural we could be using a `%d` interpolation. This changes the tests to accommodate that: we now use the variables used in the relevant translation id as the source for the variables we mix in within specs.
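A sketch of the two interpolation styles the specs now have to handle, using GitLab's gettext helpers; the message strings are illustrative:

```ruby
count = 3

# The singular id uses hash-style interpolation; the plural id uses a
# format-string placeholder. Specs must supply variables in the form the
# relevant id actually uses.
singular = _('Last updated %{time_ago}')
plural   = n_('%d issue selected', '%d issues selected', count)

singular % { time_ago: '3 minutes ago' }  # => "Last updated 3 minutes ago"
plural % count                            # => "3 issues selected"
```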
-
Francisco Javier López authored
In case the source and the target project are the same, the source branch is the default branch, and the target branch is not present, we avoid prefilling the target branch with the repository default branch, letting the user decide.
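A hedged sketch of the guard described above; the method and attribute names are assumptions, not the actual implementation:

```ruby
# Skip prefilling when source and target project are the same, the source
# branch is the default branch, and no target branch was given, so that the
# user chooses the target branch explicitly.
def prefill_target_branch?(merge_request)
  return false if merge_request.source_project == merge_request.target_project &&
    merge_request.source_branch == merge_request.target_project.default_branch &&
    merge_request.target_branch.blank?

  true
end
```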
-
Jan Provaznik authored
This presenter will be used in an upcoming MR which adds rendering of epic events on group activity page.
-
Etienne Baqué authored
-
- Sep 09, 2019
-
Mo Khan authored
-
Michael Kozono authored
-
Enrique Alcántara authored
- Create HAML UI to select a cloud provider to create a cluster
- Add a query param to the :new cluster view to display a specific cluster provider form depending on the value of the provider query param
- Update unit tests and e2e tests to reflect these changes
-
Peter Leitzen authored
Utilize `json_fields` to expose fields via `Service#as_json(only: json_fields)`.
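A sketch of the pattern, assuming `json_fields` returns the whitelisted attribute names; the class and field list below are illustrative:

```ruby
class Service < ApplicationRecord
  # Illustrative whitelist; the real field list lives on the service class.
  def json_fields
    %w[id title active]
  end
end

service = Service.new(title: 'Slack', active: true)
service.as_json(only: service.json_fields)
```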
-
Jose Ivan Vargas Lopez authored
The carets function as buttons that allow panels on the monitoring dashboard to be collapsed and expanded.
-
Kamil Trzciński authored
ActiveModel::Serialization is simple in that it recursively calls `as_json` on each object to serialize everything. However, for a model like a Project, this can generate a query for every single association, which can add up to tens of thousands of queries and lead to memory bloat. To improve this, we can do several things:
1. We use `tree:` and `preload:` to automatically generate a list of all preloads that could be used to serialize objects in bulk.
2. We observe that a single project has many issues, merge requests, etc. Instead of serializing everything at once, which could lead to database timeouts and high memory usage, we take each top-level association and serialize the data in batches. For example, we serialize the first 100 issues and preload all of their associated events, notes, etc. before moving on to the next batch. When we're done, we serialize merge requests in the same way. We repeat this pattern for the remaining associations specified in import_export.yml.
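A hedged sketch of the batching described above; the writer object, association list, and batch size are illustrative assumptions:

```ruby
BATCH_SIZE = 100

# Serialize one top-level association at a time, in batches, preloading only
# what the current batch needs (Rails 5-era Preloader API shown).
# `json_writer` is a hypothetical object that streams JSON fragments to disk.
project.issues.find_in_batches(batch_size: BATCH_SIZE) do |batch|
  ActiveRecord::Associations::Preloader.new.preload(batch, [:events, { notes: :author }])

  batch.each { |issue| json_writer.append('issues', issue.as_json) }
end
```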
-
Francisco Javier López authored
Lowering the limit when performing a search from 1001 to 101. This will allow us to speed up this process.
-
drew authored
-
Mathieu Parent authored
As described in the documentation. Fixes: #58180. Also remove the requirement coupling `domain_blacklist_enabled` and `domain_blacklist`.
-
This change implements the Application Statistics API.
-
- Sep 07, 2019
-
Andrea Leone authored
-
Jan Provaznik authored
Because we don't have any destroy callbacks (or other logic triggered on event destroy), there is no reason for deleting events inefficiently one by one; instead we can use `:delete_all`.
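A sketch of the difference, assuming an `events` association on the record whose events are being removed:

```ruby
# Before: instantiates every event and runs per-row callbacks.
user.events.each(&:destroy)

# After: a single SQL DELETE; safe here because events have no destroy
# callbacks or dependent cleanup logic.
user.events.delete_all
```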
-
vshushlin authored
Just replace `RSA.new` with `PKey.read`.
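A minimal sketch of the swap; the file path is hypothetical:

```ruby
require 'openssl'

pem = File.read('private_key.pem') # hypothetical path

# Before: assumes the key material is RSA.
# key = OpenSSL::PKey::RSA.new(pem)

# After: OpenSSL::PKey.read detects the key type (RSA, EC, ...) from the
# PEM contents.
key = OpenSSL::PKey.read(pem)
```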
-
- Sep 06, 2019
-
Stan Hu authored
spec/controllers/registrations_controller_spec.rb polluted the test environment by changing the Recaptcha configuration. We now stub the controller's `verify_recaptcha` method instead of doing that. Closes https://gitlab.com/gitlab-org/gitlab-ce/issues/67133
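A minimal sketch of the stubbing approach; the spec shape is illustrative:

```ruby
RSpec.describe RegistrationsController, type: :controller do
  before do
    # Stub the helper instead of toggling the global Recaptcha configuration,
    # so other specs are unaffected.
    allow(controller).to receive(:verify_recaptcha).and_return(true)
  end
end
```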
-
Lee Tickett authored
-
Igor Drozdov authored
Expose the id field in the serializer in order to store comment content in localStorage under the correct key.
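An illustrative sketch of exposing the field; the entity name is an assumption:

```ruby
require 'grape-entity'

class CommentEntity < Grape::Entity
  # Exposing the id lets the frontend store draft comment content in
  # localStorage under a stable, correct key.
  expose :id
end
```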
-
Winnie Hellmann authored
This reverts merge request !32571
-
Alessio Caiazza authored
-
Alessio Caiazza authored
Wiki attachments can be Workhorse accelerated. This commit is backward compatible with older Workhorse versions.
-
Kamil Trzciński authored
This brings a significant refactor to how we handle `import_export.yml`, how we merge it with EE, and how we handle it for the reader and saver. This is meant to simplify the code and remove a ton of conditions for handling different models of the structure. It is also meant to prepare the structure to be extended much more easily, for example by adding `preload:` or additional object types when needed. This does not change the behavior of import/export; rather, it unifies and simplifies the current implementation.
-
Ash McKenzie authored
This class encapsulates our use of the Danger gem.
-
Winnie Hellmann authored
-