  1. Jul 30, 2018
    • Show the status of a user in interactions · f1d3ea63
      Bob Van Landuyt authored
      The status is shown for
      - The author of a commit when viewing a commit
      - The author of notes on a commit (regular/diff)
      - The user that triggered a pipeline when viewing a pipeline
      - The author of a merge request when viewing a merge request
      - The author of notes on a merge request (regular/diff)
      - The author of an issue when viewing an issue
      - The author of notes on an issue
      - The author of a snippet when viewing a snippet
      - The author of notes on a snippet
      - A user's profile page
      - The list of members of a group/user
  2. Jul 04, 2018
    • Add pipeline lists to GraphQL · 04b04658
      Bob Van Landuyt authored
      This adds keyset pagination to GraphQL lists. As a proof of concept it is
      applied to pipelines on merge requests and projects.
      
      When paginating a list, the base-64 encoded id of the ordering
      field (in most cases the primary key) can be passed in the `before` or
      `after` GraphQL argument.
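
      As a rough sketch of how such a cursor can work (illustrative only; the
      module and relation names below are hypothetical, not GitLab's resolver
      code), the ordering value is Base64-encoded into the cursor and decoded
      back into a keyset condition:

          require 'base64'

          # Hypothetical helper illustrating the keyset-cursor idea for an
          # ActiveRecord relation ordered by its primary key.
          module KeysetCursor
            def self.encode(record)
              Base64.urlsafe_encode64(record.id.to_s)
            end

            def self.after(relation, cursor)
              return relation.order(:id) if cursor.nil?

              id = Base64.urlsafe_decode64(cursor).to_i
              relation.where('id > ?', id).order(:id)
            end
          end

          # pipelines   = KeysetCursor.after(project.ci_pipelines, params[:after]).limit(20)
          # next_cursor = KeysetCursor.encode(pipelines.last) if pipelines.any?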
  3. May 17, 2018
    • Exclude coverage data from the pipelines page · 878ca2e6
      Yorick Peterse authored
      When displaying a project's pipelines
      (Projects::PipelinesController#index) we now exclude the coverage data.
      This data was not used by the frontend, yet getting it would require one
      SQL query per pipeline. These queries in turn could be quite expensive
      on GitLab.com.
    • Preload pipeline data for project pipelines · 19428e80
      Yorick Peterse authored
      When displaying the pipelines of a project we now preload the following
      data:
      
      1. Authors of the commits that belong to these pipelines
      2. The number of warnings per pipeline, which is used by
         Ci::Pipeline#has_warnings?
      
      == Commit Authors
      
      Previously this data was queried for every Commit separately, leading to
      20 SQL queries being executed in the worst case. With an average of 3 to
      5 milliseconds per SQL query this could result in 100 milliseconds being
      spent in _just_ getting Commit authors.
      
      To preload this data Commit#author now uses BatchLoader (through
      Commit#lazy_author), and a separate module
      Gitlab::Ci::Pipeline::Preloader is used to ensure all authors are loaded
      before they are used.
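
      The pattern, sketched with the BatchLoader gem's public API (the body
      below is illustrative, not a copy of Commit#lazy_author):

          # Resolve authors for a whole batch of commits with a single User
          # query the first time any author is actually needed.
          def lazy_author
            BatchLoader.for(author_email.downcase).batch do |emails, loader|
              User.where(email: emails).each do |user|
                loader.call(user.email.downcase, user)
              end
            end
          end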
      
      == Number of warnings
      
      This changes Ci::Pipeline#has_warnings? so it supports preloading of the
      number of warnings per pipeline. This removes the need for executing a
      COUNT(*) query for every pipeline just to see if it has any warnings or
      not.
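
      A minimal sketch of the preloading side, assuming a single grouped query
      per page of pipelines (the column names and the number_of_warnings
      accessor are assumptions, not GitLab's exact schema):

          # One grouped COUNT for the whole page instead of one COUNT(*) per
          # pipeline; has_warnings? can then read the cached value.
          def preload_warning_counts(pipelines)
            counts = Ci::Build
              .where(pipeline_id: pipelines.map(&:id))
              .where(allow_failure: true, status: 'failed')
              .group(:pipeline_id)
              .count

            pipelines.each do |pipeline|
              pipeline.number_of_warnings = counts.fetch(pipeline.id, 0)
            end
          end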
    • Limit the number of pipelines to count · 70985aa1
      Yorick Peterse authored
      When displaying the project pipelines dashboard we display a few tabs
      for different pipeline states. For every such tab we count the number of
      pipelines that belong to it. For large projects such as GitLab CE this
      means having to count over 80 000 rows, which can easily take between 70
      and 100 milliseconds per query.
      
      To improve this we apply a technique we already use for search results:
      we limit the number of rows to count. The current limit is 1000, which
      means that if more than 1000 rows are present for a state we will show
      "1000+" instead of the exact number. The SQL queries used for this
      perform much better than a regular COUNT, even when a project has a lot
      of pipelines.
      
      Prior to these changes we would end up running a query like this:
      
          SELECT COUNT(*)
          FROM ci_pipelines
          WHERE project_id = 13083
          AND status IN ('success', 'failed', 'canceled')
      
      This would produce a plan along the lines of the following:
      
          Aggregate  (cost=3147.55..3147.56 rows=1 width=8) (actual time=501.413..501.413 rows=1 loops=1)
            Buffers: shared hit=17116 read=861 dirtied=2
            ->  Index Only Scan using index_ci_pipelines_on_project_id_and_ref_and_status_and_id on ci_pipelines  (cost=0.56..2984.14 rows=65364 width=0) (actual time=0.095..490.263 rows=80388 loops=1)
                  Index Cond: (project_id = 13083)
                  Filter: ((status)::text = ANY ('{success,failed,canceled}'::text[]))
                  Rows Removed by Filter: 2894
                  Heap Fetches: 353
                  Buffers: shared hit=17116 read=861 dirtied=2
          Planning time: 1.409 ms
          Execution time: 501.519 ms
      
      Using the LIMIT count technique we instead run the following query:
      
          SELECT COUNT(*)
          FROM (
              SELECT 1
              FROM ci_pipelines
              WHERE project_id = 13083
              AND status IN ('success', 'failed', 'canceled')
              LIMIT 1001
          ) for_count
      
      This query produces the following plan:
      
          Aggregate  (cost=58.77..58.78 rows=1 width=8) (actual time=1.726..1.727 rows=1 loops=1)
            Buffers: shared hit=169 read=15
            ->  Limit  (cost=0.56..46.25 rows=1001 width=4) (actual time=0.164..1.570 rows=1001 loops=1)
                  Buffers: shared hit=169 read=15
                  ->  Index Only Scan using index_ci_pipelines_on_project_id_and_ref_and_status_and_id on ci_pipelines  (cost=0.56..2984.14 rows=65364 width=4) (actual time=0.162..1.426 rows=1001 loops=1)
                        Index Cond: (project_id = 13083)
                        Filter: ((status)::text = ANY ('{success,failed,canceled}'::text[]))
                        Rows Removed by Filter: 9
                        Heap Fetches: 10
                        Buffers: shared hit=169 read=15
          Planning time: 1.832 ms
          Execution time: 1.821 ms
      
      While this query still uses a Filter on the "status" field, the number
      of rows it may end up filtering (at most 1001) is small enough that an
      additional index does not appear to be necessary at this time.
      
      See https://gitlab.com/gitlab-org/gitlab-ce/issues/43132#note_68659234
      for more information.
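
      In ActiveRecord terms the technique can be sketched roughly as follows
      (the class name and formatting helper are illustrative, not the actual
      GitLab implementation):

          # Count at most MAX_COUNT + 1 rows; ActiveRecord wraps the limited
          # relation in a subquery much like the SQL shown above.
          class LimitedCounter
            MAX_COUNT = 1000

            def initialize(relation)
              @relation = relation
            end

            def count
              @count ||= @relation.limit(MAX_COUNT + 1).count
            end

            def to_s
              count > MAX_COUNT ? "#{MAX_COUNT}+" : count.to_s
            end
          end

          # LimitedCounter.new(project.ci_pipelines.where(status: %w[success failed canceled])).to_s
          # # => "1000+" when more than 1000 matching pipelines exist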
  4. Feb 01, 2018
    • Track and act upon the number of executed queries · cca61980
      Yorick Peterse authored
      This gives us more visibility into the number of SQL queries executed in
      web requests. The threshold is currently hardcoded to 100, as we expect
      to change it only rarely (maybe once or twice).
      
      In production and development we report to Sentry if it is enabled; in
      the test environment we raise an error. This feature is also only
      enabled in production/staging when running on GitLab.com, as it's not
      very useful to other users.
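
      The mechanism can be sketched with ActiveSupport::Notifications (an
      illustration of the idea only, not the actual GitLab code; the Sentry
      call assumes the Raven client is configured):

          # Count every real SQL query in a request and act once the threshold
          # is crossed: raise in tests, report to Sentry (via Raven) elsewhere.
          class QueryCounter
            THRESHOLD = 100

            def initialize
              @count = 0
            end

            def subscribe!
              ActiveSupport::Notifications
                .subscribe('sql.active_record') do |_name, _start, _finish, _id, payload|
                  # Cached queries and schema lookups are not real work.
                  @count += 1 unless payload[:cached] || payload[:name] == 'SCHEMA'
                end
            end

            def act!(request_path)
              return if @count <= THRESHOLD

              message = "#{request_path} executed #{@count} SQL queries " \
                        "(threshold: #{THRESHOLD})"
              Rails.env.test? ? raise(message) : Raven.capture_message(message)
            end
          end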
  5. Dec 19, 2017
    • Load commit in batches for pipelines#index · c6edae38
      Zeger-Jan van de Weg authored
      Uses `list_commits_by_oid` on the CommitService to request the commits
      needed for pipelines. These commits are needed to display the commit
      title and the user who created the commit.
      
      This includes fixes for failing tests that depended on the commit being
      `nil`. Now that commits are batch loaded this no longer happens, and
      each commit is an instance of BatchLoader.
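
      The batching pattern, sketched with BatchLoader (commits_by(oids:) below
      stands in for the list_commits_by_oid lookup; the sketch assumes all
      pipelines on the page belong to one project and that full commit OIDs
      are used as keys):

          # Return a BatchLoader proxy; the repository is asked for all
          # pending OIDs at once the first time any commit is actually used.
          def lazy_commit(project, oid)
            BatchLoader.for(oid).batch do |oids, loader|
              project.repository.commits_by(oids: oids).each do |commit|
                loader.call(commit.id, commit)
              end
            end
          end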