  1. Sep 04, 2019
  2. Sep 03, 2019
  3. Sep 02, 2019
  4. Aug 30, 2019
  5. Aug 29, 2019
      DRY check progress services · 1cec47ec
      João Cunha authored and Jan Provaznik committed
      Extract duplicated code from two similar classes into a parent one.
      Fix snippets API not working with visibility level · 680f4377
      Stan Hu authored
      When a restricted visibility level of `private` is set in the instance,
      creating a snippet with the `visibility` level would always fail.
      This happened because:
      
      1. `params[:visibility]` was a string (e.g. "public")
      2. `CreateSnippetService` and `UpdateSnippetService` only looked
         at `params[:visibility_level]`, which was `nil`.
      
      To fix this, we:
      
      1. Make `CreateSnippetService` look at the newly-built
         `snippet.visibility_level`, since the right value is assigned by the
         `VisibilityLevel#visibility=` method.
      2. Modify `UpdateSnippetService` to handle both `visibility_level` and
         `visibility` parameters.
      
      Closes https://gitlab.com/gitlab-org/gitlab-ce/issues/66050
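      A minimal Ruby sketch of the idea described above; the class and method
      names below are simplified stand-ins, not the actual GitLab services:

```ruby
# Sketch: a string `visibility` param is mapped to an integer level on the
# model (as VisibilityLevel#visibility= is described to do), and the update
# service accepts either form instead of only `visibility_level`.
VISIBILITY_LEVELS = { 'private' => 0, 'internal' => 10, 'public' => 20 }.freeze

class Snippet
  attr_accessor :visibility_level

  def visibility=(name)
    self.visibility_level = VISIBILITY_LEVELS.fetch(name)
  end
end

class UpdateSnippetService
  # Handle both `visibility_level` (integer) and `visibility` (string) params.
  def visibility_level_from(params)
    params[:visibility_level] || VISIBILITY_LEVELS[params[:visibility]]
  end
end

snippet = Snippet.new
snippet.visibility = 'public'
p snippet.visibility_level                                              # => 20
p UpdateSnippetService.new.visibility_level_from(visibility: 'private') # => 0
```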
  6. Aug 28, 2019
  7. Aug 27, 2019
  8. Aug 26, 2019
  9. Aug 24, 2019
  10. Aug 23, 2019
      Add a link to docs in project description · 2515c0cd
      Reuben Pereira authored and Mayra Cabrera committed
      Add it to both the service and the migration.
      Send TODOs for comments on commits correctly · 642f6b38
      Nick Thomas authored
      At present, the TodoService uses the `:read_project` ability to decide
      whether a user can read a note on a commit. However, commits can have a
      visibility level that is more restricted than the project, so this is a
      security issue.
      
      This commit changes the code to use the `:read_commit` ability in this
      case instead, which ensures TODOs are only generated for commit notes
      if the user can see the commit.
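      A hedged sketch of the ability switch; the policy object and note
      structure here are simplified placeholders, not the real GitLab
      permission code:

```ruby
# Sketch: for a note on a commit, check :read_commit against the commit itself
# rather than :read_project, so restricted commits never produce todos.
Note = Struct.new(:commit, :project, keyword_init: true) do
  def for_commit?
    !commit.nil?
  end
end

class Policy
  # Toy rule: commits carry their own visibility; projects are readable here.
  def allowed?(user, ability, subject)
    ability == :read_commit ? subject[:visible_to].include?(user) : true
  end
end

def create_todo?(user, note, policy)
  if note.for_commit?
    policy.allowed?(user, :read_commit, note.commit)
  else
    policy.allowed?(user, :read_project, note.project)
  end
end

hidden_commit = { visible_to: [:maintainer] }
p create_todo?(:guest, Note.new(commit: hidden_commit), Policy.new) # => false
```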
  11. Aug 22, 2019
  12. Aug 20, 2019
  13. Aug 19, 2019
  14. Aug 17, 2019
  15. Aug 16, 2019
      Expire project caches once per push instead of once per ref · f14647fd
      Stan Hu authored and Douwe Maan committed
      Previously `ProjectCacheWorker` would be scheduled once per ref, which
      would generate unnecessary I/O and load on Sidekiq, especially if many
      tags or branches were pushed at once. `ProjectCacheWorker` would expire
      three items:
      
      1. Repository size: This only needs to be updated once per push.
      2. Commit count: This only needs to be updated if the default branch
         is updated.
      3. Project method caches: This only needs to be updated if the default
         branch changes, but only if certain files change (e.g. README,
         CHANGELOG, etc.).
      
      Because the third item requires looking at the actual changes in the
      commit deltas, we schedule one `ProjectCacheWorker` to handle the first
      two cases, and schedule a separate `ProjectCacheWorker` for the third
      case if it is needed. As a result, this brings down the number of
      `ProjectCacheWorker` jobs from N to 2.
      
      Closes https://gitlab.com/gitlab-org/gitlab-ce/issues/52046
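      A rough sketch of the scheduling split; the method and cache names are
      illustrative, not the exact ProjectCacheWorker interface:

```ruby
# Sketch: one job covers repository size (always) and commit count (only when
# the default branch moved); a second job for method caches is scheduled only
# when relevant files changed on the default branch. N pushed refs => at most 2 jobs.
def cache_jobs(project, changed_refs, changed_file_types)
  default_branch_updated = changed_refs.include?(project[:default_branch])

  caches = [:repository_size]
  caches << :commit_count if default_branch_updated

  jobs = [[project[:id], caches]]

  if default_branch_updated && changed_file_types.any?
    jobs << [project[:id], [:method_caches], changed_file_types]
  end

  jobs
end

project = { id: 1, default_branch: 'master' }
p cache_jobs(project, %w[master v1.0 v1.1], [:readme])
# => [[1, [:repository_size, :commit_count]], [1, [:method_caches], [:readme]]]
```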
      Look up upstream commits once before queuing ProcessCommitWorkers · 97c2564f
      Douwe Maan authored
      Instead of checking if a commit already exists in the upstream project
      in its ProcessCommitWorker and bailing out if it does, we check the
      existence of all commits in bulk in Git::BranchHooksService, so that we
      can skip scheduling ProcessCommitWorker jobs entirely for those
      commits that already exist upstream.
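      A simplified sketch of the bulk check; the arguments stand in for what
      Git::BranchHooksService would look up and are not the actual API:

```ruby
require 'set'

# Sketch: fetch the upstream commit IDs once, then only enqueue
# ProcessCommitWorker-style jobs for commits the upstream does not have yet.
def commits_needing_processing(upstream_commit_ids, pushed_commit_ids)
  existing = upstream_commit_ids.to_set
  pushed_commit_ids.reject { |sha| existing.include?(sha) }
end

upstream = %w[aaa111 bbb222]
pushed   = %w[aaa111 bbb222 ccc333]
p commits_needing_processing(upstream, pushed) # => ["ccc333"] — one job, not three
```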
  16. Aug 15, 2019
      Only read rebase status from the model · d31b733f
      Nick Thomas authored and Mayra Cabrera committed
      Prior to 12.1, rebase status was looked up directly from Gitaly. In
      https://gitlab.com/gitlab-org/gitlab-ee/merge_requests/14417 , a DB
      column was added to track the status instead. However, we couldn't stop
      looking at the Gitaly status immediately, since some rebases may have been
      running across the upgrade.
      
      Now that we're in 12.3, it is safe to remove the direct-to-gitaly
      lookup. This also happens to fix a 500 error that is seen when viewing
      an MR for a fork where the source project has been removed.
      
      We still look at the Gitaly status in the service, just in case Gitaly
      and Sidekiq get out of sync - I assume this is possible, and it's a
      relatively cheap check.
      
      Since we atomically check and set `merge_requests.rebase_jid`, we
      should never enqueue two `RebaseWorker` jobs in parallel.
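      A toy sketch of the check-and-set guard around `rebase_jid`; the real
      code does this atomically against the database, which this in-memory
      version only approximates:

```ruby
require 'securerandom'

# Sketch: a rebase is only enqueued when no job id is currently recorded,
# so two parallel RebaseWorker jobs for the same merge request are avoided.
class MergeRequest
  attr_accessor :rebase_jid

  def rebase_in_progress?
    !rebase_jid.nil?
  end

  def enqueue_rebase!
    return false if rebase_in_progress?

    self.rebase_jid = SecureRandom.hex(8) # stand-in for the Sidekiq job id
    true
  end
end

mr = MergeRequest.new
p mr.enqueue_rebase! # => true, first rebase enqueued
p mr.enqueue_rebase! # => false, second enqueue is refused while one is running
```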
      Allow disabling group/project email notifications · 3489dc3d
      Brett Walker authored
      - Adds UI to configure in group and project settings
      - Removes notification configuration for users when
        disabled at group or project level
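      A small sketch of the recipient filtering this implies; the setting
      lookups are hypothetical names, not the real notification settings API:

```ruby
# Sketch: drop all recipients when emails are disabled at the project level
# or on the project's group.
def notifiable_recipients(recipients, project)
  disabled = project[:emails_disabled] || project.dig(:group, :emails_disabled)
  disabled ? [] : recipients
end

project = { emails_disabled: false, group: { emails_disabled: true } }
p notifiable_recipients(%w[alice bob], project) # => [] — group-level switch wins
```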
      Fix project import restricted visibility bypass · 06eddc3e
      George Koltsov authored
      Add Gitlab::VisibilityLevelChecker that verifies
      selected project visibility level (or overridden param)
      is not restricted when creating or importing a project
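      A simplified sketch of such a checker; it mirrors the description above
      rather than the exact Gitlab::VisibilityLevelChecker API:

```ruby
# Sketch: reject the requested visibility (or an import override) when the
# instance administrator has restricted that level.
class VisibilityLevelChecker
  def initialize(restricted_levels)
    @restricted_levels = restricted_levels
  end

  def level_restricted?(requested_level, override: nil)
    @restricted_levels.include?(override || requested_level)
  end
end

checker = VisibilityLevelChecker.new([:public])
p checker.level_restricted?(:private)                    # => false, allowed
p checker.level_restricted?(:private, override: :public) # => true, bypass blocked
```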
  17. Aug 13, 2019
      Rework retry strategy for remote mirrors · 452bc36d
      Bob Van Landuyt :neckbeard: authored and Douwe Maan committed
      **Prevention of running 2 simultaneous updates**
      
      Instead of using `RemoteMirror#update_status` and raising an error if an
      update is already running to prevent the same mirror from being updated
      at the same time, we now use `Gitlab::ExclusiveLease` for that.
      
      When we fail to obtain a lease in 3 tries, 30 seconds apart, we bail
      and reschedule. We'll reschedule faster for the protected branches.
      
      If the mirror already ran since it was scheduled, the job will be
      skipped.
      
      **Error handling: Remote side**
      
      When an update fails because of a `Gitlab::Git::CommandError`, we won't
      track this error in Sentry, since the cause could be on the remote side:
      for example, when branches have diverged.
      
      In this case, we'll try 3 times scheduled 1 or 5 minutes apart.
      
      In between, the mirror is marked as "to_retry", and the error is
      visible to the user when they visit the settings page.
      
      After 3 tries we'll mark the mirror as failed and notify the user.
      
      We won't track this error in Sentry, as it's unlikely we can do
      anything about it.
      
      The next event would then trigger a new refresh.
      
      **Error handling: our side**
      
      If an unexpected error occurs, we mark the mirror as failed, but we
      still retry the job based on the regular Sidekiq retries with backoff,
      the same as before.
      
      The error is reported in Sentry, since it's likely we need to do
      something about it.
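      A toy sketch of the lease-based guard; `ExclusiveLease` below is an
      in-memory stand-in for `Gitlab::ExclusiveLease`, and the retry counts
      simply follow the description above:

```ruby
# Sketch: try to take an exclusive lease a few times, spaced apart, and bail
# out with a reschedule if another update of the same mirror holds it.
class ExclusiveLease
  @held = {}

  def self.try_obtain(key)
    return nil if @held[key]

    @held[key] = true
  end
end

def update_mirror(mirror_key, tries: 3, wait: 30)
  tries.times do
    return :updated if ExclusiveLease.try_obtain(mirror_key) # run the update here

    sleep(wait)
  end

  :rescheduled # could not get the lease, try again later
end

p update_mirror('remote_mirror:1', wait: 0) # => :updated
p update_mirror('remote_mirror:1', wait: 0) # => :rescheduled, lease still held
```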
      Only expire tag cache once per push · e658f960
      Stan Hu authored
      Previously each tag in a push would invoke the Gitaly `FindAllTags` RPC
      since the tag cache would be invalidated with every tag.
      
      We can eliminate those extraneous calls by expiring the tag cache once
      in `PostReceive` and taking advantage of the cached tags.
      
      Relates to https://gitlab.com/gitlab-org/gitlab-ce/issues/65795
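      A minimal sketch of the caching shape this relies on; the fetcher lambda
      stands in for the `FindAllTags` RPC and is not the real repository cache:

```ruby
# Sketch: expire the tag list once per push, then let every per-tag hook reuse
# the single freshly cached lookup instead of triggering a new RPC each time.
class TagCache
  def initialize(fetcher)
    @fetcher = fetcher
    @tags = nil
  end

  def expire!
    @tags = nil
  end

  def tags
    @tags ||= @fetcher.call # one FindAllTags-style call, then memoized
  end
end

rpc_calls = 0
cache = TagCache.new(-> { rpc_calls += 1; %w[v1.0 v1.1 v1.2] })

cache.expire!            # done once in PostReceive
3.times { cache.tags }   # three pushed tags, all served from the cache
p rpc_calls              # => 1
```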
      Require `needs:` to be present · 93e95182
      Kamil Trzciński authored
      This changes the `needs:` logic to require that all referenced jobs be
      present. Instead of skipping, we now fail the pipeline creation if a
      `needs:` dependency is not found.
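      A small sketch of the validation; the hash-based job config is a
      simplification of the real CI YAML processing:

```ruby
# Sketch: fail pipeline creation when a `needs:` entry references a job that
# is not part of the pipeline, instead of silently skipping it.
def validate_needs!(jobs)
  names = jobs.keys

  jobs.each do |name, config|
    missing = Array(config[:needs]) - names
    raise ArgumentError, "#{name}: undefined needs: #{missing.join(', ')}" unless missing.empty?
  end
end

jobs = {
  'build'  => {},
  'test'   => { needs: ['build'] },
  'deploy' => { needs: ['package'] } # 'package' is not defined anywhere
}

begin
  validate_needs!(jobs)
rescue ArgumentError => e
  puts e.message # => "deploy: undefined needs: package"
end
```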
      Reduce Gitaly calls in PostReceive · 4e2bb4e5
      Stan Hu authored
      This commit reduces I/O load and memory utilization during PostReceive
      for the common case when no project hooks or services are set up.
      
      We saw a Gitaly N+1 issue in `CommitDelta` when many tags or branches
      are pushed. We can reduce this overhead in the common case because we
      observe that most new projects do not have any Web hooks or services,
      especially when they are first created. Previously, `BaseHooksService`
      unconditionally iterated through the last 20 commits of each ref to
      build the `push_data` structure. The `push_data` structure was used in
      numerous places:
      
      1. Building the push payload in `EventCreateService`
      2. Creating a CI pipeline
      3. Executing project Web or system hooks
      4. Executing project services
      5. As the return value of `BaseHooksService#execute`
      6. `BranchHooksService#invalidated_file_types`
      
      We only need to generate the full `push_data` for items 3, 4, and 6.
      
      Item 1: `EventCreateService` only needs the last commit and doesn't
      actually need the commit deltas.
      
      Item 2: `Ci::CreatePipelineService` only needed a subset of
      the parameters.
      
      Item 5: The return value of `BaseHooksService#execute` also wasn't being
      used anywhere.
      
      Item 6: This is only used when pushing to the default branch, so if
      many tags are pushed we can save significant I/O here.
      
      Closes https://gitlab.com/gitlab-org/gitlab-ce/issues/65878
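      A minimal sketch of deferring the expensive payload; the class and
      lambda here are illustrative, not the actual BaseHooksService structure:

```ruby
# Sketch: keep a cheap payload for events and pipelines, and only build the
# full push_data (with commit deltas) when a hook or service actually asks.
class PushPayload
  def initialize(last_commit, full_builder)
    @last_commit = last_commit
    @full_builder = full_builder
  end

  def minimal
    { commit: @last_commit } # enough for items 1 and 2 above
  end

  def full
    @full ||= @full_builder.call # items 3, 4 and 6 trigger the heavy work
  end
end

deltas_built = 0
payload = PushPayload.new('abc123', -> { deltas_built += 1; { commits: ['…'] } })

payload.minimal  # event + pipeline path
p deltas_built   # => 0 — no hooks or services configured, no deltas computed
```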
      