
Fix variable file permission

Merged Alessio Caiazza requested to merge 2570-variable-permissions into master
All threads resolved!

What does this MR do?

Prevent kubernetes executor failure when the image does not run as root.

Why was this MR needed?

The user problem is described in #2570 (closed)

The problem is that stages running with the Predefined command run as root, so if a variable such as CI_SERVER_TLS_CA_FILE is dumped to disk, it ends up root-owned in the following stages.

Are there points in the code the reviewer needs to double check?

Personally I don't like this solution, but it's a fast and general fix for the problem. In the case of CI_SERVER_TLS_KEY_FILE we end up exposing a key file with global readability, which may not be a problem with Docker and Kubernetes, but is very dangerous with the shell runner.
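
A minimal sketch of the trade-off described above, assuming a hypothetical variable-file path (this is an illustration, not the MR's actual code): making the dumped file world-readable unblocks a non-root build user, but also exposes the key to every local user, which matters most on a shared shell-executor host.

    // Editor's illustration of the permission change discussed above;
    // the path is made up and does not come from the MR.
    package main

    import "os"

    func main() {
        keyFile := "/builds/project/13/my_project.tmp/CI_SERVER_TLS_KEY_FILE"

        // 0644 makes the key readable by every user on the machine.
        // Acceptable inside a throwaway Docker/Kubernetes container,
        // dangerous on a shared shell-executor host.
        if err := os.Chmod(keyFile, 0644); err != nil {
            panic(err)
        }
    }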

Does this MR meet the acceptance criteria?

  • Documentation created/updated
  • Tests
    • Added for this feature/bug
    • All builds are passing
  • Branch has no merge conflicts with master (if you do - rebase it please)

What are the relevant issue numbers?

Closes #2570 (closed)

Activity

  • Author Maintainer

@tmaczukin @ayufan Why are those ENV variables dumped to the filesystem? And where are those files used? Answering this question may help in checking whether such a change could break something.

  • If your project is cloned at:

    /builds/project/13/my_project

    Then variable files are stored in:

    /builds/project/13/my_project.tmp

The file is used this way: if, for example, the CA_CERT variable is configured to be stored as a file, then its value is stored inside the /builds/project/13/my_project.tmp/CA_CERT file, and the CA_CERT env variable is set as export CA_CERT=/builds/project/13/my_project.tmp/CA_CERT.
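
    A minimal sketch of that mechanism (editor's illustration, not runner code; values and paths are hypothetical): the value is written into the <project>.tmp directory and the environment variable points at the file instead of holding the value.

        // Editor's sketch: write a "file" type variable to disk and export
        // its path, mirroring the CA_CERT example above.
        package main

        import (
            "os"
            "path/filepath"
        )

        func main() {
            tmpDir := "/builds/project/13/my_project.tmp"
            caCert := "-----BEGIN CERTIFICATE-----\n...\n-----END CERTIFICATE-----\n"

            if err := os.MkdirAll(tmpDir, 0777); err != nil {
                panic(err)
            }

            certPath := filepath.Join(tmpDir, "CA_CERT")
            if err := os.WriteFile(certPath, []byte(caCert), 0666); err != nil {
                panic(err)
            }

            // Tools that expect a path (e.g. git via GIT_SSL_CAINFO) then
            // read the certificate from this file.
            os.Setenv("CA_CERT", certPath)
        }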

  • Author Maintainer

Thank you, this is clear, but where is this file used? Why is it dumped to a file?

  • @nolith It depends on the software you use. For example git assumes that the certificate variable contains a path to a file with the certificate. When we dump the certificate into the file and create a GIT_SSL_CAINFO with a path to this file, git will load the certificate.

  • @tmaczukin

I also don't know why on Kubernetes the permissions are different than on Docker. Maybe umask 0000 behaves differently in a Kubernetes environment? Maybe this is caused by the Kubernetes storage filesystem configuration?

I'm not that familiar with the gitlab-runner code, but tracing it, I don't see the kubernetes executor running umask 0000 in the prepare stage. I see umask 0000 is in gitlab-runner-build, which the docker executor calls, but during the same stage (prepare) in the kubernetes executor I don't see that script being called when the helper container is created.

The issue arises when the pre-build script runs echo ... > $BUILDS_DIR/$PROJECT.tmp/TLS_CA_CERT_FILE: because umask 0000 is not run, the folder $PROJECT.tmp is created with permissions 0755 (owned by root). As a non-privileged user, the above command fails with permission denied.

I think a more correct way of fixing it would be to find out why umask 0000 wasn't run and fix that, instead of running a chmod as root.
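
    A minimal sketch of the umask interaction described above (editor's illustration; Unix-only, directories are hypothetical): with the default umask, typically 022, a directory requested with mode 0777 is created as 0755, so a non-root build user cannot write the variable file into it, while clearing the umask first, as gitlab-runner-build does with umask 0000, keeps it world-writable.

        // Editor's sketch of how the umask changes the effective directory
        // permissions. Run on Linux; the directories are throwaway examples.
        package main

        import (
            "fmt"
            "os"
            "syscall"
        )

        func main() {
            // With the process' default umask (usually 022) the requested
            // 0777 becomes 0755, matching the failure described above.
            os.MkdirAll("/tmp/default-umask/my_project.tmp", 0777)

            // Equivalent of running `umask 0000` before the build script:
            // now the directory really is world-writable.
            syscall.Umask(0)
            os.MkdirAll("/tmp/zero-umask/my_project.tmp", 0777)

            for _, d := range []string{
                "/tmp/default-umask/my_project.tmp",
                "/tmp/zero-umask/my_project.tmp",
            } {
                if info, err := os.Stat(d); err == nil {
                    fmt.Printf("%s -> %v\n", d, info.Mode().Perm())
                }
            }
        }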

  • mentioned in merge request !665 (closed)

  • Author Maintainer

    @tmaczukin @ayufan

I also don't know why on Kubernetes the permissions are different than on Docker. Maybe umask 0000 behaves differently in a Kubernetes environment? Maybe this is caused by the Kubernetes storage filesystem configuration?

I found out that the kubernetes executor does not use gitlab-runner-build or gitlab-runner-helper, and this is why umask 0000 is not set.

    Given https://gitlab.com/gitlab-org/gitlab-ci-multi-runner/issues/1736 I still prefer chown-ing files the runner creates but I think we need to create another issue for re-aligning docker and k8s executor behavior.

I wonder if the actual fix for that problem, as pointed out by @nolith, is to make sure that we actually use gitlab-runner-build, something that is not used by the Kubernetes executor today:

    diff --git a/executors/kubernetes/executor_kubernetes.go b/executors/kubernetes/executor_kubernetes.go
    index e021ccfa..750ab8db 100644
    --- a/executors/kubernetes/executor_kubernetes.go
    +++ b/executors/kubernetes/executor_kubernetes.go
    @@ -138,15 +138,17 @@ func (s *executor) Run(cmd common.ExecutorCommand) error {
            }
     
            containerName := "build"
    +       containerCommand := s.BuildShell.DockerCommand
            if cmd.Predefined {
                    containerName = "helper"
    +               containerCommand = []string{"gitlab-runner-build"}
            }
     
            ctx, cancel := context.WithCancel(context.Background())
            defer cancel()
     
            select {
    -       case err := <-s.runInContainer(ctx, containerName, cmd.Script):
    +       case err := <-s.runInContainer(ctx, containerName, containerCommand, cmd.Script):
                    if err != nil && strings.Contains(err.Error(), "executing in Docker Container") {
                            return &common.BuildError{Inner: err}
                    }
    @@ -417,7 +419,7 @@ func (s *executor) setupBuildPod() error {
            return nil
     }
     
    -func (s *executor) runInContainer(ctx context.Context, name, command string) <-chan error {
    +func (s *executor) runInContainer(ctx context.Context, name string, command []string, script string) <-chan error {
            errc := make(chan error, 1)
            go func() {
                    defer close(errc)
    @@ -445,8 +447,8 @@ func (s *executor) runInContainer(ctx context.Context, name, command string) <-c
                            PodName:       s.pod.Name,
                            Namespace:     s.pod.Namespace,
                            ContainerName: name,
    -                       Command:       s.BuildShell.DockerCommand,
    -                       In:            strings.NewReader(command),
    +                       Command:       command,
    +                       In:            strings.NewReader(script),
                            Out:           s.Trace,
                            Err:           s.Trace,
                            Stdin:         true,
Edited by Kamil Trzciński
  • Alessio Caiazza removed assignee

  • assigned to @nolith

  • Author Maintainer

    @ayufan

I wonder if the actual fix for that problem, as pointed out by @nolith, is to make sure that we actually use gitlab-runner-build, something that is not used by the Kubernetes executor today

It worked. I've updated the MR.

    @tmaczukin can you review this, please?

  • Kamil Trzciński

    @nolith You may want to rebase your changes on the latest runner MR, the one that renames gitlab-ci-multi-runner to gitlab-runner.

Edited by Kamil Trzciński
  • Author Maintainer

    @ayufan rebased.

    @ayufan @tmaczukin can we go on with the review, please?

  • Kamil Trzciński approved this merge request

  • Kamil Trzciński resolved all discussions

  • Kamil Trzciński changed milestone to %10.0
