I just pushed a brand new feature to GitLab Runner: exec.
It allows you to run the jobs defined in .gitlab-ci.yml locally! In turn, this allows for faster testing cycles, and it makes it easier to fix broken builds.
The command supports any executor and all .gitlab-ci.yml options.
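For reference, a typical invocation looks something like this (run from the root of the repository that contains .gitlab-ci.yml; the job name is whatever you defined there):

# run the job named "test" from the local .gitlab-ci.yml with the shell executor
gitlab-runner exec shell test

# or run it inside a container with the docker executor
gitlab-runner exec docker test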
Is it possible to override this variable if it already exists in the .gitlab-ci.yml file? It doesn't appear so. It would be nice to be able to override variables used for databases and other services when running tests locally.
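One thing worth trying, as an unconfirmed sketch: depending on your runner version, exec exposes an --env flag that may let you inject or override a variable from the command line (check gitlab-runner exec shell --help to see whether your version has it). The variable name and value below are just placeholders.

# assumes the --env flag is available on your gitlab-runner version;
# DATABASE_URL and the value are illustrative only
gitlab-runner exec shell --env "DATABASE_URL=postgres://localhost/test" myjob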
@ayufan I'm new to gitlab-ci and I started setting it up on my own gitlab-ce server 3 days ago. "gitlab-runner exec" is really useful, thanks!
My environment: gitlab-ce 8.7.3 on Debian 7, gitlab-runner 1.1.3 on Debian 7.
I have 2 questions:
It seems "gitlab-runner exec shell myjob" doesn't take the "variables:" keyword into account (if "variables:" is used inside myjob or outside of it). It does work when the build is run by the gitlab-runner service but it doesn't when I use "gitlab-runner exec shell". Should I create an issue about this ? (I've not found an existing issue about this)
I'd like to re-use the .gitlab-ci.yml in another context than gitlab-ci. On a developer workstation where the git repository of the project is already cloned in "/home/khelkun/myproject", I wish I could run "/home/khelkun/myproject/.gitlab-ci.yml" to build the project without cloning a fresh copy of the HEAD of the repository, i.e. build the sources that already exist in "/home/khelkun/myproject". It doesn't seem to me that gitlab-runner can do that. Am I wrong? Is there another way I could do it?
It seems "gitlab-runner exec shell myjob" doesn't take the "variables:" keyword into account (if "variables:" is used inside myjob or outside of it). It does work when the build is run by the gitlab-runner service but it doesn't when I use "gitlab-runner exec shell". Should I create an issue about this ? (I've not found an existing issue about this)
Yes. Please create an issue.
I'd like to re-use the .gitlab-ci.yml in another context than gitlab-ci. On a developer workstation where the git repository of the project is already cloned in "/home/khelkun/myproject", I wish I could run "/home/khelkun/myproject/.gitlab-ci.yml" to build the project without cloning a fresh copy of the HEAD of the repository, i.e. build the sources that already exist in "/home/khelkun/myproject". It doesn't seem to me that gitlab-runner can do that. Am I wrong? Is there another way I could do it?
Using Shell executor...
Running on khelkun-workstation...
Cloning repository...
Cloning into '/home/khelkun/dev/srv-git/myproject/builds/0/project-1'...
Can I skip the cloning step and tell "gitlab-runner exec" to run 'build-release:myjob' directly in '/home/khelkun/dev/srv-git/myproject/' instead of '/home/khelkun/dev/srv-git/myproject/builds/0/project-1'?
gdubicki@mbp-greg:~/git/myproject$ ~/gitlab-runner exec shell test
WARNING: You most probably have uncommitted changes.
WARNING: These changes will not be tested.
gitlab-ci-multi-runner 1.3.0~beta.20.g36963db (36963db)
Using Shell executor...
Running on mbp-greg...
Cloning repository...
Cloning into '/Users/gdubicki/git/myproject/builds/0/project-1'...
done.
Hi there, just wanted to mention that I'd also be a user of an option to mount the current folder in the runner image instead of cloning. In my case, the .gitlab-ci.yml file calls external scripts, so if I need to iterate on these scripts then I need to push them... which kind of defeats the purpose of local testing :)
@grzegorz-dubicki and @khelkun are right: if cloning is unavoidable, it defeats the whole purpose of local testing. Then we are back in the "commit and push to test" paradigm, which is not the way development should be done.
Local automated testing is just that: workstation-local. That is valuable in itself, since pushing would no longer be required, avoiding writing to the project history and potentially improving the initial quality of pushed commits. Reducing the time cost of testing is an independent objective. I agree that it is an important objective as well.
I've just started playing with this great feature and I've seen in the docs that the cache and artifacts may or may not work. I'm wondering if I can end up on the "may" side, though.
The ultimate goal is to build a node app via a gitlab-ci task locally before pushing to make sure there are no surprises later. For this I need node_modules to be cached between gitlab-runner exec calls somehow - the process takes ages otherwise.
Here is a simplified .gitlab-ci.yml:
build:
  image: node:6.5.0
  cache:
    key: '42' # attempt to cache the folder 'globally' for even more simplicity
    paths:
      - node_modules/
  script:
    - ls -la
    - mkdir node_modules
    - touch node_modules/hi
    - ls -la
After running gitlab-runner exec docker --docker-privileged --docker-cache-dir /tmp/gitlabrunner build for the second time, I'd expect to see node_modules in the first ls, but this does not happen no matter how I modify the various options of the run script. According to the output, the runner attempts to restore and save the cache, but this has no effect:
Running with gitlab-ci-multi-runner 1.5.2 (76fdacd)
Using Docker executor with image node:6.5.0 ...
Pulling docker image node:6.5.0 ...
Running on runner--project-1-concurrent-0 via machinename...
Cloning repository...
Cloning into '/builds/project-1'...
done.
Checking out b316f21c as dev...
Checking cache for 42...
### output from ls, mkdir, touch and ls ###
Creating cache 42...
node_modules/: found 2 matching files
Build succeeded
I suspect that some caching may be achieved with docker volumes, but it's not quite clear to me how to make it work while keeping .gitlab-ci.yml good for both local execution and remote CI. If anyone has succeeded in this challenge, could you please share a hint?
P.S.: all is happening on Ubuntu, i.e. docker is local.
The reason why the cache does not work out of the box, as I understand it now, is that each time you call exec a new runner "machine" is created and then the container gets removed. Unless you mount the cache to the host, all the data you preserve within that container gets wiped. This peculiarity of how exec works might be worth noting in the docs, as it was not very clear in the beginning. A lot of users may want local caching, I'm sure! An untested sketch of that workaround is below.
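The idea would be to bind-mount a persistent host directory over the path the job uses inside the container, so the data survives the container's removal. The container path /builds/project-1 is taken from the log above and may differ on your setup; the host path is just an example.

# bind-mount a persistent host directory over node_modules inside the build dir;
# the container path is an assumption based on the log above, adjust as needed
gitlab-runner exec docker \
  --docker-volumes "$HOME/.gitlab-local-cache/node_modules:/builds/project-1/node_modules" \
  build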
I can't make dependencies and artifacts work with the local runner.
It's a pain to set up a proper deployment pipeline without the ability to test changes locally.
I have my stages as 1) build, 2) test. How do I locally trigger the runner to execute test after build?
gitlab-ci-multi-runner exec shell test_job
If I just run the test job, it will not trigger build first. This simply gives me an error because some required files that should be generated in the build stage are missing. Could we kick off the whole sequential build, test, deployment, etc. locally?
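As far as I know exec only runs a single job, so the closest workaround is to chain the jobs yourself in stage order. Note that artifacts and dependencies are not passed between the runs, and "build_job" below is a placeholder for whatever your build-stage job is called.

# run the build-stage job first, then the test-stage job (names are placeholders)
gitlab-ci-multi-runner exec shell build_job && \
gitlab-ci-multi-runner exec shell test_job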
It's not working for me with artifacts. If I try to mount the build dir, I get: rm: can't remove '/builds/project-1': Resource busy (see: https://gitlab.com/gitlab-org/gitlab-ci-multi-runner/issues/1293).
An additional --artifact-dir parameter would be awesome.
This works great - one small thing that tripped me up is that exec doesn't read the docker pull policy from config.toml, so you have to pass it in manually:
sudo gitlab-runner exec docker master_build --docker-pull-policy never
Hello, I have created a dummy proof-of-concept implementation of artifacts for the exec docker subcommand. This is by no means intended as a merge request, but you can use it for your local development until a more systematic approach and implementation are developed. You can find it in my fork, on branch td-gitlab-ci-exec-artifacts.
Has anyone (apart from @rpgillespie) experienced issues related to the config.toml and running local builds (using exec)? I'm currently having issues with the exec command ignoring both dns and extra_host settings.
In fact I've tested this on three Debian-based machines and it fails on all three. But it works in an Arch environment (using the same docker and gitlab-runner versions), and that really confuses me.
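A possible workaround while this is open (untested, and assuming your runner version exposes these settings as command-line options; check gitlab-runner exec docker --help): pass the DNS and extra-host values directly instead of relying on config.toml. The job name, hostname, and IPs below are placeholders.

# pass docker executor settings explicitly instead of via config.toml;
# flag availability depends on the runner version, values are placeholders
gitlab-runner exec docker \
  --docker-dns 8.8.8.8 \
  --docker-extra-hosts "internal.example.com:10.0.0.5" \
  my_job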
Has anyone succeeded in using gitlab-runner exec docker on Windows? I am hitting issue #1775, which seems to make this unusable on Windows, but it would be surprising if this had never worked on Windows and the lack of support was never documented.
In my process, I have three steps: "docker build" > build project > test project
To be able to run it locally as well as on gitlab.com, I have to use the DinD (Docker-in-Docker) approach.
Sample here: https://gitlab.com/Mizux/gitlabci
If someone manages to do it without DinD (i.e. privileged docker)...
Since it's DinD, docker images are lost locally (i.e. "docker image ls" will be empty); in fact you can retrieve them in the zip cache file, but...
GitLab can provide a docker registry attached to your repo, but I didn't manage to "simulate" it locally (i.e. use a localhost registry instead of the gitlab.com/project one).
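For anyone curious what that looks like, here is a minimal sketch of the DinD setup (not the linked sample; the image tag, job name, and the DOCKER_HOST variable are common conventions and may need adjusting for your docker/runner versions):

# minimal Docker-in-Docker sketch; names and versions are illustrative
image: docker:latest

services:
  - docker:dind

variables:
  # commonly needed so the docker CLI in the job can reach the dind service
  DOCKER_HOST: tcp://docker:2375

build-image:
  stage: build
  script:
    - docker build -t myproject:ci .

# run locally with the privileged flag, e.g.:
#   gitlab-runner exec docker --docker-privileged build-image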
I hope not, or at least not without a better replacement. Being able to run CI locally is a feature that really sets GitLab CI apart from other choices.
I'm concerned that this feature is being deprecated before there's a replacement, so we're kind of in limbo as to supported ways to run locally (super important for testing new configs, etc.).