I recently ran into an issue when running builds with GitLab Runner using Docker images.
I want to test my web app (one container) with Selenium (a second container). The problem is that the Selenium container is a service, and it does not know the IP of the main (web) container.
The solution is very simple - create a network:
docker network create CI_BUILD_ID
And then run a simple command for each container:
docker network connect CI_BUILD_ID CONTAINER_ID
It can also be achieved by creating a network and then adding --net=NETWORK_ID to the docker run commands.
Once all build containers are in the same network, they can see each other.
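For reference, a minimal sketch of the whole flow (the network and container names here are hypothetical):
# create a per-build network; the name only has to be unique per build
docker network create build-1234
# attach both the web app and the Selenium service to it
docker network connect build-1234 webapp_container
docker network connect build-1234 selenium_container
# now each container can reach the other by name, in both directions
docker exec selenium_container ping -c 1 webapp_container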
username-removed-389735: Title changed from Start build containers inside one network instead of using --link to Build containers inside one network instead of using --link
@mkurzeja I'm not sure how we can resolve this. You are proposing to create a separate network for the duration of the build and put all containers in that network. Given the current ordering of starting containers (services first, build last), will that solve the problem?
@ayufan my issue is that I need to connect from a service container to the build container.
My build container asks the service container to run some tests, but the service container needs to connect back to a web server that is hosted on the build container.
GitLab Runner is using --link to connect the service container, which allows connecting build->service but not service->build. Using a network instead of links allows connecting both build->service and service->build.
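A quick way to see the asymmetry (a sketch with hypothetical names; --link only injects the service's address into the linking container's /etc/hosts, not the other way around):
docker run -d --name service busybox sleep 3600
docker run -d --name build --link service:service busybox sleep 3600
docker exec build ping -c 1 service # works: --link added a hosts entry for "service"
docker exec service ping -c 1 build # fails: "service" has no entry for "build"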
@mkurzeja: are You sure? Aren't links restricted to the default bridge network only? The Docker docs say so, but I didn't test it.
And even if they work together now, the Docker team recommends avoiding linking and migrating to networks. I would be careful with mixing those two features.
I have no problem with migrating to networks, since they will use their own DNS server (the proposed change for Docker 1.10), as long as we don't break existing installations.
@tmaczukin It worked with a link and a network at the same time. I added a link first, and then connected all containers to one network. Anyway, it would probably be better to use networking only.
The main problem with migrating to networking is that the naming is slightly different. In the example /etc/hosts file above you can see the difference: links have aliases (mysql) and networking doesn't. I'm not sure if there is a way to force aliases in /etc/hosts now (except for running a command inside the container that would add the alias to /etc/hosts).
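That workaround would look roughly like this (a sketch; the container name, IP, and alias are made up):
# append an alias for the service to the build container's hosts file
docker exec webapp_container sh -c 'echo "172.20.0.3 mysql" >> /etc/hosts'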
@mkurzeja As I remember the docs (I've started working with Docker networking recently, so I don't have much experience), in /etc/hosts You have container names - provided by the --name flag on container startup. When I was testing it, each container had two entries in /etc/hosts:
IP [container_name]
IP [container_name].[network_name]
So to use networking we should have the ability to set service container names. But here is a problem - You can't duplicate container names. So the names would have to be partially dynamic in some way.
@ayufan usage of Docker networking could help (or interfere - I'm not sure) with the autoscale feature You're working on.
It would not change the behavior of auto-scaling; it would only make it possible to spin up the containers on different nodes, but this is not a big problem right now.
It's only needed if you want to have incoming communication from a linked container.
@ayufan @mkurzeja: As a note - Docker 1.10 introduced links in user-defined networks. It looks like links will be an official way to set an alias for a container. There is also an option to create network-wide container aliases. More info can be found here: https://blog.docker.com/2016/02/docker-1-10/
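For illustration, those aliases can be set roughly like this (a sketch; the container, image, and network names are hypothetical):
docker network create ci-net
# give the container an extra DNS name on this network at startup
docker run -d --name mysql-1 --net ci-net --net-alias mysql mysql:5.7
# or add an alias while connecting an existing container to the network
docker network connect --alias db ci-net some-other-container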
I would also be interested in that. As there is no way to set --net in the config, I came up with the idea of installing a VPN between my containers and the container running Selenium (I want to achieve the same goal as you ;-) ).
Same problem here with a PhantomJS image running as a service.
The PhantomJS service needs to point the internal WebKit browser to the current instance by hostname/IP.
Hi there, any news about this?
I'm running into a similar issue, trying to run acceptance tests on our web project using the selenium/standalone-chrome image as a service. The linked Selenium service can't reach port 80 of the container where the web app is running. Has anyone found a workaround for this until the issue is solved?
Thanks!
Just ran into the same issue using the gitlab.com shared runners.
In my case, I have the following:
build_container -> vault_service -> mysql_service
I have some code in the build container that needs to talk to the vault_service, which in turn talks to the mysql_service. As docker links are being used at the moment, the vault_service is not able to talk to the mysql_service.
Similar to @F21 , I have build_container -> kong -> postgres, which can't work because kong isn't linked to postgres. Is there a workaround for this, aside from putting dependent services into the same container?
@brennanroberts thanks. This means I should run my project inside a Docker container too, right? Something like:
DOCKER_NETWORK_ID=$(docker network create my-network)
SELENIUM_CONTAINER_ID=$(docker run --net=$DOCKER_NETWORK_ID -d -it selenium/standalone-chrome)
WEBAPP_CONTAINER_ID=$(docker run -d --net=$DOCKER_NETWORK_ID -it -v .:/var/www/ my-base-image)
docker exec $WEBAPP_CONTAINER_ID phpunit
Also, I should run all containers within a custom network. Otherwise I would run into the same issue... am I getting this right?
@nmercado1986 yes, this is very similar to what we do (I think you'll need to get the selenium IP and pass it into your webapp container). And yes, we run them in their own network.
@brennanroberts just tested this on my local Docker and it's not necessary to pass the IP. Container names are available as hostnames (somehow? I don't see them in the container's hosts file) between all containers in the network, with a slight modification to the previous commands:
DOCKER_NETWORK_ID=$(docker network create my-network)
docker run --name selenium --net=$DOCKER_NETWORK_ID -d -it selenium/standalone-chrome
docker run --name webapp -d --net=$DOCKER_NETWORK_ID -it -v .:/var/www/ my-base-image
docker exec -it webapp ping -c 1 selenium
# returns something like
# PING selenium (172.20.0.3): 56 data bytes
# 64 bytes from 172.20.0.3: icmp_seq=0 ttl=64 time=0.157 ms
docker exec -it selenium ping -c 1 webapp
# returns something like
# PING webapp (172.20.0.2): 56 data bytes
# 64 bytes from 172.20.0.2: icmp_seq=0 ttl=64 time=0.110 ms
I'd like to have this in %v1.11 so let's do a little summary:
On build start, we create a new docker network with a name based on the CI_BUILD_ID variable. We're doing an API equivalent of docker network create build-$CI_BUILD_ID.
When starting a service container, we connect it to the created network. We also add network aliases for that container, the same as the aliases used for linking. This is an equivalent of docker create --name runner_generated_name_here --network-alias my_mysql --network-alias my-mysql --network NETWORK_ID_HERE image.
When starting the build container, we do the same as for a service container, but this time we add one alias: build. This is an equivalent of docker create --name runner_generated_name_here --network-alias build --network NETWORK_ID_HERE image.
After this, from each service container and from the build container, we should be able to access the other containers using the configured aliases. Docker will use /etc/hosts (starting with 1.10) or the internal DNS (starting with 1.11 or 1.12) to resolve the names.
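Put together, a rough CLI equivalent of the sequence above (the container and image names are placeholders):
docker network create build-$CI_BUILD_ID
# service container, with the same aliases that linking would have used
docker create --name runner_service_1 --network build-$CI_BUILD_ID --network-alias my_mysql --network-alias my-mysql mysql:5.7
# build container, aliased simply as "build"
docker create --name runner_build_1 --network build-$CI_BUILD_ID --network-alias build my-build-image
docker start runner_service_1 runner_build_1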
What should we consider:
We should think about whether CI_BUILD_ID is a sufficient factor to distinguish two different builds executed on the same host (remember that one host can run multiple Runners connected to multiple GitLab installations).
How do the cache container and the prebuilt container (which use volume sharing between containers) behave when switching from linking to networking? It shouldn't make a difference, but we should test it.
Networking will work fine starting with Docker 1.10. However, for some Linux distributions (RHEL/CentOS 6?) the default version installed on the system is 1.8. We should consider whether we want to stay backward compatible when the installed Docker doesn't support networking, or whether we want to make Docker >= 1.10 a requirement. The latter option is the easiest, but there are still users on older RHEL versions who can't install packages from outside the official repositories (e.g. due to security policies). For one of the issues related to git operations we were looking for a solution compatible with older RHEL/CentOS versions, and I think we should do the same here. But this means we should test the available Docker version and switch between the linking and networking strategies based on the result of that test.
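Such a check could look roughly like this (a sketch; it assumes GNU sort for the -V flag, and the threshold and messages are illustrative):
SERVER_VERSION=$(docker version --format '{{.Server.Version}}')
# the smaller of the two versions sorts first, which tells us which side of 1.10 we're on
if [ "$(printf '%s\n' 1.10.0 "$SERVER_VERSION" | sort -V | head -n1)" = "1.10.0" ]; then
  echo "Docker >= 1.10: use per-build networks"
else
  echo "Docker < 1.10: fall back to --link"
fi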
Is there any chance this might make it into gitlab-ci in the near future? The v1.11 milestone appears to have passed, and gitlab-ci is using the 9.x milestones.
With support for service aliases and custom entrypoints in 9.4, this is the last piece that will allow me to drop my docker-in-docker and docker-compose hack for gitlab-ci :)