Running with gitlab-runner 13.0.0-rc2 (926834bc)
  on docker-auto-scale fa6cab46

Preparing the "docker+machine" executor
Using Docker executor with image registry.gitlab.com/gitlab-org/cluster-integration/auto-deploy-image:v0.15.0 ...
Pulling docker image registry.gitlab.com/gitlab-org/cluster-integration/auto-deploy-image:v0.15.0 ...
Using docker image sha256:93c7f924bc641f744a2ea17f02bfeeb23403a531c459d1471dd50037e274922e for registry.gitlab.com/gitlab-org/cluster-integration/auto-deploy-image:v0.15.0 ...

Preparing environment
Running on runner-fa6cab46-project-4422333-concurrent-0 via runner-fa6cab46-stg-srm-1590088796-57d4d8e0...

Getting source from Git repository
$ eval "$CI_PRE_CLONE_SCRIPT"
Fetching changes with git depth set to 50...
Initialized empty Git repository in /builds/gitlab-org/monitor/monitor-sandbox/.git/
Created fresh repository.
From https://ci-api.gstg.gitlab.net/gitlab-org/monitor/monitor-sandbox
 * [new ref]         refs/pipelines/12776644 -> refs/pipelines/12776644
 * [new branch]      master                  -> origin/master
Checking out 3997cec7 as master...
Skipping Git submodules setup

Restoring cache

Downloading artifacts

Running before_script and script
$ auto-deploy check_kube_domain
$ auto-deploy download_chart
Creating /root/.helm
Creating /root/.helm/repository
Creating /root/.helm/repository/cache
Creating /root/.helm/repository/local
Creating /root/.helm/plugins
Creating /root/.helm/starters
Creating /root/.helm/cache/archive
Creating /root/.helm/repository/repositories.yaml
Adding stable repo with URL: https://kubernetes-charts.storage.googleapis.com
Adding local repo with URL: http://127.0.0.1:8879/charts
$HELM_HOME has been configured at /root/.helm.
Not installing Tiller due to 'client-only' flag having been set
"gitlab" has been added to your repositories
Hang tight while we grab the latest from your chart repositories...
...Unable to get an update from the "local" chart repository (http://127.0.0.1:8879/charts): Get http://127.0.0.1:8879/charts/index.yaml: dial tcp 127.0.0.1:8879: connect: connection refused
...Successfully got an update from the "gitlab" chart repository
...Successfully got an update from the "stable" chart repository
Update Complete.
Saving 1 charts
Downloading postgresql from repo https://kubernetes-charts.storage.googleapis.com/
Deleting outdated charts
Hang tight while we grab the latest from your chart repositories...
...Unable to get an update from the "local" chart repository (http://127.0.0.1:8879/charts): Get http://127.0.0.1:8879/charts/index.yaml: dial tcp 127.0.0.1:8879: connect: connection refused
...Successfully got an update from the "gitlab" chart repository
...Successfully got an update from the "stable" chart repository
Update Complete.
Saving 1 charts
Downloading postgresql from repo https://kubernetes-charts.storage.googleapis.com/
Deleting outdated charts
$ auto-deploy ensure_namespace
NAME                                   STATUS   AGE
monitor-sandbox-4422333-dast-default   Active   204d
$ auto-deploy initialize_tiller
Checking Tiller...
Tiller is listening on localhost:44134
Client: &version.Version{SemVer:"v2.16.6", GitCommit:"dd2e5695da88625b190e6b22e9542550ab503a47", GitTreeState:"clean"}
[debug] SERVER: "localhost:44134"
Kubernetes: &version.Info{Major:"1", Minor:"14+", GitVersion:"v1.14.10-gke.36", GitCommit:"34a615f32e9a0c9e97cdb9f749adb392758349a6", GitTreeState:"clean", BuildDate:"2020-04-06T16:33:17Z", GoVersion:"go1.12.12b4", Compiler:"gc", Platform:"linux/amd64"}
Server: &version.Version{SemVer:"v2.16.6", GitCommit:"dd2e5695da88625b190e6b22e9542550ab503a47", GitTreeState:"clean"}
$ auto-deploy create_secret
Create secret...
$ auto-deploy deploy
Release "dast-default-postgresql" has been upgraded.
LAST DEPLOYED: Thu May 21 19:21:34 2020
NAMESPACE: monitor-sandbox-4422333-dast-default
STATUS: DEPLOYED

RESOURCES:
==> v1/Pod(related)
NAME                       READY  STATUS   RESTARTS  AGE
dast-default-postgresql-0  1/1    Running  0         57s

==> v1/Secret
NAME                     TYPE    DATA  AGE
dast-default-postgresql  Opaque  1     57s

==> v1/Service
NAME                              TYPE       CLUSTER-IP  EXTERNAL-IP  PORT(S)   AGE
dast-default-postgresql           ClusterIP  10.12.4.24  <none>       5432/TCP  57s
dast-default-postgresql-headless  ClusterIP  None        <none>       5432/TCP  57s

==> v1/StatefulSet
NAME                     READY  AGE
dast-default-postgresql  1/1    57s

NOTES:
** Please be patient while the chart is being deployed **

PostgreSQL can be accessed via port 5432 on the following DNS name from within your cluster:

    dast-default-postgresql.monitor-sandbox-4422333-dast-default.svc.cluster.local - Read/Write connection

To get the password for "user" run:

    export POSTGRES_PASSWORD=$(kubectl get secret --namespace monitor-sandbox-4422333-dast-default dast-default-postgresql -o jsonpath="{.data.postgresql-password}" | base64 --decode)

To connect to your database run the following command:

    kubectl run dast-default-postgresql-client --rm --tty -i --restart='Never' --namespace monitor-sandbox-4422333-dast-default --image docker.io/bitnami/postgresql:9.6.16 --env="PGPASSWORD=$POSTGRES_PASSWORD" --command -- psql \
      --host dast-default-postgresql -U user -d dast-default -p 5432

To connect to your database from outside the cluster execute the following commands:

    kubectl port-forward --namespace monitor-sandbox-4422333-dast-default svc/dast-default-postgresql 5432:5432 &
    PGPASSWORD="$POSTGRES_PASSWORD" psql --host 127.0.0.1 -U user -d dast-default -p 5432

WARNING: Rolling tag detected (bitnami/postgresql:9.6.16), please note that it is strongly recommended to avoid using rolling tags in a production environment.
+info https://docs.bitnami.com/containers/how-to/understand-rolling-tags-containers/

secret "dast-default-secret" deleted
secret/dast-default-secret replaced
No helm values file found at '.gitlab/auto-deploy-values.yaml'
Deploying new stable release...
Release "dast-default" has been upgraded.
LAST DEPLOYED: Thu May 21 19:21:38 2020
NAMESPACE: monitor-sandbox-4422333-dast-default
STATUS: DEPLOYED

RESOURCES:
==> v1/Deployment
NAME          READY  UP-TO-DATE  AVAILABLE  AGE
dast-default  1/1    1           1          37s

==> v1/Pod(related)
NAME                          READY  STATUS   RESTARTS  AGE
dast-default-cb8569775-kj5f5  1/1    Running  0         13s

==> v1/Service
NAME                      TYPE       CLUSTER-IP   EXTERNAL-IP  PORT(S)   AGE
dast-default-auto-deploy  ClusterIP  10.12.4.209  <none>       5000/TCP  37s

==> v1beta1/Ingress
NAME                      HOSTS                                                                          ADDRESS        PORTS    AGE
dast-default-auto-deploy  dast-4422333-dast-default.34.67.11.220.nip.io,le-4422333.34.67.11.220.nip.io   35.222.156.35  80, 443  37s

NOTES:
Application should be accessible at http://dast-4422333-dast-default.34.67.11.220.nip.io

deployment "dast-default" successfully rolled out
$ auto-deploy persist_environment_url

Running after_script

Saving cache

Uploading artifacts for successful job
Uploading artifacts...
environment_url.txt: found 1 matching files
Uploading artifacts to coordinator... ok  id=37227509 responseStatus=201 Created token=kacSFHgZ

Job succeeded
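The chart NOTES in the deploy step retrieve the database password by reading the Kubernetes secret with a jsonpath query and piping the result through `base64 --decode`, because secret `data` fields are always stored base64-encoded. A minimal sketch of just that decode step, with a made-up encoded value standing in for the cluster lookup (no kubectl or cluster required here):

```shell
# "czNjcjN0" is a hypothetical stand-in for the real lookup:
#   kubectl get secret ... -o jsonpath="{.data.postgresql-password}"
# which returns the value exactly as stored: base64-encoded.
ENCODED="czNjcjN0"

# The decode is the only transformation the NOTES apply before use.
POSTGRES_PASSWORD=$(printf '%s' "$ENCODED" | base64 --decode)
echo "$POSTGRES_PASSWORD"   # -> s3cr3t
```

The same decoded value is what the NOTES then pass to `psql` via the `PGPASSWORD` environment variable.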
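The deploy step logged `No helm values file found at '.gitlab/auto-deploy-values.yaml'`, meaning the release was installed with chart defaults only. A hedged sketch of supplying one: the path is the one the log checks, but the keys shown (`replicaCount`, `resources`) are illustrative examples, and valid keys depend on the chart actually being deployed, so check its `values.yaml` before relying on them.

```shell
# Create the values file at the path the deploy step looks for.
# Keys below are assumptions for illustration, not a verified schema.
mkdir -p .gitlab
cat > .gitlab/auto-deploy-values.yaml <<'EOF'
replicaCount: 2
resources:
  requests:
    cpu: 100m
    memory: 128Mi
EOF
```

With the file committed, the "No helm values file found" line would be replaced by the deploy step picking up these overrides on the next pipeline run.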