Running with gitlab-runner 13.9.0-rc2 (69c049fd)
  on docker-auto-scale fa6cab46
feature flags: FF_GITLAB_REGISTRY_HELPER_IMAGE:true

Resolving secrets

Preparing the "docker+machine" executor
Using Docker executor with image registry.gitlab.com/gitlab-org/cluster-integration/auto-deploy-image:v1.0.7 ...
Pulling docker image registry.gitlab.com/gitlab-org/cluster-integration/auto-deploy-image:v1.0.7 ...
Using docker image sha256:3cc6f3a2fe3a9b760b7d7030af302d04022020882378955fb8b98d941f682033 for registry.gitlab.com/gitlab-org/cluster-integration/auto-deploy-image:v1.0.7 with digest registry.gitlab.com/gitlab-org/cluster-integration/auto-deploy-image@sha256:cc3b155c0406960ac664aaea96703b3289f295698128af383308c78931d3db11 ...

Preparing environment
Running on runner-fa6cab46-project-4422333-concurrent-0 via runner-fa6cab46-stg-srm-1613514191-7a258c2d...

Getting source from Git repository
$ eval "$CI_PRE_CLONE_SCRIPT"
Fetching changes with git depth set to 50...
Initialized empty Git repository in /builds/gitlab-org/monitor/monitor-sandbox/.git/
Created fresh repository.
Checking out 00164633 as master...
Skipping Git submodules setup

Executing "step_script" stage of the job script
Using docker image sha256:3cc6f3a2fe3a9b760b7d7030af302d04022020882378955fb8b98d941f682033 for registry.gitlab.com/gitlab-org/cluster-integration/auto-deploy-image:v1.0.7 with digest registry.gitlab.com/gitlab-org/cluster-integration/auto-deploy-image@sha256:cc3b155c0406960ac664aaea96703b3289f295698128af383308c78931d3db11 ...
$ auto-deploy check_kube_domain
$ auto-deploy download_chart
Creating /root/.helm
Creating /root/.helm/repository
Creating /root/.helm/repository/cache
Creating /root/.helm/repository/local
Creating /root/.helm/plugins
Creating /root/.helm/starters
Creating /root/.helm/cache/archive
Creating /root/.helm/repository/repositories.yaml
Adding stable repo with URL: https://charts.helm.sh/stable
Adding local repo with URL: http://127.0.0.1:8879/charts
$HELM_HOME has been configured at /root/.helm.
Not installing Tiller due to 'client-only' flag having been set
"bitnami" has been added to your repositories
Download skipped. Using the default chart included in auto-deploy-image...
Hang tight while we grab the latest from your chart repositories...
...Unable to get an update from the "local" chart repository (http://127.0.0.1:8879/charts):
        Get "http://127.0.0.1:8879/charts/index.yaml": dial tcp 127.0.0.1:8879: connect: connection refused
...Successfully got an update from the "bitnami" chart repository
...Successfully got an update from the "stable" chart repository
Update Complete.
Saving 1 charts
Downloading postgresql from repo https://charts.helm.sh/stable
Deleting outdated charts
$ auto-deploy ensure_namespace
NAME                                   STATUS   AGE
monitor-sandbox-4422333-dast-default   Active   119d
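For reference, the download_chart and ensure_namespace steps above roughly correspond to the following Helm 2 / kubectl commands. This is only a sketch: the exact invocations are internal to auto-deploy-image v1.0.7, and the Bitnami repository URL and chart path are assumptions, not taken from this log.

# Set up the local Helm 2 client without installing Tiller
# ("Not installing Tiller due to 'client-only' flag having been set")
helm init --client-only

# Add the Bitnami repository; the URL is assumed, the log only reports the repo name
helm repo add bitnami https://charts.bitnami.com/bitnami
helm repo update

# Refresh the bundled chart's dependencies (this is what pulls the postgresql subchart)
helm dependency update /path/to/bundled/auto-deploy-app   # illustrative path

# Ensure the target namespace exists before deploying into it
kubectl get namespace monitor-sandbox-4422333-dast-default || \
  kubectl create namespace monitor-sandbox-4422333-dast-default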
$ auto-deploy initialize_tiller
Checking Tiller...
Tiller is listening on localhost:44134
Client: &version.Version{SemVer:"v2.16.9", GitCommit:"8ad7037828e5a0fca1009dabe290130da6368e39", GitTreeState:"clean"}
[debug] SERVER: "localhost:44134"
Kubernetes: &version.Info{Major:"1", Minor:"16+", GitVersion:"v1.16.15-gke.6000", GitCommit:"b02f5ea6726390a4b19d06fa9022981750af2bbc", GitTreeState:"clean", BuildDate:"2020-11-18T09:16:22Z", GoVersion:"go1.13.15b4", Compiler:"gc", Platform:"linux/amd64"}
Server: &version.Version{SemVer:"v2.16.9", GitCommit:"8ad7037828e5a0fca1009dabe290130da6368e39", GitTreeState:"clean"}
$ auto-deploy create_secret
Create secret...
$ auto-deploy deploy
Error: release: "dast-default" not found
Release "dast-default-postgresql" does not exist. Installing it now.
NAME:   dast-default-postgresql
LAST DEPLOYED: Tue Feb 16 22:24:32 2021
NAMESPACE: monitor-sandbox-4422333-dast-default
STATUS: DEPLOYED

RESOURCES:
==> v1/Pod(related)
NAME                       READY  STATUS   RESTARTS  AGE
dast-default-postgresql-0  1/1    Running  0         35s

==> v1/Secret
NAME                     TYPE    DATA  AGE
dast-default-postgresql  Opaque  1     37s

==> v1/Service
NAME                              TYPE       CLUSTER-IP  EXTERNAL-IP  PORT(S)   AGE
dast-default-postgresql           ClusterIP  10.57.6.60  <none>       5432/TCP  37s
dast-default-postgresql-headless  ClusterIP  None        <none>       5432/TCP  37s

==> v1/StatefulSet
NAME                     READY  AGE
dast-default-postgresql  1/1    37s

NOTES:
** Please be patient while the chart is being deployed **

PostgreSQL can be accessed via port 5432 on the following DNS name from within your cluster:

    dast-default-postgresql.monitor-sandbox-4422333-dast-default.svc.cluster.local - Read/Write connection

To get the password for "user" run:

    export POSTGRES_PASSWORD=$(kubectl get secret --namespace monitor-sandbox-4422333-dast-default dast-default-postgresql -o jsonpath="{.data.postgresql-password}" | base64 --decode)

To connect to your database run the following command:

    kubectl run dast-default-postgresql-client --rm --tty -i --restart='Never' --namespace monitor-sandbox-4422333-dast-default --image docker.io/bitnami/postgresql:9.6.16 --env="PGPASSWORD=$POSTGRES_PASSWORD" --command -- psql --host dast-default-postgresql -U user -d dast-default -p 5432

To connect to your database from outside the cluster execute the following commands:

    kubectl port-forward --namespace monitor-sandbox-4422333-dast-default svc/dast-default-postgresql 5432:5432 &
    PGPASSWORD="$POSTGRES_PASSWORD" psql --host 127.0.0.1 -U user -d dast-default -p 5432

WARNING: Rolling tag detected (bitnami/postgresql:9.6.16), please note that it is strongly recommended to avoid using rolling tags in a production environment.
+info https://docs.bitnami.com/containers/how-to/understand-rolling-tags-containers/

Validating chart version...
Fetching the previously deployed chart version...
Fetching the deploying chart version...
v1.0.7
secret/dast-default-secret replaced
No helm values file found at '.gitlab/auto-deploy-values.yaml'
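Because no '.gitlab/auto-deploy-values.yaml' exists in the repository, the release below is installed with the chart's default values. If overrides were wanted, a file along these lines could be committed; this is a minimal hypothetical sketch, and the keys shown should be verified against the auto-deploy-app chart actually bundled in the image.

mkdir -p .gitlab
cat > .gitlab/auto-deploy-values.yaml <<'EOF'
# Hypothetical overrides for the bundled auto-deploy-app chart;
# key names are assumptions, check them against the chart's values.yaml.
replicaCount: 2
resources:
  requests:
    cpu: 100m
    memory: 128Mi
EOF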
Deploying new stable release...
Release "dast-default" does not exist. Installing it now.
NAME:   dast-default
LAST DEPLOYED: Tue Feb 16 22:25:10 2021
NAMESPACE: monitor-sandbox-4422333-dast-default
STATUS: DEPLOYED

RESOURCES:
==> v1/Deployment
NAME          READY  UP-TO-DATE  AVAILABLE  AGE
dast-default  1/1    1           1          16s

==> v1/Pod(related)
NAME                          READY  STATUS   RESTARTS  AGE
dast-default-9c96545cf-gc658  1/1    Running  0         16s

==> v1/Service
NAME                      TYPE       CLUSTER-IP   EXTERNAL-IP  PORT(S)   AGE
dast-default-auto-deploy  ClusterIP  10.57.8.166  <none>       5000/TCP  16s

==> v1beta1/Ingress
NAME                      HOSTS                                                                          ADDRESS  PORTS    AGE
dast-default-auto-deploy  dast-4422333-dast-default.34.67.11.220.nip.io,le-4422333.34.67.11.220.nip.io            80, 443  16s

NOTES:
Application should be accessible at http://dast-4422333-dast-default.34.67.11.220.nip.io

deployment "dast-default" successfully rolled out
$ auto-deploy persist_environment_url

Uploading artifacts for successful job
Uploading artifacts...
environment_url.txt: found 1 matching files and directories
Uploading artifacts as "archive" to coordinator... ok  id=38559972 responseStatus=201 Created token=rWuYjpeT

Cleaning up file based variables

Job succeeded
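If a deployment like this needs to be verified by hand afterwards, the names and URL reported above can be checked directly. This is a minimal sketch and assumes kubectl access to the same cluster and namespace.

# Confirm the Deployment created by the release finished rolling out
kubectl --namespace monitor-sandbox-4422333-dast-default rollout status deployment/dast-default

# Inspect the pods, service and Ingress that the release created
kubectl --namespace monitor-sandbox-4422333-dast-default get pods,svc,ingress

# Hit the URL from the NOTES section; nip.io resolves it to the Ingress IP (34.67.11.220)
curl -I http://dast-4422333-dast-default.34.67.11.220.nip.io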