I have acceptance tests set up with ddev. They are run locally with ddev composer cookieman:test. I would like to use the same setup with GitHub Actions.
Has anybody had any luck with ddev in GitHub Actions workflows? I get to the point where ddev's healthcheck fails:
...
Creating ddev-router ... done
Failed to start extension-cookieman-master: ddev-router failed to become ready: logOutput=2019/11/15 02:24:19 [emerg] 1630#1630: no servers are inside upstream in /etc/nginx/conf.d/default.conf:89
nginx: [emerg] no servers are inside upstream in /etc/nginx/conf.d/default.conf:89
nginx: configuration file /etc/nginx/nginx.conf test failed
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0
0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0
curl: (22) The requested URL returned error: 404 Not Found
ddev-router healthcheck endpoint not responding
, err=container /ddev-router unhealthy: 2019/11/15 02:24:19 [emerg] 1630#1630: no servers are inside upstream in /etc/nginx/conf.d/default.conf:89
nginx: [emerg] no servers are inside upstream in /etc/nginx/conf.d/default.conf:89
nginx: configuration file /etc/nginx/nginx.conf test failed
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0
0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0
curl: (22) The requested URL returned error: 404 Not Found
ddev-router healthcheck endpoint not responding
##[error]Process completed with exit code 1.
.github/workflows/tests.yml:
name: Tests
on: [push, pull_request]
jobs:
  tests-via-ddev:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v1
      - run: export DEBIAN_FRONTEND=noninteractive
      # update docker
      - run: sudo -E apt-get purge -y docker docker-engine docker.io containerd runc nginx
      - run: curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
      - run: sudo -E add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
      - run: sudo -E apt-get update
      - run: sudo -E apt-get -y -o Dpkg::Options::="--force-confnew" install docker-ce
      # install linuxbrew
      - run: sh -c "$(curl -fsSL https://raw.githubusercontent.com/Linuxbrew/install/master/install.sh)"
      - run: echo "::add-path::/home/linuxbrew/.linuxbrew/bin"
      # install ddev + docker-compose
      - run: brew tap drud/ddev && brew install ddev docker-compose
      # Start ddev
      - run: ddev start || exit 0
      # Debug
      - run: ls -als .ddev/
      - run: curl 127.0.0.1 || exit 0
      - run: curl 127.0.0.1/healthcheck || exit 0
      - run: docker ps || exit 0
      # we want Clover coverage
      - run: ddev exec enable_xdebug
      # Run tests
      - run: ddev composer cookieman:test
I tried:
- using Ubuntu 16.04
- fully upgrading all packages on Ubuntu 16.04/18.04
- configuring ddev like this:
    - run: ddev config global --router-bind-all-interfaces=true
    - run: ddev config global --omit-containers=dba,ddev-ssh-agent
- changing to unprivileged router ports (settings router_http_port, router_https_port in config.yaml, see the snippet below)
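For reference, the unprivileged-port variant goes into .ddev/config.yaml roughly like this (the port numbers here are only an example, not the values from the linked runs):
router_http_port: "8080"
router_https_port: "8443"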
If I force it to continue with ddev start || exit 0 I can see containers up and running:
- run: docker ps || exit 0
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
c36601a06fd6 drud/ddev-router:v1.11.0 "/app/docker-entrypo…" 27 seconds ago Up 24 seconds (unhealthy) 0.0.0.0:4430->4430/tcp, 0.0.0.0:4444->4444/tcp, 0.0.0.0:8025->8025/tcp, 80/tcp, 0.0.0.0:8080->8080/tcp ddev-router
18152602a054 drud/ddev-webserver:v1.11.0-built "/start.sh" 30 seconds ago Up 28 seconds (healthy) 8025/tcp, 127.0.0.1:32770->80/tcp, 127.0.0.1:32769->443/tcp ddev-extension-cookieman-master-web
33aca55715f2 selenium/standalone-chrome:3.12 "/opt/bin/entry_poin…" 32 seconds ago Up 30 seconds 4444/tcp ddev-extension-cookieman-master-chrome
6c852ae62974 drud/ddev-dbserver:v1.11.0-10.2-built "/docker-entrypoint.…" 32 seconds ago Up 30 seconds (healthy) 127.0.0.1:32768->3306/tcp ddev-extension-cookieman-master-db
curl 127.0.0.1 yields the default nginx start page (while I would expect '503: No ddev back-end site available')
curl 127.0.0.1/healthcheck yields a 404
So far my conclusion is: ddev-router is reachable, but its nginx does not have the appropriate configuration ("no servers are inside upstream in /etc/nginx/conf.d/default.conf"). Thus ddev only runs the pre-start hook from config.yaml; post-start is never reached.
You can see the output of the last runs here https://github.com/dmind-gmbh/extension-cookieman/actions?query=branch%3Afeat%2Facceptance-tests
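To see what the router actually generated, I inspect its config from the host (plain docker commands, assuming the default container name ddev-router):
docker exec ddev-router cat /etc/nginx/conf.d/default.conf
docker exec ddev-router nginx -t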
EDIT/AMEND:
This is the (mal-)generated /etc/nginx/conf.d/default.conf from ddev-router:
# If we receive X-Forwarded-Proto, pass it through; otherwise, pass along the
# scheme used to connect to this server
map $http_x_forwarded_proto $proxy_x_forwarded_proto {
  default $http_x_forwarded_proto;
  ''      $scheme;
}
# If we receive X-Forwarded-Port, pass it through; otherwise, pass along the
# server port the client connected to
map $http_x_forwarded_port $proxy_x_forwarded_port {
  default $http_x_forwarded_port;
  ''      $server_port;
}
# If we receive Upgrade, set Connection to "upgrade"; otherwise, delete any
# Connection header that may have been passed to this server
map $http_upgrade $proxy_connection {
  default upgrade;
  ''      close;
}
# Apply fix for very long server names
server_names_hash_bucket_size 128;
# Default dhparam
# ssl_dhparam /etc/nginx/dhparam/dhparam.pem;
# Set appropriate X-Forwarded-Ssl header
map $scheme $proxy_x_forwarded_ssl {
  default off;
  https on;
}
rfay mentioned a miscommunication between the ddev-router and the underlying docker daemon via sockets.
EDIT:
I put my findings into a Github action that can be included in other projects, too: https://github.com/marketplace/actions/setup-ddev
I came to the conclusion that the problem is with docker-gen.
In the first line of the template (https://github.com/drud/ddev/blob/master/containers/ddev-router/nginx.tmpl or also jwilder's https://github.com/jwilder/nginx-proxy/blob/master/nginx.tmpl) the .Docker.CurrentContainerID is empty which seemed to happen to some people in some contexts https://github.com/jwilder/docker-gen/issues/196#issuecomment-225412753.
The suggested removal of '-only-exposed' did not work for me. Instead I changed the template a bit so that it does not rely on the current container, and that was it.
:)
This is still a bit dirty and only a proof of concept:
This is the changed template: https://github.com/jonaseberle/github-action-setup-ddev/blob/master/.ddev/patches/ddev-router/nginx.tmpl (compare the upstream {} section, where I removed the check whether the containers are on the same network as the router).
In the workflow I do
ddev start || exit 0 # this will fail and also not execute any post-start hooks
docker cp nginx-debug.tmpl ddev-router:/app/nginx-debug.tmpl
docker exec ddev-router sh -c "docker-gen -only-exposed -notify 'sleep 1 && nginx -s reload' /app/nginx-debug.tmpl /etc/nginx/conf.d/default.conf"
... ddev is now up and healthy
I have not decided how to move on from here. Maybe @rfay would have an idea how to change the nginx template. Or I will use a custom Dockerfile for the ddev-router with a docker-compose.ddev-router.yaml to change the file just for the GitHub Actions run...
EDIT/AMEND:
The shorter and tested version of this is:
ddev start || docker cp .ddev/patches/ddev-router/nginx.tmpl ddev-router:/app/nginx.tmpl
ddev start # the second start triggers a container restart and thus a docker-gen run
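In the GitHub Actions workflow these two lines become two plain run steps (same commands as above, nothing else changed):
- run: ddev start || docker cp .ddev/patches/ddev-router/nginx.tmpl ddev-router:/app/nginx.tmpl
- run: ddev start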
EDIT: The weird DNS behavior was some kind of transient issue, and now RHEL/Podman works the same way as Ubuntu/Podman. I can't reproduce the issue anymore, which makes most of this question (though not 100% of it) moot.
I am trying to use Podman and docker-compose to create a compose stack with multiple replicas of a backend container, and I am having a hard time with it.
I use Podman because I have to (it comes with the Red Hat platform), and picked docker-compose because it is familiar and I use it on my local dev host, too. I know there are alternatives (podman-compose etc.). I learned that Podman 4.1 supports Docker Compose, so this sounded like a good candidate.
As an example I have docker-compose.yml with one frontend container and 3 backend containers:
version: "3"
services:
frontend:
image: "nginx:latest"
ports:
- "3000:80"
depends_on:
- backend
backend:
image: "tomcat:latest"
ports:
- "8180-8280:8080"
scale: 3
Note: this stack is just an example. Its purpose is only to highlight the networking aspects of a multi-replica docker-compose setup. I could use something other than nginx:latest, and using e.g. Traefik can solve some of the problems... but sometimes you want to connect directly from one container to a service with multiple container replicas.
Docker & docker-compose (Ubuntu)
Running this on a host which has docker and docker-compose is straightforward.
Backend containers get assigned random ports from range 8180-8280.
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
98941117708c nginx:latest "/docker-entrypoint.…" 25 seconds ago Up 22 seconds 0.0.0.0:3000->80/tcp, :::3000->80/tcp example-docker-compose-frontend-1
3d749e25eaba tomcat:latest "catalina.sh run" 26 seconds ago Up 23 seconds 0.0.0.0:8193->8080/tcp, :::8193->8080/tcp example-docker-compose-backend-1
854ba8f60cb3 tomcat:latest "catalina.sh run" 26 seconds ago Up 23 seconds 0.0.0.0:8192->8080/tcp, :::8192->8080/tcp example-docker-compose-backend-2
e57e32181e8e tomcat:latest "catalina.sh run" 26 seconds ago Up 23 seconds 0.0.0.0:8194->8080/tcp, :::8194->8080/tcp example-docker-compose-backend-3
Logging into the frontend container, the service name backend resolves to all 3 backend containers:
dig backend
; <<>> DiG 9.16.27-Debian <<>> backend
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 31602
;; flags: qr rd ra; QUERY: 1, ANSWER: 3, AUTHORITY: 0, ADDITIONAL: 0
;; QUESTION SECTION:
;backend. IN A
;; ANSWER SECTION:
backend. 600 IN A 172.19.0.3
backend. 600 IN A 172.19.0.2
backend. 600 IN A 172.19.0.4
curl backend:8080 works
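For completeness, this is roughly how those checks can be reproduced from the host (assuming dig and curl are installed inside the frontend image; stock nginx:latest does not ship them):
docker exec example-docker-compose-frontend-1 dig +short backend
docker exec example-docker-compose-frontend-1 curl -sI backend:8080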
Podman & docker-compose (RHEL 8)
Podman came preinstalled; I added docker-compose (standalone) and podman-docker:
# curl -SL https://github.com/docker/compose/releases/download/v2.10.2/docker-compose-linux-x86_64 -o /usr/local/bin/docker-compose
# chmod a+x /usr/local/bin/docker-compose
# sudo yum install podman-docker
And activated rootless podman socket so that podman and docker-compose can talk to each other:
# systemctl --user enable podman.socket
# systemctl --user start podman.socket
# systemctl --user status podman.socket
# export DOCKER_HOST=unix:///run/user/$UID/podman/podman.sock
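Before running docker-compose I check that the socket answers the Docker-compatible API (a quick sanity check, assuming curl is available on the host):
curl -s --unix-socket /run/user/$UID/podman/podman.sock http://localhost/_ping
# expected output: OK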
I also switched the network backend to netavark; DNS did not work without that change.
$ podman info |grep -i networkbackend
networkBackend: netavark
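For reference, the backend switch itself goes into containers.conf and needs a storage reset afterwards (a sketch of how I understand it; paths may differ on your system):
# ~/.config/containers/containers.conf (or /etc/containers/containers.conf)
[network]
network_backend = "netavark"
# afterwards: podman system reset   # removes existing containers, images and networks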
1. Port ranges are not supported
Podman does not like ports: - "8180-8280:8080" due to this bug: https://github.com/containers/podman/issues/15111
[+] Running 1/0
⠿ Network example-docker-compose_default Created 0.0s
⠋ Container example-docker-compose-backend-3 Creating 0.0s
⠋ Container example-docker-compose-backend-1 Creating 0.0s
⠋ Container example-docker-compose-backend-2 Creating 0.0s
Error response from daemon: make cli opts(): strconv.Atoi: parsing "8180-8280": invalid syntax
2. Without port range, address already in use conflict
I changed docker-compose.yml to remove the port range:
    ports:
      - "8080:8080"
This results in a port conflict; all 3 backend containers try to bind to 8080:
Catalina.startup.Catalina.start Server startup in [190] milliseconds
Error response from daemon: rootlessport listen tcp 0.0.0.0:8080: bind: address already in use
3. Scale = 1
Let's try with just one backend container. The system starts up:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
217c3e726431 docker.io/library/tomcat:latest catalina.sh run 7 seconds ago Up 6 seconds ago 0.0.0.0:8080->8080/tcp example-docker-compose-backend-1
9c3f86676bde docker.io/library/nginx:latest nginx -g daemon o... 6 seconds ago Up 6 seconds ago 0.0.0.0:3000->80/tcp example-docker-compose-frontend-1
DNS from the frontend container looks weird. What are all these different backend IP addresses, 10.89.0.3 - 10.89.0.12? Only the last of them responds when I curl 10.89.0.x. Still, curl backend:8080 works fine?
4. Scale up from command line
I remove ports and scale from docker-compose.yml and start the compose stack with the --scale option:
version: "3"
services:
frontend:
image: "nginx:latest"
ports:
- "3000:80"
depends_on:
- backend
backend:
image: "tomcat:latest"
$ docker-compose up --scale backend=3
[+] Running 4/4
⠿ Container example-docker-compose-backend-2 Recreated 0.3s
⠿ Container example-docker-compose-backend-3 Recreated 0.2s
⠿ Container example-docker-compose-backend-1 Recreated 0.3s
⠿ Container example-docker-compose-frontend-1 Recreated 0.3s
Attaching to example-docker-compose-backend-1, example-docker-compose-backend-2, example-docker-compose-backend-3, example-docker-compose-frontend-1
Now the compose stack starts nicely with 3 backend containers:
$ docker ps
Emulate Docker CLI using podman. Create /etc/containers/nodocker to quiet msg.
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
b81c44246ae7 docker.io/library/tomcat:latest catalina.sh run 22 minutes ago Up 22 minutes ago example-docker-compose-backend-3
814dc5d307f7 docker.io/library/tomcat:latest catalina.sh run 22 minutes ago Up 22 minutes ago example-docker-compose-backend-2
fb0a5090a456 docker.io/library/tomcat:latest catalina.sh run 22 minutes ago Up 22 minutes ago example-docker-compose-backend-1
c0219d7fded4 docker.io/library/nginx:latest nginx -g daemon o... 22 minutes ago Up 22 minutes ago 0.0.0.0:3000->80/tcp example-docker-compose-frontend-1
DNS from the frontend now has even more entries:
# dig backend
; <<>> DiG 9.16.27-Debian <<>> backend
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 29526
;; flags: qr rd ad; QUERY: 1, ANSWER: 11, AUTHORITY: 0, ADDITIONAL: 1
;; WARNING: recursion requested but not available
;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
; COOKIE: 410008265c2b0edb (echoed)
;; QUESTION SECTION:
;backend. IN A
;; ANSWER SECTION:
backend. 86400 IN A 10.89.0.3
backend. 86400 IN A 10.89.0.4
backend. 86400 IN A 10.89.0.6
backend. 86400 IN A 10.89.0.7
backend. 86400 IN A 10.89.0.15
backend. 86400 IN A 10.89.0.16
backend. 86400 IN A 10.89.0.18
backend. 86400 IN A 10.89.0.19
backend. 86400 IN A 10.89.0.20
backend. 86400 IN A 10.89.0.21
backend. 86400 IN A 10.89.0.22
And curl backend:8080 does not work (not sure which port I should use now)
Questions
What's going on here?
Can I achieve a setup of 3 backend containers, so that DNS name backend would resolve to those, with podman & docker-compose?
Podman seems to support docker-compose (or vice versa), but only to a degree. Is there some documentation which tells what docker-compose features are supported on Podman, and which are not?
Podman is my container runtime of choice...unless I'm working with docker-compose, in which case I have found it to be "close but not quite" in terms of its docker API support.
However, for what you're trying to do, you could replace Nginx with Traefik, and let Traefik handle the load balancing. Traefik is a dynamic proxy that uses the Docker API and container labelling to discover containers and configure the proxy rules.
For example:
version: "3"
services:
frontend:
image: "docker.io/traefik:v2.8"
ports:
- "3000:80"
- "127.0.0.1:3080:8080"
command:
- --api.insecure=true
- --providers.docker
volumes:
- /run/user/$UID/podman/podman.sock:/var/run/docker.sock
backend:
labels:
traefik.http.routers.backend.rule: Host(`localhost`)
image: "quay.io/larsks/demoserver:latest"
scale: 3
Here we're mapping all requests for Host: localhost to our backend containers. This is just for the purposes of a demonstration (since I'll be running curl localhost/...); a more realistic configuration would use specific hostnames, or paths, etc. You can read more in the Routing configuration section of Traefik's Docker documentation, and also in the general Router documentation.
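For example, a path-based rule uses the same label mechanism (the path here is only an illustration):
    labels:
      traefik.http.routers.backend.rule: PathPrefix(`/api`)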
With this configuration, we see the following containers running:
$ podman ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
c29de1d137e2 docker.io/library/traefik:v2.8 --api.insecure=tr... 5 seconds ago Up 6 seconds ago 0.0.0.0:3000->80/tcp, 127.0.0.1:3080->8080/tcp demoserver_frontend_1
f4fac24cb494 quay.io/larsks/demoserver:latest /usr/local/bin/st... 5 seconds ago Up 6 seconds ago demoserver_backend_2
d9d388202be2 quay.io/larsks/demoserver:latest /usr/local/bin/st... 5 seconds ago Up 5 seconds ago demoserver_backend_1
5a3e6330739d quay.io/larsks/demoserver:latest /usr/local/bin/st... 5 seconds ago Up 5 seconds ago demoserver_backend_3
And we can see that requests on port 3000 cycle between the available backends. Running this script:
for i in {1..10}; do
curl http://localhost:3000/hostname
done
Produces as output:
5a3e6330739d
d9d388202be2
f4fac24cb494
5a3e6330739d
d9d388202be2
f4fac24cb494
5a3e6330739d
d9d388202be2
f4fac24cb494
5a3e6330739d
After installing postgresql(13) on GCP, I tried installing citus using this command:
curl https://install.citusdata.com/community/rpm.sh | sudo bash
However, I run into the following error. Any guidance/suggestions would be helpful.
[tony_stark@host]$ curl https://install.citusdata.com/community/rpm.sh | sudo bash
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 8667 100 8667 0 0 21791 0 --:--:-- --:--:-- --:--:-- 21831
Detected operating system as centos/7.
Checking for curl...
Detected curl...
Checking for postgresql13-server...
Detected postgresql13-server...
Checking for EPEL repositories...
Detected EPEL repoitories
Downloading repository file: https://repos.citusdata.com/community/config_file.repo?os=centos&dist=7&source=script...
curl: (7) Failed to connect to
Network is unreachable
Unable to run:
curl https://repos.citusdata.com/community/config_file.repo?os=centos&dist=7&source=script
The URL https://repos.citusdata.com/community/config_file.repo?os=centos&dist=7&source=script works in my browser though.
Looks like a temporary issue on the repository side. I just tried it and it works:
[sergiusz@host ~]$ curl https://install.citusdata.com/community/rpm.sh | sudo bash
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 8667 100 8667 0 0 25686 0 --:--:-- --:--:-- --:--:-- 25718
Detected operating system as centos/7.
Checking for curl...
Detected curl...
Checking for postgresql13-server...
Installing pgdg13 repo... done.
Checking for EPEL repositories...
Detected EPEL repoitories
Downloading repository file: https://repos.citusdata.com/community/config_file.repo?os=centos&dist=7&source=script... done.
Installing pygpgme to verify GPG signatures... done.
Installing yum-utils... done.
Generating yum cache for citusdata_community... done.
The repository is set up! You can now install packages.
EDIT:
This file can also be downloaded manually:
curl "https://repos.citusdata.com/community/config_file.repo?os=centos&dist=7&source=script" -o /etc/yum.repos.d/citusdata_community.repo
Until now I have worked a lot with GitHub/Bitbucket and Jenkins/Bamboo. Right now I'm trying to set up a GitLab CE server with a private Kubernetes cluster.
I want to run a hello-world project in Java with GitLab's Auto DevOps in Kubernetes; this is the repo I'm using:
https://github.com/dstar55/docker-hello-world-spring-boot
Everything works fine up to the point where the runner gets created in Kubernetes and downloads the image, but then it gets stuck downloading Maven resources.
Running on runner-h6cwaztm-project-8-concurrent-0jvd9f via runner-gitlab-runner-6dcf7dd458-jl69h...
Fetching changes with git depth set to 50...
00:02
Initialized empty Git repository in /builds/.../hello-world-spring/.git/
Created fresh repository.
From https://.../hello-world-spring
* [new ref] refs/pipelines/14 -> refs/pipelines/14
* [new branch] master -> origin/master
Checking out ad24ac6b as master...
Skipping Git submodules setup
$ if [[ -z "$CI_COMMIT_TAG" ]]; then # collapsed multi-line command
$ /build/build.sh
Logging to GitLab Container Registry with CI credentials...
WARNING! Using --password via the CLI is insecure. Use --password-stdin.
WARNING! Your password will be stored unencrypted in /root/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credentials-store
Login Succeeded
Building Dockerfile-based application...
Step 1/10 : FROM maven:3.5.2-jdk-8-alpine AS maven_build
3.5.2-jdk-8-alpine: Pulling from library/maven
22bc7fb81913: Pull complete
Digest: sha256:7cebda60f8a541e1bf2330306d22f9786f989187f4ec96539d398a0d4dbfdadb
Status: Downloaded newer image for maven:3.5.2-jdk-8-alpine
---> 293423a981a7
Step 2/10 : COPY pom.xml /tmp/
---> c0e609a509a8
Step 3/10 : COPY src /tmp/src/
---> e735a08f2b39
Step 4/10 : WORKDIR /tmp/
---> Running in 90620c0ca3ad
Removing intermediate container 90620c0ca3ad
---> a5d9fdc62aa9
Step 5/10 : RUN mvn package
---> Running in dc90f43fc83b
[INFO] Scanning for projects...
Downloading from central: https://repo.maven.apache.org/maven2/org/springframework/boot/spring-boot-starter-parent/2.2.1.RELEASE/spring-boot-starter-parent-2.2.1.RELEASE.pom
It never throws an error (until it times out) and it never goes past this point.
Kubernetes has 4 nodes, 1 master and 3 slaves, using Flannel and MetalLB.
Edit:
I added a curl command instead of mvn package, and it seems the download speed is 0. How is that possible?
Step 5/11 : RUN curl https://repo.maven.apache.org/maven2/org/springframework/boot/spring-boot-starter-parent/2.2.1.RELEASE/spring-boot-starter-parent-2.2.1.RELEASE.pom --output test.pom
---> Running in db2bc24c6a4f
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
0 0 0 0 0 0 0 0 --:--:-- 0:05:00 --:--:-- 0
curl: (28) Operation timed out after 300689 milliseconds with 0 out of 0 bytes received
The command '/bin/sh -c curl https://repo.maven.apache.org/maven2/org/springframework/boot/spring-boot-starter-parent/2.2.1.RELEASE/spring-boot-starter-parent-2.2.1.RELEASE.pom --output test.pom' returned a non-zero code: 28
ERROR: Job failed: command terminated with exit code 1
According to the place where CI hangs, your pipeline is stuck at mvn package:
Step 5/10 : RUN mvn package
---> Running in dc90f43fc83b
[INFO] Scanning for projects...
Downloading from central: https://repo.maven.apache.org/maven2/org/springframework/boot/spring-boot-starter-parent/2.2.1.RELEASE/spring-boot-starter-parent-2.2.1.RELEASE.pom
So, you can try to restart Artifactory.
Also, you can debug the Maven downloads with mvn clean package -X -e
See this answer: "Maven hanging indefinitely while checking for updates" on Stack Overflow.
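To rule out cluster egress/DNS problems independently of Maven, you could also run a throwaway pod in the same cluster and try the same download (not part of the original answer, just a quick check):
kubectl run net-test --rm -it --restart=Never --image=curlimages/curl --command -- \
  curl -v --max-time 30 https://repo.maven.apache.org/maven2/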
I've been trying to set up Cloud Code with VS Code and I've been running into problems when starting the deploy process with Cloud Code: Deploy.
I've tried deploying the samples, python-hello-world-1 as well as go-hello-world-1, to my Kubernetes cluster on GKE, but I always end up getting errors when the deploy process starts downloading packages:
Go Output
Running: skaffold run --enable-rpc -v info --rpc-http-port 49869 --filename skaffold.yaml --default-repo gcr.io/abx-lernende
starting gRPC server on port 50051
starting gRPC HTTP server on port 49869
Using kubectl context: gke_abx-lernende_europe-west4-a_joshu-test-cluster
Generating tags...
- go-hello-world -> gcr.io/abx-lernende/go-hello-world:latest
Checking cache...
- go-hello-world: Not found. Building
Building [go-hello-world]...
Sending build context to Docker daemon 57.86kB
Step 1/8 : FROM golang:1.13
---> 6586e3d10e96
Step 2/8 : RUN go get -u -v github.com/go-delve/delve/cmd/dlv
---> Running in b75ce8e5dae9
[91mgithub.com/go-delve/delve (download)
[0m[91m# cd .; git clone -- https://github.com/go-delve/delve /go/src/github.com/go-delve/delve
Cloning into '/go/src/github.com/go-delve/delve'...
fatal: unable to access 'https://github.com/go-delve/delve/': Failed to connect to github.com port 443: Connection refused
package github.com/go-delve/delve/cmd/dlv: exit status 128
[0mfailed to build: build failed: building [go-hello-world]: build artifact: unable to stream build output: The command '/bin/sh -c go get -u -v github.com/go-delve/delve/cmd/dlv' returned a non-zero code: 1
Exited with code 1.
Python Output
Running: skaffold run --enable-rpc -v info --rpc-http-port 50185 --filename
skaffold.yaml --default-repo gcr.io/abx-lernende
starting gRPC server on port 50051
starting gRPC HTTP server on port 50185
Skaffold &{Version:v1.3.1 ConfigVersion:skaffold/v2alpha3 GitVersion: GitCommit:6ba887a42438d1da578a005cf550e618fee6dfb8 GitTreeState:clean BuildDate:2020-01-31T19:55:18Z GoVersion:go1.13.4 Compiler:gc Platform:windows/amd64}
Using kubectl context: gke_abx-lernende_europe-west4-a_joshu-test-cluster
Generating tags...
- python-hello-world -> Tags generated in 0s
gcr.io/abx-lernende/python-hello-world:latest
Checking cache...
- python-hello-world: Cache check complete in 6.0001ms
Not found. Building
Building [python-hello-world]...
Sending build context to Docker daemon 4.608kB
Step 1/7 : FROM python:3.8
---> efdecc2e377a
Step 2/7 : WORKDIR /app
---> Using cache
---> a131b81cad66
Step 3/7 : COPY requirements.txt .
---> Using cache
---> 4625ef1862bd
Step 4/7 : RUN pip install --trusted-host pypi.python.org -r requirements.txt
---> Running in 4da23a158ae3
[91mWARNING: Retrying (Retry(total=4, connect=None, read=None, redirect=None, status=None)) after connection broken by 'ConnectTimeoutError(<pip._vendor.urllib3.connection.VerifiedHTTPSConnection object at 0x7f17ba9c9d60>, 'Connection to pypi.org timed out. (connect timeout=15)')': /simple/flask/
I'm assuming this is due to me being behind a corporate proxy. As countermeasures I have explicitly configured VS Code, Git, pip, Go and the Google Cloud SDK to use said proxy. On top of that I set the Windows environment variables for the proxy, sadly without success.
Thanks!
You can configure docker to pass through proxy information into the containers by adding something like the following to your ~/.docker/config.json:
{
  "proxies": {
    "default": {
      "httpProxy": "http://192.168.1.12:3128",
      "httpsProxy": "http://192.168.1.12:3128"
    }
  }
}
Docker will set the HTTP_PROXY/HTTPS_PROXY environment variables within the container, which are picked up by many tools.
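You can verify that the variables are injected with a throwaway container (any small image will do):
docker run --rm alpine env | grep -i proxy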
I just want to know whether I can run Karate tests in a pod, or is there any good suggestion on how to run them?
I tried running the Karate tests in a terminal and it works. I just want to know if I can run them from a Kubernetes pod. Nginx is also running in the pod.
You can run everything in a pod that you run outside of it; a pod runs the containers inside it.
So create a Dockerfile and build a Docker image from it, then start the Karate pod using that image.
You can write the Dockerfile like this:
FROM maven:3-jdk-8-alpine
# create the working directory and copy the Maven settings, POM and project sources into the image
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
COPY settings.xml /usr/share/maven/ref/
COPY pom.xml /tmp/pom.xml
COPY . /usr/src/app
# resolve dependencies and build up to the packaging phase at image build time, skipping the tests
RUN mvn -B -f /tmp/pom.xml -s /usr/share/maven/ref/settings-docker.xml prepare-package -DskipTests
# the tests themselves are run by this script when the container starts
CMD ["/usr/src/app/maven_runner.sh"]
I found an example here: https://github.com/neillfontes/karate-sample
Posting as Community Wiki for future use.
@Harsh Manvar provided a good example; however, if you just build it from the Dockerfile you will receive errors. You have to download all the files mentioned on GitHub. The correct order is:
$ git clone https://github.com/neillfontes/karate-sample.git
$ cd karate-sample
$ docker build -t karate_docker .
After the image is built you can check it:
$ docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
karate_docker latest 9dc6d7a5278a About a minute ago 136MB
Later you can start testing using:
$ docker run karate_docker
START: Running tests...
-------------------------------------------------------
T E S T S
-------------------------------------------------------
Running demo.DemoTest
11:57:49.684 [main] DEBUG c.i.karate.cucumber.CucumberRunner - init test class: class demo.DemoTest
11:57:50.412 [main] DEBUG c.i.karate.cucumber.CucumberRunner - loading feature: /usr/src/app/target/test-classes/demo/features/get-token.feature
11:57:50.663 [main] DEBUG c.i.karate.cucumber.CucumberRunner - loading feature: /usr/src/app/target/test-classes/demo/features/make-request.feature
11:57:53.898 [main] INFO com.intuit.karate.ScriptBridge - karate.env system property was: null
11:57:54.867 [main] DEBUG c.i.k.h.a.RequestLoggingInterceptor -
1 > POST http://brentertainment.com/oauth2/lockdin/token
1 > Accept-Encoding: gzip,deflate
1 > Connection: Keep-Alive
1 > Content-Length: 96