Keycloak LetsEncrypt Nginx Reverse Proxy Docker Compose

I am trying to set up a Keycloak instance with an SSL connection behind an nginx proxy. My docker ps output:
d7fd473cc77b jboss/keycloak "/opt/jboss/tools/do…" 34 minutes ago Up 8 minutes 0.0.0.0:8080->8080/tcp, 8443/tcp auth
76e757bbe129 mariadb "sh -c ' echo 'CREAT…" 34 minutes ago Up 8 minutes 0.0.0.0:3306->3306/tcp backend-database
d99e23470955 stilliard/pure-ftpd:hardened-latest "/bin/sh -c '/run.sh…" 34 minutes ago Up 8 minutes 0.0.0.0:21->21/tcp, 0.0.0.0:30000->30000/tcp, 30001-30009/tcp ftp-server
95f4fbdea0de wordpress:latest "docker-entrypoint.s…" 35 minutes ago Up 35 minutes 80/tcp wordpress
b3e40ca6de48 mariadb:latest "docker-entrypoint.s…" 35 minutes ago Up 35 minutes 3306/tcp database
e5c12bb5ba52 nginx "/docker-entrypoint.…" 37 minutes ago Up 37 minutes 0.0.0.0:80->80/tcp, 0.0.0.0:443->443/tcp nginx-web
c0ac90a6c408 jrcs/letsencrypt-nginx-proxy-companion "/bin/bash /app/entr…" 37 minutes ago Up 37 minutes nginx-letsencrypt
33ae7de5f598 jwilder/docker-gen "/usr/local/bin/dock…" 37 minutes ago Up 37 minutes nginx-gen
As you can see from the console output above, I am also running a WordPress instance in a Docker container, and that works like a charm: no problems with unsigned or invalid SSL certificates, everything is fine. But when I try to open the Keycloak web interface via the domain with the corresponding port (in my case: 8080), I get the following error:
Error code: SSL_ERROR_RX_RECORD_TOO_LONG
And when I try to open the web interface via the IP address, also with the corresponding port, I get a warning that the connection is not secure.
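From what I have read so far, SSL_ERROR_RX_RECORD_TOO_LONG usually means the browser negotiates TLS against a port that only speaks plain HTTP, so I suspect my request on port 8080 bypasses the proxy entirely. My understanding is that with the jwilder/docker-gen plus letsencrypt-companion stack above, the Keycloak container is meant to be reached through the proxy on port 443 and announced to it with environment variables roughly like this sketch (the domain and mail address are placeholders, not my real values):
auth:
  image: jboss/keycloak
  environment:
    - VIRTUAL_HOST=auth.example.com        # picked up by docker-gen/nginx-proxy
    - VIRTUAL_PORT=8080                    # container port nginx proxies to
    - LETSENCRYPT_HOST=auth.example.com    # picked up by the letsencrypt companion
    - LETSENCRYPT_EMAIL=admin@example.com
    - PROXY_ADDRESS_FORWARDING=true        # tells Keycloak it runs behind a reverse proxy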
Hopefully this is enough information for you guys to figure out what I've done wrong.
So far,
Daniel

Related

How to properly query Kafka REST Proxy?

I'm running a dockerized distribution of Confluent platform:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
e6963904b485 confluentinc/cp-enterprise-control-center:7.0.1 "/etc/confluent/dock…" 11 hours ago Up 11 hours 0.0.0.0:9021->9021/tcp, :::9021->9021/tcp control-center
49ade0e752b4 confluentinc/cp-ksqldb-cli:7.0.1 "/bin/sh" 11 hours ago Up 11 hours ksqldb-cli
95b0982c0159 confluentinc/ksqldb-examples:7.0.1 "bash -c 'echo Waiti…" 11 hours ago Up 11 hours ksql-datagen
e28e3b937f6e confluentinc/cp-ksqldb-server:7.0.1 "/etc/confluent/dock…" 11 hours ago Up 11 hours 0.0.0.0:8088->8088/tcp, :::8088->8088/tcp ksqldb-server
af92bfb84cb1 confluentinc/cp-kafka-rest:7.0.1 "/etc/confluent/dock…" 11 hours ago Up 11 hours 0.0.0.0:8082->8082/tcp, :::8082->8082/tcp rest-proxy
318a999e76dc cnfldemos/cp-server-connect-datagen:0.5.0-6.2.0 "/etc/confluent/dock…" 11 hours ago Up 11 hours 0.0.0.0:8083->8083/tcp, :::8083->8083/tcp, 9092/tcp connect
0c299fbda7c5 confluentinc/cp-schema-registry:7.0.1 "/etc/confluent/dock…" 11 hours ago Up 11 hours 0.0.0.0:8081->8081/tcp, :::8081->8081/tcp schema-registry
a33075002386 confluentinc/cp-server:7.0.1 "/etc/confluent/dock…" 11 hours ago Up 11 hours 0.0.0.0:9092->9092/tcp, :::9092->9092/tcp, 0.0.0.0:9101->9101/tcp, :::9101->9101/tcp broker
135f832fbccb confluentinc/cp-zookeeper:7.0.1 "/etc/confluent/dock…" 11 hours ago Up 11 hours 2888/tcp, 0.0.0.0:2181->2181/tcp, :::2181->2181/tcp, 3888/tcp zookeeper
Kafka REST Proxy is running on port 8082
When I issue an HTTP GET call against the REST proxy:
curl --silent -X GET http://10.0.0.253:8082/kafka/clusters/ | jq
All I get is:
{
  "error_code": 404,
  "message": "HTTP 404 Not Found"
}
Given my configuration, what can I change to actually get some useful information out of Kafka REST Proxy?
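For what it's worth, on a standalone cp-kafka-rest container the v3 API is normally rooted at /v3 rather than /kafka/v3; the /kafka prefix belongs to the REST endpoint embedded in the broker itself. So a request along these lines may already return the cluster listing (a sketch; host and port are taken from the setup above):
curl --silent http://10.0.0.253:8082/v3/clusters | jq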

How to install collections in an AWX container

Is it possible to install collections in an AWX container?
docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
c4d255148cc8 netboxcommunity/netbox:latest "/opt/netbox/docker-…" 5 days ago Up 5 days 0.0.0.0:8000->8080/tcp, :::8000->8080/tcp netbox-docker_netbox_1
ac0784c16861 netboxcommunity/netbox:latest "/opt/netbox/venv/bi…" 5 days ago Up 5 days netbox-docker_netbox-worker_1
31b850bf8d12 redis:6-alpine "docker-entrypoint.s…" 5 days ago Up 5 days 6379/tcp netbox-docker_redis-cache_1
df0977f446f4 postgres:12-alpine "docker-entrypoint.s…" 5 days ago Up 5 days 5432/tcp netbox-docker_postgres_1
983b698274af redis:6-alpine "docker-entrypoint.s…" 5 days ago Up 5 days 6379/tcp netbox-docker_redis_1
4150e6ae71cc ansible/awx:17.1.0 "/usr/bin/tini -- /u…" 6 days ago Up 6 days 8052/tcp awx_task
5583bbf60d45 ansible/awx:17.1.0 "/usr/bin/tini -- /b…" 6 days ago Up 6 days 0.0.0.0:80->8052/tcp, :::80->8052/tcp awx_web
c9d92412d1cd redis "docker-entrypoint.s…" 6 days ago Up 6 days 6379/tcp awx_redis
71b99bde5d41 postgres:12 "docker-entrypoint.s…" 6 days ago Up 6 days 5432/tcp awx_postgres
Got the solution.
In the project directory, create a folder: collections
Inside it, create a file: requirements.yml
File content:
collections:
  - name: community.general
AWX then installs the listed collections when the project is synced.
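For a quick one-off test it should also be possible to install a collection directly inside the running task container, though anything installed that way will not survive a container rebuild (a sketch using the container name from the listing above):
docker exec -it awx_task ansible-galaxy collection install community.general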

Kubeadm - no port 6443 after cluster creation

I'm trying to create a Kubernetes HA cluster using kubeadm.
Kubeadm version: v1.11.1
I'm following these instructions: kubeadm ha
Everything passed OK, except the final step: nodes can't see each other on port 6443.
sudo netstat -an | grep 6443
Shows nothing.
In journalctl -u kubelet I see the following error:
reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:464: Failed to list *v1.Node: Get https://<LB>:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-19-111-200.ec2.internal&limit=500&resourceVersion=0: dial tcp 172.19.111.200:6443: connect: connection refused
List of Docker containers running on the instance:
sudo docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
e3eabb527a92 0e4a34a3b0e6 "kube-scheduler --ad…" 19 hours ago Up 19 hours k8s_kube-scheduler_kube-scheduler-ip-172-19-111-200.ec2.internal_kube-system_31eabaff7d89a40d8f7e05dfc971cdbd_1
123e78fa73c7 55b70b420785 "kube-controller-man…" 19 hours ago Up 19 hours k8s_kube-controller-manager_kube-controller-manager-ip-172-19-111-200.ec2.internal_kube-system_85384ca66dd4dc0adddc63923e2425a8_1
e0aa05e74fb9 1d3d7afd77d1 "/usr/local/bin/kube…" 19 hours ago Up 19 hours k8s_kube-proxy_kube-proxy-xh5dg_kube-system_f6bc49bc-959e-11e8-be29-0eaa4481e274_0
f5eac0b8fe7b k8s.gcr.io/pause:3.1 "/pause" 19 hours ago Up 19 hours k8s_POD_kube-proxy-xh5dg_kube-system_f6bc49bc-959e-11e8-be29-0eaa4481e274_0
541011b3e83a k8s.gcr.io/pause:3.1 "/pause" 19 hours ago Up 19 hours k8s_POD_etcd-ip-172-19-111-200.ec2.internal_kube-system_84d934eebaace20c70e0f268eb100028_0
a5e203947686 k8s.gcr.io/pause:3.1 "/pause" 19 hours ago Up 19 hours k8s_POD_kube-scheduler-ip-172-19-111-200.ec2.internal_kube-system_31eabaff7d89a40d8f7e05dfc971cdbd_0
89dbcdda659c k8s.gcr.io/pause:3.1 "/pause" 19 hours ago Up 19 hours k8s_POD_kube-apiserver-ip-172-19-111-200.ec2.internal_kube-system_4202bb793950ae679b2a433ea8711d18_0
5948e629d90e k8s.gcr.io/pause:3.1 "/pause" 19 hours ago Up 19 hours k8s_POD_kube-controller-manager-ip-172-19-111-200.ec2.internal_kube-system_85384ca66dd4dc0adddc63923e2425a8_0
Forwarding is enabled in sysctl:
sudo sysctl -p
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.tcp_syncookies = 1
net.ipv4.icmp_echo_ignore_broadcasts = 1
net.ipv4.conf.all.accept_redirects = 0
net.ipv4.ip_forward = 1
Nodes can't see each other on port 6443.
It seems like your API server is not running. The fact that you get an error stating :6443: connect: connection refused points towards your API server not running.
This is further confirmed by your list of running Docker containers on the instance: the API server container is missing. Note that you have the related container with "/pause", but there is no container running "kube-apiserver --…". Your scheduler and controller-manager appear to run correctly, but the API server does not.
Now you have to dig in and see what prevented your API server from starting properly. Check the kubelet logs on all control-plane nodes.
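A few commands that may help with that digging (a sketch; the container ID is a placeholder and the manifest path assumes a stock kubeadm layout):
sudo docker ps -a | grep kube-apiserver      # look for an exited apiserver container
sudo docker logs <container-id>              # <container-id> taken from the line above
sudo journalctl -u kubelet --no-pager | tail -n 50
sudo cat /etc/kubernetes/manifests/kube-apiserver.yaml   # static pod manifest the kubelet tries to run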
This can also happen if your Linux kernel is not configured to handle IPv4/IPv6 transparently: a configured IPv4 address breaks when the kube-apiserver listens on an IPv6 interface.

Cannot ping containers in the same pod in Kubernetes (minikube)

On my local machine, I run a mysql container and then ping it from another container on the same network:
$ docker run -d tutum/mysql
$ docker run -it plumsempy/plum bash
# ping MYSQL_CONTAINER_ID
PING 67e35427d638 (198.105.244.24): 56 data bytes
64 bytes from 198.105.244.24: icmp_seq=0 ttl=37 time=0.243 ms
...
That is good. Then, using Kubernetes (minikube) locally, I deploy tutum/mysql using the following YAML:
...
- name: mysql
  image: tutum/mysql
...
There is nothing else for the mysql container. Then I deploy it, ssh into the minikube VM, spin up a random container, and try pinging the mysql container inside the pod this time:
$ kubectl create -f k8s-deployment.yml
$ minikube ssh
$ docker ps
$ docker run -it plumsempy/plum bash
# ping MYSQL_CONTAINER_ID_INSIDE_MINIKUBE
PING mysql (198.105.244.24): 56 data bytes
^C--- mysql ping statistics ---
10 packets transmitted, 0 packets received, 100% packet loss
# traceroute MYSQL_CONTAINER_ID_INSIDE_MINIKUBE
traceroute to aa7f7ed7af01 (198.105.244.24), 30 hops max, 60 byte packets
1 172.17.0.1 (172.17.0.1) 0.031 ms 0.009 ms 0.007 ms
2 10.0.2.2 (10.0.2.2) 0.156 ms 0.086 ms 0.050 ms
3 * * *
4 * * *
5 dtr02gldlca-tge-0-2-0-1.gldl.ca.charter.com (96.34.102.201) 16.153 ms 16.107 ms 16.077 ms
6 crr01lnbhca-bue-200.lnbh.ca.charter.com (96.34.98.188) 18.753 ms 18.011 ms 30.642 ms
7 crr01mtpkca-bue-201.mtpk.ca.charter.com (96.34.96.63) 30.779 ms 30.523 ms 30.428 ms
8 bbr01mtpkca-bue-2.mtpk.ca.charter.com (96.34.2.24) 24.089 ms 23.900 ms 23.814 ms
9 bbr01ashbva-tge-0-1-0-1.ashb.va.charter.com (96.34.3.139) 26.061 ms 25.949 ms 36.002 ms
10 10ge9-10.core1.lax1.he.net (65.19.189.177) 34.027 ms 34.436 ms 33.857 ms
11 100ge12-1.core1.ash1.he.net (184.105.80.201) 107.873 ms 107.750 ms 104.078 ms
12 100ge3-1.core1.nyc4.he.net (184.105.223.166) 100.554 ms 100.478 ms 100.393 ms
13 xerocole-inc.10gigabitethernet12-4.core1.nyc4.he.net (216.66.41.242) 109.184 ms 111.122 ms 111.018 ms
14 * * *
15 * * *
...(til it ends)
plumsempy/plum can be any container; since they are both on the same network and in the same pod, the ping should go through. The question is: why can I not reach mysql on minikube, and how can I fix that?
From k8s multi-container pod docs:
Pods share fate, and share some resources, such as storage volumes and IP addresses.
Hence the mysql container is reachable from the plum container at the IP address 127.0.0.1.
Also, since mysql runs on port 3306 by default, you probably want telnet 127.0.0.1 3306 to check if it's reachable (ping uses ICMP which doesn't have the concept of ports).
I guess the container ID just doesn't work with Kubernetes. You can also see that the container ID resolved to the public IP 198.105.244.24, which looks wrong.
You have multiple ways to contact this pod:
get the pod IP via kubectl describe -f k8s-deployment.yml
create a service for that pod (a minimal sketch follows this list) and do one of these (assuming the service name is mysql):
use environment variables like ping ${MYSQL_SERVICE_HOST}
use DNS like ping mysql.default.svc.cluster.local
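A minimal Service sketch for that deployment could look like the following (the app: mysql selector is an assumption and must match whatever labels the pod actually carries):
apiVersion: v1
kind: Service
metadata:
  name: mysql
spec:
  selector:
    app: mysql          # assumed label; must match the pod's labels
  ports:
    - port: 3306        # port the service exposes
      targetPort: 3306  # port the mysql container listens on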

Running SonarQube with Docker in CI/CD pipeline

I'm trying to get SonarQube stood up and scanning applications via Docker containers on an EC2 instance. I've spent the past day poring over SonarQube and Postgres documentation and am having very little luck.
The most sensible guide I've found is the docker-sonarqube project maintained by SonarSource. More specifically, I am following the SonarQube/Postgres guide using docker-compose.
My docker-compose.yml file looks identical to the one provided by SonarSource:
sonarqube:
  build: "5.2"
  ports:
    - "9000:9000"
  links:
    - db
  environment:
    - SONARQUBE_JDBC_URL=jdbc:postgresql://db:5432/sonar
  volumes_from:
    - plugins
db:
  image: postgres
  volumes_from:
    - datadb
  environment:
    - POSTGRES_USER=sonar
    - POSTGRES_PASSWORD=sonar
datadb:
  image: postgres
  volumes:
    - /var/lib/postgresql
  command: /bin/true
plugins:
  build: "5.2"
  volumes:
    - /opt/sonarqube/extensions
    - /opt/sonarqube/lib/bundled-plugins
  command: /bin/true
docker ps -a yields:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
2d003aef18f2 dockersonarqube_sonarqube "./bin/run.sh" 47 seconds ago Up 46 seconds 0.0.0.0:9000->9000/tcp dockersonarqube_sonarqube_1
c7d5043f4381 dockersonarqube_plugins "./bin/run.sh /bin/tr" 48 seconds ago Exited (0) 46 seconds ago dockersonarqube_plugins_1
590c72b4a723 postgres "/docker-entrypoint.s" 48 seconds ago Up 47 seconds 5432/tcp dockersonarqube_db_1
c105e6aebe09 postgres "/docker-entrypoint.s" 49 seconds ago Exited (0) 48 seconds ago dockersonarqube_datadb_1
Latest output from the sonarqube_1 container is:
sonarqube_1 | 2016.01.20 17:49:09 INFO web[o.s.s.a.TomcatAccessLog] Web server is started
sonarqube_1 | 2016.01.20 17:49:09 INFO web[o.s.s.a.EmbeddedTomcat] HTTP connector enabled on port 9000
sonarqube_1 | 2016.01.20 17:49:09 INFO app[o.s.p.m.Monitor] Process[web] is up
What does concern me is the latest output from the db_1 container:
PostgreSQL init process complete; ready for start up.
LOG: database system was shut down at 2016-01-20 17:48:40 UTC
LOG: MultiXact member wraparound protections are now enabled
LOG: database system is ready to accept connections
LOG: autovacuum launcher started
ERROR: relation "schema_migrations" does not exist at character 21
STATEMENT: select version from schema_migrations
ERROR: relation "schema_migrations" does not exist at character 21
STATEMENT: select version from schema_migrations
ERROR: relation "schema_migrations" does not exist at character 21
STATEMENT: select version from schema_migrations
ERROR: relation "schema_info" does not exist at character 15
STATEMENT: SELECT * FROM "schema_info" LIMIT 1
Navigating to http://my.instance.ip:9000 is unsuccessful. I am able to hit the respective ports of other running containers from the same machine.
Could anyone help to point me in the right direction? Any other guides or documentation that may serve me better? I also see issues with the documentation stating that analyzing a project begins with mvn sonar:sonar, but I'll defer that for now. Thank you very much in advance!
Use this image.
I modified this image to talk to an RDS instance:
EC2 (docker-sonar) <==> RDS Postgres
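A sketch of what that wiring could look like in docker-compose terms (the RDS endpoint and credentials are placeholders; SONARQUBE_JDBC_USERNAME and SONARQUBE_JDBC_PASSWORD are the image's standard variables):
sonarqube:
  image: sonarqube:5.2
  ports:
    - "9000:9000"
  environment:
    - SONARQUBE_JDBC_URL=jdbc:postgresql://<rds-endpoint>:5432/sonar
    - SONARQUBE_JDBC_USERNAME=sonar    # placeholder credentials
    - SONARQUBE_JDBC_PASSWORD=sonar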