How can Keycloak on Kubernetes connect to PostgreSQL?

I am trying to run Keycloak on Kubernetes using PostgreSQL as a database.
The files I am referring to are in the peterzandbergen/keycloak-kubernetes project on GitHub.
I used kompose to generate the YAML files as a starting point, using the files that JBoss published.
PostgreSQL is started first using:
./start-postgres.sh
Then I try to start Keycloak:
kubectl create -f keycloak-deployment.yaml
The Keycloak pod stops because it cannot connect to the database, with the error:
10:00:40,652 SEVERE [org.postgresql.Driver] (ServerService Thread Pool -- 58) Error in url: jdbc:postgresql://172.17.0.4:tcp://10.101.187.192:5432/keycloak
The full log can be found on GitHub. This is also the place to look at the YAML files that I use to create the deployment and the services.

After some experimenting I found out that using the name postgres in the keycloak-deployment.yaml file

- env:
  - name: DB_ADDR
    value: postgres

messes things up and results in a strange expansion: for a service named postgres, Kubernetes injects Docker-link-style variables such as POSTGRES_PORT=tcp://10.101.187.192:5432 into the pod, and the Keycloak image picks this up where it expects a plain port number, which produces the mangled JDBC URL above. After replacing this part of the YAML file with:
- env:
  - name: DB_ADDR
    value: postgres-keycloak

the deployment works fine. This also requires changing the postgres-service.yaml file. The new versions of the files are on GitHub.
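For illustration, a minimal sketch of what the renamed Service could look like (the selector label is an assumption; the actual files are in the GitHub repo):

apiVersion: v1
kind: Service
metadata:
  name: postgres-keycloak   # renamed from "postgres" to avoid the POSTGRES_* variable collision
spec:
  selector:
    app: postgres-keycloak  # assumed pod label; must match the labels in the postgres deployment
  ports:
    - port: 5432            # default PostgreSQL port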

Related

How to confirm volumes are configured correctly for a docker service?

We have a docker-compose.yml with multiple services configured. In one of the docker services we have set volumes.
Example:

volumes:
  - ./src/main/resources/db/changelog:/init

We need to execute all the changelog scripts present in the changelog folder, but they are not executing. Can someone pinpoint the issue? What is the use of :/init at the end of the folder path?
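For reference, the compose volume form here is host_path:container_path, so :/init is the path where the host folder appears inside the container. A minimal sketch (the service name and image are assumptions for illustration):

services:
  db:
    image: postgres:13   # assumed image
    volumes:
      # the host folder ./src/main/resources/db/changelog is mounted at /init inside the container
      - ./src/main/resources/db/changelog:/init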

DevSpace tool. How to deploy a dependency in --force-build mode

I use the DevSpace tool to deploy my services into a local minikube cluster.
I have two services to deploy: auth-handler and mysql.
auth-handler has mysql as a dependency in devspace.yaml, so it can't start until mysql has been deployed.
auth-handler

dependencies:
  - source:
      path: ../mysql
    namespace: databases
mysql has an image stage, whose Dockerfile contains the logic to initialize the DB with some data.
images:
  backend:
    image: registry.kube-system.svc.cluster.local/mysql
    tags:
      - local
    dockerfile: ./mysql/Dockerfile
The first time, this works fine. But when I redeploy the services a second time, the image stage for mysql is skipped, because DevSpace caches an image once it has been built successfully. So my DB isn't initialized on redeploy, because the image stage was skipped.
I can deploy mysql manually with -b / --force-build to force the image stage to run, but I don't want to deploy mysql manually. I want to initiate the deployment of auth-handler and have it deploy the mysql dependency in -b / --force-build mode.
Instead of populating your database within the Dockerfile, I would recommend adding a hook in the hooks section of devspace.yaml, which could run devspace enter -c [mysql] -- command-to-populate-db, or alternatively adding an init container to populate the database. This will be a lot more flexible.
For more details on hooks, have a look at the DevSpace docs: https://devspace.sh/cli/docs/configuration/hooks/basics
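A rough sketch of such a hook, assuming a v4/v5-style hooks schema (the container name and seed script are placeholders; check the linked docs for the schema your DevSpace version uses):

hooks:
  - command: "devspace"
    args: ["enter", "-c", "mysql", "--", "/init/seed-db.sh"]   # hypothetical seed script
    when:
      after:
        deployments: all   # runs after deployment even when the image build was cached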

Creating MongoDB through Helm with GitLab fails

I followed the instructions here to create a Mongo instance using Helm. This requires some adaptation, as it's being created through GitLab with gitlab-ci.
Unfortunately, the part about values.yaml gets skimmed over, and I haven't found complete examples for Mongo through Helm. (Many of the examples also seemed deprecated.)
Being unsure how to address the issue, I am using this as a values.yaml file:
global:
  mongodb:
    DBName: 'example'

mongodbUsername: "user"
mongodbPassword: "password"
mongodbDatabase: "database"
mongodbrootPassword: "password"
auth.rootPassword: "password"
The following error is returned:
'auth.rootPassword' must not be empty, please add '--set auth.rootPassword=$MONGODB_ROOT_PASSWORD' to the command. To get the current value:
export MONGODB_ROOT_PASSWORD=$(kubectl get secret --namespace angular-23641052-review-86-sh-mousur review-86-sh-mousur-mongodb -o jsonpath="{.data.mongodb-root-password}" | base64 --decode)
Given that I'm only using Helm as called by gitlab-ci, I am unsure how to pass --set or otherwise set the root password.
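For reference, outside of gitlab-ci the error message corresponds to a plain Helm flag; a sketch assuming the bitnami/mongodb chart and a placeholder release name:

# "my-mongo" is a placeholder release name
helm upgrade --install my-mongo bitnami/mongodb \
  --set auth.rootPassword="$MONGODB_ROOT_PASSWORD"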
From what I could tell, I thought setting variables under env in values.yaml had solved the problem:

env:
  - name: "auth.rootPassword"
    value: "password"

The problem went away, but has returned.
GitLab is moving towards using .gitlab/auto-deploy-values.yaml.
I created a local version of autodeploy.sh and added the value auth.rootPassword="password" to auto-deploy-values.yaml, and the value seems to work; cf. GitLab's documentation.
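A sketch of what that values file could contain; the key is nested because in a Helm values file a dotted name like auth.rootPassword is a literal key, not a path (the hard-coded password is for illustration only; prefer a masked CI/CD variable):

# .gitlab/auto-deploy-values.yaml
auth:
  rootPassword: "password"   # illustration only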

Unable to read config after upgrade to Envoy 1.15

I'm using the Envoy docker image in docker-compose. Docker is running on Ubuntu, which is running in a VM, which is running on Windows 10.
I had been using Envoy 1.14 without any problems. After upgrading the image to 1.15, Envoy doesn't start and I'm getting this error:
unable to read file: /etc/envoy/envoy.yaml
The line before this one says basically the same thing:
[critical][main] [source/server/server.cc:101] error initializing configuration '/etc/envoy/envoy.yaml': unable to read file: /etc/envoy/envoy.yaml
My docker-compose part for Envoy is simple:
envoy:
  image: envoyproxy/envoy:v1.15-latest
  container_name: envoy
  restart: always
  volumes:
    - "~/envoy.yaml:/etc/envoy/envoy.yaml:ro"
If I just change envoyproxy/envoy:v1.15-latest to envoyproxy/envoy:v1.14-latest and do docker-compose down && docker-compose up, everything works fine. Are there any special permissions for the config file now? Or is it something in my upgrade process?
Solved in a GitHub issue: https://github.com/envoyproxy/envoy/issues/12747#issuecomment-677485704
Solution: change the permissions on envoy.yaml (chmod 777 works fine for me). The cause is that the 1.15 image runs Envoy as a non-root user, so the mounted config must be readable by that user; a more restrictive chmod 644 is also enough.
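A minimal sketch of the fix on the host, using the path mounted in the compose file above:

# make the config readable by the non-root envoy user, then recreate the container
chmod 644 ~/envoy.yaml
docker-compose down && docker-compose up -d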

Docker compose - secrets Additional property secrets is not allowed

docker-compose --version
docker-compose version 1.11.1, build 7c5d5e4
I have the secret 'my_secret_data' added to my swarm cluster.
The start of my compose file looks like:
version: "3.1"

secrets:
  my_secret_data:
    external: true

services:
  master:
    image: jenkins-master
    secrets:
      - my_secret_data
    ports:
      - "8080:8080"
      - "50000:50000"
docker stack deploy continually gives the error:

secrets Additional property secrets is not allowed

I have followed "How do you manage secret values with docker-compose v3.1?" to the letter, as far as I can tell, and have the correct versions installed, but keep getting the above error. Any help greatly appreciated.
Change the compose file version to the latest version.
In short, version '3' does not resolve to the latest '3.x' version. Find the latest version here: https://docs.docker.com/compose/compose-file/#compose-and-docker-compatibility-matrix
The "Additional property secrets is not allowed" error can be caused either by:
running Docker Engine < 1.13.1, or
using a compose file version number < '3.1' in a docker-compose file such as docker-compose.yml or docker-cloud.yml
If you are experiencing this problem confirm that both are correct.
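A quick way to check both on the machine running the stack:

docker version --format '{{.Server.Version}}'   # Engine must be >= 1.13.1
docker-compose --version                        # and the compose file must declare version "3.1" or later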
This also applies to other Docker interfaces and tools.
For example, in Portainer, YAML with secrets lines pasted into the Create Stack dialog should begin with the line version: '3.1', or you will encounter the same error, even with an up-to-date Docker Engine 1.13.1+.
In my case, Service: had an extra tab before it. The moment I removed the tab, it worked. (YAML does not allow tabs for indentation.)