How to apply the imported realm configuration file of keycloak when deploying on k8s - kubernetes

My file directory looks like below:
deployment.yaml
config.yaml
import/
  realm.json
This is the deployment.yaml file that I used based on the suggestion from Harsh Manvar:
apiVersion: v1
kind: Service
metadata:
  name: keycloak
  labels:
    app: keycloak
spec:
  selector:
    app: keycloak
  type: NodePort
  ports:
    - port: 8080
      targetPort: http
      protocol: TCP
      name: http
      nodePort: 32488
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: keycloak
  labels:
    app: keycloak
spec:
  replicas: 1
  selector:
    matchLabels:
      app: keycloak
  template:
    metadata:
      labels:
        app: keycloak
    spec:
      containers:
        - name: keycloak
          image: quay.io/keycloak/keycloak:17.0.1
          args:
            - "start-dev"
            - "--import-realm"
          env:
            - name: KEYCLOAK_ADMIN
              valueFrom:
                secretKeyRef:
                  name: mysecret
                  key: username
            - name: KEYCLOAK_ADMIN_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: mysecret
                  key: password
            - name: KC_PROXY
              value: "edge"
          volumeMounts:
            - name: keycloak-volume
              mountPath: "/import/realm.json"
              readOnly: true
              subPath: "realm.json"
          ports:
            - name: http
              containerPort: 8080
          readinessProbe:
            httpGet:
              path: /realms/master
              port: 8080
            initialDelaySeconds: 120
      volumes:
        - name: keycloak-volume
          configMap:
            name: keycloak-configmap
And my config.yaml looks like this ({json_content} is where I copy-paste the content of the realm JSON file to be imported):
apiVersion: v1
data:
  realm.json: |
    {json_content}
kind: ConfigMap
metadata:
  name: keycloak-configmap
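As a side note, the same ConfigMap can also be generated straight from the file instead of pasting the JSON by hand; a minimal sketch, assuming the realm file sits at import/realm.json:

kubectl create configmap keycloak-configmap --from-file=realm.json=import/realm.json --dry-run=client -o yaml > config.yaml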
But when I accessed the Keycloak dashboard's web GUI, the imported realm did not show up.
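One way to narrow this down is to check whether the ConfigMap content actually reaches the container; a minimal sketch, using the Deployment name from above:

kubectl exec deploy/keycloak -- ls -l /import
kubectl exec deploy/keycloak -- head -c 200 /import/realm.json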

Try once with:
- mountPath: "/import/realm.json"
  name: "keycloak-volume"
  readOnly: true
  subPath: "realm.json"
On older versions (I think the WildFly-based ones) it was possible to import a Keycloak realm using environment variables, but that is no longer supported: https://github.com/keycloak/keycloak/issues/10216
Also, --import-realm is supported in version 18, while you are using 17.
Still, with 17 you can give it a try by passing the argument to the deployment config (see the official import doc):
args:
  - "start-dev"
  - "--import-realm"
Also, if you check the thread, some suggest using the variable KEYCLOAK_REALM_IMPORT.
I also came across this blog, which points to the legacy option for importing the realm; do check it out: http://www.mastertheboss.com/keycloak/keycloak-with-docker/
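For what it's worth, on the Quarkus-based distribution the --import-realm option looks for realm files under /opt/keycloak/data/import, so a minimal sketch of the mount (reusing the ConfigMap above, and assuming Keycloak 18+ where the option is supported) would be:

volumeMounts:
  - name: keycloak-volume
    mountPath: /opt/keycloak/data/import   # default import directory scanned by --import-realm
    readOnly: true
volumes:
  - name: keycloak-volume
    configMap:
      name: keycloak-configmap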

Related

Running a drupal site using Minikube

I'm learning Kubernetes by using Minikube and creating a Drupal site.
I was able to minikube service my Drupal site and reach the "Set up Database" page, but that's about it. It keeps telling me I need to enter the correct info. I checked my MySQL pod and was able to exec into it and log into MySQL.
I'm not sure what I'm missing. Is the MySQL pod not connected to my Drupal pod?
Here's my drupal-mysql.yaml file:
---
apiVersion: v1
kind: Service
metadata:
  name: drupal-mysql-service
spec:
  ports:
    - name: mysql
      port: 3306
      targetPort: 3306
      protocol: TCP
  selector:
    app: drupal
  type: ClusterIP
---
apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2
kind: Deployment
metadata:
  name: drupal-mysql
spec:
  selector:
    matchLabels:
      app: mysql
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
        - image: mysql:5.7
          name: mysql
          env:
            # Use secret in real usage
            - name: MYSQL_ROOT_PASSWORD
              value: seto_password
            - name: MYSQL_DATABASE
              value: drupal_databases
          ports:
            - containerPort: 3306
              name: mysql
              protocol: TCP
          volumeMounts:
            - name: vol-drupal
              mountPath: /var/lib/mysql
              subPath: 'mysql'
      volumes:
        - name: vol-drupal
          persistentVolumeClaim:
            claimName: drupal-seto-mysql
Here's my drupal.yaml file:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: drupal
  labels:
    app: drupal
spec:
  selector:
    matchLabels:
      app: drupal
      tier: frontend
  strategy:
    type: Recreate
  replicas: 1
  template:
    metadata:
      labels:
        app: drupal
        tier: frontend
    spec:
      initContainers:
        - name: init-sites-volume
          image: drupal:8.9.11
          command: ['/bin/bash', '-c']
          args:
            [
              'cp -r /var/www/html/sites/ /data/; chown www-data:www-data /data/ -R',
            ]
          volumeMounts:
            - mountPath: /data
              name: vol-drupal
      containers:
        - image: drupal:8.9.11
          name: drupal
          ports:
            - containerPort: 80
          volumeMounts:
            - mountPath: /var/www/html/modules
              name: vol-drupal
              subPath: modules
            - mountPath: /var/www/html/profiles
              name: vol-drupal
              subPath: profiles
            - mountPath: /var/www/html/sites
              name: vol-drupal
              subPath: sites
            - mountPath: /var/www/html/themes
              name: vol-drupal
              subPath: themes
      volumes:
        - name: vol-drupal
          persistentVolumeClaim:
            claimName: drupal-seto
And Here's my drupal-service.yaml:
apiVersion: v1
kind: Service
metadata:
  name: drupal-service
  labels:
    app: drupal
spec:
  type: NodePort
  ports:
    - name: web
      protocol: TCP
      port: 80
      targetPort: 80
  selector:
    app: drupal
You need to connect between containers using the IP address of the containers, or maybe the pods. But what is the point of running the database in a container? Instead, install MySQL on the host machine. I have done it; check out the guide:
https://www.youtube.com/watch?v=7Y4RJrk-cFw
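Alternatively, a common in-cluster approach is to reach MySQL through its Service DNS name (drupal-mysql-service) from the Drupal setup page rather than a container IP. For that to resolve to the MySQL pod, the Service selector has to match the pod labels; a minimal sketch of that assumption applied to the Service above:

apiVersion: v1
kind: Service
metadata:
  name: drupal-mysql-service
spec:
  ports:
    - name: mysql
      port: 3306
      targetPort: 3306
      protocol: TCP
  selector:
    app: mysql   # matches the labels on the MySQL pod template, unlike app: drupal
  type: ClusterIP

With that in place, drupal-mysql-service (or drupal-mysql-service.default.svc.cluster.local) could be entered as the database host under Drupal's "Advanced options".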

SonarQube + Postgresql Connection refused error in Kubernetes Cluster

sonar-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: sonarqube
spec:
  replicas: 1
  selector:
    matchLabels:
      app: sonarqube
  template:
    metadata:
      labels:
        app: sonarqube
    spec:
      containers:
        - image: 10.150.0.131/devops/sonarqube:1.0
          args:
            - -Dsonar.web.context=/sonar
          name: sonarqube
          env:
            - name: SONARQUBE_JDBC_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: postgres-pwd
                  key: password
            - name: SONARQUBE_JDBC_URL
              value: jdbc:postgresql://sonar-postgres:5432/sonar
          ports:
            - containerPort: 9000
              name: sonarqube
sonar-service.yaml
apiVersion: v1
kind: Service
metadata:
  labels:
    name: sonarqube
  name: sonarqube
spec:
  type: NodePort
  ports:
    - port: 80
      targetPort: 9000
      name: sonarport
  selector:
    name: sonarqube
sonar-postgres-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: sonar-postgres
spec:
  replicas: 1
  selector:
    matchLabels:
      app: sonar-postgres
  template:
    metadata:
      labels:
        app: sonar-postgres
    spec:
      containers:
        - image: 10.150.0.131/devops/postgres:12.1
          name: sonar-postgres
          env:
            - name: POSTGRES_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: postgres-pwd
                  key: password
            - name: POSTGRES_USER
              value: sonar
          ports:
            - containerPort: 5432
              name: postgresport
          volumeMounts:
            # This name must match the volumes.name below.
            - name: data-disk
              mountPath: /var/lib/postgresql/data
      volumes:
        - name: data-disk
          persistentVolumeClaim:
            claimName: claim-postgres
sonar-postgresql-service.yaml
apiVersion: v1
kind: Service
metadata:
  labels:
    name: sonar-postgres
  name: sonar-postgres
spec:
  ports:
    - port: 5432
  selector:
    name: sonar-postgres
Kubernetes version: 1.18.0
Docker version: 19.03
I am having a connection problem between the SonarQube pod and the PostgreSQL pod.
I use the flannel network plugin.
Can you help with the error?
The PostgreSQL pod shows no log output.
Try with:
apiVersion: v1
kind: Service
metadata:
  labels:
    name: sonar-postgres
  name: sonar-postgres
spec:
  ports:
    - port: 5432
  selector:
    app: sonar-postgres
because it looks like your selector is wrong. The same issue exists in sonar-service.yaml: change name to app in the selector and it should work.
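For reference, a sketch of sonar-service.yaml with the same change applied (this assumes the SonarQube pods keep the app: sonarqube label from the Deployment above):

apiVersion: v1
kind: Service
metadata:
  labels:
    name: sonarqube
  name: sonarqube
spec:
  type: NodePort
  ports:
    - port: 80
      targetPort: 9000
      name: sonarport
  selector:
    app: sonarqube   # was name: sonarqube, which matches no pod labels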
If you installed PostgreSQL on a cloud SQL service, you need to open firewall access for the client IP. To validate this, try adding the range 0.0.0.0/0, which allows everything, but specifying the correct SonarQube IP is the best solution.

Kubernetes Endpoint with SSL

Is it possible to use SSL in Endpoint?
I have an Azure Database for MySQL, which requires an SSL certificate for connection. I use the following:
https://www.digicert.com/CACerts/BaltimoreCyberTrustRoot.crt.pem
On Kubernetes I run a NodeJS pod which communicates with MySQL through an Endpoint like this:
kind: Service
apiVersion: v1
metadata:
  name: mysql-remote
spec:
  ports:
    - protocol: TCP
      port: 3306
      targetPort: 3306
---
kind: Endpoints
apiVersion: v1
metadata:
  name: mysql-remote
subsets:
  - addresses:
      - ip: xx.xxx.xxx.xx
    ports:
      - port: 3306
My deployment.yaml file looks like this:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nodejs
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: nodejs
    spec:
      nodeSelector:
        "beta.kubernetes.io/os": linux
      containers:
        - name: nodejs
          image: xxx
          ports:
            - containerPort: 80
              name: nodejs
          env:
            - name: HOST
              value: "mysql-remote"
            - name: USER
              valueFrom:
                secretKeyRef:
                  name: nodejssecret
                  key: username
            - name: PASSWORD
              valueFrom:
                secretKeyRef:
                  name: nodejssecret
                  key: password
First I tried to mount the certificate as a volume into the NodeJS pod. That looks very awkward if I use, for example, 10 different applications.
Is there an easier way to use SSL?
Review the Security on Azure Kubernetes Service (AKS) part of the deploying-a-stateful-application-on-azure-kubernetes-service-aks article.
The idea is to create a secret from the BaltimoreCyberTrustRoot.crt.pem cert and use it in the deployment:
- name: database__connection__ssl
  valueFrom:
    secretKeyRef:
      name: ssl-cert
      key: BaltimoreCyberTrustRoot.crt.pem
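A minimal sketch of creating that secret from the downloaded certificate file, assuming the secret name ssl-cert used above:

kubectl create secret generic ssl-cert \
  --from-file=BaltimoreCyberTrustRoot.crt.pem=./BaltimoreCyberTrustRoot.crt.pem

Each application that needs the certificate can then reference the same secret instead of mounting the file individually.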

Ingress endpoint displays a blank page with response 200 on GKE

Being completely new to Google Cloud, and almost new to Kubernetes, I struggled my whole weekend trying to deploy my app on GKE.
My app consists of a React frontend, a Node.js backend, a PostgreSQL database (connected to the backend with a cloudsql-proxy) and Redis.
I serve the frontend and backend with an Ingress, and everything seems to be working: all my pods are running. The ingress-nginx exposes the endpoint of my app, but when I open it, instead of seeing my app, I see a blank page with a 200 response. And when I do kubectl logs MY_POD, I can see that my React app is running.
Ingress:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: superflix-ingress-service
  namespace: default
  annotations:
    kubernetes.io/ingress.class: nginx
    kubernetes.io/ingress.global-static-ip-name: "web-static-ip"
    nginx.ingress.kubernetes.io/rewrite-target: /
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
spec:
  rules:
    - http:
        paths:
          - path: /*
            backend:
              serviceName: superflix-ui-node-service
              servicePort: 3000
          - path: /graphql/*
            backend:
              serviceName: superflix-backend-node-service
              servicePort: 4000
Here is my backend:
kind: Service
apiVersion: v1
metadata:
  name: superflix-backend-node-service
spec:
  type: NodePort
  selector:
    app: app
  ports:
    - port: 4000
      targetPort: 4000
      # protocol: TCP
      name: http
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: superflix-backend-deployment
  namespace: default
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: app
    spec:
      containers:
        - name: superflix-backend
          image: gcr.io/superflix-project/superflix-server:v6
          ports:
            - containerPort: 4000
          # The following environment variables will contain the database host,
          # user and password to connect to the PostgreSQL instance.
          env:
            - name: REDIS_HOST
              value: superflix-redis.default.svc.cluster.local
            - name: IN_PRODUCTION
              value: "true"
            - name: POSTGRES_DB_HOST
              value: "127.0.0.1"
            - name: POSTGRES_DB_PORT
              value: "5432"
            - name: REDIS_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: redis-env-secrets
                  key: REDIS_PASS
            # [START cloudsql_secrets]
            - name: POSTGRES_DB_USER
              valueFrom:
                secretKeyRef:
                  name: cloudsql-db-credentials
                  key: username
            - name: POSTGRES_DB_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: cloudsql-db-credentials
                  key: password
            # [END cloudsql_secrets]
        # [START proxy_container]
        - name: cloudsql-proxy
          image: gcr.io/cloudsql-docker/gce-proxy:1.11
          command: ["/cloud_sql_proxy",
                    "-instances=superflix-project:europe-west3:superflix-db=tcp:5432",
                    "-credential_file=/secrets/cloudsql/credentials.json"]
          # [START cloudsql_security_context]
          securityContext:
            runAsUser: 2  # non-root user
            allowPrivilegeEscalation: false
          # [END cloudsql_security_context]
          volumeMounts:
            - name: cloudsql-instance-credentials
              mountPath: /secrets/cloudsql
              readOnly: true
        # [END proxy_container]
      # [START volumes]
      volumes:
        - name: cloudsql-instance-credentials
          secret:
            secretName: cloudsql-instance-credentials
      # [END volumes]
And here is my frontend:
kind: Service
apiVersion: v1
metadata:
  name: superflix-ui-node-service
spec:
  type: NodePort
  selector:
    app: app
  ports:
    - port: 3000
      targetPort: 3000
      # protocol: TCP
      name: http
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: superflix-ui-deployment
  namespace: default
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: app
    spec:
      containers:
        - name: superflix-ui
          image: gcr.io/superflix-project/superflix-ui:v4
          ports:
            - containerPort: 3000
          env:
            - name: IN_PRODUCTION
              value: 'true'
            - name: BACKEND_HOST
              value: superflix-backend-node-service
EDIT:
When I look at the Stackdriver logs of my nginx-ingress-controller, I see warnings:
Service "default/superflix-ui" does not have any active Endpoint.
Service "default/superflix-backend" does not have any active Endpoint.
I actually found the issue. I changed the Ingress paths from /* to /, and now it is working perfectly.
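For reference, a sketch of the corrected rules section under that change (assuming the /graphql/* rule is adjusted the same way and listed before the catch-all):

spec:
  rules:
    - http:
        paths:
          - path: /graphql
            backend:
              serviceName: superflix-backend-node-service
              servicePort: 4000
          - path: /
            backend:
              serviceName: superflix-ui-node-service
              servicePort: 3000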

IP Pod to container environment variable

I have an Angular app and some Node containers for the backend. In my deployment file, how can I get the backend container's address so my frontend can connect to it?
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: frontend
    spec:
      containers:
        - name: frontend
          image: container_imaer_backend
          env:
            - name: IP_BACKEND
              value: here_i_need_my_container_ip_pod
          ports:
            - containerPort: 80
              protocol: TCP
I would recommend using the DNS name instead of the IP; there's more info here: https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/
Basically it's http://metadata-name.namespace.svc.cluster.local, so in the case of that deployment it would be http://frontend.default.svc.cluster.local.
It's better this way because the local IP address can change.
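A minimal sketch of what that could look like in the frontend Deployment from the question, assuming the backend is exposed through a Service named, say, backend in the default namespace:

env:
  - name: IP_BACKEND
    value: backend.default.svc.cluster.local  # Service DNS name instead of a pod IP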
You could use Pod field values for environment variables (ref: here). That way you can set the Pod IP in an environment variable.
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: mysql
  name: mysql
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mysql
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
        - name: mysql
          image: mysql:5.6
          imagePullPolicy: IfNotPresent
          env:
            - name: MYSQL_ROOT_PASSWORD
              value: root
            - name: POD_IP
              valueFrom:
                fieldRef:
                  apiVersion: v1
                  fieldPath: status.podIP
          ports:
            - containerPort: 3306
              name: mysql
              protocol: TCP
          volumeMounts:
            - mountPath: /var/lib/mysql
              name: data
      volumes:
        - name: data
          emptyDir: {}