Running a Keycloak Cluster in GKE

I'm trying to get Keycloak 20.0.1 running as a cluster in GKE.
The deployment itself is not a problem, but figuring out how the cluster cache works is a pain.
The deployment runs against a Cloud SQL (MySQL 5.7) instance.
The question is: should I use cache-stack=kubernetes or cache-stack=google, or can I use UDP or TCP?
If I should use the kubernetes cache stack, how do I configure the headless service it needs?
Hopefully someone is already running Keycloak as a cluster in GKE and is willing to share some knowledge, and maybe the YAML files for the deployment.
I have tried all the different cache-stack options; the result is the following in the logs:
[org.jgroups.protocols.pbcast.GMS] (keycloak-cache-init) keycloak: no members discovered after 2008 ms: creating cluster as coordinator
"

I found the answer.
The JGroups parameter described in https://www.keycloak.org/server/caching, -Djgroups.dns.query=, needs to be set to the name of the headless ClusterIP service in the deployment YAML file; in my case ${GKE_NAME}-jgroups-ping.
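A quick way to sanity-check the discovery (a sketch, assuming the default namespace): DNS_PING only works if the headless service resolves to one A record per Keycloak pod, which you can verify from a throwaway debug pod.
# Hypothetical check: each Keycloak pod should appear as an A record
# behind the headless service named by -Djgroups.dns.query.
kubectl run dnsutils --rm -it --restart=Never \
  --image=registry.k8s.io/e2e-test-images/jessie-dnsutils:1.3 \
  -- nslookup ${GKE_NAME}-jgroups-ping.default.svc.cluster.local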
Here are the YAML files and Dockerfile for a cluster deployment of Keycloak in GKE.
I'm using Bitbucket Pipelines.
deployment.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
  labels:
    io.kompose.service: ${GKE_NAME}
  name: ${GKE_NAME}
  namespace: default
spec:
  replicas: 4
  selector:
    matchLabels:
      app: ${GKE_NAME}
  template:
    metadata:
      labels:
        app: ${GKE_NAME}
    spec:
      containers:
        - image: $IMAGE_NAME
          imagePullPolicy: Always
          args: ["start", "--optimized", "-Djgroups.dns.query=${GKE_NAME}-jgroups-ping"]
          name: ${GKE_NAME}
          ports:
            - containerPort: 8080
              protocol: TCP
            - containerPort: 4444
            - containerPort: 8888
          envFrom:
            - configMapRef:
                name: ${GKE_NAME}
          readinessProbe:
            httpGet:
              path: /realms/master
              port: 8080
---
apiVersion: v1
kind: Service
metadata:
  labels:
    service: ${GKE_NAME}
  name: ${GKE_NAME}-service
  namespace: default
spec:
  selector:
    app: ${GKE_NAME}
  ports:
    - name: http
      protocol: TCP
      port: 8080
      targetPort: 8080
  type: LoadBalancer
  externalTrafficPolicy: Cluster
---
apiVersion: v1
kind: Service
metadata:
  labels:
    service: ${GKE_NAME}
  name: ${GKE_NAME}-jgroups-ping
spec:
  clusterIP: None
  ports:
    - port: 4444
      name: ping
      protocol: TCP
      targetPort: 4444
  selector:
    app: ${GKE_NAME}
  sessionAffinity: None
  type: ClusterIP
---
apiVersion: networking.gke.io/v1
kind: ManagedCertificate
metadata:
  name: ${GKE_NAME}
  namespace: default
spec:
  domains:
    - $DOMAIN
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    networking.gke.io/managed-certificates: "${GKE_NAME}"
    kubernetes.io/ingress.allow-http: "false"
    kubernetes.io/ingress.global-static-ip-name: "${GKE_STATIC_IP_NAME}"
  name: ${GKE_NAME}-ingress
  namespace: default
spec:
  defaultBackend:
    service:
      name: ${GKE_NAME}-service
      port:
        number: 8080
Dockerfile
# Dockerfile
FROM quay.io/keycloak/keycloak:20.0.1
ENV KC_HEALTH_ENABLED=true
ENV KC_METRICS_ENABLED=true
ENV KC_DB=mysql
ENV KC_CACHE=ispn
ENV KC_CACHE_STACK=kubernetes
WORKDIR /opt/keycloak
# Replace with your own certificate if needed
RUN keytool -genkeypair -storepass password -storetype PKCS12 -keyalg RSA -keysize 2048 -dname "CN=server" -alias server -ext "SAN:c=DNS:localhost,IP:127.0.0.1" -keystore conf/server.keystore
RUN bin/kc.sh build
EXPOSE 8080
ENTRYPOINT ["/opt/keycloak/bin/kc.sh"]
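For completeness: the deployment's envFrom points at a ConfigMap named ${GKE_NAME} that is not shown in the post. A minimal sketch of what it might contain for the Cloud SQL connection (the keys are standard Keycloak 20 environment variables; all values are placeholders, and the credentials really belong in a Secret):
apiVersion: v1
kind: ConfigMap
metadata:
  name: ${GKE_NAME}
  namespace: default
data:
  # Hypothetical values - the actual ConfigMap was not part of the post.
  KC_DB_URL: jdbc:mysql://<cloud-sql-ip>:3306/keycloak
  KC_DB_USERNAME: keycloak
  KC_DB_PASSWORD: <use-a-secret-instead>
  KC_HOSTNAME: $DOMAIN
  KC_PROXY: edge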

Related

Azure AKS Application Gateway 502 bad gateway

I have been following the tutorial here:
MS Azure
That works fine. However, when deploying my own local config file, I get a "502 Bad Gateway" error. This config has previously been fine and worked as expected.
Can anyone see anything obvious wrong with it? At this point I don't know where to start.
I am trying to use the ingress controller that is Application Gateway, then add deployments and apply additional ingress rules.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: one-api
  namespace: default
  annotations:
    imageregistry: "gcr.io/google-samples/hello-app:1.0"
spec:
  replicas: 3
  selector:
    matchLabels:
      run: one-api
  template:
    metadata:
      labels:
        run: one-api
    spec:
      containers:
        - image: gcr.io/google-samples/hello-app:1.0
          imagePullPolicy: IfNotPresent
          name: one-api
          ports:
            - containerPort: 80
              protocol: TCP
---
apiVersion: v1
kind: Service
metadata:
  name: one-api
  namespace: default
spec:
  ports:
    - port: 80
      protocol: TCP
      targetPort: 80
  selector:
    run: one-api
  type: NodePort
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: two-api
  namespace: default
  annotations:
    imageregistry: "gcr.io/google-samples/hello-app:1.0"
spec:
  replicas: 3
  selector:
    matchLabels:
      run: two-api
  template:
    metadata:
      labels:
        run: two-api
    spec:
      containers:
        - image: gcr.io/google-samples/hello-app:1.0
          imagePullPolicy: IfNotPresent
          name: two-api
          ports:
            - containerPort: 80
              protocol: TCP
---
apiVersion: v1
kind: Service
metadata:
  name: two-api
  namespace: default
spec:
  ports:
    - port: 80
      protocol: TCP
      targetPort: 80
  selector:
    run: two-api
  type: NodePort
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: strata-2022
  labels:
    app: my-docker-apps
  annotations:
    kubernetes.io/ingress.class: azure/application-gateway
spec:
  rules:
    - http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: one-api
                port:
                  number: 80
          - path: /two-api
            pathType: Prefix
            backend:
              service:
                name: two-api
                port:
                  number: 80
Output of kubectl describe ingress strata-2022:
Name: strata-2022
Labels: app=my-docker-apps
Namespace: default
Address: 51.142.191.83
Ingress Class: <none>
Default backend: <default>
Rules:
Host Path Backends
---- ---- --------
*
/ one-api:80 (10.224.0.15:80,10.224.0.59:80,10.224.0.94:80)
/two-api two-api:80 (10.224.0.13:80,10.224.0.51:80,10.224.0.82:80)
Annotations: kubernetes.io/ingress.class: azure/application-gateway
Events: <none>
Commands used to create AKS using Azure CLI.
az aks create -n myCluster -g david-tutorial --network-plugin azure --enable-managed-identity -a ingress-appgw --appgw-name testApplicationGateway --appgw-subnet-cidr "10.225.0.0/16" --generate-ssh-keys
// Get credentials and switch to this context
az aks get-credentials -n myCluster -g david-tutorial
// This line is from the tutorial -- this works as expected
//kubectl apply -f https://raw.githubusercontent.com/Azure/application-gateway-kubernetes-ingress/master/docs/examples/aspnetapp.yaml
// This is what I ran. It works locally
kubectl apply -f new-deploy.yaml
// Get address
kubectl get ingress
kubectl get configmap
I tried recreating the same setup on my end, and right after running the same az aks create command I could identify the following issue: "All the instances in one or more of your backend pools are unhealthy."
Since this indicated that the backend pools were unreachable, which seemed strange at first, I looked at the logs of one of the pods based on the hello-app image you were using and noticed this right away:
> kubectl logs one-api-77f9b4b9f-6sv6f
2022/08/12 00:22:04 Server listening on port 8080
Hence, my immediate thought was that nothing in the Docker image you are using is configured to listen on port 80, which is the port used in your Kubernetes resource definitions.
After updating your Deployment and Service definitions to use port 8080 instead of 80, everything worked perfectly and I started getting the following response in my browser:
Hello, world!
Version: 1.0.0
Hostname: one-api-d486fbfd7-pm8kt
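If you want to confirm the listening port yourself before editing the manifests, a port-forward straight to the deployment makes it obvious (a sketch; the deployment name is taken from the YAML above):
# Forward local port 8080 to the container port and hit it directly;
# the same container accepts nothing on port 80.
kubectl port-forward deploy/one-api 8080:8080 &
curl http://localhost:8080/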
Below you can find the updated YAML file that I used to successfully deploy all the resources:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: one-api
  namespace: default
  annotations:
    imageregistry: "gcr.io/google-samples/hello-app:1.0"
spec:
  replicas: 3
  selector:
    matchLabels:
      run: one-api
  template:
    metadata:
      labels:
        run: one-api
    spec:
      containers:
        - image: gcr.io/google-samples/hello-app:1.0
          imagePullPolicy: IfNotPresent
          name: one-api
          ports:
            - containerPort: 8080
              protocol: TCP
---
apiVersion: v1
kind: Service
metadata:
  name: one-api
  namespace: default
spec:
  ports:
    - port: 8080
      protocol: TCP
      targetPort: 8080
  selector:
    run: one-api
  type: NodePort
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: two-api
  namespace: default
  annotations:
    imageregistry: "gcr.io/google-samples/hello-app:1.0"
spec:
  replicas: 3
  selector:
    matchLabels:
      run: two-api
  template:
    metadata:
      labels:
        run: two-api
    spec:
      containers:
        - image: gcr.io/google-samples/hello-app:1.0
          imagePullPolicy: IfNotPresent
          name: two-api
          ports:
            - containerPort: 8080
              protocol: TCP
---
apiVersion: v1
kind: Service
metadata:
  name: two-api
  namespace: default
spec:
  ports:
    - port: 8080
      protocol: TCP
      targetPort: 8080
  selector:
    run: two-api
  type: NodePort
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: strata-2022
  labels:
    app: my-docker-apps
  annotations:
    kubernetes.io/ingress.class: azure/application-gateway
spec:
  rules:
    - http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: one-api
                port:
                  number: 8080
          - path: /two-api
            pathType: Prefix
            backend:
              service:
                name: two-api
                port:
                  number: 8080

Why can't I curl endpoint on GCP?

I am working my way through a Kubernetes tutorial using GKE, but it was written with Azure in mind, though it has been working OK so far.
The first part where it has not worked has been the exercises regarding CoreDNS, which I understand does not exist on GKE; it's kube-dns only?
Is this why I can't reach a pod endpoint with:
export PODIP=$(kubectl get endpoints hello-world-clusterip -o jsonpath='{ .subsets[].addresses[].ip}')
and then curl:
curl http://$PODIP:8080
My deployment is definitely on the right port:
ports:
  - containerPort: 8080
And, in fact, the deployment for the tutorial is a Google sample.
Is this to do with CoreDNS, or authorisation/needing a service account? What can I do to make the curl request work?
Deployment yaml is:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-world-customdns
spec:
  replicas: 3
  selector:
    matchLabels:
      app: hello-world-customdns
  template:
    metadata:
      labels:
        app: hello-world-customdns
    spec:
      containers:
        - name: hello-world
          image: gcr.io/google-samples/hello-app:1.0
          ports:
            - containerPort: 8080
      dnsPolicy: "None"
      dnsConfig:
        nameservers:
          - 9.9.9.9
---
apiVersion: v1
kind: Service
metadata:
  name: hello-world-customdns
spec:
  selector:
    app: hello-world-customdns
  ports:
    - port: 80
      protocol: TCP
      targetPort: 8080
To give deeper insight into what Gari commented: when exposing a service outside your cluster, the service must be configured as NodePort or LoadBalancer, since ClusterIP only exposes the Service on a cluster-internal IP, making it reachable only from within the cluster. Cloud Shell is a shell environment for managing resources hosted on Google Cloud, not part of the cluster, which is why you're not getting any response. To change this, update your YAML file as follows:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-world-customdns
spec:
  replicas: 3
  selector:
    matchLabels:
      app: hello-world-customdns
  template:
    metadata:
      labels:
        app: hello-world-customdns
    spec:
      containers:
        - name: hello-world
          image: gcr.io/google-samples/hello-app:1.0
          ports:
            - containerPort: 8080
      dnsPolicy: "None"
      dnsConfig:
        nameservers:
          - 9.9.9.9
---
apiVersion: v1
kind: Service
metadata:
  name: hello-world-customdns
spec:
  selector:
    app: hello-world-customdns
  type: NodePort
  ports:
    - port: 80
      protocol: TCP
      targetPort: 8080
After redeploying your service, you can run kubectl get all -o wide in Cloud Shell to validate that a NodePort-type service has been created with a node port and target port.
To test your deployment, just throw a curl at the external IP of one of your nodes, including the node port that was assigned. The command should look something like:
curl <node_IP_address>:<node_port>
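A sketch of how to collect those two values, assuming the service name from the manifest above (on GKE you may also need a firewall rule before the node port is reachable from outside):
# Node external IPs are in the EXTERNAL-IP column:
kubectl get nodes -o wide
# The node port auto-assigned to the service (in the 30000-32767 range):
kubectl get svc hello-world-customdns -o jsonpath='{.spec.ports[0].nodePort}'
# Hypothetical GKE firewall rule opening that port; pick your own name/port:
gcloud compute firewall-rules create allow-hello-nodeport --allow tcp:<node_port>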

How to create an HTTPS route to a Service that is listening on HTTPS with Traefik and Kubernetes

I'm a newbie with Kubernetes and Traefik.
I followed this tutorial:
https://docs.traefik.io/user-guides/crd-acme/
and changed it to use my Scala service, which listens over HTTPS on port 9463.
I'm trying to deploy my Scala service with Kubernetes and Traefik.
When I port-forward directly to the service:
kubectl port-forward service/core-service 8001:9463
and perform curl -k 'https://localhost:8001/health',
I get "{Message:Ok}".
But when I port-forward to Traefik:
kubectl port-forward service/traefik 9463:9463 -n default
and perform curl -k 'https://ejemplo.com:9463/tls/health',
I get an "Internal server error".
I guess the problem is that my "core-service" is listening over the HTTPS protocol; that's why I added scheme: https.
I tried to find the solution in the documentation, but it is confusing.
These are my YAML files:
Services.yaml
apiVersion: v1
kind: Service
metadata:
  name: traefik
spec:
  ports:
    - protocol: TCP
      name: admin
      port: 8080
    - protocol: TCP
      name: websecure
      port: 9463
  selector:
    app: traefik
---
apiVersion: v1
kind: Service
metadata:
  name: core-service
spec:
  ports:
    - protocol: TCP
      name: websecure
      port: 9463
  selector:
    app: core-service
Deployment.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  namespace: default
  name: traefik-ingress-controller
---
kind: Deployment
apiVersion: apps/v1
metadata:
  namespace: default
  name: traefik
  labels:
    app: traefik
spec:
  replicas: 1
  selector:
    matchLabels:
      app: traefik
  template:
    metadata:
      labels:
        app: traefik
    spec:
      serviceAccountName: traefik-ingress-controller
      containers:
        - name: traefik
          image: traefik:v2.0
          args:
            - --api.insecure
            - --accesslog
            - --entrypoints.websecure.Address=:9463
            - --providers.kubernetescrd
            - --certificatesresolvers.default.acme.tlschallenge
            - --certificatesresolvers.default.acme.email=foo@you.com
            - --certificatesresolvers.default.acme.storage=acme.json
            # Please note that this is the staging Let's Encrypt server.
            # Once you get things working, you should remove that whole line altogether.
            - --certificatesresolvers.default.acme.caserver=https://acme-staging-v02.api.letsencrypt.org/directory
          ports:
            - name: websecure
              containerPort: 9463
            - name: admin
              containerPort: 8080
---
kind: Deployment
apiVersion: apps/v1
metadata:
  namespace: default
  name: core-service
  labels:
    app: core-service
spec:
  replicas: 1
  selector:
    matchLabels:
      app: core-service
  template:
    metadata:
      labels:
        app: core-service
    spec:
      containers:
        - name: core-service
          image: core-service:0.1.4-SNAPSHOT
          ports:
            - name: websecure
              containerPort: 9463
          livenessProbe:
            httpGet:
              port: 9463
              scheme: HTTPS
              path: /health
            initialDelaySeconds: 10
IngressRoute2.yaml
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
  name: ingressroutetls
  namespace: default
spec:
  entryPoints:
    - websecure
  routes:
    - match: Host(`ejemplo.com`) && PathPrefix(`/tls`)
      kind: Rule
      services:
        - name: core-service
          port: 9463
          scheme: https
  tls:
    certResolver: default
From the docs
A TLS router will terminate the TLS connection by default. However,
the passthrough option can be specified to set whether the requests
should be forwarded "as is", keeping all data encrypted.
In your case, SSL passthrough needs to be enabled because the pod is expecting HTTPS traffic.
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
  name: ingressroutetls
  namespace: default
spec:
  entryPoints:
    - websecure
  routes:
    - match: Host(`ejemplo.com`) && PathPrefix(`/tls`)
      kind: Rule
      services:
        - name: core-service
          port: 9463
          scheme: https
  tls:
    certResolver: default
    passthrough: true

Enable HTTPS on a local domain with Kubernetes / Traefik Ingress

When I test my Spring Boot app without Docker, I test it with:
https://localhost:8081/points/12345/search
and it works great. I get an error if I use http.
Now I want to deploy it locally with Kubernetes, with the URL https://sge-api.local
When I use http, I get the same error as when I don't use Docker.
But when I use https, I get:
<html><body><h1>404 Not Found</h1></body></html>
Here is my deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: sge-api-local
  name: sge-api-local
  namespace: sge
spec:
  selector:
    matchLabels:
      app: sge-api-local
  replicas: 1
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: sge-api-local
    spec:
      containers:
        - image: sge_api:local
          name: sge-api-local
Here is my ingress:
kind: Ingress
apiVersion: extensions/v1beta1
metadata:
  name: sge-ingress
  namespace: sge
  annotations:
    kubernetes.io/ingress.class: traefik
spec:
  rules:
    - host: sge-api.local
      http:
        paths:
          - backend:
              serviceName: sge-api-local
              servicePort: 8081
  tls:
    - secretName: sge-api-tls-cert
with:
kubectl -n kube-system create secret tls sge-api-tls-cert --key=../certs/privkey.pem --cert=../certs/cert1.pem
Finally, here is my service:
apiVersion: v1
kind: Service
metadata:
  labels:
    app: sge-api-local
  name: sge-api-local
  namespace: sge
spec:
  ports:
    - name: "8081"
      port: 8081
  selector:
    app: sge-api-local
What should I do?
EDIT:
traefik-config.yml:
kind: ConfigMap
apiVersion: v1
metadata:
  name: traefik-config
data:
  traefik.toml: |
    # traefik.toml
    defaultEntryPoints = ["http","https"]
    [entryPoints]
      [entryPoints.http]
      address = ":80"
        [entryPoints.http.redirect]
        entryPoint = "https"
      [entryPoints.https]
      address = ":443"
        [entryPoints.https.tls]
traefik-deployment:
kind: DaemonSet
apiVersion: apps/v1
metadata:
  name: traefik-ingress-controller
  labels:
    k8s-app: traefik-ingress-lb
spec:
  selector:
    matchLabels:
      k8s-app: traefik-ingress-lb
  template:
    metadata:
      labels:
        k8s-app: traefik-ingress-lb
        name: traefik-ingress-lb
    spec:
      serviceAccountName: traefik-ingress-controller
      terminationGracePeriodSeconds: 60
      containers:
        - image: traefik:1.7
          name: traefik-ingress-lb
          ports:
            - name: http
              containerPort: 80
              hostPort: 80
            - name: https
              containerPort: 443
              hostPort: 443
            - name: admin
              containerPort: 8080
              hostPort: 8080
          securityContext:
            capabilities:
              drop:
                - ALL
              add:
                - NET_BIND_SERVICE
          args:
            - --api
            - --kubernetes
            - --logLevel=INFO
traefik-service.yml
kind: Service
apiVersion: v1
metadata:
  name: traefik-ingress-service
spec:
  selector:
    k8s-app: traefik-ingress-lb
  ports:
    - protocol: TCP
      port: 80
      name: web
    - protocol: TCP
      port: 8080
      name: admin
Please make sure that you have enabled TLS. Let's Encrypt is a free TLS Certificate Authority (CA), and you can use it to automatically request and renew Let's Encrypt certificates for public domain names. Make sure that you have created the ConfigMap and followed every step of the Traefik setup: traefik-ingress-controller.
Then you have to specify which hosts the created secret is assigned to, e.g.:
tls:
  - secretName: sge-api-tls-cert
    hosts:
      - sge-api.local
Remember to include the port assigned to the host in the URL.
In your case it should be: https://sge-api.local:8081
When using SSL offloading outside of the cluster, it may be useful to enforce a redirect to HTTPS even when there is no TLS certificate available.
You could also add annotations to the ingress configuration file:
traefik.ingress.kubernetes.io/frontend-entry-points: http,https
traefik.ingress.kubernetes.io/redirect-entry-point: https
to enable a redirect to another entryPoint for that frontend (e.g. HTTPS).
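In context, those annotations would sit next to the existing ingress.class annotation (a sketch based on your ingress):
kind: Ingress
apiVersion: extensions/v1beta1
metadata:
  name: sge-ingress
  namespace: sge
  annotations:
    kubernetes.io/ingress.class: traefik
    traefik.ingress.kubernetes.io/frontend-entry-points: http,https
    traefik.ingress.kubernetes.io/redirect-entry-point: https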
Let me know if it helps.

Istio Gateway With multiple ports | service is responding only on port 80

I configured a gateway for ports 80 and 8083 for the same domain, i.e. example.com. When I create the resources using the config file below, everything comes up and running.
The issue is that I am using 8083 in the Service and VirtualService, but I get a response from the service on port 80, while on 8083 I get a connection timeout.
I'm unable to understand why the service responds on 80 and not 8083. I want to keep both ports in the gateway, but when port 8083 is defined in the Service and ingress, it should respond specifically on 8083.
I would appreciate your feedback on this.
apiVersion: v1
data:
  my.databag.1: need_triage
kind: ConfigMap
metadata:
  name: my-service-env-variables
  namespace: api
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  labels:
    app: my-service
  name: my-service-service-deployment
  namespace: api
spec:
  replicas: 1
  template:
    metadata:
      annotations:
        traffic.sidecar.istio.io/excludeOutboundIPRanges: 0.0.0.0/0
      labels:
        app: my-service-service-deployment
    spec:
      containers:
        - env:
            - name: my.variable
              valueFrom:
                secretKeyRef:
                  key: my_token
                  name: my.variable
          envFrom:
            - configMapRef:
                name: my-service-env-variables
          image: imaagepath:tag
          name: my-service-pod
          ports:
            - containerPort: 8080
              name: mysvcport
          resources:
            limits:
              cpu: 700m
              memory: 1.8Gi
            requests:
              cpu: 500m
              memory: 1.7Gi
---
apiVersion: v1
kind: Service
metadata:
  name: my-service
  namespace: api
spec:
  ports:
    - port: 8083
      protocol: TCP
      targetPort: mysvcport
  selector:
    app: my-service-service-deployment
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: my-service-ingress
  namespace: api
spec:
  gateways:
    - http-gateway
  hosts:
    - my-service.example.com
  http:
    - route:
        - destination:
            host: my-service
            port:
              number: 8083
---
apiVersion: v1
items:
  - apiVersion: networking.istio.io/v1alpha3
    kind: Gateway
    metadata:
      clusterName: ""
      creationTimestamp: 2018-11-07T13:17:00Z
      name: http-gateway
      namespace: api
      resourceVersion: "11778445"
      selfLink: /apis/networking.istio.io/v1alpha3/namespaces/api/gateways/http-gateway
      uid: 694f66a4-e28f-11e8-bc21-0ac9e31187a0
    spec:
      selector:
        istio: ingressgateway
      servers:
        - hosts:
            - '*.example.com'
          port:
            name: http
            number: 80
            protocol: HTTP
        - hosts:
            - '*.example.com'
          port:
            name: tomcat-http
            number: 8083
            protocol: HTTP
kind: List
metadata:
  resourceVersion: ""
  selfLink: ""
Two issues with your configuration:
You have to call your port http-tomcat and not tomcat-http; see the Istio requirements for named ports. A corrected Gateway sketch follows the helm command below.
In order to enable ingress on port 8083, you have to redeploy the istio-ingressgateway service with port 8083 added:
helm template install/kubernetes/helm/istio/ --name istio-ingressgateway \
--namespace istio-system -x charts/gateways/templates/service.yaml \
--set gateways.istio-egressgateway.enabled=false \
--set gateways.istio-ingressgateway.ports[0].port=80 \
--set gateways.istio-ingressgateway.ports[0].name=http \
--set gateways.istio-ingressgateway.ports[1].port=443 \
--set gateways.istio-ingressgateway.ports[1].name=https \
--set gateways.istio-ingressgateway.ports[2].port=8083 \
--set gateways.istio-ingressgateway.ports[2].name=http-tomcat \
| kubectl apply -f -
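For the first point, the fix is only the port name on the Gateway server; a corrected sketch of the Gateway from the question:
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: http-gateway
  namespace: api
spec:
  selector:
    istio: ingressgateway
  servers:
    - hosts:
        - '*.example.com'
      port:
        name: http
        number: 80
        protocol: HTTP
    - hosts:
        - '*.example.com'
      port:
        name: http-tomcat  # renamed from tomcat-http to match the <protocol>-<suffix> convention
        number: 8083
        protocol: HTTP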
Having said that, do you really have to enable ingress access on port 8083? You could instead define a path in the VirtualService for port 80, e.g. /tomcat/*, and direct the incoming traffic from port 80 to your service on port 8083.
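For reference, that alternative could look roughly like this (the /tomcat/ prefix is a hypothetical choice):
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: my-service-ingress
  namespace: api
spec:
  gateways:
    - http-gateway
  hosts:
    - my-service.example.com
  http:
    - match:
        - uri:
            prefix: /tomcat/
      route:
        - destination:
            host: my-service
            port:
              number: 8083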