Unable to log in to highly available Keycloak cluster - Kubernetes

I'm using the Bitnami Helm chart for Keycloak and trying to achieve high availability with 3 Keycloak replicas, using DNS ping.
Chart version: 5.2.8
Image version: 15.1.1-debian-10-r10
Helm repo: https://charts.bitnami.com/bitnami -> bitnami/keycloak
The modified parameters of the values.yaml file are as follows:
global:
image:
  registry: docker.io
  repository: bitnami/keycloak
  tag: 15.1.1-debian-10-r10
  pullPolicy: IfNotPresent
  pullSecrets: []
  debug: true
proxyAddressForwarding: true
serviceDiscovery:
  enabled: true
  protocol: dns.DNS_PING
  properties:
    - dns_query=keycloak.identity.svc.cluster.local
  transportStack: tcp
cache:
  ownersCount: 3
  authOwnersCount: 3
replicaCount: 3
ingress:
  enabled: true
  hostname: my-keycloak.keycloak.example.com
  apiVersion: ""
  ingressClassName: "nginx"
  path: /
  pathType: ImplementationSpecific
  annotations: {}
  tls: false
  extraHosts: []
  extraTls: []
  secrets: []
  existingSecret: ""
  servicePort: http
When logging in to the Keycloak UI, after entering the username and password, the login does not happen; it redirects back to the login page.
From the pod logs I see the following error:
0:07:05,251 WARN [org.keycloak.events] (default task-1) type=CODE_TO_TOKEN_ERROR, realmId=master, clientId=security-admin-console, userId=null, ipAddress=10.244.5.46, error=invalid_code, grant_type=authorization_code, code_id=157e0483-67fa-4ea4-a964-387f3884cbc9, client_auth_method=client-secret
When I checked this error in forums, some suggestions were to set proxyAddressForwarding to true, but even with this the issue remains the same.
Apart from this I have tried some other versions of the Helm chart, but with those the UI itself does not load correctly and shows "page not found" errors.
Update
I get the above error, i.e. CODE_TO_TOKEN_ERROR, in the logs when I use the headless service with the ingress. But if I use the service of type ClusterIP with the ingress, the error is as follows:
06:43:37,587 WARN [org.keycloak.events] (default task-6) type=LOGIN_ERROR, realmId=master, clientId=null, userId=null, ipAddress=10.122.0.26, error=expired_code, restart_after_timeout=true, authSessionParentId=453870cd-5580-495d-8f03-f73498cd3ace, authSessionTabId=1d17vpIoysE
Additional information I would like to post: I see the following INFO message in all the Keycloak pod logs at startup.
05:27:10,437 INFO [org.jgroups.protocols.pbcast.GMS] (ServerService Thread Pool -- 58) my-keycloak-0: no members discovered after 3006 ms: creating cluster as coordinator
This sounds like the 3 members have not joined together to form a Keycloak cluster.

One common scenario that may lead to such a situation is when the node that issued the access code is not the one that receives the code-to-token request. The client gets the access code from node 1, but the second request reaches node 2 and the value is not yet in that node's cache. The safest approach to prevent this scenario is to set up a sticky-session load balancer.
I suggest you try setting service.spec.sessionAffinity to ClientIP. Its default value is None.
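A minimal sketch of what that could look like on the Service the ingress forwards to (the service name, namespace and selector below are assumptions; adapt them to what the chart actually creates, or patch the existing Service instead):

apiVersion: v1
kind: Service
metadata:
  name: my-keycloak            # assumed release service name
  namespace: identity
spec:
  type: ClusterIP
  sessionAffinity: ClientIP    # default is None
  sessionAffinityConfig:
    clientIP:
      timeoutSeconds: 10800    # optional, defaults to 3 hours
  selector:
    app.kubernetes.io/name: keycloak
  ports:
    - name: http
      port: 80
      targetPort: http

Keep in mind that ClientIP affinity only helps if the traffic reaching the Service still carries distinct client IPs; with an nginx ingress in front you may need cookie-based sticky sessions on the ingress instead.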

This part of the error
expired_code
might indicate a mismatch in timekeeping between the server and the client

Related

cetic-nifi Invalid host header issue

Helm version: v3.5.2
Kubernetes version: v1.20.4
nifi chart version: latest (1.0.2)
Issue: [cetic/nifi]-issue
I'm trying to connect to the NiFi UI deployed in Kubernetes.
I have set the following properties in the values.yaml:
properties:
  # use externalSecure for when inbound SSL is provided by nginx-ingress or other external mechanism
  sensitiveKey: changeMechangeMe # Must have a key of at least 12 characters
  algorithm: NIFI_PBKDF2_AES_GCM_256
  externalSecure: false
  isNode: false
  httpsPort: 8443
  webProxyHost: 10.0.39.39:30666
  clusterPort: 6007

# ui service
service:
  type: NodePort
  httpsPort: 8443
  nodePort: 30666
  annotations: {}
  # loadBalancerIP:
  ## Load Balancer sources
  ## https://kubernetes.io/docs/tasks/access-application-cluster/configure-cloud-provider-firewall/#restrict-access-for-loadbalancer-service
  ##
  # loadBalancerSourceRanges:
  #   - 10.10.10.0/24
  ## OIDC authentication requires "sticky" session on the LoadBalancer for JWT to work properly...but AWS doesn't like it on creation
  # sessionAffinity: ClientIP
  # sessionAffinityConfig:
  #   clientIP:
  #     timeoutSeconds: 10800
10.0.39.39 is the Kubernetes master node's internal IP.
When NiFi gets started I get the following:
WARNING: Kubernetes configuration file is group-readable. This is insecure. Location: /home/k8sadmin/.kube/config
WARNING: Kubernetes configuration file is world-readable. This is insecure. Location: /home/k8sadmin/.kube/config
NAME: nifi
LAST DEPLOYED: Thu Nov 25 12:38:00 2021
NAMESPACE: jeed-cluster
STATUS: deployed
REVISION: 1
NOTES:
Cluster endpoint IP address will be available at:
kubectl get svc nifi -n jeed-cluster -o jsonpath='{.status.loadBalancer.ingress[*].ip}'
Cluster endpoint domain name is: 10.0.39.39:30666 - please update your DNS or /etc/hosts accordingly!
Once you are done, your NiFi instance will be available at:
https://10.0.39.39:30666/nifi
and when I do a curl:
curl https://10.0.39.39:30666 put sample.txt -k
<h1>System Error</h1>
<h2>The request contained an invalid host header [<code>10.0.39.39:30666</
the request [<code>/</code>]. Check for request manipulation or third-part
t.</h2>
<h3>Valid host headers are [<code>empty
<ul><li>127.0.0.1</li>
<li>127.0.0.1:8443</li>
<li>localhost</li>
<li>localhost:8443</li>
<li>[::1]</li>
<li>[::1]:8443</li>
<li>nifi-0.nifi-headless.jeed-cluste
<li>nifi-0.nifi-headless.jeed-cluste
<li>10.42.0.8</li>
<li>10.42.0.8:8443</li>
<li>0.0.0.0</li>
<li>0.0.0.0:8443</li>
</ul>
Tried a lot of things but still cannot whitelist the master node IP in the proxy hosts.
Ingress is not used.
Edit: it looks like the properties set in values.yaml are not set in nifi.properties inside the pod. Is there any reason for this?
Appreciate help!
As a NodePort service, you can also assign a port number from the range 30000-32767.
You can apply the values when you install your chart with:
properties:
  webProxyHost: localhost
  httpsPort:
This should let NiFi whitelist your https://localhost:<httpsPort>.
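If you access the UI through the node IP rather than localhost, a sketch of the relevant values (reusing the IP and NodePort from the question; the keys follow the cetic/nifi values structure shown above) would be:

properties:
  httpsPort: 8443
  webProxyHost: 10.0.39.39:30666   # must match the host:port you type into the browser
service:
  type: NodePort
  httpsPort: 8443
  nodePort: 30666

After changing the values, re-deploy the release (for example with helm upgrade -f values.yaml) and check inside the pod that nifi.web.proxy.host in nifi.properties now contains the expected entry.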

How to set a service port for ingresses in helmfile?

I'm new to Kubernetes and Helm and want to create an SSO setup with OIDC using vouch-proxy.
I found a tutorial which explains how to do it and was able to write some helmfiles that were accepted by Kubernetes.
I added the ingress configuration to the values.yaml that I load in my helmfile.yaml.
helmfile.yaml
bases:
  - environments.yaml
---
releases:
  - name: "vouch"
    chart: "halkeye/vouch"
    version: {{ .Environment.Values.version }}
    namespace: {{ .Environment.Values.namespace }}
    values:
      - values.yaml
values.yaml
# vouch config
# bare minimum to get vouch running with OpenID Connect (such as okta)
config:
  vouch:
    some:
      other:
        values:

# important part
ingress:
  enabled: true
  hosts:
    - "vouch.minikube"
  paths:
    - /
With this configuration, helmfile creates an Ingress for the correct host, but when I open the URL in my browser it returns a 404 Not Found, which makes sense since I didn't specify the correct port (9090).
I tried some notations to add the port, but they led to either helmfile not updating the pod or 500 Internal Server Errors.
How can I add a port in the configuration? And is that the "correct" way to do it? Or should ingresses still be handled with kubectl?

SSL handshake failed in Kafka broker

I have a Kafka cluster in Kubernetes created using Strimzi.
apiVersion: kafka.strimzi.io/v1beta1
kind: Kafka
metadata:
  name: {{ .Values.cluster.kafka.name }}
spec:
  kafka:
    version: 2.7.0
    replicas: 3
    storage:
      deleteClaim: true
      size: {{ .Values.cluster.kafka.storagesize }}
      type: persistent-claim
    rack:
      topologyKey: failure-domain.beta.kubernetes.io/zone
    template:
      pod:
        metadata:
          annotations:
            prometheus.io/scrape: 'true'
            prometheus.io/port: '9404'
    listeners:
      - name: plain
        port: 9092
        type: internal
        tls: false
      - name: tls
        port: 9093
        type: internal
        tls: true
        authentication:
          type: tls
      - name: external
        port: 9094
        type: loadbalancer
        tls: true
        authentication:
          type: tls
        configuration:
          bootstrap:
            loadBalancerIP: {{ .Values.cluster.kafka.bootstrapipaddress }}
          brokers:
            {{- range $key, $value := (split "," .Values.cluster.kafka.brokersipaddress) }}
            - broker: {{ (split "=" .)._0 }}
              loadBalancerIP: {{ (split "=" .)._1 | quote }}
            {{- end }}
    authorization:
      type: simple
The cluster is created and up; I am able to create topics and produce/consume to/from a topic.
The issue is that if I exec into one of the Kafka broker pods, I see intermittent errors:
INFO [SocketServer brokerId=0] Failed authentication with /10.240.0.35 (SSL handshake failed) (org.apache.kafka.common.network.Selector) [data-plane-kafka-network-thread-0-ListenerName(EXTERNAL-9094)-SSL-9]
INFO [SocketServer brokerId=0] Failed authentication with /10.240.0.159 (SSL handshake failed) (org.apache.kafka.common.network.Selector) [data-plane-kafka-network-thread-0-ListenerName(EXTERNAL-9094)-SSL-11]
INFO [SocketServer brokerId=0] Failed authentication with /10.240.0.4 (SSL handshake failed) (org.apache.kafka.common.network.Selector) [data-plane-kafka-network-thread-0-ListenerName(EXTERNAL-9094)-SSL-10]
INFO [SocketServer brokerId=0] Failed authentication with /10.240.0.128 (SSL handshake failed) (org.apache.kafka.common.network.Selector) [data-plane-kafka-network-thread-0-ListenerName(EXTERNAL-9094)-SSL-1]
After inspecting these IPs [10.240.0.35, 10.240.0.159, 10.240.0.4, 10.240.0.128] I figured out that they all belong to pods from the kube-system namespace which are implicitly created as part of the Kafka cluster deployment.
Any idea what can be wrong?
I do not think this is necessarily wrong. You seem to have some application somewhere trying to connect to the broker without properly configured TLS. But as the connection is forwarded, the IP probably gets masked, so it does not show the real external IP anymore. These can be all kinds of things, from misconfigured clients up to some health checks just trying to open a TCP connection (depending on your environment, the load balancer can do that, for example).
Unfortunately, it is a bit hard to find out where they really come from. You can try to trace it through the logs of whoever owns the IP address it came from, as that forwarded it from someone else, etc. You could also try to enable TLS debug in Kafka with the Java system property javax.net.debug=ssl. But that might help only in some cases with misconfigured clients, not with some TCP probes, and it will also make it hard to find the right place in the logs because it will also dump the replication traffic etc., which uses TLS as well.
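If you want to try the TLS debug option, Strimzi lets you pass Java system properties through the Kafka custom resource; a sketch, assuming your Strimzi version supports jvmOptions.javaSystemProperties, could look like this:

spec:
  kafka:
    jvmOptions:
      javaSystemProperties:
        - name: javax.net.debug   # enables the JVM TLS debug output
          value: ssl

The operator will roll the brokers to apply the change, and the output is very verbose, so remove it again once you are done.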

Configure SMTP for SonarQube on Kubernetes Helm Chart

I want to automatically deploy SonarQube on Kubernetes, so the goal is to have everything configured automatically. I successfully created a values.yaml for the Helm chart that installs the LDAP plugin and configures it using our DC. But when configuring email settings like the SMTP host, they seem to be ignored.
I already tried to completely delete the chart and re-install it:
helm delete --purge sonarqube-test
helm install stable/sonarqube --namespace sonarqube-test --name sonarqube-test -f values-test.yaml
Although I set e.g. http.proxyHost to our mail server, it's still empty in the UI after deploying that values.yaml.
The sonarProperties property is documented and it seems to work: other properties, like those for LDAP, were applied, since I can log in using LDAP after updating the values.
I'm not sure if this is k8s related, since others said it works generally. I went into the container using kubectl exec and looked at the generated sonar.properties file; it seems fine:
$ cat /opt/sonarqube/conf/sonar.properties
email.from=noreply#mydomain.com
email.fromName=SonarQube Test
email.prefix=[SONARQUBE Test]
email.smtp_host.secured=mymailserver.internal
sonar.security.realm=LDAP
sonar.updatecenter.activate=true
sonar.web.javaOpts=-Xmx2048m -Xms128m -XX:+HeapDumpOnOutOfMemoryError -server
There were some more properties for LDAP like Bind user and so on, which I removed.
So why are the email settings not applied after updating the chart, and not even when it got completely deleted and re-deployed?
values.yaml
replicaCount: 1
image:
  tag: 7.9-community
service:
  type: ClusterIP
  port: 80
ingress:
  enabled: true
  hosts:
    - name: sonarqube-test.mycluster.internal
      path: /
  tls:
    - hosts:
        - sonarqube-test.mycluster.internal
persistence:
  enabled: true
  storageClass: nfs-client
  accessMode: ReadWriteOnce
  size: 10Gi
postgresql:
  enabled: true
plugins:
  install:
    - "https://github.com/SonarSource/sonar-ldap/releases/download/2.2-RC3/sonar-ldap-plugin-2.2.0.601.jar"
sonarProperties:
  sonar.web.javaOpts: "-Xmx2048m -Xms128m -XX:+HeapDumpOnOutOfMemoryError -server"
  sonar.security.realm: LDAP
  ldap.url: "..."
  # More ldap config vars ...
  sonar.updatecenter.activate: true
  email.smtp_host.secured: "mymailserver.internal"
  email.fromName: "SonarQube Test"
  email.from: "noreply#mydomain.com"
  email.prefix: "[SONARQUBE Test]"
resources:
  limits:
    cpu: 4000m
    memory: 8096Mi
  requests:
    cpu: 500m
    memory: 3096Mi
You have defined a chart for SonarQube and configured tls in your values.yaml file. Notice that you don't specify a secret name; according to the SonarQube chart definition, your tls section should look like the one below. Remember that you have to create this secret manually in the proper namespace.
The template for configuring tls looks like this:
tls: []
  # Secrets must be manually created in the namespace.
  # - secretName: chart-example-tls
  #   hosts:
  #     - chart-example.local
So in your case this section should look like this:
tls:
  # Secrets must be manually created in the namespace.
  - secretName: your-secret-name
    hosts:
      - sonarqube-test.mycluster.internal
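For reference, a minimal sketch of such a TLS secret (the name, namespace and certificate data are placeholders you must replace; it can also be created with kubectl create secret tls):

apiVersion: v1
kind: Secret
metadata:
  name: your-secret-name          # must match secretName above
  namespace: sonarqube-test       # namespace the chart is installed into
type: kubernetes.io/tls
data:
  tls.crt: <base64-encoded certificate>
  tls.key: <base64-encoded private key>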
At the same time, when configuring the postgresql dependency you didn't specify the user, database, password and port for PostgreSQL, which you should do because you chose to use this database instead of MySQL.
Here is the template:
database:
  type: "postgresql"

## Configuration values for postgresql dependency
## ref: https://github.com/kubernetes/charts/blob/master/stable/postgresql/README.md
postgresql:
  # Enable to deploy the PostgreSQL chart
  enabled: true
  # To use an external PostgreSQL instance, set enabled to false and uncomment
  # the line below:
  # postgresServer: ""
  # To use an external secret for the password for an external PostgreSQL
  # instance, set enabled to false and provide the name of the secret on the
  # line below:
  # postgresPasswordSecret: ""
  postgresUser: "sonarUser"
  postgresPassword: "sonarPass"
  postgresDatabase: "sonarDB"
  # Specify the TCP port that PostgreSQL should use
  service:
    port: 5432
The most common cause of SMTP failures is a wrong outbound mail configuration.
You have to introduce the following parameters in the SMTP configuration:
SMTP Host
SMTP Port
SMTP Username
SMTP Password
SMTP Encryption
Check that these parameters are those provided by your mail provider.
Check that you have followed the “Configure outbound mail settings” section in the application documentation page.
In your case you didn't specify the password, user name and port.
Add the following entries to your sonar.properties definition:
email.smtp_port.secured=port-name
email.smtp_secure_connection.secured=true
email.smtp_username.secured=your-username
email.smtp_password.secured=your-password
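Since you configure everything through the chart, these entries would go into the sonarProperties map of your values.yaml; a sketch with placeholder values (the port and credentials are whatever your mail provider gives you):

sonarProperties:
  email.smtp_host.secured: "mymailserver.internal"
  email.smtp_port.secured: "587"                  # placeholder, use your provider's port
  email.smtp_secure_connection.secured: "true"
  email.smtp_username.secured: "your-username"
  email.smtp_password.secured: "your-password"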
Next thing:
Make sure that your cloud environment allows traffic on SMTP ports.
To avoid massive SPAM attacks, several clouds do not allow SMTP traffic on their default ports.
Google Cloud Platform does not allow SMTP traffic through the default ports 25, 465 or 587.
GoDaddy also blocks SMTP traffic.
Here is the troubleshooting documentation for SMTP issues: SMTP-issues.
Make sure that you are not hitting one of these restrictions.
Please let me know if this helps.

kubernetes dashboard will not load

I am completely new to Kubernetes, so go easy on me.
I am running kubectl proxy but am only seeing the JSON output. Based on this discussion I attempted to set the memory limits by running:
kubectl edit deployment kubernetes-dashboard --namespace kube-system
I then changed the container memory limit:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  ...
spec:
  ...
  template:
    metadata:
      ...
    spec:
      containers:
        - image: k8s.gcr.io/kubernetes-dashboard-amd64:v1.8.1
          imagePullPolicy: IfNotPresent
          livenessProbe:
            ...
          name: kubernetes-dashboard
          ports:
            - containerPort: 9090
              protocol: TCP
          resources:
            limits:
              memory: 1Gi
I still only get the JSON served when I save that and visit http://127.0.0.1:8001/ui
Running kubectl logs --namespace kube-system kubernetes-dashboard-665756d87d-jssd8 I see the following:
Starting overwatch
Using in-cluster config to connect to apiserver
Using service account token for csrf signing
No request provided. Skipping authorization
Successful initial request to the apiserver, version: v1.10.0
Generating JWE encryption key
New synchronizer has been registered: kubernetes-dashboard-key-holder-kube-system. Starting
Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system
Initializing JWE encryption key from synchronized object
Creating in-cluster Heapster client
Metric client health check failed: the server could not find the requested resource (get services heapster). Retrying in 30 seconds.
Serving insecurely on HTTP port: 9090
I read through a bunch of links from a Google search on the error but nothing really worked.
Key components are:
Local: Ubuntu 18.04 LTS
minikube: v0.28.0
Kubernetes Dashboard: 1.8.3
Installed via:
kubectl create -f https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/recommended/kubernetes-dashboard.yaml
Halp!
Have you considered using the minikube dashboard? You can reach it by:
minikube dashboard
Also, you will get JSON at http://127.0.0.1:8001/ui because that URL is deprecated, so you have to use the full proxy URL as stated on the dashboard GitHub page.
If you still want to use this 'external' dashboard for some future non-minikube-related projects, or there is some other reason I don't know about, you can reach it by:
kubectl proxy
and then:
http://localhost:8001/api/v1/namespaces/kube-system/services/http:kubernetes-dashboard:/proxy/
Note that in the documentation it is https, which is not correct in this case (it might be a documentation error, or it might be clarified in the part of the documentation which I suggest you read if you need further information on the web UI).
Hope this helps.