Helm / Kubernetes - Statefulset & Permissions - postgresql

I keep seeing this error:
Events:
FirstSeen LastSeen Count From SubObjectPath Type Reason Message
--------- -------- ----- ---- ------------- -------- ------ -------
12s 2s 12 {statefulset } Warning FailedCreate create Pod pgset-0 in StatefulSet pgset failed error: pods "pgset-0" is forbidden: unable to validate against any security context constraint: [fsGroup: Invalid value: []int64{26}: 26 is not an allowed group]
I've created a ServiceAccount named "pgset-sa" and granted it the cluster-admin role. I've been researching other ways to get this to work (including editing the restricted SCC), but I keep getting the fsGroup error saying 26 is not an allowed group. What am I missing?
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
name: "{{.Values.ContainerName}}"
labels:
name: "{{.Values.ReplicaName}}"
app: "{{.Values.ContainerName}}"
chart: "{{.Chart.Name}}-{{.Chart.Version}}"
annotations:
"helm.sh/created": {{.Release.Time.Seconds | quote }}
spec:
selector:
matchLabels:
app: "{{.Values.ContainerName}}"
serviceName: "{{.Values.ContainerName}}"
replicas: 2
template:
metadata:
labels:
app: "{{.Values.ContainerName}}"
spec:
serviceAccount: "{{.Values.ContainerServiceAccount}}"
securityContext:
fsGroup: 26
terminationGracePeriodSeconds: 10
containers:
- name: {{.Values.ContainerName}}
image: "{{.Values.PostgresImage}}"
ports:
- containerPort: 5432
name: postgres
resources:
requests:
cpu: {{default "100m" .Values.Cpu}}
memory: {{default "100M" .Values.Memory}}
env:
- name: PGHOST
value: /tmp
- name: PG_PRIMARY_USER
value: primaryuser
- name: PG_MODE
value: set
- name: PG_PRIMARY_HOST
value: "{{.Values.PrimaryName}}"
- name: PG_PRIMARY_PORT
value: "5432"
- name: PG_PRIMARY_PASSWORD
value: "{{.Values.PrimaryPassword}}"
- name: PG_USER
value: testuser
- name: PG_PASSWORD
value: "{{.Values.UserPassword}}"
- name: PG_DATABASE
value: userdb
- name: PG_ROOT_PASSWORD
value: "{{.Values.RootPassword}}"
volumeMounts:
- name: pgdata
mountPath: "/pgdata"
readOnly: false
volumes:
- name: pgdata
persistentVolumeClaim:
claimName: {{.Values.PVCName}}

Take a look at the document titled Managing Security Context Constraints.
The service account associated with the StatefulSet must be granted a security context constraint sufficient to allow the pod: in this case, one that either allows exactly fsGroup 26 or allows any fsGroup.
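For reference, here is a minimal sketch of a custom SCC that permits fsGroup 26 and is granted directly to the pgset-sa service account. The SCC name, namespace, and allowed volume list are assumptions, and the apiVersion varies by OpenShift version:
apiVersion: security.openshift.io/v1   # plain "v1" on older OpenShift releases
kind: SecurityContextConstraints
metadata:
  name: pgset-scc                      # hypothetical name
allowPrivilegedContainer: false
runAsUser:
  type: MustRunAsRange
seLinuxContext:
  type: MustRunAs
fsGroup:
  type: MustRunAs
  ranges:
  - min: 26
    max: 26
supplementalGroups:
  type: RunAsAny
volumes:
- configMap
- emptyDir
- persistentVolumeClaim
- secret
users:
- system:serviceaccount:default:pgset-sa   # adjust the namespace to where the chart is installed
Alternatively, an existing SCC whose fsGroup strategy is RunAsAny (for example anyuid) can be granted to the service account with oc adm policy add-scc-to-user anyuid -z pgset-sa, at the cost of being broader than strictly needed.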

Related

configuring keycloak with external postgres database

How do we configure keycloak to use the external postgres (AWS RDS)?
We deployed it in Kubernetes using the Quarkus distro and updated the DB env variables in our deployment.yaml; however, it is still using the local H2 database and not Postgres.
For better understanding, here is the deployment.yaml file we are using:
apiVersion: apps/v1
kind: Deployment
metadata:
annotations:
deployment.kubernetes.io/revision: "5"
kubectl.kubernetes.io/last-applied-configuration: |
creationTimestamp: "2022-06-21T16:47:29Z"
generation: 5
labels:
app: keycloak
name: keycloak
namespace: kc***
resourceVersion: "29233550"
uid: 3634683e-657c-4278-9002-82a3ce64b968
spec:
progressDeadlineSeconds: 600
replicas: 3
revisionHistoryLimit: 10
selector:
matchLabels:
app: keycloak
strategy:
rollingUpdate:
maxSurge: 25%
maxUnavailable: 25%
type: RollingUpdate
template:
metadata:
creationTimestamp: null
labels:
app: keycloak
spec:
containers:
- args:
- start
- --hostname=kc-test.k8.com
- --https-certificate-file=/opt/pem/cert-pem/cert.pem
- --https-certificate-key-file=/opt/pem/key-pem/key.pem
- --log-level=DEBUG
env:
- name: KEYCLOAK_ADMIN
value: ****
- name: KEYCLOAK_ADMIN_PASSWORD
value: *****
- name: PROXY_ADDRESS_FORWARDING
value: "true"
- name: DB_ADDR
value: jdbc:postgresql://database.c**7irl*****.us-east-1.rds.amazonaws.com/database
- name: DB_DATABASE
value: ****
- name: DB_USER
value: postgres
- name: DB_SCHEMA
value: public
- name: DB_VENDOR
value: POSTGRES
- name: JGROUPS_DISCOVERY_PROTOCOL
value: dns.DNS_PING
- name: JGROUPS_DISCOVERY_PROPERTIES
value: dns_query=keycloak
- name: CACHE_OWNERS_COUNT
value: "2"
- name: CACHE_OWNERS_AUTH_SESSIONS_COUNT
value: "2"
image: quay.io/keycloak/keycloak:17.0.0
imagePullPolicy: IfNotPresent
name: keycloak
ports:
- containerPort: 7600
name: jgroups
protocol: TCP
- containerPort: 8080
name: http
protocol: TCP
- containerPort: 8443
name: https
protocol: TCP
readinessProbe:
failureThreshold: 3
httpGet:
path: /realms/master
port: 8443
scheme: HTTPS
initialDelaySeconds: 30
periodSeconds: 10
successThreshold: 1
timeoutSeconds: 30
resources: {}
securityContext:
runAsUser: 0
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
volumeMounts:
- mountPath: /opt/pem/key-pem
name: key-pem
- mountPath: /opt/pem/cert-pem
name: cert-pem
- mountPath: /opt/keycloak/data
name: keydata
dnsPolicy: ClusterFirst
restartPolicy: Always
schedulerName: default-scheduler
securityContext: {}
terminationGracePeriodSeconds: 30
volumes:
- configMap:
defaultMode: 420
name: key-pem
name: key-pem
- configMap:
defaultMode: 420
name: cert-pem
name: cert-pem
- emptyDir: {}
name: keydata
status:
availableReplicas: 3
conditions:
- lastTransitionTime: "2022-06-21T18:02:32Z"
lastUpdateTime: "2022-06-21T18:02:32Z"
message: Deployment has minimum availability.
reason: MinimumReplicasAvailable
status: "True"
type: Available
- lastTransitionTime: "2022-06-21T18:01:53Z"
lastUpdateTime: "2022-06-21T18:16:41Z"
message: ReplicaSet "keycloak-5c84476694" has successfully progressed.
reason: NewReplicaSetAvailable
status: "True"
type: Progressing
observedGeneration: 5
readyReplicas: 3
replicas: 3
updatedReplicas: 3
Is your external DB's Secret also in the same namespace?
If yes, you can use the approach below.
Here a Kubernetes Secret for the external Postgres (AWS RDS), database-secret-name, contains all of the details referenced below.
Using this method, the Deployment fetches those details dynamically from the Secret.
env:
- name: DB_DATABASE
valueFrom:
secretKeyRef:
name: database-secret-name
key: dbname
- name: DB_ADDR
valueFrom:
secretKeyRef:
name: database-secret-name
key: host
- name: DB_PORT
valueFrom:
secretKeyRef:
name: database-secret-name
key: port
- name: DB_PASSWORD
valueFrom:
secretKeyRef:
name: database-secret-name
key: password
- name: DB_USER
valueFrom:
secretKeyRef:
name: database-secret-name
key: user
If your external Postgres Secret is in a different namespace, copy the database Secret into the Keycloak namespace and give it a try.
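For completeness, a sketch of what such a Secret could look like; the name database-secret-name and the keys match the snippet above, and the values are placeholders taken from the question:
apiVersion: v1
kind: Secret
metadata:
  name: database-secret-name
  namespace: kc***                # the Keycloak namespace from the Deployment above
type: Opaque
stringData:                       # stringData avoids manual base64 encoding
  host: database.c**7irl*****.us-east-1.rds.amazonaws.com
  port: "5432"
  dbname: database
  user: postgres
  password: change-me             # placeholder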
DB_ADDR is an env variable for Keycloak versions 16 and below. Use the docs for your Keycloak version: https://www.keycloak.org/server/all-config
Keycloak 17+ has KC_DB_URL:
db-url
The full database JDBC URL.
If not provided, a default URL is set based on the selected database vendor. For instance, if using 'postgres', the default JDBC URL would be 'jdbc:postgresql://localhost/keycloak'.
CLI: --db-url
Env: KC_DB_URL
Of course, also configure the other env variables for your Keycloak version properly.
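As a sketch (not the asker's exact setup), a Quarkus-based Keycloak 17+ Deployment would swap the legacy DB_* variables for the KC_DB* family, for example:
env:
- name: KC_DB
  value: postgres
- name: KC_DB_URL
  value: jdbc:postgresql://database.c**7irl*****.us-east-1.rds.amazonaws.com/database
- name: KC_DB_USERNAME
  value: postgres
- name: KC_DB_PASSWORD
  valueFrom:
    secretKeyRef:
      name: database-secret-name  # hypothetical Secret, as in the answer above
      key: password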

VerneMQ in Kubernetes with persistent volume claim is throwing Forbidden: may not specify more than 1 volume type

Unable to create a VerneMQ pod in an AWS EKS cluster with a persistent volume claim for authentication and SSL. Below is my YAML file:
---
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
name: vernemq-storage
provisioner: kubernetes.io/aws-ebs
parameters:
type: gp2
reclaimPolicy: Retain
mountOptions:
- debug
volumeBindingMode: Immediate
---
apiVersion: v1
kind: PersistentVolume
metadata:
name: verne-aws-pv
spec:
accessModes:
- ReadWriteOnce
awsElasticBlockStore:
fsType: xfs
volumeID: aws://ap-south-1a/vol-xxxxx
capacity:
storage: 1Gi
persistentVolumeReclaimPolicy: Retain
storageClassName: vernemq-storage
volumeMode: Filesystem
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
labels:
app: mysql
name: verne-aws-pvc
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 10Gi
storageClassName: gp2-retain
volumeMode: Filesystem
volumeName: verne-aws-pv
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: vernemq
spec:
replicas: 1
selector:
matchLabels:
app: vernemq
serviceName: vernemq
template:
metadata:
labels:
app: vernemq
spec:
serviceAccountName: vernemq
terminationGracePeriodSeconds: 200
containers:
- name: vernemq
image: vernemq/vernemq:latest
imagePullPolicy: Always
lifecycle:
preStop:
exec:
command:
- /bin/bash
- -c
- /usr/sbin/vmq-admin cluster leave node=VerneMQ#${MY_POD_NAME}.vernemq.${DOCKER_VERNEMQ_KUBERNETES_NAMESPACE}.svc.cluster.local ; sleep 60 ; /usr/sbin/vmq-admin cluster leave node=VerneMQ#${MY_POD_NAME}.vernemq.${DOCKER_VERNEMQ_KUBERNETES_NAMESPACE}.svc.cluster.local -k; sleep 60;
ports:
- containerPort: 1883
name: mqtt
hostPort: 1883
- containerPort: 8883
- containerPort: 4369
name: epmd
- containerPort: 44053
name: vmq
- containerPort: 8888
name: health
- containerPort: 9100
- containerPort: 9101
- containerPort: 9102
- containerPort: 9103
- containerPort: 9104
- containerPort: 9105
- containerPort: 9106
- containerPort: 9107
- containerPort: 9108
- containerPort: 9109
- containerPort: 8888
resources:
limits:
cpu: "2"
memory: 3Gi
requests:
cpu: "1"
memory: 1Gi
env:
- name: DOCKER_VERNEMQ_ACCEPT_EULA
value: "yes"
- name: MY_POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: DOCKER_VERNEMQ_DISCOVERY_KUBERNETES
value: "1"
- name: DOCKER_VERNEMQ_KUBERNETES_APP_LABEL
value: "vernemq"
- name: DOCKER_VERNEMQ_KUBERNETES_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
- name: DOCKER_VERNEMQ_ERLANG__DISTRIBUTION__PORT_RANGE__MINIMUM
value: "9100"
- name: DOCKER_VERNEMQ_ERLANG__DISTRIBUTION__PORT_RANGE__MAXIMUM
value: "9109"
- name: DOCKER_VERNEMQ_ALLOW_ANONYMOUS
value: "on"
- name: DOCKER_VERNEMQ_LISTENER__TCP__DEFAULT
value: "0.0.0.0:1883"
- name: DOCKER_VERNEMQ_VMQ_WEBHOOKS__POOL_timeout
value: "6000"
- name: DOCKER_VERNEMQ_LISTENER__HTTP__DEFAULT
value: "0.0.0.0:8888"
- name: DOCKER_VERNEMQ_LISTENER__MAX_CONNECTIONS
value: "infinity"
- name: DOCKER_VERNEMQ_LISTENER__NR_OF_ACCEPTORS
value: "10000"
- name: DOCKER_VERNEMQ_MAX_INFLIGHT_MESSAGES
value: "0"
- name: DOCKER_VERNEMQ_ALLOW_MULTIPLE_SESSIONS
value: "off"
- name: DOCKER_VERNEMQ_ALLOW_REGISTER_DURING_NETSPLIT
value: "on"
- name: DOCKER_VERNEMQ_ALLOW_PUBLISH_DURING_NETSPLIT
value: "on"
- name: DOCKER_VERNEMQ_ALLOW_SUBSCRIBE_DURING_NETSPLIT
value: "on"
- name: DOCKER_VERNEMQ_ALLOW_UNSUBSCRIBE_DURING_NETSPLIT
value: "on"
- name: DOCKER_VERNEMQ_VMQ_PASSWD__PASSWORD_FILE
value: "/etc/vernemq/vmq.passwd"
- name: DOCKER_VERNEMQ_LISTENER__SSL__DEFAULT
value: "0.0.0.0:8883"
- name: DOCKER_VERNEMQ_LISTENER__SSL__CAFILE
value: "/etc/ssl/ca.crt"
- name: DOCKER_VERNEMQ_LISTENER__SSL__CERTFILE
value: "/etc/ssl/server.crt"
- name: DOCKER_VERNEMQ_LISTENER__SSL__KEYFILE
value: "/etc/ssl/server.key"
volumeMounts:
- mountPath: /etc/ssl
name: vernemq-certifications
readOnly: true
- mountPath: /etc/vernemq-passwd
name: vernemq-passwd
readOnly: true
volumes:
- name: vernemq-certifications
persistentVolumeClaim:
claimName: verne-aws-pvc
secret:
secretName: vernemq-certifications
- name: vernemq-passwd
persistentVolumeClaim:
claimName: verne-aws-pvc
secret:
secretName: vernemq-passwd
---
apiVersion: v1
kind: Service
metadata:
name: vernemq
labels:
app: vernemq
spec:
clusterIP: None
selector:
app: vernemq
ports:
- port: 4369
name: empd
- port: 44053
name: vmq
---
apiVersion: v1
kind: Service
metadata:
name: mqtt
labels:
app: mqtt
spec:
type: LoadBalancer
selector:
app: vernemq
ports:
- name: mqtt
port: 1883
targetPort: 1883
- name: health
port: 8888
targetPort: 8888
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: vernemq
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: endpoint-reader
rules:
- apiGroups: ["", "extensions", "apps"]
resources: ["endpoints", "deployments", "replicasets", "pods", "statefulsets", "persistentvolumeclaims"]
verbs: ["get", "patch", "list", "watch", "delete"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: endpoint-reader
subjects:
- kind: ServiceAccount
name: vernemq
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: endpoint-reader
I created an AWS EBS volume in the same region and subnet as the node group and added it to the PersistentVolume.
The pod is not getting created; instead, when we run kubectl describe statefulset vernemq we get the error below:
Volumes:
vernemq-certifications:
Type: Secret (a volume populated by a Secret)
SecretName: vernemq-certifications
Optional: false
vernemq-passwd:
Type: Secret (a volume populated by a Secret)
SecretName: vernemq-passwd
Optional: false
Volume Claims: <none>
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedCreate 2m2s (x5 over 2m2s) statefulset-controller create Pod vernemq-0 in StatefulSet vernemq failed error: pods "vernemq-0" is forbidden: error looking up service account default/vernemq: serviceaccount "vernemq" not found
Warning FailedCreate 40s (x10 over 2m2s) statefulset-controller create Pod vernemq-0 in StatefulSet vernemq failed error: Pod "vernemq-0" is invalid: [spec.volumes[0].persistentVolumeClaim: Forbidden: may not specify more than 1 volume type, spec.volumes[1].persistentVolumeClaim: Forbidden: may not specify more than 1 volume type, spec.containers[0].volumeMounts[0].name: Not found: "vernemq-certifications", spec.containers[0].volumeMounts[1].name: Not found: "vernemq-passwd"]
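For reference, each entry under volumes: may declare only one volume source; the manifest above combines persistentVolumeClaim and secret in the same entries, which is what the "may not specify more than 1 volume type" message refers to. A sketch of the same volumes split so each has a single source, keeping the names the volumeMounts expect (the extra data volume is hypothetical, only needed if the PVC is still wanted):
volumes:
- name: vernemq-certifications
  secret:
    secretName: vernemq-certifications
- name: vernemq-passwd
  secret:
    secretName: vernemq-passwd
- name: vernemq-data
  persistentVolumeClaim:
    claimName: verne-aws-pvc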

Accessing bitnami/kafka outside the kubernetes cluster

I am currently using the bitnami/kafka image (https://hub.docker.com/r/bitnami/kafka) and deploying it on Kubernetes.
kubernetes master: 1
kubernetes workers: 3
Within the cluster the other applications are able to find Kafka. The problem occurs when trying to access the Kafka container from outside the cluster. From a bit of reading I learned that we need to set the property "advertised.listeners=PLAINTEXT://hostname:port_number" for external Kafka clients.
I am currently referencing "https://github.com/bitnami/charts/tree/master/bitnami/kafka". Inside my values.yaml file I have added
values.yaml
advertisedListeners1: 10.21.0.191
and statefulset.yaml
- name: KAFKA_CFG_ADVERTISED_LISTENERS
value: 'PLAINTEXT://{{ .Values.advertisedListeners }}:9092'
For a single Kafka instance it is working fine.
But for a 3-node Kafka cluster, I changed the configuration as below:
values.yaml
advertisedListeners1: 10.21.0.191
advertisedListeners2: 10.21.0.192
advertisedListeners3: 10.21.0.193
and Statefulset.yaml
- name: KAFKA_CFG_ADVERTISED_LISTENERS
{{- if $MY_POD_NAME := "kafka-0" }}
value: 'PLAINTEXT://{{ .Values.advertisedListeners1 }}:9092'
{{- else if $MY_POD_NAME := "kafka-1" }}
value: 'PLAINTEXT://{{ .Values.advertisedListeners2 }}:9092'
{{- else if $MY_POD_NAME := "kafka-2" }}
value: 'PLAINTEXT://{{ .Values.advertisedListeners3 }}:9092'
{{- end }}
The expected result is that all 3 Kafka instances get the advertised.listeners property set to a worker node's IP address.
example:
kafka-0 --> "PLAINTEXT://10.21.0.191:9092"
kafka-1 --> "PLAINTEXT://10.21.0.192:9092"
kafka-2 --> "PLAINTEXT://10.21.0.193:9092"
Currently only one Kafka pod is up and running, and the other two are going into CrashLoopBackOff state,
showing the following error:
[2019-10-20 13:09:37,753] INFO [LogDirFailureHandler]: Starting (kafka.server.ReplicaManager$LogDirFailureHandler)
[2019-10-20 13:09:37,786] ERROR [KafkaServer id=1002] Fatal error during KafkaServer startup. Prepare to shutdown (kafka.server.KafkaServer)
java.lang.IllegalArgumentException: requirement failed: Configured end points 10.21.0.191:9092 in advertised listeners are already registered by broker 1001
at scala.Predef$.require(Predef.scala:224)
at kafka.server.KafkaServer$$anonfun$createBrokerInfo$2.apply(KafkaServer.scala:399)
at kafka.server.KafkaServer$$anonfun$createBrokerInfo$2.apply(KafkaServer.scala:397)
at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
at kafka.server.KafkaServer.createBrokerInfo(KafkaServer.scala:397)
at kafka.server.KafkaServer.startup(KafkaServer.scala:261)
at kafka.server.KafkaServerStartable.startup(KafkaServerStartable.scala:38)
at kafka.Kafka$.main(Kafka.scala:84)
at kafka.Kafka.main(Kafka.scala)
That means the logic applied in statefulset.yaml is not working.
Can anyone help me resolve this?
Any help would be appreciated.
The output of kubectl get statefulset kafka -o yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
creationTimestamp: "2019-10-29T07:04:12Z"
generation: 1
labels:
app.kubernetes.io/component: kafka
app.kubernetes.io/instance: kafka
app.kubernetes.io/managed-by: Tiller
app.kubernetes.io/name: kafka
helm.sh/chart: kafka-6.0.1
name: kafka
namespace: default
resourceVersion: "12189730"
selfLink: /apis/apps/v1/namespaces/default/statefulsets/kafka
uid: d40cfd5f-46a6-49d0-a9d3-e3a851356063
spec:
podManagementPolicy: Parallel
replicas: 3
revisionHistoryLimit: 10
selector:
matchLabels:
app.kubernetes.io/component: kafka
app.kubernetes.io/instance: kafka
app.kubernetes.io/name: kafka
serviceName: kafka-headless
template:
metadata:
creationTimestamp: null
labels:
app.kubernetes.io/component: kafka
app.kubernetes.io/instance: kafka
app.kubernetes.io/managed-by: Tiller
app.kubernetes.io/name: kafka
helm.sh/chart: kafka-6.0.1
name: kafka
spec:
containers:
- env:
- name: MY_POD_IP
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: status.podIP
- name: MY_POD_NAME
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: metadata.name
- name: KAFKA_CFG_ZOOKEEPER_CONNECT
value: kafka-zookeeper
- name: KAFKA_PORT_NUMBER
value: "9092"
- name: KAFKA_CFG_LISTENERS
value: PLAINTEXT://:$(KAFKA_PORT_NUMBER)
- name: KAFKA_CFG_ADVERTISED_LISTENERS
value: PLAINTEXT://10.21.0.191:9092
- name: ALLOW_PLAINTEXT_LISTENER
value: "yes"
- name: KAFKA_CFG_BROKER_ID
value: "-1"
- name: KAFKA_CFG_DELETE_TOPIC_ENABLE
value: "false"
- name: KAFKA_HEAP_OPTS
value: -Xmx1024m -Xms1024m
- name: KAFKA_CFG_LOG_FLUSH_INTERVAL_MESSAGES
value: "10000"
- name: KAFKA_CFG_LOG_FLUSH_INTERVAL_MS
value: "1000"
- name: KAFKA_CFG_LOG_RETENTION_BYTES
value: "1073741824"
- name: KAFKA_CFG_LOG_RETENTION_CHECK_INTERVALS_MS
value: "300000"
- name: KAFKA_CFG_LOG_RETENTION_HOURS
value: "168"
- name: KAFKA_CFG_LOG_MESSAGE_FORMAT_VERSION
- name: KAFKA_CFG_MESSAGE_MAX_BYTES
value: "1000012"
- name: KAFKA_CFG_LOG_SEGMENT_BYTES
value: "1073741824"
- name: KAFKA_CFG_LOG_DIRS
value: /bitnami/kafka/data
- name: KAFKA_CFG_DEFAULT_REPLICATION_FACTOR
value: "1"
- name: KAFKA_CFG_OFFSETS_TOPIC_REPLICATION_FACTOR
value: "1"
- name: KAFKA_CFG_TRANSACTION_STATE_LOG_REPLICATION_FACTOR
value: "1"
- name: KAFKA_CFG_SSL_ENDPOINT_IDENTIFICATION_ALGORITHM
value: https
- name: KAFKA_CFG_TRANSACTION_STATE_LOG_MIN_ISR
value: "1"
- name: KAFKA_CFG_NUM_IO_THREADS
value: "8"
- name: KAFKA_CFG_NUM_NETWORK_THREADS
value: "3"
- name: KAFKA_CFG_NUM_PARTITIONS
value: "1"
- name: KAFKA_CFG_NUM_RECOVERY_THREADS_PER_DATA_DIR
value: "1"
- name: KAFKA_CFG_SOCKET_RECEIVE_BUFFER_BYTES
value: "102400"
- name: KAFKA_CFG_SOCKET_REQUEST_MAX_BYTES
value: "104857600"
- name: KAFKA_CFG_SOCKET_SEND_BUFFER_BYTES
value: "102400"
- name: KAFKA_CFG_ZOOKEEPER_CONNECTION_TIMEOUT_MS
value: "6000"
image: docker.io/bitnami/kafka:2.3.0-debian-9-r88
imagePullPolicy: IfNotPresent
livenessProbe:
failureThreshold: 2
initialDelaySeconds: 10
periodSeconds: 10
successThreshold: 1
tcpSocket:
port: kafka
timeoutSeconds: 5
name: kafka
ports:
- containerPort: 9092
name: kafka
protocol: TCP
readinessProbe:
failureThreshold: 6
initialDelaySeconds: 5
periodSeconds: 10
successThreshold: 1
tcpSocket:
port: kafka
timeoutSeconds: 5
resources: {}
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
volumeMounts:
- mountPath: /bitnami/kafka
name: data
dnsPolicy: ClusterFirst
restartPolicy: Always
schedulerName: default-scheduler
securityContext:
fsGroup: 1001
runAsUser: 1001
terminationGracePeriodSeconds: 30
updateStrategy:
type: RollingUpdate
volumeClaimTemplates:
- metadata:
creationTimestamp: null
name: data
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 8Gi
volumeMode: Filesystem
status:
phase: Pending
status:
collisionCount: 0
currentReplicas: 3
currentRevision: kafka-56ff499d74
observedGeneration: 1
readyReplicas: 1
replicas: 3
updateRevision: kafka-56ff499d74
updatedReplicas: 3
I see you are having trouble passing different environment variables to different pods in a StatefulSet.
You are trying to achieve this using Helm templates:
- name: KAFKA_CFG_ADVERTISED_LISTENERS
{{- if $MY_POD_NAME := "kafka-0" }}
value: 'PLAINTEXT://{{ .Values.advertisedListeners1 }}:9092'
{{- else if $MY_POD_NAME := "kafka-1" }}
value: 'PLAINTEXT://{{ .Values.advertisedListeners2 }}:9092'
{{- else if $MY_POD_NAME := "kafka-2" }}
value: 'PLAINTEXT://{{ .Values.advertisedListeners3 }}:9092'
{{- end }}
In the Helm template guide you can find this explanation:
In Helm templates, a variable is a named reference to another object.
It follows the form $name. Variables are assigned with a special assignment operator: :=.
Now let's look at your code:
{{- if $MY_POD_NAME := "kafka-0" }}
This is a variable assignment, not a comparison. After this assignment, the if statement
evaluates the expression to true, and that's why in your
statefulset YAML manifest you see this as the output:
- name: KAFKA_CFG_ADVERTISED_LISTENERS
value: PLAINTEXT://10.21.0.191:9092
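For contrast, an actual string comparison in a Helm template uses the eq function rather than :=; a hypothetical form is shown below, though it still cannot solve this problem, because the pod name only exists at runtime, long after Helm has rendered the template.
{{- /* comparison, not assignment; $MY_POD_NAME would still have to be defined at render time */}}
- name: KAFKA_CFG_ADVERTISED_LISTENERS
{{- if eq $MY_POD_NAME "kafka-0" }}
  value: 'PLAINTEXT://{{ .Values.advertisedListeners1 }}:9092'
{{- end }}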
To make it work as expected, you shouldn't use Helm templating here; it's not going to work.
One way to do it would be to create a separate environment variable for every Kafka node
and pass all of these variables to all pods, like this:
- env:
- name: MY_POD_NAME
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: metadata.name
- name: KAFKA_0
value: 10.21.0.191
- name: KAFKA_1
value: 10.21.0.192
- name: KAFKA_2
value: 10.21.0.193
# - name: KAFKA_CFG_ADVERTISED_LISTENERS
# value: PLAINTEXT://$MY_POD_NAME:9092
and also create your own Docker image with a modified startup script that exports the KAFKA_CFG_ADVERTISED_LISTENERS variable
with the appropriate value depending on MY_POD_NAME.
If you don't want to build your own image, you can create a ConfigMap with a modified entrypoint.sh and mount it
in place of the old entrypoint.sh (you can also use any other file; just take a look here
for more information on how the Kafka image is built).
Mounting the ConfigMap looks like this:
apiVersion: v1
kind: Pod
metadata:
name: test
spec:
containers:
- name: test-container
image: docker.io/bitnami/kafka:2.3.0-debian-9-r88
volumeMounts:
- name: config-volume
mountPath: /entrypoint.sh
subPath: entrypoint.sh
volumes:
- name: config-volume
configMap:
# Provide the name of the ConfigMap containing the files you want
# to add to the container
name: kafka-entrypoint-config
defaultMode: 0744 # remember to add proper (executable) permissions
---
apiVersion: v1
kind: ConfigMap
metadata:
name: kafka-entrypoint-config
namespace: default
data:
entrypoint.sh: |
#!/bin/bash
# Here add modified entrypoint script
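As a rough sketch only (assuming the KAFKA_0/KAFKA_1/KAFKA_2 variables from the env example above; the final exec is a placeholder, check the image's Dockerfile for its real startup command), the modified script could derive the advertised listener from the pod name like this:
data:
  entrypoint.sh: |
    #!/bin/bash
    # Derive the broker index from the pod name (kafka-0 -> 0, kafka-1 -> 1, ...)
    POD_INDEX="${MY_POD_NAME##*-}"
    # Look up the matching KAFKA_<index> variable (KAFKA_0, KAFKA_1, KAFKA_2 above)
    LISTENER_VAR="KAFKA_${POD_INDEX}"
    export KAFKA_CFG_ADVERTISED_LISTENERS="PLAINTEXT://${!LISTENER_VAR}:9092"
    # Hand off to the image's original startup command (placeholder; verify for your image tag)
    exec /run.sh "$@"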
Please let me know if it helped.
I think the Helm chart doesn't whitelist your external (to Kubernetes) network for advertised.listeners. I solved a similar issue by reconfiguring the Helm values.yaml like this. In my case the 127.0.0.1 domain is my Mac; yours might be different:
externalAccess:
enabled: true
autoDiscovery:
enabled: false
image:
registry: docker.io
repository: bitnami/kubectl
tag: 1.23.4-debian-10-r17
pullPolicy: IfNotPresent
pullSecrets: []
resources:
limits: {}
requests: {}
service:
type: NodePort
port: 9094
loadBalancerIPs: []
loadBalancerSourceRanges: []
nodePorts:
- 30000
- 30001
- 30002
useHostIPs: false
annotations: {}
domain: 127.0.0.1

Kubernetes pod not coming up

I am trying to install JHipster Registry on Kubernetes as a StatefulSet with the given manifest (jhipster-registry.yml from git).
I see the services and the StatefulSet coming up, but I don't see the worker pods.
Could you share how to get the worker pods up?
Update:
I am totally new to JHipster and Kubernetes; thanks for your response. I have pasted the jhipster-register.yml below.
The command I have tried is kubectl create -f jhipster-register.yml.
Error from kubectl describe statefulset:
Volumes:
config-volume:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: application-config
Optional: false
Volume Claims: <none>
Events:
FirstSeen LastSeen Count From SubObjectPath Type Reason Message
--------- -------- ----- ---- ------------- -------- ------ -------
1d 5s 4599 statefulset Warning FailedCreate create Pod jhipster-registry-0 in StatefulSet jhipster-registry failed error: Pod "jhipster-registry-0" is invalid: spec.containers[0].env[8].name: Invalid value: "JHIPSTER_LOGGING_LOGSTASH_ENABLED=true": a valid C identifier must start with alphabetic character or '_', followed by a string of alphanumeric characters or '_' (e.g. 'my_name', or 'MY_NAME', or 'MyName', regex used for validation is '[A-Za-z_][A-Za-z0-9_]*')
YML:
apiVersion: v1
kind: Service
metadata:
name: jhipster-registry
labels:
app: jhipster-registry
annotations:
service.alpha.kubernetes.io/tolerate-unready-endpoints: "true"
spec:
ports:
- port: 9761
clusterIP: None
selector:
app: jhipster-registry
---
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
name: jhipster-registry
spec:
serviceName: jhipster-registry
replicas: 2
template:
metadata:
labels:
app: jhipster-registry
spec:
terminationGracePeriodSeconds: 10
containers:
- name: jhipster-registry
image: jhipster/jhipster-registry:v3.2.3
ports:
- containerPort: 9761
env:
- name: CLUSTER_SIZE
value: "2"
- name: STATEFULSET_NAME
value: "jhipster-registry"
- name: STATEFULSET_NAMESPACE
value: "stage"
- name: SPRING_PROFILES_ACTIVE
value: prod,swagger,native
- name: SECURITY_USER_PASSWORD
value: admin
- name: JHIPSTER_SECURITY_AUTHENTICATION_JWT_SECRET
value: secret-is-nothing-its-just-inside-you
- name: EUREKA_CLIENT_FETCH_REGISTRY
value: 'true'
- name: EUREKA_CLIENT_REGISTER_WITH_EUREKA
value: 'true'
- name: JHIPSTER_LOGGING_LOGSTASH_ENABLED=true
value: 'true'
- name: GIT_URI
value: https://github.com/jhipster/jhipster-registry/
- name: GIT_SEARCH_PATHS
value: central-config
command:
- "/bin/sh"
- "-ec"
- |
HOSTNAME=$(hostname)
export EUREKA_INSTANCE_HOSTNAME="${HOSTNAME}.jhipster-registry.${STATEFULSET_NAMESPACE}.svc.cluster.local"
echo "Setting EUREKA_INSTANCE_HOSTNAME=${EUREKA_INSTANCE_HOSTNAME}"
echo "Configuring Registry Replicas for CLUSTER_SIZE=${CLUSTER_SIZE}"
LAST_POD_INDEX=$((${CLUSTER_SIZE} - 1))
REPLICAS=""
for i in $(seq 0 $LAST_POD_INDEX); do
REPLICAS="${REPLICAS}http://admin:${SECURITY_USER_PASSWORD}#${STATEFULSET_NAME}-${i}.jhipster-registry.${STATEFULSET_NAMESPACE}.svc.cluster.local:8761/eureka/"
if [ $i -lt $LAST_POD_INDEX ]; then
REPLICAS="${REPLICAS},"
fi
done
export EUREKA_CLIENT_SERVICE_URL_DEFAULTZONE=$REPLICAS
echo "EUREKA_CLIENT_SERVICE_URL_DEFAULTZONE=${REPLICAS}"
java -jar /jhipster-registry.war --spring.cloud.config.server.git.uri=${GIT_URI} --spring.cloud.config.server.git.search-paths=${GIT_SEARCH_PATHS} -Djava.security.egd=file:/dev/./urandom
volumeMounts:
- name: config-volume
mountPath: /central-config
volumes:
- name: config-volume
configMap:
name: application-config
Your StatefulSet is invalid because of an invalid env name.
- name: JHIPSTER_LOGGING_LOGSTASH_ENABLED=true
value: 'true'
You have used the env name JHIPSTER_LOGGING_LOGSTASH_ENABLED=true, which is invalid.
Correct format: [A-Za-z_][A-Za-z0-9_]*
That's why the pods are not being created.
Change the StatefulSet to use:
- name: JHIPSTER_LOGGING_LOGSTASH_ENABLED
value: 'true'
This will work.

Kubernetes Helm Chart - Debugging

I'm unable to find good information describing these errors:
[sarah#localhost helm] helm install statefulset --name statefulset --debug
[debug] Created tunnel using local port: '33172'
[debug] SERVER: "localhost:33172"
[debug] Original chart version: ""
[debug] CHART PATH: /home/helm/statefulset/
Error: error validating "": error validating data: [field spec.template for v1beta1.StatefulSetSpec is required, field spec.serviceName for v1beta1.StatefulSetSpec is required, found invalid field containers for v1beta1.StatefulSetSpec]
I'm still new to Helm; I've built two working charts that were similar to this template and didn't have these errors, even though the code isn't much different. I'm thinking there might be some kind of formatting error that I'm not noticing. Either that, or it's due to the different kind (the others were Pods, this is a StatefulSet).
The YAML file it's referencing is here:
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
name: "{{.Values.PrimaryName}}"
labels:
name: "{{.Values.PrimaryName}}"
app: "{{.Values.PrimaryName}}"
chart: "{{.Chart.Name}}-{{.Chart.Version}}"
annotations:
"helm.sh/created": {{.Release.Time.Seconds | quote }}
spec:
#serviceAccount: "{{.Values.PrimaryName}}-sa"
containers:
- name: {{.Values.ContainerName}}
image: "{{.Values.PostgresImage}}"
ports:
- containerPort: 5432
protocol: TCP
name: postgres
resources:
requests:
cpu: {{default "100m" .Values.Cpu}}
memory: {{default "100M" .Values.Memory}}
env:
- name: PGHOST
value: /tmp
- name: PG_PRIMARY_USER
value: primaryuser
- name: PG_MODE
value: set
- name: PG_PRIMARY_PORT
value: "5432"
- name: PG_PRIMARY_PASSWORD
value: "{{.Values.PrimaryPassword}}"
- name: PG_USER
value: testuser
- name: PG_PASSWORD
value: "{{.Values.UserPassword}}"
- name: PG_DATABASE
value: userdb
- name: PG_ROOT_PASSWORD
value: "{{.Values.RootPassword}}"
volumeMounts:
- name: pgdata
mountPath: "/pgdata"
readOnly: false
volumes:
- name: pgdata
persistentVolumeClaim:
claimName: {{.Values.PVCName}}
Would someone be able to a) point me in the right direction for implementing the required spec.template and spec.serviceName fields, b) help me understand why the field 'containers' is invalid, and/or c) mention any tool that can help debug Helm charts? I've tried 'helm lint' and the '--debug' flag, but 'helm lint' shows no errors and the '--debug' output is what is shown with the errors above.
Is it also possible the errors are coming from a different file?
StatefulSet objects have a different structure than Pods do. You need to modify your YAML file a little:
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
name: "{{.Values.PrimaryName}}"
labels:
name: "{{.Values.PrimaryName}}"
app: "{{.Values.PrimaryName}}"
chart: "{{.Chart.Name}}-{{.Chart.Version}}"
annotations:
"helm.sh/created": {{.Release.Time.Seconds | quote }}
spec:
selector:
matchLabels:
app: "" # has to match .spec.template.metadata.labels
serviceName: "" # put your serviceName here
replicas: 1 # by default is 1
template:
metadata:
labels:
app: "" # has to match .spec.selector.matchLabels
spec:
terminationGracePeriodSeconds: 10
containers:
- name: {{.Values.ContainerName}}
image: "{{.Values.PostgresImage}}"
ports:
- containerPort: 5432
protocol: TCP
name: postgres
resources:
requests:
cpu: {{default "100m" .Values.Cpu}}
memory: {{default "100M" .Values.Memory}}
env:
- name: PGHOST
value: /tmp
- name: PG_PRIMARY_USER
value: primaryuser
- name: PG_MODE
value: set
- name: PG_PRIMARY_PORT
value: "5432"
- name: PG_PRIMARY_PASSWORD
value: "{{.Values.PrimaryPassword}}"
- name: PG_USER
value: testuser
- name: PG_PASSWORD
value: "{{.Values.UserPassword}}
- name: PG_DATABASE
value: userdb
- name: PG_ROOT_PASSWORD
value: "{{.Values.RootPassword}}"
volumeMounts:
- name: pgdata
mountPath: "/pgdata"
readOnly: false
volumes:
- name: pgdata
persistentVolumeClaim:
claimName: {{.Values.PVCName}}
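For example (a sketch reusing the chart's existing values, since the labels only need to be consistent with each other and with a headless Service that the chart would also have to define), the empty placeholders above could be filled in like this:
spec:
  selector:
    matchLabels:
      app: "{{.Values.PrimaryName}}"
  serviceName: "{{.Values.PrimaryName}}"
  replicas: 1
  template:
    metadata:
      labels:
        app: "{{.Values.PrimaryName}}"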