Kong-ingress-controller CrashLoopBackOff : "kong-proxy" is forbidden - kubernetes

I've installed the Kong ingress controller using a YAML file on a 3-node k8s cluster, but I'm getting this (the pod status is CrashLoopBackOff):
$ kubectl get pods --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
kong ingress-kong-74d8d78f57-57fqr 1/2 CrashLoopBackOff 12 (3m23s ago) 40m
[...]
There are two container declarations in the Kong YAML file: proxy and ingress-controller. The first one is up and running, but the ingress-controller container is not:
$ kubectl describe pod ingress-kong-74d8d78f57-57fqr -n kong | less
[...]
ingress-controller:
Container ID: docker://8e9a3370f78b3057208b943048c9ecd51054d0b276ef6c93ccf049093261d8de
Image: kong/kubernetes-ingress-controller:1.3
Image ID: docker-pullable://kong/kubernetes-ingress-controller@sha256:cff0df9371d5ad07fef406c356839736ce9eeb0d33f918f56b1b232cd7289207
Port: 8080/TCP
Host Port: 0/TCP
State: Waiting
Reason: CrashLoopBackOff
Last State: Terminated
Reason: Error
Exit Code: 1
Started: Tue, 07 Sep 2021 17:15:54 +0430
Finished: Tue, 07 Sep 2021 17:15:54 +0430
Ready: False
Restart Count: 13
Liveness: http-get http://:10254/healthz delay=5s timeout=1s period=10s #success=1 #failure=3
Readiness: http-get http://:10254/healthz delay=5s timeout=1s period=10s #success=1 #failure=3
Environment:
CONTROLLER_KONG_ADMIN_URL: https://127.0.0.1:8444
CONTROLLER_KONG_ADMIN_TLS_SKIP_VERIFY: true
CONTROLLER_PUBLISH_SERVICE: kong/kong-proxy
POD_NAME: ingress-kong-74d8d78f57-57fqr (v1:metadata.name)
POD_NAMESPACE: kong (v1:metadata.namespace)
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-ft7gg (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
kube-api-access-ft7gg:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional: <nil>
DownwardAPI: true
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 46m default-scheduler Successfully assigned kong/ingress-kong-74d8d78f57-57fqr to kung-node-2
Normal Pulled 46m kubelet Container image "kong:2.5" already present on machine
Normal Created 46m kubelet Created container proxy
Normal Started 46m kubelet Started container proxy
Normal Pulled 45m (x4 over 46m) kubelet Container image "kong/kubernetes-ingress-controller:1.3" already present on machine
Normal Created 45m (x4 over 46m) kubelet Created container ingress-controller
Normal Started 45m (x4 over 46m) kubelet Started container ingress-controller
Warning BackOff 87s (x228 over 46m) kubelet Back-off restarting failed container
And here is the log of the ingress-controller container:
-------------------------------------------------------------------------------
Kong Ingress controller
Release:
Build:
Repository:
Go: go1.16.7
-------------------------------------------------------------------------------
W0907 12:56:12.940106 1 client_config.go:614] Neither --kubeconfig nor --master was specified. Using the inClusterConfig. This might not work.
time="2021-09-07T12:56:12Z" level=info msg="version of kubernetes api-server: 1.22" api-server-host="https://10.*.*.1:443" git_commit=632ed300f2c34f6d6d15ca4cef3d3c7073412212 git_tree_state=clean git_version=v1.22.1 major=1 minor=22 platform=linux/amd64
time="2021-09-07T12:56:12Z" level=fatal msg="failed to fetch publish-service: services \"kong-proxy\" is forbidden: User \"system:serviceaccount:kong:kong-serviceaccount\" cannot get resource \"services\" in API group \"\" in the namespace \"kong\"" service_name=kong-proxy service_namespace=kong
If someone could help me find a solution, that would be awesome.
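For anyone hitting the same forbidden error, the RBAC denial shown in the log can be checked directly against the API server with kubectl auth can-i (a diagnostic sketch, not part of the original post; adjust the namespace and service-account name to your install). It prints "no" until the missing permission is granted:
$ kubectl auth can-i get services -n kong \
    --as=system:serviceaccount:kong:kong-serviceaccount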
============================================================
UPDATE:
The kong-ingress-controller's yaml file:
apiVersion: v1
kind: Namespace
metadata:
name: kong
---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
name: kongclusterplugins.configuration.konghq.com
spec:
additionalPrinterColumns:
- JSONPath: .plugin
description: Name of the plugin
name: Plugin-Type
type: string
- JSONPath: .metadata.creationTimestamp
description: Age
name: Age
type: date
- JSONPath: .disabled
description: Indicates if the plugin is disabled
name: Disabled
priority: 1
type: boolean
- JSONPath: .config
description: Configuration of the plugin
name: Config
priority: 1
type: string
group: configuration.konghq.com
names:
kind: KongClusterPlugin
plural: kongclusterplugins
shortNames:
- kcp
scope: Cluster
subresources:
status: {}
validation:
openAPIV3Schema:
properties:
config:
type: object
configFrom:
properties:
secretKeyRef:
properties:
key:
type: string
name:
type: string
namespace:
type: string
required:
- name
- namespace
- key
type: object
type: object
disabled:
type: boolean
plugin:
type: string
protocols:
items:
enum:
- http
- https
- grpc
- grpcs
- tcp
- tls
type: string
type: array
run_on:
enum:
- first
- second
- all
type: string
required:
- plugin
version: v1
---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
name: kongconsumers.configuration.konghq.com
spec:
additionalPrinterColumns:
- JSONPath: .username
description: Username of a Kong Consumer
name: Username
type: string
- JSONPath: .metadata.creationTimestamp
description: Age
name: Age
type: date
group: configuration.konghq.com
names:
kind: KongConsumer
plural: kongconsumers
shortNames:
- kc
scope: Namespaced
subresources:
status: {}
validation:
openAPIV3Schema:
properties:
credentials:
items:
type: string
type: array
custom_id:
type: string
username:
type: string
version: v1
---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
name: kongingresses.configuration.konghq.com
spec:
group: configuration.konghq.com
names:
kind: KongIngress
plural: kongingresses
shortNames:
- ki
scope: Namespaced
subresources:
status: {}
validation:
openAPIV3Schema:
properties:
proxy:
properties:
connect_timeout:
minimum: 0
type: integer
path:
pattern: ^/.*$
type: string
protocol:
enum:
- http
- https
- grpc
- grpcs
- tcp
- tls
type: string
read_timeout:
minimum: 0
type: integer
retries:
minimum: 0
type: integer
write_timeout:
minimum: 0
type: integer
type: object
route:
properties:
headers:
additionalProperties:
items:
type: string
type: array
type: object
https_redirect_status_code:
type: integer
methods:
items:
type: string
type: array
path_handling:
enum:
- v0
- v1
type: string
preserve_host:
type: boolean
protocols:
items:
enum:
- http
- https
- grpc
- grpcs
- tcp
- tls
type: string
type: array
regex_priority:
type: integer
request_buffering:
type: boolean
response_buffering:
type: boolean
snis:
items:
type: string
type: array
strip_path:
type: boolean
upstream:
properties:
algorithm:
enum:
- round-robin
- consistent-hashing
- least-connections
type: string
hash_fallback:
type: string
hash_fallback_header:
type: string
hash_on:
type: string
hash_on_cookie:
type: string
hash_on_cookie_path:
type: string
hash_on_header:
type: string
healthchecks:
properties:
active:
properties:
concurrency:
minimum: 1
type: integer
healthy:
properties:
http_statuses:
items:
type: integer
type: array
interval:
minimum: 0
type: integer
successes:
minimum: 0
type: integer
type: object
http_path:
pattern: ^/.*$
type: string
timeout:
minimum: 0
type: integer
unhealthy:
properties:
http_failures:
minimum: 0
type: integer
http_statuses:
items:
type: integer
type: array
interval:
minimum: 0
type: integer
tcp_failures:
minimum: 0
type: integer
timeout:
minimum: 0
type: integer
type: object
type: object
passive:
properties:
healthy:
properties:
http_statuses:
items:
type: integer
type: array
interval:
minimum: 0
type: integer
successes:
minimum: 0
type: integer
type: object
unhealthy:
properties:
http_failures:
minimum: 0
type: integer
http_statuses:
items:
type: integer
type: array
interval:
minimum: 0
type: integer
tcp_failures:
minimum: 0
type: integer
timeout:
minimum: 0
type: integer
type: object
type: object
threshold:
type: integer
type: object
host_header:
type: string
slots:
minimum: 10
type: integer
type: object
version: v1
---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
name: kongplugins.configuration.konghq.com
spec:
additionalPrinterColumns:
- JSONPath: .plugin
description: Name of the plugin
name: Plugin-Type
type: string
- JSONPath: .metadata.creationTimestamp
description: Age
name: Age
type: date
- JSONPath: .disabled
description: Indicates if the plugin is disabled
name: Disabled
priority: 1
type: boolean
- JSONPath: .config
description: Configuration of the plugin
name: Config
priority: 1
type: string
group: configuration.konghq.com
names:
kind: KongPlugin
plural: kongplugins
shortNames:
- kp
scope: Namespaced
subresources:
status: {}
validation:
openAPIV3Schema:
properties:
config:
type: object
configFrom:
properties:
secretKeyRef:
properties:
key:
type: string
name:
type: string
required:
- name
- key
type: object
type: object
disabled:
type: boolean
plugin:
type: string
protocols:
items:
enum:
- http
- https
- grpc
- grpcs
- tcp
- tls
type: string
type: array
run_on:
enum:
- first
- second
- all
type: string
required:
- plugin
version: v1
---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
name: tcpingresses.configuration.konghq.com
spec:
additionalPrinterColumns:
- JSONPath: .status.loadBalancer.ingress[*].ip
description: Address of the load balancer
name: Address
type: string
- JSONPath: .metadata.creationTimestamp
description: Age
name: Age
type: date
group: configuration.konghq.com
names:
kind: TCPIngress
plural: tcpingresses
scope: Namespaced
subresources:
status: {}
validation:
openAPIV3Schema:
properties:
apiVersion:
type: string
kind:
type: string
metadata:
type: object
spec:
properties:
rules:
items:
properties:
backend:
properties:
serviceName:
type: string
servicePort:
format: int32
type: integer
type: object
host:
type: string
port:
format: int32
type: integer
type: object
type: array
tls:
items:
properties:
hosts:
items:
type: string
type: array
secretName:
type: string
type: object
type: array
type: object
status:
type: object
version: v1beta1
status:
acceptedNames:
kind: ""
plural: ""
conditions: []
storedVersions: []
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: kong-serviceaccount
namespace: kong
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
name: kong-ingress-clusterrole
rules:
- apiGroups:
- ""
resources:
- endpoints
- nodes
- pods
- secrets
verbs:
- list
- watch
- apiGroups:
- ""
resources:
- nodes
verbs:
- get
- apiGroups:
- ""
resources:
- services
verbs:
- get
- list
- watch
- apiGroups:
- networking.k8s.io
- extensions
- networking.internal.knative.dev
resources:
- ingresses
verbs:
- get
- list
- watch
- apiGroups:
- ""
resources:
- events
verbs:
- create
- patch
- apiGroups:
- networking.k8s.io
- extensions
- networking.internal.knative.dev
resources:
- ingresses/status
verbs:
- update
- apiGroups:
- configuration.konghq.com
resources:
- tcpingresses/status
verbs:
- update
- apiGroups:
- configuration.konghq.com
resources:
- kongplugins
- kongclusterplugins
- kongcredentials
- kongconsumers
- kongingresses
- tcpingresses
verbs:
- get
- list
- watch
- apiGroups:
- ""
resources:
- configmaps
verbs:
- create
- get
- update
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
name: kong-ingress-clusterrole-nisa-binding
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: kong-ingress-clusterrole
subjects:
- kind: ServiceAccount
name: kong-serviceaccount
namespace: kong
---
apiVersion: v1
kind: Service
metadata:
annotations:
service.beta.kubernetes.io/aws-load-balancer-backend-protocol: tcp
service.beta.kubernetes.io/aws-load-balancer-type: nlb
name: kong-proxy
namespace: kong
spec:
ports:
- name: proxy
port: 80
protocol: TCP
targetPort: 8000
- name: proxy-ssl
port: 443
protocol: TCP
targetPort: 8443
selector:
app: ingress-kong
type: LoadBalancer
---
apiVersion: v1
kind: Service
metadata:
name: kong-validation-webhook
namespace: kong
spec:
ports:
- name: webhook
port: 443
protocol: TCP
targetPort: 8080
selector:
app: ingress-kong
---
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: ingress-kong
name: ingress-kong
namespace: kong
spec:
replicas: 1
selector:
matchLabels:
app: ingress-kong
template:
metadata:
annotations:
kuma.io/gateway: enabled
prometheus.io/port: "8100"
prometheus.io/scrape: "true"
traffic.sidecar.istio.io/includeInboundPorts: ""
labels:
app: ingress-kong
spec:
containers:
- env:
- name: KONG_PROXY_LISTEN
value: 0.0.0.0:8000, 0.0.0.0:8443 ssl http2
- name: KONG_PORT_MAPS
value: 80:8000, 443:8443
- name: KONG_ADMIN_LISTEN
value: 127.0.0.1:8444 ssl
- name: KONG_STATUS_LISTEN
value: 0.0.0.0:8100
- name: KONG_DATABASE
value: "off"
- name: KONG_NGINX_WORKER_PROCESSES
value: "2"
- name: KONG_ADMIN_ACCESS_LOG
value: /dev/stdout
- name: KONG_ADMIN_ERROR_LOG
value: /dev/stderr
- name: KONG_PROXY_ERROR_LOG
value: /dev/stderr
image: kong:2.5
lifecycle:
preStop:
exec:
command:
- /bin/sh
- -c
- kong quit
livenessProbe:
failureThreshold: 3
httpGet:
path: /status
port: 8100
scheme: HTTP
initialDelaySeconds: 5
periodSeconds: 10
successThreshold: 1
timeoutSeconds: 1
name: proxy
ports:
- containerPort: 8000
name: proxy
protocol: TCP
- containerPort: 8443
name: proxy-ssl
protocol: TCP
- containerPort: 8100
name: metrics
protocol: TCP
readinessProbe:
failureThreshold: 3
httpGet:
path: /status
port: 8100
scheme: HTTP
initialDelaySeconds: 5
periodSeconds: 10
successThreshold: 1
timeoutSeconds: 1
- env:
- name: CONTROLLER_KONG_ADMIN_URL
value: https://127.0.0.1:8444
- name: CONTROLLER_KONG_ADMIN_TLS_SKIP_VERIFY
value: "true"
- name: CONTROLLER_PUBLISH_SERVICE
value: kong/kong-proxy
- name: POD_NAME
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: metadata.name
- name: POD_NAMESPACE
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: metadata.namespace
image: kong/kubernetes-ingress-controller:1.3
imagePullPolicy: IfNotPresent
livenessProbe:
failureThreshold: 3
httpGet:
path: /healthz
port: 10254
scheme: HTTP
initialDelaySeconds: 5
periodSeconds: 10
successThreshold: 1
timeoutSeconds: 1
name: ingress-controller
ports:
- containerPort: 8080
name: webhook
protocol: TCP
readinessProbe:
failureThreshold: 3
httpGet:
path: /healthz
port: 10254
scheme: HTTP
initialDelaySeconds: 5
periodSeconds: 10
successThreshold: 1
timeoutSeconds: 1
serviceAccountName: kong-serviceaccount

Having analysed the comments, it looks like changing apiVersion from rbac.authorization.k8s.io/v1beta1 to rbac.authorization.k8s.io/v1 solved the problem temporarily; an alternative to this solution is to downgrade the cluster. The sketch below shows what that change looks like in the manifest.
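In the manifest above that means updating the apiVersion of both RBAC objects; the rules, roleRef and subjects stay exactly as they are (shown abbreviated here). Note that the CRDs in the same file still use apiextensions.k8s.io/v1beta1, which Kubernetes 1.22 also no longer serves, so they may need the same kind of migration:
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: kong-ingress-clusterrole
rules:
  # ... unchanged ...
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kong-ingress-clusterrole-nisa-binding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: kong-ingress-clusterrole
subjects:
  - kind: ServiceAccount
    name: kong-serviceaccount
    namespace: kong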

I had installed Kubernetes 1.22 and tried to use kong/kubernetes-ingress-controller:1.3.
As @mdaniel said in the comments:
"Upon further investigation into that repo, 1.x only works up to k8s 1.21, so I'll delete my answer and you'll have to downgrade your cluster(!) or find an alternative Ingress controller."
Based on the documentation (see KIC version compatibility), kong/kubernetes-ingress-controller supported Kubernetes up to version 1.21 at the time of writing this answer. So I decided to downgrade my cluster to version 1.20, and this solved my problem.
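For anyone checking whether their cluster is affected: rbac.authorization.k8s.io/v1beta1 (and apiextensions.k8s.io/v1beta1) are no longer served from Kubernetes 1.22 on, which you can confirm directly:
$ kubectl version --short
$ kubectl api-versions | grep rbac.authorization.k8s.io
On a 1.22 cluster the second command only lists rbac.authorization.k8s.io/v1.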

Related

Grafana alert provisioning issue

I'd like to be able to provision alerts and tried to follow these instructions, but no luck!
OS Grafana version: 9.2.0, running in Kubernetes.
Steps that I take:
Created a new alert rule from the UI.
Extracted the alert rule from the API:
curl -k https://<my-grafana-url>/api/v1/provisioning/alert-rules/-4pMuQFVk -u admin:<my-admin-password>
It returns the following:
---
id: 18
uid: "-4pMuQFVk"
orgID: 1
folderUID: 3i72aQKVk
ruleGroup: cpu_alert_group
title: my_cpu_alert
condition: B
data:
- refId: A
queryType: ''
relativeTimeRange:
from: 600
to: 0
datasourceUid: _SaubQF4k
model:
editorMode: code
expr: system_cpu_usage
hide: false
intervalMs: 1000
legendFormat: __auto
maxDataPoints: 43200
range: true
refId: A
- refId: B
queryType: ''
relativeTimeRange:
from: 0
to: 0
datasourceUid: "-100"
model:
conditions:
- evaluator:
params:
- 3
type: gt
operator:
type: and
query:
params:
- A
reducer:
params: []
type: last
type: query
datasource:
type: __expr__
uid: "-100"
expression: A
hide: false
intervalMs: 1000
maxDataPoints: 43200
refId: B
type: classic_conditions
updated: '2022-12-07T20:01:47Z'
noDataState: NoData
execErrState: Error
for: 5m
Deleted the alert rule from the UI.
Made a ConfigMap from the above alert rule, as such:
apiVersion: v1
kind: ConfigMap
metadata:
name: grafana-alerting
data:
alerting.yaml: |-
apiVersion: 1
groups:
- id: 18
uid: "-4pMuQFVk"
orgID: 1
folderUID: 3i72aQKVk
ruleGroup: cpu_alert_group
title: my_cpu_alert
condition: B
data:
- refId: A
queryType: ''
relativeTimeRange:
from: 600
to: 0
datasourceUid: _SaubQF4k
model:
editorMode: code
expr: system_cpu_usage
hide: false
intervalMs: 1000
legendFormat: __auto
maxDataPoints: 43200
range: true
refId: A
- refId: B
queryType: ''
relativeTimeRange:
from: 0
to: 0
datasourceUid: "-100"
model:
conditions:
- evaluator:
params:
- 3
type: gt
operator:
type: and
query:
params:
- A
reducer:
params: []
type: last
type: query
datasource:
type: __expr__
uid: "-100"
expression: A
hide: false
intervalMs: 1000
maxDataPoints: 43200
refId: B
type: classic_conditions
updated: '2022-12-07T20:01:47Z'
noDataState: NoData
execErrState: Error
for: 5m
I mounted the above ConfigMap in the Grafana container (at /etc/grafana/provisioning/alerting).
The full manifest of the deployment is as follows:
apiVersion: apps/v1
kind: Deployment
metadata:
annotations:
deployment.kubernetes.io/revision: "2"
meta.helm.sh/release-name: grafana
meta.helm.sh/release-namespace: monitoring
creationTimestamp: "2022-12-08T18:31:30Z"
generation: 4
labels:
app.kubernetes.io/instance: grafana
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/name: grafana
app.kubernetes.io/version: 9.3.0
helm.sh/chart: grafana-6.46.1
name: grafana
namespace: monitoring
resourceVersion: "648617"
uid: dc06b802-5281-4f31-a2b2-fef3cf53a70b
spec:
progressDeadlineSeconds: 600
replicas: 1
revisionHistoryLimit: 10
selector:
matchLabels:
app.kubernetes.io/instance: grafana
app.kubernetes.io/name: grafana
strategy:
rollingUpdate:
maxSurge: 25%
maxUnavailable: 25%
type: RollingUpdate
template:
metadata:
annotations:
checksum/config: 98cac51656714db48a116d3109994ee48c401b138bc8459540e1a497f994d197
checksum/dashboards-json-config: 01ba4719c80b6fe911b091a7c05124b64eeece964e09c058ef8f9805daca546b
checksum/sc-dashboard-provider-config: 01ba4719c80b6fe911b091a7c05124b64eeece964e09c058ef8f9805daca546b
checksum/secret: 203f0e4d883461bdd41fe68515fc47f679722dc2fdda49b584209d1d288a5f07
creationTimestamp: null
labels:
app.kubernetes.io/instance: grafana
app.kubernetes.io/name: grafana
spec:
automountServiceAccountToken: true
containers:
- env:
- name: GF_SECURITY_ADMIN_USER
valueFrom:
secretKeyRef:
key: admin-user
name: grafana
- name: GF_SECURITY_ADMIN_PASSWORD
valueFrom:
secretKeyRef:
key: admin-password
name: grafana
- name: GF_PATHS_DATA
value: /var/lib/grafana/
- name: GF_PATHS_LOGS
value: /var/log/grafana
- name: GF_PATHS_PLUGINS
value: /var/lib/grafana/plugins
- name: GF_PATHS_PROVISIONING
value: /etc/grafana/provisioning
image: grafana/grafana:9.3.0
imagePullPolicy: IfNotPresent
livenessProbe:
failureThreshold: 10
httpGet:
path: /api/health
port: 3000
scheme: HTTP
initialDelaySeconds: 60
periodSeconds: 10
successThreshold: 1
timeoutSeconds: 30
name: grafana
ports:
- containerPort: 3000
name: grafana
protocol: TCP
readinessProbe:
failureThreshold: 3
httpGet:
path: /api/health
port: 3000
scheme: HTTP
periodSeconds: 10
successThreshold: 1
timeoutSeconds: 1
resources: {}
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
volumeMounts:
- mountPath: /etc/grafana/grafana.ini
name: config
subPath: grafana.ini
- mountPath: /etc/grafana/provisioning/alerting
name: grafana-alerting
- mountPath: /var/lib/grafana
name: storage
dnsPolicy: ClusterFirst
enableServiceLinks: true
restartPolicy: Always
schedulerName: default-scheduler
securityContext:
fsGroup: 472
runAsGroup: 472
runAsUser: 472
serviceAccount: grafana
serviceAccountName: grafana
terminationGracePeriodSeconds: 30
volumes:
- configMap:
defaultMode: 420
name: grafana-alerting
name: grafana-alerting
- configMap:
defaultMode: 420
name: grafana
name: config
- emptyDir: {}
name: storage
However, Grafana fails to start with this error:
Failed to start grafana. error: failure to map file alerting.yaml: failure parsing rules: rule group has no name set
I fixed the above error by adding a group name, but similar errors about other missing elements kept showing up again and again, to the point that I stopped chasing them, as I couldn't figure out what exactly the correct schema is. Digging in, it looks like the format/schema returned from the API in step 2 is different from the schema described in the documentation.
Why is the schema of the alert rule returned from the API different? Am I supposed to convert it, and if so, how? Otherwise, what am I doing wrong?
Edit: Replaced the StatefulSet with a Deployment, since I was able to reproduce this in a normal/minimal Grafana deployment too.
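For comparison, the file-provisioning format that /etc/grafana/provisioning/alerting expects wraps rules in a group object carrying its own orgId, name, folder and interval fields, which is not the flat shape the export API returns. A hedged sketch of that layout based on the provisioning docs (folder and interval values here are placeholders; the data blocks are the same refId A/B blocks shown in the export above):
apiVersion: 1
groups:
  - orgId: 1
    name: cpu_alert_group
    folder: my_folder            # folder the rule group is stored in
    interval: 1m
    rules:
      - uid: "-4pMuQFVk"
        title: my_cpu_alert
        condition: B
        data: []                 # same data blocks as in the export above
        noDataState: NoData
        execErrState: Error
        for: 5m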

VerneMQ in Kubernetes with persistent volume claim is throwing Forbidden: may not specify more than 1 volume type

Unable to create a VerneMQ pod in an AWS EKS cluster with a persistent volume claim for authentication and SSL. Below is my YAML file:
---
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
name: vernemq-storage
provisioner: kubernetes.io/aws-ebs
parameters:
type: gp2
reclaimPolicy: Retain
mountOptions:
- debug
volumeBindingMode: Immediate
---
apiVersion: v1
kind: PersistentVolume
metadata:
name: verne-aws-pv
spec:
accessModes:
- ReadWriteOnce
awsElasticBlockStore:
fsType: xfs
volumeID: aws://ap-south-1a/vol-xxxxx
capacity:
storage: 1Gi
persistentVolumeReclaimPolicy: Retain
storageClassName: vernemq-storage
volumeMode: Filesystem
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
labels:
app: mysql
name: verne-aws-pvc
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 10Gi
storageClassName: gp2-retain
volumeMode: Filesystem
volumeName: verne-aws-pv
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: vernemq
spec:
replicas: 1
selector:
matchLabels:
app: vernemq
serviceName: vernemq
template:
metadata:
labels:
app: vernemq
spec:
serviceAccountName: vernemq
terminationGracePeriodSeconds: 200
containers:
- name: vernemq
image: vernemq/vernemq:latest
imagePullPolicy: Always
lifecycle:
preStop:
exec:
command:
- /bin/bash
- -c
- /usr/sbin/vmq-admin cluster leave node=VerneMQ@${MY_POD_NAME}.vernemq.${DOCKER_VERNEMQ_KUBERNETES_NAMESPACE}.svc.cluster.local ; sleep 60 ; /usr/sbin/vmq-admin cluster leave node=VerneMQ@${MY_POD_NAME}.vernemq.${DOCKER_VERNEMQ_KUBERNETES_NAMESPACE}.svc.cluster.local -k; sleep 60;
ports:
- containerPort: 1883
name: mqtt
hostPort: 1883
- containerPort: 8883
- containerPort: 4369
name: epmd
- containerPort: 44053
name: vmq
- containerPort: 8888
name: health
- containerPort: 9100
- containerPort: 9101
- containerPort: 9102
- containerPort: 9103
- containerPort: 9104
- containerPort: 9105
- containerPort: 9106
- containerPort: 9107
- containerPort: 9108
- containerPort: 9109
- containerPort: 8888
resources:
limits:
cpu: "2"
memory: 3Gi
requests:
cpu: "1"
memory: 1Gi
env:
- name: DOCKER_VERNEMQ_ACCEPT_EULA
value: "yes"
- name: MY_POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: DOCKER_VERNEMQ_DISCOVERY_KUBERNETES
value: "1"
- name: DOCKER_VERNEMQ_KUBERNETES_APP_LABEL
value: "vernemq"
- name: DOCKER_VERNEMQ_KUBERNETES_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
- name: DOCKER_VERNEMQ_ERLANG__DISTRIBUTION__PORT_RANGE__MINIMUM
value: "9100"
- name: DOCKER_VERNEMQ_ERLANG__DISTRIBUTION__PORT_RANGE__MAXIMUM
value: "9109"
- name: DOCKER_VERNEMQ_ALLOW_ANONYMOUS
value: "on"
- name: DOCKER_VERNEMQ_LISTENER__TCP__DEFAULT
value: "0.0.0.0:1883"
- name: DOCKER_VERNEMQ_VMQ_WEBHOOKS__POOL_timeout
value: "6000"
- name: DOCKER_VERNEMQ_LISTENER__HTTP__DEFAULT
value: "0.0.0.0:8888"
- name: DOCKER_VERNEMQ_LISTENER__MAX_CONNECTIONS
value: "infinity"
- name: DOCKER_VERNEMQ_LISTENER__NR_OF_ACCEPTORS
value: "10000"
- name: DOCKER_VERNEMQ_MAX_INFLIGHT_MESSAGES
value: "0"
- name: DOCKER_VERNEMQ_ALLOW_MULTIPLE_SESSIONS
value: "off"
- name: DOCKER_VERNEMQ_ALLOW_REGISTER_DURING_NETSPLIT
value: "on"
- name: DOCKER_VERNEMQ_ALLOW_PUBLISH_DURING_NETSPLIT
value: "on"
- name: DOCKER_VERNEMQ_ALLOW_SUBSCRIBE_DURING_NETSPLIT
value: "on"
- name: DOCKER_VERNEMQ_ALLOW_UNSUBSCRIBE_DURING_NETSPLIT
value: "on"
- name: DOCKER_VERNEMQ_VMQ_PASSWD__PASSWORD_FILE
value: "/etc/vernemq/vmq.passwd"
- name: DOCKER_VERNEMQ_LISTENER__SSL__DEFAULT
value: "0.0.0.0:8883"
- name: DOCKER_VERNEMQ_LISTENER__SSL__CAFILE
value: "/etc/ssl/ca.crt"
- name: DOCKER_VERNEMQ_LISTENER__SSL__CERTFILE
value: "/etc/ssl/server.crt"
- name: DOCKER_VERNEMQ_LISTENER__SSL__KEYFILE
value: "/etc/ssl/server.key"
volumeMounts:
- mountPath: /etc/ssl
name: vernemq-certifications
readOnly: true
- mountPath: /etc/vernemq-passwd
name: vernemq-passwd
readOnly: true
volumes:
- name: vernemq-certifications
persistentVolumeClaim:
claimName: verne-aws-pvc
secret:
secretName: vernemq-certifications
- name: vernemq-passwd
persistentVolumeClaim:
claimName: verne-aws-pvc
secret:
secretName: vernemq-passwd
---
apiVersion: v1
kind: Service
metadata:
name: vernemq
labels:
app: vernemq
spec:
clusterIP: None
selector:
app: vernemq
ports:
- port: 4369
name: empd
- port: 44053
name: vmq
---
apiVersion: v1
kind: Service
metadata:
name: mqtt
labels:
app: mqtt
spec:
type: LoadBalancer
selector:
app: vernemq
ports:
- name: mqtt
port: 1883
targetPort: 1883
- name: health
port: 8888
targetPort: 8888
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: vernemq
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: endpoint-reader
rules:
- apiGroups: ["", "extensions", "apps"]
resources: ["endpoints", "deployments", "replicasets", "pods", "statefulsets", "persistentvolumeclaims"]
verbs: ["get", "patch", "list", "watch", "delete"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: endpoint-reader
subjects:
- kind: ServiceAccount
name: vernemq
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: endpoint-reader
I created an AWS EBS volume in the same region and subnet as the node group and added it to the persistent volume above.
The pod is not getting created; instead, when we run kubectl describe statefulset vernemq we get the error below:
Volumes:
vernemq-certifications:
Type: Secret (a volume populated by a Secret)
SecretName: vernemq-certifications
Optional: false
vernemq-passwd:
Type: Secret (a volume populated by a Secret)
SecretName: vernemq-passwd
Optional: false
Volume Claims: <none>
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedCreate 2m2s (x5 over 2m2s) statefulset-controller create Pod vernemq-0 in StatefulSet vernemq failed error: pods "vernemq-0" is forbidden: error looking up service account default/vernemq: serviceaccount "vernemq" not found
Warning FailedCreate 40s (x10 over 2m2s) statefulset-controller create Pod vernemq-0 in StatefulSet vernemq failed error: Pod "vernemq-0" is invalid: [spec.volumes[0].persistentVolumeClaim: Forbidden: may not specify more than 1 volume type, spec.volumes[1].persistentVolumeClaim: Forbidden: may not specify more than 1 volume type, spec.containers[0].volumeMounts[0].name: Not found: "vernemq-certifications", spec.containers[0].volumeMounts[1].name: Not found: "vernemq-passwd"]
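The second warning is about the volumes block: each entry under volumes may declare exactly one source, but the manifest above combines persistentVolumeClaim and secret in the same entries. A minimal sketch of one way to split them, assuming the certificates and password file actually live in the Secrets rather than on the EBS volume:
volumes:
  - name: vernemq-certifications
    secret:
      secretName: vernemq-certifications
  - name: vernemq-passwd
    secret:
      secretName: vernemq-passwd
  # if the PVC is still needed, mount it as its own, separately named volume
  - name: vernemq-data
    persistentVolumeClaim:
      claimName: verne-aws-pvc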

Kubernetes Ingress looking for secret in wrong place

I have the Keycloak chart (https://codecentric.github.io/helm-charts), where I configured the ingress to look at my secret for certificates, but instead it is looking in the wrong place:
W0830 15:05:12.330745 7 controller.go:1387] Error getting SSL certificate "default/tls-keycloak-czv9g": local SSL certificate default/tls-keycloak-czv9g was not found
Here is what the chart values look like:
keycloak:
basepath: auth/
username: admin
password: password
route:
tls:
enabled: true
extraEnv: |
- name: PROXY_ADDRESS_FORWARDING
value: "true"
- name: KEYCLOAK_IMPORT
value: /keycloak/master-realm.json
- name: JAVA_OPTS
value: >-
-Djboss.socket.binding.port-offset=1000
extraVolumes: |
- name: realm-secret
secret:
secretName: realm-secret
extraVolumeMounts: |
- name: realm-secret
mountPath: "/keycloak/"
readOnly: true
ingress:
enabled: true
annotations:
kubernetes.io/ingress.class: nginx
nginx.ingress.kubernetes.io/use-regex: "true"
cert-manager.io/cluster-issuer: "keycloak-issuer"
path: /auth/?(.*)
hosts:
- keycloak.localtest.me
tls:
- hosts:
- keycloak.localtest.me
secretName: tls-keycloak-czv9g
This is what I see from the console:
$ kubectl get secret
NAME TYPE DATA AGE
default-token-lbt48 kubernetes.io/service-account-token 3 22m
keycloak-admin-password Opaque 1 15m
keycloak-realm-secret Opaque 1 15m
tls-keycloak-czv9g Opaque 1 15m
$ kubectl describe secrets/tls-keycloak-czv9g
Name: tls-keycloak-czv9g
Namespace: default
Labels: cert-manager.io/next-private-key=true
Annotations: <none>
Type: Opaque
Data
====
tls.key: 1704 bytes
Why is the ingress looking in the wrong place?
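One detail from the output above: tls-keycloak-czv9g is of type Opaque, contains only tls.key, and carries the cert-manager.io/next-private-key label, which usually indicates cert-manager has created its temporary private-key secret but has not finished issuing the certificate. Once issuance succeeds, the ingress expects a secret shaped roughly like this (a sketch with placeholder data):
apiVersion: v1
kind: Secret
metadata:
  name: tls-keycloak-czv9g
  namespace: default
type: kubernetes.io/tls
data:
  tls.crt: <base64-encoded certificate>
  tls.key: <base64-encoded private key>
Running kubectl describe certificate and kubectl describe certificaterequest in the same namespace usually shows why issuance is stuck.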

Kubernetes Pod not coming up

I am trying to install JHipster Registry on Kubernetes as a StatefulSet with the given manifest (in git as jhipster-registry.yml).
I see the services and the StatefulSet coming up, but I don't see the worker pods :-(
Could you share how to get the worker pods up?
Update:
I am totally new to JHipster and Kubernetes. Thanks for your response. I pasted the jhipster-register.yml below.
The command I have tried is kubectl create -f jhipster-register.yml.
Error from kubectl describe statefulset:
Volumes:
config-volume:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: application-config
Optional: false
Volume Claims: <none>
Events:
FirstSeen LastSeen Count From SubObjectPath Type Reason Message
--------- -------- ----- ---- ------------- -------- ------ -------
1d 5s 4599 statefulset Warning FailedCreate create Pod jhipster-registry-0 in StatefulSet jhipster-registry failed error: Pod "jhipster-registry-0" is invalid: spec.containers[0].env[8].name: Invalid value: "JHIPSTER_LOGGING_LOGSTASH_ENABLED=true": a valid C identifier must start with alphabetic character or '_', followed by a string of alphanumeric characters or '_' (e.g. 'my_name', or 'MY_NAME', or 'MyName', regex used for validation is '[A-Za-z_][A-Za-z0-9_]*')
YML:
apiVersion: v1
kind: Service
metadata:
name: jhipster-registry
labels:
app: jhipster-registry
annotations:
service.alpha.kubernetes.io/tolerate-unready-endpoints: "true"
spec:
ports:
- port: 9761
clusterIP: None
selector:
app: jhipster-registry
---
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
name: jhipster-registry
spec:
serviceName: jhipster-registry
replicas: 2
template:
metadata:
labels:
app: jhipster-registry
spec:
terminationGracePeriodSeconds: 10
containers:
- name: jhipster-registry
image: jhipster/jhipster-registry:v3.2.3
ports:
- containerPort: 9761
env:
- name: CLUSTER_SIZE
value: "2"
- name: STATEFULSET_NAME
value: "jhipster-registry"
- name: STATEFULSET_NAMESPACE
value: "stage"
- name: SPRING_PROFILES_ACTIVE
value: prod,swagger,native
- name: SECURITY_USER_PASSWORD
value: admin
- name: JHIPSTER_SECURITY_AUTHENTICATION_JWT_SECRET
value: secret-is-nothing-its-just-inside-you
- name: EUREKA_CLIENT_FETCH_REGISTRY
value: 'true'
- name: EUREKA_CLIENT_REGISTER_WITH_EUREKA
value: 'true'
- name: JHIPSTER_LOGGING_LOGSTASH_ENABLED=true
value: 'true'
- name: GIT_URI
value: https://github.com/jhipster/jhipster-registry/
- name: GIT_SEARCH_PATHS
value: central-config
command:
- "/bin/sh"
- "-ec"
- |
HOSTNAME=$(hostname)
export EUREKA_INSTANCE_HOSTNAME="${HOSTNAME}.jhipster-registry.${STATEFULSET_NAMESPACE}.svc.cluster.local"
echo "Setting EUREKA_INSTANCE_HOSTNAME=${EUREKA_INSTANCE_HOSTNAME}"
echo "Configuring Registry Replicas for CLUSTER_SIZE=${CLUSTER_SIZE}"
LAST_POD_INDEX=$((${CLUSTER_SIZE} - 1))
REPLICAS=""
for i in $(seq 0 $LAST_POD_INDEX); do
REPLICAS="${REPLICAS}http://admin:${SECURITY_USER_PASSWORD}#${STATEFULSET_NAME}-${i}.jhipster-registry.${STATEFULSET_NAMESPACE}.svc.cluster.local:8761/eureka/"
if [ $i -lt $LAST_POD_INDEX ]; then
REPLICAS="${REPLICAS},"
fi
done
export EUREKA_CLIENT_SERVICE_URL_DEFAULTZONE=$REPLICAS
echo "EUREKA_CLIENT_SERVICE_URL_DEFAULTZONE=${REPLICAS}"
java -jar /jhipster-registry.war --spring.cloud.config.server.git.uri=${GIT_URI} --spring.cloud.config.server.git.search-paths=${GIT_SEARCH_PATHS} -Djava.security.egd=file:/dev/./urandom
volumeMounts:
- name: config-volume
mountPath: /central-config
volumes:
- name: config-volume
configMap:
name: application-config
Your StatefulSet is invalid because of an invalid env var name:
- name: JHIPSTER_LOGGING_LOGSTASH_ENABLED=true
value: 'true'
You have used the env name JHIPSTER_LOGGING_LOGSTASH_ENABLED=true, which is invalid.
The correct format is: [A-Za-z_][A-Za-z0-9_]*
That's why the pods are not coming up.
Change the StatefulSet to use:
- name: JHIPSTER_LOGGING_LOGSTASH_ENABLED
value: 'true'
This will work.
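After fixing the env name, the StatefulSet can be re-applied and watched until both replicas come up (a quick verification sketch, reusing the file name from the question):
$ kubectl apply -f jhipster-register.yml
$ kubectl rollout status statefulset/jhipster-registry
$ kubectl get pods -l app=jhipster-registry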

Helm / Kubernetes - Statefulset & Permissions

I keep seeing this error:
Events:
FirstSeen LastSeen Count From SubObjectPath Type Reason Message
--------- -------- ----- ---- ------------- -------- ------ -------
12s 2s 12 {statefulset } Warning FailedCreate create Pod pgset-0 in StatefulSet pgset failed error: pods "pgset-0" is forbidden: unable to validate against any security context constraint: [fsGroup: Invalid value: []int64{26}: 26 is not an allowed group]
I've created a ServiceAccount named "pgset-sa" and granted it the cluster-admin role. I've been researching other ways to get this to work (including editing the restricted SCC), but I keep getting the fsGroup error stating that 26 is not an allowed group. What am I missing?
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
name: "{{.Values.ContainerName}}"
labels:
name: "{{.Values.ReplicaName}}"
app: "{{.Values.ContainerName}}"
chart: "{{.Chart.Name}}-{{.Chart.Version}}"
annotations:
"helm.sh/created": {{.Release.Time.Seconds | quote }}
spec:
selector:
matchLabels:
app: "{{.Values.ContainerName}}"
serviceName: "{{.Values.ContainerName}}"
replicas: 2
template:
metadata:
labels:
app: "{{.Values.ContainerName}}"
spec:
serviceAccount: "{{.Values.ContainerServiceAccount}}"
securityContext:
fsGroup: 26
terminationGracePeriodSeconds: 10
containers:
- name: {{.Values.ContainerName}}
image: "{{.Values.PostgresImage}}"
ports:
- containerPort: 5432
name: postgres
resources:
requests:
cpu: {{default "100m" .Values.Cpu}}
memory: {{default "100M" .Values.Memory}}
env:
- name: PGHOST
value: /tmp
- name: PG_PRIMARY_USER
value: primaryuser
- name: PG_MODE
value: set
- name: PG_PRIMARY_HOST
value: "{{.Values.PrimaryName}}"
- name: PG_PRIMARY_PORT
value: "5432"
- name: PG_PRIMARY_PASSWORD
value: "{{.Values.PrimaryPassword}}"
- name: PG_USER
value: testuser
- name: PG_PASSWORD
value: "{{.Values.UserPassword}}"
- name: PG_DATABASE
value: userdb
- name: PG_ROOT_PASSWORD
value: "{{.Values.RootPassword}}"
volumeMounts:
- name: pgdata
mountPath: "/pgdata"
readOnly: false
volumes:
- name: pgdata
persistentVolumeClaim:
claimName: {{.Values.PVCName}}
Take a look at the document titled Managing Security Context Constraints.
The service account associated with the StatefulSet must be granted a security context constraint sufficient to allow the pod: in this case, one that either allows exactly fsGroup 26 or allows any fsGroup.
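On OpenShift, one way to do that is to bind a more permissive SCC to the service account the chart uses (a sketch; anyuid has an fsGroup strategy of RunAsAny, the service-account name is the one from the question, and the namespace placeholder is whichever namespace the release is installed into):
$ oc adm policy add-scc-to-user anyuid -z pgset-sa -n <release-namespace>
Once the SCC is granted, the StatefulSet controller will retry creating pgset-0 on its own.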