Helm 2 could not find Tiller, but the Tiller pod is okay - kubernetes-helm

Before running helm2 init, I ran:
kubectl delete all --all -n tiller-ns
rm -rf -- ~/.helm
Then I ran this:
helm2 init --tiller-namespace tiller-ns --stable-repo-url https://charts.helm.sh/stable
out:
Creating /root/.helm
Creating /root/.helm/repository
Creating /root/.helm/repository/cache
Creating /root/.helm/repository/local
Creating /root/.helm/plugins
Creating /root/.helm/starters
Creating /root/.helm/cache/archive
Creating /root/.helm/repository/repositories.yaml
Adding stable repo with URL: https://charts.helm.sh/stable
Adding local repo with URL: http://127.0.0.1:8879/charts
$HELM_HOME has been configured at /root/.helm.
Tiller (the Helm server-side component) has been installed into your Kubernetes Cluster.
Please note: by default, Tiller is deployed with an insecure 'allow unauthenticated users' policy.
To prevent this, run `helm init` with the --tiller-tls-verify flag.
For more information on securing your installation see: https://v2.helm.sh/docs/securing_installation/
then, check it:
helm2 version
out:
Client: &version.Version{SemVer:"v2.16.12", GitCommit:"47f0b88409e71fd9ca272abc7cd762a56a1c613e", GitTreeState:"clean"}
Error: could not find tiller
Check the Tiller pod:
kubectl get all -n tiller-ns
out:
NAME READY STATUS RESTARTS AGE
pod/tiller-deploy-86b4fc48c9-x6tk9 1/1 Running 0 2m24s
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/tiller-deploy ClusterIP 10.107.83.126 <none> 44134/TCP 2m24s
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/tiller-deploy 1/1 1 1 2m24s
NAME DESIRED CURRENT READY AGE
replicaset.apps/tiller-deploy-86b4fc48c9 1 1 1 2m24s
How about describing it?
kubectl describe po tiller-deploy-86b4fc48c9-x6tk9 -n tiller-ns
out:
Name: tiller-deploy-86b4fc48c9-x6tk9
Namespace: tiller-ns
Priority: 0
Node: node-04/192.168.3.144
Start Time: Mon, 22 Nov 2021 18:57:44 +0800
Labels: app=helm
name=tiller
pod-template-hash=86b4fc48c9
Annotations: cni.projectcalico.org/podIP: 100.103.62.181/32
cni.projectcalico.org/podIPs: 100.103.62.181/32
Status: Running
IP: 100.103.62.181
IPs:
IP: 100.103.62.181
Controlled By: ReplicaSet/tiller-deploy-86b4fc48c9
Containers:
tiller:
Container ID: docker://2eaac0dfcea9a77a591b0c612549265a59df126c855f65089e1ba3402305b518
Image: gcr.io/kubernetes-helm/tiller:v2.16.12
Image ID: docker://sha256:c11838703bf83ca52d95fe66b0db8d6a8797581fdd132925c27cbc9a70e1d9e1
Ports: 44134/TCP, 44135/TCP
Host Ports: 0/TCP, 0/TCP
State: Running
Started: Mon, 22 Nov 2021 18:57:48 +0800
Ready: True
Restart Count: 0
Liveness: http-get http://:44135/liveness delay=1s timeout=1s period=10s #success=1 #failure=3
Readiness: http-get http://:44135/readiness delay=1s timeout=1s period=10s #success=1 #failure=3
Environment:
TILLER_NAMESPACE: tiller-ns
TILLER_HISTORY_MAX: 0
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-cppv9 (ro)
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
default-token-cppv9:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-cppv9
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 3m18s default-scheduler Successfully assigned tiller-ns/tiller-deploy-86b4fc48c9-x6tk9 to node-04
Normal Pulled 3m14s kubelet Container image "gcr.io/kubernetes-helm/tiller:v2.16.12" already present on machine
Normal Created 3m14s kubelet Created container tiller
Normal Started 3m13s kubelet Started container tiller
And the logs?
kubectl logs tiller-deploy-86b4fc48c9-x6tk9 -n tiller-ns
out:
[main] 2021/11/22 10:57:49 Starting Tiller v2.16.12 (tls=false)
[main] 2021/11/22 10:57:49 GRPC listening on :44134
[main] 2021/11/22 10:57:49 Probes listening on :44135
[main] 2021/11/22 10:57:49 Storage driver is ConfigMap
[main] 2021/11/22 10:57:49 Max history per release is 0
Then I ran it again:
helm2 version
out:
Client: &version.Version{SemVer:"v2.16.12", GitCommit:"47f0b88409e71fd9ca272abc7cd762a56a1c613e", GitTreeState:"clean"}
Error: could not find tiller
what the hell ??

If you're using the now-unsupported Helm 2, and you configure it for a non-default Tiller namespace, it doesn't remember that. You either need to include the --tiller-namespace option in every helm invocation, or set the $TILLER_NAMESPACE environment variable.
export TILLER_NAMESPACE=tiller-ns
helm2 version
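If you want that setting to stick for future shells as well, one option (a minimal sketch, assuming a bash login shell) is simply:
echo 'export TILLER_NAMESPACE=tiller-ns' >> ~/.bashrc
Otherwise, pass --tiller-namespace tiller-ns on every helm2 command, as shown further down.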
(As of this writing, Helm 2 has been out of support for over a year, and several community charts have moved over to Helm-3-only syntax. You should upgrade if you can, though I acknowledge that changing deployment systems in a production environment is tricky. My experience has been that most charts written for Helm 2 work unmodified in Helm 3.)

Well, I've found the way to get it right:
helm2 version --tiller-namespace tiller-ns
out:
Client: &version.Version{SemVer:"v2.16.12", GitCommit:"47f0b88409e71fd9ca272abc7cd762a56a1c613e", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.16.12", GitCommit:"47f0b88409e71fd9ca272abc7cd762a56a1c613e", GitTreeState:"clean"}
The fix came from: https://stackoverflow.com/a/51662259/15861205

Related

A question about pods running on the Kubernetes (k8s) platform: the pods are running but the containers are not ready

I built a k8s cluster on my virtual machines (CentOS 7) with VirtualBox:
kubectl get nodes -o wide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
k8s-master Ready control-plane,master 8d v1.21.2 192.168.0.186 <none> CentOS Linux 7 (Core) 3.10.0-1160.31.1.el7.x86_64 docker://20.10.7
k8s-worker01 Ready <none> 8d v1.21.2 192.168.0.187 <none> CentOS Linux 7 (Core) 3.10.0-1160.31.1.el7.x86_64 docker://20.10.7
k8s-worker02 Ready <none> 8d v1.21.2 192.168.0.188 <none> CentOS Linux 7 (Core) 3.10.0-1160.31.1.el7.x86_64 docker://20.10.7
And I ran some pods in the default namespace with a ReplicaSet several days ago.
They all worked fine at first, and then I shut down the VMs.
Today, after I restarted the VMs, I found that they are not working properly anymore:
kubectl get all
NAME READY STATUS RESTARTS AGE
pod/dnsutils 1/1 Running 3 5d13h
pod/kubapp-6qbfz 0/1 Running 0 5d13h
pod/kubapp-d887h 0/1 Running 0 5d13h
pod/kubapp-z6nw7 0/1 Running 0 5d13h
NAME DESIRED CURRENT READY AGE
replicaset.apps/kubapp 3 3 0 5d13h
Then I deleted the ReplicaSet and re-created it to create the pods.
And I ran this command to get more information:
[root@k8s-master ch04]# kubectl describe po kubapp-z887v
Name: kubapp-d887h
Namespace: default
Priority: 0
Node: k8s-worker02/192.168.0.188
Start Time: Fri, 23 Jul 2021 15:55:16 +0000
Labels: app=kubapp
Annotations: cni.projectcalico.org/podIP: 10.244.69.244/32
cni.projectcalico.org/podIPs: 10.244.69.244/32
Status: Running
IP: 10.244.69.244
IPs:
IP: 10.244.69.244
Controlled By: ReplicaSet/kubapp
Containers:
kubapp:
Container ID: docker://fc352ce4c6a826f2cf108f9bb9a335e3572509fd5ae2002c116e2b080df5ee10
Image: evalle/kubapp
Image ID: docker-pullable://evalle/kubapp@sha256:560c9c50b1d894cf79ac472a9925dc795b116b9481ec40d142b928a0e3995f4c
Port: <none>
Host Port: <none>
State: Running
Started: Fri, 23 Jul 2021 15:55:21 +0000
Ready: False
Restart Count: 0
Readiness: exec [ls /var/ready] delay=0s timeout=1s period=10s #success=1 #failure=3
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-m9rwr (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
kube-api-access-m9rwr:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional: <nil>
DownwardAPI: true
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 30m default-scheduler Successfully assigned default/kubapp-d887h to k8s-worker02
Normal Pulling 30m kubelet Pulling image "evalle/kubapp"
Normal Pulled 30m kubelet Successfully pulled image "evalle/kubapp" in 4.049160061s
Normal Created 30m kubelet Created container kubapp
Normal Started 30m kubelet Started container kubapp
Warning Unhealthy 11s (x182 over 30m) kubelet Readiness probe failed: ls: cannot access /var/ready: No such file or directory
I don't know what happened or what I should do to fix it.
So here I am, asking you guys for help.
I am a k8s newbie, please give me a hand.
Thanks to paul-becotte's help and recommendation, I think I should post the definition of the pod:
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  # here is the name of the replication controller (RC)
  name: kubapp
spec:
  replicas: 3
  # what pods the RC is operating on
  selector:
    matchLabels:
      app: kubapp
  # the pod template for creating new pods
  template:
    metadata:
      labels:
        app: kubapp
    spec:
      containers:
        - name: kubapp
          image: evalle/kubapp
          readinessProbe:
            exec:
              command:
                - ls
                - /var/ready
There is an example yaml definition at https://github.com/Evalle/k8s-in-action/blob/master/Chapter_4/kubapp-rs.yaml.
I don't know where to find the Dockerfile of the image evalle/kubapp.
And I don't know if it has the /var/ready directory.
Look at your event:
Warning Unhealthy 11s (x182 over 30m) kubelet Readiness probe failed: ls: cannot access /var/ready: No such file or directory
Your readiness probe is failing - it looks like it is checking for the existence of a file at /var/ready.
Your next step is to ask: does that make sense? Is my container actually going to write a file at /var/ready when it is ready? If so, you'll want to look at the logs from your pod and figure out why it's not writing the file. If it is NOT the correct check, look at the yaml you used to create your pod/deployment/replicaset and replace that check with something that does make sense.
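If you want to confirm that yourself, a few quick checks (a sketch, using the pod name from the describe output above, and assuming the image has a basic shell toolset) would be:
kubectl exec kubapp-d887h -- ls /var/ready     # verify the file the probe looks for really is missing
kubectl logs kubapp-d887h                      # see whether the app ever tries to create it
kubectl exec kubapp-d887h -- touch /var/ready  # if the file is meant to be created by hand, this makes the probe pass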

Dashboard not running

I have set up Kubernetes on an Ubuntu server using this link.
Then I installed kubernetes dashboard using:
kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0-rc6/aio/deploy/recommended.yaml
Then I changed the service type from ClusterIP to NodePort, with node port 32323.
But the container is not running.
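(For reference, that kind of change can also be applied non-interactively; the following patch is just a sketch using the names and port from this question:)
kubectl -n kubernetes-dashboard patch svc kubernetes-dashboard -p '{"spec":{"type":"NodePort","ports":[{"port":443,"nodePort":32323}]}}'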
uday@dockermaster:~$ kubectl -n kubernetes-dashboard get all
NAME READY STATUS RESTARTS AGE
pod/dashboard-metrics-scraper-779f5454cb-pqfrj 1/1 Running 0 50m
pod/kubernetes-dashboard-64686c4bf9-5jkwq 0/1 CrashLoopBackOff 14 50m
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/dashboard-metrics-scraper ClusterIP 10.103.22.252 <none> 8000/TCP 50m
service/kubernetes-dashboard NodePort 10.102.48.80 <none> 443:32323/TCP 50m
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/dashboard-metrics-scraper 1/1 1 1 50m
deployment.apps/kubernetes-dashboard 0/1 1 0 50m
NAME DESIRED CURRENT READY AGE
replicaset.apps/dashboard-metrics-scraper-779f5454cb 1 1 1 50m
replicaset.apps/kubernetes-dashboard-64686c4bf9 1 1 0 50m
uday@dockermaster:~$ kubectl -n kubernetes-dashboard describe svc kubernetes-dashboard
Name: kubernetes-dashboard
Namespace: kubernetes-dashboard
Labels: k8s-app=kubernetes-dashboard
Annotations: Selector: k8s-app=kubernetes-dashboard
Type: NodePort
IP: 10.102.48.80
Port: <unset> 443/TCP
TargetPort: 8443/TCP
NodePort: <unset> 32323/TCP
Endpoints:
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>
Other apps (Tomcat, nginx, databases) are working fine with NodePort.
But here, the container itself is failing to start.
C:\Users\uday\Desktop>kubectl.exe get pods -n kubernetes-dashboard
NAME READY STATUS RESTARTS AGE
dashboard-metrics-scraper-779f5454cb-pqfrj 1/1 Running 1 20h
kubernetes-dashboard-64686c4bf9-g9z2k 0/1 CrashLoopBackOff 84 18h
C:\Users\uday\Desktop>kubectl.exe describe pod kubernetes-dashboard-64686c4bf9-g9z2k -n kubernetes-dashboard
Name: kubernetes-dashboard-64686c4bf9-g9z2k
Namespace: kubernetes-dashboard
Priority: 0
Node: slave-node/10.0.0.6
Start Time: Sat, 28 Mar 2020 14:16:54 +0000
Labels: k8s-app=kubernetes-dashboard
pod-template-hash=64686c4bf9
Annotations: <none>
Status: Running
IP: 182.244.1.12
IPs:
IP: 182.244.1.12
Controlled By: ReplicaSet/kubernetes-dashboard-64686c4bf9
Containers:
kubernetes-dashboard:
Container ID: docker://470ee8c61998c3c3dda86c58ad17817468f55aa73cd4feecf3b018977ce13ca3
Image: kubernetesui/dashboard:v2.0.0-rc6
Image ID: docker-pullable://kubernetesui/dashboard@sha256:61f9c378c427a3f8a9643f83baa9f96db1ae1357c67a93b533ae7b36d71c69dc
Port: 8443/TCP
Host Port: 0/TCP
Args:
--auto-generate-certificates
--namespace=kubernetes-dashboard
State: Waiting
Reason: CrashLoopBackOff
Last State: Terminated
Reason: Error
Exit Code: 2
Started: Sun, 29 Mar 2020 09:01:31 +0000
Finished: Sun, 29 Mar 2020 09:02:01 +0000
Ready: False
Restart Count: 84
Liveness: http-get https://:8443/ delay=30s timeout=30s period=10s #success=1 #failure=3
Environment: <none>
Mounts:
/certs from kubernetes-dashboard-certs (rw)
/tmp from tmp-volume (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kubernetes-dashboard-token-pzfbl (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
kubernetes-dashboard-certs:
Type: Secret (a volume populated by a Secret)
SecretName: kubernetes-dashboard-certs
Optional: false
tmp-volume:
Type: EmptyDir (a temporary directory that shares a pod's lifetime)
Medium:
SizeLimit: <unset>
kubernetes-dashboard-token-pzfbl:
Type: Secret (a volume populated by a Secret)
SecretName: kubernetes-dashboard-token-pzfbl
Optional: false
QoS Class: BestEffort
Node-Selectors: beta.kubernetes.io/os=linux
Tolerations: node-role.kubernetes.io/master:NoSchedule
node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning BackOff 4m49s (x501 over 123m) kubelet, slave-node Back-off restarting failed container
kubectl.exe logs kubernetes-dashboard-64686c4bf9-g9z2k -n kubernetes-dashboard
2020/03/29 09:01:31 Starting overwatch
2020/03/29 09:01:31 Using namespace: kubernetes-dashboard
2020/03/29 09:01:31 Using in-cluster config to connect to apiserver
2020/03/29 09:01:31 Using secret token for csrf signing
2020/03/29 09:01:31 Initializing csrf token from kubernetes-dashboard-csrf secret
panic: Get https://10.96.0.1:443/api/v1/namespaces/kubernetes-dashboard/secrets/kubernetes-dashboard-csrf: dial tcp 10.96.0.1:443: i/o timeout
goroutine 1 [running]:
github.com/kubernetes/dashboard/src/app/backend/client/csrf.(*csrfTokenManager).init(0xc0004e2dc0)
/home/travis/build/kubernetes/dashboard/src/app/backend/client/csrf/manager.go:40 +0x3b0
github.com/kubernetes/dashboard/src/app/backend/client/csrf.NewCsrfTokenManager(...)
/home/travis/build/kubernetes/dashboard/src/app/backend/client/csrf/manager.go:65
github.com/kubernetes/dashboard/src/app/backend/client.(*clientManager).initCSRFKey(0xc00043ae80)
/home/travis/build/kubernetes/dashboard/src/app/backend/client/manager.go:499 +0xc6
github.com/kubernetes/dashboard/src/app/backend/client.(*clientManager).init(0xc00043ae80)
/home/travis/build/kubernetes/dashboard/src/app/backend/client/manager.go:467 +0x47
github.com/kubernetes/dashboard/src/app/backend/client.NewClientManager(...)
/home/travis/build/kubernetes/dashboard/src/app/backend/client/manager.go:548
main.main()
/home/travis/build/kubernetes/dashboard/src/app/backend/dashboard.go:105 +0x20d
The Problem
The reason the application is not coming up is that the Dashboard container itself is not running. If you look at the output you provided, you can see this:
pod/kubernetes-dashboard-64686c4bf9-5jkwq 0/1 CrashLoopBackOff 14
So how do we troubleshoot this? Well, there are three principal ways, one of which you'll probably be using more than the other two.
Describe
Describe is a command that allows you to fetch details about a resource in Kubernetes. This could be metadata, the number of replicas you assigned, or even some events depicting why a resource is failing to start, for example a container image referenced in your Pod manifest that cannot be found in any of the usable container registries. The syntax for using Describe is like so:
kubectl describe pod -n kubernetes-dashboard kubernetes-dashboard-64686c4bf9-5jkwq
Here are some great docs on the tool as well.
Logs
The next troubleshooting step you'll likely take advantage of in Kubernetes is the logging architecture. As you're probably aware, when a Docker container is spawned it is common practice to have the logs produced by the application redirected to the process's STDOUT or STDERR. Kubernetes then captures this log data for you and provides an API abstraction layer through which you can interact with it. Sometimes your Describe events won't have any indication of why a process isn't running. In that case, you can proceed with grabbing logs from the process to determine what is going wrong. An example syntax might look like:
kubectl logs -f -n kubernetes-dashboard kubernetes-dashboard-64686c4bf9-5jkwq
Exec
The last common troubleshooting technique is Exec. Exec effectively allows you to attach to a shell in a running container so that you can interact with the live environment to troubleshoot the application. This allows you to do things like see if configuration files were properly staged on the container's filesystem, determine whether environment variables were properly expanded and set, etc. An example syntax for Exec might look like:
kubectl exec -it -n kubernetes-dashboard kubernetes-dashboard-64686c4bf9-5jkwq sh
In this case, however, your pod is in a CrashLoopBackoff state. This means that you will not be able to exec into it, because the container is not running. Kubernetes has recognized a pattern of failures and is automatically backing off between restart attempts for the workload.
Here is a good thread on how to troubleshoot pods that enter this state.
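One thing that does still work in this state is pulling the logs of the previous, crashed container instance, for example:
kubectl logs -n kubernetes-dashboard kubernetes-dashboard-64686c4bf9-5jkwq --previous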
Summary
So, now that I've said all of this, how do we answer your question? Well, we can't answer it directly. But I sort of did with my summary above, because the real answer you're looking for is how to properly troubleshoot Linux containers running in Kubernetes. These issues will be a recurring theme in your experience with Kubernetes, so it's essential to develop debugging skills in the ecosystem as soon as possible.
If the Describe, Logs, and Exec commands are unable to help you find out why the Dashboard pod is failing to come up, feel free to add a comment on this answer requesting additional support and I'll be happy to help where I can!
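As a concrete illustration for this case: the panic above shows the dashboard timing out against the API service IP (dial tcp 10.96.0.1:443: i/o timeout), so a quick in-cluster connectivity probe can help separate a networking/CNI problem from a dashboard problem. A rough sketch (the curl image is just one convenient choice):
kubectl -n kubernetes-dashboard run curl-test --rm -it --restart=Never --image=curlimages/curl --command -- curl -k -m 5 https://10.96.0.1:443/version
If that also times out, the issue is with pod-to-service networking rather than with the dashboard itself.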

docker-registry deployed to K8s gets a "CrashLoopBackOff" issue

I am stuck with a docker-registry deployment to K8s. Here I show in detail what I did. Hope you can give me some ideas.
My K8S version:
ii kubeadm 1.14.1-00 amd64 Kubernetes Cluster Bootstrapping Tool
ii kubectl 1.14.1-00 amd64 Kubernetes Command Line Tool
ii kubelet 1.14.1-00 amd64 Kubernetes Node Agent
ii kubernetes-cni 0.7.5-00 amd64 Kubernetes CNI
What I did:
Create a self-signed certificate:
$ openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout cert.key -out cert.crt
Import the self-signed certificate into K8s as a TLS secret:
$ kubectl create secret tls registry-cert-secret --key cert.key --cert cert.crt
$ vim chart_values.yaml
ingress:
  enabled: true
  hosts:
    - registry.mgmt.home.local
  annotations:
    kubernetes.io/ingress.class: traefik
  tls:
    - secretName: registry-cert-secret
      hosts:
        - registry.mgmt.home.local
secrets:
  htpasswd: "admin:$2y$05$f95dCd6fRxQdDoPJ6mJIb.YMvR0qfhddSl3NSL1wCk1ZMl4JyFBDW"
  s3:
    accessKey: "admin"
    secretKey: "admin2019"
storage: s3
s3:
  region: us-east-1
  regionEndpoint: http://minio.home.local:9000
  secure: true
  bucket: registry
Then install it with helm:
$ helm install stable/docker-registry -f chart_values.yaml --name docker-registry
NAME: docker-registry
LAST DEPLOYED: Thu Oct 31 16:29:31 2019
NAMESPACE: default
STATUS: DEPLOYED
show the kubectl deployments
$ kubectl get deployments
NAME READY UP-TO-DATE AVAILABLE AGE
docker-registry 0/1 1 0 35m
get pods
$ kubectl get pods --namespace default
NAME READY STATUS RESTARTS AGE
docker-registry-6989668db6-78d84 0/1 **CrashLoopBackOff** 7 13m
docker-registry-6989668db6-jttrz 1/1 Terminating 0 37m
describe pod
$ kubectl describe pod docker-registry-6989668db6-78d84 --namespace default
Name: docker-registry-6989668db6-78d84
Namespace: default
Priority: 0
PriorityClassName: <none>
Node: k8s-worker-promox/10.102.11.223
Start Time: Thu, 31 Oct 2019 18:03:13 +0800
Labels: app=docker-registry
pod-template-hash=6989668db6
release=docker-registry
Annotations: checksum/config: 89b20bb43a348d6b8dedacac583a596ccef4e570a935e7c5b464ba746eb88307
Status: Running
IP: 10.244.52.10
Controlled By: ReplicaSet/docker-registry-6989668db6
Containers:
docker-registry:
Container ID: docker://9a40c5e100711b122ddd78439c9fa21790f04f5a442b704140639f8fbfbd8929
Image: registry:2.7.1
Image ID: docker-pullable://registry@sha256:8004747f1e8cd820a148fb7499d71a76d45ff66bac6a29129bfdbfdc0154d146
Port: 5000/TCP
Host Port: 0/TCP
Command:
/bin/registry
serve
/etc/docker/registry/config.yml
State: Waiting
Reason: CrashLoopBackOff
Last State: Terminated
Reason: Error
Exit Code: 2
Started: Thu, 31 Oct 2019 18:14:21 +0800
Finished: Thu, 31 Oct 2019 18:15:19 +0800
Ready: False
Restart Count: 7
Liveness: http-get http://:5000/ delay=0s timeout=1s period=10s #success=1 #failure=3
Readiness: http-get http://:5000/ delay=0s timeout=1s period=10s #success=1 #failure=3
Environment:
REGISTRY_AUTH: htpasswd
REGISTRY_AUTH_HTPASSWD_REALM: Registry Realm
REGISTRY_AUTH_HTPASSWD_PATH: /auth/htpasswd
REGISTRY_HTTP_SECRET: <set to the key 'haSharedSecret' in secret 'docker-registry-secret'> Optional: false
REGISTRY_STORAGE_S3_ACCESSKEY: <set to the key 's3AccessKey' in secret 'docker-registry-secret'> Optional: false
REGISTRY_STORAGE_S3_SECRETKEY: <set to the key 's3SecretKey' in secret 'docker-registry-secret'> Optional: false
REGISTRY_STORAGE_S3_REGION: us-east-1
REGISTRY_STORAGE_S3_REGIONENDPOINT: http://10.102.11.218:9000
REGISTRY_STORAGE_S3_BUCKET: registry
REGISTRY_STORAGE_S3_SECURE: true
Mounts:
/auth from auth (ro)
/etc/docker/registry from docker-registry-config (rw)
/var/run/secrets/kubernetes.io/serviceaccount from default-token-qfwkm (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
auth:
Type: Secret (a volume populated by a Secret)
SecretName: docker-registry-secret
Optional: false
docker-registry-config:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: docker-registry-config
ingress:
Optional: false
default-token-qfwkm:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-qfwkm
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 14m default-scheduler Successfully assigned default/docker-registry-6989668db6-78d84 to k8s-worker-promox
Normal Pulled 12m (x3 over 14m) kubelet, k8s-worker-promox Container image "registry:2.7.1" already present on machine
Normal Created 12m (x3 over 14m) kubelet, k8s-worker-promox Created container docker-registry
Normal Started 12m (x3 over 14m) kubelet, k8s-worker-promox Started container docker-registry
Normal Killing 12m (x2 over 13m) kubelet, k8s-worker-promox Container docker-registry failed liveness probe, will be restarted
Warning Unhealthy 12m (x7 over 14m) kubelet, k8s-worker-promox Liveness probe failed: HTTP probe failed with statuscode: 503
Warning Unhealthy 9m8s (x15 over 13m) kubelet, k8s-worker-promox Readiness probe failed: HTTP probe failed with statuscode: 503
Warning BackOff 4m26s (x18 over 8m40s) kubelet, k8s-worker-promox Back-off restarting failed container
I see the issue is related to the Liveness and Readiness probes: they make the pod try to start/restart many times, until it goes into back-off.
Following the troubleshooting, I suspect it is related to DNS. But DNS should not have any issues; I tried a lookup on the K8s host:
$ nslookup minio.home.local
Server: 10.102.11.201
Address: 10.102.11.201#53
Non-authoritative answer:
Name: minio.home.local
Address: 10.101.12.213
Updated November 1st: I went into another pod and ran nslookup; this pod could not find minio.home.local. Is that related to this issue? I also tried replacing minio.home.local with the IP in the *.yaml, but I get the same issue.
$ kubectl exec -it net-utils-5b5f89f777-2cwgq bash
root@net-utils-5b5f89f777-2cwgq:/# nslookup minio.home.local
Server: 10.96.0.10
Address: 10.96.0.10#53
** server can't find minio.skylab.local: NXDOMAIN
root@net-utils-5b5f89f777-2cwgq:/# ping minio.home.local
ping: unknown host
I have Googled and searched GitHub discussions, but I still could not fix it. Do you have any ideas?
Thank you so much.
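For reference, if the cluster is running CoreDNS (the kubeadm default at this version), one common way to make a private zone such as home.local resolvable inside pods is to forward it to the LAN DNS server that answered on the host (10.102.11.201 above). This is only a sketch:
kubectl -n kube-system edit configmap coredns
# add a server block like this to the Corefile:
#   home.local:53 {
#       errors
#       cache 30
#       forward . 10.102.11.201
#   }
kubectl -n kube-system delete pod -l k8s-app=kube-dns   # restart CoreDNS so it picks up the change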

Helm install or upgrade release failed on Kubernetes cluster: the server could not find the requested resource or UPGRADE FAILED: no deployed releases

I use Helm for deploying charts on my Kubernetes cluster, but since one day, I can't deploy a new one or upgrade an existing one.
Indeed, each time I use Helm I get an error message telling me that it is not possible to install or upgrade resources.
If I run helm install --name foo . -f values.yaml --namespace foo-namespace I have this output:
Error: release foo failed: the server could not find the requested
resource
If I run helm upgrade --install foo . -f values.yaml --namespace foo-namespace or helm upgrade foo . -f values.yaml --namespace foo-namespace I have this error:
Error: UPGRADE FAILED: "foo" has no deployed releases
I don't really understand why.
This is my helm version:
Client: &version.Version{SemVer:"v2.12.3", GitCommit:"eecf22f77df5f65c823aacd2dbd30ae6c65f186e", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.12.3", GitCommit:"eecf22f77df5f65c823aacd2dbd30ae6c65f186e", GitTreeState:"clean"}
On my Kubernetes cluster I have Tiller deployed with the same version; when I run kubectl describe pods tiller-deploy-84b... -n kube-system:
Name: tiller-deploy-84b8...
Namespace: kube-system
Priority: 0
PriorityClassName: <none>
Node: k8s-worker-1/167.114.249.216
Start Time: Tue, 26 Feb 2019 10:50:21 +0100
Labels: app=helm
name=tiller
pod-template-hash=84b...
Annotations: <none>
Status: Running
IP: <IP_NUMBER>
Controlled By: ReplicaSet/tiller-deploy-84b8...
Containers:
tiller:
Container ID: docker://0302f9957d5d83db22...
Image: gcr.io/kubernetes-helm/tiller:v2.12.3
Image ID: docker-pullable://gcr.io/kubernetes-helm/tiller@sha256:cab750b402d24d...
Ports: 44134/TCP, 44135/TCP
Host Ports: 0/TCP, 0/TCP
State: Running
Started: Tue, 26 Feb 2019 10:50:28 +0100
Ready: True
Restart Count: 0
Liveness: http-get http://:44135/liveness delay=1s timeout=1s period=10s #success=1 #failure=3
Readiness: http-get http://:44135/readiness delay=1s timeout=1s period=10s #success=1 #failure=3
Environment:
TILLER_NAMESPACE: kube-system
TILLER_HISTORY_MAX: 0
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from helm-token-... (ro)
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
helm-token-...:
Type: Secret (a volume populated by a Secret)
SecretName: helm-token-...
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 26m default-scheduler Successfully assigned kube-system/tiller-deploy-84b86cbc59-kxjqv to worker-1
Normal Pulling 26m kubelet, k8s-worker-1 pulling image "gcr.io/kubernetes-helm/tiller:v2.12.3"
Normal Pulled 26m kubelet, k8s-worker-1 Successfully pulled image "gcr.io/kubernetes-helm/tiller:v2.12.3"
Normal Created 26m kubelet, k8s-worker-1 Created container
Normal Started 26m kubelet, k8s-worker-1 Started container
Has anyone faced the same issue?
Update:
This is the folder structure of my chart named foo:
> templates/
> deployment.yaml
> ingress.yaml
> service.yaml
> .helmignore
> Chart.yaml
> values.yaml
I have already tried to delete the failing release using the delete command helm del --purge foo, but the same errors occur.
Just to be more precise, the chart foo is in fact a custom chart using my own private registry. ImagePullSecrets are set up normally.
I have run these two commands, helm upgrade foo . -f values.yaml --namespace foo-namespace --force and helm upgrade --install foo . -f values.yaml --namespace foo-namespace --force, and I still get an error:
UPGRADE FAILED
ROLLING BACK
Error: failed to create resource: the server could not find the requested resource
Error: UPGRADE FAILED: failed to create resource: the server could not find the requested resource
Notice that foo-namespace already exists, so the error doesn't come from the namespace name or the namespace itself. Indeed, if I run helm list, I can see that the foo release is in a FAILED status.
Tiller stores all releases as ConfigMaps in Tiller's namespace (kube-system in your case). Try to find the broken release and delete its ConfigMap using commands like these:
$ kubectl get cm --all-namespaces -l OWNER=TILLER
NAMESPACE NAME DATA AGE
kube-system nginx-ingress.v1 1 22h
$ kubectl delete cm nginx-ingress.v1 -n kube-system
Next, delete all of the release's objects (deployments, services, ingresses, etc.) manually and reinstall the release using helm again.
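If the chart follows the common labeling convention, the leftover objects can often be removed in one pass; this is only a sketch and assumes the resources carry a release=foo label:
kubectl -n foo-namespace get all -l release=foo
kubectl -n foo-namespace delete deployment,service,ingress -l release=foo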
If that doesn't help, you may try downloading a newer release of Helm (v2.14.3 at the moment) and updating/reinstalling Tiller.
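After upgrading the helm client binary, the usual way to bring the in-cluster Tiller up to the same version is:
helm init --upgrade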
I had the same issue, but cleanup did not help, and trying the same helm chart on a brand new k8s cluster did not help either.
I found out that a missing apiVersion caused the problem. I found it by running
helm install xyz --dry-run
copying the output to a new test.yaml file, and using
kubectl apply -f test.yaml
There I could see the error (the apiVersion line had been moved to a comment line).
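A variation on the same idea, sketched with helm template (Helm 2), which renders only the manifests so kubectl can validate them directly; chart path and values file are placeholders:
helm template . --name foo -f values.yaml > test.yaml
kubectl apply -f test.yaml --dry-run
A missing or commented-out apiVersion then shows up as a validation error without touching the cluster.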
I had the same problem, but not due to broken releases, and only after upgrading Helm. It seems newer versions of Helm behave badly with the --wait parameter. So for anyone facing the same issue: just removing --wait (and leaving --debug) from the helm upgrade parameters solved my issue.
I had this issue when I tried to deploy a custom chart with a CronJob instead of a Deployment. The error occurs at this step in the deploy script. To resolve it, I needed to add the environment variable ROLLOUT_STATUS_DISABLED=true; it is solved in this issue.

Installing kong-ingress-controller to manage ingress on kubernetes

I am installing the Kong ingress controller on my AKS cluster, but I don't want to have the postgres StatefulSet service inside my cluster. Instead, I have a postgres database in my Azure infrastructure, and I want to connect to it from my kong-ingress-controller deployment by creating the postgres credentials as secrets in my AKS cluster and storing them in environment variables.
I've created the secret:
⟩ kubectl create secret generic az-pg-db-user-pass --from-literal=username='az-pg-username' --from-literal=password='az-pg-password' --namespace kong
secret/az-pg-db-user-pass created
And in my kongwithingress.yaml file, I have the deployment manifest declarations, which I wanted to share via this gist link in order not to fill the question body with a lot of yaml code lines.
This gist is based on this AKS all-in-one deployment, but with the postgres StatefulSet and Service removed for the reasons above; my objective is to set up the connection with my own Azure managed postgres service.
I've referenced the az-pg-db-user-pass generic secret created above in the kong-ingress-controller deployment, my kong deployment, and my kong-migrations job (all present in the gist script) in order to create environment variables such as the following:
KONG_PG_USERNAME
KONG_PG_PASSWORD
These environment variables have been created and referenced as secrets in the kong-ingress-controller deployment, the kong deployment, and the kong-migrations job, which need to access or connect to the postgres database.
When I execute the kubectl apply -f kongwithingres.yaml command I get the following output:
The kong-ingress-controller deployment, kong deployment and kong-migrations job were created successfully.
⟩ kubectl apply -f kongwithingres.yaml
namespace/kong unchanged
customresourcedefinition.apiextensions.k8s.io/kongplugins.configuration.konghq.com unchanged
customresourcedefinition.apiextensions.k8s.io/kongconsumers.configuration.konghq.com unchanged
customresourcedefinition.apiextensions.k8s.io/kongcredentials.configuration.konghq.com unchanged
customresourcedefinition.apiextensions.k8s.io/kongingresses.configuration.konghq.com unchanged
serviceaccount/kong-serviceaccount unchanged
clusterrole.rbac.authorization.k8s.io/kong-ingress-clusterrole unchanged
role.rbac.authorization.k8s.io/kong-ingress-role unchanged
rolebinding.rbac.authorization.k8s.io/kong-ingress-role-nisa-binding unchanged
clusterrolebinding.rbac.authorization.k8s.io/kong-ingress-clusterrole-nisa-binding unchanged
service/kong-ingress-controller created
deployment.extensions/kong-ingress-controller created
service/kong-proxy created
deployment.extensions/kong created
job.batch/kong-migrations created
But their respective pods have the CrashLoopBackOff status
NAME READY STATUS RESTARTS AGE
pod/kong-d8b88df99-j6hvl 0/1 Init:CrashLoopBackOff 5 4m24s
pod/kong-ingress-controller-984fc9666-cd2b5 0/2 Init:CrashLoopBackOff 5 4m24s
pod/kong-migrations-t6n7p 0/1 CrashLoopBackOff 5 4m24s
I checked the respective logs of each pod and found this:
The pod/kong-d8b88df99-j6hvl:
⟩ kubectl logs pod/kong-d8b88df99-j6hvl -p -n kong
Error from server (BadRequest): previous terminated container "kong-proxy" in pod "kong-d8b88df99-j6hvl" not found
And in its describe output, this pod is getting the environment variables and the image:
⟩ kubectl describe pod/kong-d8b88df99-j6hvl -n kong
Name: kong-d8b88df99-j6hvl
Namespace: kong
Status: Pending
IP: 10.244.1.18
Controlled By: ReplicaSet/kong-d8b88df99
Init Containers:
wait-for-migrations:
Container ID: docker://7007a89ada215daf853ec103d79dca60ccc5fb3a14c51ac6c5c56655da6da62f
Image: kong:1.0.0
Image ID: docker-pullable://kong@sha256:8fd6a312d7715a9cc85c49625a4c2f53951f6e4422926091e4d2ae67c480b6d5
Port: <none>
Host Port: <none>
Command:
/bin/sh
-c
kong migrations list
State: Waiting
Reason: CrashLoopBackOff
Last State: Terminated
Reason: Error
Exit Code: 1
Started: Tue, 26 Feb 2019 16:25:01 +0100
Finished: Tue, 26 Feb 2019 16:25:01 +0100
Ready: False
Restart Count: 6
Environment:
KONG_ADMIN_LISTEN: off
KONG_PROXY_LISTEN: off
KONG_PROXY_ACCESS_LOG: /dev/stdout
KONG_ADMIN_ACCESS_LOG: /dev/stdout
KONG_PROXY_ERROR_LOG: /dev/stderr
KONG_ADMIN_ERROR_LOG: /dev/stderr
KONG_PG_HOST: zcrm365-postgresql1.postgres.database.azure.com
KONG_PG_USERNAME: <set to the key 'username' in secret 'az-pg-db-user-pass'> Optional: false
KONG_PG_PASSWORD: <set to the key 'password' in secret 'az-pg-db-user-pass'> Optional: false
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-gnkjq (ro)
Containers:
kong-proxy:
Container ID:
Image: kong:1.0.0
Image ID:
Ports: 8000/TCP, 8443/TCP
Host Ports: 0/TCP, 0/TCP
State: Waiting
Reason: PodInitializing
Ready: False
Restart Count: 0
Environment:
KONG_PG_USERNAME: <set to the key 'username' in secret 'az-pg-db-user-pass'> Optional: false
KONG_PG_PASSWORD: <set to the key 'password' in secret 'az-pg-db-user-pass'> Optional: false
KONG_PG_HOST: zcrm365-postgresql1.postgres.database.azure.com
KONG_PROXY_ACCESS_LOG: /dev/stdout
KONG_PROXY_ERROR_LOG: /dev/stderr
KONG_ADMIN_LISTEN: off
KUBERNETES_PORT_443_TCP_ADDR: zcrm365-d73ab78d.hcp.westeurope.azmk8s.io
KUBERNETES_PORT: tcp://zcrm365-d73ab78d.hcp.westeurope.azmk8s.io:443
KUBERNETES_PORT_443_TCP: tcp://zcrm365-d73ab78d.hcp.westeurope.azmk8s.io:443
KUBERNETES_SERVICE_HOST: zcrm365-d73ab78d.hcp.westeurope.azmk8s.io
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-gnkjq (ro)
Conditions:
Type Status
Initialized False
Ready False
ContainersReady False
PodScheduled True
Volumes:
default-token-gnkjq:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-gnkjq
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 8m44s default-scheduler Successfully assigned kong/kong-d8b88df99-j6hvl to aks-default-75800594-1
Normal Pulled 7m9s (x5 over 8m40s) kubelet, aks-default-75800594-1 Container image "kong:1.0.0" already present on machine
Normal Created 7m8s (x5 over 8m40s) kubelet, aks-default-75800594-1 Created container
Normal Started 7m7s (x5 over 8m40s) kubelet, aks-default-75800594-1 Started container
Warning BackOff 3m34s (x26 over 8m38s) kubelet, aks-default-75800594-1 Back-off restarting failed container
The pod/kong-ingress-controller-984fc9666-cd2b5:
kubectl logs pod/kong-ingress-controller-984fc9666-cd2b5 -p -n kong
Error from server (BadRequest): a container name must be specified for pod kong-ingress-controller-984fc9666-cd2b5, choose one of: [admin-api ingress-controller] or one of the init containers: [wait-for-migrations]
And its respective description:
⟩ kubectl describe pod/kong-ingress-controller-984fc9666-cd2b5 -n kong
Name: kong-ingress-controller-984fc9666-cd2b5
Namespace: kong
Status: Pending
IP: 10.244.2.18
Controlled By: ReplicaSet/kong-ingress-controller-984fc9666
Init Containers:
wait-for-migrations:
Container ID: docker://8eb035f755322b3ac72792d922974811933ba9a71afb1f4549cfe7e0a6519619
Image: kong:1.0.0
Image ID: docker-pullable://kong@sha256:8fd6a312d7715a9cc85c49625a4c2f53951f6e4422926091e4d2ae67c480b6d5
Port: <none>
Host Port: <none>
Command:
/bin/sh
-c
kong migrations list
State: Waiting
Reason: CrashLoopBackOff
Last State: Terminated
Reason: Error
Exit Code: 1
Started: Tue, 26 Feb 2019 16:29:56 +0100
Finished: Tue, 26 Feb 2019 16:29:56 +0100
Ready: False
Restart Count: 7
Environment:
KONG_ADMIN_LISTEN: off
KONG_PROXY_LISTEN: off
KONG_PROXY_ACCESS_LOG: /dev/stdout
KONG_ADMIN_ACCESS_LOG: /dev/stdout
KONG_PROXY_ERROR_LOG: /dev/stderr
KONG_ADMIN_ERROR_LOG: /dev/stderr
KONG_PG_HOST: zcrm365-postgresql1.postgres.database.azure.com
KONG_PG_USERNAME: <set to the key 'username' in secret 'az-pg-db-user-pass'> Optional: false
KONG_PG_PASSWORD: <set to the key 'password' in secret 'az-pg-db-user-pass'> Optional: false
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kong-serviceaccount-token-rc4sp (ro)
Containers:
admin-api:
Container ID:
Image: kong:1.0.0
Image ID:
Port: 8001/TCP
Host Port: 0/TCP
State: Waiting
Reason: PodInitializing
Ready: False
Restart Count: 0
Liveness: http-get http://:8001/status delay=30s timeout=1s period=10s #success=1 #failure=3
Readiness: http-get http://:8001/status delay=0s timeout=1s period=10s #success=1 #failure=3
Environment:
KONG_PG_USERNAME: <set to the key 'username' in secret 'az-pg-db-user-pass'> Optional: false
KONG_PG_PASSWORD: <set to the key 'password' in secret 'az-pg-db-user-pass'> Optional: false
KONG_PG_HOST: zcrm365-postgresql1.postgres.database.azure.com
KONG_ADMIN_ACCESS_LOG: /dev/stdout
KONG_ADMIN_ERROR_LOG: /dev/stderr
KONG_ADMIN_LISTEN: 0.0.0.0:8001, 0.0.0.0:8444 ssl
KONG_PROXY_LISTEN: off
KUBERNETES_PORT_443_TCP_ADDR: zcrm365-d73ab78d.hcp.westeurope.azmk8s.io
KUBERNETES_PORT: tcp://zcrm365-d73ab78d.hcp.westeurope.azmk8s.io:443
KUBERNETES_PORT_443_TCP: tcp://zcrm365-d73ab78d.hcp.westeurope.azmk8s.io:443
KUBERNETES_SERVICE_HOST: zcrm365-d73ab78d.hcp.westeurope.azmk8s.io
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kong-serviceaccount-token-rc4sp (ro)
ingress-controller:
Container ID:
Image: kong-docker-kubernetes-ingress-controller.bintray.io/kong-ingress-controller:0.3.0
Image ID:
Port: <none>
Host Port: <none>
Args:
/kong-ingress-controller
--kong-url=https://localhost:8444
--admin-tls-skip-verify
--default-backend-service=kong/kong-proxy
--publish-service=kong/kong-proxy
State: Waiting
Reason: PodInitializing
Ready: False
Restart Count: 0
Liveness: http-get http://:10254/healthz delay=30s timeout=1s period=10s #success=1 #failure=3
Readiness: http-get http://:10254/healthz delay=0s timeout=1s period=10s #success=1 #failure=3
Environment:
POD_NAME: kong-ingress-controller-984fc9666-cd2b5 (v1:metadata.name)
POD_NAMESPACE: kong (v1:metadata.namespace)
KUBERNETES_PORT_443_TCP_ADDR: zcrm365-d73ab78d.hcp.westeurope.azmk8s.io
KUBERNETES_PORT: tcp://zcrm365-d73ab78d.hcp.westeurope.azmk8s.io:443
KUBERNETES_PORT_443_TCP: tcp://zcrm365-d73ab78d.hcp.westeurope.azmk8s.io:443
KUBERNETES_SERVICE_HOST: zcrm365-d73ab78d.hcp.westeurope.azmk8s.io
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kong-serviceaccount-token-rc4sp (ro)
Conditions:
Type Status
Initialized False
Ready False
ContainersReady False
PodScheduled True
Volumes:
kong-serviceaccount-token-rc4sp:
Type: Secret (a volume populated by a Secret)
SecretName: kong-serviceaccount-token-rc4sp
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 12m default-scheduler Successfully assigned kong/kong-ingress-controller-984fc9666-cd2b5 to aks-default-75800594-2
Normal Pulled 10m (x5 over 12m) kubelet, aks-default-75800594-2 Container image "kong:1.0.0" already present on machine
Normal Created 10m (x5 over 12m) kubelet, aks-default-75800594-2 Created container
Normal Started 10m (x5 over 12m) kubelet, aks-default-75800594-2 Started container
Warning BackOff 2m14s (x49 over 12m) kubelet, aks-default-75800594-2 Back-off restarting failed container
I don't know the reason for the CrashLoopBackOff status, or why their respective states are Waiting: PodInitializing.
How can I debug this behavior?
Is it possible that Kong cannot talk to the Postgres database?
My AKS cluster is on Azure, as is my postgres database, and they can communicate as services.
UPDATE
These are the logs of my created container pods:
⟩ kubectl logs pod/kong-ingress-controller-984fc9666-w4vvn -p -n kong -c ingress-controller
Error from server (BadRequest): previous terminated container "ingress-controller" in pod "kong-ingress-controller-984fc9666-w4vvn" not found
⟩ kubectl logs pod/kong-d8b88df99-qsq4j -p -n kong -c kong-proxy
Error from server (BadRequest): previous terminated container "kong-proxy" in pod "kong-d8b88df99-qsq4j" not found
My kong-ingress-controller deployment pods were in CrashLoopBackOff, and sometimes in Waiting: PodInitializing, because I hadn't kept some things in mind, as follows:
The main reason, as @Amityo says, is that kong-ingress-controller and kong have an init container called wait-for-migrations, which waits for the kong-migrations job to have been executed. From this I could see that it was necessary to perform my kong migrations.
But my kong-migrations job was not working, because I didn't have the KONG_DATABASE environment variable parameter to set up the connection.
Another reason my deployment was not working is that kong, in order to connect to postgres internally, may expect the user environment variable defined in the container to be called KONG_PG_USER. I had called it KONG_PG_USERNAME, and that was another reason my script failed. (I am not completely sure about this.)
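As a quick illustration of the first point, the missing variable can be tried out on the already-running deployments; this is only a sketch, since the kong-migrations job and the init containers are not covered by this command and the permanent fix belongs in the gist's manifests (KONG_DATABASE=postgres is the value Kong expects for a Postgres-backed setup):
kubectl -n kong set env deployment/kong KONG_DATABASE=postgres
kubectl -n kong set env deployment/kong-ingress-controller KONG_DATABASE=postgres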
⟩ kubectl create -f kongwithingres.yaml
namespace/kong created
secret/az-pg-db-user-pass created
customresourcedefinition.apiextensions.k8s.io/kongplugins.configuration.konghq.com created
customresourcedefinition.apiextensions.k8s.io/kongconsumers.configuration.konghq.com created
customresourcedefinition.apiextensions.k8s.io/kongcredentials.configuration.konghq.com created
customresourcedefinition.apiextensions.k8s.io/kongingresses.configuration.konghq.com created
serviceaccount/kong-serviceaccount created
clusterrole.rbac.authorization.k8s.io/kong-ingress-clusterrole created
role.rbac.authorization.k8s.io/kong-ingress-role created
rolebinding.rbac.authorization.k8s.io/kong-ingress-role-nisa-binding created
clusterrolebinding.rbac.authorization.k8s.io/kong-ingress-clusterrole-nisa-binding created
service/kong-ingress-controller created
deployment.extensions/kong-ingress-controller created
service/kong-proxy created
deployment.extensions/kong created
job.batch/kong-migrations created
By the way, to start with Kong I recommend installing Konga, which is a front-end dashboard tool to manage Kong and check the things we can do via yaml files.
We have this konga.yaml script to be installed as a deployment in our kubernetes clusters:
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: konga
  namespace: kong
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: konga
    spec:
      containers:
        - env:
            - name: NODE_TLS_REJECT_UNAUTHORIZED
              value: "0"
          image: pantsel/konga:latest
          name: konga
          ports:
            - containerPort: 1337
And we can start the service locally on our machines via the kubectl port-forward command:
⟩ kubectl port-forward pod/konga-85b66cffff-mxq85 1337:1337 -n kong
Forwarding from 127.0.0.1:1337 -> 1337
Forwarding from [::1]:1337 -> 1337
Then we go to http://localhost:1337/, where it is necessary to register an admin user in order to use Konga.
Create a Kong connection (Connections in the dashboard) with the following Kong admin URL: http://kong-ingress-controller:8001/