Kubernetes: No external IP

I installed Kubernetes as a single-node cluster on a Debian 10 box.
I changed the dashboard config with:
sudo kubectl -n kubernetes-dashboard edit svc kubernetes-dashboard
and changed the type from ClusterIP to NodePort, setting the port to 32321, as described in this tutorial: https://k21academy.com/docker-kubernetes/kubernetes-dashboard/
sudo kubectl get svc kubernetes-dashboard -n kubernetes-dashboard
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes-dashboard NodePort 10.100.91.194 <none> 443:32321/TCP 25m
I still don't get an external IP and can't access the dashboard via an external IP. :(
Any advice?
sudo kubectl get all --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system pod/coredns-558bd4d5db-9fxkw 1/1 Running 0 136m
kube-system pod/coredns-558bd4d5db-bq79s 1/1 Running 0 136m
kube-system pod/etcd-dyd-001 1/1 Running 0 136m
kube-system pod/kube-apiserver-dyd-001 1/1 Running 0 136m
kube-system pod/kube-controller-manager-dyd-001 1/1 Running 0 136m
kube-system pod/kube-flannel-ds-amd64-hh5qm 1/1 Running 0 136m
kube-system pod/kube-proxy-4pg4r 1/1 Running 0 136m
kube-system pod/kube-scheduler-dyd-001 1/1 Running 0 136m
kubernetes-dashboard pod/dashboard-metrics-scraper-84f48697d6-6sqqt 1/1 Running 0 19m
kubernetes-dashboard pod/kubernetes-dashboard-689fddb6b4-5sbhf 1/1 Running 0 19m
NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
default service/kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 136m
kube-system service/kube-dns ClusterIP 10.96.0.10 <none> 53/UDP,53/TCP,9153/TCP 136m
kubernetes-dashboard service/dashboard-metrics-scraper ClusterIP 10.98.66.248 <none> 8000/TCP 19m
kubernetes-dashboard service/kubernetes-dashboard NodePort 10.100.91.194 <none> 443:32321/TCP 19m
NAMESPACE NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE
kube-system daemonset.apps/kube-flannel-ds-amd64 1 1 1 1 1 <none> 136m
kube-system daemonset.apps/kube-flannel-ds-arm 0 0 0 0 0 <none> 136m
kube-system daemonset.apps/kube-flannel-ds-arm64 0 0 0 0 0 <none> 136m
kube-system daemonset.apps/kube-flannel-ds-ppc64le 0 0 0 0 0 <none> 136m
kube-system daemonset.apps/kube-flannel-ds-s390x 0 0 0 0 0 <none> 136m
kube-system daemonset.apps/kube-proxy 1 1 1 1 1 kubernetes.io/os=linux 136m
NAMESPACE NAME READY UP-TO-DATE AVAILABLE AGE
kube-system deployment.apps/coredns 2/2 2 2 136m
kubernetes-dashboard deployment.apps/dashboard-metrics-scraper 1/1 1 1 19m
kubernetes-dashboard deployment.apps/kubernetes-dashboard 1/1 1 1 19m
NAMESPACE NAME DESIRED CURRENT READY AGE
kube-system replicaset.apps/coredns-558bd4d5db 2 2 2 136m
kubernetes-dashboard replicaset.apps/dashboard-metrics-scraper-84f48697d6 1 1 1 19m
kubernetes-dashboard replicaset.apps/kubernetes-dashboard-689fddb6b4 1 1 1 19m
sudo kubectl get all -n kubernetes-dashboard
NAME READY STATUS RESTARTS AGE
pod/dashboard-metrics-scraper-84f48697d6-6sqqt 1/1 Running 0 17m
pod/kubernetes-dashboard-689fddb6b4-5sbhf 1/1 Running 0 17m
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/dashboard-metrics-scraper ClusterIP 10.98.66.248 <none> 8000/TCP 17m
service/kubernetes-dashboard NodePort 10.100.91.194 <none> 443:32321/TCP 17m
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/dashboard-metrics-scraper 1/1 1 1 17m
deployment.apps/kubernetes-dashboard 1/1 1 1 17m
NAME DESIRED CURRENT READY AGE
replicaset.apps/dashboard-metrics-scraper-84f48697d6 1 1 1 17m
replicaset.apps/kubernetes-dashboard-689fddb6b4 1 1 1 17m
sudo kubectl get svc -n kubernetes-dashboard
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
dashboard-metrics-scraper ClusterIP 10.98.66.248 <none> 8000/TCP 15m
kubernetes-dashboard NodePort 10.100.91.194 <none> 443:32321/TCP 15m
and
sudo kubectl get deployment kubernetes-dashboard -n kubernetes-dashboard
NAME READY UP-TO-DATE AVAILABLE AGE
kubernetes-dashboard 1/1 1 1 17m
and
sudo kubectl describe service kubernetes-dashboard -n kubernetes-dashboard
Name: kubernetes-dashboard
Namespace: kubernetes-dashboard
Labels: k8s-app=kubernetes-dashboard
Annotations: <none>
Selector: k8s-app=kubernetes-dashboard
Type: NodePort
IP Family Policy: SingleStack
IP Families: IPv4
IP: 10.100.91.194
IPs: 10.100.91.194
Port: <unset> 443/TCP
TargetPort: 8443/TCP
NodePort: <unset> 32321/TCP
Endpoints: 10.244.0.6:8443
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>

You will not see an external IP for a NodePort service.
Try accessing your dashboard with your server's public IP and the node port:
https://<server_IP>:32321
Above, port 32321 is taken from your output:
sudo kubectl get svc kubernetes-dashboard -n kubernetes-dashboard
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes-dashboard NodePort 10.100.91.194 <none> 443:32321/TCP 25m
Remember to use https, and note that the node port can change if you redeploy the service.
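If you would rather not depend on the auto-assigned port, you can pin the node port yourself, or tunnel to the service for ad-hoc access instead. A minimal sketch, assuming the dashboard Service shown above (the value 32321 and the use of the first ports entry are taken from that output):
# pin the node port explicitly so it survives a redeploy
kubectl -n kubernetes-dashboard patch svc kubernetes-dashboard \
  --type=json -p='[{"op":"replace","path":"/spec/ports/0/nodePort","value":32321}]'
# or tunnel to the service without exposing a node port at all
kubectl -n kubernetes-dashboard port-forward svc/kubernetes-dashboard 8443:443
# then browse to https://localhost:8443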

You need to create a Kubernetes Service of type LoadBalancer, like the example below, to get an external IP.
apiVersion: v1
kind: Service
metadata:
  name: example-service
spec:
  selector:
    app: example
  ports:
    - port: 8765
      targetPort: 9376
  type: LoadBalancer
Remember that to use a Service of type LoadBalancer on a bare-metal or on-premises cluster, you need something like MetalLB (or a similar load-balancer implementation) for your network.
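For completeness, a minimal Layer 2 MetalLB configuration might look like the sketch below; it uses the CRD-based configuration of current MetalLB releases, and the address range is a placeholder you must replace with free IPs on your own network:
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: default-pool
  namespace: metallb-system
spec:
  addresses:
    - 192.168.1.240-192.168.1.250   # placeholder range, pick unused IPs on your LAN
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: default-l2
  namespace: metallb-system
spec:
  ipAddressPools:
    - default-pool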

Related

Nginx Ingress Controller not curling localhost on worker nodes

CentOS 7, 3 VMs -- 1 master and 2 workers, Kubernetes 1.26 (kubelet is 1.25.5.0), cri-dockerd, Calico CNI
# kubectl get nodes -o wide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
rxxxx-vm1 Ready control-plane 4h48m v1.25.5 10.253.137.20 <none> CentOS Linux 7 (Core) 3.10.0-1160.80.1.el7.x86_64 docker://20.10.22
rxxxx-vm2 Ready <none> 4h27m v1.25.5 10.253.137.17 <none> CentOS Linux 7 (Core) 3.10.0-1160.80.1.el7.x86_64 docker://20.10.22
rxxxx-vm3 Ready <none> 4h27m v1.25.5 10.253.137.10 <none> CentOS Linux 7 (Core) 3.10.0-1160.80.1.el7.x86_64 docker://20.10.22
NGINX Ingress controller is deployed as a daemonset:
# kubectl get po -A -o wide
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
calico-apiserver calico-apiserver-685568b969-b8mfr 1/1 Running 0 4h50m 172.17.0.6 rxxxx-vm1 <none> <none>
calico-apiserver calico-apiserver-685568b969-xrj2h 1/1 Running 0 4h50m 172.17.0.7 rxxxx-vm1 <none> <none>
calico-system calico-kube-controllers-67df98bdc8-2zdnj 1/1 Running 0 4h51m 172.17.0.4 rxxxx-vm1 <none> <none>
calico-system calico-node-498bb 1/1 Running 0 4h51m 10.253.137.20 rxxxx-vm1 <none> <none>
calico-system calico-node-sblv9 1/1 Running 0 4h30m 10.253.137.17 rxxxx-vm2 <none> <none>
calico-system calico-node-zkn28 1/1 Running 0 4h29m 10.253.137.10 rxxxx-vm3 <none> <none>
calico-system calico-typha-76c8f59f87-mq52d 1/1 Running 0 4h29m 10.253.137.10 rxxxx-vm3 <none> <none>
calico-system calico-typha-76c8f59f87-zk6jr 1/1 Running 0 4h51m 10.253.137.20 rxxxx-vm1 <none> <none>
kube-system coredns-787d4945fb-6mq5k 1/1 Running 0 4h51m 172.17.0.3 rxxxx-vm1 <none> <none>
kube-system coredns-787d4945fb-kmqcv 1/1 Running 0 4h51m 172.17.0.2 rxxxx-vm1 <none> <none>
kube-system etcd-rxxxx-vm1 1/1 Running 0 4h51m 10.253.137.20 rxxxx-vm1 <none> <none>
kube-system kube-apiserver-rxxxx-vm1 1/1 Running 0 4h51m 10.253.137.20 rxxxx-vm1 <none> <none>
kube-system kube-controller-manager-rxxxx-vm1 1/1 Running 0 4h51m 10.253.137.20 rxxxx-vm1 <none> <none>
kube-system kube-proxy-g9dbt 1/1 Running 0 4h29m 10.253.137.10 rxxxx-vm3 <none> <none>
kube-system kube-proxy-mnzks 1/1 Running 0 4h30m 10.253.137.17 rxxxx-vm2 <none> <none>
kube-system kube-proxy-n98xb 1/1 Running 0 4h51m 10.253.137.20 rxxxx-vm1 <none> <none>
kube-system kube-scheduler-rxxxx-vm1 1/1 Running 0 4h51m 10.253.137.20 rxxxx-vm1 <none> <none>
nginx-ingress nginx-ingress-2chhn 1/1 Running 0 4h29m 172.17.0.2 rxxxx-vm3 <none> <none>
nginx-ingress nginx-ingress-95h7s 1/1 Running 0 4h30m 172.17.0.2 rxxxx-vm2 <none> <none>
nginx-ingress nginx-ingress-wbxng 1/1 Running 0 4h51m 172.17.0.5 rxxxx-vm1 <none> <none>
play apple-app 1/1 Running 0 4h45m 172.17.0.8 rxxxx-vm1 <none> <none>
play banana-app 1/1 Running 0 4h45m 172.17.0.9 rxxxx-vm1 <none> <none>
tigera-operator tigera-operator-7795f5d79b-hmm5g 1/1 Running 0 4h51m 10.253.137.20 rxxxx-vm1 <none> <none>
Services:
# kubectl get svc -A -o wide
NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
calico-apiserver calico-api ClusterIP 10.111.117.42 <none> 443/TCP 6h34m apiserver=true
calico-system calico-kube-controllers-metrics ClusterIP 10.99.121.254 <none> 9094/TCP 6h35m k8s-app=calico-kube-controllers
calico-system calico-typha ClusterIP 10.104.50.90 <none> 5473/TCP 6h35m k8s-app=calico-typha
default kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 6h36m <none>
kube-system kube-dns ClusterIP 10.96.0.10 <none> 53/UDP,53/TCP,9153/TCP 6h36m k8s-app=kube-dns
play apple-service ClusterIP 10.98.78.251 <none> 5678/TCP 6h29m app=apple
play banana-service ClusterIP 10.103.87.112 <none> 5678/TCP 6h29m app=banana
Service details:
# kubectl -n play describe svc apple-service
Name: apple-service
Namespace: play
Labels: <none>
Annotations: <none>
Selector: app=apple
Type: ClusterIP
IP Family Policy: SingleStack
IP Families: IPv4
IP: 10.98.78.251
IPs: 10.98.78.251
Port: <unset> 5678/TCP
TargetPort: 5678/TCP
Endpoints: 172.17.0.8:5678
Session Affinity: None
Events: <none>
Endpoints:
# kubectl get ep -A
NAMESPACE NAME ENDPOINTS AGE
calico-apiserver calico-api 172.17.0.6:5443,172.17.0.7:5443 6h39m
calico-system calico-kube-controllers-metrics 172.17.0.4:9094 6h39m
calico-system calico-typha 10.253.137.10:5473,10.253.137.20:5473 6h40m
default kubernetes 10.253.137.20:6443 6h40m
kube-system kube-dns 172.17.0.2:53,172.17.0.3:53,172.17.0.2:53 + 3 more... 6h40m
play apple-service 172.17.0.8:5678 6h34m
play banana-service 172.17.0.9:5678 6h34m
Endpoint details:
# kubectl -n play describe ep apple-service
Name: apple-service
Namespace: play
Labels: <none>
Annotations: endpoints.kubernetes.io/last-change-trigger-time: 2023-01-11T20:21:27Z
Subsets:
Addresses: 172.17.0.8
NotReadyAddresses: <none>
Ports:
Name Port Protocol
---- ---- --------
<unset> 5678 TCP
Events: <none>
Ingress resource:
# kubectl get ing -A -o wide
NAMESPACE NAME CLASS HOSTS ADDRESS PORTS AGE
play example-ingress nginx localhost 80 6h30m
Ingress details:
# kubectl -n play describe ing example-ingress
Name: example-ingress
Labels: <none>
Namespace: play
Address:
Ingress Class: nginx
Default backend: <default>
Rules:
Host Path Backends
---- ---- --------
localhost
/apple apple-service:5678 (172.17.0.8:5678)
/banana banana-service:5678 (172.17.0.9:5678)
Annotations: ingress.kubernetes.io/rewrite-target: /
Events: <none>
QUESTION:
While curl -kL http://localhost/apple on the master node returns apple, the same command produces the output below on the worker nodes:
<html>
<head><title>502 Bad Gateway</title></head>
<body>
<center><h1>502 Bad Gateway</h1></center>
<hr><center>nginx/1.23.2</center>
</body>
</html>
My understanding is that if an ingress controller pod is running on every node, then localhost should resolve, since that is defined as the host in the Ingress resource. Is this understanding incorrect? If not, what am I doing wrong?
When I look at the ingress controller pod's logs on the corresponding node, I see this:
2023/01/12 03:02:05 [error] 68#68: *11 connect() failed (113: No route to host) while connecting to upstream, client: 172.17.0.1, server: localhost, request: "GET /apple HTTP/1.1", upstream: "http://172.17.0.8:5678/apple", host: "localhost"
172.17.0.1 - - [12/Jan/2023:03:02:05 +0000] "GET /apple HTTP/1.1" 502 157 "-" "curl/7.29.0" "-"
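No answer is shown here, but one detail in the output above may be worth checking: the pod IPs are in 172.17.0.x (Docker's default bridge range) and repeat across nodes (172.17.0.2 appears on more than one VM), which often means the pods are not actually attached to the Calico network, so cross-node pod traffic cannot be routed and fails exactly like the "No route to host" error above. A hedged diagnostic sketch, using the pod names and IPs from the listing (and assuming an HTTP client is available in the controller image):
# from an ingress controller pod on a worker, try to reach the apple backend pod on the master
kubectl -n nginx-ingress exec nginx-ingress-95h7s -- curl -s --max-time 5 http://172.17.0.8:5678/
# if this times out from the workers but works from the master's controller pod (nginx-ingress-wbxng),
# the issue is cross-node pod networking (CNI), not the Ingress definition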

Microk8s + metallb + ingress

I'm quite new to Kubernetes and I'm trying to set up a MicroK8s test environment on a VPS with CentOS.
What I did:
I set up the cluster and enabled the ingress and MetalLB add-ons:
microk8s enable ingress
microk8s enable metallb
Exposed the ingress-controller service:
apiVersion: v1
kind: Service
metadata:
  name: ingress
  namespace: ingress
spec:
  type: LoadBalancer
  selector:
    name: nginx-ingress-microk8s
  ports:
    - name: http
      protocol: TCP
      port: 80
      targetPort: 80
    - name: https
      protocol: TCP
      port: 443
      targetPort: 443
Exposed an nginx deployment to test the ingress
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    run: nginx
  name: nginx-deploy
spec:
  replicas: 1
  selector:
    matchLabels:
      run: nginx-deploy
  template:
    metadata:
      labels:
        run: nginx-deploy
    spec:
      containers:
        - image: nginx
          name: nginx
This is the status of my cluster:
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system pod/hostpath-provisioner-5c65fbdb4f-m2xq6 1/1 Running 3 41h
kube-system pod/coredns-86f78bb79c-7p8bs 1/1 Running 3 41h
kube-system pod/calico-node-g4ws4 1/1 Running 6 42h
kube-system pod/calico-kube-controllers-847c8c99d-xhmd7 1/1 Running 4 42h
kube-system pod/metrics-server-8bbfb4bdb-ggvk7 1/1 Running 0 41h
kube-system pod/kubernetes-dashboard-7ffd448895-ktv8j 1/1 Running 0 41h
kube-system pod/dashboard-metrics-scraper-6c4568dc68-l4xmg 1/1 Running 0 41h
container-registry pod/registry-9b57d9df8-xjh8d 1/1 Running 0 38h
cert-manager pod/cert-manager-cainjector-5c6cb79446-vv5j2 1/1 Running 0 12h
cert-manager pod/cert-manager-794657589-srrmr 1/1 Running 0 12h
cert-manager pod/cert-manager-webhook-574c9758c9-9dwr6 1/1 Running 0 12h
metallb-system pod/speaker-9gjng 1/1 Running 0 97m
metallb-system pod/controller-559b68bfd8-trk5z 1/1 Running 0 97m
ingress pod/nginx-ingress-microk8s-controller-f6cdb 1/1 Running 0 65m
default pod/nginx-deploy-5797b88878-vgp7x 1/1 Running 0 20m
NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
default service/kubernetes ClusterIP 10.152.183.1 <none> 443/TCP 42h
kube-system service/kube-dns ClusterIP 10.152.183.10 <none> 53/UDP,53/TCP,9153/TCP 41h
kube-system service/metrics-server ClusterIP 10.152.183.243 <none> 443/TCP 41h
kube-system service/kubernetes-dashboard ClusterIP 10.152.183.225 <none> 443/TCP 41h
kube-system service/dashboard-metrics-scraper ClusterIP 10.152.183.109 <none> 8000/TCP 41h
container-registry service/registry NodePort 10.152.183.44 <none> 5000:32000/TCP 38h
cert-manager service/cert-manager ClusterIP 10.152.183.183 <none> 9402/TCP 12h
cert-manager service/cert-manager-webhook ClusterIP 10.152.183.99 <none> 443/TCP 12h
echoserver service/echoserver ClusterIP 10.152.183.110 <none> 80/TCP 72m
ingress service/ingress LoadBalancer 10.152.183.4 192.168.0.11 80:32617/TCP,443:31867/TCP 64m
default service/nginx-deploy ClusterIP 10.152.183.149 <none> 80/TCP 19m
NAMESPACE NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE
kube-system daemonset.apps/calico-node 1 1 1 1 1 kubernetes.io/os=linux 42h
metallb-system daemonset.apps/speaker 1 1 1 1 1 beta.kubernetes.io/os=linux 97m
ingress daemonset.apps/nginx-ingress-microk8s-controller 1 1 1 1 1 <none> 65m
NAMESPACE NAME READY UP-TO-DATE AVAILABLE AGE
kube-system deployment.apps/hostpath-provisioner 1/1 1 1 41h
kube-system deployment.apps/coredns 1/1 1 1 41h
kube-system deployment.apps/calico-kube-controllers 1/1 1 1 42h
kube-system deployment.apps/metrics-server 1/1 1 1 41h
kube-system deployment.apps/dashboard-metrics-scraper 1/1 1 1 41h
kube-system deployment.apps/kubernetes-dashboard 1/1 1 1 41h
container-registry deployment.apps/registry 1/1 1 1 38h
cert-manager deployment.apps/cert-manager-cainjector 1/1 1 1 12h
cert-manager deployment.apps/cert-manager 1/1 1 1 12h
cert-manager deployment.apps/cert-manager-webhook 1/1 1 1 12h
metallb-system deployment.apps/controller 1/1 1 1 97m
default deployment.apps/nginx-deploy 1/1 1 1 20m
NAMESPACE NAME DESIRED CURRENT READY AGE
kube-system replicaset.apps/hostpath-provisioner-5c65fbdb4f 1 1 1 41h
kube-system replicaset.apps/coredns-86f78bb79c 1 1 1 41h
kube-system replicaset.apps/calico-kube-controllers-847c8c99d 1 1 1 42h
kube-system replicaset.apps/metrics-server-8bbfb4bdb 1 1 1 41h
kube-system replicaset.apps/kubernetes-dashboard-7ffd448895 1 1 1 41h
kube-system replicaset.apps/dashboard-metrics-scraper-6c4568dc68 1 1 1 41h
container-registry replicaset.apps/registry-9b57d9df8 1 1 1 38h
cert-manager replicaset.apps/cert-manager-cainjector-5c6cb79446 1 1 1 12h
cert-manager replicaset.apps/cert-manager-794657589 1 1 1 12h
cert-manager replicaset.apps/cert-manager-webhook-574c9758c9 1 1 1 12h
metallb-system replicaset.apps/controller-559b68bfd8 1 1 1 97m
default replicaset.apps/nginx-deploy-5797b88878 1 1 1 20m
It looks like MetalLB works, as the ingress service received an IP from the pool I specified.
Now, when I try to deploy an Ingress to reach the nginx deployment, I don't get an ADDRESS:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/rewrite-target: /
  name: ingress-nginx-deploy
spec:
  rules:
    - host: test.com
      http:
        paths:
          - backend:
              serviceName: nginx-deploy
              servicePort: 80
NAMESPACE NAME CLASS HOSTS ADDRESS PORTS AGE
default ingress-nginx-deploy <none> test.com 80 13m
Any help would be really appreciated. Thank you!
TL;DR
There are a couple of ways to fix your Ingress so that it gets the IP address.
You can either:
Delete the kubernetes.io/ingress.class: nginx annotation and add ingressClassName: public under the spec section.
Use the newer example (apiVersion) from the official documentation, which will be assigned an IngressClass by default:
Kubernetes.io: Docs: Concepts: Services networking: Ingress
Example of an Ingress resource that will fix your issue:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-nginx-deploy
spec:
  ingressClassName: public
  # the above field is optional, as the MicroK8s default IngressClass will be assigned
  rules:
    - host: test.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: nginx-deploy
                port:
                  number: 80
You can read more about IngressClass in the official documentation:
Kubernetes.io: Blog: Improvements to the Ingress API in Kubernetes 1.18
I've included more explanation that should shed some additional light on this particular setup.
After you apply the above Ingress resource, the output of:
$ kubectl get ingress
will be the following:
NAME CLASS HOSTS ADDRESS PORTS AGE
ingress-nginx-deploy public test.com 127.0.0.1 80 43s
As you can see, the ADDRESS column contains 127.0.0.1. That's because this particular Ingress controller, enabled by the addon, binds to ports 80 and 443 on your host (the MicroK8s node).
You can see it by running:
$ sudo microk8s kubectl get daemonset -n ingress nginx-ingress-microk8s-controller -o yaml
A side note!
Look for hostPort and securityContext.capabilities.
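For reference, the relevant part of that DaemonSet typically looks roughly like the abridged sketch below; the exact values may differ in your MicroK8s version:
        ports:
          - containerPort: 80
            hostPort: 80
            name: http
            protocol: TCP
          - containerPort: 443
            hostPort: 443
            name: https
            protocol: TCP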
The Service of type LoadBalancer that you created will work with your Ingress controller, but it will not be displayed under ADDRESS in $ kubectl get ingress.
A side note!
Please remember that in this particular setup you will need to send a Host: test.com header when connecting to your Ingress controller, unless you have DNS resolution configured to support your setup. Otherwise you will get a 404.
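A quick way to test this from the node itself, assuming the controller is bound to the host's port 80 as described above:
curl -H "Host: test.com" http://127.0.0.1/
# or let curl resolve the name locally without touching DNS:
curl --resolve test.com:80:127.0.0.1 http://test.com/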
Additional resource:
Github.com: Ubuntu: Microk8s: Ingress with MetalLb issues
Kubernetes.io: Docs: Concepts: Configuration: Overview

Kubernetes nslookup kubernetes.default fails

My Environment:
OS - CentOS-8.2
Kubernetes Vesion:
Client Version: v1.18.8
Server Version: v1.18.8
I have successfully configured a Kubernetes cluster (one master and one worker), but DNS resolution is failing when I check it with the pod below.
apiVersion: v1
kind: Pod
metadata:
  name: dnsutils
  namespace: default
spec:
  containers:
    - name: dnsutils
      image: gcr.io/kubernetes-e2e-test-images/dnsutils:1.3
      command:
        - sleep
        - "3600"
      imagePullPolicy: IfNotPresent
  restartPolicy: Always
# kubectl get pods -o wide --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
default dnsutils 1/1 Running 0 4m38s 10.244.1.20 K8s-Worker-1 <none> <none>
kube-system coredns-66bff467f8-2q4z9 1/1 Running 1 4d14h 10.244.0.5 K8s-Master <none> <none>
kube-system coredns-66bff467f8-ktbd4 1/1 Running 1 4d14h 10.244.0.4 K8s-Master <none> <none>
kube-system etcd-K8s-Master 1/1 Running 1 4d14h 65.66.67.5 K8s-Master <none> <none>
kube-system kube-apiserver-K8s-Master 1/1 Running 1 4d14h 65.66.67.5 K8s-Master <none> <none>
kube-system kube-controller-manager-K8s-Master 1/1 Running 1 4d14h 65.66.67.5 K8s-Master <none> <none>
kube-system kube-flannel-ds-amd64-d6h9c 1/1 Running 61 45h 65.66.67.6 K8s-Worker-1 <none> <none>
kube-system kube-flannel-ds-amd64-tc4qf 1/1 Running 202 4d14h 65.66.67.5 K8s-Master <none> <none>
kube-system kube-proxy-cl9n4 1/1 Running 0 45h 65.66.67.6 K8s-Worker-1 <none> <none>
kube-system kube-proxy-s7jlc 1/1 Running 1 4d14h 65.66.67.5 K8s-Master <none> <none>
kube-system kube-scheduler-K8s-Master 1/1 Running 1 4d14h 65.66.67.5 K8s-Master <none> <none>
# kubectl get pods
NAME READY STATUS RESTARTS AGE
dnsutils 1/1 Running 0 22m
The commands below were executed on the Kubernetes cluster master; nslookup kubernetes.default is failing.
# kubectl exec -i -t dnsutils -- nslookup kubernetes.default
;; connection timed out; no servers could be reached
command terminated with exit code 1
# kubectl exec -ti dnsutils -- cat /etc/resolv.conf
nameserver 10.96.0.10
search default.svc.cluster.local svc.cluster.local cluster.local company.domain.com
options ndots:5
# kubectl get pods --namespace=kube-system -l k8s-app=kube-dns
NAME READY STATUS RESTARTS AGE
coredns-66bff467f8-2q4z9 1/1 Running 1 4d14h
coredns-66bff467f8-ktbd4 1/1 Running 1 4d14h
# kubectl logs --namespace=kube-system -l k8s-app=kube-dns
.:53
[INFO] plugin/reload: Running configuration MD5 = 4e235fcc3696966e76816bcd9034ebc7
CoreDNS-1.6.7
linux/amd64, go1.13.6, da7f65b
.:53
[INFO] plugin/reload: Running configuration MD5 = 4e235fcc3696966e76816bcd9034ebc7
CoreDNS-1.6.7
linux/amd64, go1.13.6, da7f65b
# kubectl get svc --namespace=kube-system
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kube-dns ClusterIP 10.96.0.10 <none> 53/UDP,53/TCP,9153/TCP 4d14h
# kubectl get endpoints kube-dns --namespace=kube-system
NAME ENDPOINTS AGE
kube-dns 10.244.0.4:53,10.244.0.5:53,10.244.0.4:9153 + 3 more... 4d14h
# kubectl describe svc -n kube-system kube-dns
Name: kube-dns
Namespace: kube-system
Labels: k8s-app=kube-dns
kubernetes.io/cluster-service=true
kubernetes.io/name=KubeDNS
Annotations: prometheus.io/port: 9153
prometheus.io/scrape: true
Selector: k8s-app=kube-dns
Type: ClusterIP
IP: 10.96.0.10
Port: dns 53/UDP
TargetPort: 53/UDP
Endpoints: 10.244.0.4:53,10.244.0.5:53
Port: dns-tcp 53/TCP
TargetPort: 53/TCP
Endpoints: 10.244.0.4:53,10.244.0.5:53
Port: metrics 9153/TCP
TargetPort: 9153/TCP
Endpoints: 10.244.0.4:9153,10.244.0.5:9153
Session Affinity: None
Events: <none>
# kubectl describe svc kubernetes
Name: kubernetes
Namespace: default
Labels: component=apiserver
provider=kubernetes
Annotations: <none>
Selector: <none>
Type: ClusterIP
IP: 10.96.0.1
Port: https 443/TCP
TargetPort: 6443/TCP
Endpoints: 65.66.67.5:6443
Session Affinity: None
Events: <none>
Can anyone please help me debug this issue? Thanks.
I uninstalled Kubernetes and re-installed version v1.19.0. Now everything is working fine. Thanks.
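For anyone hitting the same symptom, a couple of hedged checks (not from the original post) can narrow this down before a full reinstall. The flannel pods above show very high restart counts, so querying a CoreDNS pod IP directly tells you whether the problem is the pod network or the kube-dns Service:
# query a CoreDNS pod directly, bypassing the kube-dns ClusterIP (pod IP taken from the listing above)
kubectl exec -ti dnsutils -- nslookup kubernetes.default 10.244.0.4
# if the direct query works but the ClusterIP 10.96.0.10 does not, suspect kube-proxy/iptables;
# if neither works from the worker-node pod, suspect the flannel overlay (note the restart counts above)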

Rancher: kube-system pods stuck on ContainerCreating

I'm trying to spin up a cluster with one node (a VM), but some kube-system pods are stuck in ContainerCreating:
> kubectl get pods,svc -owide --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
cattle-system pod/cattle-cluster-agent-7db88c6b68-bz5dp 0/1 ContainerCreating 0 7m13s <none> hdn-dev-app66 <none> <none>
cattle-system pod/cattle-node-agent-ccntw 1/1 Running 0 7m13s 10.105.1.76 hdn-dev-app66 <none> <none>
cattle-system pod/kube-api-auth-9kdpw 1/1 Running 0 7m13s 10.105.1.76 hdn-dev-app66 <none> <none>
ingress-nginx pod/default-http-backend-598b7d7dbd-rwvhm 0/1 ContainerCreating 0 7m29s <none> hdn-dev-app66 <none> <none>
ingress-nginx pod/nginx-ingress-controller-62vhq 1/1 Running 0 7m29s 10.105.1.76 hdn-dev-app66 <none> <none>
kube-system pod/coredns-849545576b-w87zr 0/1 ContainerCreating 0 7m39s <none> hdn-dev-app66 <none> <none>
kube-system pod/coredns-autoscaler-5dcd676cbd-pj54d 0/1 ContainerCreating 0 7m38s <none> hdn-dev-app66 <none> <none>
kube-system pod/kube-flannel-d9m6q 2/2 Running 0 7m43s 10.105.1.76 hdn-dev-app66 <none> <none>
kube-system pod/metrics-server-697746ff48-q7cpx 0/1 ContainerCreating 0 7m33s <none> hdn-dev-app66 <none> <none>
kube-system pod/rke-coredns-addon-deploy-job-npjll 0/1 Completed 0 7m40s 10.105.1.76 hdn-dev-app66 <none> <none>
kube-system pod/rke-ingress-controller-deploy-job-b9rs4 0/1 Completed 0 7m30s 10.105.1.76 hdn-dev-app66 <none> <none>
kube-system pod/rke-metrics-addon-deploy-job-5rpbj 0/1 Completed 0 7m35s 10.105.1.76 hdn-dev-app66 <none> <none>
kube-system pod/rke-network-plugin-deploy-job-lvk2q 0/1 Completed 0 7m50s 10.105.1.76 hdn-dev-app66 <none> <none>
NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
default service/kubernetes ClusterIP 10.43.0.1 <none> 443/TCP 8m19s <none>
ingress-nginx service/default-http-backend ClusterIP 10.43.144.25 <none> 80/TCP 7m29s app=default-http-backend
kube-system service/kube-dns ClusterIP 10.43.0.10 <none> 53/UDP,53/TCP,9153/TCP 7m39s k8s-app=kube-dns
kube-system service/metrics-server ClusterIP 10.43.251.47 <none> 443/TCP 7m34s k8s-app=metrics-server
When I describe the failing pods, I get this:
Failed to create pod sandbox: rpc error: code = Unknown desc = [failed to set up sandbox container "345460c8f6399a0cf20956d8ea24d52f5a684ae47c3e8ec247f83d66d56b2baa" network for pod "cattle-cluster-agent-7db88c6b68-bz5dp": networkPlugin cni failed to set up pod "cattle-cluster-agent-7db88c6b68-bz5dp_cattle-system" network: error getting ClusterInformation: connection is unauthorized: clusterinformations.crd.projectcalico.org "default" is forbidden: User "system:node" cannot get resource "clusterinformations" in API group "crd.projectcalico.org" at the cluster scope, failed to clean up sandbox container "345460c8f6399a0cf20956d8ea24d52f5a684ae47c3e8ec247f83d66d56b2baa" network for pod "cattle-cluster-agent-7db88c6b68-bz5dp": networkPlugin cni failed to teardown pod "cattle-cluster-agent-7db88c6b68-bz5dp_cattle-system" network: error getting ClusterInformation: connection is unauthorized: clusterinformations.crd.projectcalico.org "default" is forbidden: User "system:node" cannot get resource "clusterinformations" in API group "crd.projectcalico.org" at the cluster scope]
I tried re-registering that node one more time, but no luck. Any thoughts?
As it says unauthorized, you have to grant RBAC permissions to make it work.
Try adding:
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: system:calico-node
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: calico-node
subjects:
  - apiGroup: rbac.authorization.k8s.io
    kind: Group
    name: system:nodes
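Once the binding is applied, you can sanity-check whether the identity from the error message can now read the Calico CRD; the user and group names below are taken from that error, and the filename is just a placeholder:
kubectl apply -f calico-node-crb.yaml
kubectl auth can-i get clusterinformations.crd.projectcalico.org \
  --as=system:node --as-group=system:nodes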
Fixed the problem by following this article on how to clean up and re-register a broken node: https://rancher.com/docs/rancher/v2.x/en/cluster-admin/cleaning-cluster-nodes/

How to expose an app in Kubernetes with Consul

We have installed Consul through Helm charts on a Kubernetes cluster. Here, I have deployed one Consul server; the rest are Consul agents.
kubectl get pods
NAME READY STATUS RESTARTS AGE
consul-7csp9 1/1 Running 0 4h
consul-connect-injector-webhook-deployment-66d46867f6-wqtt7 1/1 Running 0 4h
consul-server-0 1/1 Running 0 4h
consul-sync-catalog-85f5654b89-9qblx 1/1 Running 0 4h
consul-x4mqq 1/1 Running 0 4h
We can see that the nodes are registered on the Consul server: http://XX.XX.XX.XX/ui/kube/nodes
We have deployed a hello-world application onto the Kubernetes cluster. This brings up Hello-World:
kubectl get pods
NAME READY STATUS RESTARTS AGE
consul-7csp9 1/1 Running 0 4h
consul-connect-injector-webhook-deployment-66d46867f6-wqtt7 1/1 Running 0 4h
consul-server-0 1/1 Running 0 4h
consul-sync-catalog-85f5654b89-9qblx 1/1 Running 0 4h
consul-x4mqq 1/1 Running 0 4h
sampleapp-69bf9f84-ms55k 2/2 Running 0 4h
Below is the YAML file:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: sampleapp
spec:
  replicas: 1
  selector:
    matchLabels:
      app: sampleapp
  template:
    metadata:
      labels:
        app: sampleapp
      annotations:
        "consul.hashicorp.com/connect-inject": "true"
    spec:
      containers:
        - name: sampleapp
          image: "docker-dev-repo.aws.com/sampleapp-java/helloworld-service:a8c9f65-65"
          ports:
            - containerPort: 8080
              name: http
After a successful deployment of sampleapp, I see that sampleapp-proxy is registered in Consul and listed among the Kubernetes services. (This is because toConsul and toK8S are passed as true during installation.)
kubectl get services
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
consul ExternalName <none> consul.service.test <none> 4h
consul-connect-injector-svc ClusterIP XX.XX.XX.XX <none> 443/TCP 4h
consul-dns ClusterIP XX.XX.XX.XX <none> 53/TCP,53/UDP 4h
consul-server ClusterIP None <none> 8500/TCP,8301/TCP,8301/UDP,8302/TCP,8302/UDP,8300/TCP,8600/TCP,8600/UDP 4h
consul-ui LoadBalancer XX.XX.XX.XX XX.XX.XX.XX 80:32648/TCP 4h
dns-test-proxy ExternalName <none> dns-test-proxy.service.test <none> 2h
fluentd-gcp-proxy ExternalName <none> fluentd-gcp-proxy.service.test <none> 33m
kubernetes ClusterIP XX.XX.XX.XX <none> 443/TCP 5d
sampleapp-proxy ExternalName <none> sampleapp-proxy.service.test <none> 4h
How can I access my sampleapp? Should I expose my application as a Kubernetes Service again?
Earlier, without Consul, we used to create a Service for the sampleapp and expose it through an Ingress. Using the Ingress load balancer, we would access our application.
Consul does not provide any new way to expose your apps. You need to create the Service and Ingress load balancer as before.
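For illustration, a minimal sketch of that approach; the hostname is a placeholder, and note that this routes straight to the application port rather than through the Connect sidecar:
apiVersion: v1
kind: Service
metadata:
  name: sampleapp
spec:
  selector:
    app: sampleapp
  ports:
    - port: 80
      targetPort: 8080   # the containerPort named http in the Deployment above
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: sampleapp
spec:
  rules:
    - host: sampleapp.example.com   # placeholder hostname
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: sampleapp
                port:
                  number: 80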