I deployed the nginx ingress with Kubespray. I have 3 masters, 2 workers, and 5 ingress-nginx-controller pods. I tried shutting down one worker, and I still see 5 nginx ingress controller pods listed across all the hosts.
[root@node1 ~]# kubectl get pod -n ingress-nginx -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
ingress-nginx-controller-5828c 1/1 Running 0 7m4s 10.233.96.9 node2 <none> <none>
ingress-nginx-controller-h5zzl 1/1 Running 0 7m42s 10.233.92.7 node3 <none> <none>
ingress-nginx-controller-wrvv6 1/1 Running 0 6m11s 10.233.90.17 node1 <none> <none>
ingress-nginx-controller-xdkrx 1/1 Running 0 5m44s 10.233.105.25 node4 <none> <none>
ingress-nginx-controller-xgpn2 1/1 Running 0 6m38s 10.233.70.32 node5 <none> <none>
The problem is that I am getting a 503 error from the app after one node was powered off. Is there an option to disconnect a non-working ingress-nginx-controller, or to use round robin? Or could I detect a non-working ingress-nginx-controller and redirect traffic to a working one?
I had shut down the node where the app was running. Now everything is working.
I have followed the instructions on this blog to create a simple container image and deploy it in a k8s cluster.
However, in my case the pods do not run:
student@master:~$ k get pod -o wide -l app=hello-python --field-selector spec.nodeName=master
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
hello-python-58547cf485-7l8dg 0/1 ErrImageNeverPull 0 2m26s 192.168.219.126 master <none> <none>
hello-python-598c594dc5-4c9zd 0/1 ErrImageNeverPull 0 2m26s 192.168.219.67 master <none> <none>
student@master:~$ sudo podman images hello-python
REPOSITORY TAG IMAGE ID CREATED SIZE
localhost/hello-python latest 11cf1e5a86b1 50 minutes ago 941 MB
student@master:~$ hostname
master
student@master:~$
I understand why it may not work on the worker node, but why does it not work on the same node where the image is cached - the master node?
student@master:~$ k describe pod hello-python-58547cf485-7l8dg | grep -A 10 'Events:'
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 10m default-scheduler Successfully assigned default/hello-python-58547cf485-7l8dg to master
Warning Failed 8m7s (x12 over 10m) kubelet Error: ErrImageNeverPull
Warning ErrImageNeverPull 4m59s (x27 over 10m) kubelet Container image "localhost/hello-python:latest" is not present with pull policy of Never
student@master:~$
My question is: how do I make the pod run on the master node with imagePullPolicy: Never, given that the image in question is available on the master node, as the podman images output attests?
EDIT 1
I am using a k8s cluster running on two VMs deployed in GCE. It was set up with a script provided in the context of the Linux Foundation Kubernetes Developer course LFD259.
EDIT 2
The master node is allowed to run workloads - this is how the LFD259 course sets it up. For example:
student@master:~$ k create deployment xyz --image=httpd
deployment.apps/xyz created
student@master:~$ k get pod -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
xyz-6c6bd4cd89-qn4zr 1/1 Running 0 5m37s 192.168.171.66 worker <none> <none>
student@master:~$
student@master:~$ k scale deployment xyz --replicas=10
deployment.apps/xyz scaled
student@master:~$ k get pod -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
xyz-6c6bd4cd89-c2xv4 1/1 Running 0 73s 192.168.219.71 master <none> <none>
xyz-6c6bd4cd89-g89k2 0/1 ContainerCreating 0 73s <none> master <none> <none>
xyz-6c6bd4cd89-jfftl 0/1 ContainerCreating 0 73s <none> worker <none> <none>
xyz-6c6bd4cd89-kbdnq 1/1 Running 0 73s 192.168.219.106 master <none> <none>
xyz-6c6bd4cd89-nm6rt 0/1 ContainerCreating 0 73s <none> worker <none> <none>
xyz-6c6bd4cd89-qn4zr 1/1 Running 0 7m22s 192.168.171.66 worker <none> <none>
xyz-6c6bd4cd89-vts6x 1/1 Running 0 73s 192.168.171.84 worker <none> <none>
xyz-6c6bd4cd89-wd2ls 1/1 Running 0 73s 192.168.171.127 worker <none> <none>
xyz-6c6bd4cd89-wv4jn 0/1 ContainerCreating 0 73s <none> worker <none> <none>
xyz-6c6bd4cd89-xvtlm 0/1 ContainerCreating 0 73s <none> master <none> <none>
student@master:~$
It depends on how you've set up your Kubernetes cluster. I assume you've installed it with kubeadm; by default the master is then not schedulable for workloads. As I understand it, the image you're talking about only exists on the master node, right? If that's the case you can't start a pod with that image, as it only exists on the master node, which doesn't allow workloads by default.
If you were to copy the image to the worker node, your given command should work.
However, if you want to make your master node schedulable, remove its taint with (you may need to amend the taint key if it differs from yours):
kubectl taint nodes --all node-role.kubernetes.io/control-plane-
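If you go the copying route instead, here is a minimal sketch, assuming the worker also uses podman as its image store and is reachable over SSH as worker (a hypothetical hostname); if the kubelet on the worker uses a different runtime (containerd/CRI-O), the archive has to be imported into that runtime's store instead:
# On the master: export the locally built image to a tar archive
podman save localhost/hello-python:latest -o hello-python.tar
# Copy the archive to the worker node ("worker" is an assumed hostname)
scp hello-python.tar student@worker:/tmp/
# On the worker: load the image into the local podman store
sudo podman load -i /tmp/hello-python.tar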
I have a DaemonSet configuration that runs on all nodes.
Every pod listens on port 34567. I want another pod, on a different node, to communicate with this pod. How can I achieve that?
Find the target Pod's IP address as shown below
controlplane $ k get po -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
coredns-fb8b8dccf-42pq8 1/1 Running 1 5m43s 10.88.0.4 node01 <none> <none>
coredns-fb8b8dccf-f9n5x 1/1 Running 1 5m43s 10.88.0.3 node01 <none> <none>
etcd-controlplane 1/1 Running 0 4m38s 172.17.0.23 controlplane <none> <none>
katacoda-cloud-provider-74dc75cf99-2jrpt 1/1 Running 3 5m42s 10.88.0.2 node01 <none> <none>
kube-apiserver-controlplane 1/1 Running 0 4m33s 172.17.0.23 controlplane <none> <none>
kube-controller-manager-controlplane 1/1 Running 0 4m45s 172.17.0.23 controlplane <none> <none>
kube-keepalived-vip-smkdc 1/1 Running 0 5m27s 172.17.0.26 node01 <none> <none>
kube-proxy-8sxkt 1/1 Running 0 5m27s 172.17.0.26 node01 <none> <none>
kube-proxy-jdcqc 1/1 Running 0 5m43s 172.17.0.23 controlplane <none> <none>
kube-scheduler-controlplane 1/1 Running 0 4m47s 172.17.0.23 controlplane <none> <none>
weave-net-8cxqg 2/2 Running 1 5m27s 172.17.0.26 node01 <none> <none>
weave-net-s4tcj 2/2 Running 1 5m43s 172.17.0.23 controlplane <none> <none>
Next "exec" into the originating pod - kube-proxy-8sxkt in my example
kubectl -n kube-system exec -it kube-proxy-8sxkt -- sh
Next, use the destination pod's IP and port (10256 in my example) to connect. Please note that you may have to install curl/telnet if your originating container's image does not include it.
# curl telnet://172.17.0.23:10256
HTTP/1.1 400 Bad Request
Content-Type: text/plain; charset=utf-8
Connection: close
You can call it via the pod's IP.
Note: this IP can only be used inside the k8s cluster.
The pod address (IP) is an option you can use, but keep in mind that pod IPs can change from time to time due to deployment and scaling changes.
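If you do go with pod IPs, a small sketch for looking them up at runtime instead of hard-coding them, assuming the DaemonSet pods carry the label app=my-daemon (a hypothetical label):
# List the current IPs of all DaemonSet pods matching the label
kubectl get pods -l app=my-daemon -o jsonpath='{range .items[*]}{.status.podIP}{"\n"}{end}'
# Or pick the pod that landed on a particular node
kubectl get pods -l app=my-daemon --field-selector spec.nodeName=node01 -o jsonpath='{.items[0].status.podIP}'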
I would suggest exposing the DaemonSet using a Service of type NodePort if you have a fixed number of nodes and not much autoscaling, as sketched below.
If you want to connect to the pod on a specific node, you can use the IP of the node on which that pod is scheduled together with the NodePort service:
NodeIP:NodePort
Read more at : https://kubernetes.io/docs/concepts/services-networking/service/#type-nodeport
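A minimal sketch of such a NodePort Service, assuming the DaemonSet pods are labelled app=my-daemon and listen on 34567 (the label and service name are assumptions):
kubectl apply -f - <<EOF
apiVersion: v1
kind: Service
metadata:
  name: my-daemon-nodeport          # hypothetical name
spec:
  type: NodePort
  externalTrafficPolicy: Local      # traffic hitting a node's port only goes to the pod on that node
  selector:
    app: my-daemon                  # must match the DaemonSet pod labels
  ports:
  - port: 34567                     # service port
    targetPort: 34567               # container port the pods listen on
    nodePort: 30567                 # must be in the 30000-32767 range
EOF
With this, NodeIP:30567 reaches the DaemonSet pod running on that particular node.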
If you don't need to reach a specific pod and any of the DaemonSet's replicas will do, you can use the service name to connect pods with each other:
my-svc.my-namespace.svc.cluster-domain.example
Read more about the service and POD DNS
https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/
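And a sketch of the plain ClusterIP variant for this second case, where any replica will do (again, the names are assumptions):
kubectl apply -f - <<EOF
apiVersion: v1
kind: Service
metadata:
  name: my-daemon                   # hypothetical service name
  namespace: default
spec:
  selector:
    app: my-daemon                  # must match the DaemonSet pod labels
  ports:
  - port: 34567
    targetPort: 34567
EOF
# Any pod in the cluster can then connect by DNS name:
# my-daemon.default.svc.cluster.local:34567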
I have one master and two worker nodes (worker-1 and worker-2). All the nodes are up and running without any issue. When I planned to install the Istio service mesh, I tried to deploy the sample Bookinfo application.
After deploying Bookinfo, I verified the pod status by running the command below:
root@master:~# kubectl get pod -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
details-v1-79c697d759-9k98l 2/2 Running 0 11h 10.200.226.104 worker-1 <none> <none>
productpage-v1-65576bb7bf-zsf6f 2/2 Running 0 11h 10.200.226.107 worker-1 <none> <none>
ratings-v1-7d99676f7f-zxrtq 2/2 Running 0 11h 10.200.226.105 worker-1 <none> <none>
reviews-v1-987d495c-hsnmc 1/2 Running 0 21m 10.200.133.194 worker-2 <none> <none>
reviews-v2-6c5bf657cf-jmbkr 1/2 Running 0 11h 10.200.133.252 worker-2 <none> <none>
reviews-v3-5f7b9f4f77-g2s6p 2/2 Running 0 11h 10.200.226.106 worker-1 <none> <none>
I have noticed that two pods are not fully running; their status shows 1/2 (both are on the worker-2 node). I spent almost two days but was not able to find anything to fix the issue. Here is the describe pod output:
Warning Unhealthy 63s (x14 over 89s) kubelet Readiness probe failed: Get "http://10.244.133.194:15021/healthz/ready":
dial tcp 10.200.133.194:15021: connect: connection refused
Then this morning I realized there was some issue with the worker-2 node, since the pods stuck at 1/2 were running there, so I decided to cordon that node as below:
kubectl cordon worker-2
kubectl delete pod <worker-2 pod>
kubectl get pod -o wide
After cordoning the worker-2 node, I could see all the pods come up with a status of 2/2 on the worker-1 node without any issue.
root@master:~# kubectl get pod -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
details-v1-79c697d759-9k98l 2/2 Running 0 11h 10.200.226.104 worker-1 <none> <none>
productpage-v1-65576bb7bf-zsf6f 2/2 Running 0 11h 10.200.226.107 worker-1 <none> <none>
ratings-v1-7d99676f7f-zxrtq 2/2 Running 0 11h 10.200.226.105 worker-1 <none> <none>
reviews-v1-987d495c-2n4d9 2/2 Running 0 17s 10.200.226.113 worker-1 <none> <none>
reviews-v2-6c5bf657cf-wzqpt 2/2 Running 0 17s 10.200.226.112 worker-1 <none> <none>
reviews-v3-5f7b9f4f77-g2s6p 2/2 Running 0 11h 10.200.226.106 worker-1 <none> <none>
Could someone please help me figure out how to fix this issue so that pods can be scheduled on the worker-2 node as well?
Note: when I redeploy across all the nodes (worker-1 and worker-2), the pod status goes back to 1/2.
root@master:~/istio-1.9.1/samples# kubectl logs -f ratings-v1-b6994bb9-wfckn -c istio-proxy
ates: 0 successful, 0 rejected
2021-04-21T07:12:19.941679Z warn Envoy proxy is NOT ready: config not received from Pilot (is Pilot running?): cds updates: 0 successful, 0 rejected; lds updates: 0 successful, 0 rejected
2021-04-21T07:12:21.942096Z warn Envoy proxy is NOT ready: config not received from Pilot (is Pilot running?): cds updates: 0 successful, 0 rejected; lds updates: 0 successful, 0 rejected
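Given the Envoy/Pilot messages above, a few diagnostic commands that usually narrow this down - a sketch of where to look rather than a definitive fix; the sidecars on worker-2 apparently cannot reach istiod (normally port 15012), so CNI or firewall rules between the worker nodes are the usual suspects:
# Is istiod (Pilot) running, and on which node?
kubectl get pods -n istio-system -o wide
# Which sidecars have synced their configuration (CDS/LDS) with istiod?
istioctl proxy-status
# istiod logs usually show the failing connection attempts
kubectl logs -n istio-system deploy/istiod --tail=50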
I tried to deploy an nginx server using Kubernetes. I was able to create the deployment and then create the service. But when I run the curl command I get an error, and I am not able to curl or open the nginx web page in a browser.
Below are the commands I used and the error I got.
kubectl get pods
NAME READY STATUS RESTARTS AGE
curl 1/1 Running 8 15d
curl-deployment-646445496f-59fs9 1/1 Running 7 15d
hello-5d448ffc76-cwzcl 1/1 Running 13 23d
hello-node-7567d9fdc9-ffdkx 1/1 Running 8 20d
my-nginx-5b6fb7fb46-bdzdq 0/1 ContainerCreating 0 15d
mytestwebapp 1/1 Running 10 21d
nginx-6799fc88d8-w76cb 1/1 Running 5 13d
nginx-deployment-66b6c48dd5-9mkh8 1/1 Running 12 23d
nginx-test-795d659f45-d9shx 1/1 Running 4 13d
rss-site-7b6794856f-9586w 2/2 Running 40 15d
rss-site-7b6794856f-z59vn 2/2 Running 78 21d
jit@jit-Vostro-15-3568:~$ kubectl logs webserver
Error from server (NotFound): pods "webserver" not found
jit@jit-Vostro-15-3568:~$ kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
hello-node LoadBalancer 10.104.134.171 <pending> 8080:31733/TCP 13d
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 23d
my-nginx NodePort 10.103.114.92 <none> 8080:32563/TCP,443:32397/TCP 15d
nginx NodePort 10.110.113.60 <none> 80:30985/TCP 13d
nginx-test NodePort 10.109.16.192 <none> 8080:31913/TCP 13d
jit@jit-Vostro-15-3568:~$ curl kube-worker-1:30985
curl: (6) Could not resolve host: kube-worker-1
As you can see, you have a pod called nginx, which indicates that an nginx server is already deployed in a pod on your cluster. You don't have a pod called webserver, which is why you're getting the
Error from server (NotFound): pods "webserver" not found error.
Also, to access the nginx service, try to curl it via IP:port - either the ClusterIP with the service port (from inside the cluster) or a node IP with the NodePort:
$ curl 10.110.113.60:80
$ curl <node-ip>:30985
If you point a web browser to http://IP_OF_NODE:ASSIGNED_PORT (where IP_OF_NODE is an IP address of one of your nodes and ASSIGNED_PORT is the port assigned during the create service command), you should see the NGINX Welcome page!
Take a look: nginx-app-kubernetes.
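As a small sketch of that, the node IP can be looked up with kubectl and combined with the NodePort shown in the service list above (30985 for the nginx service):
# Grab the InternalIP of the first node
NODE_IP=$(kubectl get nodes -o jsonpath='{.items[0].status.addresses[?(@.type=="InternalIP")].address}')
# Curl the nginx NodePort on that node
curl http://$NODE_IP:30985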
I tried the above scenario locally.
Do a kubectl describe svc <svc-name> and check whether it has any endpoints; probably it doesn't have any.
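A quick sketch of that check, using the nginx service from the listing above:
# Show the service details, including its Endpoints line
kubectl describe svc nginx
# Or list the endpoints directly; an empty list means the selector
# does not match any ready pod
kubectl get endpoints nginx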
I installed minikube on my CentOS 7.7 server.
There are several pods in it:
[dele#att root]$ kubectl get pods --all-namespaces -o wide
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
kube-system coredns-f9fd979d6-4p6xg 1/1 Running 1 23h 172.18.0.2 minikube <none> <none>
kube-system etcd-minikube 1/1 Running 0 22h 172.17.0.2 minikube <none> <none>
kube-system kube-apiserver-minikube 1/1 Running 0 22h 172.17.0.2 minikube <none> <none>
kube-system kube-controller-manager-minikube 1/1 Running 1 23h 172.17.0.2 minikube <none> <none>
kube-system kube-proxy-4k468 1/1 Running 1 23h 172.17.0.2 minikube <none> <none>
kube-system kube-scheduler-minikube 1/1 Running 1 23h 172.17.0.2 minikube <none> <none>
kube-system storage-provisioner 1/1 Running 2 23h 172.17.0.2 minikube <none> <none>
kubernetes-dashboard dashboard-metrics-scraper-c95fcf479-k7zpn 1/1 Running 1 23h 172.18.0.3 minikube <none> <none>
kubernetes-dashboard kubernetes-dashboard-5c448bc4bf-f9swt 1/1 Running 1 23h 172.18.0.4 minikube <none> <none>
But I cannot see a clear network topology diagram. Is it possible to show the network topology using kubectl?
This is not possible out of the box with Kubernetes (and kubectl), as far as I know.
With additional software in your cluster, I know about three possibilities for visualization:
Istio can visualize the communication within the mesh with Kiali (for reference: https://istio.io/latest/docs/tasks/observability/kiali/).
The second option is spekt8.
Weavescope comes with agents that gather data and visualize it.
Beyond these options, others may exist, and I would really like to see more, because not everyone wants to add Istio and accept the performance impact just to visualize the pod/network landscape.
As far as I understand spekt8, it is more about visualizing relations between Kubernetes resources than about network topology.
Weavescope needs cluster administration rights, so it isn't advisable to make it publicly accessible without setting up some form of authentication.
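For the Kiali option, a minimal sketch assuming Istio is already installed and you still have the matching release directory (the samples/addons paths follow the Istio distribution layout):
# Deploy Prometheus and Kiali from the Istio release addons
kubectl apply -f samples/addons/prometheus.yaml
kubectl apply -f samples/addons/kiali.yaml
# Open the Kiali dashboard through a local port-forward
istioctl dashboard kiali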