Kubernetes pod stuck in Pending status

I tried multiple times and searched a lot but could not figure out why my pod is still in Pending status.
I have a very simple docker-compose.yml file as below:
version: '3'
services:
  nginx:
    build: .
    container_name: "something_cool"
    ports:
      - '80:80'
and converted it to Kubernetes manifests with the kompose command, which generated the deployment and service files below.
Deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    kompose.cmd: kompose convert
    kompose.version: 1.22.0 (955b78124)
  creationTimestamp: null
  labels:
    io.kompose.service: nginx
  name: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      io.kompose.service: nginx
  strategy: {}
  template:
    metadata:
      annotations:
        kompose.cmd: kompose convert
        kompose.version: 1.22.0 (955b78124)
      creationTimestamp: null
      labels:
        io.kompose.service: nginx
    spec:
      containers:
        - image: nginx
          name: something-cool
          ports:
            - containerPort: 80
          resources: {}
      restartPolicy: Always
status: {}
Service:
apiVersion: v1
kind: Service
metadata:
  annotations:
    kompose.cmd: kompose convert
    kompose.version: 1.22.0 (955b78124)
  creationTimestamp: null
  labels:
    io.kompose.service: nginx
  name: nginx
spec:
  ports:
    - name: "80"
      port: 80
      targetPort: 80
  selector:
    io.kompose.service: nginx
status:
  loadBalancer: {}
Now when I run kubectl apply -k ., I see service/nginx configured and deployment.apps/nginx configured, but kubectl get pods shows the pod in Pending status.
This is the result of grepping the pod name in the events:
6s Warning FailedScheduling pod/nginx-77546f7866-j5gmd 0/3 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 2 node(s) had taint {node.kubernetes.io/unreachable: }, that the pod didn't tolerate.
4m4s Warning FailedScheduling pod/nginx-77546f7866-j5gmd 0/3 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 2 node(s) had taint {node.kubernetes.io/unreachable: }, that the pod didn't tolerate.
5m6s Normal SuccessfulCreate replicaset/nginx-77546f7866 Created pod: nginx-77546f7866-j5gmd
As far as I can tell, I can ping and nslookup kubernetes.io fine, but I'm not sure why I'm getting this error.

I solved the issue by resetting Kubernetes and re-initializing the cluster as follows:
kubeadm reset
rm ~/.kube/config
kubeadm init --pod-network-cidr=10.244.0.0/16
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
kubectl apply -k .
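For reference, the FailedScheduling events above point at node taints rather than at the manifests, so before a full reset it may be worth checking the nodes directly; a minimal sketch, with placeholder node names:
# List every node together with its taints
kubectl get nodes -o custom-columns=NAME:.metadata.name,TAINTS:.spec.taints
# Find out why the two worker nodes are unreachable (kubelet down, network, ...)
kubectl describe node <worker-node>
# The node.kubernetes.io/unreachable taint disappears once a node is Ready again.
# On a single-node cluster you can also allow scheduling on the control plane:
kubectl taint nodes <master-node> node-role.kubernetes.io/master-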

Related

Kubernetes NetworkPolicy doesn't block traffic

I have a namespace called test containing 3 pods: frontend, backend and database.
This is the manifest of the pods:
kind: Pod
apiVersion: v1
metadata:
  name: frontend
  namespace: test
  labels:
    app: todo
    tier: frontend
spec:
  containers:
    - name: frontend
      image: nginx
---
kind: Pod
apiVersion: v1
metadata:
  name: backend
  namespace: test
  labels:
    app: todo
    tier: backend
spec:
  containers:
    - name: backend
      image: nginx
---
kind: Pod
apiVersion: v1
metadata:
  name: database
  namespace: test
  labels:
    app: todo
    tier: database
spec:
  containers:
    - name: database
      image: mysql
      env:
        - name: MYSQL_ROOT_PASSWORD
          value: example
I would like to implement a network policy that allows incoming traffic to the database only from the backend and disallows incoming traffic from the frontend.
This is my network policy:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: app-allow
  namespace: test
spec:
  podSelector:
    matchLabels:
      app: todo
      tier: database
  policyTypes:
    - Ingress
    - Egress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: todo
              tier: backend
      ports:
        - protocol: TCP
          port: 3306
        - protocol: UDP
          port: 3306
This is the output of kubectl get pods -n test -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
backend 1/1 Running 0 28m 172.17.0.5 minikube <none> <none>
database 1/1 Running 0 28m 172.17.0.4 minikube <none> <none>
frontend 1/1 Running 0 28m 172.17.0.3 minikube <none> <none>
This is the output of kubectl get networkpolicy -n test -o wide
NAME POD-SELECTOR AGE
app-allow app=todo,tier=database 21m
When I execute telnet <ip-of-mysql-pod> 3306 from the frontend pod, the connection gets established, so the network policy is not working:
kubectl exec -it pod/frontend bash -n test
kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
root@frontend:/# telnet 172.17.0.4 3306
Trying 172.17.0.4...
Connected to 172.17.0.4.
Escape character is '^]'.
8.0.25 ... caching_sha2_password   (raw MySQL handshake banner from the server)
Is there something I'm missing?
Thanks
It seems that you forgot to add a "default deny" policy:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
spec:
  podSelector: {}
  policyTypes:
    - Ingress
By default pods are non-isolated, so all connections between pods are allowed; traffic to a pod is only restricted once a NetworkPolicy selects it.
More details here: https://kubernetes.io/docs/concepts/services-networking/network-policies/#default-deny-all-ingress-traffic
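A quick way to verify the policies, assuming the manifests above are saved as default-deny.yaml and app-allow.yaml; note that NetworkPolicy is only enforced if the cluster's CNI plugin supports it, and minikube's default network plugin does not, so a CNI such as Calico is needed:
# Apply both policies into the test namespace
kubectl apply -n test -f default-deny.yaml -f app-allow.yaml
# This should now time out: frontend is not allowed to reach the database
kubectl exec -n test frontend -- timeout 5 bash -c 'echo > /dev/tcp/172.17.0.4/3306'
# This should still succeed: backend is allowed on TCP 3306
kubectl exec -n test backend -- timeout 5 bash -c 'echo > /dev/tcp/172.17.0.4/3306'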

Taints and Tolerations

I'm practicing with Kubernetes taints. I have tainted my node and then created a deployment like this:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:1.15.4
          ports:
            - containerPort: 80
      tolerations:
        - key: "test"
          operator: "Equal"
          value: "blue"
          effect: "NoSchedule"
kubectl describe nodes knode2:
Name: knode2
Roles: <none>
Labels: beta.kubernetes.io/arch=amd64
beta.kubernetes.io/os=linux
kubernetes.io/arch=amd64
kubernetes.io/hostname=knode2
kubernetes.io/os=linux
testing=test
Annotations: kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
node.alpha.kubernetes.io/ttl: 0
projectcalico.org/IPv4Address: **********
projectcalico.org/IPv4IPIPTunnelAddr: ********
volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp: Tue, 27 Oct 2020 17:23:47 +0200
Taints: test=blue:NoSchedule
But when I deploy this YAML file, the pods do not go only to that tainted node.
Why is that?
Taints and tolerations work together to ensure that pods are not scheduled onto inappropriate nodes; a toleration only allows a pod to be scheduled onto a tainted node, it does not attract the pod to it. That's exactly the opposite of what you intend to do.
You can constrain a Pod to only be able to run on particular node(s), or to prefer to run on particular nodes, using nodeSelector or node affinity.
NodeSelector example
apiVersion: v1
kind: Pod
metadata:
  name: nginx
  labels:
    env: test
spec:
  containers:
    - name: nginx
      image: nginx
      imagePullPolicy: IfNotPresent
  nodeSelector:
    disktype: ssd
Node affinity is conceptually similar to nodeSelector -- it allows you to constrain which nodes your pod is eligible to be scheduled on, based on labels on the node.
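A minimal node affinity sketch for comparison, reusing the testing=test label that is already on knode2 in the describe output above (the pod name is chosen only for illustration):
apiVersion: v1
kind: Pod
metadata:
  name: nginx-affinity
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
          - matchExpressions:
              - key: testing
                operator: In
                values:
                  - test
  containers:
    - name: nginx
      image: nginx:1.15.4
  tolerations:
    - key: "test"
      operator: "Equal"
      value: "blue"
      effect: "NoSchedule"
The toleration is still required so the pod is allowed onto the tainted node; the affinity is what actually restricts it to that node.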

Kubernetes service without an external IP, not able to access the service

I'm trying to deploy my container in a Kubernetes cluster but I'm not getting an external IP and hence I'm not able to access the server.
This is my .yaml file:
apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    kompose.cmd: kompose convert
    kompose.version: 1.21.0 (992df58d8)
  creationTimestamp: null
  labels:
    io.kompose.service: service-app
  name: service-app
spec:
  replicas: 1
  selector:
    matchLabels:
      io.kompose.service: service-app
  strategy: {}
  template:
    metadata:
      annotations:
        kompose.cmd: kompose convert
        kompose.version: 1.21.0 (992df58d8)
      creationTimestamp: null
      labels:
        io.kompose.service: service-app
    spec:
      containers:
        - image: magneto-challengue:1.0
          imagePullPolicy: ""
          name: magneto-challengue
          ports:
            - containerPort: 8080
          resources: {}
      restartPolicy: Always
      serviceAccountName: ""
      volumes: null
status: {}
---
apiVersion: v1
kind: Service
metadata:
  annotations:
    kompose.cmd: kompose convert
    kompose.version: 1.21.0 (992df58d8)
  creationTimestamp: null
  labels:
    io.kompose.service: service-app
  name: service-app
spec:
  ports:
    - name: http
      protocol: TCP
      port: 8080
      targetPort: 8080
  selector:
    io.kompose.service: service-app
  type: NodePort
When I use the kubectl get svc,deployment,pods command, I can see that I'm not getting an external IP. The kubectl describe service service-app command shows the service IP as 10.107.179.10.
I tried with the 10.107.179.10 IP, but it didn't work.
Any idea?
You can not use the 10.107.179.10 IP to access a pod from outside the Kubernetes cluster, because that IP is the clusterIP: it is only valid inside the cluster and can be used, for example, from another pod.
A NodePort service does not get an EXTERNAL-IP. To access a pod from outside the cluster via a NodePort service, use NodeIP:NodePort, where NodeIP is the IP address of any of your Kubernetes cluster nodes.
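A rough sketch of that lookup (the node IP and the assigned nodePort are placeholders that depend on your cluster):
# Node IPs are in the INTERNAL-IP / EXTERNAL-IP columns
kubectl get nodes -o wide
# The nodePort is the second number in PORT(S), e.g. 8080:3xxxx/TCP
kubectl get svc service-app
# Access the app from outside the cluster
curl http://<node-ip>:<node-port>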

No pods created when deploying a Docker image to Google Cloud using a Kubernetes deployment YAML file

I am facing an issue when trying to create a deployment on Google Cloud using a Kubernetes YAML file. I see that no pods are created when using a YAML file; however, if I use kubectl create deployment..., pods do get created.
My YAML file is as follows:
#Deployment
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  annotations:
  generation: 1
  labels:
    app: hello-world
  name: hello-world
  namespace: default
  resourceVersion: "3691124"
spec:
  progressDeadlineSeconds: 600
  replicas: 3
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app: hello-world
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: hello-world
    spec:
      containers:
        - image: myrepo/hello-world:0.0.4.RELEASE
          imagePullPolicy: IfNotPresent
          name: hello-world
          resources: {}
          terminationMessagePath: /dev/termination-log
          terminationMessagePolicy: File
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
#Service
apiVersion: v1
kind: Service
metadata:
  labels:
    app: hello-world
  name: hello-world
  namespace: default
  resourceVersion: "3691877"
spec:
  externalTrafficPolicy: Cluster
  ports:
    - nodePort: 32448
      port: 8080
      protocol: TCP
      targetPort: 8001
  selector:
    app: hello-world
  sessionAffinity: None
  type: LoadBalancer
This is what I see when I run kubectl get all:
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/hello-world LoadBalancer Some IP Some IP 3 8001:32449/TCP 15m
service/kubernetes ClusterIP Some IP 2 <none> 444/TCP 12d
As you can see, the only resource that started was the service. I neither see any deployment nor see any pods being created.
Note: I have replaced the actual IP addresses with "Some IP" above for obvious reasons.
Question: Why are no pods getting created when I use the YAML but pods get created when I use kubectl create deployment instead of using a YAML configuration file?
You should use --- as a separator when you have multiple resources in a single YAML file.
You also have to set apiVersion: apps/v1 in the Deployment (the extensions/v1beta1 Deployment API is deprecated and was removed in Kubernetes 1.16).
---
apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
  generation: 1
  labels:
    app: hello-world
  name: hello-world
  namespace: default
  resourceVersion: "3691124"
spec:
  progressDeadlineSeconds: 600
  replicas: 3
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app: hello-world
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: hello-world
    spec:
      containers:
        - image: myrepo/hello-world:0.0.4.RELEASE
          imagePullPolicy: IfNotPresent
          name: hello-world
          resources: {}
          terminationMessagePath: /dev/termination-log
          terminationMessagePolicy: File
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: hello-world
  name: hello-world
  namespace: default
  resourceVersion: "3691877"
spec:
  externalTrafficPolicy: Cluster
  ports:
    - nodePort: 32448
      port: 8080
      protocol: TCP
      targetPort: 8001
  selector:
    app: hello-world
  sessionAffinity: None
  type: LoadBalancer
Output
$ kubectl get deploy,po,svc
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/hello-world 0/3 3 0 5m34s
NAME READY STATUS RESTARTS AGE
pod/hello-world-6bd8d58486-7lzh9 0/1 ImagePullBackOff 0 3m49s
pod/hello-world-6bd8d58486-m56rq 0/1 ImagePullBackOff 0 3m49s
pod/hello-world-6bd8d58486-z9xmz 0/1 ImagePullBackOff 0 3m49s
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/hello-world LoadBalancer 10.108.65.81 <pending> 8080:32448/TCP 3m49s
service/kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 9d
The best practice is to create separate YAML files for different resources. Use Helm if you need to package multiple Kubernetes resources into a single entity.
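For example, a layout along these lines (file names are only a suggestion) keeps the resources separate and can still be applied in one command:
# k8s/deployment.yaml  -> the Deployment above
# k8s/service.yaml     -> the Service above
kubectl apply -f k8s/
kubectl get deploy,po,svc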

Istio sidecar is not created

We have Istio installed without the sidecar enabled globally, and I want to enable it for a specific service in a new namespace.
I've added the following to my deployment:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: gow
  labels:
    app: gow
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: gow
        tier: service
      annotations:
        sidecar.istio.io/inject: "true"
While running kubectl get namespace -L istio-injection I don't see anything enabled; everything is empty.
How can I verify that the sidecar is created? I don't see anything new.
You can use istioctl kube-inject to do that:
kubectl create namespace asdd
istioctl kube-inject -f nginx.yaml | kubectl apply -f -
nginx.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  namespace: asdd
  labels:
    app: nginx
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      annotations:
        sidecar.istio.io/inject: "True"
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:1.7.9
          ports:
            - containerPort: 80
Result:
nginx-deployment-55b6fb474b-77788 2/2 Running 0 5m36s
nginx-deployment-55b6fb474b-jrkqk 2/2 Running 0 5m36s
Let me know if you have any more questions.
You can describe your pod to see the list of containers; one of them should be the sidecar container. Look for something called istio-proxy.
kubectl describe pod <pod-name>
It should look something like below:
$ kubectl describe pod demo-red-pod-8b5df99cc-pgnl7
SNIPPET from the output:
Name: demo-red-pod-8b5df99cc-pgnl7
Namespace: default
.....
Labels: app=demo-red
pod-template-hash=8b5df99cc
version=version-red
Annotations: sidecar.istio.io/status={"version":"3c0b8d11844e85232bc77ad85365487638ee3134c91edda28def191c086dc23e","initContainers":["istio-init"],"containers":["istio-proxy"],"volumes":["istio-envoy","istio-certs...
Status: Running
IP: 10.32.0.6
Controlled By: ReplicaSet/demo-red-pod-8b5df99cc
Init Containers:
istio-init:
Container ID: docker://bef731eae1eb3b6c9d926cacb497bb39a7d9796db49cd14a63014fc1a177d95b
Image: docker.io/istio/proxy_init:1.0.2
Image ID: docker-pullable://docker.io/istio/proxy_init@sha256:e16a0746f46cd45a9f63c27b9e09daff5432e33a2d80c8cc0956d7d63e2f9185
.....
State: Terminated
Reason: Completed
.....
Ready: True
Containers:
demo-red:
Container ID: docker://8cd9957955ff7e534376eb6f28b56462099af6dfb8b9bc37aaf06e516175495e
Image: chugtum/blue-green-image:v3
Image ID: docker-pullable://docker.io/chugtum/blue-green-image@sha256:274756dbc215a6b2bd089c10de24fcece296f4c940067ac1a9b4aea67cf815db
State: Running
Started: Sun, 09 Dec 2018 18:12:31 -0800
Ready: True
istio-proxy:
Container ID: docker://ca5d690be8cd6557419cc19ec4e76163c14aed2336eaad7ebf17dd46ca188b4a
Image: docker.io/istio/proxyv2:1.0.2
Image ID: docker-pullable://docker.io/istio/proxyv2@sha256:54e206530ba6ca9b3820254454e01b7592e9f986d27a5640b6c03704b3b68332
Args:
proxy
sidecar
.....
State: Running
Started: Sun, 09 Dec 2018 18:12:31 -0800
Ready: True
.....
You need to have the admission webhook for automatic sidecar injection.
kubectl get mutatingwebhookconfiguration istio-sidecar-injector -o yaml | grep "namespaceSelector:" -A5
There may be many reasons for sidecar injection failures, as described in the Istio documentation on sidecar injection problems, which includes a table showing the final injection status for different scenarios.
Based on that table, it is mandatory to label the namespace with istio-injection=enabled:
kubectl label namespace default istio-injection=enabled --overwrite
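Applied to the gow deployment from the question (the namespace name is assumed, since the question does not state it), the steps would look roughly like this; pods that are already running only get the sidecar after they are recreated:
# Enable automatic injection for the namespace the deployment lives in
kubectl label namespace <your-namespace> istio-injection=enabled --overwrite
# Recreate the pods so the admission webhook can inject the sidecar
kubectl rollout restart deployment gow -n <your-namespace>
# Each pod should now show 2/2 containers: the app plus istio-proxy
kubectl get pods -n <your-namespace>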