minikube: how to add a second container to the pod? - kubernetes

I am looking for help on how to add a second container to an existing pod (I mean one pod with two containers). The error I get is CrashLoopBackOff. I am using minikube. Any assistance in resolving the issue would be appreciated.
wordpress.yaml
apiVersion: v1
kind: Service
metadata:
  name: my-nginx-svc
  labels:
    app: nginx
spec:
  type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: nginx
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-nginx
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.14.2
        ports:
        - containerPort: 80
      - name: wordpress
        image: wordpress
        ports:
        - containerPort: 80
#watch kubectl get pods
Every 2.0s: kubectl get pods cent8-minikube: Wed Jun 10 16:47:30 2020
NAME READY STATUS RESTARTS AGE
my-nginx-6b474476c4-9p4cn 1/1 Running 0 5h41m
my-nginx-6b474476c4-m2xkd 1/1 Running 0 5h41m
my-nginx-9f44b5996-744n5 1/2 CrashLoopBackOff 9 21m
my-nginx-9f44b5996-vl6g2 1/2 CrashLoopBackOff 9 22m
test-minikube-f4df69575-2sbl5 1/1 Running 0 26h
[root@cent8-minikube ~]# kubectl describe pod my-nginx-9f44b5996-vl6g2
Name: my-nginx-9f44b5996-vl6g2
Namespace: default
Priority: 0
Node: cent8-minikube/192.168.194.128
Start Time: Wed, 10 Jun 2020 16:25:14 -0700
Labels: app=nginx
pod-template-hash=9f44b5996
Annotations: <none>
Status: Running
IP: 172.17.0.8
IPs:
IP: 172.17.0.8
Controlled By: ReplicaSet/my-nginx-9f44b5996
Containers:
nginx:
Container ID: docker://5e4cfd4e726373916a105a95644a7a286966482e33eaaac986e44514aef86606
Image: nginx:1.14.2
Image ID: docker-pullable://nginx@sha256:f7988fb6c02e0ce69257d9bd9cf37ae20a60f1df7563c3a2a6abe24160306b8d
Port: 80/TCP
Host Port: 0/TCP
State: Running
Started: Wed, 10 Jun 2020 16:25:16 -0700
Ready: True
Restart Count: 0
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-lqnj8 (ro)
wordpress:
Container ID: docker://5e37277badca1658a10d4d826428f538a45d5e0eaecabd5e196f8b6ab5848ec7
Image: wordpress
Image ID: docker-pullable://wordpress@sha256:ff8be61894e74b6a005ab54ba73aa7084b6dbd11605f12ac383549763918bf09
Port: 80/TCP
Host Port: 0/TCP
State: Waiting
Reason: CrashLoopBackOff
Last State: Terminated
Reason: Error
Exit Code: 1
Started: Wed, 10 Jun 2020 16:46:37 -0700
Finished: Wed, 10 Jun 2020 16:46:38 -0700
Ready: False
Restart Count: 9
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-lqnj8 (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
default-token-lqnj8:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-lqnj8
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 23m default-scheduler Successfully assigned default/my-nginx-9f44b5996-vl6g2 to cent8-minikube
Normal Pulled 23m kubelet, cent8-minikube Container image "nginx:1.14.2" already present on machine
Normal Created 23m kubelet, cent8-minikube Created container nginx
Normal Started 23m kubelet, cent8-minikube Started container nginx
Normal Pulling 22m (x4 over 23m) kubelet, cent8-minikube Pulling image "wordpress"
Normal Pulled 22m (x4 over 23m) kubelet, cent8-minikube Successfully pulled image "wordpress"
Normal Created 22m (x4 over 23m) kubelet, cent8-minikube Created container wordpress
Normal Started 22m (x4 over 23m) kubelet, cent8-minikube Started container wordpress
Warning BackOff 3m35s (x95 over 23m) kubelet, cent8-minikube Back-off restarting failed container
[root@cent8-minikube ~]#
[root@cent8-minikube ~]# kubectl describe pod my-nginx-9f44b5996-744n5
Name: my-nginx-9f44b5996-744n5
Namespace: default
Priority: 0
Node: cent8-minikube/192.168.194.128
Start Time: Wed, 10 Jun 2020 16:25:33 -0700
Labels: app=nginx
pod-template-hash=9f44b5996
Annotations: <none>
Status: Running
IP: 172.17.0.10
IPs:
IP: 172.17.0.10
Controlled By: ReplicaSet/my-nginx-9f44b5996
Containers:
nginx:
Container ID: docker://9e3d6f0073e51eb475c2f2677fa413509f49a07c955d04b09417811d37ba8433
Image: nginx:1.14.2
Image ID: docker-pullable://nginx@sha256:f7988fb6c02e0ce69257d9bd9cf37ae20a60f1df7563c3a2a6abe24160306b8d
Port: 80/TCP
Host Port: 0/TCP
State: Running
Started: Wed, 10 Jun 2020 16:25:35 -0700
Ready: True
Restart Count: 0
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-lqnj8 (ro)
wordpress:
Container ID: docker://2ca8f22ab14b88973dca8d4d486f82a5b0d9bc7b84960882cff0a81afd744bf4
Image: wordpress
Image ID: docker-pullable://wordpress@sha256:ff8be61894e74b6a005ab54ba73aa7084b6dbd11605f12ac383549763918bf09
Port: 80/TCP
Host Port: 0/TCP
State: Waiting
Reason: CrashLoopBackOff
Last State: Terminated
Reason: Error
Exit Code: 1
Started: Wed, 10 Jun 2020 16:46:41 -0700
Finished: Wed, 10 Jun 2020 16:46:42 -0700
Ready: False
Restart Count: 9
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-lqnj8 (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
default-token-lqnj8:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-lqnj8
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 26m default-scheduler Successfully assigned default/my-nginx-9f44b5996-744n5 to cent8-minikube
Normal Pulled 26m kubelet, cent8-minikube Container image "nginx:1.14.2" already present on machine
Normal Created 26m kubelet, cent8-minikube Created container nginx
Normal Started 26m kubelet, cent8-minikube Started container nginx
Normal Pulling 25m (x4 over 26m) kubelet, cent8-minikube Pulling image "wordpress"
Normal Pulled 25m (x4 over 26m) kubelet, cent8-minikube Successfully pulled image "wordpress"
Normal Created 25m (x4 over 26m) kubelet, cent8-minikube Created container wordpress
Normal Started 25m (x4 over 26m) kubelet, cent8-minikube Started container wordpress
Warning BackOff 67s (x116 over 26m) kubelet, cent8-minikube Back-off restarting failed container
[root@cent8-minikube ~]#
[root@cent8-minikube ~]# kubectl logs my-nginx-9f44b5996-vl6g2 -c wordpress
WordPress not found in /var/www/html - copying now...
Complete! WordPress has been successfully copied to /var/www/html
AH00558: apache2: Could not reliably determine the server's fully qualified domain name, using 172.17.0.8. Set the 'ServerName' directive globally to suppress this message
(98)Address already in use: AH00072: make_sock: could not bind to address 0.0.0.0:80
no listening sockets available, shutting down
AH00015: Unable to open logs
How do I add a second container to the pod?
Where can I check the logs of the crashing container, and how do I debug it?
Could you guide me on how to fix this error?

To check the logs of a container in a pod:
kubectl logs my-pod -c my-container
To check the logs of a previous instance of a container in a pod:
kubectl logs my-pod -c my-container --previous
In your case this translates to:
kubectl logs my-nginx-9f44b5996-vl6g2 -c nginx
kubectl logs my-nginx-9f44b5996-vl6g2 -c wordpress
Running WordPress and nginx in the same pod is probably not a good idea. Check this guide on multi-container pods.
Check this guide on running WordPress on Kubernetes.

The important part of your error is this:
Address already in use: AH00072: make_sock: could not bind to address 0.0.0.0:80 no listening sockets available, shutting down
If you run multiple containers in a single pod, they share a network namespace. When the Service forwards traffic to port 80 of the pod, it could reach either container. In this example, that means you can't have two containers in the same pod listening on the same port.
A better practice would be to split these two components into two separate Deployments, each with a matching Service. To do this:
Make two copies of your existing Deployment. In one, delete all mentions of Wordpress. In the other, delete the Nginx container and otherwise globally replace nginx with wordpress.
Make two copies of your existing Service. Change the second one to be type: ClusterIP, and globally replace nginx with wordpress.
In your Nginx proxy configuration, where you proxy_pass to the Wordpress container, change its backend to http://my-wordpress-svc/.
It would be very routine to wind up with five separate YAML files for this (two Deployments, two Services, one ConfigMap), and you can run kubectl apply -f on a directory to install them all in one shot.
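As a rough sketch of what the WordPress half could look like after the split (the names my-wordpress and my-wordpress-svc are placeholders of my own, not taken from the original manifests):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-wordpress          # placeholder name
  labels:
    app: wordpress
spec:
  replicas: 1
  selector:
    matchLabels:
      app: wordpress
  template:
    metadata:
      labels:
        app: wordpress
    spec:
      containers:
      - name: wordpress
        image: wordpress
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: my-wordpress-svc      # placeholder name
spec:
  type: ClusterIP             # internal only; the nginx proxy reaches it by this name
  selector:
    app: wordpress
  ports:
  - port: 80
    targetPort: 80
The nginx Deployment then keeps only the nginx container, and its proxy configuration points proxy_pass at http://my-wordpress-svc/.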

While I agree that separate applications are best run in separate Pods, sometimes it's easier to keep them together. In the current scenario this could be achieved as follows.
WordPress is hard to reconfigure, so moving Nginx to a separate port is more feasible (if not totally straightforward):
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx
data:
  default.conf.template: |
    server {
        listen ${NGINX_PORT};
        server_name localhost;
        location / {
            root /usr/share/nginx/html;
            index index.html index.htm;
        }
        error_page 500 502 503 504 /50x.html;
        location = /50x.html {
            root /usr/share/nginx/html;
        }
    }
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-nginx
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        env:
        - name: NGINX_PORT
          value: "81"
        volumeMounts:
        - mountPath: /etc/nginx/templates/
          name: config
        ports:
        - containerPort: 81
      - name: wordpress
        image: wordpress
        ports:
        - containerPort: 80
      volumes:
      - name: config
        configMap:
          name: nginx
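One detail this answer leaves implicit: the original my-nginx-svc Service forwards port 80, which in this layout now reaches the WordPress container, while nginx listens on 81. A sketch of a Service exposing both containers (the second port entry is my own addition, not part of the answer above):
kind: Service
apiVersion: v1
metadata:
  name: my-nginx-svc
  labels:
    app: nginx
spec:
  type: LoadBalancer
  selector:
    app: nginx
  ports:
  - name: wordpress       # port 80 of the pod is now the wordpress container
    port: 80
    targetPort: 80
  - name: nginx           # assumed extra port for the nginx container listening on 81
    port: 81
    targetPort: 81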

Related

CrashLoopBackOff error when creating k8s replica set

I've created a ReplicaSet on Kubernetes using a YAML file. While the ReplicaSet is created, the pods are not starting and give a CrashLoopBackOff error.
Please see the YAML file and the pod status below:
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: new-replica-set
  labels:
    app: new-replica-set
    type: front-end
spec:
  template:
    metadata:
      name: myimage
      labels:
        app: myimg-app
        type: front-end
    spec:
      containers:
      - name: my-busybody
        image: busybox
  replicas: 4
  selector:
    matchLabels:
      type: front-end
Here is the output when listing the pods:
new-replica-set-8v4l2 0/1 CrashLoopBackOff 10 (38s ago) 27m
new-replica-set-kd6nq 0/1 CrashLoopBackOff 10 (44s ago) 27m
new-replica-set-nkkns 0/1 CrashLoopBackOff 10 (21s ago) 27m
new-replica-set-psjcc 0/1 CrashLoopBackOff 10 (40s ago) 27m
Output of the describe command:
$ kubectl describe pods new-replica-set-8v4l2
Name: new-replica-set-8v4l2
Namespace: default
Priority: 0
Node: minikube/192.168.49.2
Start Time: Wed, 03 Nov 2021 19:57:54 -0700
Labels: app=myimg-app
type=front-end
Annotations: <none>
Status: Running
IP: 172.17.0.14
IPs:
IP: 172.17.0.14
Controlled By: ReplicaSet/new-replica-set
Containers:
my-busybody:
Container ID: docker://67dec2d3a1e6d73fa4e67222e5d57fd980a1e6bf6593fbf3f275474e36956077
Image: busybox
Image ID: docker-pullable://busybox@sha256:15e927f78df2cc772b70713543d6b651e3cd8370abf86b2ea4644a9fba21107f
Port: <none>
Host Port: <none>
State: Waiting
Reason: CrashLoopBackOff
Last State: Terminated
Reason: Completed
Exit Code: 0
Started: Wed, 03 Nov 2021 22:12:32 -0700
Finished: Wed, 03 Nov 2021 22:12:32 -0700
Ready: False
Restart Count: 16
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-lvnh6 (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
kube-api-access-lvnh6:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional: <nil>
DownwardAPI: true
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 138m default-scheduler Successfully assigned default/new-replica-set-8v4l2 to minikube
Normal Pulled 138m kubelet Successfully pulled image "busybox" in 4.009613585s
Normal Pulled 138m kubelet Successfully pulled image "busybox" in 4.339635544s
Normal Pulled 138m kubelet Successfully pulled image "busybox" in 2.293243043s
Normal Created 137m (x4 over 138m) kubelet Created container my-busybody
Normal Started 137m (x4 over 138m) kubelet Started container my-busybody
Normal Pulled 137m kubelet Successfully pulled image "busybox" in 2.344639501s
Normal Pulling 136m (x5 over 138m) kubelet Pulling image "busybox"
Normal Pulled 136m kubelet Successfully pulled image "busybox" in 1.114394958s
Warning BackOff 61s (x231 over 138m) kubelet Back-off restarting failed container
How do I fix this?
Also, what is the best way to debug these errors?
busybox defaults to the Docker command sh, which opens a shell, and because the container is not started with a terminal attached, the sh process exits immediately after container startup, leading to the CrashLoopBackOff status of your pods.
Try switching to an image that is intended to have a long-running/always-running process, e.g. nginx, or define a command (the Docker ENTRYPOINT equivalent) and args (the Docker CMD equivalent), e.g.:
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: new-replica-set
  labels:
    app: new-replica-set
    type: front-end
spec:
  template:
    metadata:
      name: myimage
      labels:
        app: myimg-app
        type: front-end
    spec:
      containers:
      - name: my-busybody
        image: busybox
        command: ["sh"]
        args: ["-c", "while true; do echo Hello from busybox; sleep 100;done"]
  replicas: 4
  selector:
    matchLabels:
      type: front-end
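Note that changing a ReplicaSet's pod template does not recreate pods that already exist, so after editing the manifest (assuming it is saved as new-replica-set.yaml, a filename of my own choosing) one way to roll out the fix is:
kubectl delete rs new-replica-set
kubectl apply -f new-replica-set.yaml
kubectl get pods -l type=front-end   # the new pods should reach Running and stay there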

CrashLoopBackOff - Back-off restarting failed container

I have my image hosted on GCR.
I want to create a Kubernetes cluster on my local system (Mac).
Steps I followed:
Create an imagePullSecretKey.
Create a generic key to communicate with GCP (kubectl create secret generic gcp-key --from-file=key.json).
I have a deployment.yaml file:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: sv-premier
spec:
  selector:
    matchLabels:
      app: sv-premier
  template:
    metadata:
      labels:
        app: sv-premier
    spec:
      volumes:
      - name: google-cloud-key
        secret:
          secretName: gcp-key
      containers:
      - name: sv-premier
        image: gcr.io/proto/premiercore1:latest
        imagePullPolicy: Always
        command: ["echo", "Done deploying sv-premier"]
        volumeMounts:
        - name: google-cloud-key
          mountPath: /var/secrets/google
        env:
        - name: GOOGLE_APPLICATION_CREDENTIALS
          value: /var/secrets/google/key.json
        ports:
        - containerPort: 8080
      imagePullSecrets:
      - name: imagepullsecretkey
When I execute the command kubectl apply -f deployment.yaml, I get a CrashLoopBackOff error.
Output of:
kubectl describe pods podname
=======================
Name: sv-premier-6b77ddd747-cvdr5
Namespace: default
Priority: 0
Node: docker-desktop/192.168.65.3
Start Time: Tue, 04 Feb 2020 14:18:47 +0530
Labels: app=sv-premier
pod-template-hash=6b77ddd747
Annotations:
Status: Running
IP: 10.1.0.43
IPs:
Controlled By: ReplicaSet/sv-premier-6b77ddd747
Containers:
sv-premierleague:
Container ID: docker://141126d732409427fe39b405865f88856ac4e1d8586112797fc5bf4fdfbe317c
Image: gcr.io/proto/premiercore1:latest
Image ID: docker-pullable://gcr.io/proto/premiercore1@sha256:b3800ccca3f30725d5c9235dd349548f0fcfe309f51883d8af16397aef2c3953
Port: 8080/TCP
Host Port: 0/TCP
Command:
echo
Done deploying sv-premier
State: Waiting
Reason: CrashLoopBackOff
Last State: Terminated
Reason: Completed
Exit Code: 0
Started: Tue, 04 Feb 2020 15:00:51 +0530
Finished: Tue, 04 Feb 2020 15:00:51 +0530
Ready: False
Restart Count: 13
Environment:
GOOGLE_APPLICATION_CREDENTIALS: /var/secrets/google/key.json
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-s4jgd (ro)
/var/secrets/google from google-cloud-key (rw)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
google-cloud-key:
Type: Secret (a volume populated by a Secret)
SecretName: gcp-key
Optional: false
default-token-s4jgd:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-s4jgd
Optional: false
QoS Class: BestEffort
Node-Selectors:
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From
Message
---- ------ ---- ----
Normal Scheduled 46m default-scheduler
Successfully assigned default/sv-premier-6b77ddd747-cvdr5 to
docker-desktop
Normal Pulled 45m (x4 over 46m) kubelet, docker-desktop
Successfully pulled image
"gcr.io/proto/premiercore1:latest"
Normal Created 45m (x4 over 46m) kubelet, docker-desktop
Created container sv-premier
Normal Started 45m (x4 over 46m) kubelet, docker-desktop
Started container sv-premier
Normal Pulling 45m (x5 over 46m) kubelet, docker-desktop
Pulling image "gcr.io/proto/premiercore1:latest"
Warning BackOff 92s (x207 over 46m) kubelet, docker-desktop
Back-off restarting failed container
=======================
And the output of:
kubectl logs podname --> Done Deploying sv-premier
I am confused about why my container is exiting and not staying up.
Kindly guide me, please.
Your container exits as soon as the echo command completes, so Kubernetes restarts it, which ends in CrashLoopBackOff. Update your deployment.yaml with a long-running task, for example:
command: ["/bin/sh"]
args: ["-c", "while true; do echo Done Deploying sv-premier; sleep 3600;done"]
This keeps the container running: it logs the message, sleeps for an hour, and repeats.
Read more about pod lifecycle container states here.
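For clarity, a minimal sketch of where those two lines sit in the container spec, with the container name and image copied from the question's deployment.yaml:
      containers:
      - name: sv-premier
        image: gcr.io/proto/premiercore1:latest
        imagePullPolicy: Always
        command: ["/bin/sh"]            # replaces the bare echo command that exits immediately
        args: ["-c", "while true; do echo Done Deploying sv-premier; sleep 3600;done"]
        ports:
        - containerPort: 8080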

Why is an application deployment's status "Available:0" when the service is deployed properly in minikube?

I am trying to deploy the back-end component of my application for testing REST APIs. I have dockerized the components and created an image in minikube, and I have created a YAML file for the deployment and the service. Now when I deploy it through sudo kubectl create -f frontend-deployment.yaml, it deploys without any error, but when I check the status of the deployments this is what is shown:
NAME READY UP-TO-DATE AVAILABLE AGE
back 0/3 3 0 2m57s
Interestingly the service corresponding to this deployment is available.
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
back ClusterIP 10.98.73.249 <none> 8080/TCP 3m9s
I also tried to create the deployment by running the deployment statements individually, like sudo kubectl run back --image=back --port=8080 --image-pull-policy Never, but the result was the same.
Here is what my deployment.yaml file looks like:
kind: Service
apiVersion: v1
metadata:
  name: back
spec:
  selector:
    app: back
  ports:
  - protocol: TCP
    port: 8080
  type: ClusterIP
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: back
spec:
  selector:
    matchLabels:
      app: back
  replicas: 3
  template:
    metadata:
      labels:
        app: back
    spec:
      containers:
      - name: back
        image: back
        imagePullPolicy: Never
        ports:
        - containerPort: 8080
How can I get this deployment up and running, given that this causes an internal server error on the front-end side of my application?
Description of the pod back:
Name: back-7fd9995747-nlqhq
Namespace: default
Priority: 0
Node: minikube/10.0.2.15
Start Time: Mon, 15 Jul 2019 12:49:52 +0200
Labels: pod-template-hash=7fd9995747
run=back
Annotations: <none>
Status: Running
IP: 172.17.0.7
Controlled By: ReplicaSet/back-7fd9995747
Containers:
back:
Container ID: docker://8a46e16c52be24b12831bb38d2088b8059947d099299d15755d77094b9cb5a8b
Image: back:latest
Image ID: docker://sha256:69218763696932578e199b9ab5fc2c3e9087f9482ac7e767db2f5939be98a534
Port: 8080/TCP
Host Port: 0/TCP
State: Waiting
Reason: CrashLoopBackOff
Last State: Terminated
Reason: Error
Exit Code: 1
Started: Mon, 15 Jul 2019 12:49:54 +0200
Finished: Mon, 15 Jul 2019 12:49:54 +0200
Ready: False
Restart Count: 1
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-c247f (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
default-token-c247f:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-c247f
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 6s default-scheduler Successfully assigned default/back-7fd9995747-nlqhq to minikube
Normal Pulled 4s (x2 over 5s) kubelet, minikube Container image "back:latest" already present on machine
Normal Created 4s (x2 over 5s) kubelet, minikube Created container back
Normal Started 4s (x2 over 5s) kubelet, minikube Started container back
Warning BackOff 2s (x2 over 3s) kubelet, minikube Back-off restarting failed container
As you can see zero of three Pods have Ready status:
NAME READY AVAILABLE
back 0/3 0
To find out what is going on you should check the underlying Pods:
$ kubectl get pods -l app=back
and then look at the Events in their description:
$ kubectl describe pod back-...
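Since the describe output above shows the container terminating with Exit Code 1, the container's own logs are the next place to look; for example, with the pod name from the question:
kubectl logs back-7fd9995747-nlqhq
kubectl logs back-7fd9995747-nlqhq --previous   # output of the last crashed instance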

Istio allowing all outbound traffic

Putting everything in detail here for better clarification. My service consists of the following resources in a dedicated namespace (not using a ServiceEntry):
Deployment (1 deployment)
ConfigMap (1 configmap)
Service
VirtualService
Gateway
Istio is enabled in the namespace, and when I create/run the deployment it creates 2 pods as it should. Now, as stated in the subject, I want to allow all outgoing traffic for the deployment because my services need to connect to two service-discovery servers and more:
Vault running on port 8200
Spring config server running on HTTP
downloading dependencies and communicating with other services (which are not part of the VPC/k8s)
With the following deployment file, outgoing connections do not open. The only thing that works is a simple HTTPS request on port 443: when I run curl https://google.com it succeeds, but there is no response from curl http://google.com. The logs also show that the connection to Vault is not being established.
I have tried almost all combinations in the deployment, but none of them seems to work. Am I missing anything or doing this the wrong way? I would really appreciate contributions on this :)
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  labels:
    app: my-application-service
  name: my-application-service-deployment
  namespace: temp-nampesapce
  annotations:
    traffic.sidecar.istio.io/excludeOutboundIPRanges: 0.0.0.0/0
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: my-application-service-deployment
    spec:
      containers:
      - envFrom:
        - configMapRef:
            name: my-application-service-env-variables
        image: image.from.dockerhub:latest
        name: my-application-service-pod
        ports:
        - containerPort: 8080
          name: myappsvc
        resources:
          limits:
            cpu: 700m
            memory: 1.8Gi
          requests:
            cpu: 500m
            memory: 1.7Gi
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: my-application-service-ingress
  namespace: temp-namespace
spec:
  hosts:
  - my-application.mydomain.com
  gateways:
  - http-gateway
  http:
  - route:
    - destination:
        host: my-application-service
        port:
          number: 80
kind: Service
apiVersion: v1
metadata:
  name: my-application-service
  namespace: temp-namespace
spec:
  selector:
    app: api-my-application-service-deployment
  ports:
  - port: 80
    targetPort: myappsvc
    protocol: TCP
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: http-gateway
  namespace: temp-namespace
spec:
  selector:
    istio: ingressgateway # use Istio default gateway implementation
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "*.mydomain.com"
Namespace with istio enabled:
Name: temp-namespace
Labels: istio-injection=enabled
Annotations: <none>
Status: Active
No resource quota.
No resource limits.
Describing the pod shows that Istio and the sidecar are working:
Name: my-application-service-deployment-fb897c6d6-9ztnx
Namespace: temp-namepsace
Node: ip-172-31-231-93.eu-west-1.compute.internal/172.31.231.93
Start Time: Sun, 21 Oct 2018 14:40:26 +0500
Labels: app=my-application-service-deployment
pod-template-hash=964537282
Annotations: sidecar.istio.io/status={"version":"2e0c897425ef3bd2729ec5f9aead7c0566c10ab326454e8e9e2b451404aee9a5","initContainers":["istio-init"],"containers":["istio-proxy"],"volumes":["istio-envoy","istio-certs...
Status: Running
IP: 100.115.0.4
Controlled By: ReplicaSet/my-application-service-deployment-fb897c6d6
Init Containers:
istio-init:
Container ID: docker://a47003a092ec7d3dc3b1d155bca0ec53f00e545ad1b70e1809ad812e6f9aad47
Image: docker.io/istio/proxy_init:1.0.2
Image ID: docker-pullable://istio/proxy_init@sha256:e16a0746f46cd45a9f63c27b9e09daff5432e33a2d80c8cc0956d7d63e2f9185
Port: <none>
Host Port: <none>
Args:
-p
15001
-u
1337
-m
REDIRECT
-i
*
-x
-b
8080,
-d
State: Terminated
Reason: Completed
Exit Code: 0
Started: Sun, 21 Oct 2018 14:40:26 +0500
Finished: Sun, 21 Oct 2018 14:40:26 +0500
Ready: True
Restart Count: 0
Environment: <none>
Mounts: <none>
Containers:
my-application-service-pod:
Container ID: docker://1a30a837f359d8790fb72e6b8fda040e121fe5f7b1f5ca47a5f3732810fd4f39
Image: image.from.dockerhub:latest
Image ID: docker-pullable://848569320300.dkr.ecr.eu-west-1.amazonaws.com/k8_api_env@sha256:98abee8d955cb981636fe7a81843312e6d364a6eabd0c3dd6b3ff66373a61359
Port: 8080/TCP
Host Port: 0/TCP
State: Running
Started: Sun, 21 Oct 2018 14:40:28 +0500
Ready: True
Restart Count: 0
Limits:
cpu: 700m
memory: 1932735283200m
Requests:
cpu: 500m
memory: 1825361100800m
Environment Variables from:
my-application-service-env-variables ConfigMap Optional: false
Environment:
vault.token: <set to the key 'vault_token' in secret 'vault.token'> Optional: false
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-rc8kc (ro)
istio-proxy:
Container ID: docker://3ae851e8ded8496893e5b70fc4f2671155af41c43e64814779935ea6354a8225
Image: docker.io/istio/proxyv2:1.0.2
Image ID: docker-pullable://istio/proxyv2@sha256:54e206530ba6ca9b3820254454e01b7592e9f986d27a5640b6c03704b3b68332
Port: <none>
Host Port: <none>
Args:
proxy
sidecar
--configPath
/etc/istio/proxy
--binaryPath
/usr/local/bin/envoy
--serviceCluster
my-application-service-deployment
--drainDuration
45s
--parentShutdownDuration
1m0s
--discoveryAddress
istio-pilot.istio-system:15007
--discoveryRefreshDelay
1s
--zipkinAddress
zipkin.istio-system:9411
--connectTimeout
10s
--statsdUdpAddress
istio-statsd-prom-bridge.istio-system:9125
--proxyAdminPort
15000
--controlPlaneAuthPolicy
NONE
State: Running
Started: Sun, 21 Oct 2018 14:40:28 +0500
Ready: True
Restart Count: 0
Requests:
cpu: 10m
Environment:
POD_NAME: my-application-service-deployment-fb897c6d6-9ztnx (v1:metadata.name)
POD_NAMESPACE: temp-namepsace (v1:metadata.namespace)
INSTANCE_IP: (v1:status.podIP)
ISTIO_META_POD_NAME: my-application-service-deployment-fb897c6d6-9ztnx (v1:metadata.name)
ISTIO_META_INTERCEPTION_MODE: REDIRECT
Mounts:
/etc/certs/ from istio-certs (ro)
/etc/istio/proxy from istio-envoy (rw)
Conditions:
Type Status
Initialized True
Ready True
PodScheduled True
Volumes:
default-token-rc8kc:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-rc8kc
Optional: false
istio-envoy:
Type: EmptyDir (a temporary directory that shares a pod's lifetime)
Medium: Memory
istio-certs:
Type: Secret (a volume populated by a Secret)
SecretName: istio.default
Optional: true
QoS Class: Burstable
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Started 3m kubelet, ip-172-31-231-93.eu-west-1.compute.internal Started container
Normal SuccessfulMountVolume 3m kubelet, ip-172-31-231-93.eu-west-1.compute.internal MountVolume.SetUp succeeded for volume "istio-certs"
Normal SuccessfulMountVolume 3m kubelet, ip-172-31-231-93.eu-west-1.compute.internal MountVolume.SetUp succeeded for volume "default-token-rc8kc"
Normal SuccessfulMountVolume 3m kubelet, ip-172-31-231-93.eu-west-1.compute.internal MountVolume.SetUp succeeded for volume "istio-envoy"
Normal Pulled 3m kubelet, ip-172-31-231-93.eu-west-1.compute.internal Container image "docker.io/istio/proxy_init:1.0.2" already present on machine
Normal Created 3m kubelet, ip-172-31-231-93.eu-west-1.compute.internal Created container
Normal Scheduled 3m default-scheduler Successfully assigned my-application-service-deployment-fb897c6d6-9ztnx to ip-172-42-231-93.eu-west-1.compute.internal
Normal Pulled 3m kubelet, ip-172-31-231-93.eu-west-1.compute.internal Container image "image.from.dockerhub:latest" already present on machine
Normal Created 3m kubelet, ip-172-31-231-93.eu-west-1.compute.internal Created container
Normal Started 3m kubelet, ip-172-31-231-93.eu-west-1.compute.internal Started container
Normal Pulled 3m kubelet, ip-172-31-231-93.eu-west-1.compute.internal Container image "docker.io/istio/proxyv2:1.0.2" already present on machine
Normal Created 3m kubelet, ip-172-31-231-93.eu-west-1.compute.internal Created container
Normal Started 3m kubelet, ip-172-31-231-93.eu-west-1.compute.internal Started container
The issue was that I had put the sidecar annotation on the Deployment instead of on the Pod template; adding it at the Pod level resolved the issue. Got help from here:
https://github.com/istio/istio/issues/9304
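To make that concrete, here is a sketch of the change, assuming the traffic.sidecar.istio.io/excludeOutboundIPRanges annotation from the question's Deployment is the one being moved: it belongs under spec.template.metadata.annotations, so it lands on the Pods themselves rather than on the Deployment object.
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: my-application-service-deployment
      annotations:
        # sidecar annotations are read from the Pod, so they must be on the Pod template
        traffic.sidecar.istio.io/excludeOutboundIPRanges: 0.0.0.0/0
    spec:
      containers:
      - name: my-application-service-pod
        image: image.from.dockerhub:latest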

Kubernetes continuously killing and recreating last pod

The last (3rd) pod is continuously being deleted and recreated by Kubernetes. It goes from the Running state to the Terminating state. The Kubernetes UI shows the status as: 'Terminated: ExitCode:${state.terminated.exitCode}'
My deployment YAML:
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: openapi
spec:
  scaleTargetRef:
    kind: Deployment
    name: openapi
  minReplicas: 3
  maxReplicas: 10
  targetCPUUtilizationPercentage: 75
---
kind: Service
apiVersion: v1
metadata:
  name: openapi
spec:
  selector:
    app: openapi
  ports:
  - name: http
    protocol: TCP
    port: 80
    targetPort: 8080
  - name: https
    protocol: TCP
    port: 443
    targetPort: 8443
  type: LoadBalancer
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: openapi
spec:
  template:
    metadata:
      labels:
        app: openapi
    spec:
      containers:
      - name: openapi
        image: us.gcr.io/PROJECT_ID/openapi:latest
        imagePullPolicy: Always
        ports:
        - containerPort: 8080
Portion of the output of kubectl get events -n namespace:
Pod Normal Created kubelet Created container
Pod Normal Started kubelet Started container
Pod Normal Killing kubelet Killing container with id docker://openapi:Need to kill Pod
ReplicaSet Normal SuccessfulCreate replicaset-controller (combined from similar events): Created pod: openapi-7db5f8d479-p7mcl
ReplicaSet Normal SuccessfulDelete replicaset-controller (combined from similar events): Deleted pod: openapi-7db5f8d479-pgmxf
HorizontalPodAutoscaler Normal SuccessfulRescale horizontal-pod-autoscaler New size: 2; reason: Current number of replicas above Spec.MaxReplicas
HorizontalPodAutoscaler Normal SuccessfulRescale horizontal-pod-autoscaler New size: 3; reason: Current number of replicas below Spec.MinReplicas
Deployment Normal ScalingReplicaSet deployment-controller Scaled up replica set openapi-7db5f8d479 to 3
Deployment Normal ScalingReplicaSet deployment-controller Scaled down replica set openapi-7db5f8d479 to 2
Output of kubectl describe pod -n default openapi-7db5f8d479-2d2nm for a pod that spawned and was killed (a different pod with a different unique ID spawns each time a pod gets killed by Kubernetes):
Name: openapi-7db5f8d479-2d2nm
Namespace: default
Node: gke-testproject-default-pool-28ce3836-t4hp/10.150.0.2
Start Time: Thu, 23 Nov 2017 11:50:17 +0000
Labels: app=openapi
pod-template-hash=3861948035
Annotations: kubernetes.io/created-by={"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"ReplicaSet","namespace":"default","name":"openapi-7db5f8d479","uid":"b7b3e48f-ceb2-11e7-afe7-42010a960003"...
kubernetes.io/limit-ranger=LimitRanger plugin set: cpu request for container openapi
Status: Terminating (expires Thu, 23 Nov 2017 11:51:04 +0000)
Termination Grace Period: 30s
IP:
Created By: ReplicaSet/openapi-7db5f8d479
Controlled By: ReplicaSet/openapi-7db5f8d479
Containers:
openapi:
Container ID: docker://93d2f1372a7ad004aaeb34b0bc9ee375b6ed48609f505b52495067dd0dcbb233
Image: us.gcr.io/testproject-175705/openapi:latest
Image ID: docker-pullable://us.gcr.io/testproject-175705/openapi@sha256:54b833548cbed32db36ba4808b33c87c15c4ecde673839c3922577f30b
Port: 8080/TCP
State: Terminated
Reason: Error
Exit Code: 143
Started: Thu, 23 Nov 2017 11:50:18 +0000
Finished: Thu, 23 Nov 2017 11:50:35 +0000
Ready: False
Restart Count: 0
Requests:
cpu: 100m
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-61k6c (ro)
Conditions:
Type Status
Initialized True
Ready False
PodScheduled True
Volumes:
default-token-61k6c:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-61k6c
Optional: false
QoS Class: Burstable
Node-Selectors: <none>
Tolerations: node.alpha.kubernetes.io/notReady:NoExecute for 300s
node.alpha.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 21s default-scheduler Successfully assigned openapi-7db5f8d479-2d2nm to gke-testproject-default-pool-28ce3836-t4hp
Normal SuccessfulMountVolume 21s kubelet, gke-testproject-default-pool-28ce3836-t4hp MountVolume.SetUp succeeded for volume "default-token-61k6c"
Normal Pulling 21s kubelet, gke-testproject-default-pool-28ce3836-t4hp pulling image "us.gcr.io/testproject-175705/openapi:latest"
Normal Pulled 20s kubelet, gke-testproject-default-pool-28ce3836-t4hp Successfully pulled image "us.gcr.io/testproject-175705/openapi:latest"
Normal Created 20s kubelet, gke-testproject-default-pool-28ce3836-t4hp Created container
Normal Started 20s kubelet, gke-testproject-default-pool-28ce3836-t4hp Started container
Normal Killing 3s kubelet, gke-testproject-default-pool-28ce3836-t4hp Killing container with id docker://openapi:Need to kill Pod
Check the pod events and status using the commands below:
kubectl get events -w -n namespace
and
kubectl describe pod -n namespace pod_name
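If the container itself is failing, its logs are also worth checking; kubectl logs with --previous shows the output of the last terminated instance, for example with the pod name from the describe output above:
kubectl logs -n default openapi-7db5f8d479-2d2nm
kubectl logs -n default openapi-7db5f8d479-2d2nm --previous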