I am setting up Kubernetes on my laptop (Windows 10) to work on containers and their orchestration. I created the Minikube VM using the command below, and it succeeded.
minikube start --vm-driver hyperv --hyperv-virtual-switch "Primary Virtual Switch"
I am able to launch Kubernetes and the Minikube dashboard as well.
I started the Kubernetes cluster and deployed an nginx app into it with the commands below.
kubectl run hello-nginx --image=nginx --port=8020
kubectl expose deployment hello-nginx --type=NodePort --port=8020 --target-port=8020
I am able to view the Pods and Services using the commands below.
kubectl get pods
kubectl get services
Everything works perfectly up to this point: I can view the Deployment, Pod, and Service information in the Minikube dashboard.
However, when I run the command below to launch the application in the browser, the browser shows "Resource Not Found", even though I can still see the Pod and Service information in the Minikube dashboard.
minikube service hello-nginx
URL: http://192.168.43.20:32087/
The browser shows the following error:
This website could not be found.
Error Code: INET_E_RESOURCE_NOT_FOUND
Below is the Deployment definition:
{
"kind": "Deployment",
"apiVersion": "extensions/v1beta1",
"metadata": {
"name": "hello-nginx",
"namespace": "default",
"selfLink": "/apis/extensions/v1beta1/namespaces/default/deployments/hello-nginx",
"uid": "5629038e-93e5-11e9-ad2e-00155d162e0e",
"resourceVersion": "49313",
"generation": 1,
"creationTimestamp": "2019-06-21T05:28:01Z",
"labels": {
"run": "hello-nginx"
},
"annotations": {
"deployment.kubernetes.io/revision": "1"
}
},
"spec": {
"replicas": 1,
"selector": {
"matchLabels": {
"run": "hello-nginx"
}
},
"template": {
"metadata": {
"creationTimestamp": null,
"labels": {
"run": "hello-nginx"
}
},
"spec": {
"containers": [
{
"name": "hello-nginx",
"image": "nginx",
"ports": [
{
"containerPort": 8020,
"protocol": "TCP"
}
],
"resources": {},
"terminationMessagePath": "/dev/termination-log",
"terminationMessagePolicy": "File",
"imagePullPolicy": "Always"
}
],
"restartPolicy": "Always",
"terminationGracePeriodSeconds": 30,
"dnsPolicy": "ClusterFirst",
"securityContext": {},
"schedulerName": "default-scheduler"
}
},
"strategy": {
"type": "RollingUpdate",
"rollingUpdate": {
"maxUnavailable": 1,
"maxSurge": 1
}
},
"revisionHistoryLimit": 2147483647,
"progressDeadlineSeconds": 2147483647
},
"status": {
"observedGeneration": 1,
"replicas": 1,
"updatedReplicas": 1,
"readyReplicas": 1,
"availableReplicas": 1,
"conditions": [
{
"type": "Available",
"status": "True",
"lastUpdateTime": "2019-06-21T05:28:01Z",
"lastTransitionTime": "2019-06-21T05:28:01Z",
"reason": "MinimumReplicasAvailable",
"message": "Deployment has minimum availability."
}
]
}
}
Below is the ReplicaSet definition:
{
"kind": "ReplicaSet",
"apiVersion": "extensions/v1beta1",
"metadata": {
"name": "hello-nginx-76696c698f",
"namespace": "default",
"selfLink": "/apis/extensions/v1beta1/namespaces/default/replicasets/hello-nginx-76696c698f",
"uid": "562be1e8-93e5-11e9-ad2e-00155d162e0e",
"resourceVersion": "49310",
"generation": 3,
"creationTimestamp": "2019-06-21T05:28:01Z",
"labels": {
"pod-template-hash": "76696c698f",
"run": "hello-nginx"
},
"annotations": {
"deployment.kubernetes.io/desired-replicas": "1",
"deployment.kubernetes.io/max-replicas": "2",
"deployment.kubernetes.io/revision": "1"
},
"ownerReferences": [
{
"apiVersion": "apps/v1",
"kind": "Deployment",
"name": "hello-nginx",
"uid": "5629038e-93e5-11e9-ad2e-00155d162e0e",
"controller": true,
"blockOwnerDeletion": true
}
]
},
"spec": {
"replicas": 1,
"selector": {
"matchLabels": {
"pod-template-hash": "76696c698f",
"run": "hello-nginx"
}
},
"template": {
"metadata": {
"creationTimestamp": null,
"labels": {
"pod-template-hash": "76696c698f",
"run": "hello-nginx"
}
},
"spec": {
"containers": [
{
"name": "hello-nginx",
"image": "nginx",
"ports": [
{
"containerPort": 8020,
"protocol": "TCP"
}
],
"resources": {},
"terminationMessagePath": "/dev/termination-log",
"terminationMessagePolicy": "File",
"imagePullPolicy": "Always"
}
],
"restartPolicy": "Always",
"terminationGracePeriodSeconds": 30,
"dnsPolicy": "ClusterFirst",
"securityContext": {},
"schedulerName": "default-scheduler"
}
}
},
"status": {
"replicas": 1,
"fullyLabeledReplicas": 1,
"readyReplicas": 1,
"availableReplicas": 1,
"observedGeneration": 3
}
}
Now I am trying the port-forwarding option to route requests to the Pod, but it is not working either.
kubectl port-forward deployment/hello-nginx 8020:8020
I get the following error when I try to access http://127.0.0.1:8020:
Handling connection for 8020
E0622 01:07:06.306320 18888 portforward.go:331] an error occurred forwarding 8020 -> 8020: error forwarding port 8020 to pod c54d6faaa545992dce02f58490a26154134843eb7426a51e78df2cda172b514c, uid : exit status 1: 2019/06/21 08:01:18 socat[4535] E connect(5, AF=2 127.0.0.1:8020, 16): Connection refused
I have read many articles on this but couldn't find the root cause of the issue. Am I missing anything important here?
Thanks for your help in Advance.
Your issue is actually unrelated to Minikube or port forwarding. You expose port 8020, but the hello-nginx application (the stock nginx image) listens on port 80. So you should use 80 everywhere instead of 8020. For example:
kubectl run hello-nginx --image=nginx --port=80
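The matching expose and service commands would then look like this (a sketch based on the commands from the question, with every 8020 changed to 80):
kubectl expose deployment hello-nginx --type=NodePort --port=80 --target-port=80
minikube service hello-nginx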
That said, Minikube is not the best option on Windows. Docker Desktop is a much better fit: everything you run on its Kubernetes cluster is then available on localhost.
I have deployed CoreDNS (running on node-01):
{
"kind": "Deployment",
"apiVersion": "extensions/v1beta1",
"metadata": {
"name": "coredns",
"namespace": "kube-system",
"selfLink": "/apis/extensions/v1beta1/namespaces/kube-system/deployments/coredns",
"uid": "5d470d90-6cdf-4ef1-be00-6774d70fcb54",
"resourceVersion": "14708222",
"generation": 18,
"creationTimestamp": "2019-09-22T06:28:28Z",
"labels": {
"addonmanager.kubernetes.io/mode": "Reconcile",
"k8s-app": "kube-dns",
"kubernetes.io/cluster-service": "true",
"kubernetes.io/name": "CoreDNS"
},
"annotations": {
"deployment.kubernetes.io/revision": "6"
}
},
"spec": {
"replicas": 1,
"selector": {
"matchLabels": {
"k8s-app": "kube-dns"
}
},
"template": {
"metadata": {
"creationTimestamp": null,
"labels": {
"k8s-app": "kube-dns"
},
"annotations": {
"seccomp.security.alpha.kubernetes.io/pod": "docker/default"
}
},
"spec": {
"volumes": [
{
"name": "config-volume",
"configMap": {
"name": "coredns",
"items": [
{
"key": "Corefile",
"path": "Corefile"
}
],
"defaultMode": 420
}
}
],
"containers": [
{
"name": "coredns",
"image": "gcr.azk8s.cn/google-containers/coredns:1.3.1",
"args": [
"-conf",
"/etc/coredns/Corefile"
],
"ports": [
{
"name": "dns",
"containerPort": 53,
"protocol": "UDP"
},
{
"name": "dns-tcp",
"containerPort": 53,
"protocol": "TCP"
},
{
"name": "metrics",
"containerPort": 9153,
"protocol": "TCP"
}
],
"resources": {
"limits": {
"memory": "70Mi"
},
"requests": {
"cpu": "100m",
"memory": "70Mi"
}
},
"volumeMounts": [
{
"name": "config-volume",
"readOnly": true,
"mountPath": "/etc/coredns"
}
],
"livenessProbe": {
"httpGet": {
"path": "/health",
"port": 8080,
"scheme": "HTTP"
},
"initialDelaySeconds": 60,
"timeoutSeconds": 5,
"periodSeconds": 10,
"successThreshold": 1,
"failureThreshold": 5
},
"readinessProbe": {
"httpGet": {
"path": "/health",
"port": 8080,
"scheme": "HTTP"
},
"timeoutSeconds": 1,
"periodSeconds": 10,
"successThreshold": 1,
"failureThreshold": 3
},
"terminationMessagePath": "/dev/termination-log",
"terminationMessagePolicy": "File",
"imagePullPolicy": "IfNotPresent",
"securityContext": {
"capabilities": {
"add": [
"NET_BIND_SERVICE"
],
"drop": [
"all"
]
},
"readOnlyRootFilesystem": true,
"allowPrivilegeEscalation": false
}
}
],
"restartPolicy": "Always",
"terminationGracePeriodSeconds": 30,
"dnsPolicy": "Default",
"nodeSelector": {
"beta.kubernetes.io/os": "linux"
},
"serviceAccountName": "coredns",
"serviceAccount": "coredns",
"securityContext": {},
"schedulerName": "default-scheduler",
"tolerations": [
{
"key": "CriticalAddonsOnly",
"operator": "Exists"
}
],
"priorityClassName": "system-cluster-critical"
}
},
"strategy": {
"type": "RollingUpdate",
"rollingUpdate": {
"maxUnavailable": 1,
"maxSurge": "25%"
}
},
"revisionHistoryLimit": 10,
"progressDeadlineSeconds": 600
},
"status": {
"observedGeneration": 18,
"replicas": 1,
"updatedReplicas": 1,
"readyReplicas": 1,
"availableReplicas": 1,
"conditions": [
{
"type": "Available",
"status": "True",
"lastUpdateTime": "2019-09-22T06:28:28Z",
"lastTransitionTime": "2019-09-22T06:28:28Z",
"reason": "MinimumReplicasAvailable",
"message": "Deployment has minimum availability."
},
{
"type": "Progressing",
"status": "True",
"lastUpdateTime": "2020-02-12T14:54:06Z",
"lastTransitionTime": "2020-01-23T16:14:05Z",
"reason": "NewReplicaSetAvailable",
"message": "ReplicaSet \"coredns-89764d78c\" has successfully progressed."
}
],
"collisionCount": 1
}
}
When I ping a domain from pods (running on node-01), it fails:
# access external domain
/ # ping baidu.com
ping: bad address 'baidu.com'
# access oneself
/ # ping eureka-0
PING eureka-0 (172.30.208.2): 56 data bytes
64 bytes from 172.30.208.2: seq=0 ttl=64 time=0.054 ms
# access other pod
/ # ping zuul-service
ping: bad address 'zuul-service'
I also want to install curl in a pod on node-01:
/ # apk add curl
fetch http://dl-cdn.alpinelinux.org/alpine/v3.9/main/x86_64/APKINDEX.tar.gz
ERROR: http://dl-cdn.alpinelinux.org/alpine/v3.9/main: temporary error (try again later)
WARNING: Ignoring APKINDEX.b89edf6e.tar.gz: No such file or directory
fetch http://dl-cdn.alpinelinux.org/alpine/v3.9/community/x86_64/APKINDEX.tar.gz
ERROR: http://dl-cdn.alpinelinux.org/alpine/v3.9/community: temporary error (try again later)
WARNING: Ignoring APKINDEX.737f7e01.tar.gz: No such file or directory
ERROR: unsatisfiable constraints:
curl (missing):
required by: world[curl]
When I execute these commands on node-03, they work fine. What should I do to figure out what is going wrong?
[root@ops001 ~]# kubectl get pods --namespace=kube-system -l k8s-app=kube-dns
NAME READY STATUS RESTARTS AGE
coredns-89764d78c-zmz27 1/1 Running 0 90m
This is the Kubernetes version:
[root@ops001 ~]# kubectl get nodes -o wide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
azshara-k8s01 Ready <none> 144d v1.15.2 172.19.104.231 <none> CentOS Linux 7 (Core) 3.10.0-957.5.1.el7.x86_64 docker://19.3.1
azshara-k8s02 Ready <none> 144d v1.15.2 172.19.104.230 <none> CentOS Linux 7 (Core) 3.10.0-957.5.1.el7.x86_64 docker://18.9.6
azshara-k8s03 Ready <none> 144d v1.15.2 172.19.150.82 <none> CentOS Linux 7 (Core) 3.10.0-957.5.1.el7.x86_64 docker://18.9.6
As far as I can see, you are using openjdk:8-jre-alpine for your Docker container, so when installing packages:
Run apk update first, OR
remove the cached index:
RUN rm -rf /var/cache/apk/* && \
    rm -rf /tmp/*
and replace apk add curl with RUN apk add --no-cache curl.
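Combined into one layer, a minimal sketch of the suggested change (assuming the openjdk:8-jre-alpine base image mentioned above):
# refresh the package index and install curl without keeping the apk cache
RUN apk update && \
    apk add --no-cache curl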
I can obtain my service by running
$ kubectl get service <service-name> --namespace <namespace name>
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service name LoadBalancer ********* ********* port numbers 16h
Here is my service running on Kubernetes, but I can't access it through the public IP. Below are my Service and Deployment files. I am using Azure DevOps to build and release the container image to Azure Container Registry. As you can see in the service output above, I get an external IP and a cluster IP, but when I try the external IP in a browser or with curl, I get no response.
{
"kind": "Service",
"apiVersion": "v1",
"metadata": {
"name": "service-name",
"namespace": "namespace-name",
"selfLink": "*******************",
"uid": "*******************",
"resourceVersion": "1686278",
"creationTimestamp": "2019-07-15T14:12:11Z",
"labels": {
"run": "service name"
}
},
"spec": {
"ports": [
{
"protocol": "TCP",
"port": 80,
"targetPort": ****,
"nodePort": ****
}
],
"selector": {
"run": "profile-management-service"
},
"clusterIP": "**********",
"type": "LoadBalancer",
"sessionAffinity": "None",
"externalTrafficPolicy": "Cluster"
},
"status": {
"loadBalancer": {
"ingress": [
{
"ip": "*************"
}
]
}
}
}
{
"kind": "Deployment",
"apiVersion": "extensions/v1beta1",
"metadata": {
"name": "deployment-name",
"namespace": "namespace-name",
"selfLink": "*************************",
"uid": "****************************",
"resourceVersion": "1686172",
"generation": 1,
"creationTimestamp": "2019-07-15T14:12:04Z",
"labels": {
"run": "deployment-name"
},
"annotations": {
"deployment.kubernetes.io/revision": "1"
}
},
"spec": {
"replicas": 1,
"selector": {
"matchLabels": {
"run": "deployment-name"
}
},
"template": {
"metadata": {
"creationTimestamp": null,
"labels": {
"run": "deployment-name"
}
},
"spec": {
"containers": [
{
"name": "deployment-name",
"image": "dev/containername:50",
"ports": [
{
"containerPort": ****,
"protocol": "TCP"
}
],
"resources": {},
"terminationMessagePath": "/dev/termination-log",
"terminationMessagePolicy": "File",
"imagePullPolicy": "IfNotPresent"
}
],
"restartPolicy": "Always",
"terminationGracePeriodSeconds": 30,
"dnsPolicy": "ClusterFirst",
"securityContext": {},
"schedulerName": "default-scheduler"
}
},
"strategy": {
"type": "RollingUpdate",
"rollingUpdate": {
"maxUnavailable": 1,
"maxSurge": 1
}
},
"revisionHistoryLimit": 2147483647,
"progressDeadlineSeconds": 2147483647
},
"status": {
"observedGeneration": 1,
"replicas": 1,
"updatedReplicas": 1,
"readyReplicas": 1,
"availableReplicas": 1,
"conditions": [
{
"type": "Available",
"status": "True",
"lastUpdateTime": "2019-07-15T14:12:04Z",
"lastTransitionTime": "2019-07-15T14:12:04Z",
"reason": "MinimumReplicasAvailable",
"message": "Deployment has minimum availability."
}
]
}
}
Apparently there's a mismatch between the label and the selector:
Service selector
"selector": {
"run": "profile-management-service"
while the Deployment label is:
"labels": {
"run": "deployment-name"
},
Also check the targetPort value of the Service; it should match the containerPort of your Deployment.
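A quick way to confirm the mismatch is to check whether the Service has any endpoints at all; a sketch using the placeholder names from the question:
kubectl get endpoints service-name --namespace namespace-name
If the selector does not match any pod labels, the ENDPOINTS column shows none.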
You need to add a readinessProbe and a livenessProbe to your Deployment, and after that check your firewall to make sure all rules are correct.
Here is some more info about liveness and readiness probes.
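As a rough sketch only (the path and port are assumptions; adjust them to whatever your container actually serves), the probes could be added to the container spec like this:
"livenessProbe": {
  "httpGet": { "path": "/", "port": 80, "scheme": "HTTP" },
  "initialDelaySeconds": 30,
  "periodSeconds": 10
},
"readinessProbe": {
  "httpGet": { "path": "/", "port": 80, "scheme": "HTTP" },
  "initialDelaySeconds": 5,
  "periodSeconds": 5
}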
I am using Kubernetes v1.9.7-gke.6. I am trying to edit the metrics-server Deployment YAML and add the --metric-resolution flag. When I add the flag and save the change, the terminal reports that the edit was successful, but when I edit the metrics-server Deployment again, the flag I added is gone. Is there any way to edit the metrics-server Deployment YAML?
Here is the Deployment; it is the default one created when I create a new Kubernetes cluster on Google Cloud.
{
"apiVersion": "extensions/v1beta1",
"kind": "Deployment",
"metadata": {
"annotations": {
"deployment.kubernetes.io/revision": "12",
"kubectl.kubernetes.io/last-applied-configuration": "{\"apiVersion\":\"extensions/v1beta1\",\"kind\":\"Deployment\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"k8s-app\":\"metrics-server\",\"kubernetes.io/cluster-service\":\"true\",\"version\":\"v0.2.1\"},\"name\":\"metrics-server-v0.2.1\",\"namespace\":\"kube-system\"},\"spec\":{\"selector\":{\"matchLabels\":{\"k8s-app\":\"metrics-server\",\"version\":\"v0.2.1\"}},\"template\":{\"metadata\":{\"annotations\":{\"scheduler.alpha.kubernetes.io/critical-pod\":\"\"},\"labels\":{\"k8s-app\":\"metrics-server\",\"version\":\"v0.2.1\"},\"name\":\"metrics-server\"},\"spec\":{\"containers\":[{\"command\":[\"/metrics-server\",\"--source=kubernetes.summary_api:''\"],\"image\":\"gcr.io/google_containers/metrics-server-amd64:v0.2.1\",\"name\":\"metrics-server\",\"ports\":[{\"containerPort\":443,\"name\":\"https\",\"protocol\":\"TCP\"}]},{\"command\":[\"/pod_nanny\",\"--config-dir=/etc/config\",\"--cpu=40m\",\"--extra-cpu=0.5m\",\"--memory=40Mi\",\"--extra-memory=4Mi\",\"--threshold=5\",\"--deployment=metrics-server-v0.2.1\",\"--container=metrics-server\",\"--poll-period=300000\",\"--estimator=exponential\"],\"env\":[{\"name\":\"MY_POD_NAME\",\"valueFrom\":{\"fieldRef\":{\"fieldPath\":\"metadata.name\"}}},{\"name\":\"MY_POD_NAMESPACE\",\"valueFrom\":{\"fieldRef\":{\"fieldPath\":\"metadata.namespace\"}}}],\"image\":\"gcr.io/google_containers/addon-resizer:1.8.1\",\"name\":\"metrics-server-nanny\",\"resources\":{\"limits\":{\"cpu\":\"100m\",\"memory\":\"300Mi\"},\"requests\":{\"cpu\":\"5m\",\"memory\":\"50Mi\"}},\"volumeMounts\":[{\"mountPath\":\"/etc/config\",\"name\":\"metrics-server-config-volume\"}]}],\"serviceAccountName\":\"metrics-server\",\"tolerations\":[{\"key\":\"CriticalAddonsOnly\",\"operator\":\"Exists\"}],\"volumes\":[{\"configMap\":{\"name\":\"metrics-server-config\"},\"name\":\"metrics-server-config-volume\"}]}}}}\n"
},
"creationTimestamp": "2018-09-20T13:04:03Z",
"generation": 14,
"labels": {
"addonmanager.kubernetes.io/mode": "Reconcile",
"k8s-app": "metrics-server",
"kubernetes.io/cluster-service": "true",
"version": "v0.2.1"
},
"name": "metrics-server-v0.2.1",
"namespace": "kube-system",
"resourceVersion": "822513",
"selfLink": "/apis/extensions/v1beta1/namespaces/kube-system/deployments/metrics-server-v0.2.1",
"uid": "a5cd1f4c-bcd5-11e8-9313-42010a80005f"
},
"spec": {
"replicas": 1,
"selector": {
"matchLabels": {
"k8s-app": "metrics-server",
"version": "v0.2.1"
}
},
"strategy": {
"rollingUpdate": {
"maxSurge": 1,
"maxUnavailable": 1
},
"type": "RollingUpdate"
},
"template": {
"metadata": {
"annotations": {
"scheduler.alpha.kubernetes.io/critical-pod": ""
},
"creationTimestamp": null,
"labels": {
"k8s-app": "metrics-server",
"version": "v0.2.1"
},
"name": "metrics-server"
},
"spec": {
"containers": [
{
"command": [
"/metrics-server",
"--source=kubernetes.summary_api:''"
],
"image": "gcr.io/google_containers/metrics-server-amd64:v0.2.1",
"imagePullPolicy": "IfNotPresent",
"name": "metrics-server",
"ports": [
{
"containerPort": 443,
"name": "https",
"protocol": "TCP"
}
],
"resources": {
"limits": {
"cpu": "48m",
"memory": "104Mi"
},
"requests": {
"cpu": "48m",
"memory": "104Mi"
}
},
"terminationMessagePath": "/dev/termination-log",
"terminationMessagePolicy": "File"
},
{
"command": [
"/pod_nanny",
"--config-dir=/etc/config",
"--cpu=40m",
"--extra-cpu=0.5m",
"--memory=40Mi",
"--extra-memory=4Mi",
"--threshold=5",
"--deployment=metrics-server-v0.2.1",
"--container=metrics-server",
"--poll-period=300000",
"--estimator=exponential"
],
"env": [
{
"name": "MY_POD_NAME",
"valueFrom": {
"fieldRef": {
"apiVersion": "v1",
"fieldPath": "metadata.name"
}
}
},
{
"name": "MY_POD_NAMESPACE",
"valueFrom": {
"fieldRef": {
"apiVersion": "v1",
"fieldPath": "metadata.namespace"
}
}
}
],
"image": "gcr.io/google_containers/addon-resizer:1.8.1",
"imagePullPolicy": "IfNotPresent",
"name": "metrics-server-nanny",
"resources": {
"limits": {
"cpu": "100m",
"memory": "300Mi"
},
"requests": {
"cpu": "5m",
"memory": "50Mi"
}
},
"terminationMessagePath": "/dev/termination-log",
"terminationMessagePolicy": "File",
"volumeMounts": [
{
"mountPath": "/etc/config",
"name": "metrics-server-config-volume"
}
]
}
],
"dnsPolicy": "ClusterFirst",
"restartPolicy": "Always",
"schedulerName": "default-scheduler",
"securityContext": {},
"serviceAccount": "metrics-server",
"serviceAccountName": "metrics-server",
"terminationGracePeriodSeconds": 30,
"tolerations": [
{
"key": "CriticalAddonsOnly",
"operator": "Exists"
}
],
"volumes": [
{
"configMap": {
"defaultMode": 420,
"name": "metrics-server-config"
},
"name": "metrics-server-config-volume"
}
]
}
}
},
"status": {
"availableReplicas": 1,
"conditions": [
{
"lastTransitionTime": "2018-09-20T13:04:03Z",
"lastUpdateTime": "2018-09-20T13:04:03Z",
"message": "Deployment has minimum availability.",
"reason": "MinimumReplicasAvailable",
"status": "True",
"type": "Available"
}
],
"observedGeneration": 14,
"readyReplicas": 1,
"replicas": 1,
"updatedReplicas": 1
}
}
Editing the yaml/flags of anything in kube-system on GKE (Google Kubernetes Engine) will not work as it will get reverted by the master. So, that part is working as intended.
It looks like fluentd, which is auto-managed by GKE for logging, is what is causing the changes to get reverted. So the only option I can think of would be to disable the GKE addons (i.e. Cloud Logging), roll your own fluentd DaemonSet, and then configure things yourself. I recommend you visit this discussion for more information.
Additionally, take a look at this guide if you'd like to roll your own fluentd on your cluster as well.
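You can verify that the addon manager owns the Deployment by looking at its labels (the Deployment name is taken from the manifest above):
kubectl get deployment metrics-server-v0.2.1 -n kube-system --show-labels
The addonmanager.kubernetes.io/mode=Reconcile label means the addon manager keeps reverting manual edits back to its source manifest.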
I have two Kubernetes installs for different projects that, as best as I can tell, have equivalent configuration in the areas that matter, yet the two perform rolling updates differently.
Both were installed on AWS using kops.
System 1 (k8s v1.7.0) - when I kill a pod in a deployment using the k8s web GUI, the new pod is created first and, once it is running, the old pod is terminated. No downtime.
System 2 (k8s v1.8.4) - when I kill a pod in a deployment using the k8s web GUI, the old pod is killed instantly and then the new pod is created. This causes brief downtime.
Any suggestions or ideas as to why they behave differently, and how I can get System 2 to create the new pod before terminating the old one?
System 1 deployment
{
"kind": "Deployment",
"apiVersion": "extensions/v1beta1",
"metadata": {
"name": "proxy-deployment",
"namespace": "namespace",
"selfLink": "/apis/extensions/v1beta1/namespaces/namespace/deployments/proxy-deployment",
"uid": "d12778ba-8950-11e7-9e69-12f38e55b21a",
"resourceVersion": "31538492",
"generation": 7,
"creationTimestamp": "2017-08-25T04:49:45Z",
"labels": {
"app": "proxy"
},
"annotations": {
"deployment.kubernetes.io/revision": "6",
"kubectl.kubernetes.io/last-applied-configuration": "{\"apiVersion\":\"apps/v1beta1\",\"kind\":\"Deployment\",\"metadata\":{\"annotations\":{},\"name\":\"proxy-deployment\",\"namespace\":\"namespace\"},\"spec\":{\"replicas\":2,\"template\":{\"metadata\":{\"labels\":{\"app\":\"proxy\"}},\"spec\":{\"containers\":[{\"image\":\"xxxxxxxxxxx.dkr.ecr.us-east-1.amazonaws.com/nginx-proxy-xxxxxx:latest\",\"name\":\"proxy-ctr\",\"ports\":[{\"containerPort\":80},{\"containerPort\":8080}]}]}}}}\n"
}
},
"spec": {
"replicas": 1,
"selector": {
"matchLabels": {
"app": "proxy"
}
},
"template": {
"metadata": {
"creationTimestamp": null,
"labels": {
"app": "proxy",
"date": "1522386390"
}
},
"spec": {
"containers": [
{
"name": "proxy-ctr",
"image": "xxxxxxxxxxx.dkr.ecr.us-east-1.amazonaws.com/nginx-proxy-xxxxxx:latest",
"ports": [
{
"containerPort": 80,
"protocol": "TCP"
},
{
"containerPort": 8080,
"protocol": "TCP"
}
],
"resources": {},
"terminationMessagePath": "/dev/termination-log",
"terminationMessagePolicy": "File",
"imagePullPolicy": "Always"
}
],
"restartPolicy": "Always",
"terminationGracePeriodSeconds": 30,
"dnsPolicy": "ClusterFirst",
"securityContext": {},
"schedulerName": "default-scheduler"
}
},
"strategy": {
"type": "RollingUpdate",
"rollingUpdate": {
"maxUnavailable": "25%",
"maxSurge": "25%"
}
},
"revisionHistoryLimit": 2,
"progressDeadlineSeconds": 600
},
"status": {
"observedGeneration": 7,
"replicas": 1,
"updatedReplicas": 1,
"readyReplicas": 1,
"availableReplicas": 1,
"conditions": [
{
"type": "Progressing",
"status": "True",
"lastUpdateTime": "2018-03-30T05:03:01Z",
"lastTransitionTime": "2017-08-25T04:49:45Z",
"reason": "NewReplicaSetAvailable",
"message": "ReplicaSet \"proxy-deployment-1457650622\" has successfully progressed."
},
{
"type": "Available",
"status": "True",
"lastUpdateTime": "2018-06-01T06:55:12Z",
"lastTransitionTime": "2018-06-01T06:55:12Z",
"reason": "MinimumReplicasAvailable",
"message": "Deployment has minimum availability."
}
]
}
}
System 2 Deployment
{
"kind": "Deployment",
"apiVersion": "extensions/v1beta1",
"metadata": {
"name": "prodefault-deployment",
"namespace": "namespace",
"selfLink": "/apis/extensions/v1beta1/namespaces/namespace/deployments/prodefault-deployment",
"uid": "a80528c8-eb79-11e7-9364-068125440f70",
"resourceVersion": "25203392",
"generation": 10,
"creationTimestamp": "2017-12-28T02:49:00Z",
"labels": {
"app": "prodefault"
},
"annotations": {
"deployment.kubernetes.io/revision": "7",
"kubectl.kubernetes.io/last-applied-configuration": "{\"apiVersion\":\"apps/v1beta1\",\"kind\":\"Deployment\",\"metadata\":{\"annotations\":{},\"name\":\"prodefault-deployment\",\"namespace\":\"namespace\"},\"spec\":{\"replicas\":1,\"strategy\":{\"rollingUpdate\":{\"maxSurge\":\"25%\",\"maxUnavailable\":\"25%\"},\"type\":\"RollingUpdate\"},\"template\":{\"metadata\":{\"labels\":{\"app\":\"prodefault\"}},\"spec\":{\"containers\":[{\"image\":\"xxxxxxxxxxxx.dkr.ecr.us-west-2.amazonaws.com/xxxxxxxxxxx-pro-default:latest\",\"livenessProbe\":{\"httpGet\":{\"path\":\"/healthchk\",\"port\":80},\"initialDelaySeconds\":120,\"periodSeconds\":15,\"timeoutSeconds\":1},\"name\":\"prodefault-ctr\",\"ports\":[{\"containerPort\":80}],\"readinessProbe\":{\"httpGet\":{\"path\":\"/healthchk\",\"port\":80},\"initialDelaySeconds\":5,\"periodSeconds\":2,\"timeoutSeconds\":3},\"resources\":{\"limits\":{\"cpu\":\"1\",\"memory\":\"1024Mi\"},\"requests\":{\"cpu\":\"150m\",\"memory\":\"256Mi\"}},\"volumeMounts\":[{\"mountPath\":\"/var/www/html/homes\",\"name\":\"efs-pvc\"},{\"mountPath\":\"/var/xero\",\"name\":\"xero-key\",\"readOnly\":true},{\"mountPath\":\"/var/gcal\",\"name\":\"gcal-json\",\"readOnly\":true}]}],\"volumes\":[{\"name\":\"efs-pvc\",\"persistentVolumeClaim\":{\"claimName\":\"tio-pv-claim-homes\"}},{\"name\":\"xero-key\",\"secret\":{\"secretName\":\"xero-key\"}},{\"name\":\"gcal-json\",\"secret\":{\"secretName\":\"gcaljson\"}}]}}}}\n"
}
},
"spec": {
"replicas": 1,
"selector": {
"matchLabels": {
"app": "prodefault"
}
},
"template": {
"metadata": {
"creationTimestamp": null,
"labels": {
"app": "prodefault"
}
},
"spec": {
"volumes": [
{
"name": "efs-pvc",
"persistentVolumeClaim": {
"claimName": "tio-pv-claim-homes"
}
},
{
"name": "xero-key",
"secret": {
"secretName": "xero-key",
"defaultMode": 420
}
},
{
"name": "gcal-json",
"secret": {
"secretName": "gcaljson",
"defaultMode": 420
}
}
],
"containers": [
{
"name": "prodefault-ctr",
"image": "xxxxxxxxxxxx.dkr.ecr.us-west-2.amazonaws.com/xxxxxxxxxxx-pro-default:latest",
"ports": [
{
"containerPort": 80,
"protocol": "TCP"
}
],
"resources": {
"limits": {
"cpu": "1",
"memory": "1Gi"
},
"requests": {
"cpu": "150m",
"memory": "256Mi"
}
},
"volumeMounts": [
{
"name": "efs-pvc",
"mountPath": "/var/www/html/homes"
},
{
"name": "xero-key",
"readOnly": true,
"mountPath": "/var/xero"
},
{
"name": "gcal-json",
"readOnly": true,
"mountPath": "/var/gcal"
}
],
"livenessProbe": {
"httpGet": {
"path": "/healthchk",
"port": 80,
"scheme": "HTTP"
},
"initialDelaySeconds": 120,
"timeoutSeconds": 1,
"periodSeconds": 15,
"successThreshold": 1,
"failureThreshold": 3
},
"readinessProbe": {
"httpGet": {
"path": "/healthchk",
"port": 80,
"scheme": "HTTP"
},
"initialDelaySeconds": 5,
"timeoutSeconds": 3,
"periodSeconds": 2,
"successThreshold": 1,
"failureThreshold": 3
},
"terminationMessagePath": "/dev/termination-log",
"terminationMessagePolicy": "File",
"imagePullPolicy": "Always"
}
],
"restartPolicy": "Always",
"terminationGracePeriodSeconds": 30,
"dnsPolicy": "ClusterFirst",
"securityContext": {},
"schedulerName": "default-scheduler"
}
},
"strategy": {
"type": "RollingUpdate",
"rollingUpdate": {
"maxUnavailable": "25%",
"maxSurge": "25%"
}
},
"revisionHistoryLimit": 2,
"progressDeadlineSeconds": 600
},
"status": {
"observedGeneration": 10,
"replicas": 1,
"updatedReplicas": 1,
"readyReplicas": 1,
"availableReplicas": 1,
"conditions": [
{
"type": "Progressing",
"status": "True",
"lastUpdateTime": "2018-01-15T06:07:52Z",
"lastTransitionTime": "2017-12-28T03:00:16Z",
"reason": "NewReplicaSetAvailable",
"message": "ReplicaSet \"prodefault-deployment-9685f46d4\" has successfully progressed."
},
{
"type": "Available",
"status": "True",
"lastUpdateTime": "2018-06-13T07:12:41Z",
"lastTransitionTime": "2018-06-13T07:12:41Z",
"reason": "MinimumReplicasAvailable",
"message": "Deployment has minimum availability."
}
]
}
}
I noticed both Deployments have the following rolling update strategy defined:
"strategy": {
"type": "RollingUpdate",
"rollingUpdate": {
"maxUnavailable": "25%",
"maxSurge": "25%"
}
},
With this strategy, the old pod should be terminated only after the new pod is created during a normal rolling update triggered by 'kubectl set image' or 'kubectl apply'.
So the different behavior between the two systems may come from the dashboard. I guess you are running different dashboard versions on your two systems: according to the dashboard compatibility matrix, Kubernetes v1.7 needs dashboard 1.7, while Kubernetes v1.8 needs dashboard 1.8. Maybe different dashboard versions treat 'kill pod' as a different action; I don't know.
Or, if you are running dashboard 1.7 on your v1.8 system, try upgrading your dashboard first.
And lastly, don't use 'kill pod' to do a rolling update.
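For example, a sketch of a proper rolling update for System 2 (the image tag is a placeholder; substitute the tag you actually want to roll out):
kubectl set image deployment/prodefault-deployment prodefault-ctr=xxxxxxxxxxxx.dkr.ecr.us-west-2.amazonaws.com/xxxxxxxxxxx-pro-default:new-tag
kubectl rollout status deployment/prodefault-deployment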
I'm testing the new OpenShift platform based on Docker and Kubernetes.
I've created a new project from scratch, but when I try to deploy a simple MongoDB service (as well as a Python app), I get the following errors in the Monitoring section of the web console:
Unable to mount volumes for pod "mongodb-1-sfg8t_rob1(e9e53040-ab59-11e6-a64c-0e3d364e19a5)": timeout expired waiting for volumes to attach/mount for pod "mongodb-1-sfg8t"/"rob1". list of unattached/unmounted volumes=[mongodb-data]
Error syncing pod, skipping: timeout expired waiting for volumes to attach/mount for pod "mongodb-1-sfg8t"/"rob1". list of unattached/unmounted volumes=[mongodb-data]
It seems to be a problem mounting the PVC in the container; however, the PVC is correctly created and bound:
oc get pvc
Returns:
NAME STATUS VOLUME CAPACITY ACCESSMODES AGE
mongodb-data Bound pv-aws-9dged 1Gi RWO 29m
I've deployed it with the following commands:
oc process -f openshift/templates/mongodb.json | oc create -f -
oc deploy mongodb --latest
The complete log from Web console:
The content of the template that I used is:
{
"kind": "Template",
"apiVersion": "v1",
"metadata": {
"name": "mongo-example",
"annotations": {
"openshift.io/display-name": "Mongo example",
"tags": "quickstart,mongo"
}
},
"labels": {
"template": "mongo-example"
},
"message": "The following service(s) have been created in your project: ${NAME}.",
"objects": [
{
"kind": "PersistentVolumeClaim",
"apiVersion": "v1",
"metadata": {
"name": "${DATABASE_DATA_VOLUME}"
},
"spec": {
"accessModes": [
"ReadWriteOnce"
],
"resources": {
"requests": {
"storage": "${DB_VOLUME_CAPACITY}"
}
}
}
},
{
"kind": "Service",
"apiVersion": "v1",
"metadata": {
"name": "${DATABASE_SERVICE_NAME}",
"annotations": {
"description": "Exposes the database server"
}
},
"spec": {
"ports": [
{
"name": "mongodb",
"port": 27017,
"targetPort": 27017
}
],
"selector": {
"name": "${DATABASE_SERVICE_NAME}"
}
}
},
{
"kind": "DeploymentConfig",
"apiVersion": "v1",
"metadata": {
"name": "${DATABASE_SERVICE_NAME}",
"annotations": {
"description": "Defines how to deploy the database"
}
},
"spec": {
"strategy": {
"type": "Recreate"
},
"triggers": [
{
"type": "ImageChange",
"imageChangeParams": {
"automatic": true,
"containerNames": [
"mymongodb"
],
"from": {
"kind": "ImageStreamTag",
"namespace": "",
"name": "mongo:latest"
}
}
},
{
"type": "ConfigChange"
}
],
"replicas": 1,
"selector": {
"name": "${DATABASE_SERVICE_NAME}"
},
"template": {
"metadata": {
"name": "${DATABASE_SERVICE_NAME}",
"labels": {
"name": "${DATABASE_SERVICE_NAME}"
}
},
"spec": {
"volumes": [
{
"name": "${DATABASE_DATA_VOLUME}",
"persistentVolumeClaim": {
"claimName": "${DATABASE_DATA_VOLUME}"
}
}
],
"containers": [
{
"name": "mymongodb",
"image": "mongo:latest",
"ports": [
{
"containerPort": 27017
}
],
"env": [
{
"name": "MONGODB_USER",
"value": "${DATABASE_USER}"
},
{
"name": "MONGODB_PASSWORD",
"value": "${DATABASE_PASSWORD}"
},
{
"name": "MONGODB_DATABASE",
"value": "${DATABASE_NAME}"
}
],
"volumeMounts": [
{
"name": "${DATABASE_DATA_VOLUME}",
"mountPath": "/data/db"
}
],
"readinessProbe": {
"timeoutSeconds": 1,
"initialDelaySeconds": 5,
"exec": {
"command": [ "/bin/bash", "-c", "mongo --eval 'db.getName()'"]
}
},
"livenessProbe": {
"timeoutSeconds": 1,
"initialDelaySeconds": 30,
"tcpSocket": {
"port": 27017
}
},
"resources": {
"limits": {
"memory": "${MEMORY_MONGODB_LIMIT}"
}
}
}
]
}
}
}
}
],
"parameters": [
{
"name": "NAME",
"displayName": "Name",
"description": "The name",
"required": true,
"value": "mongo-example"
},
{
"name": "MEMORY_MONGODB_LIMIT",
"displayName": "Memory Limit (MONGODB)",
"required": true,
"description": "Maximum amount of memory the MONGODB container can use.",
"value": "512Mi"
},
{
"name": "DB_VOLUME_CAPACITY",
"displayName": "Volume Capacity",
"description": "Volume space available for data, e.g. 512Mi, 2Gi",
"value": "512Mi",
"required": true
},
{
"name": "DATABASE_DATA_VOLUME",
"displayName": "Volumne name for DB data",
"required": true,
"value": "mongodb-data"
},
{
"name": "DATABASE_SERVICE_NAME",
"displayName": "Database Service Name",
"required": true,
"value": "mongodb"
},
{
"name": "DATABASE_NAME",
"displayName": "Database Name",
"required": true,
"value": "test1"
},
{
"name": "DATABASE_USER",
"displayName": "Database Username",
"required": false
},
{
"name": "DATABASE_PASSWORD",
"displayName": "Database User Password",
"required": false
}
]
}
Is there any issue with my template? Is it an OpenShift issue? Where and how can I get further details about the mount problem in the OpenShift logs?
So, I think you're coming up against 2 different issues.
Your template is set up to pull the Mongo image from Docker Hub (specified by the blank "namespace" value). When trying to pull the mongo:latest image from Docker Hub in the Web UI, you are greeted by a friendly message notifying you that the Docker image is not usable because it runs as root.
The OpenShift Online Dev Preview has been having some issues related to PVCs recently (http://status.preview.openshift.com/), specifically this reported bug at the moment: https://bugzilla.redhat.com/show_bug.cgi?id=1392650. This may be the cause of some issues, as the "official" Mongo image on OpenShift is also failing to build.
I would like to direct you to an OpenShift MongoDB template; it is not the exact one used in the Developer Preview, but it should hopefully provide some good direction going forward: https://github.com/openshift/openshift-ansible/blob/master/roles/openshift_examples/files/examples/v1.4/db-templates/mongodb-persistent-template.json
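For the last part of your question (where to find more detail about the mount failure), a sketch of the usual troubleshooting commands, using the pod and project names from the error message above (adjust them to yours):
oc describe pod mongodb-1-sfg8t -n rob1
oc get events -n rob1
The Events sections of these outputs usually contain the underlying attach/mount error reported by the node.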