Cannot access service from external IP (Azure DevOps, Kubernetes)

I can get my service by running:
$ kubectl get service <service-name> --namespace <namespace name>
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service name LoadBalancer ********* ********* port numbers 16h
The service is running in Kubernetes, but I can't access it through the public IP. My service and deployment definitions are below. I am using Azure DevOps to build the container image and release it to Azure Container Registry. As you can see from the service output above, I get an external IP and a cluster IP, but when I try the external IP in a browser or with curl I get no response.
{
"kind": "Service",
"apiVersion": "v1",
"metadata": {
"name": "service-name",
"namespace": "namespace-name",
"selfLink": "*******************",
"uid": "*******************",
"resourceVersion": "1686278",
"creationTimestamp": "2019-07-15T14:12:11Z",
"labels": {
"run": "service name"
}
},
"spec": {
"ports": [
{
"protocol": "TCP",
"port": 80,
"targetPort": ****,
"nodePort": ****
}
],
"selector": {
"run": "profile-management-service"
},
"clusterIP": "**********",
"type": "LoadBalancer",
"sessionAffinity": "None",
"externalTrafficPolicy": "Cluster"
},
"status": {
"loadBalancer": {
"ingress": [
{
"ip": "*************"
}
]
}
}
}
{
"kind": "Deployment",
"apiVersion": "extensions/v1beta1",
"metadata": {
"name": "deployment-name",
"namespace": "namespace-name",
"selfLink": "*************************",
"uid": "****************************",
"resourceVersion": "1686172",
"generation": 1,
"creationTimestamp": "2019-07-15T14:12:04Z",
"labels": {
"run": "deployment-name"
},
"annotations": {
"deployment.kubernetes.io/revision": "1"
}
},
"spec": {
"replicas": 1,
"selector": {
"matchLabels": {
"run": "deployment-name"
}
},
"template": {
"metadata": {
"creationTimestamp": null,
"labels": {
"run": "deployment-name"
}
},
"spec": {
"containers": [
{
"name": "deployment-name",
"image": "dev/containername:50",
"ports": [
{
"containerPort": ****,
"protocol": "TCP"
}
],
"resources": {},
"terminationMessagePath": "/dev/termination-log",
"terminationMessagePolicy": "File",
"imagePullPolicy": "IfNotPresent"
}
],
"restartPolicy": "Always",
"terminationGracePeriodSeconds": 30,
"dnsPolicy": "ClusterFirst",
"securityContext": {},
"schedulerName": "default-scheduler"
}
},
"strategy": {
"type": "RollingUpdate",
"rollingUpdate": {
"maxUnavailable": 1,
"maxSurge": 1
}
},
"revisionHistoryLimit": 2147483647,
"progressDeadlineSeconds": 2147483647
},
"status": {
"observedGeneration": 1,
"replicas": 1,
"updatedReplicas": 1,
"readyReplicas": 1,
"availableReplicas": 1,
"conditions": [
{
"type": "Available",
"status": "True",
"lastUpdateTime": "2019-07-15T14:12:04Z",
"lastTransitionTime": "2019-07-15T14:12:04Z",
"reason": "MinimumReplicasAvailable",
"message": "Deployment has minimum availability."
}
]
}
}

Apparently there's a mismatch between the Service selector and the Deployment's pod labels.
Service selector:
"selector": {
"run": "profile-management-service"
}
Deployment pod template label:
"labels": {
"run": "deployment-name"
},
Because the selector matches no pods, the Service has no endpoints to send traffic to. Also check the targetPort of the Service; it must match the containerPort of your Deployment.
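A minimal sketch of the fix on the Service side, assuming the pods really carry the run: deployment-name label shown in the Deployment's pod template (only the selector changes; the targetPort must also equal the redacted containerPort):
"selector": {
"run": "deployment-name"
}
Equivalently, you could relabel the pod template to run: profile-management-service; either way, the Service selector and the pod labels must be identical.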

You should also add a readinessProbe and a livenessProbe to your Deployment, and after that check your firewall rules to make sure traffic to the service port is allowed.
The Kubernetes documentation has more info about liveness and readiness probes; a sketch is below.
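A minimal sketch of HTTP probes for the container in the Deployment above. The port keeps the document's **** placeholder for the redacted containerPort, and /healthz is a hypothetical path your application would have to serve:
"readinessProbe": {
"httpGet": {
"path": "/healthz",
"port": ****
},
"initialDelaySeconds": 5,
"periodSeconds": 10
},
"livenessProbe": {
"httpGet": {
"path": "/healthz",
"port": ****
},
"initialDelaySeconds": 15,
"periodSeconds": 20
}
These fields go inside the container entry of the pod template, next to image and ports.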

Related

Port Forward Not Working in MiniKube on Windows 10

I am setting up Kubernetes on my laptop (Windows 10) to work with containers and their orchestration. I created the Minikube VM using the command below and it succeeded.
minikube start --vm-driver hyperv --hyperv-virtual-switch "Primary Virtual Switch"
I am able to launch Kubernetes and the Minikube dashboard as well.
I started the Kubernetes cluster and deployed an nginx app into it. Below are the commands:
kubectl run hello-nginx --image=nginx --port=8020
kubectl expose deployment hello-nginx --type=NodePort --port=8020 --target-port=8020
I am able to view PODs and Services using the below commands.
kubectl get pods
kubectl get services
It works fine up to here: I can view the deployment, pod, and service information in the Minikube dashboard.
But when I run the command below to launch the application in the browser, the browser shows "Resource Not Found", even though the pod and service information is still visible in the Minikube dashboard.
minikube service hello-nginx
URL: http://192.168.43.20:32087/
The browser shows this error:
This website could not be found.
Error Code: INET_E_RESOURCE_NOT_FOUND
Below is the deployment definition:
{
"kind": "Deployment",
"apiVersion": "extensions/v1beta1",
"metadata": {
"name": "hello-nginx",
"namespace": "default",
"selfLink": "/apis/extensions/v1beta1/namespaces/default/deployments/hello-nginx",
"uid": "5629038e-93e5-11e9-ad2e-00155d162e0e",
"resourceVersion": "49313",
"generation": 1,
"creationTimestamp": "2019-06-21T05:28:01Z",
"labels": {
"run": "hello-nginx"
},
"annotations": {
"deployment.kubernetes.io/revision": "1"
}
},
"spec": {
"replicas": 1,
"selector": {
"matchLabels": {
"run": "hello-nginx"
}
},
"template": {
"metadata": {
"creationTimestamp": null,
"labels": {
"run": "hello-nginx"
}
},
"spec": {
"containers": [
{
"name": "hello-nginx",
"image": "nginx",
"ports": [
{
"containerPort": 8020,
"protocol": "TCP"
}
],
"resources": {},
"terminationMessagePath": "/dev/termination-log",
"terminationMessagePolicy": "File",
"imagePullPolicy": "Always"
}
],
"restartPolicy": "Always",
"terminationGracePeriodSeconds": 30,
"dnsPolicy": "ClusterFirst",
"securityContext": {},
"schedulerName": "default-scheduler"
}
},
"strategy": {
"type": "RollingUpdate",
"rollingUpdate": {
"maxUnavailable": 1,
"maxSurge": 1
}
},
"revisionHistoryLimit": 2147483647,
"progressDeadlineSeconds": 2147483647
},
"status": {
"observedGeneration": 1,
"replicas": 1,
"updatedReplicas": 1,
"readyReplicas": 1,
"availableReplicas": 1,
"conditions": [
{
"type": "Available",
"status": "True",
"lastUpdateTime": "2019-06-21T05:28:01Z",
"lastTransitionTime": "2019-06-21T05:28:01Z",
"reason": "MinimumReplicasAvailable",
"message": "Deployment has minimum availability."
}
]
}
}
Below is the ReplicaSet definition:
{
"kind": "ReplicaSet",
"apiVersion": "extensions/v1beta1",
"metadata": {
"name": "hello-nginx-76696c698f",
"namespace": "default",
"selfLink": "/apis/extensions/v1beta1/namespaces/default/replicasets/hello-nginx-76696c698f",
"uid": "562be1e8-93e5-11e9-ad2e-00155d162e0e",
"resourceVersion": "49310",
"generation": 3,
"creationTimestamp": "2019-06-21T05:28:01Z",
"labels": {
"pod-template-hash": "76696c698f",
"run": "hello-nginx"
},
"annotations": {
"deployment.kubernetes.io/desired-replicas": "1",
"deployment.kubernetes.io/max-replicas": "2",
"deployment.kubernetes.io/revision": "1"
},
"ownerReferences": [
{
"apiVersion": "apps/v1",
"kind": "Deployment",
"name": "hello-nginx",
"uid": "5629038e-93e5-11e9-ad2e-00155d162e0e",
"controller": true,
"blockOwnerDeletion": true
}
]
},
"spec": {
"replicas": 1,
"selector": {
"matchLabels": {
"pod-template-hash": "76696c698f",
"run": "hello-nginx"
}
},
"template": {
"metadata": {
"creationTimestamp": null,
"labels": {
"pod-template-hash": "76696c698f",
"run": "hello-nginx"
}
},
"spec": {
"containers": [
{
"name": "hello-nginx",
"image": "nginx",
"ports": [
{
"containerPort": 8020,
"protocol": "TCP"
}
],
"resources": {},
"terminationMessagePath": "/dev/termination-log",
"terminationMessagePolicy": "File",
"imagePullPolicy": "Always"
}
],
"restartPolicy": "Always",
"terminationGracePeriodSeconds": 30,
"dnsPolicy": "ClusterFirst",
"securityContext": {},
"schedulerName": "default-scheduler"
}
}
},
"status": {
"replicas": 1,
"fullyLabeledReplicas": 1,
"readyReplicas": 1,
"availableReplicas": 1,
"observedGeneration": 3
}
}
Now I am trying the port-forward option to route requests to the pod, but it is not working.
kubectl port-forward deployment/hello-nginx 8020:8020
I get the error below when I try to access http://127.0.0.1:8020:
Handling connection for 8020
E0622 01:07:06.306320 18888 portforward.go:331] an error occurred forwarding 8020 -> 8020: error forwarding port 8020 to pod c54d6faaa545992dce02f58490a26154134843eb7426a51e78df2cda172b514c, uid : exit status 1: 2019/06/21 08:01:18 socat[4535] E connect(5, AF=2 127.0.0.1:8020, 16): Connection refused
I have read many articles on this but couldn't find the root cause of the issue. Am I missing anything important here?
Thanks in advance for your help.
Your issue is actually unrelated to Minikube or port forwarding. You expose port 8020, but the nginx container listens on port 80, so you should use 80 instead of 8020 wherever the container port is referenced. For example:
kubectl run hello-nginx --image=nginx --port=80
That said, Minikube is not the best option on Windows. Docker Desktop is much better: everything you run on its built-in Kubernetes is then available on localhost.
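A minimal sketch of the corrected sequence, reusing the names from the question; the service port can stay 8020 as long as the target port is the container's real port, 80:
kubectl run hello-nginx --image=nginx --port=80
kubectl expose deployment hello-nginx --type=NodePort --port=8020 --target-port=80
minikube service hello-nginx
The same logic applies to port-forwarding: kubectl port-forward deployment/hello-nginx 8020:80 forwards local port 8020 to the container's port 80.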

Kubernetes metrics-server unable to add metric-resolution flag

I am using Kubernetes v1.9.7-gke.6. I am trying to edit the metrics-server deployment YAML to add the --metric-resolution flag. When I add the flag and save, the terminal reports that the edit was successful, but when I edit the metrics-server deployment again the flag I added is gone. Is there any way to edit the metrics-server deployment YAML?
Here is the deployment; it is the default one created when I create a new Kubernetes cluster on Google Cloud.
{
"apiVersion": "extensions/v1beta1",
"kind": "Deployment",
"metadata": {
"annotations": {
"deployment.kubernetes.io/revision": "12",
"kubectl.kubernetes.io/last-applied-configuration": "{\"apiVersion\":\"extensions/v1beta1\",\"kind\":\"Deployment\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"k8s-app\":\"metrics-server\",\"kubernetes.io/cluster-service\":\"true\",\"version\":\"v0.2.1\"},\"name\":\"metrics-server-v0.2.1\",\"namespace\":\"kube-system\"},\"spec\":{\"selector\":{\"matchLabels\":{\"k8s-app\":\"metrics-server\",\"version\":\"v0.2.1\"}},\"template\":{\"metadata\":{\"annotations\":{\"scheduler.alpha.kubernetes.io/critical-pod\":\"\"},\"labels\":{\"k8s-app\":\"metrics-server\",\"version\":\"v0.2.1\"},\"name\":\"metrics-server\"},\"spec\":{\"containers\":[{\"command\":[\"/metrics-server\",\"--source=kubernetes.summary_api:''\"],\"image\":\"gcr.io/google_containers/metrics-server-amd64:v0.2.1\",\"name\":\"metrics-server\",\"ports\":[{\"containerPort\":443,\"name\":\"https\",\"protocol\":\"TCP\"}]},{\"command\":[\"/pod_nanny\",\"--config-dir=/etc/config\",\"--cpu=40m\",\"--extra-cpu=0.5m\",\"--memory=40Mi\",\"--extra-memory=4Mi\",\"--threshold=5\",\"--deployment=metrics-server-v0.2.1\",\"--container=metrics-server\",\"--poll-period=300000\",\"--estimator=exponential\"],\"env\":[{\"name\":\"MY_POD_NAME\",\"valueFrom\":{\"fieldRef\":{\"fieldPath\":\"metadata.name\"}}},{\"name\":\"MY_POD_NAMESPACE\",\"valueFrom\":{\"fieldRef\":{\"fieldPath\":\"metadata.namespace\"}}}],\"image\":\"gcr.io/google_containers/addon-resizer:1.8.1\",\"name\":\"metrics-server-nanny\",\"resources\":{\"limits\":{\"cpu\":\"100m\",\"memory\":\"300Mi\"},\"requests\":{\"cpu\":\"5m\",\"memory\":\"50Mi\"}},\"volumeMounts\":[{\"mountPath\":\"/etc/config\",\"name\":\"metrics-server-config-volume\"}]}],\"serviceAccountName\":\"metrics-server\",\"tolerations\":[{\"key\":\"CriticalAddonsOnly\",\"operator\":\"Exists\"}],\"volumes\":[{\"configMap\":{\"name\":\"metrics-server-config\"},\"name\":\"metrics-server-config-volume\"}]}}}}\n"
},
"creationTimestamp": "2018-09-20T13:04:03Z",
"generation": 14,
"labels": {
"addonmanager.kubernetes.io/mode": "Reconcile",
"k8s-app": "metrics-server",
"kubernetes.io/cluster-service": "true",
"version": "v0.2.1"
},
"name": "metrics-server-v0.2.1",
"namespace": "kube-system",
"resourceVersion": "822513",
"selfLink": "/apis/extensions/v1beta1/namespaces/kube-system/deployments/metrics-server-v0.2.1",
"uid": "a5cd1f4c-bcd5-11e8-9313-42010a80005f"
},
"spec": {
"replicas": 1,
"selector": {
"matchLabels": {
"k8s-app": "metrics-server",
"version": "v0.2.1"
}
},
"strategy": {
"rollingUpdate": {
"maxSurge": 1,
"maxUnavailable": 1
},
"type": "RollingUpdate"
},
"template": {
"metadata": {
"annotations": {
"scheduler.alpha.kubernetes.io/critical-pod": ""
},
"creationTimestamp": null,
"labels": {
"k8s-app": "metrics-server",
"version": "v0.2.1"
},
"name": "metrics-server"
},
"spec": {
"containers": [
{
"command": [
"/metrics-server",
"--source=kubernetes.summary_api:''"
],
"image": "gcr.io/google_containers/metrics-server-amd64:v0.2.1",
"imagePullPolicy": "IfNotPresent",
"name": "metrics-server",
"ports": [
{
"containerPort": 443,
"name": "https",
"protocol": "TCP"
}
],
"resources": {
"limits": {
"cpu": "48m",
"memory": "104Mi"
},
"requests": {
"cpu": "48m",
"memory": "104Mi"
}
},
"terminationMessagePath": "/dev/termination-log",
"terminationMessagePolicy": "File"
},
{
"command": [
"/pod_nanny",
"--config-dir=/etc/config",
"--cpu=40m",
"--extra-cpu=0.5m",
"--memory=40Mi",
"--extra-memory=4Mi",
"--threshold=5",
"--deployment=metrics-server-v0.2.1",
"--container=metrics-server",
"--poll-period=300000",
"--estimator=exponential"
],
"env": [
{
"name": "MY_POD_NAME",
"valueFrom": {
"fieldRef": {
"apiVersion": "v1",
"fieldPath": "metadata.name"
}
}
},
{
"name": "MY_POD_NAMESPACE",
"valueFrom": {
"fieldRef": {
"apiVersion": "v1",
"fieldPath": "metadata.namespace"
}
}
}
],
"image": "gcr.io/google_containers/addon-resizer:1.8.1",
"imagePullPolicy": "IfNotPresent",
"name": "metrics-server-nanny",
"resources": {
"limits": {
"cpu": "100m",
"memory": "300Mi"
},
"requests": {
"cpu": "5m",
"memory": "50Mi"
}
},
"terminationMessagePath": "/dev/termination-log",
"terminationMessagePolicy": "File",
"volumeMounts": [
{
"mountPath": "/etc/config",
"name": "metrics-server-config-volume"
}
]
}
],
"dnsPolicy": "ClusterFirst",
"restartPolicy": "Always",
"schedulerName": "default-scheduler",
"securityContext": {},
"serviceAccount": "metrics-server",
"serviceAccountName": "metrics-server",
"terminationGracePeriodSeconds": 30,
"tolerations": [
{
"key": "CriticalAddonsOnly",
"operator": "Exists"
}
],
"volumes": [
{
"configMap": {
"defaultMode": 420,
"name": "metrics-server-config"
},
"name": "metrics-server-config-volume"
}
]
}
}
},
"status": {
"availableReplicas": 1,
"conditions": [
{
"lastTransitionTime": "2018-09-20T13:04:03Z",
"lastUpdateTime": "2018-09-20T13:04:03Z",
"message": "Deployment has minimum availability.",
"reason": "MinimumReplicasAvailable",
"status": "True",
"type": "Available"
}
],
"observedGeneration": 14,
"readyReplicas": 1,
"replicas": 1,
"updatedReplicas": 1
}
}
Editing the YAML/flags of anything in kube-system on GKE (Google Kubernetes Engine) will not work: the changes get reverted by the master's addon manager. So that part is working as intended.
It looks like the GKE addon management that also auto-manages fluentd for logging is what is causing the changes to get reverted. So the only option I can think of would be to disable the GKE addons (i.e. Cloud Logging), roll your own fluentd DaemonSet, and then configure things yourself. I recommend you visit this discussion for more information.
Additionally, take a look at this guide if you'd like to roll your own fluentd on your cluster as well.
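A sketch of how to confirm and work around the revert, assuming a hypothetical cluster name my-cluster in zone us-central1-a (adjust to your own cluster):
# The Reconcile mode label on the deployment is what makes the addon manager revert manual edits
kubectl -n kube-system get deployment metrics-server-v0.2.1 --show-labels
# Turning off the managed logging addon, as suggested above, is done via gcloud
gcloud container clusters update my-cluster --zone us-central1-a --logging-service=none
After that you can deploy your own fluentd DaemonSet that GKE no longer manages and configure it yourself.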

Kubernetes rolling update not working

I have two Kubernetes installs for different projects which, as far as I can see, have equivalent config in the areas that matter, yet the two perform rolling updates differently.
Both were installed on AWS using kops.
System 1 (k8s v1.7.0) - Kill pod in a deployment using k8s web gui, new pod is created first and then once running will terminate old pod. No downtime.
System 2 (k8s v1.8.4) - Kill pod in a deployment using k8s web gui, old pod is killed instantly and then new pod is created. Causes brief downtime.
Any suggestions or ideas as to why they behave differently and how I can get system 2 to create new pod before terminating the old one?
System 1 deployment
{
"kind": "Deployment",
"apiVersion": "extensions/v1beta1",
"metadata": {
"name": "proxy-deployment",
"namespace": "namespace",
"selfLink": "/apis/extensions/v1beta1/namespaces/namespace/deployments/proxy-deployment",
"uid": "d12778ba-8950-11e7-9e69-12f38e55b21a",
"resourceVersion": "31538492",
"generation": 7,
"creationTimestamp": "2017-08-25T04:49:45Z",
"labels": {
"app": "proxy"
},
"annotations": {
"deployment.kubernetes.io/revision": "6",
"kubectl.kubernetes.io/last-applied-configuration": "{\"apiVersion\":\"apps/v1beta1\",\"kind\":\"Deployment\",\"metadata\":{\"annotations\":{},\"name\":\"proxy-deployment\",\"namespace\":\"namespace\"},\"spec\":{\"replicas\":2,\"template\":{\"metadata\":{\"labels\":{\"app\":\"proxy\"}},\"spec\":{\"containers\":[{\"image\":\"xxxxxxxxxxx.dkr.ecr.us-east-1.amazonaws.com/nginx-proxy-xxxxxx:latest\",\"name\":\"proxy-ctr\",\"ports\":[{\"containerPort\":80},{\"containerPort\":8080}]}]}}}}\n"
}
},
"spec": {
"replicas": 1,
"selector": {
"matchLabels": {
"app": "proxy"
}
},
"template": {
"metadata": {
"creationTimestamp": null,
"labels": {
"app": "proxy",
"date": "1522386390"
}
},
"spec": {
"containers": [
{
"name": "proxy-ctr",
"image": "xxxxxxxxxxx.dkr.ecr.us-east-1.amazonaws.com/nginx-proxy-xxxxxx:latest",
"ports": [
{
"containerPort": 80,
"protocol": "TCP"
},
{
"containerPort": 8080,
"protocol": "TCP"
}
],
"resources": {},
"terminationMessagePath": "/dev/termination-log",
"terminationMessagePolicy": "File",
"imagePullPolicy": "Always"
}
],
"restartPolicy": "Always",
"terminationGracePeriodSeconds": 30,
"dnsPolicy": "ClusterFirst",
"securityContext": {},
"schedulerName": "default-scheduler"
}
},
"strategy": {
"type": "RollingUpdate",
"rollingUpdate": {
"maxUnavailable": "25%",
"maxSurge": "25%"
}
},
"revisionHistoryLimit": 2,
"progressDeadlineSeconds": 600
},
"status": {
"observedGeneration": 7,
"replicas": 1,
"updatedReplicas": 1,
"readyReplicas": 1,
"availableReplicas": 1,
"conditions": [
{
"type": "Progressing",
"status": "True",
"lastUpdateTime": "2018-03-30T05:03:01Z",
"lastTransitionTime": "2017-08-25T04:49:45Z",
"reason": "NewReplicaSetAvailable",
"message": "ReplicaSet \"proxy-deployment-1457650622\" has successfully progressed."
},
{
"type": "Available",
"status": "True",
"lastUpdateTime": "2018-06-01T06:55:12Z",
"lastTransitionTime": "2018-06-01T06:55:12Z",
"reason": "MinimumReplicasAvailable",
"message": "Deployment has minimum availability."
}
]
}
}
System 2 Deployment
{
"kind": "Deployment",
"apiVersion": "extensions/v1beta1",
"metadata": {
"name": "prodefault-deployment",
"namespace": "namespace",
"selfLink": "/apis/extensions/v1beta1/namespaces/namespace/deployments/prodefault-deployment",
"uid": "a80528c8-eb79-11e7-9364-068125440f70",
"resourceVersion": "25203392",
"generation": 10,
"creationTimestamp": "2017-12-28T02:49:00Z",
"labels": {
"app": "prodefault"
},
"annotations": {
"deployment.kubernetes.io/revision": "7",
"kubectl.kubernetes.io/last-applied-configuration": "{\"apiVersion\":\"apps/v1beta1\",\"kind\":\"Deployment\",\"metadata\":{\"annotations\":{},\"name\":\"prodefault-deployment\",\"namespace\":\"namespace\"},\"spec\":{\"replicas\":1,\"strategy\":{\"rollingUpdate\":{\"maxSurge\":\"25%\",\"maxUnavailable\":\"25%\"},\"type\":\"RollingUpdate\"},\"template\":{\"metadata\":{\"labels\":{\"app\":\"prodefault\"}},\"spec\":{\"containers\":[{\"image\":\"xxxxxxxxxxxx.dkr.ecr.us-west-2.amazonaws.com/xxxxxxxxxxx-pro-default:latest\",\"livenessProbe\":{\"httpGet\":{\"path\":\"/healthchk\",\"port\":80},\"initialDelaySeconds\":120,\"periodSeconds\":15,\"timeoutSeconds\":1},\"name\":\"prodefault-ctr\",\"ports\":[{\"containerPort\":80}],\"readinessProbe\":{\"httpGet\":{\"path\":\"/healthchk\",\"port\":80},\"initialDelaySeconds\":5,\"periodSeconds\":2,\"timeoutSeconds\":3},\"resources\":{\"limits\":{\"cpu\":\"1\",\"memory\":\"1024Mi\"},\"requests\":{\"cpu\":\"150m\",\"memory\":\"256Mi\"}},\"volumeMounts\":[{\"mountPath\":\"/var/www/html/homes\",\"name\":\"efs-pvc\"},{\"mountPath\":\"/var/xero\",\"name\":\"xero-key\",\"readOnly\":true},{\"mountPath\":\"/var/gcal\",\"name\":\"gcal-json\",\"readOnly\":true}]}],\"volumes\":[{\"name\":\"efs-pvc\",\"persistentVolumeClaim\":{\"claimName\":\"tio-pv-claim-homes\"}},{\"name\":\"xero-key\",\"secret\":{\"secretName\":\"xero-key\"}},{\"name\":\"gcal-json\",\"secret\":{\"secretName\":\"gcaljson\"}}]}}}}\n"
}
},
"spec": {
"replicas": 1,
"selector": {
"matchLabels": {
"app": "prodefault"
}
},
"template": {
"metadata": {
"creationTimestamp": null,
"labels": {
"app": "prodefault"
}
},
"spec": {
"volumes": [
{
"name": "efs-pvc",
"persistentVolumeClaim": {
"claimName": "tio-pv-claim-homes"
}
},
{
"name": "xero-key",
"secret": {
"secretName": "xero-key",
"defaultMode": 420
}
},
{
"name": "gcal-json",
"secret": {
"secretName": "gcaljson",
"defaultMode": 420
}
}
],
"containers": [
{
"name": "prodefault-ctr",
"image": "xxxxxxxxxxxx.dkr.ecr.us-west-2.amazonaws.com/xxxxxxxxxxx-pro-default:latest",
"ports": [
{
"containerPort": 80,
"protocol": "TCP"
}
],
"resources": {
"limits": {
"cpu": "1",
"memory": "1Gi"
},
"requests": {
"cpu": "150m",
"memory": "256Mi"
}
},
"volumeMounts": [
{
"name": "efs-pvc",
"mountPath": "/var/www/html/homes"
},
{
"name": "xero-key",
"readOnly": true,
"mountPath": "/var/xero"
},
{
"name": "gcal-json",
"readOnly": true,
"mountPath": "/var/gcal"
}
],
"livenessProbe": {
"httpGet": {
"path": "/healthchk",
"port": 80,
"scheme": "HTTP"
},
"initialDelaySeconds": 120,
"timeoutSeconds": 1,
"periodSeconds": 15,
"successThreshold": 1,
"failureThreshold": 3
},
"readinessProbe": {
"httpGet": {
"path": "/healthchk",
"port": 80,
"scheme": "HTTP"
},
"initialDelaySeconds": 5,
"timeoutSeconds": 3,
"periodSeconds": 2,
"successThreshold": 1,
"failureThreshold": 3
},
"terminationMessagePath": "/dev/termination-log",
"terminationMessagePolicy": "File",
"imagePullPolicy": "Always"
}
],
"restartPolicy": "Always",
"terminationGracePeriodSeconds": 30,
"dnsPolicy": "ClusterFirst",
"securityContext": {},
"schedulerName": "default-scheduler"
}
},
"strategy": {
"type": "RollingUpdate",
"rollingUpdate": {
"maxUnavailable": "25%",
"maxSurge": "25%"
}
},
"revisionHistoryLimit": 2,
"progressDeadlineSeconds": 600
},
"status": {
"observedGeneration": 10,
"replicas": 1,
"updatedReplicas": 1,
"readyReplicas": 1,
"availableReplicas": 1,
"conditions": [
{
"type": "Progressing",
"status": "True",
"lastUpdateTime": "2018-01-15T06:07:52Z",
"lastTransitionTime": "2017-12-28T03:00:16Z",
"reason": "NewReplicaSetAvailable",
"message": "ReplicaSet \"prodefault-deployment-9685f46d4\" has successfully progressed."
},
{
"type": "Available",
"status": "True",
"lastUpdateTime": "2018-06-13T07:12:41Z",
"lastTransitionTime": "2018-06-13T07:12:41Z",
"reason": "MinimumReplicasAvailable",
"message": "Deployment has minimum availability."
}
]
}
}
I noticed both deployments have the following rolling update strategy defined:
"strategy": {
"type": "RollingUpdate",
"rollingUpdate": {
"maxUnavailable": "25%",
"maxSurge": "25%"
}
},
With this strategy, a normal rolling update triggered through 'set image' or 'kubectl apply' should only terminate the old pod after the new pod is created.
So the different behavior between the two systems may come from the dashboard. I guess you are running different dashboard versions on the two systems: according to the dashboard compatibility matrix, Kubernetes v1.7 needs dashboard 1.7 and Kubernetes v1.8 needs dashboard 1.8. Different dashboard versions may treat 'kill pod' as a different action; I don't know.
If you are running dashboard 1.7 on your v1.8 system, try upgrading the dashboard first.
And lastly, don't use 'kill pod' to do a rolling update; a sketch of a proper rolling update is below.
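A minimal sketch of triggering a rolling update from the CLI instead, using the deployment and container names from the System 2 manifest above; the :v2 tag is hypothetical:
kubectl -n namespace set image deployment/prodefault-deployment prodefault-ctr=xxxxxxxxxxxx.dkr.ecr.us-west-2.amazonaws.com/xxxxxxxxxxx-pro-default:v2
kubectl -n namespace rollout status deployment/prodefault-deployment
With maxSurge: 25% and maxUnavailable: 25% on a single-replica deployment, the controller creates the new pod first and removes the old one only once the new one reports ready.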

How do I do this deployment by command line

I can do a deploy like this, but cannot do it via command line.
I was looking at doing it like this
kubectl create -f kubernetes-rc.json
{
"kind": "ReplicationController",
"apiVersion": "v1",
"metadata": {
"name": "foo-frontend-rc",
"labels": {
"www": true
},
"namespace": "foo"
},
"spec": {
"replicas": 1,
"template": {
"metadata": {
"labels": {
"app": "foo-frontend"
}
},
"spec": {
"containers": [
{
"name": "foo-frontend",
"image": "gcr.io/atomic-griffin-130023/foo-frontend:b3fc862",
"ports": [
{
"containerPort": 3009,
"protocol": "TCP"
}
],
"imagePullPolicy": "IfNotPresent"
}
],
"restartPolicy": "Always",
"dnsPolicy": "ClusterFirst"
}
}
}
}
and
kubectl create -f kubernetes-service.json
{
"kind": "Service",
"apiVersion": "v1",
"metadata": {
"name": "foo-frontend-service"
},
"spec": {
"selector": {
"app": "foo-frontend-rc"
},
"ports": [
{
"protocol": "TCP",
"port": 80,
"targetPort": 3009
}
]
}
}
to no avail. It creates the rc, but it won’t expose the service externally.
Your service's selector is wrong. It should be selecting a label from the pod template, not a label on the RC itself.
If you change the following in your service:
"selector": {
"app": "foo-frontend-rc"
},
to:
"selector": {
"app": "foo-frontend"
},
it should fix it.
Update
Change your service definition to
{
"kind": "Service",
"apiVersion": "v1",
"metadata": {
"name": "foo-frontend-service"
},
"spec": {
"selector": {
"app": "foo-frontend"
},
"ports": [
{
"protocol": "TCP",
"port": 80,
"targetPort": 3009,
"nodePort": 30009
}
],
"type": "LoadBalancer"
}
}
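To apply the updated Service and watch for the external IP (a sketch using the file and service names from the question):
kubectl delete service foo-frontend-service
kubectl create -f kubernetes-service.json
kubectl get service foo-frontend-service -w
Once EXTERNAL-IP is populated, traffic to port 80 of that IP is forwarded to containerPort 3009 on the pods, and kubectl get endpoints foo-frontend-service should list the pod IPs, confirming the selector matches.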

gCloud Container (GKE) Kubernetes Service Port 8000 unreachable

I am getting this error when I try to access the HTTP port exposed on the public IP:
RajRajen:lb4btest rajrajen$ curl http://104.154.84.143:8000
curl: (56) Recv failure: Connection reset by peer
RajRajen:lb4btest rajrajen$ telnet 104.154.84.143 8000
Trying 104.154.84.143...
Connected to 143.84.154.104.bc.googleusercontent.com.
Escape character is '^]'.
Connection closed by foreign host.
RajRajen:lb4btest rajrajen$
(Please note the IP above is just representative; it may change when I redeploy, but the problem does not.)
I create the service from my JSON file:
RajRajen:lb4btest rajrajen$ kubectl create -f middleware-service.json
services/lb4b-api-v9
And here is the ReplicationController JSON file:
{
"kind": "ReplicationController",
"apiVersion": "v1",
"metadata": {
"name": "lb4b-api-v9",
"labels": {
"app": "lb4bapi",
"tier": "middleware"
}
},
"spec": {
"replicas": 1,
"selector": {
"app": "lb4bapi",
"tier": "middleware"
},
"template": {
"metadata": {
"labels": {
"app": "lb4bapi",
"tier": "middleware"
}
},
"spec": {
"containers": [
{
"name": "lb4bapicontainer",
"image": "gcr.io/helloworldnodejs-1119/myproject",
"resources": {
"requests": {
"cpu": "500m",
"memory": "128Mi"
}
},
"env": [
{
"name": "GET_HOSTS_FROM",
"value": "dns"
},
{
"name": "PORT",
"value": "8000"
}
],
"ports": [
{
"name": "middleware",
"containerPort": 8000,
"hostPort": 8000
}
]
}
]
}
}
}
}
And here is the Service JSON file:
{
"kind": "Service",
"apiVersion": "v1",
"metadata": {
"name": "lb4b-api-v9",
"labels": {
"name": "lb4b-api-v9",
"app": "lb4bmiddleware",
"tier": "middleware"
}
},
"spec": {
"type": "LoadBalancer",
"selector": {
"name": "lb4b-api-v9",
"app": "lb4bmiddleware",
"tier": "middleware"
},
"ports": [
{
"protocol": "TCP",
"port": 8000
}
]
}
}
The Docker container runs a Node.js application as a non-root user, as required by pm2:
ENTRYPOINT ["pm2"]
CMD ["start", "app.js", "--no-daemon"]
I am perfectly able to curl the pod's local IP (curl http://podIP:podPort) from inside the pod's container as well as from the node.
But I am unable to curl http://serviceLocalIP:8000 from the node.
Can you please give some suggestions to make this work?
Thanks in advance.
This issue is resolved. It was fixed by following the troubleshooting guide's section on service endpoints, in particular keeping the Service selector identical to the pod labels.
https://cloud.google.com/container-engine/docs/debugging/
Search for "My service is missing endpoints".
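A quick way to confirm the fix, using the names from the manifests below:
kubectl get endpoints lb4b-api-v9
kubectl describe service lb4b-api-v9
With the selector matching the pod labels, the ENDPOINTS column lists the pod IP and port 8000; before the fix it was empty, which is why the load balancer had nothing to forward to.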
Solution in Controller.json
{
"kind": "ReplicationController",
"apiVersion": "v1",
"metadata": {
"name": "lb4b-api-v9",
"labels": {
"name": "lb4b-api-v9",
"app": "lb4bmiddleware",
"tier": "middleware"
}
},
"spec": {
"replicas": 1,
"selector": {
**"name": "lb4b-api-v9",
"app": "lb4bmiddleware",
"tier": "middleware"**
},
"template": {
"metadata": {
"labels": {
"name": "lb4b-api-v9",
"app": "lb4bmiddleware",
"tier": "middleware"
}
},
"spec": {
"containers": [
{
"name": "lb4b-api-v9",
"image": "gcr.io/myprojectid/myproect",
"resources": {
"requests": {
"cpu": "500m",
"memory": "128Mi"
}
},
"env": [
{
"name": "GET_HOSTS_FROM",
"value": "dns"
},
{
"name": "PORT",
"value": "8000"
}
],
"ports": [
{
"name": "middleware",
"containerPort": 8000,
"hostPort": 8000
}
]
}
]
}
}
}
}
And in Service.json
{
"kind": "Service",
"apiVersion": "v1",
"metadata": {
"name": "lb4b-api-v9",
"labels": {
"name": "lb4b-api-v9",
"app": "lb4bmiddleware",
"tier": "middleware"
}
},
"spec": {
"type": "LoadBalancer",
"selector": {
**"name": "lb4b-api-v9",
"app": "lb4bmiddleware",
"tier": "middleware"**
},
"ports": [
{
"protocol": "TCP",
"port": 8000
}
]
}
}
That's all.