GKE with Ingress setup always gives status UNHEALTHY - kubernetes

To start off, I have tested the tutorial at https://cloud.google.com/kubernetes-engine/docs/tutorials/http-balancer
which works fine. I also tested the same tutorial with a TLS secret added to test HTTPS, which also worked fine.
My problems arise when I create my own image. Here are the steps I take:
The Dockerfile:
# We label our stage as "builder"
FROM node:9.4.0-alpine as builder
COPY package.json package-lock.json ./
## Storing node modules on a separate layer will prevent unnecessary npm installs at each build
RUN npm i && mkdir /srv/cs-ui && cp -R ./node_modules ./srv/cs-ui
WORKDIR /srv/cs-ui
COPY . .
## Build the angular app in production mode and store the artifacts in dist folder
RUN $(npm bin)/ng build --environment "prod"
FROM nginx
## Copy our default nginx config
COPY nginx/default.conf /etc/nginx/conf.d/
## Remove default nginx website
RUN rm -rf /usr/share/nginx/html/*
## From "builder" stage copy over the artifacts in dist folder to default nginx nginx public folder
COPY --from=builder /srv/cs-ui/dist /usr/share/nginx/html/
The Dockerfile is run with docker-compose file that looks like this:
version: '2'
services:
  cs-ui:
    image: "gcr.io/cs-micro/cs-ui:v1"
    container_name: "cs-ui"
    tty: true
    build: .
    ports:
      - "80:80"
Locally this works without any issues. The next thing I do is to push it to the Container Registry.
gcloud docker -- push gcr.io/cs-micro/cs-ui:v1
After that I create a container:
kubectl run cs-ui --image=gcr.io/cs-micro/cs-ui:v1 --port=80
Then I expose it:
kubectl expose deployment cs-ui --target-port=80 --type=NodePort
Then I run the following ingress file:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: basic-ingress
spec:
  tls:
  - secretName: tls-certificate
  backend:
    serviceName: cs-ui
    servicePort: 80
with command:
kubectl apply -f test.yaml
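For reference, the tls-certificate secret referenced in the Ingress was created beforehand from an existing certificate and key. A minimal sketch of how such a secret can be created (cert.pem and key.pem are placeholder file names):
kubectl create secret tls tls-certificate --cert=cert.pem --key=key.pem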
kubectl describe service
Name: cs-ui
Namespace: default
Labels: run=cs-ui
Annotations:
Selector: run=cs-ui
Type: NodePort
IP: 10.35.244.124
Port: 80/TCP
TargetPort: 80/TCP
NodePort: 30272/TCP
Endpoints: 10.32.0.32:80
Session Affinity: None
External Traffic Policy: Cluster
Events:
Name: kubernetes
Namespace: default
Labels: component=apiserver
provider=kubernetes
Annotations:
Selector:
Type: ClusterIP
IP: 10.35.240.1
Port: https 443/TCP
TargetPort: 443/TCP
Endpoints: 35.195.192.28:443
Session Affinity: ClientIP
Events:
kubectl describe deployment
Name: cs-ui
Namespace: default
CreationTimestamp: Thu, 25 Jan 2018 12:27:59 +0100
Labels: run=cs-ui
Annotations: deployment.kubernetes.io/revision=1
Selector: run=cs-ui
Replicas: 1 desired | 1 updated | 1 total | 1 available | 0 unavailable
StrategyType: RollingUpdate
MinReadySeconds: 0
RollingUpdateStrategy: 1 max unavailable, 1 max surge
Pod Template:
Labels: run=cs-ui
Containers:
cs-ui:
Image: gcr.io/cs-micro/cs-ui:v1
Port: 80/TCP
Environment:
Mounts:
Volumes:
Conditions:
Type Status Reason
---- ------ ------
Available True MinimumReplicasAvailable
OldReplicaSets:
NewReplicaSet: cs-ui-2929390783 (1/1 replicas created)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal ScalingReplicaSet 9m deployment-controller Scaled up replica set cs-ui-2929390783 to 1
kubectl describe ing
Name: basic-ingress
Namespace: default
Address: 35.227.220.186
Default backend: cs-ui:80 (10.32.0.32:80)
TLS:
tls-certificate terminates
Rules:
Host Path Backends
---- ---- --------
* * cs-ui:80 (10.32.0.32:80)
Annotations:
https-forwarding-rule: k8s-fws-default-basic-ingress--f5fde3efbfa51336
https-target-proxy: k8s-tps-default-basic-ingress--f5fde3efbfa51336
ssl-cert: k8s-ssl-default-basic-ingress--f5fde3efbfa51336
target-proxy: k8s-tp-default-basic-ingress--f5fde3efbfa51336
url-map: k8s-um-default-basic-ingress--f5fde3efbfa51336
backends: {"k8s-be-30272--f5fde3efbfa51336":"UNHEALTHY"}
forwarding-rule: k8s-fw-default-basic-ingress--f5fde3efbfa51336
static-ip: k8s-fw-default-basic-ingress--f5fde3efbfa51336
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal ADD 12m loadbalancer-controller default/basic-ingress
Normal CREATE 11m loadbalancer-controller ip: 35.227.220.186
Normal Service 6m (x4 over 11m) loadbalancer-controller default backend set to cs-ui:30272
After 3-5 minutes I get Unhealthy, and I have no clue why, because the setup is almost exactly the same as theirs.
I have read countless threads on what to do when the backend status is Unhealthy, but none of them have helped. One mentioned adding a firewall rule as described in this tutorial: https://cloud.google.com/compute/docs/load-balancing/health-checks, which I have added, but it did not help.
If you have any suggestions I will gladly test them.

It turned out our Angular application had a redirect on '/', which returned a 302 response. That response makes the health check fail and results in an UNHEALTHY state.
As soon as we set up a custom health check it worked.
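For anyone hitting the same problem: the GCE ingress controller can derive its health check from the pod's HTTP readiness probe, so one way to get a custom health check is to point the probe at a path that returns 200 instead of the redirecting '/'. A minimal sketch for the container spec, assuming a hypothetical /healthz endpoint served by nginx:
readinessProbe:
  httpGet:
    path: /healthz   # placeholder path that must return HTTP 200
    port: 80
  initialDelaySeconds: 5
  periodSeconds: 10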

Related

kubernetes : cert-manager/secret-for-certificate-mapper "msg"="unable to fetch certificate that owns the secret

Cert-manager/secret-for-certificate-mapper "msg"="unable to fetch certificate that owns the secret" "error"="Certificate.cert-manager.io "grafanaps-tls" not found"
So, from the investigation, I'm not able to find the grafanaps-tls certificate:
kubectl get certificates
NAME READY SECRET AGE
alertmanagerdf-tls False alertmanagerdf-tls 1y61d
prometheusps-tls False prometheusps-tls 1y58d
We have done the following: the NGINX ingress and cert-manager were outdated and no longer compatible with Kubernetes 1.22, so an upgrade of those components was initiated in order to restore pod operation.
The cmctl check api -n cert-manager command now succeeds; the cert-manager API has been upgraded to version 1.7 and orphaned secrets have been cleaned up.
Cert-manager/webhook "msg"="Detected root CA rotation - regenerating serving certificates"
After a restart the logs looked mostly clean.
From my findings, the issue is the integration of cert-manager with the Kubernetes ingress controller.
So I was mainly interested in the cert-manager configuration, particularly the ingress-shim configuration and the args section.
It appears that the SSL certificate for several servers has expired, which looks like an issue with the certificate resources or with the integration of cert-manager with the Kubernetes ingress controller.
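For context, the ingress-shim defaults are passed as args to the cert-manager controller; a minimal sketch of what that section can look like (the issuer name and kind below are placeholders for whatever ClusterIssuer actually exists in the cluster):
args:
  - --v=2
  - --default-issuer-name=letsencrypt-prod
  - --default-issuer-kind=ClusterIssuer
  - --default-issuer-group=cert-manager.io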
Config:
C:\Windows\system32>kubectl describe deployment cert-manager-cabictor -n cert-manager
Name: cert-manager-cabictor
Namespace: cert-manager
CreationTimestamp: Thu, 01 Dec 2022 18:31:02 +0530
Labels: app=cabictor
app.kubernetes.io/component=cabictor
app.kubernetes.io/instance=cert-manager
app.kubernetes.io/managed-by=Helm
app.kubernetes.io/name=cabictor
app.kubernetes.io/version=v1.7.3
helm.sh/chart=cert-manager-v1.7.3
Annotations: deployment.kubernetes.io/revision: 2
meta.helm.sh/release-name: cert-manager
meta.helm.sh/release-namespace: cert-manager
Selector: app.kubernetes.io/component=cabictor,app.kubernetes.io/instance=cert-manager,app.kubernetes.io/name=cabictor
Replicas: 1 desired | 1 updated | 1 total | 1 available | 0 unavailable
StrategyType: RollingUpdate
MinReadySeconds: 0
RollingUpdateStrategy: 25% max unavailable, 25% max surge
Pod Template:
Labels: app=cabictor
app.kubernetes.io/component=cabictor
app.kubernetes.io/instance=cert-manager
app.kubernetes.io/managed-by=Helm
app.kubernetes.io/name=cabictor
app.kubernetes.io/version=v1.7.3
helm.sh/chart=cert-manager-v1.7.3
Service Account: cert-manager-cabictor
Containers:
cert-manager:
Image: quay.io/jetstack/cert-manager-cabictor:v1.7.3
Port: <none>
Host Port: <none>
Args:
--v=2
--leader-election-namespace=kube-system
Environment:
POD_NAMESPACE: (v1:metadata.namespace)
Mounts: <none>
Volumes: <none>
Conditions:
Type Status Reason
---- ------ ------
Progressing True NewReplicaSetAvailable
Available True MinimumReplicasAvailable
OldReplicaSets: <none>
NewReplicaSet: cert-manager-cabictor-5b65bcdbbd (1/1 replicas created)
Events: <none>
I was not able to identify and fix the root cause here.
What is the problem, and how can it be resolved? Any help would be greatly appreciated.
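As a starting point, a minimal troubleshooting sketch for tracing why a Certificate stays in Ready=False (the certificate name is taken from the output above; the namespace is a placeholder, and orders/challenges only exist for ACME issuers):
kubectl describe certificate alertmanagerdf-tls -n monitoring
kubectl get certificaterequests,orders,challenges -A
kubectl logs -n cert-manager deploy/cert-manager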

Minikube Service URL not working | Windows 11 [duplicate]

I'm new to Kubernetes. I successfully created a deployment with 2 replicas of my Angular frontend application, but when I expose it with a service and try to access the service with 'minikube service service-name', the browser can't show me the application.
This is my Dockerfile:
FROM registry.gitlab.informatica.aci.it/ccsc/images/nodejs/10_15
LABEL maintainer="d.vaccaro@informatica.aci.it" name="assistenza-fo" version="v1.0.0" license=""
WORKDIR /usr/src/app
ARG PRODUCTION_MODE="false"
ENV NODE_ENV='development'
ENV HTTP_PORT=4200
COPY package*.json ./
RUN if [ "${PRODUCTION_MODE}" = "true" ] || [ "${PRODUCTION_MODE}" = "1" ]; then \
echo "Build di produzione"; \
npm ci --production ; \
else \
echo "Build di sviluppo"; \
npm ci ; \
fi
RUN npm audit fix
RUN npm install -g @angular/cli
COPY dockerize /usr/local/bin
RUN chmod +x /usr/local/bin/dockerize
COPY . .
EXPOSE 4200
CMD ng serve --host 0.0.0.0
pod description
Name: assistenza-fo-674f85c547-bzf8g
Namespace: default
Priority: 0
Node: minikube/172.17.0.2
Start Time: Sun, 19 Apr 2020 12:41:06 +0200
Labels: pod-template-hash=674f85c547
run=assistenza-fo
Annotations: <none>
Status: Running
IP: 172.18.0.6
Controlled By: ReplicaSet/assistenza-fo-674f85c547
Containers:
assistenza-fo:
Container ID: docker://ef2bfb66d22dea56b2dc0e49e875376bf1edff369274015445806451582703a0
Image: registry.gitlab.informatica.aci.it/apra/sta-r/assistenza/assistenza-fo:latest
Image ID: docker-pullable://registry.gitlab.informatica.aci.it/apra/sta-r/assistenza/assistenza-fo@sha256:8d02a3e69d6798c1ac88815ef785e05aba6e394eb21f806bbc25fb761cca5a98
Port: 4200/TCP
Host Port: 0/TCP
State: Running
Started: Sun, 19 Apr 2020 12:41:08 +0200
Ready: True
Restart Count: 0
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-zdrwg (ro)
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
default-token-zdrwg:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-zdrwg
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events: <none>
my deployment description
Name: assistenza-fo
Namespace: default
CreationTimestamp: Sun, 19 Apr 2020 12:41:06 +0200
Labels: run=assistenza-fo
Annotations: deployment.kubernetes.io/revision: 1
Selector: run=assistenza-fo
Replicas: 2 desired | 2 updated | 2 total | 2 available | 0 unavailable
StrategyType: RollingUpdate
MinReadySeconds: 0
RollingUpdateStrategy: 25% max unavailable, 25% max surge
Pod Template:
Labels: run=assistenza-fo
Containers:
assistenza-fo:
Image: registry.gitlab.informatica.aci.it/apra/sta-r/assistenza/assistenza-fo:latest
Port: 4200/TCP
Host Port: 0/TCP
Environment: <none>
Mounts: <none>
Volumes: <none>
Conditions:
Type Status Reason
---- ------ ------
Available True MinimumReplicasAvailable
Progressing True NewReplicaSetAvailable
OldReplicaSets: <none>
NewReplicaSet: assistenza-fo-674f85c547 (2/2 replicas created)
Events: <none>
and my service description
Name: assistenza-fo
Namespace: default
Labels: run=assistenza-fo
Annotations: <none>
Selector: run=assistenza-fo
Type: LoadBalancer
IP: 10.97.3.206
Port: <unset> 4200/TCP
TargetPort: 4200/TCP
NodePort: <unset> 30375/TCP
Endpoints: 172.18.0.6:4200,172.18.0.7:4200
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>
When i run the command
minikube service assistenza-fo
I get the following output:
|-----------|---------------|-------------|-------------------------|
| NAMESPACE | NAME | TARGET PORT | URL |
|-----------|---------------|-------------|-------------------------|
| default | assistenza-fo | 4200 | http://172.17.0.2:30375 |
|-----------|---------------|-------------|-------------------------|
* Opening service default/assistenza-fo in default browser...
but Chrome shows "unable to reach the site" after a timeout.
Thank you
EDIT
I created the service again, this time as a NodePort service. It is still not working. This is the service description:
Name: assistenza-fo
Namespace: default
Labels: run=assistenza-fo
Annotations: <none>
Selector: run=assistenza-fo
Type: NodePort
IP: 10.107.46.43
Port: <unset> 4200/TCP
TargetPort: 4200/TCP
NodePort: <unset> 30649/TCP
Endpoints: 172.18.0.7:4200,172.18.0.8:4200
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>
I was able to reproduce your issue.
It's actually a bug in the latest version of Minikube for Windows running the Docker driver: --driver=docker
You can see it here: Issue - minikube service not working with Docker driver on Windows 10 Pro #7644
It was patched with the merge: Pull - docker driver: Add Service & Tunnel features to windows
It is available now in Minikube v1.10.0-beta.0.
In order to make it work, download the beta version from the website:
https://github.com/kubernetes/minikube/releases/download/v1.10.0-beta.0/minikube-windows-amd64.exe
move it to your working folder and rename it to minikube.exe
C:\Kubernetes>rename minikube-windows-amd64.exe minikube.exe
C:\Kubernetes>dir
22/04/2020 21:10 <DIR> .
22/04/2020 21:10 <DIR> ..
22/04/2020 21:04 55.480.832 minikube.exe
22/04/2020 20:05 489 nginx.yaml
2 File(s) 55.481.321 bytes
If you haven't yet, stop and uninstall the older version, then start Minikube with the new binary:
C:\Kubernetes>minikube.exe start --driver=docker
* minikube v1.10.0-beta.0 on Microsoft Windows 10 Pro 10.0.18363 Build 18363
* Using the docker driver based on existing profile
* Starting control plane node minikube in cluster minikube
* Pulling base image ...
* Restarting existing docker container for "minikube" ...
* Preparing Kubernetes v1.18.0 on Docker 19.03.2 ...
- kubeadm.pod-network-cidr=10.244.0.0/16
* Enabled addons: dashboard, default-storageclass, storage-provisioner
* Done! kubectl is now configured to use "minikube"
C:\Kubernetes>kubectl get all
NAME READY STATUS RESTARTS AGE
pod/nginx-76df748b9-t6q59 1/1 Running 1 78m
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 85m
service/nginx-svc NodePort 10.100.212.15 <none> 80:31027/TCP 78m
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/nginx 1/1 1 1 78m
NAME DESIRED CURRENT READY AGE
replicaset.apps/nginx-76df748b9 1 1 1 78m
Minikube is now running on version v1.10.0-beta.0, and you can run the service as intended (note that the terminal will be blocked, because the command keeps tunneling the connection).
The browser will open automatically and your service will be available.
If you have any doubts let me know in the comments.

K8s tutorial fails on my local installation with i/o timeout

I'm working on a local Kubernetes installation with three nodes. They are installed via the geerlingguy/kubernetes Ansible role (with default settings). I've recreated the VMs from scratch multiple times. I'm following the Kubernetes tutorial at https://kubernetes.io/docs/tutorials/kubernetes-basics/explore/explore-interactive/ to get services up and running inside the cluster and am now trying to reach them.
# kubectl get nodes
NAME STATUS ROLES AGE VERSION
enceladus Ready <none> 162m v1.17.9
mimas Ready <none> 162m v1.17.9
titan Ready master 162m v1.17.9
I tried it with 1.17.9 and 1.18.6, with https://github.com/geerlingguy/ansible-role-kubernetes and https://github.com/kubernetes-sigs/kubespray, on fresh Debian Buster VMs, and with both the Flannel and Calico network plugins. There is no firewall configured.
I can deploy the kubernetes-bootcamp deployment and exec into it, but when I try to reach the pod via kubectl proxy and curl I get an error.
# kubectl create deployment kubernetes-bootcamp --image=gcr.io/google-samples/kubernetes-bootcamp:v1
# kubectl describe pods
Name: kubernetes-bootcamp-69fbc6f4cf-nq4tj
Namespace: default
Priority: 0
Node: enceladus/192.168.10.12
Start Time: Thu, 06 Aug 2020 10:53:34 +0200
Labels: app=kubernetes-bootcamp
pod-template-hash=69fbc6f4cf
Annotations: <none>
Status: Running
IP: 10.244.1.4
IPs:
IP: 10.244.1.4
Controlled By: ReplicaSet/kubernetes-bootcamp-69fbc6f4cf
Containers:
kubernetes-bootcamp:
Container ID: docker://77eae93ca1e6b574ef7b0623844374a5b2f3054075025492b708b23fc3474a45
Image: gcr.io/google-samples/kubernetes-bootcamp:v1
Image ID: docker-pullable://gcr.io/google-samples/kubernetes-bootcamp@sha256:0d6b8ee63bb57c5f5b6156f446b3bc3b3c143d233037f3a2f00e279c8fcc64af
Port: <none>
Host Port: <none>
State: Running
Started: Thu, 06 Aug 2020 10:53:35 +0200
Ready: True
Restart Count: 0
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-kkcvk (ro)
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
default-token-kkcvk:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-kkcvk
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 10s default-scheduler Successfully assigned default/kubernetes-bootcamp-69fbc6f4cf-nq4tj to enceladus
Normal Pulled 9s kubelet, enceladus Container image "gcr.io/google-samples/kubernetes-bootcamp:v1" already present on machine
Normal Created 9s kubelet, enceladus Created container kubernetes-bootcamp
Normal Started 9s kubelet, enceladus Started container kubernetes-bootcamp
Update service list
# kubectl get services
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 4d20h
I can exec curl inside the deployment. It is running.
# kubectl exec -ti kubernetes-bootcamp-69fbc6f4cf-nq4tj curl http://localhost:8080/
Hello Kubernetes bootcamp! | Running on: kubernetes-bootcamp-69fbc6f4cf-nq4tj | v=1
But when I try to curl from the master node, the response is not good:
curl http://localhost:8001/api/v1/namespaces/default/pods/kubernetes-bootcamp-69fbc6f4cf-nq4tj/proxy/
Error trying to reach service: 'dial tcp 10.244.1.4:80: i/o timeout'
The curl itself takes about 30 seconds to return. The version endpoint is available, and the proxy is running fine.
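As a sanity check, the API-server proxy URL can also name the container port explicitly (the bootcamp app listens on 8080, while the error above shows the proxy dialing port 80); a sketch:
curl http://localhost:8001/api/v1/namespaces/default/pods/kubernetes-bootcamp-69fbc6f4cf-nq4tj:8080/proxy/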
# curl http://localhost:8001/version
{
"major": "1",
"minor": "17",
"gitVersion": "v1.17.9",
"gitCommit": "4fb7ed12476d57b8437ada90b4f93b17ffaeed99",
"gitTreeState": "clean",
"buildDate": "2020-07-15T16:10:45Z",
"goVersion": "go1.13.9",
"compiler": "gc",
"platform": "linux/amd64"
}
The tutorial's kubectl describe pods output shows that the container has open ports (in my case it's <none>):
Port: 8080/TCP
Host Port: 0/TCP
OK, so I then created an apply file, bootcamp.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kubernetes-bootcamp
spec:
  replicas: 1
  selector:
    matchLabels:
      app: kubernetes-bootcamp
  template:
    metadata:
      labels:
        app: kubernetes-bootcamp
    spec:
      containers:
      - name: kubernetes-bootcamp
        image: gcr.io/google-samples/kubernetes-bootcamp:v1
        ports:
        - containerPort: 8080
          protocol: TCP
I removed the previous deployment
# kubectl delete deployments.apps kubernetes-bootcamp --force
# kubectl apply -f bootcamp.yaml
But after that I still get the same i/o timeout on the new deployment.
So, what is my problem?
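A minimal sketch of checks that would narrow this down, since the timeout points at node-to-pod (CNI/overlay) connectivity rather than at the application itself (the pod IP is taken from the describe output above; the flannel label assumes the stock kube-flannel manifest):
kubectl get pods -n kube-system -o wide      # are the CNI pods on every node healthy?
ping -c 1 10.244.1.4                         # can the master reach the pod IP at all?
kubectl logs -n kube-system -l app=flannel   # any errors about the chosen interface?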

Back-off restarting failed container In Azure AKS

A Linux container pod, using a Docker image from Azure Container Registry, keeps restarting with restartPolicy set to Always. The pod description is below.
kubectl describe pod example-pod
...
State: Waiting
Reason: CrashLoopBackOff
Last State: Terminated
Reason: Completed
Exit Code: 0
Started: Thu, 11 Jun 2020 03:27:11 +0000
Finished: Thu, 11 Jun 2020 03:27:12 +0000
...
Back-off restarting failed container
This pod is created with a secret to access the ACR registry.
The reason is that the pod completes execution successfully with exit code 0; however, it should keep listening on a particular port. The Microsoft documentation covers this on the Container Group Runtime page under the header "Container continually exits and restarts".
The deployment-example.yml file content is below.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-deployment
  namespace: development
  labels:
    app: example
spec:
  replicas: 1
  selector:
    matchLabels:
      app: example
  template:
    metadata:
      labels:
        app: example
    spec:
      containers:
      - name: example
        image: contentocr.azurecr.io/example:latest
        #command: ["ping -t localhost"]
        imagePullPolicy: Always
        ports:
        - name: http-port
          containerPort: 3000
      imagePullSecrets:
      - name: regpass
      restartPolicy: Always
      nodeSelector:
        agent: linux
---
apiVersion: v1
kind: Service
metadata:
  name: example
  namespace: development
  labels:
    app: example
spec:
  ports:
  - name: http-port
    port: 3000
    targetPort: 3000
  selector:
    app: example
  type: LoadBalancer
The output of kubectl get events is below.
3m39s Normal Scheduled pod/example-deployment-5dc964fcf8-gbm5t Successfully assigned development/example-deployment-5dc964fcf8-gbm5t to aks-agentpool-18342716-vmss000000
2m6s Normal Pulling pod/example-deployment-5dc964fcf8-gbm5t Pulling image "contentocr.azurecr.io/example:latest"
2m5s Normal Pulled pod/example-deployment-5dc964fcf8-gbm5t Successfully pulled image "contentocr.azurecr.io/example:latest"
2m5s Normal Created pod/example-deployment-5dc964fcf8-gbm5t Created container example
2m49s Normal Started pod/example-deployment-5dc964fcf8-gbm5t Started container example
2m20s Warning BackOff pod/example-deployment-5dc964fcf8-gbm5t Back-off restarting failed container
6m6s Normal SuccessfulCreate replicaset/example-deployment-5dc964fcf8 Created pod: example-deployment-5dc964fcf8-2fdt5
3m39s Normal SuccessfulCreate replicaset/example-deployment-5dc964fcf8 Created pod: example-deployment-5dc964fcf8-gbm5t
6m6s Normal ScalingReplicaSet deployment/example-deployment Scaled up replica set example-deployment-5dc964fcf8 to 1
3m39s Normal ScalingReplicaSet deployment/example-deployment Scaled up replica set example-deployment-5dc964fcf8 to 1
3m38s Normal EnsuringLoadBalancer service/example Ensuring load balancer
3m34s Normal EnsuredLoadBalancer service/example Ensured load balancer
The Dockerfile entry point is ENTRYPOINT ["npm", "start"] with CMD ["tail -f /dev/null/"].
It runs locally; implicitly, the CI="true" flag is set. However, in docker-compose stdin_open: true or tty: true has to be set, and in the Kubernetes deployment file an environment variable named CI has to be set to "true".
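In the Kubernetes deployment that translates to an env entry on the container; a minimal sketch:
env:
  - name: CI
    value: "true"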
The command below solved my problem:
az aks update -n aks-nks-k8s-cluster -g aks-nks-k8s-rg --attach-acr aksnksk8s
After executing it, the following is displayed:
Add ROLE Propagation done [###############] 100.0000%
and then Running.., followed by the response trail after some time.
Here:
aks-nks-k8s-cluster: the cluster name I created and am using
aks-nks-k8s-rg: the resource group I created and am using
aksnksk8s: the container registry I created and am using
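Afterwards, pull access from the cluster to the registry can be verified; a sketch using the same names:
az aks check-acr -n aks-nks-k8s-cluster -g aks-nks-k8s-rg --acr aksnksk8s.azurecr.io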

Kubernetes pod Troubleshoot

I deployed my container in a Kubernetes pod, and the pod and related services are up and running.
Please find the pod, service, and deployment status below:
root@jenkins-linux-vm:/home/admin/kubernetes# kubectl get pods
NAME READY STATUS RESTARTS AGE
angular-deployment-5d5fbf967c-zvzvl 1/1 Running 0 70m
root@jenkins-linux-vm:/home/admin/kubernetes# kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
angular-service NodePort 10.96.16.68 <none> 80:31000/TCP 79m
root@jenkins-linux-vm:/home/admin/kubernetes# kubectl get deployment
NAME READY UP-TO-DATE AVAILABLE AGE
angular-deployment 1/1 1 1 70m
Please find the curl output below:
root@jenkins-linux-vm:/home/admin/kubernetes# kubectl exec -it angular-deployment-5d5fbf967c-zvzvl curl 10.0.0.1:31000
curl: (7) Failed to connect to 10.0.0.1 port 31000: Connection refused
command terminated with exit code 7
However, I am not able to access my application service in the browser, as below:
https://10.0.0.1:31000
apiVersion: apps/v1
kind: Deployment
metadata:
  name: angular-deployment
spec:
  selector:
    matchLabels:
      app: frontend-app
  replicas: 1
  template:
    metadata:
      labels:
        app: frontend-app
    spec:
      containers:
      - name: frontend-app
        image: ${IMAGE_NAME}:${IMAGE_TAG}
        ports:
        - containerPort: 80
---
kind: Service
apiVersion: v1
metadata:
  name: angular-service
spec:
  selector:
    app: frontend-app
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
    nodePort: 31000
  type: NodePort
root@jenkins-linux-vm:/home/admin# kubectl describe pod angular-deployment-556c47f666-9d2x4
Name: angular-deployment-556c47f666-9d2x4
Namespace: pre-release
Priority: 0
Node: poc-worker2/10.0.0.2
Start Time: Sat, 18 Jan 2020 08:47:35 +0000
Labels: app=frontend-app
pod-template-hash=556c47f666
Annotations: <none>
Status: Running
IP: 10.32.0.8
IPs:
IP: 10.32.0.8
Controlled By: ReplicaSet/angular-deployment-556c47f666
Containers:
frontend-app:
Container ID: docker://43fea22e4c1d49e0c94fc8aca3a4b41df44b5f91f45ea29ede263c5a6bcf6503
Image: frontend-app:future-master-fix-f2d2a8bd
Image ID: docker://sha256:0099587db89de9ef999a7d1f087d4781e73c491b17e89392e92b08d2f935ad27
Port: 80/TCP
Host Port: 0/TCP
State: Running
Started: Sat, 18 Jan 2020 08:47:40 +0000
Ready: True
Restart Count: 0
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-r67p7 (ro)
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
default-token-r67p7:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-r67p7
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 21s default-scheduler Successfully assigned pre-release/angular-deployment-556c47f666-9d2x4 to poc-worker2
Normal Pulled 17s kubelet, poc-worker2 Container image "frontend-app:future-master-fix-f2d2a8bd" already present on machine
Normal Created 16s kubelet, poc-worker2 Created container frontend-app
Normal Started 16s kubelet, poc-worker2 Started container frontend-app
Please find the service for the deployment below:
root@jenkins-linux-vm:/home/admin# kubectl describe svc angular-service
Name: angular-service
Namespace: pre-release
Labels: <none>
Annotations: kubectl.kubernetes.io/last-applied-configuration:
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"name":"angular-service","namespace":"pre-release"},"spec":{"ports":[{"no...
Selector: app=frontend-app
Type: NodePort
IP: 10.96.227.143
Port: <unset> 80/TCP
TargetPort: 80/TCP
NodePort: <unset> 31000/TCP
Endpoints: 10.32.0.4:80,10.32.0.8:80
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>
Please find the Dockerfile below:
FROM node:12.2.0
# set working directory
WORKDIR /app
# add `/app/node_modules/.bin` to $PATH
ENV PATH /app/node_modules/.bin:$PATH
# install and cache app dependencies
COPY package.json /app/package.json
# add app
COPY . /app
# start app
CMD ng serve --host 0.0.0.0
Can someone please help me fix this issue?
Here I can see the issue is with your Angular Dockerfile: you are using ng serve. If you look at your dependencies in package.json you will see "@angular/cli": "*"; to have that available inside your Docker image you need to add RUN npm install, which installs all your dependencies inside the container so that ng serve can run. But ng serve is for local development; it's not a good approach here, I would say.
To identify this kind of issue, it's advisable to run the image locally first to check whether your Docker container works at all, before you deploy it onto the k8s cluster; Kubernetes is a very big universe and it takes time to identify the actual problem there.
OK, coming to the issue (I could simply add a single command to your Dockerfile and post that as my answer, but I wouldn't suggest that approach, so I'm adding the complete answer): when you deploy a frontend application, your Docker image needs to be able to serve the index.html page, which is the end product after you build your Angular or React application.
There are several ways this could be done, and several tutorials explaining it. Here is what I would suggest; your Dockerfile should look like this (the comments explain what each step does):
# Stage 0: builder, based on a Node alpine image, to build and compile your Angular code
FROM node:10-alpine as builder
WORKDIR /app
COPY package*.json /app/
# This is the thing you forgot in your Dockerfile; if you add it, it should work
RUN npm install
COPY . .
# This is the normal build; it runs ng build
RUN npm run build
# Stage 1, based on an Nginx image, to have only the compiled app inside the nginx folder to serve
FROM nginx:1.15
COPY --from=builder /app/dist/ /usr/share/nginx/html
# This copies the local nginx.conf file as default.conf so nginx serves with it
COPY ./nginx.conf /etc/nginx/conf.d/default.conf
Please make sure you have an nginx.conf file in your code, at the same level as package.json:
server {
    listen 80;
    sendfile on;
    default_type application/octet-stream;
    gzip on;
    gzip_http_version 1.1;
    gzip_disable "MSIE [1-6]\.";
    gzip_min_length 1100;
    gzip_vary on;
    gzip_proxied expired no-cache no-store private auth;
    gzip_types text/plain text/css application/json application/javascript application/x-javascript text/xml application/xml application/xml+rss text/javascript;
    gzip_comp_level 9;
    root /usr/share/nginx/html;
    location / {
        #try_files $uri $uri/ /index.html =404;
        expires -1d;
        alias /usr/share/nginx/html/;
        try_files $uri$args $uri$args/ /index.html =404;
        location ~* \.(?:ico|css|js|gif|jpe?g|png|svg|woff|woff2|ttf|eot)$ {
            add_header Access-Control-Allow-Origin *;
        }
    }
}
Make sure you run docker run on your local machine before you deploy it to the cluster.
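A minimal sketch of that local check (the image tag is a placeholder):
docker build -t frontend-app:local .
docker run --rm -p 8080:80 frontend-app:local
# then open http://localhost:8080 in a browser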
Hope this helps.
You are accessing the pod via nodeip:nodeport, which will only work if you access the pod from outside the cluster, for example with a browser.
Here is a guide on how to expose an application via NodePort. In this case the nodes of your Kubernetes cluster need to be reachable, i.e. they should have a public IP.
From within the cluster, i.e. via exec from another pod, you should use the cluster IP to access the pod, as shown in the command below.
kubectl exec -it angular-deployment-5d5fbf967c-zvzvl curl 10.96.16.68:80
I have a feeling that your docker container is not listening on port 80.
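A quick way to confirm that from inside the pod (assuming curl is available in the image; the pod name is taken from the question):
kubectl exec -it angular-deployment-5d5fbf967c-zvzvl -- curl -sv http://localhost:80/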