Kubernetes Cluster Master/Worker Nodes

I am trying to create a Kubernetes cluster; this cluster will contain three nodes. On the master node I installed and configured kubeadm and kubelet and deployed my system there (a web application developed in Laravel). The worker nodes joined the master without any problem, and I deployed my application to PHP-FPM pods and created services and a Horizontal Pod Autoscaler.
This is my service:
PHP LoadBalancer 10.108.218.232 <pending> 9000:30026/TCP 15h app=php
These are my pods:
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
qsinavphp-5b67996888-9clxp 1/1 Running 0 40m 10.244.0.4 taishan <none> <none>
qsinavphp-5b67996888-fnv7c 1/1 Running 0 43m 10.244.0.12 kubernetes-master <none> <none>
qsinavphp-5b67996888-gbtdw 1/1 Running 0 40m 10.244.0.3 taishan <none> <none>
qsinavphp-5b67996888-l6ghh 1/1 Running 0 33m 10.244.0.2 taishan <none> <none>
qsinavphp-5b67996888-ndbc8 1/1 Running 0 43m 10.244.0.11 kubernetes-master <none> <none>
qsinavphp-5b67996888-qgdbc 1/1 Running 0 43m 10.244.0.10 kubernetes-master <none> <none>
qsinavphp-5b67996888-t97qm 1/1 Running 0 43m 10.244.0.13 kubernetes-master <none> <none>
qsinavphp-5b67996888-wgrzb 1/1 Running 0 43m 10.244.0.14 kubernetes-master <none> <none>
The worker node is taishan, and the master is kubernetes-master.
This is my nginx config, which sends requests to the PHP service:
server {
    listen 80;
    listen 443 ssl;
    server_name k8s.example.com;
    root /var/www/html/Test/project-starter/public;
    ssl_certificate "/var/www/cert/example.cer";
    ssl_certificate_key "/var/www/cert/example.key";
    add_header X-Frame-Options "SAMEORIGIN";
    add_header X-XSS-Protection "1; mode=block";
    add_header X-Content-Type-Options "nosniff";
    index index.php;
    charset utf-8;
    # if ($scheme = http) {
    #     return 301 https://$server_name$request_uri;
    # }
    ssl_protocols TLSv1.2;
    ssl_ciphers 'ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES25>
    ssl_prefer_server_ciphers on;
    location / {
        try_files $uri $uri/ /index.php?$query_string;
    }
    location = /favicon.ico { access_log off; log_not_found off; }
    location = /robots.txt { access_log off; log_not_found off; }
    error_page 404 /index.php;
    location ~ [^/]\.php(/|$) {
        fastcgi_split_path_info ^(.+\.php)(/.+)$;
        fastcgi_index index.php;
        fastcgi_pass 10.108.218.232:9000;
        include fastcgi_params;
        fastcgi_param PATH_INFO $fastcgi_path_info;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    }
    location ~ /\.(?!well-known).* {
        deny all;
    }
}
The problem is that I have 3 pods on the worker node and 5 pods on the master node, but no requests go to the worker's pods; all requests go to the master.
Both of my nodes are in Ready status:
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
kubernetes-master Ready control-plane,master 15h v1.20.4 10.14.0.58 <none> Ubuntu 20.04.1 LTS 5.4.0-70-generic docker://19.3.8
taishan Ready <none> 79m v1.20.5 10.14.2.66 <none> Ubuntu 20.04.1 LTS 5.4.0-42-generic docker://19.3.8
This is my kubectl describe service php result:
Name: php
Namespace: default
Labels: tier=backend
Annotations: <none>
Selector: app=php
Type: LoadBalancer
IP Families: <none>
IP: 10.108.218.232
IPs: 10.108.218.232
Port: <unset> 9000/TCP
TargetPort: 9000/TCP
NodePort: <unset> 30026/TCP
Endpoints: 10.244.0.10:9000,10.244.0.11:9000,10.244.0.12:9000 + 7 more...
Session Affinity: None
External Traffic Policy: Cluster
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Type 48m service-controller ClusterIP -> LoadBalancer
This is the YAML file which I am using to create the deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: php
  name: qsinavphp
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: php
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: php
    spec:
      containers:
      - name: taishan-php-fpm
        image: starking8b/taishanphp:last
        imagePullPolicy: Never
        ports:
        - containerPort: 9000
        volumeMounts:
        - name: qsinav-nginx-config-volume
          mountPath: /usr/local/etc/php-fpm.d/www.conf
          subPath: www.conf
        - name: qsinav-nginx-config-volume
          mountPath: /usr/local/etc/php/conf.d/docker-php-memlimit.ini
          subPath: php-memory
        - name: qsinav-php-config-volume
          mountPath: /usr/local/etc/php/php.ini-production
          subPath: php.ini
        - name: qsinav-php-config-volume
          mountPath: /usr/local/etc/php/php.ini-development
          subPath: php.ini
        - name: qsinav-php-config-volume
          mountPath: /usr/local/etc/php-fpm.conf
          subPath: php-fpm.conf
        - name: qsinav-www-storage
          mountPath: /var/www/html/Test/qSinav-starter
        resources:
          limits:
            cpu: 4048m
          requests:
            cpu: 4048m
      restartPolicy: Always
      serviceAccountName: ""
      volumes:
      - name: qsinav-www-storage
        persistentVolumeClaim:
          claimName: qsinav-pv-www-claim
      - name: qsinav-nginx-config-volume
        configMap:
          name: qsinav-nginx-config
      - name: qsinav-php-config-volume
        configMap:
          name: qsinav-php-config
And this is my service YAML file:
apiVersion: v1
kind: Service
metadata:
  name: php
  labels:
    tier: backend
spec:
  selector:
    app: php
  ports:
  - protocol: TCP
    port: 9000
  type: LoadBalancer
I am not sure where my error is, so please help me solve this problem.

Actually the problem was with the Flannel network: it was not able to make connections between the nodes. I solved it by installing the Weave Net plugin, which is working fine now, by applying this command:
kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')"

Below I have added notes from a basic bare-metal k8s installation.
##### Creating ssh keys
From master node
`ssh-keygen`
Copy the content of `~/.ssh/id_rsa.pub`.
Log in to the other servers and paste the copied key into `~/.ssh/authorized_keys`.
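If `ssh-copy-id` is available, the same key distribution can be done with one command per server (the user and host names below are placeholders):
`ssh-copy-id user@worker-1`
`ssh-copy-id user@worker-2`
This simply appends `~/.ssh/id_rsa.pub` to the remote `~/.ssh/authorized_keys`.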
Follow these steps on all servers, master and workers.
`sudo apt-get install python`
`sudo apt install python3-pip`
Adding Ansible
`sudo apt-add-repository ppa:ansible/ansible`
`sudo apt update`
`sudo apt-get install ansible -y`
[Reference](https://www.techrepublic.com/article/how-to-install-ansible-on-ubuntu-server-18-04/)
### Install Kubernetes
`sudo apt-get update`
`sudo apt-get install docker.io`
`sudo systemctl enable docker`
`curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add`
`sudo apt-add-repository "deb http://apt.kubernetes.io/ kubernetes-xenial main"`
`sudo apt-get install kubeadm kubelet kubectl`
`sudo apt-mark hold kubeadm kubelet kubectl`
For more details, please [refer here](https://phoenixnap.com/kb/install-kubernetes-on-ubuntu)
### Installing Kubespray
`git clone https://github.com/kubernetes-incubator/kubespray.git`
`cd kubespray`
`sudo pip3 install -r requirements.txt`
`cp -rfp inventory/sample inventory/mycluster`
`declare -a IPS=(10.10.1.3 10.10.1.4 10.10.1.5)`
Please put your IP addresses here separated with a space.
`CONFIG_FILE=inventory/mycluster/hosts.yaml python3 contrib/inventory_builder/inventory.py ${IPS[@]}`
`ansible-playbook -i inventory/mycluster/hosts.yaml --become --become-user=root cluster.yml`
For non-root user access:
`ansible-playbook -i inventory/mycluster/hosts.yaml --become --become-user=root cluster.yml --extra-vars "ansible_sudo_pass=password"`
This will take around 15 minutes to run successfully. If `root` SSH access is not working properly, this will fail; please check the key-sharing step again.
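Once the playbook completes, a minimal sanity check from the master node (assuming `kubectl` is already configured there, see the kubeconfig steps below) is:
`kubectl get nodes -o wide`
`kubectl get pods -n kube-system`
All nodes should report Ready and the kube-system pods should be Running.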
[10 simple steps](https://dzone.com/articles/kubespray-10-simple-steps-for-installing-a-product)
[Add a node to existing cluster](https://www.serverlab.ca/tutorials/containers/kubernetes/how-to-add-workers-to-kubernetes-clusters/)
[kubelet debug](https://stackoverflow.com/questions/56463783/how-to-start-kubelet-service)
### Possible Errors
`kubectl get nodes`
> The connection to the server localhost:8080 was refused - did you specify the right host or port?
Perform the following as a normal (non-root) user:
`mkdir -p $HOME/.kube`
`sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config`
`sudo chown $(id -u):$(id -g) $HOME/.kube/config`
If you are on a worker node, you will have to use `scp` to get `/etc/kubernetes/admin.conf` from the master node. The master node may have this problem too; if so, please do these steps locally as a normal user.
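For the worker-node case, the copy could look roughly like this (the user and host are placeholders, and reading `/etc/kubernetes/admin.conf` on the master usually requires root there):
`mkdir -p $HOME/.kube`
`scp user@master-node:/etc/kubernetes/admin.conf $HOME/.kube/config`
`sudo chown $(id -u):$(id -g) $HOME/.kube/config`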
[Refer](https://www.edureka.co/community/18633/error-saying-connection-server-localhost-refused-specify)
## Installing MetalLB
kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.9.3/manifests/namespace.yaml
kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.9.3/manifests/metallb.yaml
# On first install only
kubectl create secret generic -n metallb-system memberlist --from-literal=secretkey="$(openssl rand -base64 128)"
[Official Installation guide](https://metallb.universe.tf/installation/)
### Configuring L2 config
sachith@master:~$ cat << EOF | kubectl create -f -
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 192.168.1.19-192.168.1.29 # Preferred IP range.
EOF
Verify installation success using: `kubectl describe configmap config -n metallb-system`
This will install two components:
Controller: responsible for assigning IPs.
Speaker: announces the assigned IPs so that traffic can reach the services through the load balancer.
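To verify the whole setup end to end (service names depend on your cluster):
kubectl get pods -n metallb-system   # controller and speaker pods should be Running
kubectl get svc --all-namespaces | grep LoadBalancer   # LoadBalancer services should now receive an address from the pool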

Related

linkerd Top feature only shows /healthz requests

I am doing Lab 7.2, Service Mesh and Ingress Controller, from the Kubernetes Developer course from the Linux Foundation, and there is a problem I am facing: the Top feature only shows the /healthz requests.
It is supposed to show / requests too, but it does not. I would really like to troubleshoot it, but I have no idea how to even approach it.
More details
Following the course instructions I have:
A k8s cluster deployed on two GCE VMs
linkerd
nginx ingress controller
A simple LoadBalancer service off the httpd image. In effect, this is a NodePort service, since the LoadBalancer is never provisioned. The name is secondapp
A simple ingress object routing to the secondapp service.
I have no idea what information is useful to troubleshoot the issue. Here is some that I can think of:
Setup
Linkerd version
student@master:~$ linkerd version
Client version: stable-2.11.1
Server version: stable-2.11.1
student@master:~$
nginx ingress controller version
student@master:~$ helm list
NAME NAMESPACE REVISION UPDATED STATUS CHART APP VERSION
myingress default 1 2022-09-28 02:09:35.031108611 +0000 UTC deployed ingress-nginx-4.2.5 1.3.1
student@master:~$
The service list
student@master:~$ k get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 7d4h
myingress-ingress-nginx-controller LoadBalancer 10.106.67.139 <pending> 80:32144/TCP,443:32610/TCP 62m
myingress-ingress-nginx-controller-admission ClusterIP 10.107.109.117 <none> 443/TCP 62m
nginx ClusterIP 10.105.88.244 <none> 443/TCP 3h42m
registry ClusterIP 10.110.129.139 <none> 5000/TCP 3h42m
secondapp LoadBalancer 10.105.64.242 <pending> 80:32000/TCP 111m
student@master:~$
Verifying that the ingress controller is known to linkerd
student@master:~$ k get ds myingress-ingress-nginx-controller -o json | jq .spec.template.metadata.annotations
{
  "linkerd.io/inject": "ingress"
}
student@master:~$
The secondapp pod
apiVersion: v1
kind: Pod
metadata:
  name: secondapp
  labels:
    example: second
spec:
  containers:
  - name: webserver
    image: httpd
  - name: busy
    image: busybox
    command:
    - sleep
    - "3600"
The secondapp service
student@master:~$ k get svc secondapp -o yaml
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: "2022-09-28T01:21:00Z"
  name: secondapp
  namespace: default
  resourceVersion: "433221"
  uid: 9266f000-5582-4796-ba73-02375f56ce2b
spec:
  allocateLoadBalancerNodePorts: true
  clusterIP: 10.105.64.242
  clusterIPs:
  - 10.105.64.242
  externalTrafficPolicy: Cluster
  internalTrafficPolicy: Cluster
  ipFamilies:
  - IPv4
  ipFamilyPolicy: SingleStack
  ports:
  - nodePort: 32000
    port: 80
    protocol: TCP
    targetPort: 80
  selector:
    example: second
  sessionAffinity: None
  type: LoadBalancer
status:
  loadBalancer: {}
student@master:~$
The ingress object
student@master:~$ k get ingress
NAME CLASS HOSTS ADDRESS PORTS AGE
ingress-test <none> www.example.com 80 65m
student@master:~$ k get ingress ingress-test -o yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: nginx
  creationTimestamp: "2022-09-28T02:20:03Z"
  generation: 1
  name: ingress-test
  namespace: default
  resourceVersion: "438934"
  uid: 1952a816-a3f3-42a4-b842-deb56053b168
spec:
  rules:
  - host: www.example.com
    http:
      paths:
      - backend:
          service:
            name: secondapp
            port:
              number: 80
        path: /
        pathType: ImplementationSpecific
status:
  loadBalancer: {}
student@master:~$
Testing
secondapp
student#master:~$ curl "$(curl ifconfig.io):$(k get svc secondapp '--template={{(index .spec.ports 0).nodePort}}')"
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 15 100 15 0 0 340 0 --:--:-- --:--:-- --:--:-- 348
<html><body><h1>It works!</h1></body></html>
student#master:~$
through the ingress controller
student@master:~$ url="$(curl ifconfig.io):$(k get svc myingress-ingress-nginx-controller '--template={{(index .spec.ports 0).nodePort}}')"
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 15 100 15 0 0 319 0 --:--:-- --:--:-- --:--:-- 319
student@master:~$ curl -H "Host: www.example.com" $url
<html><body><h1>It works!</h1></body></html>
student@master:~$
And without the Host header:
student@master:~$ curl $url
<html>
<head><title>404 Not Found</title></head>
<body>
<center><h1>404 Not Found</h1></center>
<hr><center>nginx</center>
</body>
</html>
student@master:~$
And finally the linkerd dashboard Top snapshot:
Where are the GET / requests?
EDIT 1
So on the Linkerd Slack someone suggested having a look at https://linkerd.io/2.12/tasks/using-ingress/#nginx, and that made me examine my pods more carefully. It turns out one of the nginx-ingress pods could not start, and it is clearly due to the linkerd injection. Please observe:
Before linkerd
student@master:~$ k get pod
NAME READY STATUS RESTARTS AGE
myingress-ingress-nginx-controller-gbmbg 1/1 Running 0 19m
myingress-ingress-nginx-controller-qtdhw 1/1 Running 0 3m6s
secondapp 2/2 Running 4 (13m ago) 12h
student@master:~$
After linkerd
student@master:~$ k get ds myingress-ingress-nginx-controller -o yaml | linkerd inject --ingress - | k apply -f -
daemonset "myingress-ingress-nginx-controller" injected
daemonset.apps/myingress-ingress-nginx-controller configured
student@master:~$
And checking the pods:
student@master:~$ k get pod
NAME READY STATUS RESTARTS AGE
myingress-ingress-nginx-controller-gbmbg 1/1 Running 0 40m
myingress-ingress-nginx-controller-xhj5m 1/2 Running 8 (5m59s ago) 17m
secondapp 2/2 Running 4 (34m ago) 12h
student@master:~$
student@master:~$ k describe pod myingress-ingress-nginx-controller-xhj5m |tail
Normal Created 19m kubelet Created container linkerd-proxy
Normal Started 19m kubelet Started container linkerd-proxy
Normal Pulled 18m (x2 over 19m) kubelet Container image "registry.k8s.io/ingress-nginx/controller:v1.3.1@sha256:54f7fe2c6c5a9db9a0ebf1131797109bb7a4d91f56b9b362bde2abd237dd1974" already present on machine
Normal Created 18m (x2 over 19m) kubelet Created container controller
Normal Started 18m (x2 over 19m) kubelet Started container controller
Warning FailedPreStopHook 18m kubelet Exec lifecycle hook ([/wait-shutdown]) for Container "controller" in Pod "myingress-ingress-nginx-controller-xhj5m_default(93dd0189-091f-4c56-a197-33991932d66d)" failed - error: command '/wait-shutdown' exited with 137: , message: ""
Warning Unhealthy 18m (x6 over 19m) kubelet Readiness probe failed: HTTP probe failed with statuscode: 502
Normal Killing 18m kubelet Container controller failed liveness probe, will be restarted
Warning Unhealthy 14m (x30 over 19m) kubelet Liveness probe failed: HTTP probe failed with statuscode: 502
Warning BackOff 4m29s (x41 over 14m) kubelet Back-off restarting failed container
student@master:~$
I will process the link I was given on the linkerd slack and update this post with any new findings.
The solution was provided by the user Axenow on the linkerd2 Slack forum. The problem is that ingress-nginx cannot share a namespace with the services it provides the ingress functionality for. In my case all of them were in the default namespace.
To quote Axenow:
When you deploy nginx, by default it sends traffic to the pod directly.
To fix it you have to make this configuration:
https://linkerd.io/2.12/tasks/using-ingress/#nginx
To elaborate, one has to update the values.yaml file of the downloaded ingress-nginx helm chart to make sure the following is true:
controller:
  replicaCount: 2
  service:
    externalTrafficPolicy: Cluster
  podAnnotations:
    linkerd.io/inject: enabled
And install the controller in a dedicated namespace:
helm upgrade --install --create-namespace --namespace ingress-nginx -f values.yaml ingress-nginx ingress-nginx/ingress-nginx
(Having uninstalled the previous installation, of course)
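After reinstalling into the dedicated namespace, a quick way to confirm the fix (a sketch, assuming the ingress-nginx namespace name used above) is to check that each controller pod reports both containers ready and that the injected proxies pass the Linkerd data-plane checks:
# each controller pod should now show 2/2 Ready (controller + linkerd-proxy)
kubectl get pods -n ingress-nginx
# verify the injected proxies in that namespace are healthy
linkerd check --proxy -n ingress-nginx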

Kubernetes pod Troubleshoot

I deployed my container in a Kubernetes pod, and the pod and related services are up and running.
Please find below the pod, service, and deployment status:
root@jenkins-linux-vm:/home/admin/kubernetes# kubectl get pods
NAME READY STATUS RESTARTS AGE
angular-deployment-5d5fbf967c-zvzvl 1/1 Running 0 70m
root@jenkins-linux-vm:/home/admin/kubernetes# kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
angular-service NodePort 10.96.16.68 <none> 80:31000/TCP 79m
root@jenkins-linux-vm:/home/admin/kubernetes# kubectl get deployment
NAME READY UP-TO-DATE AVAILABLE AGE
angular-deployment 1/1 1 1 70m
Please find the curl access below:
root@jenkins-linux-vm:/home/admin/kubernetes# kubectl exec -it angular-deployment-5d5fbf967c-zvzvl curl 10.0.0.1:31000
curl: (7) Failed to connect to 10.0.0.1 port 31000: Connection refused
command terminated with exit code 7
However, I am also not able to access my application service in the browser, like below:
https://10.0.0.1:31000
apiVersion: apps/v1
kind: Deployment
metadata:
  name: angular-deployment
spec:
  selector:
    matchLabels:
      app: frontend-app
  replicas: 1
  template:
    metadata:
      labels:
        app: frontend-app
    spec:
      containers:
      - name: frontend-app
        image: ${IMAGE_NAME}:${IMAGE_TAG}
        ports:
        - containerPort: 80
---
kind: Service
apiVersion: v1
metadata:
  name: angular-service
spec:
  selector:
    app: frontend-app
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
    nodePort: 31000
  type: NodePort
root@jenkins-linux-vm:/home/admin# kubectl describe pod angular-deployment-556c47f666-9d2x4
Name: angular-deployment-556c47f666-9d2x4
Namespace: pre-release
Priority: 0
Node: poc-worker2/10.0.0.2
Start Time: Sat, 18 Jan 2020 08:47:35 +0000
Labels: app=frontend-app
pod-template-hash=556c47f666
Annotations: <none>
Status: Running
IP: 10.32.0.8
IPs:
IP: 10.32.0.8
Controlled By: ReplicaSet/angular-deployment-556c47f666
Containers:
frontend-app:
Container ID: docker://43fea22e4c1d49e0c94fc8aca3a4b41df44b5f91f45ea29ede263c5a6bcf6503
Image: frontend-app:future-master-fix-f2d2a8bd
Image ID: docker://sha256:0099587db89de9ef999a7d1f087d4781e73c491b17e89392e92b08d2f935ad27
Port: 80/TCP
Host Port: 0/TCP
State: Running
Started: Sat, 18 Jan 2020 08:47:40 +0000
Ready: True
Restart Count: 0
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-r67p7 (ro)
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
default-token-r67p7:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-r67p7
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 21s default-scheduler Successfully assigned pre-release/angular-deployment-556c47f666-9d2x4 to poc-worker2
Normal Pulled 17s kubelet, poc-worker2 Container image "frontend-app:future-master-fix-f2d2a8bd" already present on machine
Normal Created 16s kubelet, poc-worker2 Created container frontend-app
Normal Started 16s kubelet, poc-worker2 Started container frontend-app
Please find below the service for the deployment:
root@jenkins-linux-vm:/home/admin# kubectl describe svc angular-service
Name: angular-service
Namespace: pre-release
Labels: <none>
Annotations: kubectl.kubernetes.io/last-applied-configuration:
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"name":"angular-service","namespace":"pre-release"},"spec":{"ports":[{"no...
Selector: app=frontend-app
Type: NodePort
IP: 10.96.227.143
Port: <unset> 80/TCP
TargetPort: 80/TCP
NodePort: <unset> 31000/TCP
Endpoints: 10.32.0.4:80,10.32.0.8:80
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>
Please find the Dockerfile here:
FROM node:12.2.0
# set working directory
WORKDIR /app
# add `/app/node_modules/.bin` to $PATH
ENV PATH /app/node_modules/.bin:$PATH
# install and cache app dependencies
COPY package.json /app/package.json
# add app
COPY . /app
# start app
CMD ng serve --host 0.0.0.0
Can someone please help me fix this issue?
Here I can see the issue is with your Angular Dockerfile: you are using ng serve. If you look at the dependencies in your package.json you will see "@angular/cli": "*"; in order to have that inside your Docker image you need to add RUN npm install, which installs all your dependencies inside the container so that you can do ng serve. But ng serve is for local development; it's not a good approach, I would say.
To identify these kinds of issues, it's advisable to run the container on your local machine first, to find out whether it works before you deploy it onto the k8s cluster; as you know, Kubernetes is a very big universe and it takes time to identify the actual problem there.
OK, coming to the issue (I could simply add a single command to your Dockerfile and post that as my answer, but I wouldn't suggest that approach, so I am adding the complete answer): when you deploy a frontend application, your Docker image needs to be able to serve the index.html page; it's the end product after you build your Angular or React application.
There are several ways this could be done, and several tutorials explaining them. Here is what I would suggest: your Dockerfile should look like this (comments explain what each step does).
#Stage 0: builder, based on a Node alpine image, to build and compile your angular code
FROM node:10-alpine as builder
WORKDIR /app
COPY package*.json /app/
# This is one thing you forgot in your dockerfile, if you add this it might work
RUN npm install
COPY . .
# This is normal build, it does ng build
RUN npm run build
#Stage 1, based on an Nginx image, to have only the compiled app inside nginx folders to serve
FROM nginx:1.15
COPY --from=builder /app/dist/ /usr/share/nginx/html
# This one copies the local nginx.conf file as default.conf for nginx to let it serve
COPY ./nginx.conf /etc/nginx/conf.d/default.conf
Please make sure you have an nginx.conf file inside your code, at the same level as package.json:
server {
    listen 80;
    sendfile on;
    default_type application/octet-stream;
    gzip on;
    gzip_http_version 1.1;
    gzip_disable "MSIE [1-6]\.";
    gzip_min_length 1100;
    gzip_vary on;
    gzip_proxied expired no-cache no-store private auth;
    gzip_types text/plain text/css application/json application/javascript application/x-javascript text/xml application/xml application/xml+rss text/javascript;
    gzip_comp_level 9;
    root /usr/share/nginx/html;
    location / {
        #try_files $uri $uri/ /index.html =404;
        expires -1d;
        alias /usr/share/nginx/html/;
        try_files $uri$args $uri$args/ /index.html =404;
        location ~* \.(?:ico|css|js|gif|jpe?g|png|svg|woff|woff2|ttf|eot)$ {
            add_header Access-Control-Allow-Origin *;
        }
    }
}
Make sure you build and run the image with docker on your local machine before you deploy it onto the cluster.
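A minimal local smoke test could look like this (a sketch; the tag, container name, and host port below are arbitrary placeholders):
docker build -t frontend-app:local .
docker run --rm -d -p 8080:80 --name frontend-test frontend-app:local
# nginx should return the compiled index.html
curl -I http://localhost:8080
docker rm -f frontend-test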
Hope this helps.
You are accessing the pod via nodeip:nodeport, which will only work if you are accessing the pod from outside the cluster, e.g. using a browser.
Here is a guide on how to expose an application via NodePort. In this case the nodes of your Kubernetes cluster need to be accessible, i.e. they should have a public IP.
You should be using the cluster IP to access the pod from within the cluster, i.e. via exec from another pod, as shown in the command below.
kubectl exec -it angular-deployment-5d5fbf967c-zvzvl curl 10.96.16.68:80
I have a feeling that your docker container is not listening on port 80.
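One quick way to confirm that suspicion from inside the pod itself (a sketch; it assumes the image ships curl, and note that ng serve listens on port 4200 by default rather than 80):
# if nothing is listening on 80 inside the container, this fails immediately
kubectl exec -it angular-deployment-5d5fbf967c-zvzvl -- curl -sv http://localhost:80
# ng serve's default port, for comparison
kubectl exec -it angular-deployment-5d5fbf967c-zvzvl -- curl -sv http://localhost:4200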

Kubernetes pods can't ping each other using ClusterIP

I'm trying to ping the kube-dns service from a dnstools pod using the cluster IP assigned to the kube-dns service. The ping request times out. From the same dnstools pod, I tried to curl the kube-dns service using the exposed port, but that timed out as well.
Following is the output of kubectl get pods --all-namespaces -o wide
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE
default pod/busybox 1/1 Running 62 2d14h 192.168.1.37 kubenode <none>
default pod/dnstools 1/1 Running 0 2d13h 192.168.1.45 kubenode <none>
default pod/nginx-deploy-7c45b84548-ckqzb 1/1 Running 0 6d11h 192.168.1.5 kubenode <none>
default pod/nginx-deploy-7c45b84548-vl4kh 1/1 Running 0 6d11h 192.168.1.4 kubenode <none>
dmi pod/elastic-deploy-5d7c85b8c-btptq 1/1 Running 0 2d14h 192.168.1.39 kubenode <none>
kube-system pod/calico-node-68lc7 2/2 Running 0 6d11h 10.62.194.5 kubenode <none>
kube-system pod/calico-node-9c2jz 2/2 Running 0 6d12h 10.62.194.4 kubemaster <none>
kube-system pod/coredns-5c98db65d4-5nprd 1/1 Running 0 6d12h 192.168.0.2 kubemaster <none>
kube-system pod/coredns-5c98db65d4-5vw95 1/1 Running 0 6d12h 192.168.0.3 kubemaster <none>
kube-system pod/etcd-kubemaster 1/1 Running 0 6d12h 10.62.194.4 kubemaster <none>
kube-system pod/kube-apiserver-kubemaster 1/1 Running 0 6d12h 10.62.194.4 kubemaster <none>
kube-system pod/kube-controller-manager-kubemaster 1/1 Running 1 6d12h 10.62.194.4 kubemaster <none>
kube-system pod/kube-proxy-9hcgv 1/1 Running 0 6d11h 10.62.194.5 kubenode <none>
kube-system pod/kube-proxy-bxw9s 1/1 Running 0 6d12h 10.62.194.4 kubemaster <none>
kube-system pod/kube-scheduler-kubemaster 1/1 Running 1 6d12h 10.62.194.4 kubemaster <none>
kube-system pod/tiller-deploy-767d9b9584-5k95j 1/1 Running 0 3d9h 192.168.1.8 kubenode <none>
nginx-ingress pod/nginx-ingress-66wts 1/1 Running 0 5d17h 192.168.1.6 kubenode <none>
In the above output, why do some pods have an IP assigned in the 192.168.0.0/24 subnet whereas others have an IP that is equal to the IP address of my node/master? (10.62.194.4 is the IP of my master, 10.62.194.5 is the IP of my node)
This is the config.yml I used to initialize the cluster using kubeadm init --config=config.yml
apiServer:
  certSANs:
  - 10.62.194.4
  extraArgs:
    authorization-mode: Node,RBAC
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta2
certificatesDir: /etc/kubernetes/pki
clusterName: dev-cluster
controllerManager: {}
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: k8s.gcr.io
kind: ClusterConfiguration
kubernetesVersion: v1.15.1
networking:
  dnsDomain: cluster.local
  podSubnet: 192.168.0.0/16
  serviceSubnet: 10.96.0.0/12
scheduler: {}
Result of kubectl get svc --all-namespaces -o wide
NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
default service/kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 6d12h <none>
default service/nginx-deploy ClusterIP 10.97.5.194 <none> 80/TCP 5d17h run=nginx
dmi service/elasticsearch ClusterIP 10.107.84.159 <none> 9200/TCP,9300/TCP 2d14h app=dmi,component=elasticse
dmi service/metric-server ClusterIP 10.106.117.2 <none> 8098/TCP 2d14h app=dmi,component=metric-se
kube-system service/calico-typha ClusterIP 10.97.201.232 <none> 5473/TCP 6d12h k8s-app=calico-typha
kube-system service/kube-dns ClusterIP 10.96.0.10 <none> 53/UDP,53/TCP,9153/TCP 6d12h k8s-app=kube-dns
kube-system service/tiller-deploy ClusterIP 10.98.133.94 <none> 44134/TCP 3d9h app=helm,name=tiller
The command I ran was kubectl exec -ti dnstools -- curl 10.96.0.10:53
EDIT:
I raised this question because I got this error when trying to resolve service names from within the cluster. I was under the impression that I got this error because I cannot ping the DNS server from a pod.
Output of kubectl exec -ti dnstools -- nslookup kubernetes.default
;; connection timed out; no servers could be reached
command terminated with exit code 1
Output of kubectl exec dnstools cat /etc/resolv.conf
nameserver 10.96.0.10
search default.svc.cluster.local svc.cluster.local cluster.local reddog.microsoft.com
options ndots:5
Result of kubectl get ep kube-dns --namespace=kube-system
NAME ENDPOINTS AGE
kube-dns 192.168.0.2:53,192.168.0.3:53,192.168.0.2:53 + 3 more... 6d13h
EDIT:
Ping-ing the CoreDNS pod directly using its Pod IP times out as well:
/ # ping 192.168.0.2
PING 192.168.0.2 (192.168.0.2): 56 data bytes
^C
--- 192.168.0.2 ping statistics ---
24 packets transmitted, 0 packets received, 100% packet loss
EDIT:
I think something has gone wrong when I was setting up the cluster. Below are the steps I took when setting up the cluster:
Edit host files on master and worker to include the IPs and hostnames of the nodes
Disabled swap using swapoff -a and disabled swap permanently by editing /etc/fstab
Install docker prerequisites using apt-get install apt-transport-https ca-certificates curl software-properties-common -y
Added Docker GPG key using curl -fsSL https://download.docker.com/linux/ubuntu/gpg | apt-key add -
Added Docker repo using add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
Install Docker using apt-get update -y; apt-get install docker-ce -y
Install Kubernetes prerequisites using curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
Added Kubernetes repo using echo 'deb http://apt.kubernetes.io/ kubernetes-xenial main' | sudo tee /etc/apt/sources.list.d/kubernetes.list
Update repo and install Kubernetes components using apt-get update -y; apt-get install kubelet kubeadm kubectl -y
Configure master node:
kubeadm init --apiserver-advertise-address=10.62.194.4 --apiserver-cert-extra-sans=10.62.194.4 --pod-network-cidr=192.168.0.0/16
Copy Kube config to $HOME: mkdir -p $HOME/.kube; sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config; sudo chown $(id -u):$(id -g) $HOME/.kube/config
Installed Calico using kubectl apply -f https://docs.projectcalico.org/v3.1/getting-started/kubernetes/installation/hosted/rbac-kdd.yaml; kubectl apply -f https://docs.projectcalico.org/v3.1/getting-started/kubernetes/installation/hosted/kubernetes-datastore/calico-networking/1.7/calico.yaml
On node:
On the node I ran the kubeadm join command printed out by kubeadm token create --print-join-command on the master
The Kubernetes system pods get assigned the host IP since they provide low-level services that are not dependent on an overlay network (or, in the case of Calico, even provide the overlay network). They have the IP of the node where they run.
A common pod uses the overlay network and gets assigned an IP from the Calico range, not from the metal node it runs on.
You can't access DNS (port 53) with HTTP using curl. You can use dig to query a DNS resolver.
A service IP is not reachable by ping since it is a virtual IP just used as a routing handle for the iptables rules set up by kube-proxy; therefore a TCP connection works, but ICMP does not.
You can ping a pod IP, though, since it is assigned from the overlay network.
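For example, instead of pointing curl at port 53, a DNS query with dig from the same pod exercises the service the way it is meant to be used (the dnstools image ships dig):
# query the kube-dns ClusterIP directly for an in-cluster name
kubectl exec -ti dnstools -- dig @10.96.0.10 kubernetes.default.svc.cluster.local +short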
You should check within the same namespace.
Currently, you are in the default namespace and curl to the kube-system namespace.
If you check within the same namespace, I think it will work.
In some cases the local host that Elasticsearch publishes is not routable/accessible from other hosts. In these cases you will have to configure network.publish_host in the yml config file, in order for Elasticsearch to use and publish the right address.
Try configuring network.publish_host to the right public address.
See more here:
https://www.elastic.co/guide/en/elasticsearch/reference/current/modules-network.html#advanced-network-settings
Note that control-plane components like the API server and etcd that run on the master node are bound to the host network, and hence you see the IP address of the master server.
On the other hand, the apps that you deploy get their IPs from the pod subnet range; those differ from the cluster node IPs.
Try the steps below to test whether DNS is working or not.
Deploy nginx.yaml:
apiVersion: v1
kind: Service
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  ports:
  - port: 80
    name: web
  clusterIP: None
  selector:
    app: nginx
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
  labels:
    app: nginx
spec:
  serviceName: "nginx"
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: k8s.gcr.io/nginx-slim:0.8
        ports:
        - containerPort: 80
          name: web
        volumeMounts:
        - name: www
          mountPath: /usr/share/nginx/html
      volumes:
      - name: www
        emptyDir: {}
kubectl create -f nginx.yaml
master $ kubectl get po
NAME READY STATUS RESTARTS AGE
web-0 1/1 Running 0 1m
web-1 1/1 Running 0 1m
master $ kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 35m
nginx ClusterIP None <none> 80/TCP 2m
master $ kubectl run -i --tty --image busybox:1.28 dns-test --restart=Never --rm
If you don't see a command prompt, try pressing enter.
/ # nslookup nginx
Server: 10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local
Name: nginx
Address 1: 10.40.0.1 web-0.nginx.default.svc.cluster.local
Address 2: 10.40.0.2 web-1.nginx.default.svc.cluster.local
/ #
/ # nslookup web-0.nginx
Server: 10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local
Name: web-0.nginx
Address 1: 10.40.0.1 web-0.nginx.default.svc.cluster.local
/ # nslookup web-0.nginx.default.svc.cluster.local
Server: 10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local
Name: web-0.nginx.default.svc.cluster.local
Address 1: 10.40.0.1 web-0.nginx.default.svc.cluster.local

Jenkins app is not accessible outside Kubernetes cluster

On CentOS 7.4, I have set up a Kubernetes master node, pulled down the Jenkins image, and deployed it to the cluster, defining the Jenkins service on a NodePort as below.
I can curl the Jenkins app from the worker or master nodes using the IP defined by the service. But I cannot access the Jenkins app (dashboard) from my browser (outside the cluster) using the public IP of the master node.
[administrator@abcdefgh ~]$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
abcdefgh Ready master 19h v1.13.1
hgfedcba Ready <none> 19h v1.13.1
[administrator@abcdefgh ~]$ sudo docker pull jenkinsci/jenkins:2.154-alpine
[administrator@abcdefgh ~]$ sudo docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
k8s.gcr.io/kube-proxy v1.13.1 fdb321fd30a0 5 days ago 80.2MB
k8s.gcr.io/kube-controller-manager v1.13.1 26e6f1db2a52 5 days ago 146MB
k8s.gcr.io/kube-apiserver v1.13.1 40a63db91ef8 5 days ago 181MB
k8s.gcr.io/kube-scheduler v1.13.1 ab81d7360408 5 days ago 79.6MB
jenkinsci/jenkins 2.154-alpine aa25058d8320 2 weeks ago 222MB
k8s.gcr.io/coredns 1.2.6 f59dcacceff4 6 weeks ago 40MB
k8s.gcr.io/etcd 3.2.24 3cab8e1b9802 2 months ago 220MB
quay.io/coreos/flannel v0.10.0-amd64 f0fad859c909 10 months ago 44.6MB
k8s.gcr.io/pause 3.1 da86e6ba6ca1 12 months ago 742kB
[administrator@abcdefgh ~]$ ls -l
total 8
-rw------- 1 administrator administrator 678 Dec 18 06:12 jenkins-deployment.yaml
-rw------- 1 administrator administrator 410 Dec 18 06:11 jenkins-service.yaml
[administrator@abcdefgh ~]$ cat jenkins-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: jenkins-ui
spec:
  type: NodePort
  ports:
  - protocol: TCP
    port: 8080
    targetPort: 8080
    name: ui
  selector:
    app: jenkins-master
---
apiVersion: v1
kind: Service
metadata:
  name: jenkins-discovery
spec:
  selector:
    app: jenkins-master
  ports:
  - protocol: TCP
    port: 50000
    targetPort: 50000
    name: jenkins-slaves
[administrator@abcdefgh ~]$ cat jenkins-deployment.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: jenkins
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: jenkins-master
    spec:
      containers:
      - image: jenkins/jenkins:2.154-alpine
        name: jenkins
        ports:
        - containerPort: 8080
          name: http-port
        - containerPort: 50000
          name: jnlp-port
        env:
        - name: JAVA_OPTS
          value: -Djenkins.install.runSetupWizard=false
        volumeMounts:
        - name: jenkins-home
          mountPath: /var/jenkins_home
      volumes:
      - name: jenkins-home
        emptyDir: {}
[administrator@abcdefgh ~]$ kubectl create -f jenkins-service.yaml
service/jenkins-ui created
service/jenkins-discovery created
[administrator@abcdefgh ~]$ kubectl get services
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
jenkins-discovery ClusterIP 10.98.--.-- <none> 50000/TCP 19h
jenkins-ui NodePort 10.97.--.-- <none> 8080:31587/TCP 19h
kubernetes ClusterIP 10.96.--.-- <none> 443/TCP 20h
[administrator@abcdefgh ~]$ kubectl create -f jenkins-deployment.yaml
deployment.extensions/jenkins created
[administrator@abcdefgh ~]$ kubectl get deployments
NAME READY UP-TO-DATE AVAILABLE AGE
jenkins 1/1 1 1 19h
[administrator@abcdefgh ~]$ kubectl get pods --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
default jenkins-6497cf9dd4-f9r5b 1/1 Running 0 19h
kube-system coredns-86c58d9df4-jfq5b 1/1 Running 0 20h
kube-system coredns-86c58d9df4-s4k6d 1/1 Running 0 20h
kube-system etcd-abcdefgh 1/1 Running 1 20h
kube-system kube-apiserver-abcdefgh 1/1 Running 1 20h
kube-system kube-controller-manager-abcdefgh 1/1 Running 5 20h
kube-system kube-flannel-ds-amd64-2w68w 1/1 Running 1 20h
kube-system kube-flannel-ds-amd64-6zl4g 1/1 Running 1 20h
kube-system kube-proxy-9r4xt 1/1 Running 1 20h
kube-system kube-proxy-s7fj2 1/1 Running 1 20h
kube-system kube-scheduler-abcdefgh 1/1 Running 8 20h
[administrator@abcdefgh ~]$ kubectl describe pod jenkins-6497cf9dd4-f9r5b
Name: jenkins-6497cf9dd4-f9r5b
Namespace: default
Priority: 0
PriorityClassName: <none>
Node: hgfedcba/10.41.--.--
Start Time: Tue, 18 Dec 2018 06:32:50 -0800
Labels: app=jenkins-master
pod-template-hash=6497cf9dd4
Annotations: <none>
Status: Running
IP: 10.244.--.--
Controlled By: ReplicaSet/jenkins-6497cf9dd4
Containers:
jenkins:
Container ID: docker://55912512a7aa1f782784690b558d74001157f242a164288577a85901ecb5d152
Image: jenkins/jenkins:2.154-alpine
Image ID: docker-pullable://jenkins/jenkins@sha256:b222875a2b788f474db08f5f23f63369b0f94ed7754b8b32ac54b8b4d01a5847
Ports: 8080/TCP, 50000/TCP
Host Ports: 0/TCP, 0/TCP
State: Running
Started: Tue, 18 Dec 2018 07:16:32 -0800
Ready: True
Restart Count: 0
Environment:
JAVA_OPTS: -Djenkins.install.runSetupWizard=false
Mounts:
/var/jenkins_home from jenkins-home (rw)
/var/run/secrets/kubernetes.io/serviceaccount from default-token-wqph5 (ro)
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
jenkins-home:
Type: EmptyDir (a temporary directory that shares a pod's lifetime)
Medium:
default-token-wqph5:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-wqph5
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events: <none>
[administrator@abcdefgh ~]$ kubectl describe svc jenkins-ui
Name: jenkins-ui
Namespace: default
Labels: <none>
Annotations: <none>
Selector: app=jenkins-master
Type: NodePort
IP: 10.97.--.--
Port: ui 8080/TCP
TargetPort: 8080/TCP
NodePort: ui 31587/TCP
Endpoints: 10.244.--.--:8080
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>
# Check if NodePort along with Kubernetes ports are open
[administrator@abcdefgh ~]$ sudo su root
[root@abcdefgh administrator]# systemctl start firewalld
[root@abcdefgh administrator]# firewall-cmd --permanent --add-port=6443/tcp # Kubernetes API Server
Warning: ALREADY_ENABLED: 6443:tcp
success
[root@abcdefgh administrator]# firewall-cmd --permanent --add-port=2379-2380/tcp # etcd server client API
Warning: ALREADY_ENABLED: 2379-2380:tcp
success
[root@abcdefgh administrator]# firewall-cmd --permanent --add-port=10250/tcp # Kubelet API
Warning: ALREADY_ENABLED: 10250:tcp
success
[root@abcdefgh administrator]# firewall-cmd --permanent --add-port=10251/tcp # kube-scheduler
Warning: ALREADY_ENABLED: 10251:tcp
success
[root@abcdefgh administrator]# firewall-cmd --permanent --add-port=10252/tcp # kube-controller-manager
Warning: ALREADY_ENABLED: 10252:tcp
success
[root@abcdefgh administrator]# firewall-cmd --permanent --add-port=10255/tcp # Read-Only Kubelet API
Warning: ALREADY_ENABLED: 10255:tcp
success
[root@abcdefgh administrator]# firewall-cmd --permanent --add-port=31587/tcp # NodePort of jenkins-ui service
Warning: ALREADY_ENABLED: 31587:tcp
success
[root@abcdefgh administrator]# firewall-cmd --reload
success
[administrator@abcdefgh ~]$ kubectl cluster-info
Kubernetes master is running at https://10.41.--.--:6443
KubeDNS is running at https://10.41.--.--:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
[administrator@hgfedcba ~]$ curl 10.41.--.--:8080
curl: (7) Failed connect to 10.41.--.--:8080; Connection refused
# Successfully curl jenkins app using its service IP from the worker node
[administrator@hgfedcba ~]$ curl 10.97.--.--:8080
<!DOCTYPE html><html><head resURL="/static/5882d14a" data-rooturl="" data-resurl="/static/5882d14a">
<title>Dashboard [Jenkins]</title><link rel="stylesheet" ...
...
Would you know how to do that? I am happy to provide additional logs. Also, I have installed Jenkins from yum on another similar machine without any Docker or Kubernetes, and it is possible to access it through 10.20.30.40:8080 in my browser, so there is no provider firewall preventing me from doing that.
Your Jenkins Service is of type NodePort. That means that a specific port number, on any node within your cluster, will deliver your Jenkins UI.
When you described your Service, you could see that the port assigned was 31587.
You should be able to browse to http://SOME_IP:31587
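Concretely, something along these lines should work from outside the cluster (a sketch; the node address is a placeholder you need to replace with a reachable IP of any cluster node):
# find a node address and confirm the assigned NodePort
kubectl get nodes -o wide
kubectl get svc jenkins-ui -o jsonpath='{.spec.ports[0].nodePort}'
# then, from your workstation
curl http://<node-ip>:31587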

GKE with Ingress setup always gives status UNHEALTHY

To start off, I have tested the tutorial at https://cloud.google.com/kubernetes-engine/docs/tutorials/http-balancer,
which works fine. I also tested the same tutorial but added a TLS secret as well to test HTTPS, which also worked fine.
My problems arise when I create my own image. Here are the steps I take:
The Dockerfile:
# We label our stage as "builder"
FROM node:9.4.0-alpine as builder
COPY package.json package-lock.json ./
## Storing node modules on a separate layer will prevent unnecessary npm installs at each build
RUN npm i && mkdir /srv/cs-ui && cp -R ./node_modules ./srv/cs-ui
WORKDIR /srv/cs-ui
COPY . .
## Build the angular app in production mode and store the artifacts in dist folder
RUN $(npm bin)/ng build --environment "prod"
FROM nginx
## Copy our default nginx config
COPY nginx/default.conf /etc/nginx/conf.d/
## Remove default nginx website
RUN rm -rf /usr/share/nginx/html/*
## From "builder" stage copy over the artifacts in dist folder to default nginx nginx public folder
COPY --from=builder /srv/cs-ui/dist /usr/share/nginx/html/
The Dockerfile is run with a docker-compose file that looks like this:
version: '2'
services:
  cs-ui:
    image: "gcr.io/cs-micro/cs-ui:v1"
    container_name: "cs-ui"
    tty: true
    build: .
    ports:
    - "80:80"
Locally this works without any issues. The next thing I do is to push it to the Container Registry.
gcloud docker -- push gcr.io/cs-micro/cs-ui:v1
After that I create a container:
kubectl run cs-ui --image=gcr.io/cs-micro/cs-ui:v1 --port=80
Then I expose it:
kubectl expose deployment cs-ui --target-port=80 --type=NodePort
Then I run the following ingress file:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: basic-ingress
spec:
  tls:
  - secretName: tls-certificate
  backend:
    serviceName: cs-ui
    servicePort: 80
with command:
kubectl apply -f test.yaml
kubectl describe service
Name: cs-ui
Namespace: default
Labels: run=cs-ui
Annotations:
Selector: run=cs-ui
Type: NodePort
IP: 10.35.244.124
Port: 80/TCP
TargetPort: 80/TCP
NodePort: 30272/TCP
Endpoints: 10.32.0.32:80
Session Affinity: None
External Traffic Policy: Cluster
Events:
Name: kubernetes
Namespace: default
Labels: component=apiserver
provider=kubernetes
Annotations:
Selector:
Type: ClusterIP
IP: 10.35.240.1
Port: https 443/TCP
TargetPort: 443/TCP
Endpoints: 35.195.192.28:443
Session Affinity: ClientIP
Events:
kubectl describe deployment
Name: cs-ui
Namespace: default
CreationTimestamp: Thu, 25 Jan 2018 12:27:59 +0100
Labels: run=cs-ui
Annotations: deployment.kubernetes.io/revision=1
Selector: run=cs-ui
Replicas: 1 desired | 1 updated | 1 total | 1 available | 0 unavailable
StrategyType: RollingUpdate
MinReadySeconds: 0
RollingUpdateStrategy: 1 max unavailable, 1 max surge
Pod Template:
Labels: run=cs-ui
Containers:
cs-ui:
Image: gcr.io/cs-micro/cs-ui:v1
Port: 80/TCP
Environment:
Mounts:
Volumes:
Conditions:
Type Status Reason
---- ------ ------
Available True MinimumReplicasAvailable
OldReplicaSets:
NewReplicaSet: cs-ui-2929390783 (1/1 replicas created)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal ScalingReplicaSet 9m deployment-controller Scaled up replica set cs-ui-2929390783 to 1
kubectl describe ing
Name: basic-ingress
Namespace: default
Address: 35.227.220.186
Default backend: cs-ui:80 (10.32.0.32:80)
TLS:
tls-certificate terminates
Rules:
Host Path Backends
---- ---- --------
* * cs-ui:80 (10.32.0.32:80)
Annotations:
https-forwarding-rule: k8s-fws-default-basic-ingress--f5fde3efbfa51336
https-target-proxy: k8s-tps-default-basic-ingress--f5fde3efbfa51336
ssl-cert: k8s-ssl-default-basic-ingress--f5fde3efbfa51336
target-proxy: k8s-tp-default-basic-ingress--f5fde3efbfa51336
url-map: k8s-um-default-basic-ingress--f5fde3efbfa51336
backends: {"k8s-be-30272--f5fde3efbfa51336":"UNHEALTHY"}
forwarding-rule: k8s-fw-default-basic-ingress--f5fde3efbfa51336
static-ip: k8s-fw-default-basic-ingress--f5fde3efbfa51336
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal ADD 12m loadbalancer-controller default/basic-ingress
Normal CREATE 11m loadbalancer-controller ip: 35.227.220.186
Normal Service 6m (x4 over 11m) loadbalancer-controller default backend set to cs-ui:30272
After 3-5 minutes I get UNHEALTHY, and I have no clue why, because the setup is almost exactly the same as theirs.
I have read countless threads on what to do when you get a backend status of UNHEALTHY, but none of them have helped. One mentioned adding a firewall rule, as described in this tutorial: https://cloud.google.com/compute/docs/load-balancing/health-checks, which I have added, but it did not help.
If you have any suggestions I will gladly test them.
It turned out our Angular application had a redirect on '/', which gave a 302 response. This response makes the health check fail and results in an UNHEALTHY state.
As soon as we set up a custom health check, it worked.
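For reference, one common way to steer the health check on GKE (a sketch only, not necessarily what was done here) is to give the serving container an HTTP readinessProbe on a path that already returns 200, since the GKE ingress controller derives the load balancer health check from that probe; the /healthz path below is a placeholder and must actually exist in the image:
kubectl patch deployment cs-ui --type='json' -p='[
  {"op": "add",
   "path": "/spec/template/spec/containers/0/readinessProbe",
   "value": {"httpGet": {"path": "/healthz", "port": 80},
             "initialDelaySeconds": 5, "periodSeconds": 10}}]'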