I have one master node and one worker node, both up and running, and I have deployed an Angular application in my k8s cluster. When I inspect the pod logs, everything is working fine without any errors.
I am trying to access the application in a browser using the master or worker IP address followed by the NodePort, like below, and I get an "unable to connect" error.
http://10.0.0.1:32394/
Name: frontend-app-6848bc9666-9ggz7
Namespace: pre-release
Priority: 0
Node: SBT-poc-worker2/10.0.0.5
Start Time: Fri, 17 Jan 2020 05:04:10 +0000
Labels: app=frontend-app
pod-template-hash=6848bc9666
Annotations: <none>
Status: Running
IP: 10.32.0.3
IPs:
IP: 10.32.0.3
Controlled By: ReplicaSet/frontend-app-6848bc9666
Containers:
frontend-app:
Container ID: docker://292199347e391c9feecd667e1668f32931f1fd7c670514eb1e05e4a37b8109ad
Image: frontend-app:future-master-fix-7ba35fbe
Image ID: docker://sha256:0099587db89de9ef999a7d1f087d4781e73c491b17e89392e92b08d2f935ad27
Port: 8080/TCP
Host Port: 0/TCP
State: Running
Started: Fri, 17 Jan 2020 05:04:15 +0000
Ready: True
Restart Count: 0
Limits:
cpu: 250m
memory: 256Mi
Requests:
cpu: 100m
memory: 128Mi
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-r67p7 (ro)
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
default-token-r67p7:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-r67p7
Optional: false
QoS Class: Burstable
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 5m44s default-scheduler Successfully assigned pre-release/frontend-app-6848bc9666-9ggz7 to SBT-poc-worker2
Normal Pulled 5m41s kubelet, SBT-poc-worker2 Container image "frontend-app:future-master-fix-7ba35fbe" already present on machine
Normal Created 5m39s kubelet, SBT-poc-worker2 Created container frontend-app
Normal Started 5m39s kubelet, SBT-poc-worker2 Started container frontend-app
root#jenkins-linux-vm:/home/SBT-admin# kubectl get pods -n pre-release
NAME READY STATUS RESTARTS AGE
frontend-app-6848bc9666-9ggz7 1/1 Running 0 7m26s
root#jenkins-linux-vm:/home/SBT-admin# kubectl get services -n pre-release
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
frontend-app NodePort 10.96.6.77 <none> 8080:32394/TCP 7m36s
root#jenkins-linux-vm:/home/SBT-admin# kubectl get deployment -n pre-release
NAME READY UP-TO-DATE AVAILABLE AGE
frontend-app 1/1 1 1 11m
root#jenkins-linux-vm:/home/SBT-admin# kubectl get -o yaml -n pre-release svc frontend-app
apiVersion: v1
kind: Service
metadata:
annotations:
kubectl.kubernetes.io/last-applied-configuration: |
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"labels":{"name":"frontend-app"},"name":"frontend-app","namespace":"pre-release"},"spec":{"ports":[{"port":8080,"targetPort":8080}],"selector":{"name":"frontend-app"},"type":"NodePort"}}
creationTimestamp: "2020-01-17T05:04:10Z"
labels:
name: frontend-app
name: frontend-app
namespace: pre-release
resourceVersion: "1972713"
selfLink: /api/v1/namespaces/pre-release/services/frontend-app
uid: 91b87f9e-d723-498c-af05-5969645a82ee
spec:
clusterIP: 10.96.6.77
externalTrafficPolicy: Cluster
ports:
- nodePort: 32394
port: 8080
protocol: TCP
targetPort: 8080
selector:
name: frontend-app
sessionAffinity: None
type: NodePort
status:
loadBalancer: {}
root#jenkins-linux-vm:/home/SBT-admin# kubectl get pods --selector="app=frontend-app" --output=wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
frontend-app-7c7cf68f9c-n9tct 1/1 Running 0 58m 10.32.0.5 SBT-poc-worker2 <none> <none>
root#jenkins-linux-vm:/home/SBT-admin# kubectl get pods
NAME READY STATUS RESTARTS AGE
frontend-app-7c7cf68f9c-n9tct 1/1 Running 0 58m
root#jenkins-linux-vm:/home/SBT-admin# kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
frontend-app NodePort 10.96.21.202 <none> 8080:31098/TCP 59m
root#jenkins-linux-vm:/home/SBT-admin# kubectl get deployment
NAME READY UP-TO-DATE AVAILABLE AGE
frontend-app 1/1 1 1 59m
Can someone please help me fix this?
The label on the pod is app=frontend-app, as seen in the output in your problem statement.
Your pod description shows the following label:
Name: frontend-app-6848bc9666-9ggz7
Namespace: pre-release
Priority: 0
Node: SBT-poc-worker2/10.0.0.5
Start Time: Fri, 17 Jan 2020 05:04:10 +0000
Labels: app=frontend-app
The selector field in your service YAML is name: frontend-app; you should change it to app: frontend-app and update the service.
Your current selector value, shown below, does not match the label on the pod:
ports:
- nodePort: 32394
port: 8080
protocol: TCP
targetPort: 8080
selector:
name: frontend-app
Change it to
selector:
app: frontend-app
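Putting that together, the corrected service manifest would look like the sketch below (name, namespace, and ports are kept from the YAML in the question; only the selector changes):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: frontend-app
  namespace: pre-release
  labels:
    name: frontend-app
spec:
  type: NodePort
  ports:
  - port: 8080
    targetPort: 8080
    nodePort: 32394
    protocol: TCP
  selector:
    app: frontend-app   # must match the pod label app=frontend-app
```

After applying it, `kubectl get endpoints frontend-app -n pre-release` should list a pod IP; if the endpoints stay empty, the selector still does not match the pod labels.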
You should also try to establish that:
There are no rules blocking the default NodePort range (ports 30000-32767) in the security rules or firewall on the cluster network.
For example, verify you have the following security rule open on the cluster network for the NodePort range to work in a browser:
Ingress IPv4 TCP 30000 - 32767 0.0.0.0/0
Once you have confirmed there is no security-group rule issue, I would take the approach below to debug port reachability at the node level: perform a basic test to check whether an nginx web server can be deployed and reached in a browser via a NodePort.
Steps:
Deploy an nginx deployment using the nginx.yaml below
apiVersion: apps/v1
kind: Deployment
metadata:
name: my-nginx
spec:
selector:
matchLabels:
run: my-nginx
replicas: 1
template:
metadata:
labels:
run: my-nginx
spec:
containers:
- name: my-nginx
image: nginx
ports:
- containerPort: 80
Verify deployment is up and running
$ kubectl apply -f nginx.yaml
$ kubectl get all
NAME READY STATUS RESTARTS AGE
pod/my-nginx-75897978cd-ptqv9 1/1 Running 0 32s
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 4d11h
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/my-nginx 1/1 1 1 33s
NAME DESIRED CURRENT READY AGE
replicaset.apps/my-nginx-75897978cd 1 1 1 33s
Now create a service to expose the nginx deployment using the example below
apiVersion: v1
kind: Service
metadata:
name: my-nginx
labels:
run: my-nginx
spec:
type: NodePort
ports:
- port: 8080
targetPort: 80
protocol: TCP
name: http
selector:
run: my-nginx
Verify the service is created and identify the NodePort assigned (since we did not specify a fixed port in service.yaml; in the example below, the NodePort is 32502):
$ kubectl apply -f service.yaml
$ kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 4d11h
my-nginx NodePort 10.96.174.234 <none> 8080:32502/TCP 12s
In addition to the NodePort, identify the IP of your master node (131.112.113.101 below):
$ kubectl get nodes -o wide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
master-1 Ready master 4d11h v1.17.0 131.112.113.101 <none> Ubuntu 16.04.6 LTS 4.4.0-169-generic docker://18.6.2
node-1 Ready <none> 4d11h v1.17.0 131.112.113.102 <none> Ubuntu 16.04.6 LTS 4.4.0-169-generic docker://18.6.2
node-2 Ready <none> 4d11h v1.17.0 131.112.113.103 <none> Ubuntu 16.04.6 LTS 4.4.0-169-generic docker://18.6.2
Now, if you try to access the nginx application in your browser using the IP of your master node with the NodePort value, i.e. <masternode>:<nodeport> (131.112.113.101:32502), you should see the nginx welcome page.
Note the container port used in nginx.yaml and the targetPort in service.yaml (i.e. 80); you should be able to map this onto your frontend-app. Hope this helps you understand the issue at your node/cluster level, if any.
I'm using minikube on WSL2.
I deployed a simple Flask app image and wrote a LoadBalancer service to expose it.
My question is: how do I modify the service manifest to get the same result as kubectl expose?
Below are more details.
Flask app deployment YAML:
rss.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: esg
spec:
selector:
matchLabels:
app: rss
replicas: 3
template:
metadata:
labels:
app: rss
spec:
containers:
- name: rss
image: "idioluck/00esg_rss:v01"
ports:
- containerPort: 5000
Service YAML (I tried both NodePort and LoadBalancer):
rss_lb.yaml
apiVersion: v1
kind: Service
metadata:
name: esg-lb
spec:
type: NodePort # LoadBalancer
selector:
app: rss
ports:
- protocol: TCP
port: 8080
targetPort: 5000
The kubectl commands are:
sjw#DESKTOP-MFPNHRC:~/esg_kube/kubesvc/rss$ kubectl apply -f rss.yaml
deployment.apps/esg created
sjw#DESKTOP-MFPNHRC:~/esg_kube/kubesvc/rss$ kubectl apply -f rss_lb.yaml
service/esg-lb created
sjw#DESKTOP-MFPNHRC:~/esg_kube/kubesvc/rss$ kubectl get pods
NAME READY STATUS RESTARTS AGE
esg-757f659b4-4vndc 1/1 Running 0 13s
esg-757f659b4-4wd2w 1/1 Running 0 13s
esg-757f659b4-sf5q6 1/1 Running 0 13s
sjw#DESKTOP-MFPNHRC:~/esg_kube/kubesvc/rss$ kubectl get all
NAME READY STATUS RESTARTS AGE
pod/esg-757f659b4-4vndc 1/1 Running 0 16s
pod/esg-757f659b4-4wd2w 1/1 Running 0 16s
pod/esg-757f659b4-sf5q6 1/1 Running 0 16s
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/esg-lb LoadBalancer 10.101.221.26 <pending> 8080:31308/TCP 8s
service/kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 23h
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/esg 3/3 3 3 16s
NAME DESIRED CURRENT READY AGE
replicaset.apps/esg-757f659b4 3 3 3 16s
The external IP is stuck pending,
so I deleted the LoadBalancer service and used kubectl expose instead:
sjw#DESKTOP-MFPNHRC:~/esg_kube/kubesvc/rss$ kubectl expose deployment esg --type=LoadBalancer --port=8080
service/esg exposed
sjw#DESKTOP-MFPNHRC:~/esg_kube/kubesvc/rss$ kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
esg LoadBalancer 10.99.208.98 127.0.0.1 8080:30929/TCP 46s
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 23h
The service has been successfully exposed. And the service is a load balancer.
Your LoadBalancer service shows Pending as its status because it is waiting for an external load balancer to be provisioned, such as AWS's Elastic Load Balancer or GCP's Load Balancer. LoadBalancer services are usually used with a managed Kubernetes service, e.g. EKS, GKE, etc.
On the other hand, you're able to expose your service because it already has a clusterIP assigned to it.
If you want to use a load balancer in minikube, this official doc may help you. Otherwise, you can use a NodePort service directly to expose your Flask app.
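To answer the manifest question directly: a manifest roughly equivalent to the kubectl expose command above might look like the sketch below (the selector comes from the deployment's app: rss pod label; targetPort is pointed at the Flask container's port 5000, as in the earlier NodePort manifest). With minikube, the external IP is typically only assigned while `minikube tunnel` is running in another terminal.

```yaml
apiVersion: v1
kind: Service
metadata:
  name: esg
spec:
  type: LoadBalancer
  selector:
    app: rss          # the deployment's pod label
  ports:
  - protocol: TCP
    port: 8080
    targetPort: 5000  # the Flask container port
```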
I am new to Kubernetes and I am trying to deploy a MicroK8s cluster on 4 Raspberry Pis.
I have been struggling with setting up the dashboard for (no joke) a total of about 30 hours now and am starting to get extremely frustrated.
I just cannot access the dashboard remotely.
Solutions that didn't work out:
No. 1, Ingress:
I managed to enable ingress, but it seems to be extremely complicated to connect it to the dashboard, since I would manually have to resolve DNS entries inside pods and host machines.
I eventually gave up on that. There is also no documentation whatsoever on how to set up an ingress without a purchased domain pointing at your ingress node.
If you are able to guide me through this, I am up for it.
No. 2, changing the dashboard service type to LoadBalancer or NodePort:
With this method I can actually expose the dashboard, but it can only be accessed through HTTPS. Since the dashboard uses self-signed certificates (or some other mechanism), I cannot access it via a browser. The browsers (Chrome, Firefox) always refuse to connect to the dashboard. When I try to access it via HTTP, the browsers say I need to use HTTPS.
No. 3, kubectl proxy:
This only allows localhost connections. You can pass arguments to kubectl proxy to allow other hosts to access the dashboard, but then again we have the HTTPS/HTTP problem.
At this point it just amazes me how hard it is to access this simple dashboard. Can anybody give any advice on how to access it?
a#k8s-node-1:~/kubernetes$ kctl describe service kubernetes-dashboard -n kube-system
Name: kubernetes-dashboard
Namespace: kube-system
Labels: k8s-app=kubernetes-dashboard
Annotations: <none>
Selector: k8s-app=kubernetes-dashboard
Type: NodePort
IP Family Policy: SingleStack
IP Families: IPv4
IP: 10.152.183.249
IPs: 10.152.183.249
Port: <unset> 443/TCP
TargetPort: 8443/TCP
NodePort: <unset> 32228/TCP
Endpoints: 10.1.140.67:8443
Session Affinity: None
External Traffic Policy: Cluster
$ kubectl edit svc -n kube-system kubernetes-dashboard
# Please edit the object below. Lines beginning with a '#' will be ignored,
# and an empty file will abort the edit. If an error occurs while saving this file will be
# reopened with the relevant failures.
#
apiVersion: v1
kind: Service
metadata:
annotations:
kubectl.kubernetes.io/last-applied-configuration: |
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"labels":{"k8s-app":"kubernetes-dashboard"},"name":"kubernetes-dashboard","namespace":"kube-system"},"spec":{"ports":[{"port":443,"targetPort":8443}],"selector":{"k8s>
creationTimestamp: "2022-03-21T14:30:10Z"
labels:
k8s-app: kubernetes-dashboard
name: kubernetes-dashboard
namespace: kube-system
resourceVersion: "43060"
selfLink: /api/v1/namespaces/kube-system/services/kubernetes-dashboard
uid: fcb45ccc-070b-4a4d-b987-41f5b7777559
spec:
clusterIP: 10.152.183.249
clusterIPs:
- 10.152.183.249
externalTrafficPolicy: Cluster
internalTrafficPolicy: Cluster
ipFamilies:
- IPv4
ipFamilyPolicy: SingleStack
ports:
- nodePort: 32228
port: 443
protocol: TCP
targetPort: 8443
selector:
k8s-app: kubernetes-dashboard
sessionAffinity: None
type: NodePort
status:
loadBalancer: {}
a#k8s-node-1:~/kubernetes$ kctl get services -n kube-system
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
metrics-server ClusterIP 10.152.183.233 <none> 443/TCP 165m
kube-dns ClusterIP 10.152.183.10 <none> 53/UDP,53/TCP,9153/TCP 142m
dashboard-metrics-scraper ClusterIP 10.152.183.202 <none> 8000/TCP 32m
kubernetes-dashboard NodePort 10.152.183.249 <none> 443:32228/TCP 32m
a#k8s-node-1:~/kubernetes$ cat dashboard-ingress.yaml
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
annotations:
kubernetes.io/ingress.class: nginx
nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
name: dashboard
namespace: kube-system
spec:
rules:
- host: nonexistent.net
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: kubernetes-dashboard
port:
number: 8080
a#k8s-node-1:~/kubernetes$ kctl get pods --all-namespaces -o wide
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
kube-system calico-node-c4shb 1/1 Running 0 3h23m 192.168.180.47 k8s-node-2 <none> <none>
ingress nginx-ingress-microk8s-controller-nvcvx 1/1 Running 0 3h12m 10.1.140.66 k8s-node-2 <none> <none>
kube-system calico-node-ptwmk 1/1 Running 0 3h23m 192.168.180.48 k8s-node-3 <none> <none>
ingress nginx-ingress-microk8s-controller-hksg7 1/1 Running 0 3h12m 10.1.55.131 k8s-node-4 <none> <none>
ingress nginx-ingress-microk8s-controller-tk9dj 1/1 Running 0 3h12m 10.1.76.129 k8s-node-3 <none> <none>
ingress nginx-ingress-microk8s-controller-c8t54 1/1 Running 0 3h12m 10.1.109.66 k8s-node-1 <none> <none>
kube-system calico-node-k65fz 1/1 Running 0 3h22m 192.168.180.52 k8s-node-4 <none> <none>
kube-system coredns-64c6478b6c-584s8 1/1 Running 0 177m 10.1.109.67 k8s-node-1 <none> <none>
kube-system calico-kube-controllers-6966456d6b-vvnm6 1/1 Running 0 3h24m 10.1.109.65 k8s-node-1 <none> <none>
kube-system calico-node-7jhz9 1/1 Running 0 3h33m 192.168.180.46 k8s-node-1 <none> <none>
kube-system metrics-server-647bdc584d-ldf8q 1/1 Running 1 (3h19m ago) 3h20m 10.1.55.129 k8s-node-4 <none> <none>
kube-system kubernetes-dashboard-585bdb5648-8s9xt 1/1 Running 0 67m 10.1.140.67 k8s-node-2 <none> <none>
kube-system dashboard-metrics-scraper-69d9497b54-x7vt9 1/1 Running 0 67m 10.1.55.132 k8s-node-4 <none> <none>
Using an ingress is indeed the preferred way, but since you seem to have trouble with it in your environment, you can indeed use a LoadBalancer service.
To avoid the problem with the automatically generated certificates, provide your own certificate and private key to the dashboard, for example as a secret, and use the flags --tls-key-file and --tls-cert-file to point to them. More details: https://github.com/kubernetes/dashboard/blob/master/docs/user/certificate-management.md
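A sketch of that flow, assuming the certificate and key live in $HOME/certs (the file names below are placeholders; the linked page documents the exact secret name and flags):

```shell
# Store the certificate and key in a secret the dashboard can mount
kubectl create secret generic kubernetes-dashboard-certs \
  --from-file=$HOME/certs -n kube-system

# Then add these flags to the dashboard container's args and restart it:
#   --tls-cert-file=dashboard.crt
#   --tls-key-file=dashboard.key
```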
I have three nodes. When I shut down cdh-k8s-3.novalocal, the pods running on it stay in the Running state the whole time.
# kubectl get node
NAME STATUS ROLES AGE VERSION
cdh-k8s-1.novalocal Ready control-plane,master 15d v1.20.0
cdh-k8s-2.novalocal Ready <none> 9d v1.20.0
cdh-k8s-3.novalocal NotReady <none> 9d v1.20.0
# kubectl get pod -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
nginx-deployment-66b6c48dd5-5jtqv 1/1 Running 0 3h28m 10.244.26.110 cdh-k8s-3.novalocal <none> <none>
nginx-deployment-66b6c48dd5-fntn4 1/1 Running 0 3h28m 10.244.26.108 cdh-k8s-3.novalocal <none> <none>
nginx-deployment-66b6c48dd5-vz7hr 1/1 Running 0 3h28m 10.244.26.109 cdh-k8s-3.novalocal <none> <none>
My YAML:
apiVersion: apps/v1
kind: Deployment
metadata:
name: nginx-deployment
labels:
app: nginx
spec:
replicas: 3
selector:
matchLabels:
app: nginx
template:
metadata:
labels:
app: nginx
spec:
containers:
- name: nginx
image: nginx:1.14.2
ports:
- containerPort: 80
# kubectl get deployment
NAME READY UP-TO-DATE AVAILABLE AGE
nginx-deployment 0/3 3 0 3h28m
I found this in the docs:
DaemonSet pods are created with NoExecute tolerations for the following taints with no tolerationSeconds:
node.kubernetes.io/unreachable
node.kubernetes.io/not-ready
This ensures that DaemonSet pods are never evicted due to these problems.
But that applies to DaemonSets, not Deployments.
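The same taints do apply to Deployment pods: the DefaultTolerationSeconds admission plugin adds not-ready and unreachable NoExecute tolerations with tolerationSeconds: 300 to ordinary pods (visible in the pod descriptions elsewhere on this page), so eviction normally only begins after the node has been NotReady for about 5 minutes. To react faster, these tolerations can be set explicitly in the Deployment's pod template; a sketch of only the relevant part of the spec:

```yaml
spec:
  template:
    spec:
      tolerations:
      - key: node.kubernetes.io/unreachable
        operator: Exists
        effect: NoExecute
        tolerationSeconds: 30   # start evicting after 30s instead of the default 300s
      - key: node.kubernetes.io/not-ready
        operator: Exists
        effect: NoExecute
        tolerationSeconds: 30
```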
Current setup:
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
cl01mtr01 Ready master 104m v1.18.2+k3s1 10.1.1.1 <none> Debian GNU/Linux 10 (buster) 4.19.0-9-amd64 containerd://1.3.3-k3s2
cl01wkr01 Ready <none> 9m20s v1.18.2+k3s1 10.1.1.101 <none> Arch Linux ARM 5.4.40-1-ARCH containerd://1.3.3-k3s2
Master installed with:
export INSTALL_K3S_VERSION="v1.18.2+k3s1"
curl -sSLf https://get.k3s.io | sh -s - server \
--write-kubeconfig-mode 644 \
--cluster-cidr 172.20.0.0/16 \
--service-cidr 172.21.0.0/16 \
--cluster-dns 172.21.0.10 \
--disable traefik
Worker installed with:
export INSTALL_K3S_VERSION="v1.18.2+k3s1"
curl -sSLf https://get.k3s.io | sh -s - agent \
--server https://10.1.1.1:6443 \
--token <token from master>
I also tried with a Raspberry Pi as master, running Arch Linux and Raspbian, and with a Rock Pi 64 running Armbian.
I tried with k3s versions:
v1.17.4+k3s1
v1.17.5+k3s1
v1.18.2+k3s1
I also tested with Docker via the --docker install option in k3s.
The nodes get discovered (as shown above), but I cannot access the service on my worker node(s) (Raspberry Pi 3 with Arch Linux ARM) via http://10.1.1.1:30001, although it can be accessed via kubectl exec.
I always get a connection timeout:
This site can’t be reached
10.1.1.1 took too long to respond.
When the pod runs on the master node, or if the worker is an amd64 node, it can be accessed via http://10.1.1.1:30001.
This is the resource I try to load and access:
apiVersion: v1
kind: Namespace
metadata:
name: nginx
---
apiVersion: v1
kind: ConfigMap
metadata:
name: nginx-default-configmap
namespace: nginx
data:
default.conf: |
server {
listen 80;
listen [::]:80;
#server_name localhost;
location / {
root /usr/share/nginx/html;
index index.html index.htm;
}
error_page 500 502 503 504 /50x.html;
location = /50x.html {
root /usr/share/nginx/html;
}
}
---
apiVersion: v1
kind: Service
metadata:
name: nginx-service
namespace: nginx
spec:
ports:
- name: http
targetPort: 80
port: 80
nodePort: 30001
- name: https
targetPort: 443
port: 443
nodePort: 30002
selector:
app: nginx
type: NodePort
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
name: nginx-daemonset
namespace: nginx
labels:
app: nginx
spec:
selector:
matchLabels:
app: nginx
template:
metadata:
labels:
app: nginx
spec:
affinity:
nodeAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
nodeSelectorTerms:
- matchExpressions:
- key: node-role.kubernetes.io/master
operator: NotIn
values:
- "true"
containers:
- name: nginx
image: nginx:stable
imagePullPolicy: Always
env:
- name: TZ
value: "Europe/Brussels"
ports:
- name: http
containerPort: 80
- name: https
containerPort: 443
volumeMounts:
- name: default-conf
mountPath: /etc/nginx/conf.d/default.conf
subPath: default.conf
readOnly: true
restartPolicy: Always
volumes:
- name: default-conf
configMap:
name: nginx-default-configmap
Some extra info:
> kubectl get all -A -o wide
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
kube-system pod/local-path-provisioner-6d59f47c7-d477m 1/1 Running 0 116m 172.20.0.4 cl01mtr01 <none> <none>
kube-system pod/metrics-server-7566d596c8-fbb7b 1/1 Running 0 116m 172.20.0.2 cl01mtr01 <none> <none>
kube-system pod/coredns-8655855d6-gnbsm 1/1 Running 0 116m 172.20.0.3 cl01mtr01 <none> <none>
nginx pod/nginx-daemonset-l4j7s 1/1 Running 0 52s 172.20.1.3 cl01wkr01 <none> <none>
NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
default service/kubernetes ClusterIP 172.21.0.1 <none> 443/TCP 116m <none>
kube-system service/kube-dns ClusterIP 172.21.0.10 <none> 53/UDP,53/TCP,9153/TCP 116m k8s-app=kube-dns
kube-system service/metrics-server ClusterIP 172.21.152.234 <none> 443/TCP 116m k8s-app=metrics-server
nginx service/nginx-service NodePort 172.21.14.185 <none> 80:30001/TCP,443:30002/TCP 52s app=nginx
NAMESPACE NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE CONTAINERS IMAGES SELECTOR
nginx daemonset.apps/nginx-daemonset 1 1 1 1 1 <none> 52s nginx nginx:stable app=nginx
NAMESPACE NAME READY UP-TO-DATE AVAILABLE AGE CONTAINERS IMAGES SELECTOR
kube-system deployment.apps/local-path-provisioner 1/1 1 1 116m local-path-provisioner rancher/local-path-provisioner:v0.0.11 app=local-path-provisioner
kube-system deployment.apps/metrics-server 1/1 1 1 116m metrics-server rancher/metrics-server:v0.3.6 k8s-app=metrics-server
kube-system deployment.apps/coredns 1/1 1 1 116m coredns rancher/coredns-coredns:1.6.3 k8s-app=kube-dns
NAMESPACE NAME DESIRED CURRENT READY AGE CONTAINERS IMAGES SELECTOR
kube-system replicaset.apps/local-path-provisioner-6d59f47c7 1 1 1 116m local-path-provisioner rancher/local-path-provisioner:v0.0.11 app=local-path-provisioner,pod-template-hash=6d59f47c7
kube-system replicaset.apps/metrics-server-7566d596c8 1 1 1 116m metrics-server rancher/metrics-server:v0.3.6 k8s-app=metrics-server,pod-template-hash=7566d596c8
kube-system replicaset.apps/coredns-8655855d6 1 1 1 116m coredns rancher/coredns-coredns:1.6.3 k8s-app=kube-dns,pod-template-hash=8655855d6
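To help narrow down where the connection dies, each hop can be tested separately (a debugging sketch; the IPs and ports are taken from the output above — note that k3s's flannel VXLAN overlay uses UDP 8472 by default, a common blocker between nodes on mixed-architecture clusters):

```shell
# 1. From the master: can the pod be reached over the overlay network?
curl -m 5 http://172.20.1.3:80

# 2. From the master: can the worker itself serve the NodePort?
curl -m 5 http://10.1.1.101:30001

# 3. Is the VXLAN port open between the nodes?
#    (flannel in k3s uses UDP 8472 by default)
nc -vzu 10.1.1.101 8472
```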
I have created Docker images and deployed them to a k8s cluster with a minimal number of machines: one master and one worker, both up and running and talking to each other on the same VLAN network.
Please find below the pod, deployment, and service statuses, with describe output:
root#jenkins-linux-vm:/home/admin# kubectl describe services angular-service
Name: angular-service
Namespace: pre-release
Labels: <none>
Annotations: kubectl.kubernetes.io/last-applied-configuration:
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"name":"angular-service","namespace":"pre-release"},"spec":{"ports":[{"no...
Selector: app=frontend-app
Type: NodePort
IP: 10.96.151.155
Port: <unset> 80/TCP
TargetPort: 80/TCP
NodePort: <unset> 31000/TCP
Endpoints: 10.32.0.6:80
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>
root#jenkins-linux-vm:/home/admin# kubectl get pods
NAME READY STATUS RESTARTS AGE
angular-deployment-7b8d45f48d-b59pv 1/1 Running 0 51m
root#jenkins-linux-vm:/home/admin# kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
angular-service NodePort 10.96.151.155 <none> 80:31000/TCP 64m
root#jenkins-linux-vm:/home/admin# kubectl get pods --selector="app=frontend-app" --output=wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
angular-deployment-7b8d45f48d-b59pv 1/1 Running 0 52m 10.32.0.6 poc-worker2 <none> <none>
root#jenkins-linux-vm:/home/admin# kubectl describe pods angular-deployment-7b8d45f48d-b59pv
Name: angular-deployment-7b8d45f48d-b59pv
Namespace: pre-release
Priority: 0
Node: poc-worker2/10.0.0.6
Start Time: Tue, 21 Jan 2020 05:15:49 +0000
Labels: app=frontend-app
pod-template-hash=7b8d45f48d
Annotations: <none>
Status: Running
IP: 10.32.0.6
IPs:
IP: 10.32.0.6
Controlled By: ReplicaSet/angular-deployment-7b8d45f48d
Containers:
frontend-app:
Container ID: docker://751a9fb4a5e908fa1a02eb0460ab1659904362a727a028fdf72489df663a4f69
Image: frontend-app:future-master-fix-d1afa608
Image ID: docker://sha256:0099587db89de9ef999a7d1f087d4781e73c491b17e89392e92b08d2f935ad27
Port: 80/TCP
Host Port: 0/TCP
State: Running
Started: Tue, 21 Jan 2020 05:15:54 +0000
Ready: True
Restart Count: 0
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-r67p7 (ro)
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
default-token-r67p7:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-r67p7
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events: <none>
Now the problem is that I'm not able to access my application using the port; it's not working in a web browser either.
curl http://<public-node-ip>:<node-port>
curl http://10.0.0.6:31000
Dockerfile:
FROM node:latest as node
WORKDIR /app
COPY . .
RUN npm install
RUN npm run build --prod
# stage 2
FROM nginx:alpine
COPY --from=node /app/dist/hello-angular /usr/share/nginx/html
root#jenkins-linux-vm:/home/admin# kubectl exec -it angular-deployment-7b8d45f48d-b59pv curl 10.96.151.155:80
curl: (7) Failed to connect to 10.96.151.155 port 80: Connection refused
command terminated with exit code 7
root#jenkins-linux-vm:/home/admin/kubernetes# kubectl run busybox --image=busybox --restart=Never -it --rm --command -- /bin/sh -c "wget 10.96.208.252:80;cat index.html"
Connecting to 10.96.208.252:80 (10.96.208.252:80)
saving to 'index.html'
index.html 100% |********************************| 593 0:00:00 ETA
'index.html' saved
<!doctype html><html lang="en"><head><meta charset="utf-8"><title>AngularApp</title><base href="/"><meta name="viewport" content="width=device-width,initial-scale=1"><link rel="icon" type="image/x-icon" href="favicon.ico"><link href="styles.9c0ad738f18adc3d19ed.bundle.css" rel="stylesheet"/></head><body><app-root></app-root><script type="text/javascript" src="inline.720eace06148cc3e71aa.bundle.js"></script><script type="text/javascript" src="polyfills.f20484b2fa4642e0dca8.bundle.js"></script><script type="text/javascript" src="main.11bc84b3b98cd0d00106.bundle.js"></script></body></html>pod "busybox" deleted
root#jenkins-linux-vm:/home/admin/kubernetes# kubectl run busybox --image=busybox --restart=Never -it --rm --command -- /bin/sh -c "wget 10.0.0.6:32331;cat index.html"
Connecting to 10.0.0.6:32331 (10.0.0.6:32331)
wget: can't connect to remote host (10.0.0.6): Connection refused
cat: can't open 'index.html': No such file or directory
pod "busybox" deleted
pod pre-release/busybox terminated (Error)
I am taking a pre-built Angular image from Docker Hub (thanks to https://github.com/nheidloff/web-apps-kubernetes/tree/master/angular-app); we will use this image as the baseline below.
Create a deployment and service using the YAMLs below.
Deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: angular-app
spec:
replicas: 1
selector:
matchLabels:
run: angular-app
template:
metadata:
labels:
run: angular-app
spec:
containers:
- name: angular-app
image: nheidloff/angular-app
ports:
- containerPort: 80
- containerPort: 443
Service.yaml
apiVersion: v1
kind: Service
metadata:
name: angular-app
labels:
run: angular-app
spec:
type: NodePort
ports:
- port: 80
targetPort: 80
protocol: TCP
name: http
- port: 443
protocol: TCP
name: https
selector:
run: angular-app
Run the commands below on your cluster to create the resources:
$ kubectl create -f Deployment.yaml
$ kubectl create -f Service.yaml
This should result in the deployment and service configuration below:
$ kubectl get all -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
pod/angular-app-694d97d56c-7m4x4 1/1 Running 0 8m23s 10.244.3.10 k8s-node-3 <none> <none>
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
service/angular-app NodePort 10.96.150.136 <none> 80:32218/TCP,443:30740/TCP 8m23s run=angular-app
service/kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 8d <none>
NAME READY UP-TO-DATE AVAILABLE AGE CONTAINERS IMAGES SELECTOR
deployment.apps/angular-app 1/1 1 1 8m23s angular-app nheidloff/angular-app run=angular-app
NAME DESIRED CURRENT READY AGE CONTAINERS IMAGES SELECTOR
replicaset.apps/angular-app-694d97d56c 1 1 1 8m23s angular-app nheidloff/angular-app pod-template-hash=694d97d56c,run=angular-app
From the above we can see the pod is running on node-3, so identify the IP of node-3.
We can also see that the service has exposed ports 32218/TCP and 30740/TCP.
$ kubectl get nodes -o wide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
master-1 Ready master 8d v1.17.0 111.112.113.107 <none> Ubuntu 16.04.6 LTS 4.4.0-169-generic docker://18.6.2
node-1 Ready <none> 8d v1.17.0 111.112.113.108 <none> Ubuntu 16.04.6 LTS 4.4.0-169-generic docker://18.6.2
node-2 Ready <none> 8d v1.17.0 111.112.113.109 <none> Ubuntu 16.04.6 LTS 4.4.0-169-generic docker://18.6.2
node-3 Ready <none> 8d v1.17.0 111.112.113.110 <none> Ubuntu 16.04.6 LTS 4.4.0-169-generic docker://18.6.2
So we need to access the app via node3:NodePort, i.e. 111.112.113.110:32218, as the URL; this is how I access the app in a browser.
I have the rule below open at the cluster level to allow browser access to apps on the default NodePort range.
NOTE : Ingress IPv4 TCP 30000 - 32767 0.0.0.0/0
To ensure you are able to open your app by NodePort in a browser, you should try to establish that:
There are no rules blocking the default NodePort range (ports 30000-32767) in the security rules or firewall on the cluster network.
For example, verify you have the following security rule open on the cluster network for the NodePort range to work in a browser:
Ingress IPv4 TCP 30000 - 32767 0.0.0.0/0
Once you have confirmed there is no security-group rule issue, I would take the approach below to debug port reachability at the node level: perform a basic test to check whether an nginx web server can be deployed and reached in a browser via a NodePort.
Steps:
Deploy an nginx deployment using the nginx.yaml below
apiVersion: apps/v1
kind: Deployment
metadata:
name: my-nginx
spec:
selector:
matchLabels:
run: my-nginx
replicas: 1
template:
metadata:
labels:
run: my-nginx
spec:
containers:
- name: my-nginx
image: nginx
ports:
- containerPort: 80
Verify deployment is up and running
$ kubectl apply -f nginx.yaml
$ kubectl get all
NAME READY STATUS RESTARTS AGE
pod/my-nginx-75897978cd-ptqv9 1/1 Running 0 32s
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 4d11h
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/my-nginx 1/1 1 1 33s
NAME DESIRED CURRENT READY AGE
replicaset.apps/my-nginx-75897978cd 1 1 1 33s
Now create a service to expose the nginx deployment using the example below
apiVersion: v1
kind: Service
metadata:
name: my-nginx
labels:
run: my-nginx
spec:
type: NodePort
ports:
- port: 8080
targetPort: 80
protocol: TCP
name: http
selector:
run: my-nginx
Verify the service is created and identify the NodePort assigned (since we did not specify a fixed port in service.yaml; in the example below, the NodePort is 32502):
$ kubectl apply -f service.yaml
$ kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 4d11h
my-nginx NodePort 10.96.174.234 <none> 8080:32502/TCP 12s
In addition to the NodePort, identify the IP of your master node (131.112.113.101 below):
$ kubectl get nodes -o wide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
master-1 Ready master 4d11h v1.17.0 131.112.113.101 <none> Ubuntu 16.04.6 LTS 4.4.0-169-generic docker://18.6.2
node-1 Ready <none> 4d11h v1.17.0 131.112.113.102 <none> Ubuntu 16.04.6 LTS 4.4.0-169-generic docker://18.6.2
node-2 Ready <none> 4d11h v1.17.0 131.112.113.103 <none> Ubuntu 16.04.6 LTS 4.4.0-169-generic docker://18.6.2
Now, if you try to access the nginx application in your browser using the IP of your master node with the NodePort value, i.e. <masternode>:<nodeport> (131.112.113.101:32502), you should see the nginx welcome page.
Note the container port used in nginx.yaml and the targetPort in service.yaml (i.e. 80); you should be able to map this onto your app. Hope this helps you understand the issue at your node/cluster level, if any.
I am not sure if I understood what you are trying to do.
Below command is to open a bash shell in the pod:
kubectl exec -it angular-deployment-7b8d45f48d-b59pv -- /bin/bash
You can connect to a pod, then try curl.
The service is defined as a NodePort type,
and it is using nodePort 31000.
Try hitting the below url in your browser
http://HOSTNAME:31000
where HOSTNAME can be the hostname or IP of any of the cluster nodes.
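For example (a sketch; substitute one of your own node addresses for the placeholder):

```shell
# List node addresses, then hit the NodePort from outside the cluster
kubectl get nodes -o wide
curl -v http://<node-ip-or-hostname>:31000
```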