How to access Kubernetes deployment

I have created Docker images and deployed them in a k8s cluster with a minimal number of machines: one master and one worker. Both machines are up and running and talk to each other on the same VLAN network.
Please find below the pod and service details with their described status.
root@jenkins-linux-vm:/home/admin# kubectl describe services angular-service
Name: angular-service
Namespace: pre-release
Labels: <none>
Annotations: kubectl.kubernetes.io/last-applied-configuration:
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"name":"angular-service","namespace":"pre-release"},"spec":{"ports":[{"no...
Selector: app=frontend-app
Type: NodePort
IP: 10.96.151.155
Port: <unset> 80/TCP
TargetPort: 80/TCP
NodePort: <unset> 31000/TCP
Endpoints: 10.32.0.6:80
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>
root@jenkins-linux-vm:/home/admin# kubectl get pods
NAME READY STATUS RESTARTS AGE
angular-deployment-7b8d45f48d-b59pv 1/1 Running 0 51m
root@jenkins-linux-vm:/home/admin# kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
angular-service NodePort 10.96.151.155 <none> 80:31000/TCP 64m
root@jenkins-linux-vm:/home/admin# kubectl get pods --selector="app=frontend-app" --output=wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
angular-deployment-7b8d45f48d-b59pv 1/1 Running 0 52m 10.32.0.6 poc-worker2 <none> <none>
root@jenkins-linux-vm:/home/admin# kubectl describe pods angular-deployment-7b8d45f48d-b59pv
Name: angular-deployment-7b8d45f48d-b59pv
Namespace: pre-release
Priority: 0
Node: poc-worker2/10.0.0.6
Start Time: Tue, 21 Jan 2020 05:15:49 +0000
Labels: app=frontend-app
pod-template-hash=7b8d45f48d
Annotations: <none>
Status: Running
IP: 10.32.0.6
IPs:
IP: 10.32.0.6
Controlled By: ReplicaSet/angular-deployment-7b8d45f48d
Containers:
frontend-app:
Container ID: docker://751a9fb4a5e908fa1a02eb0460ab1659904362a727a028fdf72489df663a4f69
Image: frontend-app:future-master-fix-d1afa608
Image ID: docker://sha256:0099587db89de9ef999a7d1f087d4781e73c491b17e89392e92b08d2f935ad27
Port: 80/TCP
Host Port: 0/TCP
State: Running
Started: Tue, 21 Jan 2020 05:15:54 +0000
Ready: True
Restart Count: 0
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-r67p7 (ro)
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
default-token-r67p7:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-r67p7
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events: <none>
Now the problem is that I'm not able to access my application via the node port; it doesn't work with curl, and not in a web browser either.
curl http://<public-node-ip>:<node-port>
curl http://10.0.0.6:31000
Dockerfile
# stage 1: build the Angular app
FROM node:latest as node
WORKDIR /app
COPY . .
RUN npm install
RUN npm run build --prod
# stage 2: serve the compiled app with nginx
FROM nginx:alpine
COPY --from=node /app/dist/hello-angular /usr/share/nginx/html
root@jenkins-linux-vm:/home/admin# kubectl exec -it angular-deployment-7b8d45f48d-b59pv curl 10.96.151.155:80
curl: (7) Failed to connect to 10.96.151.155 port 80: Connection refused
command terminated with exit code 7
root@jenkins-linux-vm:/home/admin/kubernetes# kubectl run busybox --image=busybox --restart=Never -it --rm --command -- /bin/sh -c "wget 10.96.208.252:80;cat index.html"
Connecting to 10.96.208.252:80 (10.96.208.252:80)
saving to 'index.html'
index.html 100% |********************************| 593 0:00:00 ETA
'index.html' saved
<!doctype html><html lang="en"><head><meta charset="utf-8"><title>AngularApp</title><base href="/"><meta name="viewport" content="width=device-width,initial-scale=1"><link rel="icon" type="image/x-icon" href="favicon.ico"><link href="styles.9c0ad738f18adc3d19ed.bundle.css" rel="stylesheet"/></head><body><app-root></app-root><script type="text/javascript" src="inline.720eace06148cc3e71aa.bundle.js"></script><script type="text/javascript" src="polyfills.f20484b2fa4642e0dca8.bundle.js"></script><script type="text/javascript" src="main.11bc84b3b98cd0d00106.bundle.js"></script></body></html>pod "busybox" deleted
root@jenkins-linux-vm:/home/admin/kubernetes# kubectl run busybox --image=busybox --restart=Never -it --rm --command -- /bin/sh -c "wget 10.0.0.6:32331;cat index.html"
Connecting to 10.0.0.6:32331 (10.0.0.6:32331)
wget: can't connect to remote host (10.0.0.6): Connection refused
cat: can't open 'index.html': No such file or directory
pod "busybox" deleted
pod pre-release/busybox terminated (Error)

I am taking a pre-built Angular image from Docker Hub (thanks to https://github.com/nheidloff/web-apps-kubernetes/tree/master/angular-app); we will use this image as the baseline below.
Create a deployment and a service using the below YAMLs.
Deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: angular-app
spec:
  replicas: 1
  selector:
    matchLabels:
      run: angular-app
  template:
    metadata:
      labels:
        run: angular-app
    spec:
      containers:
      - name: angular-app
        image: nheidloff/angular-app
        ports:
        - containerPort: 80
        - containerPort: 443
Service.yaml
apiVersion: v1
kind: Service
metadata:
  name: angular-app
  labels:
    run: angular-app
spec:
  type: NodePort
  ports:
  - port: 80
    targetPort: 80
    protocol: TCP
    name: http
  - port: 443
    protocol: TCP
    name: https
  selector:
    run: angular-app
Run the below on your cluster to create the resources:
$ kubectl create -f Deployment.yaml
$ kubectl create -f Service.yaml
This should result in the below deployment and service configuration:
$ kubectl get all -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
pod/angular-app-694d97d56c-7m4x4 1/1 Running 0 8m23s 10.244.3.10 k8s-node-3 <none> <none>
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
service/angular-app NodePort 10.96.150.136 <none> 80:32218/TCP,443:30740/TCP 8m23s run=angular-app
service/kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 8d <none>
NAME READY UP-TO-DATE AVAILABLE AGE CONTAINERS IMAGES SELECTOR
deployment.apps/angular-app 1/1 1 1 8m23s angular-app nheidloff/angular-app run=angular-app
NAME DESIRED CURRENT READY AGE CONTAINERS IMAGES SELECTOR
replicaset.apps/angular-app-694d97d56c 1 1 1 8m23s angular-app nheidloff/angular-app pod-template-hash=694d97d56c,run=angular-app
From the above we can see the pod is running on node-3, so identify the IP of node-3;
we also see that the service has exposed NodePorts 32218/TCP and 30740/TCP.
$ kubectl get nodes -o wide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
master-1 Ready master 8d v1.17.0 111.112.113.107 <none> Ubuntu 16.04.6 LTS 4.4.0-169-generic docker://18.6.2
node-1 Ready <none> 8d v1.17.0 111.112.113.108 <none> Ubuntu 16.04.6 LTS 4.4.0-169-generic docker://18.6.2
node-2 Ready <none> 8d v1.17.0 111.112.113.109 <none> Ubuntu 16.04.6 LTS 4.4.0-169-generic docker://18.6.2
node-3 Ready <none> 8d v1.17.0 111.112.113.110 <none> Ubuntu 16.04.6 LTS 4.4.0-169-generic docker://18.6.2
So we need to access the app via node-3:NodePort, i.e. http://111.112.113.110:32218 as the URL.
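For example, from any machine that can reach the node network:
$ curl http://111.112.113.110:32218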
I have the below rule open at cluster level to allow the browser to access apps on the default NodePort range.
NOTE: Ingress IPv4 TCP 30000 - 32767 0.0.0.0/0

To ensure you are able to open your app by NodePort in a browser, you should first establish that:
There are no rules blocking the default NodePort range (i.e. from port 30000 to port 32767) in the security rules or firewall on the cluster network.
For example, verify you have the below security rule open on the cluster network for the NodePort range to work in a browser.
Ingress IPv4 TCP 30000 - 32767 0.0.0.0/0
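If your nodes sit behind a cloud security group, a hedged example of opening that range (AWS CLI shown purely as an illustration; the group ID is a placeholder for your nodes' security group):
# allow the default NodePort range from anywhere; tighten the CIDR in production
aws ec2 authorize-security-group-ingress \
  --group-id <your-node-security-group-id> \
  --protocol tcp --port 30000-32767 --cidr 0.0.0.0/0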
Once you have confirmed there is no security group rule issue, I would take the below approach to debug and find what's wrong with port reachability at the node level: perform a basic test and check whether an nginx web server can be deployed and reached in a browser via a node port.
Steps:
Deploy an nginx Deployment using the below nginx.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-nginx
spec:
  selector:
    matchLabels:
      run: my-nginx
  replicas: 1
  template:
    metadata:
      labels:
        run: my-nginx
    spec:
      containers:
      - name: my-nginx
        image: nginx
        ports:
        - containerPort: 80
Verify the deployment is up and running:
$ kubectl apply -f nginx.yaml
$ kubectl get all
NAME READY STATUS RESTARTS AGE
pod/my-nginx-75897978cd-ptqv9 1/1 Running 0 32s
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 4d11h
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/my-nginx 1/1 1 1 33s
NAME DESIRED CURRENT READY AGE
replicaset.apps/my-nginx-75897978cd 1 1 1 33s
Now create a service to expose the nginx deployment, using the below example:
apiVersion: v1
kind: Service
metadata:
  name: my-nginx
  labels:
    run: my-nginx
spec:
  type: NodePort
  ports:
  - port: 8080
    targetPort: 80
    protocol: TCP
    name: http
  selector:
    run: my-nginx
Verify the service is created and identify the NodePort assigned (since we did not provide a fixed nodePort in service.yaml, one is allocated automatically; below it is 32502):
$ kubectl apply -f service.yaml
$ kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 4d11h
my-nginx NodePort 10.96.174.234 <none> 8080:32502/TCP 12s
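You can also read the assigned NodePort straight from the service with jsonpath:
$ kubectl get svc my-nginx -o jsonpath='{.spec.ports[0].nodePort}'
32502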
In addition to the NodePort, identify the IP of your master node, i.e. 131.112.113.101 below:
$ kubectl get nodes -o wide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
master-1 Ready master 4d11h v1.17.0 131.112.113.101 <none> Ubuntu 16.04.6 LTS 4.4.0-169-generic docker://18.6.2
node-1 Ready <none> 4d11h v1.17.0 131.112.113.102 <none> Ubuntu 16.04.6 LTS 4.4.0-169-generic docker://18.6.2
node-2 Ready <none> 4d11h v1.17.0 131.112.113.103 <none> Ubuntu 16.04.6 LTS 4.4.0-169-generic docker://18.6.2
Now if you try to access the nginx application using the IP of your master node with the NodePort value, like <masternode>:<nodeport> (i.e. 131.112.113.101:32502), in your browser, you should get the default nginx welcome page.
Note the containerPort used in nginx.yaml and the targetPort in service.yaml (i.e. 80); you should be able to figure this out for your app. Hope this helps you understand the issue at your node/cluster level, if any.
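If the browser still cannot reach the app after these checks, reusing the busybox pattern from earlier in this thread separates pod problems from node-level problems (the placeholders stand for values from kubectl get pods -o wide and kubectl get nodes -o wide):
# hit the pod directly, bypassing the service
kubectl run busybox --image=busybox --restart=Never -it --rm -- wget -qO- http://<pod-ip>:80
# hit the NodePort on a node, the same path the browser takes
kubectl run busybox --image=busybox --restart=Never -it --rm -- wget -qO- http://<node-ip>:32502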

I am not sure if I understood what you are trying to do.
The below command opens a shell in the pod (note that nginx:alpine ships /bin/sh, not bash):
kubectl exec -it angular-deployment-7b8d45f48d-b59pv -- /bin/sh
You can connect to the pod, then test from inside it.
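For example, once inside (a sketch assuming the Alpine-based image, which provides busybox wget even when curl is absent), check that the container itself answers on its port:
wget -qO- http://localhost:80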

The service is defined with type NodePort.
It is using nodePort 31000.
Try hitting the below URL in your browser:
http://HOSTNAME:31000
HOSTNAME can be the hostname or IP address of any of the cluster nodes.
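You can list the hostnames and internal IPs of all cluster nodes with:
kubectl get nodes -o wide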

Related

On minikube with WSL2, LoadBalancer or NodePort not working, but kubectl expose works well

I'm using minikube on WSL2.
I deployed a simple Flask app image and wrote a LoadBalancer service to expose it.
My question is: how do I modify the service manifest to get the same result as kubectl expose?
Below are more details.
Flask app deployment YAML:
rss.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: esg
spec:
  selector:
    matchLabels:
      app: rss
  replicas: 3
  template:
    metadata:
      labels:
        app: rss
    spec:
      containers:
      - name: rss
        image: "idioluck/00esg_rss:v01"
        ports:
        - containerPort: 5000
Service YAML (I tried both NodePort and LoadBalancer):
rss_lb.yaml
apiVersion: v1
kind: Service
metadata:
  name: esg-lb
spec:
  type: NodePort # LoadBalancer
  selector:
    app: rss
  ports:
  - protocol: TCP
    port: 8080
    targetPort: 5000
The kubectl commands are:
sjw@DESKTOP-MFPNHRC:~/esg_kube/kubesvc/rss$ kubectl apply -f rss.yaml
deployment.apps/esg created
sjw@DESKTOP-MFPNHRC:~/esg_kube/kubesvc/rss$ kubectl apply -f rss_lb.yaml
service/esg-lb created
sjw@DESKTOP-MFPNHRC:~/esg_kube/kubesvc/rss$ kubectl get pods
NAME READY STATUS RESTARTS AGE
esg-757f659b4-4vndc 1/1 Running 0 13s
esg-757f659b4-4wd2w 1/1 Running 0 13s
esg-757f659b4-sf5q6 1/1 Running 0 13s
sjw@DESKTOP-MFPNHRC:~/esg_kube/kubesvc/rss$ kubectl get all
NAME READY STATUS RESTARTS AGE
pod/esg-757f659b4-4vndc 1/1 Running 0 16s
pod/esg-757f659b4-4wd2w 1/1 Running 0 16s
pod/esg-757f659b4-sf5q6 1/1 Running 0 16s
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/esg-lb LoadBalancer 10.101.221.26 <pending> 8080:31308/TCP 8s
service/kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 23h
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/esg 3/3 3 3 16s
NAME DESIRED CURRENT READY AGE
replicaset.apps/esg-757f659b4 3 3 3 16s
External IP is pending,
so I deleted the LoadBalancer and used expose instead:
sjw@DESKTOP-MFPNHRC:~/esg_kube/kubesvc/rss$ kubectl expose deployment esg --type=LoadBalancer --port=8080
service/esg exposed
sjw@DESKTOP-MFPNHRC:~/esg_kube/kubesvc/rss$ kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
esg LoadBalancer 10.99.208.98 127.0.0.1 8080:30929/TCP 46s
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 23h
The service has been successfully exposed, and it is of type LoadBalancer.
Your LoadBalancer type service is showing Pending status because it is waiting for an external load balancer to be provisioned, such as AWS's Elastic Load Balancer or GCP's Load Balancer. LoadBalancer type services are usually used together with a managed Kubernetes service, e.g. EKS, GKE, etc.
On the other hand, you're able to expose your service because it already has a clusterIP assigned to it.
If you want to use an LB in minikube, this official doc may help you. Otherwise, you can use a NodePort type service directly to expose your Flask app.
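For example, a minimal sketch using minikube's built-in tunnel (keep it running in a separate terminal; it creates a network route so LoadBalancer services receive an external IP):
$ minikube tunnel
$ kubectl get svc esg-lb   # EXTERNAL-IP should now be populated instead of <pending>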

How to access kubernetes microk8s dashboard remotely without Ingress?

I am new to Kubernetes and I am trying to deploy a MicroK8s cluster on 4 Raspberry Pis.
I have been struggling with setting up the dashboard for (no joke) a total of about 30 hours now and am starting to get extremely frustrated.
I just cannot access the dashboard remotely.
Solutions that didn't work out:
No.1 Ingress:
I managed to enable ingress, but it seems to be extremely complicated to connect it to the dashboard since I manually have to resolve DNS properties inside pods and host machines.
I eventually gave up on that. There is also no documentation whatsoever available on how to set up an ingress without having a valid bought domain pointing at your Ingress node.
If you are able to guide me through this, I am up for it.
No.2 Change service type of dashboard to LoadBalancer or NodePort:
With this method I can actually expose the dashboard... but it can only be accessed through https. Since the dashboard seems to use self-signed certificates or some other mechanism, I cannot access it via a browser. The browsers (Chrome, Firefox) always refuse to connect to the dashboard. When I try to access it via http, the browsers say I need to use https.
No.3 kube-proxy:
This only allows localhost connections. You can pass arguments to kube-proxy to allow other hosts to access the dashboard... but then again we have the https/http problem.
At this point it is just amazing to me how extremely hard it is to just access this simple dashboard... Can anybody give any advice on how to access it?
a@k8s-node-1:~/kubernetes$ kctl describe service kubernetes-dashboard -n kube-system
Name: kubernetes-dashboard
Namespace: kube-system
Labels: k8s-app=kubernetes-dashboard
Annotations: <none>
Selector: k8s-app=kubernetes-dashboard
Type: NodePort
IP Family Policy: SingleStack
IP Families: IPv4
IP: 10.152.183.249
IPs: 10.152.183.249
Port: <unset> 443/TCP
TargetPort: 8443/TCP
NodePort: <unset> 32228/TCP
Endpoints: 10.1.140.67:8443
Session Affinity: None
External Traffic Policy: Cluster
$ kubectl edit svc -n kube-system kubernetes-dashboard
# Please edit the object below. Lines beginning with a '#' will be ignored,
# and an empty file will abort the edit. If an error occurs while saving this file will be
# reopened with the relevant failures.
#
apiVersion: v1
kind: Service
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"labels":{"k8s-app":"kubernetes-dashboard"},"name":"kubernetes-dashboard","namespace":"kube-system"},"spec":{"ports":[{"port":443,"targetPort":8443}],"selector":{"k8s>
  creationTimestamp: "2022-03-21T14:30:10Z"
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
  resourceVersion: "43060"
  selfLink: /api/v1/namespaces/kube-system/services/kubernetes-dashboard
  uid: fcb45ccc-070b-4a4d-b987-41f5b7777559
spec:
  clusterIP: 10.152.183.249
  clusterIPs:
  - 10.152.183.249
  externalTrafficPolicy: Cluster
  internalTrafficPolicy: Cluster
  ipFamilies:
  - IPv4
  ipFamilyPolicy: SingleStack
  ports:
  - nodePort: 32228
    port: 443
    protocol: TCP
    targetPort: 8443
  selector:
    k8s-app: kubernetes-dashboard
  sessionAffinity: None
  type: NodePort
status:
  loadBalancer: {}
a@k8s-node-1:~/kubernetes$ kctl get services -n kube-system
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
metrics-server ClusterIP 10.152.183.233 <none> 443/TCP 165m
kube-dns ClusterIP 10.152.183.10 <none> 53/UDP,53/TCP,9153/TCP 142m
dashboard-metrics-scraper ClusterIP 10.152.183.202 <none> 8000/TCP 32m
kubernetes-dashboard NodePort 10.152.183.249 <none> 443:32228/TCP 32m
a@k8s-node-1:~/kubernetes$ cat dashboard-ingress.yaml
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
  name: dashboard
  namespace: kube-system
spec:
  rules:
  - host: nonexistent.net
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: kubernetes-dashboard
            port:
              number: 8080
a@k8s-node-1:~/kubernetes$ kctl get pods --all-namespaces -o wide
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
kube-system calico-node-c4shb 1/1 Running 0 3h23m 192.168.180.47 k8s-node-2 <none> <none>
ingress nginx-ingress-microk8s-controller-nvcvx 1/1 Running 0 3h12m 10.1.140.66 k8s-node-2 <none> <none>
kube-system calico-node-ptwmk 1/1 Running 0 3h23m 192.168.180.48 k8s-node-3 <none> <none>
ingress nginx-ingress-microk8s-controller-hksg7 1/1 Running 0 3h12m 10.1.55.131 k8s-node-4 <none> <none>
ingress nginx-ingress-microk8s-controller-tk9dj 1/1 Running 0 3h12m 10.1.76.129 k8s-node-3 <none> <none>
ingress nginx-ingress-microk8s-controller-c8t54 1/1 Running 0 3h12m 10.1.109.66 k8s-node-1 <none> <none>
kube-system calico-node-k65fz 1/1 Running 0 3h22m 192.168.180.52 k8s-node-4 <none> <none>
kube-system coredns-64c6478b6c-584s8 1/1 Running 0 177m 10.1.109.67 k8s-node-1 <none> <none>
kube-system calico-kube-controllers-6966456d6b-vvnm6 1/1 Running 0 3h24m 10.1.109.65 k8s-node-1 <none> <none>
kube-system calico-node-7jhz9 1/1 Running 0 3h33m 192.168.180.46 k8s-node-1 <none> <none>
kube-system metrics-server-647bdc584d-ldf8q 1/1 Running 1 (3h19m ago) 3h20m 10.1.55.129 k8s-node-4 <none> <none>
kube-system kubernetes-dashboard-585bdb5648-8s9xt 1/1 Running 0 67m 10.1.140.67 k8s-node-2 <none> <none>
kube-system dashboard-metrics-scraper-69d9497b54-x7vt9 1/1 Running 0 67m 10.1.55.132 k8s-node-4 <none> <none>
Using an Ingress is indeed the preferred way, but since you seem to have trouble with it in your environment, you can use a LoadBalancer service instead.
To avoid the problem with the automatically generated certificates, provide your own certificate and private key to the dashboard, for example as a secret, and use the flags --tls-key-file and --tls-cert-file to point at them. More details: https://github.com/kubernetes/dashboard/blob/master/docs/user/certificate-management.md
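A hedged sketch of that approach (the secret and file names follow the linked doc's conventions and are assumptions; the namespace matches your MicroK8s setup):
# create a secret holding your own certificate and key (file names are examples)
kubectl create secret generic kubernetes-dashboard-certs \
  --from-file=dashboard.crt --from-file=dashboard.key -n kube-system
Then point the dashboard container at them by adding the flags to its args:
args:
- --tls-cert-file=dashboard.crt
- --tls-key-file=dashboard.key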

can't get product page from outside using browser via istio

Hey, it's been quite a few days struggling to get the sample book app running. I am new to Istio and trying to understand it. I followed this demo of another way of setting up Bookinfo. I am using minikube in a VirtualBox machine with Docker as the driver. I set up MetalLB as the load balancer for the ingress gateway; here is the ConfigMap I used for MetalLB:
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: custom-ip-space
      protocol: layer2
      addresses:
      - 192.168.49.2/28
The 192.168.49.2 address is the result of the command minikube ip.
The ingress gateway YAML file:
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: bookinfo-gateway
spec:
  selector:
    istio: ingressgateway # use istio default controller
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "*"
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: bookinfo
spec:
  hosts:
  - "*"
  gateways:
  - bookinfo-gateway
  http:
  - route:
    - destination:
        host: productpage
        port:
          number: 9080
and the output of kubectl get svc -n istio-system:
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
grafana ClusterIP 10.111.105.179 <none> 3000/TCP 34m
istio-citadel ClusterIP 10.100.38.218 <none> 8060/TCP,15014/TCP 34m
istio-egressgateway ClusterIP 10.101.66.207 <none> 80/TCP,443/TCP,15443/TCP 34m
istio-galley ClusterIP 10.103.112.155 <none> 443/TCP,15014/TCP,9901/TCP 34m
istio-ingressgateway LoadBalancer 10.97.23.39 192.168.49.0 15020:32717/TCP,80:31380/TCP,443:31390/TCP,31400:31400/TCP,15029:32199/TCP,15030:30010/TCP,15031:30189/TCP,15032:31134/TCP,15443:30748/TCP 34m
istio-pilot ClusterIP 10.108.133.31 <none> 15010/TCP,15011/TCP,8080/TCP,15014/TCP 34m
istio-policy ClusterIP 10.100.74.207 <none> 9091/TCP,15004/TCP,15014/TCP 34m
istio-sidecar-injector ClusterIP 10.97.224.99 <none> 443/TCP,15014/TCP 34m
istio-telemetry ClusterIP 10.101.165.139 <none> 9091/TCP,15004/TCP,15014/TCP,42422/TCP 34m
jaeger-agent ClusterIP None <none> 5775/UDP,6831/UDP,6832/UDP 34m
jaeger-collector ClusterIP 10.111.188.83 <none> 14267/TCP,14268/TCP,14250/TCP 34m
jaeger-query ClusterIP 10.103.148.144 <none> 16686/TCP 34m
kiali ClusterIP 10.111.57.222 <none> 20001/TCP 34m
prometheus ClusterIP 10.107.204.95 <none> 9090/TCP 34m
tracing ClusterIP 10.104.88.173 <none> 80/TCP 34m
zipkin ClusterIP 10.111.162.93 <none> 9411/TCP 34m
and when trying to curl 192.168.49.0:80/productpage I am getting:
* Trying 192.168.49.0...
* TCP_NODELAY set
* Immediate connect fail for 192.168.49.0: Network is unreachable
* Closing connection 0
curl: (7) Couldn't connect to server
myhost@k8s:~$ curl 192.168.49.0:80/productpage
curl: (7) Couldn't connect to server
Before setting up MetalLB, I was getting connection refused!
Any solution for this, please? It's been 5 days struggling to fix it.
I followed the steps here and all steps are OK!
In my opinion, this is a problem with the MetalLB configuration.
You are trying to give MetalLB control over IPs from the 192.168.49.2/28 network.
We can calculate for 192.168.49.2/28 network: HostMin=192.168.49.1 and HostMax=192.168.49.14.
As we can see, your istio-ingressgateway LoadBalancer Service is assigned the address 192.168.49.0 and I think that is the cause of the problem.
I recommend changing from 192.168.49.2/28 to a range, such as 192.168.49.10-192.168.49.20.
I've created an example to illustrate you how your configuration can be changed.
As you can see, at the beginning I had the configuration exactly like yours (I also couldn't connect to the server using the curl command):
$ kubectl get svc -n istio-system istio-ingressgateway
NAME TYPE CLUSTER-IP EXTERNAL-IP
istio-ingressgateway LoadBalancer 10.109.75.19 192.168.49.0
$ curl 192.168.49.0:80/productpage
curl: (7) Couldn't connect to server
First, I modified the config ConfigMap:
NOTE: I changed 192.168.49.2/28 to 192.168.49.10-192.168.49.20
$ kubectl edit cm config -n metallb-system
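After the edit, the data section should look like the ConfigMap from the question, with only the addresses changed:
config: |
  address-pools:
  - name: custom-ip-space
    protocol: layer2
    addresses:
    - 192.168.49.10-192.168.49.20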
Then I restarted all the controller and speaker Pods to force MetalLB to use the new config (see: MetalLB ConfigMap update).
$ kubectl delete pod -n metallb-system --all
pod "controller-65db86ddc6-gf49h" deleted
pod "speaker-7l66v" deleted
After some time, we should see a new EXTERNAL-IP assigned to the istio-ingressgateway Service:
$ kubectl get svc -n istio-system istio-ingressgateway
NAME TYPE CLUSTER-IP EXTERNAL-IP AGE
istio-ingressgateway LoadBalancer 10.106.170.227 192.168.49.10
Finally, we can check if it works as expected:
$ curl 192.168.49.10:80/productpage
<!DOCTYPE html>
<html>
<head>
<title>Simple Bookstore App</title>
...

k3s on arch linux ARM worker service not responding

Current setup:
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
cl01mtr01 Ready master 104m v1.18.2+k3s1 10.1.1.1 <none> Debian GNU/Linux 10 (buster) 4.19.0-9-amd64 containerd://1.3.3-k3s2
cl01wkr01 Ready <none> 9m20s v1.18.2+k3s1 10.1.1.101 <none> Arch Linux ARM 5.4.40-1-ARCH containerd://1.3.3-k3s2
Master installed with:
export INSTALL_K3S_VERSION="v1.18.2+k3s1"
curl -sSLf https://get.k3s.io | sh -s - server \
--write-kubeconfig-mode 644 \
--cluster-cidr 172.20.0.0/16 \
--service-cidr 172.21.0.0/16 \
--cluster-dns 172.21.0.10 \
--disable traefik
Worker installed with:
export INSTALL_K3S_VERSION="v1.18.2+k3s1"
curl -sSLf https://get.k3s.io | sh -s - agent \
--server https://10.1.1.1:6443 \
--token <token from master>
I also tried with a Raspberry Pi as master, running Arch Linux and Raspbian, and a Rock Pi 64 with Armbian.
I tried with k3s versions:
v1.17.4+k3s1
v1.17.5+k3s1
v1.18.2+k3s1
I also tested with docker and the --docker install option in k3s.
The nodes get discovered (as shown above), but I cannot access the service on my worker node(s) (Raspberry Pi 3 with Arch Linux ARM) via http://10.1.1.1:30001, although it can be accessed via kubectl exec.
I always get a connection timeout:
This site can’t be reached
10.1.1.1 took too long to respond.
When the pod runs on the master node, or if the worker is an amd64 node, it can be accessed via http://10.1.1.1:30001.
This is the resource I try to load and access:
apiVersion: v1
kind: Namespace
metadata:
  name: nginx
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-default-configmap
  namespace: nginx
data:
  default.conf: |
    server {
        listen 80;
        listen [::]:80;
        #server_name localhost;
        location / {
            root /usr/share/nginx/html;
            index index.html index.htm;
        }
        error_page 500 502 503 504 /50x.html;
        location = /50x.html {
            root /usr/share/nginx/html;
        }
    }
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
  namespace: nginx
spec:
  ports:
  - name: http
    targetPort: 80
    port: 80
    nodePort: 30001
  - name: https
    targetPort: 443
    port: 443
    nodePort: 30002
  selector:
    app: nginx
  type: NodePort
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: nginx-daemonset
  namespace: nginx
  labels:
    app: nginx
spec:
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: node-role.kubernetes.io/master
                operator: NotIn
                values:
                - "true"
      containers:
      - name: nginx
        image: nginx:stable
        imagePullPolicy: Always
        env:
        - name: TZ
          value: "Europe/Brussels"
        ports:
        - name: http
          containerPort: 80
        - name: https
          containerPort: 443
        volumeMounts:
        - name: default-conf
          mountPath: /etc/nginx/conf.d/default.conf
          subPath: default.conf
          readOnly: true
      restartPolicy: Always
      volumes:
      - name: default-conf
        configMap:
          name: nginx-default-configmap
Some extra info:
> kubectl get all -A -o wide
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
kube-system pod/local-path-provisioner-6d59f47c7-d477m 1/1 Running 0 116m 172.20.0.4 cl01mtr01 <none> <none>
kube-system pod/metrics-server-7566d596c8-fbb7b 1/1 Running 0 116m 172.20.0.2 cl01mtr01 <none> <none>
kube-system pod/coredns-8655855d6-gnbsm 1/1 Running 0 116m 172.20.0.3 cl01mtr01 <none> <none>
nginx pod/nginx-daemonset-l4j7s 1/1 Running 0 52s 172.20.1.3 cl01wkr01 <none> <none>
NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
default service/kubernetes ClusterIP 172.21.0.1 <none> 443/TCP 116m <none>
kube-system service/kube-dns ClusterIP 172.21.0.10 <none> 53/UDP,53/TCP,9153/TCP 116m k8s-app=kube-dns
kube-system service/metrics-server ClusterIP 172.21.152.234 <none> 443/TCP 116m k8s-app=metrics-server
nginx service/nginx-service NodePort 172.21.14.185 <none> 80:30001/TCP,443:30002/TCP 52s app=nginx
NAMESPACE NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE CONTAINERS IMAGES SELECTOR
nginx daemonset.apps/nginx-daemonset 1 1 1 1 1 <none> 52s nginx nginx:stable app=nginx
NAMESPACE NAME READY UP-TO-DATE AVAILABLE AGE CONTAINERS IMAGES SELECTOR
kube-system deployment.apps/local-path-provisioner 1/1 1 1 116m local-path-provisioner rancher/local-path-provisioner:v0.0.11 app=local-path-provisioner
kube-system deployment.apps/metrics-server 1/1 1 1 116m metrics-server rancher/metrics-server:v0.3.6 k8s-app=metrics-server
kube-system deployment.apps/coredns 1/1 1 1 116m coredns rancher/coredns-coredns:1.6.3 k8s-app=kube-dns
NAMESPACE NAME DESIRED CURRENT READY AGE CONTAINERS IMAGES SELECTOR
kube-system replicaset.apps/local-path-provisioner-6d59f47c7 1 1 1 116m local-path-provisioner rancher/local-path-provisioner:v0.0.11 app=local-path-provisioner,pod-template-hash=6d59f47c7
kube-system replicaset.apps/metrics-server-7566d596c8 1 1 1 116m metrics-server rancher/metrics-server:v0.3.6 k8s-app=metrics-server,pod-template-hash=7566d596c8
kube-system replicaset.apps/coredns-8655855d6 1 1 1 116m coredns rancher/coredns-coredns:1.6.3 k8s-app=kube-dns,pod-template-hash=8655855d6

Kubernetes Unable to Access pods

I have one master and one worker node, and both are up and running. I deployed an Angular application in my k8s cluster; when I inspect my pod logs, everything is working fine without any errors.
I am trying to access the application in a browser using the master and worker IP addresses followed by the node port number, like below, and I get an "unable to connect" error.
http://10.0.0.1:32394/
Name: frontend-app-6848bc9666-9ggz7
Namespace: pre-release
Priority: 0
Node: SBT-poc-worker2/10.0.0.5
Start Time: Fri, 17 Jan 2020 05:04:10 +0000
Labels: app=frontend-app
pod-template-hash=6848bc9666
Annotations: <none>
Status: Running
IP: 10.32.0.3
IPs:
IP: 10.32.0.3
Controlled By: ReplicaSet/frontend-app-6848bc9666
Containers:
frontend-app:
Container ID: docker://292199347e391c9feecd667e1668f32931f1fd7c670514eb1e05e4a37b8109ad
Image: frontend-app:future-master-fix-7ba35fbe
Image ID: docker://sha256:0099587db89de9ef999a7d1f087d4781e73c491b17e89392e92b08d2f935ad27
Port: 8080/TCP
Host Port: 0/TCP
State: Running
Started: Fri, 17 Jan 2020 05:04:15 +0000
Ready: True
Restart Count: 0
Limits:
cpu: 250m
memory: 256Mi
Requests:
cpu: 100m
memory: 128Mi
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-r67p7 (ro)
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
default-token-r67p7:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-r67p7
Optional: false
QoS Class: Burstable
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 5m44s default-scheduler Successfully assigned pre-release/frontend-app-6848bc9666-9ggz7 to SBT-poc-worker2
Normal Pulled 5m41s kubelet, SBT-poc-worker2 Container image "frontend-app:future-master-fix-7ba35fbe" already present on machine
Normal Created 5m39s kubelet, SBT-poc-worker2 Created container frontend-app
Normal Started 5m39s kubelet, SBT-poc-worker2 Started container frontend-app
root@jenkins-linux-vm:/home/SBT-admin# kubectl get pods -n pre-release
NAME READY STATUS RESTARTS AGE
frontend-app-6848bc9666-9ggz7 1/1 Running 0 7m26s
root@jenkins-linux-vm:/home/SBT-admin# kubectl get services -n pre-release
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
frontend-app NodePort 10.96.6.77 <none> 8080:32394/TCP 7m36s
root@jenkins-linux-vm:/home/SBT-admin# kubectl get deployment -n pre-release
NAME READY UP-TO-DATE AVAILABLE AGE
frontend-app 1/1 1 1 11m
root@jenkins-linux-vm:/home/SBT-admin# kubectl get -o yaml -n pre-release svc frontend-app
apiVersion: v1
kind: Service
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"labels":{"name":"frontend-app"},"name":"frontend-app","namespace":"pre-release"},"spec":{"ports":[{"port":8080,"targetPort":8080}],"selector":{"name":"frontend-app"},"type":"NodePort"}}
  creationTimestamp: "2020-01-17T05:04:10Z"
  labels:
    name: frontend-app
  name: frontend-app
  namespace: pre-release
  resourceVersion: "1972713"
  selfLink: /api/v1/namespaces/pre-release/services/frontend-app
  uid: 91b87f9e-d723-498c-af05-5969645a82ee
spec:
  clusterIP: 10.96.6.77
  externalTrafficPolicy: Cluster
  ports:
  - nodePort: 32394
    port: 8080
    protocol: TCP
    targetPort: 8080
  selector:
    name: frontend-app
  sessionAffinity: None
  type: NodePort
status:
  loadBalancer: {}
root@jenkins-linux-vm:/home/SBT-admin# kubectl get pods --selector="app=frontend-app" --output=wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
frontend-app-7c7cf68f9c-n9tct 1/1 Running 0 58m 10.32.0.5 SBT-poc-worker2 <none> <none>
root@jenkins-linux-vm:/home/SBT-admin# kubectl get pods
NAME READY STATUS RESTARTS AGE
frontend-app-7c7cf68f9c-n9tct 1/1 Running 0 58m
root@jenkins-linux-vm:/home/SBT-admin# kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
frontend-app NodePort 10.96.21.202 <none> 8080:31098/TCP 59m
root@jenkins-linux-vm:/home/SBT-admin# kubectl get deployment
NAME READY UP-TO-DATE AVAILABLE AGE
frontend-app 1/1 1 1 59m
Can someone please help me fix this?

The label on the pod is app=frontend-app, as seen from the logs in your problem statement.
Your pod description shows the below label:
Name: frontend-app-6848bc9666-9ggz7
Namespace: pre-release
Priority: 0
Node: SBT-poc-worker2/10.0.0.5
Start Time: Fri, 17 Jan 2020 05:04:10 +0000
Labels: app=frontend-app
The selector field in the service YAML file is name: frontend-app; you should change this selector to app: frontend-app and update the service.
Your current selector value, shown below, does not match the label on the pod:
ports:
- nodePort: 32394
  port: 8080
  protocol: TCP
  targetPort: 8080
selector:
  name: frontend-app
Change it to:
selector:
  app: frontend-app
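If you prefer not to edit and re-apply the YAML, a hedged one-liner that makes the same change with a merge patch (setting the old key to null removes it) is:
kubectl -n pre-release patch svc frontend-app -p '{"spec":{"selector":{"app":"frontend-app","name":null}}}'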