Cannot access MetalLB from outside - kubernetes

I cannot access the IP that MetalLB in BGP mode assigned to me.
I used the virtual router from the doc https://metallb.universe.tf/tutorial/minikube/, but I am not using minikube.
What should I do?
I can access the IP from inside the Kubernetes cluster, but from outside the cluster it is not accessible.
#kubectl get svc --all-namespaces
NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
default frontend NodePort 10.106.156.219 <none> 80:30001/TCP 4h40m
default kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 25h
default mssql-deployment NodePort 10.98.127.234 <none> 1433:31128/TCP 25h
default nginx LoadBalancer 10.111.123.22 192.168.1.1 80:32715/TCP 4h49m
default nginx1 LoadBalancer 10.110.53.202 192.168.1.2 80:31881/TCP 79m
default nginx3 LoadBalancer 10.105.73.252 192.168.1.3 80:30515/TCP 77m
default nginx4 LoadBalancer 10.102.143.239 192.168.1.4 80:32467/TCP 63m
default nginx5 LoadBalancer 10.101.167.161 192.168.1.5 80:30752/TCP 62m
default nginx6 LoadBalancer 10.107.121.116 192.168.1.6 80:31832/TCP 61m
default php-apache ClusterIP 10.101.18.143 <none> 80/TCP 6h12m
default redis-master NodePort 10.98.3.18 <none> 6379:31446/TCP 4h40m
default redis-slave ClusterIP 10.110.29.67 <none> 6379/TCP 4h40m
kube-system kube-dns ClusterIP 10.96.0.10 <none> 53/UDP,53/TCP 25h
kube-system metrics-server ClusterIP 10.101.39.125 <none> 443/TCP 6h15m
metallb-system test-bgp-router-bird ClusterIP 10.96.0.100 <none> 179/TCP 81m
metallb-system test-bgp-router-quagga ClusterIP 10.96.0.101 <none> 179/TCP 81m
metallb-system test-bgp-router-ui NodePort 10.104.151.219 <none> 80:31983/TCP 81m
Every nginx IP is reachable from inside the Kubernetes cluster, but I cannot access any of them from outside.
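For reference: in BGP mode the assigned IPs only become reachable from outside once a real upstream router peers with the MetalLB speakers and installs the advertised routes. The test-bgp-router services listed above come from the tutorial and run inside the cluster, so they never propagate routes to the LAN. A minimal sketch of a legacy (pre-v0.13) MetalLB ConfigMap peering with a physical router — the peer address and ASNs are placeholders, not values from this question:

```yaml
# Sketch only: legacy MetalLB (< v0.13) ConfigMap for BGP mode.
# peer-address / peer-asn / my-asn are assumptions -- they must match
# a real router on your network, not the in-cluster test router.
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    peers:
    - peer-address: 10.0.0.1   # your physical router (assumption)
      peer-asn: 64500          # router's ASN (assumption)
      my-asn: 64501            # ASN MetalLB speaks as (assumption)
    address-pools:
    - name: default
      protocol: bgp
      addresses:
      - 192.168.1.0/28
```

Whether external clients can then reach 192.168.1.x depends entirely on the peer router redistributing those routes.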

Related

Nginx Ingress Controller not curling localhost on worker nodes

CentOS 7, 3 VMs -- 1 master and 2 workers, Kubernetes 1.26 (kubelet is 1.25.5), cri-dockerd, Calico CNI
# kubectl get nodes -o wide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
rxxxx-vm1 Ready control-plane 4h48m v1.25.5 10.253.137.20 <none> CentOS Linux 7 (Core) 3.10.0-1160.80.1.el7.x86_64 docker://20.10.22
rxxxx-vm2 Ready <none> 4h27m v1.25.5 10.253.137.17 <none> CentOS Linux 7 (Core) 3.10.0-1160.80.1.el7.x86_64 docker://20.10.22
rxxxx-vm3 Ready <none> 4h27m v1.25.5 10.253.137.10 <none> CentOS Linux 7 (Core) 3.10.0-1160.80.1.el7.x86_64 docker://20.10.22
NGINX Ingress controller is deployed as a daemonset:
# kubectl get po -A -o wide
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
calico-apiserver calico-apiserver-685568b969-b8mfr 1/1 Running 0 4h50m 172.17.0.6 rxxxx-vm1 <none> <none>
calico-apiserver calico-apiserver-685568b969-xrj2h 1/1 Running 0 4h50m 172.17.0.7 rxxxx-vm1 <none> <none>
calico-system calico-kube-controllers-67df98bdc8-2zdnj 1/1 Running 0 4h51m 172.17.0.4 rxxxx-vm1 <none> <none>
calico-system calico-node-498bb 1/1 Running 0 4h51m 10.253.137.20 rxxxx-vm1 <none> <none>
calico-system calico-node-sblv9 1/1 Running 0 4h30m 10.253.137.17 rxxxx-vm2 <none> <none>
calico-system calico-node-zkn28 1/1 Running 0 4h29m 10.253.137.10 rxxxx-vm3 <none> <none>
calico-system calico-typha-76c8f59f87-mq52d 1/1 Running 0 4h29m 10.253.137.10 rxxxx-vm3 <none> <none>
calico-system calico-typha-76c8f59f87-zk6jr 1/1 Running 0 4h51m 10.253.137.20 rxxxx-vm1 <none> <none>
kube-system coredns-787d4945fb-6mq5k 1/1 Running 0 4h51m 172.17.0.3 rxxxx-vm1 <none> <none>
kube-system coredns-787d4945fb-kmqcv 1/1 Running 0 4h51m 172.17.0.2 rxxxx-vm1 <none> <none>
kube-system etcd-rxxxx-vm1 1/1 Running 0 4h51m 10.253.137.20 rxxxx-vm1 <none> <none>
kube-system kube-apiserver-rxxxx-vm1 1/1 Running 0 4h51m 10.253.137.20 rxxxx-vm1 <none> <none>
kube-system kube-controller-manager-rxxxx-vm1 1/1 Running 0 4h51m 10.253.137.20 rxxxx-vm1 <none> <none>
kube-system kube-proxy-g9dbt 1/1 Running 0 4h29m 10.253.137.10 rxxxx-vm3 <none> <none>
kube-system kube-proxy-mnzks 1/1 Running 0 4h30m 10.253.137.17 rxxxx-vm2 <none> <none>
kube-system kube-proxy-n98xb 1/1 Running 0 4h51m 10.253.137.20 rxxxx-vm1 <none> <none>
kube-system kube-scheduler-rxxxx-vm1 1/1 Running 0 4h51m 10.253.137.20 rxxxx-vm1 <none> <none>
nginx-ingress nginx-ingress-2chhn 1/1 Running 0 4h29m 172.17.0.2 rxxxx-vm3 <none> <none>
nginx-ingress nginx-ingress-95h7s 1/1 Running 0 4h30m 172.17.0.2 rxxxx-vm2 <none> <none>
nginx-ingress nginx-ingress-wbxng 1/1 Running 0 4h51m 172.17.0.5 rxxxx-vm1 <none> <none>
play apple-app 1/1 Running 0 4h45m 172.17.0.8 rxxxx-vm1 <none> <none>
play banana-app 1/1 Running 0 4h45m 172.17.0.9 rxxxx-vm1 <none> <none>
tigera-operator tigera-operator-7795f5d79b-hmm5g 1/1 Running 0 4h51m 10.253.137.20 rxxxx-vm1 <none> <none>
Services:
# kubectl get svc -A -o wide
NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
calico-apiserver calico-api ClusterIP 10.111.117.42 <none> 443/TCP 6h34m apiserver=true
calico-system calico-kube-controllers-metrics ClusterIP 10.99.121.254 <none> 9094/TCP 6h35m k8s-app=calico-kube-controllers
calico-system calico-typha ClusterIP 10.104.50.90 <none> 5473/TCP 6h35m k8s-app=calico-typha
default kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 6h36m <none>
kube-system kube-dns ClusterIP 10.96.0.10 <none> 53/UDP,53/TCP,9153/TCP 6h36m k8s-app=kube-dns
play apple-service ClusterIP 10.98.78.251 <none> 5678/TCP 6h29m app=apple
play banana-service ClusterIP 10.103.87.112 <none> 5678/TCP 6h29m app=banana
Service details:
# kubectl -n play describe svc apple-service
Name: apple-service
Namespace: play
Labels: <none>
Annotations: <none>
Selector: app=apple
Type: ClusterIP
IP Family Policy: SingleStack
IP Families: IPv4
IP: 10.98.78.251
IPs: 10.98.78.251
Port: <unset> 5678/TCP
TargetPort: 5678/TCP
Endpoints: 172.17.0.8:5678
Session Affinity: None
Events: <none>
Endpoints:
# kubectl get ep -A
NAMESPACE NAME ENDPOINTS AGE
calico-apiserver calico-api 172.17.0.6:5443,172.17.0.7:5443 6h39m
calico-system calico-kube-controllers-metrics 172.17.0.4:9094 6h39m
calico-system calico-typha 10.253.137.10:5473,10.253.137.20:5473 6h40m
default kubernetes 10.253.137.20:6443 6h40m
kube-system kube-dns 172.17.0.2:53,172.17.0.3:53,172.17.0.2:53 + 3 more... 6h40m
play apple-service 172.17.0.8:5678 6h34m
play banana-service 172.17.0.9:5678 6h34m
Endpoint details:
# kubectl -n play describe ep apple-service
Name: apple-service
Namespace: play
Labels: <none>
Annotations: endpoints.kubernetes.io/last-change-trigger-time: 2023-01-11T20:21:27Z
Subsets:
Addresses: 172.17.0.8
NotReadyAddresses: <none>
Ports:
Name Port Protocol
---- ---- --------
<unset> 5678 TCP
Events: <none>
Ingress resource:
# kubectl get ing -A -o wide
NAMESPACE NAME CLASS HOSTS ADDRESS PORTS AGE
play example-ingress nginx localhost 80 6h30m
Ingress details:
# kubectl -n play describe ing example-ingress
Name: example-ingress
Labels: <none>
Namespace: play
Address:
Ingress Class: nginx
Default backend: <default>
Rules:
Host Path Backends
---- ---- --------
localhost
/apple apple-service:5678 (172.17.0.8:5678)
/banana banana-service:5678 (172.17.0.9:5678)
Annotations: ingress.kubernetes.io/rewrite-target: /
Events: <none>
QUESTION:
While curl -kL http://localhost/apple on the master node returns apple, the same command produces the output below on the worker nodes:
<html>
<head><title>502 Bad Gateway</title></head>
<body>
<center><h1>502 Bad Gateway</h1></center>
<hr><center>nginx/1.23.2</center>
</body>
</html>
My understanding is that if an ingress controller pod is running on every node, then localhost should resolve, since that is the host defined in the ingress resource. Is this understanding incorrect? If not, what am I doing wrong?
When I look at the logs of the ingress controller pod on the corresponding node, I see this:
2023/01/12 03:02:05 [error] 68#68: *11 connect() failed (113: No route to host) while connecting to upstream, client: 172.17.0.1, server: localhost, request: "GET /apple HTTP/1.1", upstream: "http://172.17.0.8:5678/apple", host: "localhost"
172.17.0.1 - - [12/Jan/2023:03:02:05 +0000] "GET /apple HTTP/1.1" 502 157 "-" "curl/7.29.0" "-"

Load balancer is being provisioned with MetalLB

I have a k8s cluster v1.24.4+rke2r1, created by Rancher. I have already installed MetalLB, but when I try to create a pod with nginx and assign a LoadBalancer, I still get:
Service is ready. Load balancer is being provisioned
This is my pod and service config:
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
  annotations:
    metallb.universe.tf/address-pool: public-ips
spec:
  selector:
    app: nginx
  ports:
  - port: 80
    targetPort: 80
  type: LoadBalancer
Pod:
apiVersion: v1
kind: Pod
metadata:
  name: nginx
  labels:
    name: nginx
    app: nginx
spec:
  containers:
  - name: nginx
    image: nginx:1.23.2-alpine
    resources:
      limits:
        memory: "128Mi"
        cpu: "500m"
    ports:
    - containerPort: 80
My MetalLB inline config:
configInline:
  address-pools:
  - addresses:
    - 192.168.1.100-192.168.1.200
    autoAssign: true
    name: public-ips
    protocol: layer2
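One thing worth checking, hedged since the chart version is not shown: the service list further down includes a metallb-webhook-service, which suggests MetalLB v0.13 or later, and from v0.13 onward MetalLB ignores the configInline/ConfigMap format entirely in favor of CRDs — which would leave the EXTERNAL-IP stuck at <pending> exactly as shown. A sketch of the CRD equivalent of the pool above, assuming MetalLB is installed in the metallb namespace as in this cluster:

```yaml
# Sketch: CRD-based equivalent of the configInline pool above
# (required for MetalLB >= v0.13; namespace is an assumption).
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: public-ips
  namespace: metallb
spec:
  addresses:
  - 192.168.1.100-192.168.1.200
  autoAssign: true
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: public-ips-l2
  namespace: metallb
spec:
  ipAddressPools:
  - public-ips
```

After applying these, the controller should assign an address from the pool to pending LoadBalancer services.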
When I describe my nginx-service I get:
Name: nginx-service
Namespace: metal-test
Labels: <none>
Annotations: metallb.universe.tf/address-pool: public-ips
Selector: app=nginx
Type: LoadBalancer
IP Family Policy: SingleStack
IP Families: IPv4
IP: 10.43.127.38
IPs: 10.43.127.38
Port: <unset> 80/TCP
TargetPort: 80/TCP
NodePort: <unset> 30010/TCP
Endpoints: 10.42.212.79:80
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>
My service list
NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
calico-system calico-kube-controllers-metrics ClusterIP 10.43.208.243 <none> 9094/TCP 21h
calico-system calico-typha ClusterIP 10.43.230.52 <none> 5473/TCP 21h
cattle-system cattle-cluster-agent ClusterIP 10.43.198.73 <none> 80/TCP,443/TCP 21h
default kubernetes ClusterIP 10.43.0.1 <none> 443/TCP 21h
default nginx-service LoadBalancer 10.43.245.80 <pending> 8080:32146/TCP 66m
kube-system rke2-coredns-rke2-coredns ClusterIP 10.43.0.10 <none> 53/UDP,53/TCP 21h
kube-system rke2-metrics-server ClusterIP 10.43.84.128 <none> 443/TCP 21h
longhorn-system csi-attacher ClusterIP 10.43.49.124 <none> 12345/TCP 3h29m
longhorn-system csi-provisioner ClusterIP 10.43.236.132 <none> 12345/TCP 3h29m
longhorn-system csi-resizer ClusterIP 10.43.153.211 <none> 12345/TCP 3h29m
longhorn-system csi-snapshotter ClusterIP 10.43.182.109 <none> 12345/TCP 3h29m
longhorn-system longhorn-admission-webhook ClusterIP 10.43.49.242 <none> 9443/TCP 3h29m
longhorn-system longhorn-backend ClusterIP 10.43.71.124 <none> 9500/TCP 3h29m
longhorn-system longhorn-conversion-webhook ClusterIP 10.43.180.185 <none> 9443/TCP 3h29m
longhorn-system longhorn-engine-manager ClusterIP None <none> <none> 3h29m
longhorn-system longhorn-frontend ClusterIP 10.43.95.1 <none> 80/TCP 3h29m
longhorn-system longhorn-replica-manager ClusterIP None <none> <none> 3h29m
metallb metallb-webhook-service ClusterIP 10.43.211.242 <none> 443/TCP 178m
My MetalLB pods:
NAME READY STATUS RESTARTS AGE
metallb-controller-6776dbc97d-kmkf9 1/1 Running 1 (177m ago) 177m
metallb-speaker-jrnmf 1/1 Running 0
I used this tutorial to install MetalLB: http://xybernetics.com/techtalk/how-to-install-metallb-on-rancher-kubernetes-cluster/
I don't have any active firewall and I don't have an nginx ingress on my cluster. Any idea what I am doing wrong? I am doing this on my local network.

Unable to access nodeIP:port, serviceIP:port or podIP:port in minikube k8s

I am using k8s in minikube under Ubuntu and have deployed an nginx server, which I want to access at different levels, e.g. via the service IP, node IP, or pod IP — and none of them is reachable. Not sure why. I am running my curl commands from the Ubuntu host machine where the minikube node is installed. Below is the log:
/home/ravi/k8s>kgp
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
default nginx-deployment-775bf4d7fb-jqxxv 1/1 Running 0 13m 172.17.0.3 minikube <none> <none>
kube-system coredns-66bff467f8-gtsl7 1/1 Running 0 9h 172.17.0.2 minikube <none> <none>
kube-system etcd-minikube 1/1 Running 0 9h 192.168.49.2 minikube <none> <none>
kube-system kube-apiserver-minikube 1/1 Running 0 9h 192.168.49.2 minikube <none> <none>
kube-system kube-controller-manager-minikube 1/1 Running 0 9h 192.168.49.2 minikube <none> <none>
kube-system kube-proxy-nphlc 1/1 Running 0 7h28m 192.168.49.2 minikube <none> <none>
kube-system kube-scheduler-minikube 1/1 Running 0 9h 192.168.49.2 minikube <none> <none>
kube-system storage-provisioner 1/1 Running 21 9h 192.168.49.2 minikube <none> <none>
/home/ravi/k8s>kgs
NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
default kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 9h <none>
default nginx-service NodePort 10.101.107.62 <none> 80:31000/TCP 13m app=nginx-app
kube-system kube-dns ClusterIP 10.96.0.10 <none> 53/UDP,53/TCP,9153/TCP 9h k8s-app=kube-dns
/home/ravi/k8s>kubectl describe service nginx-service
Name: nginx-service
Namespace: default
Labels: <none>
Annotations: Selector: app=nginx-app
Type: NodePort
IP: 10.101.107.62
Port: <unset> 80/TCP
TargetPort: 8000/TCP
NodePort: <unset> 31000/TCP
Endpoints: 172.17.0.3:8000
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>
/home/ravi/k8s>curl 172.17.0.3:8000
curl: (7) Failed to connect to 172.17.0.3 port 8000: No route to host
/home/ravi/k8s>curl 192.168.1.52:31000
curl: (7) Failed to connect to 192.168.1.52 port 31000: Connection refused
/home/ravi/k8s>curl 10.101.107.62:80 ---> also hangs
......
......
/home/ravi/k8s>
/home/ravi/k8s>kubectl get node -o wide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
minikube Ready master 9h v1.18.20 192.168.49.2 <none> Ubuntu 20.04.1 LTS 5.13.0-40-generic docker://20.10.3
/home/ravi/k8s>curl 192.168.49.2:31000
curl: (7) Failed to connect to 192.168.49.2 port 31000: Connection refused
/home/ravi/k8s>
/home/ravi/k8s> kubectl logs nginx-deployment-775bf4d7fb-jqxxv ---> no log shown
/home/ravi/k8s>cat 2_nginx_nodeport.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx-app
  template:
    metadata:
      labels:
        app: nginx-app
    spec:
      containers:
      - name: nginx-container
        image: nginx:1.16
        ports:
        - containerPort: 8000
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  type: NodePort
  selector:
    app: nginx-app
  ports:
  - protocol: TCP
    nodePort: 31000
    port: 80
    targetPort: 8000
/home/ravi/k8s>
root@nginx-deployment-775bf4d7fb-jqxxv:~# curl 172.17.0.3:80 ---> works on port 80 instead of 8000 as set in the yaml
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
</body>
</html>
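The last curl above hints at the likely cause: the stock nginx:1.16 image listens on port 80, not 8000. containerPort is purely informational, but the Service's targetPort: 8000 sends traffic to a port nothing listens on, which would explain the hangs and refusals. A sketch of the corrected Service, keeping the names from the yaml above:

```yaml
# Sketch: point targetPort at 80, where the stock nginx image
# actually listens (8000 in the original yaml matches nothing).
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  type: NodePort
  selector:
    app: nginx-app
  ports:
  - protocol: TCP
    nodePort: 31000
    port: 80
    targetPort: 80   # was 8000
```

With this change, curl 192.168.49.2:31000 from the host should reach the pod (pod and cluster IPs like 172.17.0.3 are generally not routable from outside the minikube node regardless).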

How to access kubernetes microk8s dashboard remotely without Ingress?

I am new to Kubernetes and I am trying to deploy a MicroK8s cluster on 4 Raspberry Pis.
I have been struggling with setting up the dashboard for (no joke) a total of about 30 hours now and am starting to get extremely frustrated.
I just cannot access the dashboard remotely.
Solutions that didn't work out:
No. 1, Ingress:
I managed to enable ingress, but it seems to be extremely complicated to connect it to the dashboard, since I manually have to resolve DNS properties inside pods and host machines.
I eventually gave up on that. There is also no documentation whatsoever on how to set up an ingress without a valid purchased domain pointing at your ingress node.
If you are able to guide me through this, I am up for it.
No. 2, change the service type of the dashboard to LoadBalancer or NodePort:
With this method I can actually expose the dashboard... but it can only be accessed through https. Since the dashboard seems to use self-signed certificates or some other mechanism, I cannot access it via a browser. The browsers (Chrome, Firefox) always refuse to connect to the dashboard. When I try to access it via http, the browsers say I need to use https.
No. 3, kube-proxy:
This only allows localhost connections. You can pass arguments to kube-proxy to allow other hosts to access the dashboard... but then again we have the https/http problem.
At this point it is just amazing to me how extremely hard it is to access this simple dashboard. Can anybody give any advice on how to access it?
a#k8s-node-1:~/kubernetes$ kctl describe service kubernetes-dashboard -n kube-system
Name: kubernetes-dashboard
Namespace: kube-system
Labels: k8s-app=kubernetes-dashboard
Annotations: <none>
Selector: k8s-app=kubernetes-dashboard
Type: NodePort
IP Family Policy: SingleStack
IP Families: IPv4
IP: 10.152.183.249
IPs: 10.152.183.249
Port: <unset> 443/TCP
TargetPort: 8443/TCP
NodePort: <unset> 32228/TCP
Endpoints: 10.1.140.67:8443
Session Affinity: None
External Traffic Policy: Cluster
$ kubectl edit svc -n kube-system kubernetes-dashboard
# Please edit the object below. Lines beginning with a '#' will be ignored,
# and an empty file will abort the edit. If an error occurs while saving this file will be
# reopened with the relevant failures.
#
apiVersion: v1
kind: Service
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"labels":{"k8s-app":"kubernetes-dashboard"},"name":"kubernetes-dashboard","namespace":"kube-system"},"spec":{"ports":[{"port":443,"targetPort":8443}],"selector":{"k8s>
  creationTimestamp: "2022-03-21T14:30:10Z"
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
  resourceVersion: "43060"
  selfLink: /api/v1/namespaces/kube-system/services/kubernetes-dashboard
  uid: fcb45ccc-070b-4a4d-b987-41f5b7777559
spec:
  clusterIP: 10.152.183.249
  clusterIPs:
  - 10.152.183.249
  externalTrafficPolicy: Cluster
  internalTrafficPolicy: Cluster
  ipFamilies:
  - IPv4
  ipFamilyPolicy: SingleStack
  ports:
  - nodePort: 32228
    port: 443
    protocol: TCP
    targetPort: 8443
  selector:
    k8s-app: kubernetes-dashboard
  sessionAffinity: None
  type: NodePort
status:
  loadBalancer: {}
a#k8s-node-1:~/kubernetes$ kctl get services -n kube-system
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
metrics-server ClusterIP 10.152.183.233 <none> 443/TCP 165m
kube-dns ClusterIP 10.152.183.10 <none> 53/UDP,53/TCP,9153/TCP 142m
dashboard-metrics-scraper ClusterIP 10.152.183.202 <none> 8000/TCP 32m
kubernetes-dashboard NodePort 10.152.183.249 <none> 443:32228/TCP 32m
a#k8s-node-1:~/kubernetes$ cat dashboard-ingress.yaml
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
  name: dashboard
  namespace: kube-system
spec:
  rules:
  - host: nonexistent.net
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: kubernetes-dashboard
            port:
              number: 8080
a#k8s-node-1:~/kubernetes$ kctl get pods --all-namespaces -o wide
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
kube-system calico-node-c4shb 1/1 Running 0 3h23m 192.168.180.47 k8s-node-2 <none> <none>
ingress nginx-ingress-microk8s-controller-nvcvx 1/1 Running 0 3h12m 10.1.140.66 k8s-node-2 <none> <none>
kube-system calico-node-ptwmk 1/1 Running 0 3h23m 192.168.180.48 k8s-node-3 <none> <none>
ingress nginx-ingress-microk8s-controller-hksg7 1/1 Running 0 3h12m 10.1.55.131 k8s-node-4 <none> <none>
ingress nginx-ingress-microk8s-controller-tk9dj 1/1 Running 0 3h12m 10.1.76.129 k8s-node-3 <none> <none>
ingress nginx-ingress-microk8s-controller-c8t54 1/1 Running 0 3h12m 10.1.109.66 k8s-node-1 <none> <none>
kube-system calico-node-k65fz 1/1 Running 0 3h22m 192.168.180.52 k8s-node-4 <none> <none>
kube-system coredns-64c6478b6c-584s8 1/1 Running 0 177m 10.1.109.67 k8s-node-1 <none> <none>
kube-system calico-kube-controllers-6966456d6b-vvnm6 1/1 Running 0 3h24m 10.1.109.65 k8s-node-1 <none> <none>
kube-system calico-node-7jhz9 1/1 Running 0 3h33m 192.168.180.46 k8s-node-1 <none> <none>
kube-system metrics-server-647bdc584d-ldf8q 1/1 Running 1 (3h19m ago) 3h20m 10.1.55.129 k8s-node-4 <none> <none>
kube-system kubernetes-dashboard-585bdb5648-8s9xt 1/1 Running 0 67m 10.1.140.67 k8s-node-2 <none> <none>
kube-system dashboard-metrics-scraper-69d9497b54-x7vt9 1/1 Running 0 67m 10.1.55.132 k8s-node-4 <none> <none>
Using an ingress is indeed the preferred way, but since you seem to be having trouble in your environment, you can use a LoadBalancer service instead.
To avoid the problem with the automatically generated certificates, provide your own certificate and private key to the dashboard, for example as a secret, and use the flags --tls-key-file and --tls-cert-file to point at them. More details: https://github.com/kubernetes/dashboard/blob/master/docs/user/certificate-management.md
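As a rough sketch of what the linked doc describes — assuming the usual upstream manifest layout (a secret named kubernetes-dashboard-certs mounted at /certs; the file names are assumptions) — the dashboard pod spec would look something like:

```yaml
# Sketch: dashboard container pointed at a user-provided certificate.
# Secret name, mount path, and file names are assumptions based on the
# upstream manifests; adjust to your deployment.
containers:
- name: kubernetes-dashboard
  image: kubernetesui/dashboard:v2.5.0
  args:
  - --tls-cert-file=tls.crt   # resolved relative to --default-cert-dir (/certs)
  - --tls-key-file=tls.key
  volumeMounts:
  - name: kubernetes-dashboard-certs
    mountPath: /certs
volumes:
- name: kubernetes-dashboard-certs
  secret:
    secretName: kubernetes-dashboard-certs
```

With a certificate the browser trusts (or one you accept manually), the NodePort URL https://node-ip:32228 should then load.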

Helm expose prometheus dashboard

I installed Prometheus using Helm into a Kubernetes cluster (CentOS 8 VM) and want to access the dashboard from outside the cluster using the VM IP.
kubectl get svc -n monitoring
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
alertmanager-operated ClusterIP None <none> 9093/TCP,9094/TCP,9094/UDP 27m
prometheus-grafana ClusterIP 10.98.154.200 <none> 80/TCP 27m
prometheus-kube-state-metrics ClusterIP 10.109.183.131 <none> 8080/TCP 27m
prometheus-operated ClusterIP None <none> 9090/TCP 27m
prometheus-prometheus-node-exporter ClusterIP 10.101.171.235 <none> 30206/TCP 27m
prometheus-prometheus-oper-alertmanager ClusterIP 10.109.205.136 <none> 9093/TCP 27m
prometheus-prometheus-oper-operator ClusterIP 10.111.243.35 <none> 8080/TCP,443/TCP 27m
prometheus-prometheus-oper-prometheus ClusterIP 10.106.76.22 <none> 9090/TCP 27m
I need to expose the prometheus-prometheus-oper-prometheus service, which listens on port 9090, so that it is accessible from outside on port 30000 using NodePort:
http://Kubernetes_VM_IP:30000
So I created another service, but it fails. services.yaml:
apiVersion: v1
kind: Service
metadata:
  name: prometheus-service
  namespace: monitoring
  annotations:
    prometheus.io/scrape: 'true'
    prometheus.io/port: '9090'
spec:
  selector:
    app: prometheus-operator-prometheus
  type: NodePort
  ports:
  - port: 9090
    nodePort: 30000
    protocol: TCP
kubectl describe svc prometheus-prometheus-oper-prometheus -n monitoring
Name: prometheus-prometheus-oper-prometheus
Namespace: monitoring
Labels: app=prometheus-operator-prometheus
chart=prometheus-operator-8.12.2
heritage=Helm
release=prometheus
self-monitor=true
Annotations: <none>
Selector: app=prometheus,prometheus=prometheus-prometheus-oper-prometheus
Type: ClusterIP
IP: 10.106.76.22
Port: web 9090/TCP
TargetPort: 9090/TCP
Endpoints: 10.32.0.7:9090
Session Affinity: None
Events: <none>
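Comparing the two definitions above suggests a selector mismatch as the likely reason the hand-written service fails: the operator-managed service selects the pod labels app=prometheus,prometheus=prometheus-prometheus-oper-prometheus, while prometheus-service selects app: prometheus-operator-prometheus — a chart label no pod carries — so the new service ends up with no endpoints. A sketch reusing the pod-matching selector:

```yaml
# Sketch: same NodePort service, but with the selector copied from the
# operator-managed service so it actually matches the prometheus pod.
apiVersion: v1
kind: Service
metadata:
  name: prometheus-service
  namespace: monitoring
spec:
  type: NodePort
  selector:
    app: prometheus
    prometheus: prometheus-prometheus-oper-prometheus
  ports:
  - port: 9090
    targetPort: 9090
    nodePort: 30000
    protocol: TCP
```

Alternatively, the helm --set approach below avoids the extra service entirely.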
I recreated Prometheus and specified the NodePort during installation:
helm install prometheus stable/prometheus-operator --namespace monitoring --set prometheus.service.nodePort=30000 --set prometheus.service.type=NodePort
For those using a values.yaml file, this is the correct structure:
prometheus:
  service:
    nodePort: 30000
    type: NodePort