I am using Kubernetes in minikube on Ubuntu and have deployed an nginx server, which I want to access at different levels, e.g. via the service IP, node IP, or pod IP, but none of them is reachable and I am not sure why. I am running my curl commands against ip:port from the Ubuntu host machine where the minikube node is installed. Below is the log:
/home/ravi/k8s>kgp
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
default nginx-deployment-775bf4d7fb-jqxxv 1/1 Running 0 13m 172.17.0.3 minikube <none> <none>
kube-system coredns-66bff467f8-gtsl7 1/1 Running 0 9h 172.17.0.2 minikube <none> <none>
kube-system etcd-minikube 1/1 Running 0 9h 192.168.49.2 minikube <none> <none>
kube-system kube-apiserver-minikube 1/1 Running 0 9h 192.168.49.2 minikube <none> <none>
kube-system kube-controller-manager-minikube 1/1 Running 0 9h 192.168.49.2 minikube <none> <none>
kube-system kube-proxy-nphlc 1/1 Running 0 7h28m 192.168.49.2 minikube <none> <none>
kube-system kube-scheduler-minikube 1/1 Running 0 9h 192.168.49.2 minikube <none> <none>
kube-system storage-provisioner 1/1 Running 21 9h 192.168.49.2 minikube <none> <none>
/home/ravi/k8s>
/home/ravi/k8s>
/home/ravi/k8s>
/home/ravi/k8s>kgs
NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
default kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 9h <none>
default nginx-service NodePort 10.101.107.62 <none> 80:31000/TCP 13m app=nginx-app
kube-system kube-dns ClusterIP 10.96.0.10 <none> 53/UDP,53/TCP,9153/TCP 9h k8s-app=kube-dns
/home/ravi/k8s>
/home/ravi/k8s>
/home/ravi/k8s>kubectl describe service nginx-service
Name: nginx-service
Namespace: default
Labels: <none>
Annotations: Selector: app=nginx-app
Type: NodePort
IP: 10.101.107.62
Port: <unset> 80/TCP
TargetPort: 8000/TCP
NodePort: <unset> 31000/TCP
Endpoints: 172.17.0.3:8000
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>
/home/ravi/k8s>
/home/ravi/k8s>
/home/ravi/k8s>
/home/ravi/k8s>curl 172.17.0.3:8000
curl: (7) Failed to connect to 172.17.0.3 port 8000: No route to host
/home/ravi/k8s>
/home/ravi/k8s>
/home/ravi/k8s>curl 192.168.1.52:31000
curl: (7) Failed to connect to 192.168.1.52 port 31000: Connection refused
/home/ravi/k8s>
/home/ravi/k8s>
/home/ravi/k8s>curl 10.101.107.62:80 ---> also hangs
......
......
/home/ravi/k8s>
/home/ravi/k8s>kubectl get node -o wide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
minikube Ready master 9h v1.18.20 192.168.49.2 <none> Ubuntu 20.04.1 LTS 5.13.0-40-generic docker://20.10.3
/home/ravi/k8s>
/home/ravi/k8s>
/home/ravi/k8s>curl 192.168.49.2:31000
curl: (7) Failed to connect to 192.168.49.2 port 31000: Connection refused
/home/ravi/k8s>
/home/ravi/k8s> kubectl logs nginx-deployment-775bf4d7fb-jqxxv ---> no log shown
/home/ravi/k8s>cat 2_nginx_nodeport.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx-app
  template:
    metadata:
      labels:
        app: nginx-app
    spec:
      containers:
      - name: nginx-container
        image: nginx:1.16
        ports:
        - containerPort: 8000
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  type: NodePort
  selector:
    app: nginx-app
  ports:
    - protocol: TCP
      nodePort: 31000
      port: 80
      targetPort: 8000
/home/ravi/k8s>
root@nginx-deployment-775bf4d7fb-jqxxv:~# curl 172.17.0.3:80   ---> works on port 80 instead of 8000 as set in the YAML
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
</body>
</html>
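So inside the pod, nginx answers on port 80 (the nginx default) while the Service forwards to targetPort 8000, where nothing is listening, which is consistent with the failed curls above. A minimal sketch of the Service with the target port pointed at 80 instead (the containerPort: 8000 in the Deployment is only informational, but can be updated to 80 as well):
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  type: NodePort
  selector:
    app: nginx-app
  ports:
    - protocol: TCP
      nodePort: 31000
      port: 80
      targetPort: 80   # nginx:1.16 serves on 80, not 8000
After applying this, curl 192.168.49.2:31000 and curl 10.101.107.62:80 should reach nginx.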
I am new to Kubernetes and I am trying to deploy a MicroK8s cluster on 4 Raspberry Pis.
I have been struggling with setting up the dashboard for (no joke) a total of about 30 hours now and I am starting to get extremely frustrated.
I just cannot access the dashboard remotely.
Solutions that didn't work out:
No.1 Ingress:
I managed to enable ingress, but it seems to be extremely complicated to connect it to the dashboard, since I manually have to resolve DNS names inside pods and on the host machines.
I eventually gave up on that. There is also no documentation whatsoever on how to set up an ingress without a valid, purchased domain pointing at your ingress node.
If you are able to guide me through this, I am up for it.
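For what it's worth, the Ingress host does not have to be a purchased domain; any name the client machine can resolve works, e.g. via an /etc/hosts entry pointing at a node that runs the ingress controller (dashboard.example is a made-up name here; 192.168.180.46 is k8s-node-1's address from the pod listing further down):
# /etc/hosts on the machine you browse from
192.168.180.46  dashboard.example
The Ingress rule's host: field would then use that same name.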
No.2 Change service type of dashboard to LoadBalancer or NodePort:
With this method I can actually expose the dashboard... but it can only be accessed through HTTPS. Since the dashboard seems to use self-signed certificates or some other mechanism, I cannot access it via a browser; the browsers (Chrome, Firefox) always refuse to connect to it. When I try to access it via HTTP, the browsers say I need to use HTTPS.
No.3 kube-proxy:
This only allows localhost connections. You can pass arguments to kube proxy to allow other hosts to access the dashboard... but then again we have the HTTPS/HTTP problem.
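Assuming this refers to kubectl proxy, the arguments in question look roughly like this (note it exposes the API server proxy to the whole network, and the HTTPS/HTTP caveat above still applies):
kubectl proxy --address='0.0.0.0' --accept-hosts='^.*$' --port=8001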
At this point it is just amazing to me how extremely hard it is to access this simple dashboard... Can anybody give any advice on how to do it?
a@k8s-node-1:~/kubernetes$ kctl describe service kubernetes-dashboard -n kube-system
Name: kubernetes-dashboard
Namespace: kube-system
Labels: k8s-app=kubernetes-dashboard
Annotations: <none>
Selector: k8s-app=kubernetes-dashboard
Type: NodePort
IP Family Policy: SingleStack
IP Families: IPv4
IP: 10.152.183.249
IPs: 10.152.183.249
Port: <unset> 443/TCP
TargetPort: 8443/TCP
NodePort: <unset> 32228/TCP
Endpoints: 10.1.140.67:8443
Session Affinity: None
External Traffic Policy: Cluster
$ kubectl edit svc -n kube-system kubernetes-dashboard
# Please edit the object below. Lines beginning with a '#' will be ignored,
# and an empty file will abort the edit. If an error occurs while saving this file will be
# reopened with the relevant failures.
#
apiVersion: v1
kind: Service
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"labels":{"k8s-app":"kubernetes-dashboard"},"name":"kubernetes-dashboard","namespace":"kube-system"},"spec":{"ports":[{"port":443,"targetPort":8443}],"selector":{"k8s>
  creationTimestamp: "2022-03-21T14:30:10Z"
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
  resourceVersion: "43060"
  selfLink: /api/v1/namespaces/kube-system/services/kubernetes-dashboard
  uid: fcb45ccc-070b-4a4d-b987-41f5b7777559
spec:
  clusterIP: 10.152.183.249
  clusterIPs:
  - 10.152.183.249
  externalTrafficPolicy: Cluster
  internalTrafficPolicy: Cluster
  ipFamilies:
  - IPv4
  ipFamilyPolicy: SingleStack
  ports:
  - nodePort: 32228
    port: 443
    protocol: TCP
    targetPort: 8443
  selector:
    k8s-app: kubernetes-dashboard
  sessionAffinity: None
  type: NodePort
status:
  loadBalancer: {}
a@k8s-node-1:~/kubernetes$ kctl get services -n kube-system
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
metrics-server ClusterIP 10.152.183.233 <none> 443/TCP 165m
kube-dns ClusterIP 10.152.183.10 <none> 53/UDP,53/TCP,9153/TCP 142m
dashboard-metrics-scraper ClusterIP 10.152.183.202 <none> 8000/TCP 32m
kubernetes-dashboard NodePort 10.152.183.249 <none> 443:32228/TCP 32m
a@k8s-node-1:~/kubernetes$ cat dashboard-ingress.yaml
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
  name: dashboard
  namespace: kube-system
spec:
  rules:
  - host: nonexistent.net
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: kubernetes-dashboard
            port:
              number: 8080
a@k8s-node-1:~/kubernetes$ kctl get pods --all-namespaces -o wide
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
kube-system calico-node-c4shb 1/1 Running 0 3h23m 192.168.180.47 k8s-node-2 <none> <none>
ingress nginx-ingress-microk8s-controller-nvcvx 1/1 Running 0 3h12m 10.1.140.66 k8s-node-2 <none> <none>
kube-system calico-node-ptwmk 1/1 Running 0 3h23m 192.168.180.48 k8s-node-3 <none> <none>
ingress nginx-ingress-microk8s-controller-hksg7 1/1 Running 0 3h12m 10.1.55.131 k8s-node-4 <none> <none>
ingress nginx-ingress-microk8s-controller-tk9dj 1/1 Running 0 3h12m 10.1.76.129 k8s-node-3 <none> <none>
ingress nginx-ingress-microk8s-controller-c8t54 1/1 Running 0 3h12m 10.1.109.66 k8s-node-1 <none> <none>
kube-system calico-node-k65fz 1/1 Running 0 3h22m 192.168.180.52 k8s-node-4 <none> <none>
kube-system coredns-64c6478b6c-584s8 1/1 Running 0 177m 10.1.109.67 k8s-node-1 <none> <none>
kube-system calico-kube-controllers-6966456d6b-vvnm6 1/1 Running 0 3h24m 10.1.109.65 k8s-node-1 <none> <none>
kube-system calico-node-7jhz9 1/1 Running 0 3h33m 192.168.180.46 k8s-node-1 <none> <none>
kube-system metrics-server-647bdc584d-ldf8q 1/1 Running 1 (3h19m ago) 3h20m 10.1.55.129 k8s-node-4 <none> <none>
kube-system kubernetes-dashboard-585bdb5648-8s9xt 1/1 Running 0 67m 10.1.140.67 k8s-node-2 <none> <none>
kube-system dashboard-metrics-scraper-69d9497b54-x7vt9 1/1 Running 0 67m 10.1.55.132 k8s-node-4 <none> <none>
Using an ingress is indeed the preferred way, but since that seems to be giving you trouble in your environment, you can certainly use a LoadBalancer service.
To avoid the problem with the automatically generated certificates, provide your certificate and private key to the dashboard, for example as a secret, and use the flags --tls-key-file and --tls-cert-file to point to the certificate. More details: https://github.com/kubernetes/dashboard/blob/master/docs/user/certificate-management.md
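A rough sketch of that approach, assuming the standard dashboard layout where certificates are mounted from the kubernetes-dashboard-certs secret (names and paths can differ per version, so check the linked doc):
# put your own cert/key into the secret the dashboard mounts
kubectl create secret generic kubernetes-dashboard-certs \
  --from-file=dashboard.crt --from-file=dashboard.key \
  -n kube-system --dry-run=client -o yaml | kubectl apply -f -
# then add the flags to the dashboard container, e.g. with
# kubectl edit deployment kubernetes-dashboard -n kube-system:
#   args:
#     - --tls-cert-file=dashboard.crt
#     - --tls-key-file=dashboard.key
With a certificate the browser trusts, the HTTPS warning on the NodePort or LoadBalancer URL should go away.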
I installed Prometheus using Helm into a Kubernetes cluster (a CentOS 8 VM) and want to access the dashboard from outside the cluster using the VM IP.
kubectl get svc -n monitoring
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
alertmanager-operated ClusterIP None <none> 9093/TCP,9094/TCP,9094/UDP 27m
prometheus-grafana ClusterIP 10.98.154.200 <none> 80/TCP 27m
prometheus-kube-state-metrics ClusterIP 10.109.183.131 <none> 8080/TCP 27m
prometheus-operated ClusterIP None <none> 9090/TCP 27m
prometheus-prometheus-node-exporter ClusterIP 10.101.171.235 <none> 30206/TCP 27m
prometheus-prometheus-oper-alertmanager ClusterIP 10.109.205.136 <none> 9093/TCP 27m
prometheus-prometheus-oper-operator ClusterIP 10.111.243.35 <none> 8080/TCP,443/TCP 27m
prometheus-prometheus-oper-prometheus ClusterIP 10.106.76.22 <none> 9090/TCP 27m
I need to expose the prometheus-prometheus-oper-prometheus service, which listens on port 9090, so that it is accessible from outside on port 30000 via a NodePort:
http://Kubernetes_VM_IP:30000
So I created another service (services.yaml), but it fails:
apiVersion: v1
kind: Service
metadata:
  name: prometheus-service
  namespace: monitoring
  annotations:
    prometheus.io/scrape: 'true'
    prometheus.io/port: '9090'
spec:
  selector:
    app: prometheus-operator-prometheus
  type: NodePort
  ports:
    - port: 9090
      nodePort: 30000
      protocol: TCP
kubectl describe svc prometheus-prometheus-oper-prometheus -n monitoring
Name: prometheus-prometheus-oper-prometheus
Namespace: monitoring
Labels: app=prometheus-operator-prometheus
chart=prometheus-operator-8.12.2
heritage=Helm
release=prometheus
self-monitor=true
Annotations: <none>
Selector: app=prometheus,prometheus=prometheus-prometheus-oper-prometheus
Type: ClusterIP
IP: 10.106.76.22
Port: web 9090/TCP
TargetPort: 9090/TCP
Endpoints: 10.32.0.7:9090
Session Affinity: None
Events: <none>
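For what it's worth, the describe output above also shows why the hand-written service probably finds no endpoints: its selector (app: prometheus-operator-prometheus) does not match the selector used by the operator-generated service (app=prometheus,prometheus=prometheus-prometheus-oper-prometheus). A sketch of the NodePort service with that selector copied over:
apiVersion: v1
kind: Service
metadata:
  name: prometheus-service
  namespace: monitoring
spec:
  type: NodePort
  selector:
    app: prometheus
    prometheus: prometheus-prometheus-oper-prometheus
  ports:
    - port: 9090
      targetPort: 9090
      nodePort: 30000
      protocol: TCP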
Recreated prometheus and specified nodeport during installation:
helm install prometheus stable/prometheus-operator --namespace monitoring --set prometheus.service.nodePort=30000 --set prometheus.service.type=NodePort
For those using a values.yaml file, this is the correct structure:
prometheus:
  service:
    nodePort: 30000
    type: NodePort
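A usage sketch for that values file, assuming the same chart and release name as the command above:
helm upgrade --install prometheus stable/prometheus-operator \
  --namespace monitoring -f values.yaml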
I'm configuring Istio using Helm. Here you can find my istio-config.yaml:
global:
  proxy:
    accessLogFile: "/dev/stdout"
    resources:
      requests:
        cpu: 10m
        memory: 40Mi
  disablePolicyChecks: false
sidecarInjectorWebhook:
  enabled: true
  rewriteAppHTTPProbe: false
pilot:
  autoscaleEnabled: false
  traceSampling: 100.0
  resources:
    requests:
      cpu: 10m
      memory: 100Mi
mixer:
  policy:
    enabled: true
    autoscaleEnabled: false
    resources:
      requests:
        cpu: 10m
        memory: 100Mi
  telemetry:
    enabled: true
    autoscaleEnabled: false
    resources:
      requests:
        cpu: 50m
        memory: 100Mi
  adapters:
    stdio:
      enabled: true
grafana:
  enabled: true
tracing:
  enabled: true
kiali:
  enabled: true
  createDemoSecret: true
gateways:
  istio-ingressgateway:
    autoscaleEnabled: false
    resources:
      requests:
        cpu: 10m
        memory: 40Mi
  istio-egressgateway:
    enabled: true
    autoscaleEnabled: false
    resources:
      requests:
        cpu: 10m
        memory: 40Mi
global:
  controlPlaneSecurityEnabled: false
  mtls:
    enabled: false
Then I deployed a bunch of microservices using istioctl; all of them are simple REST services over HTTP. They can communicate with each other without any issue, and if I expose them with NodePorts I can reach and talk to them correctly.
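For context, "deployed using istioctl" presumably means manual sidecar injection, roughly like this (deployment.yaml standing in for the actual microservice manifests):
istioctl kube-inject -f deployment.yaml | kubectl apply -f -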
Here are my Services:
$ kubectl get svc --all-namespaces
NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
default activemq ClusterIP None <none> 61616/TCP 3h17m
default activemq-np NodePort 10.110.76.147 <none> 8161:30061/TCP 3h17m
default api-exchange ClusterIP None <none> 8080/TCP 3h16m
default api-response ClusterIP None <none> 8080/TCP 3h16m
default authorization-server ClusterIP None <none> 8080/TCP 3h17m
default de-communication ClusterIP None <none> 8080/TCP 3h16m
default gateway ClusterIP None <none> 8080/TCP 3h17m
default gateway-np NodePort 10.96.123.57 <none> 8080:30080/TCP 3h17m
default identity ClusterIP None <none> 88/TCP,8080/TCP 3h18m
default kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 3h19m
default matchengine ClusterIP None <none> 8080/TCP 3h16m
default monitor-redis ClusterIP None <none> 8081/TCP 3h17m
default monitor-redis-np NodePort 10.106.178.13 <none> 8081:30082/TCP 3h17m
default postgres ClusterIP None <none> 5432/TCP 3h18m
default postgres-np NodePort 10.106.223.216 <none> 5432:30032/TCP 3h18m
default redis ClusterIP None <none> 6379/TCP 3h18m
default redis-np NodePort 10.101.167.194 <none> 6379:30079/TCP 3h18m
default synchronization ClusterIP None <none> 8080/TCP 3h15m
default tx-flow ClusterIP None <none> 8080/TCP 3h15m
default tx-manager ClusterIP None <none> 8080/TCP 3h15m
default tx-scheduler ClusterIP None <none> 8080/TCP 3h15m
default ubc-config ClusterIP None <none> 8080/TCP 3h16m
default ubc-services-config ClusterIP None <none> 8888/TCP 3h18m
default ubc-services-config-np NodePort 10.110.11.213 <none> 8888:30088/TCP 3h18m
default user-admin ClusterIP None <none> 8080/TCP 3h17m
default web-exchange-np NodePort 10.105.244.194 <none> 80:30081/TCP 3h15m
istio-system grafana ClusterIP 10.97.134.230 <none> 3000/TCP 3h22m
istio-system istio-citadel ClusterIP 10.99.159.56 <none> 8060/TCP,15014/TCP 3h22m
istio-system istio-egressgateway ClusterIP 10.97.71.204 <none> 80/TCP,443/TCP,15443/TCP 3h22m
istio-system istio-galley ClusterIP 10.98.111.27 <none> 443/TCP,15014/TCP,9901/TCP 3h22m
istio-system istio-ingressgateway LoadBalancer 10.96.182.202 <pending> 15020:30936/TCP,80:31380/TCP,443:31390/TCP,31400:31400/TCP,15029:31913/TCP,15030:30606/TCP,15031:32127/TCP,15032:30362/TCP,15443:31416/TCP 3h22m
istio-system istio-pilot ClusterIP 10.101.117.169 <none> 15010/TCP,15011/TCP,8080/TCP,15014/TCP 3h22m
istio-system istio-policy ClusterIP 10.97.247.54 <none> 9091/TCP,15004/TCP,15014/TCP 3h22m
istio-system istio-sidecar-injector ClusterIP 10.101.219.141 <none> 443/TCP 3h22m
istio-system istio-telemetry ClusterIP 10.109.108.78 <none> 9091/TCP,15004/TCP,15014/TCP,42422/TCP 3h22m
istio-system jaeger-agent ClusterIP None <none> 5775/UDP,6831/UDP,6832/UDP 3h22m
istio-system jaeger-collector ClusterIP 10.97.255.231 <none> 14267/TCP,14268/TCP 3h22m
istio-system jaeger-query ClusterIP 10.104.80.162 <none> 16686/TCP 3h22m
istio-system kiali ClusterIP 10.104.41.71 <none> 20001/TCP 3h22m
istio-system kiali-np NodePort 10.100.99.141 <none> 20001:30085/TCP 29h
istio-system prometheus ClusterIP 10.110.46.60 <none> 9090/TCP 3h22m
istio-system tracing ClusterIP 10.111.173.205 <none> 80/TCP 3h22m
istio-system zipkin ClusterIP 10.101.144.199 <none> 9411/TCP 3h22m
kube-system kube-dns ClusterIP 10.96.0.10 <none> 53/UDP,53/TCP,9153/TCP 54d
kube-system tiller-deploy ClusterIP 10.105.162.195 <none> 44134/TCP 24d
I created an ingress gateway and one VirtualService to route calls from outside the cluster. Here are my gateway and Virtual Services configurations:
Gateway:
$ kubectl describe gateway iris-gateway
Name: iris-gateway
Namespace: default
Labels: <none>
Annotations: kubectl.kubernetes.io/last-applied-configuration:
{"apiVersion":"networking.istio.io/v1alpha3","kind":"Gateway","metadata":{"annotations":{},"name":"iris-gateway","namespace":"default"},"s...
API Version: networking.istio.io/v1alpha3
Kind: Gateway
Metadata:
Creation Timestamp: 2019-08-23T17:25:20Z
Generation: 1
Resource Version: 7093263
Self Link: /apis/networking.istio.io/v1alpha3/namespaces/default/gateways/iris-gateway
UID: 4c4fac7d-a698-4c9c-97e6-ebc7416c96a8
Spec:
Selector:
Istio: ingressgateway
Servers:
Hosts:
*
Port:
Name: http
Number: 80
Protocol: HTTP
Events: <none>
Virtual Services:
$ kubectl describe virtualservice apiexg
Name: apiexg
Namespace: default
Labels: <none>
Annotations: kubectl.kubernetes.io/last-applied-configuration:
{"apiVersion":"networking.istio.io/v1alpha3","kind":"VirtualService","metadata":{"annotations":{},"name":"apiexg","namespace":"default"},"...
API Version: networking.istio.io/v1alpha3
Kind: VirtualService
Metadata:
Creation Timestamp: 2019-08-23T19:26:16Z
Generation: 1
Resource Version: 7107510
Self Link: /apis/networking.istio.io/v1alpha3/namespaces/default/virtualservices/apiexg
UID: 861bca0d-be98-4bfb-bf92-b2bd2f1b703f
Spec:
Gateways:
iris-gateway
Hosts:
*
Http:
Match:
Uri:
Prefix: /api-exchange
Route:
Destination:
Host: api-exchange.default.svc.cluster.local
Port:
Number: 8080
Events: <none>
When I make a call to the service, I always get a 503 Service Unavailable:
curl -X POST http://172.30.7.129:31380/api-exchange/ -vvv
* About to connect() to 172.30.7.129 port 31380 (#0)
* Trying 172.30.7.129...
* Connected to 172.30.7.129 (172.30.7.129) port 31380 (#0)
> POST /api-exchange/ HTTP/1.1
> User-Agent: curl/7.29.0
> Host: 172.30.7.129:31380
> Accept: */*
>
< HTTP/1.1 503 Service Unavailable
< content-length: 19
< content-type: text/plain
< date: Fri, 23 Aug 2019 21:49:33 GMT
< server: istio-envoy
<
* Connection #0 to host 172.30.7.129 left intact
no healthy upstream
Here is the log output for the istio-ingressgateway pod:
[2019-08-23 21:49:34.185][38][warning][upstream] [external/envoy/source/common/upstream/original_dst_cluster.cc:110] original_dst_load_balancer: No downstream connection or no original_dst.
Versions:
$ istioctl version --remote
client version: 1.2.4
citadel version: 1.2.4
egressgateway version: 94746ccd404a8e056483dd02e4e478097b950da6-dirty
galley version: 1.2.4
ingressgateway version: 94746ccd404a8e056483dd02e4e478097b950da6-dirty
pilot version: 1.2.4
policy version: 1.2.4
sidecar-injector version: 1.2.4
telemetry version: 1.2.4
$ kubectl version
Client Version: version.Info{Major:"1", Minor:"15", GitVersion:"v1.15.1", GitCommit:"4485c6f18cee9a5d3c3b4e523bd27972b1b53892", GitTreeState:"clean", BuildDate:"2019-07-18T09:18:22Z", GoVersion:"go1.12.5", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"15", GitVersion:"v1.15.0", GitCommit:"e8462b5b5dc2584fdcd18e6bcfe9f1e4d970a529", GitTreeState:"clean", BuildDate:"2019-06-19T16:32:14Z", GoVersion:"go1.12.5", Compiler:"gc", Platform:"linux/amd64"}
Istio installation:
$ helm install /opt/istio-1.2.4/install/kubernetes/helm/istio-init --name istio-init --namespace istio-system
$ helm install /opt/istio-1.2.4/install/kubernetes/helm/istio --name istio --namespace istio-system --values istio-config/istio-config.yaml
Environment:
I did the same configuration on an Oracle Virtual Appliance virtual server with RHEL 7 and on a cluster of 3 physical servers with RHEL 7.
I solved this problem: the istio ingress gateway was not able to route the calls because my services were headless, i.e. they had clusterIP: None assigned:
$ kubectl get svc --all-namespaces
NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
default activemq ClusterIP None <none> 61616/TCP 3h17m
default api-exchange ClusterIP None <none> 8080/TCP 3h16m
default api-response ClusterIP None <none> 8080/TCP 3h16m
default authorization-server ClusterIP None <none> 8080/TCP 3h17m
default de-communication ClusterIP None <none> 8080/TCP 3h16m
default gateway ClusterIP None <none> 8080/TCP 3h17m
default identity ClusterIP None <none> 88/TCP,8080/TCP 3h18m
default kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 3h19m
default matchengine ClusterIP None <none> 8080/TCP 3h16m
default monitor-redis ClusterIP None <none> 8081/TCP 3h17m
default postgres ClusterIP None <none> 5432/TCP 3h18m
default redis ClusterIP None <none> 6379/TCP 3h18m
default synchronization ClusterIP None <none> 8080/TCP 3h15m
default tx-flow ClusterIP None <none> 8080/TCP 3h15m
default tx-manager ClusterIP None <none> 8080/TCP 3h15m
default tx-scheduler ClusterIP None <none> 8080/TCP 3h15m
default ubc-config ClusterIP None <none> 8080/TCP 3h16m
default ubc-services-config ClusterIP None <none> 8888/TCP 3h18m
default user-admin ClusterIP None <none> 8080/TCP 3h17m
Here is one of my YAMLs with clusterIP: None:
apiVersion: v1
kind: Service
metadata:
  name: ubc-config
  labels:
    app: ubc-config
spec:
  clusterIP: None
  ports:
  - port: 8080
    name: ubc-config
  selector:
    app: ubc-config
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ubc-config
spec:
  selector:
    matchLabels:
      app: ubc-config
  replicas: 1
  template:
    metadata:
      labels:
        app: ubc-config
    spec:
      containers:
      - name: ubc-config
        image: ubc-config
        ports:
        - containerPort: 8080
As you can see, Service.spec.clusterIP is set to None. To solve the problem I only had to change my YAML configuration to:
apiVersion: v1
kind: Service
metadata:
  name: ubc-config
  labels:
    app: ubc-config
spec:
  ports:
  - port: 8080
    name: http-ubcconfig
  selector:
    app: ubc-config
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ubc-config
spec:
  selector:
    matchLabels:
      app: ubc-config
  replicas: 1
  template:
    metadata:
      labels:
        app: ubc-config
    spec:
      containers:
      - name: ubc-config
        image: ubc-config
        ports:
        - containerPort: 8080
          name: http-ubcconfig
I hope this helps someone.
I cannot access the IP that MetalLB in BGP mode gave to me.
I used the virtual routers from the doc https://metallb.universe.tf/tutorial/minikube/, but I do not use minikube.
What should I do?
I can access the IP from inside the Kubernetes cluster, but from outside the cluster it is not reachable.
#kubectl get svc --all-namespaces
NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
default frontend NodePort 10.106.156.219 <none> 80:30001/TCP 4h40m
default kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 25h
default mssql-deployment NodePort 10.98.127.234 <none> 1433:31128/TCP 25h
default nginx LoadBalancer 10.111.123.22 192.168.1.1 80:32715/TCP 4h49m
default nginx1 LoadBalancer 10.110.53.202 192.168.1.2 80:31881/TCP 79m
default nginx3 LoadBalancer 10.105.73.252 192.168.1.3 80:30515/TCP 77m
default nginx4 LoadBalancer 10.102.143.239 192.168.1.4 80:32467/TCP 63m
default nginx5 LoadBalancer 10.101.167.161 192.168.1.5 80:30752/TCP 62m
default nginx6 LoadBalancer 10.107.121.116 192.168.1.6 80:31832/TCP 61m
default php-apache ClusterIP 10.101.18.143 <none> 80/TCP 6h12m
default redis-master NodePort 10.98.3.18 <none> 6379:31446/TCP 4h40m
default redis-slave ClusterIP 10.110.29.67 <none> 6379/TCP 4h40m
kube-system kube-dns ClusterIP 10.96.0.10 <none> 53/UDP,53/TCP 25h
kube-system metrics-server ClusterIP 10.101.39.125 <none> 443/TCP 6h15m
metallb-system test-bgp-router-bird ClusterIP 10.96.0.100 <none> 179/TCP 81m
metallb-system test-bgp-router-quagga ClusterIP 10.96.0.101 <none> 179/TCP 81m
metallb-system test-bgp-router-ui NodePort 10.104.151.219 <none> 80:31983/TCP 81m
Every nginx IP is reachable from inside the Kubernetes cluster, but I cannot access any of them from outside.