Ingress failing to get an address - Kubernetes

I have a Node.js server which runs in a container inside a pod. The pod is up and running and working fine.
Then I have a NodePort service which is used to serve the pod. The service is curl-able from inside the container and returns a response successfully.
apiVersion: v1
kind: Service
metadata:
  annotations:
    meta.helm.sh/release-name: server
    meta.helm.sh/release-namespace: default
  creationTimestamp: "2022-12-20T23:46:09Z"
  labels:
    app: server
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: ingress-nginx
    chart: server-0.1.0
    helm.sh/chart: nginx-13.2.19
    heritage: Helm
    release: server
  name: server
  namespace: default
  resourceVersion: "86325"
  uid: 9eea1b02-af20-4e79-b713-6be2683cfcf7
spec:
  clusterIP: 10.100.138.182
  clusterIPs:
  - 10.100.138.182
  externalTrafficPolicy: Cluster
  internalTrafficPolicy: Cluster
  ipFamilies:
  - IPv4
  ipFamilyPolicy: SingleStack
  ports:
  - name: http
    port: 80
    nodePort: 30001
    protocol: TCP
    targetPort: 3001
  selector:
    app: server
    release: server
  sessionAffinity: None
  type: NodePort
status:
  loadBalancer: {}
Then I have an NGINX Ingress Controller running and an Ingress resource.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    ingress.kubernetes.io/rewrite-target: /
    kubernetes.io/ingress.class: nginx
    meta.helm.sh/release-name: server
    meta.helm.sh/release-namespace: default
  creationTimestamp: "2022-12-20T23:46:09Z"
  generation: 2
  labels:
    app.kubernetes.io/instance: server
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: server
    helm.sh/chart: server-0.1.0
  name: server
  namespace: default
  resourceVersion: "87296"
  uid: e4a597c3-3d85-402d-bf56-19858c22e7bf
spec:
  defaultBackend:
    service:
      name: server
      port:
        number: 80
  ingressClassName: nginx
status:
  loadBalancer: {}
After all this, when I do kubectl get pods:
NAME READY STATUS RESTARTS AGE
nginxserver-876bf6f87-ncnfl 1/1 Running 0 5h23m
server-7466b8f5d8-jmm57 1/1 Running 0 12m
kubectl get service:
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.100.0.1 <none> 443/TCP 9h
nginxserver LoadBalancer 10.100.129.213 aa486e3c823d54f4fbe8da864b55d394-674371728.us-east-2.elb.amazonaws.com 80:31573/TCP 5h23m
server NodePort 10.100.138.182 <none> 80:30001/TCP 13m
kubectl get ingress:
NAME CLASS HOSTS ADDRESS PORTS AGE
server nginx * 80 14m
This currently shows an age of 14m, but previously I had also waited 30m and no address ever appeared.
I have searched a lot and found that this is usually associated with wrong ingress settings or wrong namespaces, but none of that helped me.
What can I do so that the ingress gets an address?
Thank you!

I gave up. As a last resort I thought maybe my ingress controller was not working properly. So I ran
helm repo add nginx-stable https://helm.nginx.com/stable
helm repo update
helm install ingress-nginx nginx-stable/nginx-ingress --set rbac.create=true
after deleting the previous one, and my ingress got an address within seconds.
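For anyone hitting the same wall, a simple way to watch for the address showing up (plain kubectl, nothing controller-specific; the ingress here is named server in the default namespace) is:
kubectl get ingress server -n default -w
# the ADDRESS column stays empty until a running ingress controller publishes its IP/hostname on the Ingress status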

Related

Bind Kubernetes ingress-nginx to specific address/interface only

I'm running a single-node k3s installation. The machine it's running on has two NICs installed with two IP addresses assigned. I want the ingress to bind to only one of these, but no matter what I try, nginx always binds to both/all interfaces.
I'm using the official Helm chart for ingress-nginx and modified the following values:
clusterIP: ""
...
externalIPs:
- 192.168.1.200
...
externalTrafficPolicy: "Local"
...
The following doesn't look that bad to me, but nginx is still listening on the other interface (192.168.1.123) too...
❯ k get service -n ingress-nginx
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
ingress-nginx-controller LoadBalancer 10.43.30.231 192.168.1.200,192.168.1.200 80:30047/TCP,443:30815/TCP 9m14s
This is the service as generated by Helm:
kind: Service
apiVersion: v1
metadata:
  name: ingress-nginx-controller
  namespace: ingress-nginx
  ...
  labels:
    app.kubernetes.io/component: controller
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/version: 0.45.0
    helm.sh/chart: ingress-nginx-3.29.0
  annotations:
    meta.helm.sh/release-name: ingress-nginx
    meta.helm.sh/release-namespace: ingress-nginx
  managedFields:
    ...
spec:
  ports:
  - name: http
    protocol: TCP
    port: 80
    targetPort: http
    nodePort: 30047
  - name: https
    protocol: TCP
    port: 443
    targetPort: https
    nodePort: 30815
  selector:
    app.kubernetes.io/component: controller
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/name: ingress-nginx
  clusterIP: 10.43.30.231
  clusterIPs:
  - 10.43.30.231
  type: LoadBalancer
  externalIPs:
  - 192.168.1.200
  sessionAffinity: None
  externalTrafficPolicy: Local
  healthCheckNodePort: 32248
status:
  loadBalancer:
    ingress:
    - ip: 192.168.1.200
You'll want to set the bind-address in the configmap settings.
Sets the addresses on which the server will accept requests instead of *. It should be noted that these addresses must exist in the runtime environment or the controller will crash loop.
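A minimal sketch of that change, assuming the controller was installed from the official ingress-nginx chart and therefore reads the ingress-nginx-controller ConfigMap in the ingress-nginx namespace (adjust the name/namespace to your release):
apiVersion: v1
kind: ConfigMap
metadata:
  name: ingress-nginx-controller   # assumed default name from the chart
  namespace: ingress-nginx
data:
  bind-address: "192.168.1.200"    # must be an address that exists on the node, or the controller will crash loop
With the Helm chart, the same key can usually be supplied through the controller.config values map instead of editing the ConfigMap by hand.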

Kubernetes Exposing An Application - AWX Operator

Hope you are all well.
I am currently trying to roll out the awx-operator onto a Kubernetes cluster and I am running into a few issues reaching the service from outside of the cluster.
Currently I have the following services set up:
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
awx NodePort 10.102.30.6 <none> 8080:32155/TCP 110m
awx-operator NodePort 10.110.147.152 <none> 80:31867/TCP 125m
awx-operator-metrics ClusterIP 10.105.190.155 <none> 8383/TCP,8686/TCP 3h17m
awx-postgres ClusterIP None <none> 5432/TCP 3h16m
awx-service ClusterIP 10.102.86.14 <none> 80/TCP 121m
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 17h
I set up a NodePort service called awx-operator. I then attempted to create an ingress to the application, which you can see below:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: awx-ingress
spec:
  rules:
  - host: awx.mycompany.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: awx
            port:
              number: 80
When I create the ingress, and then run kubectl describe ingress, I get the following output:
Name: awx-ingress
Namespace: default
Address:
Default backend: default-http-backend:80 (<error: endpoints "default-http-backend" not found>)
Rules:
Host Path Backends
---- ---- --------
awx.mycompany.com
/ awx:80 (10.244.1.8:8080)
Annotations: <none>
Events: <none>
Now I am not too sure whether the default-http-backend:80 error is a red herring, as I have seen it in a number of places where people don't seem too worried about it, but please correct me if I am wrong.
Please let me know whether there is anything else I can do to troubleshoot this, and I will get back to you as soon as I can.
You are right and the blank address is the issue here. In traditional cloud environments, where network load balancers are available on-demand, a single Kubernetes manifest suffices to provide a single point of contact to the NGINX Ingress controller to external clients and, indirectly, to any application running inside the cluster.
Bare-metal environments on the other hand lack this option, requiring from you a slightly different setup to offer the same kind of access to external consumers:
This means you have to do some additional gymnastics to make the ingress work. You have basically two main options here (both well described here):
A pure software solution: MetalLB
Over the NodePort service.
What happens here is that you create a Service of type NodePort with a selector that matches your ingress controller pod, and it then routes the traffic according to your Ingress object:
# Source: ingress-nginx/templates/controller-service.yaml
apiVersion: v1
kind: Service
metadata:
  annotations:
  labels:
    helm.sh/chart: ingress-nginx-3.30.0
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 0.46.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: controller
  name: ingress-nginx-controller
  namespace: ingress-nginx
spec:
  type: NodePort
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: http
  - name: https
    port: 443
    protocol: TCP
    targetPort: https
  selector:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/component: controller
The full nginx deployment that contains that service can be found here.
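If you prefer to have Helm render that NodePort service for you instead of applying the static manifest, something along these lines should work (controller.service.type is a value in the official ingress-nginx chart; double-check against the chart version you install):
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update
helm install ingress-nginx ingress-nginx/ingress-nginx \
  --namespace ingress-nginx --create-namespace \
  --set controller.service.type=NodePort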
If you wish to skip the ingress, you could simply use the NodePort service awx and reach it directly (see the sketch below).
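A minimal check from outside the cluster, assuming you have a node IP that is reachable from your machine:
# any worker node IP works here; 32155 is the nodePort shown for the awx service above
curl http://<node-ip>:32155/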
I am using Kubernetes 1.22 and operator version 0.14.0.
I have a bare-metal Kubernetes installation and I have to use ingress. The ingress provided with the operator is not compatible with the version of Kubernetes I am using, so I had to define it myself.
I am using Ansible, but you can work out the values for the variables :)
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: {{ awx_deployment_name }}-ingress-unmanaged
  namespace: {{ awx_namespace }}
  annotations:
    nginx.ingress.kubernetes.io/proxy-read-timeout: "3600"
    nginx.ingress.kubernetes.io/proxy-send-timeout: "3600"
spec:
  ingressClassName: nginx
  rules:
  - host: {{ awx_host }}
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: {{ awx_deployment_name }}-service
            port:
              number: 80
  tls:
  - hosts:
    - {{ awx_host }}
    secretName: {{ awx_tls_secret }}
You can simply expose the deployment with a Service of type LoadBalancer.
The following command creates such a service:
kubectl expose deployment awx-demo --port=80 --target-port=8052 --name=awx-lb --type=LoadBalancer
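For reference, the command above generates roughly the following Service; kubectl expose copies the selector from the deployment, so the app: awx-demo label below is only an assumption — check your deployment's actual pod labels:
apiVersion: v1
kind: Service
metadata:
  name: awx-lb
spec:
  type: LoadBalancer
  selector:
    app: awx-demo        # assumed; use the labels actually set on the awx-demo pods
  ports:
  - port: 80             # port exposed by the load balancer
    targetPort: 8052     # AWX web container port, as in the command above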

Kubernetes (on-premises) Metallb LoadBalancer and sticky sessions

I installed one Kubernetes master and two Kubernetes workers on-premises.
Then I installed MetalLB as a LoadBalancer using the commands below:
$ kubectl edit configmap -n kube-system kube-proxy
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: "ipvs"
ipvs:
  strictARP: true
kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.9.6/manifests/namespace.yaml
kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.9.6/manifests/metallb.yaml
kubectl create secret generic -n metallb-system memberlist --from-literal=secretkey="$(openssl rand -base64 128)"
vim config-map.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 10.100.170.200-10.100.170.220
kubectl apply -f config-map.yaml
kubectl describe configmap config -n metallb-system
I created my YAML files as below:
myapp-tst-deploy.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-tst-deployment
  labels:
    app: myapp-tst
spec:
  replicas: 2
  selector:
    matchLabels:
      app: myapp-tst
  template:
    metadata:
      labels:
        app: myapp-tst
    spec:
      containers:
      - name: myapp-tst
        image: myapp-tomcat
        securityContext:
          privileged: true
          capabilities:
            add:
            - SYS_ADMIN
myapp-tst-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: myapp-tst-service
  labels:
    app: myapp-tst
spec:
  externalTrafficPolicy: Cluster
  type: LoadBalancer
  ports:
  - name: myapp-tst-port
    nodePort: 30080
    port: 80
    protocol: TCP
    targetPort: 8080
  selector:
    app: myapp-tst
  sessionAffinity: None
myapp-tst-ingress.yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: myapp-tst-ingress
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/affinity: "cookie"
    nginx.ingress.kubernetes.io/affinity-mode: "persistent"
    nginx.ingress.kubernetes.io/session-cookie-name: "INGRESSCOOKIE"
    nginx.ingress.kubernetes.io/session-cookie-expires: "172800"
    nginx.ingress.kubernetes.io/session-cookie-max-age: "172800"
spec:
  rules:
  - http:
      paths:
      - path: /
        backend:
          serviceName: myapp-tst-service
          servicePort: myapp-tst-port
I ran kubectl apply -f for all three files, and this is my result:
kubectl get all -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
pod/myapp-tst-deployment-54474cd74-p8cxk 1/1 Running 0 4m53s 10.36.0.1 bcc-tst-docker02 <none> <none>
pod/myapp-tst-deployment-54474cd74-pwlr8 1/1 Running 0 4m53s 10.44.0.2 bca-tst-docker01 <none> <none>
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
service/myapp-tst-service LoadBalancer 10.110.184.237 10.100.170.15 80:30080/TCP 4m48s app=myapp-tst,tier=backend
service/kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 6d22h <none>
NAME READY UP-TO-DATE AVAILABLE AGE CONTAINERS IMAGES SELECTOR
deployment.apps/myapp-tst-deployment 2/2 2 2 4m53s myapp-tst mferraramiki/myapp-test app=myapp-tst
NAME DESIRED CURRENT READY AGE CONTAINERS IMAGES SELECTOR
replicaset.apps/myapp-tst-deployment-54474cd74 2 2 2 4m53s myapp-tst myapp/myapp-test app=myapp-tst,pod-template-hash=54474cd74
But when I try to connect using the LB external IP (10.100.170.15), the request is routed to one pod;
if I refresh or open a new tab (on the same URL), the request is routed to another pod.
I need that when a user enters the URL in the browser, they stay connected to one specific pod for the whole session and do not switch to other pods.
How can I solve this problem, if it is possible?
On my VMs I solved this with sticky sessions; how can I enable them on the LB or in the Kubernetes components?
In the myapp-tst-service.yaml file the "sessionAffinity" is set to "None".
You should try to set it to "ClientIP".
From page https://kubernetes.io/docs/concepts/services-networking/service/ :
"If you want to make sure that connections from a particular client are passed to the same Pod each time, you can select the session affinity based on the client's IP addresses by setting service.spec.sessionAffinity to "ClientIP" (the default is "None"). You can also set the maximum session sticky time by setting service.spec.sessionAffinityConfig.clientIP.timeoutSeconds appropriately. (the default value is 10800, which works out to be 3 hours)."

How do I add a service and traefik ingress to an EKS cluster?

Notes
I am trying to deploy a service and ingress for a demo service (from 'Kubernetes in Action') to an AWS EKS cluster in which the traefik ingress controller has been installed with Helm.
I am able to access the traefik dashboard from the traefik.example.com hostname after manually adding the IP address of the AWS ELB provisioned by traefik to that hostname in my local /etc/hosts file.
If I describe the service and ingress of the traefik-dashboard:
$ kubectl describe svc -n kube-system traefik-dashboard
Name: traefik-dashboard
Namespace: kube-system
Labels: app=traefik
chart=traefik-1.52.6
heritage=Tiller
release=traefik
Annotations: <none>
Selector: app=traefik,release=traefik
Type: ClusterIP
IP: 10.100.164.81
Port: <unset> 80/TCP
TargetPort: 8080/TCP
Endpoints: 172.31.27.70:8080
Session Affinity: None
Events: <none>
$ kubectl describe ing -n kube-system traefik-dashboard
Name: traefik-dashboard
Namespace: kube-system
Address:
Default backend: default-http-backend:80 (<none>)
Rules:
Host Path Backends
---- ---- --------
traefik.example.com
traefik-dashboard:80 (172.31.27.70:8080)
Annotations:
Events: <none>
The service and ingress controller seem to be using the running traefik-575cc584fb-v4mfn pod in the kube-system namespace.
Given this info and looking at the traefik docs, I try to expose a demo service through its ingress with the following YAML:
apiVersion: apps/v1beta2
kind: ReplicaSet
metadata:
  name: kubia
spec:
  replicas: 3
  selector:
    matchLabels:
      app: kubia
  template:
    metadata:
      labels:
        app: kubia
    spec:
      containers:
      - name: kubia
        image: luksa/kubia
---
apiVersion: v1
kind: Service
metadata:
  name: kubia
  namespace: default
spec:
  selector:
    app: traefik
    release: traefik
  ports:
  - name: web
    port: 80
    targetPort: 8080
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: kubia
  namespace: default
spec:
  rules:
  - host: kubia.int
    http:
      paths:
      - path: /
        backend:
          serviceName: kubia
          servicePort: web
After applying this, I am unable to access the kubia service from the kubia.int hostname after manually adding the IP address of the AWS ELB provisioned by traefik to that hostname in my local /etc/hosts file. Instead, I get a Service Unavailable in the response. Describing the created resources shows some differing info.
$ kubectl describe svc kubia
Name: kubia
Namespace: default
Labels: <none>
Annotations: kubectl.kubernetes.io/last-applied-configuration:
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"name":"kubia","namespace":"default"},"spec":{"ports":[{"name":"web","por...
Selector: app=traefik,release=traefik
Type: ClusterIP
IP: 10.100.142.243
Port: web 80/TCP
TargetPort: 8080/TCP
Endpoints: <none>
Session Affinity: None
Events: <none>
$ kubectl describe ing kubia
Name: kubia
Namespace: default
Address:
Default backend: default-http-backend:80 (<none>)
Rules:
Host Path Backends
---- ---- --------
kubia.int
/ kubia:web (<none>)
Annotations:
kubectl.kubernetes.io/last-applied-configuration: {"apiVersion":"extensions/v1beta1","kind":"Ingress","metadata":{"annotations":{},"name":"kubia","namespace":"default"},"spec":{"rules":[{"host":"kubia.int","http":{"paths":[{"backend":{"serviceName":"kubia","servicePort":"web"},"path":"/"}]}}]}}
Events: <none>
I also notice that the demo kubia service has no endpoints, and the corresponding ingress shows no available backends.
Another thing I notice is that the demo kubia service and ingress is in the default namespace, while the traefik-dashboard service and ingress are in the kube-system namespace.
Does anything jump out to anyone? Any suggestions on the best way to diagnose it?
Many thanks in advance!
It would seem that you are missing the kubernetes.io/ingress.class: traefik annotation that tells your Traefik ingress controller to serve that Ingress definition.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: kubia
  namespace: default
  annotations:
    kubernetes.io/ingress.class: traefik
spec:
  rules:
  - host: kubia.int
    http:
      paths:
      - path: /
        backend:
          serviceName: kubia
          servicePort: web
If you look at the examples in the docs, you can see that the only Ingress without the annotation is traefik-web-ui, which points to the Traefik Web UI.

What is URL for Kibana UI

http://grs-preprodkubemaster01:5601/kibana
I have followed the docs and installed Kibana. When I used the service as type: LoadBalancer, the service didn't come up, so I deleted the type: LoadBalancer and let it default to ClusterIP, and it came up fine. (Note: I don't have AWS.)
But I am not sure how to access the UI. I tried this URL but it's not working:
http://my-preprodkubemaster01/api/v1/proxy/namespaces/kube-system/services/elasticsearch-logging/app/kibana
Any ideas how to access the Kibana UI? I checked the service and deployment and everything shows a green check.
Another thing I tried is the URL below, which I got from the command kubectl cluster-info:
https://10.123.24.107:6443/api/v1/namespaces/kube-system/services/kibana-logging/proxy
However, this is showing me this error
{
kind: "Status",
apiVersion: "v1",
metadata: { },
status: "Failure",
message: "services "kibana-logging" is forbidden: User "system:anonymous" cannot get services/proxy in the namespace "kube-system"",
reason: "Forbidden",
details: {
name: "kibana-logging",
kind: "services"
},
code: 403
}
So, as another try, I used the Kibana service as a NodePort, but that didn't work either.
apiVersion: v1
kind: Service
metadata:
  name: kibana-logging
  namespace: kube-system
  labels:
    k8s-app: kibana-logging
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
    kubernetes.io/name: "Kibana"
spec:
  selector:
    k8s-app: kibana-logging
  type: NodePort
  ports:
  - port: 5601
    protocol: TCP
    targetPort: ui
    nodePort: 30887
$ kubectl -n kube-system get rc,svc,cm,po
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
svc/elasticsearch-logging ClusterIP 10.98.10.182 <none> 9200/TCP 12m
svc/heapster ClusterIP 10.107.184.85 <none> 80/TCP 3d
svc/kibana-logging NodePort 10.102.254.129 <none> 5601:30887/TCP 12m
svc/kube-dns ClusterIP 10.96.0.10 <none> 53/UDP,53/TCP 3d
svc/kubernetes-dashboard ClusterIP 10.105.30.246 <none> 80/TCP 3d
svc/monitoring-influxdb ClusterIP 10.109.144.39 <none> 8086/TCP 3d
I would like to know what URL I should be using to access the Kibana UI. Please note that I have not tried kubectl proxy and I would like to have it work without it.
Use the NodePort you defined in your service:
https://10.123.24.107:30887
The most common way to expose an internal service outside the cluster is an Ingress.
First, you need to have an Ingress controller running in your Kubernetes cluster.
There are two types of maintained Ingress controllers - GCE and nginx
Then, you need to create a yaml file as shown below and change it according to your needs:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: test-ingress
spec:
  backend:
    serviceName: testsvc
    servicePort: 80
When you create it using kubectl create -f, you should see something like this:
$ kubectl get ingress
NAME RULE BACKEND ADDRESS
test-ingress - testsvc:80 1.2.3.4
In this example, 1.2.3.4 is the IP allocated by Ingress controller.
When you have all things in place, you'll be able to access your application (Kibana) by IP 1.2.3.4
Please find more examples and use cases in Ingress documentation
You can also expose a Kubernetes service without using the Ingress resource:
Service.Type=LoadBalancer
Service.Type=NodePort
Port Proxy (sketched below)
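As a quick illustration of the last option, a local port proxy to the Kibana service (reasonably recent kubectl; adjust namespace and service name to your setup) would be something like:
kubectl -n kube-system port-forward svc/kibana-logging 5601:5601
# then open http://localhost:5601 in a browser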
I got it to work with these changes in the ingress config:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: kube
  namespace: kube-system
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.org/rewrites: "serviceName=kubernetes-dashboard rewrite=/;serviceName=kibana-logging rewrite=/"
spec:
  rules:
  - host: HOSTNAME_OF_MASTER
    http:
      paths:
      - path: /kube-ui/
        backend:
          serviceName: kubernetes-dashboard
          servicePort: 80
      - path: /kibana/
        backend:
          serviceName: kibana-logging
          servicePort: 5601
and my Kibana service is set up as NodePort:
apiVersion: v1
kind: Service
metadata:
  name: kibana-logging
  namespace: kube-system
  labels:
    k8s-app: kibana-logging
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
    kubernetes.io/name: "Kibana"
spec:
  type: NodePort
  ports:
  - port: 5601
    protocol: TCP
    targetPort: ui
  selector:
    k8s-app: kibana-logging
and the dashboard is also configured like this:
# ------------------- Dashboard Service ------------------- #
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
spec:
  type: NodePort
  ports:
  - port: 80
    targetPort: 9090
  selector:
    k8s-app: kubernetes-dashboard
Once you have the svc running you can access Kibana using the NodePort from any node. Example: http://node01_ip:31325/app/kibana
$ kubectl get svc -o wide -n=kube-system
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
elasticsearch-logging ClusterIP 10.xx.120.130 <none> 9200/TCP 11h k8s-app=elasticsearch-logging
heapster ClusterIP 10.xx.232.165 <none> 80/TCP 11h k8s-app=heapster
kibana-logging NodePort 10.xx.39.255 <none> 5601:31325/TCP 11h k8s-app=kibana-logging
kube-dns ClusterIP 10.xx.0.xx <none> 53/UDP,53/TCP 12h k8s-app=kube-dns
kubernetes-dashboard NodePort 10.xx.xx.xx <none> 80:32086/TCP 11h k8s-app=kubernetes-dashboard
monitoring-influxdb ClusterIP 10.13.199.138 <none> 8086/TCP 11h k8s-app=influxdb