http://grs-preprodkubemaster01:5601/kibana
I have followed the docs and installed Kibana. When I used the service with type: LoadBalancer, the service wouldn't
come up, so I deleted type: LoadBalancer and let it default to ClusterIP, and it came up fine. (Note: I don't have AWS.)
But I am not sure how to access the UI. I tried this URL, but it's not working:
http://my-preprodkubemaster01/api/v1/proxy/namespaces/kube-system/services/elasticsearch-logging/app/kibana
Any ideas how to access the Kibana UI? I checked the service and deployment, and everything has a green check.
Another thing I tried is this URL, which I got from the command kubectl cluster-info:
https://10.123.24.107:6443/api/v1/namespaces/kube-system/services/kibana-logging/proxy
However, this is showing me this error
{
  kind: "Status",
  apiVersion: "v1",
  metadata: { },
  status: "Failure",
  message: "services "kibana-logging" is forbidden: User "system:anonymous" cannot get services/proxy in the namespace "kube-system"",
  reason: "Forbidden",
  details: {
    name: "kibana-logging",
    kind: "services"
  },
  code: 403
}
So, as another try, I used the Kibana service as NodePort, but that didn't work either:
apiVersion: v1
kind: Service
metadata:
  name: kibana-logging
  namespace: kube-system
  labels:
    k8s-app: kibana-logging
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
    kubernetes.io/name: "Kibana"
spec:
  selector:
    k8s-app: kibana-logging
  type: NodePort
  ports:
  - port: 5601
    protocol: TCP
    targetPort: ui
    nodePort: 30887
$ kubectl -n kube-system get rc,svc,cm,po
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
svc/elasticsearch-logging ClusterIP 10.98.10.182 <none> 9200/TCP 12m
svc/heapster ClusterIP 10.107.184.85 <none> 80/TCP 3d
svc/kibana-logging NodePort 10.102.254.129 <none> 5601:30887/TCP 12m
svc/kube-dns ClusterIP 10.96.0.10 <none> 53/UDP,53/TCP 3d
svc/kubernetes-dashboard ClusterIP 10.105.30.246 <none> 80/TCP 3d
svc/monitoring-influxdb ClusterIP 10.109.144.39 <none> 8086/TCP 3d
I would like to know what URL I should be using to access the Kibana UI. Please note that I have not tried kubectl proxy, and I would like to have it work without it.
Use the NodePort you defined in your service:
https://10.123.24.107:30887
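As a side note (based on the working example later in this thread, not verified against your cluster), the Kibana UI is normally served under the /app/kibana path, so the full URL would look something like:
http://10.123.24.107:30887/app/kibana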
The most common way to expose an internal service outside the cluster is an Ingress.
First, you need to have an Ingress controller running in your Kubernetes cluster.
There are two types of maintained Ingress controllers: GCE and nginx.
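If you go the nginx route, a minimal install sketch (assuming Helm 3 is available; the release name and namespace below are just examples) looks like:
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update
helm install ingress-nginx ingress-nginx/ingress-nginx --namespace ingress-nginx --create-namespace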
Then, you need to create a yaml file as shown below and change it according to your needs:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: test-ingress
spec:
  backend:
    serviceName: testsvc
    servicePort: 80
When you create it using kubectl create -f, you should see something like this:
$ kubectl get ingress
NAME RULE BACKEND ADDRESS
test-ingress - testsvc:80 1.2.3.4
In this example, 1.2.3.4 is the IP allocated by Ingress controller.
When you have all things in place, you'll be able to access your application (Kibana) by IP 1.2.3.4
Please find more examples and use cases in the Ingress documentation.
You can also expose a Kubernetes service without using the Ingress resource:
Service.Type=LoadBalancer
Service.Type=NodePort
Port Proxy
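For the Port Proxy option, a minimal sketch using kubectl's built-in port forwarding (assuming kubectl access to the cluster and the kibana-logging service from the question):
# forward local port 5601 to port 5601 of the service in kube-system
kubectl -n kube-system port-forward svc/kibana-logging 5601:5601
# then open http://localhost:5601 in a browser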
I got it to work with these changes in the ingress config:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: kube
  namespace: kube-system
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.org/rewrites: "serviceName=kubernetes-dashboard rewrite=/;serviceName=kibana-logging rewrite=/"
spec:
  rules:
  - host: HOSTNAME_OF_MASTER
    http:
      paths:
      - path: /kube-ui/
        backend:
          serviceName: kubernetes-dashboard
          servicePort: 80
      - path: /kibana/
        backend:
          serviceName: kibana-logging
          servicePort: 5601
and my Kibana service is set up as NodePort:
apiVersion: v1
kind: Service
metadata:
  name: kibana-logging
  namespace: kube-system
  labels:
    k8s-app: kibana-logging
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
    kubernetes.io/name: "Kibana"
spec:
  type: NodePort
  ports:
  - port: 5601
    protocol: TCP
    targetPort: ui
  selector:
    k8s-app: kibana-logging
and the dashboard is also configured like this:
# ------------------- Dashboard Service ------------------- #
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
spec:
  type: NodePort
  ports:
  - port: 80
    targetPort: 9090
  selector:
    k8s-app: kubernetes-dashboard
Once you have the svc running, you can access Kibana using the NodePort from any node. Example: http://node01_ip:31325/app/kibana
$ kubectl get svc -o wide -n=kube-system
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
elasticsearch-logging ClusterIP 10.xx.120.130 <none> 9200/TCP 11h k8s-app=elasticsearch-logging
heapster ClusterIP 10.xx.232.165 <none> 80/TCP 11h k8s-app=heapster
kibana-logging NodePort 10.xx.39.255 <none> 5601:31325/TCP 11h k8s-app=kibana-logging
kube-dns ClusterIP 10.xx.0.xx <none> 53/UDP,53/TCP 12h k8s-app=kube-dns
kubernetes-dashboard NodePort 10.xx.xx.xx <none> 80:32086/TCP 11h k8s-app=kubernetes-dashboard
monitoring-influxdb ClusterIP 10.13.199.138 <none> 8086/TCP 11h k8s-app=influxdb
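With the ingress rules above in place (assuming the nginx Ingress controller is running and HOSTNAME_OF_MASTER resolves to it), the UIs should also be reachable through the ingress paths, for example:
http://HOSTNAME_OF_MASTER/kibana/
http://HOSTNAME_OF_MASTER/kube-ui/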
Related
I have a node server which runs in a container inside a pod. The pod is up and running and is working fine.
Then I have a NodePort service which is used to serve the pod. The service is curl-able from inside the container and returns a response successfully.
apiVersion: v1
kind: Service
metadata:
  annotations:
    meta.helm.sh/release-name: server
    meta.helm.sh/release-namespace: default
  creationTimestamp: "2022-12-20T23:46:09Z"
  labels:
    app: server
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: ingress-nginx
    chart: server-0.1.0
    helm.sh/chart: nginx-13.2.19
    heritage: Helm
    release: server
  name: server
  namespace: default
  resourceVersion: "86325"
  uid: 9eea1b02-af20-4e79-b713-6be2683cfcf7
spec:
  clusterIP: 10.100.138.182
  clusterIPs:
  - 10.100.138.182
  externalTrafficPolicy: Cluster
  internalTrafficPolicy: Cluster
  ipFamilies:
  - IPv4
  ipFamilyPolicy: SingleStack
  ports:
  - name: http
    port: 80
    nodePort: 30001
    protocol: TCP
    targetPort: 3001
  selector:
    app: server
    release: server
  sessionAffinity: None
  type: NodePort
status:
  loadBalancer: {}
Then I have an NGINX Ingress Controller running and an Ingress resource.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    ingress.kubernetes.io/rewrite-target: /
    kubernetes.io/ingress.class: nginx
    meta.helm.sh/release-name: server
    meta.helm.sh/release-namespace: default
  creationTimestamp: "2022-12-20T23:46:09Z"
  generation: 2
  labels:
    app.kubernetes.io/instance: server
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: server
    helm.sh/chart: server-0.1.0
  name: server
  namespace: default
  resourceVersion: "87296"
  uid: e4a597c3-3d85-402d-bf56-19858c22e7bf
spec:
  defaultBackend:
    service:
      name: server
      port:
        number: 80
  ingressClassName: nginx
status:
  loadBalancer: {}
After all this when I do kubectl get pods:
NAME READY STATUS RESTARTS AGE
nginxserver-876bf6f87-ncnfl 1/1 Running 0 5h23m
server-7466b8f5d8-jmm57 1/1 Running 0 12m
kubectl get service:
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.100.0.1 <none> 443/TCP 9h
nginxserver LoadBalancer 10.100.129.213 aa486e3c823d54f4fbe8da864b55d394-674371728.us-east-2.elb.amazonaws.com 80:31573/TCP 5h23m
server NodePort 10.100.138.182 <none> 80:30001/TCP 13m
kubectl get ingress:
NAME CLASS HOSTS ADDRESS PORTS AGE
server nginx * 80 14m
This currently shows an age of 14m, but before this I had also waited 30m and no address ever appeared.
I have searched a lot and found that this is usually associated with wrong ingress settings or wrong namespaces, but none of that helped me.
What can I do so that I can get an address for the ingress?
Thank you!
I gave up. As a last option, I thought maybe my ingress controller was not working properly, so I ran:
helm repo add nginx-stable https://helm.nginx.com/stable
helm repo update
helm install ingress-nginx nginx-stable/nginx-ingress --set rbac.create=true
after deleting the previous one, and my ingress got an address within seconds.
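A couple of optional checks (not part of the original steps) to confirm the new controller picked the Ingress up:
kubectl get pods | grep ingress     # the controller pod should be Running
kubectl get ingress server          # the ADDRESS column should now be populated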
I installed one Kubernetes master and two Kubernetes workers on-premises.
After that I installed MetalLB as the load balancer using the commands below:
$ kubectl edit configmap -n kube-system kube-proxy
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: "ipvs"
ipvs:
  strictARP: true
kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.9.6/manifests/namespace.yaml
kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.9.6/manifests/metallb.yaml
kubectl create secret generic -n metallb-system memberlist --from-literal=secretkey="$(openssl rand -base64 128)"
vim config-map.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 10.100.170.200-10.100.170.220
kubectl apply -f config-map.yaml
kubectl describe configmap config -n metallb-system
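An optional sanity check at this point (not one of the original commands) is to confirm the MetalLB pods came up before creating LoadBalancer services:
kubectl get pods -n metallb-system   # controller and speaker pods should be Running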
I created my yaml file as below:
myapp-tst-deploy.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-tst-deployment
  labels:
    app: myapp-tst
spec:
  replicas: 2
  selector:
    matchLabels:
      app: myapp-tst
  template:
    metadata:
      labels:
        app: myapp-tst
    spec:
      containers:
      - name: myapp-tst
        image: myapp-tomcat
        securityContext:
          privileged: true
          capabilities:
            add:
            - SYS_ADMIN
myapp-tst-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: myapp-tst-service
  labels:
    app: myapp-tst
spec:
  externalTrafficPolicy: Cluster
  type: LoadBalancer
  ports:
  - name: myapp-tst-port
    nodePort: 30080
    port: 80
    protocol: TCP
    targetPort: 8080
  selector:
    app: myapp-tst
  sessionAffinity: None
myapp-tst-ingress.yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: myapp-tst-ingress
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/affinity: "cookie"
    nginx.ingress.kubernetes.io/affinity-mode: "persistent"
    nginx.ingress.kubernetes.io/session-cookie-name: "INGRESSCOOKIE"
    nginx.ingress.kubernetes.io/session-cookie-expires: "172800"
    nginx.ingress.kubernetes.io/session-cookie-max-age: "172800"
spec:
  rules:
  - http:
      paths:
      - path: /
        backend:
          serviceName: myapp-tst-service
          servicePort: myapp-tst-port
I ran kubectl apply -f for all three files, and this is my result:
kubectl get all -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
pod/myapp-tst-deployment-54474cd74-p8cxk 1/1 Running 0 4m53s 10.36.0.1 bcc-tst-docker02 <none> <none>
pod/myapp-tst-deployment-54474cd74-pwlr8 1/1 Running 0 4m53s 10.44.0.2 bca-tst-docker01 <none> <none>
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
service/myapp-tst-service LoadBalancer 10.110.184.237 10.100.170.15 80:30080/TCP 4m48s app=myapp-tst,tier=backend
service/kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 6d22h <none>
NAME READY UP-TO-DATE AVAILABLE AGE CONTAINERS IMAGES SELECTOR
deployment.apps/myapp-tst-deployment 2/2 2 2 4m53s myapp-tst mferraramiki/myapp-test app=myapp-tst
NAME DESIRED CURRENT READY AGE CONTAINERS IMAGES SELECTOR
replicaset.apps/myapp-tst-deployment-54474cd74 2 2 2 4m53s myapp-tst myapp/myapp-test app=myapp-tst,pod-template-hash=54474cd74
But when I try to connect using the LB external IP (10.100.170.15), the browser request is routed to one pod; if I refresh or open a new tab on the same URL, the request is routed to another pod.
I need a user who enters the URL in the browser to stay connected to the same pod for the whole session, not switch to other pods.
How can I solve this problem, if it is possible?
On my VMs I resolved this issue using sticky sessions; how can I enable that on the LB or in the Kubernetes components?
In the myapp-tst-service.yaml file the "sessionAffinity" is set to "None".
You should try to set it to "ClientIP".
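A minimal sketch of what that change would look like in the service from the question (the timeout shown is just the documented default; adjust as needed):
apiVersion: v1
kind: Service
metadata:
  name: myapp-tst-service
  labels:
    app: myapp-tst
spec:
  type: LoadBalancer
  externalTrafficPolicy: Cluster
  ports:
  - name: myapp-tst-port
    nodePort: 30080
    port: 80
    protocol: TCP
    targetPort: 8080
  selector:
    app: myapp-tst
  sessionAffinity: ClientIP
  sessionAffinityConfig:
    clientIP:
      timeoutSeconds: 10800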
From page https://kubernetes.io/docs/concepts/services-networking/service/ :
"If you want to make sure that connections from a particular client are passed to the same Pod each time, you can select the session affinity based on the client's IP addresses by setting service.spec.sessionAffinity to "ClientIP" (the default is "None"). You can also set the maximum session sticky time by setting service.spec.sessionAffinityConfig.clientIP.timeoutSeconds appropriately. (the default value is 10800, which works out to be 3 hours)."
I have a deployment with 2 replicas. I can access the pods via curl.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 2
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.7.9
        ports:
        - containerPort: 80
I created a service.
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  selector:
    app: nginx
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
I was expecting to be able to access the pods through the service via its endpoints, but the result I get is:
{
  "kind": "Status",
  "apiVersion": "v1",
  "metadata": {
  },
  "status": "Failure",
  "message": "forbidden: User \"system:anonymous\" cannot get path \"/\"",
  "reason": "Forbidden",
  "details": {
  },
  "code": 403
}
Can someone help me understand what is going on here better?
kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 2d23h
nginx-service ClusterIP 10.109.19.46 <none> 80/TCP 22h
curl --insecure https://PUBLICIP:6443
With your command you're trying to access the / path of the Kubernetes API, but it fails because you're not supplying any credentials to the request:
curl --insecure https://PUBLICIP:6443
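For completeness, if you really did want to query the API server directly, you would have to supply credentials, for example a ServiceAccount token (a hypothetical sketch; kubectl create token requires kubectl 1.24+):
TOKEN=$(kubectl create token default)
curl --insecure -H "Authorization: Bearer $TOKEN" https://PUBLICIP:6443/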
But anyway, if you want to access the Pods behind the Service, you don't need to access the Kubernetes API, but you need to access the exposed Service.
A ClusterIP service as in your example (the default) cannot be accessed from outside the cluster. If you want to access the Service from outside the cluster, you need to create, for example, a NodePort Service:
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  selector:
    app: nginx
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
  type: NodePort
And then you can access the Service (and thus the Pods behind it) through the IP address and NodePort of one of the worker nodes:
curl NODE_IP:NODE_PORT
You can get the NODE_IP of one of the nodes with kubectl get nodes -o wide and the NODE_PORT of the Service with kubectl get svc nginx-service.
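As a rough one-liner version (the jsonpath expressions below are just one way to pick the first node's InternalIP and the service's first nodePort; adapt to your cluster):
NODE_IP=$(kubectl get nodes -o jsonpath='{.items[0].status.addresses[?(@.type=="InternalIP")].address}')
NODE_PORT=$(kubectl get svc nginx-service -o jsonpath='{.spec.ports[0].nodePort}')
curl http://$NODE_IP:$NODE_PORT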
The curl command you are using will hit the Kubernetes API server (which defaults to port 6443), NOT the service you created.
The nginx-service you created is a ClusterIP service, which is not accessible from outside. You have to use either a NodePort or a LoadBalancer type service.
To access your service, you can try the below (using NodePort):
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  selector:
    app: nginx
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
  type: NodePort
Then get the NodePort from kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 2d23h
nginx-service NodePort 10.109.19.46 <none> 80:32474/TCP 22h <<<-- (NodePort is 32474)
You can use any node's IP and port 32474 in combination to access the service.
E.g.:
curl http://192.168.10.10:32474
The easiest way is to expose the service as "NodePort" using kubectl.
First, you have to delete the existing service:
kubectl delete svc nginx-service //if you have a namespace -n yourNameSpace
Then expose your deployment as "NodePort" as follows:
kubectl expose deployment nginx-deployment --port=80 --type=NodePort --name=nginx-service //if you have a namespace -n yourNameSpace
Now the "NodePort" service has been created and you could curl to the service through,
curl NODE_IP:NODE_PORT
I created an Ingress following the guide in the 'Kubernetes in Action' book on GKE, but the Ingress doesn't work; it can't be accessed from the public IP address of the Ingress.
I created the ReplicaSet to create the pods.
I created the Service (following the NodePort method in 'Kubernetes in Action').
I created the Ingress.
The ReplicaSet, Service, and Ingress were created successfully, the NodePort can be accessed from the public IP address, and there is no UNHEALTHY in the Ingress.
replicaset:
apiVersion: apps/v1beta2
kind: ReplicaSet
metadata:
  name: kubia
spec:
  replicas: 3
  selector:
    matchLabels:
      app: kubia
  template:
    metadata:
      labels:
        app: kubia
    spec:
      containers:
      - name: kubia
        image: sonyfaye/kubia
Service:
apiVersion: v1
kind: Service
metadata:
  name: kubia-nodeport
spec:
  type: NodePort
  ports:
  - port: 80
    targetPort: 8080
    nodePort: 30123
  selector:
    app: kubia
Ingress:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: kubia
spec:
  rules:
  - host: kubia.example.com
    http:
      paths:
      - path: /
        backend:
          serviceName: kubia-nodeport
          servicePort: 80
The nodeport itself can be accessed from public IP addresses.
C:\kube>kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.59.240.1 <none> 443/TCP 8d
kubia-nodeport NodePort 10.59.253.10 <none> 80:30123/TCP 20h
C:\kube>kubectl get node
NAME STATUS ROLES AGE VERSION
gke-kubia-default-pool-08dd2133-qbz6 Ready <none> 8d v1.12.8-gke.6
gke-kubia-default-pool-183639fa-18vr Ready <none> 8d v1.12.8-gke.6
gke-kubia-default-pool-42725220-43q8 Ready <none> 8d v1.12.8-gke.6
C:\kube>kubectl get node -o wide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
gke-kubia-default-pool-08dd2133-qbz6 Ready <none> 8d v1.12.8-gke.6 10.140.0.17 35.201.224.238 Container-Optimized OS from Google 4.14.119+ docker://17.3.2
gke-kubia-default-pool-183639fa-18vr Ready <none> 8d v1.12.8-gke.6 10.140.0.18 35.229.152.12 Container-Optimized OS from Google 4.14.119+ docker://17.3.2
gke-kubia-default-pool-42725220-43q8 Ready <none> 8d v1.12.8-gke.6 10.140.0.16 34.80.225.64 Container-Optimized OS from Google 4.14.119+ docker://17.3.2
C:\kube>curl http://34.80.225.64:30123
You've hit kubia-j2lnr
But the ingress can't be accessed from outside.
hosts file:
34.98.92.110 kubia.example.com
C:\kube>kubectl describe ingress
Name: kubia
Namespace: default
Address: 34.98.92.110
Default backend: default-http-backend:80 (10.56.0.7:8080)
Rules:
Host Path Backends
---- ---- --------
kubia.example.com
/ kubia-nodeport:80 (10.56.0.14:8080,10.56.1.6:8080,10.56.3.4:8080)
Annotations:
ingress.kubernetes.io/backends: {"k8s-be-30123--c4addd497b1e0a6d":"HEALTHY","k8s-be-30594--c4addd497b1e0a6d":"HEALTHY"}
ingress.kubernetes.io/forwarding-rule: k8s-fw-default-kubia--c4addd497b1e0a6d
ingress.kubernetes.io/target-proxy: k8s-tp-default-kubia--c4addd497b1e0a6d
ingress.kubernetes.io/url-map: k8s-um-default-kubia--c4addd497b1e0a6d
Events:
<none>
C:\kube>curl http://kubia.example.com
curl: (7) Failed to connect to kubia.example.com port 80: Timed out
C:\kube>telnet kubia.example.com 80
Connecting To kubia.example.com...
C:\kube>telnet 34.98.92.110 80
Connecting To 34.98.92.110...Could not open connection to the host, on port 80: Connect failed
Tried from the intranet:
curl 34.98.92.110 can get some result, and port 80 of 34.98.92.110 is accessible from the intranet.
C:\kube>kubectl exec -it kubia-lrt9x bash
root@kubia-lrt9x:/# curl http://kubia.example.com
curl: (6) Could not resolve host: kubia.example.com
root@kubia-lrt9x:/# curl http://34.98.92.110
default backend - 404
root@kubia-lrt9x:/# curl http://34.98.92.110
default backend - 404
root@kubia-lrt9x:/# curl http://10.56.0.7:8080
default backend - 404
root@kubia-lrt9x:/#
Does anybody know how to debug this?
The NodePort has been added to the firewall; otherwise the NodePort is not accessible. The Ingress IP doesn't seem to need to be added to the firewall.
Try to expose the ReplicaSet to be able to connect from the outside:
$ kubectl expose rs hello-world --type=NodePort --name=my-service
Remember to first delete the kubia-nodeport service, remove the selector and the service section from the Ingress configuration file, and then apply the changes using the kubectl apply command.
You can find more information here: exposing-externalip.
Useful doc: kubectl-expose.
I am trying to deploy a service and ingress for a demo service (from 'Kubernetes in Action') to an AWS EKS cluster in which the traefik ingress controller has been installed via Helm.
I am able to access the traefik dashboard from the traefik.example.com hostname after manually adding the IP address of the AWS ELB provisioned by traefik to that hostname in my local /etc/hosts file.
If I describe the service and ingress of the traefik-dashboard:
$ kubectl describe svc -n kube-system traefik-dashboard
Name: traefik-dashboard
Namespace: kube-system
Labels: app=traefik
chart=traefik-1.52.6
heritage=Tiller
release=traefik
Annotations: <none>
Selector: app=traefik,release=traefik
Type: ClusterIP
IP: 10.100.164.81
Port: <unset> 80/TCP
TargetPort: 8080/TCP
Endpoints: 172.31.27.70:8080
Session Affinity: None
Events: <none>
$ kubectl describe ing -n kube-system traefik-dashboard
Name: traefik-dashboard
Namespace: kube-system
Address:
Default backend: default-http-backend:80 (<none>)
Rules:
Host Path Backends
---- ---- --------
traefik.example.com
traefik-dashboard:80 (172.31.27.70:8080)
Annotations:
Events: <none>
The service and ingress controller seem to be using the running traefik-575cc584fb-v4mfn pod in the kube-system namespace.
Given this info and looking at the traefik docs, I try to expose a demo service through its ingress with the following YAML:
apiVersion: apps/v1beta2
kind: ReplicaSet
metadata:
  name: kubia
spec:
  replicas: 3
  selector:
    matchLabels:
      app: kubia
  template:
    metadata:
      labels:
        app: kubia
    spec:
      containers:
      - name: kubia
        image: luksa/kubia
---
apiVersion: v1
kind: Service
metadata:
  name: kubia
  namespace: default
spec:
  selector:
    app: traefik
    release: traefik
  ports:
  - name: web
    port: 80
    targetPort: 8080
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: kubia
  namespace: default
spec:
  rules:
  - host: kubia.int
    http:
      paths:
      - path: /
        backend:
          serviceName: kubia
          servicePort: web
After applying this, I am unable to access the kubia service from the kubia.int hostname after manually adding the IP address of the AWS ELB provisioned by traefik to that hostname in my local /etc/hosts file. Instead, I get a Service Unavailable in the response. Describing the created resources shows some differing info.
$ kubectl describe svc kubia
Name: kubia
Namespace: default
Labels: <none>
Annotations: kubectl.kubernetes.io/last-applied-configuration:
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"name":"kubia","namespace":"default"},"spec":{"ports":[{"name":"web","por...
Selector: app=traefik,release=traefik
Type: ClusterIP
IP: 10.100.142.243
Port: web 80/TCP
TargetPort: 8080/TCP
Endpoints: <none>
Session Affinity: None
Events: <none>
$ kubectl describe ing kubia
Name: kubia
Namespace: default
Address:
Default backend: default-http-backend:80 (<none>)
Rules:
Host Path Backends
---- ---- --------
kubia.int
/ kubia:web (<none>)
Annotations:
kubectl.kubernetes.io/last-applied-configuration: {"apiVersion":"extensions/v1beta1","kind":"Ingress","metadata":{"annotations":{},"name":"kubia","namespace":"default"},"spec":{"rules":[{"host":"kubia.int","http":{"paths":[{"backend":{"serviceName":"kubia","servicePort":"web"},"path":"/"}]}}]}}
Events: <none>
I also notice that the demo kubia service has no endpoints, and the corresponding ingress shows no available backends.
Another thing I notice is that the demo kubia service and ingress is in the default namespace, while the traefik-dashboard service and ingress are in the kube-system namespace.
Does anything jump out to anyone? Any suggestions on the best way to diagnose it?
Many thanks in advance!
It would seem that you are missing the kubernetes.io/ingress.class: traefik annotation that tells your Traefik ingress controller to serve that Ingress definition.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: kubia
  namespace: default
  annotations:
    kubernetes.io/ingress.class: traefik
spec:
  rules:
  - host: kubia.int
    http:
      paths:
      - path: /
        backend:
          serviceName: kubia
          servicePort: web
If you look at the examples in the docs, you can see that the only Ingress that doesn't have the annotation is traefik-web-ui, which points to the Traefik Web UI.
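Once the annotation is added and the Ingress re-applied with kubectl apply, a quick test (assuming kubia.int still resolves to the Traefik ELB in /etc/hosts) is:
curl http://kubia.int/
or, without touching /etc/hosts, by sending the Host header straight to the ELB hostname:
curl -H "Host: kubia.int" http://<traefik ELB hostname>/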