linkerd Top feature only shows /healthz requests - kubernetes

I am doing Lab 7.2, Service Mesh and Ingress Controller, from the Linux Foundation's Kubernetes Developer course, and there is a problem I am facing: the Top feature only shows the /healthz requests.
It is supposed to show the / requests too, but it does not. I would really like to troubleshoot it, but I have no idea how to even approach it.
More details
Following the course instructions I have:
A k8s cluster deployed on two GCE VMs
linkerd
nginx ingress controller
A simple LoadBalancer service off the httpd image. In effect, this is a NodePort service, since the LoadBalancer is never provisioned. The name is secondapp
A simple ingress object routing to the secondapp service.
I have no idea what information is useful for troubleshooting the issue. Here is some that I can think of:
Setup
Linkerd version
student@master:~$ linkerd version
Client version: stable-2.11.1
Server version: stable-2.11.1
student@master:~$
nginx ingress controller version
student@master:~$ helm list
NAME NAMESPACE REVISION UPDATED STATUS CHART APP VERSION
myingress default 1 2022-09-28 02:09:35.031108611 +0000 UTC deployed ingress-nginx-4.2.5 1.3.1
student@master:~$
The service list
student@master:~$ k get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 7d4h
myingress-ingress-nginx-controller LoadBalancer 10.106.67.139 <pending> 80:32144/TCP,443:32610/TCP 62m
myingress-ingress-nginx-controller-admission ClusterIP 10.107.109.117 <none> 443/TCP 62m
nginx ClusterIP 10.105.88.244 <none> 443/TCP 3h42m
registry ClusterIP 10.110.129.139 <none> 5000/TCP 3h42m
secondapp LoadBalancer 10.105.64.242 <pending> 80:32000/TCP 111m
student@master:~$
Verifying that the ingress controller is known to linkerd
student@master:~$ k get ds myingress-ingress-nginx-controller -o json | jq .spec.template.metadata.annotations
{
  "linkerd.io/inject": "ingress"
}
student@master:~$
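Another check that seems useful here (not from the course, just a generic sidecar sanity check) is to confirm that the linkerd-proxy container was actually added to the running controller pods, since the annotation alone only requests injection. A sketch, assuming the chart's standard app.kubernetes.io/name=ingress-nginx pod label:
k get pods -l app.kubernetes.io/name=ingress-nginx \
  -o jsonpath='{range .items[*]}{.metadata.name}{": "}{.spec.containers[*].name}{"\n"}{end}'
# each meshed pod should list linkerd-proxy alongside controller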
The secondapp pod
apiVersion: v1
kind: Pod
metadata:
  name: secondapp
  labels:
    example: second
spec:
  containers:
  - name: webserver
    image: httpd
  - name: busy
    image: busybox
    command:
    - sleep
    - "3600"
The secondapp service
student@master:~$ k get svc secondapp -o yaml
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: "2022-09-28T01:21:00Z"
  name: secondapp
  namespace: default
  resourceVersion: "433221"
  uid: 9266f000-5582-4796-ba73-02375f56ce2b
spec:
  allocateLoadBalancerNodePorts: true
  clusterIP: 10.105.64.242
  clusterIPs:
  - 10.105.64.242
  externalTrafficPolicy: Cluster
  internalTrafficPolicy: Cluster
  ipFamilies:
  - IPv4
  ipFamilyPolicy: SingleStack
  ports:
  - nodePort: 32000
    port: 80
    protocol: TCP
    targetPort: 80
  selector:
    example: second
  sessionAffinity: None
  type: LoadBalancer
status:
  loadBalancer: {}
student@master:~$
The ingress object
student@master:~$ k get ingress
NAME           CLASS    HOSTS             ADDRESS   PORTS   AGE
ingress-test   <none>   www.example.com             80      65m
student@master:~$ k get ingress ingress-test -o yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: nginx
  creationTimestamp: "2022-09-28T02:20:03Z"
  generation: 1
  name: ingress-test
  namespace: default
  resourceVersion: "438934"
  uid: 1952a816-a3f3-42a4-b842-deb56053b168
spec:
  rules:
  - host: www.example.com
    http:
      paths:
      - backend:
          service:
            name: secondapp
            port:
              number: 80
        path: /
        pathType: ImplementationSpecific
status:
  loadBalancer: {}
student@master:~$
Testing
secondapp
student#master:~$ curl "$(curl ifconfig.io):$(k get svc secondapp '--template={{(index .spec.ports 0).nodePort}}')"
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 15 100 15 0 0 340 0 --:--:-- --:--:-- --:--:-- 348
<html><body><h1>It works!</h1></body></html>
student@master:~$
through the ingress controller
student@master:~$ url="$(curl ifconfig.io):$(k get svc myingress-ingress-nginx-controller '--template={{(index .spec.ports 0).nodePort}}')"
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 15 100 15 0 0 319 0 --:--:-- --:--:-- --:--:-- 319
student@master:~$ curl -H "Host: www.example.com" $url
<html><body><h1>It works!</h1></body></html>
student@master:~$
And without the Host header:
student@master:~$ curl $url
<html>
<head><title>404 Not Found</title></head>
<body>
<center><h1>404 Not Found</h1></center>
<hr><center>nginx</center>
</body>
</html>
student@master:~$
And finally the linkerd dashboard Top snapshot:
Where are the GET / requests?
EDIT 1
So on the linkerd slack someone suggested having a look at https://linkerd.io/2.12/tasks/using-ingress/#nginx, and that made me examine my pods more carefully. It turns out one of the nginx-ingress pods could not start, and it is clearly due to the linkerd injection. Please observe:
Before linkerd
student@master:~$ k get pod
NAME READY STATUS RESTARTS AGE
myingress-ingress-nginx-controller-gbmbg 1/1 Running 0 19m
myingress-ingress-nginx-controller-qtdhw 1/1 Running 0 3m6s
secondapp 2/2 Running 4 (13m ago) 12h
student@master:~$
After linkerd
student@master:~$ k get ds myingress-ingress-nginx-controller -o yaml | linkerd inject --ingress - | k apply -f -
daemonset "myingress-ingress-nginx-controller" injected
daemonset.apps/myingress-ingress-nginx-controller configured
student@master:~$
And checking the pods:
student@master:~$ k get pod
NAME READY STATUS RESTARTS AGE
myingress-ingress-nginx-controller-gbmbg 1/1 Running 0 40m
myingress-ingress-nginx-controller-xhj5m 1/2 Running 8 (5m59s ago) 17m
secondapp 2/2 Running 4 (34m ago) 12h
student@master:~$
student@master:~$ k describe pod myingress-ingress-nginx-controller-xhj5m | tail
Normal Created 19m kubelet Created container linkerd-proxy
Normal Started 19m kubelet Started container linkerd-proxy
Normal Pulled 18m (x2 over 19m) kubelet Container image "registry.k8s.io/ingress-nginx/controller:v1.3.1@sha256:54f7fe2c6c5a9db9a0ebf1131797109bb7a4d91f56b9b362bde2abd237dd1974" already present on machine
Normal Created 18m (x2 over 19m) kubelet Created container controller
Normal Started 18m (x2 over 19m) kubelet Started container controller
Warning FailedPreStopHook 18m kubelet Exec lifecycle hook ([/wait-shutdown]) for Container "controller" in Pod "myingress-ingress-nginx-controller-xhj5m_default(93dd0189-091f-4c56-a197-33991932d66d)" failed - error: command '/wait-shutdown' exited with 137: , message: ""
Warning Unhealthy 18m (x6 over 19m) kubelet Readiness probe failed: HTTP probe failed with statuscode: 502
Normal Killing 18m kubelet Container controller failed liveness probe, will be restarted
Warning Unhealthy 14m (x30 over 19m) kubelet Liveness probe failed: HTTP probe failed with statuscode: 502
Warning BackOff 4m29s (x41 over 14m) kubelet Back-off restarting failed container
student@master:~$
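Before that, the next thing to look at would be the failing pod's own logs (a generic sketch, using the pod and container names from the events above):
k logs myingress-ingress-nginx-controller-xhj5m -c linkerd-proxy --tail=50
k logs myingress-ingress-nginx-controller-xhj5m -c controller --previous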
I will process the link I was given on the linkerd slack and update this post with any new findings.

The solution was provided by user Axenow on the linkerd2 slack forum. The problem is that ingress-nginx cannot share a namespace with the services it provides ingress functionality to. In my case, all of them were in the default namespace.
To quote Axenow:
When you deploy nginx, by default it send traffic to the pod directly.
To fix it you have to make this configuration:
https://linkerd.io/2.12/tasks/using-ingress/#nginx
To elaborate, one has to update the values.yaml file of the downloaded ingress-nginx helm chart to make sure the following is true:
controller:
  replicaCount: 2
  service:
    externalTrafficPolicy: Cluster
  podAnnotations:
    linkerd.io/inject: enabled
And install the controller in a dedicated namespace:
helm upgrade --install --create-namespace --namespace ingress-nginx -f values.yaml ingress-nginx ingress-nginx/ingress-nginx
(Having uninstalled the previous installation, of course)
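Once the meshed controller is running in its own namespace, the same Top data can also be checked from the CLI (a minimal sketch, assuming the linkerd viz extension is installed; the deployment name is what the ingress-nginx chart creates for a release named ingress-nginx):
linkerd viz top deployment/ingress-nginx-controller -n ingress-nginx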

Related

Nginx Ingress Controller on Bare Metal expose problem

I am trying to deploy the nginx-ingress-controller on bare metal. I have:
4 nodes
10.0.76.201 - Node 1
10.0.76.202 - Node 2
10.0.76.203 - Node 3
10.0.76.204 - Node 4
4 workers
10.0.76.205 - Worker 1
10.0.76.206 - Worker 2
10.0.76.207 - Worker 3
10.0.76.214 - Worker 4
2 LBs
10.0.76.208 - LB 1
10.0.76.209 - Virtual IP (keepalived)
10.0.76.210 - LB 2
Everything is on bare metal, and the load balancer is located outside the cluster.
This is a simple HAProxy config that just checks port 80 on the worker IPs:
frontend kubernetes-frontends
    bind *:80
    mode tcp
    option tcplog
    default_backend kube

backend kube
    mode http
    balance roundrobin
    cookie lsn insert indirect nocache
    option http-server-close
    option forwardfor
    server node-1 10.0.76.205:80 maxconn 1000 check
    server node-2 10.0.76.206:80 maxconn 1000 check
    server node-3 10.0.76.207:80 maxconn 1000 check
    server node-4 10.0.76.214:80 maxconn 1000 check
I installed the nginx-ingress-controller using Helm and everything works fine:
NAME READY STATUS RESTARTS AGE
pod/ingress-nginx-admission-create-xb5rw 0/1 Completed 0 18m
pod/ingress-nginx-admission-patch-skt7t 0/1 Completed 2 18m
pod/ingress-nginx-controller-6dc865cd86-htrhs 1/1 Running 0 18m
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/ingress-nginx-controller NodePort 10.106.233.186 <none> 80:30659/TCP,443:32160/TCP 18m
service/ingress-nginx-controller-admission ClusterIP 10.102.132.131 <none> 443/TCP 18m
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/ingress-nginx-controller 1/1 1 1 18m
NAME DESIRED CURRENT READY AGE
replicaset.apps/ingress-nginx-controller-6dc865cd86 1 1 1 18m
NAME COMPLETIONS DURATION AGE
job.batch/ingress-nginx-admission-create 1/1 24s 18m
job.batch/ingress-nginx-admission-patch 1/1 34s 18m
I deployed nginx the simple way and it works fine:
kubectl create deploy nginx --image=nginx:1.18
kubectl scale deploy/nginx --replicas=6
kubectl expose deploy/nginx --type=NodePort --port=80
After that, I decided to create ingress.yaml:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: tektutor-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - host: "tektutor.training.org"
    http:
      paths:
      - pathType: Prefix
        path: "/nginx"
        backend:
          service:
            name: nginx
            port:
              number: 80
It works fine:
kubectl describe ingress tektutor-ingress
Name: tektutor-ingress
Labels: <none>
Namespace: default
Address: 10.0.76.214
Ingress Class: <none>
Default backend: <default>
Rules:
Host Path Backends
---- ---- --------
tektutor.training.org
/nginx nginx:80 (192.168.133.241:80,192.168.226.104:80,192.168.226.105:80 + 3 more...)
Annotations: kubernetes.io/ingress.class: nginx
nginx.ingress.kubernetes.io/rewrite-target: /
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal AddedOrUpdated 18m nginx-ingress-controller Configuration for default/tektutor-ingress was added or updated
Normal Sync 18m (x2 over 18m) nginx-ingress-controller Scheduled for sync
Everything works fine: when I curl any of the pod IPs (192.168.133.241:80, 192.168.226.104:80, 192.168.226.105:80 + 3 more...), it works.
Now I try to add a hosts entry:
10.0.76.201 tektutor.training.org
This is my master IP. Is it correct to add the master IP here? When I curl tektutor.training.org, it is not working.
Can you please explain what problem I am having with this last step?
Did I set the IP wrong? Or what? Thanks!
I hope I have written everything exhaustively.
I followed this tutorial: Medium Install nginx Ingress Controller
TL;DR
Put the values shown below in your haproxy backend config instead of the ones you've provided:
30659 instead of 80
32160 instead of 443 (if needed)
More explanation:
NodePort works on a certain set of ports (default: 30000-32767), and in this scenario it allocated:
30659 for your ingress-nginx-controller port 80.
32160 for your ingress-nginx-controller port 443.
This means that every request trying to hit your cluster from outside will need to contact these ports (30...).
You can read more about it by following official documentation:
Kubernetes.io: Docs: Concepts: Services
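Concretely, the backend section from the question would become something like this (the same config, only the server ports swapped to the NodePort):
backend kube
    mode http
    balance roundrobin
    cookie lsn insert indirect nocache
    option http-server-close
    option forwardfor
    server node-1 10.0.76.205:30659 maxconn 1000 check
    server node-2 10.0.76.206:30659 maxconn 1000 check
    server node-3 10.0.76.207:30659 maxconn 1000 check
    server node-4 10.0.76.214:30659 maxconn 1000 check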
A funny story that took 2 days :) In the Ingress I used the path /nginx, but I was not hitting it with something like:
http://tektutor.training.org/nginx
Thanks @Dawid Kruk who tried to help me :)!

Unable to get ClusterIP service url from minikube

I have created a ClusterIP service according to the configuration files below; however, I can't seem to get the URL from minikube for that service:
k create -f service-cluster-definition.yaml
➜ minikube service myapp-frontend --url
😿 service default/myapp-frontend has no node port
And if I try to add NodePort into the ports section of service-cluster-definition.yaml, it complains with an error that such a key is deprecated.
What am I missing or doing wrong?
service-cluster-definition.yaml
apiVersion: v1
kind: Service
metadata:
  name: myapp-frontend
spec:
  type: ClusterIP
  ports:
  - targetPort: 80
    port: 80
  selector:
    app: myapp
    type: etl
deployment-definition.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-deployment
  labels:
    app: myapp
    env: experiment
    type: etl
spec:
  template:
    metadata:
      name: myapp-pod
      labels:
        app: myapp
        env: experiment
        type: etl
    spec:
      containers:
      - name: nginx-container
        image: nginx:1.7.1
  replicas: 3
  selector:
    matchLabels:
      type: etl
➜ k get pods --selector="app=myapp,type=etl" -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
myapp-deployment-59856c4487-2g9c7 1/1 Running 0 45m 172.17.0.9 minikube <none> <none>
myapp-deployment-59856c4487-mb28z 1/1 Running 0 45m 172.17.0.4 minikube <none> <none>
myapp-deployment-59856c4487-sqxqg 1/1 Running 0 45m 172.17.0.8 minikube <none> <none>
(⎈ |minikube:default)
Projects/experiments/kubernetes
➜ k version
Client Version: version.Info{Major:"1", Minor:"17", GitVersion:"v1.17.3", GitCommit:"06ad960bfd03b39c8310aaf92d1e7c12ce618213", GitTreeState:"clean", BuildDate:"2020-02-11T18:14:22Z", GoVersion:"go1.13.6", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"17", GitVersion:"v1.17.3", GitCommit:"06ad960bfd03b39c8310aaf92d1e7c12ce618213", GitTreeState:"clean", BuildDate:"2020-02-11T18:07:13Z", GoVersion:"go1.13.6", Compiler:"gc", Platform:"linux/amd64"}
(⎈ |minikube:default)
First, let's clear up some concepts from the documentation:
ClusterIP: Exposes the Service on a cluster-internal IP.
Choosing this value makes the Service only reachable from within the cluster.
NodePort: Exposes the Service on each Node’s IP at a static port (the NodePort).
You’ll be able to contact the NodePort Service, from outside the cluster, by requesting NodeIP:NodePort.
Question 1:
I have created a ClusterIP service according to configuration files below, however I can't seem to get the URL from minikube for that service.
Since Minikube is a virtualized environment on a single host, we tend to forget that the cluster is isolated from the host computer. If you set a service as ClusterIP, Minikube will not give it external access.
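To convince yourself that the ClusterIP service itself is healthy, you can still curl it from inside the cluster (a sketch; substitute the ClusterIP that kubectl get svc myapp-frontend reports):
minikube ssh
# inside the minikube VM; the placeholder below is whatever ClusterIP your service got
curl http://<cluster-ip-of-myapp-frontend>:80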
Question 2:
And if I try to add NodePort into the ports section of service-cluster-definition.yaml it complains with error, that such key is deprecated.
Maybe you were pasting it in the wrong position. You should just substitute the field type: ClusterIP with type: NodePort. Here is the correct form of your yaml:
apiVersion: v1
kind: Service
metadata:
  name: myapp-frontend
spec:
  type: NodePort
  ports:
  - targetPort: 80
    port: 80
  selector:
    app: myapp
    type: etl
Reproduction:
user@minikube:~$ kubectl apply -f deployment-definition.yaml
deployment.apps/myapp-deployment created
user@minikube:~$ kubectl get pods
NAME READY STATUS RESTARTS AGE
myapp-deployment-59856c4487-7dw6x 1/1 Running 0 5m11s
myapp-deployment-59856c4487-th7ff 1/1 Running 0 5m11s
myapp-deployment-59856c4487-zvm5f 1/1 Running 0 5m11s
user@minikube:~$ kubectl apply -f service-cluster-definition.yaml
service/myapp-frontend created
user@minikube:~$ kubectl get service myapp-frontend
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
myapp-frontend NodePort 10.101.156.113 <none> 80:32420/TCP 3m43s
user@minikube:~$ minikube service list
|-------------|----------------|-----------------------------|-----|
| NAMESPACE | NAME | TARGET PORT | URL |
|-------------|----------------|-----------------------------|-----|
| default | kubernetes | No node port | |
| default | myapp-frontend | http://192.168.39.219:32420 | |
| kube-system | kube-dns | No node port | |
|-------------|----------------|-----------------------------|-----|
user@minikube:~$ minikube service myapp-frontend --url
http://192.168.39.219:32420
user@minikube:~$ curl http://192.168.39.219:32420
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
...{{output suppressed}}...
As you can see, with the service set as NodePort, minikube starts serving the app at MinikubeIP:NodePort, routing the connection to the matching pods.
Note that the NodePort is chosen by default from the range 30000-32767.
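If you need a predictable port instead of a random one from that range, you can pin it yourself in the service spec (a sketch; 30080 is an arbitrary example value):
ports:
- targetPort: 80
  port: 80
  nodePort: 30080  # must be inside the node port range (default 30000-32767)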
If you have any question let me know in the comments.
To access the service from inside the cluster, do kubectl get svc to get the cluster IP, or use the service name directly.
To access it from outside the cluster, you can use NodePort as the service type.
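If you prefer to keep the service as ClusterIP, kubectl port-forward is another option for reaching it from the host (a sketch using the service name from the question):
# forward local port 8080 to port 80 of the service, then open http://localhost:8080
kubectl port-forward service/myapp-frontend 8080:80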

Why can't my service pass traffic to a pod with a named port on minikube?

I'm having trouble with the examples in section 5.1.1, Using Named Ports, of Kubernetes in Action by Marko Luksa. The example goes like this:
First - Create
I'm creating a pod with a named port that runs a Node.js container that responds with You've hit <hostname> when it's hit:
apiVersion: v1
kind: Pod
metadata:
  name: named-port-pod
  labels:
    app: named-port
spec:
  containers:
  - name: kubia
    image: michaellundquist/kubia
    ports:
    - name: http
      containerPort: 8080
And a service like this (note: this is a simplified version of the original example, which also doesn't work):
apiVersion: v1
kind: Service
metadata:
  name: named-port-service
spec:
  ports:
  - name: http
    port: 80
    targetPort: http
  selector:
    app: named-port
Second - Verify
$ kubectl get po -o wide --show-labels
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES LABELS
named-port-pod 1/1 Running 0 45m 172.17.0.7 minikube <none> <none> app=named-port
$ kubectl get services
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 53m
named-port-service ClusterIP 10.96.115.108 <none> 80/TCP 19m
$ kubectl describe service named-port-service
Name: named-port-service
Namespace: default
Labels: <none>
Annotations: <none>
Selector: app=named-port
Type: ClusterIP
IP: 10.96.115.108
Port: http 80/TCP
TargetPort: http/TCP
Endpoints: 172.17.0.7:8080
Session Affinity: None
Events: <none>
Third - Test (Failing)
$ kubectl exec named-port-pod -- curl named-port-pod:8080
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 26 0 26 0 0 5494 0 --:--:-- --:--:-- --:--:-- 6500
You've hit named-port-pod
$ kubectl exec named-port-pod -- curl --max-time 20 named-port-service
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
0 0 0 0 0 0 0 0 --:--:-- 0:00:19 --:--:-- 0curl: (28) Connection timed out after 20001 milliseconds
command terminated with exit code 28
As you can see, everything works when I hit named-port-pod:8080, but it fails when I hit named-port-service. I'm pretty sure I have the mapping correct, because kubectl describe service named-port-service shows the correct endpoint. I think minikube can use named ports, but my service can't pass connections to my pod. Why?
P.S. Here's my minikube version:
$ minikube version
minikube version: v1.6.2
commit: 54f28ac5d3a815d1196cd5d57d707439ee4bb392
This is a known issue with minikube: a pod cannot reach itself via its service IP. You can try accessing your service from a different pod, or use the following workaround to fix this:
minikube ssh
sudo ip link set docker0 promisc on
Open issue: https://github.com/kubernetes/minikube/issues/1568
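A quick way to test the service from a different pod, as suggested above, is a throwaway busybox pod (a sketch; the service name is from the question):
kubectl run tmp --rm -it --restart=Never --image=busybox -- wget -qO- http://named-port-service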

Kubernetes nginx Ingress configuration not working for Grafana

I am new to configuring Ingress rules for my Kubernetes cluster.
My Kubernetes cluster is deployed on Bare Metal. No cloud.
I followed this link to set up my nginx-controller with RBAC in my cluster.
This is what I have deployed:
# kubectl get all -n ingress-nginx
NAME READY STATUS RESTARTS AGE
pod/default-http-backend-7c5bc89cc9-ks6kd 1/1 Running 0 2h
pod/nginx-ingress-controller-5b6864749-8xbhf 1/1 Running 0 2h
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/default-http-backend ClusterIP 10.233.15.56 <none> 80/TCP 2h
service/ingress-nginx NodePort 10.233.38.84 <none> 80:31118/TCP,443:32003/TCP 2h
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
deployment.apps/default-http-backend 1 1 1 1 2h
deployment.apps/nginx-ingress-controller 1 1 1 1 2h
NAME DESIRED CURRENT READY AGE
replicaset.apps/default-http-backend-7c5bc89cc9 1 1 1 2h
replicaset.apps/nginx-ingress-controller-5b6864749 1 1 1 2h
Given that I have my setup, I want to access my grafana dashboard using a URL.
My grafana setup is working perfectly fine.
# kubectl get all -n default
NAME READY STATUS RESTARTS AGE
pod/grafana-67c6585fbd-4jl7p 1/1 Running 0 2h
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/grafana NodePort 10.233.5.111 <none> 3000:32093/TCP 2h
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
deployment.apps/grafana 1 1 1 1 2h
NAME DESIRED CURRENT READY AGE
replicaset.apps/grafana-67c6585fbd 1 1 1 2h
I can access the dashboard using http://10.27.239.145:32093 which is the IP of one of my K8S worker nodes.
Now rather than accessing via IP:NodePort, I want to access via URL e.g. grafana.test.mydomain.com
So the ingress rule that I configured in my default namespace is:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: nginx
  creationTimestamp: 2018-09-25T20:32:24Z
  generation: 5
  name: grafana
  namespace: default
  resourceVersion: "28485"
  selfLink: /apis/extensions/v1beta1/namespaces/default/ingresses/jenkins-tls
  uid: 1c51cece-c102-11e8-bf0f-02000a1bef39
spec:
  rules:
  - host: grafana.test.mydomain.com
    http:
      paths:
      - backend:
          serviceName: grafana
          servicePort: 3000
        path: /
On my local laptop, from where I am testing, I've added the following entry to my /etc/hosts:
10.27.239.145 grafana.test.mydomain.com
And in my browser, I am trying to access http://grafana.test.mydomain.com, but I only get:
This site can’t be reached. grafana.test.mydomain.com refused to connect.
I have a strong feeling that I am missing out on something but can't figure it out.
I changed the NodePort to ClusterIP, but no luck.
I know that my ingress controller is working, since every time I make a change to my ingress rules I get logs from my ingress controller:
I0925 21:00:19.041440 9 event.go:221] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"default", Name:"grafana", UID:"1c51cece-c102-11e8-bf0f-02000a1bef39", APIVersion:"extensions/v1beta1", ResourceVersion:"28485", FieldPath:""}): type: 'Normal' reason: 'UPDATE' Ingress default/grafana
I0925 21:00:19.041732 9 controller.go:171] Configuration changes detected, backend reload required.
I0925 21:00:19.216044 9 controller.go:187] Backend successfully reloaded.
I0925 21:00:19.217645 9 controller.go:204] Dynamic reconfiguration succeeded.
Any help regarding what I might have missed will be strongly appreciated.
From what I see, you need to set grafana.test.mydomain.com to point to 10.233.38.84.
Basically, your nginx controller service directs the traffic to your ingress, and then your ingress forwards it to the backend on the nodePort (this is implicit in the ingress). It works for me, but I'm using an AWS ELB: I basically set grafana.test.mydomain.com to point to aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa-0000000000.us-west-2.elb.amazonaws.com
$ kubectl get all -n ingress-nginx
NAME READY STATUS RESTARTS AGE
pod/default-http-backend-6586bc58b6-snxbv 1/1 Running 0 1h
pod/grafana-5b969bb7f9-tsv5k 1/1 Running 0 52m
pod/nginx-ingress-controller-6bd7c597cb-lfwcf 1/1 Running 0 1h
pod/prometheus-server-5dbf9f4fc9-mnwn4 1/1 Running 0 53m
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/default-http-backend ClusterIP 10.x.x.x <none> 80/TCP 1h
service/grafana NodePort 10.x.x.x <none> 3000:30073/TCP 52m
service/ingress-nginx LoadBalancer 10.x.x.x aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa-0000000000.us-west-2.elb.amazonaws.com 80:30276/TCP,443:32011/TCP 1h
service/prometheus-server NodePort 10.x.x.x <none> 9090:32419/TCP 53m
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
deployment.apps/default-http-backend 1 1 1 1 1h
deployment.apps/grafana 1 1 1 1 52m
deployment.apps/nginx-ingress-controller 1 1 1 1 1h
deployment.apps/prometheus-server 1 1 1 1 53m
NAME DESIRED CURRENT READY AGE
replicaset.apps/default-http-backend-6586bc58b6 1 1 1 1h
replicaset.apps/grafana-5b969bb7f9 1 1 1 52m
replicaset.apps/nginx-ingress-controller-6bd7c597cb 1 1 1 1h
replicaset.apps/prometheus-server-5dbf9f4fc9 1 1 1 53m
$ kubectl describe ingress grafana-ingress -n ingress-nginx
Name: grafana-ingress
Namespace: ingress-nginx
Address: aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa-0000000000.us-west-2.elb.amazonaws.com
Default backend: default-http-backend:80 (<none>)
Rules:
Host Path Backends
---- ---- --------
grafana.test.mydomain.com
/ grafana:3000 (<none>)
Annotations:
kubectl.kubernetes.io/last-applied-configuration: {"apiVersion":"extensions/v1beta1","kind":"Ingress","metadata":{"annotations":{"nginx.ingress.kubernetes.io/rewrite-target":"/"},"name":"grafana-ingress","namespace":"ingress-nginx"},"spec":{"rules":[{"http":{"paths":[{"backend":{"serviceName":"grafana","servicePort":3000},"path":"/"}]}}]}}
nginx.ingress.kubernetes.io/rewrite-target: /
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal CREATE 40m nginx-ingress-controller Ingress ingress-nginx/grafana-ingress
Normal UPDATE 22m (x2 over 40m) nginx-ingress-controller Ingress ingress-nginx/grafana-ingress
As far as I can see, you only have a NodePort Service on port 32093.
Your NodePort publishes port 3000 as 32093 on every node's external address, as you have already proven, but you configured the Ingress to contact port 3000 on the grafana service.
Either add targetPort, port and nodePort to the service for your Grafana instance, point targetPort and port to 3000, and leave nodePort empty or set it to 32093. Then the ingress should work as you posted. Snippet:
- nodePort: 32093
  port: 3000
  protocol: TCP
  targetPort: 3000
Or try setting servicePort in your ingress configuration to 32093 instead of 3000. Warning: I never tested this; I do not know if Ingress supports that. According to the documentation it should, as NodePort is a superset of ClusterIP:
NodePort: Exposes the service on each Node’s IP at a static port (the NodePort). A ClusterIP service, to which the NodePort service will route, is automatically created. You’ll be able to contact the NodePort service, from outside the cluster, by requesting <NodeIP>:<NodePort>.
Edit
Btw: http://grafana.test.mydomain.com:32093 should then already work with your configuration (NodePort)
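You can also test the ingress rule itself against the controller's own NodePort with an explicit Host header (a sketch built from the question's service listing, where ingress-nginx publishes port 80 on 31118, and the worker node IP 10.27.239.145):
curl -H "Host: grafana.test.mydomain.com" http://10.27.239.145:31118/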

installing nginx-ingress on Kubernetes to run on localhost MacOs - Docker for Mac(Edge)

Update:
I got the NodePort to work: kubectl get services
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 7d
my-release-nginx-ingress-controller NodePort 10.105.64.135 <none> 80:32706/TCP,443:32253/TCP 10m
my-release-nginx-ingress-default-backend ClusterIP 10.98.230.24 <none> 80/TCP 10m
Do I port-forward then?
Installing Ingress using Helm on Docker for Mac(Edge with Kubernetes)
https://github.com/kubernetes/charts/tree/master/stable/nginx-ingress
Will this work on localhost - and if so, how do I access a service?
Steps:
helm install stable/nginx-ingress
Output:
NAME: washing-jackal
LAST DEPLOYED: Thu Jan 18 12:57:40 2018
NAMESPACE: default
STATUS: DEPLOYED
RESOURCES:
==> v1/ConfigMap
NAME DATA AGE
washing-jackal-nginx-ingress-controller 1 1s
==> v1/Service
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
washing-jackal-nginx-ingress-controller LoadBalancer 10.105.122.1 <pending> 80:31494/TCP,443:32136/TCP 1s
washing-jackal-nginx-ingress-default-backend ClusterIP 10.103.189.14 <none> 80/TCP 1s
==> v1beta1/Deployment
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
washing-jackal-nginx-ingress-controller 1 1 1 0 0s
washing-jackal-nginx-ingress-default-backend 1 1 1 0 0s
==> v1/Pod(related)
NAME READY STATUS RESTARTS AGE
washing-jackal-nginx-ingress-controller-5b4d86c948-xxlrt 0/1 ContainerCreating 0 0s
washing-jackal-nginx-ingress-default-backend-57947f94c6-h4sz6 0/1 ContainerCreating 0 0s
NOTES:
The nginx-ingress controller has been installed.
It may take a few minutes for the LoadBalancer IP to be available.
You can watch the status by running 'kubectl --namespace default get services -o wide -w washing-jackal-nginx-ingress-controller'
An example Ingress that makes use of the controller:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: nginx
  name: example
  namespace: foo
spec:
  rules:
  - host: www.example.com
    http:
      paths:
      - backend:
          serviceName: exampleService
          servicePort: 80
        path: /
  # This section is only required if TLS is to be enabled for the Ingress
  tls:
  - hosts:
    - www.example.com
    secretName: example-tls
If TLS is enabled for the Ingress, a Secret containing the certificate and key must also be provided:
apiVersion: v1
kind: Secret
metadata:
  name: example-tls
  namespace: foo
data:
  tls.crt: <base64 encoded cert>
  tls.key: <base64 encoded key>
type: kubernetes.io/tls
As far as I can tell from the output you posted, everything should be running smoothly in your local kubernetes cluster.
However, your ingress controller is exposed using a LoadBalancer Service as you can tell from the following portion of the output you posted:
==> v1/Service
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
washing-jackal-nginx-ingress-controller LoadBalancer 10.105.122.1 <pending> 80:31494/TCP,443:32136/TCP 1s
Services of type LoadBalancer require support from the underlying infrastructure, and will not work in your local environment.
However, a LoadBalancer service is also a NodePort Service. In fact you can see in the above snippet of output that your ingress controller is listening to the following ports:
80:31494/TCP,443:32136/TCP
This means you should be able to reach your ingress controller on ports 31494 and 32136 at your node's IP address.
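On Docker for Mac the node is your local machine, so a quick test of the example ingress would look like this (a sketch; the Host header must match the ingress rule):
curl -H "Host: www.example.com" http://localhost:31494/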
You could make your ingress controller listen to more standard ports, such as 80 and 443, but you'll probably have to edit manually the resources created by the helm chart to do so.
To see the app running, go to localhost:port to reach your service if you're on minikube, or use your computer's name instead of localhost if you're on a different computer in your network. If you're using VMs, use the node VM's IP in the browser instead of localhost.
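And to answer the port-forward question in the update: yes, forwarding to the controller service is one way to reach it on a predictable local port without editing the chart (a sketch using the release name from the update):
kubectl port-forward service/my-release-nginx-ingress-controller 8080:80
# then browse to http://localhost:8080, sending the Host header your ingress expects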