Istio Ingress resulting in "no healthy upstream" - kubernetes

I am deploying an outward-facing service that is exposed behind a NodePort and then an Istio ingress. The deployment uses manual sidecar injection. Once the deployment, NodePort and ingress are running, I can make a request to the Istio ingress.
For some unknown reason, the request does not route through to my deployment and instead displays the text "no healthy upstream". Why is this, and what is causing it?
I can see in the HTTP response that the status code is 503 (Service Unavailable) and the server is "envoy". The deployment itself is functioning: I can port-forward to it and everything works as expected.
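For reference, the port-forward check I mean looks roughly like this (the deployment name is a placeholder, not the real one):
# bypass the mesh/ingress and talk to the workload directly
kubectl port-forward deployment/my-deployment 8080:80
curl http://localhost:8080/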

Just in case you get curious, like me... even though in my scenario the cause of the error was clear...
Error cause: I had two versions of the same service (v1 and v2), and an Istio VirtualService configured with weighted traffic routing: 95% of traffic goes to v1 and 5% to v2. Since I had not deployed v1 (yet), the error "503 - no healthy upstream" naturally showed up for 95% of the requests.
Even so, although I knew the problem and how to fix it (just deploy v1), I kept wondering: how can I get more information about this error? How could I analyze it more deeply to find out what was actually happening?
Here is a way of investigating it using istioctl, Istio's configuration command-line utility:
# 1) Check the proxies status -->
$ istioctl proxy-status
# Result -->
NAME CDS LDS EDS RDS PILOT VERSION
...
teachstore-course-v1-74f965bd84-8lmnf.development SYNCED SYNCED SYNCED SYNCED istiod-86798869b8-bqw7c 1.5.0
...
...
# 2) Get the outbound cluster names from the JSON output of the proxy (the service with the problem) -->
$ istioctl proxy-config cluster teachstore-course-v1-74f965bd84-8lmnf.development --fqdn teachstore-student.development.svc.cluster.local -o json
# 2) If you have jq installed locally (extracts only what we need) -->
$ istioctl proxy-config cluster teachstore-course-v1-74f965bd84-8lmnf.development --fqdn teachstore-course.development.svc.cluster.local -o json | jq -r .[].name
# Result -->
outbound|80||teachstore-course.development.svc.cluster.local
inbound|80|9180-tcp|teachstore-course.development.svc.cluster.local
outbound|80|v1|teachstore-course.development.svc.cluster.local
outbound|80|v2|teachstore-course.development.svc.cluster.local
# 3) Check the endpoints of "outbound|80|v2|teachstore-course..." using v1 proxy -->
$ istioctl proxy-config endpoints teachstore-course-v1-74f965bd84-8lmnf.development --cluster "outbound|80|v2|teachstore-course.development.svc.cluster.local"
# Result (for v2, the 5% traffic route is OK: there are healthy targets) -->
ENDPOINT STATUS OUTLIER CHECK CLUSTER
172.17.0.28:9180 HEALTHY OK outbound|80|v2|teachstore-course.development.svc.cluster.local
172.17.0.29:9180 HEALTHY OK outbound|80|v2|teachstore-course.development.svc.cluster.local
# 4) However, for the v1 version "outbound|80|v1|teachstore-course..." -->
$ istioctl proxy-config endpoints teachstore-course-v1-74f965bd84-8lmnf.development --cluster "outbound|80|v1|teachstore-course.development.svc.cluster.local"
ENDPOINT STATUS OUTLIER CHECK CLUSTER
# Nothing! Empty, no Pods; that explains the "no healthy upstream" 95% of the time.
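As a follow-up check (the label names are assumptions based on the subset definition in the DestinationRule), you can confirm that no v1 Pods exist behind the Service, and re-check the endpoints once v1 is deployed:
# assumes the v1 subset selects pods labelled version=v1
kubectl -n development get pods -l app=teachstore-course,version=v1
# after deploying v1, this should no longer be empty
istioctl proxy-config endpoints teachstore-course-v1-74f965bd84-8lmnf.development --cluster "outbound|80|v1|teachstore-course.development.svc.cluster.local"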

Although this is a somewhat general error resulting from a routing issue within an improper Istio setup, I will provide a general piece of advice to anyone coming across the same issue.
In my case the issue was due to an incorrect route rule configuration: the native Kubernetes Services were functioning, but the Istio routing rules were configured incorrectly, so Istio could not route from the ingress into the service.
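For illustration, a minimal Gateway/VirtualService pair that routes ingress traffic to a plain Kubernetes Service might look like the sketch below (all names, hosts and ports are hypothetical). The important points are that the VirtualService lists the Gateway under gateways, that the hosts match, and that the destination host resolves to an existing Service in that namespace:
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: my-gateway
  namespace: my-namespace
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "myapp.example.com"
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: my-app
  namespace: my-namespace
spec:
  hosts:
  - "myapp.example.com"
  gateways:
  - my-gateway
  http:
  - route:
    - destination:
        host: my-app   # must match an existing Service in this namespace
        port:
          number: 80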

I faced the issue when my pod was in the ContainerCreating state, which resulted in the 503 error. Also, as @pegaldon explained, it can also occur due to an incorrect route configuration, or because no gateways were created by the user.

Delete the destinationrules.networking.istio.io resource
and recreate the virtualservice.networking.istio.io resource. The session below shows the steps; the Chinese curl output means "the heartbeat of service node 10.210.11.221 is normal!":
[root@10-20-10-110 ~]# curl http://dprovider.example.com:31400/dw/provider/beat
no healthy upstream[root@10-20-10-110 ~]#
[root@10-20-10-110 ~]# curl http://10.210.11.221:10100/dw/provider/beat
"该服务节点 10.210.11.221 心跳正常!"[root@10-20-10-110 ~]#
[root@10-20-10-110 ~]#
[root@10-20-10-110 ~]# cat /home/example_service_yaml/vs/dw-provider-service.yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: dw-provider-service
  namespace: example
spec:
  hosts:
  - "dprovider.example.com"
  gateways:
  - example-gateway
  http:
  - route:
    - destination:
        host: dw-provider-service
        port:
          number: 10100
        subset: "v1-0-0"
---
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: dw-provider-service
  namespace: example
spec:
  host: dw-provider-service
  subsets:
  - name: "v1-0-0"
    labels:
      version: 1.0.0
[root@10-20-10-110 ~]# vi /home/example_service_yaml/vs/dw-provider-service.yaml
[root@10-20-10-110 ~]# kubectl -n example get vs -o wide | grep dw
dw-collection-service [example-gateway] [dw.collection.example.com] 72d
dw-platform-service [example-gateway] [dplatform.example.com] 81d
dw-provider-service [example-gateway] [dprovider.example.com] 21m
dw-sync-service [example-gateway] [dw-sync-service dsync.example.com] 34d
[root@10-20-10-110 ~]# kubectl -n example delete vs dw-provider-service
virtualservice.networking.istio.io "dw-provider-service" deleted
[root@10-20-10-110 ~]# kubectl -n example delete d dw-provider-service
daemonsets.apps deniers.config.istio.io deployments.extensions dogstatsds.config.istio.io
daemonsets.extensions deployments.apps destinationrules.networking.istio.io
[root@10-20-10-110 ~]# kubectl -n example delete destinationrules.networking.istio.io dw-provider-service
destinationrule.networking.istio.io "dw-provider-service" deleted
[root@10-20-10-110 ~]# kubectl apply -f /home/example_service_yaml/vs/dw-provider-service.yaml
virtualservice.networking.istio.io/dw-provider-service created
[root@10-20-10-110 ~]# curl http://dprovider.example.com:31400/dw/provider/beat
"该服务节点 10.210.11.221 心跳正常!"[root@10-20-10-110 ~]#
[root@10-20-10-110 ~]#

From my experience, the "no healthy upstream" error can have different causes. Usually, Istio has received ingress traffic that should be forwarded (the client request, or Istio downstream), but the destination is unavailable (the Istio upstream / Kubernetes Service). This results in an HTTP 503 "no healthy upstream" error.
1.) Broken VirtualService definitions
If you have a destination in your VirtualService to which traffic should be routed, ensure that this destination exists (i.e. the hostname is correct and the service is reachable from this namespace).
2.) ImagePullBackOff / Terminating / Service is not available
Ensure your destination is available in general. Sometimes no Pod is available, so no upstream will be available either.
3.) ServiceEntry - the same destination in two entries, but with different DNS resolution rules
Check your namespace for ServiceEntry objects with:
kubectl -n <namespace> get serviceentry
If the result has more than one entry (multiple lines in one ServiceEntry object), check whether a destination address (e.g. foo.com) appears in several lines.
If the same destination address (e.g. foo.com) appears in several lines, ensure that the "DNS" column does not show different resolution settings (e.g. one line uses DNS, the other uses NONE). If it does, this is an indicator that you are trying to apply different DNS settings to the same destination address.
The solution is either:
a) to unify the DNS setting, i.e. set all lines to NONE or all to DNS, but do not mix them, or
b) to ensure the destination (foo.com) appears in only one line, so no collision of different DNS rules occurs (a minimal ServiceEntry sketch follows below).
Option a) involves restarting the istio-ingressgateway pods (data plane) to make it work.
Option b) involves no restart of the Istio data plane or control plane.
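As a minimal sketch of such a single, unambiguous entry (the host, namespace and port are placeholders, not taken from the original question), one ServiceEntry with exactly one resolution setting could look like this:
apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
  name: external-foo
  namespace: my-namespace
spec:
  hosts:
  - foo.com
  location: MESH_EXTERNAL
  ports:
  - number: 443
    name: https
    protocol: TLS
  resolution: DNS   # exactly one resolution setting; do not mix DNS and NONE for the same host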
Basically: it helps to check the status between the control plane (istiod) and the data plane (istio-ingressgateway) with
istioctl proxy-status
The output of istioctl proxy-status should show "SYNCED" in all columns; this confirms that the control plane and the data plane are in sync. If not, you can restart the istio-ingressgateway deployment or the istiod deployment to force "fresh" processes.
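If you need to do that, the rollout restarts could look like this (assuming the default installation in the istio-system namespace):
kubectl -n istio-system rollout restart deployment istio-ingressgateway
kubectl -n istio-system rollout restart deployment istiod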
Further, it helped to run
istioctl analyze -A
to ensure that the destinations referenced in VirtualService definitions are checked and do exist. If a VirtualService has routing definitions whose destination is unavailable, istioctl analyze -A can detect these unavailable destinations.
Furthermore, reading the log files of the istiod container helps. The istiod error messages often indicate the context of the routing error (which namespace, service or Istio setting is involved). You can use the usual way:
kubectl -n istio-system logs <nameOfIstioDPod>
References:
https://istio.io/latest/docs/reference/config/networking/service-entry/
https://istio.io/latest/docs/reference/config/networking/virtual-service/
https://istio.io/latest/docs/ops/diagnostic-tools/proxy-cmd/

Related

K8S api cloud.google.com not available in GKE v1.16.13-gke.401

I am trying to create a BackendConfig resource on a GKE cluster v1.16.13-gke.401 but it gives me the following error:
unable to recognize "backendconfig.yaml": no matches for kind "BackendConfig" in version "cloud.google.com/v1"
I have checked the available apis with the kubectl api-versions command and cloud.google.com is not available. How can I enable it?
I want to create a BackendConfig with a custom health check like this:
apiVersion: cloud.google.com/v1
kind: BackendConfig
metadata:
  name: my-backendconfig
spec:
  healthCheck:
    checkIntervalSec: 8
    timeoutSec: 1
    healthyThreshold: 1
    unhealthyThreshold: 3
    type: HTTP
    requestPath: /health
    port: 10257
And attach this BackendConfig to a Service like this:
apiVersion: v1
kind: Service
metadata:
  annotations:
    cloud.google.com/backend-config: '{"default": "my-backendconfig"}'
As mentioned in the comments, the issue was caused by the lack of the HTTP Load Balancing add-on in your cluster.
When you create a GKE cluster with all default settings, features like HTTP Load Balancing are enabled.
The HTTP Load Balancing add-on is required to use the Google Cloud Load Balancer with Kubernetes Ingress. If enabled, a controller will be installed to coordinate applying load balancing configuration changes to your GCP project
More details can be found in GKE documentation.
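If you want to check whether the add-on is currently enabled on an existing cluster, a describe call like the one below can help (the cluster name and zone are placeholders, and the exact field path is my assumption based on the cluster API):
gcloud container clusters describe cluster-1 --zone us-central1-c --format="value(addonsConfig.httpLoadBalancing)"
# output indicating "disabled" means the add-on is turned off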
As a test, I created Cluster-1 without the HTTP Load Balancing add-on. There was no BackendConfig CRD (Custom Resource Definition).
The CustomResourceDefinition API resource allows you to define custom resources. Defining a CRD object creates a new custom resource with a name and schema that you specify. The Kubernetes API serves and handles the storage of your custom resource. The name of a CRD object must be a valid DNS subdomain name.
Without the BackendConfig CRD and without the cloud.google.com API versions, as shown below,
user@cloudshell:~ (k8s-tests-XXX)$ kubectl get crd | grep backend
user@cloudshell:~ (k8s-tests-XXX)$ kubectl api-versions | grep cloud
I was not able to create any BackendConfig:
user@cloudshell:~ (k8s-tests-XXX)$ kubectl apply -f bck.yaml
error: unable to recognize "bck.yaml": no matches for kind "BackendConfig" in version "cloud.google.com/v1"
To make it work, you have to enable HTTP Load Balancing. You can do it via the UI or via a command.
Using the UI:
Navigation Menu > Clusters > [Cluster-Name] > Details > Click on
Edit > Scroll down to Add-ons and expand > Find HTTP load balancing and change it from Disabled to Enabled.
Or via command:
gcloud beta container clusters update <clustername> --update-addons=HttpLoadBalancing=ENABLED --zone=<your-zone>
$ gcloud beta container clusters update cluster-1 --update-addons=HttpLoadBalancing=ENABLED --zone=us-central1-c
WARNING: Warning: basic authentication is deprecated, and will be removed in GKE control plane versions 1.19 and newer. For a list of recommended authentication methods, see: https://cloud.google.com/kubernetes-engine/docs/how-to/api-server-authentication
After a while, when the add-on was enabled:
$ kubectl get crd | grep backend
backendconfigs.cloud.google.com 2020-10-23T13:09:29Z
$ kubectl api-versions | grep cloud
cloud.google.com/v1
cloud.google.com/v1beta1
$ kubectl apply -f bck.yaml
backendconfig.cloud.google.com/my-backendconfig created

Kubernetes ingress for dynamic URL

I am developing an application that allows users to play around in their own sandboxes with a finite life time. Conceptually, it can be thought as if users were playing games of Pong. Users can interact with a web interface hosted at main/ to start a game of Pong. Each game of Pong will exist in its own pod. Since each game has a finite lifetime, the pods are dynamically created on-demand (through the Kubernetes API) as Kubernetes jobs with a single pod. There is therefore a one-to-one relationship between games of Pong and pods. Up to this point I have it all figured out.
My problem is, how do I set up an ingress to map dynamically created URLs, for example main/game1, to the corresponding pods? That is, if a user starts a game through the main interface, I would like him to be redirected to the URL of the corresponding pod where his game is hosted.
I could pre-allocate a set of URLs, check if they have active jobs, and redirect if they do not, but that does not scale well. I am thinking that dynamically assigning URLs is a common pattern in Kubernetes, so there must be a standard way to do this. I have looked at using nginx-ingress, but that is not a requirement.
Further to the comment, I created a little demo for you on minikube with a working Ingress controller (enabled via minikube addons enable ingress).
Replicate multiple Deployments to simulate the games:
kubectl create deployment deployment-1 --image=nginxdemos/hello
kubectl create deployment deployment-2 --image=nginxdemos/hello
kubectl create deployment deployment-3 --image=nginxdemos/hello
kubectl create deployment deployment-4 --image=nginxdemos/hello
kubectl create deployment deployment-5 --image=nginxdemos/hello
The same for the Service resources:
kubectl create service clusterip deployment-1 --tcp=80:80
kubectl create service clusterip deployment-2 --tcp=80:80
kubectl create service clusterip deployment-3 --tcp=80:80
kubectl create service clusterip deployment-4 --tcp=80:80
kubectl create service clusterip deployment-5 --tcp=80:80
Finally, it's time for the Ingress resources, but we have to be a bit hacky here since there is no create subcommand available for Ingress:
for number in `seq 5`; do echo "
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: deployment-$number
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /$1
spec:
  rules:
  - host: hello-world.info
    http:
      paths:
      - path: /game$number
        backend:
          serviceName: deployment-$number
          servicePort: 80
" | kubectl create -f -; done
Now you have Pods, Services and Ingresses: obviously, you have to replicate the same result using the Kubernetes API but, as I suggested in the comment, you should rather create a single Ingress resource and update its paths dynamically.
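For example, a sketch of adding a new path to a single existing Ingress with a JSON patch could look like this (the Ingress name, the new path and its backend Service are hypothetical; the schema matches the v1beta1 resources used above):
# append a route for a freshly created game to an existing Ingress named games-ingress
kubectl patch ingress games-ingress --type=json -p='[
  {"op": "add",
   "path": "/spec/rules/0/http/paths/-",
   "value": {"path": "/game6",
             "backend": {"serviceName": "deployment-6", "servicePort": 80}}}
]'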
However, if you simulate the call with cURL, faking the Host header, you can see the working result:
# curl `minikube ip`/game2 -sH 'Host: hello-world.info'|grep -i server
<p><span>Server address:</span> <span>172.17.0.5:80</span></p>
<p><span>Server name:</span> <span>deployment-2-5b98b954f6-8g5fl</span></p>
# curl `minikube ip`/game4 -sH 'Host: hello-world.info'|grep -i server
<p><span>Server address:</span> <span>172.17.0.7:80</span></p>
<p><span>Server name:</span> <span>deployment-4-767ff76774-d2fgj</span></p>
You can see the Pod IP and name as well.
I agree with Efrat Levitan. It's not a task for Ingress/Kubernetes itself.
You need another application (a different layer of abstraction) to decide where the traffic should be routed, for example Istio and its routing rules for HTTP traffic based on cookies.
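As a rough sketch of that idea (the host, gateway, cookie name and Service names below are hypothetical), an Istio VirtualService can route on a cookie value roughly like this:
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: games
spec:
  hosts:
  - games.example.com
  gateways:
  - games-gateway
  http:
  - match:
    - headers:
        cookie:
          regex: ".*game-id=game1.*"   # players of game1 go to that game's Service
    route:
    - destination:
        host: game1
  - route:
    - destination:
        host: lobby                    # default route when no game cookie matches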

Whitelisting IP addresses for network traffic through Istio gateways

I tried whitelisting IP addresses for my Kubernetes cluster's incoming traffic using this example:
Although this works as expected, I wanted to go a step further and see whether I can use Istio gateways or a VirtualService when I set up the Istio rule, instead of the LoadBalancer (ingressgateway).
apiVersion: config.istio.io/v1alpha2
kind: rule
metadata:
  name: checkip
  namespace: my-namespace
spec:
  match: source.labels["app"] == "my-app"
  actions:
  - handler: whitelistip.listchecker
    instances:
    - sourceip.listentry
---
Where my-app is of kind: Gateway with a certain host and port, and labelled app=my-app.
I am using Istio version 1.1.1.
Also, my cluster has all of istio-system running, with Envoy sidecars on almost all service pods.
You are confusing one thing: in the rule above, match: source.labels["app"] == "my-app" does not refer to any resource's labels, but to the Pod's labels.
From OutputTemplate Documentation:
sourceLabels | Refers to source pod labels.
attributebindings can refer to this field using $out.sourcelabels
And you can verify by looking for resources with "app=istio-ingressgateway" label via:
kubectl get pods,svc -n istio-system -l "app=istio-ingressgateway" --show-labels
You can check this blog post from Istio about the Mixer Adapter Model to understand the complete Mixer model and its handlers, instances and rules.
Hope it helps!

Getting a Kubernetes Ingress endpoint/IP address

Base OS : CentOS (1 master 2 minions)
K8S version : 1.9.5 (deployed using KubeSpray)
I am new to Kubernetes Ingress and am setting up 2 different services, each reachable with its own path.
I have created 2 deployments :
kubectl run nginx --image=nginx --port=80
kubectl run echoserver --image=gcr.io/google_containers/echoserver:1.4 --port=8080
I have also created their corresponding services :
kubectl expose deployment nginx --target-port=80 --type=NodePort
kubectl expose deployment echoserver --target-port=8080 --type=NodePort
My svc are :
[root@node1 kubernetes]# kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
echoserver NodePort 10.233.48.121 <none> 8080:31250/TCP 47m
nginx NodePort 10.233.44.54 <none> 80:32018/TCP 1h
My NodeIP address is 172.16.16.2 and I can access both pods using
http://172.16.16.2:31250 &
http://172.16.16.2:32018
Now on top of this I want to deploy an Ingress so that I can reach both pods not using 2 IPs and 2 different ports BUT 1 IP address with different paths.
So my Ingress file is:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: fanout-nginx-ingress
spec:
  rules:
  - http:
      paths:
      - path: /nginx
        backend:
          serviceName: nginx
          servicePort: 80
      - path: /echo
        backend:
          serviceName: echoserver
          servicePort: 8080
This yields :
[root@node1 kubernetes]# kubectl describe ing fanout-nginx-ingress
Name: fanout-nginx-ingress
Namespace: development
Address:
Default backend: default-http-backend:80 (<none>)
Rules:
Host Path Backends
---- ---- --------
*
/nginx nginx:80 (<none>)
/echo echoserver:8080 (<none>)
Annotations:
Events: <none>
Now when I try accessing the Pods using the NodeIP address (172.16.16.2), I get nothing.
http://172.16.16.2/echo
http://172.16.16.2/nginx
Is there something I have missed in my configs ?
I had the same issue on my bare-metal installation, or rather something close to it (a virtual Kubernetes cluster: a set of virtual machines connected via a Host-Only Adapter). Here is a link to my kubernetes vlab.
First of all, make sure that you have an ingress controller installed. Currently there are two ingress controllers worth trying, the kubernetes nginx ingress controller and the nginx kubernetes ingress controller; I installed the first one.
Installation
Go to the installation instructions and execute the first step:
# prerequisite-generic-deployment-command
$ kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/mandatory.yaml
Next get IP addresses of cluster nodes.
$ kubectl get nodes -o wide
NAME STATUS ROLES ... INTERNAL-IP
master Ready master ... 192.168.121.110
node01 Ready <none> ... 192.168.121.111
node02 Ready <none> ... 192.168.121.112
Further, create an ingress-nginx Service of type LoadBalancer. I did it by downloading the NodePort template service from the installation tutorial and making the following adjustments in the svc-ingress-nginx-lb.yaml file.
$ curl https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/provider/baremetal/service-nodeport.yaml > svc-ingress-nginx-lb.yaml
# my changes svc-ingress-nginx-lb.yaml
type: LoadBalancer
externalIPs:
- 192.168.121.110
- 192.168.121.111
- 192.168.121.112
externalTrafficPolicy: Local
# create the ingress-nginx service
$ kubectl apply -f svc-ingress-nginx-lb.yaml
Verification
Check that ingress-nginx service was created.
$ kubectl get svc -n ingress-nginx
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
ingress-nginx LoadBalancer 10.110.127.9 192.168.121.110,192.168.121.111,192.168.121.112 80:30284/TCP,443:31684/TCP 70m
Check that nginx-ingress-controller deployment was created.
$ kubectl get deploy -n ingress-nginx
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
nginx-ingress-controller 1 1 1 1 73m
Check that nginx-ingress pod is running.
$ kubectl get pods --all-namespaces -l app.kubernetes.io/name=ingress-nginx
NAMESPACE NAME READY STATUS RESTARTS AGE
ingress-nginx nginx-ingress-controller-5cd796c58c-lg6d4 1/1 Running 0 75m
Finally, check ingress controller version. Don't forget to change pod name!
$ kubectl exec -it nginx-ingress-controller-5cd796c58c-lg6d4 -n ingress-nginx -- /nginx-ingress-controller --version
-------------------------------------------------------------------------------
NGINX Ingress controller
Release: 0.21.0
Build: git-b65b85cd9
Repository: https://github.com/aledbf/ingress-nginx
-------------------------------------------------------------------------------
Testing
Test that the ingress controller is working by executing the steps in this tutorial; of course, you will omit the minikube part.
Successful execution of all the steps will create an Ingress resource that should look like this.
$ kubectl get ing
NAME HOSTS ADDRESS PORTS AGE
ingress-tutorial myminikube.info,cheeses.all 192.168.121.110,192.168.121.111,192.168.121.112 80 91m
And Pods that look like this.
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
cheddar-cheese-6f94c9dbfd-cll4z 1/1 Running 0 110m
echoserver-55dcfbf8c6-dwl6s 1/1 Running 0 104m
stilton-cheese-5f6bbdd7dd-8s8bf 1/1 Running 0 110m
Finally, test that request to myminikube.info propagates via ingress load balancer.
$ curl myminikube.info
CLIENT VALUES:
client_address=10.44.0.7
command=GET
real path=/
query=nil
request_version=1.1
request_uri=http://myminikube.info:8080/
SERVER VALUES:
server_version=nginx: 1.10.0 - lua: 10001
HEADERS RECEIVED:
accept=*/*
host=myminikube.info
user-agent=curl/7.29.0
x-forwarded-for=10.32.0.1
x-forwarded-host=myminikube.info
x-forwarded-port=80
x-forwarded-proto=http
x-original-uri=/
x-real-ip=10.32.0.1
x-request-id=b2fb3ee219507bfa12472c7d481d4b72
x-scheme=http
BODY:
It was a long journey to make Ingress work in a bare-metal-like environment. Thus, I will include the relevant links that helped me along.
reproducible tutorial
installation of minikube on ubuntu
ingress I
ingress II
digging
reverse engineering on ingress in kubernetes
Check if you have an ingress controller in your cluster:
$ kubectl get po --all-namespaces
You should see something like:
kube-system nginx-ingress-controller-gwts0 1/1 Running 0 18d
It's only possible to create an ingress to address services inside the namespace in which the Ingress resides.
Cross-namespace ingresses are not implemented for security reasons.
It seems that your cluster is missing an Ingress controller.
In general, an Ingress controller works as follows:
1. It searches for a certain type of object (ingress, "nginx") in the cluster.
2. It parses that object and creates a configuration section for a specific ingress pod.
3. It updates that pod object (restarts it with the updated configuration).
That particular pod is responsible for processing traffic from the incoming ports (usually a couple of dedicated ports on the nodes) to the configured traffic destinations in the cluster.
You can choose from two supported and maintained controllers - Nginx and GCE
The ingress controller consists of several components that you create during installation.
Here is installation part from Nginx Ingress documentation:
curl https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/namespace.yaml | kubectl apply -f -
curl https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/default-backend.yaml | kubectl apply -f -
curl https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/configmap.yaml | kubectl apply -f -
curl https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/tcp-services-configmap.yaml | kubectl apply -f -
curl https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/udp-services-configmap.yaml | kubectl apply -f -
If you have RBAC authorization configured in your cluster:
curl https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/rbac.yaml | kubectl apply -f -
curl https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/with-rbac.yaml | kubectl apply -f -
If no RBAC configured:
curl https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/without-rbac.yaml | kubectl apply -f -
In case you create cluster from scratch:
curl https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/provider/baremetal/service-nodeport.yaml | kubectl apply -f -
Verify your installation:
kubectl get pods --all-namespaces -l app=ingress-nginx --watch
You should see something like:
NAMESPACE NAME READY STATUS RESTARTS AGE
ingress-nginx nginx-ingress-controller-699cdf846-nj2rw 1/1 Running 0 1h
Check available services and their parameters:
kubectl get services --all-namespaces
If you are using custom service provider deployment (minikube, AWS, Azure, GKE), follow Nginx Ingress documentation for installation details.
See official Kubernetes Ingress documentation for details about Ingress.
I was using microk8s default nginx ingress controller on a later version of k8s (> 1.18) and I noticed this specific annotation was causing me an issue:
kubernetes.io/ingress.class: "nginx"
It's present in a lot of older documentation and examples, but it's apparently deprecated (see https://kubernetes.io/docs/concepts/services-networking/ingress/) and I had also defined an ingressClassName of "public" using the newer field. I'm not sure if it was the conflict between the two that caused the issue, but once I removed the deprecated annotation my address appeared.
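For reference, a sketch of the newer style on a networking.k8s.io/v1 Ingress (the resource and Service names are placeholders; on microk8s the built-in class is typically "public"):
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-ingress
  # no kubernetes.io/ingress.class annotation here; the field below replaces it
spec:
  ingressClassName: public
  rules:
  - host: my-app.local
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: my-service
            port:
              number: 80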
For your Ingress resource (fanout-nginx-ingress) to work, you first need to deploy an ingress controller, which by default does not come with a local Kubernetes cluster. You need to deploy it yourself.
There are many solutions out there and you can use any of them, but the nginx ingress controller is fine.
For detailed information you can refer to a great video on Ingress by Mumshad Mannambeth here:
https://www.youtube.com/watch?v=GhZi4DxaxxE

Grafana HTTP Error Bad Gateway and Templating init failed errors

I used Helm to install Prometheus and Grafana on a local minikube:
$ helm install stable/prometheus
$ helm install stable/grafana
The Prometheus server, Alertmanager and Grafana can run after setting up port-forwards:
$ export POD_NAME=$(kubectl get pods --namespace default -l "app=prometheus,component=server" -o jsonpath="{.items[0].metadata.name}")
$ kubectl --namespace default port-forward $POD_NAME 9090
$ export POD_NAME=$(kubectl get pods --namespace default -l "app=prometheus,component=alertmanager" -o jsonpath="{.items[0].metadata.name}")
$ kubectl --namespace default port-forward $POD_NAME 9093
$ export POD_NAME=$(kubectl get pods --namespace default -l "app=excited-crocodile-grafana,component=grafana" -o jsonpath="{.items[0].metadata.name}")
$ kubectl --namespace default port-forward $POD_NAME 3000
Adding a Data Source in Grafana gave an HTTP Error Bad Gateway error.
Import dashboard 315 from:
https://grafana.com/dashboards/315
Then, checking the Kubernetes cluster monitoring (via Prometheus) dashboard gave a Templating init failed error.
Why?
In the HTTP settings of Grafana you set Access to Proxy, which means that Grafana wants to access Prometheus. Since Kubernetes uses an overlay network, it is a different IP.
There are two ways of solving this:
Set Access to Direct, so the browser directly connects to Prometheus.
Use the Kubernetes-internal IP or domain name. I don't know about the Prometheus Helm-chart, but assuming there is a Service named prometheus, something like http://prometheus:9090 should work.
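To find out what the Service is actually called, you can list the Services and build the URL from the name, namespace and port (the label selector below is an assumption based on the chart labels used in the port-forward commands above):
# list candidate Prometheus services; the exact name depends on the Helm release name
kubectl get svc --namespace default -l "app=prometheus,component=server"
# then use http://<service-name>.<namespace>.svc.cluster.local:9090 as the Grafana data source URL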
I turned off the firewall on the appliance; after that, adding http://prometheus:9090 as the URL did not throw the bad gateway error.
I was never able to find a "proper" fix, but I found a workaround:
apiVersion: v1
kind: Service
metadata:
  labels:
    prometheus: k8s
  name: prometheus-k8s
  namespace: monitoring
spec:
  selector:
    app: prometheus
    prometheus: k8s
  sessionAffinity: ClientIP
  clusterIP: None
By setting clusterIP to None, the Service switches to "headless" mode, which means that requests are sent directly to a random one of the Pods backing that Service. More info here: https://kubernetes.io/docs/concepts/services-networking/service/#headless-services
There's probably a better solution, but this is the only one I've found that actually works for me, with kube-prometheus. (I've tried docker-desktop, k3d, and kind, and all of them have the same issue, so I doubt it's the emulator's fault; and I stripped my config down to basically just kube-prometheus, so it's hard to understand where the problem lies, but oh well.)
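If you want to verify the headless behaviour, a quick DNS lookup from inside the cluster should return the individual Pod IPs rather than a single ClusterIP (the service name matches the workaround above; treat this as a sketch):
# run a throwaway pod and resolve the headless service; expect one A record per Prometheus pod
kubectl run -it --rm dns-test --image=busybox:1.36 --restart=Never -- nslookup prometheus-k8s.monitoring.svc.cluster.local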