Traefik dashboard/web UI 404 when installed via Helm on a DigitalOcean single-node cluster - kubernetes

I am trying to set up Traefik as my ingress controller and load balancer on a single-node cluster (DigitalOcean). Following the official Traefik setup guide, I installed Traefik using Helm:
helm install --values values.yaml stable/traefik
# values.yaml
dashboard:
  enabled: true
  domain: traefik-ui.minikube
kubernetes:
  namespaces:
    - default
    - kube-system
# output
RESOURCES:
==> v1/Pod(related)
NAME                                   READY   STATUS              RESTARTS   AGE
operatic-emu-traefik-f5dbf4b8f-z9bzp   0/1     ContainerCreating   0          1s

==> v1/ConfigMap
NAME                   AGE
operatic-emu-traefik   1s

==> v1/Service
operatic-emu-traefik-dashboard   1s
operatic-emu-traefik             1s

==> v1/Deployment
operatic-emu-traefik   1s

==> v1beta1/Ingress
operatic-emu-traefik-dashboard   1s
Then I created the service exposing the Web UI
kubectl apply -f https://raw.githubusercontent.com/containous/traefik/master/examples/k8s/ui.yaml
Then I can clearly see my traefik pod running and an external-ip being assigned:
NAME                                      TYPE           CLUSTER-IP       EXTERNAL-IP     PORT(S)                      AGE
service/dashboard                         ClusterIP      10.245.156.214   <none>          443/TCP                      11d
service/kubernetes                        ClusterIP      10.245.0.1       <none>          443/TCP                      14d
service/operatic-emu-traefik              LoadBalancer   10.245.137.41    <external-ip>   80:31190/TCP,443:30207/TCP   5m7s
service/operatic-emu-traefik-dashboard    ClusterIP      10.245.8.156     <none>          80/TCP                       5m7s
Then opening http://external-ip/dashboard/ leads to "404 page not found".
I read a ton of answers and tutorials but keep missing something. Any help is highly appreciated.

I am writing this post as the information is a bit much to fit in a comment. After spending enough time on understanding how k8s and helm charts work, this is how I solved it:
Firstly, I missed the RBAC part: I did not create a ClusterRole and ClusterRoleBinding to authorise Traefik to use the K8S API (as I am using version 1.12). Hence, either I should have deployed the ClusterRole and ClusterRoleBinding manually, or added the following to my values.yaml:
rbac:
  enabled: true
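For reference, the manual alternative is roughly the ClusterRole/ClusterRoleBinding from the Traefik user guide (a sketch; with rbac.enabled=true the chart generates an equivalent set for the release's ServiceAccount):
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: traefik-ingress-controller
rules:
- apiGroups: [""]
  resources: ["services", "endpoints", "secrets"]
  verbs: ["get", "list", "watch"]
- apiGroups: ["extensions"]
  resources: ["ingresses"]
  verbs: ["get", "list", "watch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: traefik-ingress-controller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: traefik-ingress-controller
subjects:
- kind: ServiceAccount
  name: traefik-ingress-controller
  namespace: kube-system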
Secondly, I tried to access the dashboard UI via the IP directly, without realising that Traefik uses the hostname to route to its dashboard, as @Rico mentioned above (I am voting you up as you did provide helpful info, but I did not manage to connect all the pieces of the puzzle at that time). So, either edit your /etc/hosts file linking your hostname to the external-ip and then access the dashboard via the browser, or test that it is working with curl:
curl http://external-ip/dashboard/ -H 'Host: traefik-ui.minikube'
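Or, for the browser route, an /etc/hosts entry along these lines (a sketch; substitute your load balancer's real external IP):
# /etc/hosts
<external-ip>   traefik-ui.minikube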
To sum up, you should be able to install Traefik and access its dashboard UI by running:
helm install --values values.yaml stable/traefik
# values.yaml
dashboard:
  enabled: true
  domain: traefik-ui.minikube
rbac:
  enabled: true
kubernetes:
  namespaces:
    - default
    - kube-system
and then editing your hosts file and opening the hostname you chose.
Now the confusing part of the official traefik setup guide is the section named Submitting an Ingress to the Cluster, just below Deploy Traefik using Helm Chart, which instructs you to install a service and an ingress object in order to access the dashboard. This is unneeded, as the official stable/traefik helm chart provides both of them. You would only need that section if you installed traefik by deploying all the needed objects manually. However, for a person just starting out with k8s and helm, it reads as if that section must be completed after installing Traefik via the official stable/traefik chart.
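If you want to double-check what the chart already created for you, listing the release's objects should show both the dashboard Service and its Ingress (a sketch; filtering by name, since labels vary between chart versions):
kubectl get svc,ingress --all-namespaces | grep traefik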

I believe this is the same issue as this.
You either have to connect with the traefik-ui.minikube hostname or add a host entry on your Ingress definition like this:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  namespace: kube-system
  name: traefik-ingress
  annotations:
    kubernetes.io/ingress.class: traefik
spec:
  rules:
  - host: yourown.hostname.com
    http:
      paths:
      - path: /dashboard
        backend:
          serviceName: traefik-web-ui
          servicePort: web
You can check with:
$ kubectl -n kube-system get ingress

Related

Nginx Ingress: service "ingress-nginx-controller-admission" not found

We created a kubernetes cluster for a customer about one year ago, with two environments, staging and production, separated by namespaces. We are currently developing the next version of the application and need an environment for this development work, so we've created a beta environment in its own namespace.
This is a bare metal kubernetes cluster with MetalLB and nginx-ingress. The nginx ingress controller is installed with helm, and the ingresses are created with the following manifest (namespaces are enforced by our deployment pipeline and are not visible in the manifest):
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: api-ingress
  annotations:
    #ingress.kubernetes.io/ssl-redirect: "true"
    #kubernetes.io/tls-acme: "true"
    #certmanager.k8s.io/issuer: "letsencrypt-staging"
    #certmanager.k8s.io/acme-challenge-type: http01
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/configuration-snippet: |
      more_set_headers "X-Robots-Tag: noindex, nofollow";
    nginx.ingress.kubernetes.io/enable-cors: "true"
    nginx.ingress.kubernetes.io/cors-allow-methods: "GET, OPTIONS"
    nginx.ingress.kubernetes.io/cors-allow-origin: "*"
    nginx.ingress.kubernetes.io/cors-allow-credentials: "true"
spec:
  tls:
  - hosts:
    - ${API_DOMAIN}
    secretName: api-cert
  rules:
  - host: ${API_DOMAIN}
    http:
      paths:
      - backend:
          serviceName: api
          servicePort: 80
When applying the manifest kubernetes responds with the following error:
Error from server (InternalError): error when creating "STDIN": Internal error occurred: failed calling webhook "validate.nginx.ingress.kubernetes.io": Post https://ingress-nginx-controller-admission.ingress-nginx.svc:443/extensions/v1beta1/ingresses?timeout=30s: service "ingress-nginx-controller-admission" not found
I've attempted to update the apiVersion of the ingress manifest to networking.k8s.io/v1beta1 (this is the apiVersion the new nginx-ingress controllers are installed with via helm), but I'm getting the same error.
My initial suspicion is that this is related to a change in nginx-ingress between the current installation and the installation from one year ago, even though the ingress controllers are separated by namespaces. But I can't find any service called ingress-nginx-controller-admission in any of my namespaces, so I'm clueless about how to proceed.
I had the same problem and found a solution in another SO thread.
I had previously installed nginx-ingress using the manifests. I deleted the namespace it created, along with the clusterrole and clusterrolebinding, as noted in the documentation, but that does not remove the ValidatingWebhookConfiguration, which the manifests install but which is NOT installed by default when using helm. As Arghya noted above, it can be enabled using a helm parameter.
Once I deleted the ValidatingWebhookConfiguration, my helm installation went flawlessly.
kubectl delete -A ValidatingWebhookConfiguration ingress-nginx-admission
You can check whether the validating webhook and its service exist. If they don't, double-check the deployment and add them:
kubectl get -A ValidatingWebhookConfiguration
NAME                      CREATED AT
ingress-nginx-admission   2020-04-22T15:01:33Z

kubectl get svc -n ingress-nginx
NAME                                 TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)                      AGE
ingress-nginx-controller             NodePort    10.96.212.217   <none>        80:32268/TCP,443:32683/TCP   2m34s
ingress-nginx-controller-admission   ClusterIP   10.96.151.42    <none>        443/TCP                      2m34s
Deployment yamls here have the webhook and service.
Since you have used helm to install it, you can enable/disable the webhook via a helm parameter, as defined here.
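For example, something along these lines should skip creating the webhook at install time (a sketch: controller.admissionWebhooks.enabled is the chart value; the ingress-nginx repo alias is assumed to already be added):
helm install ingress-nginx ingress-nginx/ingress-nginx \
  --namespace ingress-nginx \
  --set controller.admissionWebhooks.enabled=false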
There seems to be some issue with the SSL cert in the webhook.
Changing failurePolicy: Fail to Ignore worked for me in the manifest:
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-0.32.0/deploy/static/provider/baremetal/deploy.yaml
for more info check:
https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/
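If you would rather patch the live object than re-apply the whole manifest, a JSON patch along these lines should flip the policy (a sketch; it assumes the webhook sits at index 0):
kubectl patch validatingwebhookconfiguration ingress-nginx-admission \
  --type=json \
  -p='[{"op": "replace", "path": "/webhooks/0/failurePolicy", "value": "Ignore"}]'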
My problem proved to be an SSL cert issue; after I deleted the ValidatingWebhookConfiguration, the issue was resolved.
For me the issue was with Kubernetes version 1.18; I upgraded to 1.19.1 and it worked just fine.
Pod status:
k get pods -n ingress-nginx
NAME                                        READY   STATUS             RESTARTS   AGE
ingress-nginx-admission-create-cgpj7        0/1     Completed          0          3m44s
ingress-nginx-admission-patch-mksxs         0/1     Completed          0          3m44s
ingress-nginx-controller-5fb6f67b9c-ps67k   0/1     CrashLoopBackOff   5          3m45s
Error logs from the pod:
I0916 07:15:34.317477       8 main.go:104] "SSL fake certificate created" file="/etc/ingress-controller/ssl/default-fake-certificate.pem"
F0916 07:15:34.318721       8 main.go:107] ingress-nginx requires Kubernetes v1.19.0 or higher
After upgrading to 1.19.1:
k get po -n ingress-nginx
NAME                                        READY   STATUS      RESTARTS   AGE
ingress-nginx-admission-create-2tk8p        0/1     Completed   0          104s
ingress-nginx-admission-patch-nlv5w         0/1     Completed   0          104s
ingress-nginx-controller-79c4d49bb9-7bgcj   1/1     Running     0          105s
I faced this issue when working on a Kubernetes cluster.
The issue arose when I was migrating resources from one nodepool to another in a test Kubernetes cluster.
I forgot that I had not migrated the Nginx ingress and the Cert Manager out of the nodepool that I wanted to decommission. So, after migrating the other applications out, I deleted the nodepool, which consequently deleted the Nginx ingress and the Cert Manager from the Kubernetes cluster.
All I had to do was redeploy the Nginx ingress and the Cert Manager to the new nodepool.

How to access kubernetes websites via https

I built my own single-host kubernetes cluster (1 host, 1 node, many namespaces, many pods and services) on a virtual machine running on an always-on server.
The applications running on the cluster are working fine (basically, a NodeJS backend and HTML frontend).
So far, I have a NodePort Service exposing port 30000:
NAME                      TYPE       CLUSTER-IP      EXTERNAL-IP   PORT(S)         AGE
traefik-ingress-service   NodePort   10.109.211.16   <none>        443:30000/TCP   147d
So now I can access the web interface by typing https://<server-alias>:30000 in my browser's address bar.
But I would like to access it without giving the port, by only typing https://<server-alias>.
I know this can be done with the kubectl port-forward command:
kubectl -n kube-system port-forward --address 0.0.0.0 svc/traefik-ingress-service 443:443
This works, but it does not seem to be a very professional thing to do.
Port forwarding also seems to keep disconnecting from time to time. Sometimes it throws an error and quits, but leaves the process open, which leaves the port open - I have to kill the process manually.
So, is there a way to do this access-my-application stuff professionally? How do the cluster providers (AWS, GCP...) do it?
Thank you!
Using NGINX Ingress you can access your website by server name:
Step 1: Install the Nginx ingress controller in your cluster; you can follow this link.
After the installation is completed you will have a new pod:
NAME                  READY   STATUS
nginx-ingress-xxxxx   1/1     Running
And a new Service:
NAME            TYPE           CLUSTER-IP   EXTERNAL-IP
nginx-ingress   LoadBalancer   10.109.x.y   a.b.c.d
Step 2: Create a new deployment for your application, but be sure that you use the same namespace for the nginx ingress svc/pod and your application, and that you set the svc type to ClusterIP.
Step 3: Create Kubernetes Ingress Object
Now you have to create the ingress object
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: test-ingress
  namespace: **Same Name Space**
spec:
  rules:
  - host: your DNS <server-alias>
    http:
      paths:
      - backend:
          serviceName: svc Name
          servicePort: svc Port
Now you can access your website using the <server-alias>.
To create a DNS name for free you can use freenom, or you can use /etc/hosts and update it with:
a.b.c.d   server-alias
Since the type of your Traefik ingress Service is NodePort, you access it on the assigned port, which takes a value from the 30000-32767 range.
You can also configure the Service to be of type LoadBalancer and interface with a cloud-based load balancer, as sketched below.
Ref: https://kubernetes.io/docs/tasks/access-application-cluster/create-external-load-balancer/
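A sketch of what that switch could look like for the Traefik Service (name, namespace, ports, and selector are assumptions; match them to your existing objects):
apiVersion: v1
kind: Service
metadata:
  name: traefik-ingress-service
  namespace: kube-system
spec:
  type: LoadBalancer              # was NodePort
  selector:
    k8s-app: traefik-ingress-lb   # assumption: adjust to your Traefik pod labels
  ports:
  - name: https
    port: 443
    targetPort: 443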
Here's a very related question: Should I use NodePort in my Traefik deployment on Kubernetes?

Referencing Helm Redis master from another pod within Kubernetes

I am running Redis via Helm on Kubernetes and am wondering how to reference the master pod from my application, which is also running inside Kubernetes as a pod. Helm is nice enough to create ClusterIP services, but I am still unclear about what to put in my application to always reference the master:
MacBook-Pro ➜  api git:(master) ✗ kubectl get services
NAME                         TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)    AGE
ignoble-hyena-redis-master   ClusterIP   10.100.187.188   <none>        6379/TCP   5h21m
ignoble-hyena-redis-slave    ClusterIP   10.100.236.164   <none>        6379/TCP   5h21m
MacBook-Pro ➜  api git:(master) ✗ kubectl describe service ignoble-hyena-redis-master
Name:              ignoble-hyena-redis-master
Namespace:         default
Labels:            app=redis
                   chart=redis-9.0.1
                   heritage=Tiller
                   release=ignoble-hyena
Annotations:       <none>
Selector:          app=redis,release=ignoble-hyena,role=master
Type:              ClusterIP
IP:                10.100.187.188
Port:              redis  6379/TCP
TargetPort:        redis/TCP
Endpoints:         192.168.34.46:6379
Session Affinity:  None
Events:            <none>
Do I use redis://my-password@ignoble-hyena-redis-master:6379? That seems fragile, as the generated name changes every time I redeploy the Helm chart. What is the recommended way to handle internal service discovery within the Kubernetes cluster?
You should package your application as a Helm chart. This basically involves running helm create, then copying your existing deployment YAML into the templates directory. Charts can have dependencies, so you can declare that your application needs Redis. Using the version in the standard Helm charts repository, you can say something like:
# requirements.yaml
dependencies:
  - name: redis
    version: ~9.0.2
    repository: https://kubernetes-charts.storage.googleapis.com
The important detail here is that your application and its Redis will have the same Helm release name: if your application is ignoble-hyena-myapp, then its Redis will be ignoble-hyena-redis-master. You can set this in your deployment YAML spec using templates:
env:
  - name: REDIS_HOST
    value: {{ .Release.Name }}-redis-master
Because of the way Kubernetes works internally, even if you helm upgrade your chart to a newer image tag, it won't usually touch the Redis. Helm will upload a new version of the Redis artifacts that looks exactly the same as the old one, and Kubernetes will take no action.
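A minimal usage sketch, assuming your chart directory is ./mychart (a placeholder) and its requirements.yaml contains the dependency block above:
helm dependency update ./mychart      # fetches the redis chart into ./mychart/charts/
helm install --name myapp ./mychart   # Helm 2 (Tiller) syntax, matching the output above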
I couldn't find it well documented, but following the template code you should be able to set the fullnameOverride value to some string you control, and the redis master will be exposed as <yourFullname>-master; your clients can reach it via that name. If your clients are in a different namespace, they can reach the master at <yourFullname>-master.<redisMasterServiceNamespace>.
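For instance (a sketch; myapp-redis is a placeholder):
# values.yaml supplied to the redis chart
fullnameOverride: myapp-redis
# master Service name:    myapp-redis-master
# from another namespace: myapp-redis-master.<namespace>.svc.cluster.local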

Kubernetes Ingress-Service update IP

I have a Kubernetes Cluster running on Azure. I use the nginx-ingress to handle incoming requests. To set up the ingress I used the official guide https://kubernetes.github.io/ingress-nginx/deploy/#azure .
I also created a public static IP which I want to use for the Ingress.
Unfortunately, I'm not able to find the ingress service (generic-deployment.yaml). Also, my ingress is not describable.
How I installed Ingress:
$ sudo kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/static/mandatory.yaml
...
deployment.apps/nginx-ingress-controller created
$ sudo kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/static/provider/cloud-generic.yaml
service/ingress-nginx created
Additionally, I installed some routing config via ingress.yaml:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: myingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - http:
      paths:
      - path:
        backend:
          serviceName: app0-service
          servicePort: 80
      - path: /app1
        backend:
          serviceName: app1-service
          servicePort: 80
$ sudo kubectl apply -f ingress.yaml
ingress.extensions/myingress created
What confuses me:
Unfortunately, I'm not able to find my ingress-nginx service.
$ sudo kubectl get svc
NAME           TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)   AGE
app0-service   ClusterIP   10.0.28.3      <none>        80/TCP    3m48s
app1-service   ClusterIP   10.0.226.249   <none>        80/TCP    3m47s
kubernetes     ClusterIP   10.0.0.1       <none>        443/TCP   39m
But my ingress is running:
$ sudo kubectl get ingress
NAME        HOSTS   ADDRESS         PORTS   AGE
myingress   *       23.97.xxx.xxx   80      54m
In the browser, 23.97.xxx.xxx partly works:
1) If I proxy a domain name to 23.97.xxx.xxx, the domain in the browser gets rewritten to the IP.
2) If I try to browse directly to a subroute like 23.97.xxx.xxx/app1/page1, I always get the main page of app1.
I expected to get an IP from my ingress service, because I want to update this IP address by adding loadBalancerIP to the spec in cloud-generic.yaml
(like https://github.com/kubernetes/ingress-nginx/blob/master/docs/examples/static-ip/static-ip-svc.yaml).
Is the IP from my ingress the right one to use? And why can't I find my ingress service?
Looking at the service yaml at https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/static/provider/cloud-generic.yaml you can see it gets created in the ingress-nginx namespace.
You should be able to get your service by running:
kubectl get service -n ingress-nginx
You can also get all services by running kubectl get service --all-namespaces.
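Once the service is visible, pinning your pre-created static IP comes down to adding loadBalancerIP to that service's spec, roughly like this (a sketch; use your real Azure static IP):
# excerpt of the ingress-nginx Service (namespace ingress-nginx)
spec:
  type: LoadBalancer
  loadBalancerIP: 23.97.xxx.xxx   # your static IP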

Getting a Kubernetes Ingress endpoint/IP address

Base OS : CentOS (1 master 2 minions)
K8S version : 1.9.5 (deployed using KubeSpray)
I am new to Kubernetes Ingress and am setting up 2 different services, each reachable with its own path.
I have created 2 deployments :
kubectl run nginx --image=nginx --port=80
kubectl run echoserver --image=gcr.io/google_containers/echoserver:1.4 --port=8080
I have also created their corresponding services :
kubectl expose deployment nginx --target-port=80 --type=NodePort
kubectl expose deployment echoserver --target-port=8080 --type=NodePort
My svc are:
[root@node1 kubernetes]# kubectl get svc
NAME         TYPE       CLUSTER-IP      EXTERNAL-IP   PORT(S)          AGE
echoserver   NodePort   10.233.48.121   <none>        8080:31250/TCP   47m
nginx        NodePort   10.233.44.54    <none>        80:32018/TCP     1h
My node IP address is 172.16.16.2 and I can access both pods using
http://172.16.16.2:31250 and
http://172.16.16.2:32018
Now, on top of this, I want to deploy an Ingress so that I can reach both pods not via 2 IPs and 2 different ports, BUT via 1 IP address with different paths.
So my Ingress file is:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: fanout-nginx-ingress
spec:
  rules:
  - http:
      paths:
      - path: /nginx
        backend:
          serviceName: nginx
          servicePort: 80
      - path: /echo
        backend:
          serviceName: echoserver
          servicePort: 8080
This yields:
[root@node1 kubernetes]# kubectl describe ing fanout-nginx-ingress
Name:             fanout-nginx-ingress
Namespace:        development
Address:
Default backend:  default-http-backend:80 (<none>)
Rules:
  Host  Path  Backends
  ----  ----  --------
  *
        /nginx   nginx:80 (<none>)
        /echo    echoserver:8080 (<none>)
Annotations:
Events:  <none>
Now when I try accessing the pods using the node IP address (172.16.16.2), I get nothing:
http://172.16.16.2/echo
http://172.16.16.2/nginx
Is there something I have missed in my configs?
I had the same issue on my bare metal installation - or rather something close to that (a kubernetes virtual cluster: a set of virtual machines connected via a Host-Only Adapter). Here is a link to my kubernetes vlab.
First of all, make sure that you have an ingress controller installed. Currently there are two ingress controllers worth trying: the kubernetes nginx ingress controller and the nginx kubernetes ingress controller; I installed the first one.
Installation
Go to the installation instructions and execute the first step:
# prerequisite-generic-deployment-command
$ kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/mandatory.yaml
Next, get the IP addresses of the cluster nodes:
$ kubectl get nodes -o wide
NAME     STATUS   ROLES    ...   INTERNAL-IP
master   Ready    master   ...   192.168.121.110
node01   Ready    <none>   ...   192.168.121.111
node02   Ready    <none>   ...   192.168.121.112
Further, create an ingress-nginx service of type LoadBalancer. I did this by downloading the NodePort template service from the installation tutorial and making the following adjustments in the svc-ingress-nginx-lb.yaml file:
$ curl https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/provider/baremetal/service-nodeport.yaml > svc-ingress-nginx-lb.yaml
# my changes to svc-ingress-nginx-lb.yaml
type: LoadBalancer
externalIPs:
- 192.168.121.110
- 192.168.121.111
- 192.168.121.112
externalTrafficPolicy: Local
# create the ingress-nginx service
$ kubectl apply -f svc-ingress-nginx-lb.yaml
Verification
Check that the ingress-nginx service was created:
$ kubectl get svc -n ingress-nginx
NAME            TYPE           CLUSTER-IP     EXTERNAL-IP                                        PORT(S)                      AGE
ingress-nginx   LoadBalancer   10.110.127.9   192.168.121.110,192.168.121.111,192.168.121.112   80:30284/TCP,443:31684/TCP   70m
Check that the nginx-ingress-controller deployment was created:
$ kubectl get deploy -n ingress-nginx
NAME                       DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
nginx-ingress-controller   1         1         1            1           73m
Check that the nginx-ingress pod is running:
$ kubectl get pods --all-namespaces -l app.kubernetes.io/name=ingress-nginx
NAMESPACE       NAME                                        READY   STATUS    RESTARTS   AGE
ingress-nginx   nginx-ingress-controller-5cd796c58c-lg6d4   1/1     Running   0          75m
Finally, check the ingress controller version. Don't forget to change the pod name!
$ kubectl exec -it nginx-ingress-controller-5cd796c58c-lg6d4 -n ingress-nginx -- /nginx-ingress-controller --version
-------------------------------------------------------------------------------
NGINX Ingress controller
Release: 0.21.0
Build: git-b65b85cd9
Repository: https://github.com/aledbf/ingress-nginx
-------------------------------------------------------------------------------
Testing
Test that the ingress controller is working by executing the steps in this tutorial - of course, you will omit the minikube part.
Successful execution of all steps will create an ingress resource that should look like this:
$ kubectl get ing
NAME               HOSTS                         ADDRESS                                           PORTS   AGE
ingress-tutorial   myminikube.info,cheeses.all   192.168.121.110,192.168.121.111,192.168.121.112   80      91m
And pods that look like this:
$ kubectl get pods
NAME                              READY   STATUS    RESTARTS   AGE
cheddar-cheese-6f94c9dbfd-cll4z   1/1     Running   0          110m
echoserver-55dcfbf8c6-dwl6s       1/1     Running   0          104m
stilton-cheese-5f6bbdd7dd-8s8bf   1/1     Running   0          110m
Finally, test that a request to myminikube.info propagates via the ingress load balancer:
$ curl myminikube.info
CLIENT VALUES:
client_address=10.44.0.7
command=GET
real path=/
query=nil
request_version=1.1
request_uri=http://myminikube.info:8080/
SERVER VALUES:
server_version=nginx: 1.10.0 - lua: 10001
HEADERS RECEIVED:
accept=*/*
host=myminikube.info
user-agent=curl/7.29.0
x-forwarded-for=10.32.0.1
x-forwarded-host=myminikube.info
x-forwarded-port=80
x-forwarded-proto=http
x-original-uri=/
x-real-ip=10.32.0.1
x-request-id=b2fb3ee219507bfa12472c7d481d4b72
x-scheme=http
BODY:
It was a long journey to make ingress work in a bare-metal-like environment. Thus, I will include relevant links that helped me along:
reproducable tutorial
installation of minikube on ubuntu
ingress I
ingress II
digging
reverse engineering on ingress in kubernetes
Check if you have an ingress controller in your cluster:
$ kubectl get po --all-namespaces
You should see something like:
kube-system nginx-ingress-controller-gwts0 1/1 Running 0 18d
It's only possible to create an ingress to address services inside the namespace in which the Ingress resides.
Cross-namespace ingresses are not implemented for security reasons.
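Applied to the setup above, that means the Ingress metadata should carry the same namespace as its backend Services, e.g. (a sketch of the relevant fragment):
metadata:
  name: fanout-nginx-ingress
  namespace: development   # same namespace as the nginx and echoserver Services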
It seems that your cluster is missing an Ingress controller.
In general, an Ingress controller works as follows:
1. search for a certain type of object (ingress, "nginx") in the cluster
2. parse that object and create a configuration section for a specific ingress pod
3. update that pod object (restart it with the updated configuration)
That particular pod is responsible for processing traffic from the incoming ports (usually a couple of dedicated ports on the nodes) to the configured traffic destination in the cluster.
You can choose from two supported and maintained controllers: Nginx and GCE.
The ingress controller consists of several components that you create during installation.
Here is the installation part from the Nginx Ingress documentation:
curl https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/namespace.yaml | kubectl apply -f -
curl https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/default-backend.yaml | kubectl apply -f -
curl https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/configmap.yaml | kubectl apply -f -
curl https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/tcp-services-configmap.yaml | kubectl apply -f -
curl https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/udp-services-configmap.yaml | kubectl apply -f -
If you have RBAC authorization configured in your cluster:
curl https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/rbac.yaml | kubectl apply -f -
curl https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/with-rbac.yaml | kubectl apply -f -
If no RBAC configured:
curl https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/without-rbac.yaml | kubectl apply -f -
In case you created the cluster from scratch:
curl https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/provider/baremetal/service-nodeport.yaml | kubectl apply -f -
Verify your installation:
kubectl get pods --all-namespaces -l app=ingress-nginx --watch
You should see something like:
NAMESPACE       NAME                                       READY   STATUS    RESTARTS   AGE
ingress-nginx   nginx-ingress-controller-699cdf846-nj2rw   1/1     Running   0          1h
Check available services and their parameters:
kubectl get services --all-namespaces
If you are using a custom service provider deployment (minikube, AWS, Azure, GKE), follow the Nginx Ingress documentation for installation details.
See official Kubernetes Ingress documentation for details about Ingress.
I was using the microk8s default nginx ingress controller on a later version of k8s (> 1.18), and I noticed this specific annotation was causing me an issue:
kubernetes.io/ingress.class: "nginx"
It's present in a lot of older documentation and examples, but it's apparently deprecated (see https://kubernetes.io/docs/concepts/services-networking/ingress/), and I had also defined an ingressClassName of "public" using the newer spec field. I'm not sure if it was the conflict between the two that caused the issue, but once I removed the deprecated annotation, my address appeared.
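For reference, on current API versions the class moves into the spec; a minimal sketch (my-ingress, my-service, and example.com are placeholders; "public" is the microk8s class mentioned above):
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-ingress             # placeholder
spec:
  ingressClassName: public     # replaces the kubernetes.io/ingress.class annotation
  rules:
  - host: example.com          # placeholder
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: my-service   # placeholder
            port:
              number: 80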
For your ingress resource (fanout-nginx-ingress) to work, you first need to deploy an ingress controller, which by default does not come with a local kubernetes cluster; you need to deploy it yourself.
There are many solutions out there and you can use any of them, but the nginx ingress controller is fine.
For detailed information you can refer to a great video on ingress by Mumshad Mannambeth here:
https://www.youtube.com/watch?v=GhZi4DxaxxE