ingress-nginx wasn't installed properly?

Now I'm using WSL 2 and Docker Desktop on Windows 10.
I created a YAML manifest to create an Ingress for my microservices, as shown below.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-srv
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  rules:
  - host: posts.com
    http:
      paths:
      - path: /posts
        pathType: Prefix
        backend:
          service:
            name: posts-clusterip-srv
            port:
              number: 4000
And I installed ingress-nginx by following this installation guide
I ran this command in the guide.
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.4.0/deploy/static/provider/cloud/deploy.yaml
But when I ran kubectl get pods --namespace=ingress-nginx, the ingress-nginx-controller pod showed ImageInspectError.
And when I ran the command kubectl apply -f ingress-srv.yaml, it showed an error message.
Can anyone please let me know what the issue is?
I deleted everything in the ingress-nginx namespace using the command kubectl delete all --all -n ingress-nginx and ran the deploy manifest again.
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.4.0/deploy/static/provider/cloud/deploy.yaml
But the issue still happened.

There is an issue deploying the ingress-nginx controller. You need to fix that first, before deploying the Ingress, because only the NGINX controller knows how to handle Ingress resources.
Since there is not much info about the controller deployment failure, you should add more details about the error. You can describe the controller Pod and share its events and status to look into this further.
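For example, these two commands usually surface the root cause (the label selector below matches the standard ingress-nginx manifests; adjust it if your install differs):
kubectl describe pod -n ingress-nginx -l app.kubernetes.io/component=controller
kubectl get events -n ingress-nginx --sort-by=.metadata.creationTimestamp
The Events section at the bottom of the describe output typically explains an ImageInspectError.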

It was because of a corrupted filesystem.
When I ran the ingress-nginx deployment command, Docker Desktop crashed because it ran out of drive space.
So I removed all corrupted, unused, or dangling Docker images.
docker system prune
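If that doesn't free enough space, a more aggressive variant exists (assumption: you can afford to lose all unused images and volumes, not just dangling ones):
docker system prune -a --volumes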
Also I deleted ingress-nginx and reinstalled.
kubectl delete -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.4.0/deploy/static/provider/cloud/deploy.yaml
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.4.0/deploy/static/provider/cloud/deploy.yaml
After that, it worked well.
kubectl get pods --namespace=ingress-nginx
NAME                                        READY   STATUS      RESTARTS   AGE
ingress-nginx-admission-create-tgkfx        0/1     Completed   0          74m
ingress-nginx-admission-patch-28l7q         0/1     Completed   3          74m
ingress-nginx-controller-7844b9db77-4dfvb   1/1     Running     0          74m
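One more note: for the posts.com host rule to be reachable locally, the name must resolve to the machine running Docker Desktop. A common approach (an assumption about this setup, not a step from the guide) is a hosts-file entry in C:\Windows\System32\drivers\etc\hosts:
127.0.0.1 posts.com
After that, http://posts.com/posts should be routed by the Ingress to posts-clusterip-srv.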

Related

Kubernetes - NginxIngressController resources not creating

We are using Nginx ingress operator version 0.2.0 and controller version 1.11.1. The following steps were completed to deploy the CRD and the operator:
https://github.com/nginxinc/nginx-ingress-operator/blob/release-0.2.0/docs/manual-installation.md
After that, we are deploying the controller using the following yaml:
apiVersion: k8s.nginx.org/v1alpha1
kind: NginxIngressController
metadata:
  name: nginx-ingress-controller
  namespace: ingress-nginx
spec:
  type: deployment
  image:
    repository: nginx/nginx-ingress
    tag: 1.11.1
    pullPolicy: Always
  serviceType: NodePort
  nginxPlus: false
The manifest gets applied successfully, but none of the required resources (deployment and service) get created. Hence, the ingress is not getting an address.
kubectl get all -n ingress-nginx
No resources found in ingress-nginx namespace.
kubectl get ing
NAME         CLASS    HOSTS   ADDRESS   PORTS   AGE
my-ingress   <none>   *                 80      6h23m
kubeadm, kubelet & kubectl version 1.21.2.
Earlier we had deployed it on minikube and it was working fine.
I have reproduced the use case using Nginx ingress operator version 0.4.0 and controller version 2.0.x by following the documentation, and successfully created the Nginx Ingress Operator and NginxIngressController. At first I didn't create the namespace ingress-nginx, and while running the command
kubectl get all -n ingress-nginx I was getting the error No resources found in ingress-nginx namespace.
After creating the required namespace by running the command kubectl create namespace ingress-nginx, I was able to get the resources (pod, service, deployment, replica set) successfully.
Can you try again with the latest nginx controller and operator versions, and also double-check the configuration?
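If the resources still don't appear after the namespace exists, the operator's logs usually say why reconciliation failed; a sketch (the operator's namespace and pod name depend on how it was installed, so look them up first):
kubectl get pods --all-namespaces | grep nginx-ingress-operator
kubectl logs -n <operator-namespace> <operator-pod-name>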

Syncing Prometheus and Kubewatch with Kubernetes Cluster

I want to get all events that occurred in a Kubernetes cluster into a Python dictionary, using some API to extract data from events that occurred in the past. I found on the internet that it is possible by storing all Kube-watch data in Prometheus and accessing it later. I am unable to figure out how to set this up and see all past Pod events in Python. Any alternative solutions to access past events are also appreciated. Thanks!
I'll describe a solution that is not complicated and I think meets all your requirements.
There are tools such as Eventrouter that take Kubernetes events and push them to a user specified sink. However, as you mentioned, you only need Pods events, so I suggest a slightly different approach.
In short, you can run the kubectl get events --watch command from within a Pod and collect the output from that command using a log aggregation system like Loki.
Below, I will provide a detailed step-by-step explanation.
1. Running kubectl command from within a Pod
To display only Pod events, you can use:
$ kubectl get events --watch --field-selector involvedObject.kind=Pod
We want to run this command from within a Pod. For security reasons, I've created a separate events-collector ServiceAccount with the view Role assigned and our Pod will run under this ServiceAccount.
NOTE: I've created a Deployment instead of a single Pod.
$ cat all-in-one.yml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: events-collector
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: events-collector-binding
subjects:
- kind: ServiceAccount
  name: events-collector
  namespace: default
roleRef:
  kind: ClusterRole
  name: view
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: events-collector
  name: events-collector
spec:
  selector:
    matchLabels:
      app: events-collector
  template:
    metadata:
      labels:
        app: events-collector
    spec:
      serviceAccountName: events-collector
      containers:
      - image: bitnami/kubectl
        name: test
        command: ["kubectl"]
        args: ["get", "events", "--watch", "--field-selector", "involvedObject.kind=Pod"]
After applying the above manifest, the events-collector was created and collects Pod events as expected:
$ kubectl apply -f all-in-one.yml
serviceaccount/events-collector created
clusterrolebinding.rbac.authorization.k8s.io/events-collector-binding created
deployment.apps/events-collector created
$ kubectl get deploy,pod | grep events-collector
deployment.apps/events-collector 1/1 1 1 14s
pod/events-collector-d98d6c5c-xrltj 1/1 Running 0 14s
$ kubectl logs -f events-collector-d98d6c5c-xrltj
LAST SEEN   TYPE     REASON      OBJECT                       MESSAGE
77s         Normal   Scheduled   pod/app-1-5d9ccdb595-m9d5n   Successfully assigned default/app-1-5d9ccdb595-m9d5n to gke-cluster-2-default-pool-8505743b-brmx
76s         Normal   Pulling     pod/app-1-5d9ccdb595-m9d5n   Pulling image "nginx"
71s         Normal   Pulled      pod/app-1-5d9ccdb595-m9d5n   Successfully pulled image "nginx" in 4.727842954s
70s         Normal   Created     pod/app-1-5d9ccdb595-m9d5n   Created container nginx
70s         Normal   Started     pod/app-1-5d9ccdb595-m9d5n   Started container nginx
73s         Normal   Scheduled   pod/app-2-7747dcb588-h8j4q   Successfully assigned default/app-2-7747dcb588-h8j4q to gke-cluster-2-default-pool-8505743b-p7qt
72s         Normal   Pulling     pod/app-2-7747dcb588-h8j4q   Pulling image "nginx"
67s         Normal   Pulled      pod/app-2-7747dcb588-h8j4q   Successfully pulled image "nginx" in 4.476795932s
66s         Normal   Created     pod/app-2-7747dcb588-h8j4q   Created container nginx
66s         Normal   Started     pod/app-2-7747dcb588-h8j4q   Started container nginx
2. Installing Loki
You can install Loki to store logs and process queries. Loki is like Prometheus, but for logs :). The easiest way to install Loki is to use the grafana/loki-stack Helm chart:
$ helm repo add grafana https://grafana.github.io/helm-charts
"grafana" has been added to your repositories
$ helm repo update
...
Update Complete. ⎈Happy Helming!⎈
$ helm upgrade --install loki grafana/loki-stack
$ kubectl get pods | grep loki
loki-0 1/1 Running 0 76s
loki-promtail-hm8kn 1/1 Running 0 76s
loki-promtail-nkv4p 1/1 Running 0 76s
loki-promtail-qfrcr 1/1 Running 0 76s
3. Querying Loki with LogCLI
You can use the LogCLI tool to run LogQL queries against a Loki server. Detailed information on installing and using this tool can be found in the LogCLI documentation. I'll demonstrate how to install it on Linux:
$ wget https://github.com/grafana/loki/releases/download/v2.2.1/logcli-linux-amd64.zip
$ unzip logcli-linux-amd64.zip
Archive: logcli-linux-amd64.zip
inflating: logcli-linux-amd64
$ mv logcli-linux-amd64 logcli
$ sudo cp logcli /bin/
$ whereis logcli
logcli: /bin/logcli
To query the Loki server from outside the Kubernetes cluster, you may need to expose it using the Ingress resource:
$ cat ingress.yml
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/rewrite-target: /
  name: loki-ingress
spec:
  rules:
  - http:
      paths:
      - backend:
          serviceName: loki
          servicePort: 3100
        path: /
$ kubectl apply -f ingress.yml
ingress.networking.k8s.io/loki-ingress created
$ kubectl get ing
NAME           CLASS    HOSTS   ADDRESS       PORTS   AGE
loki-ingress   <none>   *       <PUBLIC_IP>   80      19s
Finally, I've created a simple python script that we can use to query the Loki server:
NOTE: We need to set the LOKI_ADDR environment variable as described in the documentation. You need to replace the <PUBLIC_IP> with your Ingress IP.
$ cat query_loki.py
#!/usr/bin/env python3
import os

# Point LogCLI at the Loki server exposed through the Ingress.
os.environ['LOKI_ADDR'] = "http://<PUBLIC_IP>"

# Query all log lines collected from the events-collector Deployment.
os.system("logcli query '{app=\"events-collector\"}'")
$ ./query_loki.py
...
2021-07-02T10:33:01Z {} 2021-07-02T10:33:01.626763464Z stdout F 0s Normal Pulling pod/backend-app-5d99cf4b-c9km4 Pulling image "nginx"
2021-07-02T10:33:00Z {} 2021-07-02T10:33:00.836755152Z stdout F 0s Normal Scheduled pod/backend-app-5d99cf4b-c9km4 Successfully assigned default/backend-app-5d99cf4b-c9km4 to gke-cluster-1-default-pool-328bd2b1-288w
2021-07-02T10:33:00Z {} 2021-07-02T10:33:00.649954267Z stdout F 0s Normal Started pod/web-app-6fcf9bb7b8-jbrr9 Started container nginx
2021-07-02T10:33:00Z {} 2021-07-02T10:33:00.54819851Z stdout F 0s Normal Created pod/web-app-6fcf9bb7b8-jbrr9 Created container nginx
2021-07-02T10:32:59Z {} 2021-07-02T10:32:59.414571562Z stdout F 0s Normal Pulled pod/web-app-6fcf9bb7b8-jbrr9 Successfully pulled image "nginx" in 4.228468876s
...
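To get those results into a Python dictionary, as the question asks, one option is logcli's machine-readable output mode; a minimal sketch building on the script above (assuming logcli v2.x, whose query command accepts --output=jsonl and emits one JSON object per line with the log line and its labels):
#!/usr/bin/env python3
import json
import os
import subprocess

os.environ['LOKI_ADDR'] = "http://<PUBLIC_IP>"

# jsonl mode prints one JSON object per log entry (line, labels, timestamp).
result = subprocess.run(
    ["logcli", "query", "--output=jsonl", '{app="events-collector"}'],
    capture_output=True, text=True, check=True)

# Parse each line into a dict; the raw event text is under the "line" key.
events = [json.loads(line) for line in result.stdout.splitlines() if line]
print(events[0])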

Nginx Ingress: service "ingress-nginx-controller-admission" not found

We created a kubernetes cluster for a customer about one year ago with two environments; staging and production separated in namespaces. We are currently developing the next version of the application and need an environment for this development work, so we've created a beta environment in its own namespace.
This is a bare metal kubernetes cluster with MetalLB and nginx-ingress. The nginx ingress controller is installed with helm and the ingresses are created with the following manifest (namespaces are enforced by our deployment pipeline and are not visible in the manifest):
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: api-ingress
  annotations:
    #ingress.kubernetes.io/ssl-redirect: "true"
    #kubernetes.io/tls-acme: "true"
    #certmanager.k8s.io/issuer: "letsencrypt-staging"
    #certmanager.k8s.io/acme-challenge-type: http01
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/configuration-snippet: |
      more_set_headers "X-Robots-Tag: noindex, nofollow";
    nginx.ingress.kubernetes.io/enable-cors: "true"
    nginx.ingress.kubernetes.io/cors-allow-methods: "GET, OPTIONS"
    nginx.ingress.kubernetes.io/cors-allow-origin: "*"
    nginx.ingress.kubernetes.io/cors-allow-credentials: "true"
spec:
  tls:
  - hosts:
    - ${API_DOMAIN}
    secretName: api-cert
  rules:
  - host: ${API_DOMAIN}
    http:
      paths:
      - backend:
          serviceName: api
          servicePort: 80
When applying the manifest kubernetes responds with the following error:
Error from server (InternalError): error when creating "STDIN": Internal error occurred: failed calling webhook "validate.nginx.ingress.kubernetes.io": Post https://ingress-nginx-controller-admission.ingress-nginx.svc:443/extensions/v1beta1/ingresses?timeout=30s: service "ingress-nginx-controller-admission" not found
I've attempted to update the apiVersion of the ingress manifest to networking.k8s.io/v1beta1 (this is the apiVersion the new nginx-ingress controllers are installed with via helm), but I'm getting the same error.
My initial suspicion is that this is related to a change in nginx-ingress between the current installation and the installation from one year ago, even if the ingress controllers are separated by namespaces. But I can't find any service called ingress-nginx-controller-admission in any of my namespaces, so I'm clueless how to proceed.
I had the same problem and found a solution from another SO thread.
I had previously installed nginx-ingress using the manifests. I deleted the namespace it created, and the clusterrole and clusterrolebinding as noted in the documentation, but that does not remove the ValidatingWebhookConfiguration, which is installed by the manifests but NOT by helm by default. As Arghya noted above, it can be enabled using a helm parameter.
Once I deleted the ValidatingWebhookConfiguration, my helm installation went flawlessly.
kubectl delete -A ValidatingWebhookConfiguration ingress-nginx-admission
You can check whether the validating webhook and its service exist. If they don't, double-check the deployment and add them.
kubectl get -A ValidatingWebhookConfiguration
NAME                      CREATED AT
ingress-nginx-admission   2020-04-22T15:01:33Z
kubectl get svc -n ingress-nginx
NAME                                 TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)                      AGE
ingress-nginx-controller             NodePort    10.96.212.217   <none>        80:32268/TCP,443:32683/TCP   2m34s
ingress-nginx-controller-admission   ClusterIP   10.96.151.42    <none>        443/TCP                      2m34s
Deployment yamls here have the webhook and service.
Since you have used helm to install it, you can enable/disable the webhook via a helm parameter as defined here.
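As a hedged sketch, disabling the admission webhook at install time looks like this (the parameter name comes from the ingress-nginx chart; verify it against your chart version):
helm install ingress-nginx ingress-nginx/ingress-nginx \
  --namespace ingress-nginx \
  --set controller.admissionWebhooks.enabled=false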
It seems there is some issue with the SSL cert in the webhook. Changing failurePolicy: Fail to Ignore worked for me in the manifest from:
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-0.32.0/deploy/static/provider/baremetal/deploy.yaml
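Instead of editing the downloaded manifest, the same change can be patched in place; a sketch (assuming the webhook configuration is named ingress-nginx-admission, as in that manifest):
kubectl patch validatingwebhookconfiguration ingress-nginx-admission \
  --type=json \
  -p='[{"op": "replace", "path": "/webhooks/0/failurePolicy", "value": "Ignore"}]'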
for more info check:
https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/
My problem proved to be an SSL cert issue. After I deleted the ValidatingWebhookConfiguration, the issue was resolved.
For me the issue was with Kubernetes version 1.18; I upgraded to 1.19.1 and it worked just fine.
Pod status
k get pods -n ingress-nginx
NAME                                        READY   STATUS             RESTARTS   AGE
ingress-nginx-admission-create-cgpj7        0/1     Completed          0          3m44s
ingress-nginx-admission-patch-mksxs         0/1     Completed          0          3m44s
ingress-nginx-controller-5fb6f67b9c-ps67k   0/1     CrashLoopBackOff   5          3m45s
Error logs from pod
I0916 07:15:34.317477 8 main.go:104] "SSL fake certificate created" file="/etc/ingress-controller/ssl/default-fake-certificate.pem"
F0916 07:15:34.318721 8 main.go:107] ingress-nginx requires Kubernetes v1.19.0 or higher
After upgrading to v1.19.1:
k get po -n ingress-nginx
NAME                                        READY   STATUS      RESTARTS   AGE
ingress-nginx-admission-create-2tk8p        0/1     Completed   0          104s
ingress-nginx-admission-patch-nlv5w         0/1     Completed   0          104s
ingress-nginx-controller-79c4d49bb9-7bgcj   1/1     Running     0          105s
I faced this issue when working on a Kubernetes cluster.
It arose when I was migrating resources from one nodepool to another in a test Kubernetes cluster.
I forgot that I had not migrated the Nginx ingress and the Cert Manager out of the nodepool that I wanted to decommission. So after migrating the other applications out of it, I deleted the nodepool, which consequently deleted the Nginx ingress and the Cert Manager from the Kubernetes cluster.
All I had to do was redeploy the Nginx ingress and the Cert Manager to the new nodepool.
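For anyone hitting the same thing, a hedged sketch of redeploying both with helm (upstream chart names; the original installs may have used different releases or values):
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm upgrade --install ingress-nginx ingress-nginx/ingress-nginx -n ingress-nginx --create-namespace
helm repo add jetstack https://charts.jetstack.io
helm upgrade --install cert-manager jetstack/cert-manager -n cert-manager --create-namespace --set installCRDs=true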

Getting an Kubernetes Ingress endpoint/IP address

Base OS : CentOS (1 master 2 minions)
K8S version : 1.9.5 (deployed using KubeSpray)
I am new to Kubernetes Ingress and am setting up 2 different services, each reachable with its own path.
I have created 2 deployments :
kubectl run nginx --image=nginx --port=80
kubectl run echoserver --image=gcr.io/google_containers/echoserver:1.4 --port=8080
I have also created their corresponding services :
kubectl expose deployment nginx --target-port=80 --type=NodePort
kubectl expose deployment echoserver --target-port=8080 --type=NodePort
My services are:
[root@node1 kubernetes]# kubectl get svc
NAME         TYPE       CLUSTER-IP      EXTERNAL-IP   PORT(S)          AGE
echoserver   NodePort   10.233.48.121   <none>        8080:31250/TCP   47m
nginx        NodePort   10.233.44.54    <none>        80:32018/TCP     1h
My NodeIP address is 172.16.16.2 and I can access both pods using
http://172.16.16.2:31250 &
http://172.16.16.2:32018
Now on top of this I want to deploy an Ingress so that I can reach both pods not using 2 IPs and 2 different ports BUT 1 IP address with different paths.
So my Ingress file is :
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: fanout-nginx-ingress
spec:
  rules:
  - http:
      paths:
      - path: /nginx
        backend:
          serviceName: nginx
          servicePort: 80
      - path: /echo
        backend:
          serviceName: echoserver
          servicePort: 8080
This yields:
[root@node1 kubernetes]# kubectl describe ing fanout-nginx-ingress
Name: fanout-nginx-ingress
Namespace: development
Address:
Default backend: default-http-backend:80 (<none>)
Rules:
Host Path Backends
---- ---- --------
*
/nginx nginx:80 (<none>)
/echo echoserver:8080 (<none>)
Annotations:
Events: <none>
Now when I try accessing the Pods using the NodeIP address (172.16.16.2), I get nothing.
http://172.16.16.2/echo
http://172.16.16.2/nginx
Is there something I have missed in my configs ?
I had the same issue on my bare metal installation - or rather something close to that (a kubernetes virtual cluster - a set of virtual machines connected via a Host-Only-Adapter). Here is a link to my kubernetes vlab.
First of all, make sure that you have an ingress controller installed. Currently there are two ingress controllers worth trying: the kubernetes nginx ingress controller and the nginx kubernetes ingress controller - I installed the first one.
Installation
Go to the installation instructions and execute the first step.
# prerequisite-generic-deployment-command
$ kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/mandatory.yaml
Next get IP addresses of cluster nodes.
$ kubectl get nodes -o wide
NAME     STATUS   ROLES    ...   INTERNAL-IP
master   Ready    master   ...   192.168.121.110
node01   Ready    <none>   ...   192.168.121.111
node02   Ready    <none>   ...   192.168.121.112
Next, create an ingress-nginx service of type LoadBalancer. I did this by downloading the NodePort template service from the installation tutorial and making the following adjustments in the svc-ingress-nginx-lb.yaml file.
$ curl https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/provider/baremetal/service-nodeport.yaml > svc-ingress-nginx-lb.yaml
# my changes svc-ingress-nginx-lb.yaml
type: LoadBalancer
externalIPs:
- 192.168.121.110
- 192.168.121.111
- 192.168.121.112
externalTrafficPolicy: Local
# create ingress- service
$ kubectl apply -f svc-ingress-nginx-lb.yaml
Verification
Check that the ingress-nginx service was created.
$ kubectl get svc -n ingress-nginx
NAME            TYPE           CLUSTER-IP     EXTERNAL-IP                                        PORT(S)                      AGE
ingress-nginx   LoadBalancer   10.110.127.9   192.168.121.110,192.168.121.111,192.168.121.112   80:30284/TCP,443:31684/TCP   70m
Check that the nginx-ingress-controller deployment was created.
$ kubectl get deploy -n ingress-nginx
NAME                       DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
nginx-ingress-controller   1         1         1            1           73m
Check that the nginx-ingress pod is running.
$ kubectl get pods --all-namespaces -l app.kubernetes.io/name=ingress-nginx
NAMESPACE       NAME                                        READY   STATUS    RESTARTS   AGE
ingress-nginx   nginx-ingress-controller-5cd796c58c-lg6d4   1/1     Running   0          75m
Finally, check the ingress controller version. Don't forget to change the pod name!
$ kubectl exec -it nginx-ingress-controller-5cd796c58c-lg6d4 -n ingress-nginx -- /nginx-ingress-controller --version
-------------------------------------------------------------------------------
NGINX Ingress controller
Release: 0.21.0
Build: git-b65b85cd9
Repository: https://github.com/aledbf/ingress-nginx
-------------------------------------------------------------------------------
Testing
Test that the ingress controller is working by executing the steps in this tutorial - of course, you will omit the minikube part.
Successful execution of all steps will create an ingress resource that should look like this.
$ kubectl get ing
NAME               HOSTS                         ADDRESS                                           PORTS   AGE
ingress-tutorial   myminikube.info,cheeses.all   192.168.121.110,192.168.121.111,192.168.121.112   80      91m
And pods that look like this.
$ kubectl get pods
NAME                              READY   STATUS    RESTARTS   AGE
cheddar-cheese-6f94c9dbfd-cll4z   1/1     Running   0          110m
echoserver-55dcfbf8c6-dwl6s       1/1     Running   0          104m
stilton-cheese-5f6bbdd7dd-8s8bf   1/1     Running   0          110m
Finally, test that a request to myminikube.info propagates via the ingress load balancer.
$ curl myminikube.info
CLIENT VALUES:
client_address=10.44.0.7
command=GET
real path=/
query=nil
request_version=1.1
request_uri=http://myminikube.info:8080/
SERVER VALUES:
server_version=nginx: 1.10.0 - lua: 10001
HEADERS RECEIVED:
accept=*/*
host=myminikube.info
user-agent=curl/7.29.0
x-forwarded-for=10.32.0.1
x-forwarded-host=myminikube.info
x-forwarded-port=80
x-forwarded-proto=http
x-original-uri=/
x-real-ip=10.32.0.1
x-request-id=b2fb3ee219507bfa12472c7d481d4b72
x-scheme=http
BODY:
It was a long journey to make ingress work in a bare-metal-like environment. Thus, I will include relevant links that helped me along:
reproducible tutorial
installation of minikube on ubuntu
ingress I
ingress II
digging
reverse engineering on ingress in kubernetes
Check if you have an ingress controller in your cluster:
$ kubectl get po --all-namespaces
You should see something like:
kube-system nginx-ingress-controller-gwts0 1/1 Running 0 18d
It's only possible to create an ingress to address services inside the namespace in which the Ingress resides.
Cross-namespace ingresses are not implemented for security reasons.
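If you do need to reach a Service in another namespace, a commonly used workaround (a sketch with illustrative names, not something from this thread) is to point the Ingress at a local ExternalName Service that forwards to the other namespace:
apiVersion: v1
kind: Service
metadata:
  name: echoserver-proxy
spec:
  type: ExternalName
  externalName: echoserver.other-namespace.svc.cluster.local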
It seems that your cluster is missing an Ingress controller.
In general, an Ingress controller works as follows:
1. It searches for a certain type of object (Ingress with class "nginx") in the cluster.
2. It parses that object and creates a configuration section for a specific ingress pod.
3. It updates that pod object (restarts it with the updated configuration).
That particular pod is responsible for processing traffic from incoming ports (usually a couple of dedicated ports on nodes) to the configured traffic destinations in the cluster.
You can choose from two supported and maintained controllers - Nginx and GCE
The ingress controller consists of several components that you create during installation.
Here is installation part from Nginx Ingress documentation:
curl https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/namespace.yaml | kubectl apply -f -
curl https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/default-backend.yaml | kubectl apply -f -
curl https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/configmap.yaml | kubectl apply -f -
curl https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/tcp-services-configmap.yaml | kubectl apply -f -
curl https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/udp-services-configmap.yaml | kubectl apply -f -
If you have RBAC authorization configured in your cluster:
curl https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/rbac.yaml | kubectl apply -f -
curl https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/with-rbac.yaml | kubectl apply -f -
If no RBAC is configured:
curl https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/without-rbac.yaml | kubectl apply -f -
In case you create the cluster from scratch:
curl https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/provider/baremetal/service-nodeport.yaml | kubectl apply -f -
Verify your installation:
kubectl get pods --all-namespaces -l app=ingress-nginx --watch
You should see something like:
NAMESPACE       NAME                                       READY   STATUS    RESTARTS   AGE
ingress-nginx   nginx-ingress-controller-699cdf846-nj2rw   1/1     Running   0          1h
Check available services and their parameters:
kubectl get services --all-namespaces
If you are using custom service provider deployment (minikube, AWS, Azure, GKE), follow Nginx Ingress documentation for installation details.
See official Kubernetes Ingress documentation for details about Ingress.
I was using the microk8s default nginx ingress controller on a later version of k8s (> 1.18) and I noticed this specific annotation was causing me an issue:
kubernetes.io/ingress.class: "nginx"
It's present in a lot of older documentation and examples, but it's apparently deprecated (see https://kubernetes.io/docs/concepts/services-networking/ingress/), and I had also defined an ingressClassName of "public" using the newer spec field. I'm not sure if it was the conflict between the two that caused the issue, but once I removed the deprecated annotation my address appeared.
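For reference, a minimal sketch of the newer form (assuming an IngressClass named "public", which is what the microk8s ingress addon registers; resource names are illustrative):
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-ingress
spec:
  ingressClassName: public
  rules:
  - http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: my-service
            port:
              number: 80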
For your ingress resource (fanout-nginx-ingress) to work, you first need to deploy an ingress controller, which by default does not come with your local kubernetes cluster. You need to deploy it yourself.
There are many solutions out there and you can use any of them; the nginx ingress controller is fine.
For detail information you can refer a great video on ingress by Mumshad Mannambeth here:
https://www.youtube.com/watch?v=GhZi4DxaxxE

Kubernetes: Edit Heapster deployment in kube-system namespace

I use the Kubernetes system provided by Google Container Engine.
When I try to edit the Heapster deployment to add an influxdb sink:
kubectl --namespace=kube-system edit deployment/heapster-v1.2.0.1
Everything works well, I get the following response:
deployment "heapster-v1.2.0.1" edited
The pods are recreated successfully but after about 1 minute, the deployment is reverted to its default value.
I thought maybe it was the sink configuration line that I added that was the problem, so I tried to edit the deployment again, but just to add a simple label:
labels:
  k8s-app: heapster
  kubernetes.io/cluster-service: "true"
  version: v1.2.0.1
  test: test
Same thing happens, the deployment is reverted to its original config after 1 minute. I type the following command and the label is gone:
kubectl --namespace=kube-system get -o yaml deployment/heapster-v1.2.0.1
Does anyone have an idea how I can edit this Heapster deployment?
Thanks