Kubernetes Ingress issue on bare metal - kubernetes

I am new to Kubernetes. I installed a 3-node k8s cluster (one master and two worker nodes) through kubeadm on my personal laptop, on top of VMware Workstation.
I deployed the nginx ingress controller from the URL below, and the ingress pods seem to be working fine. I then deployed an httpd pod, a service, and an ingress pointing at the HTTP server, but I am not able to reach the HTTP URL. All files are pasted below.
I did not deploy any load balancer (HAProxy/MetalLB), and I am unsure whether a load balancer or proxy is required to make ingress work on a bare-metal multi-node cluster.
# nginx-ingress
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v0.44.0/deploy/static/provider/baremetal/deploy.yaml
[root@kube-master01 ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
kube-master01 Ready master 197d v1.19.0
kube-node01.example.com Ready worker 197d v1.19.0
kube-node02.example.com Ready worker 197d v1.19.0
[root@kube-master01 ~]# kubectl -n ingress-nginx get pods
NAME READY STATUS RESTARTS AGE
ingress-nginx-admission-create-5zcd5 0/1 Completed 0 41h
ingress-nginx-controller-67897c9494-pt5nl 1/1 Running 0 3h4m
[root@minikube01 httpd]# cat httpd-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: httpd-deployment
  labels:
    app: http-server
spec:
  replicas: 1
  selector:
    matchLabels:
      app: http-server
  template:
    metadata:
      labels:
        app: http-server
    spec:
      containers:
      - name: http-server
        image: httpd
        ports:
        - containerPort: 80
[root@minikube01 httpd]# cat httpd-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: httpd-service
spec:
  selector:
    app: http-server
  ports:
  - protocol: TCP
    port: 8081
    targetPort: 80
[root@minikube01 httpd]# cat httpd-ingress.yaml
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: httpd-ingress
spec:
  rules:
  - host: httpd.com
    http:
      paths:
      - backend:
          serviceName: httpd-service
          servicePort: 8081
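As a side note, the networking.k8s.io/v1beta1 Ingress API used above is deprecated and was removed in Kubernetes 1.22; on newer clusters the same ingress would be written against networking.k8s.io/v1 (a sketch of the equivalent manifest):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: httpd-ingress
spec:
  rules:
  - host: httpd.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: httpd-service
            port:
              number: 8081
```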
The same files work fine on a minikube node without any issues.
Any assistance is appreciated.
Thanks in advance,
Niru

Related

Why are requests to the deployment not working via the service and via the ingress?

I installed minikube v1.29.0 on macOS.
I created an API endpoint with Flask and built it into a Docker image:
FROM debian:latest
COPY . /app
WORKDIR /app
RUN pip3 install --no-cache-dir -r requirements.txt
CMD ["uwsgi", "--socket", "0.0.0.0:5001", "--protocol=http", "-w", "wsgi:app", "--ini", "wsgi.ini"]
then loaded the Docker image into minikube:
minikube image load drnoreg/devops_blog:0.0.1
and checked minikube:
% minikube image ls
docker.io/drnoreg/devops_blog:0.0.1
Then I created the deployment, service and ingress YAML (app.yaml):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: devops-blog
spec:
  selector:
    matchLabels:
      run: devops-blog
  replicas: 1
  template:
    metadata:
      labels:
        run: devops-blog
    spec:
      containers:
      - name: devops-blog
        image: docker.io/drnoreg/devops_blog:0.0.1
        ports:
        - name: pod-port
          containerPort: 5001
---
apiVersion: v1
kind: Service
metadata:
  name: devops-blog
  labels:
    run: devops-blog
spec:
  type: NodePort
  ports:
  - name: pod-port
    port: 5001
    targetPort: 5001
    protocol: TCP
    nodePort: 30001
  selector:
    run: devops-blog
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: devops-blog
  namespace: devops-blog
spec:
  rules:
  - host: devops-blog.cluster.local
    http:
      paths:
      - pathType: ImplementationSpecific
        backend:
          service:
            name: devops-blog
            port:
              number: 5001
Then I create the namespace:
kubectl create namespace devops-blog
set the current namespace:
kubectl config set-context --current --namespace=devops-blog
and create the deployment, service and ingress:
kubectl create -f app.yaml
Afterwards I try port-forwarding to check that the Flask API works:
kubectl port-forward devops-blog-f666d8cd7-njp95 5001:5001
Forwarding from 127.0.0.1:5001 -> 5001
Forwarding from [::1]:5001 -> 5001
Handling connection for 5001
Handling connection for 5001
The Flask API service in minikube is working:
% kubectl get service -n devops-blog -o wide
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
devops-blog NodePort 10.99.37.126 <none> 5001:30001/TCP 45s run=devops-blog
% kubectl get pod -n devops-blog -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
devops-blog-f666d8cd7-b9n7j 1/1 Running 0 57s 10.244.0.34 minikube <none> <none>
% kubectl get node -n devops-blog -o wide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
minikube Ready control-plane 16h v1.26.1 192.168.49.2 <none> Ubuntu 20.04.5 LTS 5.10.47-linuxkit docker://20.10.23
Now I try to reach the API via the minikube node IP and NodePort:
% telnet 192.168.49.2 30001
Trying 192.168.49.2...
It is not working. I add this to /etc/hosts:
127.0.0.1 devops-blog.cluster.local
and try to reach the API via the minikube ingress:
% telnet devops-blog.cluster.local 80
Trying 127.0.0.1...
telnet: connect to address 127.0.0.1: Connection refused
That is not working either.
Why are requests to the deployment failing both via the service and via the ingress?
How can I solve this problem?
In case you did not enable the ingress addon, enable it by executing the following command:
$ minikube addons enable ingress
Instead of a NodePort service, try a ClusterIP service for the app; when you create the ingress, set this service as the backend, like this:
service.yaml
apiVersion: v1
kind: Service
metadata:
  name: devops-blog
  labels:
    run: devops-blog
spec:
  type: ClusterIP
  ports:
  - name: pod-port
    port: 5001
    targetPort: 5001
    protocol: TCP
  selector:
    run: devops-blog
ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: devops-blog
  namespace: devops-blog
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/ssl-redirect: "false" # since you are using localhost
spec:
  rules:
  - host: devops-blog.cluster.local
    http:
      paths:
      - pathType: ImplementationSpecific
        backend:
          service:
            name: devops-blog
            port:
              number: 5001
        path: /
Once the ingress has been assigned an IP, try opening it in a local browser at http://devops-blog.cluster.local/, or curl it: curl http://devops-blog.cluster.local/.
Note: in case you deploy this app in the cloud, try a LoadBalancer type service.
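Such a LoadBalancer variant might look like this (a sketch only; it reuses the selector and container port from the manifests above, the external port 80 is an assumption, and the cloud provider assigns the external IP):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: devops-blog
  namespace: devops-blog
spec:
  type: LoadBalancer   # the cloud provider provisions an external load balancer
  selector:
    run: devops-blog
  ports:
  - port: 80           # external port (an assumption)
    targetPort: 5001   # the Flask/uwsgi container port
```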
Try this tutorial, as it explains the setup in detail.

Bare-metal k8s ingress with nginx-ingress

I can't apply an ingress configuration.
I need to access a jupyter-lab service by its DNS name:
http://jupyter-lab.local
It is deployed to a 3-node bare-metal k8s cluster:
node1.local (master)
node2.local (worker)
node3.local (worker)
Flannel is installed as the network controller.
I've installed nginx ingress for bare metal like this:
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v0.44.0/deploy/static/provider/baremetal/deploy.yaml
When deployed, the jupyter-lab pod runs on node2, and the NodePort service responds correctly at http://node2.local:30004 (see below).
I expect the ingress-nginx controller to expose the ClusterIP service under its DNS name ...... that's what I need. Is that wrong?
This is the ClusterIP service, defined with symmetrical ports (8888 to 8888) to be as simple as possible (is that wrong?):
---
apiVersion: v1
kind: Service
metadata:
  name: jupyter-lab-cip
  namespace: default
spec:
  type: ClusterIP
  ports:
  - port: 8888
    targetPort: 8888
  selector:
    app: jupyter-lab
The DNS name jupyter-lab.local resolves to the IP address range of the cluster, but the request times out with no response: Failed to connect to jupyter-lab.local port 80: No route to host.
firewall-cmd --list-all shows that port 80 is open on each node.
This is the ingress definition for HTTP into the cluster (any node) on port 80 (is that wrong?):
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: jupyter-lab-ingress
  annotations:
    # nginx.ingress.kubernetes.io/rewrite-target: /
    nginx.ingress.kubernetes.io: /
spec:
  rules:
  - host: jupyter-lab.local
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: jupyter-lab-cip
            port:
              number: 80
This is the deployment:
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: jupyter-lab-dpt
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: jupyter-lab
  template:
    metadata:
      labels:
        app: jupyter-lab
    spec:
      volumes:
      - name: jupyter-lab-home
        persistentVolumeClaim:
          claimName: jupyter-lab-pvc
      containers:
      - name: jupyter-lab
        image: docker.io/jupyter/tensorflow-notebook
        ports:
        - containerPort: 8888
        volumeMounts:
        - name: jupyter-lab-home
          mountPath: /var/jupyter-lab_home
        env:
        - name: "JUPYTER_ENABLE_LAB"
          value: "yes"
I can successfully access jupyter-lab via its NodePort at http://node2:30004 with this definition:
---
apiVersion: v1
kind: Service
metadata:
  name: jupyter-lab-nodeport
  namespace: default
spec:
  type: NodePort
  ports:
  - port: 10003
    targetPort: 8888
    nodePort: 30004
  selector:
    app: jupyter-lab
How can I get ingress to my jupyter-lab at http://jupyter-lab.local?
The command kubectl get endpoints -n ingress-nginx ingress-nginx-controller-admission returns:
ingress-nginx-controller-admission 10.244.2.4:8443 15m
Am I misconfiguring ports?
Are my selector/app definitions wrong?
Am I missing a part?
How can I debug what's going on?
Other details:
I was getting this error when applying an ingress with kubectl apply -f default-ingress.yml:
Error from server (InternalError): error when creating "minnimal-ingress.yml": Internal error occurred: failed calling webhook "validate.nginx.ingress.kubernetes.io": Post "https://ingress-nginx-controller-admission.ingress-nginx.svc:443/networking/v1beta1/ingresses?timeout=10s": context deadline exceeded
This command: kubectl delete validatingwebhookconfigurations --all-namespaces
removed the validating webhook ... was that wrong to do?
I've opened port 8443 on each node in the cluster.
The Ingress is invalid; try the following:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: jupyter-lab-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - host: jupyter-lab.local
    http: # <- removed the -
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            # name: jupyter-lab-cip
            name: jupyter-lab-nodeport
            port:
              number: 8888
---
apiVersion: v1
kind: Service
metadata:
  name: jupyter-lab-cip
  namespace: default
spec:
  type: ClusterIP
  ports:
  - port: 8888
    targetPort: 8888
  selector:
    app: jupyter-lab
If I understand correctly, you are trying to expose jupyter-lab through the nginx ingress proxy and make it accessible on port 80.
Run the following command to check which NodePort the nginx ingress service uses:
$ kubectl get svc -n ingress-nginx ingress-nginx-controller
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
ingress-nginx-controller NodePort 10.96.240.73 <none> 80:30816/TCP,443:31475/TCP 3h30m
In my case that is port 30816 (for HTTP) and 31475 (for HTTPS).
With the NodePort type you can only use ports in the range 30000-32767 (k8s docs: https://kubernetes.io/docs/concepts/services-networking/service/#nodeport). You can change this with the kube-apiserver flag --service-node-port-range, e.g. setting it to 80-32767, and then set nodePort: 80 in your ingress-nginx-controller service:
apiVersion: v1
kind: Service
metadata:
  annotations: {}
  labels:
    app.kubernetes.io/component: controller
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/version: 0.44.0
    helm.sh/chart: ingress-nginx-3.23.0
  name: ingress-nginx-controller
  namespace: ingress-nginx
spec:
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: http
    nodePort: 80 # <- HERE
  - name: https
    port: 443
    protocol: TCP
    targetPort: https
    nodePort: 443 # <- HERE
  selector:
    app.kubernetes.io/component: controller
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/name: ingress-nginx
  type: NodePort
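On a kubeadm cluster, the --service-node-port-range flag mentioned above is typically added to the API server's static pod manifest (a sketch; the file path assumes a default kubeadm layout):

```yaml
# /etc/kubernetes/manifests/kube-apiserver.yaml (excerpt)
spec:
  containers:
  - command:
    - kube-apiserver
    - --service-node-port-range=80-32767   # widen the allowed NodePort range
    # ... leave the remaining flags unchanged; the kubelet restarts the
    # apiserver automatically when this file changes
```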
That said, changing service-node-port-range is generally not advised, since you may encounter issues if you use ports that are already open on the nodes (e.g. port 10250, which the kubelet opens on every node).
What might be a better solution is to use MetalLB.
EDIT:
How can I get ingress to my jupyter-lab at http://jupyter-lab.local?
Assuming you don't need a failure-tolerant solution, download the https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v0.44.0/deploy/static/provider/baremetal/deploy.yaml file and change the ports: section of the Deployment object as follows:
ports:
- name: http
  containerPort: 80
  hostPort: 80 # <- add this line
  protocol: TCP
- name: https
  containerPort: 443
  hostPort: 443 # <- add this line
  protocol: TCP
- name: webhook
  containerPort: 8443
  protocol: TCP
and apply the changes:
kubectl apply -f deploy.yaml
Now run:
$ kubectl get po -n ingress-nginx ingress-nginx-controller-<HERE PLACE YOUR HASH> -owide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
ingress-nginx-controller-67897c9494-c7dwj 1/1 Running 0 97s 172.17.0.6 <node_name> <none> <none>
Notice the <node_name> in the NODE column. This is the node where the pod got scheduled. Now take this node's IP and add it to your /etc/hosts file.
It should work now (go to http://jupyter-lab.local to check), but this solution is fragile: if the nginx ingress controller pod gets rescheduled to another node, it will stop working (and stay broken until you change the IP in the /etc/hosts file). It is also generally not advised to use the hostPort: field unless you have a very good reason to do so, so don't abuse it.
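One way to make the hostPort approach slightly less fragile (still not failure tolerant) is to pin the controller to one node with a nodeSelector, so the IP in /etc/hosts stays valid; the hostname below is an assumption and must match one of your nodes:

```yaml
# excerpt of the ingress-nginx controller Deployment pod template
spec:
  template:
    spec:
      nodeSelector:
        kubernetes.io/hostname: node2.local   # assumed node name from the question
```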
If you need failure tolerant solution, use MetalLB and create a service of type LoadBalancer for nginx ingress controller.
I haven't tested it, but the following should do the job, assuming that you have configured MetalLB correctly:
kubectl delete svc -n ingress-nginx ingress-nginx-controller
kubectl expose deployment -n ingress-nginx ingress-nginx-controller --type LoadBalancer
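For completeness, a minimal MetalLB layer-2 configuration from that era might look like this (a sketch for the ConfigMap-based MetalLB releases, e.g. v0.9.x; newer versions use IPAddressPool/L2Advertisement CRDs instead, and the address range below is an assumption you must replace with free IPs on your LAN):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 192.168.1.240-192.168.1.250   # assumed free range on the local network
```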

How to access a service on an rpi k8s cluster

I built a k8s cluster with the help of this guide: rpi+k8s. I got a basic nginx service up and running, and I can curl from the master node to the worker node and get the nginx welcome page content using:
k exec nginx-XXX-XXX -it -- curl localhost:80
I tried following the suggestions in these SO posts:
link 1
link 2
However, I still can't access a simple nginx service on the worker node from my local (Linux) computer. I used NODE_IP:NODE_PORT. I also installed kubefwd and ran sudo kubefwd svc -n nginx-ns, but I don't see the expected output listing the port forwards. Any help would be appreciated. Thanks.
Output:
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/nginx-svc NodePort 10.101.19.230 <none> 80:32749/TCP 168m
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/nginx 3/3 3 3 168m
NAME DESIRED CURRENT READY AGE
replicaset.apps/nginx-54485b444f 3 3 3 168m
And here is the yaml file:
kind: Namespace
apiVersion: v1
metadata:
  name: nginx-ns
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
  namespace: nginx-ns
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 3
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.19-alpine
        ports:
        - name: nginxport
          containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-svc
  namespace: nginx-ns
  labels:
    app: nginx
spec:
  selector:
    app: nginx
  ports:
  - protocol: TCP
    name: nginxport
    port: 80
    targetPort: 80
    nodePort: 32749
  type: NodePort
  selector:
    app: backend
You need to update your service nginx-svc, where you have two selector blocks.
Remove this part:
selector:
  app: backend
Updated service.yaml:
apiVersion: v1
kind: Service
metadata:
  name: nginx-svc
  namespace: nginx-ns
  labels:
    app: nginx
spec:
  selector:
    app: nginx
  ports:
  - protocol: TCP
    name: nginxport
    port: 80
    targetPort: 80
    nodePort: 32749
  type: NodePort
Then try port-forwarding:
kubectl port-forward -n nginx-ns svc/nginx-svc 8080:80
The template is:
kubectl port-forward -n <namespace> svc/<svc_name> <local_port>:<svc_port>
Then open 127.0.0.1:8080 or localhost:8080 in the browser.
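Once the duplicate selector is removed, the NodePort route should also work from the local machine; a sketch (the node IP is a placeholder, take a real one from the INTERNAL-IP column):

```shell
kubectl get nodes -o wide        # read a node's INTERNAL-IP
curl http://<node-ip>:32749      # 32749 is the nodePort from nginx-svc
```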

Unable to connect to pod using ingress

I am trying to run a Python Flask application on port 5000 in Kubernetes. I have created the deployment, service and ingress. It does not work using the domain name that I added to the hosts file, but the Python application does work when I try port-forwarding.
I have tried many configuration changes, but nothing worked.
Please let me know your suggestions.
kind: Deployment
metadata:
  name: web-app
  namespace: production
  labels:
    app: web-app
    platform: python
spec:
  replicas:
  selector:
    matchLabels:
      app: web-app
  template:
    metadata:
      labels:
        app: web-app
    spec:
      containers:
      - name: web-app
        image: XXXXXX/XXXXXX:XXXXXX
        imagePullPolicy: Always
        ports:
        - containerPort: 5000
---
apiVersion: v1
kind: Service
metadata:
  name: web-app
  namespace: production
spec:
  selector:
    app: web-app
  ports:
  - protocol: TCP
    port: 5000
    targetPort: 5000
  selector:
    run: web-app
---
kind: Ingress
metadata:
  name: name-virtual-host-ingress
  namespace: production
spec:
  rules:
  - host: first.bar.com
    http:
      paths:
      - backend:
          serviceName: web-app
          servicePort: 5000
kubectl get all -n production
NAME READY STATUS RESTARTS AGE
pod/web-app-559df5fc4-67nbn 1/1 Running 0 24m
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/web-app ClusterIP 10.100.122.15 <none> 5000/TCP 24m
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
deployment.apps/web-app 1 1 1 1 24m
NAME DESIRED CURRENT READY AGE
replicaset.apps/web-app-559df5fc4 1 1 1 24m
kubectl get ing -n production
NAME HOSTS ADDRESS PORTS AGE
name-virtual-host-ingress first.bar.com 80 32s
kubectl get ep web-app -n production
NAME ENDPOINTS AGE
web-app <none> 23m
You need to run an Ingress Controller. The Prerequisites section of https://kubernetes.io/docs/concepts/services-networking/ingress/#prerequisites says:
You must have an ingress controller to satisfy an Ingress. Only creating an Ingress resource has no effect.
One example would be https://kubernetes.github.io/ingress-nginx/deploy/. Be sure to run the mandatory command and the one that pertains to your provider. You can then get the service to see the assigned IP:
kubectl get -n ingress-nginx svc/ingress-nginx
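After running the install manifests, it may help to verify that the controller actually came up before retrying the Ingress (a generic sanity check; the namespace can differ depending on the install method):

```shell
kubectl get pods -n ingress-nginx   # controller pod should be Running
kubectl get svc -n ingress-nginx    # shows the controller service and its IP/ports
```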

Is it possible to point a GKE k8s ingress at a LB backend?

The way I have my services set up is the following:
deployment (2 pods) -> load balancer service routing to this deployment -> ingress terminating HTTPS, pointing at the load balancer as the backend.
So far it serves the correct cert, but for some reason it points to the "wrong" backend. The GKE web console just says my backend services are unhealthy, and once I click on them they don't exist. What am I doing wrong here?
[stupifatcatslaptop poc (dev)]$ kubectl get pods -o wide | grep my_project
my_project-flask-poc-696f7b57c5-54n6r 1/1 Running 0 13d 10.236.1.228 gke-qus1-shared-1-prod-default-pool-44da43de-vq4c
my_project-flask-poc-696f7b57c5-m57h7 1/1 Running 0 13d 10.236.0.16 gke-qus1-shared-1-prod-default-pool-b27de1c2-2h63
[stupifatcatslaptop poc (dev)]$ kubectl get services | grep my_project
my_project-flask-poc-lb LoadBalancer {internal_ip_0} {internal_ip_1} 8080:32133/TCP 33d
[stupifatcatslaptop poc (dev)]$ kubectl get ingress
NAME HOSTS ADDRESS PORTS AGE
my_project-flask-poc-ingress my_project-flask-poc.mydomain.com {external_ip} 80, 443 1d
This is my ingress yaml file
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my_project-flask-poc-ingress
spec:
  tls:
  - secretName: my_project-poc-tls
  rules:
  - host: my_project-flask-poc.mydomain.com
    http:
      paths:
      - backend:
          serviceName: my_project-flask-poc-lb
          servicePort: 8080
deployment yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: my_project-flask-poc
  labels:
    app: my_project-flask-poc
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: my_project-flask-poc
    spec:
      containers:
      - name: my_project-flask-poc
        image: gcr.io/myprojectid/my_project-flask-poc
        ports:
        - containerPort: 8080
        volumeMounts:
        - name: secrets
          mountPath: "/etc/secrets"
          readOnly: true
      volumes:
      - name: secrets
        secret:
          secretName: my_project-secret-poc
lb service yaml
apiVersion: v1
kind: Service
metadata:
  name: my_project-flask-poc-lb
  annotations:
    cloud.google.com/load-balancer-type: "Internal"
spec:
  type: LoadBalancer
  loadBalancerIP: {someinternalip}
  selector:
    app: my_project-flask-poc
  ports:
  - protocol: TCP
    port: 8080
    targetPort: 8080
When it comes to GKE, only the GCE ingress type manages your SSL certificates; hence, it is the only option with LB-level SSL termination.
For a Kubernetes Service of type LoadBalancer, you will find that a Network Load Balancer is attached to the cluster. For this type of load balancer, SSL termination must be handled in the backend.
This is because SSL certificates are managed by layer-7 applications, while the Network Load Balancer works at layer 4, as pointed out in a previously shared answer.
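A pattern that usually works with the GCE ingress class is to back the Ingress with a separate NodePort service instead of the internal LoadBalancer, since the GCE ingress controller builds its GCP backend services from NodePorts; a sketch reusing the question's selector (the service name is new and assumed):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my_project-flask-poc-np   # assumed name for the ingress backend
spec:
  type: NodePort                  # GCE ingress needs NodePort (or NEG) backends
  selector:
    app: my_project-flask-poc
  ports:
  - protocol: TCP
    port: 8080
    targetPort: 8080
```

The ingress's serviceName would then reference this NodePort service rather than the internal LoadBalancer.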