HAProxy ingress controller - Kubernetes

I have installed the ingress controller via Helm as a DaemonSet. I have configured the Ingress as follows:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: webapp-ingress
  namespace: rcc
  annotations:
    haproxy.org/check: 'true'
    haproxy.org/check-http: /serviceCheck
    haproxy.org/check-interval: 5s
    haproxy.org/cookie-persistence: SERVERID
    haproxy.org/forwarded-for: 'true'
    haproxy.org/load-balance: leastconn
    kubernetes.io/ingress.class: haproxy
spec:
  rules:
  - host: example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: webapp-frontend
            port:
              number: 8080
kubectl get ingress -n rcc
Warning: extensions/v1beta1 Ingress is deprecated in v1.14+, unavailable in v1.22+; use networking.k8s.io/v1 Ingress
NAME CLASS HOSTS ADDRESS PORTS AGE
webapp-ingress <none> example.com 10.110.186.170 80 11h
The service type chosen was LoadBalancer.
I can ping the ingress IP address from any node and can curl it on port 80 just fine. I can also browse any of the ingress pods' IP addresses from a node just fine. But when I browse a node IP on port 80, I get connection refused. Is there anything I am missing here?

I installed the latest HAProxy ingress, which is version 0.13.4, using Helm.
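For reference, the install was roughly the following (a sketch, not the exact command used: the chart repo URL is the haproxy-ingress project's published charts, and the release name and namespace flags are assumptions):
$ helm repo add haproxy-ingress https://haproxy-ingress.github.io/charts
$ helm install haproxy-ingress haproxy-ingress/haproxy-ingress --namespace ingress-haproxy --create-namespace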
By default it is installed with the LoadBalancer service type:
$ kubectl get svc -n ingress-haproxy
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
haproxy-ingress LoadBalancer 10.102.166.149 <pending> 80:30312/TCP,443:32524/TCP 3m45s
Since I have the same kind of kubeadm cluster, the EXTERNAL-IP stays pending. And as you correctly mentioned in the question, the CLUSTER-IP is accessible on the nodes when the cluster is set up using kubeadm.
There are two options for accessing your ingress:
Using NodePort:
From the output above, there's a NodePort 30312 for internally exposed port 80. Therefore, from outside the cluster it should be accessed as NODE_IP:NODE_PORT:
curl NODE_IP:30312 -IH "Host: example.com"
HTTP/1.1 200 OK
Set up MetalLB:
Follow the installation guide; the second step is to configure MetalLB. I use layer 2 mode. Be careful to assign an IP range that is not already in use!
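For illustration, a minimal layer 2 configuration looked like this at the time (a sketch: the address range is a placeholder chosen to match the EXTERNAL-IP shown below; newer MetalLB releases, v0.13+, configure this through IPAddressPool and L2Advertisement CRDs instead of a ConfigMap):
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 172.16.1.240-172.16.1.250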
After installing and setting up MetalLB, my HAProxy service now has an EXTERNAL-IP:
$ kubectl get svc -n ingress-haproxy
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
haproxy-ingress LoadBalancer 10.102.166.149 172.16.1.241 80:30312/TCP,443:32524/TCP 10m
And now I can access ingress by EXTERNAL-IP on port 80:
curl 172.16.1.241 -IH "Host: example.com"
HTTP/1.1 200 OK
Useful to read:
Kubernetes service types

Related

Can you expose your local minikube cluster to be accessible from a browser without editing etc/hosts?

I am following this tutorial on how to expose your local cluster for external access.
I only need to be able to check my application from a browser, without exposing the app to the Internet.
> kubectl get service web
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
web NodePort 10.98.217.114 <none> 8080:32718/TCP 10m
> minikube service web --url
http://192.168.49.2:32718
I followed the guide until the /etc/hosts part. I set up the ingress:
> kubectl get ingress
NAME CLASS HOSTS ADDRESS PORTS AGE
example-ingress nginx hello-world.info 192.168.49.2 80 96s
For various reasons I cannot edit the /etc/hosts file on my Windows machine; it says another process is using it. However, neither 192.168.49.2 nor http://192.168.49.2:32718 returns anything in the browser, and neither does curl 192.168.49.2 (with or without :32718). I don't think that failure should be expected: the hosts file merely maps hello-world.info to the IP, so I should be able to access my app with just the IP. What am I missing here?
kubectl v1.24.1 (kustomize v4.5.4, server v1.23.3), Minikube v1.25.2, Windows 10, Minikube with the Docker driver.
Okay, my solution to the problem was this: port-forward to the ingress-controller pod (not to the Ingress object itself, because that doesn't seem to be possible).
Sample ingress file for a service named "web":
# example-ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: web
            port:
              number: 8080
Run it with
kubectl apply -f example-ingress.yaml
Check that it's running
> kubectl get ingress
NAME CLASS HOSTS ADDRESS PORTS AGE
example-ingress nginx * 192.168.49.2 80 23h
If you ssh into minikube (minikube ssh), you can curl 192.168.49.2:80 and it returns the proper output.
Output of the nginx-controller pods:
> kubectl get pod -n ingress-nginx
NAME                                       READY   STATUS      RESTARTS      AGE
ingress-nginx-admission-create-56gbc       0/1     Completed   0             46h
ingress-nginx-admission-patch-fqf92        0/1     Completed   0             46h
ingress-nginx-controller-cc8496874-7znt5   1/1     Running     4 (39m ago)   46h
Forward port to it:
> kubectl port-forward ingress-nginx-controller-cc8496874-7znt5 8080:80 -n ingress-nginx
Then check out localhost:8080. If it returns an nginx 404, then your Ingress.yaml setup is probably wrong. Otherwise it works for me.
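As a side note (not part of the original answer), kubectl port-forward also accepts a Service target, which avoids copying the controller pod's generated name:
> kubectl port-forward -n ingress-nginx svc/ingress-nginx-controller 8080:80
The service name ingress-nginx-controller is what the stock ingress-nginx manifests create; verify it with kubectl get svc -n ingress-nginx.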

Minikube with ingress example not working

I'm trying to get an ingress controller working in Minikube and am following the steps in the K8s documentation here, but am seeing a different result in that the IP address for the ingress controller is different than that for Minikube (the example seems to indicate they should be the same):
$ kubectl get ingress
NAME HOSTS ADDRESS PORTS AGE
example-ingress hello-world.info 10.0.2.15 80 12m
$ minikube ip
192.168.99.101
When I try to connect to the Minikube IP address (using the address directly vs. adding it to my local hosts file), I'm getting a "Not found" response from NGINX:
$ curl http://`minikube ip`/
<html>
<head><title>404 Not Found</title></head>
<body>
<center><h1>404 Not Found</h1></center>
<hr><center>openresty/1.15.8.1</center>
</body>
</html>
When I try to connect to the IP address associated with the ingress controller, it just hangs.
Should I expect the addresses to be the same as the K8s doc indicates?
Some additional information:
$ kubectl get nodes -o wide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
minikube Ready master 2d23h v1.16.0 10.0.2.15 <none> Buildroot 2018.05.3 4.15.0 docker://18.9.9
$ kubectl get ingresses example-ingress -o yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"networking.k8s.io/v1beta1","kind":"Ingress","metadata":{"annotations":{"nginx.ingress.kubernetes.io/rewrite-target":"/$1"},"name":"example-ingress","namespace":"default"},"spec":{"rules":[{"host":"hello-world.info","http":{"paths":[{"backend":{"serviceName":"web","servicePort":8080},"path":"/"}]}}]}}
    nginx.ingress.kubernetes.io/rewrite-target: /$1
  creationTimestamp: "2019-10-28T15:36:57Z"
  generation: 1
  name: example-ingress
  namespace: default
  resourceVersion: "25609"
  selfLink: /apis/extensions/v1beta1/namespaces/default/ingresses/example-ingress
  uid: 5e96c378-fbb1-4e8f-9738-3693cbce7d9b
spec:
  rules:
  - host: hello-world.info
    http:
      paths:
      - backend:
          serviceName: web
          servicePort: 8080
        path: /
status:
  loadBalancer:
    ingress:
    - ip: 10.0.2.15
Here’s what worked for me:
minikube start
minikube addons enable ingress
minikube addons enable ingress-dns
Wait until you see that ingress-nginx-controller-XXXX is up and running, using kubectl get pods -n ingress-nginx
Create an ingress using the K8s example yaml file
Update the service section to point to the NodePort service that you already created
Append 127.0.0.1 hello-world.info to your /etc/hosts file on macOS (NOTE: do NOT use the Minikube IP)
Run minikube tunnel (keep the window open; after you enter the password there will be no more messages, and the cursor just blinks)
Hit hello-world.info (or whatever host you configured in the yaml file) in a browser and it should work
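Condensed into commands, the steps above look roughly like this (a sketch; the ingress yaml itself comes from the K8s example linked in the guide):
$ minikube start
$ minikube addons enable ingress
$ minikube addons enable ingress-dns
$ kubectl get pods -n ingress-nginx        # wait until the controller is Running
$ echo "127.0.0.1 hello-world.info" | sudo tee -a /etc/hosts
$ minikube tunnel                          # keep this terminal open
$ curl http://hello-world.info/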
I've reproduced your scenario in a Linux environment (on GCP) and I also get different IPs:
user@bf:~$ minikube ip
192.168.39.144
user@bf:~$ kubectl get ingresses
NAME HOSTS ADDRESS PORTS AGE
example-ingress * 192.168.122.173 80 30m
Your problem is not related to the fact that you have different IPs. The guide instructs us to create an ingress with the following rule:
spec:
  rules:
  - host: hello-world.info
This rule tells the ingress service that requests for the DNS name hello-world.info are expected.
If you follow the guide a bit further, it instructs you to create an entry in your hosts file pointing that name to your ingress IP or Minikube IP.
Note: If you are running Minikube locally, use minikube ip to get the external IP. The IP address displayed within the ingress list will be the internal IP.
Source: Set up Ingress on Minikube with the NGINX Ingress Controller
(If you want to curl the IP instead of the DNS name, you need to remove the host rule from your ingress.)
It should look like this:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: example-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - http:
      paths:
      - path: /
        backend:
          serviceName: web
          servicePort: 8080
Apply your changes:
user@bf:~$ kubectl apply -f example-ingress.yaml
And curl the IP using the -Lk options to bypass problems related to redirects and secure connections:
user@bf:~$ curl -Lk 192.168.39.144
Hello, world!
Version: 1.0.0
Hostname: web-9bbd7b488-l5gc9
In addition to the accepted answer: minikube now has a tunnel command which allows you to generate external IP addresses for your services, so they can be accessed directly on your host machine without using the general minikube ip.
Run minikube tunnel in a separate terminal; it runs in the foreground and keeps running.
In a different terminal, execute your kubectl apply -f <file_name> command to deploy your desired service. It should get an IP address that is routed directly to your service and available on port 80 at that address.
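For example (a sketch assuming the web deployment from the tutorial; the service name web-lb is an assumption):
> kubectl expose deployment web --type=LoadBalancer --port=80 --target-port=8080 --name=web-lb
> kubectl get svc web-lb --watch
While minikube tunnel is running in the other terminal, the EXTERNAL-IP column fills in instead of staying <pending>, and curl on that address reaches the service.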
More here on the minikube documentation: https://minikube.sigs.k8s.io/docs/tasks/loadbalancer/
I got Minikube on Windows 11 to work for me
minikube start --vm-driver=hyperv
Install minikube Ingress Controller
minikube addons enable ingress
minikube addons enable ingress-dns
Deploy Helm Chart
helm install ...
Get Kubernetes IP Address
nslookup <host-found-in-ingress> $(minikube ip)
Add to etc/hosts
<minikube-ip> <domain-url>
Live!
curl <domain-url>

Kubernetes Ingress-Service update IP

I have a Kubernetes Cluster running on Azure. I use the nginx-ingress to handle incoming requests. To set up the ingress I used the official guide https://kubernetes.github.io/ingress-nginx/deploy/#azure .
I also created a public static IP which I want to use for the Ingress.
Unfortunately, I'm not able to find the ingress service (generic-deployment.yaml). Also, my ingress is not describable.
How I installed Ingress:
$ sudo kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/static/mandatory.yaml
...
deployment.apps/nginx-ingress-controller created
$ sudo kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/static/provider/cloud-generic.yaml
service/ingress-nginx created
Additionally, I applied some routing configuration via ingress.yaml:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: myingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - http:
      paths:
      - path:
        backend:
          serviceName: app0-service
          servicePort: 80
      - path: /app1
        backend:
          serviceName: app1-service
          servicePort: 80
$ sudo kubectl apply -f ingress.yaml
ingress.extensions/myingress created
What confuses me
Unfortunately, I'm not able to find my ingress-nginx service.
$ sudo kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
app0-service ClusterIP 10.0.28.3 <none> 80/TCP 3m48s
app1-service ClusterIP 10.0.226.249 <none> 80/TCP 3m47s
kubernetes ClusterIP 10.0.0.1 <none> 443/TCP 39m
But my ingress is running:
$ sudo kubectl get ingress
NAME HOSTS ADDRESS PORTS AGE
myingress * 23.97.xxx.xxx 80 54m
In the browser, 23.97.xxx.xxx partly works.
1) If I proxy a domain name to 23.97.xxx.xxx, the domain in the browser gets rewritten to the IP.
2) If I browse directly to a subroute like 23.97.xxx.xxx/app1/page1, I always get the main page of app1.
I expected to get an IP from my ingress-nginx service, because I want to update this IP address by adding loadBalancerIP to the spec in cloud-generic.yaml
(like https://github.com/kubernetes/ingress-nginx/blob/master/docs/examples/static-ip/static-ip-svc.yaml).
Is the IP from my ingress the right one to use? And why can't I find my ingress service?
Looking at the service yaml at https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/static/provider/cloud-generic.yaml, you can see it gets created in the namespace ingress-nginx.
You should be able to get your service by running:
kubectl get service -n ingress-nginx
You can also get all services by running kubectl get service --all-namespaces.
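As for the static-IP part of the question: once you have found the service, the static-ip example you linked boils down to setting loadBalancerIP on that service. A sketch (the placeholder IP must be replaced with your Azure public static IP, and the selector labels must match the version of cloud-generic.yaml you applied):
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx
  namespace: ingress-nginx
spec:
  type: LoadBalancer
  loadBalancerIP: <YOUR_AZURE_STATIC_IP>
  selector:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
  ports:
  - name: http
    port: 80
    targetPort: http
  - name: https
    port: 443
    targetPort: https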

How to assign Public IP to Kubernetes's Ingress

I have deployed the Kong ingress controller using Helm, and I have a Kubernetes v1.10 cluster on CentOS 7. I am using a dedicated server from the OVH provider.
When I create an Ingress:
cat ingress.yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: jenkins
spec:
  backend:
    serviceName: jenkins
    servicePort: 8080
kubectl get ing
NAME HOSTS ADDRESS PORTS AGE
jenkins * 80 3s
kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
jenkins ClusterIP 10.254.104.80 <none> 8080/TCP 1d
Now I cannot access this Ingress from outside, because I am using an OVH server.
Is there a solution?
OVH is not officially supported by Kubernetes. If it were supported, then generally you would create a jenkins Service of type LoadBalancer, and that would be your externally facing endpoint with a public IP.
Since it's not supported, the next best thing is to create a NodePort service. That will create a service that listens on a specific port on all the Kubernetes nodes and forwards requests to your Pods. So, in this case, you will have to create an OVH load balancer with a public IP and point that load balancer's backend to the NodePort on which your Ingress service is listening.
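A sketch of such a NodePort service (the name kong-proxy-nodeport, the selector app: kong, and the chosen nodePort are assumptions; match the selector to the labels your Kong Helm release uses, and note that Kong's proxy listens on port 8000 by default):
apiVersion: v1
kind: Service
metadata:
  name: kong-proxy-nodeport
spec:
  type: NodePort
  selector:
    app: kong
  ports:
  - name: proxy
    port: 80
    targetPort: 8000
    nodePort: 30080   # point the OVH load balancer at <node-ip>:30080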

Kubernetes + metallb + traefik: how to get real client ip?

traefik.toml:
defaultEntryPoints = ["http", "https"]

[entryPoints]
  [entryPoints.http]
    address = ":80"
    [entryPoints.http.forwardedHeaders]
      trustedIPs = ["0.0.0.0/0"]
    [entryPoints.http.redirect]
      entryPoint = "https"
  [entryPoints.https]
    address = ":443"
    [entryPoints.https.tls]
    [entryPoints.https.forwardedHeaders]
      trustedIPs = ["0.0.0.0/0"]

[api]
traefik Service:
kind: Service
apiVersion: v1
metadata:
  name: traefik-ingress-service
  namespace: kube-system
spec:
  selector:
    k8s-app: traefik-ingress-lb
  ports:
  - protocol: TCP
    port: 80
    name: http
  - protocol: TCP
    port: 443
    name: https
  type: LoadBalancer
Then:
kubectl run source-ip-app --image=k8s.gcr.io/echoserver:1.4
deployment "source-ip-app" created
kubectl expose deployment source-ip-app --name=clusterip --port=80 --target-port=8080
service "clusterip" exposed
kubectl get svc clusterip
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
clusterip ClusterIP 10.5.55.102 <none> 80/TCP 2h
Create ingress for clusterip:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: clusterip-ingress
  namespace: default
  annotations:
    kubernetes.io/ingress.class: traefik
spec:
  rules:
  - host: clusterip.staging
    http:
      paths:
      - backend:
          serviceName: clusterip
          servicePort: 80
clusterip.staging ip: 192.168.0.69
From another PC with IP 192.168.0.100:
wget -qO - clusterip.staging
and get results:
CLIENT VALUES:
client_address=10.5.65.74
command=GET
real path=/
query=nil
request_version=1.1
request_uri=http://clusterip.staging:8080/
SERVER VALUES:
server_version=nginx: 1.10.0 - lua: 10001
HEADERS RECEIVED:
accept=text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
accept-encoding=gzip, deflate, br
accept-language=ru-RU,ru;q=0.8,en-US;q=0.5,en;q=0.3
cache-control=max-age=0
host=clusterip.staging
upgrade-insecure-requests=1
x-forwarded-for=10.5.64.0
x-forwarded-host=clusterip.staging
x-forwarded-port=443
x-forwarded-proto=https
x-forwarded-server=traefik-ingress-controller-755cc56458-t8q9k
x-real-ip=10.5.64.0
BODY:
-no body in request-
kubectl get svc --all-namespaces
NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
default clusterip NodePort 10.5.55.102 <none> 80:31169/TCP 19h
default kubernetes ClusterIP 10.5.0.1 <none> 443/TCP 22d
kube-system kube-dns ClusterIP 10.5.0.3 <none> 53/UDP,53/TCP 22d
kube-system kubernetes-dashboard ClusterIP 10.5.5.51 <none> 443/TCP 22d
kube-system traefik-ingress-service LoadBalancer 10.5.2.37 192.168.0.69 80:32745/TCP,443:30219/TCP 1d
kube-system traefik-web-ui NodePort 10.5.60.5 <none> 80:30487/TCP 7d
How do I get the real IP (192.168.0.100) in my installation? Why is x-real-ip 10.5.64.0? I could not find the answers in the documentation.
When kube-proxy uses the iptables mode, it uses NAT to send data to the node where the payload runs, and the original source IP address is lost in that case.
As I understand it, you use MetalLB behind the Traefik ingress service (because its type is LoadBalancer). That means traffic from the client to the backend goes this way:
Client -> MetalLB -> Traefik LB -> Traefik Service -> Backend pod
Traefik works correctly and adds the x-* headers, including x-forwarded-for and x-real-ip, but they contain a rewritten address rather than the real client IP, and here is why:
From the Metallb documentation:
MetalLB understands the service’s externalTrafficPolicy option and implements different announcements modes depending on the policy and announcement protocol you select.
Layer2
This policy results in uniform traffic distribution across all pods in the service. However, kube-proxy will obscure the source IP address of the connection when it does load-balancing, so your pod logs will show that external traffic appears to be coming from the cluster’s leader node.
BGP
“Cluster” traffic policy
With the default Cluster traffic policy, every node in your cluster will attract traffic for the service IP. On each node, the traffic is subjected to a second layer of load-balancing (provided by kube-proxy), which directs the traffic to individual pods.
......
The other downside of the “Cluster” policy is that kube-proxy will obscure the source IP address of the connection when it does its load-balancing, so your pod logs will show that external traffic appears to be coming from your cluster’s nodes.
“Local” traffic policy
With the Local traffic policy, nodes will only attract traffic if they are running one or more of the service’s pods locally. The BGP routers will load-balance incoming traffic only across those nodes that are currently hosting the service. On each node, the traffic is forwarded only to local pods by kube-proxy, there is no “horizontal” traffic flow between nodes.
This policy provides the most efficient flow of traffic to your service. Furthermore, because kube-proxy doesn’t need to send traffic between cluster nodes, your pods can see the real source IP address of incoming connections.
Finally, the only way to get the real source IP address is to use the "Local" traffic policy (externalTrafficPolicy: Local).
If you set it up, you will get what you want.
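Applied to the Traefik service from the question, that is a one-line addition:
kind: Service
apiVersion: v1
metadata:
  name: traefik-ingress-service
  namespace: kube-system
spec:
  externalTrafficPolicy: Local   # preserve the real client source IP
  selector:
    k8s-app: traefik-ingress-lb
  ports:
  - protocol: TCP
    port: 80
    name: http
  - protocol: TCP
    port: 443
    name: https
  type: LoadBalancer
Keep in mind the trade-off from the quoted documentation: with "Local", only nodes that actually run a Traefik pod attract traffic.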