Adding a configuration to the HAProxy Ingress Controller is not working - haproxy

I have the config below that I need to add to the HAProxy Ingress controller on my Kubernetes cluster.
acl metrics path -i /metrics
use_backend httpback-default-backend if metrics
Basically, what I want is that for all Ingress hosts (URLs) served by the controller, if the path /metrics is accessed, the request is routed to the Ingress default backend and the user gets a 404 error.
In my standard HAProxy deployment I have the following ConfigMaps:
k get cm
NAME                                DATA   AGE
haproxy-configmap                   8      50d
haproxy-configmap-tcpservice        1      50d
haproxy-ingress                     0      50d
ingress-controller-leader-haproxy   0      50d
I've added my config to the haproxy-configmap ConfigMap in the config-frontend section:
apiVersion: v1
data:
  config-frontend: |
    capture request header Host len 32
    capture request header X-Request-ID len 64
    capture request header User-Agent len 200
    acl metrics path -i /metrics
    use_backend httpback-default-backend if metrics
Now I expect the /metrics endpoint to return a 404 error, but it seems I can still access it.
What am I missing here?
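One way to check whether the snippet is actually being rendered is to inspect the generated haproxy.cfg inside the controller pod; a rough sketch (the namespace, pod name and config path are assumptions and may differ in your setup):

# List the controller pods, then grep the rendered configuration for the new ACL
kubectl -n ingress-controller get pods
kubectl -n ingress-controller exec <haproxy-ingress-pod> -- \
  cat /etc/haproxy/haproxy.cfg | grep -A2 "acl metrics"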

Related

Is there a way to enable proxy-protocol on Ingress for only one service?

I have this service that limits IPs to 2 requests per day running in Kubernetes.
Since it is behind an ingress proxy, the request IP is always the same, so it is limiting the total amount of requests to 2.
It's possible to turn on proxy-protocol with a config like this:
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-ingress-controller
data:
  use-proxy-protocol: "true"
But this would turn it on for all services, and since they don't expect proxy-protocol they would break.
Is there a way to enable it for only one service?
It is possible to configure the Ingress so that it includes the original IP in the HTTP headers.
For this I had to change the service config.
It's called ingress-nginx-ingress-controller (or similar) and can be found with kubectl get services -A:
spec:
  externalTrafficPolicy: Local
And then configure the ConfigMap with the same name:
data:
  compute-full-forwarded-for: "true"
  use-forwarded-headers: "true"
Restart the pods, and the HTTP requests will then contain the X-Forwarded-For and X-Real-Ip headers.
This method won't break deployments that don't expect proxy-protocol.
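For reference, a rough sketch of applying both changes with kubectl patch (the service/ConfigMap name and the ingress-nginx namespace are assumptions and may differ in your installation):

# Set externalTrafficPolicy on the controller Service
kubectl -n ingress-nginx patch service ingress-nginx-ingress-controller \
  -p '{"spec":{"externalTrafficPolicy":"Local"}}'
# Enable the forwarded-for headers in the controller ConfigMap
kubectl -n ingress-nginx patch configmap ingress-nginx-ingress-controller \
  --type merge \
  -p '{"data":{"compute-full-forwarded-for":"true","use-forwarded-headers":"true"}}'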

Kubernetes ingress-nginx metrics uri label

How do I get a label for the URI that was used for a request?
For example, I made a request: curl https://www.somesite.com/api/health
In Prometheus I see the labels:
host: www.somesite.com
path: / (the ingress is configured with path "/")
How can I see metrics over "/api/health"?

ingress-nginx: how to debug a 502 page even though the ports in the Service and Ingress are correct?

I have a web application running behind a ClusterIP service on a worker node on port 5001. I'm also using k3s for the cluster deployment; I checked the cluster connection and it's running fine.
The deployment has the container port set to 5001:
ports:
  - containerPort: 5001
Here is the service file:
apiVersion: v1
kind: Service
metadata:
  labels:
    io.kompose.service: user-ms
  name: user-ms
spec:
  ports:
    - name: http
      port: 80
      targetPort: 5001
  selector:
    io.kompose.service: user-ms
status:
  loadBalancer: {}
And here is the ingress file:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: user-ms-ingress
spec:
  rules:
    - http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: user-ms
                port:
                  number: 80
I'm getting a 502 Bad Gateway error whenever I type in my worker or master IP address.
Expected: it should return the web application page.
I looked online and most answers mention a wrong port in the Service or Ingress, but my ports are correct; yes, I triple-checked it:
- calling the user-ms service on port 80 from another pod -> worked
- calling the cluster IP on the worker node on port 5001 -> worked
The ports are correct, so why is the ingress returning 502?
Here are the ingress describe output, the describe of the nginx ingress controller pod (the pod is running normally), and the logs of the nginx ingress pod, all attached as screenshots.
Sorry for the images, but I'm using a streaming machine to access the terminal, so I can't copy and paste.
How should I go about debugging this error?
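A few commands that are commonly used to narrow down a 502 like this (the resource names come from the manifests above; the controller namespace and label are assumptions):

# Does the Service actually have endpoints behind it?
kubectl get endpoints user-ms
# Did the Ingress pick up an address and the right backend?
kubectl describe ingress user-ms-ingress
# What does the controller log when the request comes in?
kubectl -n ingress-nginx logs -l app.kubernetes.io/name=ingress-nginx --tail=100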
OK, I managed to figure this out. In its default setup, K3s uses Traefik as the default ingress controller, which is why my nginx ingress log didn't show anything about the 502 Bad Gateway.
I decided to tear down my cluster and set it up again, this time following the suggestion from this issue https://github.com/k3s-io/k3s/issues/1160#issuecomment-1058846505 to create the cluster without Traefik:
curl -sfL https://get.k3s.io | INSTALL_K3S_EXEC="--disable=traefik" sh -
Now when I run kubectl get pods --all-namespaces I no longer see a Traefik pod running; previously there were Traefik pods running.
Once that was done, I applied the ingress again -> got a 404 error. I checked the nginx ingress pod logs and they now showed a new error about a missing ingress class, so I added the following to my ingress configuration file under metadata:
metadata:
  name: user-ms-ingress
  annotations:
    kubernetes.io/ingress.class: "nginx"
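As a side note, with networking.k8s.io/v1 the kubernetes.io/ingress.class annotation is deprecated in favour of the spec.ingressClassName field, so the same thing can be expressed roughly like this (assuming an IngressClass named nginx exists):

spec:
  ingressClassName: nginx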
Now I went to the IP of the worker node once more -> the 404 error was gone, but I got a 502 Bad Gateway error; I checked the logs and got connection refused errors.
I figured out that I had set a network policy for all of my microservices; I deleted the network policy and removed its settings from all my deployment files.
A final check, and I can see that I can access my API and Swagger page normally.
TLDR:
- If you are using nginx ingress on K3s, remember to disable Traefik when creating the cluster.
- Don't forget to set the ingress class inside your ingress configuration.
- Don't set up a network policy that blocks the nginx controller pod from calling the other pods (or allow the controller through explicitly, see the sketch below).
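Instead of dropping NetworkPolicy altogether, a policy that explicitly allows traffic from the ingress controller's namespace also works; a minimal sketch based on the manifests above (the ingress-nginx namespace name and its kubernetes.io/metadata.name label are assumptions):

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-from-ingress-nginx
spec:
  podSelector:
    matchLabels:
      io.kompose.service: user-ms   # the backend pods from the Service above
  policyTypes:
    - Ingress
  ingress:
    - from:
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: ingress-nginx   # assumed controller namespace
      ports:
        - protocol: TCP
          port: 5001   # the containerPort / targetPort used above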
You can turn on access logging on nginx, which will give you more logs on the ingress controller and let you trace every request routed through the ingress. Whether you are loading a UI or accessing a particular endpoint, the calls will be visible in the nginx-controller logs, so you can confirm whether incoming requests are actually being routed to the proper service, and then start debugging the service itself (for example, check whether you can curl the endpoint from any pod within the cluster).
I noticed that you are using the image k8s.gcr.io/ingress-nginx/controller:v1.2.0. If you installed it using Helm, there should be a kubernetes-ingress ConfigMap for the ingress controller; by default disable-access-log is true. Change it to false and you should start seeing more logs on the ingress controller. You might want to bounce the ingress controller pods if you do not see detailed logs.
kubectl edit cm -n <namespace> kubernetes-ingress
apiVersion: v1
kind: ConfigMap
data:
  disable-access-log: "false"   # turn this to false
  map-hash-bucket-size: "128"
  ssl-protocols: SSLv2 SSLv3 TLSv1 TLSv1.1 TLSv1.2 TLSv1.3
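After changing the ConfigMap, the controller pods can be bounced with something like this (the deployment name and namespace are assumptions and depend on how the controller was installed):

kubectl -n ingress-nginx rollout restart deployment ingress-nginx-controller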

cetic-nifi Invalid host header issue

Helm version: v3.5.2
Kubernetes version: v1.20.4
nifi chart version: latest (1.0.2 release)
Issue: [cetic/nifi]-issue
I'm trying to connect to the NiFi UI deployed in Kubernetes.
I have set the following properties in the values.yaml:
properties:
  # use externalSecure for when inbound SSL is provided by nginx-ingress or other external mechanism
  sensitiveKey: changeMechangeMe # Must to have minimal 12 length key
  algorithm: NIFI_PBKDF2_AES_GCM_256
  externalSecure: false
  isNode: false
  httpsPort: 8443
  webProxyHost: 10.0.39.39:30666
  clusterPort: 6007

# ui service
service:
  type: NodePort
  httpsPort: 8443
  nodePort: 30666
  annotations: {}
  # loadBalancerIP:
  ## Load Balancer sources
  ## https://kubernetes.io/docs/tasks/access-application-cluster/configure-cloud-provider-firewall/#restrict-access-for-loadbalancer-service
  ##
  # loadBalancerSourceRanges:
  # - 10.10.10.0/24
  ## OIDC authentication requires "sticky" session on the LoadBalancer for JWT to work properly...but AWS doesn't like it on creation
  # sessionAffinity: ClientIP
  # sessionAffinityConfig:
  #   clientIP:
  #     timeoutSeconds: 10800
10.0.39.39 is the Kubernetes master node's internal IP.
When NiFi gets started, I get the following:
WARNING: Kubernetes configuration file is group-readable. This is insecure. Location: /home/k8sadmin/.kube/config
WARNING: Kubernetes configuration file is world-readable. This is insecure. Location: /home/k8sadmin/.kube/config
NAME: nifi
LAST DEPLOYED: Thu Nov 25 12:38:00 2021
NAMESPACE: jeed-cluster
STATUS: deployed
REVISION: 1
NOTES:
Cluster endpoint IP address will be available at:
kubectl get svc nifi -n jeed-cluster -o jsonpath='{.status.loadBalancer.ingress[*].ip}'
Cluster endpoint domain name is: 10.0.39.39:30666 - please update your DNS or /etc/hosts accordingly!
Once you are done, your NiFi instance will be available at:
https://10.0.39.39:30666/nifi
And when I do a curl:
curl https://10.0.39.39:30666 put sample.txt -k
<h1>System Error</h1>
<h2>The request contained an invalid host header [<code>10.0.39.39:30666</
the request [<code>/</code>]. Check for request manipulation or third-part
t.</h2>
<h3>Valid host headers are [<code>empty
<ul><li>127.0.0.1</li>
<li>127.0.0.1:8443</li>
<li>localhost</li>
<li>localhost:8443</li>
<li>[::1]</li>
<li>[::1]:8443</li>
<li>nifi-0.nifi-headless.jeed-cluste
<li>nifi-0.nifi-headless.jeed-cluste
<li>10.42.0.8</li>
<li>10.42.0.8:8443</li>
<li>0.0.0.0</li>
<li>0.0.0.0:8443</li>
</ul>
I tried a lot of things but still cannot whitelist the master node IP in the proxy hosts.
Ingress is not used.
Edit: it looks like the properties set in values.yaml are not set in nifi.properties inside the pod. Is there any reason for this?
Appreciate help!
As a NodePort service, you can also assign a port number from the 30000-32767 range.
You can apply the values when you install your chart with:
properties:
  webProxyHost: localhost
  httpsPort:
This should let nifi whitelist your https://localhost:
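For completeness, a rough sketch of re-applying the values with Helm so they end up in nifi.properties (release name, namespace and chart are taken from the output above; this assumes the cetic repo is already added):

# Re-apply the chart so the properties above end up in nifi.properties
helm upgrade --install nifi cetic/nifi -n jeed-cluster -f values.yaml
# or override the property directly on the command line
helm upgrade --install nifi cetic/nifi -n jeed-cluster --set properties.webProxyHost=localhost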

configure dns in kubernetes pod

Good afternoon. I have a problem with a multi-tenant architecture that I am setting up in Kubernetes. It consists of two pods (front and back): the front hits a URL that points to the backend, and in the back I have my different clients (tenants).
The nginx config of the backend is defined in the following way:
server_name ~^(?<account>.+)\-backend.domain.com$;
root /var/www/html/tenant/$account-backend/;
index index.php;
This means that if I want to get to the backend from the frontend, it would be with a URL like this: tenant1.backend.domain.com.
The frontend is exposed with a NodePort service and a load balancer.
The backend is exposed locally with a ClusterIP service, which is the following:
apiVersion: v1
kind: Service
metadata:
  name: service-clusterip-app-backend
  namespace: app
spec:
  type: ClusterIP
  ports:
    - name: http
      protocol: TCP
      port: 80
      targetPort: 80
  selector:
    app: app-nginx
When I bring the cluster up, go to the frontend and make a request, the pod can't resolve tenant1.backend.domain.com. I've tried configuring some redirection rules through CoreDNS, but I don't understand how it works:
apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns-custom
  namespace: kube-system
data:
  zoomcrm.server: |
    tenant1.backend.domain.com {
        forward . service-clusterip-app-backend:80
    }
Basically, what I need is for the frontend to know where to go when the URL of the request is tenant1.backend.domain.com. I've looked into it, but nothing I've done has worked.
First of all, you need to forward the DNS request to the required DNS server based on a forward rule, like this. That means you will forward resolution requests for the zone tenant1.backend.domain.com (*.tenant1.backend.domain.com) to that DNS server:
data:
  zoomcrm.server: |
    tenant1.backend.domain.com {
        forward . <customDNSserver>:<dns port>
    }
Description:
The forward plugin re-uses already opened sockets to the upstreams. It supports UDP, TCP and DNS-over-TLS and uses in band health checking.
The description of the forward plugin of CoreDNS can be found here. You can check this post to build up your DNS server in Kubernetes for name resolution if you are interested.
In your case, if you only need to add a single DNS record for resolution inside Kubernetes, you can either use hostAliases as in this document, or use the CoreDNS file plugin to add a zone to your CoreDNS for name resolution; here is the document.
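As an illustration of the hostAliases approach mentioned above, a minimal sketch for the frontend pod spec (the IP is a placeholder; use the actual ClusterIP of service-clusterip-app-backend, which kubectl get svc -n app shows):

spec:
  hostAliases:
    - ip: "10.96.0.20"               # placeholder: ClusterIP of service-clusterip-app-backend
      hostnames:
        - "tenant1.backend.domain.com"
  containers:
    - name: frontend                 # placeholder container name and image
      image: frontend-image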