Kubernetes ingress-nginx metrics URI label

How can I get a label for the URI that was used in a request?
For example, I made a request: curl https://www.somesite.com/api/health
In Prometheus I see the labels:
host www.somesite.com
path / (the Ingress is configured with path "/")
How can I see metrics for "/api/health"?

Related

Kubernetes Nginx ingress appears to remove headers before sending requests to backend service

I am trying to deploy a Spring Boot Java application hosted on Apache Tomcat to a Kubernetes cluster, using the Nginx Ingress for URL routing. More specifically, I am deploying on Minikube on my local machine, exposing the Spring Boot application as a ClusterIP Service, and executing the
minikube tunnel
command to expose services to my local machine. Visually, the process is the following...
Browser -> Ingress -> Service -> Pod hosting docker container of Apache Tomcat Server housing SpringBoot Java API
The backend service requires a header "SM_USER", which for the moment can be any value. When running the backend application as a Docker container with port forwarding, accessing the backend API works great. However, when deploying to a Kubernetes cluster behind an Nginx ingress, I get 403 errors from the API stating that the SM_USER header is missing. My suspicion is that the header is included with the request to the ingress but removed when the request is routed to the backend service.
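For illustration, the kind of request being made looks like the following (the header value is arbitrary, per the note above); it succeeds against the container directly with port forwarding, but returns 403 through the ingress:
# Illustrative request; SM_USER can be any value for now
curl -H "SM_USER: testuser" http://mydomain.com/api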
My setup is the following.
Deploying on Minikube
minikube start
minikube addons enable ingress
eval $(minikube docker-env)
docker build . -t api -f Extras/DockerfileAPI
docker build . -t ui -f Extras/DockerfileUI
kubectl apply -f Extras/deployments-api.yml
kubectl apply -f Extras/deployments-ui.yml
kubectl expose deployment ui --type=ClusterIP --port=8080
kubectl expose deployment api --type=ClusterIP --port=8080
kubectl apply -f Extras/ingress.yml
kubectl apply -f Extras/ingress-config.yml
edit /etc/hosts file to resolve mydomain.com to localhost
minikube tunnel
Ingress YAML
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-ingress
  annotations:
    kubernetes.io/ingress.class: "nginx"
    # Tried forcing the header by manual addition, no luck (tried with and without)
    nginx.ingress.kubernetes.io/configuration-snippet: |
      proxy_set_header SM_USER $http_Admin;
spec:
  rules:
  - host: mydomain.com
    http:
      paths:
      - path: /ui
        pathType: Prefix
        backend:
          service:
            name: ui
            port:
              number: 8080
      - path: /api
        pathType: Prefix
        backend:
          service:
            name: api
            port:
              number: 8080
Ingress Config YAML
kind: ConfigMap
apiVersion: v1
metadata:
  name: nginx-config
  namespace: default
  labels:
    app.kubernetes.io/name: my-ingress
    app.kubernetes.io/part-of: my-ingress
data:
  # Tried allowing underscores and also using proxy protocol (tried with and without)
  enable-underscores-in-headers: "true"
  use-proxy-protocol: "true"
When navigating to mydomain.com/api, I expect to receive the ROOT API interface but instead receive the 403 error page indicating that SM_USER is missing. Note, this is not a 403 Forbidden error from the ingress or from accessing the cluster externally; the 403 page I receive is a custom one from within my Spring Boot application, indicating that the header is missing. In other words, my routing is definitely correct and I am able to reach the API; it's just that the header is missing.
Are there configs or parameters I am missing? Possibly an annotation?
This is resolved. The issue was that the ConfigMap was being applied in the application namespace. Note, even if the Ingress object lives in the application namespace, if you are using minikube's built-in ingress functionality, the ConfigMap must be applied in the Nginx ingress controller's namespace.
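As a minimal sketch of that fix, assuming minikube's ingress addon runs its controller in the ingress-nginx namespace and reads a ConfigMap named ingress-nginx-controller (both are assumptions; check with kubectl get pods -A and the controller's --configmap flag):
kind: ConfigMap
apiVersion: v1
metadata:
  name: ingress-nginx-controller   # assumed: must match the ConfigMap name the controller was started with
  namespace: ingress-nginx         # controller namespace, not the application namespace
data:
  enable-underscores-in-headers: "true"
Apply it with kubectl apply -f; the controller watches its ConfigMap, so the setting is picked up without redeploying the application.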

OpenShift Kafka connect REST API not accessible from outside through Route URL

I am running a Kafka Connect worker in an OpenShift cluster and am able to access the connector APIs from the pod's terminal like below (the listener in connect-distributed.properties is https://localhost:20000):
sh-4.2$ curl -k -X GET https://localhost:20000
{"version":"5.5.0-ce","commit":"dad78e2df6b714e3","kafka_cluster_id":"XojxTYmbTXSwHguxJ_flWg"}
I have created OpenShift routes with below config:
- kind: Route
  apiVersion: v1
  metadata:
    name: '${APP_SHORT_NAME}-route'
    labels:
      app: '${APP_SHORT_NAME}'
    annotations:
      description: Route for application's http service.
  spec:
    host: '${APP_SHORT_NAME}.${ROUTE_SUFFIX}'
    port:
      targetPort: 20000-tcp
    tls:
      termination: reencrypt
      destinationCACertificate: '${DESTINATION_CA_CERTIFICATE}'
    to:
      kind: Service
      name: '${APP_NAME}-service'
Port 20000 is exposed in the Dockerfile, but the Route URL throws the error below instead:
Possible reasons you are seeing this page:
The host doesn't exist. Make sure the hostname was typed correctly and that a route matching this hostname exists.
The host exists, but doesn't have a matching path. Check if the URL path was typed correctly and that the route was created using the desired path.
Route and path matches, but all pods are down. Make sure that the resources exposed by this route (pods, services, deployment configs, etc) have at least one pod running
The same OpenShift Route setup works fine with a normal Spring Boot service, but the Kafka Connect worker is not getting bound to the Route URL created as above. (Note: the Kafka Connect worker itself is running fine, per the logs in the OpenShift pods.)
Set the listeners back to the default bind address of 0.0.0.0 so that external connections can be accepted.
If you need a different port, you can use port forwarding in the k8s Service rather than changing it in the application config.
If you're not using Strimzi (which doesn't look like you are, based on 5.5.0-ce), you'll also want to add this env-var so a cluster can be formed.
- name: CONNECT_REST_ADVERTISED_HOST_NAME
  valueFrom:
    fieldRef:
      fieldPath: status.podIP
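For the listener change, a minimal sketch of the relevant line in connect-distributed.properties, assuming port 20000 is kept:
# Bind the Connect REST API to all interfaces instead of localhost
listeners=https://0.0.0.0:20000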

Is there a way to enable proxy-protocol on Ingress for only one service?

I have a service running in Kubernetes that limits each IP to 2 requests per day.
Since it is behind an ingress proxy, the request IP is always the same, so it ends up limiting the total number of requests to 2.
It's possible to turn on proxy protocol with a config like this:
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-ingress-controller
data:
  use-proxy-protocol: "true"
But this would turn it on for all services, and since they don't expect proxy-protocol they would break.
Is there a way to enable it for only one service?
It is possible to configure the ingress so that it includes the original client IP in the HTTP headers instead.
For this I had to change the ingress controller's Service config.
It is called ingress-nginx-ingress-controller (or similar) and can be found with kubectl get services -A:
spec:
  externalTrafficPolicy: Local
And then configure the ConfigMap with the same name:
data:
  compute-full-forwarded-for: "true"
  use-forwarded-headers: "true"
Restart the pods, and the HTTP requests will then contain the X-Forwarded-For and X-Real-Ip headers.
This method won't break deployments that don't expect proxy protocol.
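Putting it together, a sketch of how these changes could be applied, assuming the controller lives in the ingress-nginx namespace and both the Service and the ConfigMap are named ingress-nginx-ingress-controller (adjust to whatever kubectl get services -A actually shows):
# Preserve the client source IP on the controller Service
kubectl -n ingress-nginx patch service ingress-nginx-ingress-controller \
  --type merge -p '{"spec":{"externalTrafficPolicy":"Local"}}'

# Add the forwarded-headers settings to the controller ConfigMap
kubectl -n ingress-nginx patch configmap ingress-nginx-ingress-controller \
  --type merge -p '{"data":{"compute-full-forwarded-for":"true","use-forwarded-headers":"true"}}'

# Restart the controller so the new settings take effect
kubectl -n ingress-nginx rollout restart deployment ingress-nginx-ingress-controller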

Istio access to container SSL endpoint

My application is running an SSL Node.js server with mutual authentication.
How do I tell k8s to access the container through HTTPS?
How do I forward the client SSL certificates to the container?
I tried to set up a Gateway and a VirtualService without success. In every configuration I tried, I hit a 503 error.
The Istio sidecar proxy container (when injected) in the pod will automatically handle communicating over HTTPS. The application code can continue to use HTTP, and the Istio sidecar will intercept the request, "upgrading" it to HTTPS. The sidecar proxy in the receiving pod will then handle "downgrading" the request to HTTP for the application container.
Simply put, there is no need to modify any application code. The Istio sidecar proxies requests and responses between Kubernetes pods with TLS/HTTPS.
UPDATE:
If you wish to use HTTPS at the application level, you can tell Istio to exclude certain inbound and outbound ports. To do so, you can add the traffic.sidecar.istio.io/excludeInboundPorts and traffic.sidecar.istio.io/excludeOutboundPorts annotations, respectively, to the Kubernetes deployment YAML.
Example:
...
spec:
  selector:
    matchLabels:
      app: podinfo
  template:
    metadata:
      annotations:
        traffic.sidecar.istio.io/includeInboundPorts: "443"
        traffic.sidecar.istio.io/excludeInboundPorts: "443"
      labels:
...

Addition of a configuration to HAProxy Ingress Controller not working

I have the below config that I need to add to the HAProxy Ingress controller on my k8s cluster.
acl metrics path -i /metrics
use_backend httpback-default-backend if metrics
So basically, what I want is that for all Ingress hosts (URLs) accessed through the controller, if the path /metrics is requested, the request is routed to the Ingress default backend and the user gets a 404 error.
So in my standard HAProxy deployment I have the following ConfigMaps:
k get cm
NAME                                DATA   AGE
haproxy-configmap                   8      50d
haproxy-configmap-tcpservice        1      50d
haproxy-ingress                     0      50d
ingress-controller-leader-haproxy   0      50d
And I've added my config to the haproxy-configmap ConfigMap in the config-frontend section:
apiVersion: v1
data:
  config-frontend: |
    capture request header Host len 32
    capture request header X-Request-ID len 64
    capture request header User-Agent len 200
    acl metrics path -i /metrics
    use_backend httpback-default-backend if metrics
Now I expect the /metrics endpoint to return a 404 error, but it seems I can still access it.
What am I missing here?