HAProxy source IP address shows Kubernetes node IP address

I have HAProxy running in a Kubernetes container. This is what a sample log line looks like:
<134>Jul 20 13:11:37 haproxy[6]: <SOURCE_ADDRESS> [20/Jul/2020:13:11:37.713] front gameApi/game-api-test 0/0/0/9/9 200 384 - - ---- 37/37/0/0/0 0/0 {<FORWARD_FOR_ADDRESS>} "GET /api/games/lists?dtype=brandlist HTTP/1.1"
The <SOURCE_ADDRESS> here is the HAProxy Kubernetes node IP address, and I need it to be the client/forwarded-for IP address so that Filebeat can parse the geolocation correctly.
Edit:
I found a solution in HAProxy, which is to simply set http-request set-src hdr(x-forwarded-for).
However, attempting to use the externalTrafficPolicy: Local solution seems to break my HAProxy's ability to serve the website. When I try to reach the site I get "This site can't be reached" or "Secure Connection Failed".
haproxy service
apiVersion: v1
kind: Service
metadata:
  name: haproxy-test
  labels:
    app: haproxy-test
  namespace: test
  annotations:
    # Note that the backend talks over HTTP.
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: http
    service.beta.kubernetes.io/aws-load-balancer-ssl-cert: arn:aws:acm:us-east-2:redacted:certificate/redacted
    service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "https"
spec:
  type: LoadBalancer
  externalTrafficPolicy: Local
  ports:
    - port: 80
      name: http
      targetPort: 80
      protocol: "TCP"
    - port: 443
      name: https
      targetPort: 80
      protocol: "TCP"
  selector:
    app: haproxy-test
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: haproxy-test
  namespace: test
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: haproxy-test
    spec:
      containers:
        - name: haproxy-test
          image: <redacted>.dkr.ecr.us-east-2.amazonaws.com/haproxy:$TAG
          imagePullPolicy: Always
          ports:
            - containerPort: 80
          resources:
            limits:
              cpu: "50m"
            requests:
              cpu: "25m"

You can preserve the client source IP by setting externalTrafficPolicy to Local. See this question for more details: How do Kubernetes NodePort services with Service.spec.externalTrafficPolicy=Local route traffic?
Alternatively, use http-request set-src hdr(x-forwarded-for) to configure HAProxy to use the contents of the X-Forwarded-For header to establish its internal notion of the request's source address, instead of the IP address of the actual inbound connection.
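For reference, a minimal sketch of where that directive sits in the HAProxy configuration (the frontend and backend names are taken from the log line above; the bind address and server address are assumptions, not the asker's real config):
frontend front
    bind *:80
    # Trust the X-Forwarded-For header added by the load balancer in front
    # and use it as the source address HAProxy logs and processes
    http-request set-src hdr(x-forwarded-for)
    default_backend gameApi

backend gameApi
    # placeholder server entry
    server game-api-test game-api-test.test.svc.cluster.local:80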

Related

Istio routing, metallb and https issue

I am having a problem with Kubernetes (K3s), Istio, MetalLB and cert-manager.
I have my cluster hosted on a VPS with one public IP. As my service provider doesn't provide a load balancer, I am using MetalLB with my public IP to expose istio-ingressgateway to the internet.
In this cluster I have three namespaces for my applications: one for the qa environment, one for dev and one for prod.
I configured my DNS provider with my public IP, and configured cert-manager to get a certificate from Let's Encrypt (I am using Issuer instead of ClusterIssuer, as I want to use the staging API for dev and qa and the production API for prod). Certificates are issued fine, but the Istio Gateway only routes the traffic when I use port 80; when I enable 443 I can't reach the site over HTTPS and get an "ERR_CONNECTION_RESET".
I can't understand why everything is fine for 80 but not for 443.
My application exposes traffic on port 80 over HTTP.
Here are my yaml files:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-v1
spec:
  replicas: 3
  selector:
    matchLabels:
      app: hello-v1
  template:
    metadata:
      labels:
        app: hello-v1
    spec:
      containers:
        - name: hello
          image: pablin.dynu.net:5000/chevaca/chevacaweb:latest
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 80
          resources:
            requests:
              memory: "64Mi"
              cpu: "200m"
            limits:
              memory: "128Mi"
              cpu: "500m"
---
kind: Service
apiVersion: v1
metadata:
  name: hello-v1-svc
  namespace: chevaca-qa
spec:
  selector:
    app: hello-v1
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
---
apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
  name: qa-app-gateway
  namespace: chevaca-qa
spec:
  selector:
    istio: ingressgateway
  servers:
    - port:
        name: http
        number: 80
        protocol: HTTP
      hosts:
        - qa-app.chevaca.com
    - port:
        name: https
        number: 443
        protocol: HTTPS
      tls:
        mode: SIMPLE
        credentialName: front-cert
      hosts:
        - qa-app.chevaca.com
---
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: front-app
  namespace: chevaca-qa
spec:
  hosts:
    - qa-app.chevaca.com
  gateways:
    - qa-app-gateway
  http:
    - route:
        - destination:
            host: hello-v1-svc
            port:
              number: 80
I made some progress: I can use HTTPS in the istio-system namespace, but not in default or any of my custom namespaces. I am still searching for how to fix this.
It's fixed.
The solution is to create the certificate in the istio-system namespace with the secret name ingressgateway-certs.
That way the certificate is mounted into Istio's ingress gateway, and nothing else needs to be configured on the Istio custom resources. If you have multiple namespaces, like in my scenario, you can use multiple hosts on the certificate or a wildcard.
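As a rough illustration only (not the exact manifest used; the issuer name is a placeholder and the apiVersion depends on your cert-manager version), the cert-manager Certificate would look something like this:
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: ingressgateway-certs
  namespace: istio-system           # must live next to the ingress gateway
spec:
  secretName: ingressgateway-certs  # the secret name Istio expects
  issuerRef:
    name: letsencrypt-staging       # placeholder Issuer in istio-system
    kind: Issuer
  dnsNames:
    - qa-app.chevaca.com            # add one host per namespace, or use a wildcard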

Tunneling from kubernetes to dev machine via Headless Service and Endpoint

I'm trying to use a headless service with an endpoint to forward traffic from within my cluster to my local development machine. I want to listen on port 80 on the service and call port 5002 on the endpoint. I have it set up like so:
Headless Service (listening on port 80 with a targetPort of 5002):
Endpoint (pointing to my development computer on port 5002):
When I try to curl http://web:80 from any pod in my cluster on port 80 it times out. If I curl http://web:5002 it successfully goes through and hits my development machine. Shouldn't the targetPort make the request to web:80 go to my endpoint on port 5002?
curl web:80
curl web:5002
Some additional info:
My cluster and dev machine are in the same local network
I'm using K3S on the cluster
I'm just trying to emulate what Bridge For Kubernetes does
Here is the manifest yaml:
apiVersion: v1
kind: Service
metadata:
  name: web
  namespace: default
spec:
  clusterIP: None
  ports:
    - name: web
      port: 80
      targetPort: 5002
---
apiVersion: v1
kind: Endpoints
metadata:
  name: web
  namespace: default
subsets:
  - addresses:
      - ip: $HOST_IP
    ports:
      - name: web
        port: 5002
        protocol: TCP
I managed to get it to work by removing the clusterIP: None. My manifest now looks like this:
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  type: ClusterIP
  ports:
    - name: web
      port: 80
      targetPort: 5002
---
apiVersion: v1
kind: Endpoints
metadata:
  name: web
subsets:
  - addresses:
      - ip: $HOST_IP
    ports:
      - name: web
        port: 5002
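This behaviour is expected: the port-to-targetPort translation is done by kube-proxy on the Service's cluster IP, and a headless Service has no cluster IP, so DNS resolves web directly to the endpoint address and the port mapping is never applied. A quick way to verify the fixed setup (a sketch; the test pod name and image are arbitrary):
kubectl get svc web          # now shows a ClusterIP with port 80/TCP
kubectl get endpoints web    # should list <dev-machine-ip>:5002
kubectl run tmp --rm -it --restart=Never --image=busybox -- wget -qO- http://web:80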

kubernetes force service to use https

I want to expose the k8s API using a service. My issue is that the API only responds on port 6443 over HTTPS; any attempt over HTTP returns status 400 Bad Request. How can I "force" the service to use HTTPS?
apiVersion: v1
kind: Service
metadata:
  name: k8s-api
  namespace: kube-system
  labels:
    label: k8s-api
spec:
  ports:
    - port: 80 #Port on which your service is running
      targetPort: 6443
      protocol: TCP
      name: http
  selector:
    name: kube-apiserver-master-node
Maybe this?
apiVersion: v1
kind: Service
metadata:
  name: k8s-api
  namespace: kube-system
  labels:
    label: k8s-api
spec:
  ports:
    - port: 443 #Port on which your service is running
      targetPort: 6443
      protocol: TCP
      name: http
  selector:
    name: kube-apiserver-master-node
If you are using the NGINX ingress, by default it does SSL offloading and sends plain HTTP to the backend.
Changing to port 6443 might be helpful if you are connecting to the service directly.
If you are using the NGINX ingress, make sure it doesn't terminate SSL:
nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
nginx.ingress.kubernetes.io/ssl-passthrough: "true"
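For illustration, these annotations go on the Ingress object. A rough sketch (the host is a placeholder, and ssl-passthrough additionally requires the controller to be started with --enable-ssl-passthrough):
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: k8s-api
  namespace: kube-system
  annotations:
    # re-encrypt: talk HTTPS to the backend instead of plain HTTP
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
    # or pass the TLS stream through untouched so the API server terminates it
    nginx.ingress.kubernetes.io/ssl-passthrough: "true"
spec:
  rules:
    - host: k8s-api.example.com   # placeholder host
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: k8s-api
                port:
                  number: 443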

Unable to access service in Kubernetes

I've got this webserver config:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: webserver
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: webserver
    spec:
      containers:
        - name: webserver
          image: nginx:alpine
          ports:
            - containerPort: 80
          volumeMounts:
            - name: hostvol
              mountPath: /usr/share/nginx/html
      volumes:
        - name: hostvol
          hostPath:
            path: /home/docker/vol
and this web service config:
apiVersion: v1
kind: Service
metadata:
  name: web-service
  labels:
    run: web-service
spec:
  type: NodePort
  ports:
    - port: 80
      targetPort: 80
      protocol: TCP
  selector:
    app: webserver
I was expecting to be able to connect to the webserver via http://192.168.99.100:80 with this config, but Chrome gives me ERR_CONNECTION_REFUSED.
I tried minikube service --url web-service, which gives http://192.168.99.100:30276; however, this also returns ERR_CONNECTION_REFUSED.
Any further suggestions?
UPDATE
I updated the port / targetPort to 80.
However, I now get:
ERR_CONNECTION_REFUSED for http://192.168.99.100:80/
and
an nginx 403 for http://192.168.99.100:31540/
In your service, you can define a nodePort
apiVersion: v1
kind: Service
metadata:
  name: web-service
  labels:
    run: web-service
spec:
  type: NodePort
  ports:
    - port: 80
      targetPort: 80
      nodePort: 32700
      protocol: TCP
  selector:
    app: webserver
Now, you will be able to access it on http://<node-ip>:32700.
Be careful with port 80. Ideally, you would have an nginx ingress controller running on port 80 and all traffic would be routed through it. Using port 80 as a nodePort will mess up your deployment.
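A quick check from the host (assuming minikube, as in the question):
minikube ip                           # e.g. 192.168.99.100
curl -I http://$(minikube ip):32700/  # hits the service via the pinned nodePort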
In your service, you did not specify a targetPort, so the service uses the port value as the targetPort; however, your container is listening on 80. Add targetPort: 80 to the service.
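That is, the ports section of the Service would look like this (a minimal sketch):
  ports:
    - port: 80
      targetPort: 80   # must match the nginx containerPort
      protocol: TCP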
The NodePort range defaults to 30000-32767. When you expose a service without specifying a nodePort, Kubernetes picks a random port from that range for you.
You can check the port by typing the below command
kubectl get svc
In your case the application is exposed on node port 31540. Your issue seems to be the nginx configuration; check the nginx logs.
Please check the permissions of the mounted volume /home/docker/vol.
To fix this, you have to make the mounted directory and its contents publicly readable:
chmod -R o+rX /home/docker/vol

Health Checks in GKE in GCloud resets after I change it from HTTP to TCP

I'm working on a Kubernetes cluster where I am directing traffic from a GCloud Ingress to my Services. One of the service's endpoints fails its health check as HTTP but passes it as TCP.
When I change the health check options inside GCloud to TCP, the health checks pass and my endpoint works, but after a few minutes the health check on GCloud resets that port back to HTTP and the health checks fail again, giving me a 502 response on my endpoint.
I don't know if it's a bug inside Google Cloud or something I'm doing wrong in Kubernetes. I have pasted my YAML configuration here:
namespace
apiVersion: v1
kind: Namespace
metadata:
  name: parity
  labels:
    name: parity
storageclass
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: classic-ssd
  namespace: parity
provisioner: kubernetes.io/gce-pd
parameters:
  type: pd-ssd
  zones: us-central1-a
reclaimPolicy: Retain
secret
apiVersion: v1
kind: Secret
metadata:
  name: tls-secret
  namespace: ingress-nginx
data:
  tls.crt: ./config/redacted.crt
  tls.key: ./config/redacted.key
statefulset
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  name: parity
  namespace: parity
  labels:
    app: parity
spec:
  replicas: 3
  selector:
    matchLabels:
      app: parity
  serviceName: parity
  template:
    metadata:
      name: parity
      labels:
        app: parity
    spec:
      containers:
        - name: parity
          image: "etccoop/parity:latest"
          imagePullPolicy: Always
          args:
            - "--chain=classic"
            - "--jsonrpc-port=8545"
            - "--jsonrpc-interface=0.0.0.0"
            - "--jsonrpc-apis=web3,eth,net"
            - "--jsonrpc-hosts=all"
          ports:
            - containerPort: 8545
              protocol: TCP
              name: rpc-port
            - containerPort: 443
              protocol: TCP
              name: https
          readinessProbe:
            tcpSocket:
              port: 8545
            initialDelaySeconds: 650
          livenessProbe:
            tcpSocket:
              port: 8545
            initialDelaySeconds: 650
          volumeMounts:
            - name: parity-config
              mountPath: /parity-config
              readOnly: true
            - name: parity-data
              mountPath: /parity-data
      volumes:
        - name: parity-config
          secret:
            secretName: parity-config
  volumeClaimTemplates:
    - metadata:
        name: parity-data
      spec:
        accessModes: ["ReadWriteOnce"]
        storageClassName: "classic-ssd"
        resources:
          requests:
            storage: 50Gi
service
apiVersion: v1
kind: Service
metadata:
  labels:
    app: parity
  name: parity
  namespace: parity
  annotations:
    cloud.google.com/app-protocols: '{"my-https-port":"HTTPS","my-http-port":"HTTP"}'
spec:
  selector:
    app: parity
  ports:
    - name: default
      protocol: TCP
      port: 80
      targetPort: 80
    - name: rpc-endpoint
      port: 8545
      protocol: TCP
      targetPort: 8545
    - name: https
      port: 443
      protocol: TCP
      targetPort: 443
  type: LoadBalancer
ingress
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress-parity
  namespace: parity
  annotations:
    #nginx.ingress.kubernetes.io/rewrite-target: /
    kubernetes.io/ingress.global-static-ip-name: cluster-1
spec:
  tls:
    - secretName: tls-classic
      hosts:
        - www.redacted.com
  rules:
    - host: www.redacted.com
      http:
        paths:
          - path: /
            backend:
              serviceName: web
              servicePort: 8080
          - path: /rpc
            backend:
              serviceName: parity
              servicePort: 8545
Issue
I've redacted hostnames and such, but this is my basic configuration. For debugging I've also run a hello-app container from this tutorial: https://cloud.google.com/kubernetes-engine/docs/tutorials/hello-app
That is what the ingress path / points to on port 8080 via the hello-app service. It works fine and isn't the issue; it's just mentioned here for clarification.
So the issue is that after creating my cluster with GKE and my ingress load balancer on Google Cloud (the cluster-1 global static IP name in the Ingress file), and then applying the Kubernetes configuration above, the health check for the /rpc endpoint fails on Google Cloud when I go to Google Compute Engine -> Health Check -> the specific health check for the /rpc endpoint.
When I edit that health check to use TCP instead of HTTP, the health checks pass for the /rpc endpoint and I can curl it just fine afterwards; it returns the correct response.
The issue is that a few minutes later the same health check goes back to the HTTP protocol even though I edited it to be TCP, and then the health checks fail and I get a 502 response when I curl it again.
I am not sure if there's a way to attach the Google Cloud health check configuration to my Kubernetes Ingress before creating the Ingress. I'm also not sure why it's being reset; I can't tell whether it's a bug in Google Cloud or something I'm doing wrong in Kubernetes. Note that in my StatefulSet I have specified a livenessProbe and readinessProbe that use TCP to check port 8545.
The delay of 650 seconds is due to this ticket, which was resolved by increasing the delay to greater than 600 seconds (to avoid the race conditions mentioned there): https://github.com/kubernetes/ingress-gce/issues/34
I really am not sure why the Google Cloud health-check is resetting back to HTTP after I've specified it to be TCP. Any help would be appreciated.
I found a solution: I added a new health-check container to my StatefulSet serving a /healthz endpoint, and configured the ingress health check to probe that endpoint over HTTP on the port 8080 assigned by Kubernetes, which made it work.
It's still not obvious why the reset happens when the check is set to TCP.
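A rough sketch of the shape of that fix (the image and probe values are placeholders, not the exact sidecar used): add a container to the StatefulSet's pod spec that answers HTTP on 8080 and give it an httpGet readinessProbe, which the GKE ingress controller can pick up as an HTTP health check against /healthz (or which you can point the Google Cloud health check at manually) instead of probing the RPC port.
        - name: healthz
          image: gcr.io/google-samples/hello-app:1.0   # placeholder: any small HTTP server listening on :8080
          ports:
            - containerPort: 8080
              name: healthz
          readinessProbe:
            httpGet:
              path: /healthz
              port: 8080
            initialDelaySeconds: 10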