traefik v2 redirect TCP requests - kerberos

We have the following situation.
We are using Kerberos on a Tomcat running in a Docker swarm; as reverse proxy we use Traefik, also hosted in the Docker swarm.
Kerberos needs the SSL request passed through via a TCP router. This works without a problem. But now we also want to redirect requests on port 80 to the TCP router on port 443.
We tried a lot, but with no success at all :-(
Here is the Traefik configuration:
labels:
  traefik.tcp.routers.example.entrypoints: https
  traefik.tcp.services.example.loadbalancer.server.port: '8443'
  traefik.tcp.routers.example.service: example
  traefik.tcp.routers.example.tls.passthrough: 'true'
  traefik.constraint-label: traefik-public
  traefik.tcp.routers.example.rule: HostSNI(`example.com`)
  traefik.docker.network: traefik-public
  traefik.enable: 'true'
  traefik.tcp.routers.example.tls: 'true'
regards
Meex
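For reference, a minimal sketch of the usual Traefik v2 way to send port 80 to 443: an HTTP router on the port-80 entrypoint plus a redirectscheme middleware. The entrypoint name http, the router name example-http, and the middleware name example-redirect below are assumptions, and this sketch has not been verified against the TLS-passthrough setup above:

labels:
  traefik.http.routers.example-http.entrypoints: http
  traefik.http.routers.example-http.rule: Host(`example.com`)
  traefik.http.routers.example-http.middlewares: example-redirect
  traefik.http.middlewares.example-redirect.redirectscheme.scheme: https
  traefik.http.middlewares.example-redirect.redirectscheme.permanent: 'true'

Traefik v2.2+ can also do this globally in the static configuration via entryPoints.<name>.http.redirections.entryPoint.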

Related

HAProxy Not Working with Kubernetes NodePort for Backend (Bare Metal)

I have a host running HAProxy already. It's been up and running since before I did anything with Kubernetes. It works flawlessly as a reverse proxy and SSL terminator for various web things in various Docker containers on various other host machines.
Now I have a Kubernetes cluster up and running across some of those other machines. I've created the NodePort Service that exposes port 30080 on each worker node, as follows:
apiVersion: v1
kind: Service
metadata:
  name: snginx
  labels:
    app: nginx
spec:
  type: NodePort
  externalTrafficPolicy: Local # Cluster or Local
  selector:
    app: nginx
  ports:
  - protocol: TCP
    port: 8080
    targetPort: 80
    nodePort: 30080
From the machine running HAProxy (which is not part of the cluster), I can curl the NodePort successfully ( curl 10.0.0.20:30080 ), and I get "Welcome to nginx!..." However, if I set that NodePort as a backend in HAProxy, I get a 503 "No server is available", and HAProxy traffic log says:
localhost haproxy[18489]: [redactedIP]:49399 [30/Aug/2021:19:24:00.764] http-in~ load/load 66/0/-1/-1/67 503 212 - - SC-- 1/1/0/0/3 0/0 "GET / HTTP/1.1"
The haproxy admin log says:
Aug 30 20:07:13 localhost haproxy[18839]: Server load/load is DOWN, reason: Layer4 connection problem, info: "General socket error (Permission denied)"
However, I've disabled the firewall with
sudo systemctl disable --now firewalld
and verified the status is not running. Also, SELinux was disabled when I installed the cluster. Also, I can ping 10.0.0.20 just fine.
"load" is the hostname I'm using for testing load balancing (i.e. load.mydomain.com).
Also, if I use PAT on my physical router to route directly to that NodePort, from outside the building, it works as expected.
What gives? What's the difference between the proxied request and curl?
Thank you.
SELinux is the difference. That is, SELinux on the HAProxy host (not a cluster node):
"SELinux only allows the web server to make outbound connections to a limited set of ports"
That is, you can't make an outbound HTTP request to any port in the NodePort range (30000-32767 by default) without opening that port on the "client", which is the HAProxy server in this case.
sudo semanage port --add --type http_port_t --proto tcp 30080
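Two hedged follow-ups, assuming a RHEL/CentOS-style SELinux policy: you can verify the port was added, and on distributions whose policy ships the haproxy_connect_any boolean that boolean is an alternative to opening individual ports.

# list the ports SELinux allows for http_port_t; 30080 should now appear alongside the defaults
sudo semanage port -l | grep http_port_t

# alternative, if your policy ships this boolean: allow HAProxy to connect to any port
sudo setsebool -P haproxy_connect_any 1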

K8S with Traefik and HAP not getting real client IP on pods

I have an external LB using HAProxy in order to have an HA k8s cluster. My cluster is K3s from Rancher and it's using the Traefik LB internally.
I'm currently facing an issue where in my pods I'm getting the Traefik IP instead of the real client IP.
HAP Configuration:
# Ansible managed
defaults
    maxconn 1000
    mode http
    log global
    option dontlognull
    log stdout local0 debug
    option httplog
    timeout http-request 5s
    timeout connect 5000
    timeout client 2000000
    timeout server 2000000

frontend k8s
    bind *:6443
    bind *:80
    bind *:443
    mode tcp
    option tcplog
    use_backend masters-k8s

backend masters-k8s
    mode tcp
    balance roundrobin
    server master01 master01.k8s.int.ntw
    server master02 master02.k8s.int.ntw
# end Ansible managed
Traefik Service:
apiVersion: v1
kind: Service
metadata:
  annotations:
    meta.helm.sh/release-name: traefik
    meta.helm.sh/release-namespace: kube-system
    service.beta.kubernetes.io/do-loadbalancer-enable-proxy-protocol: "true"
  labels:
    app: traefik
    app.kubernetes.io/managed-by: Helm
    chart: traefik-1.81.0
    heritage: Helm
    release: traefik
spec:
  clusterIP: 10.43.250.142
  clusterIPs:
  - 10.43.250.142
  externalTrafficPolicy: Local
  ports:
  - name: http
    nodePort: 32232
    port: 80
    protocol: TCP
    targetPort: http
  - name: https
    nodePort: 30955
    port: 443
    protocol: TCP
    targetPort: https
  selector:
    app: traefik
    release: traefik
  sessionAffinity: None
  type: LoadBalancer
status:
  loadBalancer:
    ingress:
    - ip: 10.0.1.1
    - ip: 10.0.1.11
    - ip: 10.0.1.12
    - ip: 10.0.1.2
With this configuration I never get the real client IP on the pods. During some research I saw that people recommend using send-proxy in HAProxy, like this:
server master01 master01.k8s.int.ntw check send-proxy-v2
server master02 master02.k8s.int.ntw check send-proxy-v2
But when I do so, all my cluster communication returns ERR_CONNECTION_CLOSED.
If I'm looking at it correctly, this means it's going from HAProxy to the cluster and the cluster is rejecting the traffic somewhere.
Any clues what I'm missing here?
Thanks
Well, you have two options:
1. use the proxy protocol
2. use the X-Forwarded-For header
Option 1: proxy protocol
This option requires that both sides, HAProxy and Traefik, use the proxy protocol; that's the reason why people recommend send-proxy-v2.
It also requires that every other client that wants to connect to Traefik MUST use the proxy protocol as well; a client that does not use it will get exactly what you are seeing now, a communication error.
Since you configured HAProxy in TCP mode, this is the only option to get the client IP through to Traefik.
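A minimal sketch of the Traefik side, using Traefik v2 static-configuration syntax (the entrypoint names and the trusted network 10.0.1.0/24 are assumptions; note the traefik-1.81.0 chart above ships Traefik 1.7, whose syntax differs):

entryPoints:
  web:
    address: ":80"
    proxyProtocol:
      trustedIPs:
        - "10.0.1.0/24"   # the HAProxy address(es) allowed to send proxy-protocol headers
  websecure:
    address: ":443"
    proxyProtocol:
      trustedIPs:
        - "10.0.1.0/24"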
Option 2: X-Forwarded-For
I would personally use this option, because it makes it possible to connect to Traefik with any HTTP(S) client.
It requires HTTP mode in HAProxy and a few more parameters, such as option forwardfor.
# Ansible managed
defaults
    maxconn 1000
    mode http
    log global
    option dontlognull
    log stdout local0 debug
    option httplog
    timeout http-request 5s
    timeout connect 5s
    timeout client 200s
    timeout server 200s
    # send client ip in the x-forwarded-for header
    option forwardfor

frontend k8s
    bind *:6443 v4v6 alpn h2,http/1.1 ssl ca-file /etc/haproxy/letsencryptauthorityx3.pem crt /etc/ssl/haproxy/
    bind *:80 v4v6 alpn h2,http/1.1 ssl ca-file /etc/haproxy/letsencryptauthorityx3.pem crt /etc/ssl/haproxy/
    bind *:443 v4v6 alpn h2,http/1.1 ssl ca-file /etc/haproxy/letsencryptauthorityx3.pem crt /etc/ssl/haproxy/
    use_backend masters-k8s

backend masters-k8s
    balance roundrobin
    server master01 master01.k8s.int.ntw check
    server master02 master02.k8s.int.ntw check
# end Ansible managed
The file /etc/haproxy/letsencryptauthorityx3.pem contains the CAs for the backends; the directory /etc/ssl/haproxy/ contains the certificates for the frontends.
Please take a look at the documentation for the crt keyword.
You also have to configure Traefik to accept the forwarded headers from HAProxy (forwarded-headers).
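A minimal sketch of that Traefik setting, again in Traefik v2 static-configuration syntax (entrypoint name and trusted network are assumptions; adjust if you are still on the 1.7 chart):

entryPoints:
  web:
    address: ":80"
    forwardedHeaders:
      trustedIPs:
        - "10.0.1.0/24"   # only trust X-Forwarded-For from the HAProxy hosts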

Kubernetes pods can not make https request after deploying istio service mesh

I am exploring the istio service mesh on my k8s cluster hosted on EKS(Amazon).
I tried deploying istio-1.2.2 on a new k8s cluster with the demo.yml file used for the bookapp demonstration, and I understand most of the use cases properly.
Then I deployed Istio using the Helm default profile (recommended for production) on my existing dev cluster with 100s of microservices running, and what I noticed is that my services can call HTTP endpoints but are not able to call external secure endpoints (https://www.google.com, etc.).
I am getting :
curl: (35) error:1400410B:SSL routines:CONNECT_CR_SRVR_HELLO:wrong version number
Though I am able to call external https endpoints from my testing cluster.
To verify, I checked the egress policy and it is mode: ALLOW_ANY in both clusters.
Now I have removed Istio completely from my dev cluster and installed the demo.yml to test, but this is also not working anymore.
I tried to relate my issue to this, but without success:
https://discuss.istio.io/t/serviceentry-for-https-on-httpbin-org-resulting-in-connect-cr-srvr-hello-using-curl/2044
I don't understand what I am missing or what I am doing wrong.
Note: I am referring to this setup: https://istio.io/docs/setup/kubernetes/install/helm/
This is most likely a bug in Istio (see for example istio/istio#14520): if you have any Kubernetes Service object, anywhere in your cluster, that listens on port 443 but whose port name starts with http (not https), it will break all outbound HTTPS connections.
The instance of this I've hit involves configuring an AWS load balancer to do TLS termination. The Kubernetes Service needs to expose port 443 to configure the load balancer, but it receives plain unencrypted HTTP.
apiVersion: v1
kind: Service
metadata:
  name: breaks-istio
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-ssl-cert: arn:...
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: http
spec:
  selector: ...
  ports:
  - name: http-ssl # <<<< THIS NAME MATTERS
    port: 443
    targetPort: http
When I've experimented with this, changing that name: to either https or tcp-https seems to work. Those name prefixes are significant to Istio, but I haven't immediately found any functional difference between telling Istio the port is HTTPS (even though it doesn't actually serve TLS) vs. plain uninterpreted TCP.
You do need to search your cluster and find every Service that listens to port 443, and make sure the port name doesn't start with http-....
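A hedged way to do that search, assuming kubectl access to the cluster and jq installed locally; it simply lists every Service port 443 whose name begins with http:

kubectl get services --all-namespaces -o json \
  | jq -r '.items[] | . as $svc | .spec.ports[]?
           | select(.port == 443 and ((.name // "") | startswith("http")))
           | "\($svc.metadata.namespace)/\($svc.metadata.name): port name \(.name)"'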

exposing CockroachDB on Kubernetes to public IP

I have a CockroachDB instance running in a Kubernetes cluster on Google Kubernetes Engine. I am trying to expose port 26257 so I can connect to it from my local machine.
As stated in this answer, port forwarding to the pod will not work.
I have an nginx-ingress controller which is used to map from my domain name paths to services, so I tried to use that:
I changed my db-cockroachdb-public service from ClusterIP to NodePort:
type: NodePort
I added these lines to my nginx-controller YAML:
- name: postgresql
  nodePort: 30472
  port: 26257
  protocol: TCP
  targetPort: 26257
and these lines to my ingress YAML:
- host: db.mydomain.com
  http:
    paths:
    - path: /
      backend:
        serviceName: db-cockroachdb-public
        servicePort: 26257
However, I'm unable to connect to the database - connection gets refused. I also tried to disable SSL redirects in the nginx controller, but it still doesn't work.
I also tried a ConfigMap but it didn't do anything:
https://github.com/kubernetes/ingress-nginx/blob/master/docs/user-guide/exposing-tcp-udp-services.md
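For reference, a minimal sketch of what that ingress-nginx TCP ConfigMap looks like (the ingress-nginx namespace, the tcp-services name, and the default/db-cockroachdb-public service are assumptions; the controller also has to run with --tcp-services-configmap pointing at it, and port 26257 has to be exposed on the controller's own Service):

apiVersion: v1
kind: ConfigMap
metadata:
  name: tcp-services
  namespace: ingress-nginx
data:
  # external port 26257 -> <namespace>/<service>:<port>
  "26257": "default/db-cockroachdb-public:26257"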
There are a few ways to fix this. Most are related to changing your ingress configuration or how you're connecting to the service, which I'm not going to go into. Another option is to make port forwarding work to eliminate the need for the ingress machinery.
You can make port forwarding work by modifying the CockroachDB config file slightly. Change the name of the --host flag in the invocation of the Cockroach binary to be --advertise-host instead. That way, the process will listen on localhost in addition to on its hostname, which will make port forwarding work.
edit: To follow up on this, I've switched the default configuration in the CockroachDB repo to use --advertise-host instead of --host, so port forwarding works by default now.
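A hedged usage example of the port-forward path, assuming a StatefulSet pod named cockroachdb-0 and an insecure-mode deployment (adjust the pod name and TLS flags to your setup):

# forward the SQL port from the pod to the local machine
kubectl port-forward cockroachdb-0 26257:26257

# in another shell, connect through the forwarded port
cockroach sql --insecure --host=localhost --port=26257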
I don't know whether it technically should work to proxy CockroachDB through an nginx instance, but your setup fails for another reason: the servicePort in the rules section tells k8s which port of the service to forward to, while the ingress itself listens on ports 80/443 by default, not on your desired port. So you should just try port 80 in your case.

OpenShift route - Unable to connect to remote host: No route to host

I have deployed a gRPC service running on OpenShift Origin, backed by an OpenShift service and exposed with an OpenShift route. I am trying to make this pod available via a service and route that map the container port (50051) to the outside world on port 8080.
The image that the service is trying to expose has, in its Dockerfile:
EXPOSE 50051
The route has the following:
Service Port: 8080/TCP
Target Port: 50051
In the DeploymentConfig I specify the port with:
ports:
- containerPort: 50051
  protocol: TCP
However, when I try to access the application via the route and port, I get (from Java)
java.net.NoRouteToHostException: No route to host
And when I try to telnet the service IP:
telnet 172.30.197.247 8080
I am able to connect.
However, when I try to connect via the route it doesn't work:
telnet my.route.com 8080
Trying ...
telnet: connect to address : Connection refused
When I use:
curl -kv my-svc.myproject.svc.cluster.local:8080
I can connect.
So it seems the service is working but the route is not.
I have been going through the troubleshooting guide on https://docs.openshift.org/3.6/admin_guide/sdn_troubleshooting.html#debugging-the-router
The router setup in OpenShift focuses on HTTP/HTTPS (SNI)/TLS (SNI). However, it appears that you can use an externalIP to expose non-web application ports from the cluster. Because gRPC is an over-the-wire protocol, you might need to go down this path.
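A minimal sketch of that path; the service name, the selector app: my-grpc-app, and the address 192.0.2.10 are placeholders to adapt to your deployment:

apiVersion: v1
kind: Service
metadata:
  name: grpc-external
spec:
  selector:
    app: my-grpc-app          # placeholder: the label on the gRPC pods
  ports:
  - port: 8080                # port exposed on the external IP
    targetPort: 50051         # gRPC container port
    protocol: TCP
  externalIPs:
  - 192.0.2.10                # placeholder: an address that routes to a cluster node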
There are multiple things to check:
Does your route point to your service? Here is an example:
apiVersion: v1
kind: Route
spec:
  host: my.route.com
  to:
    kind: Service
    name: yourservice
    weight: 100
If it's not the case, the route and the service are not connected.
You can check the router configuration. Connect to your router with oc rsh and check whether you find your route name in /var/lib/haproxy/conf/haproxy.config (the backend name format should be backend be_http_NAMESPACE_ROUTENAME). The server part below the backend part should contain the IP of your pod (you can obtain your pod IP with the oc get pods -o wide command).
If that's not the case, the route is not registered in the router config. You can try to restart the router and recheck the haproxy.config file.
Can you connect to the pod IP from the router container?
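A sketch of those checks; the router pod name, project, route name, and pod IP below are placeholders to adapt to your cluster:

# find the router pod and open a shell in it
oc get pods -n default -o wide
oc rsh router-1-xxxxx

# inside the router container: look for the backend generated for your route
grep -A 10 'be_http_myproject_myroute' /var/lib/haproxy/conf/haproxy.config

# test whether a TCP connection to the pod IP and target port opens at all
curl -v telnet://10.128.0.10:50051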