Real Client IP for TCP services - Nginx Ingress Controller - Kubernetes

We have HTTP and TCP services behind the Nginx Ingress Controller. The HTTP services are configured through an Ingress object; when a request comes in, a configuration snippet generates a header (Client-Id) and passes it to the service.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-pre
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/configuration-snippet: |
      if ($host ~ ^(?<client>[^\..]+)\.(?<app>pre|presaas).(host1|host2).com$) {
        more_set_input_headers 'Client-Id: $client';
      }
spec:
  tls:
  - hosts:
    - "*.pre.host1.com"
    secretName: pre-host1
  - hosts:
    - "*.presaas.host2.com"
    secretName: presaas-host2
  rules:
  - host: "*.pre.host1.com"
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: service-front
            port:
              number: 80
  - host: "*.presaas.host2.com"
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: service-front
            port:
              number: 80
The TCP service is configured to be reached directly, which is done through a ConfigMap; its clients connect over a plain TCP socket.
apiVersion: v1
data:
  "12345": pre/service-back:12345
kind: ConfigMap
metadata:
  name: tcp-service
  namespace: ingress-nginx
All of this config works fine: the TCP clients connect through their TCP sockets and the users connect over HTTP. The problem is that when the TCP clients establish the connection, they get the source IP address (their own IP, or $remote_addr in Nginx terms) and report it back to an admin endpoint, where it is shown in a dashboard. So there is a dashboard listing all the connected TCP clients with their IP addresses. What happens now is that all those IP addresses, instead of being the clients' own, are the address of the Ingress Controller pod.
I set use-proxy-protocol: "true", and it seems to resolve the issue for the TCP connections, as in the logs I can see different external IP addresses connecting. But now the HTTP services do not work, including the dashboard itself. These are the logs:
while reading PROXY protocol, client: 1.2.3.4, server: 0.0.0.0:80
2022/04/04 09:00:13 [error] 35#35: *5273 broken header: "��d�hԓ�:�����ӝp��E�L_"�����4�<����0�,�(�$��
����kjih9876�w�s��������" while reading PROXY protocol, client: 1.2.3.4, server: 0.0.0.0:443
I know the broken header logs come from the HTTP services: if I telnet to the HTTP port I get the broken header, and if I telnet to the TCP port I get clean logs with what I expect.
I hope the issue is clear. What I need is a way to configure the Nginx Ingress Controller to serve both HTTP and TCP services. I don't know if I can set the use-proxy-protocol: "true" parameter for only one service; it seems to be a global parameter.
For now, the solution we are considering is to set up a new Network Load Balancer (this is running in an AWS EKS cluster) just for the TCP service, and leave the HTTP services behind the Ingress Controller.

To solve this issue, go to the NLB target groups and enable proxy protocol version 2 in the Attributes tab: Network LB >> Listeners >> TCP80/TCP443 >> select Target Group >> Attributes tab >> Enable proxy protocol v2. With use-proxy-protocol: "true" set, nginx expects a PROXY protocol header on every incoming connection, so the load balancer in front of it has to send one as well; otherwise the raw TLS/HTTP bytes are misread as the "broken header" seen in the logs.
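If the NLB in front of ingress-nginx is provisioned by the controller's own Service rather than managed by hand, the same setting can usually be expressed declaratively. A sketch, assuming the AWS Load Balancer Controller (or in-tree AWS provider) handles the annotations, that the Service name and selector below match your installation, and that use-proxy-protocol: "true" stays set in the controller ConfigMap:
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx-controller      # assumed name, adjust to your release
  namespace: ingress-nginx
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-type: "nlb"
    # ask AWS to enable PROXY protocol v2 on the target groups
    service.beta.kubernetes.io/aws-load-balancer-proxy-protocol: "*"
spec:
  type: LoadBalancer
  selector:
    app.kubernetes.io/name: ingress-nginx   # assumed selector
  ports:
  - name: http
    port: 80
    targetPort: 80
  - name: https
    port: 443
    targetPort: 443
  - name: tcp-12345                    # TCP service exposed via the tcp-service ConfigMap
    port: 12345
    targetPort: 12345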

Related

Kubernetes expose a service on a port over tls

I have my application https://myapp.com deployed on K8s, with an nginx ingress controller. TLS is terminated at nginx.
Now there is a need to expose one service on a specific port, for example https://myapp.com:8888. The idea is to keep https://myapp.com secured inside the private network and expose only port 8888 to the internet for integration.
Is there a way for all traffic to be handled by the ingress controller, including TLS termination, while also exposing port 8888 and mapping it to a service?
Or
Do I need another nginx terminating TLS and exposed on a NodePort? I am not sure whether I can access services like https://myapp.com:<node_port> over HTTPS.
Is using multiple ingress controllers an option?
What is the best practice to do this in Kubernetes?
Use the sidecar proxy pattern to add HTTPS support to the application running inside the pod.
Run nginx as a sidecar proxy container fronting the application container inside the same pod. Access the application through port 8888 on the nginx proxy; nginx routes the traffic to the application.
The post below shows how it can be implemented:
https://vorozhko.net/kubernetes-sidecar-pattern-nginx-ssl-proxy-for-nodejs
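A minimal sketch of that pattern, assuming a hypothetical myapp container listening on localhost:8080 and an nginx sidecar terminating TLS on 8888 (the nginx.conf and certificate would come from a ConfigMap and Secret of your own):
apiVersion: v1
kind: Pod
metadata:
  name: myapp-with-tls-sidecar
spec:
  containers:
  - name: myapp                       # hypothetical application container
    image: myapp:latest               # placeholder image
    ports:
    - containerPort: 8080             # plain HTTP, only reachable inside the pod
  - name: nginx-tls-proxy             # sidecar terminating TLS on 8888
    image: nginx:stable
    ports:
    - containerPort: 8888
    volumeMounts:
    - name: nginx-conf
      mountPath: /etc/nginx/conf.d
    - name: tls-certs
      mountPath: /etc/nginx/certs
  volumes:
  - name: nginx-conf
    configMap:
      name: nginx-proxy-conf          # would hold a server block proxying :8888 -> localhost:8080
  - name: tls-certs
    secret:
      secretName: myapp-tls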
It is not best practice to expose a custom port over the internet.
Instead, create a sub-domain (e.g. https://custom.myapp.com) that points to the internal service on port 8888.
Then create a separate nginx Ingress (not an ingress controller) that points to that custom.myapp.com sub-domain.
Example manifest file as follow:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-myapp-service
  namespace: abc
spec:
  rules:
  - host: custom.myapp.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: myapp-service
            port:
              number: 8888
Hope this helps.
So you have service foo on some port, which you want to have available on your internal network, and service bar, which runs on port 8888 in that same pod.
It's as simple as setting up two Services pointing at that pod, with different spec.ports[].targetPort values. My example assumes a svc foo pointing at port 80, and a svc bar pointing at port 8888 on the pod.
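A sketch of those two Services, assuming the pod is labelled app: myapp and the containers listen on 80 and 8888:
apiVersion: v1
kind: Service
metadata:
  name: foo
spec:
  selector:
    app: myapp          # assumed pod label
  ports:
  - port: 80
    targetPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: bar
spec:
  selector:
    app: myapp          # assumed pod label
  ports:
  - port: 80            # service port referenced by the Ingress below
    targetPort: 8888    # container port inside the pod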
Take care that, generally, the ingress controller only serves HTTP and HTTPS connections on ports 80 and 443. That is a network setting generally defined for the nodes running the ingress controller; TCP/UDP are not served out of the box by the ingress controller.
My advice is to use something like the following, and use the path to expose the required service.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-wildcard-host
spec:
  rules:
  - host: "myapp.com"
    http:
      paths:
      - pathType: Prefix
        path: "/"
        backend:
          service:
            name: foo
            port:
              number: 80
      - pathType: Prefix
        path: "/bar"
        backend:
          service:
            name: bar
            port:
              number: 80
If you want to further secure your network, you should probably take a look at NetworkPolicies. They allow granular access configuration for pods and services. You can, for example, only allow external ingress to that pod on port 8888, as in the sketch below.
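A sketch of such a policy, assuming the pod is labelled app: myapp; traffic to any other port on the selected pod is dropped:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-only-8888
spec:
  podSelector:
    matchLabels:
      app: myapp            # assumed pod label
  policyTypes:
  - Ingress
  ingress:
  - ports:
    - protocol: TCP
      port: 8888            # only ingress to port 8888 is permitted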

How do I properly HTTPS secure an application when using Istio?

I'm currently trying to wrap my head around what the typical application flow looks like for a Kubernetes application in combination with Istio.
So, for my app I have an ASP.NET application hosted within a Kubernetes cluster, and I added Istio on top. Here is my Gateway & VirtualService:
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: appgateway
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "*"
    tls:
      httpsRedirect: true
  - port:
      number: 443
      name: https
      protocol: HTTPS
    tls:
      mode: SIMPLE
      serverCertificate: /etc/istio/ingressgateway-certs/tls.crt
      privateKey: /etc/istio/ingressgateway-certs/tls.key
    hosts:
    - "*"
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: appvservice
spec:
  hosts:
  - "*"
  gateways:
  - appgateway
  tls:
  - match:
    - port: 443
      sniHosts:
      - "*"
    route:
    - destination:
        host: frontendservice.default.svc.cluster.local
        port:
          number: 443
This is what I came up with after reading through the Istio documentation.
Note that my frontendservice is a very basic ClusterIP service routing to an ASP.NET application, which also offers the standard 80 / 443 ports.
I have a few questions now:
Is this the proper approach to securing my application? In essence I want to redirect incoming traffic on port 80 straight to HTTPS-enabled 443 right at the edge. However, when I try this, there's no redirect happening on port 80 at all.
Also, the tls route on my VirtualService does not work; there's just no traffic ending up on my pod.
I'm also wondering: is it even necessary to manually add HTTPS to my internal applications, or is this something where Istio's internal CA functionality comes in?
I have imagined it to work like this:
A request comes in. If it's on port 80, send a redirect to the client so that it retries over HTTPS. If it's on port 443, allow the request.
The VirtualService provides the instructions for what should happen with requests on port 443 and forwards them to the service.
The service now forwards the request to my app's 443 port.
Thanks in advance - I'm just learning Istio, and I'm a bit baffled why my seemingly proper setup does not work here.
Your Gateway terminates TLS connections, but your VirtualService is configured to accept unterminated TLS connections with TLSRoute.
Compare the example without TLS termination and the example which terminates TLS. Most probably, the "default" setup would be to terminate the TLS connection and configure the VirtualService with an HTTPRoute.
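A sketch of that terminated-TLS variant, assuming the Gateway above keeps its SIMPLE TLS server and that frontendservice serves plain HTTP on port 80:
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: appvservice
spec:
  hosts:
  - "*"
  gateways:
  - appgateway
  http:
  - route:
    - destination:
        host: frontendservice.default.svc.cluster.local
        port:
          number: 80    # assumed plain-HTTP port on the service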
We are also using a similar setup.
SSL is terminated on the ingress gateway, but we use mTLS mode via the Gateway CR.
Services are listening on non-SSL ports, but the sidecars use mTLS between them, so any container without a sidecar cannot talk to a service.
The VirtualService routes to the non-SSL port of the service.
The Sidecar CR intercepts traffic going to and from the non-SSL port of the service.
PeerAuthentication sets mTLS between sidecars.
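For reference, a sketch of the PeerAuthentication resource that enforces mTLS between sidecars (mesh-wide or namespace-wide depending on where it is applied):
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: istio-system    # the root namespace makes the policy mesh-wide
spec:
  mtls:
    mode: STRICT             # only mTLS traffic between sidecars is accepted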

Traefik behind ssl terminating load balancer return 404

I have a K8s setup with traefik exposed like this:
kubernetes:
  ingressClass: traefik
service:
  nodePorts:
    http: 32080
serviceType: NodePort
Behind it, I forward some requests to different services:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-name
  annotations:
    kubernetes.io/ingress.class: traefik
spec:
  rules:
  - host: my-host.com
    http:
      paths:
      - path: /my-first-path
        backend:
          serviceName: my-nodeJs-services
          servicePort: 3000
When the DNS is set to resolve directly to my IP, the application works fine over HTTP:
http://my-host.com:32080/my-first-path
But when someone adds SSL through AWS ALB / API Gateway, the application cannot be reached and returns a 404 Not Found error.
The route is like this
https://my-host.com/my-first-path
On the AWS side, they configured something like this:
https://my-host.com => SSL termination => forward everything to 43.43.43.43:32080
I think this fails because traefik is expecting http://my-host.com rather than https://my-host.com, which leads to its failure to find the matching route? Or maybe the hostname is lost at SSL termination time, so that traefik cannot find a route?
What should I do in this situation?
I am not very familiar with ALB, but what is probably happening is that the requests received by the load balancer contain the header Host: my-host.com, and when they get forwarded to your ingress controller, the header is replaced by Host: 43.43.43.43. If this is the case, I see three solutions:
ALB might be able to pass the original Host header to the target (you will have to check the documentation to see if this is possible).
If the application behind your ingress doesn't check the Host header, you can write an Ingress rule that doesn't specify a host at all, so it matches any Host header (see the sketch after this list).
If name resolution works internally, you can define a name for your target and use that name both in your ALB and in your ingress.
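A sketch of option 2, reusing the question's extensions/v1beta1 Ingress and backend but omitting the host field so the rule matches any Host header:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-name-any-host
  annotations:
    kubernetes.io/ingress.class: traefik
spec:
  rules:
  - http:
      paths:
      - path: /my-first-path
        backend:
          serviceName: my-nodeJs-services
          servicePort: 3000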

OpenShift Service Proxy timeout

I have an application deployed on OpenShift Container Platform v3.6. It consists of multiple services interconnected to each other.
The frontend service calls a time-consuming function of the backend service (through a REST call), but after 30 seconds it receives a "504 Gateway Timeout" message. The frontend runs behind nginx, but I've already configured it with long proxy send/read timeouts, so the 504 message doesn't come from there. I think it comes from the Service Proxy component of the OpenShift platform, but I can't find out where and how to configure a service-proxy timeout. I know about the HAProxy timeout for external routes, but my services live in the same cluster application and communicate with each other via OpenShift Container Platform DNS.
Could this be a Service Proxy timeout issue? How can it be configured?
Thanks!
Your route timeout is the culprit: the HAProxy ingress router is terminating the request. You can configure the timeout by following the docs below:
https://docs.openshift.com/container-platform/3.5/install_config/configuring_routing.html
For example:
# Set the timeout on 'longrunningroute' to five minutes.
oc annotate route longrunningroute --overwrite haproxy.router.openshift.io/timeout=5m
In my case I didn't annotate the route myself but added the annotation to the Ingress.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example
  namespace: my-namespace
  annotations:
    haproxy.router.openshift.io/timeout: 600s
spec:
  tls:
  - hosts:
    - example.com
    secretName: https-tls-secret
  rules:
  - host: example.com
    http:
      paths:
      - path: /testpath
        pathType: Prefix
        backend:
          service:
            name: test
            port:
              number: 80
The routes are managed by the ingress and therefore inherit the annotations from it.

Failed to access perforce server when using ingress(Kubernetes) to route the service

I am getting "partner is not a Perforce client/server" when using an Ingress to route the service, but I am able to query the Perforce server directly from inside the Kubernetes cluster.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress-notls
  namespace: default
  annotations:
    kubernetes.io/ingress.class: "gce"
spec:
  rules:
  - host: perforce.domain.com
    http:
      paths:
      - path: /*
        backend:
          serviceName: p4-server
          servicePort: 80
The p4 Service:
apiVersion: v1
kind: Service
metadata:
  name: p4-server
spec:
  type: NodePort
  ports:
  - port: 80
    targetPort: 1666
    nodePort: 30166
    name: p4-server
  selector:
    run: p4-server
If I am inside the cluster:
$ p4 -p p4-server:80 info
User name: root
Client name: platform-3101934619-wtxs5
Client host: platform-3101934619-wtxs5
Client unknown.
Current directory: /
Peer address: 10.4.0.218:49924
Client address: 10.4.0.218
Server address: p4-server-1400441787-fcmd9:1666
Server root: /codelingo
Server date: 2017/10/04 02:19:17 +0000 UTC
Server uptime: 380:53:52
Server version: P4D/LINUX26X86_64/2017.1/1511680 (2017/05/05)
Server license: none
Case Handling: sensitive
p4 logs:
Perforce server info:
2017/10/04 02:19:17 pid 23038 root#platform-3101934619-wtxs5 10.4.0.218 [p4/2017.1/LINUX26X86_64/1511680] 'user-info'
Failed attempt via ingress:
$ p4 -p perforce.domain.com:80 info
(hangs)
p4 logs:
Perforce server error:
Date 2017/10/04 02:18:30:
Pid 23012
Connection from 10.4.0.1:38622 broken.
RpcTransport: partner is not a Perforce client/server.
RpcTransport: partner is not a Perforce client/server.
RpcTransport: partner is not a Perforce client/server.
Peer address: 10.4.0.218:49924
looks suspiciously like a bi-directional protocol, meaning that client and server expect to have unfettered access to one another, à la (non-passive mode) FTP.
http:
  paths:
  - path: /*
I don't believe that http: stanza is an accurate statement, as I doubt super, super seriously that Perforce speaks HTTP between the client and the server. There are ongoing discussions around teaching Ingress about TCP, but for the time being I think you've gotten most of the way to where you want to go by already having a NodePort for :1666.
Create a GCE TCP load balancer (which effectively is just a firewall to keep the wild Internet away from your cluster) and point its 1666 to port 30166 on every Node in your cluster. It's unclear if anything further needs to happen around Perforce, but from the "establishing TCP/IP connectivity between outsiders and your in-cluster P4" point of view, I think that would do it.
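One common way to get that load balancer provisioned from inside the cluster is a second Service of type LoadBalancer in front of the same pods. A sketch, reusing the run: p4-server selector and the 1666 target port from the question's Service:
apiVersion: v1
kind: Service
metadata:
  name: p4-server-lb
spec:
  type: LoadBalancer        # on GKE this provisions a GCE network (TCP) load balancer
  ports:
  - name: p4
    port: 1666              # external Perforce port
    targetPort: 1666        # p4d port inside the pod
  selector:
    run: p4-server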