Getting error "http: TLS handshake error from EOF" in kubernetes go program - kubernetes

I have a kubernetes pod configured as a webserver supporting https. This pod is producing the TLS handshake error logs. When we try to access the loadbalancer service IP in the browser, it gives the error "the connection is not secure, proceed to unsafe". For the secure connection we have a self-signed certificate mounted as a secret into the pod volume. If we remove support for https, everything works fine. Can somebody suggest what could be the possible reason for such behaviour?

By default an https connection exists only between the browser and the loadbalancer. The loadbalancer communicates with the pods using plain http:
browser -------------->|loadbalancer|-----------> POD
            https                        http
In that case, the certificate needs to be present on the loadbalancer, not on the POD, and you should disable HTTPS on the pod.
The loadbalancer can be configured to communicate with PODs using https, but it will be a different https connection:
browser -------------->|loadbalancer|-----------> POD
            https                        https
Here two certificates are needed, one on the loadbalancer and one on the pod itself.
The last option is pass-through SSL, but it's not enabled by default:
                        loadbalancer
browser --------------|--------------|-----------> POD
                          https
Here the certificate should be placed on the pod.
How to configure HTTPS depends on the loadbalancer being used, the cloud provider, etc. If you are using Ingress, this page might help: Kubernetes: Using Ingress with SSL/TLS termination and HTTP/2
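For example, with option 1 on AWS's in-tree loadbalancer integration, TLS termination can be configured through Service annotations. This is only a sketch; the certificate ARN is a placeholder and other cloud providers use different annotations:
apiVersion: v1
kind: Service
metadata:
  name: web
  annotations:
    # certificate lives on the loadbalancer (placeholder ACM ARN)
    service.beta.kubernetes.io/aws-load-balancer-ssl-cert: arn:aws:acm:...
    # the loadbalancer talks plain http to the pods
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: http
    service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "443"
spec:
  type: LoadBalancer
  selector:
    app: web
  ports:
  - name: https
    port: 443
    targetPort: 8080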
Sidenote: browsers always complain about an insecure connection when using a self-signed certificate (unless you configure them not to).

HTTP/2 client preface string missing or corrupt for gRPC client in Kubernetes making call to local service using Telepresence

I am trying to prepare an environment for integration testing of a Spring Boot application running inside a Kubernetes cluster. I am using Telepresence, which intercepts the traffic (gRPC APIs) in the Kubernetes cluster and routes it to the application running locally in my IDE (IntelliJ).
The Spring Boot application in Kubernetes listens for gRPC calls on port 9090 and is exposed via a ClusterIP service. I am trying to intercept gRPC traffic to this application running in Kubernetes and route it to the locally running application, which listens on port 9095, using the Telepresence intercept command below:
telepresence intercept service-name --namespace=ns --port 9095:9090 --env-file C:\Users\SC1063\telepresence\env_files\intercept-config.env
My local application, on receiving the gRPC call, throws the following exception:
io.grpc.netty.shaded.io.netty.handler.codec.http2.Http2Exception: HTTP/2 client preface string missing or corrupt. Hex dump for received bytes: 1603010200010001fc0303ffd1d5efdfb5771b509014337a
From the question Spring boot + GRPC Http2Exception I understand that the call from the client application running in Kubernetes is trying to secure the communication using TLS, whereas the non-intercepted gRPC calls within Kubernetes work without any problem.
Application environment uses Istio for service mesh.
Error observed in the client logs
upstream connect error or disconnect/reset before headers. retried and the latest reset reason: connection failure, transport failure reason: TLS error: 268435703:SSL routines:OPENSSL_internal:WRONG_VERSION_NUMBER]
The root cause of the issue is that the client is applying TLS before sending the request to the server, whereas the server is expecting PLAINTEXT.
The Istio service mesh secures external outbound traffic (traffic flowing outside the K8s cluster) with TLS unless DISABLED.
Create an Istio DestinationRule CRD, which is used by the Envoy proxy, to DISABLE TLS while routing the traffic:
spec:
  trafficPolicy:
    tls:
      mode: DISABLE
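A fuller sketch of such a DestinationRule (the resource name is arbitrary and the host is assumed from the service name used in the intercept command above):
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: disable-tls
  namespace: ns
spec:
  # fully qualified name of the intercepted service (assumed)
  host: service-name.ns.svc.cluster.local
  trafficPolicy:
    tls:
      # send plaintext to this destination instead of Istio mTLS
      mode: DISABLE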

Trying to make a reverse proxy to allow IPs on our DNS to get certificates inside of Kubernetes

I'm trying to set up a reverse proxy to work with Kubernetes. I currently have an ingress load-balancer using Metallb and Contour with Envoy.
I also have a working certificate issuer with Let's Encrypt and cert-manager allowing services and deployments to get certificates for HTTPS.
My problem is trying to get other websites and servers that are not run in Kubernetes, but are in our DNS range, to have HTTPS certificates, and I feel like I am missing something.
My IP for my load-balancer is 10.64.1.35 while the website I am trying to get a certificate for is 10.64.0.145.
Thank you if you could offer any help!
I think that will never work. Something needs to request a certificate; in Kubernetes this is usually triggered by the presence of a resource. cert-manager listens for the creation of that resource and requests a certificate from Let's Encrypt.
Then that certificate must be configured on some loadbalancer, and the loadbalancer must reload its configuration (that's what Metallb does).
When you have applications running elsewhere outside of this setup, those applications will never have certificates.
If you really want to have that Metallb loadbalancer request and attach the certificates, you'll need to create a resource in kubernetes and proxy all the traffic for that application through kubernetes.
myapp.com -> metallb -> kubernetes -> VPS
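If you do go that route, a rough sketch of what the proxying could look like: a selector-less Service plus manual Endpoints pointing at the external server, fronted by an Ingress that cert-manager can issue a certificate for. The names, the host myapp.com, the port, and the letsencrypt issuer name are assumptions; the IP is the one from the question:
apiVersion: v1
kind: Service
metadata:
  name: myapp-external
spec:
  ports:
  - port: 80
    targetPort: 80
---
apiVersion: v1
kind: Endpoints
metadata:
  name: myapp-external        # must match the Service name
subsets:
- addresses:
  - ip: 10.64.0.145           # the server outside the cluster
  ports:
  - port: 80
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: myapp-external
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt   # assumed issuer name
spec:
  tls:
  - hosts:
    - myapp.com
    secretName: myapp-com-tls
  rules:
  - host: myapp.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: myapp-external
            port:
              number: 80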
However, I think the better way for you is to set up Let's Encrypt on the server where you need it. That way you avoid 2 additional network hops and save resources on the Metallb and Kubernetes server(s).

Need help in configuring a simple TLS/SSL within k8s cluster for pod to pod communication over https

I need help on how to configure TLS/SSL on a k8s cluster for internal pod-to-pod communication over https. I am able to curl http://servicename:port over http, but for https I am ending up with an NSS error on the client pod.
I generated a self-signed cert with CN=*.svc.cluster.local (as all the services in k8s end with this) and I am stuck on how to configure it in k8s.
Note: I exposed the main svc on port 8443 and I am doing this in my local Docker Desktop setup on a Windows machine.
No Ingress --> Because communication happens within the cluster itself.
Without any CRD (custom resource definition) or cert-manager:
You can store your self-signed certificate inside a Kubernetes secret and mount it into a volume of the pod.
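For example, a minimal sketch of that approach (the secret name, image, mount path, and base64 payloads are placeholders):
apiVersion: v1
kind: Secret
metadata:
  name: internal-tls
type: kubernetes.io/tls
data:
  tls.crt: <base64-encoded certificate for *.svc.cluster.local>
  tls.key: <base64-encoded private key>
---
apiVersion: v1
kind: Pod
metadata:
  name: server
  labels:
    app: server
spec:
  containers:
  - name: app
    image: my-server:latest        # placeholder image serving TLS on 8443
    ports:
    - containerPort: 8443
    volumeMounts:
    - name: tls
      mountPath: /etc/tls          # app reads tls.crt and tls.key from here
      readOnly: true
  volumes:
  - name: tls
    secret:
      secretName: internal-tls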
If you don't want to use a CRD or cert-manager, you can use the native Kubernetes API to generate the certificate, which will be trusted by all the pods by default.
https://kubernetes.io/docs/tasks/tls/managing-tls-in-a-cluster/
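A rough sketch of the CertificateSigningRequest object used in that task (the resource name, the signer example.com/serving, and the base64-encoded CSR are placeholders/assumptions):
apiVersion: certificates.k8s.io/v1
kind: CertificateSigningRequest
metadata:
  name: my-svc.my-namespace
spec:
  # base64-encoded PEM CSR for my-svc.my-namespace.svc.cluster.local
  request: <base64-encoded CSR>
  signerName: example.com/serving   # assumed custom signer
  usages:
  - digital signature
  - key encipherment
  - server auth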
Managing the self-signed certificate across all pods and services might be hard, so I would suggest using a service mesh. A service mesh encrypts the network traffic using mTLS.
https://linkerd.io/2.10/features/automatic-mtls/
In a service mesh, mutual TLS for service-to-service communication is managed by the sidecar containers.
https://istio.io/latest/docs/tasks/security/authentication/mtls-migration/
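With Istio, for example, strict mTLS for a namespace can be declared with a PeerAuthentication resource (the namespace name is an assumption):
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: my-namespace   # namespace with sidecar injection enabled
spec:
  mtls:
    mode: STRICT            # reject plaintext traffic between meshed pods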
In this case, no Ingress and no cert-manager are required.

Istio | TLS mutual-auth without using Istio ingress gateway

I want to achieve TLS mutual auth between my different services running in a kubernetes cluster and I have found that Istio is a good solution to achieve this without making any changes in code.
I am trying to use Istio sidecar injection to do TLS mutual auth between services running inside the cluster.
Outside traffic enters the mesh through the nginx ingress controller. We want to keep using it instead of the Istio ingress controller (we want to make as few changes as possible).
The services are able to communicate with each other properly when Istio sidecar injection is disabled. But as soon as I enable the sidecar in the application's namespace, the app is no longer able to serve requests (I am guessing the incoming requests are dropped by the Envoy sidecar proxy).
What I want to do:
Enable Istio sidecar proxy injection on namespace-2 (nginx ingress controller, service 1 and service 2) so that all services communicate with each other through TLS mutual auth.
What I don't want to do:
Enable Istio sidecar proxy injection on the nginx ingress controller (I don't want to make any changes to it, as it serves as a frontend for multiple other workloads).
I have been trying to make it work for a couple of weeks with no luck. Any help from the community will be greatly appreciated.
My goal is to at least enable TLS mutual auth between service-1 and service-2.
AFAIK, if you have enabled injection in namespace-2, then the services there already have mTLS enabled. It's enabled by default since Istio version 1.5. There are related docs about this.
Automatic mutual TLS is now enabled by default. Traffic between sidecars is automatically configured as mutual TLS. You can disable this explicitly if you worry about the encryption overhead by adding the option --set values.global.mtls.auto=false during install. For more details, refer to automatic mutual TLS.
Take a look here for more information about how mtls between services works.
Mutual TLS in Istio
Istio offers mutual TLS as a solution for service-to-service authentication.
Istio uses the sidecar pattern, meaning that each application container has a sidecar Envoy proxy container running beside it in the same pod.
When a service receives or sends network traffic, the traffic always goes through the Envoy proxies first.
When mTLS is enabled between two services, the client side and server side Envoy proxies verify each other’s identities before sending requests.
If the verification is successful, then the client-side proxy encrypts the traffic, and sends it to the server-side proxy.
The server-side proxy decrypts the traffic and forwards it locally to the actual destination service.
NGINX
But the problem is, the traffic from outside the mesh is getting terminated at the ingress resource. The nginx reverse proxy in namespace-2 does not see the incoming calls.
I see there is a similar issue on GitHub about that; it is worth trying this.
Answer provided by #stono.
Hey,
This is not an Istio issue; getting nginx to work with Istio is a little bit difficult. The issue is that fundamentally nginx is making an outbound request to an IP that it has resolved from your hostname foo-bar. This won't work, as Envoy doesn't know what cluster that IP belongs to, so it fails.
I'd suggest using the ingress-nginx kubernetes project and in turn using the following value in your Ingress configuration:
annotations:
  nginx.ingress.kubernetes.io/service-upstream: "true"
What this does is ensure that nginx doesn't resolve the upstream address to an IP, and maintains the correct Host header, which the sidecar uses in order to route to your destination.
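For reference, a minimal sketch of an Ingress carrying that annotation (the host, service name, port, and ingressClassName are placeholders):
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: foo-bar
  annotations:
    # keep the Service's cluster IP as the upstream instead of pod IPs,
    # so the correct Host header reaches the sidecar
    nginx.ingress.kubernetes.io/service-upstream: "true"
spec:
  ingressClassName: nginx
  rules:
  - host: foo-bar.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: foo-bar
            port:
              number: 80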
I recommend using this project because I use it, with Istio, with a 240 odd service deployment.
If you're not using ingress-nginx, I think you can set proxy_ssl_server_name on; or another thing you could try is forcefully setting the Host header on the outbound request to the internal fqdn of the service so:
proxy_set_header Host foo-bar;
Hope this helps, but as I say, it's an nginx configuration issue rather than an Istio problem.

Certificates for services

Moving from VMs to Kubernetes.
We are running our services on multiple VMs, with a VIP in front of them. Clients access the VIP, and the VIP routes traffic to the services. Here we use an SSL cert for the VIP, and the VIP-to-VM connection also uses HTTPS.
Each service is deployed to a VM with a JKS file. This JKS file has a cert for exposing HTTPS and also for communicating with the SSL-enabled database.
How can we achieve the same thing in a Kubernetes cluster? We need HTTPS for the VIP and the services, and also for communication from the services to the SSL-enabled database.
Depending on the platform where you are running Kubernetes (on-premises, AWS, GKE, GCE, etc.), you have several ways to do it, but I will describe a solution that will work on all platforms: an Ingress with HTTPS termination on it.
So, in Kubernetes you can provide access to your application inside a cluster using an Ingress object. It can provide load balancing, HTTPS termination, routing by path, etc. In most cases, you can use an Ingress controller based on Nginx. It also provides TCP load balancing and SSL passthrough if you need them.
To provide routing from users to your services, you need to:
1. Deploy your application as a combination of Pods and a Service for them.
2. Deploy an Ingress controller, which will manage your Ingress objects.
3. Create a secret for your certificate.
4. Create an Ingress object that points to your service, with TLS settings that ask Ingress to use the secret with your certificate, like this:
spec:
  tls:
  - hosts:
    - foo.bar.com
    secretName: foo-secret
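The secret referenced by secretName (step 3 above) is a standard TLS secret (base64 payloads elided):
apiVersion: v1
kind: Secret
metadata:
  name: foo-secret
type: kubernetes.io/tls
data:
  tls.crt: <base64-encoded certificate for foo.bar.com>
  tls.key: <base64-encoded private key>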
Now, when you call the foo.bar.com address, Ingress will use FQDN-based routing and provide an HTTPS connection between your client and the pods in the cluster, using a Service object which knows where exactly your pods are. You can read how it works here and here.
As for encrypted communication between your services inside the cluster: you can use the same scheme with secrets to provide SSL keys to all your services, and set up the Service to use the HTTPS endpoint of an application instead of HTTP. Technically it is the same as using an https upstream in installations without Kubernetes, but all the configuration for Nginx will be provided automatically based on your Service and Ingress object configuration.
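A sketch of that last point: the Service simply targets the application's HTTPS port, so in-cluster clients call it over https (the names and ports are assumptions):
apiVersion: v1
kind: Service
metadata:
  name: backend
spec:
  selector:
    app: backend
  ports:
  - name: https
    port: 443          # other pods call https://backend:443
    targetPort: 8443   # container port where the app serves TLS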