Some pitfalls of mTLS traffic organization in an internal mesh (Istio) on Kubernetes

Actually, I have two questions, but first, I've only just started studying Istio...
Is it possible to enable mTLS authentication on an egress pod?
I tried applying a PeerAuthentication in STRICT mTLS mode on the egress service, but Envoy still allows HTTP traffic.
Also, is it possible to apply a DestinationRule to HTTP traffic coming from the ingress to make it mTLS?
I also tried applying the DestinationRule on the ingress service with mode ISTIO_MUTUAL or MUTUAL (with a custom secret), but the traffic is still HTTP... Could this be related to Citadel (enabled/disabled)?
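For reference, a DestinationRule of the kind described above would look roughly like this (the host and namespace names are placeholders, not taken from the question):

```yaml
# Hypothetical sketch: ask client sidecars to originate Istio mTLS
# for all traffic sent to my-service (names are placeholders).
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: my-service-mtls
  namespace: my-namespace
spec:
  host: my-service.my-namespace.svc.cluster.local
  trafficPolicy:
    tls:
      mode: ISTIO_MUTUAL   # MUTUAL instead, with clientCertificate/privateKey/caCertificates, for custom certs
```

Note that a DestinationRule only controls client-side TLS origination; whether the server side actually requires mTLS is governed by PeerAuthentication, which may explain why plain HTTP still gets through.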

Related

Istio | TLS mutual-auth without using Istio ingress gateway

I want to achieve TLS mutual auth between my different services running in a kubernetes cluster and I have found that Istio is a good solution to achieve this without making any changes in code.
I am trying to use Istio sidecar injection to do TLS mutual auth between services running inside the cluster.
Outside traffic enters the mesh through the nginx ingress controller. We want to keep using it instead of the Istio ingress controller (we want to make as few changes as possible).
The services are able to communicate with each other properly when Istio sidecar injection is disabled. But as soon as I enable the sidecar in the application's namespace, the app is no longer able to serve requests (I am guessing the incoming requests are dropped by the Envoy sidecar proxy).
What I want to do:
Enable istio sidecar proxy injection on namespace-2(nginx ingress controller, service 1 and service 2) so that all services communicate with each other through TLS mutual auth.
What I don't want to do:
Enable istio sidecar proxy injection on the nginx ingress controller(I don't want to make any changes in it as it is serving as frontend for multiple other workloads).
I have been trying to make it work for a couple of weeks with no luck. Any help from the community will be greatly appreciated.
My goal is to at least enable TLS mutual auth between service-1 and service-2.
AFAIK, if you have enabled injection in namespace-2, then the services there already have mTLS enabled. It has been enabled by default since Istio 1.5. There are related docs about this:
Automatic mutual TLS is now enabled by default. Traffic between sidecars is automatically configured as mutual TLS. You can disable this explicitly if you worry about the encryption overhead by adding the option --set values.global.mtls.auto=false during install. For more details, refer to automatic mutual TLS.
Take a look here for more information about how mTLS between services works.
Mutual TLS in Istio
Istio offers mutual TLS as a solution for service-to-service authentication.
Istio uses the sidecar pattern, meaning that each application container has a sidecar Envoy proxy container running beside it in the same pod.
When a service receives or sends network traffic, the traffic always
goes through the Envoy proxies first.
When mTLS is enabled between two services, the client side and server side Envoy proxies verify each other’s identities before sending requests.
If the verification is successful, then the client-side proxy encrypts the traffic, and sends it to the server-side proxy.
The server-side proxy decrypts the traffic and forwards it locally to the actual destination service.
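If you want to go beyond the automatic (permissive) behaviour and require mTLS for every workload in the namespace, a namespace-wide PeerAuthentication does that; a minimal sketch, assuming the namespace is called namespace-2 as in the question:

```yaml
# Require (not just permit) mTLS for all workloads in namespace-2.
# The namespace name is taken from the question; the rest is the
# standard Istio security API.
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: namespace-2
spec:
  mtls:
    mode: STRICT
```

With STRICT mode, the sidecars will reject any plain-text traffic, including traffic from a non-injected nginx ingress controller, which is exactly the failure mode discussed below.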
NGINX
But the problem is, the traffic from outside the mesh is getting terminated at the ingress resource. The nginx reverse proxy in namespace-2 does not see the incoming calls.
I see there is a similar issue about this on GitHub; it is worth trying the following.
Answer provided by @Stono.
Hey,
This is not an Istio issue; getting nginx to work with Istio is a little bit difficult. The issue is that, fundamentally, nginx is making an outbound request to an IP that it has resolved from your hostname foo-bar. This won't work, as Envoy doesn't know which cluster that IP belongs to, so it fails.
I'd suggest using the ingress-nginx kubernetes project and in turn using the following value in your Ingress configuration:
annotations:
  nginx.ingress.kubernetes.io/service-upstream: "true"
What this does is ensure that nginx doesn't resolve the upstream address to an ip, and maintains the correct Host header which the sidecar uses in order to route to your destination.
I recommend using this project because I use it, with Istio, in a deployment of some 240 services.
If you're not using ingress-nginx, I think you can set proxy_ssl_server_name on; or, another thing you could try, forcefully set the Host header on the outbound request to the internal FQDN of the service, like so:
proxy_set_header Host foo-bar;
Hope this helps but as I say, it's an nginx configuration rather than an istio problem.
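Put together, an Ingress using the annotation described above might look roughly like this (the host, service name, and port are placeholders, not taken from the thread):

```yaml
# Hypothetical ingress-nginx resource. The service-upstream annotation
# makes nginx forward to the Service's cluster IP instead of resolving
# pod IPs itself, and preserves the Host header the sidecar routes by.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: foo-bar
  annotations:
    nginx.ingress.kubernetes.io/service-upstream: "true"
spec:
  ingressClassName: nginx
  rules:
  - host: foo-bar.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: foo-bar
            port:
              number: 80
```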

mTLS between two kubernetes clusters

I'm trying to get mTLS between two applications in two Kubernetes clusters without doing it the way Istio does (with its ingress gateway), and I was wondering if the following would work (for Istio, for Linkerd, for Consul...).
Let's say we have a k8s cluster A with an app A.A. and a cluster B with an app B.B. and I want them to communicate with mTLS.
Cluster A has a Let's Encrypt cert for the nginx ingress controller, and a mesh (whatever) for its application.
Cluster B has a self-signed cert from our root CA.
The Cluster A and B service meshes have different certificates signed by our root CA.
Traffic goes from the internet to the Cluster A ingress controller (HTTPS), and from there to app A.A.
After traffic gets to app A.A., this app wants to talk to app B.B.
Apps A.A. and B.B. have endpoints exposed via Ingress (using their ingress controllers).
The TLS certificates are terminated at the endpoints and are wildcards.
Do you think the mTLS will work in this situation?
Basically, this blog from Portshift answers your question.
The answer depends on how your clusters are built, because
Istio offers a few options for deploying a service mesh across multiple Kubernetes clusters; more about that here.
So, if you have Single Mesh deployment
You can deploy a single service mesh (control-plane) over a fully connected multi-cluster network, and all workloads can reach each other directly without an Istio gateway, regardless of the cluster on which they are running.
BUT
If you have Multi Mesh Deployment
With a multi-mesh deployment you have a greater degree of isolation and availability, but it increases the set-up complexity. Meshes that are otherwise independent are loosely coupled together using ServiceEntries, an Ingress Gateway, and a common root CA as a base for secure communication. From a networking standpoint, the only requirement is that the ingress gateways be reachable from one another. Every service in a given mesh that needs to access a service in a different mesh requires a ServiceEntry configuration in the remote mesh.
In multi-mesh deployments, security can become complicated as the environment grows and diversifies. There are security challenges in authenticating and authorizing services between the clusters. The local Mixer (service policies and telemetry) needs to be updated with the attributes of the services in the neighbouring clusters. Otherwise, it will not be able to authorize these services when they reach its cluster. To achieve this, each Mixer needs to be aware of the workload identities, and their attributes, in neighbouring clusters. Each Citadel needs to be updated with the certificates of neighbouring clusters, to allow mTLS connections between clusters.
Federation of granular workloads identities (mTLS certificates) and service attributes across multi-mesh control-planes can be done in the following ways:
Kubernetes Ingress: exposing HTTP and HTTPS routes from outside the cluster to services within the cluster. Traffic routing is controlled by rules defined on the Ingress resource. An Ingress can terminate SSL / TLS, and offer name based virtual hosting. Yet, it requires an Ingress controller for fulfilling the Ingress rules
Service-mesh gateway: The Istio service mesh offers a different configuration model, Istio Gateway. A gateway allows Istio features such as monitoring and route rules to be applied to traffic entering the cluster. An ingress gateway describes a load balancer operating at the edge of the mesh that receives incoming HTTP/TCP connections. It configures exposed ports, protocols, etc. Traffic routing for ingress traffic is configured instead using Istio routing rules, exactly the same way as for internal service requests.
Do you think the mTLS will work in this situation?
Based on the above information:
If you have a Single Mesh deployment
It should be possible without any problems.
If you have a Multi Mesh deployment
It should work, but since you don't want to use the Istio gateway, the only option is a Kubernetes Ingress.
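For the multi-mesh case, the ServiceEntry mentioned in the quoted docs would look roughly like this (all names and hostnames are placeholders, not taken from the question):

```yaml
# Hypothetical ServiceEntry in cluster A that makes app B.B, exposed
# via cluster B's ingress, addressable from within cluster A's mesh.
apiVersion: networking.istio.io/v1beta1
kind: ServiceEntry
metadata:
  name: app-bb-remote
spec:
  hosts:
  - app-bb.cluster-b.example.com   # placeholder external hostname
  location: MESH_EXTERNAL
  ports:
  - number: 443
    name: https
    protocol: TLS
  resolution: DNS
```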
I hope it answer your question. Let me know if you have any more questions.
You want Consul's Mesh Gateways. They provide mTLS service-to-service connectivity between federated Consul clusters deployed in different data centers, clusters, or runtime environments.

Kubernetes TLS all the way to the pod

I just started using Istio and securing service to service communication and have two questions:
When using nginx ingress, will Istio secure the data from the ingress controller to the service with TLS?
Is it possible to secure with TLS all the way to the pod?
With Envoy deployed as a sidecar container to both (a) the NGINX pod and (b) the application pod, Istio will ensure that the two services communicate with each other over TLS.
In fact, that's the whole idea behind using Istio: to secure all communication, all the way to the pod, using the Envoy sidecar. Envoy intercepts all traffic going in and out of the pod and performs TLS communication with its peer Envoy counterpart.
All this is done in a transparent manner, i.e. transparent to the application container. The TLS-layer jobs, e.g. handshake, encryption/decryption, peer discovery, etc., are all offloaded to the Envoy sidecar.
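Getting the sidecar injected into both pods is usually done by labelling the namespace; a minimal sketch (the namespace name is a placeholder):

```yaml
# Label a namespace so Istio automatically injects the Envoy sidecar
# into every pod created in it (namespace name is a placeholder).
apiVersion: v1
kind: Namespace
metadata:
  name: my-namespace
  labels:
    istio-injection: enabled
```

Existing pods need to be restarted after the label is applied, since injection happens at pod creation time.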

Certificates for services

Moving from VMs to Kubernetes.
We are running our services on multiple VMs, with a VIP in front of them. Clients access the VIP, and the VIP routes traffic to the services. Here, we use an SSL cert for the VIP, and VIP-to-VM communication also uses HTTPS.
The service is deployed onto a VM together with a JKS file. This JKS file has a cert for exposing HTTPS and also for communicating with an SSL-enabled database.
How do we achieve the same thing in a Kubernetes cluster? We need HTTPS for the VIP and the services, and also for communication from the service to the SSL-enabled database.
Depending on the platform where you are running Kubernetes (on-premises, AWS, GKE, GCE, etc.), you have several ways to do it, but I will describe a solution which works on all platforms: an Ingress with HTTPS termination on it.
So, in Kubernetes you can provide access to your application inside a cluster using an Ingress object. It can provide load balancing, HTTPS termination, routing by path, etc. In most cases, you can use an Ingress controller based on Nginx. It also provides TCP load balancing and SSL passthrough if you need them.
For providing routing from users to your services, you need:
Deploy your application as a combination of Pods and Service for them.
Deploy Ingress controller, which will manage your Ingress objects.
Create a secret for your certificate.
Create an Ingress object which will point to your service, with TLS settings that ask the Ingress to use your secret with your certificate, like this:
spec:
  tls:
  - hosts:
    - foo.bar.com
    secretName: foo-secret
Now, when you call the foo.bar.com address, the Ingress will use FQDN-based routing and provide an HTTPS connection between your client and the pods in the cluster, via a Service object which knows where exactly your pod is. You can read how it works here and here.
As for encrypted communication between your services inside the cluster: you can use the same scheme, with secrets providing SSL keys to all your services, and set the Service up to use the HTTPS endpoint of an application instead of HTTP. Technically it is the same as using an https upstream in installations without Kubernetes, but all the configuration for Nginx is generated automatically based on your Service and Ingress objects.
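Putting the steps above together, a complete Ingress of this kind might look roughly like this (the hostname, service name, and secret name are placeholders):

```yaml
# Hypothetical Ingress terminating HTTPS with a certificate stored in
# the foo-secret TLS Secret (created beforehand, e.g. with
# kubectl create secret tls foo-secret --cert=tls.crt --key=tls.key).
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: foo-ingress
spec:
  tls:
  - hosts:
    - foo.bar.com
    secretName: foo-secret
  rules:
  - host: foo.bar.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: foo-service
            port:
              number: 80
```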

Securing Nginx-Ingress > Istio-Ingress

I have an Istio ingress which is working: traffic going into the microservices, and between the microservices, is encrypted within the Istio domain. But I don't want to expose the Istio ingress publicly.
So I tried deploying an NGINX or HAProxy ingress (with HTTPS certs) and pointing it at the Istio ingress, and everything works great.
My only worry now is that the traffic between the NGINX ingress (HTTPS termination) and the Istio ingress is not encrypted.
What is the usual way in Istio to get full encryption of traffic while using an NGINX/HAProxy ingress?
I guess one way is HAProxy in TCP mode to the Istio ingress, with certs on the Istio ingress. I haven't tried it, but it should work. A wilder idea is running the NGINX ingress within the Istio mesh, but then I would lose some Istio ingress capabilities.
What is the recommended way, or any suggestions? How is Istio usually exposed in a real production environment, for example?
Since I don't use a cloud load balancer in front of the Voyager instances, but instead expose Voyager/HAProxy on a host port,
I collocated Voyager (HostPort) and Istio (HostPort) via DaemonSet/node-selector (and taints) on the same machines, called frontend nodes. Then I just pointed Voyager to load-balance the loopback/localhost address at the Istio HostPort I specified:
backendRule:
- 'server local-istio localhost:30280'
This way, no unencrypted traffic traverses the network between Voyager/HAProxy and the Istio ingress, since they now communicate on the same host. I have two frontend nodes which are being load-balanced, so I have redundancy. It's kind of an improvisation and breaks the usual Kubernetes logic, but on the other hand it works great.
The other solution was to use self-signed certificates on Istio and then just point Voyager/HAProxy at the Istio instances. But this means multiple TLS terminations, since Voyager also terminates HTTPS. The advantage is that you can leave Kubernetes to distribute the Voyager and Istio instances; there is no need to bind them to specific machines.