I am running services on a Kubernetes cluster and, for security purposes, I came across the service mesh Istio.
Currently, I have enabled mTLS in the istio-system namespace and I can see the sidecars running inside the pods of the bookinfo service.
But while capturing traffic between pods with Wireshark, I can see that my context route still shows up as plain HTTP. I expected it to be TLS and encrypted.
Note: I am using istio-1.6.3 and have defined a Gateway and ingress (Kubernetes ingress) to the service.
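(For reference, mesh-wide mTLS in Istio 1.6 is typically enabled with a PeerAuthentication resource along these lines; this is a generic sketch rather than my exact configuration.)

kubectl apply -f - <<EOF
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: istio-system   # the root namespace, so the policy applies mesh-wide
spec:
  mtls:
    mode: STRICT            # reject plain-text connections to sidecar-injected workloads
EOF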
Here is the screenshot:
[Wireshark capture]
As I mentioned in the comment, AFAIK it's working as designed. If you want to see TLS, you could try what is mentioned in this tutorial.
Seeing that unencrypted communication to the QOTM service occurs only over the loopback adapter is just one part of the TLS verification process. You ideally want to see the encrypted traffic flowing around your cluster. You can do this by removing the "http" filter and instead adding a display filter that only shows TCP traffic with a destination IP address of your QOTM Pod and a target port of 20000, which the earlier kubectl describe command showed the Envoy sidecar listening on.
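With the bookinfo sample from the question, the same kind of check can be done by capturing inside the sidecar itself. A rough sketch, assuming the proxy image ships tcpdump, the sidecar is allowed to run it (values.global.proxy.privileged=true), and the pod's external interface is eth0:

# Capture traffic arriving at the productpage pod on its service port.
# With mTLS enabled, the payload seen on eth0 should be unreadable TLS records;
# plain-text HTTP should only appear on the loopback interface.
kubectl exec "$(kubectl get pod -l app=productpage -o jsonpath='{.items[0].metadata.name}')" \
  -c istio-proxy -- sudo tcpdump -i eth0 -nA dst port 9080

The equivalent Wireshark display filter would be something like ip.dst == <pod-ip> && tcp.dstport == 9080, with the pod IP filled in.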
Hi @jt97, I can see the lock badge in the Kiali dashboard. I read somewhere that this badge indicates that encryption is happening there.
Exactly, there is a GitHub issue about that.
Hope you find this useful.
We use Google Cloud Run on our K8s cluster on GCP, which is powered by Knative and Anthos. However, it seems the load balancer doesn't amend the x-forwarded-for header (which is not unexpected, as it is a TCP load balancer), and Istio doesn't do it either.
Do you have the same issue, or is it limited to our deployment?
I understand Istio supports this as part of its upcoming Gateway Network Topology feature, but not in the current GCP version.
I think you are correct in assessing that the current Cloud Run for Anthos setup (unintentionally) does not let you see the origin IP address of the user.
As you said, the created gateway for Istio/Knative in this case is a Cloud Network Load Balancer (TCP) and this LB doesn’t preserve the client’s IP address on a connection when the traffic is routed to Kubernetes Pods (due to how Kubernetes networking works with iptables etc). That’s why you see an x-forwarded-for header, but it contains internal hops (e.g. 10.x.x.x).
I am following up with our team on this. It seems that it was not noticed before.
I have a container that I don't want to be accessed except through a gateway that checks for authorization. The gateway works great if you access the container through the gateway service. However, if I curl the pod IP:port combination directly, the request to the container I want protected is allowed through with nothing stopping it.
I have tried configuring a simple NetworkPolicy to prevent this access, using the basic example here: https://kubernetes.io/docs/concepts/services-networking/network-policies/ (specifically the example at the bottom of the page where you deny everything for Ingress and Egress). That network policy still did not prevent the curl to the pod IP:port combination. What am I missing, or what am I doing wrong?
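For reference, what I applied is essentially the deny-all example from that page (the namespace is a placeholder for the one holding the protected pod):

kubectl apply -n <protected-namespace> -f - <<EOF
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
spec:
  podSelector: {}      # selects every pod in the namespace
  policyTypes:
  - Ingress
  - Egress
EOF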
I have a container with an exposed port in a pod. When I check the log in the containerized app, the source of the requests is always 192.168.189.0 which is a cluster IP. I need to be able to see the original source IP of the request. Is there any way to do this?
I tried setting externalTrafficPolicy: Local on the service (instead of Cluster), but it still doesn't work. Please help.
When you are working on an application or service that needs to know the source IP address, you need to know the topology of the network you are using. This means understanding how the different layers of load balancers or proxies work to deliver the traffic to your service.
Depending on the cloud provider or the load balancer you have in front of your application, the source IP address should be in a header of the request. The header you have to look for is X-Forwarded-For (more info here). Depending on the proxy or load balancer you are using, you sometimes need to enable this header to receive the correct IP address.
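As a starting point, externalTrafficPolicy: Local only has an effect on the Service that actually receives the external traffic (a NodePort or LoadBalancer Service, such as an ingress gateway), not on the application's ClusterIP Service. A sketch, assuming an Istio-style setup with the default istio-ingressgateway Service name:

# Stop kube-proxy from SNATing external traffic, so the client IP is preserved
kubectl patch svc istio-ingressgateway -n istio-system \
  -p '{"spec":{"externalTrafficPolicy":"Local"}}'

# Verify the setting
kubectl get svc istio-ingressgateway -n istio-system \
  -o jsonpath='{.spec.externalTrafficPolicy}'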
I am using Istio v1.0.6 and Kubernetes 1.11. I was able to successfully implement the ingress feature of Istio. However, I am seeing that by default Istio blocks TCP connections from the mesh to applications outside the cluster, while it allows HTTPS connections to applications that are not even registered in the mesh.
Are there any default egress rules that I am missing?
Up until version 1.0, Istio's default behavior was to block access to external endpoints. This created a connectivity issue and applications were breaking until the user could discover all the endpoints and configure them manually.
Istio 1.1 changed the default to allow access to all external endpoints.
See this for additional details and an automated way to generate ServiceEntries:
https://medium.com/@tufin/locking-down-istio-egress-with-automatic-traffic-discovery-51f0d49879a3
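As an illustration (not taken from the linked article), a ServiceEntry for an external TCP dependency on Istio 1.0.x looks roughly like this; the hostname and port are placeholders:

kubectl apply -f - <<EOF
apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
  name: external-tcp-service
spec:
  hosts:
  - my-db.example.com        # placeholder external hostname
  ports:
  - number: 5432             # placeholder TCP port
    name: tcp-db
    protocol: TCP
  location: MESH_EXTERNAL
  resolution: DNS
EOF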
I'm trying to use Istio in a K8s 1.6 cluster on AWS.
I have a Kafka pod/service running the old-fashioned way, with a "kafka-zk-broker-kafka.dev" headless service (no cluster IP), so the kafka-zk-broker-kafka.dev service (I'm in the dev namespace) resolves to the internal names of my 3 Kafka pods. This is working great.
~ # nslookup kafka-zk-broker-kafka.dev
Name: kafka-zk-broker-kafka.dev
Address 1: 10.33.0.11 kafka-zk-kafka-0.kafka-zk-broker-kafka.dev.svc.cluster.local
Address 2: 10.38.96.16 kafka-zk-kafka-2.kafka-zk-broker-kafka.dev.svc.cluster.local
Address 3: 10.40.128.13 kafka-zk-kafka-1.kafka-zk-broker-kafka.dev.svc.cluster.local
I deployed a Kafka producer application, using the Istio sidecar as it is also exposing a gRPC port for internal use.
Deployment went fine, but my application can't connect to the "kafka-broker" service. DNS resolution is OK, but I can't reach the service port (TCP:9092) using either the Kafka client or telnet.
What I understand is that, when the Istio (Envoy) sidecar is deployed, everything leaving the pod goes through the Envoy proxy...
So does the Envoy proxy not know how to reach regular services?
Am I missing something? Is there a way to mix Istio/Envoy with regular k8s services?
What you are doing should work, but I think you're running into this known bug: https://github.com/istio/issues/issues/37