I'm using the managed Kubernetes service on Alibaba Cloud; the Kubernetes server version is v1.14.8-aliyun.1 and the Istio version is 1.2.7.
From the official Istio tutorial (https://istio.io/docs/tasks/policy-enforcement/denial-and-list/) I learned how to block a single IP at the Istio ingress gateway. I applied the rule, instance and handler in the istio-system namespace and my public IP was blocked successfully.
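For context, what I applied is essentially the IP-based denial config from that task, only with blacklist: true and my own public IP in the overrides list (the IP and resource names below are placeholders):

```sh
kubectl apply -n istio-system -f - <<EOF
apiVersion: config.istio.io/v1alpha2
kind: handler
metadata:
  name: blockip
spec:
  compiledAdapter: listchecker
  params:
    overrides: ["203.0.113.7"]   # placeholder for my public IP
    blacklist: true              # deny anything matching the list
    entryType: IP_ADDRESSES
---
apiVersion: config.istio.io/v1alpha2
kind: instance
metadata:
  name: sourceip
spec:
  compiledTemplate: listentry
  params:
    value: source.ip | ip("0.0.0.0")
---
apiVersion: config.istio.io/v1alpha2
kind: rule
metadata:
  name: checkip
spec:
  match: source.labels["istio"] == "ingressgateway"
  actions:
  - handler: blockip
    instances: [ sourceip ]
EOF
```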
Then I tried to apply the same concept to geo-IP blocking. First I get a country's IP list from the GeoLite2 database provided by MaxMind, then I split those IPs across multiple handler files (3,000 IPs per file, since there is a resource size limit in Kubernetes). So if I want to block, say, all IPs from the US, I end up with around 500 handler files generated from a 25 MB US-IP.txt. When I apply these resources to Kubernetes and watch the Istio Mixer log, I see the following error:
Error receiving MCP response: rpc error: code = ResourceExhausted desc = grpc: received message larger than max (4623689 vs. 4194304)
I tried setting the Istio Mixer MaxMessageSize to 30 MB, but I am still getting this error. It looks like the 4 MB limit comes from the golang gRPC library.
Can anyone give me an idea how to do geo-IP blocking with the Istio ingress gateway? Steps to reproduce the issue are included in https://github.com/heylong6551/istio-issue.
Thanks in advance.
We are testing out the Ambassador Edge Stack and started with a brand new GKE private cluster in Autopilot mode.
We installed it from scratch following the quick start guide to get a feel for it, and ended up with the following error:
Error from server: error when creating "mapping-test.yaml": conversion webhook for getambassador.io/v3alpha1, Kind=Mapping failed: Post "https://emissary-apiext.emissary-system.svc:443/webhooks/crd-convert?timeout=30s": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
We did a few rounds of DNS testing and deployed a few different test pods in different namespaces to validate that kube-dns is working properly; everything looks good on that end. The resolv.conf also looks good.
Ambassador is using the hostname emissary-apiext.emissary-system.svc:443 (without the cluster.local), which should resolve fine. Doing a lookup with the FQDN (with cluster.local) works fine, by the way.
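For what it's worth, the checks we ran looked roughly like this (throwaway pods; the image and names are just what we happened to use):

```sh
# Fully qualified name
kubectl run dns-test --rm -it --restart=Never --image=busybox:1.36 -- \
  nslookup emissary-apiext.emissary-system.svc.cluster.local
# Short name (relies on the svc.cluster.local entry in the pod's resolv.conf search list)
kubectl run dns-test --rm -it --restart=Never --image=busybox:1.36 -- \
  nslookup emissary-apiext.emissary-system.svc
```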
Any clues?
Thanks a lot and take care.
I think I found the solution; posting it here in case someone comes across this later on.
So I followed this to deploy Ambassador Edge Stack in an Autopilot private cluster. I was getting the same error when trying to deploy the Mapping object (step 2.2).
The issue is that the control plane (API server) is trying to call emissary-apiext.emissary-system.svc:443, but the pods behind it are listening on port 8443 (I figured that out by describing the Service).
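This is how the mismatch shows up (output trimmed; the port values are what we saw on our cluster):

```sh
kubectl -n emissary-system describe svc emissary-apiext
# ...
# Port:        https  443/TCP
# TargetPort:  8443/TCP
# ...
```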
So I added a firewall rule change to allow the GKE control plane to talk to the nodes on port 8443.
The firewall rule in question is called gke-gke-ap-xxxxx-master. The xxxxx is the cluster hash and is different for each cluster. To make sure you are editing the right rule, double-check that the source IP range matches the "Control plane address range" on the cluster details page, and that it's the rule whose name ends with master.
Just edit that rule and add 8443 to the TCP ports. It should work.
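In gcloud terms it's roughly the following (hypothetical rule name; note that --allow replaces the whole list, so keep whatever ports are already allowed, typically tcp:443 and tcp:10250):

```sh
# Check what the rule currently allows
gcloud compute firewall-rules describe gke-gke-ap-xxxxx-master --format='value(allowed)'
# Re-set the allowed list, keeping the existing ports and adding 8443
gcloud compute firewall-rules update gke-gke-ap-xxxxx-master \
  --allow tcp:443,tcp:10250,tcp:8443
```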
That sounds like an issue related to the webhook limitations in GKE Autopilot.
Which version of GKE are you on?
Also, there is a limitation on which resources and namespaces webhooks are allowed to intercept:
Additionally, webhooks which specify one or more of the following resources (and any of their sub-resources) in their rules will be rejected:
group: "" resource: nodes
group: "" resource: persistentvolumes
group: certificates.k8s.io resource: certificatesigningrequests
group: authentication.k8s.io resource: tokenreviews
You probably have to check the manifests of Ambassador Edge Stack to figure this out.
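One quick way to see which resources the installed webhooks try to intercept, and compare them against the restrictions above (a generic check, not specific to Ambassador):

```sh
kubectl get mutatingwebhookconfigurations,validatingwebhookconfigurations -o yaml | grep -B2 -A8 'rules:'
```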
I am running services on a Kubernetes cluster, and for security purposes I came across the service mesh named Istio.
Currently I have enabled mTLS in the istio-system namespace, and I can see the sidecar running inside the pods of the Bookinfo service.
But while capturing traffic between pods with Wireshark, I can see my context route still showing up as plain HTTP. I assumed it would be TLS and encrypted.
Note: I am using Istio 1.6.3 and have defined a Gateway and an ingress (Kubernetes Ingress) for the service.
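For reference, the mTLS setting I applied is roughly the usual Istio 1.6 mesh-wide PeerAuthentication (a sketch, names as in the docs):

```sh
kubectl apply -f - <<EOF
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: istio-system    # root namespace, so it applies mesh-wide
spec:
  mtls:
    mode: STRICT             # PERMISSIVE would still accept plaintext
EOF
```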
Here is the screenshot:
Wireshark image
As I mentioned in the comment, AFAIK it's working as designed; if you want to see TLS you could try what is mentioned in this tutorial.
Seeing that unencrypted communication to the QOTM service is only occurring over the loopback adapter is only one part of the TLS verification process. You ideally want to see the encrypted traffic flowing around your cluster. You can do this by removing the “http” filter, and instead adding a display filter to only show TCP traffic with a destination IP address of your QOTM Pod and a target port of 20000, which you can see that the Envoy sidecar is listening on via the earlier issued kubectl describe command.
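A rough command-line equivalent of the display filter described there (the pod IP is a placeholder, and 20000 is the Envoy sidecar port from that tutorial's setup, not necessarily yours):

```sh
# Capture only traffic headed for the pod on the sidecar's listening port
tshark -i any -Y 'ip.dst == 10.0.0.42 && tcp.port == 20000'
```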
Hi #jt97, I can see the lock badge in the Kiali dashboard; I read somewhere that this indicates encryption is happening there.
Exactly, there is a GitHub issue about that.
Hope you find this useful.
We use Google Cloud Run on our Kubernetes cluster on GCP, which is powered by Knative and Anthos. However, it seems the load balancer doesn't amend the x-forwarded-for header (which is expected, since it is a TCP load balancer), and Istio doesn't add it either.
Do you have the same issue, or is it limited to our deployment?
I understand Istio supports this as part of its upcoming Gateway Network Topology feature, but not in the current GCP version.
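For reference, in later Istio releases that topology setting ends up in the mesh config roughly as below; this is a sketch only, and not available in the Cloud Run for Anthos version discussed here:

```sh
# meshConfig lives in the "istio" ConfigMap in istio-system (key: mesh)
kubectl -n istio-system edit configmap istio
#   defaultConfig:
#     gatewayTopology:
#       numTrustedProxies: 1   # how many proxy hops sit in front of the gateway
```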
I think you are correct in assessing that the current Cloud Run for Anthos setup (unintentionally) does not let you see the origin IP address of the user.
As you said, the gateway created for Istio/Knative in this case is a Cloud Network Load Balancer (TCP), and this LB doesn't preserve the client's IP address on a connection when the traffic is routed to Kubernetes Pods (due to how Kubernetes networking works with iptables etc.). That's why you see an x-forwarded-for header, but it only contains internal hops (e.g. 10.x.x.x).
I am following up with our team on this. It seems that it was not noticed before.
I have a cluster with 3 nodes. On each node I have a frontend application running in a Pod and a backend application running in a separate Pod.
I send data from the frontend application to the backend application; to do this I use a ClusterIP Service and the Kubernetes DNS.
I also have a function in my frontend where I send data to a separate service unrelated to my Kubernetes cluster. I send this data using a standard AJAX request to a URL with a payload, i.e. http://my-seperate-service-unrelated-tok8.com.
All of this works correctly and the cluster operates as I want; I have this cluster deployed to GKE.
I now want to run this cluster locally using minikube, which I have been able to do. However, when running locally I do not want to send data to my external service; instead I want to forward it to a new Pod I will create, or simply not send it at all.
The problem is that I need a proxy to intercept outgoing network traffic, check whether the outgoing request is the one I am looking for, and if it is, redirect it.
I understand each node in a cluster runs a kube-proxy service, which is used to forward traffic to the relevant Services in the cluster.
I would like to either extend this service or create a new proxy service where I can listen for outgoing traffic to a specific URL and redirect it.
Is this possible in a Kubernetes cluster? I assume there is a Service I can create that listens for all outgoing requests and redirects specific requests based on rules I set.
I wasn't sure if Kubernetes clusters already have a Service configured that I can simply add to; that's why I thought of kube-proxy. Would anyone be able to advise on this?
I wanted to add this proxy so I don't have to change my code whether it's run locally in minikube or deployed to GKE.
Any help is greatly appreciated. Thanks!
I made a tool that helps you forward a service to another service, a local port, a service from another cluster, etc.
This way you keep exactly the same URLs, ports and code, but the underlying services get "replaced"; if I understand correctly, this is what you are looking for.
Here is a quick example of a staging service being replaced with my local port 3000.
This is the repository with more info and examples: linker-tool
If you are interested, let me know if you need help or have any questions.
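If you would rather stay with plain Kubernetes objects, a CoreDNS rewrite in minikube can give a similar effect for a single hostname. A minimal sketch, assuming the request is made from inside the frontend Pod (not from the user's browser) and that my-stub-service is a Service you create locally to receive the data:

```sh
# Make the external hostname resolve to an in-cluster Service by adding a
# "rewrite name" line to the Corefile in the kube-system/coredns ConfigMap:
kubectl -n kube-system edit configmap coredns
#   .:53 {
#       ...
#       rewrite name my-seperate-service-unrelated-tok8.com my-stub-service.default.svc.cluster.local
#       kubernetes cluster.local in-addr.arpa ip6.arpa { ... }
#       ...
#   }
```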
I have a container with an exposed port in a pod. When I check the logs in the containerized app, the source of the requests is always 192.168.189.0, which is a cluster IP. I need to be able to see the original source IP of the request. Is there any way to do this?
I tried modifying the service to use externalTrafficPolicy: Local instead of Cluster, but it still doesn't work. Please help.
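For context, the change I tried looks roughly like this (illustrative names; as far as I understand, externalTrafficPolicy: Local only preserves the client IP for NodePort/LoadBalancer traffic that arrives on a node running one of the backing pods):

```sh
kubectl apply -f - <<EOF
apiVersion: v1
kind: Service
metadata:
  name: my-app                    # illustrative name
spec:
  type: LoadBalancer
  externalTrafficPolicy: Local    # keep the original client source IP
  selector:
    app: my-app
  ports:
  - port: 80
    targetPort: 8080
EOF
```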
When you are working on an application or service that needs to know the source IP address, you need to know the topology of the network you are using. That means knowing how the different layers of load balancers or proxies work to deliver the traffic to your service.
Depending on the cloud provider or load balancer you have in front of your application, the source IP address should be in a header of the request. The header to look for is X-Forwarded-For (more info here); depending on the proxy or load balancer you are using, you sometimes need to enable this header in order to receive the correct IP address.