How can I block some public IPs from accessing my Kubernetes cluster?

I have deployed my Kubernetes cluster on AWS EKS and am using an ingress gateway to block IPs from accessing certain services. Is there a way I can block those public IPs from accessing my Kubernetes cluster from inside the cluster (say, using the ingress gateway)? If not, is there a way to whitelist certain IPs so that only they can access the cluster, again from inside the cluster?
I am already aware that an AWS security group can do this, but I want to implement it from inside the cluster.

You can try using Calico on EKS. Network policies are similar to AWS security groups in that you can create network ingress and egress rules. Instead of assigning instances to a security group, you assign network policies to pods using pod selectors and labels.
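For illustration, once Calico (or any CNI plugin that enforces NetworkPolicy) is running, a standard Kubernetes NetworkPolicy with an ipBlock rule can whitelist a CIDR range while explicitly excluding the addresses you want to block. This is a minimal sketch; the app: my-service label and the CIDR ranges are placeholders for your own values:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-trusted-ips-only
  namespace: default
spec:
  # Applies to pods carrying this (placeholder) label
  podSelector:
    matchLabels:
      app: my-service
  policyTypes:
    - Ingress
  ingress:
    - from:
        - ipBlock:
            # Whitelist this range...
            cidr: 203.0.113.0/24
            # ...except the public IPs you want to block
            except:
              - 203.0.113.10/32
              - 203.0.113.11/32

Note that ipBlock matches the source IP as the cluster sees it: if external traffic reaches your pods through a load balancer that rewrites the source address, the policy will see the load balancer's IP rather than the client's.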

Related

What is the Best Way to Scale an External (non-EKS) EC2 Auto Scaling Group from Inside a Kubernetes Cluster Based on Prometheus Metrics?

I am currently autoscaling via an HPA driven by internal Prometheus metrics, which then filters down to scaling the cluster via the AWS Cluster Autoscaler. That HPA is tied to an external service running on bare EC2 instances. I would like to use the same metrics that drive the HPA to also scale the ASG behind that service, which is external to the Kubernetes cluster.
What is the best way to do this? It is preferable that the external EC2 cluster does not have network access to the EKS cluster.
I was thinking about just writing a small service that does it via the AWS API, based on polling Prometheus intermittently, but I figured there must be a better way.

How to do client-side load balancing of a gRPC service hosted inside a Kubernetes cluster from outside the cluster?

Scenario: We have a client outside a K8s cluster trying to access a gRPC service hosted inside the cluster. Both the client and the service are part of the same VNet in Azure. We would like to use client-side load balancing to access this gRPC service.
Setup of our K8s cluster: Our K8s cluster is hosted inside an Azure VNet and uses the Azure CNI networking model, so the pods in our cluster have IP addresses from the VNet's address space. Please note we are not using AKS and are self-hosting the K8s cluster, but in my opinion this question should not depend on that.
Questions:
We would like to use client-side load balancing to access this gRPC service. If both our client and server were inside the K8s cluster, we could have used a K8s headless service to get the list of pod IP addresses. But since the client is outside the K8s cluster, how can we retrieve those IP addresses from outside the cluster?
Can the K8s cluster create DNS records in a DNS server hosted outside the cluster, so that the external client can look up the list of IP addresses there?
Thanks for your help!
I found that I could solve the issue by using ExternalDNS. After wiring up my cluster with ExternalDNS (linked to an Azure Private DNS zone), I created a headless service and found that, on deployment of this service, DNS records were created in the Azure Private DNS zone. I was able to get the list of pod IP addresses by simply doing a DNS lookup of the service's DNS name.
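For reference, a minimal headless Service that ExternalDNS can publish might look like the sketch below. This assumes ExternalDNS is already deployed and pointed at the Azure Private DNS zone; the hostname, port, and selector label are placeholder values:

apiVersion: v1
kind: Service
metadata:
  name: my-grpc-service
  annotations:
    # Hostname ExternalDNS should create in the private zone (placeholder)
    external-dns.alpha.kubernetes.io/hostname: my-grpc-service.example.internal
spec:
  # Headless: no cluster IP, so DNS resolves to the individual pod IPs
  clusterIP: None
  selector:
    app: my-grpc-service
  ports:
    - port: 50051
      targetPort: 50051

The external client can then resolve my-grpc-service.example.internal and feed the returned A records into its gRPC client-side load-balancing policy.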

GKE cluster egress traffic coming out of the nodes rather than the LB service

I'm new to GKE and K8s, so please bear with me and my silliness. I currently have a GKE cluster with two nodes in the default node pool, and the cluster is exposed via a LoadBalancer-type service.
These nodes are tasked with calling a Compute Engine instance via HTTP. I have a firewall rule in GCP that denies ingress traffic to the GCE instance except for traffic coming from the GKE cluster.
The issue is that the traffic isn't coming from the load balancer service's IP but from the nodes themselves, so whitelisting the service's IP has no effect and I have to whitelist the IPs of the nodes instead. This is not ideal, since each time a new node is created I have to update the firewall rule. I understood that once a service is set up in the cluster, all traffic would be directed through the service's IP, so why is this happening? What am I doing wrong? Please let me know if you need more details, and thanks in advance.
YAML of the service:
https://i.stack.imgur.com/XBZmE.png
When you create a service on GKE and expose it to the internet, a load balancer is created. This load balancer manages only ingress traffic (traffic from the internet to your GKE cluster).
When your pod initiates a communication, the traffic is not managed by the load balancer but by the node that hosts the pod, if the node has a public IP. (Instead of denying traffic to the GCE instance, simply remove the nodes' public IPs; it's easier and safer!)
If you want to control the source IP of egress traffic originating from your pods, you have to set up Cloud NAT for your GKE cluster.
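If you happen to manage GCP resources from Kubernetes with Config Connector (an alternative to running the equivalent gcloud compute routers commands), the Cloud NAT setup might be sketched roughly as below. The names, region, and network are placeholders, and the exact field names should be checked against the Config Connector reference for your version:

apiVersion: compute.cnrm.cloud.google.com/v1beta1
kind: ComputeRouter
metadata:
  name: gke-egress-router
spec:
  region: us-central1
  networkRef:
    # VPC network that hosts the GKE cluster (placeholder)
    name: my-vpc
---
apiVersion: compute.cnrm.cloud.google.com/v1beta1
kind: ComputeRouterNAT
metadata:
  name: gke-egress-nat
spec:
  region: us-central1
  routerRef:
    name: gke-egress-router
  # Let GCP allocate the NAT IPs automatically
  natIpAllocateOption: AUTO_ONLY
  # NAT all subnet ranges, including the cluster's pod ranges
  sourceSubnetworkIpRangesToNat: ALL_SUBNETWORKS_ALL_IP_RANGES

With Cloud NAT in place, pod egress leaves through the stable NAT IPs, so the firewall rule on the GCE instance only needs to whitelist those addresses rather than every node.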

How to access services in a different Kubernetes cluster using DNS

How can I access services in a different Kubernetes cluster using DNS, without using an external ingress?
Within the cluster I am using internal DNS, e.g. mysvc.mynamespace.
How can I achieve the same from one cluster to another?
You can explore KubeFed, which allows cross-cluster service discovery.
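As a rough sketch of KubeFed's multi-cluster service DNS feature (which in turn hands records off to ExternalDNS): you register a Domain, then create a ServiceDNSRecord for the service you want to resolve across clusters. The names and domain below are placeholders, and the API group/version should be verified against the KubeFed release you install:

apiVersion: multiclusterdns.kubefed.io/v1alpha1
kind: Domain
metadata:
  # Registers example.com (placeholder) as a federation DNS domain
  name: my-domain
  namespace: kube-federation-system
domain: example.com
---
apiVersion: multiclusterdns.kubefed.io/v1alpha1
kind: ServiceDNSRecord
metadata:
  # Must match the name/namespace of the service being exposed
  name: mysvc
  namespace: mynamespace
spec:
  domainRef: my-domain
  recordTTL: 300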

Peering Connection between GKE and EKS

Is it possible to connect a GKE cluster to a VPC in AWS? For this specific use case, I want the GKE cluster to be able to talk to the EKS cluster behind a VPC in AWS.
I have the CIDR block for my GKE cluster (gcloud container clusters describe _cluster_name_ | grep clusterIpv4Cidr).
I've already created a VPC and cluster in AWS (i.e. I have a VPC ID for my AWS VPC).
Do I need to create a VPC for my GKE cluster in addition to the VPC for my EKS cluster, or do I just need the CIDR range of the GKE cluster on the AWS side?
Google searches return very few results for connecting clusters from different providers.
In my opinion, it's possible with a VPN connection; GCP and AWS VPCs cannot be natively peered across providers. First, have a look at the Kubernetes Engine Communication Through VPN demo, and then move on to the example closer to your case: a site-to-site VPN between GCP and AWS. Note that your GKE cluster already lives in a GCP VPC network, so you don't create a new one; you connect that existing VPC to the AWS VPC, making sure the two CIDR ranges don't overlap. In addition, check the Google Cloud Router documentation and examples for extra information about networking on GKE.