Google Kubernetes Engine Service loadBalancerSourceRanges not allowing connection on IP range

I'm exposing an application running on a GKE cluster using a LoadBalancer service. By default, the LoadBalancer creates a rule in the Google VPC firewall with the IP range 0.0.0.0/0. With this configuration, I'm able to reach the service in all situations.
I'm using an OpenVPN server inside my default network to prevent outside access to GCE instances on a certain IP range. By changing the service's .yaml loadBalancerSourceRanges value to match the IP range of my VPN server, I expected to be able to connect to the Kubernetes application while connected to the VPN, but not otherwise. This updated the Google VPC firewall rule with the range I entered in the .yaml file, but it still didn't allow me to connect to the service endpoint. The Kubernetes cluster is located in the same network as the OpenVPN server. Is there some additional configuration needed beyond setting loadBalancerSourceRanges to the desired ingress IP range for the service?
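For reference, the kind of Service manifest described above would look roughly like this; the selector, ports, and CIDR are placeholders, not values taken from the question:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-app
spec:
  type: LoadBalancer
  selector:
    app: my-app
  ports:
  - port: 80
    targetPort: 8080
  # Only clients whose source IP falls inside these CIDRs may reach the load balancer.
  loadBalancerSourceRanges:
  - 10.8.0.0/24   # placeholder: the VPN clients' address range
```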

You didn't mention the version of this GKE cluster; however, it might be helpful to know that, beginning with Kubernetes version 1.9.x, automatic firewall rules have changed such that workloads in your Google Kubernetes Engine cluster cannot communicate with other Compute Engine VMs that are on the same network but outside the cluster. This change was made for security reasons. You can replicate the behavior of older clusters (1.8.x and earlier) by setting a new firewall rule on your cluster. You can see this notice in the Release Notes published in the official documentation.
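A hedged sketch of such a rule, assuming the default network and a placeholder Pod CIDR (look up your cluster's actual range, for example with gcloud container clusters describe):

```sh
# Placeholder values: replace the network name and source range with your cluster's Pod CIDR.
gcloud compute firewall-rules create allow-from-cluster-pods \
  --network default \
  --source-ranges 10.8.0.0/14 \
  --allow tcp,udp,icmp
```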

Related

Is it possible to configure a VM instance on GCP to only receive requests from the Load Balancer?

I have an nginx deployment on k8s that is exposed via a NodePort service. Is it possible, using the GCP firewall, to allow only an application load balancer to talk to these NodePorts?
I wouldn't like to leave these two NodePorts open to everyone.
You can certainly control access to your VM instances via the firewall; that is what the firewall service exists for.
If you created the VM in the default VPC with the default firewall settings, the firewall will deny all traffic coming from outside.
You just need to write a rule that allows traffic from the application load balancer.
According to the Google documentation, you need to allow traffic from the 130.211.0.0/22 and 35.191.0.0/16 IP ranges.
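As a hedged sketch (the target tag is a placeholder for whatever tag your GKE nodes carry, and the port range assumes the default Kubernetes NodePort range):

```sh
# Allow Google's load balancer / health-check source ranges to reach the NodePorts on tagged nodes.
gcloud compute firewall-rules create allow-lb-to-nodeports \
  --network default \
  --source-ranges 130.211.0.0/22,35.191.0.0/16 \
  --target-tags gke-node \
  --allow tcp:30000-32767
```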

Getting client ip using Knative and Anthos

We use Google Cloud Run on our K8s cluster on GCP, which is powered by Knative and Anthos. However, it seems the load balancer doesn't amend the x-forwarded-for header (and it isn't expected to, since it is a TCP load balancer), and Istio doesn't do it either.
Do you have the same issue, or is it limited to our deployment?
I understand Istio supports this as part of its upcoming Gateway Network Topology feature, but not in the current GCP version.
I think you are correct in assessing that the current Cloud Run for Anthos setup (unintentionally) does not let you see the origin IP address of the user.
As you said, the gateway created for Istio/Knative in this case is a Cloud Network Load Balancer (TCP), and this LB doesn't preserve the client's IP address on a connection when the traffic is routed to Kubernetes Pods (due to how Kubernetes networking works with iptables, etc.). That's why you see an x-forwarded-for header, but it contains internal hops (e.g. 10.x.x.x).
I am following up with our team on this. It seems that it was not noticed before.

How to make cluster nodes private on Google Kubernetes Engine?

I noticed every node in a cluster has an external IP assigned to it. That seems to be the default behavior of Google Kubernetes Engine.
I thought the nodes in my cluster should be reachable from the local network only (through their virtual IPs), but I could even connect directly to a mongo server running in a pod from my home computer, just by connecting to its hosting node (without using a LoadBalancer).
I tried to make Container Engine not assign external IPs to newly created nodes by changing the cluster instance template settings (changing the "External IP" property from "Ephemeral" to "None"). But after I did that, GKE was not able to start any pods (I got a "Does not have minimum availability" error), and the new instances did not even show up in the list of nodes in my cluster.
After switching back to the default instance template with an external IP, everything went fine again. So it seems that, for some reason, Google Kubernetes Engine requires cluster nodes to be public.
Could you explain why is that and whether there is a way to prevent GKE exposing cluster nodes to the Internet? Should I set up a firewall? What rules should I use (since nodes are dynamically created)?
I think Google not allowing private nodes is something of a security issue. Suppose someone discovers a security hole in a database management system: we'd feel much more comfortable working on a fix (applying patches, upgrading versions) if our database nodes were not exposed to the Internet.
GKE recently added a new feature allowing you to create private clusters, which are clusters where nodes do not have public IP addresses.
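A minimal sketch of creating such a cluster with gcloud; the cluster name and master CIDR are placeholders, and private clusters require VPC-native (alias IP) networking:

```sh
gcloud container clusters create my-private-cluster \
  --enable-private-nodes \
  --enable-ip-alias \
  --master-ipv4-cidr 172.16.0.16/28
```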
This is how GKE is designed, and there is no way around it that I am aware of. There is no harm in running Kubernetes nodes with public IPs, and if these are the IPs used for communication between nodes, you cannot avoid it.
As for your security concern: if you run that example DB on Kubernetes, it would not be accessible even with public node IPs, since it would only be exposed on the internal pod-to-pod network, not on the nodes themselves.
As described in this article, you can use network tags to identify which GCE VMs or GKE clusters are subject to certain firewall rules and network routes.
For example, if you've created a firewall rule to allow traffic to ports 27017, 27018, and 27019 (the default TCP ports used by MongoDB), give the desired instances a tag and then use that tag in the firewall rule, so that only those instances accept traffic on those ports.
It is also possible to create a GKE cluster or node pool that applies GCE network tags to all nodes in the new node pool, so the tags can be used in firewall rules to allow or deny traffic to the nodes. This is described in this article under the --tags flag.
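A rough sketch of the two steps; the pool and cluster names, the tag, and the source range are hypothetical:

```sh
# Create a node pool whose nodes carry a GCE network tag.
gcloud container node-pools create mongo-pool \
  --cluster my-cluster \
  --tags mongo-node

# Allow the MongoDB ports only from an internal range, and only to the tagged nodes.
gcloud compute firewall-rules create allow-mongo-internal \
  --source-ranges 10.0.0.0/8 \
  --target-tags mongo-node \
  --allow tcp:27017-27019
```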
The Kubernetes master runs outside your network and needs to access your nodes. This could be the reason the nodes have public IPs.
When you create your cluster, some firewall rules are created automatically. These are required by the cluster; they allow, for example, ingress from the master and traffic between the cluster nodes.
The 'default' network in GCP comes with ready-made firewall rules that allow SSH and RDP traffic from the internet and allow your machines to be pinged. You can remove these without affecting the cluster, and your nodes will no longer be exposed to SSH, RDP, or ping from the internet.
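In a stock 'default' network those rules are usually named as below, but verify with gcloud compute firewall-rules list before deleting anything:

```sh
# Remove the default internet-facing SSH, RDP and ICMP rules.
gcloud compute firewall-rules delete default-allow-ssh default-allow-rdp default-allow-icmp
```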

Configure firewall rules for kubernetes cluster

I am trying to configure firewall rules for a Kubernetes service to allow restricted access to my mongo pod when running a LoadBalancer service. How do I specify the IP range, given that we have our own internal firewall?
From https://kubernetes.io/docs/tasks/access-application-cluster/configure-cloud-provider-firewall/#restrict-access-for-loadbalancer-service:
When using a Service with spec.type: LoadBalancer, you can specify the IP ranges that are allowed to access the load balancer by using spec.loadBalancerSourceRanges. This field takes a list of IP CIDR ranges, which Kubernetes will use to configure firewall exceptions. This feature is currently supported on Google Compute Engine, Google Container Engine and AWS. This field will be ignored if the cloud provider does not support the feature.
This loadBalancerSourceRanges property should help in your case.
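For example, a sketch of a LoadBalancer Service restricted to a couple of internal ranges; the selector, port, and CIDRs are placeholders for your own firewall's ranges:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: mongo
spec:
  type: LoadBalancer
  selector:
    app: mongo
  ports:
  - port: 27017
  loadBalancerSourceRanges:
  - 10.10.0.0/16      # placeholder: internal office range
  - 192.168.100.0/24  # placeholder: VPN clients' range
```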

Access SkyDNS etcd API on Google Container Engine to Add Custom Records

I'm running a kubernetes cluster on GKE and I would like to discover and access the etcd API from a service pod. The reason I want to do this is to add keys to the SkyDNS hierarchy.
Is there a way to discover (or create/expose) and interact with the etcd service API endpoint on a GKE cluster from application pods?
We have IoT gateway nodes that connect to our cloud services via an SSL VPN to ease management and comms. When a device connects to the VPN I want to update an entry in SkyDNS with the hostname and VPN IP address of the device.
It doesn't make sense to spin up another clustered DNS setup, since SkyDNS will work great for this and all of the pods in the cluster are already automatically configured to query it first.
I'm running a kubernetes cluster on GKE and I would like to discover and access the etcd API from a service pod. The reason I want to do this is to add keys to the SkyDNS hierarchy.
It sounds like you want direct access to the etcd instance that is backing the DNS service (not the etcd instance that is backing the Kubernetes apiserver, which is separate).
Is there a way to discover (or create/expose) and interact with the etcd service API endpoint on a GKE cluster from application pods?
The etcd instance for the DNS service is an internal implementation detail for the DNS service and isn't designed to be directly accessed. In fact, it's really just a convenient communication mechanism between the kube2sky binary and the skydns binary so that skydns wouldn't need to understand that it was running in a Kubernetes cluster. I wouldn't recommend attempting to access it directly.
In addition, this etcd instance won't even exist in Kubernetes 1.3 installs, since skydns is being replaced by a new DNS binary kubedns.
We have IoT gateway nodes that connect to our cloud services via an SSL VPN to ease management and comms. When a device connects to the VPN I want to update an entry in SkyDNS with the hostname and VPN IP address of the device.
If you create a new Service, the cluster DNS will get a new entry mapping the Service name to the endpoints that back it. What if you programmatically add a Service each time a new IoT device registers, rather than trying to configure DNS directly?
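A minimal sketch of that approach, using a selector-less Service plus a manually managed Endpoints object (the name, port, and VPN IP below are hypothetical). The device then resolves through the cluster DNS, e.g. as device-gw-01.default.svc.cluster.local, with no direct etcd access needed:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: device-gw-01
spec:
  # No selector: endpoints are managed manually below.
  ports:
  - port: 443
---
apiVersion: v1
kind: Endpoints
metadata:
  name: device-gw-01
subsets:
- addresses:
  - ip: 10.8.0.42   # the device's VPN IP
  ports:
  - port: 443
```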