Configure firewall rules for kubernetes cluster - kubernetes

I am trying to configure firewall rules for a Kubernetes service to allow restricted access to my mongo pod when running a LoadBalancer service. How do I specify the allowed IP range, given that we have our own internal firewall?

From https://kubernetes.io/docs/tasks/access-application-cluster/configure-cloud-provider-firewall/#restrict-access-for-loadbalancer-service:
When using a Service with spec.type: LoadBalancer, you can specify the IP ranges that are allowed to access the load balancer by using spec.loadBalancerSourceRanges. This field takes a list of IP CIDR ranges, which Kubernetes will use to configure firewall exceptions. This feature is currently supported on Google Compute Engine, Google Container Engine and AWS. This field will be ignored if the cloud provider does not support the feature.
This loadBalancerSourceRanges property should help in your case.
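As a sketch, a Service restricted to an internal range might look like the following (the `mongo` name, the `app: mongo` label, and the 10.0.0.0/8 CIDR are placeholders; substitute your own):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: mongo
spec:
  type: LoadBalancer
  selector:
    app: mongo                  # assumes your mongo pod carries this label
  ports:
    - port: 27017
      targetPort: 27017
  loadBalancerSourceRanges:
    - 10.0.0.0/8                # example internal CIDR; replace with your firewall's range
```

Only clients whose source IP falls inside the listed CIDRs will reach the load balancer; everything else is blocked at the cloud provider's firewall.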

Related

It is possible config a VM instance on GCP to only receive requests from the Load Balancer?

I have an nginx deployment on k8s that is exposed via a NodePort service. Is it possible, using the GCP firewall, to permit only an application load balancer to talk to these NodePorts?
I wouldn't like to leave these two NodePorts open to everyone.
You can certainly control access to your VM instance via the firewall; that is what the firewall service exists for.
If you created the VM in the default VPC with the default firewall settings, the firewall will deny all traffic from outside.
You just need to write a rule that allows traffic from the application LB.
According to the Google documentation, you need to allow the 130.211.0.0/22 and 35.191.0.0/16 IP ranges.
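A minimal sketch of such a rule with the gcloud CLI (the rule name, network, target tag, and the two NodePort values are assumptions; substitute your own):

```shell
# Allow Google front-end / health-check ranges to reach the two NodePorts
gcloud compute firewall-rules create allow-lb-to-nodeports \
    --network=default \
    --direction=INGRESS \
    --action=ALLOW \
    --rules=tcp:30080,tcp:30443 \
    --source-ranges=130.211.0.0/22,35.191.0.0/16 \
    --target-tags=gke-nodes
```

With a rule like this in place (and no broader allow rule for those ports), only traffic originating from the load balancer and Google's health checkers can reach the NodePorts.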

Google Kubernetes Engine Service loadBalancerSourceRanges not allowing connection on IP range

I'm exposing an application running on a GKE cluster using a LoadBalancer service. By default, the LoadBalancer creates a rule in the Google VPC firewall with IP range 0.0.0.0/0. With this configuration, I'm able to reach the service in all situations.
I'm using an OpenVPN server inside my default network to prevent outside access to GCE instances on a certain IP range. By modifying the service .yaml file's loadBalancerSourceRanges value to match the IP range of my VPN server, I expected to be able to connect to the Kubernetes application while connected to the VPN, but not otherwise. This updated the Google VPC firewall rule with the range I entered in the .yaml file, but didn't allow me to connect to the service endpoint. The Kubernetes cluster is in the same network as the OpenVPN server. Is there some additional configuration needed beyond setting loadBalancerSourceRanges to the desired ingress IP range for the service?
You didn't mention the version of this GKE cluster; however, it might be helpful to know that, beginning with Kubernetes version 1.9.x, automatic firewall rules have changed such that workloads in your Google Kubernetes Engine cluster cannot communicate with other Compute Engine VMs that are on the same network but outside the cluster. This change was made for security reasons. You can replicate the behavior of older clusters (1.8.x and earlier) by setting a new firewall rule on your cluster. You can see this notification in the release notes published in the official documentation.
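A sketch of a rule that restores the pre-1.9 behavior, assuming the cluster's pod CIDR is 10.4.0.0/14 (look up the real value with `gcloud container clusters describe`; the rule name and network are also placeholders):

```shell
# Allow pods in the cluster to reach other VMs on the same VPC network
gcloud compute firewall-rules create allow-pods-to-vms \
    --network=default \
    --direction=INGRESS \
    --action=ALLOW \
    --rules=all \
    --source-ranges=10.4.0.0/14
```

You can scope `--rules` down to specific protocols and ports (e.g. `tcp:1194` for OpenVPN) rather than allowing everything.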

Set service-node-port-range in Google Kubernetes Engine

For our use-case, we need to access a lot of services via NodePort. By default, the NodePort range is 30000-32767. With kubeadm, I can set the port range via --service-node-port-range flag.
We are using Google Kubernetes Engine (GKE) cluster. How can I set the port range for a GKE cluster?
In GKE, the control plane is managed by Google. This means you don't get to set flags on the API server yourself. That being said, I believe you can use the kubemci CLI tool to achieve it; see Setting up a multi-cluster Ingress.

Whitelist traffic to mysql from a kubernetes service

I have a Cloud MySQL instance which allows traffic only from whitelisted IPs. How do I determine which IP I need to add to the ruleset to allow traffic from my Kubernetes service?
The best solution is to use the Cloud SQL Proxy in a sidecar pattern. This adds an additional container into the pod with your application that allows for traffic to be passed to Cloud SQL.
You can find instructions for setting it up here. (It says it's for GKE, but the principles are the same)
If you prefer something a little more hands on, this codelab will walk you through taking an app from local to on a Kubernetes Cluster.
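A minimal sketch of the sidecar pattern (the app image, environment variable, and the `my-project:us-central1:my-instance` connection name are placeholders; the proxy image tag should be checked against the current release):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: app-with-sql-proxy
spec:
  containers:
    - name: app
      image: my-app:latest            # your application container
      env:
        - name: DB_HOST
          value: "127.0.0.1"          # the app talks to the proxy on localhost
    - name: cloud-sql-proxy
      image: gcr.io/cloudsql-docker/gce-proxy:1.33.2
      command:
        - /cloud_sql_proxy
        - -instances=my-project:us-central1:my-instance=tcp:3306
```

Because the proxy authenticates with IAM and tunnels traffic to Cloud SQL over an encrypted connection, no IP whitelisting is needed at all.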
I am using Google Cloud Platform, so my solution was to add the Google Compute Engine VM instance External IP to the whitelist.

How to install Kubernetes dashboard on external IP address?

How to install Kubernetes dashboard on external IP address?
Is there any tutorial for this?
You can expose services and pods in several ways:
expose the internal ClusterIP service through an Ingress, if you have one set up.
change the service type to use 'type: LoadBalancer', which will try to create an external load balancer.
If you have external IP addresses on your Kubernetes nodes, you can also expose ports directly on the node hosts; however, I would avoid this unless it's a small test cluster.
change the service type to 'type: NodePort', which will utilize a port above 30000 on all cluster machines.
expose the pod directly by setting 'hostPort' on a container in the pod spec (note that hostPort is a pod-level field, not a service type).
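As a sketch of the NodePort option, a service targeting the dashboard might look like this (the namespace and `k8s-app` label follow the standard dashboard manifest, but verify them against your install; the nodePort value is an assumption):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: kubernetes-dashboard-nodeport
  namespace: kube-system              # older dashboard installs; newer ones use kubernetes-dashboard
spec:
  type: NodePort
  selector:
    k8s-app: kubernetes-dashboard     # label used by the standard dashboard deployment
  ports:
    - port: 443
      targetPort: 8443
      nodePort: 30443                 # reachable at https://<node-external-ip>:30443
```

Remember that this exposes the dashboard to anyone who can reach the node IPs, so combine it with firewall rules and dashboard authentication.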
Depending on your cluster type (kops-created, GKE, EKS, AKS and so on), some of these variants may not be available. Hosted clusters typically support and recommend LoadBalancers, which they charge for, but may or may not support NodePort/HostPort.
Another, more important note is that you must ensure you protect the dashboard. Running an unprotected dashboard is a sure way of getting your cluster compromised; this recently happened to Tesla. A decent writeup on various ways to protect yourself was written by Joe Beda of Heptio.