I’ve been deploying a private AKS cluster. On the subnet where it is supposed to be deployed, I’ve assigned a UDR to force all traffic (0.0.0.0/0) to the internal IP of the Azure Firewall that resides in a peered VNet, i.e. the hub (in a hub-and-spoke architecture). The AKS deployment was not finishing, and looking at the node pools it appears the deployment failed because the service couldn’t reach Microsoft endpoints. My question, since I was unable to find this documented, is: which URLs do I need to permit from the AKS subnet for a) deploying the cluster, b) keeping it up to date, meaning updating the worker nodes, c) NTP, and d) anything else?
In the official MS documentation there is a section that describes the required outbound ports / network rules for an AKS cluster when using a firewall.
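Something like the following az CLI sketch covers the pieces that usually block a private AKS deployment behind Azure Firewall. The resource group, firewall name, and AKS subnet CIDR below are placeholders, and the exact list in the docs may differ by region and version, so treat this as a starting point rather than the complete set:

```bash
# Placeholders: rg-hub / fw-hub are the hub resource group and firewall,
# 10.240.0.0/16 is the AKS node subnet.

# Network rules: tunneled API server traffic (UDP 1194, TCP 9000) and NTP (UDP 123).
az network firewall network-rule create \
  --resource-group rg-hub --firewall-name fw-hub \
  --collection-name aks-egress --priority 100 --action Allow \
  --name apiudp --protocols UDP \
  --source-addresses 10.240.0.0/16 \
  --destination-addresses "AzureCloud" --destination-ports 1194

az network firewall network-rule create \
  --resource-group rg-hub --firewall-name fw-hub \
  --collection-name aks-egress \
  --name apitcp --protocols TCP \
  --source-addresses 10.240.0.0/16 \
  --destination-addresses "AzureCloud" --destination-ports 9000

az network firewall network-rule create \
  --resource-group rg-hub --firewall-name fw-hub \
  --collection-name aks-egress \
  --name ntp --protocols UDP \
  --source-addresses 10.240.0.0/16 \
  --destination-fqdns ntp.ubuntu.com --destination-ports 123

# Application rule: the AzureKubernetesService FQDN tag is meant to cover the
# HTTP/HTTPS endpoints AKS needs (container registry, management plane, package
# repos, etc.), which answers the "which URLs" part of the question.
az network firewall application-rule create \
  --resource-group rg-hub --firewall-name fw-hub \
  --collection-name aks-fqdn --priority 100 --action Allow \
  --name aks-fqdn-tag --source-addresses 10.240.0.0/16 \
  --protocols "http=80" "https=443" \
  --fqdn-tags AzureKubernetesService
```

The docs also show region-scoped destinations (e.g. AzureCloud.westeurope) for the API server rules, which is tighter than the global service tag used here.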
Related
I have a kubernetes cluster with several nodes, and it is connecting to a SQL server outside of the cluster. How can I whitelist these (potentially changing) nodes on the SQL server firewall, without having to whitelist each Node's external IP independently?
Is there a clean solution for this? Perhaps some intra-cluster tooling to route all requests through a single node?
You would have to use a NAT. It is possible, but fiddly (we do this weekly in order to connect to a hosted service to make backups, and the hosted service only whitelists a specific IP).
We used Terraform for spinning up a cluster, then deploying our backup job to it so it could connect to the hosted service, and since it was going via the NAT IP, the remote host would allow the connection.
We used Cloud NAT via Terraform (as we were on GKE): https://registry.terraform.io/modules/terraform-google-modules/cloud-nat/google/latest
Though there are surely similar options for whichever Kubernetes provider you are using. If you are running bare-metal, you'll need to do the routing yourself.
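If you are on GKE but not using Terraform, roughly the same setup can be sketched with gcloud. The names, region, and network below are placeholders:

```bash
# Reserve a static IP so the remote host has one stable address to whitelist.
gcloud compute addresses create nat-egress-ip --region europe-west1

# Cloud NAT requires a Cloud Router on the cluster's network.
gcloud compute routers create nat-router \
  --network default --region europe-west1

# NAT all outbound traffic from the network's subnets through the reserved IP,
# so every node egresses via that single address.
gcloud compute routers nats create cluster-nat \
  --router nat-router --region europe-west1 \
  --nat-external-ip-pool nat-egress-ip \
  --nat-all-subnet-ip-ranges
```

Note that this only applies to nodes without external IPs (e.g. a private cluster); nodes with public addresses will keep using those for egress.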
I have two GKE clusters set up, dev and stg let's say, and I want apps running in pods on the stg nodes to connect to the dev master and execute some commands against that GKE cluster. I have all the setup I need, and when I add the node IP addresses by hand everything works fine, but the IPs keep changing,
so my question is: how can I add the ever-changing default-pool node IPs of the other cluster to the master authorized networks?
EDIT: I think I found the solution: it's not the node IPs but the NAT IP that I had to add to the authorized networks. So, assuming those don't change, I just need to add the NAT IP, I guess, unless someone knows a better solution?
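For what it's worth, adding the NAT address can be scripted. A sketch, where the cluster name, zone, and the 203.0.113.10 NAT IP are placeholders:

```bash
# Caution: --master-authorized-networks REPLACES the existing list, so include
# any CIDRs that are already authorized alongside the NAT IP.
gcloud container clusters update dev \
  --zone europe-west1-b \
  --enable-master-authorized-networks \
  --master-authorized-networks 203.0.113.10/32
```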
I'm not sure that you are doing the correct thing. In Kubernetes, communication happens between Services, which represent deployed pods on one or several nodes.
When you communicate with the outside, you reach an endpoint (an API or a specific port). The endpoint is exposed by a load balancer that routes the traffic.
Only the Kubernetes master cares about the nodes, as providers of resources (CPU, memory, GPU, ...) inside the cluster. You should never have to reach a cluster node directly instead of going through the standard mechanisms.
If you really need to, you can reach a NodePort Service on NodeIP:nodePort, as in the sketch below.
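For illustration only, a minimal NodePort Service; the name, labels, and ports are hypothetical:

```bash
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: my-app
spec:
  type: NodePort
  selector:
    app: my-app          # pods carrying this label back the Service
  ports:
    - port: 8080         # cluster-internal Service port
      targetPort: 8080   # container port
      nodePort: 30080    # reachable from outside as <NodeIP>:30080
EOF
```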
What you really need to do is configure kubectl (in your Jenkins pipeline, for example) to connect to the GKE master IP. The master is responsible for accepting your commands (rollback, deployment, etc.). See Configuring cluster access for kubectl.
The master IP is available in the Kubernetes Engine console along with the Certificate Authority certificate. A good approach is to use a service account token to authenticate with the master. See how to log in to GKE via a service account with a token.
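A rough sketch of wiring kubectl up that way; the master IP, file paths, and account names below are placeholders, not anything GKE hands you by default:

```bash
# Placeholders: 203.0.113.20 is the master endpoint, ca.crt is the cluster CA
# downloaded from the GKE console, $TOKEN is a Kubernetes service account token.
kubectl config set-cluster dev-gke \
  --server=https://203.0.113.20 \
  --certificate-authority=ca.crt \
  --embed-certs=true

kubectl config set-credentials deploy-bot --token="$TOKEN"

kubectl config set-context dev-gke --cluster=dev-gke --user=deploy-bot
kubectl config use-context dev-gke

# The pipeline can now issue commands against the dev master, e.g.:
kubectl rollout undo deployment/my-app
```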
I have created an Ingress service that forwards TCP port 22 to a service in my cluster. As is, all inbound traffic is allowed.
What I would like to know is if it is possible to define NSG rules to prevent access to a certain subnet only. I was able to define that rule using the Azure interface. However, every time that Ingress service is edited, those Network Security Group rules get reverted.
Thanks!
I think there might be some misunderstanding about NSGs in AKS. First, let's take a look at AKS networking: Kubernetes uses Services to logically group a set of pods together and provide network connectivity. See AKS Services for more details. When you create Services, the Azure platform automatically configures any network security group rules that are needed.
Don't manually configure network security group rules to filter traffic for pods in an AKS cluster.
See NSG in AKS for more details. So in this situation, you do not need to manage the NSG rules manually.
But don't worry, you can still manage the traffic rules for your pods yourself if you want. See Secure traffic between pods using network policies in Azure Kubernetes Service. You can install the Calico network policy engine and create Kubernetes network policies to control the flow of traffic between pods in AKS, as in the example below. Although this feature was still in preview, it can do what you want. But remember, network policy can only be enabled when the cluster is created.
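For illustration, a minimal Kubernetes NetworkPolicy (the labels and namespace are hypothetical) that only lets pods labelled app=frontend reach pods labelled app=backend on TCP 80:

```bash
kubectl apply -f - <<'EOF'
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: backend-allow-frontend
  namespace: default
spec:
  podSelector:
    matchLabels:
      app: backend           # the policy applies to backend pods
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: frontend  # only frontend pods may connect
      ports:
        - protocol: TCP
          port: 80
EOF
```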
Yes, this is most definitely possible. Azure NSGs apply to subnets and NICs. You can define the CIDR on the NSG rule to allow/deny traffic on the desired port and apply it to the NIC and the subnet. A word of caution: make sure to have matching rules at the subnet and NIC level if the cluster is within the same subnet; otherwise the traffic will be blocked internally and won't get out. This doc describes them best: https://blogs.msdn.microsoft.com/igorpag/2016/05/14/azure-network-security-groups-nsg-best-practices-and-lessons-learned/.
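A hedged az CLI sketch of such a pair of rules; the resource group, NSG name, and the 10.0.1.0/24 source range are placeholders:

```bash
# Allow SSH (TCP 22) only from 10.0.1.0/24, deny it from everywhere else.
# Apply equivalent rules at both the subnet-level and NIC-level NSG.
az network nsg rule create \
  --resource-group my-rg --nsg-name aks-nsg \
  --name allow-ssh-from-mgmt --priority 200 \
  --direction Inbound --access Allow --protocol Tcp \
  --source-address-prefixes 10.0.1.0/24 \
  --destination-port-ranges 22

az network nsg rule create \
  --resource-group my-rg --nsg-name aks-nsg \
  --name deny-ssh-other --priority 300 \
  --direction Inbound --access Deny --protocol Tcp \
  --source-address-prefixes "*" \
  --destination-port-ranges 22
```

Keep in mind the earlier answer's caveat: AKS may recreate its automatically managed rules when the Service/Ingress is edited, so custom rules are safer on an NSG that AKS does not manage.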
I'm exposing an application run on a GKE cluster using a LoadBalancer service. By default, the LoadBalancer creates a rule in the Google VPC firewall with IP range 0.0.0.0/0. With this configuration, I'm able to reach the service in all situations.
I'm using an OpenVPN server inside my default network to prevent outside access to GCE instances on a certain IP range. By modifying the service .yaml file's loadBalancerSourceRanges value to match the IP range of my VPN server, I expected to be able to connect to the Kubernetes application while connected to the VPN, but not otherwise. This updated the Google VPC firewall rule with the range I entered in the .yaml file, but didn't allow me to connect to the service endpoint. The Kubernetes cluster is located in the same network as the OpenVPN server. Is there some additional configuration needed other than setting loadBalancerSourceRanges to the desired ingress IP range for the service?
You didn't mention the version of this GKE cluster; however, it might be helpful to know that, beginning with Kubernetes version 1.9.x, the automatic firewall rules have changed such that workloads in your Google Kubernetes Engine cluster cannot communicate with other Compute Engine VMs that are on the same network but outside the cluster. This change was made for security reasons. You can replicate the behavior of older clusters (1.8.x and earlier) by setting a new firewall rule on your cluster, as sketched below. You can see this notification in the Release Notes published in the official documentation.
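A rough gcloud sketch of that replication rule. The cluster name, zone, network, and the example 10.52.0.0/14 CIDR are placeholders; use the cluster CIDR that the describe call actually returns:

```bash
# Look up the cluster's pod/cluster CIDR.
gcloud container clusters describe my-cluster --zone us-central1-a \
  --format='value(clusterIpv4Cidr)'

# Allow traffic from that CIDR to other VMs on the same network (e.g. the
# OpenVPN server), replicating the pre-1.9.x behaviour.
gcloud compute firewall-rules create allow-from-gke-pods \
  --network default \
  --direction INGRESS \
  --source-ranges 10.52.0.0/14 \
  --allow tcp,udp,icmp
```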
Azure portal is not showing the nodes after associating the service fabric cluster subnet to the NSG.
(Screenshots: the node list before and after the cluster subnet is associated with the NSG.)
Am I missing something or is this a bug on portal?
You'll need to ensure that you have whitelisted the proper ports via the NSG rules. I discuss some of this here.
The short list is (as best I've been able to determine so far):
19080 and 19000 for external access, and 1025-1027, 49152-65534, and 445 within the cluster's VNet.
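A hedged az CLI sketch for the external ports; the resource group and NSG name are placeholders:

```bash
# Allow the Service Fabric client endpoint (19000) and Service Fabric Explorer
# (19080) from the internet. Names are placeholders.
az network nsg rule create \
  --resource-group sf-rg --nsg-name sf-cluster-nsg \
  --name allow-sf-client-endpoints --priority 200 \
  --direction Inbound --access Allow --protocol Tcp \
  --source-address-prefixes Internet \
  --destination-port-ranges 19000 19080
```

The intra-VNet ports (1025-1027, 49152-65534, 445) can be opened the same way with the VNet address space as the source prefix.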