Can't connect to GKE cluster with kubectl, getting timeout - kubernetes

I executed the following command:
gcloud container clusters get-credentials my-noice-cluter --region=asia-south2
and that command runs successfully. I can see the relevant config with kubectl config view.
But when I run kubectl, I get a timeout:
❯ kubectl get pods -A -o wide
Unable to connect to the server: dial tcp <some noice ip>:443: i/o timeout
If I create a VM in GCP and use kubectl there, or use GCP's Cloud Shell, it works, but it does not work on our local laptops and PCs.
Some network info about our cluster:
Private cluster: Disabled
Network: default
Subnet: default
VPC-native traffic routing: Enabled
Pod address range: 10.122.128.0/17
Service address range: 10.123.0.0/22
Intranode visibility: Enabled
NodeLocal DNSCache: Enabled
HTTP Load Balancing: Enabled
Subsetting for L4 Internal Load Balancers: Disabled
Control plane authorized networks: office (192.169.1.0/24)
Network policy: Disabled
Dataplane V2: Disabled
I also have firewall rules to allow http/https:
❯ gcloud compute firewall-rules list
NAME NETWORK DIRECTION PRIORITY ALLOW DENY DISABLED
default-allow-http default INGRESS 1000 tcp:80 False
default-allow-https default INGRESS 1000 tcp:443 False
....

If it works from your VPC and not from outside, it's because you created a private GKE cluster. The master is only reachable through the private IP or through the authorized networks.
Speaking of authorized networks, you have one: office (192.169.1.0/24). Sadly, you registered a private IP range from your office network, not the public IP your office uses to reach the internet.
To solve that, go to a site that shows you your public IP, then update the authorized networks for your cluster with that IP/32, and try again.
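For example, reusing the cluster name and region from the question, the update might look like this (203.0.113.7 is a placeholder for your real public IP; note that the flag replaces the whole list, so keep any ranges you still need):
# Replace 203.0.113.7 with the public IP reported by a "what is my IP" site.
gcloud container clusters update my-noice-cluter \
  --region=asia-south2 \
  --enable-master-authorized-networks \
  --master-authorized-networks=192.169.1.0/24,203.0.113.7/32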

If it works from the GCP VM but does not work from your local machine, that means it's either related to the GCP firewall or your GKE cluster does not have a public IP.
First check whether your cluster IP is public; if it is, add a firewall rule that allows traffic over HTTPS (port 443). You can do that with the gcloud tool or via the GCP Console under "Firewall -> Create Firewall Rule".
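If you go the gcloud route, such a rule could look roughly like this (the rule name and source range are placeholders, not a verified fix for this exact cluster):
gcloud compute firewall-rules create allow-https-from-office \
  --network=default \
  --direction=INGRESS \
  --allow=tcp:443 \
  --source-ranges=203.0.113.7/32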

Related

EKS pod to RDS connectivity

Trying to check the connection to RDS Postgres from an EKS pod, unable to connect. Below are the details:
VPC: 10.0.0.0/16
EKS:
in private subnets (10.0.1.0/24, 10.0.2.0/24), AZ-a and AZ-b
security groups: control-plane and worker-node-sec-group
RDS Postgres:
db subnet group: (10.0.3.0/24, 10.0.4.0/24), AZ-a and AZ-b
DB is deployed in 10.0.3.0/24 (AZ-a)
sec-group: db-subnet-group
=> allowing traffic from both EKS cluster security groups
=> also from the whole VPC 10.0.0.0/16, on port 5432
NACL and route table entries seem OK; everything is allowed.
Anything missing which needs to be configured?
The outbound rule was missing in the EKS worker node security group. I updated it to allow outbound traffic on port 5432 with the DB security group as the destination.
I was under the impression that security groups are stateful by default, meaning everything is allowed to go out to destination 0.0.0.0/0. However, you can modify/delete the outbound rules.
In my case, I was deploying EKS using the terraform EKS module, and that module allows only the required outbound traffic.
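For reference, a hedged sketch of adding that egress rule with the AWS CLI (the security group IDs are placeholders for the worker node and DB security groups):
# Allow the worker nodes to open connections to the DB security group on 5432.
aws ec2 authorize-security-group-egress \
  --group-id <worker-node-sg-id> \
  --ip-permissions 'IpProtocol=tcp,FromPort=5432,ToPort=5432,UserIdGroupPairs=[{GroupId=<db-sg-id>}]'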

Can't access NodePort service in OVH Managed Kubernetes cluster

In my OVH Managed Kubernetes cluster I'm trying to expose a NodePort service, but it looks like the port is not reachable via <node-ip>:<node-port>.
I followed this tutorial: Creating a service for an application running in two pods. I can successfully access the service on localhost:<target-port> with kubectl port-forward, but it doesn't work on <node-ip>:<node-port> (request timeout), though it does work from inside the cluster.
The tutorial says that I may have to "create a firewall rule that allows TCP traffic on your node port" but I can't figure out how to do that.
The security group seems to allow any traffic.
The solution is to NOT enable "Private network attached" ("réseau privé attaché") when you create the managed Kubernetes cluster.
If you have already paid for your nodes, configured DNS, or anything else, you can select your current Kubernetes cluster, choose "Reset your cluster" ("réinitialiser votre cluster"), then "Keep and reinstall nodes" ("conserver et réinstaller les noeuds"), and at the "Private network attached" ("Réseau privé attaché") option, choose "None (public IPs)" ("Aucun (IPs publiques)").
I faced the same use case and problem, and after some research and experimentation, got the hint from the small comment on this dialog box:
By default, your worker nodes have a public IPv4. If you choose a private network, the public IPs of these nodes will be used exclusively for administration/linking to the Kubernetes control plane, and your nodes will be assigned an IP on the vLAN of the private network you have chosen
Now I have my Traefik ingress as a DaemonSet using hostNetwork, and every node is reachable directly, even on low ports (as you saw yourself, the default security group is open).
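As an illustration of that setup, a minimal hostNetwork DaemonSet might look like the following sketch (the name, image and port are placeholders, not my exact Traefik configuration):
kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: edge-proxy            # placeholder name
spec:
  selector:
    matchLabels:
      app: edge-proxy
  template:
    metadata:
      labels:
        app: edge-proxy
    spec:
      hostNetwork: true       # the container binds directly to each node's network, public IP included
      containers:
        - name: proxy
          image: nginx:alpine # placeholder image that listens on port 80
          ports:
            - containerPort: 80   # reachable as <node-public-ip>:80
EOF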
Well, I can't help any further I guess, but I would check the following (a few quick commands for these checks are sketched after this list):
Are you using the public node IP address?
Did you configure your service as a LoadBalancer properly?
https://kubernetes.io/docs/concepts/services-networking/service/#loadbalancer
Do you have a load balancer, and is it set up properly?
Did you install any Ingress controller (ingress-nginx?)? You may need to add a DaemonSet for this ingress controller so that the ingress-controller pod runs on each node in your cluster.
Otherwise, I would suggest an Ingress (if this works, you can exclude any firewall-related issues).
This page explains it very well:
What's the difference between ClusterIP, NodePort and LoadBalancer service types in Kubernetes?
In AWS, you have things called security groups... you may have the same kind of thing in your k8s provider (or even on your local machine). Please add those ports to the security groups or local firewalls. In AWS, you may need to bind those security groups to your EC2 instance (ingress node) as well.
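A few quick commands that roughly map to the checks above (the service name and the addresses are placeholders):
kubectl get nodes -o wide                      # EXTERNAL-IP column shows the public node IPs
kubectl get svc my-service -o wide             # confirm the service type and the allocated nodePort
curl -m 5 http://<node-public-ip>:<node-port>  # times out if a firewall/security group blocks it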

Is it possible to configure a VM instance on GCP to only receive requests from the Load Balancer?

I have an nginx deployment on k8s that is exposed via a NodePort service. Is it possible, using the GCP firewall, to allow only an application LB to talk to these NodePorts?
I wouldn't like to leave these two NodePorts open to everyone.
You can certainly control access traffic to your VM instance via the firewall.
That is why the firewall service exists.
If you created a VM in the default VPC with the default firewall settings, the firewall will deny all traffic from outside.
You just need to write a rule to allow traffic from the application LB.
According to Google's documentation, you need to allow traffic from the 130.211.0.0/22 and 35.191.0.0/16 IP ranges.
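A hedged example of such a rule for NodePort-range traffic (the rule name and target tag are placeholders; narrow the port range to your actual NodePorts if you prefer):
gcloud compute firewall-rules create allow-lb-to-nodeports \
  --network=default \
  --direction=INGRESS \
  --allow=tcp:30000-32767 \
  --source-ranges=130.211.0.0/22,35.191.0.0/16 \
  --target-tags=<your-node-tag>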

Is it possible to SSH into a VM from within a pod?

I have a pod inside a Kubernetes cluster on GKE that remotely creates a Kubernetes cluster on Azure, and I want to ssh into the master VM of the Azure cluster from the pod so I can remotely run some commands on it. However, I get a timeout whenever I run ssh / scp inside the pod:
ssh: connect to host port 22: Connection timed out
I already installed the OpenSSH client/server in my pod. I ensured that the VM has a public IP address and that the pod has access to the VM's private key. I tried sshing into the Azure master VM from my laptop and it works just fine. Any ideas?
If you are running a private cluster in GKE, check their docs; they say:
Private nodes do not have outbound Internet access because they don't have external IP addresses. Private Google Access provides private nodes and their workloads with limited outbound access to Google Cloud Platform APIs and services over Google's private network. For example, Private Google Access makes it possible for private nodes to pull container images from Google Container Registry, and to send logs to Stackdriver.
Check this other question => Kubernetes: Connect to the outside world from pod
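A quick way to confirm whether the pod has any outbound path at all (the pod name and target IP are placeholders, and the pod image must have nc available):
kubectl get nodes -o wide                                        # an empty EXTERNAL-IP column suggests private nodes
kubectl exec -it my-pod -- nc -vz -w 5 <azure-vm-public-ip> 22   # times out if the pod has no egress to the internet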
Follow the steps below:
Deploy a test pod that has the ssh binary in the Azure cluster.
Update the ssh certificates on the cluster nodes (ignore this if you already have certs).
Copy the ssh certs into the test pod using the kubectl cp command.
Get into the test pod and ssh to any of the cluster nodes.
You should be able to run commands on the cluster node.
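A sketch of the copy-and-ssh steps (the pod name, key path, user and node IP are placeholders):
kubectl exec -it ssh-test-pod -- mkdir -p /root/.ssh
kubectl cp ~/.ssh/id_rsa ssh-test-pod:/root/.ssh/id_rsa
kubectl exec -it ssh-test-pod -- chmod 600 /root/.ssh/id_rsa
kubectl exec -it ssh-test-pod -- ssh -i /root/.ssh/id_rsa azureuser@<node-ip>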

Google Kubernetes Engine Service loadBalancerSourceRanges not allowing connection on IP range

I'm exposing an application running on a GKE cluster using a LoadBalancer service. By default, the LoadBalancer creates a rule in the Google VPC firewall with IP range 0.0.0.0/0. With this configuration, I'm able to reach the service in all situations.
I'm using an OpenVPN server inside my default network to prevent outside access to GCE instances on a certain IP range. By modifying the service .yaml file's loadBalancerSourceRanges value to match the IP range of my VPN server, I expected to be able to connect to the Kubernetes application while connected to the VPN, but not otherwise. This updated the Google VPC firewall rule with the range I entered in the .yaml file, but it didn't allow me to connect to the service endpoint. The Kubernetes cluster is located in the same network as the OpenVPN server. Is there some additional configuration needed, other than setting loadBalancerSourceRanges to the desired ingress IP range for the service?
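For context, a loadBalancerSourceRanges stanza looks roughly like this (the name, ports and the 10.8.0.0/24 VPN range are placeholders, not my actual manifest):
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: my-app               # placeholder name
spec:
  type: LoadBalancer
  selector:
    app: my-app
  ports:
    - port: 80
      targetPort: 8080
  loadBalancerSourceRanges:
    - 10.8.0.0/24            # placeholder: only clients from this range pass the LB firewall rule
EOF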
You didn't mention the version of this GKE cluster; however, it might be helpful to know that, beginning with Kubernetes version 1.9.x, the automatic firewall rules have changed such that workloads in your Google Kubernetes Engine cluster cannot communicate with other Compute Engine VMs that are on the same network but outside the cluster. This change was made for security reasons. You can replicate the behavior of older clusters (1.8.x and earlier) by setting a new firewall rule on your cluster. You can see this notification in the Release Notes published in the official documentation.
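If you do want to replicate the old behaviour (for example, to let the OpenVPN server's subnet reach the nodes directly), a firewall rule along these lines can be added; the rule name, source range and node tag here are placeholders:
gcloud compute firewall-rules create allow-vpn-to-gke-nodes \
  --network=default \
  --direction=INGRESS \
  --allow=tcp,udp,icmp \
  --source-ranges=<vpn-client-range> \
  --target-tags=<gke-node-tag>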