unable to connect from GKE to GCE - kubernetes

I'm a newbie in the Kubernetes space, sorry if I'm missing anything obvious here.
I have Prometheus running in GKE, and it has to scrape metrics exposed on an endpoint hosted on a GCE instance. The host in GCE is behind a VPN; I'm not sure if this is an issue since both are on Google Cloud. What can I do here to ensure that Prometheus in Kubernetes can connect to the host in GCE and scrape metrics from it?
Edit: added config and error message
Prometheus scrape_config
- job_name: 'cassandra-metrics'
  static_configs:
    - targets:
        - <ip>:<port>
error when trying to scrape
net_conntrack_dialer_conn_failed_total{dialer_name="cassandra-metrics",reason="timeout"} 4675

Remember that the GCP default network is global, so it doesn't matter in which zone you deploy your resources; they will always be able to reach each other using their internal or external IPs (keep in mind that if you communicate over external IPs you will be charged for ingress/egress traffic, and you also need to set up the necessary firewall rules).
If you are using different VPCs or networks, you need to set up VPC Peering, which allows communication between both VPCs, for example when your GCE instance is in one VPC and your GKE cluster is in another.
I tried to replicate your scenario: a GKE cluster and a Compute Engine instance deployed in the default network, with a firewall rule applied to allow ingress traffic to all instances in the network (no network tags used); a sketch of such a rule follows below.
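For reference, a rule along these lines would do it (the rule name and source range are assumptions, since the originals aren't shown):
# Sketch: allow ingress from internal ranges to all instances on the default network
gcloud compute firewall-rules create allow-internal-ingress \
    --network=default \
    --direction=INGRESS \
    --allow=tcp,udp,icmp \
    --source-ranges=10.0.0.0/8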
I deployed busybox in GKE using a simple yaml file (a sketch follows below), listed the pods with kubectl get pods, opened a shell in one with kubectl exec -ti $podname sh, and finally used traceroute and ping to test the connection.
The connection between both resources was successful; note that instead of using the UDP protocol for traceroute I used the -I option, which stands for "use ICMP ECHO for probes".
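The yaml I used isn't reproduced here, but a minimal busybox deployment like the following should work for the same test (image tag and names are assumptions):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: busybox
spec:
  replicas: 1
  selector:
    matchLabels:
      app: busybox
  template:
    metadata:
      labels:
        app: busybox
    spec:
      containers:
      - name: busybox
        image: busybox:1.36        # any recent busybox tag should work
        command: ["sleep", "3600"] # keep the container alive so you can kubectl exec into it
Then kubectl exec into a pod as above and run ping <ip> and traceroute -I <ip> against the GCE host.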
The instance being behind a VPN (Cloud VPN, Dedicated Interconnect or direct peering) doesn't prevent the GKE cluster from reaching it, unless, as mentioned before, both resources are on separate/different networks.

Related

EKS: Route external VPC traffic to service ClusterIP using kube-proxy (or something else)?

Requirement/Problem:
I would like to route traffic from the VPC network to a cluster IP. In AKS I was able to do this by adding an entry in the VNET route table to a node running kube-proxy. I can't seem to be able to do this in EKS. I would like to do this for development environments so I can easily access service cluster IPs without having to forward ports or create load balancers. It's my understanding that kube-proxy uses iptables to forward network traffic.
Question:
Is there something fundamental that won't allow me to route traffic to the cluster network in EKS?
Context:
I'm testing with eks.9 and k8s 1.21
As per my understanding, you should definitely be able to do this by setting proper SecurityGroup settings (which allow traffic to be forwarded to your worker node clusterIP subnet).
And yes, kube-proxy uses iptables to forward traffic, but it really depends on the overlay networking driver you have. If you're running flannel, for instance, this is true, but perhaps not for Calico or Cilium, which may use BPF instead. So double-check whether your overlay network CNI plugin supports forwarding based on iptables.
Another thing you can do (and this will not require creating load balancers) is change your service type to NodePort, or set a personalized externalIP on your service. You can provide this IP to the cluster through a subnet configured in your VPC; all incoming traffic to that subnet will then be forwarded to your services on the ports they are listening on. A sketch follows below.
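As a rough sketch (the service name, selector, ports and IP below are placeholders, not values from your setup), a service carrying an externalIP from a subnet routed to your worker nodes could look like this:
apiVersion: v1
kind: Service
metadata:
  name: my-dev-service           # placeholder name
spec:
  type: NodePort
  selector:
    app: my-app                  # placeholder selector
  ports:
    - port: 80                   # port exposed on the externalIP
      targetPort: 8080           # port the pods listen on
  externalIPs:
    - 10.0.100.25                # an IP from a VPC subnet routed to the nodes (assumption)
Traffic arriving at a node for 10.0.100.25:80 is then forwarded by kube-proxy to the backing pods.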
I hope this is helpful enough for you to get started.

How to set DNS entries & network configuration for a Kubernetes cluster at home (noob here)

I am currently running a Kubernetes cluster on my own home server (in Proxmox CTs; it was kinda difficult to get working because I am using ZFS too, but it runs now), and the setup is as follows:
lb01: haproxy & keepalived
lb02: haproxy & keepalived
etcd01: etcd node 1
etcd02: etcd node 2
etcd03: etcd node 3
master-01: k3s in server mode with a taint for not accepting any jobs
master-02: same as above, just joining with the token from master-01
master-03: same as master-02
worker-01 - worker-03: k3s agents
If I understand it correctly, k3s ships with flannel pre-installed as a CNI, as well as Traefik as an Ingress Controller.
I've set up Rancher on my cluster as well as Longhorn; the volumes are just ZFS volumes mounted inside the agents though, and as they aren't on different HDDs I've set the replicas to 1. I have a friend running the same setup (we set them up together, just yesterday) and we are planning on joining our networks through VPN tunnels and then providing storage nodes for each other as an offsite backup.
So far I've hopefully got everything correct.
Now to my question: I've got both a static IP at home and a domain, and I've pointed that domain to my static IP.
Something like this (I don't know exactly how DNS entries are actually written, this is just off the top of my head for your reference; the entries are working well):
A example.com. [[my-ip]]
CNAME *.example.com. example.com
I've currently set up a port-forward to one of my master nodes for ports 80 & 443, but I am not quite sure how you would actually configure that with HA in mind. Also, my Rancher is throwing a 503 after visiting global settings, but I have not changed anything.
So now my question: how would one actually configure the port-forward? As far as I know, k3s has a load balancer pre-installed, but how would one configure those port-forwards for HA? The one master node it's pointing to could, theoretically, just stop working, and then all services would no longer be reachable from outside.
Assuming your apps are running on ports 80 and 443, your ingress should give you a service with an external IP, and you would point your DNS at that. Read below for more info.
Seems like you are not a noob! You've got a lot going on with your cluster setup. What you are asking is a bit complicated to answer, and I will have to make some assumptions about your setup, but I will do my best to give you at least some initial info.
This tutorial has a ton of great info and may help you with what you are doing. They use kubeadm instead of k3s, but you can skip that section if you want and still use k3s.
https://www.debontonline.com/p/kubernetes.html
If you are setting up and installing etcd on your own, you don't need to do that: k3s will create an etcd cluster for you that runs inside pods on your cluster.
Load Balancing your master nodes
The haproxy + keepalived nodes would be configured to point to the IPs of your master nodes on port 6443 (TCP). Keepalived will give you a virtual IP, and you would configure your kubeconfig (the one you get from k3s) to talk to that IP. On your router you will want to reserve an IP for this (make sure not to assign it to any computers). A config sketch follows after the video link below.
This is a good video that explains how to do it with a nodejs server, but the concepts are the same for your master nodes:
https://www.youtube.com/watch?v=NizRDkTvxZo
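As a rough illustration of that layer (the master IPs below are assumptions; adjust them to your own addressing), the haproxy side could look something like this:
# /etc/haproxy/haproxy.cfg (sketch) - TCP load balancing of the k3s API
frontend k3s-api
    bind *:6443
    mode tcp
    default_backend k3s-masters
backend k3s-masters
    mode tcp
    balance roundrobin
    option tcp-check
    server master-01 192.168.1.11:6443 check   # placeholder IPs
    server master-02 192.168.1.12:6443 check
    server master-03 192.168.1.13:6443 check
Keepalived then floats the reserved virtual IP between lb01 and lb02, and your kubeconfig points at that virtual IP.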
Load Balancing your applications running in the cluster
Use an K8s Service read more about it here: https://kubernetes.io/docs/concepts/services-networking/service/
Essentially you need an external IP; I prefer to do this with MetalLB.
MetalLB gives you a service of type LoadBalancer with an external IP.
Add this flag to k3s when creating the initial master node:
https://metallb.universe.tf/configuration/k3s/
Configure MetalLB
https://metallb.universe.tf/configuration/#layer-2-configuration
You will want to reserve more IPs on your router and put them under the addresses section in the yaml below. In this example you have 11 IPs, in the range 192.168.1.240 to 192.168.1.250.
Create this as a file, for example metallb-cm.yaml:
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 192.168.1.240-192.168.1.250
Install MetalLB first with these yaml files (the first one creates the metallb-system namespace that the ConfigMap above goes into):
kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.12.1/manifests/namespace.yaml
kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.12.1/manifests/metallb.yaml
Then apply the ConfigMap:
kubectl apply -f metallb-cm.yaml
source - https://metallb.universe.tf/installation/#installation-by-manifest
Ingress
Your ingress will need a service of type LoadBalancer; use its external IP as the IP you point your DNS at.
kubectl get service -A - look for your ingress service and check that it has an external IP and does not say pending.
I will do my best to answer any of your follow up questions. Good Luck!

How to IP whitelist a Kubernetes cluster

So I have a kubernetes cluster running in Google Cloud. And from pods inside the cluster I need to access an external DB which has IP whitelisting configured. It seems that I need a static, shared IP for the cluster's outgoing traffic, what's the best approach?
Setting up a service IP seems irrelevant as that's for inbound traffic. I looked into Cloud NAT and it seems promising, but I'm not exactly sure about how to set that up. Any docs/tutorial would be helpful, thanks!
According to the docs, when traffic goes out of a Kubernetes cluster in GKE it gets SNATed with the IP of the node, so you could whitelist the IPs of all the GKE cluster nodes; a quick way to list them is sketched below.
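For example, either of these should show the node external IPs (the gcloud filter assumes the default gke- node naming):
kubectl get nodes -o wide     # EXTERNAL-IP column
gcloud compute instances list --filter="name~'^gke-'" --format="value(name,networkInterfaces[0].accessConfigs[0].natIP)"
Keep in mind those IPs change whenever nodes are recreated, added or removed, which is why Cloud NAT is usually the cleaner option.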
Here are some best practices on connecting to external services from a Kubernetes cluster, along with an example of connecting to Cloud SQL from Google Kubernetes Engine.
An example setup of Cloud NAT on GKE is also available; a rough gcloud sketch follows below.
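A rough Cloud NAT sketch with a reserved static IP (region, names and network below are assumptions; adapt them and see the Cloud NAT docs for details):
# Reserve a static external IP that you can whitelist on the DB side
gcloud compute addresses create gke-egress-ip --region=us-central1
# Create a Cloud Router and a NAT config that uses the reserved IP for all subnets
gcloud compute routers create nat-router --network=default --region=us-central1
gcloud compute routers nats create nat-config \
    --router=nat-router --region=us-central1 \
    --nat-external-ip-pool=gke-egress-ip \
    --nat-all-subnet-ip-ranges
Note that Cloud NAT only applies to instances without their own external IPs, so this generally goes together with a private GKE cluster.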

redis deployment on Kubernetes with sentinel

I am deploying Redis with a Sentinel architecture on Kubernetes.
When I work with deployments on my cluster that require Redis, everything works fine.
The problem is that some services of my deployment are located on a different Kubernetes cluster.
When the clients reach the Redis Sentinel (which I exposed via NodePort, mapping internally to 26379), they get a reply with the master IP.
What actually happens is that they are getting the Redis master's Kubernetes-internal IP and the internal port 6379.
As I said, while working inside Kubernetes that works fine, since the clients can access that IP, but when the services are external it is not reachable.
I found that there are configuration options named:
cluster-announce-ip and cluster-announce-port
I have set those values to the external IP of the cluster and the external port, hoping that it would solve the problem, but still no change.
I am using the official docker image: redis:4.0.11-alpine
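As a point of reference rather than a verified fix: cluster-announce-ip/cluster-announce-port are Redis Cluster options; with Sentinel the announce directives look roughly like this (values are placeholders, Redis 4.x naming):
# sentinel.conf - the address Sentinel reports for itself
sentinel announce-ip <external-ip>
sentinel announce-port <sentinel-nodeport>
# redis.conf on the master/replicas (Redis 4.x still uses the slave-* names)
slave-announce-ip <external-ip>
slave-announce-port <redis-nodeport>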
Any help would be appreciated.

Wrong IP from GCP kubernetes load balancer to app engine's service

I'm having some trouble with an nginx pod inside a Kubernetes cluster located on GCP, which should be able to access a service located on App Engine.
I have set firewall rules in App Engine to deny all and only allow some IPs, but the IP which hits my App Engine service isn't the IP of the load balancer in front of my nginx; instead it's the IP of one of the nodes of the cluster.
An image is better than a thousand words, so here's an image of our architecture:
The problem is: the IP which hits App Engine's firewall is IP A, whereas I thought it would be IP B. IP A changes every time I kill/create the cluster. If it were IP B, I could easily allow this IP in App Engine's firewall rules, since I've made it static. Does anyone have an idea how to get IP B instead of IP A?
Thanks
The IP address assigned to your nginx "load balancer" is (likely) not an IP owned or managed by your Kubernetes cluster. Services of type LoadBalancer in GKE use Google Cloud Load Balancers. These are an external abstraction which terminates inbound connections in Google's front-end infrastructure and passes traffic to the individual k8s nodes in the cluster for onward delivery to your k8s-hosted service.
Pods in a Kubernetes cluster will, by default, route egress traffic out of the cluster using the configuration of their host node. In GKE, this route corresponds to the gateway of the VPC in which the cluster (and, by extension, Compute Engine instances) exists. The public IP of cluster nodes will change as they are added and removed from the pool.
A workaround uses a dedicated instance with a static external IP to process egress traffic leaving your VPC (i.e. egress from your cluster). Google has a tutorial for this purpose here: https://cloud.google.com/solutions/using-a-nat-gateway-with-kubernetes-engine
There are k8s-native solutions, but these will be unsuitable in a GKE context at present due to the inability to maintain any node with a non-ephemeral public IP.