How to make two local minikube clusters communicate with each other? - kubernetes

I have two minikube clusters (two separate profiles) running locally; call them minikube cluster A and minikube cluster B. Each of these clusters also has an ingress and a DNS name associated with it locally. The DNS names are hello.dnsa and hello.dnsb. I am able to ping and nslookup both of them, just like in https://minikube.sigs.k8s.io/docs/handbook/addons/ingress-dns/#testing
I want pod A in cluster A to be able to communicate with pod B in cluster B. How can I do that? I logged into pod A of cluster A and did telnet hello.dnsb 80, and it doesn't connect; I suspect there is no route. Similarly, I logged into pod B of cluster B and did telnet hello.dnsa 80, and it doesn't connect either. However, if I do telnet hello.dnsa 80 or telnet hello.dnsb 80 from my host machine, telnet works!
Is there any simple way to solve this problem for now? I am OK with any solution, even adding routes manually using ip route add if needed.
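For what it's worth, the manual-route idea would look roughly like the sketch below. It assumes the docker driver, placeholder profile names clusterA/clusterB, and the default docker-driver gateway, none of which are given in the question, so treat it as a starting point rather than a known fix (and the hello.dnsb name would still need to resolve inside cluster A's pods).
# find cluster B's node IP (profile names are placeholders)
B_IP=$(minikube -p clusterB ip)
# from inside cluster A's node, add a host route towards cluster B's node;
# 192.168.49.1 is the default docker-driver gateway and may differ on your machine
minikube -p clusterA ssh -- sudo ip route add ${B_IP}/32 via 192.168.49.1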

Skupper is a plugin available for performing these actions. It is a service interconnect that facilitates secure communication between the clusters; for more information on Skupper, go through this documentation.
There are multiple examples in which minikube is integrated with Skupper; go through this configuration documentation for more details.
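As a rough sketch of what that typically involves (assuming the skupper CLI is installed, and using deployment/backend on port 8080 as a placeholder for the workload in cluster B):
# with kubectl pointing at cluster A
skupper init
skupper token create clusterA-token.yaml
# with kubectl pointing at cluster B
skupper init
skupper link create clusterA-token.yaml
# expose a workload so it becomes reachable by service name from cluster A
skupper expose deployment/backend --port 8080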

Related

Access Kubernetes applications via localhost from the host system

Is there any way, other than port-forwarding, to access the apps running inside my K8s cluster via http://localhost:port from my host operating system?
For example
I am running a minikube setup to practise K8s, and I deployed three pods along with their services. I chose three different service types: ClusterIP, NodePort and LoadBalancer.
For ClusterIP, I can use the port-forward option to access my app via localhost:port, but the problem is that I have to leave that command running, and if for some reason it is disrupted, the connection will be dropped. So is there any alternative solution here?
For NodePort, I can only access this via the minikube node IP, not via localhost; therefore, if I have to access it remotely, I won't have a route to this node IP address.
For LoadBalancer, this is not a valid option as I am running minikube on my local system, not in the cloud.
Please let me know if there is any other solution to this problem. The reason I am asking is that when I deploy the same application via docker compose, I can access all these services via localhost:port and can even call them via VM_IP:port from other systems.
Thanks,
-Rafi
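For reference, the two access patterns described in the question look roughly like this (service names and ports are placeholders):
# ClusterIP: has to stay running in the foreground
kubectl port-forward svc/my-clusterip-svc 8080:80
# NodePort: prints a http://<minikube-ip>:<nodeport> URL reachable from the host
minikube service my-nodeport-svc --url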

Kubernetes: kafka pod reachability issue from another pod

I know the information below is not enough to trace the issue, but I still want some solution.
We have an Amazon EKS cluster.
Currently, we are facing a reachability issue with the Kafka pod.
Environment:
10 nodes in total, in Availability Zones ap-south-1a and ap-south-1b
I have three replicas of the Kafka cluster (Helm chart installation)
I have three replicas of ZooKeeper (Helm chart installation)
Kafka uses an external advertised listener on port 19092
Kafka has a service with an internal Network Load Balancer
I have deployed a test pod to check reachability of the Kafka pod
We are using Cloud Map-based DNS for the advertised listener
Working:
When I run a telnet command from EC2, like telnet 10.0.1.45 19092, it works as expected. IP 10.0.1.45 is the load balancer's IP.
When I run a telnet command from EC2, like telnet 10.0.1.69 31899, it works as expected. IP 10.0.1.69 is an actual node's IP and 31899 is the NodePort.
Problem:
When I run the same command from the test pod, like telnet 10.0.1.45 19092, it sometimes works and sometimes gives an error like telnet: Unable to connect to remote host: Connection timed out
The issue seems to be something related to kube-proxy. We need help to resolve this issue.
Can anyone help to guide me?
Can I restart kube-proxy? Does it affect other pods/deployments?
I believe this problem is caused by AWS's NLB TCP-only nature (as mentioned in the comments).
In a nutshell, your pod-to-pod communication fails when hairpin is needed.
To confirm this is the root cause, you can verify that when the telnet works, the Kafka pod and the client pod are not on the same EC2 node, and that when they are on the same EC2 instance, the telnet fails.
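A quick way to check which node each pod landed on (the pod names below are placeholders):
# compare the NODE column for the Kafka pod and the test pod
kubectl get pod kafka-0 test-pod -o wide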
There are (at least) two approaches to tackle this issue:
Use K8s internal networking - refer to the k8s Service's URL
Every K8s service has its own DNS FQDN for internal use (meaning traffic stays on the k8s network only, without going out to the LoadBalancer and coming back into k8s again). You can just telnet this instead of going through the NodePort via the LB.
I.e. let's assume your kafka service is named kafka. Then you can just telnet kafka.<namespace>.svc.cluster.local (on the port exposed by the kafka service).
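For instance, assuming the chart was installed into a kafka namespace and the internal listener is on 9092 (both are assumptions, not given above):
telnet kafka.kafka.svc.cluster.local 9092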
Use K8s anti-affinity to make sure the client and Kafka are never scheduled on the same node.
Oh and as indicated in this answer you might need to make that service headless.

How to access minikube machine from outside?

I have a server running on Ubuntu where I need to expose my app using Kubernetes tools. I created a cluster using minikube with a VirtualBox machine, and with the command kubectl expose deployment I was able to expose my app... but only on my local network. This means that when I run minikube ip I receive a local IP. My question is: how can I access my minikube machine from outside?
I think the answer will be "port-forwarding". But how can I do that?
You can use SSH port forwarding to access your services from the host machine in the following way:
ssh -R 30000:127.0.0.1:8001 $USER@192.168.0.20
Here 8001 is the port on which your service is exposed and 192.168.0.20 is the minikube IP.
Now you'll be able to access your application from your laptop by pointing the browser to http://192.168.0.20:30000
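Or, as a quick check from the command line:
curl http://192.168.0.20:30000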
If you mean accessing your machine from the internet, then the answer is yes, "port-forwarding", using your external IP address [https://www.whatismyip.com/]. The configuration goes into your router settings. Check your router manual.

Connecting to GCP Kubernetes in a private VPC with NAT

I have created a new GCP Kubernetes cluster. The cluster is private, with NAT - it has no connection to the internet. I also deployed a bastion machine which allows me to connect into my private network (VPC) from the internet. This is the tutorial I based it on. SSH into the bastion is currently working.
The Kubernetes master is not exposed externally. The result:
$ kubectl get pods
The connection to the server 172.16.0.2 was refused - did you specify the right host or port?
So I installed kubectl on the bastion and ran:
$ kubectl proxy --port 1111
Starting to serve on 127.0.0.1:3128
Now I want to connect my local kubectl to the remote proxy server. I set up a secure tunnel to the bastion server and mapped the remote port to a local port. I also tried it with curl and it's working.
Now I am looking for something like
$ kubectl --use-proxy=1111 get pods
(make my local kubectl pass through my remote proxy)
How can I do it?
kubectl proxy acts as an apiserver itself, behaving exactly like the target apiserver, but queries through it are already authenticated. From your description ('works with curl'), it sounds like you've set it up correctly; you just need to point the client kubectl at it:
kubectl --server=http://localhost:1111 get pods
(where port 1111 on your local machine is where kubectl proxy is available; in your case, through a tunnel)
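Put together, the workstation side might look like this (user/bastion are placeholders, and it assumes kubectl proxy --port 1111 is already running on the bastion):
# forward local port 1111 to the proxy listening on the bastion's loopback
ssh -L 1111:127.0.0.1:1111 user@bastion
# then, in another terminal on the workstation
kubectl --server=http://localhost:1111 get pods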
If you need exec or attach through kubectl proxy you'll need to run it with either --disable-filter=true or --reject-paths='^$'. Read the fine print and the consequences of those options.
Safer way
All in all, this is not how I access clusters through a bastion. The problem with the above approach is that if someone gains access to the bastion, they immediately have valid Kubernetes credentials (as kubectl proxy needs those to function). It is also not the safest solution if the bastion is shared between multiple operators. One of the main points of a bastion is that it never has credentials on it. What I fancy doing is accessing the bastion from my workstation with:
ssh -D 1080 bastion
That makes ssh act as a SOCKS proxy. You need GatewayPorts yes in your sshd_config for this to work. Thereafter, from the workstation, I can use
HTTPS_PROXY=socks5://127.0.0.1:1080 kubectl get pod

GKE: secured access to services from outside the cluster

Is there any way to access the 'internal' services (those not exposed outside) of the cluster in a secure way from the outside?
The goal is simple: I need to debug clients of those services and need to access them, but don't want to expose them outside.
On a regular single host I would normally tunnel to the host with SSH and map the ports to localhost; I tried using an SSHD container but that didn't get me very far: the services are not directly on that container, so I'm not sure how to get to the next hop on the network since the services' IPs are dynamically managed.
Ideally a VPN would be much more convenient, but GKE doesn't seem to support VPN for the road-warrior situation.
Is there any solution for this use-case?
Thanks for your input.
EDIT:
I see here:
https://github.com/kubernetes/kubernetes/blob/master/docs/user-guide/accessing-the-cluster.md#ways-to-connect
that the only supported way to connect right now is HTTP/HTTPS, meaning I can proxy HTTP calls but not connect to an arbitrary port.
You can do this with a combination of running kubectl proxy on your dev machine and using the proxying functionality built into the master (that's a lot of proxying, but bear with me).
First, run kubectl proxy. Note the port that is bound locally (it should be 8001 by default). This will cause kubectl to create a tunnel to your master instance that you can hit locally without needing to pass any authentication (technically, you can do all of the following steps without doing this first by hitting the master directly, but this is simpler for debugging).
Next, point a client (web browser, curl, etc) at http://localhost:8001/api/v1/proxy/namespaces/<ns>/services/<svc>/, replacing <ns> with the namespace in which your service is configured and <svc> with the name of your service. You can also append a particular request path to the end of the URL, so if your pods behind the service are hosting a file called data.json you would append that to the end of the request path.
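For example, something like this (the namespace, service name and file name are placeholders; on newer clusters the path takes the form /api/v1/namespaces/<ns>/services/<svc>/proxy/ instead):
curl http://localhost:8001/api/v1/proxy/namespaces/default/services/my-svc/data.json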
This is how the update-demo tutorial works, so if you get stuck I'd recommend walking through that example and taking a close look at what the javascript does (it isn't too complicated).
After trying the many methods explained in the doc mentioned above, the thing that worked for me was:
1) Create an SSHD daemon container to SSH to the cluster
2) Create an ssh Service with type: NodePort
3) get the port number with
kubectl describe service sshd
4) use ssh port forwarding to get to the service with:
ssh -L <local-port>:<my-k8s-service-name>:<my-k8s-service-port> -p <sshd-port> user@sshd-container
for example
ssh -L 2181:zookeeper:2181 -p 12345 root@sshd-container
Then I have my zookeeper service on localhost:2181
For more port mappings, use alternate ports.
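For instance, forwarding a second service over the same connection might look like this (the kafka service name and port are placeholders):
ssh -L 2181:zookeeper:2181 -L 9092:kafka:9092 -p 12345 root@sshd-container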
You can also try using kubectl port-forward:
http://kubernetes.io/docs/user-guide/connecting-to-applications-port-forward/
http://kubernetes.io/docs/user-guide/kubectl/kubectl_port-forward/
Example:
kubectl port-forward POD_NAME LOCAL_PORT:REMOTE_PORT
kubectl port-forward redis-master 6379:6379
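port-forward also accepts a service as the target, so the zookeeper service from the earlier example could presumably be forwarded with:
kubectl port-forward svc/zookeeper 2181:2181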