How can I connect to a MiniKF k8s cluster? - kubernetes

I installed arrikto/minikf in order to have MiniKF locally, but I need to connect to its k8s cluster to port-forward its S3 service. To do this I copied the ~/.kube/config from the Vagrant host to my local machine, but this didn't work.
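Roughly, the steps I tried look like this (the paths and the MinIO service name/namespace are my assumptions about the MiniKF setup, not verified):
# copy the kubeconfig out of the MiniKF Vagrant VM onto the host
vagrant ssh -c "cat ~/.kube/config" > ~/.kube/config
# from the host this fails, possibly because the server: address in that
# file points at a VM-internal IP the host cannot reach
kubectl get nodes
# the end goal, a port-forward to the S3 (MinIO) service, would be something like
kubectl port-forward -n kubeflow svc/minio-service 9000:9000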
So the question is: how can I connect to its cluster?
Thanks

Related

Kubectl doesn't allow viewing resources in cluster from node in GCP

I have a problem that I can't solve. I have a k8s cluster on GCP. I can use kubectl from the shell that opens directly to the cluster, but when I use kubectl from a node I get "The connection to the server localhost:8080 was refused - did you specify the right host or port?".
I also copied ~/.kube/config there and it works for about 5 minutes, then fails again.
Maybe someone who uses GCP can help me. Thank you.
irvifa - I use the Kubernetes cluster that GCP provides. It works when I connect directly from Cloud Shell, but when I connect from an instance, kubectl shows the client version, but for the server it fails with "The connection to the server localhost:8080 was refused - did you specify the right host or port?"
If you are not using COS (Container-Optimized OS), you need to run
gcloud container clusters get-credentials "CLUSTER NAME"
By default, COS nodes will get credentials when you access the node.
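For example, a minimal sequence might look like this (the cluster name and zone are placeholders):
# fetch the cluster endpoint and credentials into ~/.kube/config
gcloud container clusters get-credentials my-cluster --zone us-central1-a
# verify kubectl no longer falls back to localhost:8080
kubectl get nodes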

Failing to connect GKE with GCE on the same VPC?

I am new to Google Cloud Platform, and here is the context:
I have a Compute Engine VM running as a MongoDB server and a Compute Engine VM running as a NodeJS server, already using Docker. The NodeJS application connects to Mongo via the default VPC internal IP. Now I'm trying to migrate the NodeJS application to Google Kubernetes Engine, but I can't connect to the MongoDB server when I deploy the NodeJS application Docker image to the cluster.
All services like GCE and GKE are in the same region (us-east-1).
As a hard test, I accessed a Kubernetes cluster node via SSH, deployed a simple MongoDB Docker image, and tried to connect to the remote MongoDB server from the command line, but the problem is the same: a timeout when trying to connect.
I have also checked the firewall settings on GCP as well as the bindIp setting on the MongoDB server, and nothing is blocking there.
Does anyone know what may be happening? Thank you very much.
In my case, traffic from GKE to the GCE VM was blocked by the Google firewall even though both are in the same network (default).
I had to whitelist the cluster Pod network listed in the cluster details:
Pod address range 10.8.0.0/14
https://console.cloud.google.com/kubernetes/list
https://console.cloud.google.com/networking/firewalls/list
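A rule along these lines should do it (the rule name is a placeholder, and the CIDR should be your own cluster's Pod address range):
# allow traffic from the GKE Pod range to VMs on the default network
gcloud compute firewall-rules create allow-gke-pods \
    --network=default \
    --source-ranges=10.8.0.0/14 \
    --allow=tcp,udp,icmp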
By default, containers in a GKE cluster should be able to access GCE VMs in the same VPC through internal IPs. It is just like accessing the internet (e.g., google.com) from GKE containers: GKE and the VPC know how to route the traffic. The problem must be with some other configuration (firewall or your application).
You can do a test, start a simple HTTP server in the GCE VM, say the internal IP is 10.138.0.5:
python -m SimpleHTTPServer 8080
then create a GKE container and try to access the service:
kubectl run my-client -it --image=tutum/curl --generator=run-pod/v1 -- curl http://10.138.0.5:8080

Can't connect to mongodb FROM EC2 instance

I have mongodb running on an EC2 instance. After setting mongod.conf to accept traffic from 0.0.0.0, I am able to connect and send queries from my local machine. This machine is set to accept all traffic on port 27017.
I have an express app running mongoose also deployed to EC2 on a different instance. However, I cannot connect to the mongo instance from the express instance. I checked the outbound traffic rules, port 27017 is enabled explicitly, though all outbound traffic is enabled as well.
I can't figure out why I would be able to connect from my local machine but not my EC2 instance. The only thing I can think of is perhaps some setting in the VPC these instances are in. Both instances share the same VPC. Both instances are running ubuntu. The only other difference between my local environment and deployment environment is I'm running node 11 (macOS) locally and node 8 in deployment. Any ideas?
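One quick check, before blaming the driver, is to test raw reachability from the Express instance to the Mongo instance's private IP (the IP here is a placeholder):
# -z: just scan for a listener, -v: verbose output
nc -zv 172.31.20.5 27017
If that times out, the problem is in the security groups / network ACLs rather than in Mongo or Mongoose.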

How to access minikube machine from outside?

I have a server running Ubuntu where I need to expose my app using Kubernetes tools. I created a cluster using minikube with a VirtualBox machine, and with the command kubectl expose deployment I was able to expose my app... but only on my local network. That means that when I run minikube ip I get a local IP. My question is: how can I access my minikube machine from outside?
I think the answer will be "port-forwarding". But how can I do that?
You can use SSH port forwarding to access your services from the host machine in the following way:
ssh -R 30000:127.0.0.1:8001 $USER@192.168.0.20
where 8001 is the port on which your service is exposed and 192.168.0.20 is the minikube IP.
Now you'll be able to access your application from your laptop by pointing the browser to http://192.168.0.20:30000
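An alternative sketch, without SSH, is to let kubectl itself do the forwarding and bind it to all interfaces on the Ubuntu host (service name and ports are assumptions):
# forward host port 30000 to service port 8001, listening on every interface
kubectl port-forward --address 0.0.0.0 service/my-app 30000:8001
Anything that can reach the host can then hit port 30000.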
If you mean accessing your machine from the internet, then the answer is yes, "port-forwarding", using your external IP address [https://www.whatismyip.com/]. The configuration goes into your router settings; check your router manual.

Kubernetes cannot access app via POD IP

I set up a cluster with 2 machines which are not in the same local subnet but can reach each other; machine A is Master + Node and machine B is a Node. I use flannel (subnet 172.16.0.0/16) as the network plugin. After deploying apps, I ran into a problem: I can access the app via the Pod IP on machine A, but I cannot access the same app via the Pod IP on machine B, and curl says No route to host 172.16.0.x.
I think there are no routing rules to the other machine, but I don't know how to configure the network. Could anyone explain what I am missing? Thank you very much.
I used this kubernetes/contrib Ansible script to deploy the cluster and did not change any flannel configuration.
You can use a Service of type: NodePort to access the Pod via all of the nodes' IPs.
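A minimal sketch of that approach (the deployment name and port are assumptions):
# expose the deployment on a NodePort instead of relying on Pod IPs
kubectl expose deployment my-app --type=NodePort --port=80
# shows the allocated NodePort (30000-32767 by default)
kubectl get svc my-app
The app is then reachable on any node's IP at that port, independent of Pod network routing.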