Not able to connect to the internet via HTTPS from minikube pods

Cluster info:
Minikube installation steps on the CentOS VM:
curl -LO https://storage.googleapis.com/minikube/releases/v1.21.0/minikube-linux-amd64
sudo install minikube-linux-amd64 /usr/local/bin/minikube
minikube start --addons=ingress --vm=true --memory=8192 --driver=none
Pods of this minikube cluster are not able to connect to the internet.
However, my host VM has an internet connection, with no firewall or iptables rules set up.
Can anybody help me debug this connection refused error?
UPDATE:
I have just noticed that I am able to connect to non-HTTPS URLs, but not to HTTPS URLs.

How did you start Minikube on the VM? Which command did you use?
If you are using minikube --driver=docker, it might not work.
To start minikube on a VM you have to change the driver:
minikube start --driver=none
With the docker driver, Minikube creates a container, installs Kubernetes inside it, and spawns the Pods inside that container.
Check more at: https://minikube.sigs.k8s.io/docs/drivers/none/
If you are on the docker driver you can try:
pkill docker
iptables -t nat -F
ifconfig docker0 down
brctl delbr docker0
docker -d
"It will force docker to recreate the bridge and reinit all the network rules"
reference : https://github.com/moby/moby/issues/866#issuecomment-19218300
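Note that docker -d is the pre-Docker-1.8 way of starting the daemon (today it is dockerd). On a current systemd-based host, the equivalent reset should simply be restarting the service, which re-creates the NAT rules as well:
sudo systemctl restart docker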
Try with:
docker run --net=host -it ubuntu
Or else add DNS servers to the config file /etc/default/docker:
DOCKER_OPTS="--dns 208.67.222.222 --dns 208.67.220.220"
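To narrow down the HTTP-vs-HTTPS asymmetry from the update, it can help to reproduce it from inside a throwaway pod. A minimal sketch (the image and the test URL are just placeholders):
kubectl run net-test --rm -it --restart=Never --image=curlimages/curl --command -- sh
# inside the pod:
curl -v http://example.com    # reportedly works
curl -v https://example.com   # fails; -v shows whether DNS, TCP, or the TLS handshake is refused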

Please check your container port and target port. This is my pod setup:
spec:
  containers:
  - name: hello
    image: "nginx"
    ports:
    - containerPort: 5000
This is my service setup:
ports:
- protocol: TCP
  port: 60000
  targetPort: 5000
If your target port and container port don't match, you will get a curl: (7) connection refused error.
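A quick way to compare the two values on a live cluster (the pod name is taken from the snippet above; the service name is assumed):
kubectl get pod hello -o jsonpath='{.spec.containers[0].ports[0].containerPort}'
kubectl get svc hello-svc -o jsonpath='{.spec.ports[0].targetPort}'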


telnet: Unable to connect to remote host: Connection refused - running from a kubernetes pod

My local Kubernetes cluster is running via Rancher Desktop:
% kubectl cluster-info
Kubernetes control plane is running at https://127.0.0.1:6443
CoreDNS is running at https://127.0.0.1:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
Metrics-server is running at https://127.0.0.1:6443/api/v1/namespaces/kube-system/services/https:metrics-server:https/proxy
I have created a very basic Job to telnet to localhost on port 6443 and see whether the connection is reachable from a Job pod running in the cluster:
apiVersion: batch/v1
kind: Job
metadata:
  name: telnet-test
spec:
  template:
    spec:
      containers:
      - name: test-container
        image: getting-started:latest
        imagePullPolicy: IfNotPresent
        command: ["/usr/bin/telnet"]
        args: ["127.0.0.1", "6443"]
      restartPolicy: Never
  backoffLimit: 4
The Docker image is also basic, just installing telnet:
# Download base image Ubuntu 16.04
FROM ubuntu:16.04
# Update software repository
RUN apt-get update && apt-get upgrade -y
# Install telnet from the Ubuntu repository
RUN apt-get install -y telnet
CMD ["which", "telnet"]
EXPOSE 6443
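(For reference, a Job like this can be applied and inspected as follows; the manifest filename is assumed:)
kubectl apply -f telnet-test.yaml
kubectl get pods -l job-name=telnet-test
kubectl logs job/telnet-test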
When I run this job, I get connection refused:
telnet: Unable to connect to remote host: Connection refused
Trying 127.0.0.1...
Any idea what I could be missing here ?
"kubectl cluster-info" shows you on which NODE and port your Kubernetes api-server is Running. So these are processes running on either a virtual machine or on a physical machine.
IP address 127.0.0.1 is also known as the localhost address, and belong to the local network adapter. Hence it is NOT a real IP that you can call from any other machine.
When you test 127.0.0.1:6443 inside your container image running as a Pod or with "docker run", you are not trying to call the NODE on port 6443. Instead you are trying to call the localhost address on port 6443 INSIDE the container.
When you install Kubernetes, it would be better if you configure the cluster address as the :6443 or :6443 instead of using a localhost address.
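To illustrate the point: from inside a pod, the api-server is reachable through the in-cluster service rather than through the pod's own localhost. A sketch using the standard Kubernetes defaults (the in-cluster service DNS name and the environment variables injected into every pod):
# the "kubernetes" service in the default namespace fronts the api-server:
telnet kubernetes.default.svc.cluster.local 443
# or use the variables the kubelet injects into every pod:
telnet "$KUBERNETES_SERVICE_HOST" "$KUBERNETES_SERVICE_PORT"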

Can't access ArgoCD UI that is in a VM with port forwarding set in vagrant file

I have set up a Kubernetes cluster with kubeadm in a three-node Vagrant setup. I have installed ArgoCD, and when I vagrant ssh into the kubemaster VM, I can run:
kubectl port-forward svc/argocd-server -n argocd 8080:443
and I can curl it successfully in the SSH session with:
curl -k https://localhost:8080
I have a static IP for the nodes, with the master being 192.168.56.2, and a port forward set for that VM:
config.vm.define "kubemaster" do |node|
  ...
  node.vm.network :private_network, ip: 192.168.56.2
  node.vm.network "forwarded_port", guest: 8080, host: 8080
  ...
end
On the host I try to access the ArgoCD UI in a browser with:
https://localhost:8080
https://192.168.56.2:8080
and I get connection refused.
What am I missing?
Edit:
The nodes are running Ubuntu 22 and ufw is not enabled.
I'm running on a Mac.
It turns out I needed to add the --address flag to the port-forward command:
# from
kubectl port-forward svc/argocd-server -n argocd 8080:443
# to
kubectl port-forward --address 0.0.0.0 svc/argocd-server -n argocd 8080:443
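With that flag the forward listens on all of the guest's interfaces instead of only the guest's own localhost, so both of the original URLs should now work from the host, e.g.:
curl -k https://localhost:8080      # via the Vagrant forwarded_port
curl -k https://192.168.56.2:8080   # via the private network IP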

cannot mount NFS share on my Mac PC to minikube cluster

Problem
To build a multi-node k8s dev environment, I was trying to use an NFS persistent volume in minikube with multiple nodes, and I cannot run pods properly. It seems there's something wrong with the NFS setting. So I ran minikube ssh and first tried to mount the NFS volume manually with the mount command, but it doesn't work, which brings me here.
When I run
sudo mount -t nfs 192.168.xx.xx(=macpc's IP):/PATH/TO/EXPORTED/DIR/ON/MACPC /PATH/TO/MOUNT/POINT/IN/MINIKUBE/NODE
in the minikube master node, the output is
mount.nfs: requested NFS version or transport protocol is not supported
Some relevant info:
NFS client: minikube nodes
NFS server: my Mac PC
minikube driver: docker
The cluster comprises 3 nodes (1 master and 2 worker nodes).
Currently there are no k8s resources (such as deployments, PVs, and PVCs) in the cluster.
The minikube nodes' OS is Ubuntu, so I guess "nfs-utils" is not relevant and is not installed. "nfs-common" is preinstalled in minikube.
Please see the following sections for more detail.
Goal
The goal is that the mount command in the minikube nodes succeeds and the NFS share on my Mac PC mounts properly.
What I've done so far:
On the NFS server side,
created an /etc/exports file on the Mac PC. The content is like
/PATH/TO/EXPORTED/DIR/ON/MACPC -mapall=user:group 192.168.xx.xx(=the output of "minikube ip")
and ran nfsd update; then the showmount -e command outputs
Exports list on localhost:
/PATH/TO/EXPORTED/DIR/ON/MACPC 192.168.xx.xx(=the output of "minikube ip")
rpcinfo -p shows rpcbind (= portmapper on Linux), status, nlockmgr, rquotad, nfs, and mountd are all up over TCP and UDP
ping 192.168.xx.xx(=the output of "minikube ip") says
Request timeout for icmp_seq 0
Request timeout for icmp_seq 1
Request timeout for icmp_seq 2
and continues
It seems I can't reach minikube from the host.
On the NFS client side,
started the nfs-common and rpcbind services with the systemctl command on all minikube nodes. By running sudo systemctl status rpcbind and sudo systemctl status nfs-common, I confirmed rpcbind and nfs-common are running.
minikube ssh output
Last login: Mon Mar 28 09:18:38 2022 from 192.168.xx.xx(=I guess my macpc's IP seen from minikube cluster)
so I run
sudo mount -t nfs 192.168.xx.xx(=macpc's IP):/PATH/TO/EXPORTED/DIR/ON/MACPC /PATH/TO/MOUNT/POINT/IN/MINIKUBE/NODE
in the minikube master node.
The output is
mount.nfs: requested NFS version or transport protocol is not supported
rpcinfo -p shows only portmapper and status running. I am not sure whether this is OK.
ping 192.168.xx.xx(=macpc's IP) works properly.
ping host.minikube.internal works properly.
nc -vz 192.168.xx.xx(=macpc's IP) 2049 outputs connection refused
nc -vz host.minikube.internal 2049 outputs succeeded!
Thanks in advance!
I decided to use another type of volume instead.
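For anyone landing here with the same error: the nc results above show that port 2049 is reachable via host.minikube.internal but not via the Mac's raw IP, and macOS's nfsd serves NFSv3 by default while Linux's mount.nfs negotiates v4 first. So a mount pinned to that name and version might have worked (an untested sketch; the export list would also need to admit the client address the Mac actually sees):
sudo mount -t nfs -o vers=3,tcp host.minikube.internal:/PATH/TO/EXPORTED/DIR/ON/MACPC /PATH/TO/MOUNT/POINT/IN/MINIKUBE/NODE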

Double port forwarding kubernetes + docker

Summary:
I have a docker container which is running kubectl port-forward, forwarding the port (5432) of a postgres service running as a k8s service to a local port (2223).
In the Dockerfile, I have exposed the relevant port, 2223. Then I ran the container, publishing the said port (-p 2223:2223).
Now when I am trying to access the postgres through psql -h localhost -p 2223, I am getting the following error:
psql: server closed the connection unexpectedly
This probably means the server terminated abnormally
before or while processing the request.
However, when I docker exec -it into the said container and run the above psql command, I am able to connect to postgres.
Dockerfile CMD:
EXPOSE 2223
CMD ["bash", "-c", "kubectl -n namespace_test port-forward service/postgres-11-2 2223:5432"]
Docker Run command:
docker run -it --name=k8s-conn-12 -p 2223:2223 my_image_name:latest
Output of the docker run command:
Forwarding from 127.0.0.1:2223 -> 5432
So the port forwarding is successful, and I am able to connect to the postgres instance from inside the docker container. What I am not able to do is connect from outside the container using the exposed and published port.
You are missing the following parameter in your $ kubectl port-forward ...:
--address 0.0.0.0
I've reproduced the setup that you've tried to achieve and this was the reason the connection wasn't possible. I've included more explanation below.
Explanation
$ kubectl port-forward --help
  # Listen on port 8888 on all addresses, forwarding to 5000 in the pod
  kubectl port-forward --address 0.0.0.0 pod/mypod 8888:5000
Options:
  --address=[localhost]: Addresses to listen on (comma separated). Only accepts IP addresses or
  localhost as a value. When localhost is supplied, kubectl will try to bind on both 127.0.0.1 and ::1
  and will fail if neither of these addresses are available to bind.
By default, $ kubectl port-forward will bind to localhost, i.e. 127.0.0.1. In this setup, that localhost is internal to the container and will not be accessible from your host even with the --publish (-p) parameter.
To allow connections that do not originate from localhost, you need to pass the earlier mentioned --address 0.0.0.0. This will make kubectl listen on all IP addresses and respond to the traffic accordingly.
Your Dockerfile CMD should look similar to:
CMD ["bash", "-c", "kubectl -n namespace_test port-forward --address 0.0.0.0 service/postgres-11-2 2223:5432"]
Additional reference:
Kubernetes.io: Docs: Reference: Generated: Kubectl commands

How to connect to a Kubernetes pod server on a guest OS from the host OS

I am testing k8s on Ubuntu using VirtualBox.
I have two nodes; one is the master, the other is a worker node.
I deployed a pod containing an nginx server container for testing.
I can access the webpage deployed by the pod on the master node with the commands below:
kubectl port-forward nginx-server 8080:80
curl localhost:8080
but I want to open this page on my host OS (Windows 10) using the Chrome web browser.
This is how I set up port-forwarding on VirtualBox...
To simply answer your question, use the --address argument for the kubectl command:
kubectl port-forward --address 0.0.0.0 nginx-server 8080:80
Here is the explanation:
kubectl port-forward binds to localhost by default
the port forward for your VirtualBox is bound to 10.100.0.104
0.0.0.0 will bind the port to both localhost and 10.100.0.104
changing 0.0.0.0 to 10.100.0.104 will also work for access via 10.100.0.104, but not via localhost
Also, when exposing a port, you could use a NodePort service, as sketched below: https://kubernetes.io/docs/concepts/services-networking/service/#nodeport
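A minimal NodePort sketch, assuming the pod carries a label that kubectl expose can reuse as the selector:
kubectl expose pod nginx-server --type=NodePort --port=80
kubectl get svc nginx-server   # note the assigned port in the 30000-32767 range
# then from Windows: http://10.100.0.104:<node-port>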