Double port forwarding kubernetes + docker - postgresql

Summary:
I have a Docker container that is running kubectl port-forward, forwarding the port (5432) of a Postgres service running as a Kubernetes service to a local port (2223).
In the Dockerfile, I have exposed the relevant port, 2223. Then I ran the container, publishing said port (-p 2223:2223).
Now when I try to access Postgres through psql -h localhost -p 2223, I get the following error:
psql: server closed the connection unexpectedly
This probably means the server terminated abnormally
before or while processing the request.
However, when I docker exec -ti into said container and run the above psql command, I am able to connect to Postgres.
Dockerfile CMD:
EXPOSE 2223
CMD ["bash", "-c", "kubectl -n namespace_test port-forward service/postgres-11-2 2223:5432"]
Docker Run command:
docker run -it --name=k8s-conn-12 -p 2223:2223 my_image_name:latest
Output of the docker run command:
Forwarding from 127.0.0.1:2223 -> 5432
So the port forwarding is successful, and I am able to connect to the Postgres instance from inside the Docker container. What I am not able to do is connect from outside the container through the exposed and published port.

You are missing the following parameter with your $ kubectl port-forward ...:
--address 0.0.0.0
I've reproduced the setup you were trying to achieve, and the missing --address flag was the reason the connection wasn't possible. I've included more explanation below.
Explanation
$ kubectl port-forward --help
Listen on port 8888 on all addresses, forwarding to 5000 in the pod
kubectl port-forward --address 0.0.0.0 pod/mypod 8888:5000
Options:
--address=[localhost]: Addresses to listen on (comma separated). Only accepts IP addresses or
localhost as a value. When localhost is supplied, kubectl will try to bind on both 127.0.0.1 and ::1
and will fail if neither of these addresses are available to bind.
By default, $ kubectl port-forward binds to localhost, i.e. 127.0.0.1. In this setup that localhost is internal to the container, so the forwarded port will not be accessible from your host even with the --publish (-p) parameter.
To allow connections that do not originate from localhost, you need to pass the earlier mentioned --address 0.0.0.0. This will make kubectl listen on all IP addresses and respond to the traffic accordingly.
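You can see the difference from inside the running container; a quick diagnostic sketch (assuming the container name from the question and that the ss tool is available in the image):
docker exec -it k8s-conn-12 ss -tln
# Without --address: LISTEN 127.0.0.1:2223 (reachable only inside the container)
# With --address 0.0.0.0: LISTEN 0.0.0.0:2223 (reachable through -p 2223:2223)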
Your Dockerfile CMD should look similar to:
CMD ["bash", "-c", "kubectl -n namespace_test port-forward --address 0.0.0.0 service/postgres-11-2 2223:5432"]
Additional reference:
Kubernetes.io: Docs: Reference: Generated: Kubectl commands

Related

not able to connect to internet via HTTPS minikube pods

Cluster info:
Minikube installation steps on a CentOS VM:
curl -LO https://storage.googleapis.com/minikube/releases/v1.21.0/minikube-linux-amd64
sudo install minikube-linux-amd64 /usr/local/bin/minikube
minikube start --addons=ingress --vm=true --memory=8192 --driver=none
Pods of this minikube cluster are not able to connect to the internet.
However, my host VM has an internet connection, with no firewall or iptables setup.
Can anybody help me debug this connection refused error?
UPDATE:
I have just noticed that I am able to connect to non-HTTPS URLs, but not to HTTPS URLs.
How did you start Minikube on the VM? Which command did you use?
If you are using minikube with --driver=docker, it might not work.
To start minikube on a VM you have to change the driver:
minikube start --driver=none
With the docker driver, minikube creates a container, installs Kubernetes inside it, and then spawns the Pods inside that container.
Check more at : https://minikube.sigs.k8s.io/docs/drivers/none/
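Whichever driver you use, it helps to confirm the symptom from inside a throwaway pod first; a debugging sketch (the image name is just an example, and nslookup availability depends on the image):
kubectl run -it --rm curl-test --image=curlimages/curl --restart=Never -- sh
# Inside the pod:
curl -v http://example.com    # plain HTTP, reported as working
curl -v https://example.com   # HTTPS, reported as failing
nslookup example.com          # rules out DNS as the cause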
If you are on Docker you can try:
pkill docker
iptables -t nat -F
ifconfig docker0 down
brctl delbr docker0
docker -d
"It will force docker to recreate the bridge and reinit all the network rules"
reference : https://github.com/moby/moby/issues/866#issuecomment-19218300
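Note that docker -d is the pre-1.8 daemon invocation quoted from that 2013 issue; on a modern systemd-based host, the equivalent step (which also recreates the docker0 bridge and its iptables rules) is usually just:
sudo systemctl restart docker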
Try with
docker run --net=host -it ubuntu
Or else add DNS servers to the config file at /etc/default/docker:
DOCKER_OPTS="--dns 208.67.222.222 --dns 208.67.220.220"
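On current Docker versions, /etc/default/docker is often ignored; the same DNS settings would instead go into /etc/docker/daemon.json (a sketch, keeping the OpenDNS servers from above):
sudo tee /etc/docker/daemon.json <<'EOF'
{
  "dns": ["208.67.222.222", "208.67.220.220"]
}
EOF
sudo systemctl restart docker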
Please check your container port and target port. This is my pod setup:
spec:
  containers:
  - name: hello
    image: "nginx"
    ports:
    - containerPort: 5000
This is my service setup:
ports:
- protocol: TCP
  port: 60000
  targetPort: 5000
If your target port and container port don't match, you will get a curl: (7) connection refused error (source).
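A quick way to check that the two numbers line up (a sketch with hypothetical Pod and Service names):
kubectl get pod mypod -o jsonpath='{.spec.containers[*].ports[*].containerPort}'
kubectl get service myservice -o jsonpath='{.spec.ports[*].targetPort}'
# The two values printed above must match, or connections will be refused.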
Check out this similar Stack Overflow link for more information.

How to connect kubernetes pod server on guest os from host os

I am testing k8s on Ubuntu using VirtualBox.
I have two nodes, one is master, another is worker node.
I deployed a pod containing nginx server container for test.
I can access the webpage deployed by the pod on master node with commands below
kubectl port-forward nginx-server 8080:80
curl localhost:8080
but I want to open this page on my host OS (Windows 10) using the Chrome web browser.
This is how I set port-forwarding on VirtualBox...
To simply answer your question: use the --address argument with the kubectl command:
kubectl port-forward --address 0.0.0.0 nginx-server 8080:80
Here is the explanation:
kubectl port-forward binds to localhost by default.
The port forwarding for your VirtualBox is bound to 10.100.0.104.
0.0.0.0 will bind the port to both localhost and 10.100.0.104.
Changing 0.0.0.0 to 10.100.0.104 will also work for access via 10.100.0.104, but not via localhost.
Also, when exposing a port, you could use a NodePort service: https://kubernetes.io/docs/concepts/services-networking/service/#nodeport
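A hedged sketch of that alternative, reusing the nginx-server pod from the question (this assumes the pod carries a label the service can select on):
kubectl expose pod nginx-server --type=NodePort --port=80
kubectl get service nginx-server   # note the allocated node port, e.g. 80:31234/TCP
# Then browse from the Windows host to http://<vm-ip>:<node-port>/,
# e.g. http://10.100.0.104:31234/ with the VirtualBox address above.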

PgAdmin not working with Postgres container

I am connecting to a PostgreSQL Docker service with the following commands:
docker create --name postgres-demo -e POSTGRES_PASSWORD=Welcome -p 5432:5432 postgres:11.5-alpine
docker start postgres-demo
docker exec -it postgres-demo psql -U postgres
I can successfully connect to the PostgreSQL container service.
Now I want to connect pgAdmin 4 to it so I can make some queries against the existing data in the Postgres database.
However, I keep getting this error.
The IP address that I am using is the one I extracted from docker inspect DOCKERID.
I have restarted the PostgreSQL service on Windows but nothing happens. What am I doing wrong?
Thanks
In fact, what you get with docker inspect (172.17.0.2) is just the IP of the container; to reach the service in the container, you need the host's port bound to the container's port.
I see you already used -p 5432:5432 to do that, so get the IP of the host using ip a s. If you get e.g. 10.10.0.186, use that host IP to reach the service, with 5432 as the port.
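For example (the host address here is hypothetical):
ip a s                                    # suppose this shows 10.10.0.186
# In pgAdmin, set host to 10.10.0.186 and port to 5432, or verify first with:
psql -h 10.10.0.186 -p 5432 -U postgres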
To publish a port for our container, we’ll use the --publish flag (-p for short) on the docker run command. The format of the --publish command is [host port]:[container port]. So if we wanted to expose port 8000 inside the container to port 3000 outside the container, we would pass 3000:8000 to the --publish flag.
A diagram showing the topology of the Docker network, FYI:
You should try to connect to:
host: 0.0.0.0
port: 5432
while your docker container is up and running.

kubectl port-forward to another endpoint

Is there a corresponding command with kubectl to:
ssh -L8888:rds.aws.com:5432 example.com
kubectl has port-forward, where you can also specify --address, but that strictly requires an IP address.
The older answer is valid.
Still, a workaround would be to use something like
https://hub.docker.com/r/marcnuri/port-forward
kubectl run --env REMOTE_HOST=your.service.com --env REMOTE_PORT=8080 --env LOCAL_PORT=8080 --port 8080 --image marcnuri/port-forward test-port-forward
Run it on the cluster and then port-forward to it:
kubectl port-forward test-port-forward 8080:8080
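Mapped onto the original ssh -L example, this would look something like the following (an untested sketch, with the hostname and ports taken from the question):
kubectl run --env REMOTE_HOST=rds.aws.com --env REMOTE_PORT=5432 --env LOCAL_PORT=5432 --port 5432 --image marcnuri/port-forward test-rds-forward
kubectl port-forward test-rds-forward 8888:5432
psql -h localhost -p 8888   # now reaches rds.aws.com:5432 through the cluster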
Short answer: no.
In OpenSSH, local port forwarding is configured using the -L option:
ssh -L 80:intra.example.com:80 gw.example.com
This example opens a connection to the gw.example.com jump server, and forwards any connection to port 80 on the local machine to port 80 on intra.example.com.
By default, anyone (even on different machines) can connect to the specified port on the SSH client machine. However, this can be restricted to programs on the same host by supplying a bind address:
ssh -L 127.0.0.1:80:intra.example.com:80 gw.example.com
You can read the docs here.
The port-forward command in Kubernetes works only within the cluster; you can forward traffic that hits a specified local port to a Deployment, a Service, or a Pod:
kubectl port-forward TYPE/NAME [options] [LOCAL_PORT:]REMOTE_PORT [...[LOCAL_PORT_N:]REMOTE_PORT_N]
The --address flag specifies what to listen on: 0.0.0.0 means all interfaces, localhost is accepted as a name, and you can also set a specific IP on which it should listen.
Documentation is available here, you can also read Use Port Forwarding to Access Applications in a Cluster.
One workaround you can use if you have an SSH server somewhere on the Internet is to SSH to your server from your pod, port-forwarding in reverse:
# Suppose a web console is being served at
# http://my-service-8f6717ab-e.default:8888/
# inside your cluster:
kubectl exec -it my-job-f523b248-7htj6 -- ssh -R8888:my-service-8f6717ab-e.default:8888 user@34.23.1.2
Then you can connect to the service inside Kubernetes from outside of it. If the SSH server is not local to you, you can SSH to it from your local machine with a normal port forward:
me@my-macbook-pro:$ ssh -L8888:localhost:8888 user@34.23.1.2
Then point your browser to http://localhost:8888/

Kubernetes Connectivity

I have a pod running a Java application. This Java application talks to a MySQL server which is on-prem. The MySQL server accepts connections from 192.* IPs.
The pod is running on EKS worker nodes with 192.* IPs. I am able to telnet to MySQL from the worker nodes. When the pod starts, the Java application tries to connect to MySQL with the pod IP (which is some random 172.* IP) and fails with a MySQL connection error.
How can I solve this?
Try to execute a shell inside the pod and connect to the MySQL server from there.
kubectl exec -it -n <namespace-name> <pod-name> -c <container-name> -- COMMAND [args...]
E.g.:
kubectl exec -it -n default mypod -c container1 -- bash
Then check the MySQL connectivity:
#/> mysql --host mysql.dns.name.or.ip --port 3306 --user root --password --verbose
Or start another pod with the usual tools and check MySQL port connectivity:
$ kubectl run busybox --rm -it --image busybox --restart=Never
#/> ping mysql.dns.name.or.ip
#/> telnet mysql.dns.name.or.ip 3306
You should see some connection-related information that helps you to resolve your issue.
My guess is that you just need to add a route to your cluster's pod network on your MySQL host or on its default network router.
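For example, on the MySQL host (a hypothetical sketch; substitute your real pod CIDR and a worker-node IP reachable from that host):
# Assuming the pod network is 172.16.0.0/16 and a worker node
# is reachable from the MySQL host at 192.168.1.10:
sudo ip route add 172.16.0.0/16 via 192.168.1.10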