I am connecting to a PostgreSQL Docker service with the following commands:
docker create --name postgres-demo -e POSTGRES_PASSWORD=Welcome -p 5432:5432 postgres:11.5-alpine
docker start postgres-demo
docker exec -it postgres-demo psql -U postgres
I can successfully connect to the PostgreSQL container service.
Now I want to connect from pgAdmin 4 to run some queries against the existing data in the postgres database.
However, I keep getting this error.
The IP address that I am using is the one I extracted from docker inspect DOCKERID.
I have restarted the PostgreSQL service on Windows but nothing happens. What am I doing wrong?
Thanks
In fact, what you get from docker inspect (172.17.0.2) is just the container's internal IP. To reach the service inside the container, you need to bind a host port to the container's port.
I see you already did that with -p 5432:5432, so get the host's IP with ip a s. If you get e.g. 10.10.0.186, use that host IP to reach the service, with 5432 as the port.
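For example, with that host IP (10.10.0.186 is just the illustrative address from above) and the password set in the question, the pgAdmin connection settings would look roughly like this:
Host name/address: 10.10.0.186
Port: 5432
Username: postgres
Password: Welcome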
To publish a port for our container, we’ll use the --publish flag (-p for short) on the docker run command. The format of the --publish command is [host port]:[container port]. So if we wanted to expose port 8000 inside the container to port 3000 outside the container, we would pass 3000:8000 to the --publish flag.
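A minimal sketch of that example (my-web-image is only a placeholder image name):
docker run -d --publish 3000:8000 my-web-image
# traffic to port 3000 on the host is now forwarded to port 8000 inside the container
curl http://localhost:3000/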
FYI, a diagram of the Docker network topology:
You should try to connect to:
host: 0.0.0.0
port: 5432
while your docker container is up and running.
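As a quick check that the port is actually published while the container runs, you can inspect the binding (example output for the postgres-demo container from the question):
docker port postgres-demo
5432/tcp -> 0.0.0.0:5432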
Summary:
I have a docker container which is running kubectl port-forward, forwarding the port (5432) of a postgres service running as a k8s service to a local port (2223).
In the Dockerfile, I have exposed the relevant port 2223. Then I ran the container, publishing that port (-p 2223:2223).
Now when I am trying to access the postgres through psql -h localhost -p 2223, I am getting the following error:
psql: server closed the connection unexpectedly
This probably means the server terminated abnormally
before or while processing the request.
However, when I docker exec -ti into that container and run the above psql command, I am able to connect to postgres.
Dockerfile CMD:
EXPOSE 2223
CMD ["bash", "-c", "kubectl -n namespace_test port-forward service/postgres-11-2 2223:5432"]
Docker Run command:
docker run -it --name=k8s-conn-12 -p 2223:2223 my_image_name:latest
Output of the docker run command:
Forwarding from 127.0.0.1:2223 -> 5432
So the port forwarding is successful, and I am able to connect to the postgres instance from inside the docker container. What I am not able to do is connect from outside the container using the exposed and published port.
You are missing the following parameter in your $ kubectl port-forward ... command:
--address 0.0.0.0
I've reproduced the setup that you've tried to achieve and this was the reason the connection wasn't possible. I've included more explanation below.
Explanation
$ kubectl port-forward --help
Listen on port 8888 on all addresses, forwarding to 5000 in the pod
kubectl port-forward --address 0.0.0.0 pod/mypod 8888:5000
Options:
--address=[localhost]: Addresses to listen on (comma separated). Only accepts IP addresses or
localhost as a value. When localhost is supplied, kubectl will try to bind on both 127.0.0.1 and ::1
and will fail if neither of these addresses are available to bind.
By default, $ kubectl port-forward binds to localhost, i.e. 127.0.0.1. In this setup that localhost is internal to the container and is not accessible from your host, even with the --publish (-p) parameter.
To allow connections that do not originate from localhost, you need to pass the earlier mentioned --address 0.0.0.0. This makes kubectl listen on all IP addresses and respond to the traffic accordingly.
Your Dockerfile CMD should look similar to:
CMD ["bash", "-c", "kubectl -n namespace_test port-forward --address 0.0.0.0 service/postgres-11-2 2223:5432"]
Additional reference:
Kubernetes.io: Docs: Reference: Generated: Kubectl commands
Is there a corresponding command with kubectl to:
ssh -L8888:rds.aws.com:5432 example.com
kubectl has port-forward; you can also specify --address, but that strictly requires an IP address.
The older answer is valid.
Still, a workaround would be to use something like
https://hub.docker.com/r/marcnuri/port-forward
kubectl run --env REMOTE_HOST=your.service.com --env REMOTE_PORT=8080 --env LOCAL_PORT=8080 --port 8080 --image marcnuri/port-forward test-port-forward
Run it on the cluster and then port forward to it.
kubectl port-forward test-port-forward 8080:8080
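After that, traffic sent to localhost:8080 should reach your.service.com:8080 (the placeholder host used in the command above), e.g.:
curl http://localhost:8080/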
Short answer, No.
In OpenSSH, local port forwarding is configured using the -L option:
ssh -L 80:intra.example.com:80 gw.example.com
This example opens a connection to the gw.example.com jump server, and forwards any connection to port 80 on the local machine to port 80 on intra.example.com.
By default, anyone (even on different machines) can connect to the specified port on the SSH client machine. However, this can be restricted to programs on the same host by supplying a bind address:
ssh -L 127.0.0.1:80:intra.example.com:80 gw.example.com
You can read the docs here.
port-forward in Kubernetes works only within the cluster; you can forward traffic that hits the specified local port to a Deployment, Service, or Pod:
kubectl port-forward TYPE/NAME [options] [LOCAL_PORT:]REMOTE_PORT [...[LOCAL_PORT_N:]REMOTE_PORT_N]
The --address flag specifies what to listen on: 0.0.0.0 means all addresses, localhost is accepted as a name, and you can also set a specific IP on which it should listen.
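For instance, a minimal sketch of forwarding a Service while listening on all interfaces (service/my-postgres is only a placeholder name):
kubectl port-forward --address 0.0.0.0 service/my-postgres 5432:5432
# any machine that can reach this host can now connect on port 5432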
Documentation is available here, you can also read Use Port Forwarding to Access Applications in a Cluster.
One workaround you can use if you have an SSH server somewhere on the Internet is to SSH to your server from your pod, port-forwarding in reverse:
# Suppose a web console is being served at
# http://my-service-8f6717ab-e.default:8888/
# inside your cluster:
kubectl exec -it my-job-f523b248-7htj6 -- ssh -R8888:my-service-8f6717ab-e.default:8888 user@34.23.1.2
Then you can connect to the service inside Kubernetes from outside of it. If the SSH server is not local to you, you can SSH to it from your local machine with a normal port forward:
me@my-macbook-pro:$ ssh -L8888:localhost:8888 user@34.23.1.2
Then point your browser to http://localhost:8888/
I just installed Kubernetes with minikube on my desktop (running Ubuntu 18.10) and was then trying to install PostgreSQL on the desktop machine using Helm.
After installing helm, I did:
helm install stable/postgresql
When this completed successfully, I forwarded postgres port with:
kubectl port-forward --namespace default svc/wise-beetle-postgresql 5432:5432 &
and then I tested connecting to it locally from my desktop with:
psql --host 127.0.0.1 -U postgres
which succeeds.
I attempted to connect to postgres from my laptop and that fails with:
psql -h $MY_DESKTOP_LAN_IP -p 5432 -U postgres
psql: could not connect to the server: Connection refused
Is the server running on host $MY_DESKTOP_LAN_IP and accepting TCP/IP connections on port 5432?
To ensure that my desktop was indeed listening on 5432, I did:
netstat -natp | grep 5432
(Not all processes could be identified, non-owned process info
will not be shown, you would have to be root to see it all.)
tcp 0 0 127.0.0.1:5432 0.0.0.0:* LISTEN 17993/kubectl
tcp6 0 0 ::1:5432 :::* LISTEN 17993/kubectl
Any help anyone? I'm lost.
You need to configure postgresql.conf to allow external client connections: look for the listen_addresses parameter and set it to *. The file is under your postgres data directory. Then add your laptop's IP to pg_hba.conf, which controls client access to your PostgreSQL server. More on this here: https://www.postgresql.org/docs/9.3/auth-pg-hba-conf.html
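A minimal sketch of the two changes (the laptop IP 192.168.1.50 is only an example; use your own, and reload or restart PostgreSQL afterwards):
# postgresql.conf
listen_addresses = '*'
# pg_hba.conf
host    all    all    192.168.1.50/32    md5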
In my case the solution was a little bit of deeper understanding of networking.
For clarity, let's call the machine on which minikube is installed "A".
The IP of this machine, as visible from other computers on my WiFi, may be, say: 192.100.200.300.
Since Postgres was being exposed on port 5432, my expectation was that Postgres should be visible externally at 192.100.200.300:5432.
But this understanding is wrong, which is what was leading to the unexpected behavior.
The problem was that minikube runs in a VM and gets its own IP address; it doesn't simply use the IP of the machine it is running on. To find out minikube's IP, run: minikube ip. Let's call this IP $MINIKUBE_IP.
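For example (the address is illustrative only; it varies by minikube driver):
minikube ip
192.168.99.100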
And then I had to setup port forwarding like:
kubectl port-forward --address "192.100.200.300" --namespace default svc/wise-beetle-postgresql 5000:5432 &
So now, if you call a service at 192.100.200.300:5000, the traffic is forwarded to port 5432 of the postgres service running inside minikube, where it is received by your postgres instance.
Hope this untangles or clarifies this problem that others might encounter.
I have a pod running a Java application. This Java application talks to a MySQL server which is on-prem, and that MySQL only accepts connections from 192.* IPs.
The pod is running on EKS worker nodes whose IPs are in the 192.* range. I am able to telnet to MySQL from the worker nodes. When the pod starts, the Java application tries to connect to MySQL with the pod IP (which is some random 172.* address) and fails with a MySQL connection error.
How can I solve this?
Try to execute a shell inside the pod and connect to the MySQL server from there.
kubectl exec -it -n <namespace-name> <pod-name> -c <container-name> -- COMMAND [args...]
E.g.:
kubectl exec -it -n default mypod -c container1 -- bash
Then check the MySQL connectivity:
#/> mysql --host mysql.dns.name.or.ip --port 3306 --user root --password --verbose
Or start another pod with usual tools and check MySQL port connectivity:
$ kubectl run busybox --rm -it --image busybox --restart=Never
#/> ping mysql.dns.name.or.ip
#/> telnet mysql.dns.name.or.ip 3306
You should see some connection-related information that helps you to resolve your issue.
My guess is that you just need to add a route to your cluster's pod network on your MySQL host or on its default network router.
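A sketch of what that route might look like on the MySQL host, assuming a pod CIDR of 172.16.0.0/16 and a worker-node IP of 192.168.1.10 (both placeholders; substitute your own values):
# route pod traffic via a worker node that can reach the pod network
sudo ip route add 172.16.0.0/16 via 192.168.1.10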
I have a Docker image that implements a TCP node via Twisted, and I would like to establish communication to and from the host.
On the host I start netcat:
nc -l -p 6789
If I run the container with
docker run -it -p 6789:6789 image_name
I get
Bind for 0.0.0.0:6789 failed: port is already allocated
If I try the opposite order, i.e. docker run first and then starting netcat on the host, I get
Error: Couldn't setup listening socket (err=-3)
Is there a way to bind an allocated port from the host to the container?
The problem is that you are using the same port on the host both to run nc -l -p 6789 and to map the container's port (-p 6789:6789). Try changing one of them.
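For example, keeping nc on 6789 and publishing the container's port to a different host port (6790 is just an arbitrary free port):
nc -l -p 6789
docker run -it -p 6790:6789 image_name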