Could not resolve host mongo, elasticsearch - mongodb

We created a docker swarm for 2 containers, mongo (alias mongo) and ES (alias elasticsearch). Because of the unique use case, our application is not part of the swarm but is still on the default bridge network. We want to reach mongo and ES from our app, but we hit this error:
gradle@f63662b54a29:~/project$ curl -XGET http://mongo:27017
curl: (6) Could not resolve host: mongo
We are using the independent images available from mongo and ES.
These are the things we verified:
sudo docker inspect --format '{{ .HostConfig.NetworkMode }}' container_id shows that all three containers have the default network config.
The containers belong to the same security group and all traffic is open between them.
We experimented with bindIp values:
net:
  port: 27017
  bindIp: 127.0.0.1  # also changed it to 0.0.0.0
We curled google.com and it worked, so it shouldn't be a DNS issue.
Any pointers on what else we could try to get to the bottom of this?
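One thing worth checking: containers on Docker's default bridge network cannot resolve each other by container name; name resolution only works on user-defined networks. A sketch of one possible fix, assuming a stack file and an attachable overlay network named appnet (the network name and image tags are assumptions, not from the question):

```yaml
# Hypothetical stack file: make the overlay network attachable so the
# non-swarm app container can join it and resolve the aliases by name.
version: "3.8"

networks:
  appnet:
    driver: overlay
    attachable: true

services:
  mongo:
    image: mongo
    networks:
      appnet:
        aliases:
          - mongo
  elasticsearch:
    image: elasticsearch:7.17.9
    networks:
      appnet:
        aliases:
          - elasticsearch
```

The app container could then be started with docker run --network appnet ..., after which curl http://mongo:27017 should resolve from inside it.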

Related

Accessing mongodb replicaset from outside kubernetes cluster

I have a kubernetes cluster with a nodejs API and two mongodb replica sets. It's working great, but we are still in development, and having to wait for the deployment pipeline to finish every time we make a change is taking too long.
So we exposed our mongodb primary server via a LoadBalancer so we can access it from our local development environment without needing to push every change to the cluster just to test something.
The problem now is that we want the environments to be identical to make sure everything works (and because we want to use mongodb transactions). So we want to connect to the replica set instead of just the primary mongodb pod.
Creating a LoadBalancer for every single pod in the replica set and listing them all as hosts seems to work, but since we pay per public IP address, it quickly adds up.
So my question is whether there is a way to use only one IP address per replica set, and if so, how to do that.
Edit:
What I tried with Ingress so far:
Connecting Domains to the MongoDB Pods (e.g. mongo1.my-site.com, mongo2.my-site.com)
This does not work because MongoDB needs TCP connections.
In the ingress config you can only bind one port to one service.
Hooking up port 27017 to MongoDB1 Primary
Hooking up port 27018 to MongoDB2 Primary
MongoDB1 connects with mongo --host "mongo1.my-site.com:27017" --authenticationDatabase my_database --username my_user --password my_password
MongoDB1 connects with mongo --host "my_replicaSet/mongo1.my-site.com:27017" --authenticationDatabase my_database --username my_user --password my_password
MongoDB2 connects with mongo --host "mongo2.my-site.com:27018" --authenticationDatabase my_database --username my_user --password my_password
MongoDB2 does not connect with mongo --host "my_replicaSet/mongo2.my-site.com:27018" --authenticationDatabase my_database --username my_user --password my_password
MongoDB2 in replica-set mode says it can't find a way to connect to the "internal cluster Pod address"
Changing the port of the MongoDB2 deployment to 27018
No change; I still can't connect to the replica set, only to the individual Pod
What else I tried:
Attaching a LoadBalancer to each MongoDB Pod individually
This works as it should
Because of the number of pods running and the cost per public IP, this is way too expensive
Attaching only one LoadBalancer to the Primary Pod
Now both MongoDBs behave like MongoDB2 did with the Ingress
Can connect directly to the Primary, but not through the replica set
Edit:
Please note that mongo1.my-site.com and mongo2.my-site.com are handled by the same ingress, so I cannot use the same port twice: I can only define one port per ingress, regardless of the domain. I could also change both URLs to be the same; I just wrote them like that to better differentiate my processes.
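For reference, one way to keep a single public IP while still exposing each member on its own TCP port is the nginx ingress controller's tcp-services ConfigMap, since plain Ingress rules only cover HTTP. A sketch under that assumption (the service names and namespace below are illustrative, not from the question):

```yaml
# Hypothetical tcp-services ConfigMap for ingress-nginx: one LoadBalancer IP,
# one external port per replica-set member, each relayed to a per-pod Service.
apiVersion: v1
kind: ConfigMap
metadata:
  name: tcp-services
  namespace: ingress-nginx
data:
  "27017": "default/mongo-0:27017"
  "27018": "default/mongo-1:27017"
  "27019": "default/mongo-2:27017"
```

Note that for replica-set discovery to work, each member also has to advertise the external host:port in its rs.conf() host field; otherwise clients get redirected to the internal Pod addresses, which matches the "internal cluster Pod address" error described above.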

PgAdmin not working with Postgres container

I am connecting to a postgresql docker service with the following commands:
docker create --name postgres-demo -e POSTGRES_PASSWORD=Welcome -p 5432:5432 postgres:11.5-alpine
docker start postgres-demo
docker exec -it postgres-demo psql -U postgres
I can successfully connect to the postgresql container service.
Now I want to connect with PgAdmin4 to run some queries against the existing data in the postgres database.
However, I keep getting this error:
The IP address that I am using is the one I extracted from docker inspect DOCKERID
I have restarted the postgresql service on windows but nothing happens. What am I doing wrong?
Thanks
In fact, what you get from docker inspect (172.17.0.2) is just the IP of the container; to reach the service in the container, you need to bind a host port to the container's port.
I see you already used -p 5432:5432 to do that, so get the IP of the host using ip a s; if you get e.g. 10.10.0.186, use that host IP to reach the service, with 5432 as the port.
To publish a port for our container, we use the --publish flag (-p for short) on the docker run command. The format is [host port]:[container port]. So if we wanted to expose port 8000 inside the container as port 3000 outside the container, we would pass 3000:8000 to the --publish flag.
(The original answer included a diagram of the Docker network topology here.)
You should try to connect to:
host: 127.0.0.1 (the host machine, not the container's internal IP)
port: 5432
while your docker container is up and running.

kubectl port-forward to another endpoint

Is there a corresponding command with kubectl to:
ssh -L8888:rds.aws.com:5432 example.com
kubectl has port-forward; you can also specify --address, but that strictly requires an IP address.
The older answer is valid.
Still, a workaround would be to use something like
https://hub.docker.com/r/marcnuri/port-forward
kubectl run --env REMOTE_HOST=your.service.com --env REMOTE_PORT=8080 --env LOCAL_PORT=8080 --port 8080 --image marcnuri/port-forward test-port-forward
Run it on the cluster and then port forward to it.
kubectl port-forward test-port-forward 8080:8080
Short answer, No.
In OpenSSH, local port forwarding is configured using the -L option:
ssh -L 80:intra.example.com:80 gw.example.com
This example opens a connection to the gw.example.com jump server, and forwards any connection to port 80 on the local machine to port 80 on intra.example.com.
By default, anyone (even on different machines) can connect to the specified port on the SSH client machine. However, this can be restricted to programs on the same host by supplying a bind address:
ssh -L 127.0.0.1:80:intra.example.com:80 gw.example.com
You can read the docs here.
port-forward in Kubernetes works only within the cluster: you can forward traffic hitting a specified local port to a Deployment, a Service, or a Pod:
kubectl port-forward TYPE/NAME [options] [LOCAL_PORT:]REMOTE_PORT [...[LOCAL_PORT_N:]REMOTE_PORT_N]
The --address flag specifies what to listen on: 0.0.0.0 means all interfaces, localhost is the default, and you can also set a specific IP to listen on.
Documentation is available here; you can also read Use Port Forwarding to Access Applications in a Cluster.
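Another workaround in the same spirit as the marcnuri image above is a tiny relay Pod running socat, then port-forwarding to it (the Pod name is an assumption, and rds.aws.com:5432 is the example endpoint from the question):

```yaml
# Hypothetical relay Pod: socat listens inside the cluster and forwards
# every connection to the external endpoint.
apiVersion: v1
kind: Pod
metadata:
  name: port-forward-relay
spec:
  containers:
  - name: socat
    image: alpine/socat
    # Listen on 5432 inside the Pod; relay each connection to the RDS host.
    args: ["tcp-listen:5432,fork,reuseaddr", "tcp-connect:rds.aws.com:5432"]
    ports:
    - containerPort: 5432
```

Then kubectl port-forward pod/port-forward-relay 8888:5432 gives you roughly the ssh -L behaviour asked about in the question.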
One workaround you can use if you have an SSH server somewhere on the Internet is to SSH to your server from your pod, port-forwarding in reverse:
# Suppose a web console is being served at
# http://my-service-8f6717ab-e.default:8888/
# inside your cluster:
kubectl exec -it my-job-f523b248-7htj6 -- ssh -R8888:my-service-8f6717ab-e.default:8888 user@34.23.1.2
Then you can connect to the service inside Kubernetes from outside of it. If the SSH server is not local to you, you can SSH to it from your local machine with a normal port forward:
me@my-macbook-pro:$ ssh -L8888:localhost:8888 user@34.23.1.2
Then point your browser to http://localhost:8888/

Can't connect to Postgres (installed through Kubernetes Helm) service from external machine, connection refused

I just installed Kubernetes with minikube on my desktop (running Ubuntu 18.10) and was then trying to install Postgresql on the desktop machine using Helm.
After installing helm, I did:
helm install stable/postgresql
When this completed successfully, I forwarded postgres port with:
kubectl port-forward --namespace default svc/wise-beetle-postgresql 5432:5432 &
and then I tested connecting to it locally from my desktop with:
psql --host 127.0.0.1 -U postgres
which succeeds.
I attempted to connect to postgres from my laptop and that fails with:
psql -h $MY_DESKTOP_LAN_IP -p 5432 -U postgres
psql: could not connect to the server: Connection refused
Is the server running on host $MY_DESKTOP_LAN_IP and accepting TCP/IP connections on port 5432?
To ensure that my desktop was indeed listening on 5432, I did:
netstat -natp | grep 5432
(Not all processes could be identified, non-owned process info
will not be shown, you would have to be root to see it all.)
tcp 0 0 127.0.0.1:5432 0.0.0.0:* LISTEN 17993/kubectl
tcp6 0 0 ::1:5432 :::* LISTEN 17993/kubectl
Any help anyone? I'm lost.
You need to configure postgresql.conf to allow external client connections: look for the listen_addresses parameter and set it to '*' (the file is in your postgres data directory). Then add your laptop's IP in pg_hba.conf, which controls client access to your postgresql server; more on this here - https://www.postgresql.org/docs/9.3/auth-pg-hba-conf.html
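A sketch of the two edits described above (the laptop address 192.168.1.50 is an example; file locations depend on your install, and in a Helm chart these usually go through the chart's values rather than direct file edits):

```ini
# postgresql.conf (in the data directory) -- listen on all interfaces
# instead of localhost only:
listen_addresses = '*'

# pg_hba.conf -- allow password auth from the laptop's address:
# TYPE  DATABASE  USER  ADDRESS           METHOD
host    all       all   192.168.1.50/32   md5
```

Both files require a Postgres restart (or reload for pg_hba.conf) to take effect.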
In my case the solution was a little bit of deeper understanding of networking.
For clarity, let's call the machine on which minikube is installed "A".
The IP of this machine, as it is visible from other computers on my WiFi, may be, say, 192.100.200.300.
Since Postgres was being exposed on port 5432, my expectation was that postgres should be visible externally at 192.100.200.300:5432.
But this understanding is wrong, which is what was leading to the unexpected behavior.
The problem was that minikube runs in a VM and gets its own IP address; it doesn't simply use the IP of the machine on which it is running. To find out minikube's IP, run: minikube ip. Let's call this IP $MINIKUBE_IP.
And then I had to setup port forwarding like:
kubectl port-forward --address "192.100.200.300" --namespace default svc/wise-beetle-postgresql 5000:5432 &
So now, if you call a service on 192.100.200.300:5000, it is forwarded to port 5432 on the machine running minikube, and 5432 is received by your postgres instance.
Hope this untangles or clarifies this problem that others might encounter.

How to use an environment variable as bindIp in the mongodb configuration file (mongod.conf)

I'm a newbie in mongodb, and as far as I have seen, we always pass constant IP values like 127.0.0.1 or 172.17.0.5 as bindIp in the mongod.conf file.
This is the bind IP configuration in my mongod.conf:
net:
  port: 27017
  bindIp: 127.0.0.1,172.17.0.5 # Listen to local interface only, comment to listen on all interfaces.
I have defined an environment variable in the /etc/environment file:
DHOST=172.17.0.5
When I use the configuration below in mongod.conf, I cannot connect to the mongo shell:
net:
  port: 27017
  bindIp: 127.0.0.1,$DHOST # Listen to local interface only, comment to listen on all interfaces.
Please help me add an environment variable as the bind IP in the mongodb configuration.
You should pretty much always bind to IPv4 0.0.0.0 or IPv6 ::0 (that is, "all addresses") for things that run inside Docker containers. The docker run -p option has an optional field that can limit which host IP a published port binds to; the container can't be reached directly from off-host without configuration like this, so trying to bind to specific interfaces within the container isn't especially helpful.
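On the direct question: mongod.conf is plain YAML and mongod does not expand environment variables in it, so if you do want $DHOST in the bind list, one option is to render the config file before starting mongod. A minimal sketch under that assumption (the /tmp paths are examples; in a container this would typically run in an entrypoint script):

```shell
# mongod does not expand env vars in its config, so render the file first.
cat > /tmp/mongod.conf.template <<'EOF'
net:
  port: 27017
  bindIp: 127.0.0.1,${DHOST}
EOF

DHOST=172.17.0.5   # in practice this would come from /etc/environment
# Substitute the placeholder with the variable's value.
sed "s/\${DHOST}/${DHOST}/" /tmp/mongod.conf.template > /tmp/mongod.conf
cat /tmp/mongod.conf
# mongod --config /tmp/mongod.conf   # then point mongod at the rendered file
```

envsubst (from gettext) does the same substitution if it is available; sed is used here only because it is always present.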