I have a Kubernetes cluster with a Node.js API and two MongoDB replica sets. It's working great, but we are still in development, and having to wait for the deployment pipeline to finish every time we make a change is just taking too long.
So we exposed our MongoDB primary server via a LoadBalancer so we can access it from our local development environment without having to push every change to the cluster just to test something.
The problem now is that we want the environments to be identical to make sure everything works (and because we want to use MongoDB transactions). So we want to connect to the replica set instead of just the primary MongoDB pod.
Creating a LoadBalancer for every single pod in the replica set and listing them all as hosts seems to work, but since we have to pay per public IP address, the cost quickly adds up.
So my question is: is there a way to use only one IP address per replica set, and if so, how?
Edit:
What I tried with Ingress so far:
Connecting Domains to the MongoDB Pods (e.g. mongo1.my-site.com, mongo2.my-site.com)
This does not work because MongoDB needs raw TCP connections
In the ingress config you can only bind one port to one service
Hooking up port 27017 to MongoDB1 Primary
Hooking up port 27018 to MongoDB2 Primary
MongoDB1 connects with mongo --host "mongo1.my-site.com:27017" --authenticationDatabase my_database --username my_user --password my_password
MongoDB1 connects with mongo --host "my_replicaSet/mongo1.my-site.com:27017" --authenticationDatabase my_database --username my_user --password my_password
MongoDB2 connects with mongo --host "mongo2.my-site.com:27018" --authenticationDatabase my_database --username my_user --password my_password
MongoDB2 does not connect with mongo --host "my_replicaSet/mongo2.my-site.com:27018" --authenticationDatabase my_database --username my_user --password my_password
In replica set mode, MongoDB2 says it can't find a way to connect to the internal cluster pod address
Changing the port of the MongoDB2 deployment to 27018
No change; I still cannot connect to the replica set, only to the individual pod
What else I tried:
Attaching a LoadBalancer to each MongoDB Pod individually
This works as it should
Because of the number of pods running and the cost per public IP, this is way too expensive
Attaching only one LoadBalancer to the Primary Pod
Now both MongoDBs behave like MongoDB2 did with Ingress
I can connect directly to the primary, but not when connecting to the replica set
Edit:
Please note that mongo1.my-site.com and mongo2.my-site.com are handled by the same ingress, so I cannot use the same port twice; I can only define one port per ingress, regardless of the domain. I could also make both URLs the same. I just wrote them like that to better differentiate the two setups.
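One approach that should keep you at a single public IP (an assumption on my part: that your ingress is the NGINX ingress controller, which can proxy raw TCP through its tcp-services ConfigMap) is to map one external port per replica set member to a per-pod ClusterIP service, all behind the controller's single LoadBalancer IP. A minimal sketch, with service names made up for illustration (the controller's Service must also expose these ports):

apiVersion: v1
kind: ConfigMap
metadata:
  name: tcp-services
  namespace: ingress-nginx
data:
  # external port: namespace/service:port, one entry per replica set member
  "27017": "default/mongo1-pod0:27017"
  "27018": "default/mongo1-pod1:27017"
  "27019": "default/mongo1-pod2:27017"

The client would then list all three ports on the one domain, e.g. mongo --host "my_replicaSet/mongo.my-site.com:27017,mongo.my-site.com:27018,mongo.my-site.com:27019". Note that this alone won't fix the "internal cluster Pod address" error: MongoDB drivers re-discover the topology from the hosts advertised in the replica set config, so the members would also have to advertise the externally reachable host:port pairs in rs.conf() for replica set connections to work from outside.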
Related
I have a MongoDB cluster that I connect to by port-forwarding the ClusterIP service to my local machine.
kubectl port-forward svc/mongodb-svc 27017:27017
I specify readPreference in the DB client app, but the service always connects me to a random node regardless of my read preference. Since the primary node may change in the future, I don't want to create another service per node either. So my current workaround is connecting to the pod directly:
kubectl port-forward pod/mongodb-0 27017:27017
But this is not an ideal solution to me. Does anyone know anything about getting this to work with a service?
Thank you!
We created a Docker swarm for two containers: Mongo (alias mongo) and ES (alias elasticsearch). Because of our unique use case, our application is not part of the swarm but is still on the bridge/default network. We want to reach Mongo and ES from our app, but we get this error:
gradle#f63662b54a29:~/project$ curl -XGET http://mongo:27017
curl: (6) Could not resolve host: mongo
We are using the official images published for Mongo and ES.
These are the things we verified:
sudo docker inspect --format '{{ .HostConfig.NetworkMode }}' container_id
All three containers have the default network config.
The containers belong to the same security group, and all traffic from the group itself is allowed.
We experimented with bindIp values
net:
  port: 27017
  bindIp: 127.0.0.1  # also changed it to 0.0.0.0
We curled google.com and it worked, so it shouldn't be a DNS issue.
Any pointers on what else we could try to get to the bottom of this?
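One thing worth checking, as a sketch (the network and container names below are placeholders): a container on the default bridge network cannot resolve swarm service names, since Docker's DNS-based service discovery only works on user-defined networks, and for swarm services that means an attachable overlay network. Assuming the services can be moved onto such a network, the standalone app container can then be attached to it:

docker network create --driver overlay --attachable app-net
docker service update --network-add app-net mongo
docker service update --network-add app-net elasticsearch
docker network connect app-net your-app-container

After that, curl http://mongo:27017 from the app container should resolve the mongo alias.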
I am connecting to a PostgreSQL Docker service with the following commands:
docker create --name postgres-demo -e POSTGRES_PASSWORD=Welcome -p 5432:5432 postgres:11.5-alpine
docker start postgres-demo
docker exec -it postgres-demo psql -U postgres
I can successfully connect to the PostgreSQL container service.
Now I want to connect from pgAdmin 4 to run some queries against the existing data in the Postgres database.
However, I keep getting an error.
The IP address that I am using is the one I extracted from docker inspect DOCKERID.
I have restarted the PostgreSQL service on Windows, but nothing happens. What am I doing wrong?
Thanks
In fact, what you get with docker inspect (172.17.0.2) is just the IP of the container; to reach a service in the container, you need to bind a host port to the container port.
I see you already used -p 5432:5432 to do that, so get the IP of the host using ip a s. If you get, e.g., 10.10.0.186, use that host IP to reach the service, with 5432 as the port.
To publish a port for our container, we’ll use the --publish flag (-p for short) on the docker run command. The format of the --publish command is [host port]:[container port]. So if we wanted to expose port 8000 inside the container to port 3000 outside the container, we would pass 3000:8000 to the --publish flag.
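For instance (the image name is only illustrative):

docker run --publish 3000:8000 my-image

A client on the host would then reach the container's port 8000 at localhost:3000.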
A diagram of the Docker network topology may also help you visualize this, FYI.
You should try to connect to:
host: localhost (0.0.0.0 also resolves to the local machine in many clients)
port: 5432
while your Docker container is up and running.
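For example, from the host, with the postgres user from the question (a quick check from the command line, not pgAdmin-specific):

psql -h localhost -p 5432 -U postgres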
I just installed Kubernetes with minikube on my desktop (running Ubuntu 18.10) and was then trying to install PostgreSQL on the desktop machine using Helm.
After installing helm, I did:
helm install stable/postgresql
When this completed successfully, I forwarded postgres port with:
kubectl port-forward --namespace default svc/wise-beetle-postgresql 5432:5432 &
and then I tested connecting to it locally from my desktop with:
psql --host 127.0.0.1 -U postgres
which succeeds.
I attempted to connect to postgres from my laptop and that fails with:
psql -h $MY_DESKTOP_LAN_IP -p 5432 -U postgres
psql: could not connect to the server: Connection refused
Is the server running on host $MY_DESKTOP_LAN_IP and accepting TCP/IP connections on port 5432?
To ensure that my desktop was indeed listening on 5432, I did:
netstat -natp | grep 5432
(Not all processes could be identified, non-owned process info
will not be shown, you would have to be root to see it all.)
tcp 0 0 127.0.0.1:5432 0.0.0.0:* LISTEN 17993/kubectl
tcp6 0 0 ::1:5432 :::* LISTEN 17993/kubectl
Any help anyone? I'm lost.
You need to configure postgresql.conf to allow external client connections: look for the listen_addresses parameter and set it to '*' (the file is under your Postgres data directory). Then add your laptop's IP to pg_hba.conf, which controls client access to your PostgreSQL server; more on this here: https://www.postgresql.org/docs/9.3/auth-pg-hba-conf.html
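A minimal sketch of the two changes (the laptop IP 192.168.1.50 below is a placeholder; substitute your own):

# postgresql.conf
listen_addresses = '*'

# pg_hba.conf: allow the laptop to connect to all databases as any user
host    all    all    192.168.1.50/32    md5

Restart (or reload) PostgreSQL afterwards for the changes to take effect.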
In my case the solution was a little bit of deeper understanding of networking.
For clarity, let's call the machine on which minikube is installed "A".
The IP of this machine, as visible from other computers on my Wi-Fi, may be, say, 192.100.200.300.
Since Postgres was being exposed on port 5432, my expectation was that Postgres should be visible externally at 192.100.200.300:5432.
But this understanding is wrong, which is what was leading to the unexpected behavior.
The problem was that minikube runs in a VM, and that VM gets its own IP address; it doesn't simply use the IP of the machine on which it is running.
To find out minikube's IP, run: minikube ip. Let's call this IP $MINIKUBE_IP.
And then I had to setup port forwarding like:
kubectl port-forward --address "192.100.200.300" --namespace default svc/wise-beetle-postgresql 5000:5432 &
So now, if you call the service at 192.100.200.300:5000, the connection is forwarded to port 5432 on the machine running minikube, where it is received by your Postgres instance.
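From the laptop, the connection would then look like this (using the example IP from above; note the forwarded port 5000, not 5432):

psql -h 192.100.200.300 -p 5000 -U postgres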
Hope this untangles or clarifies this problem that others might encounter.
I've set up and deployed a Kubernetes StatefulSet containing three CockroachDB pods, as per the docs. My ultimate objective is to query the database without requiring the use of kubectl. My intermediate objective is to query the database without actually shelling into the database pod.
I forwarded a port from a pod to my local machine, and attempted to connect:
$ kubectl port-forward cockroachdb-0 26257
Forwarding from 127.0.0.1:26257 -> 26257
Forwarding from [::1]:26257 -> 26257
# later, after attempting to connect:
Handling connection for 26257
E0607 16:32:20.047098 80112 portforward.go:329] an error occurred forwarding 26257 -> 26257: error forwarding port 26257 to pod cockroachdb-0_mc-red, uid : exit status 1: 2017/06/07 04:32:19 socat[40115] E connect(5, AF=2 127.0.0.1:26257, 16): Connection refused
$ cockroach node ls --insecure --host localhost --port 26257
Error: unable to connect or connection lost.
Please check the address and credentials such as certificates (if attempting to
communicate with a secure cluster).
rpc error: code = Internal desc = transport is closing
Failed running "node"
Anyone manage to accomplish this?
From inside the Kubernetes cluster, you can talk to the database by connecting to the cockroachdb-public DNS name. In the docs, that corresponds to the example command:
kubectl run cockroachdb -it --image=cockroachdb/cockroach --rm --restart=Never -- sql --insecure --host=cockroachdb-public
While that command uses the CockroachDB image, any Postgres client driver you use should be able to connect to cockroachdb-public when running within the Kubernetes cluster.
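For example, since CockroachDB speaks the Postgres wire protocol on port 26257, a throwaway psql pod can connect the same way (a sketch for an insecure cluster; adjust the user and database as needed):

kubectl run psql -it --image=postgres --rm --restart=Never -- psql -h cockroachdb-public -p 26257 -U root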
Connecting to the database from outside of the Kubernetes cluster will require exposing the cockroachdb-public service. The details will depend somewhat on how your Kubernetes cluster was deployed, so I'd recommend checking out their docs on that:
https://kubernetes.io/docs/concepts/services-networking/connect-applications-service/#exposing-the-service
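As a rough sketch, on a cloud provider that provisions external load balancers, one option is to create a second, externally reachable service from the existing one (the names follow the standard CockroachDB manifests, but treat this as an assumption about your setup):

kubectl expose service cockroachdb-public --type=LoadBalancer --name=cockroachdb-external --port=26257 --target-port=26257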
And in case you're curious, the reason forwarding port 26257 isn't working for you is that port-forwarding to a pod only works if the process in the pod is listening on localhost, but the CockroachDB process in the StatefulSet configuration is set up to listen on the pod's hostname (as configured via the --host flag).