I have a Docker Swarm with 3 worker nodes (a mongo container on each one). I configured a MongoDB replica set ("rs0").
When I scaled the mongo service (e.g. docker service scale test_mongo1=2), I saw the new mongo container, but it is not part of the replica set.
How can I configure the mongo service or the replica set so that a new mongo replica is added to the cluster automatically?
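MongoDB won't join new members to a replica set on its own, so the usual approach is a small helper that calls rs.add() against the current primary whenever a new task comes up. A minimal sketch, assuming the Swarm service name test_mongo1 resolves to an existing member and NEW_HOST is the new container's hostname (both placeholders):

# Hypothetical helper: run whenever Swarm starts a new mongo task.
# NEW_HOST is the new container's hostname, resolvable by the other members.
NEW_HOST="$1"
# Connecting with ?replicaSet=rs0 lets the shell find the current primary,
# which is where rs.add() must be executed.
mongosh "mongodb://test_mongo1:27017/?replicaSet=rs0" \
  --eval "rs.add('${NEW_HOST}:27017')"

In practice this is wrapped in a sidecar or the container entrypoint so it runs on every scale-up.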
I have a MongoDB cluster that I connect to by port-forwarding its ClusterIP service on my local machine:
kubectl port-forward svc/mongodb-svc 27017:27017
I specify readPreference in the DB client app, but the service always connects me to a random node regardless of my read preference. Since the primary node may change in the future, I don't want to create another service per node either. So my current workaround is connecting to the pod directly:
kubectl port-forward pod/mongodb-0 27017:27017
But this is not an ideal solution for me. Does anyone know how to get this working with a service?
Thank you!
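For what it's worth, a ClusterIP service hides the replica set: the driver sees whichever single pod the service picks, so readPreference has nothing to route against. For clients running inside the cluster, a headless service gives each pod a stable DNS name and the driver can be handed the full seed list. A sketch, assuming a headless service named mongodb-svc in the default namespace and replica set rs0 (all placeholders):

mongodb://mongodb-0.mongodb-svc.default.svc.cluster.local:27017,mongodb-1.mongodb-svc.default.svc.cluster.local:27017,mongodb-2.mongodb-svc.default.svc.cluster.local:27017/?replicaSet=rs0&readPreference=secondaryPreferred

From a local machine this only works if those hostnames resolve (e.g. via /etc/hosts entries plus one port-forward per pod), which is why the per-pod forward remains the common workaround.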
I'm trying to connect an already existing GKE cluster to a MongoDB cluster in MongoDB Atlas.
I followed this tutorial, with the only difference that I didn't create the GKE cluster after creating the peering but the other way around: I created the GKE cluster, and then, when I saw I needed the peering, I peered the VPC in which the GKE cluster was deployed to Atlas.
Then I tried the following:
kubectl run -i --tty mongo-test --image=mongo --restart=Never -- sh
and then mongosh <<myconnectionurl>>, but it fails with a timeout.
I fear the peering needs to be done BEFORE creating the GKE cluster, which would be extremely undesirable for me. Note that VPC-native traffic routing is enabled in said GKE cluster.
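The order shouldn't matter by itself; peering can be added to an existing VPC-native cluster. Before recreating anything, it may be worth checking that the peering is active and that the cluster really is VPC-native. A sketch with placeholder names (my-vpc, my-cluster, us-central1-a):

# Check that the peering created for Atlas is ACTIVE on the cluster's VPC:
gcloud compute networks peerings list --network=my-vpc
# Confirm the cluster is VPC-native (alias IPs), which peered traffic requires:
gcloud container clusters describe my-cluster --zone=us-central1-a \
  --format="value(ipAllocationPolicy.useIpAliases)"

Also remember that the cluster's CIDR ranges still have to be added to the Atlas IP access list even with peering in place.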
I have several replicas of MongoDB pods in my cluster, and a bastion server through which I can connect to each MongoDB pod running in a private subnet. I can do that by putting the MongoDB pod IP into the connection string:
mongodb://username:password@xx.xxx.xxx.112:27017/
But I want to connect to the database using a pod name / service name instead of a dynamic pod IP, which changes every time I recreate the pod. Using the pod's default DNS name with the service name doesn't work in this case (e.g. mongodb-0.mongo.default.svc.cluster.local).
Any idea how to connect to the MongoDB pods with a mongo client without using their IPs?
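Names like mongodb-0.mongo.default.svc.cluster.local only resolve through the cluster's internal DNS, which is why they fail from the bastion. One workaround, assuming kubectl is configured on the bastion and the service is named mongo (both assumptions):

# On the bastion: forward a stable local port to the service (it proxies to one pod):
kubectl port-forward svc/mongo 27017:27017 --address 0.0.0.0
# From your machine, connect through the bastion instead of a pod IP:
mongosh "mongodb://username:password@<bastion-host>:27017/"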
We are running Redis clusters inside 2 different Kubernetes clusters.
We would like to migrate data from the old Redis cluster to the new Redis cluster without any downtime.
Both Redis clusters are exposed through a Service (ClusterIP); pods from one k8s cluster cannot communicate with pods running in the other k8s cluster, but both can communicate over the service IPs.
Is there any tool or strategy we can use to migrate the Redis cluster from the old k8s cluster to the new one without downtime, or with minimal downtime?
Any help will be appreciated.
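If the Redis instances are actually running standalone/replicated (not in Redis Cluster mode), one minimal-downtime strategy is live replication over the reachable service IPs: make the new Redis a replica of the old one, wait for the sync, then promote it. A sketch; new-redis and 10.0.0.5 (the old cluster's service IP) are placeholders:

# Point the new instance at the old one; it does a full sync, then streams changes:
redis-cli -h new-redis REPLICAOF 10.0.0.5 6379
# Wait until master_link_status:up and the replication offsets match:
redis-cli -h new-redis INFO replication
# Switch clients over, then promote the new instance to a standalone master:
redis-cli -h new-redis REPLICAOF NO ONE

For true Redis Cluster mode this approach doesn't apply as-is; dedicated migration tools such as RedisShake are built for that case.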
Is it possible to add existing GCP instances to a Kubernetes cluster?
The inputs I can see while creating the cluster in graphical mode are:
Cluster Name
Location
Zone
Cluster Version
Machine type
Size
In command-line mode:
gcloud container clusters create cluster-name --num-nodes=4
I have 10 running instances.
I need to create the Kubernetes cluster with these already existing running instances.
On your instance, run the following:
kubelet --api_servers=http://<API_SERVER_IP>:8080 --v=2 --enable_server --allow-privileged
kube-proxy --master=http://<API_SERVER_IP>:8080 --v=2
This will connect your slave node to your existing cluster. It's actually surprisingly simple.
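Note that these standalone kubelet/kube-proxy flags date from very old Kubernetes releases and no longer exist; on current versions the equivalent step for joining a node to an existing cluster is kubeadm join (the token and hash below are placeholders):

# Values come from 'kubeadm token create --print-join-command' on a control-plane node:
kubeadm join <API_SERVER_IP>:6443 --token <token> \
  --discovery-token-ca-cert-hash sha256:<hash>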