How can I fix MongoError: no mongos proxy available on GKE - mongodb

I am trying to deploy an Express API on GKE, with a Mongo StatefulSet.
googlecloud_ssd.yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1beta1
metadata:
  name: fast
provisioner: kubernetes.io/gce-pd
parameters:
  type: pd-ssd
mongo-statefulset.yaml
apiVersion: v1
kind: Service
metadata:
  name: mongo
  labels:
    name: mongo
spec:
  ports:
  - port: 27017
    targetPort: 27017
  clusterIP: None
  selector:
    role: mongo
---
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  name: mongo
spec:
  serviceName: "mongo"
  replicas: 2
  template:
    metadata:
      labels:
        role: mongo
        environment: test
    spec:
      terminationGracePeriodSeconds: 10
      containers:
      - name: mongo
        image: mongo
        command:
        - mongod
        - "--replSet"
        - rs0
        - "--smallfiles"
        - "--noprealloc"
        ports:
        - containerPort: 27017
        volumeMounts:
        - name: mongo-persistent-storage
          mountPath: /data/db
      - name: mongo-sidecar
        image: cvallance/mongo-k8s-sidecar
        env:
        - name: MONGO_SIDECAR_POD_LABELS
          value: "role=mongo,environment=test"
  volumeClaimTemplates:
  - metadata:
      name: mongo-persistent-storage
      annotations:
        volume.beta.kubernetes.io/storage-class: "fast"
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 10Gi
I deployed my Express app and it works perfectly. I then deployed Mongo using the above YAML config.
Having set the connection string in Express as:
"mongodb://mongo-0.mongo,mongo-1.mongo:27017/"
I can see the updated pod(s) not starting.
Looking at the logs for that container, I see:
{
  insertId: "a9tu83g211w2a6"
  labels: {…}
  logName: "projects/<my-project-id>/logs/express"
  receiveTimestamp: "2019-06-03T14:19:14.142238836Z"
  resource: {…}
  severity: "ERROR"
  textPayload: "[ ERROR ] MongoError: no mongos proxy available"
  timestamp: "2019-06-03T14:18:56.132989616Z"
}
I am unsure how to debug / fix MongoError: no mongos proxy available.
Edit
So I scaled down my replicas to 1 on each and it's now working.
I'm confused as to why this won't work with more than 1 replica.

The connection to your MongoDB database doesn't work for two reasons:
You cannot connect to a highly available MongoDB deployment running inside your Kubernetes cluster using Pod DNS names. These unique Pod names, mongo-0.mongo and mongo-1.mongo, with corresponding FQDNs mongo-0.mongo.default.svc.cluster.local and mongo-1.mongo.default.svc.cluster.local, can only be reached from within the K8s cluster. You have an Express web application that runs on the client side (web browser) and needs to connect to your MongoDB from outside the cluster.
Connection string: you should connect to the primary node via a Kubernetes Service name, which abstracts access to the Pods behind the replica set.
Solution:
Create a separate Kubernetes Service of type LoadBalancer or NodePort for your replica set's primary, and use <ExternalIP_of_LoadBalancer>:27017 in your connection string.
I would encourage you to take a look at the official MongoDB Helm chart to see what kind of manifest files are required for your case.
Hint: use '--set service.type=LoadBalancer' with this Helm chart.
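As an illustration only, here is a minimal sketch of such a Service. The name mongo-external is a placeholder, and the selector reuses the role=mongo label from the StatefulSet above, which means it targets all mongo Pods rather than only the primary; the Helm chart route handles primary routing more cleanly.

apiVersion: v1
kind: Service
metadata:
  name: mongo-external
spec:
  type: LoadBalancer
  ports:
  - port: 27017
    targetPort: 27017
  selector:
    role: mongo

Once GKE provisions the load balancer, kubectl get svc mongo-external shows the EXTERNAL-IP to use in the connection string.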

Related

Error when creating mongodb user in kubernetes environment

I am trying to create a mongodb user along with a stateful set. Here is my .yaml file:
apiVersion: v1
kind: Service
metadata:
  name: mongo
  labels:
    name: mongo
spec:
  type: NodePort
  ports:
  - port: 27017
    targetPort: 27017
  selector:
    name: mongo
---
apiVersion: v1
kind: Secret
metadata:
  name: admin-secret
  # corresponds to user.spec.passwordSecretKeyRef.name
type: Opaque
stringData:
  password: pass1
  # corresponds to user.spec.passwordSecretKeyRef.key
---
apiVersion: mongodb.com/v1
kind: MongoDBUser
metadata:
  name: admin
spec:
  passwordSecretKeyRef:
    name: admin-secret
    # Match to metadata.name of the User Secret
    key: password
  username: admin
  db: "admin"
  mongodbResourceRef:
    name: mongo
    # Match to MongoDB resource using authentication
  roles:
  - db: "admin"
    name: "clusterAdmin"
  - db: "admin"
    name: "userAdminAnyDatabase"
  - db: "admin"
    name: "readWrite"
  - db: "admin"
    name: "userAdminAnyDatabase"
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mongo
spec:
  serviceName: "mongo"
  replicas: 2
  selector:
    matchLabels:
      name: mongo
  template:
    metadata:
      labels:
        name: mongo
    spec:
      terminationGracePeriodSeconds: 10
      containers:
      # - envFrom:
      #   - secretRef:
      #       name: mongo-secret
      - image: mongo
        name: mongodb
        command:
        - mongod
        - "--replSet"
        - rs0
        - "--bind_ip"
        - 0.0.0.0
        ports:
        - containerPort: 27017
Earlier I used the secret to create a mongo user:
...
spec:
  containers:
  - envFrom:
    - secretRef:
        name: mongo-secret
...
but once I added spec.template.spec.containers.command to the StatefulSet, this approach no longer worked. Then I added the Secret and MongoDBUser shown above, but I started getting this error:
unable to recognize "mongo.yaml": no matches for kind "MongoDBUser" in version "mongodb.com/v1"
How can I automatically create a MongoDB user when creating a StatefulSet with a few replicas in Kubernetes?
One of the resources in your YAML file refers to a kind that doesn't exist in your cluster.
You can check this by running the command kubectl api-resources | grep mongo -i
Specifically, it's the resource of kind MongoDBUser. This API resource type is part of the MongoDB Enterprise Kubernetes Operator.
You haven't indicated whether you are using this in your cluster, but the error you're getting implies the CRDs for the operator are not installed and so cannot be used.
The MongoDB Enterprise Kubernetes Operator is a paid enterprise package. If you don't have access to this enterprise package from MongoDB, you can also install the community edition yourself, either by setting up all the resources yourself or by using Helm to install it as a package. Using Helm makes managing the resources significantly easier, especially with regard to configuration, upgrades, re-installation or uninstalling. The existing Helm charts are open source and also allow for running MongoDB as a standalone instance, a replica set or a sharded cluster.
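If you go the community route, here is a hedged sketch of the Helm-based install; the chart repository URL and chart name are taken from the community operator's documentation and may change over time:

# add the MongoDB Helm repo and install the community operator
helm repo add mongodb https://mongodb.github.io/helm-charts
helm repo update
helm install community-operator mongodb/community-operator

# afterwards the operator's CRDs should show up:
kubectl api-resources | grep -i mongodb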
For reference, Bitnami provides a MongoDB Standalone or replica set helm chart which seems to be on the latest MongoDB version and is maintained regularly. There is also this one, but it's on an older version of MongoDB and doesn't seem to be getting much attention.
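For example, a rough sketch of installing the Bitnami chart as a replica set; the release name and password are placeholders, and value names vary between chart versions, so check the chart's values file before relying on these:

helm repo add bitnami https://charts.bitnami.com/bitnami
# replicaset architecture with two data-bearing members and a root password
helm install my-mongo bitnami/mongodb \
  --set architecture=replicaset \
  --set replicaCount=2 \
  --set auth.rootPassword=changeme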

Kubernetes MongoDB autoscaling

I've deployed a stateful MongoDB setup in my k8s cluster. Every time I scale up a new pod, I need to add the pod from the MongoDB console using the rs.add() command. Is there any way I can orchestrate this? Also, how can I expose my MongoDB service outside my k8s cluster? Changing the service type to NodePort didn't work for me. Please help.
Given below is the StatefulSet YAML file which I used to deploy MongoDB.
apiVersion: v1
kind: Service
metadata:
  name: mongo
  labels:
    name: mongo
spec:
  ports:
  - port: 27017
    targetPort: 27017
  clusterIP: None
  selector:
    role: mongo
---
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  name: mongo
spec:
  serviceName: "mongo"
  replicas: 3
  template:
    metadata:
      labels:
        role: mongo
        environment: test
    spec:
      terminationGracePeriodSeconds: 10
      containers:
      - name: mongo
        image: mongo:3.4
        command:
        - mongod
        - "--replSet"
        - rs0
        - "--bind_ip"
        - 0.0.0.0
        - "--smallfiles"
        - "--noprealloc"
        ports:
        - containerPort: 27017
        volumeMounts:
        - name: mongo-persistent-storage
          mountPath: /data/db
      - name: mongo-sidecar
        image: cvallance/mongo-k8s-sidecar
        env:
        - name: MONGO_SIDECAR_POD_LABELS
          value: "role=mongo,environment=test"
  volumeClaimTemplates:
  - metadata:
      name: mongo-persistent-storage
      annotations:
        volume.beta.kubernetes.io/storage-class: "managed-nfs-storage"
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 2Gi
As you are using Kubernetes, which is a container orchestration platform, you can always scale your Deployment/StatefulSet using $ kubectl scale deployment [deployment_name] --replicas=X
or $ kubectl scale statefulset [statefulset-name] --replicas=X
where X is how many pods in total you want in the deployment. It will automatically create pods based on your deployment settings.
If you don't want to scale manually, you should read about Kubernetes autoscaling - the Horizontal Pod Autoscaler (HPA); a sketch follows below.
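As a rough sketch only, assuming the metrics-server add-on is installed and a cluster recent enough for the autoscaling/v2 API (the min/max and CPU target are placeholders, and the scaleTargetRef assumes the StatefulSet is served as apps/v1, which newer clusters use in place of the apps/v1beta1 in the manifest above):

apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: mongo
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: StatefulSet
    name: mongo
  minReplicas: 3
  maxReplicas: 5
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70

Keep in mind the HPA only changes the number of Pods; joining each new Pod to the replica set is still up to the sidecar or an operator, and CPU utilization is rarely a good scaling signal for a database.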
As for exposing an application outside Kubernetes, you have to do it using a Service. More information can be found here. I am not sure if NodePort is right in this scenario; you can check the ServiceType description.
I am not very familiar with MongoDB on Kubernetes, but maybe these tutorials will help you:
Scaling MongoDB on Kubernetes, Running MongoDB as a Microservice with Docker and Kubernetes, Running MongoDB on Kubernetes with StatefulSets.
Hope it helps.
As @PjoterS suggests, you can scale the MongoDB replicas or pods inside Kubernetes using HPA.
But with that you also have to take care of the volume mounting that goes with it, as well as data latency between replicas.
I would suggest first checking the native cluster scaling options provided by MongoDB itself and configuring those. You can use an operator for MongoDB,
like: https://docs.mongodb.com/kubernetes-operator/master/tutorial/install-k8s-operator/
Otherwise, if your current config follows the native cluster setup and supports scaling replicas and copying data between replicas, you can go for HPA.
You can also have a look at this: https://medium.com/faun/scaling-mongodb-on-kubernetes-32e446c16b82
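The operator docs linked above are for the Enterprise operator; as a hedged illustration of the same idea with the open-source community operator (field names follow its sample manifests and may differ between operator versions; the resource name, user, secret name and MongoDB version are placeholders), scaling then comes down to editing spec.members and letting the operator manage replica set membership instead of running rs.add() by hand:

apiVersion: mongodbcommunity.mongodb.com/v1
kind: MongoDBCommunity
metadata:
  name: mongo
spec:
  members: 3          # change this to scale; the operator reconfigures the replica set
  type: ReplicaSet
  version: "4.4.10"
  security:
    authentication:
      modes: ["SCRAM"]
  users:
  - name: admin
    db: admin
    passwordSecretRef:
      name: admin-password   # a Secret you create with a "password" key
    roles:
    - name: clusterAdmin
      db: admin
    scramCredentialsSecretName: admin-scram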

Spring boot application pod fails to find the mongodb pod on Kubernetes cluster

I have a Spring Boot application backed by MongoDB. Both are deployed on a Kubernetes cluster on Azure. My application throws "Caused by: java.net.UnknownHostException: mongo-dev-0 (pod): Name or service not known" when it tries to connect to MongoDB.
I am able to connect to the mongo-dev-0 pod and run queries on the MongoDB, so there is no issue with Mongo itself, and it looks like Spring Boot is able to connect to the Mongo Service and discover the pod behind the service.
How do I ensure the pods are discoverable by my Spring Boot Application?
How do I go about debugging this issue?
Any help is appreciated. Thanks in advance.
Here is my config:
---
apiVersion: v1
kind: Service
metadata:
  name: mongo-dev
  labels:
    name: mongo-dev
spec:
  ports:
  - port: 27017
    targetPort: 27017
  clusterIP: None
  selector:
    role: mongo-dev
---
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  name: mongo-dev
spec:
  serviceName: "mongo-dev"
  replicas: 3
  template:
    metadata:
      labels:
        role: mongo-dev
        environment: dev
    spec:
      terminationGracePeriodSeconds: 10
      containers:
      - name: mongo-dev
        image: mongo:3.4
        command:
        - mongod
        - "--replSet"
        - rs0
        - "--smallfiles"
        - "--noprealloc"
        - "--auth"
        - "--bind_ip"
        - 0.0.0.0
        ports:
        - containerPort: 27017
        volumeMounts:
        - name: mongo-dev-persistent-storage
          mountPath: /data/db
      - name: mongo-sidecar
        image: cvallance/mongo-k8s-sidecar
        env:
        - name: MONGO_SIDECAR_POD_LABELS
          value: "role=mongo-dev,environment=dev"
        - name: KUBERNETES_MONGO_SERVICE_NAME
          value: "mongo-dev"
  volumeClaimTemplates:
  - metadata:
      name: mongo-dev-persistent-storage
      annotations:
        volume.beta.kubernetes.io/storage-class: "devdisk"
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 100Gi
---
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: devdisk
provisioner: kubernetes.io/azure-disk
parameters:
  skuName: Premium_LRS
  location: abc
  storageAccount: xyz
To be able to reach your mongodb pod via its service from your spring boot application, you have to start the mongodb pod and the corresponding service first, and then start your spring boot application pod (let's name it sb-pod).
You can enforce this order by using an initContainer in your sb-pod to wait for the database service to be available before starting. Something like:
initContainers:
- name: init-mongo-dev
  image: busybox
  command: ['sh', '-c', 'until nslookup mongo-dev; do echo waiting for mongo-dev; sleep 2; done;']
If you connect to your sb-pod using:
kubectl exec -it sb-pod bash
and type the env command, make sure you can see the environment variables
MONGO_DEV_SERVICE_HOST and MONGO_DEV_SERVICE_PORT
How about mongo-dev-0.mongo-dev.default.svc.cluster.local ?
<pod-id>.<service name>.<namespace>.svc.cluster.local
As in Stable Network ID.
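As an illustration only, a hypothetical Spring Boot application.yml using those stable network IDs; it assumes the default namespace, the rs0 replica set from the manifest above, a placeholder database name, and leaves out the credentials that --auth would require:

spring:
  data:
    mongodb:
      uri: mongodb://mongo-dev-0.mongo-dev.default.svc.cluster.local:27017,mongo-dev-1.mongo-dev.default.svc.cluster.local:27017,mongo-dev-2.mongo-dev.default.svc.cluster.local:27017/mydb?replicaSet=rs0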

MongoDb RAM Usage in Kubernetes Pods - Not Aware of Node limits

In Google Container Engine's Kubernetes I have 3 nodes, each having 3.75 GB of RAM.
Now I also have an API that is called from a single endpoint; this endpoint makes batch inserts into MongoDB like this:
IMongoCollection<T> stageCollection = Database.GetCollection<T>(StageName);
foreach (var batch in entites.Batch(1000))
{
    await stageCollection.InsertManyAsync(batch);
}
Now it happens very often that we end up in out-of-memory scenarios.
On the one hand we limited the wiredTigerCacheSizeGB to 1.5, and on the other hand we defined a resource limit for the pod.
But still the same result.
For me it looks like MongoDB isn't aware of the memory limit the pod has.
Is this a known issue? How do I deal with it without scaling to "monster" machines?
The configuration YAML looks like this:
---
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: fast
provisioner: kubernetes.io/gce-pd
parameters:
  type: pd-ssd
---
apiVersion: v1
kind: Service
metadata:
  name: mongo
  labels:
    name: mongo
spec:
  ports:
  - port: 27017
    targetPort: 27017
  clusterIP: None
  selector:
    role: mongo
---
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  name: mongo
spec:
  serviceName: "mongo"
  replicas: 1
  template:
    metadata:
      labels:
        role: mongo
        environment: test
    spec:
      terminationGracePeriodSeconds: 10
      containers:
      - name: mongo
        image: mongo:3.6
        command:
        - mongod
        - "--replSet"
        - rs0
        - "--bind_ip"
        - "0.0.0.0"
        - "--noprealloc"
        - "--wiredTigerCacheSizeGB"
        - "1.5"
        resources:
          limits:
            memory: "2Gi"
        ports:
        - containerPort: 27017
        volumeMounts:
        - name: mongo-persistent-storage
          mountPath: /data/db
      - name: mongo-sidecar
        image: cvallance/mongo-k8s-sidecar
        env:
        - name: MONGO_SIDECAR_POD_LABELS
          value: "role=mongo,environment=test"
  volumeClaimTemplates:
  - metadata:
      name: mongo-persistent-storage
      annotations:
        volume.beta.kubernetes.io/storage-class: "fast"
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 32Gi
UPDATE
In the meantime I also configured pod anti-affinity to make sure that we don't have any RAM interference on the nodes where MongoDB is running, but we still get the OOM scenarios.
I'm facing a similar issue where the pod gets OOMKilled even with resource limits and the WiredTiger cache limit set.
This PR tackles the issue of MongoDB sizing its cache based on the node's memory rather than the container's memory limit.
In your case I would advise you to update the MongoDB container image to a more recent version (the PR fix landed in 3.6.13, and you are running 3.6).
It may still be the case that your pod gets OOMKilled, given that I'm using 4.4.10 and still facing this issue.
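As a rough sketch of that advice (the image tag and sizes are illustrative, not a recommendation; the point is to pick a version that contains the cgroup-awareness fix and keep the WiredTiger cache comfortably below the container limit, with requests equal to limits so the Pod gets Guaranteed QoS):

containers:
- name: mongo
  image: mongo:4.4
  command:
  - mongod
  - "--replSet"
  - rs0
  - "--bind_ip"
  - "0.0.0.0"
  - "--wiredTigerCacheSizeGB"
  - "1"              # well under the 2Gi container limit
  resources:
    requests:
      memory: "2Gi"
    limits:
      memory: "2Gi"

Note that --noprealloc (and --smallfiles) were removed along with MMAPv1, so they have to be dropped when moving off the 3.x images.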

Cannot connect to a mongodb service in a Kubernetes cluster

I have a Kubernetes cluster on Google Cloud with a database Service, which is running in front of a MongoDB Deployment. I also have a series of microservices which are attempting to connect to that datastore.
However, they can't seem to find the host.
apiVersion: v1
kind: Service
metadata:
  labels:
    name: mongo
  name: mongo
spec:
  ports:
  - port: 27017
    targetPort: 27017
  selector:
    name: mongo
Here's my mongo deployment...
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: mongo-deployment
spec:
  replicas: 1
  template:
    metadata:
      labels:
        name: mongo
    spec:
      containers:
      - image: mongo:latest
        name: mongo
        ports:
        - name: mongo
          containerPort: 27017
          hostPort: 27017
        volumeMounts:
        - name: mongo-persistent-storage
          mountPath: /data/db
      volumes:
      - name: mongo-persistent-storage
        gcePersistentDisk:
          pdName: mongo-disk
          fsType: ext4
And an example of one of my services...
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: bandzest-artists
spec:
  replicas: 1
  template:
    metadata:
      labels:
        name: bandzest-artists
    spec:
      containers:
      - name: artists-container
        image: gcr.io/<omitted>/artists:41040e8
        ports:
        - containerPort: 7000
        imagePullPolicy: Always
        env:
        - name: DB_HOST
          value: mongo
        - name: AWS_BUCKET_NAME
          value: <omitted>
        - name: AWS_ACCESS_KEY_ID
          value: <omitted>
        - name: AWS_SECRET_KEY
          value: <omitted>
First, check that the service is created
kubectl describe svc mongo
You should see it show that it is both created and routing to your pod's IP. If you're wondering what your pod's IP is you can check it out via
kubectl get po | grep mongo
Which should return something like: mongo-deployment-<guid>-<guid>, then do
kubectl describe po mongo-deployment-<guid>-<guid>
You should make sure the pod started correctly and shows Running, not something like ImagePullBackOff. It looks like you're mounting a volume from a gcePersistentDisk. If your pod is just hanging out in the ContainerCreating state, it's very likely you're not mounting the disk correctly. Make sure you create the disk before you try to mount it as a volume.
If it looks like your service is routing correctly, then you can check the logs of your pod to make sure it started mongo correctly:
kubectl logs mongo-deployment-<guid>-<guid>
If it looks like the pod and logs are correct, you can exec into the pod and make sure mongo is actually starting and working:
kubectl exec -it mongo-deployment-<guid>-<guid> sh
Which should get you into the container (Pod) and then you can try something like this to see if your DB is running.
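A sketch of what that might look like; on newer images the shell binary is mongosh rather than mongo, and nslookup may not exist in every application image:

# from inside the mongo container: is mongod up and answering?
mongo --eval "db.adminCommand('ping')"

# or without exec-ing into a shell first:
kubectl exec -it mongo-deployment-<guid>-<guid> -- mongo --eval "db.adminCommand('ping')"

# and from one of the microservice pods, check that the Service name resolves:
kubectl exec -it <artists-pod> -- nslookup mongo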