Deploying 2 Namespaces With 1 MongoDB Instance In Each

I am attempting to deploy two MongoDB instances: one to the staging namespace and one to the production namespace. My staging deployment is running fine; however, the production data-retriever deployment crashes with the error MongoNetworkError: getaddrinfo ENOTFOUND.
I am sure this has been done before, but I cannot find any documentation on achieving the outcome described above.
The following YAML files are being used; the key differences are the serviceName in statefulset-prod.yml and the MONGODB_URI environment variable in deployment-prod.yml:
service-staging.yml
apiVersion: v1
kind: Service
metadata:
  name: mongodb
  namespace: staging
  labels:
    app: my-app
    environment: staging
spec:
  ports:
  - name: http
    protocol: TCP
    port: 27017
    targetPort: 27017
  clusterIP: None
  selector:
    role: mongodb
    environment: staging
service-prod.yml
apiVersion: v1
kind: Service
metadata:
  name: mongodb-production
  namespace: production
  labels:
    app: my-app
    environment: production
spec:
  ports:
  - name: http
    protocol: TCP
    port: 27017
    targetPort: 27017
  clusterIP: None
  selector:
    role: mongodb-production
    environment: production
statefulset-staging.yml
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  name: mongodb-staging
  namespace: staging
  labels:
    app: my-app
    environment: staging
  annotations:
    prometheus.io.scrape: "true"
spec:
  serviceName: "mongodb"
  replicas: 1
  template:
    metadata:
      labels:
        role: mongodb
        environment: staging
    spec:
      terminationGracePeriodSeconds: 10
      containers:
      - name: mongo
        image: mongo
        command:
        - mongod
        - "--replSet"
        - rs0
        - "--smallfiles"
        - "--noprealloc"
        - "--bind_ip_all"
        - "--wiredTigerCacheSizeGB=0.5"
        ports:
        - containerPort: 27017
        volumeMounts:
        - name: mongo-persistent-storage
          mountPath: /data/db
      - name: mongo-sidecar
        image: cvallance/mongo-k8s-sidecar
        env:
        - name: MONGO_SIDECAR_POD_LABELS
          value: "role=mongodb,environment=staging"
        - name: KUBERNETES_MONGO_SERVICE_NAME
          value: "mongodb"
  volumeClaimTemplates:
  - metadata:
      name: mongo-persistent-storage
    spec:
      accessModes: [ "ReadWriteOnce" ]
      storageClassName: fast-storage
      resources:
        requests:
          storage: 25Gi
statefulset-prod.yml
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  name: mongodb-production
  namespace: production
  labels:
    app: my-app
    environment: production
  annotations:
    prometheus.io.scrape: "true"
spec:
  serviceName: "mongodb-production"
  replicas: 1
  template:
    metadata:
      labels:
        role: mongodb-production
        environment: production
    spec:
      terminationGracePeriodSeconds: 10
      containers:
      - name: mongo
        image: mongo
        command:
        - mongod
        - "--replSet"
        - rs0
        - "--smallfiles"
        - "--noprealloc"
        - "--bind_ip_all"
        - "--wiredTigerCacheSizeGB=0.5"
        ports:
        - containerPort: 27017
        volumeMounts:
        - name: mongo-persistent-storage
          mountPath: /data/db
      - name: mongo-sidecar
        image: cvallance/mongo-k8s-sidecar
        env:
        - name: MONGO_SIDECAR_POD_LABELS
          value: "role=mongodb-production,environment=production"
        - name: KUBERNETES_MONGO_SERVICE_NAME
          value: "mongodb-production"
  volumeClaimTemplates:
  - metadata:
      name: mongo-persistent-storage
    spec:
      accessModes: [ "ReadWriteOnce" ]
      storageClassName: fast-storage
      resources:
        requests:
          storage: 25Gi
deployment-staging.yml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: data-retriever-deployment
  namespace: staging
  labels:
    app: data-retriever
    environment: staging
spec:
  replicas: 1
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
  template:
    metadata:
      labels:
        app: data-retriever
        environment: staging
        tier: backend
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
              - key: app
                operator: In
                values:
                - data-retriever
            topologyKey: "kubernetes.io/hostname"
      imagePullSecrets:
      - name: gitlab-reg
      containers:
      - name: data-retriever
        image: registry.gitlab.com/xxx/xxx:latest
        env:
        - name: MONGODB_URI
          value: "mongodb://mongodb:27017/appdb_staging?replicaSet=rs0"
        - name: NODE_ENV
          value: "staging"
deployment-prod.yml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: data-retriever-deployment
  namespace: production
  labels:
    app: data-retriever
    environment: production
spec:
  replicas: 1
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
  template:
    metadata:
      labels:
        app: data-retriever
        environment: production
        tier: backend
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
              - key: app
                operator: In
                values:
                - data-retriever
            topologyKey: "kubernetes.io/hostname"
      imagePullSecrets:
      - name: gitlab-reg
      containers:
      - name: data-retriever
        image: registry.gitlab.com/xxx/xxx:latest
        env:
        - name: MONGODB_URI
          value: "mongodb://mongodb-production:27017/appdb_production?replicaSet=rs0"
        - name: NODE_ENV
          value: "production"
Logs
Unable to connect to database: MongoNetworkError: failed to connect to server [mongodb:27017] on first connect [MongoNetworkError: getaddrinfo ENOTFOUND mongodb mongodb:27017]
Unable to connect to database: MongoNetworkError: failed to connect to server [mongodb:27017] on first connect [MongoNetworkError: getaddrinfo ENOTFOUND mongodb mongodb:27017]
    at NativeConnection.db.on (/app/services/mongoose.service.js:25:19)
    at emitOne (events.js:116:13)
    at NativeConnection.emit (events.js:211:7)
    at process.nextTick (/node_modules/mongoose/lib/connection.js:575:37)
    at _combinedTickCallback (internal/process/next_tick.js:132:7)
    at process._tickCallback (internal/process/next_tick.js:181:9)
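
Two things stand out in this error. First, the driver is resolving the bare hostname mongodb even though MONGODB_URI points at mongodb-production; with ?replicaSet=rs0 in the URI, MongoDB drivers discover the member hostnames stored in the replica set configuration and use those instead of the seed address, so if rs0 was initiated with mongodb:27017 as a member, production clients will try to resolve mongodb and fail exactly like this. Second, DNS itself is easy to rule out from inside the namespace. A hedged set of checks, assuming the Service, namespace, and pod names implied by the manifests above:

# Resolve the production Service by short name and by fully qualified name
kubectl run -n production dns-test --rm -it --image=busybox --restart=Never -- nslookup mongodb-production
kubectl run -n production dns-test --rm -it --image=busybox --restart=Never -- nslookup mongodb-production.production.svc.cluster.local
# Inspect the member hostnames actually stored in the replica set config
kubectl exec -n production mongodb-production-0 -c mongo -- mongo --eval "rs.conf()"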

Related

Authentication Failed when connecting to MongoDB Kubernetes Statefulset

I am deploying a MongoDB StatefulSet in Kubernetes and setting up my own username and password, but I can't authenticate to the cluster. Please find my YAML manifest below:
Create the Statefulset along with the Headless service
...
apiVersion: v1
kind: Service
metadata:
  name: mongo
  namespace: ng-mongo
  labels:
    app: mongo
spec:
  ports:
  - port: 27017
    targetPort: 27017
    name: database
  clusterIP: None
  selector:
    role: mongo
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mongo
  namespace: ng-mongo
spec:
  serviceName: "mongo"
  replicas: 1
  selector:
    matchLabels:
      role: mongo
  template:
    metadata:
      labels:
        role: mongo
    spec:
      containers:
      - name: mongo
        image: mongo
        ports:
        - containerPort: 27017
        volumeMounts:
        - name: mongo-persistent-storage
          mountPath: /data/db
        env:
        - name: MONGO_INITDB_ROOT_USERNAME
          value: test
        - name: MONGO_INITDB_ROOT_PASSWORD
          value: test
        - name: MONGO_INITDB_DATABASE
          value: ng-db
      - name: mongo-sidecar
        image: cvallance/mongo-k8s-sidecar
        env:
        # https://github.com/cvallance/mongo-k8s-sidecar#settings
        - name: MONGO_SIDECAR_POD_LABELS
          value: "role=mongo"
  volumeClaimTemplates:
  - metadata:
      name: mongo-persistent-storage
    spec:
      accessModes: [ "ReadWriteOnce" ]
      storageClassName: "local-storage"
      resources:
        requests:
          storage: 10Gi
Expose the headless service for testing purposes
kubectl port-forward -n ng-mongo mongo-0 27017:27017
Try to access MongoDB through the mongo shell:
mongo -u test -ptest --authenticationDatabase ng-db
And I get the following error:
MongoDB shell version v5.0.11
connecting to: mongodb://127.0.0.1:27017/?authSource=ng-db&compressors=disabled&gssapiServiceName=mongodb
Error: Authentication failed. :
connect#src/mongo/shell/mongo.js:372:17
#(connect):2:6
exception: connect failed
exiting with code 1
Can someone help me on this please? Thank you
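
One detail worth noting here (an observation, not a confirmed fix): the mongo image creates the root user in the admin database, and MONGO_INITDB_DATABASE only selects the database that init scripts in /docker-entrypoint-initdb.d run against, so authenticating against ng-db can fail even with correct credentials. Also, the MONGO_INITDB_* variables only take effect when the container first starts with an empty /data/db; they are ignored if the persistent volume already contains data. A minimal check using the credentials from the manifest above:

mongo -u test -ptest --authenticationDatabase admin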

Cannot access from Init container with calico NetworkPolicy

I have the namespaces below for my app:
backend-api <-- API pod deployed here
backend-db <-- redis instance deployed here
In the backend API pod, there's an init container that populates the DB first.
I have the NetworkPolicy below (I have deployed the Calico network provider):
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: restrict-db
  namespace: backend-db
spec:
  podSelector:
    matchLabels:
      role: backend-db
  policyTypes:
  - Ingress
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          role: backend-api
If I apply the NetworkPolicy after I have deployed all the pods, it works (API pods can access Redis).
But if I deploy the NetworkPolicy first and then deploy the API pod, it fails to initialize with STATUS Init:CrashLoopBackOff.
If I describe the init container, it has the log below:
Data initializer
2022/02/07 04:33:49 dial tcp 10.43.250.221:6379: i/o timeout
Any idea why this is happening?
Deployment YAML for the backend API:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: backend-api
  namespace: backend-api
spec:
  replicas:
  selector:
    matchLabels:
      app: backend-api
      role: backend-api
  template:
    metadata:
      labels:
        app: backend-api
        role: backend-api
    spec:
      containers:
      - name: backend-api
        image: "192.168.8.103:5000/goredis:1.0.1"
        imagePullPolicy: Always
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 8080
        env:
        - name: REDIS_HOST
          value: redis.backend-db
        - name: REDIS_PORT
          value: "6379"
      initContainers:
      - name: init-myservice
        image: "192.168.8.103:5000/goredisinit:1.0.1"
        imagePullPolicy: Always
        env:
        - name: REDIS_HOST
          value: redis.backend-db
        - name: REDIS_PORT
          value: "6379"
Deployment YAML for backend-db:
apiVersion: apps/v1
kind: Deployment
metadata:
  namespace: backend-db
  name: redis-master
  labels:
    role: backend-db
spec:
  selector:
    matchLabels:
      role: backend-db
  replicas: 1
  template:
    metadata:
      labels:
        role: backend-db
    spec:
      containers:
      - name: redis
        image: redis
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379
        volumeMounts:
        - name: redis-data
          mountPath: /data
      volumes:
      - name: redis-data
        persistentVolumeClaim:
          claimName: redis-data
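
One detail worth checking (not shown in the question): namespaceSelector matches labels on the Namespace object itself, not on the pods inside it, so the backend-api namespace must carry the role=backend-api label for the ingress rule to select it. A minimal check, assuming the namespace names above:

kubectl label namespace backend-api role=backend-api
kubectl get namespace backend-api --show-labels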

odoo in k8s: Odoo pod running then crashing

I am trying to deploy Odoo in k8s.
I have used the YAML files below for Odoo/Postgres/services.
The Odoo pod keeps crashing. The logs show:
could not translate host name "db" to address: Temporary failure in name resolution
apiVersion: apps/v1
kind: Deployment
metadata:
  name: odoo3
spec:
  replicas: 1
  selector:
    matchLabels:
      app: odoo3
  template:
    metadata:
      labels:
        app: odoo3
    spec:
      containers:
      - name: odoo3
        image: odoo
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: POSTGRES_DB
          value: "postgres"
        - name: POSTGRES_PASSWORD
          value: "postgres"
        - name: POSTGRES_USER
          value: "postgres"
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: odoo3
  labels:
    app: odoo3
spec:
  ports:
  - port: 80
    targetPort: 80
  selector:
    app: odoo3
You need to specify the environment variable HOST
env:
- name: POSTGRES_DB
  value: "postgres"
- name: POSTGRES_PASSWORD
  value: "postgres"
- name: POSTGRES_USER
  value: "postgres"
- name: HOST
  value: "your-postgres-service-name"
The your-postgres-service-name placeholder should point to your Postgres database container or server.
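
For illustration, a minimal sketch of such a Service (the name postgres and the app: postgres selector are assumptions, not taken from the question):

apiVersion: v1
kind: Service
metadata:
  name: postgres        # hypothetical name; HOST in the Odoo pod would then be "postgres"
spec:
  ports:
  - port: 5432          # default PostgreSQL port
    targetPort: 5432
  selector:
    app: postgres       # must match the labels on the Postgres pod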

Can't authenticate MongoDb on Kubernetes cluster

When I try to connect to MongoDB running on a Kubernetes cluster with mongo -u admin -p password --authenticationDatabase admin, I get this error:
{"mechanism":"SCRAM-SHA-1","speculative":false,"principalName":"admin","authenticationDatabase":"admin","remote":"192.168.65.3:47486","extraInfo":{},"error":"UserNotFound: Could not find user \"admin\" for db \"admin\""}}
Below is the YAML file I'm using to create the MongoDB service.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mongodb-statefulset
spec:
  serviceName: "mongodb-service"
  selector:
    matchLabels:
      app: mongodb
  template:
    metadata:
      labels:
        app: mongodb
    spec:
      containers:
      - name: mongodb
        image: mongo
        resources:
          limits:
            memory: "128Mi"
            cpu: "500m"
        ports:
        - containerPort: 27017
        env:
        - name: MONGO_INITDB_ROOT_USERNAME
          value: admin
        - name: MONGO_INITDB_ROOT_PASSWORD
          value: password
        volumeMounts:
        - mountPath: /data/db
          name: data
  volumeClaimTemplates:
  - metadata:
      name: data
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 1Gi
---
apiVersion: v1
kind: Service
metadata:
  name: mongodb-service
spec:
  type: LoadBalancer
  selector:
    app: mongodb
  ports:
  - port: 27017
    targetPort: 27017
I've tried everything and it still doesn't work. I appreciate any help.
Try something like the following, using mongod as the command:
apiVersion: v1
kind: Service
metadata:
  labels:
    app: mongo
  name: mongodb-service
spec:
  type: NodePort
  ports:
  - name: "http"
    port: 27017
    protocol: TCP
    targetPort: 27017
  selector:
    service: mongo
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: mongo
spec:
  replicas: 1
  template:
    metadata:
      labels:
        service: mongo
      name: mongodb-service
    spec:
      containers:
      - args:
        - mongod
        # note: --smallfiles was removed along with MMAPv1 in MongoDB 4.2,
        # so this flag prevents mongo:latest from starting; drop it on recent images
        - --smallfiles
        image: mongo:latest
        name: mongo
        env:
        - name: MONGO_INITDB_ROOT_USERNAME
          value: admin
        - name: MONGO_INITDB_ROOT_PASSWORD
          value: admin

Setting up MongoDB with Kubernetes on Google Cloud

I'm trying to deploy MongoDB on Google Cloud using Kubernetes, but I'm having some difficulties. It says:
error converting YAML to JSON: yaml: line 29: found character that cannot start any token
However, line 29 contains only a space indenting the line, as shown below.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mongo-pv-claim
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 20Gi
---
apiVersion: v1
kind: Service
metadata:
  name: mongo
  labels:
    name: mongo
spec:
  ports:
  - port: 27017
    targetPort: 27017
  clusterIP: None
  selector:
    role: mongo
---
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  name: mongo
spec:
  serviceName: "mongo"
  replicas: 1
  template:
    metadata:
      labels:
        role: mongo
        environment: test
    spec:
      terminationGracePeriodSeconds: 10
      containers:
      - name: mongo
        image: mongo
        command:
        - mongod
        - "--replSet"
        - rs0
        - "--smallfiles"
        - "--noprealloc"
        ports:
        - containerPort: 27017
        volumeMounts:
        - name: mongo-persistent-storage
          mountPath: /data/db
        volumes:                 # mis-indented: see the answer below
        - name: mongo-persistent-storage
        persistentVolumeClaim:   # mis-indented: see the answer below
          claimName: mongo-pv-claim
Could somebody point me to where I'm going wrong, or to more resources on the matter? I've only recently started using Kubernetes, so I would appreciate any information.
You had two issues with the formatting of your YAML file:
volumes should be at the same level as containers.
persistentVolumeClaim should be nested under - name: mongo-persistent-storage in the volumes section.
The correct YAML file is:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mongo-pv-claim
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 20Gi
---
apiVersion: v1
kind: Service
metadata:
  name: mongo
  labels:
    name: mongo
spec:
  ports:
  - port: 27017
    targetPort: 27017
  clusterIP: None
  selector:
    role: mongo
---
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  name: mongo
spec:
  serviceName: "mongo"
  replicas: 1
  template:
    metadata:
      labels:
        role: mongo
        environment: test
    spec:
      terminationGracePeriodSeconds: 10
      containers:
      - name: mongo
        image: mongo
        command:
        - mongod
        - "--replSet"
        - rs0
        - "--smallfiles"
        - "--noprealloc"
        ports:
        - containerPort: 27017
        volumeMounts:
        - name: mongo-persistent-storage
          mountPath: /data/db
      volumes:
      - name: mongo-persistent-storage
        persistentVolumeClaim:
          claimName: mongo-pv-claim
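
As a general tip, indentation mistakes like these can be caught before anything is deployed by asking kubectl for a client-side dry run (the exact flag depends on the kubectl version: older releases use --dry-run, newer ones --dry-run=client; the filename here is an assumption):

kubectl apply --dry-run=client -f mongo.yml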