I am in the process of deploying a mongodb ReplicaSet on GKE.
My deployment works, however I would like to enable auth on Mongo.
I have connected to my pod:
kubectl exec -it {pod_name} -- mongo admin
I created an admin user and also a user for my database, roughly as sketched below. I was then thinking I could update mongo-statefulset.yaml with the --auth flag and apply the updated YAML.
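A rough sketch of how those users were created in the mongo admin shell (the database name, user names, and passwords here are placeholders, not the real ones):
// in the admin database, create the administrative user
use admin
db.createUser({
  user: "admin",
  pwd: "adminPassword",
  roles: [ { role: "userAdminAnyDatabase", db: "admin" } ]
})
// switch to the application database and create its user
use myAppDb
db.createUser({
  user: "appUser",
  pwd: "appPassword",
  roles: [ { role: "readWrite", db: "myAppDb" } ]
})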
The updated StatefulSet would look something like this:
.....
spec:
terminationGracePeriodSeconds: 10
containers:
- name: mongod-container
image: mongo:3.6
command:
- mongod
- "--bind_ip"
- "0.0.0.0"
- "--replSet"
- rs0
- "--smallfiles"
- "--noprealloc"
- "--auth"
ports:
- containerPort: 27017
volumeMounts:
- name: mongo-persistent-storage
mountPath: /data/db
.....
But running kubectl apply -f mongo-statefulset.yaml just produces
service/mongo-svc unchanged
statefulset.apps/mongo configured
Should I restart my pods for this to now take effect?
Try to do rolling update:
The RollingUpdate update strategy will update all Pods in a StatefulSet, in reverse ordinal order, while respecting the StatefulSet guarantees.
Patch your StatefulSet to apply the RollingUpdate update strategy:
$ kubectl patch statefulset your_statefulset_name -p '{"spec":{...}}'
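For example, assuming the StatefulSet is named mongo as in the question, the whole update could look roughly like this (a sketch, not verified against your cluster; the patch payload is the updateStrategy setting from the StatefulSet documentation):
# make sure the StatefulSet uses the RollingUpdate strategy
$ kubectl patch statefulset mongo -p '{"spec":{"updateStrategy":{"type":"RollingUpdate"}}}'
# re-apply the manifest that now contains the --auth flag
$ kubectl apply -f mongo-statefulset.yaml
# watch the pods being replaced one by one, in reverse ordinal order
$ kubectl rollout status statefulset mongo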
Don't forget to add environment variables with the credentials you have created to your pod, like:
env:
- name: MONGODB_USERNAME
value: admin
- name: MONGODB_PASSWORD
value: password
I hope it helps.
You could try
kubectl delete -f mongo-statefulset.yaml && kubectl apply -f mongo-statefulset.yaml
I have created a MongoDB pod in my local Minikube cluster with the following configuration. Now I would like to migrate the data of my existing MongoDB database (running on an AWS EC2 instance) to this database. How can I accomplish this?
apiVersion: v1
kind: Pod
metadata:
name: mongodb
labels:
app: mongodb
spec:
volumes:
- name: mongo-vol
persistentVolumeClaim:
claimName: mongo-pvc
containers:
- image: mongo
name: container1
command:
- "mongod"
- "--bind_ip"
- "0.0.0.0"
ports:
- containerPort: 27017
volumeMounts:
- name: mongo-vol
mountPath: /data/db
I can't add a comment because I do not have more than 50 reputation, but here is the thing:
You created a Pod using MongoDB and I see the volumeMount set to /data/db. What is this path? Is this volume a hostPath, NFS, or external storage?
The migration itself could be accomplished in two ways:
by running a mongodump on your EC2 instance:
mongodump --host hostname --port 27017 --out /tmp/mongodb-dump
Then a mongorestore on your /data/db path.
You can manually log in to the pod:
# copy your dump from the host to the pod
kubectl cp /tmp/mongodb-dump mongodb:/tmp/mongodb-dump
# log in to the pod
kubectl exec -it mongodb -- /bin/bash
#run restore
mongorestore /tmp/mongodb-dump
or
You can copy all the files (the entire /data/db directory) from your EC2 instance and drop them into the volume your pod mounts at /data/db.
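A rough sketch of that second option, assuming mongod on the EC2 instance is managed by systemd and its data path is also /data/db (adjust names and paths to your setup):
# 1) on the EC2 instance: stop mongod so the data files are consistent, then pack them
sudo systemctl stop mongod
sudo tar czf /tmp/mongo-data.tar.gz -C /data/db .

# 2) pull the archive to the machine running kubectl, then push it into the pod
scp ec2-user@<ec2-host>:/tmp/mongo-data.tar.gz /tmp/
kubectl cp /tmp/mongo-data.tar.gz mongodb:/tmp/mongo-data.tar.gz

# 3) unpack into the mounted volume; mongod inside the pod must not be serving writes
#    while you overwrite /data/db (e.g. temporarily run the pod with a sleep command)
kubectl exec -it mongodb -- bash -c 'tar xzf /tmp/mongo-data.tar.gz -C /data/db'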
I'm running a Kubernetes cluster on AWS and need to configure a replicated MongoDB 4.2 Database.
I'm using StatefulSets in order for other Pods (e.g., REST API NodeJS Pod) to easily connect to the mongo instances (example dsn: "mongodb://mongo-0.mongo,mongo-1.mongo,mongo-2.mongo:27017/app").
mongo-configmap.yaml (provides a shell script to perform the replication initialization upon mongo container creation):
apiVersion: v1
kind: ConfigMap
metadata:
name: mongo-init
data:
init.sh: |
#!/bin/bash
# wait for the readiness health check to pass
until ping -c 1 ${HOSTNAME}.mongo; do
echo "waiting for DNS (${HOSTNAME}.mongo)..."
sleep 2
done
until /usr/bin/mongo --eval 'printjson(db.serverStatus())'; do
echo "connecting to local mongo..."
sleep 2
done
echo "connected to local."
HOST=mongo-0.mongo:27017
until /usr/bin/mongo --host=${HOST} --eval 'printjson(db.serverStatus())'; do
echo "connecting to remote mongo..."
sleep 2
done
echo "connected to remote."
if [[ "${HOSTNAME}" != 'mongo-0' ]]; then
until /usr/bin/mongo --host=${HOST} --eval="printjson(rs.status())" \
| grep -v "no replset config has been received"; do
echo "waiting for replication set initialization"
sleep 2
done
echo "adding self to mongo-0"
/usr/bin/mongo --host=${HOST} --eval="printjson(rs.add('${HOSTNAME}.mongo'))"
fi
if [[ "${HOSTNAME}" == 'mongo-0' ]]; then
echo "initializing replica set"
/usr/bin/mongo --eval="printjson(rs.initiate(\
{'_id': 'rs0', 'members': [{'_id': 0, \
'host': 'mongo-0.mongo:27017'}]}))"
fi
echo "initialized"
while true; do
sleep 3600
done
mongo-service.yaml:
apiVersion: v1
kind: Service
metadata:
name: mongo
labels:
app: mongo
spec:
clusterIP: None
ports:
- port: 27017
selector:
app: mongo
mongo-statefulset.yaml (2 containers inside one Pod, 1 for the actual DB, the other for initialization of the replication):
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: mongo
labels:
app: mongo
spec:
selector:
matchLabels:
app: mongo
serviceName: "mongo"
replicas: 3
template:
metadata:
labels:
app: mongo
spec:
terminationGracePeriodSeconds: 10
containers:
- name: mongodb
image: mongo:4.2
command:
- mongod
args:
- --replSet
- rs0
- "--bind_ip_all"
ports:
- containerPort: 27017
name: web
volumeMounts:
- name: database
mountPath: /data/db
livenessProbe:
exec:
command:
- /usr/bin/mongo
- --eval
- db.serverStatus()
initialDelaySeconds: 10
timeoutSeconds: 10
- name: init-mongo
image: mongo:4.2
command:
- bash
- /config/init.sh
volumeMounts:
- name: config
mountPath: /config
volumes:
- name: config
configMap:
name: "mongo-init"
volumeClaimTemplates:
- metadata:
name: database
annotations:
volume.beta.kubernetes.io/storage-class: mongodb-storage
spec:
accessModes: [ "ReadWriteOnce" ]
resources:
requests:
storage: 5Gi
After applying these configurations, the 3 mongo pods start running (mongo-0, mongo-1, mongo-2).
However, other pods can't connect to these mongo pods.
A further look into the mongo-0 pod (which should be the primary instance) reveals that the replication did not work.
kubectl exec -it mongo-0 -- /bin/bash
Then running 'mongo' to start the mongo shell and entering 'rs.status()' results in the following output:
{
"info" : "run rs.initiate(...) if not yet done for the set",
"ok" : 0,
"errmsg" : "no replset config has been received",
"code" : 94,
"codeName" : "NotYetInitialized"
}
Apparently, the mongo 4.x images do not come with 'ping' installed; therefore, the rest of the script was never executed.
Adding these two lines at the beginning of the script in mongo-configmap.yaml fixes the problem:
apt-get update
apt-get install iputils-ping --yes
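With that change, the top of init.sh in mongo-configmap.yaml looks roughly like this (the rest of the script stays unchanged):
init.sh: |
  #!/bin/bash
  # ping is not shipped in the mongo:4.x images, so install it first
  apt-get update
  apt-get install iputils-ping --yes
  # wait for the readiness health check to pass
  until ping -c 1 ${HOSTNAME}.mongo; do
    echo "waiting for DNS (${HOSTNAME}.mongo)..."
    sleep 2
  done
  # ... rest of the script unchanged ...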
After all the pods are running, run this command:
kubectl exec -it mongo-0 -- /bin/bash
(here mongo-0 is the pod name)
Now start the mongo shell:
mongo
Now check whether the replica set is initiated:
rs.status()
If not, initiate it and make this node the primary by running these commands one by one:
rs.initiate()
var cfg = rs.conf()
cfg.members[0].host="mongo-0.mongo:27017"
(here mongo-0 is the pod name and mongo is the service name.)
Now reconfigure the primary node:
rs.reconfig(cfg)
Add all secondary (slave) nodes to the primary:
rs.add("mongo-1.mongo:27017")
rs.add("mongo-2.mongo:27017")
(here mongo-1 and mongo-2 are the pod names and mongo is the service name.)
Now check the status:
rs.status()
Now exit the primary's shell and go to a secondary (slave) node:
exit
exit
kubectl exec -it mongo-1 -- /bin/bash
mongo
rs.secondaryOk()
Check the status and exit:
rs.status()
exit
exit
Now do the same for the other secondary (slave) nodes.
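If you prefer to script this instead of typing the commands one by one, the same initialization can be done from outside the pods in roughly one step (a sketch assuming the pod and service names above and no authentication enabled):
# initiate the replica set with all three members directly on mongo-0
kubectl exec -it mongo-0 -- mongo --eval '
  rs.initiate({
    _id: "rs0",
    members: [
      { _id: 0, host: "mongo-0.mongo:27017" },
      { _id: 1, host: "mongo-1.mongo:27017" },
      { _id: 2, host: "mongo-2.mongo:27017" }
    ]
  })'
# verify that one PRIMARY and two SECONDARY members show up
kubectl exec -it mongo-0 -- mongo --eval 'rs.status()'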
For logs, I mount a volume from host on to the pod. This is written in the deployment yaml.
But if my 2 pods run on the same host, there will be a conflict, as both pods will produce log files with the same name.
Can I use some dynamic variable in the deployment file so that the mount on the host is created with a different name for different pods?
You can use subPathExpr to achieve uniqueness in the absolute path; this is one of the use cases of this feature. As of now it is alpha in Kubernetes 1.14.
In this example, a Pod uses subPathExpr to create a directory pod1 within the hostPath volume /var/log/pods, using the pod name from the Downward API. The host directory /var/log/pods/pod1 is mounted at /logs in the container.
apiVersion: v1
kind: Pod
metadata:
name: pod1
spec:
containers:
- name: container1
env:
- name: POD_NAME
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: metadata.name
image: busybox
command: [ "sh", "-c", "while [ true ]; do echo 'Hello'; sleep 10; done | tee -a /logs/hello.txt" ]
volumeMounts:
- name: workdir1
mountPath: /logs
subPathExpr: $(POD_NAME)
restartPolicy: Never
volumes:
- name: workdir1
hostPath:
path: /var/log/pods
Look at pod affinity/anti-affinity to avoid scheduling the replicas on the same node. That way each replica of a specific deployment gets deployed on a separate node, and you will not have to worry about the same folder being used by multiple pods.
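A minimal sketch of such a rule in the Deployment's pod template (app: myapp is a placeholder; use the label your pods actually carry):
spec:
  affinity:
    podAntiAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchLabels:
            app: myapp
        topologyKey: kubernetes.io/hostname   # at most one such replica per node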
I had to spend hours on this; your solution worked like a charm!
I had tried the following, but none worked despite being given in multiple documents:
subPathExpr: "$POD_NAME"
subPathExpr: $POD_NAME
subPathExpr: ${POD_NAME}
Finally, this worked: subPathExpr: $(POD_NAME)
I just deployed a Docker container with Postgres on it on AWS EKS.
Below are the details.
How do I access it or test whether Postgres is working? I tried accessing both IPs with the port from a worker node within the VPC.
psql -h #IP -U #defaultuser -p 55432
Below is the deployment.
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: postgres
spec:
replicas: 1
template:
metadata:
labels:
app: postgres
spec:
containers:
- name: postgres
image: postgres:10.4
imagePullPolicy: "IfNotPresent"
ports:
- containerPort: 55432
# envFrom:
# - configMapRef:
# name: postgres-config
volumeMounts:
- mountPath: /var/lib/postgresql/data
name: postgredb
volumes:
- name: postgredb
persistentVolumeClaim:
claimName: efs
Surprisingly, I am able to connect with psql, but on 5432. :( Not sure what I am doing wrong; I passed containerPort as 55432.
In short, you need to run the following command to expose your database on 55432 port.
kubectl expose deployment postgres --port=55432 --target-port=5432 --name internal-postgresql-svc
From now on, you can connect to it via port 55432 from inside your cluster by using the service name as a hostname, or via its ClusterIP address:
kubectl get svc internal-postgresql-svc
What you did in your deployment manifest only attaches additional information about the network port a container uses, which is misleading here, because your container actually exposes port 5432 only (you can verify it yourself here). You should use a Kubernetes Service - an abstraction which enables access to your Pods and does the necessary port mapping behind the scenes.
Please also check the different Service types if you want to expose your PostgreSQL database outside of the Kubernetes cluster.
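If you prefer a manifest over kubectl expose, the equivalent Service would look roughly like this (name and port mapping taken from the command above, selector from your Deployment's labels):
apiVersion: v1
kind: Service
metadata:
  name: internal-postgresql-svc
spec:
  type: ClusterIP
  selector:
    app: postgres
  ports:
  - port: 55432        # port other pods connect to
    targetPort: 5432   # port the postgres container actually listens on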
To test if Postgres is running fine inside the Pod's container:
kubectl run postgresql-postgresql-client --rm --tty -i --restart='Never' --namespace default --image bitnami/postgresql --env="PGPASSWORD=<HERE_YOUR_PASSWORD>" --command -- psql --host <HERE_HOSTNAME=SVC_OR_IP> -U <HERE_USERNAME>
I have a docker compose file with the following entries
version: '2.1'
services:
mysql:
container_name: mysql
image: mysql:latest
volumes:
- ./mysqldata:/var/lib/mysql
environment:
MYSQL_ROOT_PASSWORD: 'password'
ports:
- '3306:3306'
healthcheck:
test: ["CMD", "curl", "-f", "http://localhost:3306"]
interval: 30s
timeout: 10s
retries: 5
test1:
container_name: test1
image: test1:latest
ports:
- '4884:4884'
- '8443'
depends_on:
mysql:
condition: service_healthy
links:
- mysql
The test1 container is dependent on mysql, which needs to be up and running.
In Docker Compose this can be controlled using the healthcheck and depends_on attributes.
The health-check equivalent in Kubernetes is a readinessProbe, which I have already created, but how do we control the container startup order within the Pod?
Any direction on this is greatly appreciated.
My Kubernetes file:
apiVersion: apps/v1beta1
kind: Deployment
metadata:
name: deployment
spec:
replicas: 1
template:
metadata:
labels:
app: deployment
spec:
containers:
- name: mysqldb
image: "dockerregistry:mysqldatabase"
imagePullPolicy: Always
ports:
- containerPort: 3306
readinessProbe:
tcpSocket:
port: 3306
initialDelaySeconds: 15
periodSeconds: 10
- name: test1
image: "dockerregistry::test1"
imagePullPolicy: Always
ports:
- containerPort: 3000
That's the beauty of Docker Compose and Docker Swarm... Their simplicity.
We came across this same Kubernetes shortcoming when deploying the ELK stack.
We solved it by using a side-car (initContainer), which is just another container in the same pod that runs first; when it completes, Kubernetes automatically starts the main container. We made it a simple shell script that loops until Elasticsearch is up and running, then it exits and Kibana's container starts.
Below is an example of a side-car that waits until Grafana is ready.
Add this 'initContainer' block just above your other containers in the Pod:
spec:
initContainers:
- name: wait-for-grafana
image: darthcabs/tiny-tools:1
args:
- /bin/bash
- -c
- >
set -x;
while [[ "$(curl -s -o /dev/null -w ''%{http_code}'' http://grafana:3000/login)" != "200" ]]; do
echo '.'
sleep 15;
done
containers:
.
.
(your other containers)
.
.
This was purposefully left out. The reason is that applications should be responsible for their own connect/re-connect logic when connecting to service(s) such as a database. This is outside the scope of Kubernetes.
While I don't know a direct answer to your question except this link (k8s-AppController), I don't think it's wise to use the same deployment for the DB and the app, because you are tightly coupling your DB with your app and losing the awesome Kubernetes option to scale either of them as needed. Furthermore, if your DB pod dies, you lose your data as well.
Personally, what I would do is have a separate StatefulSet with a Persistent Volume for the database and a Deployment for the app, and use a Service for the communication between them, as sketched below.
Yes, I have to run a few different commands and may need at least two separate deployment files, but this way I am decoupling them and can scale them as needed. And my data is persistent as well!
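A minimal sketch of the Service part, assuming the database pods carry the label app: mysql (the names here are illustrative):
apiVersion: v1
kind: Service
metadata:
  name: mysql
spec:
  selector:
    app: mysql          # must match the labels on the database StatefulSet's pods
  ports:
  - port: 3306
    targetPort: 3306
The app Deployment then reaches the database through cluster DNS at mysql:3306 instead of localhost.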
As mentioned, you should run the database and the application containers in separate pods and connect them with a service.
Unfortunately, neither Kubernetes nor Helm provides functionality similar to what you've described. We had many issues with that and tried a few approaches until we decided to develop a smallish utility that solved this problem for us.
Here's the link to the tool we've developed: https://github.com/Opsfleet/depends-on
You can make pods wait until other pods become ready according to their readinessProbe configuration. It's very close to Docker's depends_on functionality.
In Kubernetes terminology, your docker-compose set maps to a Pod.
So there is no depends_on equivalent there. Kubernetes checks all containers in a pod; they all have to be alive for the pod to be marked Healthy, and they will always run together.
In your case, you need to prepare a Deployment configuration like this:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: my-app
spec:
replicas: 1
template:
metadata:
labels:
app: app-and-db
spec:
containers:
- name: app
image: nginx
ports:
- containerPort: 80
- name: db
image: mysql
ports:
- containerPort: 3306
After the pod starts, your database will be available to your application on the localhost interface, because of the pod networking model:
Containers within a pod share an IP address and port space, and can find each other via localhost. They can also communicate with each other using standard inter-process communications like SystemV semaphores or POSIX shared memory.
But, as @leninhasda mentioned, it is not a good idea to run the database and the application in the same pod, or without a Persistent Volume. Here is a good tutorial on how to run a stateful application in Kubernetes.
https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/
What about liveness and readiness probes? They support commands, HTTP requests, and more:
apiVersion: v1
kind: Pod
metadata:
labels:
test: liveness
name: liveness-exec
spec:
containers:
- name: liveness
image: k8s.gcr.io/busybox
args:
- /bin/sh
- -c
- touch /tmp/healthy; sleep 30; rm -rf /tmp/healthy; sleep 600
livenessProbe:
exec:
command:
- cat
- /tmp/healthy
initialDelaySeconds: 5
periodSeconds: 5