Write command args in kubernetes deployment - mongodb

Can anyone help with this, please?
I have a mongo pod assigned with its service. I need to execute some commands while starting the container in the pod.
I found a small example like this:
command: ["printenv"]
args: ["HOSTNAME", "KUBERNETES_PORT"]
But I want to execute these commands while starting the pod:
use ParkInDB
db.roles.insertMany( [ {name :"ROLE_USER"}, {name:"ROLE_MODERATOR"}, {name:"ROLE_ADMIN"} ])

You need to choose one of these solutions:
1- use an init container in the deployment to prepare files or run commands before the main container starts (see the sketch after the snippet below)
2- use command and args in the deployment YAML
For the init-container approach, visit this page and use it.
For command and args, use this model in your deployment YAML file:
- image:
  name:
  command: ["/bin/sh"]
  args: ["-c", "PUT_YOUR_COMMAND_HERE"]

If you are looking to run commands before the container starts or stops, you can use container lifecycle hooks (a sketch follows the snippet below).
https://kubernetes.io/docs/concepts/containers/container-lifecycle-hooks/
Alternatively, you can put the commands in a shell script file and edit the MongoDB image as required:
command: ["/bin/sh", "-c", "/usr/src/script.sh"]
You can also edit the YAML with:
args:
  - '-c'
  - |
    ls
    rm -rf sql_scripts
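A minimal sketch of a postStart lifecycle hook for the commands in the question, assuming the official mongo image; note that postStart gives no ordering guarantee relative to mongod becoming ready, hence the retry loop, and on newer image versions the shell binary is mongosh rather than mongo:
containers:
  - name: mongodb
    image: mongo:latest
    lifecycle:
      postStart:
        exec:
          command:
            - /bin/sh
            - -c
            - |
              # wait until mongod accepts connections, then seed the database
              until mongo --eval 'db.runCommand({ ping: 1 })' > /dev/null 2>&1; do
                sleep 2
              done
              mongo ParkInDB --eval 'db.roles.insertMany([{name:"ROLE_USER"},{name:"ROLE_MODERATOR"},{name:"ROLE_ADMIN"}])'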

When you use the official Mongo image, you can specify scripts to run on container startup. The answer accepted here provides some information on how this works.
Kubernetes
When it comes to Kubernetes, there is some pre-work you need to do.
What you can do is write a script like my-script.sh that creates a userDB database and inserts an item into the users collection:
mongo userDB --eval 'db.users.insertOne({username: "admin", password: "12345"})'
and then write a Dockerfile based on the official mongo image, to copy your script into the folder where custom scripts are run on database initialization.
FROM mongo:latest
COPY my-script.sh /docker-entrypoint-initdb.d/
CMD ["mongod"]
Within the same directory containing your script and Dockerfile, build the docker image with
docker build -t dockerhub-username/custom-mongo .
Push the image to docker hub or any repository of your choice, and use it in your deployment yaml.
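For example, assuming Docker Hub as the registry:
docker push dockerhub-username/custom-mongo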
deployment.yaml
...
spec:
  containers:
    - name: mongodb-standalone
      image: dockerhub-username/custom-mongo
      ports:
        - containerPort: 27017
Verify by going to your pod and checking the logs. You will be able to see that mongo has initialized the database that you specified in your script in the /docker-entrypoint-initdb.d/ directory.
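Applied to the database and roles from the original question, my-script.sh might look like this (a sketch; on newer image versions the shell binary is mongosh rather than mongo, and the script only runs on a first start with an empty data directory):
#!/bin/sh
# seed the ParkInDB database with the default roles
mongo ParkInDB --eval 'db.roles.insertMany([
  { name: "ROLE_USER" },
  { name: "ROLE_MODERATOR" },
  { name: "ROLE_ADMIN" }
])'
You can then confirm it ran by checking kubectl logs for your pod.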

Related

How to get a secrets "file" into Google Cloud Build so docker compose can read it?

How can I configure Google Cloud Build so that a docker-compose setup can use a secret file the same way as it does when it is run locally on my machine accessing a file?
My Docker-compose based setup uses a secrets entry to expose an API key to a backend component like this (simplified for example):
services:
  backend:
    build: docker_contexts/backend
    secrets:
      - API_KEY
    environment:
      - API_KEY_PATH=/run/secrets/api_key
secrets:
  API_KEY:
    file: ./secrets/api_key.json
From my understanding, docker-compose places the files from the secrets section under /run/secrets in the container, which is why the target location is hard-coded to /run/secrets/api_key.
I would like to deploy my docker-compose setup on Google Cloud Build with this configuration, but the only examples I've seen in the documentation load the secret as an environment variable. I have tried to provide this secret through Secret Manager and copy it to a local file like this:
steps:
  - name: gcr.io/cloud-builders/gcloud
    # copy to /workspace/secrets so docker-compose can find it
    entrypoint: 'bash'
    args: [ '-c', 'echo $API_KEY > /workspace/secrets/api_key.json' ]
    volumes:
      - name: 'secrets'
        path: /workspace/secrets
    secretEnv: ['API_KEY']
  # running docker-compose
  - name: 'docker/compose:1.29.2'
    args: ['up', '-d']
    volumes:
      - name: 'secrets'
        path: /workspace/secrets
availableSecrets:
  secretManager:
    - versionName: projects/ID/secrets/API_KEY/versions/1
      env: API_KEY
But when I run the job on google cloud build, I get this error message after everything is built: ERROR: for backend Cannot create container for service backend: invalid mount config for type "bind": bind source path does not exist: /workspace/secrets/api_key.json.
Is there a way I can copy the API_KEY environment variable at the cloudbuild.yaml level so it is accessible to the docker-compose level like it is when I run it on my local filesystem?
If you want to have the value of API_KEY taken from Secret Manager and placed into a text file at /workspace/secrets/api_key.json then change your step to this:
- name: gcr.io/cloud-builders/gcloud
  entrypoint: "bash"
  args: ["-c", "mkdir -p /workspace/secrets && echo $$API_KEY > /workspace/secrets/api_key.json"]
  secretEnv: ["API_KEY"]
This will:
Remove the volumes attribute, which is not needed because /workspace is already a volume that persists between steps
Make sure the directory exists before you try to put a file in it
Use the $$ syntax as described in Use secrets from Secret Manager so that it echoes the actual secret to the file
Note this section:
When specifying the secret in the args field, specify it using the environment variable prefixed with $$.
You can double-check that this is working by adding another step:
- name: gcr.io/cloud-builders/gcloud
  entrypoint: "bash"
  args: ["-c", "cat /workspace/secrets/api_key.json"]
This should echo the contents of the file during that build step, allowing you to confirm that:
The previous step read the secret
The previous step wrote the secret to the file
The file was written to a volume that persists across steps
From there you can configure docker-compose to read the contents of that persisted file.
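A minimal sketch of how the compose file from the question can pick that file up, assuming docker-compose runs with /workspace as its working directory (so the relative path resolves to /workspace/secrets/api_key.json):
secrets:
  API_KEY:
    # written by the earlier gcloud step; resolves to /workspace/secrets/api_key.json
    file: ./secrets/api_key.json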

How to recover a standalone MongoDB after an unexpected shutdown on k8s

After an unexpected power failure, the MongoDB service I deployed on k8s could not be restarted normally. The MongoDB logs showed that there was a problem with its data and it could not start.
I did not record the specific error log.
Here is my fix:
First
Change the k8s deployment.yaml config.
Because we want to repair the MongoDB data files, the first step is to get the mongo pod running so that we can run commands inside it. Change the startup command of the container:
containers:
  - name: mongodb
    image: mongo:latest
    command: ["sleep"]
    args: ["infinity"]
    imagePullPolicy: IfNotPresent
    ports:
      # .......
After applying it, the mongo pod should be up and running.
Second
Use the mongod command to repair the data.
kubectl exec -it <YOURPODNAME> -- mongod --dbpath <YOURDBPATH> --repair --directoryperdb
I had to run it with --directoryperdb; if it errors for you, try removing that flag.
If all went well, everything should be fine so far.
Third
Restore the k8s deployment.yaml back to the way it was, then reapply it.
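For completeness, a sketch of the restored container section, assuming the deployment originally relied on the image's default mongod entrypoint (the temporary sleep override is simply removed):
containers:
  - name: mongodb
    image: mongo:latest
    # command/args override removed so the default mongod entrypoint runs again
    imagePullPolicy: IfNotPresent
    ports:
      # .......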
----------
The above is my repair process. It worked for me, so I am recording it here. You can refer to it to fix your MongoDB. Thank you.

Kubernetes Execute Script Before Container Start

I want to execute a script before my container runs.
If I execute the script in the container like this:
containers:
  - name: myservice
    image: myservice.azurecr.io/myservice:1.0.6019912
    volumeMounts:
      - name: secrets-store-inline
        mountPath: "/mnt/secrets-store"
        readOnly: true
    command:
      - '/bin/bash'
      - '-c'
      - 'ls /mnt/secrets-store;'
then that command replaces my entrypoint and the pod exits. How can I execute the command and then start the container afterwards?
A common way to do this is to use Init Containers, but I'm unsure what you're trying to run before the ENTRYPOINT.
You can apply the same volume mounts in the init container(s) if the init work requires changing the state of the mounted file system content.
Another solution may be to run the ENTRYPOINT's command as the last statement in your script, as sketched below.
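A minimal sketch of that second option; /app/start.sh is a hypothetical placeholder for whatever the image's original ENTRYPOINT runs:
command:
  - '/bin/bash'
  - '-c'
  # do the pre-start work, then hand control to the image's normal entrypoint
  # (/app/start.sh is a placeholder for the real entrypoint command)
  - 'ls /mnt/secrets-store && exec /app/start.sh'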

How to run schema scripts after running couchbase via docker compose?

I have a schema script, /data/cb-create.sh, that I have made available on a container volume. When I run docker-compose up, the server is not yet initialized at the time the command is executed, so the commands in the script fail because the server hasn't launched yet. I do not see a "Starting Couchbase Server -- Web UI available at http://<ip>:8091" log line while the .sh script runs to initialize the schema. This is my docker-compose file. How can I sequence it properly?
version: '3'
services:
  couchbase:
    image: couchbase:community-6.0.0
    deploy:
      replicas: 1
    ports:
      - 8091:8091
      - 8092:8092
      - 8093:8093
      - 8094:8094
      - 11210:11210
    volumes:
      - ./:/data
    command: /bin/bash -c "/data/cb-create.sh"
    container_name: couchbase
volumes:
  kafka-data:
First: you should choose either an entrypoint or a command statement.
An option is to write a small bash script that runs the steps in order: start the server, wait for it to come up, then run your schema script. Then specify that bash script as the command, as sketched below.
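A rough sketch of such a wrapper script, assuming the image's normal entrypoint is /entrypoint.sh couchbase-server, the Web UI listens on port 8091, and curl is available in the image (all worth verifying for your image):
#!/bin/bash
# start Couchbase with its normal entrypoint in the background
/entrypoint.sh couchbase-server &

# wait until the Web UI answers on port 8091 before running the schema script
until curl -s http://127.0.0.1:8091 > /dev/null; do
  echo "waiting for couchbase to start..."
  sleep 3
done

# run the schema script once the server is up
/data/cb-create.sh

# keep the container attached to the server process
wait
You would then point the compose command at this wrapper script instead of calling /data/cb-create.sh directly.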

Azure Container Instances - CLI and YAML file giving different outcomes

I'm trying to deploy Mongo onto Azure Container Instances as part of a container group. To do this, I use a Storage Account with a file share to persist the mongo data. It's impossible to mount the volume at the /data/db default location, so I mount it elsewhere and start mongod with the --dbpath flag. This all works fine using the CLI; the full command is below.
However, when I translate all of these commands into my YAML config file it doesn't work. Mongo crashes out with an unknown file or directory error. If I start the container without the --dbpath flag but still mount the volume, I can exec into the running container and see that the volume is there and is attached. I can even manually create folders in the share via the Azure Portal and see them appear in the container.
Documentation and examples are a little thin on the ground, especially YAML based examples. The biggest difference with the container group is having to define a named volume separate from the container which is used by the volumeMounts property. Is it just a syntax error? Are the CLI command and the YAML not equivalent in some way?
CLI Command
az container create \
  --resource-group rsenu-hassPilots-group \
  --name mongo \
  --image mongo \
  --azure-file-volume-account-name <account> \
  --azure-file-volume-account-key "<key>" \
  --azure-file-volume-share-name mongodata \
  --azure-file-volume-mount-path "/data/mongoaz" \
  --ports 27017 \
  --cpu 1 \
  --ip-address public \
  --memory 1.5 \
  --os-type Linux \
  --protocol TCP \
  --command-line "mongod --dbpath=/data/mongoaz"
YAML Config
apiVersion: 2018-10-01
location: uksouth
name: trustNewArtGroup
properties:
  containers:
    - name: mongo
      properties:
        image: mongo:4.2.3
        resources:
          requests:
            cpu: 1
            memoryInGb: 1.5
        ports:
          - port: 27017
        volumeMounts:
          - name: database
            mountPath: /data/azstorage
        environmentVariables:
          - name: 'MONGO_INITDB_DATABASE'
            value: 'trust-new-art-db'
        command:
          - "mongod --dbpath=/data/azstorage"
  osType: Linux
  ipAddress:
    type: Public
    dnsNameLabel: trustnewart
    ports:
      - protocol: tcp
        port: '27017'
  volumes:
    - name: database
      azureFile:
        sharename: mongodata
        storageAccountName: <account>
        storageAccountKey: <key>
tags: null
type: Microsoft.ContainerInstance/containerGroups
With a bit of help from this page in the documentation, I've discovered it was a syntax issue. The correct way to override the entrypoint in a YAML config file is as follows:
command: ['mongod', '--dbpath', '/data/azstorage']
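In context, the container's command section becomes (a sketch; everything else in the YAML above stays the same):
command:
  - mongod
  - --dbpath
  - /data/azstorage
Each argument is its own list item because the list is passed to the container as an exec-style argument vector rather than a shell command line, which is why the original single string was treated as one executable name and failed with an unknown file or directory error.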