Kubernetes: How to run a script as a different user from root

I'm facing an issue executing a CronJob. Below is a snippet of the code:
containers:
- name: ssm1db
  image: anuragh/ubuntu:mycronjob5
  imagePullPolicy: Always
  command:
  - "/bin/sh"
  - "-c"
  - "kubectl exec ssm1db-0 -- bash -c 'whoami; /db2/db2inst1/dba/jobs/dbactivate.sh -d wdp'"
For example, I'm able to execute the command below directly. Here db2inst1 is the user that the script needs to run as:
/bin/su -c ./full_online_backup.sh - db2inst1
But while executing it via kubectl, I get the error below:
/bin/su: /bin/su: cannot execute binary file
command terminated with exit code 126
[root@ssm1db-0 /]#

Related question: How to start crond as a non-root user in a Docker container?
You will encounter permission issues when running crond as a non-root user.
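If the goal is just to have the script inside ssm1db-0 run as db2inst1, one pattern that avoids the "cannot execute binary file" error is to let the bash started by kubectl exec invoke su with an explicit command string. A minimal sketch, reusing the user and path from the question:

kubectl exec ssm1db-0 -- bash -c "su - db2inst1 -c '/db2/db2inst1/dba/jobs/dbactivate.sh -d wdp'"

If you control the target pod's spec, setting securityContext.runAsUser to db2inst1's numeric UID is another way to avoid root entirely, though it changes the user for every process in that container.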

Related

volumes_from tried to find a service despite container: is specified

I am running Win10, WSL and Docker Desktop. I have the following test YML which errors out:
version: '2.3'
services:
  cli:
    image: smartsheet-www
    volumes_from:
      - container:amazeeio-ssh-agent
➜ ~ ✗ docker-compose -f test.yml up
no such service: amazeeio-ssh-agent
Why does it try to find a service when I specified container: ?
The container exists, runs and has a volume.
docker inspect -f "{{ .Config.Volumes }}" amazeeio-ssh-agent
map[/tmp/amazeeio_ssh-agent:{}]
docker exec -it amazeeio-ssh-agent /bin/sh -c 'ls -l /tmp/amazeeio_ssh-agent/'
total 0
srw------- 1 drupal drupal 0 Apr 1 03:54 socket
Removing the volumes_from key and the line that follows it makes the cli service start just fine.
After a bit of searching, I finally found https://github.com/docker/compose/issues/8874 and https://github.com/pygmystack/pygmy-legacy/issues/60#issue-1037009622; this fixes it.
Uncheck the "Use Docker Compose V2" option in Docker Desktop's settings.
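For context, volumes_from was dropped from the Compose V2 specification, which appears to be why the newer CLI tries to resolve the name as a service. A quick sanity check for which generation is handling the command:

docker-compose version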

How to run schema scripts after running couchbase via docker compose?

I have a schema script /data/cb-create.sh that I have made available on a container volume. When I run docker-compose up, the server is not yet initialized at the time the command is executed, so those commands fail because the server hasn't launched. I do not see a Starting Couchbase Server -- Web UI available at http://<ip>:8091 log line while the .sh script that initializes the schema is running. This is my docker-compose file. How can I sequence it properly?
version: '3'
services:
  couchbase:
    image: couchbase:community-6.0.0
    deploy:
      replicas: 1
    ports:
      - 8091:8091
      - 8092:8092
      - 8093:8093
      - 8094:8094
      - 11210:11210
    volumes:
      - ./:/data
    command: /bin/bash -c "/data/cb-create.sh"
    container_name: couchbase
volumes:
  kafka-data:
First: You should choose either an entrypoint or a command statement.
I guess an option is to write a small bash script where you put these commands in order.
Then in the command you specify running that bash script.
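A minimal sketch of such a script (it assumes curl is available in the image and that the stock entrypoint is /entrypoint.sh couchbase-server, as in the official couchbase images):

#!/bin/bash
# start Couchbase Server in the background via the image's own entrypoint
/entrypoint.sh couchbase-server &

# wait until the Web UI answers on 8091 before applying the schema
until curl -sf http://127.0.0.1:8091 > /dev/null; do
  echo "waiting for Couchbase to start..."
  sleep 5
done

/data/cb-create.sh

# keep the container in the foreground
wait

Then point command: at that wrapper script instead of running /data/cb-create.sh directly.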

Write command args in kubernetes deployment

Can anyone help with this, please?
I have a mongo pod assigned with its service. I need to execute some commands while starting the container in the pod.
I found small examples like this:
command: ["printenv"]
args: ["HOSTNAME", "KUBERNETES_PORT"]
But I want to execute these commands while starting the pod:
use ParkInDB
db.roles.insertMany( [ {name :"ROLE_USER"}, {name:"ROLE_MODERATOR"}, {name:"ROLE_ADMIN"} ])
You need to choose one solution:
1- use an init container in the deployment to make changes and execute a command or file
2- use command and args in the deployment yaml
For init containers, visit this page.
For command and args, use this model in your deployment yaml file (a filled-in sketch follows below):
- image:
  name:
  command: ["/bin/sh"]
  args: ["-c", "PUT_YOUR_COMMAND_HERE"]
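Plugged into the question, option 2 could look roughly like the sketch below. It starts mongod itself and assumes the mongo shell is on the image's PATH, so adjust it to however your deployment actually runs the server:

- image: mongo:latest
  name: mongodb
  command: ["/bin/sh"]
  args:
    - "-c"
    - |
      # start the server, wait until it answers, then seed the roles collection
      mongod --bind_ip_all &
      until mongo --eval "db.runCommand({ ping: 1 })" >/dev/null 2>&1; do sleep 2; done
      mongo ParkInDB --eval 'db.roles.insertMany([{name:"ROLE_USER"},{name:"ROLE_MODERATOR"},{name:"ROLE_ADMIN"}])'
      wait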
If you want to run a command right after the container starts or just before it stops, you can use container lifecycle hooks (see the sketch after the link below).
https://kubernetes.io/docs/concepts/containers/container-lifecycle-hooks/
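For the inserts in the question, a postStart hook might look like the sketch below (it assumes the mongo shell is available in the container; postStart runs in parallel with the main process, hence the wait loop):

lifecycle:
  postStart:
    exec:
      command:
        - "/bin/sh"
        - "-c"
        - |
          until mongo --eval "db.runCommand({ ping: 1 })" >/dev/null 2>&1; do sleep 2; done
          mongo ParkInDB --eval 'db.roles.insertMany([{name:"ROLE_USER"},{name:"ROLE_MODERATOR"},{name:"ROLE_ADMIN"}])'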
However, you can also put the commands in a shell script file and edit the MongoDB image as required:
command: ["/bin/sh", "-c", "/usr/src/script.sh"]
You can also edit the yaml with:
args:
  - '-c'
  - |
    ls
    rm -rf sql_scripts
When you use the official Mongo image, you can specify scripts to run on container startup. The answer accepted here provides some information on how this works.
Kubernetes
When it comes to Kubernetes, there is some pre-work you need to do.
What you can do is write a script like my-script.sh that creates a userDB and inserts an item into the users collection:
mongo userDB --eval 'db.users.insertOne({username: "admin", password: "12345"})'
and then write a Dockerfile based on the official mongo image, to copy your script into the folder where custom scripts are run on database initialization.
FROM mongo:latest
COPY my-script.sh /docker-entrypoint-initdb.d/
CMD ["mongod"]
Within the same directory containing your script and dockerfile, build the docker image with
docker build -t dockerhub-username/custom-mongo .
Push the image to docker hub or any repository of your choice, and use it in your deployment yaml.
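For example, with the tag used above:

docker push dockerhub-username/custom-mongo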
deployment.yaml
...
spec:
  containers:
  - name: mongodb-standalone
    image: dockerhub-username/custom-mongo
    ports:
    - containerPort: 27017
Verify by going to your pod and checking the logs. You will be able to see that mongo has initialized the db that you specified in your script in the directory /docker-entrypoint-initdb.d/.
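For example (the pod name is a placeholder; take it from kubectl get pods):

kubectl logs <mongodb-standalone-pod-name>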

Add custom config location to Docker Postgres image preserving its access parameters

I have written a Dockerfile like this:
FROM postgres:11.2-alpine
ADD ./db/postgresql.conf /etc/postgresql/postgresql.conf
CMD ["-c", "config_file=/etc/postgresql/postgresql.conf"]
It just adds custom config location to a generic Postgres image.
Now I have the following docker-compose service description
db:
  build:
    context: .
    dockerfile: ./db/Dockerfile
  environment:
    POSTGRES_PASSWORD: passwordhere
    POSTGRES_USER: user
    POSTGRES_DB: db_name
  ports:
    - 5432:5432
  volumes:
    - ./run/db-data:/var/lib/db/data
The problem is that I can no longer connect to the DB remotely using these credentials if I add this config option. Without that CMD line it works just fine.
If I prepend "postgres" in CMD it has the same effect due to the underlying script prepending it itself.
Provided all the files are where they need to be, I believe the only problem with your setup is that you've omitted an actual executable from the CMD -- specifying just the option. You need to actually run postgres:
CMD ["postgres", "-c", "config_file=/etc/postgresql/postgresql.conf"]
That should work!
EDIT in response to OP's first comment below
First, I did confirm that behavior doesn't change whether "postgres" is in the CMD or not. It's exactly as you said. Onward!
Then I thought there must be a problem with the particular postgresql.conf in use. If we could just figure out what the default file is.. turns out we can!
How to get the existing postgresql.conf out of the postgres image
1. Create docker-compose.yml with the following contents:
version: "3"
services:
db:
image: postgres:11.2-alpine
environment:
- POSTGRES_PASSWORD=passwordhere
- POSTGRES_USER=user
- POSTGRES_DB=db_name
ports:
- 5432:5432
volumes:
- ./run/db-data:/var/lib/db/data
2. Spin up the service using
$ docker-compose run --rm --name=postgres db
3. In another terminal get the location of the file used in this release:
$ docker exec -it postgres psql --dbname=db_name --username=user --command="SHOW config_file"
config_file
------------------------------------------
/var/lib/postgresql/data/postgresql.conf
(1 row)
4. View the contents of default postgresql.conf
$ docker exec -it postgres cat /var/lib/postgresql/data/postgresql.conf
5. Replace local config file
Now all we have to do is replace the local config file ./db/postgresql.conf with the contents of the known-working-state config and modify it as necessary.
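A convenient way to do that copy, assuming the service container from step 2 is still running under the name postgres:

$ docker cp postgres:/var/lib/postgresql/data/postgresql.conf ./db/postgresql.conf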
Database objects are only created once!
Database objects are only created once by the postgres container (source). So when developing the database parameters we have to remove them to make sure we're in a clean state.
Here's a nuclear (be careful!) option to
(1) force-remove all Docker containers, and then
(2) remove all Docker volumes not attached to containers:
$ docker rm $(docker ps -a -q) -f && docker volume prune -f
So now we can be sure to start from a clean state!
Final setup
Let's bring our Dockerfile back into the picture (just like you have in the question).
docker-compose.yml
version: "3"
services:
db:
build:
context: .
dockerfile: ./db/Dockerfile
environment:
- POSTGRES_PASSWORD=passwordhere
- POSTGRES_USER=user
- POSTGRES_DB=db_name
ports:
- 5432:5432
volumes:
- ./run/db-data:/var/lib/db/data
Connect to the db
Now all we have to do is build from a clean state.
# ensure all volumes are deleted (see above)
$ docker-compose build
$ docker-compose run --rm --name=postgres db
We can now (still) connect to the database:
$ docker exec -it postgres psql --dbname=db_name --username=user --command="SELECT COUNT(1) FROM pg_database WHERE datname='db_name'"
Finally, we can edit postgresql.conf from a known working state.
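To double-check that remote connections with those credentials work again, connecting from the host through the published port is a quick test (assuming a psql client on the host):

$ psql -h localhost -p 5432 -U user db_name -c "SELECT 1"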
As per this other discussion, your CMD command only has arguments and is missing a command. Try:
CMD ["postgres", "-c", "config_file=/etc/postgresql/postgresql.conf"]

docker-compose - unable to attach to containers

Using the docker-compose.yml file below, if I run "docker-compose up" or "docker-compose up -d", I see both containers with status Exited. When I run docker restart <postgres-containerId> it comes up and keeps running, but when I run docker restart <java8-containerId> it restarts and exits again.
Could you please suggest what parameter I need to specify to keep these containers up and running after docker-compose up? Also, how do I attach to the java container? I tried docker attach <java8-containerId> but was not able to attach.
docker-compose.yml file -
postgres:
  image: postgres:9.4
  ports:
    - "5430:5432"
javaapp:
  image: java8:latest
  volumes:
    - /pgm:/pgm
  working_dir: /pgm
  links:
    - postgres
  command: /bin/bash
docker-compose ps results -
        Name                    Command               State   Ports
---------------------------------------------------------------------
compose_javaapp_1    /bin/bash                        Exit 0
compose_postgres_1   /docker-entrypoint.sh postgres   Exit 0
To see available containers:
docker ps -a
To open container shell:
docker exec -it <container-name> /bin/bash
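If the goal is for the javaapp container to stay up after docker-compose up so it can be attached to, a common fix is to give its bash a TTY and keep stdin open. A sketch based on the compose file in the question:

javaapp:
  image: java8:latest
  volumes:
    - /pgm:/pgm
  working_dir: /pgm
  links:
    - postgres
  command: /bin/bash
  tty: true
  stdin_open: true

With that in place, docker attach <java8-containerId> should drop you into the shell (detach with Ctrl-P Ctrl-Q), or use docker exec -it as above.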