How can one prevent variable substitution/expansion in an AWS Fargate container definition command - amazon-ecs

Locally, when running Docker with docker run, I pass some arguments like:
docker run -p 8080:80 -e "SERVICE_B_URL=somehost.co.uk" -d mynginx:latest /bin/sh -c "envsubst '\${SERVICE_B_URL}' < /etc/nginx/conf.d/default.template > /etc/nginx/conf.d/default.conf && exec nginx -g 'daemon off;'"
This works fine. In my /etc/nginx/conf.d/default.conf the string ${SERVICE_B_URL} is replaced with somehost.co.uk.
When running on AWS Fargate with a definition like:
"environment": [
{
"name": "SERVICE_B_URL",
"value": "someotherhost.co.uk"
}
],
"command": [
"/bin/sh",
"-c",
"envsubst '\\${SERVICE_B_URL}' < /etc/nginx/conf.d/default.template > /etc/nginx/conf.d/default.conf && exec nginx -g 'daemon off;'"
],
The \\ was to escape the \ in the JSON file.
When trying to run, the container exits with an error because NGINX is seeing the literal string ${SERVICE_B_URL}. When I inspect the container and see the command AWS used to start the container it is:
Command ["/bin/sh","-c","envsubst '\\' < /etc/nginx/conf.d/default.template > /etc/nginx/conf.d/default.conf && exec nginx -g 'daemon off;'"]
Notice that Fargate has attempted to expand the string '\\${SERVICE_B_URL}' before supplying it as a command to docker run. My intention is to specify that as a literal string.
Is there a way to escape this or stop the expansion? I've tried things like '\\\\${SERVICE_B_URL}' -> '\\'.
Footnote: if you are wondering why I pass '\${SERVICE_B_URL}' to envsubst instead of just using:
docker run -p 8080:80 -e "SERVICE_B_URL=somehost.co.uk" -d mynginx:latest /bin/sh -c "envsubst < /etc/nginx/conf.d/default.template > /etc/nginx/conf.d/default.conf && exec nginx -g 'daemon off;'"
The reason is that the template being substituted contains other NGINX configuration that uses the same $ variable syntax. To stop those from being replaced by envsubst, I explicitly name the one variable I want replaced. Running locally with docker run, it works like a charm...
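To make the footnote concrete, a default.template along these lines illustrates the problem (a hypothetical sketch, not the actual file from the question): only ${SERVICE_B_URL} should be substituted, while NGINX's own $host and $remote_addr variables must survive untouched. Run through envsubst with no argument, the unset $host and $remote_addr would be replaced with empty strings and break the config.
server {
    listen 80;
    location / {
        proxy_pass http://${SERVICE_B_URL};
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}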

I've ended up simplifying this by making the command we would pass to docker run part of the Dockerfile itself, using CMD, e.g.:
CMD ["/bin/sh","-c","envsubst '\\${SERVICE_B_URL}' < /etc/nginx/conf.d/default.template > /etc/nginx/conf.d/default.conf && exec nginx -g 'daemon off;'"]
Now we can remove the command configuration from the JSON file for Fargate.
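For completeness, a minimal Dockerfile along those lines might look like this (a sketch; the base image tag and template path are assumptions drawn from the examples above, not from the original post):
FROM nginx:latest
COPY default.template /etc/nginx/conf.d/default.template
CMD ["/bin/sh","-c","envsubst '\\${SERVICE_B_URL}' < /etc/nginx/conf.d/default.template > /etc/nginx/conf.d/default.conf && exec nginx -g 'daemon off;'"]
The environment block in the Fargate task definition stays as it is; only the command key is removed.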

Related

How to run a command in a container using kubectl exec that uses environment variables from the container?

I'm trying to write a script that runs some commands inside a container using kubectl exec. I'd like to use the environment variables that exist inside the container, but I'm struggling to figure out how to prevent my local shell from evaluating the variable while still having it evaluated in the container.
This was my first try, but $MONGODB_ROOT_PASSWORD gets evaluated by my local shell instead of inside the container:
kubectl -n enterprise exec mycontainer -- mongodump --username root --password $MONGODB_ROOT_PASSWORD --out /dump
I then tried this, but had the same issue: the pipe was evaluated locally instead of in the container:
kubectl -n enterprise exec mycontainer -- echo 'mongodump --username root --password $MONGODB_ROOT_PASSWORD --out /dump' | sh
Is there a way to do this with kubectl exec?
You need a sh -c in there, like exec -- sh -c 'whatever $PASSWORD'.
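Applied to the example above, that would be (note the single quotes, which stop the local shell from expanding the variable and leave it for the shell inside the container):
kubectl -n enterprise exec mycontainer -- sh -c 'mongodump --username root --password "$MONGODB_ROOT_PASSWORD" --out /dump'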

How to run multiple commands with gosu in Kubernetes job

I am defining a Kubernetes Job to run a rake task but am stuck on how to write the command...
I am new to K8s and trying to run a Rails application in K8s.
In my Rails app Dockerfile, I created a user, copied the code to /home/abc, installed rvm and rails as that user, and also specified an entrypoint and command:
ENTRYPOINT ["/home/abc/docker-entrypoint.sh"]
CMD bash -l -c "cd /home/abc && rvm use 2.2.10 --default && rake db:migrate && exec /usr/bin/supervisord -c config/supervisord.conf"
In docker-entrypoint.sh, the last command is
exec gosu abc "$@"
The goal is that, at the end, it gosus to user abc, runs the db migration, and starts the server through supervisord. It works, although I don't know whether it is good practice or not...
Now I would like to run another rake task for some purpose.
First, I tried running it with a kubectl exec command:
kubectl exec my-app-deployment-xxxx -- gosu abc bash -l -c 'cd /home/abc && rvm use 2.2.10 --default && rake app:init:test_task'
It worked, but it requires knowing the pod ID, which is dynamic, so I tried to create a K8s Job and specify the command:
containers:
  - name: my-app
    image: my-app:v0.2
    command:
      - "gosu"
      - "abc"
      - "bash -l -c 'cd /home/abc && rvm use 2.2.10 --default && rake app:init:test_task'"
I expected the job to complete successfully, but it failed; the error from kubectl logs on the job's pod looks like:
error: exec: "bash -l -c 'cd /home/abc && rvm use 2.2.10 --default && rake app:init:test_task'": stat bash -l -c cd /home/abc && rvm use 2.2.10 --default && rake app:init:test_task': no such file or directory
I think the problem is how I've written the command array to run multiple commands with gosu...
Thanks for your help!
Since gosu takes the user name and the command to run as its arguments, this is one command rather than three separate ones. Given that there can only be a single entrypoint in each container, you can try running it as follows:
containers:
  - name: my-app
    image: my-app:v0.2
    command: ["/bin/sh", "-c", "gosu username bash -l -c 'cd /home/abc && rvm use 2.2.10 --default && rake app:init:test_task'"]
Notice that you have to spawn a new shell here, because the image's entrypoint is replaced when you set command in the container spec in Kubernetes.
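An alternative sketch that avoids the extra shell: Kubernetes passes command to exec without any shell parsing, so each argument must be its own array element. The original attempt failed because "bash -l -c '...'" was supplied as a single element and Kubernetes looked for an executable literally named that. Split out, it would be:
containers:
  - name: my-app
    image: my-app:v0.2
    command: ["gosu", "abc", "bash", "-l", "-c", "cd /home/abc && rvm use 2.2.10 --default && rake app:init:test_task"]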

docker varnish cmd error - no such file or directory

I'm trying to get a Varnish container running as part of a multicontainer Docker environment.
I'm using https://github.com/newsdev/docker-varnish as a base.
My Dockerfile looks like:
FROM newsdev/varnish:4.1.0
COPY start-varnishd.sh /usr/local/bin/start-varnishd
ENV VARNISH_VCL_PATH /etc/varnish/default.vcl
ENV VARNISH_PORT 80
ENV VARNISH_MEMORY 64m
EXPOSE 80
CMD [ "exec /usr/local/sbin/varnishd -j unix,user=varnishd -F -f /etc/varnish/default.vcl -s malloc,64m -a 0.0.0.0:80 -p http_req_hdr_len=16384 -p http_resp_hdr_len=16384" ]
When I run this as part of a docker-compose setup, I get:
ERROR: for eventsapi_varnish_1 Cannot start service varnish: oci
runtime error: container_linux.go:262: starting container process
caused "exec: \"exec /usr/local/sbin/varnishd -j unix,user=varnishd -F
-f /etc/varnish/default.vcl -s malloc,64m -a 0.0.0.0:80 -p http_req_hdr_len=16384 -p http_resp_hdr_len=16384\": stat exec
/usr/local/sbin/varnishd -j unix,user=varnishd -F -f
/etc/varnish/default.vcl -s malloc,64m -a 0.0.0.0:80 -p
http_req_hdr_len=16384 -p http_resp_hdr_len=16384: no such file or
directory"
I get the same if I try
CMD ["start-varnishd"]
(as it is in the base newsdev/docker-varnish)
or
CMD [/usr/local/bin/start-varnishd]
But if I run a bash shell on the container directly:
docker run -t -i eventsapi_varnish /bin/bash
and then run the varnishd command from there, varnish starts up fine (and starts complaining that it can't find the web container, obviously).
What am I doing wrong? What file can't it find? Again looking around the running container directly, it seems that Varnish is where it thinks it should be, the VCL file is where it thinks it should be... what's stopping it running from within docker-compose?
Thanks!
I didn't get to the bottom of why I was getting this error, but "fixed" it by using the (more recent?) fork: https://hub.docker.com/r/tripviss/varnish/. My Dockerfile is now just:
FROM tripviss/varnish:5.1
COPY default.vcl /usr/local/etc/varnish/
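For what it's worth, the original error is consistent with how exec-form CMD works: the JSON array is passed straight to exec with no shell, so Docker looked for a single binary literally named "exec /usr/local/sbin/varnishd ...". A hedged fix for the original Dockerfile would be the shell form, which wraps the line in /bin/sh -c and makes the leading exec meaningful:
CMD exec /usr/local/sbin/varnishd -j unix,user=varnishd -F -f /etc/varnish/default.vcl -s malloc,64m -a 0.0.0.0:80 -p http_req_hdr_len=16384 -p http_resp_hdr_len=16384
(Why CMD ["start-varnishd"] also failed is a separate question; a missing executable bit or CRLF line endings on the copied script are common culprits, though that's speculation here.)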

Executing multiple commands (or a shell script) in a Kubernetes pod

I'm writing a shell script which needs to log into a Kubernetes pod and execute a series of commands there.
Below is my sample_script.sh:
kubectl exec octavia-api-worker-pod-test -c octavia-api bash
unset http_proxy https_proxy
mv /usr/local/etc/octavia/octavia.conf /usr/local/etc/octavia/octavia.conf-orig
/usr/local/bin/octavia-db-manage --config-file /usr/local/etc/octavia/octavia.conf upgrade head
After running this script, I'm not getting any output.
Any help will be greatly appreciated
Are you running all of these as a single command? First of all, there's no ; or && between those commands, so if you paste this as a multi-line script into your terminal, the later lines will likely get executed locally.
Second, to tell bash to execute something, you need bash -c "command".
Try running this:
$ kubectl exec POD_NAME -- bash -c "date && echo 1"
Wed Apr 19 19:29:25 UTC 2017
1
You can make it multiline like this:
$ kubectl exec POD_NAME -- bash -c "date && \
echo 1 && \
echo 2"
The following should work
kubectl -it exec podname -- bash -c "ls && ls"
bin dev etc home proc root run sys tmp usr var bin
dev etc home proc root run sys tmp usr var
If the above command doesn't work, try replacing bash with one of the following: /bin/bash, sh, or /bin/sh.
-t can solve your task. For example, here I run a few commands:
kubectl get pods | grep nginx | cut -f1 -d' ' | \
while read pod; do
  echo "$pod writing:"
  kubectl exec -t "$pod" -- bash -c \
    "dd if=/dev/zero of=/feeds/test.bin bs=260K count=4 2>&1 | grep copi | cut -d, -f4; \
    a=\$SECONDS; echo -ne 'reading:'; cat /feeds/test.bin >/dev/null; \
    let a=SECONDS-a; \
    echo \$a sec"
done
(Note the escaped \$SECONDS and \$a, so that they are expanded inside the pod rather than by the local shell.)
P.S. Your example would be:
kubectl exec -t octavia-api-worker-pod-test -c octavia-api -- bash -c "unset http_proxy https_proxy; mv /usr/local/etc/octavia/octavia.conf /usr/local/etc/octavia/octavia.conf-orig; /usr/local/bin/octavia-db-manage --config-file /usr/local/etc/octavia/octavia.conf upgrade head"
(upgrade head belongs to octavia-db-manage as its subcommand, not as separate shell commands.)
Posting here because google search still brings you to this post...
I'd like to throw out using a HEREDOC as an additional possibility.
kubectl exec -i --tty=false PODNAME -- bash << EOF
echo "insert all your commands here."
echo "this subprocess will even pickup any variables you have in"
echo "the shell script that is calling this"
EOF
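Conversely, if you want variables expanded inside the pod rather than locally, quote the heredoc delimiter; that is standard shell behavior, shown here with the earlier mongodump example:
kubectl exec -i --tty=false PODNAME -- bash << 'EOF'
mongodump --username root --password "$MONGODB_ROOT_PASSWORD" --out /dump
EOF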

Docker mongodb config file

There is a way to link the /data/db directory of the container to your localhost, but I cannot find anything about the configuration file. How do I link /etc/mongo.conf to something on my local file system? Or maybe some other approach is used. Please share your experience.
I'm using the official mongodb 3.4 docker image. Since mongod doesn't read a config file by default, this is how I start the mongod service:
docker run -d --name mongodb-test -p 37017:27017 \
-v /home/sa/data/mongod.conf:/etc/mongod.conf \
-v /home/sa/data/db:/data/db mongo --config /etc/mongod.conf
removing -d will show you the initialization of the container
Using a docker-compose.yml:
version: '3'
services:
  mongodb_server:
    container_name: mongodb_server
    image: mongo:3.4
    env_file: './dev.env'
    command:
      - '--auth'
      - '-f'
      - '/etc/mongod.conf'
    volumes:
      - '/home/sa/data/mongod.conf:/etc/mongod.conf'
      - '/home/sa/data/db:/data/db'
    ports:
      - '37017:27017'
then
docker-compose up
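For reference, a minimal mongod.conf that would work with either invocation might look like this (MongoDB 3.x uses the YAML config format; these values are illustrative, not from the original answer):
storage:
  dbPath: /data/db
net:
  port: 27017
  bindIp: 0.0.0.0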
When you run a docker container like this:
docker run -d -v /var/lib/mongo:/data/db \
-v /home/user/mongo.conf:/etc/mongo.conf -p port:port image_name
/var/lib/mongo is the mongo data folder on the host, and /data/db is the corresponding folder inside the docker container.
I merely wanted to know the command used to specify a config for mongo through the docker run command.
First you want to specify the volume flag with -v to map a file or directory from the host to the container. So if you had a config file located at /home/ubuntu/ and wanted to place it within the /etc/ folder of the container you would specify it with the following:
-v /home/ubuntu/mongod.conf:/etc/mongod.conf
Then specify the command for mongo to read the config file after the image like so:
mongo -f /etc/mongod.conf
If you put it all together, you'll get something like this:
docker run -d --net="host" --name mongo-host -v /home/ubuntu/mongod.conf:/etc/mongod.conf mongo -f /etc/mongod.conf
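As a quick sanity check (my addition, not part of the original answer), mongod echoes its parsed options at startup, so you can confirm the config file was picked up with:
docker logs mongo-host | grep -i config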
For some reason I have to use MongoDB version 3.0.1.
As of now (2016-09-13), this is what I found:
#first step: run mongo 3.0.1 without conf
docker run --name testmongo -p 27017:27017 -d mongo:3.0.1
#sec step:
docker exec -it testmongo cat /entrypoint.sh
#!/bin/bash
set -e

if [ "${1:0:1}" = '-' ]; then
    set -- mongod "$@"
fi

if [ "$1" = 'mongod' ]; then
    chown -R mongodb /data/db
    numa='numactl --interleave=all'
    if $numa true &> /dev/null; then
        set -- $numa "$@"
    fi
    exec gosu mongodb "$@"
fi

exec "$@"
From this entrypoint you can see there are two ways to start the mongod service.
What I tried:
docker run --name mongo -d -v your/host/dir:/container/dir mongo:3.0.1 -f /container/dir/mongod.conf
The trailing -f is a mongod parameter; you can also use --config instead.
Make sure the path your/host/dir exists and that the file mongod.conf is in it.
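The second way the entrypoint supports (my reading of the script above, so treat this as a sketch) is to spell out the full mongod command explicitly; since $1 is then mongod, the entrypoint still performs the chown and gosu steps:
docker run --name mongo -d -v your/host/dir:/container/dir mongo:3.0.1 mongod --config /container/dir/mongod.conf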