How do you set encrypted Travis env variables in docker? - deployment

In writing my deployment script, I want to set a git checkout URL that I want to keep secret. I want to create a Travis job to test my playbook. The easiest approach I can think of right now is to have my global_vars look up an env variable, say DEPLOYMENT_GIT_URL. I would then encrypt this env variable in Travis and pass it to docker exec when I build the docker image to test my playbook against.
Question:
Can I pass my encrypted Travis variable to the instance via docker exec? Something like sudo docker exec ... export DEPLOYMENT_GIT_URL=$TRAVIS_ENV ansible-playbook -i ....
While this seems the simplest way to do it, I'd appreciate comments on this method.
Thanks

You can pass variables directly to Ansible. If you want nested or complex variables, use a JSON string.
docker exec <container> ansible-playbook -e GITURL="$GITURL" whatever.yml
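For instance, a rough sketch of passing a nested value as JSON (the deployment/git_url variable names here are just illustrative):
docker exec <container> ansible-playbook -e '{"deployment": {"git_url": "https://example.com/repo.git"}}' whatever.yml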
As gogstad mentions, Ansible has the facility to manage secrets via ansible-vault. Unless this URL is information that Travis needs to be the source of, it might be easier to store it directly in Ansible. Otherwise, openssl can manage the secret:
secret=$(echo -n "yourdata" | openssl enc -e -aes-256-cbc -a -k 'passpasspass')
echo "$secret" | openssl enc -d -aes-256-cbc -a -k 'passpasspass'
If you really want to pass the data with an environment variable, you would need to do that at container creation, with docker run and -e:
docker run -e GITURL="what" busybox sh -c 'echo $GITURL'
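Tying it back to the original question, a minimal sketch, assuming DEPLOYMENT_GIT_URL is the encrypted variable defined in Travis, and my-ansible-image, inventory.ini and site.yml are placeholder names:
# Travis has already exported the decrypted variable, so the host shell expands it here
docker run -e DEPLOYMENT_GIT_URL="$DEPLOYMENT_GIT_URL" my-ansible-image \
  ansible-playbook -i inventory.ini site.yml
Inside the playbook (or group_vars) the value can then be read with something like deployment_git_url: "{{ lookup('env', 'DEPLOYMENT_GIT_URL') }}".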

Related

Pass Mongodb Atlas Operator env vars from travis to kubernetes deploy.sh

I am trying to adapt the quickstart guide for Mongo Atlas Operator here Atlas Operator Quickstart to use secure env variables set in TravisCI.
I want to put the quickstart scripts into my deploy.sh, which is triggered from my travis.yaml file.
My travis.yaml already sets one global variable like this:
env:
  global:
    - SHA=$(git rev-parse HEAD)
Which is consumed by the deploy.sh file like this:
docker build -t mydocker/k8s-client:latest -t mydocker/k8s-client:$SHA -f ./client/Dockerfile ./client
but I'm not sure how to pass vars set in the Environment variables bit in the travis Settings to deploy.sh
This is the section of script I want to pass variables to:
kubectl create secret generic mongodb-atlas-operator-api-key \
--from-literal="orgId=$MY_ORG_ID" \
--from-literal="publicApiKey=$MY_PUBLIC_API_KEY" \
--from-literal="privateApiKey=$MY_PRIVATE_API_KEY" \
-n mongodb-atlas-system
I'm assuming the --from-literal syntax will just put in the literal string "orgId=$MY_ORG_ID", for example, and that I need to use pipe syntax instead - but can I do something along these lines?
echo "$MY_ORG_ID" | kubectl create secret generic mongodb-atlas-operator-api-key --orgId-stdin
Or do I need to put something in my travis.yaml before_install script?
Looks like the echo approach is fine; I've found a similar use case to yours, have a look here.
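For reference, a rough sketch of what this could look like in deploy.sh, assuming MY_ORG_ID, MY_PUBLIC_API_KEY and MY_PRIVATE_API_KEY are defined in the repository's Settings (Travis exports Settings variables into the build environment, so the shell expands them inside the double quotes when deploy.sh runs):
kubectl create secret generic mongodb-atlas-operator-api-key \
  --namespace=mongodb-atlas-system \
  --from-literal="orgId=$MY_ORG_ID" \
  --from-literal="publicApiKey=$MY_PUBLIC_API_KEY" \
  --from-literal="privateApiKey=$MY_PRIVATE_API_KEY"
If you prefer the piping idea, it works with kubectl's --from-file and /dev/stdin rather than a dedicated flag (the secret name my-private-key here is just illustrative):
echo -n "$MY_PRIVATE_API_KEY" | kubectl create secret generic my-private-key \
  --namespace=mongodb-atlas-system \
  --from-file=privateApiKey=/dev/stdin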

Bash script from a BAT file not running after connecting to a kubectl pod in Google Cloud Shell editor

For my project, I have to connect to a postgres Database in Google Cloud Shell using a series of commands:
gcloud config set project <project-name>
gcloud auth activate-service-account <keyname>@<project-name>.iam.gserviceaccount.com --key-file=<filename>.json
gcloud container clusters get-credentials banting --region <region> --project <project>
kubectl get pods -n <node>
kubectl exec -it <pod-name> -n <node> bash
apt-get update
apt install postgresql postgresql-contrib
psql -h <hostname> -p <port> -d <database> -U <userId>
I am a beginner at this and until now have just been running the scripts provided to me by copy-pasting.
But to make things easier, I have created a .bat file in the Shell editor with all the above commands and tried to run it using bash <filename>
But once the kubectl exec -it <pod-name> -n <node> bash command runs and a new shell prompt opens, as below, the rest of the commands do not run.
Defaulted container "<container>" out of: <node>, istio-proxy, istio-init (init)
root@<pod-name>:/#
So how can I make the shell run the rest of these scripts from the .bat file:
apt-get update
apt install postgresql postgresql-contrib
psql -h <hostname> -p <port> -d <database> -U <userId>
Cloud Shell is a Linux instance and defaults to the Bash shell.
BAT commonly refers to Windows|DOS batch files.
On Linux, shell scripts are generally .sh.
Your script needs to be revised in order to pass the commands intended for the kubectl exec command to the Pod and not to the current script.
You can try (!) the following. It creates a Bash (sub)shell on the Pod and runs the commands listed after -c in it:
gcloud config set project <project-name>
gcloud auth activate-service-account <keyname>@<project-name>.iam.gserviceaccount.com \
  --key-file=<filename>.json
gcloud container clusters get-credentials banting \
  --region <region> \
  --project <project>
kubectl get pods -n <node>
kubectl exec -it <pod-name> -n <node> -- bash -c "apt-get update && apt install postgresql postgresql-contrib && psql -h <hostname> -p <port> -d <database> -U <userId>"
However, I have some feedback|recommendations:
It's unclear whether even this approach will work because you're running psql but doing nothing with it. In theory, I think you could then pass a script to the psql command too (see the sketch after these notes), but then your script is becoming very janky.
It is not considered good practice to install software in containers as you're doing. The recommendation is to create the image that you want to run beforehand and use that. It is recommended that containers be immutable.
I encourage you to use long flags when you write scripts as short flags (-n) can be confusing whereas --namespace= is more clear (IMO). Yes, these take longer to type but your script is clearer as a result. When you're hacking on the command-line, short flags are fine.
I encourage you to not use gcloud config set e.g. gcloud config set project ${PROJECT}. This sets global values. And its use is confusing because subsequent commands use the values implicitly. Interestingly, you provide a good example of why this can be challenging. Your subsequent command gcloud container clusters get-credentials --project=${PROJECT} explicitly uses the --project flag (this is good) even though you've already implicitly set the value for project using gcloud config set project.
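For completeness, a hedged sketch of the "pass a script to psql" idea mentioned above, assuming a local queries.sql file and that the psql client is already present in the container image:
kubectl exec -i <pod-name> --namespace=<node> -- \
  psql -h <hostname> -p <port> -d <database> -U <userId> -f - < queries.sql
psql's -f - reads the SQL from standard input, which kubectl exec -i forwards from the local file.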

How do I send a command to a remote system via ssh with concourse

I need to start a Java REST server, which lives on an Ubuntu 18.04 machine, with Concourse. The version of Concourse my company uses is 5.5.11. The server code is written in Java, so a simple java -jar <uber.jar> suffices from the command line (see below). In production, I will not have this simple luxury, hence my question.
I have an scp command working that copies the .jar from concourse to the target Ubuntu machine:
scp -i /tmp/key.p8 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null ./${NEW_DIR}/${ARTIFACT_NAME}.${ARTIFACT_FILE_TYPE} ${SRV_ACCOUNT_USER}@${JAVA_VM_HOST}:/var/www
Note that my private key is passed with -i and I can confirm that is working.
I followed this other SO Q&A that seemed to be promising: Getting ssh to execute a command in the background on target machine, but after trying a few permutations of the suggested solution and other answers, I still don't have my rest service kicked off.
I've tried a few permutations of this line in my concourse script:
ssh -f -i /tmp/pvt_key1.p8 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null ${SRV_ACCOUNT_USER}@${JAVA_VM_HOST} "bash -c 'nohup java -jar /var/www/${ARTIFACT_NAME}.${ARTIFACT_FILE_TYPE} -c \"/opt/testcerts/clientkeystore\" -w \"password\" > /dev/null 2>&1 &'"
I've tried with and without the -f and -t switches in ssh, with and without the file stream redirection, with and without nohup and the Linux background ('&') command and various ways to escape the quotes.
At the bash prompt, this line successfully starts my server. The two switches are needed to point to the certificate and provide the password:
java -jar rest-service.jar -c "/opt/certificates/clientkeystore" -w "password"
I really think this is possible to do in Concourse, but I'm stuck at this point.
After a lot of trial and error, it seems I needed to do this:
ssh -f -i /tmp/pvt_key1.p8 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null ${SRV_ACCOUNT_USER}@${JAVA_VM_HOST} "bash -c 'sudo java -jar /var/www/${ARTIFACT_NAME}.${ARTIFACT_FILE_TYPE} -c \"/path/to/my/certificate\" -w \"password\" > /var/www/log.txt 2>&1 &'"
The key was that I was missing the 'sudo' portion of the command. Using nohup, as opposed to putting in a Linux bash background indicator ('&'), seems to give me an error in the pipeline. This works for me, but others are welcome to post responses with better answers or methods that might be a better practice.
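Not part of the original pipeline, but as a sanity check you could follow this with a second ssh call that verifies the process actually started, e.g. with pgrep:
ssh -i /tmp/pvt_key1.p8 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null \
  ${SRV_ACCOUNT_USER}@${JAVA_VM_HOST} "pgrep -f ${ARTIFACT_NAME}.${ARTIFACT_FILE_TYPE}"
pgrep exits non-zero when no matching process is found, so if this is the last command in the task script, the Concourse task fails when the server did not come up.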

How do I handle passwords and dockerfiles?

I've created an image for docker which hosts a postgresql server. In the Dockerfile, I set the USER, and I pass a constant password into a RUN of psql:
USER postgres
RUN /etc/init.d/postgresql start && psql --command "CREATE USER docker WITH SUPERUSER PASSWORD 'docker';" && createdb -O docker docker
Ideally either before or after calling 'docker run' on this image, I'd like the caller to have to input these details into the command line, so that I don't have to store them anywhere.
I'm not really sure how to go about this. Does docker have any support for reading stdin into an environment variable? Or perhaps there's a better way of handling this all together?
At build time
You can use build arguments in your Dockerfile:
ARG password=defaultPassword
USER postgres
RUN /etc/init.d/postgresql start && psql --command "CREATE USER docker WITH SUPERUSER PASSWORD '$password';" && createdb -O docker docker
Then build with:
$ docker build --build-arg password=superSecretPassword .
At run time
For setting the password at runtime, you can use an environment variable (ENV) that you can evaluate in an entrypoint script (ENTRYPOINT):
ENV PASSWORD=defaultPassword
ADD entrypoint.sh /docker-entrypoint.sh
USER postgres
ENTRYPOINT ["/docker-entrypoint.sh"]
CMD ["postgres"]
Within the entrypoint script, you can then create a new user with the given password as soon as the container starts:
#!/bin/bash
# Start a temporary, local-only postgres instance
pg_ctl -D /var/lib/postgresql/data \
  -o "-c listen_addresses='localhost'" \
  -w start
# Create the user with the password passed in via the PASSWORD environment variable
psql --command "CREATE USER docker WITH SUPERUSER PASSWORD '$PASSWORD';"
# Stop the temporary instance again
pg_ctl -D /var/lib/postgresql/data -m fast -w stop
# Hand over to the CMD (e.g. "postgres")
exec "$@"
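With that in place, the password can be supplied when the container is started; a minimal sketch (the image name my-postgres is hypothetical):
docker build -t my-postgres .
docker run -e PASSWORD=superSecretPassword my-postgres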
You can also have a look at the Dockerfile and entrypoint script of the official postgres image, from which I've borrowed most of the code in this answer.
A note on security
Storing secrets like passwords in environment variables (both build and run time) is not incredibly secure (unfortunately, to my knowledge, Docker does not really offer any better solution for this, right now). An interesting discussion on this topic can be found in this question.
You could use an environment variable in your Dockerfile and override the default value when you call docker run, using the -e or --env argument.
You will also need to amend the init script referenced by the CMD instruction to run the psql command on startup.

Passing variable from container start to file

I have the following lines in a Dockerfile where I want to set a value in a config file to a default before the application starts up at the end, while optionally allowing it to be overridden with the -e option when starting the container.
I am trying to do this using Docker's ENV command:
ENV CONFIG_VALUE default_value
RUN sed -i 's/CONFIG_VALUE/'"$CONFIG_VALUE"'/g' CONFIG_FILE
CMD command_to_start_app
I have the string CONFIG_VALUE explicitly in the file CONFIG_FILE and the default value from the Dockerfile gets correctly substituted. However, when I run the container with the added -e CONFIG_VALUE=100 the substitution is not carried out, the default value set in the Dockerfile is kept.
When I do
docker exec -i -t container_name bash
and echo $CONFIG_VALUE inside the container the environment variable does contain the desired value 100.
Instructions in the Dockerfile are evaluated line-by-line when you do docker build and are not re-evaluated at run-time.
You can still do this however by using an entrypoint script, which will be evaluated at run-time after any environment variables have been set.
For example, you can define the following entrypoint.sh script:
#!/bin/bash
sed -i 's/CONFIG_VALUE/'"$CONFIG_VALUE"'/g' CONFIG_FILE
exec "$#"
The exec "$#" will execute any CMD or command that is set.
Add it to the Dockerfile e.g:
COPY entrypoint.sh /
RUN chmod +x /entrypoint.sh
ENTRYPOINT ["/entrypoint.sh"]
Note that if you have an existing entrypoint, you will need to merge it with this one - you can only have one entrypoint.
Now you should find that the environment variable is respected, i.e.:
docker run -e CONFIG_VALUE=100 container_name cat CONFIG_FILE
Should work as expected.
That shouldn't be possible in a Dockerfile: those instructions are static, for making an image.
If you need runtime instructions when launching a container, you should put them in a script called by the CMD directive.
In other words, the sed would take place in a script that the CMD calls. When doing the docker run, that script would have access to the environment variable set just before said docker run.
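A minimal sketch of that CMD-script approach, reusing the placeholders from the question (the file name start.sh is hypothetical):
#!/bin/bash
# start.sh: substitute the value at container start, then launch the app
sed -i 's/CONFIG_VALUE/'"$CONFIG_VALUE"'/g' CONFIG_FILE
exec command_to_start_app
And in the Dockerfile:
COPY start.sh /start.sh
RUN chmod +x /start.sh
CMD ["/start.sh"]
Functionally this is the same idea as the entrypoint script above; the substitution just hangs off CMD instead of ENTRYPOINT.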