Pass MongoDB Atlas Operator env vars from Travis to Kubernetes deploy.sh - kubernetes

I am trying to adapt the MongoDB Atlas Operator quickstart guide (Atlas Operator Quickstart) to use secure env variables set in TravisCI.
I want to put the quickstart scripts into my deploy.sh, which is triggered from my travis.yaml file.
My travis.yaml already sets one global variable like this:
env:
  global:
    - SHA=$(git rev-parse HEAD)
Which is consumed by the deploy.sh file like this:
docker build -t mydocker/k8s-client:latest -t mydocker/k8s-client:$SHA -f ./client/Dockerfile ./client
but I'm not sure how to pass variables set in the Environment Variables section of the Travis repository settings through to deploy.sh.
This is the section of script I want to pass variables to:
kubectl create secret generic mongodb-atlas-operator-api-key \
  --from-literal="orgId=$MY_ORG_ID" \
  --from-literal="publicApiKey=$MY_PUBLIC_API_KEY" \
  --from-literal="privateApiKey=$MY_PRIVATE_API_KEY" \
  -n mongodb-atlas-system
I'm assuming the --from-literal syntax will just put in the literal string "orgId=$MY_ORG_ID" for example, and I need to use pipe syntax - but can I do something along the lines of this?:
echo "$MY_ORG_ID" | kubectl create secret generic mongodb-atlas-operator-api-key --orgId-stdin
Or do I need to put something in my travis.yaml before_install script?

Looks like the echo approach is fine; I've found a similar use case to yours, have a look here.
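For what it's worth, variables defined under Environment Variables in the Travis repository Settings are exported into the build shell, so inside double quotes they are expanded before kubectl ever sees them. A minimal sketch of that expansion (the value abc123 is a stand-in for whatever Travis injects):

```shell
# Stand-in for a variable Travis injects from the repo Settings.
MY_ORG_ID="abc123"

# Double quotes let the shell expand the variable, so the command
# receives orgId=abc123, not the literal string orgId=$MY_ORG_ID.
arg="orgId=$MY_ORG_ID"
echo "$arg"
```

So the --from-literal lines in the question should work unchanged inside deploy.sh, provided the values aren't single-quoted.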

Related

Using kubectl to exec a command that passes a string from a file as a flag

I'm trying to run the following command:
kubectl exec vault-1 -- vault operator raft join -leader-ca-cert=`cat "$VAULT_CACERT"` https://vault-0.vault-internal:8200
The goal here is to get the contents of the cert file at the path stored in $VAULT_CACERT (a variable on the pod) and pass that in as -leader-ca-cert via kubectl. When I run it I get cat: '': No such file or directory, which seems to indicate it is using my local machine's env. Connecting to the pod and running the command there does work.
I've tried a few different commands and I can't seem to find a way to achieve what I want through kubectl. Is there a better way to pass this data in?
If the ENV is inside the container, then you should not use backticks in the exec command: backticks are expanded by your local shell before kubectl ever runs, which is why cat sees an empty path. Use single quotes to avoid shell expansion on your local machine, and run the command through the container's shell with sh -c, so that both the variable and the cat are resolved inside the pod.
I tested out something like below and it seems working for me
kubectl exec demo -- sh -c 'cat "$FILE_PATH"'
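The same principle can be demonstrated without a cluster (the file path here is a made-up stand-in for the pod's $VAULT_CACERT): single quotes keep the outer shell from touching the variable, and the inner sh expands it where the file actually exists.

```shell
# Stand-in for a path that only exists in the target environment.
FILE_PATH=/tmp/ca-demo.txt
printf 'demo-cert' > "$FILE_PATH"
export FILE_PATH

# Single quotes: the outer shell passes $FILE_PATH through literally,
# and the inner sh expands it and runs cat.
sh -c 'cat "$FILE_PATH"'
```

Applied to the question, that becomes kubectl exec vault-1 -- sh -c 'vault operator raft join -leader-ca-cert="$(cat "$VAULT_CACERT")" https://vault-0.vault-internal:8200'.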

backup postgresql from azure container instance

I created an Azure Container Instance and ran PostgreSQL in it, with an Azure storage account mounted. How can I start backup jobs, possibly via a scheduler?
When I run the command
az container exec --resource-group Vitalii-demo --name vitalii-demo --exec-command "pg_dumpall -c -U postgrace > dump.sql"
I get an error: code = 2 desc = oci runtime error: exec failed: container_linux.go:247: starting container process caused "exec: \"pg_dumpall -c -U postgrace > dump.sql\": executable file not found in $PATH"
I read that
Azure Container Instances currently supports launching a single process with az container exec, and you cannot pass command arguments. For example, you cannot chain commands like in sh -c "echo FOO && echo BAR", or execute echo FOO.
Perhaps there is a way to run it as a task? Thanks.
Unfortunately - and as you already mentioned - it's not possible to run any commands with arguments like echo FOO or chain multiple commands together with &&.
https://learn.microsoft.com/en-us/azure/container-instances/container-instances-exec#run-a-command-with-azure-cli
You should be able to run an interactive shell by using --exec-command /bin/bash.
But this will not help if you want to schedule the backups programmatically.
pg_dumpall can also be configured by environment variables:
https://www.postgresql.org/docs/9.3/libpq-envars.html
You could launch your backup-container with the correct environment variables in order to connect your database service:
PGHOST
PGPORT
PGUSER
PGPASSWORD
When having these variables set, a simple pg_dumpall should totally do what you want.
Hope that helps.
UPDATE:
Yikes, even when configuring the connection via environment-variables you won't be able to state the desired output file... Sorry.
You could create your own Docker image with a pre-configured script for dumping your PostgreSQL database.
Doing it that way, you can configure the output file in your script and then simply execute the script with --exec-command dump_my_db.sh.
Keep in mind that your script has to be located somewhere in the default $PATH - e.g. /usr/local/bin.
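As a hedged sketch of that approach (the base image, script name, and backup path are assumptions, not from the docs), the image could be built like this:

```dockerfile
# Sketch Dockerfile; base image tag and paths are assumptions.
FROM postgres:13

# dump_my_db.sh runs `pg_dumpall -c > /mnt/backup/dump.sql`, relying on
# PGHOST/PGPORT/PGUSER/PGPASSWORD from the container's environment.
COPY dump_my_db.sh /usr/local/bin/dump_my_db.sh
RUN chmod +x /usr/local/bin/dump_my_db.sh
```

You could then trigger the backup with az container exec ... --exec-command dump_my_db.sh: the script is found via $PATH because it lives in /usr/local/bin, and the output redirection happens inside the script rather than on the az command line, which sidesteps the single-process limitation.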

Makefile target to add k8s cluster config

I want a single make command, taking arguments, that sets up a kubeconfig able to connect to a k8s cluster.
I tried the following which does not work.
cfg:
	mkdir ~/.kube

kube: cfg
	touch config $(ARGS)
In the args the user should pass the config file content of the cluster (kubeconfig).
If there is a shorter way please let me know.
update
I've used the following (from the answer), which partially solves the issue.
kube: cfg
	case "$(ARGS)" in \
	("") printf "Please provide ARGS=/some/path"; exit 1;; \
	(*) cp "$(ARGS)" /some/where/else;; \
	esac
The problem is the cfg prerequisite: if the user runs the target without providing args, the directory gets created anyway, and on the second run (this time with the path provided) mkdir fails because the directory already exists. Is there a way to avoid this? Something like: if the arg is not provided, don't run cfg.
I assume the user input is the pathname of a file. The make utility can take variable assignments as arguments, in the form of make NAME=VALUE. You refer to these in your Makefile as usual, with $(NAME). So something like
kube: cfg
	case "$(ARGS)" in \
	("") printf "Please provide ARGS=/some/path"; exit 1;; \
	(*) cp "$(ARGS)" /some/where/else;; \
	esac
called with
make ARGS=/some/path/file kube
would then execute cp /some/path/file /some/where/else. If that is not what you were asking, please rephrase the question, providing exact details of what you want to do.
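On the update in the question (cfg failing on a second run because ~/.kube already exists): mkdir -p succeeds whether or not the directory exists, which makes cfg safe to run repeatedly. A sketch, where the destination path ~/.kube/config is an assumption based on the usual kubeconfig location:

```make
cfg:
	mkdir -p ~/.kube

kube: cfg
	case "$(ARGS)" in \
	("") printf "Please provide ARGS=/some/path"; exit 1;; \
	(*) cp "$(ARGS)" ~/.kube/config;; \
	esac
```

With -p there is no need to conditionally skip cfg; the prerequisite simply becomes idempotent.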

Filtering on labels in Docker API not working (possible bug?)

I'm using the Docker API to get info on containers in JSON format. Basically, I want to do a filter based on label values, but it is not working (just returns all containers). This filter query DOES work if you just use the command line docker, i.e.:
docker ps -a -f label=owner=fred -f label=speccont=true
However, if I try to do the equivalent filter query using the API, it just returns ALL containers (no filtering done), i.e.:
curl -s --unix-socket /var/run/docker.sock http:/containers/json?all=true&filters={"label":["speccont=true","owner=fred"]}
Note that I do uri escape the filters param when I execute it, but am just showing it here unescaped for readability.
Am I doing something wrong here? Or does this seem to be a bug in the Docker API? Thanks for any help you can give!
The correct syntax for filtering containers by label as of Docker API v1.41 is
curl -s -G --unix-socket /var/run/docker.sock http://localhost/containers/json \
  --data 'all=true' \
  --data-urlencode 'filters={"label":["speccont=true","owner=fred"]}'
Note the automatic URL encoding as mentioned in this stackexchange post.
I felt there was a bug with API too. But turns out there is none. I am on API version 1.30.
I get desired results with this call:
curl -sS localhost:4243/containers/json?filters=%7B%22ancestor%22%3A%20%5B%222bab985010c3%22%5D%7D
I got the URL-escaped string used above with:
python3 -c 'import urllib.parse; print(urllib.parse.quote("""{"ancestor": ["2bab985010c3"]}"""))'
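The same trick works for the label filter from the question; a small Python 3 sketch that builds the whole escaped query string:

```python
import urllib.parse

# Build the escaped filters parameter for GET /containers/json
# (label values taken from the question).
filters = '{"label":["speccont=true","owner=fred"]}'
query = "all=true&filters=" + urllib.parse.quote(filters)
print(query)
```

The resulting string can then be appended after containers/json? in the curl call against the Unix socket.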

How do you set encrypted Travis env variables in docker?

In writing my deployment script, I want to set a git checkout url that I want to be secret. I want to create a Travis job to test out my playbook. The easiest approach that I can think of now is by setting my global_vars to look for an env variable say DEPLOYMENT_GIT_URL. I then encrypt this env variable in travis and pass it to docker exec when I am building the docker image to test against my playbook.
Question:
Can I pass my encrypted Travis variable to the instance via docker exec ? Something like sudo docker exec ... export DEPLOYMENT_GIT_URL=$TRAVIS_ENV ansible-playbook -i ....
While this seems the simplest way to do it, I'd appreciate comments on this method.
Thanks
You can pass variables directly to Ansible. If you want nested or complex variables, use a JSON string.
docker exec <container> ansible-playbook -e GITURL="$GITURL" whatever.yml
As gogstad mentions, Ansible has the facility to manage secrets via ansible-vault. Unless this URL is information that Travis needs to be the source of, it might be easier storing it directly in Ansible. Otherwise openssl can manage the secret:
secret=$(echo -n "yourdata" | openssl enc -e -aes-256-cbc -a -k 'passpasspass')
echo $secret | openssl enc -d -aes-256-cbc -a -k 'passpasspass'
If you really want to pass the data with an environment variable, you would need to do that at container creation with docker run and -e:
docker run -e GITURL="what" busybox sh -c 'echo $GITURL'