Is there any clean way to append an env_file to an autogenerated docker file? - docker-compose

I am planning to self-host Bitwarden using Ansible. During the execution of the Ansible playbook, a file hierarchy and a docker-compose file are generated on the remote host based on config file parameters and other inputs. At the end, this docker-compose file is used to spawn the containers.
My personal server infrastructure already serves an nginx-proxy combined with letsencrypt. In order to use this service, I need to add several specific environment variables to the "main container" in this docker-compose file. I collected them in an external file, and now I need to add this external file to the docker-compose file dynamically. Until now, I have been using the Ansible task:
- name: append nginx.env as env file to automatically generated docker file
  shell: 'sed -i "\$a\ \ \ \ \ \ - ../env/nginx.env" /srv/bitwarden/bwdata/docker/docker-compose.yml'
to transform:
...
*OTHER CONTAINERS*
...
  nginx:
    ...
    *UNIMPORTANT PARAMETERS*
    ...
    env_file:
      - ../env/uid.env (EOF)
into
...
*OTHER CONTAINERS*
...
  nginx:
    ...
    *UNIMPORTANT PARAMETERS*
    ...
    env_file:
      - ../env/uid.env
      - ../env/nginx.env (EOF)
But I am unhappy with this sed solution, because Bitwarden could modify this auto-generated file (appending to the end would eventually fail), and I also can't really rely on any regex, because the structure of individual components of the file could change, etc.
Is there any built-in, safe way in Ansible to achieve this exact behavior?
EDIT: I am looking for something like: match <container_name>, then append <../env/nginx.env> to its env_file list.
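For what it's worth, one safer pattern (a sketch, not an official Bitwarden or Ansible recipe) is to parse the generated file as YAML, patch the nginx service, and write it back, so the edit targets the structure rather than a line position. The modules (slurp, copy) and filters (b64decode, from_yaml, combine, to_nice_yaml) are standard Ansible; the services → nginx → env_file layout is assumed from the excerpt above, and the tasks are not idempotent across repeated runs as written:

- name: read the generated compose file
  slurp:
    src: /srv/bitwarden/bwdata/docker/docker-compose.yml
  register: compose_raw

- name: append nginx.env to the nginx service's env_file list
  vars:
    compose: "{{ compose_raw.content | b64decode | from_yaml }}"
    # patch only the key we care about; combine() merges it into the rest
    patch:
      services:
        nginx:
          env_file: "{{ (compose.services.nginx.env_file | default([])) + ['../env/nginx.env'] }}"
  copy:
    dest: /srv/bitwarden/bwdata/docker/docker-compose.yml
    content: "{{ compose | combine(patch, recursive=True) | to_nice_yaml(indent=2) }}"

Note that to_nice_yaml rewrites the file's formatting and drops comments, which may matter if Bitwarden's tooling diffs the file.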

Related

Basic string variable substitution in docker-compose files (3.8)

Take my example of two services:
services:
  nginx:
    ports:
      - 443:443
    volumes:
      - "CONFIG_DIRECTORY/nginx/nginx.conf:/etc/nginx/nginx.conf"
      - "CONFIG_DIRECTORY/certs:/etc/ssl/certs"
  web:
    command: ["node", "index.js"]
    volumes:
      - "CONFIG_DIRECTORY/certs:/var/client/config/ssl/certs"
      - "CONFIG_DIRECTORY/process:/var/client/process"
I'd really like to be able to substitute a string such as /home/garnettm/development/config directly into the indicated CONFIG_DIRECTORY locations in the above strings.
Is there any way to do this other than the various environment-variable substitution options that already exist?
A .env file, for example, would allow you to do this using an already-defined variable and the $VARIABLE syntax.
The only form of string interpolation Compose supports at all is environment variable substitution. This uses shell-style $VARIABLE syntax, with very limited options for if the variable isn't set, ${VARIABLE:-default value} or ${VARIABLE:?error message}.
There is no way to declare these variables in the Compose file itself. Compose does support putting variable values in a .env file, so they don't necessarily need to be set as actual environment variables. You do not need to mention this file in a Compose env_file: directive; Compose reads it on its own to set environment variables for these substitutions.
# .env
CONFIG_DIRECTORY=/home/garnettm/development/config

# docker-compose.yml
volumes:
  - "$CONFIG_DIRECTORY/nginx/nginx.conf:/etc/nginx/nginx.conf"
For this specific setting, on the left-hand side of volumes: and in other places that refer to host directories, relative paths are interpreted relative to the location of the Compose file (the first one, if you have multiple docker-compose -f options). A very common setup is to put all of the state and source code in the same directory tree, and to put the docker-compose.yml file at the root of that tree. In that case the relative directory . will almost always be right:
volumes:
  - "./nginx/nginx.conf:/etc/nginx/nginx.conf"
YAML anchors are occasionally used to provide common blocks of settings, but you can't do string interpolation using anchors, only supply an entire YAML scalar node. They won't be helpful here.
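For completeness, a tiny sketch of what an anchor can do — reuse an entire node, with no interpolation inside it. The x-certs-volume extension field name is made up for this example (top-level x- fields need Compose file format 3.4+):

x-certs-volume: &certs "/home/garnettm/development/config/certs:/etc/ssl/certs"
services:
  nginx:
    volumes:
      - *certs   # the whole string is reused verbatim; no substitution happens inside it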

How to reference values inside a Docker env file inside the same file

This is my connection string inside my docker env file:
DB_URL=mongodb://${MONGO_INITDB_ROOT_USERNAME}:${MONGO_INITDB_ROOT_PASSWORD}@XX.XXXX.XXX.XXXX:27017/david?authSource=admin
And inside my env file I have this:
MONGO_INITDB_ROOT_USERNAME=root
MONGO_INITDB_ROOT_PASSWORD=example
How do I reference these values in the connection string above within the same file, or do I need to hard-code them? Currently I am using the ${...} format, but I'm not sure it's the right way.
I have this in the env file:
MONGO_INITDB_ROOT_USERNAME=root
MONGO_INITDB_ROOT_PASSWORD=example
...
DB_URL=mongodb://${MONGO_INITDB_ROOT_USERNAME}:${MONGO_INITDB_ROOT_PASSWORD}@XX.XXXX.XXX.XXXX:27017/david?authSource=admin
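A caveat worth hedging on: a plain Docker env file is passed to the container verbatim, so whether ${...} inside it gets expanded depends on what reads it — older Compose versions pass the values through literally. A workaround sketch that sidesteps the question is to assemble DB_URL in the Compose file instead, where ${VAR} interpolation from the .env file is well established (the service and image names here are made up):

services:
  app:
    image: my-app:latest   # hypothetical image
    env_file: .env         # still provides the MONGO_* variables to the container
    environment:
      # interpolated by Compose at parse time from the .env file
      DB_URL: mongodb://${MONGO_INITDB_ROOT_USERNAME}:${MONGO_INITDB_ROOT_PASSWORD}@XX.XXXX.XXX.XXXX:27017/david?authSource=admin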

docker swarm - secrets from file not resolving tilde

Using secrets from docker-compose on my dev machine works. But the remote server via ssh just says open /home/johndoe/~/my-secrets/jenkinsuser.txt: no such file or directory.
secret definition in stack.yml:
secrets:
  jenkinsuser:
    file: ~/my-secrets/jenkinsuser.txt
Run with:
docker stack deploy -c stack.yml mystack
The documentation does not mention any gotchas about ~ path. I am not going to put the secret files inside . as all examples do, because that directory is version controlled.
Am I missing some basics about variable expansion, or differences between docker-compose and docker swarm?
The ~ character in your path is treated as a literal. Use $HOME, which is expanded as a variable in the path string.
Tilde expansion only works when the path is unquoted and interpreted by a shell. In your remote environment, the Swarm YAML parser treats your path as a plain string, in which the leading tilde is read as a normal character (see prefix-tilde).
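A minimal sketch of the corrected secret definition, assuming HOME is set in the environment where docker stack deploy runs:

secrets:
  jenkinsuser:
    file: $HOME/my-secrets/jenkinsuser.txt   # expanded as a variable, unlike ~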

Riak-KV: how to create bucket in docker-compose file?

I am trying to use the original riak-kv image in docker-compose, and I want to add one bucket on init, but docker-compose up won't start. How can I edit volumes.schemas to add a bucket on init?
Does the original image allow adding a riak.conf file via docker-compose? If yes, then how can I do that?
Creating a bucket type with a custom datatype
I assume you want to create a bucket type when starting your container. You have to create a file in the /etc/riak/schemas directory with the bucket's name, like bucket_name.dt. The file should contain a single line with the type you would like to create (e.g. counter, set, map, hll).
You can also use the following command to create the file:
echo "counter" > schemas/bucket_name.dt
After that you just have to mount the schemas folder with the file to the /etc/riak/schemas directory in the container:
docker run -d -P -v $(pwd)/schemas:/etc/riak/schemas basho/riak-ts
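Since the question asks about docker-compose specifically, here is the same mount sketched as a compose service; the service name and published ports are assumptions (8087/8098 are Riak's usual protobuf/HTTP ports), and only the volume mapping comes from the command above:

services:
  riak:
    image: basho/riak-ts
    ports:
      - "8087:8087"   # protobuf API (assumed)
      - "8098:8098"   # HTTP API (assumed)
    volumes:
      - ./schemas:/etc/riak/schemas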
Creating a bucket type with default datatype
Currently, creating a bucket type with a default datatype is only available if you add a custom post-start script under the /etc/riak/poststart.d directory.
Create a shell script with the command you would like to run. An example can be found here.
You have to mount it as a read-only file into the /etc/riak/poststart.d folder:
docker run -d -P -v $(pwd)/poststart.d/03-bootstrap-my-datatype.sh:/etc/riak/poststart.d/03-bootstrap-my-datatype.sh:ro basho/riak-ts
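Sketched as the equivalent compose volume entry (same assumed service as above):

services:
  riak:
    volumes:
      - ./poststart.d/03-bootstrap-my-datatype.sh:/etc/riak/poststart.d/03-bootstrap-my-datatype.sh:ro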
References
See the full documentation for the Docker images here; the rest can be found on GitHub.
Also, the available datatypes can be found here.

Get ansible to read value from mysql and/or perl script

There may be a much better way to do what I need altogether. I'll give the background first, then my current (non-working) approach.
The goal is to migrate a bunch of servers from SLES 11 to SLES 12, making use of Ansible playbooks. The problem is that the newserver and the oldserver are supposed to have the same NFS-mounted dir. This has to be done at the beginning of the playbook so that all of the other tasks can be completed. The name of the dir being created can be determined in two ways: on the oldserver directly, or from a MySQL db query for the volume name of that oldserver. The newservers are named migrate-(oldservername). I tried to prompt for the volume name in Ansible, but that would then apply the same name to every server. Goal recap: dir name must be determined from the oldserver, created on the new server.
Approach 1: I've created a Perl script that Ansible will copy to the newserver; it will execute the MySQL query and create the dir itself. There are two problems with this: 1) mysql-client needs to be installed on each server. This is completely unnecessary for these servers and would have to be uninstalled after the query is run. 2) Copying files and remotely executing them seems like a bad approach in general.
Approach 2: Create a version of the above, except run it on the Ansible control machine (where mysql-client is installed already) and store the values as key:value pairs in a file. Problems: 1) I cannot figure out how to determine, inside the Perl script, which hosts Ansible is running against, and would have to enter them into the Perl script manually. 2) I cannot figure out how to get Ansible to import those values correctly from the file created.
Here's the relevant Perl code I have for this:
my $newserver = "migrate-old.server.com";
# strip the "migrate-" prefix to recover the old hostname
my ($mig, $oldhost) = split(/-/, $newserver, 2);
chomp(my $volname = `mysql -u idm-ansible -p -s -N -e "select vol_name from assets.servers where hostname like '$oldhost'"`);
open(my $fh, '>', 'vols.yml') or die "cannot open vols.yml: $!";
print $fh "$newserver: $volname\n";   # one key: value pair per host
close($fh);
My Ansible code is all over the place as I've tried and commented out a ton of things. I can share that here if it is helpful.
Approach 3: Do it completely in Ansible. Basically, a MySQL query in a loop over each host. Problem: I have absolutely no idea how to do this; I'm way too unfamiliar with Ansible. I think this is what I would prefer to try, though.
What is the best approach here? How do I go about getting the right value into ansible to create the correct directory?
Please let me know if I can clarify anything.
Goal Recap: dir name must be determined from the oldserver, created on new server.
Will magic variables do?
Something like this:
---
- hosts: old-server
  tasks:
    - shell: "/get-my-mount-name.sh"
      register: my_mount_name

- hosts: new-server
  tasks:
    - shell: "/mount-me.sh --mount_name={{ hostvars['old-server'].my_mount_name.stdout }}"