How to get a secrets "file" into Google Cloud Build so docker compose can read it? - docker-compose

How can I configure Google Cloud Build so that a docker-compose setup can use a secret file the same way as it does when it is run locally on my machine accessing a file?
My Docker-compose based setup uses a secrets entry to expose an API key to a backend component, like this (simplified example):
services:
  backend:
    build: docker_contexts/backend
    secrets:
      - API_KEY
    environment:
      - API_KEY_PATH=/run/secrets/api_key
secrets:
  API_KEY:
    file: ./secrets/api_key.json
From my understanding, docker-compose places any file listed in the secrets section under /run/secrets in the container, which is why the target location is hard-coded to /run/secrets.
I would like to run my docker-compose setup on Google Cloud Build with this configuration, but the only examples I've seen in the documentation load the secret as an environment variable. I have tried to store this secret in Secret Manager and copy it to a local file like this:
steps:
  - name: gcr.io/cloud-builders/gcloud
    # copy to /workspace/secrets so docker-compose can find it
    entrypoint: 'bash'
    args: [ '-c', 'echo $API_KEY > /workspace/secrets/api_key.json' ]
    volumes:
      - name: 'secrets'
        path: /workspace/secrets
    secretEnv: ['API_KEY']
  # running docker-compose
  - name: 'docker/compose:1.29.2'
    args: ['up', '-d']
    volumes:
      - name: 'secrets'
        path: /workspace/secrets
availableSecrets:
  secretManager:
    - versionName: projects/ID/secrets/API_KEY/versions/1
      env: API_KEY
But when I run the job on Google Cloud Build, I get this error message after everything is built:
ERROR: for backend Cannot create container for service backend: invalid mount config for type "bind": bind source path does not exist: /workspace/secrets/api_key.json
Is there a way I can copy the API_KEY environment variable at the cloudbuild.yaml level so it is accessible to the docker-compose level like it is when I run it on my local filesystem?

If you want to have the value of API_KEY taken from Secret Manager and placed into a text file at /workspace/secrets/api_key.json then change your step to this:
- name: gcr.io/cloud-builders/gcloud
  entrypoint: "bash"
  args: ["-c", "mkdir -p /workspace/secrets && echo $$API_KEY > /workspace/secrets/api_key.json"]
  secretEnv: ["API_KEY"]
This will:
Remove the volumes attribute, which is unnecessary because /workspace is already a volume that persists between steps
Make sure the directory exists before you try to put a file in it
Use the $$ syntax, as described in Use secrets from Secret Manager, so that the actual secret value is echoed to the file
Note this section:
When specifying the secret in the args field, specify it using the environment variable prefixed with $$.
You can double-check that this is working by adding another step:
- name: gcr.io/cloud-builders/gcloud
  entrypoint: "bash"
  args: ["-c", "cat /workspace/secrets/api_key.json"]
This should print the contents of the file in the build output, allowing you to confirm that:
The previous step read the secret
The previous step wrote the secret to the file
The file was written to a volume that persists across steps
From there you can configure docker-compose to read the contents of that persisted file.
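For example, one way to finish the compose step is an override file used only in Cloud Build, pointing the secret at the file written in the first step. This is a sketch, not a verified setup; the override file name is hypothetical and the paths follow the question:

# docker-compose.cloudbuild.yml (hypothetical override file)
services:
  backend:
    secrets:
      - API_KEY
    environment:
      - API_KEY_PATH=/run/secrets/API_KEY   # compose mounts a secret at /run/secrets/<secret name>
secrets:
  API_KEY:
    file: /workspace/secrets/api_key.json   # written by the gcloud step above

The docker/compose step could then pass both files, e.g. args: ['-f', 'docker-compose.yml', '-f', 'docker-compose.cloudbuild.yml', 'up', '-d'].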

Related

mounting secrets as hidden file on K8s

I have a PHP app that uses a .env file to read environment variables. All of these variables are stored in AWS Secrets Manager, fetched by EKS, and stored in a Kubernetes Secret. I want to mount the Secret as a .env file in the container. I get the error below when I run the container on Kubernetes.
Error: failed to start container "php-app": Error response from daemon: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: error mounting "/var/lib/kubelet/pods/4aa66371-0590-403c-b81d-6bff51160fa0/volume-subpaths/envfile/php-app/0" to rootfs at "/var/www/html/.env": mount /var/lib/kubelet/pods/4aa66371-0590-403c-b81d-6bff51160fa0/volume-subpaths/envfile/php-app/0:/var/www/html/.env (via /proc/self/fd/6), flags: 0x5001: not a directory: unknown: Are you trying to mount a directory onto a file (or vice-versa)? Check if the specified host path exists and is the expected type
My deployment manifest:
spec:
  volumes:
    - name: envfile
      secret:
        secretName: php-app-secrets
  containers:
    - name: php-app
      image: php-image:1.9
      imagePullPolicy: Always
      volumeMounts:
        - name: envfile
          mountPath: "/var/www/html/.env"
          readOnly: true
          subPath: ".env"
Any idea on how I can mount the .env file? is it even possible?
You cannot mount the entire Secret as a single file.
If you look at, for example, the Create a Pod that has access to the secret data through a Volume sample setup, it mounts the entire Secret as a volume, and each key in the Secret becomes a separate file in the container. Setting subPath: .env tries to mount only the single key named .env from the Secret.
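For comparison, a rough sketch of mounting the whole Secret as a directory (the mount path is an assumption); each key in php-app-secrets would then show up as a separate file under it:

spec:
  containers:
    - name: php-app
      image: php-image:1.9
      volumeMounts:
        - name: envfile
          mountPath: /etc/app-secrets   # assumed directory; one file per Secret key
          readOnly: true
  volumes:
    - name: envfile
      secret:
        secretName: php-app-secrets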
What you can do, though, is Configure all key-value pairs in a Secret as container environment variables:
spec:
  containers:
    - name: php-app
      image: php-image:1.9
      envFrom:
        - secretRef:
            name: php-app-secrets
This will cause every value in the Secret to appear in the container directly as an environment variable; you don't need to write out a .env file and then load it back in. This is exactly the same kind of environment variable as if you had run export VARIABLE=value in a local shell.
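To illustrate, a Secret like the following (keys and values are purely hypothetical) would surface DB_HOST and DB_PASSWORD directly as environment variables in the container via envFrom:

apiVersion: v1
kind: Secret
metadata:
  name: php-app-secrets
type: Opaque
stringData:
  DB_HOST: mysql.example.com   # hypothetical key/value pairs
  DB_PASSWORD: "12345"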

Write command args in kubernetes deployment

Can anyone help with this, please?
I have a mongo pod assigned with its service. I need to execute some commands while starting the container in the pod.
I found small examples like this:
command: ["printenv"]
args: ["HOSTNAME", "KUBERNETES_PORT"]
But I want to execute these commands while starting the pod:
use ParkInDB
db.roles.insertMany( [ {name :"ROLE_USER"}, {name:"ROLE_MODERATOR"}, {name:"ROLE_ADMIN"} ])
You need to choose one of these solutions:
1- Use an init container in the deployment to run your commands or script.
2- Use command and args in the deployment YAML.
For init containers, visit this page.
For command and args, use this model in your deployment YAML file:
- image:
  name:
  command: ["/bin/sh"]
  args: ["-c", "PUT_YOUR_COMMAND_HERE"]
If you want to run a command right after the container starts or right before it stops, you can use container lifecycle hooks.
https://kubernetes.io/docs/concepts/containers/container-lifecycle-hooks/
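For example, a minimal postStart hook sketch that runs the commands from the question (the container name and image are assumptions):

containers:
  - name: mongodb
    image: mongo:latest
    lifecycle:
      postStart:
        exec:
          command:
            - /bin/sh
            - -c
            - >
              mongo ParkInDB --eval
              'db.roles.insertMany([{name:"ROLE_USER"},{name:"ROLE_MODERATOR"},{name:"ROLE_ADMIN"}])'

Note that postStart runs in parallel with the container's main process, so in practice the command may need to wait or retry until mongod is accepting connections.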
Alternatively, you can add the commands to a shell script file and edit the MongoDB image as required:
command: ["/bin/sh", "-c", "/usr/src/script.sh"]
You can also edit the YAML with:
args:
  - '-c'
  - |
    ls
    rm -rf sql_scripts
When you use the official Mongo image, you can specify scripts to run on container startup. The accepted answer here provides some information on how this works.
Kubernetes
When it comes to Kubernetes, there is some pre-work you need to do.
What you can do is write a script like my-script.sh that creates a userDB database and inserts an item into the users collection:
mongo userDB --eval 'db.users.insertOne({username: "admin", password: "12345"})'
and then write a Dockerfile based on the official mongo image, to copy your script into the folder where custom scripts are run on database initialization.
FROM mongo:latest
COPY my-script.sh /docker-entrypoint-initdb.d/
CMD ["mongod"]
Within the same directory containing your script and Dockerfile, build the Docker image with
docker build -t dockerhub-username/custom-mongo .
Push the image to Docker Hub or any repository of your choice, and use it in your deployment YAML.
deployment.yaml
...
spec:
  containers:
    - name: mongodb-standalone
      image: dockerhub-username/custom-mongo
      ports:
        - containerPort: 27017
Verify by going to your pod and checking the logs. You will see that mongo has initialized the database specified in the script you placed in /docker-entrypoint-initdb.d/.

How to place configuration files inside pods?

For example I want to place an application configuration file inside:
/opt/webserver/my_application/config/my_config_file.xml
I create a ConfigMap from file and then place it in a volume like:
/opt/persistentData/
The idea is to then run a script that does something like:
cp /opt/persistentData/my_config_file.xml /opt/webserver/my_application/config/
But it could be any startup.sh script that performs the needed actions.
How do I run this command/script? (during Pod initialization before Tomcat startup).
I would first try if this works.
spec:
  containers:
    - volumeMounts:
        - mountPath: /opt/webserver/my_application/config/my_config_file.xml
          name: config
          subPath: my_config_file.xml
  volumes:
    - configMap:
        items:
          - key: KEY_OF_THE_CONFIG
            path: my_config_file.xml
        name: YOUR_CONFIGMAP_NAME
      name: config
If not, add an init container to copy the file.
spec:
  initContainers:
    - name: copy-config
      image: busybox
      command: ['sh', '-c', '/bin/cp /opt/persistentData/my_config_file.xml /opt/webserver/my_application/config/']
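For the copied file to be visible to the main container, both containers generally need to share a volume. A rough sketch of that wiring (the shared volume name, its mount path, and the main container name are assumptions):

spec:
  initContainers:
    - name: copy-config
      image: busybox
      command: ['sh', '-c', '/bin/cp /opt/persistentData/my_config_file.xml /config-out/']
      volumeMounts:
        - name: config-source          # ConfigMap mounted read-only
          mountPath: /opt/persistentData
        - name: config-out             # shared emptyDir for the copied file
          mountPath: /config-out
  containers:
    - name: webserver                  # assumed main container name
      volumeMounts:
        - name: config-out
          mountPath: /opt/webserver/my_application/config
  volumes:
    - name: config-source
      configMap:
        name: YOUR_CONFIGMAP_NAME
    - name: config-out
      emptyDir: {}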
How about mounting the ConfigMap where you actually want it instead of copying over?
Update:
The init container @ccshih mentioned should do, but one can try other options too:
Build a custom image modifying the base one, using a Dockerfile. The example below takes a java+tomcat7 openshift image and adds an additional folder to the app classpath, so you can mount your ConfigMap to /mnt/config without overwriting anything, keeping both folders available.
FROM openshift/webserver31-tomcat7-openshift:1.2-6
# add classpaths to config
RUN sed -i 's/shared.loader=/shared.loader=\/mnt\/config/' /opt/webserver/conf/catalina.properties
Change the ENTRYPOINT of the application, either by modifying the image or via DeploymentConfig hooks; see: https://docs.okd.io/latest/dev_guide/deployments/deployment_strategies.html#pod-based-lifecycle-hook
With the hooks one just needs to remember to call the original entrypoint or launch script after all the custom stuff is done.
spec:
  containers:
    - name: my-app
      image: 'image'
      command:
        - /bin/sh
      args:
        - '-c'
        - cp /wherever/you/have/your-config.xml /wherever/you/want/it/ && /opt/webserver/bin/launch.sh

run kubernetes job in cloud builder

I want to create and remove a job using Google Cloud Builder. Here's my configuration which builds my Docker image and pushes to GCR.
# cloudbuild.yaml
steps:
  - name: 'gcr.io/cloud-builders/docker'
    args: ['build', '-t', 'gcr.io/xyz/abc:latest', '-f', 'Dockerfile.ng-unit', '.']
  - name: 'gcr.io/cloud-builders/docker'
    args: ['push', 'gcr.io/xyz/abc:latest']
Now I want to create a job. I want to run something like
kubectl create -R -f ./kubernetes
which creates the jobs defined in the kubernetes folder.
I know Cloud Build has - name: 'gcr.io/cloud-builders/kubectl' but I can't figure out how to use it. Also, how can I authenticate it to run kubectl commands? How can I use service_key.json?
I wasn't able to connect and get cluster credentials. Here's what I did:
Go to IAM and add another Role to xyz@cloudbuild.gserviceaccount.com. I used Project Editor.
Added this step to cloudbuild.yaml:
  - name: 'gcr.io/cloud-builders/kubectl'
    args: ['create', '-R', '-f', './dockertests/unit-tests/kubernetes']
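For reference, the kubectl cloud builder fetches cluster credentials from environment variables before running the command; a sketch with placeholder values (the zone and cluster name are assumptions you must replace):
  - name: 'gcr.io/cloud-builders/kubectl'
    args: ['create', '-R', '-f', './dockertests/unit-tests/kubernetes']
    env:
      - 'CLOUDSDK_COMPUTE_ZONE=us-central1-a'      # placeholder zone
      - 'CLOUDSDK_CONTAINER_CLUSTER=my-cluster'    # placeholder cluster name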

docker stack: setting environment variable from secrets

I was trying to set the password from secrets but it wasn't picking it up.
The Docker Server version is 17.06.2-ce. I used the command below to create the secret:
echo "abcd" | docker secret create password -
My docker compose yml file looks like this
version: '3.1'
...
    build:
      context: ./test
      dockerfile: Dockerfile
    environment:
      user_name: admin
      eureka_password: /run/secrets/password
    secrets:
      - password
I also have a root secrets tag:
secrets:
  password:
    external: true
When I hardcode the password in environment it works, but when I try it via secrets it isn't picked up. I tried changing the compose version to 3.2 but had no luck. Any pointers are highly appreciated!
To elaborate on the original accepted answer, just change your docker-compose.yml file so that it contains this as your entrypoint:
version: "3.7"
services:
server:
image: alpine:latest
secrets:
- test
entrypoint: [ '/bin/sh', '-c', 'export TEST=$$(cat /var/run/secrets/test) ; source /entrypoint.sh' ]
secrets:
test:
external: true
That way you don't need any additional files!
You need to modify docker compose to read the secret env file from /run/secrets. If you want to set environment variables via bash, you can override your docker-compose.yaml file as shown below.
You can save the following code as entrypoint_overwrited.sh:
# get your envs files and export envars
export $(egrep -v '^#' /run/secrets/* | xargs)
# if you need some specific file, where password is the secret name
# export $(egrep -v '^#' /run/secrets/password| xargs)
# call the dockerfile's entrypoint
source /docker-entrypoint.sh
In your docker-compose.yaml overwrite the dockerfile and entrypoint keys:
version: '3.1'
#...
    build:
      context: ./test
      dockerfile: Dockerfile
    entrypoint: source /data/entrypoint_overwrited.sh
    tmpfs:
      - /run/secrets
    volumes:
      - /path/your/data/where/is/the/script/:/data/
    environment:
      user_name: admin
      eureka_password: /run/secrets/password
    secrets:
      - password
Using the snippets above, the environment variables user_name and eureka_password will be overwritten if your secret env file defines the same variables; the same happens if you define an env_file in your service.
I found this neat extension to Alejandro's approach: make your custom entrypoint copy values from *_FILE variables into their plain counterparts:
environment:
  MYSQL_PASSWORD_FILE: /run/secrets/my_password_secret
entrypoint: /entrypoint.sh
and then in your entrypoint.sh:
#!/usr/bin/env bash
set -e
file_env() {
  local var="$1"
  local fileVar="${var}_FILE"
  local def="${2:-}"
  if [ "${!var:-}" ] && [ "${!fileVar:-}" ]; then
    echo >&2 "error: both $var and $fileVar are set (but are exclusive)"
    exit 1
  fi
  local val="$def"
  if [ "${!var:-}" ]; then
    val="${!var}"
  elif [ "${!fileVar:-}" ]; then
    val="$(< "${!fileVar}")"
  fi
  export "$var"="$val"
  unset "$fileVar"
}

file_env "MYSQL_PASSWORD"
Then, when the upstream image adds support for _FILE variables, you can drop the custom entrypoint without making changes to your compose file.
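A rough compose sketch of how this might be wired together (the image name, the volume that ships entrypoint.sh, and the final exec of the original command are assumptions, not part of the original answer):

version: "3.7"
services:
  db:
    image: your-app-image:latest           # placeholder image
    entrypoint: /entrypoint.sh
    volumes:
      - ./entrypoint.sh:/entrypoint.sh:ro  # assumed: script mounted from the host
    environment:
      MYSQL_PASSWORD_FILE: /run/secrets/my_password_secret
    secrets:
      - my_password_secret
secrets:
  my_password_secret:
    external: true

In such a setup the entrypoint script would typically end by exec-ing the image's original entrypoint or command, so the container still starts its real workload after the variables are exported.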
One option is to map your secret directly before you run your command:
entrypoint: "/bin/sh -c 'eureka_password=`cat /run/secrets/password` && echo $eureka_password'"
For example, a MySQL password for Node:
version: "3.7"
services:
app:
image: xxx
entrypoint: "/bin/sh -c 'MYSQL_PASSWORD=`cat /run/secrets/sql-pass` npm run start'"
secrets:
- sql-pass
secrets:
sql-pass:
external: true
Because you are initialising eureka_password with the path to the file instead of its value.