So I need to store the .env files for docker-compose somewhere. One approach is to store their contents in masked variables in GitLab CI/CD, but that doesn't seem secure to me: compromising a single GitLab account would be enough to compromise quite a lot of apps.
I would like to keep the .env files in a directory on the server and copy them into the freshly pulled repository path in the first job of the pipeline. I tried artifacts for that, but they are uploaded to GitLab and can be viewed there, and I didn't manage to find them in the later jobs (ls in after_script didn't show them).
How can I copy the .env files into all jobs without uploading them to GitLab?
.gitlab-ci.yml
before_script:
  - docker info
  - docker compose --version

copy_env_files:
  script:
    - cp /home/myuser/myapp/env.* .
  rules:
    - if: $CI_MERGE_REQUEST_TARGET_BRANCH_NAME == "master"
  artifacts:
    paths:
      - env.*

build_image:
  script:
    - docker-compose -f docker-compose.yml up -d --build
  rules:
    - if: $CI_MERGE_REQUEST_TARGET_BRANCH_NAME == "master"
  artifacts:
    paths:
      - env.*

collect_static_files:
  script:
    - docker-compose exec web python manage.py collectstatic --no-input
  rules:
    - if: $CI_MERGE_REQUEST_TARGET_BRANCH_NAME == "master"
  artifacts:
    paths:
      - env.*

migrate_database:
  script:
    - docker-compose exec web python manage.py migrate --no-input
  rules:
    - if: $CI_MERGE_REQUEST_TARGET_BRANCH_NAME == "master"
  artifacts:
    paths:
      - env.*

after_script:
  - docker container ls
  - pwd
  - ls
How can I copy the .env files into all jobs without uploading them to GitLab?
By integrating your GitLab CI job with an external vault, where the sensitive data can reside securely.
For instance, "Authenticating and reading secrets with HashiCorp Vault", although that native integration is GitLab Premium only.
You can still use external secrets in CI (a sketch follows the steps below):
Configure your vault and secrets.
Generate your JWT and provide it to your CI job.
Runner contacts HashiCorp Vault and authenticates using the JWT.
HashiCorp Vault verifies the JWT.
HashiCorp Vault checks the bounded claims and attaches policies.
HashiCorp Vault returns the token.
Runner reads secrets from the HashiCorp Vault.
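As a rough, untested sketch of that flow using the vault CLI directly (which does not need the Premium-only keyword), where the Vault address, the JWT auth role and the secret path are assumptions rather than values from the question:

read_database_password:
  image:
    name: hashicorp/vault:latest   # any image that ships the vault CLI works
    entrypoint: [""]
  script:
    - export VAULT_ADDR=https://vault.example.com
    # authenticate the runner with the job's JWT against a pre-configured JWT auth role
    - export VAULT_TOKEN=$(vault write -field=token auth/jwt/login role=myproject-production jwt=$CI_JOB_JWT)
    # read one field from a kv-v2 secret and use it within the same job
    - export DATABASE_PASSWORD=$(vault kv get -field=password ops/production/db)
    - ./use-the-secret.sh   # hypothetical consumer of $DATABASE_PASSWORD

Note that an exported variable only exists inside the job that reads it, so each job that needs the secret has to read it itself (or receive it some other way).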
I should have added "cp /home/myuser/myapp/env.* ." to before_script instead of separating it into its own job.
I also fixed my errors with Django's --no-input commands (by adding -T to docker-compose exec), which occurred after the image was successfully built.
before_script:
  - docker info
  - docker compose --version
  - cp /home/myuser/myproject/env.* .

build_image:
  script:
    - docker-compose -f docker-compose.yml up -d --build
  rules:
    - if: $CI_MERGE_REQUEST_TARGET_BRANCH_NAME == "master"

collect_static_files:
  script:
    - docker-compose exec -T web python manage.py collectstatic --no-input
  rules:
    - if: $CI_MERGE_REQUEST_TARGET_BRANCH_NAME == "master"

migrate_database:
  script:
    - docker-compose exec -T web python manage.py migrate --no-input
  rules:
    - if: $CI_MERGE_REQUEST_TARGET_BRANCH_NAME == "master"

after_script:
  - docker container ls
Related
How can I configure Google Cloud Build so that a docker-compose setup can use a secret file the same way it does when run locally on my machine?
My docker-compose based setup uses a secrets entry to expose an API key to a backend component like this (simplified for this example):
services:
  backend:
    build: docker_contexts/backend
    secrets:
      - API_KEY
    environment:
      - API_KEY_PATH=/run/secrets/api_key

secrets:
  API_KEY:
    file: ./secrets/api_key.json
From my understanding, docker-compose places any files from the secrets section under /run/secrets in the container, which is why the target location is hard-coded to /run/secrets.
I would like to deploy my docker-compose setup on Google Cloud Build using this configuration, but the only examples I've seen in the documentation load the secret as an environment variable. I have tried to provide the secret through Secret Manager and copy it to a local file like this:
steps:
  - name: gcr.io/cloud-builders/gcloud
    # copy to /workspace/secrets so docker-compose can find it
    entrypoint: 'bash'
    args: [ '-c', 'echo $API_KEY > /workspace/secrets/api_key.json' ]
    volumes:
      - name: 'secrets'
        path: /workspace/secrets
    secretEnv: ['API_KEY']
  # running docker-compose
  - name: 'docker/compose:1.29.2'
    args: ['up', '-d']
    volumes:
      - name: 'secrets'
        path: /workspace/secrets

availableSecrets:
  secretManager:
    - versionName: projects/ID/secrets/API_KEY/versions/1
      env: API_KEY
But when I run the job on Google Cloud Build, I get this error message after everything is built: ERROR: for backend Cannot create container for service backend: invalid mount config for type "bind": bind source path does not exist: /workspace/secrets/api_key.json.
Is there a way I can copy the API_KEY environment variable at the cloudbuild.yaml level so it is accessible at the docker-compose level, like it is when I run it on my local filesystem?
If you want the value of API_KEY taken from Secret Manager and placed into a text file at /workspace/secrets/api_key.json, then change your step to this:
- name: gcr.io/cloud-builders/gcloud
  entrypoint: "bash"
  args: ["-c", "mkdir -p /workspace/secrets && echo $$API_KEY > /workspace/secrets/api_key.json"]
  secretEnv: ["API_KEY"]
This will:
Remove the volumes attribute, which is unnecessary because /workspace is already a volume that persists between steps
Make sure the directory exists before you try to put a file in it
Use the $$ syntax as described in Use secrets from Secret Manager so that the actual secret is echoed into the file
Note this section:
When specifying the secret in the args field, specify it using the environment variable prefixed with $$.
You can double-check that this is working by adding another step:
- name: gcr.io/cloud-builders/gcloud
  entrypoint: "bash"
  args: ["-c", "cat /workspace/secrets/api_key.json"]
This should print the contents of the file during that build step, allowing you to confirm that:
The previous step read the secret
The previous step wrote the secret to the file
The file was written to a volume that persists across steps
From there you can configure docker-compose to read the contents of that persisted file.
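A minimal sketch of that last piece, assuming docker-compose runs from /workspace so that ./secrets/api_key.json resolves to the file written in the earlier step (the mount path comment reflects compose's default behaviour for a secret named API_KEY):

services:
  backend:
    build: docker_contexts/backend
    secrets:
      - API_KEY
    environment:
      # compose mounts a secret named API_KEY at /run/secrets/API_KEY by default
      - API_KEY_PATH=/run/secrets/API_KEY
secrets:
  API_KEY:
    # points at the file persisted to /workspace/secrets by the gcloud step
    file: ./secrets/api_key.json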
I installed the new GitLab agent for the Kubernetes cluster. It works when I use kubectl, but it gives this error when I try to deploy to Azure Cloud with a Helm chart.
My .gitlab-ci.yml:
variables:
  # registry variable
  REGISTRY: registry.gitlab.com
  # docker-image tag
  DOCKER_IMAGE_TAG: ${CI_COMMIT_SHA}
  # target variable
  TARGET: metrix9/wysiwys-ic

stages:
  - build
  - package
  - deploy

# job to build the gradle application and save the jar file in artifacts
build docker image:
  image: gradle
  stage: build
  before_script:
    - chmod +x ./gradlew
  script:
    - ./gradlew jib -Djib.to.auth.username=$CI_REGISTRY_USER -Djib.to.auth.password=$CI_REGISTRY_PASSWORD -Djib.from.auth.username=$CI_REGISTRY_USER -Djib.from.auth.password=$CI_REGISTRY_PASSWORD

# job to push the file-server docker image
package wysiwys image:
  stage: package
  image: docker.io/library/docker
  #dependencies:
  #  - build
  services:
    - name: docker:dind
  before_script:
    - IMAGE=${CI_REGISTRY}/${TARGET}
    - docker login -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD $CI_REGISTRY
    - docker pull "${IMAGE}:latest" || true
  script:
    #- docker build --tag "${IMAGE}:latest" .
    - docker push "${IMAGE}:latest"

# job to package and push the file-server helm chart
package wysiwys-ic helm:
  stage: package
  image:
    name: alpine/helm
    entrypoint: [""]
  before_script:
    - helm repo add --username $CI_REGISTRY_USER --password $CI_REGISTRY_PASSWORD wysiwys-ci-repo https://gitlab.com/api/v4/projects/$CI_PROJECT_ID/packages/helm/stable
    - helm plugin install https://github.com/chartmuseum/helm-push
  script:
    - helm package wysiwys-helm
    - helm cm-push ./wysiwys-helm-0.1.0.tgz wysiwys-ci-repo

# job to install convert2pdf with the helm chart
install wysiwys-ic:
  stage: deploy
  image:
    name: alpine/helm
    entrypoint: [""]
  before_script:
    - helm repo add bitnami https://charts.bitnami.com/bitnami -n Convert2pdf-repo
  script:
    - helm upgrade --install wysiwys-ci ./wysiwys-helm
gitlab agent:
I tried exporting the KUBECONFIG and running helm repo update in the pipeline, but the same error comes out.
I was struggling with the same issue. First, use an image that ships both helm and kubectl (e.g. registry.gitlab.com/gitlab-org/cluster-integration/cluster-applications) and try adding the following changes in the deployment part:
deploy app:
  stage: deploy-app
  variables:
    KUBE_CONTEXT: -->gitlabproject<--:-->name of the installed agent<--
  before_script:
    - if [ -n "$KUBE_CONTEXT" ]; then kubectl config use-context "$KUBE_CONTEXT"; fi
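For reference, a fuller sketch of such a deploy job; the image, the project path, the agent name and the chart path are placeholders to adapt, not values confirmed by the question:

deploy app:
  stage: deploy
  image:
    name: registry.gitlab.com/gitlab-org/cluster-integration/cluster-applications
    entrypoint: [""]
  variables:
    # format: <path of the project where the agent is registered>:<agent name>
    KUBE_CONTEXT: mygroup/myproject:my-agent
  before_script:
    - if [ -n "$KUBE_CONTEXT" ]; then kubectl config use-context "$KUBE_CONTEXT"; fi
  script:
    - helm upgrade --install wysiwys-ci ./wysiwys-helm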
Hope this question helps others struggling to use GCP.
I am trying to automate deployments of my Strapi app to Google App Engine using Cloud Build. This is my cloudbuild.yaml:
steps:
  - name: 'ubuntu'
    entrypoint: "bash"
    args:
      - "-c"
      - |
        rm -rf app.yaml
        touch app.yaml
        cat <<EOT >> app.yaml
        runtime: custom
        env: flex
        env_variables:
          HOST: '0.0.0.0'
          NODE_ENV: 'production'
          DATABASE_NAME: ${_DATABASE_NAME}
          DATABASE_USERNAME: ${_DATABASE_USERNAME}
          DATABASE_PASSWORD: ${_DATABASE_PASSWORD}
          INSTANCE_CONNECTION_NAME: ${_INSTANCE_CONNECTION_NAME}
        beta_settings:
          cloud_sql_instances: ${_CLOUD_SQL_INSTANCES}
        automatic_scaling:
          min_num_instances: 1
          max_num_instances: 2
        EOT
        cat app.yaml
  - name: 'gcr.io/google.com/cloudsdktool/cloud-sdk'
    entrypoint: 'bash'
    args: ['-c', 'gcloud app deploy app.yaml --project ecomm-backoffice']
If I understand correctly how CI/CD generally works, this file should create an app.yaml and then run the gcloud app deploy app.yaml --project ecomm-backoffice command.
However, Cloud Build is creating nested recursive builds once I push my changes to GitHub (triggers are enabled).
Can someone please help me with the right way of deploying Strapi/Node.js to App Engine using Cloud Build? I have searched through a lot of solutions but haven't had any luck so far.
I am trying to build and deploy a Node.js app using GitLab CI/CD and a Kubernetes cluster. The build passes successfully while the deployment fails. I added the Kubernetes cluster to GitLab (API URL, CA certificate and service token), and the error I get when running kubectl in the deploy job relates to KUBECONFIG. Below is the gitlab-ci.yml that I am using:
stages:
  - build
  - deploy

services:
  - docker:dind

build_app:
  stage: build
  image: docker:git
  only:
    - master
    - develop
  script:
    - docker login -u gitlab-ci-token -p ${CI_BUILD_TOKEN} ${CI_REGISTRY}
    - docker build -t ${CI_REGISTRY}/${CI_PROJECT_PATH} .
    - docker tag ${CI_REGISTRY}/${CI_PROJECT_PATH} ${CI_REGISTRY}/${CI_PROJECT_PATH}:${CI_COMMIT_SHORT_SHA}
    - docker push ${CI_REGISTRY}/${CI_PROJECT_PATH}:${CI_COMMIT_SHORT_SHA}
  variables:
    DOCKER_HOST: tcp://docker:2375/

deploy:
  stage: deploy
  image:
    name: bitnami/kubectl:latest
    entrypoint: [""]
  script:
    - USER_TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)
    - CERTIFICATE_AUTHORITY_DATA=$(cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt | base64 -i -w0 -)
    - kubectl config set-cluster k8s --server="https://kubernetes.default.svc"
    - kubectl config set clusters.k8s.certificate-authority-data ${CERTIFICATE_AUTHORITY_DATA}
    - kubectl config set-credentials gitlab --token="${USER_TOKEN}"
    - kubectl config set-context default --cluster=k8s --user=gitlab
    - kubectl config use-context default
    - kubectl set image deployment test-flight web=${CI_REGISTRY}/${CI_PROJECT_PATH}:${CI_COMMIT_SHORT_SHA} -n test-flight-dev
$ USER_TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)
cat: /var/run/secrets/kubernetes.io/serviceaccount/token: No such file or directory
Update: creating an environment and attaching it to the deploy stage solved the issue of identifying which cluster the deployment targets, so the cluster receives the commands to apply:

environment:
  name: production
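A sketch of how that looks attached to the deploy job above; only the environment block is new, the rest is taken from the question:

deploy:
  stage: deploy
  image:
    name: bitnami/kubectl:latest
    entrypoint: [""]
  environment:
    name: production   # attaching an environment tells GitLab which cluster the job targets
  script:
    - kubectl set image deployment test-flight web=${CI_REGISTRY}/${CI_PROJECT_PATH}:${CI_COMMIT_SHORT_SHA} -n test-flight-dev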
I have a CircleCI config.yml configured to build and deploy the code, and I wanted that config.yml to run in an Azure DevOps pipeline, but I am getting the error below. Could you point out what needs to change in my script for it to run in Azure DevOps? I am new to YAML configuration and new to Azure DevOps, so any help is appreciated.
Error:
config.yml:
#
# Required variables
#
# Production:
#   - GCLOUD_SERVICE_KEY_PRODUCTION
#   - GCLOUD_PROJECT_ID_PRODUCTION
#   - GCLOUD_PROJECT_CLUSTER_ID_PRODUCTION
#   - GCLOUD_PROJECT_CLUSTER_ZONE_PRODUCTION
#
# Staging:
#   - GCLOUD_SERVICE_KEY_STAGING
#   - GCLOUD_PROJECT_ID_STAGING
#   - GCLOUD_PROJECT_CLUSTER_ID_STAGING
#   - GCLOUD_PROJECT_CLUSTER_ZONE_STAGING
#

gcp_runtime: &gcp_runtime
  docker:
    - image: boiyaa/google-cloud-sdk-nodejs

setup-production_credentials: &setup-production_credentials
  run:
    name: Setup credentials to act on behalf of circle service account
    command: |
      echo ${GCLOUD_SERVICE_KEY_PRODUCTION} > ${HOME}/gcp-key.json
      gcloud auth activate-service-account --key-file ${HOME}/gcp-key.json
      gcloud container clusters get-credentials ${GCLOUD_PROJECT_CLUSTER_ID_PRODUCTION} \
        --zone ${GCLOUD_PROJECT_CLUSTER_ZONE_PRODUCTION} \
        --project ${GCLOUD_PROJECT_ID_PRODUCTION}

setup-staging_credentials: &setup-staging_credentials
  run:
    name: Setup credentials to act on behalf of circle service account
    command: |
      echo ${GCLOUD_SERVICE_KEY_STAGING} > ${HOME}/gcp-key.json
      gcloud auth activate-service-account --key-file ${HOME}/gcp-key.json
      gcloud container clusters get-credentials ${GCLOUD_PROJECT_CLUSTER_ID_STAGING} \
        --zone ${GCLOUD_PROJECT_CLUSTER_ZONE_STAGING} \
        --project ${GCLOUD_PROJECT_ID_STAGING}

setup-production-env: &setup-production-env
  run:
    name: Setup env for production
    command: |
      rm -f .env
      echo "REACT_APP_API_URL=${REACT_APP_API_URL_PRODUCTION}" >> .env
      echo "REACT_APP_SOCIAL_API_URL=${REACT_APP_SOCIAL_API_URL_PRODUCTION}" >> .env
      echo "REACT_APP_WEB_URL=${REACT_APP_WEB_URL_PRODUCTION}" >> .env
      echo "REACT_APP_AUTH0_DOMAIN=${REACT_APP_AUTH0_DOMAIN_PRODUCTION}" >> .env
      echo "REACT_APP_AUTH0_CLIENT_ID=${REACT_APP_AUTH0_CLIENT_ID_PRODUCTION}" >> .env
      echo "REACT_APP_PUSHER_KEY=${REACT_APP_PUSHER_KEY_PRODUCTION}" >> .env
      echo "REACT_APP_PUSHER_CLUSTER=${REACT_APP_PUSHER_CLUSTER_PRODUCTION}" >> .env
      echo "REACT_APP_VALID_DOMAIN=${REACT_APP_VALID_DOMAIN_PRODUCTION}" >> .env

setup-staging-env: &setup-staging-env
  run:
    name: Setup env for staging
    command: |
      rm -f .env
      echo "REACT_APP_API_URL=${REACT_APP_API_URL_STAGING}" >> .env
      echo "REACT_APP_SOCIAL_API_URL=${REACT_APP_SOCIAL_API_URL_STAGING}" >> .env
      echo "REACT_APP_WEB_URL=${REACT_APP_WEB_URL_STAGING}" >> .env
      echo "REACT_APP_AUTH0_DOMAIN=${REACT_APP_AUTH0_DOMAIN_STAGING}" >> .env
      echo "REACT_APP_AUTH0_CLIENT_ID=${REACT_APP_AUTH0_CLIENT_ID_STAGING}" >> .env
      echo "REACT_APP_PUSHER_KEY=${REACT_APP_PUSHER_KEY_STAGING}" >> .env
      echo "REACT_APP_PUSHER_CLUSTER=${REACT_APP_PUSHER_CLUSTER_STAGING}" >> .env
      echo "REACT_APP_VALID_DOMAIN=${REACT_APP_VALID_DOMAIN_STAGING}" >> .env

build_docker_images: &build_docker_images
  run:
    name: build and cache all docker images first and fail before deploying
    command: |
      true || docker build --build-arg CIRCLE_BUILD_NUM=${CIRCLE_BUILD_NUM:-0} -f ./Dockerfile -t web .

deploy_script_production: &deploy_script_production
  run:
    name: Deploy the application to prod
    command: bash ./deploy/deploy-all.sh prod

deploy_script_staging: &deploy_script_staging
  run:
    name: Deploy the application to staging
    command: bash ./deploy/deploy-all.sh staging

deploy-production: &deploy-production
  steps:
    - checkout
    - setup_remote_docker:
        docker_layer_caching: true
    - *build_docker_images
    - *setup-production-env
    - *setup-production_credentials
    - *deploy_script_production

deploy-staging: &deploy-staging
  steps:
    - checkout
    - setup_remote_docker:
        docker_layer_caching: true
    - *build_docker_images
    - *setup-staging-env
    - *setup-staging_credentials
    - *deploy_script_staging

version: 2
jobs:
  deploy_to_production:
    <<: *gcp_runtime
    environment:
      ENVIRONMENT: production
      SKIP_BASE: "true"
    <<: *deploy-production
  deploy_to_staging:
    <<: *gcp_runtime
    environment:
      ENVIRONMENT: staging
      SKIP_BASE: "true"
    <<: *deploy-staging

workflows:
  version: 2
  deploy_to_production:
    jobs:
      - deploy_to_production:
          filters:
            branches:
              only: production
  deploy_to_staging:
    jobs:
      - deploy_to_staging:
          filters:
            branches:
              only: staging
As stated in the Azure DevOps documentation:
Note: Azure Pipelines doesn't support all features of YAML, such as anchors, complex keys, and sets.
This means that you need to do away with all anchors (and aliases) in your YAML file. Moreover, you cannot expect a CircleCI configuration to be a valid Azure DevOps configuration; they are different tools with different configuration structures.
You should start by reading the Azure DevOps docs and then rewrite your file accordingly. This is not a trivial modification of the file.
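For orientation only, a minimal azure-pipelines.yml has a very different shape. The sketch below is not a translation of the CircleCI config above: the variable name is a placeholder, the deploy script path is taken from the question, and the step assumes the gcloud CLI is available on the agent:

trigger:
  branches:
    include:
      - production
      - staging

pool:
  vmImage: ubuntu-latest

steps:
  - checkout: self
  - script: |
      # write the service-account key from a pipeline variable and authenticate
      echo "$(GCLOUD_SERVICE_KEY)" > $(Agent.TempDirectory)/gcp-key.json
      gcloud auth activate-service-account --key-file $(Agent.TempDirectory)/gcp-key.json
    displayName: Authenticate to GCP
  - script: bash ./deploy/deploy-all.sh prod
    displayName: Deploy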