I have a cloudbuild.yaml file where I'm trying to use the Helm builder image.
Inside my step I want to have access to secrets from GCP Secret Manager, but I cannot use them in the regular way, similar to this case.
Is it possible to use a "helm step" with secrets from GCP Secret Manager?
Something like this:
- name: gcr.io/$PROJECT_ID/helm
  entrypoint: 'bash'
  args:
    - -c
    - |
      helm upgrade $_NAME ./deployment/charts/$_NAME --namespace $_NAMESPACE --set secret.var3="$$VAR3"
[EDIT]
To be more precise, here is what my cloudbuild.yaml looks like when I use the "helm step" in the classic way:
steps:
- name: gcr.io/$PROJECT_ID/helm
  args:
    - upgrade
    - "$_NAME"
    - "./deployment/charts/$_NAME"
    - "--namespace"
    - "$_NAMESPACE"
    - "--set"
    - "secret.var3=$$VAR3"
  env:
    - "CLOUDSDK_COMPUTE_ZONE=$_GKE_LOCATION"
    - "CLOUDSDK_CONTAINER_CLUSTER=$_GKE_CLUSTER"
  secretEnv: ['VAR3']
  id: Apply deploy
substitutions:
  _GKE_LOCATION: europe-west3-b
  _GKE_CLUSTER: cluster-name
  _NAME: "test"
  _NAMESPACE: "test"
availableSecrets:
  secretManager:
    - versionName: projects/$PROJECT_ID/secrets/test-var-3/versions/latest
      env: 'VAR3'
options:
  substitution_option: 'ALLOW_LOOSE'
The step works fine, but my variable VAR3 ends up equal to the literal string "$VAR3", not to the value hidden behind it, so following the documentation I tried something like this:
steps:
- name: gcr.io/$PROJECT_ID/helm
  entrypoint: 'helm'
  args:
    - |
      upgrade $_NAME ./deployment/charts/$_NAME --namespace $_NAMESPACE --set secret.var3="$$VAR3"
  env:
    - "CLOUDSDK_COMPUTE_ZONE=$_GKE_LOCATION"
    - "CLOUDSDK_CONTAINER_CLUSTER=$_GKE_CLUSTER"
  secretEnv: ['VAR3']
  id: Apply deploy
substitutions:
  _GKE_LOCATION: europe-west3-b
  _GKE_CLUSTER: cluster-name
  _NAME: "test"
  _NAMESPACE: "test"
availableSecrets:
  secretManager:
    - versionName: projects/$PROJECT_ID/secrets/test-var-3/versions/latest
      env: 'VAR3'
options:
  substitution_option: 'ALLOW_LOOSE'
but then I got an error:
UPGRADE FAILED: Kubernetes cluster unreachable: Get
"http://localhost:8080/version?timeout=32s": dial tcp 127.0.0.1:8080:
connect: connection refused
You forgot to use the secretEnv values through a shell, as shown in the example below.
Example:
steps:
- name: 'gcr.io/cloud-builders/docker'
  entrypoint: 'bash'
  args: ['-c', 'docker login --username=$$USERNAME --password=$$PASSWORD']
  secretEnv: ['USERNAME', 'PASSWORD']
Read more about it: https://cloud.google.com/build/docs/securing-builds/use-secrets#access-utf8-secrets
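Applied to the helm step from the question, a minimal sketch could look like this. It assumes the community helm builder image also contains gcloud, since overriding the entrypoint with bash skips the builder's wrapper script that normally fetches cluster credentials (which would explain the "cluster unreachable" error above):

- name: gcr.io/$PROJECT_ID/helm
  entrypoint: 'bash'
  args:
    - -c
    - |
      # Overriding the entrypoint bypasses the builder's credential setup,
      # so authenticate against the cluster explicitly (assumes gcloud is
      # available in the image).
      gcloud container clusters get-credentials "$_GKE_CLUSTER" --zone "$_GKE_LOCATION"
      # $$VAR3 is expanded by bash from secretEnv, not substituted by Cloud Build.
      helm upgrade $_NAME ./deployment/charts/$_NAME --namespace $_NAMESPACE --set secret.var3="$$VAR3"
  env:
    - "CLOUDSDK_COMPUTE_ZONE=$_GKE_LOCATION"
    - "CLOUDSDK_CONTAINER_CLUSTER=$_GKE_CLUSTER"
  secretEnv: ['VAR3']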
Related
I installed the new GitLab agent for my Kubernetes cluster. Everything works when I use kubectl, but I get an error when I try to deploy to Azure Cloud with a Helm chart.
My .gitlab-ci.yml:
variables:
  # registry variable
  REGISTRY: registry.gitlab.com
  # docker-image tag
  DOCKER_IMAGE_TAG: ${CI_COMMIT_SHA}
  # target variable
  TARGET: metrix9/wysiwys-ic

stages:
  - build
  - package
  - deploy

# job to build gradle application and save the jar file in artifacts
build docker image:
  image: gradle
  stage: build
  before_script:
    - chmod +x ./gradlew
  script:
    - ./gradlew jib -Djib.to.auth.username=$CI_REGISTRY_USER -Djib.to.auth.password=$CI_REGISTRY_PASSWORD -Djib.from.auth.username=$CI_REGISTRY_USER -Djib.from.auth.password=$CI_REGISTRY_PASSWORD

# job to push file-server docker image
package wysiwys image:
  stage: package
  image: docker.io/library/docker
  #dependencies:
  #  - build
  services:
    - name: docker:dind
  before_script:
    - IMAGE=${CI_REGISTRY}/${TARGET}
    - docker login -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD $CI_REGISTRY
    - docker pull "${IMAGE}:latest" || true
  script:
    #- docker build --tag "${IMAGE}:latest" .
    - docker push "${IMAGE}:latest"

# job to package and push the file-server helm chart
package wysiwys-ic helm:
  stage: package
  image:
    name: alpine/helm
    entrypoint: [""]
  before_script:
    - helm repo add --username $CI_REGISTRY_USER --password $CI_REGISTRY_PASSWORD wysiwys-ci-repo https://gitlab.com/api/v4/projects/$CI_PROJECT_ID/packages/helm/stable
    - helm plugin install https://github.com/chartmuseum/helm-push
  script:
    - helm package wysiwys-helm
    - helm cm-push ./wysiwys-helm-0.1.0.tgz wysiwys-ci-repo

# job to install convert2pdf with helm chart
install wysiwys-ic:
  stage: deploy
  image:
    name: alpine/helm
    entrypoint: [""]
  before_script:
    - helm repo add bitnami https://charts.bitnami.com/bitnami -n Convert2pdf-repo
  script:
    - helm upgrade --install wysiwys-ci ./wysiwys-helm
GitLab agent: [screenshot]
I tried exporting the KUBECONFIG and running helm repo update in the pipeline, but the same error comes out...
I was struggling with the same issue. First, use an image that contains both helm and kubectl (e.g. registry.gitlab.com/gitlab-org/cluster-integration/cluster-applications) and try adding the following changes in the deployment part:
deploy app:
  stage: deploy-app
  variables:
    KUBE_CONTEXT: -->gitlabproject<--:-->name of the installed agent<--
  before_script:
    - if [ -n "$KUBE_CONTEXT" ]; then kubectl config use-context "$KUBE_CONTEXT"; fi
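Folding that into the deploy job from the question, a sketch could look like this (the KUBE_CONTEXT value stays a placeholder; agent contexts are named after the path of the project that registered the agent plus the agent name):

install wysiwys-ic:
  stage: deploy
  image: registry.gitlab.com/gitlab-org/cluster-integration/cluster-applications
  variables:
    KUBE_CONTEXT: -->gitlabproject<--:-->name of the installed agent<--
  before_script:
    # Point kubectl (and therefore helm) at the agent's context.
    - if [ -n "$KUBE_CONTEXT" ]; then kubectl config use-context "$KUBE_CONTEXT"; fi
  script:
    - helm upgrade --install wysiwys-ci ./wysiwys-helm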
I need to change the values of AIRFLOW__LOGGING__REMOTE_BASE_LOG_FOLDER and AIRFLOW__KUBERNETES__WORKER_CONTAINER_REPOSITORY via CI/CD variables. These values are present in the airflow_template.yaml file. I tried substituting the CI/CD variables, but it is not working. If there is a better way to parameterize this, please let me know.
My project folder structure looks like below:

dataops
  -- docker
     -- base
        -- airflow.cfg
        -- **airflow_template.yaml**
        -- Dockerfile
     -- dag-image
        -- Dockerfile
  -- helm
     -- Chart.yaml
     -- values.yaml
     -- templates
        -- deployment.yaml
        -- svc.yaml
**airflow_template.yaml**
apiVersion: v1
kind: Pod
metadata:
  labels: {}
spec:
  containers:
    - args: []
      command: []
      env:
        - name: AIRFLOW__KUBERNETES__WORKER_CONTAINER_REPOSITORY
          value: $DEV_AIRFLOW_CONTAINER_REPO
        - name: AIRFLOW__LOGGING__REMOTE_BASE_LOG_FOLDER
          value: $DEV_AIRFLOW_LOG_FOLDER
      envFrom: []
      imagePullPolicy: Always
      name: base
      ports: []
      volumeMounts:
        - mountPath: /usr/local/airflow/logs
          name: airflow-logs
  hostNetwork: false
  imagePullSecrets: []
  initContainers: []
  nodeSelector: {}
  restartPolicy: Never
  securityContext:
    runAsUser: 1000
  serviceAccountName: default
  volumes:
    - emptyDir: {}
      name: airflow-logs
gitlab-ci.yml
stages:
  - build_and_upload
  - deploy_to_dev
  - tag_prod
  - deploy_to_prod

build_and_upload:
  stage: build_and_upload
  image: docker:latest
  variables:
    DOCKER_DRIVER: overlay2
    DOCKER_TLS_CERTDIR: "/certs"
  services:
    - docker:19.03.14-dind
  script:
    - echo $DEV_CREDENTIALS > service_account.json && cat service_account.json | docker login -u _json_key --password-stdin https://gcr.io
    - echo "as- $DEV_AIRFLOW_LOG_FOLDER"
    - export DEV_AIRFLOW_LOG_FOLDER="${DEV_AIRFLOW_LOG_FOLDER}"
    - mkdir -p edfi/operation
    - cp -r airflow_dags/ dataops/docker/dag-image/airflow_dags/
    - cd dataops/docker/dag-image/
    - docker build -t "$DEV_DAGS_IMAGE:$CI_COMMIT_SHORT_SHA" --build-arg COMMIT_HASH=$CI_COMMIT_SHORT_SHA .
    - docker tag $DEV_DAGS_IMAGE:$CI_COMMIT_SHORT_SHA $DEV_DAGS_IMAGE:latest
    - docker push $DEV_DAGS_IMAGE:$CI_COMMIT_SHORT_SHA
    - docker push $DEV_DAGS_IMAGE:latest
  only:
    refs:
      - develop
  # variables:
  #   - $CI_COMMIT_MESSAGE =~ /penguin/

deploy_to_dev:
  stage: deploy_to_dev
  image: $CI_REGISTRY_IMAGE:kube-image
  script:
    - echo $DEV_CREDENTIALS > service_account.json && cat service_account.json | docker login -u _json_key --password-stdin https://gcr.io
    - echo "as- $DEV_AIRFLOW_LOG_FOLDER"
    - export DEV_AIRFLOW_CONTAINER_REPO="${DEV_AIRFLOW_CONTAINER_REPO}"
    - export DEV_AIRFLOW_LOG_FOLDER="${DEV_AIRFLOW_LOG_FOLDER}"
    - gcloud auth activate-service-account $DEV_SERVICE_ACCOUNT --key-file=./service_account.json --project=$DEV_PROJECT_NAME
    - gcloud container clusters get-credentials $DEV_GKE_CLUSTER --region $REGION
    - echo $DEV_DB_CONN > dataops/helm/airflow-loadbalancer/files/secrets/airflow/AIRFLOW__CORE__SQL_ALCHEMY_CONN
    - cd dataops/helm/
    - helm upgrade airflow-dev airflow-loadbalancer/ --install --atomic --set dags_image.tag=$CI_COMMIT_SHORT_SHA
  only:
    refs:
      - develop
You could make it a jinja2 template and use a small Python program to interpolate the values into the template.
Then you also have all the flexibility to use environment variables or something else.
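For example, a minimal sketch (file names are illustrative; it assumes the $DEV_* placeholders in airflow_template.yaml are rewritten as Jinja2 variables such as {{ DEV_AIRFLOW_CONTAINER_REPO }}):

# render_template.py - interpolate CI/CD variables into the pod template.
import os
from jinja2 import Template

with open("airflow_template.yaml.j2") as f:
    template = Template(f.read())

# The CI/CD variables are exposed to the job as environment variables.
rendered = template.render(
    DEV_AIRFLOW_CONTAINER_REPO=os.environ["DEV_AIRFLOW_CONTAINER_REPO"],
    DEV_AIRFLOW_LOG_FOLDER=os.environ["DEV_AIRFLOW_LOG_FOLDER"],
)

with open("airflow_template.yaml", "w") as f:
    f.write(rendered)

The deploy_to_dev job could then run python render_template.py before the helm upgrade step, so the rendered file carries the real values.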
I am trying to set up a deployment pipeline to configure an Azure Kubernetes Service cluster from GitHub Actions. I have found actions on the GitHub Actions marketplace for the various steps, but I cannot get any combination of them to work correctly. I keep getting errors saying
error loading config file "/home/runner/work/_temp/kubeconfig_xxxx": yaml: did not find expected key
or similar errors saying
error loading config file couldn't get version/kind; json parse error: json: cannot unmarshal array into Go value of type struct { APIVersion string "json:\"apiVersion,omitempty\""; Kind string "json:\"kind,omitempty\"" }
depending on how I try to pass the kube_config from Terraform. If I run the same steps locally it works, so I am assuming there is something wrong with how things are set up on GitHub Actions.
Here is my deployment file:
name: Deploy

on:
  workflow_dispatch:
    inputs:
      <redacted>

jobs:
  deploy:
    name: Deploy
    runs-on: ubuntu-18.04
    env:
      <redacted>
    defaults:
      run:
        shell: bash
    steps:
      - uses: actions/checkout@v2
      - uses: azure/login@v1.1
        with:
          creds: ${{ <redacted> }}
      - name: Generate Terraform backend
        uses: azure/cli@v1.0.3
        with:
          azcliversion: 2.11.1
          inlineScript: |
            <redacted>
      - uses: hashicorp/setup-terraform@v1.1.0
        with:
          terraform_version: 0.13.0
      - name: Terraform Init
        run: |
          terraform init
      - name: Terraform Plan
        run: |
          terraform plan \
            <redacted>
            -out=tfplan
      - name: Terraform Apply
        run: |
          terraform apply \
            -auto-approve \
            tfplan
      - uses: azure/setup-kubectl@v1
        with:
          version: 'v1.19.2'
      - uses: azure/setup-helm@v1
        with:
          version: 'v3.3.1'
      - name: Save Config
        run: |
          terraform output kube_config > ./aks.yml
      - name: Set Env
        run: |
          echo ::set-env name=XXX::$(cat ./aks.yml)
      - uses: azure/k8s-set-context@v1
        with:
          method: kubeconfig
          kubeconfig: "${{ env.XXX }}"
      - name: Test
        run: |
          kubectl get pods -o wide
I have tried setting KUBECONFIG and getting pods in one step using bash, and it also fails. Any ideas what I am missing? Thanks in advance!
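One thing worth checking in the Set Env step: the deprecated ::set-env workflow command only handles single-line values, and a kubeconfig is multi-line, so the value that reaches azure/k8s-set-context can arrive truncated or mangled, which would match the YAML parse errors above. A hedged sketch of a multiline-safe variant, using the GITHUB_ENV file that replaced set-env:

- name: Set Env
  run: |
    # Multi-line values need the GITHUB_ENV file with a delimiter;
    # the NAME<<EOF ... EOF form is GitHub's documented multiline syntax.
    {
      echo 'XXX<<EOF'
      cat ./aks.yml
      echo 'EOF'
    } >> "$GITHUB_ENV"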
I'm using Auto DevOps with GitLab CI (which uses Auto Deploy from Auto DevOps). When I deploy, the name of the workload is "production", but I would like to change that name because I want to host several websites.
I tried to change the name like this:
environment:
  name: nameofmyproject
but after deployment the website returns a 503 Service Temporarily Unavailable error.
Do you have an idea?
My GitLab and Kubernetes workload: [screenshot]
My .gitlab-ci.yml:
image: alpine:latest

variables:
  # KUBE_INGRESS_BASE_DOMAIN is the application deployment domain and should be set as a variable at the group or project level.
  # KUBE_INGRESS_BASE_DOMAIN: domain.example.com
  DISABLE_POSTGRES: "yes"
  POSTGRES_USER: user
  POSTGRES_PASSWORD: testing-password
  POSTGRES_ENABLED: "true"
  POSTGRES_DB: $CI_ENVIRONMENT_SLUG
  POSTGRES_VERSION: 9.6.2
  DOCKER_DRIVER: overlay2
  ROLLOUT_RESOURCE_TYPE: deployment
  DOCKER_TLS_CERTDIR: "" # https://gitlab.com/gitlab-org/gitlab-runner/issues/4501

stages:
  - build
  - production

build:
  stage: build
  image: "registry.gitlab.com/gitlab-org/cluster-integration/auto-build-image/master:stable"
  variables:
    DOCKER_TLS_CERTDIR: ""
  services:
    - docker:stable-dind
  script:
    - |
      if [[ -z "$CI_COMMIT_TAG" ]]; then
        export CI_APPLICATION_REPOSITORY=${CI_APPLICATION_REPOSITORY:-$CI_REGISTRY_IMAGE/$CI_COMMIT_REF_SLUG}
        export CI_APPLICATION_TAG=${CI_APPLICATION_TAG:-$CI_COMMIT_SHA}
      else
        export CI_APPLICATION_REPOSITORY=${CI_APPLICATION_REPOSITORY:-$CI_REGISTRY_IMAGE}
        export CI_APPLICATION_TAG=${CI_APPLICATION_TAG:-$CI_COMMIT_TAG}
      fi
    - /build/build.sh
  only:
    - branches
    - tags

.auto-deploy:
  image: "registry.gitlab.com/gitlab-org/cluster-integration/auto-deploy-image:v0.9.1"

.production: &production_template
  extends: .auto-deploy
  stage: production
  script:
    - auto-deploy check_kube_domain
    - auto-deploy download_chart
    - auto-deploy ensure_namespace
    - auto-deploy initialize_tiller
    - auto-deploy create_secret
    - auto-deploy deploy
    - auto-deploy delete canary
    - auto-deploy delete rollout
    - auto-deploy persist_environment_url
  environment:
    name: production
    url: http://$CI_PROJECT_PATH_SLUG.$KUBE_INGRESS_BASE_DOMAIN
  artifacts:
    paths: [environment_url.txt]

production:
  <<: *production_template
  only:
    refs:
      - master
    kubernetes: active
You can set the variable ADDITIONAL_HOSTS or override CI_PROJECT_PATH_SLUG.
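For example (a sketch; the domains are illustrative, and it assumes an Auto Deploy chart version that reads ADDITIONAL_HOSTS as a comma-separated list of extra Ingress hosts):

variables:
  # Extra fully qualified domain names added to the Ingress by Auto Deploy.
  ADDITIONAL_HOSTS: "second-site.example.com,third-site.example.com"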
I am trying to configure GitLab CI to deploy an app to Google Compute Engine. I have successfully pushed the image to the GitLab registry, but after applying the Kubernetes deployment config I see the following error in kubectl describe pods:
Failed to pull image "registry.gitlab.com/proj/subproj/api:v1": rpc error: code = 2
desc = Error response from daemon: {"message":"Get https://registry.gitlab.com/v2/proj/subproj/api/manifests/v1: unauthorized: HTTP Basic: Access denied"}
Here is my deployment gitlab-ci job:
docker:
  stage: docker_images
  image: docker:latest
  services:
    - docker:dind
  script:
    - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" $CI_REGISTRY
    - docker build -t registry.gitlab.com/proj/subproj/api:v1 -f Dockerfile .
    - docker push registry.gitlab.com/proj/subproj/api:v1
  only:
    - master
  dependencies:
    - build_java

k8s-deploy:
  image: google/cloud-sdk
  stage: deploy
  script:
    - echo "$GOOGLE_KEY" > key.json # Google Cloud service account key
    - gcloud auth activate-service-account --key-file key.json
    - gcloud config set compute/zone us-central1-c
    - gcloud config set project proj
    - gcloud config set container/use_client_certificate True
    - gcloud container clusters get-credentials proj-cluster
    - kubectl delete secret registry.gitlab.com --ignore-not-found
    - kubectl create secret docker-registry registry.gitlab.com --docker-server=https://registry.gitlab.com/v1/ --docker-username="$CI_REGISTRY_USER" --docker-password="$CI_REGISTRY_PASSWORD" --docker-email=some@gmail.com
    - kubectl apply -f cloud-kubernetes.yml
and here is cloud-kubernetes.yml:
apiVersion: v1
kind: Service
metadata:
  annotations:
    service.alpha.kubernetes.io/tolerate-unready-endpoints: "true"
  name: proj
  labels:
    app: proj
spec:
  type: LoadBalancer
  ports:
    - port: 8082
      name: proj
      targetPort: 8082
      nodePort: 32756
  selector:
    app: proj
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: projdeployment
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: proj
    spec:
      containers:
        - name: projcontainer
          image: registry.gitlab.com/proj/subproj/api:v1
          imagePullPolicy: Always
          env:
            - name: SPRING_PROFILES_ACTIVE
              value: "cloud"
          ports:
            - containerPort: 8082
      imagePullSecrets:
        - name: registry.gitlab.com
I have followed this article
There is a workaround: the image can be pushed to Google Container Registry instead and then pulled from GCR without extra credentials. We can push an image to GCR without the gcloud CLI by using the JSON key file. So .gitlab-ci.yml could look like:
docker:
  stage: docker_images
  image: docker:latest
  services:
    - docker:dind
  script:
    - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" $CI_REGISTRY
    - docker build -t registry.gitlab.com/proj/subproj/api:v1 -f Dockerfile .
    - docker push registry.gitlab.com/proj/subproj/api:v1
    - docker tag registry.gitlab.com/proj/subproj/api:v1 gcr.io/proj/api:v1
    - docker login -u _json_key -p "$GOOGLE_KEY" https://gcr.io
    - docker push gcr.io/proj/api:v1
  only:
    - master
  dependencies:
    - build_java

k8s-deploy:
  image: google/cloud-sdk
  stage: deploy
  script:
    - echo "$GOOGLE_KEY" > key.json # Google Cloud service account key
    - gcloud auth activate-service-account --key-file key.json
    - gcloud config set compute/zone us-central1-c
    - gcloud config set project proj
    - gcloud config set container/use_client_certificate True
    - gcloud container clusters get-credentials proj-cluster
    - kubectl apply -f cloud-kubernetes.yml
And the image in cloud-kubernetes.yml should be:

gcr.io/proj/api:v1
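So the container spec from the question would become something like this (only the image reference changes; on GKE in the same project, nodes can pull from gcr.io with their default service account, so the imagePullSecrets entry can be dropped):

containers:
  - name: projcontainer
    image: gcr.io/proj/api:v1
    imagePullPolicy: Always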
You must use --docker-server=$CI_REGISTRY, the same value you use for docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" $CI_REGISTRY.
Also note that your docker registry secret must be in the same namespace as the Deployment/ReplicaSet/DaemonSet/StatefulSet/Job that uses it.
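A corrected version of the secret creation from the question could look like this sketch (the namespace is an assumption; match it to wherever the Deployment is applied):

kubectl create secret docker-registry registry.gitlab.com \
  --docker-server="$CI_REGISTRY" \
  --docker-username="$CI_REGISTRY_USER" \
  --docker-password="$CI_REGISTRY_PASSWORD" \
  --docker-email=some@gmail.com \
  --namespace=default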