GitLab Auto DevOps deployment - change workload and container name - Kubernetes

I'm using Auto DevOps (or GitLab CI, which uses Auto Deploy from Auto DevOps). When I deploy, the workload is named production, but I would like to change that name because I want to host several websites.
I tried to change the name like this:
environment:
  name: nameofmyproject
but after deployment the website returns a 503 Service Temporarily Unavailable.
Do you have any idea?
My GitLab pipeline and Kubernetes workloads: [screenshot]
My .gitlab-ci.yml:
image: alpine:latest

variables:
  # KUBE_INGRESS_BASE_DOMAIN is the application deployment domain and should be set as a variable at the group or project level.
  # KUBE_INGRESS_BASE_DOMAIN: domain.example.com
  DISABLE_POSTGRES: "yes"
  POSTGRES_USER: user
  POSTGRES_PASSWORD: testing-password
  POSTGRES_ENABLED: "true"
  POSTGRES_DB: $CI_ENVIRONMENT_SLUG
  POSTGRES_VERSION: 9.6.2
  DOCKER_DRIVER: overlay2
  ROLLOUT_RESOURCE_TYPE: deployment
  DOCKER_TLS_CERTDIR: ""  # https://gitlab.com/gitlab-org/gitlab-runner/issues/4501

stages:
  - build
  - production

build:
  stage: build
  image: "registry.gitlab.com/gitlab-org/cluster-integration/auto-build-image/master:stable"
  variables:
    DOCKER_TLS_CERTDIR: ""
  services:
    - docker:stable-dind
  script:
    - |
      if [[ -z "$CI_COMMIT_TAG" ]]; then
        export CI_APPLICATION_REPOSITORY=${CI_APPLICATION_REPOSITORY:-$CI_REGISTRY_IMAGE/$CI_COMMIT_REF_SLUG}
        export CI_APPLICATION_TAG=${CI_APPLICATION_TAG:-$CI_COMMIT_SHA}
      else
        export CI_APPLICATION_REPOSITORY=${CI_APPLICATION_REPOSITORY:-$CI_REGISTRY_IMAGE}
        export CI_APPLICATION_TAG=${CI_APPLICATION_TAG:-$CI_COMMIT_TAG}
      fi
    - /build/build.sh
  only:
    - branches
    - tags

.auto-deploy:
  image: "registry.gitlab.com/gitlab-org/cluster-integration/auto-deploy-image:v0.9.1"

.production: &production_template
  extends: .auto-deploy
  stage: production
  script:
    - auto-deploy check_kube_domain
    - auto-deploy download_chart
    - auto-deploy ensure_namespace
    - auto-deploy initialize_tiller
    - auto-deploy create_secret
    - auto-deploy deploy
    - auto-deploy delete canary
    - auto-deploy delete rollout
    - auto-deploy persist_environment_url
  environment:
    name: production
    url: http://$CI_PROJECT_PATH_SLUG.$KUBE_INGRESS_BASE_DOMAIN
  artifacts:
    paths: [environment_url.txt]

production:
  <<: *production_template
  only:
    refs:
      - master
    kubernetes: active

You can set the variable ADDITIONAL_HOSTS, or override CI_PROJECT_PATH_SLUG.
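For example, ADDITIONAL_HOSTS adds extra host names to the Ingress that Auto Deploy creates, so a second site can be served without renaming the environment. A minimal sketch on top of the production job above (the extra host name is made up for illustration):

production:
  <<: *production_template
  variables:
    # extra Ingress host, comma-separated list is also accepted
    ADDITIONAL_HOSTS: "mysecondsite.$KUBE_INGRESS_BASE_DOMAIN"
  environment:
    name: production
    url: http://mysecondsite.$KUBE_INGRESS_BASE_DOMAIN
  only:
    refs:
      - master
    kubernetes: active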

Related

Github Actions - Invalid workflow file

I am trying to build CI/CD pipelines using GitHub Actions but, unfortunately, I am stuck with an error in the YAML file.
Here is my YAML file:
---
name: Build and push python code to gcp with github actions
on:
  push:
    branches:
      - main
jobs:
  build_push_grc:
    name: Build and push to gcr
    runs_on: unbuntu-latest
      env:
        IMAGE_NAME: learning_cicd
        PROJECT_ID: personal-370316
    steps:
      - name: Checkoutstep
        uses: actions/checkout#v2
      - uses: google-github-actions/setup-gcloud#master
        with:
          service_account_key: ${{ secrets.SERVICE_ACCOUNT_KEY}}
          project_id: ${{ env.PROJECT_ID }}
          export_default_credentials: true
      - name: Build Docker Image
        run: docker build -t $IMAGE_NAME:latest .
      - name: Configure Docker Client
        run: |-
          gcloud auth configure-docker --quiet
      - name: Push Docker Image to Container Registry (GCR)
        env:
          GIT_TAG: v0.1.0
        run: |-
          docker tag $IMAGE_NAME:latest gcr.io/$PROJECT_ID/$IMAGE_NAME:latest
          docker tag $IMAGE_NAME:latest gcr.io/$PROJECT_ID/$IMAGE_NAME:$GIT_TAG
          docker push gcr.io/$PROJECT_ID/$IMAGE_NAME:latest
          docker push gcr.io/$PROJECT_ID/$IMAGE_NAME:$GIT_TAG
Here is the error I am stuck with:
GitHub Actions
/ .github/workflows/gcp.yaml
Invalid workflow file
You have an error in your yaml syntax on line 15
I tried all the possible indentations I could find on the internet but had no luck. I also tried a YAML linter but still could not find where the error comes from. Please point me to where I am going wrong.
Thanks.
The runs-on key (not runs_on) should be indented two spaces relative to the job identifier, and the OS should be ubuntu-latest.
Then, env should have the same indentation as runs-on and name, the same as steps.
Here is the corrected workflow:
---
name: Build and push python code to gcp with github actions
on:
  push:
    branches:
      - main
jobs:
  build_push_grc:
    name: Build and push to gcr
    runs-on: ubuntu-latest
    env:
      IMAGE_NAME: learning_cicd
      PROJECT_ID: personal-370316
    steps:
      - name: Checkoutstep
        uses: actions/checkout@v2
      - uses: google-github-actions/setup-gcloud@master
        with:
          service_account_key: ${{ secrets.SERVICE_ACCOUNT_KEY }}
          project_id: ${{ env.PROJECT_ID }}
          export_default_credentials: true
      - name: Build Docker Image
        run: docker build -t $IMAGE_NAME:latest .
      - name: Configure Docker Client
        run: |-
          gcloud auth configure-docker --quiet
      - name: Push Docker Image to Container Registry (GCR)
        env:
          GIT_TAG: v0.1.0
        run: |-
          docker tag $IMAGE_NAME:latest gcr.io/$PROJECT_ID/$IMAGE_NAME:latest
          docker tag $IMAGE_NAME:latest gcr.io/$PROJECT_ID/$IMAGE_NAME:$GIT_TAG
          docker push gcr.io/$PROJECT_ID/$IMAGE_NAME:latest
          docker push gcr.io/$PROJECT_ID/$IMAGE_NAME:$GIT_TAG
I would recommend debugging such issues in the GitHub file edit form (editing the yml file in the .github/workflows directory). It will highlight all the issues regarding the workflow syntax. Demo.

Using secrets from GCP Secret Manager in Helm GCP Cloud Builder

I have a cloudbuild.yaml file where I'm trying to use the helm image.
Inside my step I want to have access to secrets from GCP Secret Manager, but I cannot use them in the regular way, similar to this case.
Is it possible to use a "helm step" with secrets from GCP Secret Manager?
Something like this:
- name: gcr.io/$PROJECT_ID/helm
  entrypoint: 'bash'
  args:
    - -c
    - |
      helm upgrade $_NAME ./deployment/charts/$_NAME --namespace $_NAMESPACE --set secret.var3="$$VAR3"
[EDIT]
To be more precise, here is what my cloudbuild looks like and what it should do.
When I use the "helm step" in the classic way:
steps:
  - name: gcr.io/$PROJECT_ID/helm
    args:
      - upgrade
      - "$_NAME"
      - "./deployment/charts/$_NAME"
      - "--namespace"
      - "$_NAMESPACE"
      - "--set"
      - "secret.var3=$$VAR3"
    env:
      - "CLOUDSDK_COMPUTE_ZONE=$_GKE_LOCATION"
      - "CLOUDSDK_CONTAINER_CLUSTER=$_GKE_CLUSTER"
    secretEnv: ['VAR3']
    id: Apply deploy
substitutions:
  _GKE_LOCATION: europe-west3-b
  _GKE_CLUSTER: cluster-name
  _NAME: "test"
  _NAMESPACE: "test"
availableSecrets:
  secretManager:
    - versionName: projects/$PROJECT_ID/secrets/test-var-3/versions/latest
      env: 'VAR3'
options:
  substitution_option: 'ALLOW_LOOSE'
The step works fine, but my variable VAR3 ends up equal to the literal "$VAR3", not to the value behind it, so according to the documentation I tried something like this:
steps:
  - name: gcr.io/$PROJECT_ID/helm
    entrypoint: 'helm'
    args:
      - |
        upgrade $_NAME ./deployment/charts/$_NAME --namespace $_NAMESPACE --set secret.var3="$$VAR3"
    env:
      - "CLOUDSDK_COMPUTE_ZONE=$_GKE_LOCATION"
      - "CLOUDSDK_CONTAINER_CLUSTER=$_GKE_CLUSTER"
    secretEnv: ['VAR3']
    id: Apply deploy
substitutions:
  _GKE_LOCATION: europe-west3-b
  _GKE_CLUSTER: cluster-name
  _NAME: "test"
  _NAMESPACE: "test"
availableSecrets:
  secretManager:
    - versionName: projects/$PROJECT_ID/secrets/test-var-3/versions/latest
      env: 'VAR3'
options:
  substitution_option: 'ALLOW_LOOSE'
but then I got an error:
UPGRADE FAILED: Kubernetes cluster unreachable: Get
"http://localhost:8080/version?timeout=32s": dial tcp 127.0.0.1:8080:
connect: connection refused
You forgot to use secretEnv together with a bash entrypoint, as shown in this example:
steps:
  - name: 'gcr.io/cloud-builders/docker'
    entrypoint: 'bash'
    args: ['-c', 'docker login --username=$$USERNAME --password=$$PASSWORD']
    secretEnv: ['USERNAME', 'PASSWORD']
Read more about it: https://cloud.google.com/build/docs/securing-builds/use-secrets#access-utf8-secrets
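The reason the classic step prints the literal "$VAR3" is that secretEnv only exposes the secret as an environment variable inside the step, so a shell has to expand it; a plain helm argument is passed through untouched. A minimal sketch combining your step with a bash entrypoint (this assumes the community helm builder image, whose default entrypoint normally fetches the GKE credentials, which is why get-credentials is called explicitly here; substitutions, options and the secret definition stay as in your file):

steps:
  - name: gcr.io/$PROJECT_ID/helm
    entrypoint: 'bash'
    args:
      - -c
      - |
        # Fetch cluster credentials explicitly, since overriding the
        # entrypoint skips the builder's own credential setup.
        gcloud container clusters get-credentials "$_GKE_CLUSTER" --zone "$_GKE_LOCATION" --project "$PROJECT_ID"
        # $$VAR3 is escaped from Cloud Build substitution and expanded by bash.
        helm upgrade $_NAME ./deployment/charts/$_NAME --namespace $_NAMESPACE --set secret.var3="$$VAR3"
    secretEnv: ['VAR3']
    id: Apply deploy
availableSecrets:
  secretManager:
    - versionName: projects/$PROJECT_ID/secrets/test-var-3/versions/latest
      env: 'VAR3'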

Gitlab automatically stops environment after merge

In our team we have multiple static environments. There are always dev and test, and for certain customers, where our infrastructure is allowed to deploy, also staging and prod. We want to migrate our CI/CD pipelines from TeamCity to GitLab, but something that has caused a lot of trouble is how GitLab behaves when we merge an MR that can be deployed onto a static environment. We're not able to give each MR its own environment, which was no problem with TeamCity, as we could deploy every branch onto every environment.
For the deployment itself we use Terraform, although I have a testing repository which just echoes a small text as a test.
Something that is especially confusing is that GitLab stops an environment even though it was never deployed to from the MR:
1. Something gets merged into develop.
2. This commit is deployed onto e.g. test.
3. A new branch is merged into develop.
4. GitLab stops the test environment, even though it was never deployed to from that MR.
Is this something that is just not possible with GitLab as of now, or have we missed a configuration option? Protected environments are not available to us, but I also feel like they wouldn't be the right option for the problem we're facing.
The following is the pipeline of my test repository:
stages:
  - deploy
  - destroy-deployment

### Deployment
deployment-prod:
  extends: .deployment
  environment:
    on_stop: destroy-deployment-prod
    url: http://proto.url
    deployment_tier: production
  variables:
    ENVIRONMENT: prod
  rules:
    - if: $CI_COMMIT_TAG

destroy-deployment-prod:
  extends: .destroy-deployment
  variables:
    ENVIRONMENT: prod
  environment:
    url: http://proto.url
    deployment_tier: production
    action: stop
  rules:
    - if: $CI_COMMIT_TAG
      when: manual

### Deployment-Test
deployment-test:
  extends: .deployment
  environment:
    on_stop: destroy-deployment-test
    url: http://test.proto.url
    deployment_tier: testing
  variables:
    ENVIRONMENT: test
  when: manual

destroy-deployment-test:
  extends: .destroy-deployment
  variables:
    ENVIRONMENT: test
  environment:
    url: http://test.proto.url
    deployment_tier: testing
    action: stop

### Deployment-Staging
deployment-staging:
  extends: .deployment
  environment:
    on_stop: destroy-deployment-staging
    url: http://staging.proto.url
    deployment_tier: staging
  variables:
    ENVIRONMENT: staging
  rules:
    - if: '$CI_COMMIT_BRANCH == "main"'

destroy-deployment-staging:
  extends: .destroy-deployment
  variables:
    ENVIRONMENT: staging
    TF_VAR_host: staging.proto.url
  environment:
    url: http://staging.proto.url
    deployment_tier: staging
    action: stop
  rules:
    - if: '$CI_COMMIT_BRANCH == "main"'
      when: manual

### Deployment-Dev
deployment-dev:
  extends: .deployment
  environment:
    on_stop: destroy-deployment-dev
    url: http://dev.proto.url
    deployment_tier: development
  variables:
    ENVIRONMENT: dev
  rules:
    - if: '$CI_COMMIT_BRANCH == "develop"'

destroy-deployment-dev:
  extends: .destroy-deployment
  variables:
    ENVIRONMENT: dev
    TF_VAR_host: dev.proto.url
  environment:
    url: http://dev.proto.url
    deployment_tier: development
    action: stop
  rules:
    - if: '$CI_COMMIT_BRANCH == "develop"'
      when: manual

### Deployment Templates
.deployment:
  image: alpine
  stage: deploy
  script:
    - echo "Deploying $ENVIRONMENT"
  resource_group: proto-$ENVIRONMENT
  environment:
    name: $ENVIRONMENT
    url: http://$ENVIRONMENT.proto.url
  rules:
    - when: manual

.destroy-deployment:
  image: alpine
  stage: destroy-deployment
  script:
    - echo "Destroy $ENVIRONMENT"
  resource_group: proto-$ENVIRONMENT
  environment:
    name: $ENVIRONMENT
    url: http://$ENVIRONMENT.proto.url
    action: stop
  when: manual
  rules:
    - when: manual
I have a similar problem and don't understand the reason for this behaviour, but I figured out that it depends on your default env variable.
I wonder if you're running into this bug: "Merging triggers manual environment on_stop action".
It seems like a regression that was introduced in GitLab 14.9 and fixed in GitLab 14.10.
In GitLab 14.10, to get the fix you need to enable the feature flag fix_related_environments_for_merge_requests.
In GitLab 15.0, the fix is enabled by default.
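On a self-managed instance running GitLab 14.10, the flag can be enabled from the Rails console (a minimal sketch; it assumes an Omnibus installation and uses the flag name quoted above):

# on the GitLab host (Omnibus installation assumed)
sudo gitlab-rails console

# then, inside the console session
Feature.enable(:fix_related_environments_for_merge_requests)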

Anchors are not currently supported when running an Azure DevOps YML file

I have a CircleCI config.yml file configured to build and deploy the code, and I want that config.yml file to run in an Azure DevOps pipeline, but I am getting the error below. Kindly help me fix my script; where do I need to change it to run in Azure DevOps? I am new to YAML configuration and new to Azure DevOps, so kindly help me with this matter.
Error:
config.yml:
#
# Required variables
#
# Production:
# - GCLOUD_SERVICE_KEY_PRODUCTION
# - GCLOUD_PROJECT_ID_PRODUCTION
# - GCLOUD_PROJECT_CLUSTER_ID_PRODUCTION
# - GCLOUD_PROJECT_CLUSTER_ZONE_PRODUCTION
#
# Staging:
# - GCLOUD_SERVICE_KEY_STAGING
# - GCLOUD_PROJECT_ID_STAGING
# - GCLOUD_PROJECT_CLUSTER_ID_STAGING
# - GCLOUD_PROJECT_CLUSTER_ZONE_STAGING
#
gcp_runtime: &gcp_runtime
  docker:
    - image: boiyaa/google-cloud-sdk-nodejs

setup-production_credentials: &setup-production_credentials
  run:
    name: Setup credentials to act on behalf of circle service account
    command: |
      echo ${GCLOUD_SERVICE_KEY_PRODUCTION} > ${HOME}/gcp-key.json
      gcloud auth activate-service-account --key-file ${HOME}/gcp-key.json
      gcloud container clusters get-credentials ${GCLOUD_PROJECT_CLUSTER_ID_PRODUCTION} \
        --zone ${GCLOUD_PROJECT_CLUSTER_ZONE_PRODUCTION} \
        --project ${GCLOUD_PROJECT_ID_PRODUCTION}

setup-staging_credentials: &setup-staging_credentials
  run:
    name: Setup credentials to act on behalf of circle service account
    command: |
      echo ${GCLOUD_SERVICE_KEY_STAGING} > ${HOME}/gcp-key.json
      gcloud auth activate-service-account --key-file ${HOME}/gcp-key.json
      gcloud container clusters get-credentials ${GCLOUD_PROJECT_CLUSTER_ID_STAGING} \
        --zone ${GCLOUD_PROJECT_CLUSTER_ZONE_STAGING} \
        --project ${GCLOUD_PROJECT_ID_STAGING}

setup-production-env: &setup-production-env
  run:
    name: Setup env for production
    command: |
      rm -f .env
      echo "REACT_APP_API_URL=${REACT_APP_API_URL_PRODUCTION}" >> .env
      echo "REACT_APP_SOCIAL_API_URL=${REACT_APP_SOCIAL_API_URL_PRODUCTION}" >> .env
      echo "REACT_APP_WEB_URL=${REACT_APP_WEB_URL_PRODUCTION}" >> .env
      echo "REACT_APP_AUTH0_DOMAIN=${REACT_APP_AUTH0_DOMAIN_PRODUCTION}" >> .env
      echo "REACT_APP_AUTH0_CLIENT_ID=${REACT_APP_AUTH0_CLIENT_ID_PRODUCTION}" >> .env
      echo "REACT_APP_PUSHER_KEY=${REACT_APP_PUSHER_KEY_PRODUCTION}" >> .env
      echo "REACT_APP_PUSHER_CLUSTER=${REACT_APP_PUSHER_CLUSTER_PRODUCTION}" >> .env
      echo "REACT_APP_VALID_DOMAIN=${REACT_APP_VALID_DOMAIN_PRODUCTION}" >> .env

setup-staging-env: &setup-staging-env
  run:
    name: Setup env for staging
    command: |
      rm -f .env
      echo "REACT_APP_API_URL=${REACT_APP_API_URL_STAGING}" >> .env
      echo "REACT_APP_SOCIAL_API_URL=${REACT_APP_SOCIAL_API_URL_STAGING}" >> .env
      echo "REACT_APP_WEB_URL=${REACT_APP_WEB_URL_STAGING}" >> .env
      echo "REACT_APP_AUTH0_DOMAIN=${REACT_APP_AUTH0_DOMAIN_STAGING}" >> .env
      echo "REACT_APP_AUTH0_CLIENT_ID=${REACT_APP_AUTH0_CLIENT_ID_STAGING}" >> .env
      echo "REACT_APP_PUSHER_KEY=${REACT_APP_PUSHER_KEY_STAGING}" >> .env
      echo "REACT_APP_PUSHER_CLUSTER=${REACT_APP_PUSHER_CLUSTER_STAGING}" >> .env
      echo "REACT_APP_VALID_DOMAIN=${REACT_APP_VALID_DOMAIN_STAGING}" >> .env

build_docker_images: &build_docker_images
  run:
    name: build and cache all docker images first and fail before deploying
    command: |
      true || docker build --build-arg CIRCLE_BUILD_NUM=${CIRCLE_BUILD_NUM:-0} -f ./Dockerfile -t web .

deploy_script_production: &deploy_script_production
  run:
    name: Deploy the application to prod
    command: bash ./deploy/deploy-all.sh prod

deploy_script_staging: &deploy_script_staging
  run:
    name: Deploy the application to staging
    command: bash ./deploy/deploy-all.sh staging

deploy-production: &deploy-production
  steps:
    - checkout
    - setup_remote_docker:
        docker_layer_caching: true
    - *build_docker_images
    - *setup-production-env
    - *setup-production_credentials
    - *deploy_script_production

deploy-staging: &deploy-staging
  steps:
    - checkout
    - setup_remote_docker:
        docker_layer_caching: true
    - *build_docker_images
    - *setup-staging-env
    - *setup-staging_credentials
    - *deploy_script_staging

version: 2
jobs:
  deploy_to_production:
    <<: *gcp_runtime
    environment:
      ENVIRONMENT: production
      SKIP_BASE: "true"
    <<: *deploy-production
  deploy_to_staging:
    <<: *gcp_runtime
    environment:
      ENVIRONMENT: staging
      SKIP_BASE: "true"
    <<: *deploy-staging

workflows:
  version: 2
  deploy_to_production:
    jobs:
      - deploy_to_production:
          filters:
            branches:
              only: production
  deploy_to_staging:
    jobs:
      - deploy_to_staging:
          filters:
            branches:
              only: staging
As stated in the Azure DevOps documentation:
Note: Azure Pipelines doesn't support all features of YAML, such as anchors, complex keys, and sets.
This means that you need to do away with all anchors (and aliases) in your YAML file. Moreover, you cannot expect a CircleCI configuration to be a valid Azure DevOps configuration. They are different tools and have a different configuration structure.
You should start by reading the Azure DevOps docs and then rewrite your file accordingly. This is not a trivial modification of the file.
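The reuse that the anchors provide in the CircleCI file is expressed in Azure Pipelines through templates rather than YAML aliases. A rough sketch of that idea, not a full port of the configuration above (the file names, the parameter name and the branch condition are illustrative only):

# deploy-steps.yml - a step template standing in for the deploy-* anchors
parameters:
  - name: environment
    type: string
steps:
  - script: bash ./deploy/deploy-all.sh ${{ parameters.environment }}
    displayName: Deploy the application to ${{ parameters.environment }}

# azure-pipelines.yml - each job reuses the template instead of an alias
jobs:
  - job: deploy_to_staging
    condition: eq(variables['Build.SourceBranchName'], 'staging')
    pool:
      vmImage: ubuntu-latest
    steps:
      - template: deploy-steps.yml
        parameters:
          environment: staging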

How should the YAML look to use a Docker container (sidecar service) in my build pipeline

I manage to use them fine as long as I don't need to pass custom arguments.
Let's say I want to use an official Docker image, somePublicImage:1.2.3; then the following works fine:
stages:
  - stage: Build
    jobs:
      - job: BuildTestPack
        displayName: 'Build, test & pack'
        timeoutInMinutes: 5
        cancelTimeoutInMinutes: 2
        services:
          someService:
            image: somePublicImage:1.2.3
            ports:
              - 4223:4222
There's an option to configure the container with --foo bar.
How do I define this in an Azure build pipeline?
I've tried:
- command
- options
- arguments
- entrypoint
Service containers must define a CMD or ENTRYPOINT. The pipeline will docker run the provided container without additional arguments.
Check the link below:
https://learn.microsoft.com/en-us/azure/devops/pipelines/process/service-containers?view=azure-devops&tabs=yaml
It seems like you need to create a "custom" resource container first, e.g.:
resources:
  containers:
    - container: myThing
      image: somePublicImage:1.2.3
      ports:
        - 4223:4222
      volumes:
        - /docker_vol_config:/config
      command: '--foo bar'
which then can be used as a service:
stages:
  - stage: Build
    jobs:
      - job: BuildTestPack
        displayName: 'Build, test & pack'
        timeoutInMinutes: 5
        cancelTimeoutInMinutes: 2
        services:
          myThing: myThing