Our team has multiple static environments. There are always dev and test, and for certain customers whose infrastructure we are allowed to deploy to, also staging and prod. We want to migrate our CI/CD pipelines from TeamCity to GitLab, but one thing that has caused us a lot of trouble is how GitLab behaves when we merge an MR that can be deployed onto a static environment. We are not able to simply deploy each MR onto its own environment, which was no problem with TeamCity, as there we could deploy every branch onto every environment.
For the deployment itself we use Terraform, although I have a testing repository that just echoes a short text as a test.
What is especially confusing is that GitLab stops an environment even though it was never deployed to from the MR:
1. Something gets merged into develop.
2. This commit is deployed onto e.g. test.
3. A new branch is merged into develop.
4. GitLab stops the test environment, even though it was never deployed to from that MR.
Is this something that is simply not possible with GitLab as of now, or have we missed a configuration option? Protected environments are not available to us, but I also feel they wouldn't be the right fit for the problem we're facing.
The following is the pipeline of my test repository:
stages:
  - deploy
  - destroy-deployment

### Deployment
deployment-prod:
  extends: .deployment
  environment:
    on_stop: destroy-deployment-prod
    url: http://proto.url
    deployment_tier: production
  variables:
    ENVIRONMENT: prod
  rules:
    - if: $CI_COMMIT_TAG

destroy-deployment-prod:
  extends: .destroy-deployment
  variables:
    ENVIRONMENT: prod
  environment:
    url: http://proto.url
    deployment_tier: production
    action: stop
  rules:
    - if: $CI_COMMIT_TAG
      when: manual

### Deployment-Test
deployment-test:
  extends: .deployment
  environment:
    on_stop: destroy-deployment-test
    url: http://test.proto.url
    deployment_tier: testing
  variables:
    ENVIRONMENT: test
  when: manual

destroy-deployment-test:
  extends: .destroy-deployment
  variables:
    ENVIRONMENT: test
  environment:
    url: http://test.proto.url
    deployment_tier: testing
    action: stop

### Deployment-Staging
deployment-staging:
  extends: .deployment
  environment:
    on_stop: destroy-deployment-staging
    url: http://staging.proto.url
    deployment_tier: staging
  variables:
    ENVIRONMENT: staging
  rules:
    - if: '$CI_COMMIT_BRANCH == "main"'

destroy-deployment-staging:
  extends: .destroy-deployment
  variables:
    ENVIRONMENT: staging
    TF_VAR_host: staging.proto.url
  environment:
    url: http://staging.proto.url
    deployment_tier: staging
    action: stop
  rules:
    - if: '$CI_COMMIT_BRANCH == "main"'
      when: manual

### Deployment-Dev
deployment-dev:
  extends: .deployment
  environment:
    on_stop: destroy-deployment-dev
    url: http://dev.proto.url
    deployment_tier: development
  variables:
    ENVIRONMENT: dev
  rules:
    - if: '$CI_COMMIT_BRANCH == "develop"'

destroy-deployment-dev:
  extends: .destroy-deployment
  variables:
    ENVIRONMENT: dev
    TF_VAR_host: dev.proto.url
  environment:
    url: http://dev.proto.url
    deployment_tier: development
    action: stop
  rules:
    - if: '$CI_COMMIT_BRANCH == "develop"'
      when: manual

### Deployment Templates
.deployment:
  image: alpine
  stage: deploy
  script:
    - echo "Deploying $ENVIRONMENT"
  resource_group: proto-$ENVIRONMENT
  environment:
    name: $ENVIRONMENT
    url: http://$ENVIRONMENT.proto.url
  rules:
    - when: manual

.destroy-deployment:
  image: alpine
  stage: destroy-deployment
  script:
    - echo "Destroy $ENVIRONMENT"
  resource_group: proto-$ENVIRONMENT
  environment:
    name: $ENVIRONMENT
    url: http://$ENVIRONMENT.proto.url
    action: stop
  when: manual
  rules:
    - when: manual
I have a similar problem and don't understand the reason for this behaviour, but I figured out that it depends on your default environment variable.
I wonder if you're running into this bug: Merging triggers manual environment on_stop action.
It seems like a regression that was introduced in GitLab 14.9 and fixed in GitLab 14.10.
In GitLab 14.10, to get the fix you need to enable this feature flag: fix_related_environments_for_merge_requests
In GitLab 15.0, the fix is enabled by default.
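On a self-managed instance, an administrator should be able to enable the flag through GitLab's feature-flags REST API. A sketch; gitlab.example.com and the token are placeholders:

curl --request POST --header "PRIVATE-TOKEN: <admin-token>" \
  --data "value=true" \
  "https://gitlab.example.com/api/v4/features/fix_related_environments_for_merge_requests"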
I'm working on GitHub Actions for the first time.
In my .yml file I have the following:
on:
  workflow_dispatch:
    branches:
      - main
    inputs:
      environment:
        type: choice
        description: 'Select environment to deploy in'
        required: true
        options:
          - dev
          - non-prod
          - prod
          - staging
Based on the selected option I need to do the following.
For staging:
- name: build
  run: CI=false yarn build-staging
For non-prod:
- name: build
  run: CI=false yarn build
Could you please provide me with some pointers on how this can be achieved?
The simplest way to go about it would be to use an if condition on the jobs within your workflow, for example:
on:
  workflow_dispatch:
    branches:
      - main
    inputs:
      environment:
        type: choice
        description: 'Select environment to deploy in'
        required: true
        options:
          - dev
          - non-prod
          - prod
          - staging

jobs:
  staging:
    runs-on: ubuntu-latest
    if: inputs.environment == 'staging'
    steps:
      - name: build
        run: CI=false yarn build-staging
  prod:
    runs-on: ubuntu-latest
    if: inputs.environment == 'prod'
    steps:
      - name: build
        run: CI=false yarn build
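If you'd rather avoid one job per environment, a single job can also branch on the input inside its run step. A minimal sketch, assuming the same yarn scripts as above:

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - name: build
        run: |
          # pick the build script based on the selected environment
          if [ "${{ inputs.environment }}" = "staging" ]; then
            CI=false yarn build-staging
          else
            CI=false yarn build
          fi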
We are planning to move our release pipelines to YAML, and we have them ready.
I have multiple environments (dev, test, and prod) and I'm trying to use the same deployment job template for all of them.
jobs:
  - deployment: deploy
    displayName: Deploy
    environment:
      name: dev # This should be replaced with environment specific variable
      resourceType: VirtualMachine
      tags: WEB01
In the above code, my intention is to provide the name as an environment-specific variable. Could someone please help?
Thank you!
You can do this with parameters, but you need to use template expression syntax, which is evaluated when the pipeline is compiled:
parameters:
  - name: environment
    type: string
    default: dev
    values:
      - dev
      - test
      - preprod
      - prod

jobs:
  - deployment: deploy
    displayName: Deploy
    environment:
      name: ${{ parameters.environment }}
      resourceType: VirtualMachine
      tags: WEB01
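If that jobs block lives in a template file (deploy.yml is an assumed name here), each environment then becomes one instantiation of the template with a different parameter value, for example:

stages:
  - stage: deploy_test
    jobs:
      - template: deploy.yml
        parameters:
          environment: test
  - stage: deploy_prod
    jobs:
      - template: deploy.yml
        parameters:
          environment: prod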
You might like my answer in another post; it does stages per environment with approvals.
https://stackoverflow.com/a/74159554/4485260
I was using the following configuration to deploy Yii2 applications with GitHub Actions:
name: Build and Deploy - DEV

on:
  push:
    branches:
      - development

jobs:
  build:
    name: Build and Deploy
    runs-on: ubuntu-latest
    steps:
      - name: Checkout Repository
        uses: actions/checkout@master
      - name: Setup Environment
        uses: shivammathur/setup-php@v2
        with:
          php-version: '7.2'
      - name: Install Packages
        run: composer install --no-dev --optimize-autoloader
      - name: Deploy to Server
        uses: yiier/yii2-base-deploy@master
        with:
          user: github
          host: ${{ host }}
          path: ${{ path }}
          owner: github
        env:
          DEPLOY_KEY: ${{ secrets.DEPLOY_KEY }}
      - name: Apply migration
        run: php yii migrate --interactive=0
It worked quite well, but now it's giving this error:
Current runner version: '2.285.1'
Operating System
Virtual Environment
Virtual Environment Provisioner
GITHUB_TOKEN Permissions
Secret source: Actions
Prepare workflow directory
Prepare all required actions
Getting action download info
Error: Unable to resolve action `yiier/yii2-base-deploy@master`, repository not found
It appears that yiier/yii2-base-deploy@master no longer exists.
Does anyone know of a replacement?
Thanks!
Thanks to SiZE's comment I remembered that I had forked the original repo.
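For anyone else hitting this: once you have a fork (or another copy) of the action, you only need to repoint the step at it; your-fork-owner below is a placeholder:

- name: Deploy to Server
  uses: your-fork-owner/yii2-base-deploy@master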
I have a deployment pipeline in Azure DevOps that takes artifacts from other pipelines and deploys them to different environments. The YAML looks like this:
...
resources:
  repositories:
    - repository: self
      trigger:
        branches:
          include:
            - develop
            - release
            - master
  pipelines:
    - pipeline: pipeline-1
      source: different-repo-pipeline-1
    - pipeline: pipeline-2
      source: different-repo-pipeline-2
...
jobs:
  - deployment: some-name
    environment: develop
    strategy:
      runOnce:
        deploy:
          steps:
            - download: pipeline-1
            - download: pipeline-2
            # Do real deployment
After the deployment, on the target environment tab I saw the run listed (screenshot omitted).
There is no clue there about the deployed versions. I know the version inside the deployment steps, or can at least use variables from the pipeline resources (resources.pipeline.pipeline-1.runName), but I don't see any option for adding this info to the environment's deployment tab. Can it be done in some way?
No, this is not possible. We don't have any way to control what is displayed on the environment tab.
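The closest workaround I know of is to surface the versions yourself inside the deployment job, for example by echoing the pipeline-resource variable mentioned in the question into the job log. A sketch:

strategy:
  runOnce:
    deploy:
      steps:
        - download: pipeline-1
        - script: echo "Deploying pipeline-1 run $(resources.pipeline.pipeline-1.runName)"
          displayName: Log deployed version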
I'm using Auto DevOps with GitLab CI (which uses the Auto DevOps auto-deploy job). When I deploy, the name of the workload is production, but I would like to change that name because I want to run several websites.
I tried to change the name like this:
environment:
  name: nameofmyproject
but after deployment the website returns a 503 Service Temporarily Unavailable error.
Do you have an idea?
My GitLab pipeline and Kubernetes workloads (screenshot omitted).
My .gitlab-ci.yml:
image: alpine:latest

variables:
  # KUBE_INGRESS_BASE_DOMAIN is the application deployment domain and should be set as a variable at the group or project level.
  # KUBE_INGRESS_BASE_DOMAIN: domain.example.com
  DISABLE_POSTGRES: "yes"
  POSTGRES_USER: user
  POSTGRES_PASSWORD: testing-password
  POSTGRES_ENABLED: "true"
  POSTGRES_DB: $CI_ENVIRONMENT_SLUG
  POSTGRES_VERSION: 9.6.2
  DOCKER_DRIVER: overlay2
  ROLLOUT_RESOURCE_TYPE: deployment
  DOCKER_TLS_CERTDIR: "" # https://gitlab.com/gitlab-org/gitlab-runner/issues/4501

stages:
  - build
  - production

build:
  stage: build
  image: "registry.gitlab.com/gitlab-org/cluster-integration/auto-build-image/master:stable"
  variables:
    DOCKER_TLS_CERTDIR: ""
  services:
    - docker:stable-dind
  script:
    - |
      if [[ -z "$CI_COMMIT_TAG" ]]; then
        export CI_APPLICATION_REPOSITORY=${CI_APPLICATION_REPOSITORY:-$CI_REGISTRY_IMAGE/$CI_COMMIT_REF_SLUG}
        export CI_APPLICATION_TAG=${CI_APPLICATION_TAG:-$CI_COMMIT_SHA}
      else
        export CI_APPLICATION_REPOSITORY=${CI_APPLICATION_REPOSITORY:-$CI_REGISTRY_IMAGE}
        export CI_APPLICATION_TAG=${CI_APPLICATION_TAG:-$CI_COMMIT_TAG}
      fi
    - /build/build.sh
  only:
    - branches
    - tags

.auto-deploy:
  image: "registry.gitlab.com/gitlab-org/cluster-integration/auto-deploy-image:v0.9.1"

.production: &production_template
  extends: .auto-deploy
  stage: production
  script:
    - auto-deploy check_kube_domain
    - auto-deploy download_chart
    - auto-deploy ensure_namespace
    - auto-deploy initialize_tiller
    - auto-deploy create_secret
    - auto-deploy deploy
    - auto-deploy delete canary
    - auto-deploy delete rollout
    - auto-deploy persist_environment_url
  environment:
    name: production
    url: http://$CI_PROJECT_PATH_SLUG.$KUBE_INGRESS_BASE_DOMAIN
  artifacts:
    paths: [environment_url.txt]

production:
  <<: *production_template
  only:
    refs:
      - master
    kubernetes: active
You can set the variable ADDITIONAL_HOSTS or CI_PROJECT_PATH_SLUG.
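For example, a sketch that keeps the environment name as production (which the auto-deploy chart expects) and serves extra websites through additional hostnames; the domains below are placeholders:

variables:
  ADDITIONAL_HOSTS: "site2.example.com,site3.example.com"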