I am using GitHub Actions to run my Liquibase deployments. I have variables that I want to substitute into my Liquibase script during deployment, and the Liquibase article here states that this should be possible. I have a changelog.json that simply includes the SQL files like so:
{
  "databaseChangeLog": [
    {
      "include": { "file": "path-to-sql/my_file.sql" }
    }
  ]
}
now in my_file.sql I have:
--changeset author:1
create user my-user with password ${MY_ENV};
However, I receive the error:
Unexpected error running Liquibase: Migration failed for changeset my_file.sql:
Reason: liquibase.exception.DatabaseException: ERROR: syntax error at or near "$"...
Has anyone come across this specific error with variable substitution? Is this just a syntax error? Thanks.
You can use the -D parameter:
-D<property.name>=<property.value> Pass a name/value pair for substitution of ${} blocks in the changelogs.
For example:
--liquibase formatted sql
--changeset your.name:1 labels:example-label context:example-context
--comment: example comment
create table ${daTableName} (
id int primary key auto_increment not null,
name varchar(50) not null,
address1 varchar(50),
address2 varchar(50),
city varchar(30)
)
--rollback DROP TABLE ${daTableName};
And the command:
liquibase update -DdaTableName=MySofExampleTable
This worked for me.
Note that this approach will log your variables in GitHub Actions, and in your case the value is a password, so be careful with that. Alternatively, you can add the value to your liquibase.properties file, for example:
parameter.daTableName=SomeTable
then you just run your Liquibase commands without passing the -D parameter.
From the GitHub Actions side, you would fetch the secret and append it to your properties file, for example:
jobs:
  foo:
    runs-on: ubuntu-latest
    steps:
      - run: echo "parameter.MY_ENV=${{ secrets.MY_SECRET }}" >> liquibase.properties
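Putting the pieces together, a minimal workflow sketch might look like this (the job name, checkout step, and changelog file name are illustrative, and it assumes the Liquibase CLI is already available on the runner; the property key follows the parameter.<name> convention used in liquibase.properties):

```yaml
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      # Append the secret as a changelog parameter so Liquibase can
      # substitute ${MY_ENV} in the changesets
      - run: echo "parameter.MY_ENV=${{ secrets.MY_SECRET }}" >> liquibase.properties
      # Liquibase picks up liquibase.properties from the working directory
      - run: liquibase update --changelog-file=changelog.json
```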
Or as a plan-B for some use-cases, you can use envsubst to replace your environment variables, for example:
jobs:
  foo:
    runs-on: ubuntu-latest
    steps:
      - run: echo "create user my-user with password \${MY_ENV};" > my-migration.template.sql
      - run: envsubst < my-migration.template.sql > my-migration.sql
        env:
          MY_ENV: bob
      - run: cat my-migration.sql
Result:
create user my-user with password bob;
So you basically create actions (or scripts) that replace your environment variables. If you have multiple files, you may do something like:
for f in $(find ./lqb-src -regex '.*\.sql'); do envsubst < "$f" > "./lqb-out/$(basename "$f")"; done
I am implementing CI/CD for my application. I want to start the application automatically using pm2, but I am getting a syntax error on line 22.
This is my yml file
This is the error I am getting on github
The problem in the syntax here is related to how you used the - symbol.
With GitHub Actions, you need at least a run or uses field for each step inside your job, at the same level as the name field (which is not mandatory); otherwise the GitHub interpreter will return an error.
Here, from line 22, you used something like this:
- name: ...
- run: ...
- run: ...
- run: ...
So there are two problems:
First, the name and run fields aren't at the same YAML level.
Second, your step with the name field doesn't have a run or uses field associated with it (you need at least one of them).
The correct syntax should be:
- name: ...
  run: ...
- run: ...
- run: ...
Reference about workflow syntax
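As a concrete sketch, a correctly nested steps block might look like this (the step names and pm2 commands are illustrative, not taken from the original workflow):

```yaml
steps:
  # a step with a name must nest its run field under the same list item
  - name: Install dependencies
    run: npm ci
  - name: Start the app with pm2
    run: pm2 start app.js
  # a step may also consist of a run field alone
  - run: pm2 status
```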
I have a question about Rundeck (noob alert!).
I need to set conditional option variables (I don't know if that's the right word).
For example, I want to launch a job with only one option value:
Customer01
and I need to have a relation between variables.
If I put Customer01, the other variables need to dynamically get default options, for example:
if
cust = Customer01ID
then ID = MyID and Oracle_schema = Myschema.
How can I make this work?
Thanks a lot, and forgive me if my problem is not clear.
A good way to do that is using cascading options; take a look at this answer.
Another way is just scripting: based on the selected option, an inline script step can do anything you need. Let me share an example based on a job definition (save it as a YAML file and import it into your instance to test it):
- defaultTab: nodes
  description: ''
  executionEnabled: true
  id: e89a7cb0-2ecc-445d-b744-c1eebd540c91
  loglevel: INFO
  name: VariablesExample
  nodeFilterEditable: false
  options:
  - name: customer_id
    value: acme
  plugins:
    ExecutionLifecycle: null
  scheduleEnabled: true
  sequence:
    commands:
    - fileExtension: .sh
      interpreterArgsQuoted: false
      script: |-
        database="none"
        if [ "#option.customer_id#" = "fiat" ]; then
          database="oracle"
        else
          database="postgresql"
        fi
        echo "setting $database"
      scriptInterpreter: /bin/bash
    keepgoing: false
    strategy: node-first
  uuid: e89a7cb0-2ecc-445d-b744-c1eebd540c91
Basically, the script takes the option value (#option.customer_id#), and based on that option it sets the bash variable $database and can do anything with it.
If instead you're thinking about executing a specific step based on a job option, you can use the ruleset strategy (Rundeck Enterprise), which is a way to design complex workflows; a perfect scenario for it is executing specific steps based on a job option selection.
I have a build config file that looks like:
steps:
...
<i use the ${_IMAGE} variable around 4 times here>
...
images: ['${_IMAGE}']
options:
  dynamic_substitutions: true
substitutions:
  _IMAGE: http://example.com/image-${_ENVIRON}
And I trigger the build like:
gcloud builds submit . --config=config.yaml --substitutions=_ENVIRON=prod
What I expected was for gcloud to substitute the _ENVIRON variable in my script and then substitute the _IMAGE variable, so that it would expand to 'http://example.com/image-prod' - but instead I'm getting the following error:
ERROR: (gcloud.builds.submit) INVALID_ARGUMENT: generic::invalid_argument: key "_ENVIRON" in the substitution data is not matched in the template
What can I do to make this work? I really want to be able to change the environment easily with a substitution, without needing to change anything in the code.
As you've seen, this isn't possible.
If the only use of _ENVIRON is by _IMAGE, why not drop the substitutions from config.yaml and use _IMAGE as the substitution:
ENVIRON="prod"
IMAGE="http://example.com/image-${ENVIRON}"

gcloud builds submit . \
  --config=config.yaml \
  --substitutions=_IMAGE="${IMAGE}"
I have an environment.yml, shown below. I would like to read out the content of the name variable (core-force) and set it as the value of a global variable in my azure-pipeline.yml file. How can I do it?
name: core-force
channels:
- conda-forge
dependencies:
- click
- Sphinx
- sphinx_rtd_theme
- numpy
- pylint
- azure-cosmos
- python=3
- flask
- pytest
- shapely
in my azure-pipeline.yml file I would like to have something like
variables:
  tag: # the value of the name field from environment.yml, i.e. 'core-force'
Please check this example:
File: vars.yml
variables:
  favoriteVeggie: 'brussels sprouts'
File: azure-pipelines.yml
variables:
- template: vars.yml # Template reference
steps:
- script: echo My favorite vegetable is ${{ variables.favoriteVeggie }}.
Please note that variables are simple strings; if you want to use a list, you may need to do some workaround in PowerShell at the point where you want to use a value from that list.
If you don't want to use template functionality as it is shown above you need to do these:
create a separate job/stage
define a step there to read the environment.yml file and set variables using the REST API or the Azure CLI
create another job/stage and move your current build definition there
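The read-and-set step could be sketched with a small Bash script that pulls the name field out of environment.yml and publishes it through an Azure DevOps logging command (the awk one-liner and the variable name tag are assumptions, not from the original pipeline; the printf line stands in for the environment.yml from the question):

```shell
# Stand-in for the environment.yml from the question
printf 'name: core-force\nchannels:\n- conda-forge\n' > environment.yml

# Extract the value of the top-level 'name:' key
TAG=$(awk '/^name:/ {print $2; exit}' environment.yml)

# Publish it to later steps as the pipeline variable 'tag'
echo "##vso[task.setvariable variable=tag]$TAG"
```

In the pipeline itself this script would sit inside a bash step, and later steps in the same job could then read $(tag).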
I found this topic on the Developer Community where you can read:
Yaml variables have always been string: string mappings. The doc appears to be currently correct, though we may have had a bug when last you visited.
We are preparing to release a feature in the near future to allow you to pass more complex structures. Stay tuned!
But I don't have more info about this.
Global variables should be stored in a separate template file. Ideally this file would live in a separate repo that other repos can refer to.
Here is another answer for this
I am new to Concourse, and I set up the environment on my CentOS 7.6 like below.
$ wget https://concourse-ci.org/docker-compose.yml
$ docker-compose up -d
Then I logged in with `fly --target example login --team-name main --concourse-url http://192.168.77.140:8080/ -u test -p test`
I can see below.
[root@centostest ~]# fly targets
name url team expiry
example http://192.168.77.140:8080 main Sun, 16 Jun 2019 02:23:48 UTC
I used the YAML file below, named 2.yaml:
---
resources:
- name: my-git-repo
  type: git
  source:
    uri: https://github.com/ruanbekker/concourse-test
    branch: basic-helloworld

jobs:
- name: hello-world-job
  public: true
  plan:
  - get: my-git-repo
  - task: task_print-hello-world
    file: my-git-repo/ci/task-hello-world.yml
Then I run below commands step by step.
fly -t example sp -c 2.yaml -p pipeline-01
fly -t example up -p pipeline-01
fly -t example tj -j pipeline-01/hello-world-job --watch
But it just hangs there, with no useful response, like below.
[root@centostest ~]# fly -t example tj -j pipeline-01/hello-world-job --watch
started pipeline-01/hello-world-job #3
Theoretically, it should print something like below.
Cloning into '/tmp/build/get'...
Fetching HEAD
292c84b change task name
initializing
running echo hello world
hello world
succeeded
Where did I go wrong? Thanks.
Welcome to Concourse!
One thing that can be confusing when starting with Concourse is understanding when Concourse detects that the pipeline has changed, and what happens when the pipeline is split across multiple files.
Your pipeline (like the majority of real-world pipelines) is "nested": the main pipeline file 2.yaml refers to a task file named my-git-repo/ci/task-hello-world.yml.
What sets Concourse apart from other CI systems is that:
the main pipeline file (2.yaml) can reside anywhere, even in a different repository.
Due to 1, Concourse is unable to detect a change to the main pipeline file; you have to tell Concourse that the file has changed, either with fly set-pipeline or by automatic means such as the concourse-pipeline-resource.
So the following errors happen often:
Changing the main pipeline file, committing and pushing, and expecting Concourse to pick up the change. Missing: you have to run fly set-pipeline.
Once running fly set-pipeline becomes second nature, you can stumble upon the opposite error: changing both the main pipeline file and the nested task file, running set-pipeline, but not pushing. In this case, the only changes picked up by Concourse will be the ones to the main pipeline file, not to the task file. Missing: commit and push.
From the description of your problem, I have the feeling that it is a mixture of the gotchas I mentioned.