I want to create a GitHub Action that manually runs database migrations for different environments.
In the documentation I found a way to set the data source URL using choice-type inputs, as follows:
workflow_dispatch:
  inputs:
    dataSourceUrl:
      description: 'Data source URL for each environment'
      required: true
      type: choice
      options:
        - jdbc:postgresql://10.xxx.xxx.xx:5433/database_dev
        - jdbc:postgresql://10.xxx.xxx.xx:5433/database_test
        - jdbc:postgresql://10.xxx.xxx.xx:5433/database_stage
        - jdbc:postgresql://10.xxx.xxx.xx:5433/database
But the best way I can imagine doing it would be if the options were key/value pairs, so the GitHub UI would show options like Dev, Test, Stage and Prod instead of the URLs.
It would be something like:
options:
  - option1:
    - key: dev
    - value: jdbc:postgresql://10.xxx.xxx.xx:5433/database_dev
  - option2:
    - key: test
    - value: jdbc:postgresql://10.xxx.xxx.xx:5433/database_test
Does anyone know how I can do that?
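As far as I know, workflow_dispatch choice inputs don't support key/value pairs, so a common workaround is to expose only the environment names as choices and map each name to its URL inside the job. A minimal sketch of that idea (the environment names, job name, and step IDs below are illustrative, not something GitHub provides):

on:
  workflow_dispatch:
    inputs:
      environment:
        description: 'Environment to run migrations against'
        required: true
        type: choice
        options:
          - dev
          - test
          - stage
          - prod

jobs:
  migrate:
    runs-on: ubuntu-latest
    steps:
      # Map the friendly environment name to the matching JDBC URL.
      - name: Resolve data source URL
        id: resolve
        run: |
          case "${{ github.event.inputs.environment }}" in
            dev)   url="jdbc:postgresql://10.xxx.xxx.xx:5433/database_dev" ;;
            test)  url="jdbc:postgresql://10.xxx.xxx.xx:5433/database_test" ;;
            stage) url="jdbc:postgresql://10.xxx.xxx.xx:5433/database_stage" ;;
            prod)  url="jdbc:postgresql://10.xxx.xxx.xx:5433/database" ;;
          esac
          echo "dataSourceUrl=$url" >> "$GITHUB_OUTPUT"
      # Later steps read the resolved URL from the step output.
      - name: Run migrations (placeholder)
        run: echo "Would run migrations against ${{ steps.resolve.outputs.dataSourceUrl }}"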
I installed Rundeck v3.3.5 (on CentOS 7 via RPM) to replace an old Rundeck instance that was decommissioned. I did the export/import of projects (which worked brilliantly) while connected to the new server as the default admin user. The imported jobs run properly on the correct schedule. I subsequently configured the new server to use LDAP authentication and configured ACLs for users/roles. That also works properly.
However, I see an error like this in the service.log:
ERROR services.NotificationService - Error sending notification email to foo@bar.com for Execution 9358 Error executing tag <g:render>: could not initialize proxy [rundeck.Workflow#9468] - no Session
My thought is to switch job owners from admin to a user that exists in LDAP. I mean, I would like to switch job owners regardless, but I'm also hoping it addresses the error.
Is there a way in the web interface or using rd that I can bulk-modify jobs to switch the owner?
It turns out that the error in the log was caused by notification settings in an included job. I didn't realize that notifications were configured on the parameterized shared job definition, but they were; removing the notification settings caused the error to stop being added to /var/log/rundeck/service.log.
To illustrate the problem, here are chunks of YAML I've edited to show just the important parts. Here's the common job:
- description: Do the actual work with arguments passed
group: jobs/common
id: a618ceb6-f966-49cf-96c5-03a0c2efb9d8
name: do_the_work
notification:
onstart:
email:
attachType: file
recipients: ops#company.com
subject: Actual work being started
notifyAvgDurationThreshold: null
options:
- enforced: true
name: do_the_job
required: true
values:
- yes
- no
valuesListDelimiter: ','
- enforced: true
name: fail_a_lot
required: true
values:
- yes
- no
valuesListDelimiter: ','
scheduleEnabled: false
sequence:
commands:
- description: The actual work
script: |-
#!/bin/bash
echo ${RD_OPTION_DO_THE_JOB} ${RD_OPTION_FAIL_A_LOT}
keepgoing: false
strategy: node-first
timeout: '60'
uuid: a618ceb6-f966-49cf-96c5-03a0c2efb9d8
And here's the job that calls it (the one that is scheduled and causes an error to show up in the log when it runs):
- description: Do the job
  group: jobs/individual
  name: do_the_job
  ...
  notification:
    onfailure:
      email:
        recipients: ops@company.com
        subject: '[Rundeck] Failure of ${job.name}'
  notifyAvgDurationThreshold: null
  ...
  sequence:
    commands:
      - description: Call the job that does the work
        jobref:
          args: -do_the_job yes -fail_a_lot no
          group: jobs/common
          name: do_the_work
If I remove the notification settings from the common job, the error in the log goes away. I'm not sure whether sending notifications from an included job is supported. It would be useful to me if it were, since I could then keep notification settings in a single place. However, I can understand why it presents a problem for the scheduler/executor.
I'm using multi-branch resourcing in a Concourse pipeline, like so:
resources:
  - name: my-resource
    type: git-multibranch
    source:
      uri: git@github.com.../my-resource
      branches: 'feature/.*'
      private_key: ...
      ignore-branches: ''
How can I access the branch the resource is on at the time the job runs? Something like this:
jobs:
  ...
  outputs:
    - name: my-resource
      params:
        GIT_BRANCH: {BRANCH-GOES-HERE}
I'm looking to access it via something like my-resource.branch but haven't found anything that works yet.
Below is how I'm trying to add a custom field name in my Filebeat 7.2.0 configuration:
filebeat.inputs:
  - type: log
    enabled: true
    paths:
      - D:\Oasis\Logs\Admin_Log\*
      - D:\Oasis\Logs\ERA_Log\*
      - D:\OasisServices\Logs\*

processors:
  - add_fields:
      fields:
        application: oasis
With this, I'm expecting a new field called application whose value for each entry will be 'oasis', but I don't get any.
I also tried

fields:
  application: oasis/'oasis'

Help me with this.
If you want to add a custom field for every log, you should put the fields configuration at the same level as type. Try the following:
- type: log
  enabled: true
  paths:
    - D:\Oasis\Logs\Admin_Log\*
    - D:\Oasis\Logs\ERA_Log\*
    - D:\OasisServices\Logs\*
  fields.application: oasis
There are two ways to add custom fields on filebeat, using the fields option and using the add_fields processor.
To add fields using the fields option, your configuration needs to be something like the one below.
filebeat.inputs:
  - type: log
    paths:
      - 'D:/path/to/your/files/*'
    fields:
      custom_field: 'custom field value'
    fields_under_root: true
To add fields using the add_fields processor, you can try the following configuration.
filebeat.inputs:
  - type: log
    paths:
      - 'D:/path/to/your/files/*'

processors:
  - add_fields:
      target: ''
      fields:
        custom_field: 'custom field value'
Both configurations will create a field named custom_field with the value custom field value in the root of your document.
The fields option can be used per input and the add_fields processor is applied to all the data exported by the filebeat instance.
Just remember to pay attention to the indentation of your configuration; if it is wrong, Filebeat won't work correctly or may not even start.
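For comparison, a sketch of the same input without fields_under_root (same illustrative path and value as above): the field is then stored nested as fields.custom_field instead of at the root, which is a common reason for thinking the field was not added at all.

filebeat.inputs:
  - type: log
    paths:
      - 'D:/path/to/your/files/*'
    # Without fields_under_root: true, this ends up as fields.custom_field
    # in the resulting event, not as a top-level custom_field.
    fields:
      custom_field: 'custom field value'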
Say I want to run a task only when a specific tag is NOT in the list of tags supplied on the command line, even if other tags are specified. Of these, only the last one will work as I expect in all situations:
- hosts: all
  tasks:
    - debug:
        msg: "not TAG (won't work if other tags specified)"
      tags: not TAG
    - debug:
        msg: "always, but not if TAG specified (doesn't work; always runs)"
      tags: always,not TAG
    - debug:
        msg: 'ALWAYS, but not if TAG in ansible_run_tags'
      when: "'TAG' not in ansible_run_tags"
      tags: always
Try it with different CLI options and you'll hopefully see why I find this a bit perplexing:
ansible-playbook tags-test.yml -l HOST
ansible-playbook tags-test.yml -l HOST -t TAG
ansible-playbook tags-test.yml -l HOST -t OTHERTAG
Questions: (a) is that expected behavior? and (b) is there a better way or some logic I'm missing?
I'm surprised I had to dig into the (undocumented, AFAICT) variable ansible_run_tags.
Amendment: It was suggested that I post my actual use case. I'm using Ansible to drive system updates on Debian-family systems. I'm trying to notify at the end if a reboot is required, unless the reboot tag was supplied, in which case the host should be rebooted (and waited for until it comes back up). Here is the relevant snippet:
- name: check and perhaps reboot
  block:
    - name: Check if a reboot is required
      stat:
        path: /var/run/reboot-required
        get_md5: no
      register: reboot
      tags: always,reboot
    - name: Alert if a reboot is required
      fail:
        msg: "NOTE: a reboot is required to finish updates."
      when:
        - ('reboot' not in ansible_run_tags)
        - reboot.stat.exists
      tags: always
    - name: Reboot the server
      reboot:
        msg: rebooting after Ansible applied system updates
      when: reboot.stat.exists or ('force-reboot' in ansible_run_tags)
      tags: never,reboot,force-reboot
I think my original question(s) still have merit, but I'm also willing to accept alternative methods of accomplishing this same functionality.
For completeness, and since only @paul-sweeney has offered any alternative solution, I'll answer my own question with my current best solution and let people pick / up-vote their favorite:
---
- name: run only if 'TAG' not specified
  debug:
    msg: 'ALWAYS, but not if TAG in ansible_run_tags'
  when: "'TAG' not in ansible_run_tags"
  tags: always
I know it's an old(ish) question, but I had a similar requirement.
It's probably something best implemented another way ... but ... sometimes it can be useful.
I'd achieve it by setting a fact if the tag IS specified, then outputting the message only if the fact is not set, something like:
---
- name: "test task runs only if tag missing"
  hosts: all
  tasks:
    - name: "suppress message if tag given"
      set_fact: suppress_message=yes
      tags: reboot,never
    - name: "message"
      debug:
        msg: "You didn't say 'reboot'"
      when: suppress_message is not defined
I think we have states for controlling (e.g. started, restarted, stopped), states for installing (present, absent), and components (webserver, db, ...).
Ansible lacks a good separation of those three dimensions, and mixing them in a single tag system leads to confusion.
For example, if you have a 'webserver' and a 'DB' tag, you want to restart the DB and not the webserver using a 'restart' tag.
But that won't work if the restart tasks of the DB and the webserver live in the same tasks file with the same 'restart' tag, as the 'restart' tag will restart both the DB and the webserver...
So you will probably have to separate the webserver and DB tasks into two files and use the tag at the level of the include.
Using tags means that you have a tree of options, not a matrix of options.
I like the tag concept, but the fact that tags cannot be used in conditional expressions makes them less appealing.
What I recommend is to declare tags in a role but map them into variables as a first task. The 'restart' and 'db' tags then become boolean variables in my role, and I use when: instead of tags: for the actual work, for example:
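A minimal sketch of that mapping (the restart and db tags and the postgresql service name are only illustrative); both tasks carry tags: always so that the when: conditions, not the tags, do the filtering:

- name: map requested tags into variables
  set_fact:
    want_restart: "{{ 'restart' in ansible_run_tags }}"
    want_db: "{{ 'db' in ansible_run_tags }}"
  tags: always

- name: restart the DB only when both 'restart' and 'db' were requested
  service:
    name: postgresql
    state: restarted
  when: want_restart | bool and want_db | bool
  tags: always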
ansible-playbook has a --skip-tags option. The example from the docs is
ansible-playbook example.yml --skip-tags "packages"
https://docs.ansible.com/ansible/latest/playbook_guide/playbooks_tags.html
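Applied to the question's pattern, a small sketch (the package-install task below is made up, not from the question): tag the task you want to be able to exclude, then skip it with --skip-tags when needed.

# Runs by default; skipped when invoked with:
#   ansible-playbook example.yml --skip-tags packages
- name: install packages
  apt:
    name: nginx
    state: present
  tags: packages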
Enable / disable sections of a CloudFormation template for CodePipeline using conditionals:
This creates a manual approval action (with an SNS notification) once staging has been built and has passed Runscope tests:
- InputArtifacts: []
  Name: !Join ["",[!Ref GitHubRepository, "-prd-approval"]]
  ActionTypeId:
    Category: Approval
    Owner: AWS
    Version: '1'
    Provider: Manual
  OutputArtifacts: []
  Configuration:
    NotificationArn: !GetAtt ["SNSApprovalNotification", "Outputs.SNSTopicArn"]
    ExternalEntityLink: OutputTestUrl
  RunOrder: 3
How can I enable/disable this like other CloudFormation resources, with a Condition:? Action steps don't recognize a Condition: parameter.
I could make two copies of the whole pipeline, one with and one without the approval action, and toggle which pipeline I create, but it seems like there should be a better way.
You should be able to accomplish this by conditionally inserting the AWS::CodePipeline::Pipeline resource's Action into the Actions list with the Fn::If intrinsic function, referencing your Conditions element: return the Action when the condition is true and AWS::NoValue (which removes the property, in this case the item from the list) when it is not:
- !If
  - IsProdCondition
  - InputArtifacts: []
    Name: !Join ["",[!Ref GitHubRepository, "-prd-approval"]]
    ActionTypeId:
      Category: Approval
      Owner: AWS
      Version: '1'
      Provider: Manual
    OutputArtifacts: []
    Configuration:
      NotificationArn: !GetAtt ["SNSApprovalNotification", "Outputs.SNSTopicArn"]
      ExternalEntityLink: OutputTestUrl
    RunOrder: 3
  - !Ref AWS::NoValue
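For reference, a minimal sketch of the Conditions element the Fn::If refers to, assuming a hypothetical EnvironmentName parameter (not part of the original template):

Parameters:
  EnvironmentName:
    Type: String
    AllowedValues: [dev, test, stage, prd]

Conditions:
  # True only when the stack is created for the prd environment.
  IsProdCondition: !Equals [!Ref EnvironmentName, prd]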