Rundeck -> How can we pass the captured Key-Value Data into the mail Template

How can we pass the captured Key-Value Data (log filter) into the mail template?
For example, my current template looks like this:
<html>
<head>
<title>create Heap dump</title>
</head>
<body>
<p>
Hi,<br><br>
${option.Description} <br>
${logoutput.data}<br><br>
Regards,<br>
Game World</p>
</body>
</html>
Currently I am not able to pass any captured value like ${data.value}. Is there anything I am missing?

The easiest way is to export that data variable to a global one and then use it in your notifications.
The first step prints some text; a log filter captures the data value, which is stored in ${data.MYDATA}.
The second step takes that data variable and creates a global one using the "Global Variable" step.
You can then use that global variable in any notification as ${export.myvariable}.
Take a look at this job definition example:
- defaultTab: nodes
  description: ''
  executionEnabled: true
  id: ea07f41a-71b4-4ed9-91fb-6113de996e48
  loglevel: INFO
  name: TestJob
  nodeFilterEditable: false
  notification:
    onsuccess:
      plugin:
        configuration:
          authentication: None
          body: ${export.myglobal}
          contentType: application/json
          method: POST
          remoteUrl: https://any/webhook/url
          timeout: '30000'
        type: HttpNotification
  notifyAvgDurationThreshold: null
  plugins:
    ExecutionLifecycle: {}
  scheduleEnabled: true
  schedules: []
  sequence:
    commands:
    - exec: env
      plugins:
        LogFilter:
        - config:
            invalidKeyPattern: \s|\$|\{|\}|\\
            logData: 'true'
            regex: ^(USER)\s*=\s*(.+)$
          type: key-value-data
    - configuration:
        export: myglobal
        group: export
        value: ${data.USER*}
      nodeStep: false
      type: export-var
    - exec: echo ${export.myglobal}
    keepgoing: false
    strategy: node-first
  uuid: ea07f41a-71b4-4ed9-91fb-6113de996e48
Using the HTTP notification (in the body section) you can see the value; the same approach works for your case with an email notification.
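For the email notification from the original question, that means the mail template can reference the exported variable directly. A minimal sketch, assuming the global variable is named myglobal as in the job definition above (only ${logoutput.data} is swapped for the exported variable):
<html>
<head>
<title>create Heap dump</title>
</head>
<body>
<p>
Hi,<br><br>
${option.Description} <br>
${export.myglobal}<br><br>
Regards,<br>
Game World</p>
</body>
</html>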

Related

Referencing json list value created in Rundeck Data Workflow step

Rundeck job: when I create data in a Data Workflow step as a JSON list
{
  "repo": ["repo1","repo2","repo3"],
  "myrepo": "repo4"
}
how can I access the elements of the list from an inline script in the next step?
#stub.repo[1]#
doesn't work
#stub.myrepo#
works fine
Data Workflow step executed
Script:
echo "value: #stub.repo[1]]#"
echo "value2: #stub.myrepo#"
Result:
value:
value2: repo4
The easiest way to catch that array is to use the jq JSON mapper log filter plugin on any step, such as a command step or a script step (here are the releases, here is how to install the plugin, and here is how log filters work).
Using this plugin you can address the array positions directly, e.g. ${data.data.0}, ${data.data.1}, etc.
Here is a job definition example with your JSON output for testing:
- defaultTab: summary
  description: ''
  executionEnabled: true
  group: JSON
  id: f0d2843f-8de3-4984-a9ae-2fd7ab3963ae
  loglevel: INFO
  name: test-json-array
  nodeFilterEditable: false
  plugins:
    ExecutionLifecycle: null
  scheduleEnabled: true
  sequence:
    commands:
    - plugins:
        LogFilter:
        - config:
            filter: .[]
            logData: 'true'
            prefix: data
          type: json-mapper
      script: |-
        cat <<-END
        {
          "repo": ["repo1","repo2","repo3"],
          "myrepo": "repo4"
        }
        END
    - exec: echo ${data.data.0}
    keepgoing: false
    strategy: node-first
  uuid: f0d2843f-8de3-4984-a9ae-2fd7ab3963ae
Result:
More info about the plugin here.
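If you need the value inside an inline script step (as the original question asks), the same data context variable should be reachable through Rundeck's script token expansion, which uses @...@ instead of ${...} inside script content. A minimal, untested sketch assuming the job definition above:
#!/bin/bash
# Hypothetical follow-up inline script step; Rundeck expands the
# @data.data.0@ token in the script content before it runs.
echo "first repo: @data.data.0@"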

AWS HTTP API Integration with AWS Step Functions -> Sending Multiple Values in the Input

I have a Type: AWS::Serverless::HttpApi which I am trying to connect to a Type: AWS::Serverless::StateMachine as a trigger, meaning the HTTP API would trigger the Step Functions state machine.
I can get it working by specifying only a single input. For example, the DefinitionBody, when it works, looks like this:
DefinitionBody:
  info:
    version: '1.0'
    title:
      Ref: AWS::StackName
  paths:
    "/github/secret":
      post:
        responses:
          default:
            description: "Default response for POST /"
        x-amazon-apigateway-integration:
          integrationSubtype: "StepFunctions-StartExecution"
          credentials:
            Fn::GetAtt: [StepFunctionsApiRole, Arn]
          requestParameters:
            Input: $request.body
            StateMachineArn: !Ref SecretScannerStateMachine
          payloadFormatVersion: "1.0"
          type: "aws_proxy"
          connectionType: "INTERNET"
          timeoutInMillis: 30000
  openapi: 3.0.1
  x-amazon-apigateway-importexport-version: "1.0"
Take note of the following line: Input: $request.body. I am only specifying the $request.body.
However, I need to send BOTH $request.body and $request.header.X-Hub-Signature-256 to my state machine as input.
I have tried many different ways. For example:
Input: " { body: $request.body, header: $request.header.X-Hub-Signature-256 }"
and
$request.body
$request.header.X-Hub-Signature-256
and
Input: $request
I get different errors each time, but this is the main one:
Warnings found during import: Unable to create integration for resource at path 'POST /github/secret': Invalid selection expression specified: Validation Result: warnings : [], errors : [Invalid source: $request specified for destination: Input].
Any help on how to pass multiple values would be much appreciated.

AZP: Is there a best practice to be able to "namespace" script tasks in yaml templates for usage of variables?

In Azure Pipelines, my main problem is this: if I create a YAML template and have some logic inside that template in a script task where I want to set a variable, I need the
name: "pseudonamespace" to reference that variable further down in that template via
$(pseudonamespace.variablename)
Here is an example where the script part does nothing overtly useful but should demonstrate my problem:
mytemplate.yml:
parameters:
- name: isWindowsOnTarget
  type: boolean
  default: true
steps:
- script: |
    if [ "${{lower(parameters.isWindowsOnTarget)}}" == "true" ]; then
      delimiter="\\"
    else
      delimiter="/"
    fi
    echo "##vso[task.setvariable variable=myCoolVariable;isOutput=true]$delimiter"
  name: MyFakeNameSpace
...
- task: SomeTask@0
  inputs:
    myInput: $(MyFakeNameSpace.myCoolVariable)
This code block works, but only if I instantiate it once in a job:
- template: mytemplate.yml@templates
  parameters:
    isWindowsOnTarget: true
If I need that template twice, parameterized differently, I get an error saying that the name of the script block needs to be unique.
Is there any useful possibility I'm not currently thinking of, other than adding an extra parameter to the template that I could basically just call "UniqueNamespace"?
There is not much room to maneuver here. Your task needs a unique name because, as you mention, it acts as a namespace for output variables. So the best (and only) way is to provide another parameter that supplies the task name:
parameters:
- name: isWindowsOnTarget
  type: boolean
  default: true
- name: taskName
  type: string
steps:
- script: |
    if [ "${{lower(parameters.isWindowsOnTarget)}}" == "true" ]; then
      delimiter="\\"
    else
      delimiter="/"
    fi
    echo "##vso[task.setvariable variable=myCoolVariable;isOutput=true]$delimiter"
  name: ${{ parameters.taskName }}
...
- task: SomeTask@0
  inputs:
    # reference the output variable through the parameterized step name
    myInput: $(${{ parameters.taskName }}.myCoolVariable)
and then:
- template: mytemplate.yml@templates
  parameters:
    isWindowsOnTarget: true
    taskName: MyFakeNameSpace
- template: mytemplate.yml@templates
  parameters:
    isWindowsOnTarget: true
    taskName: MyFakeNameSpace2
In fact, when you do not provide a name, Azure DevOps assigns a unique one. However, that way you do not know the name until runtime.
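To make the namespacing concrete, later steps in the consuming job can then read each instantiation's output variable through its own step name. A minimal sketch (this echo step and its displayName are illustrative, not from the original):
- script: |
    echo "first delimiter:  $(MyFakeNameSpace.myCoolVariable)"
    echo "second delimiter: $(MyFakeNameSpace2.myCoolVariable)"
  displayName: Show both captured delimiters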

How can I skip a failing Rundeck job?

I have a Rundeck job to update servers using a custom script that is run as a local command. If the server update needs to be postponed because jobs are running on it, the custom local command returns a special return code. The Rundeck job is configured to fail the step without running on any remaining nodes if a node fails.
I want to skip a node and continue with the next node if that node returns the special return code.
I tried to experiment with an error handler using code like:
/bin/sh -c 'if test "${result.resultCode}" = "125"; then exit 0; fi; exit "${result.resultCode}"'
The stripped-down job configuration looks like this:
- defaultTab: summary
  executionEnabled: true
  loglevel: INFO
  multipleExecutions: true
  name: Server update
  nodeFilterEditable: true
  nodefilters:
    dispatch:
      excludePrecedence: true
      keepgoing: false
      successOnEmptyNodeFilter: false
      threadcount: ${option.parallelity}
    filter: ''
  nodesSelectedByDefault: true
  notification:
    onfailure:
      email:
        recipients: me@example.com
        subject: 'rundeck: server update failed'
    onsuccess:
      email:
        recipients: me@example.com
        subject: 'rundeck: server update finished'
  notifyAvgDurationThreshold: null
  options:
  - description: Maximum number of server updates in parallel.
    name: parallelity
    regex: ^[0-9]+$
    required: true
    value: '1'
  scheduleEnabled: true
  sequence:
    commands:
    - configuration:
        command: /usr/bin/custom-server-update "${node.name}"
      nodeStep: true
      type: localexec
    keepgoing: false
    strategy: parallel
When creating or editing your job, scroll down to the node dispatch settings and set "If a node fails" to "Continue running on any remaining nodes before failing the step".
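In the exported job YAML, that option corresponds to the keepgoing flag under nodefilters/dispatch. A minimal sketch of the relevant fragment from the job above with the setting flipped:
  nodefilters:
    dispatch:
      excludePrecedence: true
      keepgoing: true    # continue on remaining nodes if a node fails
      successOnEmptyNodeFilter: false
      threadcount: ${option.parallelity}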

How to fix Circular dependency between resources on a logical ID

I am trying to automate the build process of my serverless application. When I set up the CognitoUserPool resource, I need a Ref to the CognitoUserPoolClient resource to create a redirect link for the client in "EmailMessage".
But the CognitoUserPoolClient needs a Ref to CognitoUserPool.
CognitoUserPool:
  Type: AWS::Cognito::UserPool
  Properties:
    # Generate a name based on the stage
    UserPoolName: ${self:custom.STAGE}-${self:custom.CLIENT}-user-pool
    EmailMessage: !Join
      - ''
      - - >
          You are invited to the SonderMMS platform, the world's first owned media management software. <br>
          Username: {username} <br>
          Password: {####} <br>
          Login
          <a href='
          https://${self:custom.COGNITO_DOMAIN}.auth.${self:custom.REGION}.amazoncognito.com/oauth2/authorize?&response_type=token&redirect_uri=${self:custom.URL.${self:custom.STAGE}}&client_id=
        - !Sub '#{CognitoUserPoolClient}'
        - >
          '> here </a>.
    EmailSubject: "Email Invite"
CognitoUserPoolClient:
  Type: AWS::Cognito::UserPoolClient
  Properties:
    # Generate an app client name based on the stage
    ClientName: ${self:custom.STAGE}-${self:custom.CLIENT}-user-pool-client
    UserPoolId: !Ref CognitoUserPool
    ExplicitAuthFlows:
      - ADMIN_NO_SRP_AUTH
    GenerateSecret: false
So I get this error:
The CloudFormation template is invalid: Circular dependency between resources: [CognitoUserPoolClient, ApiGatewayAuthorizer, IdentityPool, SSMCognitoUserPoolClientId, IdentityPoolRoleMapping, SSMUserPoolId, ApiGatewayMethodTelstraPost, CognitoUserPool, SSMIdentityPoolId, ApiGatewayDeployment1559121507544, CognitoUnAuthorizedRole]