concourse complains about inputs for on_success

Concourse complains about inputs for on_success: unknown/extra keys: - jobs[1].plan[2].on_success.inputs
This just happened on a recent upgrade; it wasn't always the case. I'm using Concourse 3.2.1.
- put: deploy-to-cloud
  inputs:
  - name: some-input
  params:
    manifest: manifest.yml
    path: some-input

There was a lot of cleanup done for the pipeline configuration; it has become more strict.
Within a put, you cannot define inputs. A put gets every available resource automagically mounted into it, so in your use case some-input is already in there.
- put: deploy-to-cloud
  params:
    manifest: manifest.yml
    path: some-input

How to access a multi branch resource attribute in a concourse job?

I'm using multi branch resourcing in a concourse pipeline like so:
resources:
- name: my-resource
type: git-multibranch
source:
uri: git#github.com.../my-resource
branches: 'feature/.*'
private_key: ...
ignore-branches: ''
How can I access the branch the resource is on at the time the job runs? Like so:
jobs:
  ...
  outputs:
  - name: my-resource
    params:
      GIT_BRANCH: {BRANCH-GOES-HERE}
I'm looking to access it via something like my-resource.branch, but haven't found anything that works yet.

Do AWS SAM templates support LifecycleConfiguration settings?

Does anyone know if SAM templates support LifecycleConfiguration settings? I see that within standard CloudFormation definitions you can define the lifecycle of objects like:
BucketName: "Mys3Bucket"
LifecycleConfiguration:
  Rules:
  - AbortIncompleteMultipartUpload:
      DaysAfterInitiation: 7
    Status: Enabled
  - ExpirationInDays: 14
    ...
But this seems to fail when used in a SAM template. Am I doing something wrong or is this not part of the serverless application model definition?
It works for me using the SAM CLI 1.15.0, although documentation seems sparse (hence my landing on this question while trying to figure it out).
The SAM template snippet below successfully creates a bucket and sets an appropriate lifecycle rule.
Resources:
  Bucket1:
    Type: 'AWS::S3::Bucket'
    Properties:
      BucketName: !Sub "${BucketName}"
      AccessControl: Private
      VersioningConfiguration:
        Status: Enabled
      LifecycleConfiguration:
        Rules:
        - ExpirationInDays: 6
          Status: Enabled

run ansible task only if tag is NOT specified

Say I want to run a task only when a specific tag is NOT in the list of tags supplied on the command line, even if other tags are specified. Of the following, only the last one works as I expect in all situations:
- hosts: all
  tasks:
  - debug:
      msg: "not TAG (won't work if other tags specified)"
    tags: not TAG
  - debug:
      msg: "always, but not if TAG specified (doesn't work; always runs)"
    tags: always,not TAG
  - debug:
      msg: 'ALWAYS, but not if TAG in ansible_run_tags'
    when: "'TAG' not in ansible_run_tags"
    tags: always
Try it with different CLI options and you'll hopefully see why I find this a bit perplexing:
ansible-playbook tags-test.yml -l HOST
ansible-playbook tags-test.yml -l HOST -t TAG
ansible-playbook tags-test.yml -l HOST -t OTHERTAG
Questions: (a) is that expected behavior? and (b) is there a better way or some logic I'm missing?
I'm surprised I had to dig into the (undocumented, AFAICT) variable ansible_run_tags.
Amendment: it was suggested that I post my actual use case. I'm using Ansible to drive system updates on Debian-family systems. I want to notify at the end if a reboot is required, unless the tag reboot was supplied, in which case the system should reboot (and wait for it to come back up). Here is the relevant snippet:
- name: check and perhaps reboot
  block:
  - name: Check if a reboot is required
    stat:
      path: /var/run/reboot-required
      get_md5: no
    register: reboot
    tags: always,reboot
  - name: Alert if a reboot is required
    fail:
      msg: "NOTE: a reboot is required to finish updates."
    when:
      - ('reboot' not in ansible_run_tags)
      - reboot.stat.exists
    tags: always
  - name: Reboot the server
    reboot:
      msg: rebooting after Ansible applied system updates
    when: reboot.stat.exists or ('force-reboot' in ansible_run_tags)
    tags: never,reboot,force-reboot
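For reference, here is how this plays out on the command line (the playbook name is just an example):

# warn only: fails with the NOTE when a reboot is pending
ansible-playbook updates.yml
# actually reboot, but only when one is required
ansible-playbook updates.yml -t reboot
# reboot even if /var/run/reboot-required doesn't exist
ansible-playbook updates.yml -t force-reboot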
I think my original question(s) still have merit, but I'm also willing to accept alternative methods of accomplishing this same functionality.
For completeness, and since only @paul-sweeney has offered an alternative solution, I'll answer my own question with my current best solution and let people pick / up-vote their favorite:
---
- name: run only if 'TAG' not specified
  debug:
    msg: 'ALWAYS, but not if TAG in ansible_run_tags'
  when: "'TAG' not in ansible_run_tags"
  tags: always
I know it's an old(ish) question, but I had a similar requirement.
It's probably something best implemented another way ... but ... sometimes it can be useful.
I'd achieve it by setting a fact if the tag IS specified, then outputting the message only if the fact is not set, something like:
---
- name: "test task runs only if tag missing"
  hosts: all
  tasks:
  - name: "suppress message if tag given"
    set_fact: suppress_message=yes
    tags: reboot,never
  - name: "message"
    debug:
      msg: "You didn't say 'reboot'"
    when: suppress_message is not defined
I think we have states for controlling (e.g. started, restarted, stopped), states for installing (present, absent), and components (webserver, db, ...).
Ansible lacks a good separation of those 3 dimensions, and mixing them in a single tag system leads to confusion.
For example, if you have a 'webserver' and a 'db' tag, you may want to restart the DB and not the webserver using a 'restart' tag.
But that won't work if the restart tasks of the DB and the webserver live in the same tasks file with the same 'restart' tag, as that tag will restart both the DB and the webserver...
So you will probably have to separate the webserver and DB tasks into 2 files and use the tag at the level of the include.
Using tags means you have a tree of options, not a matrix of options.
I like the tag concept, but the fact that it cannot be used in conditional expressions makes it less appealing.
What I recommend is to declare tags in a role but map them into variables as a first task, as sketched below. The 'restart' and 'db' tags then become boolean variables in the role, used with when: instead of tags:
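A minimal sketch of that mapping, with made-up variable and service names:

---
# First task of the role: translate run tags into booleans once.
- name: Map run tags to variables
  set_fact:
    do_restart: "{{ 'restart' in ansible_run_tags }}"
    on_db: "{{ 'db' in ansible_run_tags }}"
  tags: always

# Later tasks branch on the variables instead of on tags,
# so conditions can be combined freely (a matrix, not a tree).
- name: Restart only the DB
  service:
    name: postgresql
    state: restarted
  when: do_restart | bool and on_db | bool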
ansible-playbook has a --skip-tags option. The example from the docs is
ansible-playbook example.yml --skip-tags "packages"
https://docs.ansible.com/ansible/latest/playbook_guide/playbooks_tags.html
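Applied to the example playbook from the question, that would be:

ansible-playbook tags-test.yml -l HOST --skip-tags TAG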

Serverless CloudFormation template error instance of Fn::GetAtt references undefined resource

I'm trying to set up a new repo and I keep getting the error
The CloudFormation template is invalid: Template error: instance of Fn::GetAtt
references undefined resource uatLambdaRole
in my uat stage; however, the dev stage with the exact same format works fine.
I have a resource file for each of these environments.
dev
devLambdaRole:
  Type: AWS::IAM::Role
  Properties:
    RoleName: dev-lambda-role # The name of the role to be created in aws
    AssumeRolePolicyDocument:
      Version: '2012-10-17'
      Statement:
        - Effect: Allow
          Principal:
            Service:
              - lambda.amazonaws.com
          Action: sts:AssumeRole
    ManagedPolicyArns:
      - arn:aws:iam::aws:policy/AWSLambdaFullAccess
      # Documentation states the below policy is included automatically when you add VPC configuration, but it is currently bugged.
      - arn:aws:iam::aws:policy/service-role/AWSLambdaVPCAccessExecutionRole
uat
uatLambdaRole:
  Type: AWS::IAM::Role
  Properties:
    RoleName: uat-lambda-role # The name of the role to be created in aws
    AssumeRolePolicyDocument:
      Version: '2012-10-17'
      Statement:
        - Effect: Allow
          Principal:
            Service:
              - lambda.amazonaws.com
          Action: sts:AssumeRole
    ManagedPolicyArns:
      - arn:aws:iam::aws:policy/AWSLambdaFullAccess
      # Documentation states the below policy is included automatically when you add VPC configuration, but it is currently bugged.
      - arn:aws:iam::aws:policy/service-role/AWSLambdaVPCAccessExecutionRole
In my serverless.yml my role is defined as
role: ${self:custom.stage}LambdaRole
and the stage is set as
custom:
  stage: ${opt:stage, self:provider.stage}
Running serverless deploy --stage dev --verbose succeeds, but running serverless deploy --stage uat --verbose fails with the error. Can anyone see what I'm doing wrong? The uat resource was copied directly from the dev one with only the stage name change.
I had the same issue; eventually I discovered that my SQS queue name wasn't the same in all 3 places. The 3 places where the SQS name should match are shown below:
...
functions:
  mylambda:
    handler: sqsHandler.handler
    events:
      - sqs:
          arn:
            Fn::GetAtt:
              - mySqsName # <= Make sure that these match
              - Arn
resources:
  Resources:
    mySqsName: # <= Make sure that these match
      Type: "AWS::SQS::Queue"
      Properties:
        QueueName: "mySqsName" # <= Make sure that these match
        FifoQueue: true
Ended up here with the same error message. My issue turned out to be that I had the "resources" and "Resources" keys in serverless.yml backwards.
Correct:
resources: # <-- lowercase "r" first
  Resources: # <-- uppercase "R" second
    LambdaRole:
      Type: AWS::IAM::Role
      Properties:
        ...
🤦‍♂️
I missed copying a key part of my config here, the actual reference to my Resources file:
resources:
  Resources: ${file(./serverless-resources/${self:provider.stage}-resources.yml)}
The issue was that I had copied this from a guide and had accidentally used self:provider.stage rather than self:custom.stage. When I changed this, it deployed.
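With that fix, the working reference looks like this:

resources:
  Resources: ${file(./serverless-resources/${self:custom.stage}-resources.yml)}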
Indentation Issue
In general, when YAML isn't working I start by checking the indentation.
I hit this issue too: in my case, one of my resources was indented too much, putting the resource in the wrong node/object. Resources should be two indents in, as they belong in the Resources sub-node of the resources node.
For more info on this, see the YAML docs.
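As an illustration with made-up resource names, the difference looks like this:

# Wrong: MyRole is indented one level too deep, so it is parsed as a
# property of MyQueue instead of as a resource of its own
resources:
  Resources:
    MyQueue:
      Type: AWS::SQS::Queue
      MyRole:
        Type: AWS::IAM::Role

# Right: each resource sits directly under Resources
resources:
  Resources:
    MyQueue:
      Type: AWS::SQS::Queue
    MyRole:
      Type: AWS::IAM::Role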

Need to configure serverless resource output to get api gateway api id

I have a serverless project that is creating an API Gateway API amongst other things. One of the functions in the project needs to generate a URL for an API endpoint.
My plan is to get the API ID using a resource output in serverless.yml then create the URL and pass it through to the lambda function as an env parameter.
My problem/question is how to get the API ID as a CloudFormation output in serverless.yml?
I've tried:
resources:
  Outputs:
    RESTApiId:
      Description: The id of the API created in the API gateway
      Value:
        Ref: name-of-api
but this gives the error:
The CloudFormation template is invalid: Unresolved resource dependencies [name-of-api] in the Outputs block of the template
You can write something like this in the serverless.yml file:
provider:
  region: ${opt:region, 'eu-west-1'}
  stage: ${opt:stage, 'dev'}
  environment:
    REST_API_URL:
      Fn::Join:
        - ""
        - - "https://"
          - Ref: "ApiGatewayRestApi"
          - ".execute-api."
          - ${self:provider.region}
          - "."
          - Ref: "AWS::URLSuffix"
          - "/"
          - ${self:provider.stage}
Now you can call serverless with optional commandline options --stage and/or --region to override the defaults defined above, e.g:
serverless deploy --stage production --region us-east-1
In your code you can then use the environment variable REST_API_URL
Node.js:
const restApiUrl = process.env.REST_API_URL;
Python:
import os
rest_api_url = os.environ['REST_API_URL']
Java:
String restApiUrl = System.getenv("REST_API_URL");
The Serverless Framework has a documentation page on how it generates names for resources.
See AWS CloudFormation Resource Reference.
So the generated REST API resource is called ApiGatewayRestApi.
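So the Outputs block from the question just needs to reference that generated name:

resources:
  Outputs:
    RESTApiId:
      Description: The id of the API created in the API gateway
      Value:
        Ref: ApiGatewayRestApi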
Unfortunately, the documentation doesn't mention it:
resources:
  Outputs:
    apiGatewayHttpApiId:
      Value:
        Ref: HttpApi
      Export:
        Name: YourAppHttpApiId