Rundeck global variable in meta job not resolved - rundeck

We have a Rundeck (3.1.2-20190927) job which triggers multiple other jobs in Rundeck. Global variables are defined in /etc/rundeck/framework.properties and used in the jobs. They are used to build the URL of our Icinga so Rundeck can submit the job result to the monitoring. The variable is used in the notifications tab (edit job -> notification) of every single job.
When the meta job has run, it submits the result successfully to the monitoring. The same applies to the 'sub' jobs if you trigger them manually. BUT if they are triggered by the meta job, they throw an error:
Error calling the endpoint: Illegal character in authority at index 8: https://icinga-master-${globals.environment}.some.domain.
It looks like the global variable is not resolved correctly when the jobs are triggered by the meta job. Strangely, other meta jobs don't have this problem, and I can't find any configuration differences. Does anybody have an idea what the cause could be?
Thanks for any help!
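For reference, globals in /etc/rundeck/framework.properties are declared with the framework.globals. prefix and referenced in jobs as ${globals.<name>}. A sketch (the values here are placeholders, not the asker's real configuration):

```properties
# /etc/rundeck/framework.properties
# Declared as framework.globals.<name>, referenced in jobs as ${globals.<name>}
framework.globals.environment=prod
framework.globals.defaultbranch=master
```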
Update: Here's my job definition
- defaultTab: output
  description: "Call Patchday Jobs in a row \n**Attention! This will reboot the servers**"
  executionEnabled: true
  group: CloudServices/SomeSoftWare
  id: 5cf2966c-3e5f-4a32-8cce-b3e82b6fd036
  loglevel: INFO
  multipleExecutions: true
  name: Patchday SomeSoftWare - Meta Job
  nodeFilterEditable: false
  nodefilters:
    dispatch:
      excludePrecedence: true
      keepgoing: false
      rankOrder: ascending
      successOnEmptyNodeFilter: false
      threadcount: '1'
    filter: 'tags: role_SomeSoftWare'
  nodesSelectedByDefault: false
  notification:
    onfailure:
      plugin:
        configuration:
          _noSSLVerification: ''
          _printResponseToFile: ''
          _proxySettings: ''
          authentication: Basic
          body: |-
            {
              "type": "Service",
              "filter": "service.name==\"Rundeck-Job - ${job.name}\"",
              "exit_status": 2,
              "plugin_output": "'${job.name}' failed"
            }
          contentType: application/json
          file: ''
          headers: 'Accept: application/json'
          method: POST
          noSSLVerification: 'true'
          oauthTokenEndpoint: ''
          oauthValidateEndpoint: ''
          password: ******
          proxyIP: ''
          proxyPort: ''
          remoteUrl: https://icinga-master-${globals.environment}.some.domain:5665/v1/actions/process-check-result
          timeout: '30000'
          username: rundeck_process-check-result
        type: HttpNotification
    onsuccess:
      plugin:
        configuration:
          _noSSLVerification: ''
          _printResponseToFile: ''
          _proxySettings: ''
          authentication: Basic
          body: |-
            {
              "type": "Service",
              "filter": "service.name==\"Rundeck-Job - ${job.name}\"",
              "exit_status": 0,
              "plugin_output": "'${job.name}' succeeded"
            }
          contentType: application/json
          file: ''
          headers: 'Accept: application/json'
          method: POST
          noSSLVerification: 'true'
          oauthTokenEndpoint: ''
          oauthValidateEndpoint: ''
          password: ******
          proxyIP: ''
          proxyPort: ''
          remoteUrl: https://icinga-master-${globals.environment}.some.domain:5665/v1/actions/process-check-result
          timeout: '30000'
          username: rundeck_process-check-result
        type: HttpNotification
  notifyAvgDurationThreshold: null
  options:
  - description: Addition paramater to give to the ansible-playbook call
    name: AdditionalParameter
  scheduleEnabled: true
  sequence:
    commands:
    - jobref:
        args: 'branch: ${globals.defaultbranch}'
        group: Infrastructure
        name: Icinga Service Downtime
        nodefilters:
          dispatch:
            nodeIntersect: true
        uuid: 6eec5749-ef35-481e-aea8-674f233c32ac
    - description: Pause Bamboo Server
      jobref:
        args: -branch ${globals.defaultbranch} -bamboo_command pause
        group: Infrastructure
        name: Bamboo Control
        nodefilters:
          dispatch:
            nodeIntersect: true
        uuid: 87bc7f1c-d133-4d7e-9df9-2b40fb935fd4
    - configuration:
        ansible-become: 'false'
        ansible-disable-limit: 'false'
        ansible-playbook-inline: |
          - name: +++ Stop Tomcat ++++
            hosts: tag_role_SomeSoftWare
            gather_facts: true
            remote_user: ec2-user
            become: yes
            tasks:
            - name: stop tomcat
              service:
                name: tomcat
                state: stopped
      nodeStep: false
      type: com.batix.rundeck.plugins.AnsiblePlaybookInlineWorkflowStep
    - description: SomeSoftWare Update
      jobref:
        args: -branch ${globals.defaultbranch}
        group: Infrastructure
        name: SomeSoftWare Server - Setup
        nodefilters:
          dispatch:
            nodeIntersect: true
        uuid: f01a4483-d8b2-43cf-99fd-6a610d25c3a4
    - description: Install/Update fs-cli
      jobref:
        args: -branch ${globals.defaultbranch}
        group: Infrastructure
        name: firstspirit-cli - Setup
        nodefilters:
          dispatch:
            nodeIntersect: true
        uuid: c7c54433-be96-4d85-b2c1-32d0534b5c60
    - description: Install/Update Modules
      jobref:
        args: -branch ${globals.defaultbranch}
        group: Infrastructure
        name: SomeSoftWare Modules - Setup
        nodefilters:
          dispatch:
            nodeIntersect: true
        uuid: f7a8929b-2bc3-4abe-8c69-e0d2acf62159
    - description: restart SomeSoftWare
      exec: sudo service SomeSoftWare restart
    - description: Resume Bamboo Server
      jobref:
        args: -branch ${globals.defaultbranch} -bamboo_command resume
        group: Infrastructure
        name: Bamboo Control
        nodefilters:
          dispatch:
            nodeIntersect: true
        uuid: 87bc7f1c-d133-4d7e-9df9-2b40fb935fd4
    keepgoing: false
    strategy: node-first
  timeZone: Europe/Berlin
  uuid: 5cf2966c-3e5f-4a32-8cce-b3e82b6fd036

Related

Rundeck: Pass data between jobs

I'm trying to follow the instructions provided at https://stackoverflow.com/a/61802154 to pass output from one job as input into another job.
Job1 sets up the k/v data
- defaultTab: output
  description: ''
  executionEnabled: true
  id: b6656d3b-2b32-4554-b224-52bd3702c305
  loglevel: INFO
  name: job1
  nodeFilterEditable: false
  nodefilters:
    dispatch:
      excludePrecedence: true
      keepgoing: false
      rankOrder: ascending
      successOnEmptyNodeFilter: false
      threadcount: '1'
    filter: 'name: rdnode01'
  nodesSelectedByDefault: true
  plugins:
    ExecutionLifecycle: null
  scheduleEnabled: true
  sequence:
    commands:
    - description: output k/v
      exec: echo RUNDECK:DATA:MYNUM=123
    - description: test k/v
      exec: echo ${data.MYNUM}
    keepgoing: false
    pluginConfig:
      LogFilter:
      - config:
          invalidKeyPattern: \s|\$|\{|\}|\\
          logData: 'true'
          regex: ^RUNDECK:DATA:\s*([^\s]+?)\s*=\s*(.+)$
          replaceFilteredResult: 'false'
        type: key-value-data
    strategy: node-first
  uuid: b6656d3b-2b32-4554-b224-52bd3702c305
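As a quick sanity check, the log filter's pattern can be tried locally with grep (a sketch using POSIX character classes in place of the Java-style \s):

```shell
# Check that a log line matches the key-value-data capture pattern
line='RUNDECK:DATA:MYNUM=123'
pattern='^RUNDECK:DATA:[[:space:]]*[^[:space:]=]+[[:space:]]*=[[:space:]]*.+$'
if printf '%s\n' "$line" | grep -Eq "$pattern"; then
  echo "match: $line"   # prints "match: RUNDECK:DATA:MYNUM=123"
fi
```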
Job2 will output that k/v data
- defaultTab: output
  description: ''
  executionEnabled: true
  id: c069e7d3-2d1f-46f2-a4d8-15eb19761daf
  loglevel: INFO
  name: job2
  nodeFilterEditable: false
  nodefilters:
    dispatch:
      excludePrecedence: true
      keepgoing: false
      rankOrder: ascending
      successOnEmptyNodeFilter: false
      threadcount: '1'
    filter: 'name: rdnode01'
  nodesSelectedByDefault: true
  options:
  - name: option_for_receive
  plugins:
    ExecutionLifecycle: null
  scheduleEnabled: true
  sequence:
    commands:
    - exec: echo ${option.option_for_receive}
    keepgoing: false
    strategy: node-first
  uuid: c069e7d3-2d1f-46f2-a4d8-15eb19761daf
Wrapper runs the job references as node steps and passes the data from job1 to job2
- defaultTab: output
  description: ''
  executionEnabled: true
  id: 5a62cabf-ffc2-45d1-827b-156f4134a082
  loglevel: INFO
  name: wrapper job
  nodeFilterEditable: false
  nodefilters:
    dispatch:
      excludePrecedence: true
      keepgoing: false
      rankOrder: ascending
      successOnEmptyNodeFilter: false
      threadcount: '1'
    filter: 'name: rdnode01'
  nodesSelectedByDefault: true
  plugins:
    ExecutionLifecycle: null
  scheduleEnabled: true
  sequence:
    commands:
    - description: job1
      jobref:
        childNodes: true
        group: ''
        name: job1
        nodeStep: 'true'
        uuid: b6656d3b-2b32-4554-b224-52bd3702c305
    - description: job2
      jobref:
        args: -option_for_receive ${data.MYNUM}
        childNodes: true
        group: ''
        name: job2
        nodeStep: 'true'
        uuid: c069e7d3-2d1f-46f2-a4d8-15eb19761daf
    keepgoing: false
    strategy: node-first
  uuid: 5a62cabf-ffc2-45d1-827b-156f4134a082
This is the formatted text from the execution log:
11:26:39 [rundeck#rdnode01 1#node=rdnode01/1][NORMAL] RUNDECK:DATA:MYNUM=123
11:26:40 [rundeck#rdnode01 1#node=rdnode01/1][NORMAL] {"MYNUM":"123"}
11:26:40 [rundeck#rdnode01 1#node=rdnode01/2][NORMAL] 123
11:26:41 [rundeck#rdnode01 2#node=rdnode01/1][NORMAL] '${data.MYNUM}'
This is what it looks like on the screen:
As you can see, job2 is outputting '${data.MYNUM}' instead of the actual contents. Thus I think there's a syntax issue somewhere.
The data values are generated in the job context; the "Wrapper Job" (the parent job, in Rundeck terminology) therefore doesn't know about that data variable in its own context, since it was generated inside the first job.
If you want to pass that data value to another job, call the second job from the first one in the following way (as a workflow node step):
JobA:
- defaultTab: output
  description: ''
  executionEnabled: true
  id: b6656d3b-2b32-4554-b224-52bd3702c305
  loglevel: INFO
  name: job1
  nodeFilterEditable: false
  nodefilters:
    dispatch:
      excludePrecedence: true
      keepgoing: false
      rankOrder: ascending
      successOnEmptyNodeFilter: false
      threadcount: '1'
    filter: 'name: localhost '
  nodesSelectedByDefault: true
  plugins:
    ExecutionLifecycle: null
  scheduleEnabled: true
  sequence:
    commands:
    - description: output k/v
      exec: echo RUNDECK:DATA:MYNUM=123
    - description: test k/v
      exec: echo ${data.MYNUM}
    - jobref:
        args: -option_for_receive ${data.MYNUM}
        childNodes: true
        group: ''
        name: job2
        nodeStep: 'true'
        uuid: c069e7d3-2d1f-46f2-a4d8-15eb19761daf
    keepgoing: false
    pluginConfig:
      LogFilter:
      - config:
          invalidKeyPattern: \s|\$|\{|\}|\\
          logData: 'true'
          regex: ^RUNDECK:DATA:\s*([^\s]+?)\s*=\s*(.+)$
          replaceFilteredResult: 'false'
        type: key-value-data
    strategy: node-first
  uuid: b6656d3b-2b32-4554-b224-52bd3702c305
JobB:
- defaultTab: output
  description: ''
  executionEnabled: true
  id: c069e7d3-2d1f-46f2-a4d8-15eb19761daf
  loglevel: INFO
  name: job2
  nodeFilterEditable: false
  nodefilters:
    dispatch:
      excludePrecedence: true
      keepgoing: false
      rankOrder: ascending
      successOnEmptyNodeFilter: false
      threadcount: '1'
    filter: 'name: localhost '
  nodesSelectedByDefault: true
  options:
  - name: option_for_receive
  plugins:
    ExecutionLifecycle: null
  scheduleEnabled: true
  sequence:
    commands:
    - exec: echo ${option.option_for_receive}
    keepgoing: false
    strategy: node-first
  uuid: c069e7d3-2d1f-46f2-a4d8-15eb19761daf

Rundeck - loop on referenced job

I'm racking my brain over a referenced-job problem in my workflow, and I'm not sure this is possible with Rundeck:
I have a job that calls a second one. I want to run this second job for all nodes, but only on one server.
Maybe this example makes it easier to understand:
Workflow : Select Nodes
Referenced job 1
NodeA > Website www.exempleA.com < restore DB with default value
NodeB > Website www.exempleB.com < restore DB with default value
NodeC > Website www.exempleC.com < restore DB with default value
NodeD > Website www.exempleD.com < restore DB with default value
This runs perfectly.
Referenced job 2: uses a Cypress server to test the websites. Its node filter contains only the Cypress server.
NodeE > Cypress -url https://${node.name} = NodeA > www.exempleA.com
NodeE > Cypress -url https://${node.name} = NodeB > www.exempleB.com
NodeE > Cypress -url https://${node.name} = NodeC > www.exempleC.com
NodeE > Cypress -url https://${node.name} = NodeD > www.exempleD.com
So I want to loop with a referenced job that executes on only one server, but over all the node names.
Does anyone know if this configuration is possible with Rundeck?
Thank you for your knowledge.
Erwan
An excellent way to do that is to play with parent job options in two ways: first, against the first child job as a node filter (to dispatch to remote nodes), and second, against the second child job (to create an array and run the Cypress command in a bash loop).
Here is an example to test.
Parent Job. Contains an option that should be used for child jobs.
- defaultTab: nodes
  description: ''
  executionEnabled: true
  id: db051872-7d5f-4506-bd49-17719af9785b
  loglevel: INFO
  name: ParentJob
  nodeFilterEditable: false
  options:
  - name: nodes
    value: node00 node01 node02
  plugins:
    ExecutionLifecycle: null
  scheduleEnabled: true
  sequence:
    commands:
    - jobref:
        args: -myfilter ${option.nodes}
        group: ''
        name: FirstChildJob
        nodeStep: 'true'
        uuid: f7271fc4-3ccb-41a5-9de4-a12e65093a3d
    - jobref:
        args: -myarray ${option.nodes}
        childNodes: true
        group: ''
        name: SecondChildJob
        nodeStep: 'true'
        uuid: 1b8b1d82-a8dc-4949-9245-e973a8c37f5a
    keepgoing: false
    strategy: sequential
  uuid: db051872-7d5f-4506-bd49-17719af9785b
First Child Job. Takes the parent job's option and uses it as the node filter; the filter is the job's own option, referenced as ${option.myfilter}.
- defaultTab: nodes
  description: ''
  executionEnabled: true
  id: f7271fc4-3ccb-41a5-9de4-a12e65093a3d
  loglevel: INFO
  name: FirstChildJob
  nodeFilterEditable: false
  nodefilters:
    dispatch:
      excludePrecedence: true
      keepgoing: false
      rankOrder: ascending
      successOnEmptyNodeFilter: false
      threadcount: '1'
    filter: ${option.myfilter}
  nodesSelectedByDefault: true
  options:
  - name: myfilter
  plugins:
    ExecutionLifecycle: null
  scheduleEnabled: true
  sequence:
    commands:
    - exec: echo "hi"
    keepgoing: false
    strategy: node-first
  uuid: f7271fc4-3ccb-41a5-9de4-a12e65093a3d
Second Child Job. Contains an inline-script step that takes the parent job's option as an array and iterates over it in a bash loop.
- defaultTab: nodes
  description: ''
  executionEnabled: true
  id: 1b8b1d82-a8dc-4949-9245-e973a8c37f5a
  loglevel: INFO
  name: SecondChildJob
  nodeFilterEditable: false
  nodefilters:
    dispatch:
      excludePrecedence: true
      keepgoing: false
      rankOrder: ascending
      successOnEmptyNodeFilter: false
      threadcount: '1'
    filter: 'name: node02'
  nodesSelectedByDefault: true
  options:
  - name: myarray
  plugins:
    ExecutionLifecycle: null
  scheduleEnabled: true
  sequence:
    commands:
    - script: "#!/bin/bash\narray=(@option.myarray@)\nfor i in \"${array[@]}\"\ndo\n\
        \techo \"execute $i\"\ndone"
    keepgoing: false
    strategy: node-first
  uuid: 1b8b1d82-a8dc-4949-9245-e973a8c37f5a
Here is the loop script (inside the second child job as inline-script):
#!/bin/bash
array=(@option.myarray@)
for i in "${array[@]}"
do
  echo "$i"
done
And here you can see the result.
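Run outside Rundeck, with the option token replaced by a literal node list (a local sketch of the same loop), it expands each entry:

```shell
#!/bin/bash
# Local sketch: the Rundeck option token is replaced by a literal node list
array=(node00 node01 node02)
for i in "${array[@]}"; do
  echo "execute $i"
done
```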

How to make a Rundeck parent job with different nodes for each workflow step?

I have three jobs in the same project, each with its own node filter, and the matched nodes do not overlap between these jobs. I want to create a parent job that runs these three jobs instead of me running them individually. How do I configure the nodes on this parent job? Each step has its own list of nodes.
Nothing is needed in the Parent Job; just edit the Job Reference Steps and check the "Use referenced job's nodes" checkbox.
A basic example:
Parent Job:
- defaultTab: nodes
  description: ''
  executionEnabled: true
  id: a0d5834d-4b62-44d9-bd1e-f00a6befb990
  loglevel: INFO
  name: ParentJob
  nodeFilterEditable: false
  plugins:
    ExecutionLifecycle: null
  scheduleEnabled: true
  sequence:
    commands:
    - jobref:
        childNodes: true
        group: ''
        name: JobA
        uuid: 63fb953c-53e0-4233-ba28-eabd69a0e41c
    - jobref:
        childNodes: true
        group: ''
        name: JobB
        uuid: 8936db73-9bd4-4912-ae07-c5fc8500ee9d
    - jobref:
        childNodes: true
        group: ''
        name: JobC
        uuid: 16fa66d3-fbda-439a-9a2b-14f90e99f72b
    keepgoing: false
    strategy: node-first
  uuid: a0d5834d-4b62-44d9-bd1e-f00a6befb990
JobA:
- defaultTab: nodes
  description: ''
  executionEnabled: true
  id: 63fb953c-53e0-4233-ba28-eabd69a0e41c
  loglevel: INFO
  name: JobA
  nodeFilterEditable: false
  nodefilters:
    dispatch:
      excludePrecedence: true
      keepgoing: false
      rankOrder: ascending
      successOnEmptyNodeFilter: false
      threadcount: '1'
    filter: 'name: node00 '
  nodesSelectedByDefault: true
  plugins:
    ExecutionLifecycle: null
  scheduleEnabled: true
  sequence:
    commands:
    - exec: hostname
    keepgoing: false
    strategy: node-first
  uuid: 63fb953c-53e0-4233-ba28-eabd69a0e41c
JobB:
- defaultTab: nodes
  description: ''
  executionEnabled: true
  id: 8936db73-9bd4-4912-ae07-c5fc8500ee9d
  loglevel: INFO
  name: JobB
  nodeFilterEditable: false
  nodefilters:
    dispatch:
      excludePrecedence: true
      keepgoing: false
      rankOrder: ascending
      successOnEmptyNodeFilter: false
      threadcount: '1'
    filter: 'name: node01'
  nodesSelectedByDefault: true
  plugins:
    ExecutionLifecycle: null
  scheduleEnabled: true
  sequence:
    commands:
    - exec: hostname
    keepgoing: false
    strategy: node-first
  uuid: 8936db73-9bd4-4912-ae07-c5fc8500ee9d
JobC:
- defaultTab: nodes
  description: ''
  executionEnabled: true
  id: 16fa66d3-fbda-439a-9a2b-14f90e99f72b
  loglevel: INFO
  name: JobC
  nodeFilterEditable: false
  nodefilters:
    dispatch:
      excludePrecedence: true
      keepgoing: false
      rankOrder: ascending
      successOnEmptyNodeFilter: false
      threadcount: '1'
    filter: 'name: node02'
  nodesSelectedByDefault: true
  plugins:
    ExecutionLifecycle: null
  scheduleEnabled: true
  sequence:
    commands:
    - exec: hostname
    keepgoing: false
    strategy: node-first
  uuid: 16fa66d3-fbda-439a-9a2b-14f90e99f72b
Check the Result.

AWS batch cloudformation - “CannotPullContainerError”

I have a CloudFormation template for an AWS Batch POC with six resources:
3 AWS::IAM::Role
1 AWS::Batch::ComputeEnvironment
1 AWS::Batch::JobQueue
1 AWS::Batch::JobDefinition
The AWS::IAM::Role resources have the policy "arn:aws:iam::aws:policy/AdministratorAccess" (in order to rule out permission issues).
The roles are used as follows:
1 in the AWS::Batch::ComputeEnvironment
2 in the AWS::Batch::JobDefinition
But even with the "arn:aws:iam::aws:policy/AdministratorAccess" policy I get "CannotPullContainerError: Error response from daemon: Get https://********.dkr.ecr.eu-west-1.amazonaws.com/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" when I run a job.
Disclaimer: everything is FARGATE (compute environment and job), not EC2.
AWSTemplateFormatVersion: '2010-09-09'
Description: Creates a POC AWS Batch environment.
Parameters:
  Environment:
    Type: String
    Description: 'Environment Name'
    Default: TEST
  Subnets:
    Type: List<AWS::EC2::Subnet::Id>
    Description: 'List of Subnets to boot into'
  ImageName:
    Type: String
    Description: 'Name and tag of Process Container Image'
    Default: 'upload:6.0.0'
Resources:
  BatchServiceRole:
    Type: 'AWS::IAM::Role'
    Properties:
      RoleName: !Join ['', ['Demo', BatchServiceRole]]
      AssumeRolePolicyDocument:
        Version: 2012-10-17
        Statement:
          - Effect: 'Allow'
            Principal:
              Service: 'batch.amazonaws.com'
            Action: 'sts:AssumeRole'
      ManagedPolicyArns:
        - 'arn:aws:iam::aws:policy/AdministratorAccess'
  BatchContainerRole:
    Type: 'AWS::IAM::Role'
    Properties:
      RoleName: !Join ['', ['Demo', BatchContainerRole]]
      AssumeRolePolicyDocument:
        Version: 2012-10-17
        Statement:
          - Effect: 'Allow'
            Principal:
              Service:
                - 'ecs-tasks.amazonaws.com'
            Action:
              - 'sts:AssumeRole'
      ManagedPolicyArns:
        - 'arn:aws:iam::aws:policy/AdministratorAccess'
  BatchJobRole:
    Type: 'AWS::IAM::Role'
    Properties:
      RoleName: !Join ['', ['Demo', BatchJobRole]]
      AssumeRolePolicyDocument:
        Version: 2012-10-17
        Statement:
          - Effect: 'Allow'
            Principal:
              Service: 'ecs-tasks.amazonaws.com'
            Action: 'sts:AssumeRole'
      ManagedPolicyArns:
        - 'arn:aws:iam::aws:policy/AdministratorAccess'
  BatchCompute:
    Type: "AWS::Batch::ComputeEnvironment"
    Properties:
      ComputeEnvironmentName: DemoContentInput
      ComputeResources:
        MaxvCpus: 256
        SecurityGroupIds:
          - sg-0b33333333333333
        Subnets: !Ref Subnets
        Type: FARGATE
      ServiceRole: !Ref BatchServiceRole
      State: ENABLED
      Type: Managed
  Queue:
    Type: "AWS::Batch::JobQueue"
    DependsOn: BatchCompute
    Properties:
      ComputeEnvironmentOrder:
        - ComputeEnvironment: DemoContentInput
          Order: 1
      Priority: 1
      State: "ENABLED"
      JobQueueName: DemoContentInput
  ContentInputJob:
    Type: "AWS::Batch::JobDefinition"
    Properties:
      Type: Container
      ContainerProperties:
        Command:
          - -v
          - process
          - new-file
          - -o
          - s3://contents/{content_id}/{content_id}.mp4
        Environment:
          - Name: SECRETS
            Value: !Join [ ':', [ '{{resolve:secretsmanager:common.secrets:SecretString:aws_access_key_id}}', '{{resolve:secretsmanager:common.secrets:SecretString:aws_secret_access_key}}' ] ]
          - Name: APPLICATION
            Value: upload
          - Name: API_KEY
            Value: '{{resolve:secretsmanager:common.secrets:SecretString:fluzo.api_key}}'
          - Name: CLIENT
            Value: upload-container
          - Name: ENVIRONMENT
            Value: !Ref Environment
          - Name: SETTINGS
            Value: !Join [ ':', [ '{{resolve:secretsmanager:common.secrets:SecretString:aws_access_key_id}}', '{{resolve:secretsmanager:common.secrets:SecretString:aws_secret_access_key}}', 'upload-container' ] ]
        ExecutionRoleArn: 'arn:aws:iam::**********:role/DemoBatchJobRole'
        Image: !Join ['', [!Ref 'AWS::AccountId','.dkr.ecr.', !Ref 'AWS::Region', '.amazonaws.com/', !Ref ImageName ] ]
        JobRoleArn: !Ref BatchContainerRole
        ResourceRequirements:
          - Type: VCPU
            Value: 1
          - Type: MEMORY
            Value: 2048
      JobDefinitionName: DemoContentInput
      PlatformCapabilities:
        - FARGATE
      RetryStrategy:
        Attempts: 1
      Timeout:
        AttemptDurationSeconds: 600
In AWS::Batch::JobDefinition ContainerProperties ExecutionRoleArn I hardcoded the ARN, because if I write !Ref BatchJobRole I get an error. But that's not the point of this question.
The question is how to avoid "CannotPullContainerError: Error response from daemon: Get https://********.dkr.ecr.eu-west-1.amazonaws.com/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" when I run a job.
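(As a side note on the hardcoded ARN: the !Ref error is likely because Ref on an AWS::IAM::Role returns the role name, while ExecutionRoleArn and JobRoleArn expect an ARN; Fn::GetAtt on the role returns the ARN. A sketch of the fragment:)

```yaml
ContentInputJob:
  Type: "AWS::Batch::JobDefinition"
  Properties:
    ContainerProperties:
      # GetAtt yields the role ARN; Ref would yield only the role name
      ExecutionRoleArn: !GetAtt BatchJobRole.Arn
      JobRoleArn: !GetAtt BatchContainerRole.Arn
```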
It sounds like you can't reach the internet from inside your subnet.
Make sure:
There is an internet gateway device associated with your VPC (create one if there isn't -- even if you are just using nat-gateway for egress)
The route table that is associated with your subnet has a default route (0.0.0.0/0) to an internet gateway, or to a NAT gateway with an attached Elastic IP.
An attached security group has rules allowing outbound internet traffic (0.0.0.0/0) for your ports and protocols. (e.g. 80/http, 443/https)
The network access control list (network ACL) that is associated with the subnet has rules allowing both outbound and inbound traffic to the internet.
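Additionally, for Fargate tasks in a public subnet without a NAT gateway, the job has to be assigned a public IP to reach ECR. A sketch of the relevant AWS::Batch::JobDefinition fragment (assuming your subnets are public):

```yaml
ContentInputJob:
  Type: "AWS::Batch::JobDefinition"
  Properties:
    ContainerProperties:
      NetworkConfiguration:
        AssignPublicIp: ENABLED  # default is DISABLED for Fargate jobs
```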
References:
https://aws.amazon.com/premiumsupport/knowledge-center/ec2-connect-internet-gateway/

CORS configuration in AWS SAM with HTTP API appears to be ignored

Edit: This started working as expected by itself the day after. I'm very sure I didn't do anything different. I don't know what to do with the question now: close it, delete it, or let it be?
I am creating a servless web api using AWS SAM and the new HTTP API gateway.
This is my current template file:
AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31
Description: >
  fotffleet-api

  Sample SAM Template for fotffleet-api

# More info about Globals: https://github.com/awslabs/serverless-application-model/blob/master/docs/globals.rst
Globals:
  Function:
    Timeout: 3
    CodeUri: fotffleet/
    Runtime: python3.7

Resources:
  ServerlessHttpApi:
    Type: AWS::Serverless::HttpApi
    Properties:
      CorsConfiguration:
        AllowOrigins:
          - "http://localhost:3000"
        AllowMethods:
          - GET
      DefinitionBody:
        openapi: "3.0.1"
        info:
          title: "sam-app"
          version: "1.0"
        tags:
          - name: "httpapi:createdBy"
            x-amazon-apigateway-tag-value: "SAM"
        paths:
          /cells:
            get:
              responses:
                default:
                  description: "Default response for GET /cells"
              x-amazon-apigateway-integration:
                payloadFormatVersion: "2.0"
                type: "aws_proxy"
                httpMethod: "POST"
                uri:
                  Fn::GetAtt: [CellsFunction, Arn]
                connectionType: "INTERNET"
          /cells/{cellName}:
            get:
              responses:
                default:
                  description: "Default response for GET /cells/{cellName}"
              x-amazon-apigateway-integration:
                payloadFormatVersion: "2.0"
                type: "aws_proxy"
                httpMethod: "POST"
                uri:
                  Fn::GetAtt: [CellInfoFunction, Arn]
                connectionType: "INTERNET"
          /hello:
            get:
              responses:
                default:
                  description: "Default response for GET /hello"
              x-amazon-apigateway-integration:
                payloadFormatVersion: "2.0"
                type: "aws_proxy"
                httpMethod: "POST"
                uri:
                  Fn::GetAtt: [HelloFunction, Arn]
                connectionType: "INTERNET"
          /jobs/{cellName}:
            get:
              responses:
                default:
                  description: "Default response for GET /jobs/{cellName}"
              x-amazon-apigateway-integration:
                payloadFormatVersion: "2.0"
                type: "aws_proxy"
                httpMethod: "POST"
                uri:
                  Fn::GetAtt: [JobsFunction, Arn]
                connectionType: "INTERNET"
        x-amazon-apigateway-cors:
          maxAge: 0
          allowCredentials: false
          allowOrigins:
            - "http://localhost:3000"
        x-amazon-apigateway-importexport-version: "1.0"
  HelloFunction:
    Type: AWS::Serverless::Function
    Properties:
      Handler: app.hello
      Events:
        Hello:
          Type: HttpApi
          Properties:
            Path: /hello
            Method: get
            ApiId: !Ref ServerlessHttpApi
  CellInfoFunction:
    Type: AWS::Serverless::Function
    Properties:
      Handler: app.cell_info
      Policies:
        - AWSIoTFullAccess
      Events:
        Http:
          Type: HttpApi
          Properties:
            Path: /cells/{cellName}
            Method: get
            ApiId: !Ref ServerlessHttpApi
  CellsFunction:
    Type: AWS::Serverless::Function
    Properties:
      Handler: app.cells
      Policies:
        - AWSIoTFullAccess
      Events:
        Http:
          Type: HttpApi
          Properties:
            Path: /cells
            Method: get
            ApiId: !Ref ServerlessHttpApi
  JobsFunction:
    Type: AWS::Serverless::Function
    Properties:
      Handler: app.jobs
      Policies:
        - AmazonDynamoDBReadOnlyAccess
      Events:
        Http:
          Type: HttpApi
          Properties:
            Path: /jobs/{cellName}
            Method: get
            ApiId: !Ref ServerlessHttpApi

Outputs:
  # Find out more about other implicit resources you can reference within SAM
  # https://github.com/awslabs/serverless-application-model/blob/master/docs/internals/generated_resources.rst#api
  HelloWorldApi:
    Description: "API Gateway endpoint URL for Prod stage"
    Value: !Sub "https://${ServerlessHttpApi}.execute-api.${AWS::Region}.amazonaws.com"
As can be seen, I have added some CORS configuration, both in the AWS::Serverless::HttpApi properties and in the DefinitionBody, i.e. the OpenAPI spec.
My problem is that, as far as I can tell, all CORS configuration is completely ignored. When I deploy after changing the CORS configuration, it says:
Waiting for changeset to be created..
Error: No changes to deploy. Stack sam-app is up to date
When I run sam validate --debug --profile [my-profile], it is my understanding that the CloudFormation template it tries to deploy is output. It looks like this:
AWSTemplateFormatVersion: '2010-09-09'
Description: 'fotffleet-api

  Sample SAM Template for fotffleet-api

  '
Resources:
  HelloFunction:
    Properties:
      Code:
        S3Bucket: bucket
        S3Key: value
      Handler: app.hello
      Role:
        Fn::GetAtt:
        - HelloFunctionRole
        - Arn
      Runtime: python3.7
      Tags:
      - Key: lambda:createdBy
        Value: SAM
      Timeout: 3
    Type: AWS::Lambda::Function
  HelloFunctionRole:
    Properties:
      AssumeRolePolicyDocument:
        Statement:
        - Action:
          - sts:AssumeRole
          Effect: Allow
          Principal:
            Service:
            - lambda.amazonaws.com
        Version: '2012-10-17'
      ManagedPolicyArns:
      - arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole
      Tags:
      - Key: lambda:createdBy
        Value: SAM
    Type: AWS::IAM::Role
  HelloFunctionHelloPermission:
    Properties:
      Action: lambda:InvokeFunction
      FunctionName:
        Ref: HelloFunction
      Principal: apigateway.amazonaws.com
      SourceArn:
        Fn::Sub:
        - arn:${AWS::Partition}:execute-api:${AWS::Region}:${AWS::AccountId}:${__ApiId__}/${__Stage__}/GET/hello
        - __ApiId__:
            Ref: ServerlessHttpApi
          __Stage__: '*'
    Type: AWS::Lambda::Permission
  CellInfoFunction:
    Properties:
      Code:
        S3Bucket: bucket
        S3Key: value
      Handler: app.cell_info
      Role:
        Fn::GetAtt:
        - CellInfoFunctionRole
        - Arn
      Runtime: python3.7
      Tags:
      - Key: lambda:createdBy
        Value: SAM
      Timeout: 3
    Type: AWS::Lambda::Function
  CellInfoFunctionRole:
    Properties:
      AssumeRolePolicyDocument:
        Statement:
        - Action:
          - sts:AssumeRole
          Effect: Allow
          Principal:
            Service:
            - lambda.amazonaws.com
        Version: '2012-10-17'
      ManagedPolicyArns:
      - arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole
      - arn:aws:iam::aws:policy/AWSIoTFullAccess
      Tags:
      - Key: lambda:createdBy
        Value: SAM
    Type: AWS::IAM::Role
  CellInfoFunctionHttpPermission:
    Properties:
      Action: lambda:InvokeFunction
      FunctionName:
        Ref: CellInfoFunction
      Principal: apigateway.amazonaws.com
      SourceArn:
        Fn::Sub:
        - arn:${AWS::Partition}:execute-api:${AWS::Region}:${AWS::AccountId}:${__ApiId__}/${__Stage__}/GET/cells/*
        - __ApiId__:
            Ref: ServerlessHttpApi
          __Stage__: '*'
    Type: AWS::Lambda::Permission
  CellsFunction:
    Properties:
      Code:
        S3Bucket: bucket
        S3Key: value
      Handler: app.cells
      Role:
        Fn::GetAtt:
        - CellsFunctionRole
        - Arn
      Runtime: python3.7
      Tags:
      - Key: lambda:createdBy
        Value: SAM
      Timeout: 3
    Type: AWS::Lambda::Function
  CellsFunctionRole:
    Properties:
      AssumeRolePolicyDocument:
        Statement:
        - Action:
          - sts:AssumeRole
          Effect: Allow
          Principal:
            Service:
            - lambda.amazonaws.com
        Version: '2012-10-17'
      ManagedPolicyArns:
      - arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole
      - arn:aws:iam::aws:policy/AWSIoTFullAccess
      Tags:
      - Key: lambda:createdBy
        Value: SAM
    Type: AWS::IAM::Role
  CellsFunctionHttpPermission:
    Properties:
      Action: lambda:InvokeFunction
      FunctionName:
        Ref: CellsFunction
      Principal: apigateway.amazonaws.com
      SourceArn:
        Fn::Sub:
        - arn:${AWS::Partition}:execute-api:${AWS::Region}:${AWS::AccountId}:${__ApiId__}/${__Stage__}/GET/cells
        - __ApiId__:
            Ref: ServerlessHttpApi
          __Stage__: '*'
    Type: AWS::Lambda::Permission
  JobsFunction:
    Properties:
      Code:
        S3Bucket: bucket
        S3Key: value
      Handler: app.jobs
      Role:
        Fn::GetAtt:
        - JobsFunctionRole
        - Arn
      Runtime: python3.7
      Tags:
      - Key: lambda:createdBy
        Value: SAM
      Timeout: 3
    Type: AWS::Lambda::Function
  JobsFunctionRole:
    Properties:
      AssumeRolePolicyDocument:
        Statement:
        - Action:
          - sts:AssumeRole
          Effect: Allow
          Principal:
            Service:
            - lambda.amazonaws.com
        Version: '2012-10-17'
      ManagedPolicyArns:
      - arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole
      - arn:aws:iam::aws:policy/AmazonDynamoDBReadOnlyAccess
      Tags:
      - Key: lambda:createdBy
        Value: SAM
    Type: AWS::IAM::Role
  JobsFunctionHttpPermission:
    Properties:
      Action: lambda:InvokeFunction
      FunctionName:
        Ref: JobsFunction
      Principal: apigateway.amazonaws.com
      SourceArn:
        Fn::Sub:
        - arn:${AWS::Partition}:execute-api:${AWS::Region}:${AWS::AccountId}:${__ApiId__}/${__Stage__}/GET/jobs/*
        - __ApiId__:
            Ref: ServerlessHttpApi
          __Stage__: '*'
    Type: AWS::Lambda::Permission
  ServerlessHttpApi:
    Properties:
      Body:
        info:
          title:
            Ref: AWS::StackName
          version: '1.0'
        openapi: 3.0.1
        paths:
          /cells:
            get:
              responses: {}
              x-amazon-apigateway-integration:
                httpMethod: POST
                payloadFormatVersion: '2.0'
                type: aws_proxy
                uri:
                  Fn::Sub: arn:${AWS::Partition}:apigateway:${AWS::Region}:lambda:path/2015-03-31/functions/${CellsFunction.Arn}/invocations
          /cells/{cellName}:
            get:
              parameters:
              - in: path
                name: cellName
                required: true
              responses: {}
              x-amazon-apigateway-integration:
                httpMethod: POST
                payloadFormatVersion: '2.0'
                type: aws_proxy
                uri:
                  Fn::Sub: arn:${AWS::Partition}:apigateway:${AWS::Region}:lambda:path/2015-03-31/functions/${CellInfoFunction.Arn}/invocations
          /hello:
            get:
              responses: {}
              x-amazon-apigateway-integration:
                httpMethod: POST
                payloadFormatVersion: '2.0'
                type: aws_proxy
                uri:
                  Fn::Sub: arn:${AWS::Partition}:apigateway:${AWS::Region}:lambda:path/2015-03-31/functions/${HelloFunction.Arn}/invocations
          /jobs/{cellName}:
            get:
              parameters:
              - in: path
                name: cellName
                required: true
              responses: {}
              x-amazon-apigateway-integration:
                httpMethod: POST
                payloadFormatVersion: '2.0'
                type: aws_proxy
                uri:
                  Fn::Sub: arn:${AWS::Partition}:apigateway:${AWS::Region}:lambda:path/2015-03-31/functions/${JobsFunction.Arn}/invocations
        tags:
        - name: httpapi:createdBy
          x-amazon-apigateway-tag-value: SAM
    Type: AWS::ApiGatewayV2::Api
  ServerlessHttpApiApiGatewayDefaultStage:
    Properties:
      ApiId:
        Ref: ServerlessHttpApi
      AutoDeploy: true
      StageName: $default
      Tags:
        httpapi:createdBy: SAM
    Type: AWS::ApiGatewayV2::Stage
Outputs:
  HelloWorldApi:
    Description: API Gateway endpoint URL for Prod stage
    Value:
      Fn::Sub: https://${ServerlessHttpApi}.execute-api.${AWS::Region}.amazonaws.com
I don't know what this should look like, but I find it strange that there is no mention of anything CORS-related in there at all.
I have tried many variations, such as: CORS settings only in the properties, only in the OpenAPI definition, not having a definition body and letting SAM generate it, and having it in a separate file.
No matter what I do, it is as if SAM completely and silently ignores any CORS settings.
If it is not obvious, I would like to know what to do in order to get SAM to apply these CORS settings on deploy.