"Importing" non-importable resources into a CloudFormation stack - aws-cloudformation
With AWS CloudFormation, you can import existing resources of supported types into a new or existing Stack. Some resources, such as Routes and various associations and attachments, are not supported. I suspect many of these are not full-fledged resources and exist behind the scenes simply as components of another resource.
I have found that I can "fake import" an existing VPCGatewayAttachment simply by adding it to the template and creating and executing an UPDATE ChangeSet, after having successfully imported the VPC and Internet Gateway via an IMPORT ChangeSet. The VPCGatewayAttachment is added without error and becomes part of the Stack. In the demonstration below, note that the PhysicalResourceId of the VPCGatewayAttachment changes between its initial creation and its later removal and re-addition to the Stack. (NOTE: the initial creation via template is only to simplify the example; normally this would be an existing resource not in any Stack.) I'm not sure whether this reflects actual destruction of the existing attachment and creation of a new one, or whether the attachment has no real PhysicalResourceId and one is simply generated when it is added to a Stack.
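For reference, the genuine import step that the demonstration only simulates would be an IMPORT ChangeSet covering the two supported resource types. A minimal sketch, with the resource identifiers as placeholders; note that CloudFormation requires a DeletionPolicy attribute on every resource being imported, which the demonstration templates omit because they create those resources directly:

```shell
# Sketch of a real IMPORT change set for the supported resources
# (VPC and Internet Gateway). Identifiers and URLs are placeholders.
aws cloudformation create-change-set \
  --stack-name case-XXXXXXXXXX \
  --change-set-name import-vpc-igw \
  --change-set-type IMPORT \
  --resources-to-import '[
    {"ResourceType": "AWS::EC2::VPC", "LogicalResourceId": "VPC",
     "ResourceIdentifier": {"VpcId": "vpc-03b26a31ca1bca800"}},
    {"ResourceType": "AWS::EC2::InternetGateway", "LogicalResourceId": "IGW",
     "ResourceIdentifier": {"InternetGatewayId": "igw-028cb469265fa34a8"}}
  ]' \
  --template-url https://case-XXXXXXXXXX.s3.amazonaws.com/import-template.yaml
```

Executing such a change set brings the VPC and Internet Gateway under stack management without touching the attachment, which is exactly the state the first two steps of the demonstration set up.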
My questions are:
Is the 'fake import' of the VPCGatewayAttachment non-destructive (i.e., non-disruptive) in a production environment?
If it is non-disruptive, what other resources that are not supported for import can also be brought into the Stack non-disruptively using the same technique of simply adding the equivalent resource to the template and creating and executing an UPDATE ChangeSet? I'm thinking mainly of Routes and other associations and attachments.
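As a concrete example, the kind of resource I have in mind is a route like the following; the route table reference and destination CIDR here are hypothetical and not part of the demonstration below:

```yaml
# Hypothetical non-importable resource one might try to 'fake import'
# the same way. RouteTable and the CIDR are placeholders.
Route:
  Type: AWS::EC2::Route
  Properties:
    RouteTableId: !Ref RouteTable
    DestinationCidrBlock: 0.0.0.0/0
    GatewayId: !Ref IGW
```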
Below is a demonstration. To run it, save the first four files (the .ps1 and the three .yaml files) under the given names in the same directory. You must have the AWS CLI installed and configured with a profile that has permission to manipulate stacks and resources. Run the PowerShell (.ps1) file. You will probably want to replace the S3 bucket name with something unique. The script cleans up all created resources (the stack and the S3 bucket).
If you want to skip running it, I have included the output from a local run as the last file below.
case-XXXXXXXXXX-example.ps1:
echo "---------------------------------------------------------------------------"
echo "---------------------------------------------------------------------------"
echo "- Demonstration for Case XXXXXXXXXX"
echo "---------------------------------------------------------------------------"
echo "---------------------------------------------------------------------------"
echo "-"
echo "---------------------------------------------------------------------------"
echo "Create S3 bucket and upload templates"
echo "---------------------------------------------------------------------------"
aws s3api create-bucket --bucket case-XXXXXXXXXX --no-paginate --no-cli-pager
aws s3 sync . s3://case-XXXXXXXXXX --exclude "*" --include "*.yaml" --no-paginate --no-cli-pager
echo "---------------------------------------------------------------------------"
echo "- Create stack with VPC, Internet Gateway, and Gateway Attachment"
echo "- (the latter has DeletionPolicy: Retain)"
echo "---------------------------------------------------------------------------"
echo "- Create stack"
aws cloudformation create-stack --stack-name case-XXXXXXXXXX --template-url https://case-XXXXXXXXXX.s3.amazonaws.com/case-XXXXXXXXXX-example-1.yaml --no-paginate --no-cli-pager
echo "- Wait stack create complete"
aws cloudformation wait stack-create-complete --stack-name case-XXXXXXXXXX --no-paginate --no-cli-pager
echo "- Describe stack and resources"
echo "- Note the PhysicalResourceId of the Gateway Attachment."
aws cloudformation describe-stacks --stack-name case-XXXXXXXXXX --no-paginate --no-cli-pager
aws cloudformation describe-stack-resources --stack-name case-XXXXXXXXXX --no-paginate --no-cli-pager
echo "---------------------------------------------------------------------------"
echo "- Create and execute a change-set that removes the Gateway Attachment"
echo "- This leaves us in a state simulating having IMPORTed the VPC and"
echo "- Internet Gateway, but the Gateway Attachment is not in the stack."
echo "- This sets up the next part which actually demonstrates a 'fake import'"
echo "- of the Gateway Attachment"
echo "---------------------------------------------------------------------------"
echo "- Create change-set"
aws cloudformation create-change-set --stack-name case-XXXXXXXXXX --change-set-name delete-igw-attach --template-url https://case-XXXXXXXXXX.s3.amazonaws.com/case-XXXXXXXXXX-example-2.yaml --no-paginate --no-cli-pager
echo "- Wait change-set create complete"
aws cloudformation wait change-set-create-complete --stack-name case-XXXXXXXXXX --change-set-name delete-igw-attach --no-paginate --no-cli-pager
echo "- Describe change-set"
aws cloudformation describe-change-set --stack-name case-XXXXXXXXXX --change-set-name delete-igw-attach --no-paginate --no-cli-pager
echo "- Execute change-set"
aws cloudformation execute-change-set --stack-name case-XXXXXXXXXX --change-set-name delete-igw-attach --no-paginate --no-cli-pager
echo "- Wait stack update complete"
aws cloudformation wait stack-update-complete --stack-name case-XXXXXXXXXX --no-paginate --no-cli-pager
echo "---------------------------------------------------------------------------"
echo "- Note the Gateway Attachment is not in the stack, but the Internet Gateway"
echo "- is still attached to the VPC"
echo "---------------------------------------------------------------------------"
aws cloudformation describe-stack-resources --stack-name case-XXXXXXXXXX --no-paginate --no-cli-pager
aws ec2 describe-internet-gateways --filter "Name=tag:Name,Values=Case-XXXXXXXXXX" --no-paginate --no-cli-pager
echo "---------------------------------------------------------------------------"
echo "- THE WHOLE POINT OF THIS DEMONSTRATION IS NEXT"
echo "- 'Fake Import' the Gateway Attachment just by adding it to the template and"
echo "- creating and executing an UPDATE change-set."
echo "---------------------------------------------------------------------------"
echo "- Create change-set"
aws cloudformation create-change-set --stack-name case-XXXXXXXXXX --change-set-name fake-import-igw-attach --template-url https://case-XXXXXXXXXX.s3.amazonaws.com/case-XXXXXXXXXX-example-3.yaml --no-paginate --no-cli-pager
echo "- Wait change-set create complete"
aws cloudformation wait change-set-create-complete --stack-name case-XXXXXXXXXX --change-set-name fake-import-igw-attach --no-paginate --no-cli-pager
echo "- Describe change-set"
aws cloudformation describe-change-set --stack-name case-XXXXXXXXXX --change-set-name fake-import-igw-attach --no-paginate --no-cli-pager
echo "- Execute change-set"
aws cloudformation execute-change-set --stack-name case-XXXXXXXXXX --change-set-name fake-import-igw-attach --no-paginate --no-cli-pager
echo "- Wait stack update complete"
aws cloudformation wait stack-update-complete --stack-name case-XXXXXXXXXX --no-paginate --no-cli-pager
echo "---------------------------------------------------------------------------"
echo "- Note that the Gateway Attachment is now in the stack and the Internet"
echo "- Gateway is still attached, and there weren't any errors."
echo "- The PhysicalResourceId did change, however."
echo "---------------------------------------------------------------------------"
aws cloudformation describe-stack-resources --stack-name case-XXXXXXXXXX --no-paginate --no-cli-pager
aws ec2 describe-internet-gateways --filter "Name=tag:Name,Values=Case-XXXXXXXXXX" --no-paginate --no-cli-pager
echo "---------------------------------------------------------------------------"
echo "- Delete stack"
echo "---------------------------------------------------------------------------"
aws cloudformation delete-stack --stack-name case-XXXXXXXXXX --no-paginate --no-cli-pager
echo "- Wait stack delete complete"
aws cloudformation wait stack-delete-complete --stack-name case-XXXXXXXXXX --no-paginate --no-cli-pager
echo "---------------------------------------------------------------------------"
echo "- Delete S3 bucket with templates"
echo "---------------------------------------------------------------------------"
aws s3 rb s3://case-XXXXXXXXXX --force --no-paginate --no-cli-pager
echo "---------------------------------------------------------------------------"
echo "- DONE"
echo "---------------------------------------------------------------------------"
case-XXXXXXXXXX-example-1.yaml:
Description: >
  Create the VPC, Internet Gateway, and attach the gateway
  to the VPC, with a DeletionPolicy of Retain so that we can
  remove it from the stack without deleting it. Run with
  aws cloudformation create-stack.
Resources:
  VPC:
    Type: AWS::EC2::VPC
    Properties:
      CidrBlock: 10.187.32.0/24
      Tags:
        - Key: Name
          Value: Case-XXXXXXXXXX
  IGW:
    Type: AWS::EC2::InternetGateway
    Properties:
      Tags:
        - Key: Name
          Value: Case-XXXXXXXXXX
  IGWassoc:
    Type: AWS::EC2::VPCGatewayAttachment
    DeletionPolicy: Retain
    UpdateReplacePolicy: Retain
    Properties:
      VpcId: !Ref VPC
      InternetGatewayId: !Ref IGW
case-XXXXXXXXXX-example-2.yaml:
Description: >
  Delete the gateway attachment, but it will be retained
  so we can import it next. Run with aws cloudformation
  create-change-set --change-set-type UPDATE
Resources:
  VPC:
    Type: AWS::EC2::VPC
    Properties:
      CidrBlock: 10.187.32.0/24
      Tags:
        - Key: Name
          Value: Case-XXXXXXXXXX
  IGW:
    Type: AWS::EC2::InternetGateway
    Properties:
      Tags:
        - Key: Name
          Value: Case-XXXXXXXXXX
case-XXXXXXXXXX-example-3.yaml:
Description: >
  Create the gateway attachment. It already exists, and is
  not importable, but this action succeeds and SEEMS to be
  non-destructive. Run with aws cloudformation
  create-change-set --change-set-type UPDATE
Resources:
  VPC:
    Type: AWS::EC2::VPC
    Properties:
      CidrBlock: 10.187.32.0/24
      Tags:
        - Key: Name
          Value: Case-XXXXXXXXXX
  IGW:
    Type: AWS::EC2::InternetGateway
    Properties:
      Tags:
        - Key: Name
          Value: Case-XXXXXXXXXX
  IGWassoc:
    Type: AWS::EC2::VPCGatewayAttachment
    Properties:
      VpcId: !Ref VPC
      InternetGatewayId: !Ref IGW
output.txt:
---------------------------------------------------------------------------
---------------------------------------------------------------------------
- Demonstration for Case XXXXXXXXXX
---------------------------------------------------------------------------
---------------------------------------------------------------------------
-
---------------------------------------------------------------------------
Create S3 bucket and upload templates
---------------------------------------------------------------------------
{
"Location": "/case-XXXXXXXXXX"
}
upload: .\case-XXXXXXXXXX-example-2.yaml to s3://case-XXXXXXXXXX/case-XXXXXXXXXX-example-2.yaml
upload: .\case-XXXXXXXXXX-example-1.yaml to s3://case-XXXXXXXXXX/case-XXXXXXXXXX-example-1.yaml
upload: .\case-XXXXXXXXXX-example-3.yaml to s3://case-XXXXXXXXXX/case-XXXXXXXXXX-example-3.yaml
---------------------------------------------------------------------------
- Create stack with VPC, Internet Gateway, and Gateway Attachment
- (the latter has DeletionPolicy: Retain)
---------------------------------------------------------------------------
- Create stack
{
"StackId": "arn:aws:cloudformation:us-east-1:606679984871:stack/case-XXXXXXXXXX/6bf590e0-3cb7-11ec-b30d-0a5d84963829"
}
- Wait stack create complete
- Describe stack and resources
- Note the PhysicalResourceId of the Gateway Attachment.
{
"Stacks": [
{
"StackId": "arn:aws:cloudformation:us-east-1:606679984871:stack/case-XXXXXXXXXX/6bf590e0-3cb7-11ec-b30d-0a5d84963829",
"StackName": "case-XXXXXXXXXX",
"Description": "Create the VPC, Internet Gateway, and attach the gateway to the VPC, with a DeletionPolicy of Retain so that we can remove it from the stack without deleting it. Run with aws cloudformation create-stack.\n",
"CreationTime": "2021-11-03T15:05:03.251000+00:00",
"RollbackConfiguration": {},
"StackStatus": "CREATE_COMPLETE",
"DisableRollback": false,
"NotificationARNs": [],
"Tags": [],
"EnableTerminationProtection": false,
"DriftInformation": {
"StackDriftStatus": "NOT_CHECKED"
}
}
]
}
{
"StackResources": [
{
"StackName": "case-XXXXXXXXXX",
"StackId": "arn:aws:cloudformation:us-east-1:606679984871:stack/case-XXXXXXXXXX/6bf590e0-3cb7-11ec-b30d-0a5d84963829",
"LogicalResourceId": "IGW",
"PhysicalResourceId": "igw-028cb469265fa34a8",
"ResourceType": "AWS::EC2::InternetGateway",
"Timestamp": "2021-11-03T15:05:49.880000+00:00",
"ResourceStatus": "CREATE_COMPLETE",
"DriftInformation": {
"StackResourceDriftStatus": "NOT_CHECKED"
}
},
{
"StackName": "case-XXXXXXXXXX",
"StackId": "arn:aws:cloudformation:us-east-1:606679984871:stack/case-XXXXXXXXXX/6bf590e0-3cb7-11ec-b30d-0a5d84963829",
"LogicalResourceId": "IGWassoc",
"PhysicalResourceId": "case-IGWas-ZHZQ0DZ9KXLS",
"ResourceType": "AWS::EC2::VPCGatewayAttachment",
"Timestamp": "2021-11-03T15:06:08.293000+00:00",
"ResourceStatus": "CREATE_COMPLETE",
"DriftInformation": {
"StackResourceDriftStatus": "NOT_CHECKED"
}
},
{
"StackName": "case-XXXXXXXXXX",
"StackId": "arn:aws:cloudformation:us-east-1:606679984871:stack/case-XXXXXXXXXX/6bf590e0-3cb7-11ec-b30d-0a5d84963829",
"LogicalResourceId": "VPC",
"PhysicalResourceId": "vpc-03b26a31ca1bca800",
"ResourceType": "AWS::EC2::VPC",
"Timestamp": "2021-11-03T15:05:28.179000+00:00",
"ResourceStatus": "CREATE_COMPLETE",
"DriftInformation": {
"StackResourceDriftStatus": "NOT_CHECKED"
}
}
]
}
---------------------------------------------------------------------------
- Create and execute a change-set that removes the Gateway Attachment
- This leaves us in a state simulating having IMPORTed the VPC and
- Internet Gateway, but the Gateway Attachment is not in the stack.
- This sets up the next part which actually demonstrates a 'fake import'
- of the Gateway Attachment
---------------------------------------------------------------------------
- Create change-set
{
"Id": "arn:aws:cloudformation:us-east-1:606679984871:changeSet/delete-igw-attach/631b12ca-c8f4-407d-b248-b2766a730eba",
"StackId": "arn:aws:cloudformation:us-east-1:606679984871:stack/case-XXXXXXXXXX/6bf590e0-3cb7-11ec-b30d-0a5d84963829"
}
- Wait change-set create complete
- Describe change-set
{
"ChangeSetName": "delete-igw-attach",
"ChangeSetId": "arn:aws:cloudformation:us-east-1:606679984871:changeSet/delete-igw-attach/631b12ca-c8f4-407d-b248-b2766a730eba",
"StackId": "arn:aws:cloudformation:us-east-1:606679984871:stack/case-XXXXXXXXXX/6bf590e0-3cb7-11ec-b30d-0a5d84963829",
"StackName": "case-XXXXXXXXXX",
"CreationTime": "2021-11-03T15:06:40.618000+00:00",
"ExecutionStatus": "AVAILABLE",
"Status": "CREATE_COMPLETE",
"NotificationARNs": [],
"RollbackConfiguration": {},
"Capabilities": [],
"Changes": [
{
"Type": "Resource",
"ResourceChange": {
"Action": "Remove",
"LogicalResourceId": "IGWassoc",
"PhysicalResourceId": "case-IGWas-ZHZQ0DZ9KXLS",
"ResourceType": "AWS::EC2::VPCGatewayAttachment",
"Scope": [],
"Details": []
}
}
],
"IncludeNestedStacks": false
}
- Execute change-set
- Wait stack update complete
---------------------------------------------------------------------------
- Note the Gateway Attachment is not in the stack, but the Internet Gateway
- is still attached to the VPC
---------------------------------------------------------------------------
{
"StackResources": [
{
"StackName": "case-XXXXXXXXXX",
"StackId": "arn:aws:cloudformation:us-east-1:606679984871:stack/case-XXXXXXXXXX/6bf590e0-3cb7-11ec-b30d-0a5d84963829",
"LogicalResourceId": "IGW",
"PhysicalResourceId": "igw-028cb469265fa34a8",
"ResourceType": "AWS::EC2::InternetGateway",
"Timestamp": "2021-11-03T15:05:49.880000+00:00",
"ResourceStatus": "CREATE_COMPLETE",
"DriftInformation": {
"StackResourceDriftStatus": "NOT_CHECKED"
}
},
{
"StackName": "case-XXXXXXXXXX",
"StackId": "arn:aws:cloudformation:us-east-1:606679984871:stack/case-XXXXXXXXXX/6bf590e0-3cb7-11ec-b30d-0a5d84963829",
"LogicalResourceId": "VPC",
"PhysicalResourceId": "vpc-03b26a31ca1bca800",
"ResourceType": "AWS::EC2::VPC",
"Timestamp": "2021-11-03T15:05:28.179000+00:00",
"ResourceStatus": "CREATE_COMPLETE",
"DriftInformation": {
"StackResourceDriftStatus": "NOT_CHECKED"
}
}
]
}
{
"InternetGateways": [
{
"Attachments": [
{
"State": "available",
"VpcId": "vpc-03b26a31ca1bca800"
}
],
"InternetGatewayId": "igw-028cb469265fa34a8",
"OwnerId": "606679984871",
"Tags": [
{
"Key": "aws:cloudformation:logical-id",
"Value": "IGW"
},
{
"Key": "Name",
"Value": "Case-XXXXXXXXXX"
},
{
"Key": "aws:cloudformation:stack-name",
"Value": "case-XXXXXXXXXX"
},
{
"Key": "aws:cloudformation:stack-id",
"Value": "arn:aws:cloudformation:us-east-1:606679984871:stack/case-XXXXXXXXXX/6bf590e0-3cb7-11ec-b30d-0a5d84963829"
}
]
}
]
}
---------------------------------------------------------------------------
- THE WHOLE POINT OF THIS DEMONSTRATION IS NEXT
- 'Fake Import' the Gateway Attachment just by adding it to the template and
- creating and executing an UPDATE change-set.
---------------------------------------------------------------------------
- Create change-set
{
"Id": "arn:aws:cloudformation:us-east-1:606679984871:changeSet/fake-import-igw-attach/95510e17-3f44-4ba4-be9e-4183cbb143ca",
"StackId": "arn:aws:cloudformation:us-east-1:606679984871:stack/case-XXXXXXXXXX/6bf590e0-3cb7-11ec-b30d-0a5d84963829"
}
- Wait change-set create complete
- Describe change-set
{
"ChangeSetName": "fake-import-igw-attach",
"ChangeSetId": "arn:aws:cloudformation:us-east-1:606679984871:changeSet/fake-import-igw-attach/95510e17-3f44-4ba4-be9e-4183cbb143ca",
"StackId": "arn:aws:cloudformation:us-east-1:606679984871:stack/case-XXXXXXXXXX/6bf590e0-3cb7-11ec-b30d-0a5d84963829",
"StackName": "case-XXXXXXXXXX",
"CreationTime": "2021-11-03T15:07:52.172000+00:00",
"ExecutionStatus": "AVAILABLE",
"Status": "CREATE_COMPLETE",
"NotificationARNs": [],
"RollbackConfiguration": {},
"Capabilities": [],
"Changes": [
{
"Type": "Resource",
"ResourceChange": {
"Action": "Add",
"LogicalResourceId": "IGWassoc",
"ResourceType": "AWS::EC2::VPCGatewayAttachment",
"Scope": [],
"Details": []
}
}
],
"IncludeNestedStacks": false
}
- Execute change-set
- Wait stack update complete
---------------------------------------------------------------------------
- Note that the Gateway Attachment is now in the stack and the Internet
- Gateway is still attached, and there weren't any errors.
- The PhysicalResourceId did change, however.
---------------------------------------------------------------------------
{
"StackResources": [
{
"StackName": "case-XXXXXXXXXX",
"StackId": "arn:aws:cloudformation:us-east-1:606679984871:stack/case-XXXXXXXXXX/6bf590e0-3cb7-11ec-b30d-0a5d84963829",
"LogicalResourceId": "IGW",
"PhysicalResourceId": "igw-028cb469265fa34a8",
"ResourceType": "AWS::EC2::InternetGateway",
"Timestamp": "2021-11-03T15:05:49.880000+00:00",
"ResourceStatus": "CREATE_COMPLETE",
"DriftInformation": {
"StackResourceDriftStatus": "NOT_CHECKED"
}
},
{
"StackName": "case-XXXXXXXXXX",
"StackId": "arn:aws:cloudformation:us-east-1:606679984871:stack/case-XXXXXXXXXX/6bf590e0-3cb7-11ec-b30d-0a5d84963829",
"LogicalResourceId": "IGWassoc",
"PhysicalResourceId": "case-IGWas-3DBXKEM6SFPL",
"ResourceType": "AWS::EC2::VPCGatewayAttachment",
"Timestamp": "2021-11-03T15:08:48.657000+00:00",
"ResourceStatus": "CREATE_COMPLETE",
"DriftInformation": {
"StackResourceDriftStatus": "NOT_CHECKED"
}
},
{
"StackName": "case-XXXXXXXXXX",
"StackId": "arn:aws:cloudformation:us-east-1:606679984871:stack/case-XXXXXXXXXX/6bf590e0-3cb7-11ec-b30d-0a5d84963829",
"LogicalResourceId": "VPC",
"PhysicalResourceId": "vpc-03b26a31ca1bca800",
"ResourceType": "AWS::EC2::VPC",
"Timestamp": "2021-11-03T15:05:28.179000+00:00",
"ResourceStatus": "CREATE_COMPLETE",
"DriftInformation": {
"StackResourceDriftStatus": "NOT_CHECKED"
}
}
]
}
{
"InternetGateways": [
{
"Attachments": [
{
"State": "available",
"VpcId": "vpc-03b26a31ca1bca800"
}
],
"InternetGatewayId": "igw-028cb469265fa34a8",
"OwnerId": "606679984871",
"Tags": [
{
"Key": "aws:cloudformation:logical-id",
"Value": "IGW"
},
{
"Key": "Name",
"Value": "Case-XXXXXXXXXX"
},
{
"Key": "aws:cloudformation:stack-name",
"Value": "case-XXXXXXXXXX"
},
{
"Key": "aws:cloudformation:stack-id",
"Value": "arn:aws:cloudformation:us-east-1:606679984871:stack/case-XXXXXXXXXX/6bf590e0-3cb7-11ec-b30d-0a5d84963829"
}
]
}
]
}
---------------------------------------------------------------------------
- Delete stack
---------------------------------------------------------------------------
- Wait stack delete complete
---------------------------------------------------------------------------
- Delete S3 bucket with templates
---------------------------------------------------------------------------
delete: s3://case-XXXXXXXXXX/case-XXXXXXXXXX-example-3.yaml
delete: s3://case-XXXXXXXXXX/case-XXXXXXXXXX-example-2.yaml
delete: s3://case-XXXXXXXXXX/case-XXXXXXXXXX-example-1.yaml
remove_bucket: case-XXXXXXXXXX
---------------------------------------------------------------------------
- DONE
---------------------------------------------------------------------------
I tried this with an AWS::EC2::Route and it failed with "route already exists."
So while I might be able to get away with brute-forcing a VPCGatewayAttachment into the Stack, I won't be able to with Routes, and likely not with other resource types either.
The time needed to investigate which resources might work this way is not worth the effort for an undocumented approach.
The best way forward for getting resources into a Stack that don't support import will be a script that deletes the existing non-importable resources and re-creates them with a change set. This will have to be done during a maintenance window, as there will certainly be outages in a non-redundant system.
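A minimal sketch of that delete-and-recreate approach for a route, assuming hypothetical identifiers and a template that already declares the route (none of these names come from the demonstration above):

```shell
# Sketch: delete the existing (non-importable) route out-of-band, then let
# an UPDATE change set re-create it from the template. IDs are placeholders.
aws ec2 delete-route \
  --route-table-id rtb-0123456789abcdef0 \
  --destination-cidr-block 0.0.0.0/0

aws cloudformation create-change-set \
  --stack-name my-stack \
  --change-set-name recreate-route \
  --template-url https://example-bucket.s3.amazonaws.com/template-with-route.yaml
aws cloudformation wait change-set-create-complete \
  --stack-name my-stack --change-set-name recreate-route
aws cloudformation execute-change-set \
  --stack-name my-stack --change-set-name recreate-route

# Traffic using the route is interrupted between delete-route and the
# completion of the stack update -- hence the maintenance window.
```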
Related
MongoDB Replica Set - The value of parameter linuxConfiguration.ssh.publicKeys.keyData is invalid
This is concerning the Azure Deployment Template for a MongoDB Replica Set defined here mongodb-replica-set-centos. When I run the recommended deployment commands to deploy the replica set, namely az group create --name <resource-group-name> --location <resource-group-location> # Use this command when you need to create a new resource group for your deployment. az deployment group create --resource-group <my-resource-group> --template-uri https://raw.githubusercontent.com/migr8/AzureDeploymentTemplates/main/mongo/mongodb-replica-set-centos/azuredeploy.json where the resource group is already set up. I receive the following error: { "status": "Failed", "error": { "code": "DeploymentFailed", "message": "At least one resource deployment operation failed. Please list deployment operations for details. Please see https://aka.ms/DeployOperations for usage details.", "details": [ { "code": "Conflict", "message": "{\r\n \"status\": \"Failed\",\r\n \"error\": {\r\n \"code\": \"ResourceDeploymentFailure\",\r\n \"message\": \"The resource operation completed with terminal provisioning state 'Failed'.\",\r\n \"details\": [\r\n {\r\n \"code\": \"DeploymentFailed\",\r\n \"message\": \"At least one resource deployment operation failed. Please list deployment operations for details. 
Please see https://aka.ms/DeployOperations for usage details.\",\r\n \"details\": [\r\n {\r\n \"code\": \"BadRequest\",\r\n \"message\": \"{\\r\\n \\\"error\\\": {\\r\\n \\\"code\\\": \\\"InvalidParameter\\\",\\r\\n \\\"message\\\": \\\"The value of parameter linuxConfiguration.ssh.publicKeys.keyData is invalid.\\\",\\r\\n \\\"target\\\": \\\"linuxConfiguration.ssh.publicKeys.keyData\\\"\\r\\n }\\r\\n}\"\r\n }\r\n ]\r\n }\r\n ]\r\n }\r\n}" }, { "code": "Conflict", "message": "{\r\n \"status\": \"Failed\",\r\n \"error\": {\r\n \"code\": \"ResourceDeploymentFailure\",\r\n \"message\": \"The resource operation completed with terminal provisioning state 'Failed'.\",\r\n \"details\": [\r\n {\r\n \"code\": \"DeploymentFailed\",\r\n \"message\": \"At least one resource deployment operation failed. Please list deployment operations for details. Please see https://aka.ms/DeployOperations for usage details.\",\r\n \"details\": [\r\n {\r\n \"code\": \"BadRequest\",\r\n \"message\": \"{\\r\\n \\\"error\\\": {\\r\\n \\\"code\\\": \\\"InvalidParameter\\\",\\r\\n \\\"message\\\": \\\"The value of parameter linuxConfiguration.ssh.publicKeys.keyData is invalid.\\\",\\r\\n \\\"target\\\": \\\"linuxConfiguration.ssh.publicKeys.keyData\\\"\\r\\n }\\r\\n}\"\r\n }\r\n ]\r\n }\r\n ]\r\n }\r\n}" } ] } } The problem field is in both primary-resources.json and secondary-resources.json appears to be "variables": { "subnetRef": "[resourceId('Microsoft.Network/virtualNetworks/subnets', parameters('subnet').vnet, parameters('subnet').name)]", "securityGroupName": "[concat(parameters('namespace'), parameters('vmbasename'), 'nsg')]", "linuxConfiguration": { "disablePasswordAuthentication": true, "ssh": { "publicKeys": [ { "path": "[concat('/home/', parameters('adminUsername'), '/.ssh/authorized_keys')]", "keyData": "[parameters('adminPasswordOrKey')]" } ] } } }, And ascociated with the variable adminPasswordOrKey. 
I have tried changing this to be both standard passwords and SSH keys of varying bit-depth, no luck... How can I fix this? Repro steps Run az group create --name <resource-group-name> --location <resource-group-location> where resource group exists. Run az deployment group create --resource-group <my-resource-group> --template-uri https://raw.githubusercontent.com/migr8/AzureDeploymentTemplates/main/mongo/mongodb-replica-set-centos/azuredeploy.json and step through the prompts Enter the relevant in formation. Further Investigation I have just seen this answer (https://stackoverflow.com/a/60860498/626442) saying specifically that Note: Please note that the only allowed path is /home//.ssh/authorized_keys due to a limitation of Azure. I have changed this value of the path, no joy, same error. :'[
You forgot to pass parameters in az deployment group create .... --parameters azuredeploy.parameters.json. You can download azuredeploy.parameters.json and change values as needed. See https://learn.microsoft.com/en-us/azure/azure-resource-manager/templates/template-tutorial-use-parameter-file?tabs=azure-cli#deploy-template for details. Specifically the error in the question complains about adminUsername parameter being empty. Bear in mind this user name is also being used in the home directory path, so limit yourself to lowcase ASCII a-z, numbers, underscore. No spaces, not special characters, no utf. Not related to the error, but be aware these necromancers use mongo 3.2 which was buried 4 years ago: https://www.mongodb.com/support-policy/lifecycles. Considering they open it wide to the internet you may have way more problems if you actually deploy it. UPDATE An example of the parameters I used: { "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentParameters.json#", "contentVersion": "1.0.0.0", "parameters": { "adminUsername": { "value": "yellow" }, "mongoAdminUsername": { "value": "phrase" }, "mongoAdminPassword": { "value": "settle#SING" }, "secondaryNodeCount": { "value": 2 }, "sizeOfDataDiskInGB": { "value": 2 }, "dnsNamePrefix": { "value": "written" }, "centOsVersion": { "value": "7.7" }, "primaryNodeVmSize": { "value": "Standard_D1_v2" }, "secondaryNodeVmSize": { "value": "Standard_D1_v2" }, "zabbixServerIPAddress": { "value": "Null" }, "adminPasswordOrKey": { "value": "ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQDdNRTU0XF3xazhhDmwXXWGG7wp4AaQC1r89K7sZFRXp9VSUtydV59DHr67mV5/0DWI5Co1yWK713QJ00BPlBIHNMNLuoBBq8IkOx8fBZF1g9YFm5Zy4ay+CF4WgDITAsyxhKvUWL6jwG5M3XIdVYm49K+EFOCWSSaNtCk8tHhi3v6/5HFkwc2r0UL/WWWbbt5AmpJ8QOCDk/x+XcgCjP9vE5jYYGsFz9F6V1FdOpjVfDwi13Ibivj/w2wOZh2lQGskC+qDjd2upK13+RfWYHY3rr+ulNRPckHRhOqmZ2vlUapO4T0X9mM6ugSh1FprLP5nHdVCUls2yw4BAcSoM9NMiyafE56Xkp9h3bTAfx5Ufpe5mjwQp+j15np1pVpwDaEgk7ZeaPoZPhbalpvZGyg9KiKfs9+KUYHfGklIOHKJ3RUoPE286rg1U4LGswil5RARRSf86kBBHXaIPxy1X0N6QryeWhk0aM6LWEdl7mVbQksa7ilANnsaVMl7FSdY/Cc=" } } } DANGER: It will deploy publicly accessible mongodb replica set with publicly accessible credentials, so please delete the resources as soon as you are happy with testing/debugging This is how deployment looks like on the portal:
AWS CodePipeline GitHub webhook can not be registered with GitHub if repo is an organisation repository
When I set up the hook using the console it works, but when I try to do it using cloudformation it never works. It does not even work if I use the AWS CLI version: aws codepipeline register-webhook-with-third-party --webhook-name AppPipelineWebhook-aOnbonyFrNZu This is how my webhook looks like (output from "aws codepipeline list-webhooks"): { "webhooks": [ { "definition": { "name": "AppPipelineWebhook-aOnbonyFrNZu", "targetPipeline": "ftp-proxy-cf", "targetAction": "GitHubAction", "filters": [ { "jsonPath": "$.ref", "matchEquals": "refs/heads/{Branch}" } ], "authentication": "GITHUB_HMAC", "authenticationConfiguration": { "SecretToken": "<REDACTED>" } }, "url": "https://eu-west-1.webhooks.aws/trigger?t=eyJ<ALSO REDACTED>F9&v=1", "arn": "arn:aws:codepipeline:eu-west-1:<our account ID>:webhook:AppPipelineWebhook-aOnbonyFrNZu", "tags": [] } ] } The error I get is: An error occurred (ValidationException) when calling the RegisterWebhookWithThirdParty operation: Webhook could not be registered with GitHub. 
Error cause: Not found [StatusCode: 404, Body: {"message":"Not Found","documentation_url":"https://developer.github.com/v3/repos/hooks/#create-a-hook"}] These are the two relevant sections from my cloudformation file: Resources: AppPipelineWebhook: Type: AWS::CodePipeline::Webhook Properties: Authentication: GITHUB_HMAC AuthenticationConfiguration: SecretToken: '{{resolve:secretsmanager:my/secretpath/github:SecretString:token}}' Filters: - JsonPath: $.ref MatchEquals: 'refs/heads/{Branch}' TargetPipeline: !Ref CodePipeline TargetAction: GitHubAction TargetPipelineVersion: !GetAtt CodePipeline.Version # RegisterWithThirdParty: true CodePipeline: Type: AWS::CodePipeline::Pipeline Properties: Name: Ref: PipelineName RoleArn: !GetAtt CodePipelineServiceRole.Arn Stages: - Name: Source Actions: - Name: GitHubAction ActionTypeId: Category: Source Owner: ThirdParty Version: 1 Provider: GitHub OutputArtifacts: - Name: SourceOutput Configuration: Owner: myorganisationnameongithub Repo: ftp-proxy Branch: master OAuthToken: '{{resolve:secretsmanager:my/secretpath/github:SecretString:token}}' PollForSourceChanges: false It can poll changes all right. So if I manually order an execution of the GitHubAction stage from the AWS Console, the latest commits are downloaded. And if I set PollForSourceChanges: true, that kind of polling also works, but alas not the webhook workflow (because the hook can not be registered with GitHub)
The error is observed due to (2) possible causes: The Personal Access Token (PAT) is not configured to have the following GitHub scopes: admin:repo_hook and admin:org_hook 1 You can verify these permissions under 'User' (Top RIght) > 'Settings' > 'Developer Settings' > 'Personal Access Tokens' 'Owner' and/or 'Repository' name are incorrect in the CloudFormation template: For the Pipeline Configuration in CloudFormation, make sure 'GitHubOwner' is the 'Organization name' and repository name is just the repo name and does not have a "org/repo_name" in it, e.g. in your case: Example: Configuration: Owner: !Ref GitHubOwner <========== Github org name Repo: !Ref RepositoryName Branch: !Ref BranchName OAuthToken: !Ref GitHubOAuthToken <========== <Personal Access Token>
How to access CloudWatch Event data from triggered Fargate task?
I read the docs on how to Run an Amazon ECS Task When a File is Uploaded to an Amazon S3 Bucket. However, this document stops short of explaining how to get the bucket/key values from the triggering event from within the Fargate task code itself. How can that be done?
I am not sure if you still need the answer for this one. But I did something similar to what Steven1978 mentioned but only using CloudFormation. The config you're looking for is the InputTransformer. Check this example for a YAML CloudFormation template for an Event Rule: rEventRuleForFileUpload: Type: AWS::Events::Rule Properties: Description: "EventRule" State: "ENABLED" EventPattern: source: - "aws.s3" detail-type: - 'AWS API Call via CloudTrail' detail: eventSource: - s3.amazonaws.com eventName: - "PutObject" - "CompleteMultipartUpload" requestParameters: bucketName: "{YOUR_BUCKET_NAME}" Targets: - Id: '{YOUR_ECS_CLUSTER_ID}' Arn: !Sub "arn:aws:ecs:${AWS::Region}:${AWS::AccountId}:cluster/${NAME_OF_YOUR_CLUSTER_RESOURCE}" RoleArn: !GetAtt {YOUR_ROLE}.Arn EcsParameters: TaskCount: 1 TaskDefinitionArn: !Ref {YOUR_TASK_DEFINITION} LaunchType: FARGATE {... WHATEVER CONFIG YOU MIGHT HAVE...} InputTransformer: InputPathsMap: s3_bucket: "$.detail.requestParameters.bucketName" s3_key: "$.detail.requestParameters.key" InputTemplate: '{ "containerOverrides": [ { "name": "{THE_NAME_OF_YOUR_CONTAINER_DEFINITION}", "environment": [ { "name": "EVENT_BUCKET", "value": <s3_bucket> }, { "name": "EVENT_OBJECT_KEY", "value": <s3_key> }] } ] }' With this approach, you'll be able to get the s3 bucket name (EVENT_BUCKET) and the s3 object key (EVENT_OBJECT_KEY) as environment variables inside your container. The info isn't very clear, indeed, but here are some sources I used to finally get it working: Container Override; https://docs.aws.amazon.com/AmazonECS/latest/APIReference/API_ContainerOverride.html InputTransformer: https://docs.aws.amazon.com/eventbridge/latest/APIReference/API_InputTransformer.html#API_InputTransformer_Contents
GKE service catalog BigQuery ACL/permission problems - The user xx does not have bigquery.jobs.create permission in project yy
I am trying to use the service catalog of Google Kubernetes Engine to connect to BigQuery, but I have had a lot of issues with IAM/ACL permissions. I added the Owner role to the myProjectId@cloudservices.gserviceaccount.com account, since Editor was not enough to access IAM during the creation of a binding's service account. After manually adding projectReaders, projectWriters and projectOwners to the dataset's ACL, I could finally read and write to BigQuery, but I cannot create jobs, since that requires project-level permissions. The command to update the dataset was:

```
bq update --source /tmp/roles myDatasetId
```

After that I tried to query bq, but it failed:

```
root@batch-shell:/app# cat sql/xxx.sql | bq query --format=none --allow_large_results=true --destination_table=myDatasetId.pages_20180730 --maximum_billing_tier 3
BigQuery error in query operation: Access Denied: Project my-staging-project:
The user k8s-bigquery-acc@my-staging-project.iam.gserviceaccount.com does not
have bigquery.jobs.create permission in project my-staging-project.
```

I tried setting the account's role to "Owner" and "BigQuery Job User" with no effect. I even tried all the other accounts as Owner.
These are my current ACL permissions:

```
[16:52:45] blackfalcon:~/src/myproject/batch :chris $ bq --format=prettyjson show myDatasetId
{
  "access": [
    { "role": "WRITER", "specialGroup": "projectWriters" },
    { "role": "OWNER", "specialGroup": "projectOwners" },
    { "role": "OWNER", "userByEmail": "myProjectId@cloudservices.gserviceaccount.com" },
    { "role": "OWNER", "userByEmail": "k8s-bigquery-acc@my-staging-project.iam.gserviceaccount.com" },
    { "role": "READER", "specialGroup": "allAuthenticatedUsers" },
    { "role": "READER", "specialGroup": "projectReaders" }
  ],
  "creationTime": "1532859638248",
  "datasetReference": {
    "datasetId": "myDatasetId",
    "projectId": "my-staging-project"
  },
  "defaultTableExpirationMs": "8000000000",
  "description": "myproject Access myDatasetId",
  "id": "my-staging-project:myDatasetId",
  "kind": "bigquery#dataset",
  "lastModifiedTime": "1533184961736",
  "location": "US",
  "selfLink": "https://www.googleapis.com/bigquery/v2/projects/my-staging-project/datasets/myDatasetId"
}

[16:53:02] blackfalcon:~/src/myproject/batch :chris $ gcloud projects get-iam-policy my-staging-project
bindings:
- members:
  - serviceAccount:k8s-bigquery-acc@my-staging-project.iam.gserviceaccount.com
  - user:myemail@somedomain.com
  role: roles/bigquery.admin
- members:
  - serviceAccount:k8s-cloudsql-acc-staging@my-staging-project.iam.gserviceaccount.com
  role: roles/cloudsql.client
- members:
  - serviceAccount:service-myProjectId@compute-system.iam.gserviceaccount.com
  role: roles/compute.serviceAgent
- members:
  - serviceAccount:service-myProjectId@container-engine-robot.iam.gserviceaccount.com
  role: roles/container.serviceAgent
- members:
  - serviceAccount:myProjectId-compute@developer.gserviceaccount.com
  - serviceAccount:myProjectId@cloudservices.gserviceaccount.com
  - serviceAccount:service-myProjectId@containerregistry.iam.gserviceaccount.com
  role: roles/editor
- members:
  - serviceAccount:service-myProjectId@cloud-ml.google.com.iam.gserviceaccount.com
  role: roles/ml.serviceAgent
- members:
  - serviceAccount:myProjectId@cloudservices.gserviceaccount.com
  - user:myemail@somedomain.com
  role: roles/owner
- members:
  - serviceAccount:scg-fv6fz3sjnxo3cfpppcl2qs5edm@my-staging-project.iam.gserviceaccount.com
  role: roles/servicebroker.operator
- members:
  - serviceAccount:service-myProjectId@gcp-sa-servicebroker.iam.gserviceaccount.com
  role: roles/servicebroker.serviceAgent
- members:
  - serviceAccount:k8s-bigquery-acc@my-staging-project.iam.gserviceaccount.com
  - user:myemail@somedomain.com
  role: roles/storage.admin
version: 1
```

It seems I need to set the project's ACL for BigQuery, but everything I found indicates that setting the roles with IAM should be enough. Any help would be greatly appreciated.

UPDATE: I solved this for now. It turns out that the service account itself was not working properly. I tried giving the Owner role to the service account and used it locally to access a few gcloud resources; everything failed with permission errors. I then created a new service account with the same permissions, tried again, and it worked. So it seems the service account was somehow broken. I deleted the bindings, then the IAM policy entries and the service account, and rebuilt the bindings. Now it is working like a charm.
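The fix described in the update can be sketched with the gcloud CLI. This is a hedged outline, not verified commands: the project and old account names are taken from the question, the replacement account name `k8s-bigquery-acc2` is hypothetical, and the roles shown are only the `bigquery.admin` binding from the policy above:

```sh
# Create a replacement service account (the old one appeared broken)
gcloud iam service-accounts create k8s-bigquery-acc2 \
  --project my-staging-project \
  --display-name "BigQuery access for GKE service catalog"

# Grant it the same project-level role the old account had
gcloud projects add-iam-policy-binding my-staging-project \
  --member "serviceAccount:k8s-bigquery-acc2@my-staging-project.iam.gserviceaccount.com" \
  --role "roles/bigquery.admin"

# Remove the broken account's binding, then delete the account itself
gcloud projects remove-iam-policy-binding my-staging-project \
  --member "serviceAccount:k8s-bigquery-acc@my-staging-project.iam.gserviceaccount.com" \
  --role "roles/bigquery.admin"
gcloud iam service-accounts delete \
  k8s-bigquery-acc@my-staging-project.iam.gserviceaccount.com
```

After recreating the account, the service catalog binding would need to be rebuilt so it picks up the new account's credentials.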
Get TargetGroupArn from name?
You use TargetGroupArn in a CloudFormation template for ECS services. I have a situation where the target group has already been created, and I want to make it a parameter for the template. But those ARNs are awful:

```
arn:aws:elasticloadbalancing:us-east-1:123456:targetgroup/mytarget/4ed48ba353064a79
```

That unique number at the end makes this almost impossible. Can I reference the target group by name instead of the full ARN in the template? Maybe I can use Fn::GetAtt here, but I'm not sure what that looks like. This doesn't work:

```yaml
TargetGroupArn: !GetAtt mytarget.TargetGroupName
```

I get the error:

```
An error occurred (ValidationError) when calling the CreateChangeSet operation: Template error: instance of Fn::GetAtt references undefined resource mytarget
```
Unfortunately, with target groups you won't be able to derive the ARN by convention, due to the extra random string at the end. If the target group was created in the same CloudFormation stack, it's easy enough to get the ARN by using `!Ref myTargetGroup`. If the target group was created in another CF stack, try exporting the target group ARN there and using Fn::ImportValue when creating the ECS service:

```yaml
Type: "AWS::ECS::Service"
Properties:
  ...
  LoadBalancers:
    - ContainerName: MyContainer
      ContainerPort: 1234
      TargetGroupArn: !ImportValue myExportedTargetGroupARN
  ...
```
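For completeness, the exporting side might look like the sketch below. The resource name `myTargetGroup` and its properties are placeholders; the export name matches the `myExportedTargetGroupARN` import above. Note that `!Ref` on an `AWS::ElasticLoadBalancingV2::TargetGroup` returns its ARN:

```yaml
# In the stack that owns the target group (names are hypothetical)
Resources:
  myTargetGroup:
    Type: AWS::ElasticLoadBalancingV2::TargetGroup
    Properties:
      Port: 1234
      Protocol: HTTP
      VpcId: !Ref myVpc
Outputs:
  TargetGroupArn:
    Value: !Ref myTargetGroup          # Ref returns the full ARN
    Export:
      Name: myExportedTargetGroupARN   # consumed via !ImportValue
```

An export name must be unique per region and account, and the exporting stack cannot be deleted while another stack imports the value.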
If you want to use an existing target group, you can pass it as a default parameter to the service CF template, then use `Ref` on that parameter for TargetGroupArn in the Actions section of the ListenerRule. Note that `Ref` on a String parameter simply returns whatever string you pass in, so the parameter must hold the full ARN, not just the name. Check this link: https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-ecs-service.html

```json
{
  "Parameters": {
    "VPC": { ... },
    "TargetGroup": {
      "Description": "TargetGroup ARN for ListenerRule",
      "Type": "String",
      "Default": "{YOUR_TARGET_GROUP_ARN}"
    }
  },
  "Resources": {
    "Service": { ... },
    "TaskDefinition": { ... },
    "ListenerRule": {
      ...
      "Actions": [
        {
          "TargetGroupArn": { "Ref": "TargetGroup" },
          "Type": "forward"
        }
      ]
    },
    "ServiceRole": { ... }
  }
}
```