I would like to access supporting files of a CloudFormation template from S3 using a relative path from where the template is.
I felt like I need to use the AWS::Include transform and CodeUri together to achieve this, but I am not sure how to do it.
Right now I am using cfn-init.
Below is the sample command:
"1_Step1": {
"command": {
"Fn::Join": [
"",
[
"powershell.exe -NoProfile -ExecutionPolicy ByPass -command \"& { &'C:\\ step1.ps1'", -> ec2 internal path
" bucketname-v/sub-version/dev/Artifacts” -> s3 folder full path
]
]
}
}
and I need something like:
"1_Step1": {
"command": {
"Fn::Join": [
"",
[
"powershell.exe -NoProfile -ExecutionPolicy ByPass -command \"& { &'C:\\ extractDevopsInstaller.ps1'", -> ec2 internal path
" /Artifacts” -> s3 folder relative path ‘base of cloud formation template path from s3 ‘
]
]
}
}
It seems as if you're looking for the Fn::Split and Fn::Select intrinsic functions.
e.g.:
Resources:
  Foobar:
    Type: Some::Resource::Type
    Properties:
      MyProperty: !Join
        - /
        - - C:\something
          - !Select
            - 1
            - !Split
              - /
              - some/awesome/path/to/my/template
or in JSON:
"Resources": {
"Foobar": {
"Type": "Some::Resource::Type",
"Properties": {
"MyProperty": {
"Fn::Join": [
"/",
[
"C:\something",
{
"Fn::Select": [
1,
{
"Fn::Split": [
"/",
"some/awesome/path/to/my/template"
]
}
]
}
]
]
}
}
}
}
}```
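Applied to your cfn-init command, one option is to pass the template's S3 path in as a parameter (CloudFormation exposes no pseudo parameter for the template's own location) and carve it up with Fn::Select/Fn::Split. A rough sketch, where TemplateS3Path is a hypothetical parameter you would supply at deploy time:
"Parameters": {
    "TemplateS3Path": {
        "Type": "String",
        "Description": "Hypothetical parameter: S3 path of this template, supplied at deploy time",
        "Default": "bucketname-v/sub-version/dev/template.json"
    }
}
and then in the cfn-init command:
"1_Step1": {
    "command": {
        "Fn::Join": [
            "",
            [
                "powershell.exe -NoProfile -ExecutionPolicy ByPass -command \"& { &'C:\\extractDevopsInstaller.ps1'",
                " ",
                { "Fn::Select": [ 0, { "Fn::Split": [ "/", { "Ref": "TemplateS3Path" } ] } ] },
                "/",
                { "Fn::Select": [ 1, { "Fn::Split": [ "/", { "Ref": "TemplateS3Path" } ] } ] },
                "/",
                { "Fn::Select": [ 2, { "Fn::Split": [ "/", { "Ref": "TemplateS3Path" } ] } ] },
                "/Artifacts"
            ]
        ]
    }
}
Since Fn::Select only picks a single element, the three path segments are re-joined by hand here; if the folder depth varies between environments, it is simpler to pass the folder prefix itself as a parameter.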
I want to add multiple Principal values for a KMS key using CloudFormation. This is a snippet of the code:
"KmsKeyManager": {
"Type": "String",
"Default": "user1,user2,user3"
}
"Principal": {
"AWS": {
"Fn::Split": [
",",
{
"Fn::Sub": [
"arn:aws:iam::${AWS::AccountId}:user/people/${rest}",
{
"rest": {
"Fn::Join": [
"",
[
"arn:aws:iam::",
{
"Ref": "AWS::AccountId"
},
":user/people/",
{
"Ref": "KmsKeyManager"
}
]
...
The ARN should be constructed as arn:aws:iam::12345678:user/people/user1 etc.
The template is accepted in the console, but when running I get the following error:
Resource handler returned message: "An ARN in the specified key policy is invalid."
I followed the answer here, which resulted in the above error:
CloudFormation Magic to Generate A List of ARNs from a List of Account Ids
Any idea where I am going wrong? CloudFormation is new to me, so the alternative is to create the key with one user and add new users manually.
Let me explain, starting from the answer you linked. They use the string ":root,arn:aws:iam::" as a delimiter.
Therefore,
"Accounts" : {
"Type" : "CommaDelimitedList",
"Default" : "12222234,23333334,1122143234,..."
}
"rest": {
"Fn::Join": [
":root,arn:aws:iam::",
{ "Ref": "Accounts" }
]
}
gives rest like this:
12222234:root,arn:aws:iam::23333334:root,arn:aws:iam::1122143234
and this rest is substituted for ${rest} in "arn:aws:iam::${rest}:root" (this long string is finally split apart with "Fn::Split").
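Assembled, the complete pattern from the linked answer looks roughly like this (a sketch using the Accounts parameter above):
"Principal": {
    "AWS": {
        "Fn::Split": [
            ",",
            {
                "Fn::Sub": [
                    "arn:aws:iam::${rest}:root",
                    {
                        "rest": {
                            "Fn::Join": [
                                ":root,arn:aws:iam::",
                                { "Ref": "Accounts" }
                            ]
                        }
                    }
                ]
            }
        ]
    }
}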
In your case, the delimiter will be ",arn:aws:iam::${AWS::AccountId}:user/people/" (note the leading comma, so that the final Fn::Split on "," can separate the ARNs).
This delimiter also needs to be built with Fn::Join, because it contains the account id:
{
    "Fn::Join": [
        "",
        [
            ",arn:aws:iam::",
            { "Ref": "AWS::AccountId" },
            ":user/people/"
        ]
    ]
}
The total will then look like the following. Note that for the outer Fn::Join to accept it, KmsKeyManager needs to be declared as a CommaDelimitedList (like Accounts above) rather than a String, so that Ref returns a list:
"Fn::Sub": [
"arn:aws:iam::${AWS::AccountId}:user/people/${rest}",
{
"rest": {
"Fn::Join": [
"Fn::Join": [
"", [
"arn:aws:iam::",
{
"Ref": "AWS::AccountId"
},
":user/people/"
]
],
{
"Ref": "KmsKeyManager"
}
]
}
}
]
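To sanity-check the pattern with the default value user1,user2,user3 and the example account id 12345678 from your question, the intermediate values would be:
{
    "rest":       "user1,arn:aws:iam::12345678:user/people/user2,arn:aws:iam::12345678:user/people/user3",
    "afterSub":   "arn:aws:iam::12345678:user/people/user1,arn:aws:iam::12345678:user/people/user2,arn:aws:iam::12345678:user/people/user3",
    "afterSplit": [
        "arn:aws:iam::12345678:user/people/user1",
        "arn:aws:iam::12345678:user/people/user2",
        "arn:aws:iam::12345678:user/people/user3"
    ]
}
The last list is what ends up in Principal.AWS.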
Currently we are successfully deploying images with Packer (in a build pipeline located in Azure DevOps) within our AWS domain. Now we want to take this a step further, and we are trying to configure a couple of users for future Ansible maintenance. We have written a script and also tried it as an inline PowerShell script, but neither option seems to pick up the variable which is set in the variable group in Azure DevOps; all the other variables are being used with success. My code is as follows:
{
    "variables": {
        "build_version": "{{isotime \"2006.01.02.150405\"}}",
        "aws_access_key": "$(aws_access_key)",
        "aws_secret_key": "$(aws_secret_key)",
        "region": "$(region)",
        "vpc_id": "$(vpc_id)",
        "subnet_id": "$(subnet_id)",
        "security_group_id": "$(security_group_id)",
        "VagrantUserpassword": "$(VagrantUserPassword)"
    },
    "builders": [
        {
            "type": "amazon-ebs",
            "access_key": "{{user `aws_access_key`}}",
            "secret_key": "{{user `aws_secret_key`}}",
            "region": "{{user `region`}}",
            "vpc_id": "{{user `vpc_id`}}",
            "subnet_id": "{{user `subnet_id`}}",
            "security_group_id": "{{user `security_group_id`}}",
            "source_ami_filter": {
                "filters": {
                    "name": "Windows_Server-2016-English-Full-Base-*",
                    "root-device-type": "ebs",
                    "virtualization-type": "hvm"
                },
                "most_recent": true,
                "owners": [
                    "801119661308"
                ]
            },
            "ami_name": "WIN2016-CUSTOM-{{user `build_version`}}",
            "instance_type": "t3.xlarge",
            "user_data_file": "userdata.ps1",
            "associate_public_ip_address": true,
            "communicator": "winrm",
            "winrm_username": "Administrator",
            "winrm_timeout": "15m",
            "winrm_use_ssl": true,
            "winrm_insecure": true,
            "ssh_interface": "private_ip"
        }
    ],
    "provisioners": [
        {
            "type": "powershell",
            "environment_vars": ["VagrantUserPassword={{user `VagrantUserPassword`}}"],
            "inline": [
                "Install-WindowsFeature web-server,web-webserver,web-http-logging,web-stat-compression,web-dyn-compression,web-asp-net,web-mgmt-console,web-asp-net45",
                "New-LocalUser -UserName 'Vagrant' -Description 'User is responsible for Ansible connection.' -Password '$(VagrantUserPassword)'"
            ]
        },
        {
            "type": "powershell",
            "environment_vars": ["VagrantUserPassword={{user `VagrantUserPassword`}}"],
            "scripts": [
                "scripts/DisableUAC.ps1",
                "scripts/iiscompression.ps1",
                "scripts/ChocoPackages.ps1",
                "scripts/PrepareAnsibleUser.ps1"
            ]
        },
        {
            "type": "windows-restart",
            "restart_check_command": "powershell -command \"& {Write-Output 'Machine restarted.'}\""
        },
        {
            "type": "powershell",
            "inline": [
                "C:\\ProgramData\\Amazon\\EC2-Windows\\Launch\\Scripts\\InitializeInstance.ps1 -Schedule",
                "C:\\ProgramData\\Amazon\\EC2-Windows\\Launch\\Scripts\\SysprepInstance.ps1 -NoShutdown"
            ]
        }
    ]
}
The "VagrantUserpassword": "$(VagrantUserPassword)" is what is not working, we've tried multiple options but none of them seem to be working.
Any idea's?
Kind regards,
Rick.
Based on my test, pipeline variables indeed can't be passed to the PowerShell environment variables this way.
Workaround:
You could try to use the Replace Tokens task to pass the pipeline value into the JSON file.
Here are the steps:
1. Set the value in the JSON file:
{
    "variables": {
        ....
        "VagrantUserpassword": "#{VagrantUserPassword}#"
    },
2. Use the Replace Tokens task before the script task.
3. Set the value in the pipeline variables.
Then the value could be set successfully.
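After the Replace Tokens task runs, the file Packer reads contains the literal value (MySecretValue123 below is a made-up stand-in for whatever your pipeline variable holds):
"variables": {
    ....
    "VagrantUserpassword": "MySecretValue123"
},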
On the other hand, I also found some issues in your sample file.
In "environment_vars": ["VagrantUserPassword={{user `VagrantUserPassword`}}"], the user variable VagrantUserPassword needs to be replaced with VagrantUserpassword, as it is spelled in your variables block (["VagrantUserPassword={{user `VagrantUserpassword`}}"]).
Note: this is case sensitive.
You also need to use $Env:VagrantUserPassword in place of $(VagrantUserPassword) inside the inline scripts.
For example:
"inline": [
"Write-Host \"Automatically generated aws password is: $Env:VagrantUserPassword\"",
"Write-Host \"Automatically generated aws password is: $Env:VAR5\""
]
I am reading a path from a JSON file:
$path = zoomdata\conf\consul.conf.d\f1.txt
filename: f1.txt
I am using the following command:
jar xf jar1.jar "$path"
I am using PowerShell.
My JSON file:
{
    "name": "zipZoom",
    "extension": "jar",
    "change_flag": "TRUE",
    "unpack": "TRUE",
    "thirdparty_version": "",
    "fileinfo": [
        {
            "fileName": "edc-mssql.properties",
            "file_extraction_path": "zoomdata\\conf\\",
            "file_destination_path": "release\\installers\\11.3.5.x\\setupfiles\\botinsight\\zoomdata"
        },
        {
            "fileName": "query-engine.properties",
            "file_extraction_path": "zoomdata\\conf\\",
            "file_destination_path": "release\\installers\\11.3.5.x\\setupfiles\\botinsight\\zoomdata"
        },
        {
            "fileName": "consul.json",
            "file_extraction_path": "zoomdata\\conf\\consul.conf.d",
            "file_destination_path": "release\\installers\\11.3.5.x\\setupfiles\\botinsight\\zoomdata"
        }
    ]
}
Quote your path using backtick-escaped double quotes:
$fileName = "f1.txt"
$path = "zoomdata\conf\consul.conf.d\$fileName"
# The backtick-escaped quotes (`") keep the path as a single argument even if it contains spaces
Start-Process jar -ArgumentList "xf jar1.jar `"$path`""
I am trying to set up a CloudFormation template to create a CloudWatch dashboard.
In this context I want to use a pseudo parameter to ascertain the region.
If I simply use the pseudo parameter AWS::Region, the code doesn't seem to work:
AutoscalingDashboard:
  Type: AWS::CloudWatch::Dashboard
  Properties:
    DashboardName: AutoscalingDashboard
    DashboardBody: '
      {
        "widgets":[
          {
            "type":"metric",
            "x":0,
            "y":0,
            "width":12,
            "height":6,
            "properties":{
              "metrics":[
                [ "AWS/ECS", "MemoryUtilization", "ServiceName", "invoice_web", "ClusterName", "InvoicegenappCluster" ],
                [ "...", "invoice_data", ".", "." ],
                [ "...", "invoice_generator", ".", "." ]
              ],
              "region": "AWS::Region",
              "period": 300,
              "view": "timeSeries",
              "title":"ECS MemoryUtilization",
              "stacked": false
            }
How can I use the pseudo parameter AWS::Region or a Ref function to keep the variables dynamic?
Thanks, A
In your example, the DashboardBody is a plain string, therefore AWS::Region will not get replaced.
You'll probably be better off adding the Fn::Sub function, like:
AutoscalingDashboard:
  Type: 'AWS::CloudWatch::Dashboard'
  Properties:
    DashboardName: 'AutoscalingDashboard'
    DashboardBody: !Sub >-
      {
        "widgets":[
          {
            "type":"metric",
            "x":0,
            "y":0,
            "width":12,
            "height":6,
            "properties":{
              "metrics":[
                [ "AWS/ECS", "MemoryUtilization", "ServiceName", "invoice_web", "ClusterName", "InvoicegenappCluster" ],
                [ "...", "invoice_data", ".", "." ],
                [ "...", "invoice_generator", ".", "." ]
              ],
              "region": "${AWS::Region}",
              "period": 300,
              "view": "timeSeries",
              "title":"ECS MemoryUtilization",
              "stacked": false
            }
          }
        ]
      }
Notice the ${} around the region, and also the YAML block string >-.
I'd like to reference an EC2 Container Registry image in the Elastic Beanstalk section of my CloudFormation template. The sample file references an S3 bucket for the source bundle:
"applicationVersion": {
"Type": "AWS::ElasticBeanstalk::ApplicationVersion",
"Properties": {
"ApplicationName": { "Ref": "application" },
"SourceBundle": {
"S3Bucket": { "Fn::Join": [ "-", [ "elasticbeanstalk-samples", { "Ref": "AWS::Region" } ] ] },
"S3Key": "php-sample.zip"
}
}
}
Is there any way to reference an EC2 Container Registry image instead? Something like what is available in the EC2 Container Service TaskDefinition?
Upload a Dockerrun file to S3 in order to do this. Here's an example Dockerrun:
{
    "AWSEBDockerrunVersion": "1",
    "Authentication": {
        "Bucket": "my-bucket",
        "Key": "mydockercfg"
    },
    "Image": {
        "Name": "quay.io/johndoe/private-image",
        "Update": "true"
    },
    "Ports": [
        {
            "ContainerPort": "8080:80"
        }
    ],
    "Volumes": [
        {
            "HostDirectory": "/var/app/mydb",
            "ContainerDirectory": "/etc/mysql"
        }
    ],
    "Logging": "/var/log/nginx"
}
Use this file as the S3 key. More info is available here.
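The application version can then point at that file instead of a zip. A minimal sketch, assuming the Dockerrun file above is uploaded to my-bucket under the (hypothetical) key Dockerrun.aws.json; for a private ECR image, you would set Image.Name to the repository URI and grant the instance profile permission to pull from ECR, rather than using the Authentication block:
"applicationVersion": {
    "Type": "AWS::ElasticBeanstalk::ApplicationVersion",
    "Properties": {
        "ApplicationName": { "Ref": "application" },
        "SourceBundle": {
            "S3Bucket": "my-bucket",
            "S3Key": "Dockerrun.aws.json"
        }
    }
}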