I am creating a VPC peering connection between two VPCs in different regions. For the VpcId of the peering connection, I am trying to import the VPC ID from another stack that previously created the VPC. Here's my code:
VpcPeeringConnection:
  Type: "AWS::EC2::VPCPeeringConnection"
  Properties:
    VpcId:
      - Fn::ImportValue: !Sub ${VpcStack}-vpcId
    PeerVpcId: !Ref PeerVPCId
    PeerRegion: !Ref PeerRegion
I get the following error:
Template error: the attribute in Fn::ImportValue must not depend on any resources, imported values, or Fn::GetAZs
It is not possible to use Fn::ImportValue to retrieve a value from a stack in another region.
You can't create cross-stack references across regions. You can use the intrinsic function Fn::ImportValue to import only values that have been exported within the same region.
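A common workaround is to pass anything coming from the other region in as a plain stack parameter instead of importing it. A minimal sketch, assuming the local VPC export lives in the same region as this stack and the peer VPC ID is copied from the other region's stack outputs at deploy time (parameter names are illustrative):

Parameters:
  VpcStack:
    Type: String        # name of the stack in *this* region that exported the local VPC ID
  PeerVPCId:
    Type: String        # VPC in the other region, so a plain String, not AWS::EC2::VPC::Id
  PeerRegion:
    Type: String

Resources:
  VpcPeeringConnection:
    Type: "AWS::EC2::VPCPeeringConnection"
    Properties:
      VpcId:
        Fn::ImportValue: !Sub ${VpcStack}-vpcId   # same-region import is fine
      PeerVpcId: !Ref PeerVPCId
      PeerRegion: !Ref PeerRegion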
I am using CloudFormation. In my YAML I define a resource of type AWS::ECS::TaskDefinition and its ContainerDefinitions property. In a container definition I have RepositoryCredentials, and its CredentialsParameter must be the ARN of a secret according to the docs.
We will have two secrets with the same symbolic name abc in the same region, one for test and one for production. These secrets are not defined in the same CloudFormation source as mine, so it appears I cannot !Ref them. And because the final few characters of the ARN are randomized, it is not enough to substitute the account ID into the ARN string.
I could simply make the secret ARN, or the random suffix, a parameter required to create/update the CF stack, but is there some way to look up the ARN by symbolic name? Something like CredentialsParameter: !ArnFor ['secret', 'abc'], perhaps?
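As far as I know there is no intrinsic that resolves a secret's ARN from its name, but one pattern that can fill the gap is a small Lambda-backed custom resource that calls DescribeSecret and hands the full ARN back to the template. A hedged sketch (resource names such as SecretArnLookup are made up, and error handling is minimal):

Resources:
  SecretLookupRole:
    Type: AWS::IAM::Role
    Properties:
      AssumeRolePolicyDocument:
        Version: "2012-10-17"
        Statement:
          - Effect: Allow
            Principal: { Service: lambda.amazonaws.com }
            Action: sts:AssumeRole
      ManagedPolicyArns:
        - arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole
      Policies:
        - PolicyName: describe-secret
          PolicyDocument:
            Version: "2012-10-17"
            Statement:
              - Effect: Allow
                Action: secretsmanager:DescribeSecret
                Resource: "*"

  SecretLookupFunction:
    Type: AWS::Lambda::Function
    Properties:
      Handler: index.handler
      Runtime: python3.9
      Role: !GetAtt SecretLookupRole.Arn
      Timeout: 30
      Code:
        ZipFile: |
          import boto3
          import cfnresponse

          def handler(event, context):
              try:
                  # Look the secret up by its symbolic name and return the full ARN.
                  name = event['ResourceProperties']['SecretName']
                  arn = boto3.client('secretsmanager').describe_secret(SecretId=name)['ARN']
                  cfnresponse.send(event, context, cfnresponse.SUCCESS, {'Arn': arn})
              except Exception as exc:
                  cfnresponse.send(event, context, cfnresponse.FAILED, {'Error': str(exc)})

  SecretArnLookup:
    Type: Custom::SecretArnLookup
    Properties:
      ServiceToken: !GetAtt SecretLookupFunction.Arn
      SecretName: abc                      # the symbolic name from the question

# ...and then, inside the container definition:
#   RepositoryCredentials:
#     CredentialsParameter: !GetAtt SecretArnLookup.Arn

Another option, if you control how the secrets are created, is to store each secret's ARN in an SSM parameter and read it into the stack via a parameter of type AWS::SSM::Parameter::Value<String>.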
I have deployed an EC2 instance through CloudFormation and now need to update its security group. I am making the changes in the existing template, but in the change set I can see that my EC2 instance is getting replaced. How can I modify the security group without recreating the instance in CloudFormation?
I tried updating the security group in the template, but it recreates the EC2 instance.
It depends on the changes you want to make to the AWS::EC2::SecurityGroup resource. If you change the GroupDescription, GroupName or VpcId properties, an update to any of these requires replacement.
This means a new AWS::EC2::SecurityGroup resource will be created, the old one will be deleted, and a new PhysicalId will be generated for the security group.
Looking at the AWS::EC2::Instance SecurityGroups property, an update there also requires replacement. Because a new PhysicalId was generated for the security group, the sg-xxxxxxxx value passed to the EC2 instance is different, causing the instance to be replaced.
Properties you can modify on the AWS::EC2::SecurityGroup resource without replacing your EC2 instance are SecurityGroupEgress, SecurityGroupIngress and Tags, because for these an update requires at most some interruption, or no interruption at all.
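For example, an edit like the one below (a sketch; the resource names and the rule itself are illustrative) only touches SecurityGroupIngress, so the security group keeps its physical ID and the instance stays in place:

InstanceSecurityGroup:
  Type: AWS::EC2::SecurityGroup
  Properties:
    GroupDescription: web tier            # unchanged; changing this would force replacement
    VpcId: !Ref VpcId                     # unchanged; changing this would force replacement
    SecurityGroupIngress:
      - IpProtocol: tcp                   # existing rule, kept as-is
        FromPort: 22
        ToPort: 22
        CidrIp: 10.0.0.0/16
      - IpProtocol: tcp                   # newly added rule; applied in place, no replacement
        FromPort: 443
        ToPort: 443
        CidrIp: 0.0.0.0/0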
I am trying to build an AWS EKS Cluster with AWS cdk in Java.
We have an existing VPC and subnets which need to get some Kubernetes tags like kubernetes.io/role/internal-elb=1 etc.
I can get the ISubnets by getting the vpc with:
IVpc vpc = Vpc.fromVpcAttributes(this, "my-vpc", vpcAttributes);
List<ISubnet> subnets = vpc.getPrivateSubnets();
subnets.forEach(iSubnet -> Tag.add(iSubnet, "kubernetes.io/role/internal-elb", "1"));
but awscdk.core.Tag.add() is expecting a Construct, which I am not creating because the subnet already exists.
I also tried the example here: https://docs.aws.amazon.com/de_de/cdk/latest/guide/tagging.html
private void addTagToAllVPCSubnets(Tag tag) {
    TagProps includeOnlySubnets = TagProps.builder()
            .includeResourceTypes(singletonList("AWS::EC2::Subnet"))
            .build();

    Tag.add(this, tag.getKey(), tag.getValue(), includeOnlySubnets);
}
... but I still cannot see any of the new tags in the CloudFormation template produced by cdk synth.
Any help will be appreciated!
You can do it automatically using Lambda-backed custom resources.
It seems like this is a limitation in CDK at the moment. This is something that the EKS construct in CDK should deal with, but which is currently not possible as indicated by a warning during a CDK deployment:
[Warning at /stack/some-project-EKS-cluster] Could not auto-tag private subnets with "kubernetes.io/role/internal-elb=1", please remember to do this manually
For the same reason that this can't be done automatically, you can't do it by using Tag.add().
Since the EKS module in CDK is still experimental/development preview, you have three options right now:
Wait for a full release, which perhaps includes automatic subnet tagging.
Create your own VPC through CDK, which allows you to tag your own subnets.
Manually edit the existing subnets through the VPC service interface in the AWS console.
A good idea would probably be to create an issue on the AWS CDK GitHub and request tagging of existing subnets (and other existing constructs in general) as a feature. I could not find any other issues regarding this on their GitHub.
I created the repo as shown below, but I want to add a branch with a specific name at creation time.
Resources:
  CodeCommitRepository:
    Type: AWS::CodeCommit::Repository
    Properties:
      RepositoryName: !Ref Message
Answer:
It is not possible to create a CodeCommit branch directly in the CloudFormation template using the AWS::CodeCommit::Repository resource, and there is no other resource type available to do this either.
Why
IMHO: because CloudFormation is an infrastructure-as-code service, dealing with the details of what is going to run inside the infrastructure (or on it) should not be part of the infrastructure code itself. But that's just my point of view.
Possible Alternative:
Write a Lambda function within the CloudFormation template. It should depend on the repository resource AWS::CodeCommit::Repository, so use DependsOn when defining your Lambda function and set it to CodeCommitRepository, like below:
Resources:
  CodeCommitRepository:
    Type: AWS::CodeCommit::Repository
    Properties:
      RepositoryName: !Ref Message
  LambdaForBranchCreation:
    Type: AWS::Lambda::Function
    DependsOn: CodeCommitRepository
    Properties:
      Code:
And then use boto3 to create the branch through the CodeCommit API. Hope it helps!
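To make that concrete, here is a hedged sketch of what the Lambda body could look like. It uses put_file rather than create_branch, because a brand-new repository has no commit yet to point a branch at, and put_file on an empty repository creates both the first commit and the named branch in one call. The branch name develop is just an example, the execution role (it needs codecommit:PutFile plus basic Lambda logging) is not shown, and something still has to invoke the function, for example a custom resource:

  LambdaForBranchCreation:
    Type: AWS::Lambda::Function
    DependsOn: CodeCommitRepository
    Properties:
      Handler: index.handler
      Runtime: python3.9
      Role: !GetAtt BranchCreationRole.Arn        # illustrative; role definition omitted
      Code:
        ZipFile: !Sub |
          import boto3

          def handler(event, context):
              # Creates the first commit (and with it the branch) in the empty repository.
              boto3.client('codecommit').put_file(
                  repositoryName='${CodeCommitRepository.Name}',
                  branchName='develop',
                  filePath='README.md',
                  fileContent=b'placeholder so the branch has a commit',
                  commitMessage='Initialize branch from CloudFormation')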
Reference:
The CloudFormation Template Reference documents all the available resource types and their properties. AWS::CodeCommit::Repository is the only resource type listed under the CodeCommit resource type reference, and none of its properties covers branch creation.
Looking at this example of mounting an EFS volume for persisting docker volumes in ECS, I'm unsure how to provide the correct mount point for the availability zone that the instance is in. I have two availability zones in my stack and need the correct mount point to insert in this section of the cfn-init:
01_mount:
  command: !Join [ "", [ "mount -t nfs4 -o nfsvers=4.1 ", !ImportValue '!Ref FileSystem', ".efs.", !Ref 'AWS::Region', ".amazonaws.com:/ /", !Ref MountPoint ] ]
02_fstab:
  command: !Join [ "", [ "echo \"", !ImportValue '!Ref FileSystem', ".efs.", !Ref 'AWS::Region', ".amazonaws.com:/ /", !Ref MountPoint, " nfs4 nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2 0 0\" >> /etc/fstab" ] ]
03_permissions:
  command: !Sub "chown -R ec2-user:ec2-user /${MountPoint}"
It is no longer necessary to use the availability-zone-specific mount target hostname when mounting an EFS filesystem, provided you are using the DNS settings in your VPC and have the other necessary prerequisites in place in the VPC configuration.
File system DNS name – Using the file system's DNS name is your simplest mounting option. The file system DNS name will automatically resolve to the mount target’s IP address in the Availability Zone of the connecting Amazon EC2 instance. You can get this DNS name from the console, or if you have the file system ID, you can construct it using the following convention:
file-system-id.efs.aws-region.amazonaws.com
(emphasis added)
http://docs.aws.amazon.com/efs/latest/ug/mounting-fs-mount-cmd-dns-name.html
This feature was introduced in December 2016, several months after the service was released from preview. Previously, the hostname shown above had to be prefixed with the Availability Zone you wanted. That form is still supported, but the new one effectively eliminates an awkward configuration requirement, both in Docker and on ordinary instances with fstab mounts.
See the referenced page for the VPC configuration elements that must be in place for this solution to work in your VPC.
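Applied to the cfn-init block from the question, the mount command can therefore be built from nothing more than the file-system ID and the region. A sketch (FileSystemExportName stands in for whatever export actually carries the file-system ID, and MountPoint is the existing parameter):

01_mount:
  command: !Sub
    - "mount -t nfs4 -o nfsvers=4.1 ${FsId}.efs.${AWS::Region}.amazonaws.com:/ /${MountPoint}"
    - FsId:
        Fn::ImportValue: FileSystemExportName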