How can I update a security group through CloudFormation without recreating the EC2 instance - aws-cloudformation

I have deployed an EC2 instance through CloudFormation and now need to update its security group. I am making the changes in the existing template, but in the change set I can see that my EC2 instance is getting replaced. How can I modify the security group without recreating the instance in CloudFormation?
I tried updating the SG in the template, but it is recreating the EC2 instance.

It depends on the changes you want to make to the AWS::EC2::SecurityGroup resource. If you change the GroupDescription, GroupName, or VpcId properties, the update requires replacement.
This means a new AWS::EC2::SecurityGroup resource is created and the old one is deleted, so the security group gets a new PhysicalId.
Looking at the AWS::EC2::Instance SecurityGroups property, an update there also requires replacement. Because a new PhysicalId was generated for the security group, the sg-xxxxxxxx ID passed to the EC2 instance is different, which causes the instance to be replaced.
Properties you can modify on the AWS::EC2::SecurityGroup resource without replacing your EC2 instance are SecurityGroupEgress, SecurityGroupIngress, and Tags, because updates to these require either some interruption or no interruption.
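For illustration, a minimal sketch of the security group resource (names, IDs, and CIDR are made up): editing only the rules or tags below is an in-place update, while changing GroupDescription, GroupName, or VpcId replaces the group.
AppSecurityGroup:
  Type: AWS::EC2::SecurityGroup
  Properties:
    GroupDescription: App security group      # changing this forces replacement
    VpcId: vpc-0123456789abcdef0              # changing this forces replacement
    SecurityGroupIngress:                     # rule changes update in place
      - IpProtocol: tcp
        FromPort: 443
        ToPort: 443
        CidrIp: 10.0.0.0/16
    Tags:                                     # tag changes update in place
      - Key: Name
        Value: app-sg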

Related

Rename the EKS creator's IAM user name via aws cli

If we have a role change in the team: I read that the EKS creator can NOT be transferred. Can we instead rename the creator's IAM user name via the AWS CLI? Will that break EKS?
I have only found ways to add a new user via the aws-auth ConfigMap, but that ConfigMap does not contain the root user:
$ kubectl edit configmap aws-auth --namespace kube-system
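For context, the kind of entry described for adding a user to that ConfigMap looks roughly like this (the ARN and username are made up):
apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  mapUsers: |
    - userarn: arn:aws:iam::111122223333:user/new-admin   # hypothetical user
      username: new-admin
      groups:
        - system:masters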
There is no way to transfer the creator ("root") identity of an EKS cluster to another IAM user. The only option would be to delete the cluster and recreate it with the new IAM user as the creator.
Can we instead rename the creator's IAM user name via aws cli? Will that break EKS?
The creator record is immutable and managed internally by EKS. It is simply not accessible via the CLI and cannot be amended (or deleted).
How do we know whether a cluster was created by an IAM role or an IAM user?
If you cannot find the identity (userIdentity.arn) that invoked CreateCluster (eventName) for the cluster (responseElements.clusterName) in CloudTrail within the last 90 days, you need to raise a case with AWS Support to obtain the identity.
Is it safe to delete the creator IAM user?
Typically, you start by deactivating the creator IAM user if you are unsure about side effects. You can delete the account later, once you are confident it is safe to do so.
As already mentioned in the answer by Muhammad, it is not possible to transfer the root/creator role to another IAM user.
To avoid getting into the situation you describe, or any other situation where the creator of the cluster should not remain "root", it is recommended not to create clusters with IAM users but with assumed IAM roles instead.
This makes the IAM role the "creator", meaning that you can use IAM access management to control who can actually assume the given role and thus act as root.
You can either have a dedicated role for each cluster or one role for multiple clusters, depending on how you plan to manage access. The same limitation still applies later, meaning that you cannot switch the creator role afterwards, so this must be planned properly in advance.

AWS CDK: tagging existing subnets

I am trying to build an AWS EKS cluster with the AWS CDK in Java.
We have an existing VPC and subnets which need to get some Kubernetes tags such as kubernetes.io/role/internal-elb=1.
I can get the ISubnets from the VPC with:
IVpc vpc = Vpc.fromVpcAttributes(this, "my-vpc", vpcAttributes);
List<ISubnet> subnets = vpc.getPrivateSubnets();
subnets.forEach(iSubnet -> Tag.add(iSubnet, "kubernetes.io/role/internal-elb", "1"));
but awscdk.core.Tag.add() expects a Construct, which I am not creating because the subnet already exists.
I also tried the example here: https://docs.aws.amazon.com/de_de/cdk/latest/guide/tagging.html
private void addTagToAllVPCSubnets(Tag tag) {
    TagProps includeOnlySubnets = TagProps.builder()
            .includeResourceTypes(singletonList("AWS::EC2::Subnet"))
            .build();
    Tag.add(this, tag.getKey(), tag.getValue(), includeOnlySubnets);
}
... but I still cannot see any of the new tags in the CloudFormation YAML produced by cdk synth.
Any help will be appreciated!
You can do it automatically using Lambda-backed custom resources.
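For illustration, a rough sketch of what such a custom resource could look like in the synthesized CloudFormation template; SubnetTaggerFunction is a hypothetical Lambda function (which you would have to write yourself) that calls ec2:CreateTags for the given subnets:
TagPrivateSubnets:
  Type: Custom::SubnetTagger
  Properties:
    ServiceToken: !GetAtt SubnetTaggerFunction.Arn   # hypothetical Lambda-backed handler
    SubnetIds:
      - subnet-0123456789abcdef0                     # existing subnet ID
    Tags:
      - Key: kubernetes.io/role/internal-elb
        Value: "1"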
This seems to be a limitation in the CDK at the moment. It is something the EKS construct in the CDK should handle, but it is currently not possible, as indicated by a warning during a CDK deployment:
[Warning at /stack/some-project-EKS-cluster] Could not auto-tag private subnets with "kubernetes.io/role/internal-elb=1", please remember to do this manually
For the same reason that this can't be done automatically, you can't do it by using Tag.add().
Since the EKS module in CDK is still experimental/development preview, you have three options right now:
Wait for a full release, which perhaps includes automatic subnet tagging.
Create your own VPC through CDK, which allows you to tag your own subnets.
Manually edit the existing subnets through the VPC service interface in the AWS console.
A good idea would probably be to open an issue on the AWS CDK GitHub repository and request tagging of existing subnets (and existing constructs in general) as a feature. I could not find any existing issues about this on their GitHub.

RouteController failed to create a route on GKE

I have a cluster on GKE whose node pool I create when I want to use the cluster, and delete when I'm done with it.
It's a two-node cluster with the master in europe-west2-a and node zones europe-west2-a and europe-west2-b.
The most recent creation resulted in the node in zone B failing with NetworkUnavailable because RouteController failed to create a route. The reported reason was: Could not create route xxx 10.244.1.0/24 for node xxx after 342.263706ms: instance not found.
Why would this be happening all of a sudden, and what can I do to fix it?!
You didn't mention which version of GKE you are using, so just for clarification:
Changes in access scopes
Beginning with Kubernetes version 1.10, gcloud and the GCP Console no longer grant the compute-rw access scope on new clusters and new node pools by default. Furthermore, if --scopes is specified in gcloud container clusters create, gcloud no longer silently adds compute-rw or storage-ro.
In any case, you can still revert to the legacy access scopes, but this is not the recommended approach.
Hope this helps.
With GKE 1.13.6-gke.13, some of the default scopes were changed, including the removal of the compute-rw scope. I think that, due to the age of the cluster, this scope was necessary for a route to be created correctly between nodes in the node pool.
In the end, my gcloud creation command had these scopes:
--scopes https://www.googleapis.com/auth/projecthosting,storage-rw,monitoring,trace,compute-rw

What prerequisites do I need for Kubernetes to mount an EBS volume?

The documentation doesn't go into detail. I imagine I would at least need an IAM role.
This is what we did, and it worked well.
I was on Kubernetes 1.7.2, trying to provision storage (dynamic/static) for Kubernetes pods on AWS. Some of the things mentioned below may not be needed if you are not looking for dynamic storage classes.
Made sure that the DefaultStorageClass admission controller is enabled on the API server. (DefaultStorageClass is among the comma-delimited, ordered list of values for the --enable-admission-plugins flag of the API server component.)
I passed the options --cloud-provider=aws and --cloud-config=/etc/aws/aws.conf when starting the apiserver, controller-manager, and kubelet.
(The file /etc/aws/aws.conf is present on the instance with the contents below.)
$ cat /etc/aws/aws.conf
[Global]
Zone = us-west-2a
Created an IAM policy and added it to a role (as in the link below), created an instance profile for it, and attached it to the instances. (NOTE: I initially missed attaching the instance profile and it did not work.)
https://medium.com/@samnco/using-aws-elbs-with-the-canonical-distribution-of-kubernetes-9b4d198e2101
For dynamic provisioning:
Created a storage class and made it the default.
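For illustration, a minimal sketch of such a default storage class for EBS (the class name and volume type are just examples; on older clusters the annotation may need the storageclass.beta.kubernetes.io/is-default-class prefix instead):
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: gp2
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"   # marks it as the default class
provisioner: kubernetes.io/aws-ebs
parameters:
  type: gp2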
Let me know if this did not work.
Regards
Sudhakar
This is the policy used by kubespray, and it is very likely indicative of a reasonable default:
https://github.com/kubernetes-incubator/kubespray/blob/v2.5.0/contrib/aws_iam/kubernetes-minion-policy.json
with the tl;dr of that link being to create an Allow for the following actions:
s3:*
ec2:Describe*
ec2:AttachVolume
ec2:DetachVolume
route53:*
(although I would bet that s3:* is too wide, I don't have the information handy to provide a more constrained version; a similar observation applies to route53:*)
All of the Resource keys for those are * except the s3: one, which restricts the resource to buckets beginning with kubernetes-* -- it is unknown whether that is just an example or whether there is something special about the kubernetes-prefixed buckets. You might well have a better list of values to populate the Resource keys with to genuinely restrict attachable volumes (just be careful with dynamically provisioned volumes, such as those created for PersistentVolume resources).
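For reference, a rough CloudFormation-YAML rendering of a policy along those lines (the linked kubespray file itself is plain JSON, and the resource name here is arbitrary):
NodePolicy:
  Type: AWS::IAM::ManagedPolicy
  Properties:
    PolicyDocument:
      Version: "2012-10-17"
      Statement:
        - Effect: Allow
          Action:
            - ec2:Describe*
            - ec2:AttachVolume
            - ec2:DetachVolume
            - route53:*
          Resource: "*"
        - Effect: Allow
          Action: s3:*
          Resource: arn:aws:s3:::kubernetes-*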

How to create a role with a specified ARN with CloudFormation in AWS?

I want to create a role with a specified ARN using CloudFormation in AWS, but I don't know how to do it, because we can't specify a name for the role in the template file.
Assuming you are referring to an IAM role, this is not possible using CloudFormation. CloudFormation will automatically generate a name for your role based on the stack name and the logical resource ID. For example, arn:aws:iam::112233445566:role/myStackName-myRoleName-XXXXXXXXXXXXX.
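For illustration, a minimal sketch of a role with no name set (the trust policy is just an example): the physical name, and therefore the ARN, is generated from the stack name and the logical ID MyRole, as in the example ARN above.
MyRole:
  Type: AWS::IAM::Role
  Properties:
    # No explicit name here; CloudFormation generates the physical name as
    # <stack-name>-MyRole-<random-suffix>, which then determines the ARN.
    AssumeRolePolicyDocument:
      Version: "2012-10-17"
      Statement:
        - Effect: Allow
          Principal:
            Service: ec2.amazonaws.com
          Action: sts:AssumeRole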