Discover AWS ECS cluster association from a running container (self-managed cluster) - amazon-ecs

I'm working with ECS using self-managed, EC2-based clusters. We have one cluster for each environment: dev/stage/prod.
I'm struggling to make my containers in ECS aware of which cluster/environment they start in, so that at task start-up time they can configure themselves properly without having to bake env-specific config into the images.
It would be really easy if there were some command to run inside the container that could return the cluster name. It seems like that should be easy. I can think of a few sub-optimal ways to do this: get the container/host IP and look up the instance, try to grab /etc/ecs/ecs.config from the host instance, etc.
It seems like there should be a better way. My Google skills are failing... thx!

The ECS Task Metadata endpoint, available at ${ECS_CONTAINER_METADATA_URI_V4}/task within any ECS task, will return the cluster name, among other things.
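For example, from inside the container something like this should work (assuming curl and jq are available in the image; Cluster is the field the v4 task metadata response uses):
# query the v4 task metadata endpoint and pull the cluster out of the JSON response
curl -s "${ECS_CONTAINER_METADATA_URI_V4}/task" | jq -r '.Cluster'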
Alternatively, if you were using an IaC tool such as Terraform or CloudFormation to build your ECS tasks, it would be trivial to inject the cluster name as an environment variable in the tasks.

Mark B's answer is better, but before I got that I found this solution:
Add ECS_ENABLE_CONTAINER_METADATA=true to the /etc/ecs/ecs.config file on the EC2 host and the agent will write a container metadata file for each container, with its location exposed inside the container via the ECS_CONTAINER_METADATA_FILE environment variable. See:
[ECS Container Metadata File][1]
I think Mark's answer is better because this solution involves editing the user data script for the host instances.
[1]: https://docs.aws.amazon.com/AmazonECS/latest/developerguide/container-metadata.html
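As a rough sketch, once the metadata file is enabled, something like this should print the cluster from inside the container (again assuming jq is in the image):
# the metadata file includes a Cluster field alongside the container and task details
jq -r '.Cluster' "$ECS_CONTAINER_METADATA_FILE"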

Related

AWS ECS Fargate: enforce readonlyRootFilesystem

I need to enforce readonlyRootFilesystem on ECS Fargate services to reduce Security Hub findings.
I thought it was an easy task, just setting it to true in the task definition.
But it backfired: the service does not deploy because the commands from the Dockerfile fail, since they no longer have write access to their folders, and it is also incompatible with SSM execute-command, so I won't be able to get inside the container.
I managed to set readonlyRootFilesystem to true and get my service back up by mounting volumes: a tmp volume that the container uses to install dependencies at start-up, and a data volume to store data (updates).
So now, according to the documentation, the Security Hub finding should be resolved, since the rule only requires that the parameter not be false, but Security Hub is still flagging the task as non-compliant.
--- More update ---
The task definition of my service also spins up a Datadog image for monitoring. That also needs to have its filesystem read-only to satisfy Security Hub.
Here I cannot solve it as above, because the Datadog agent needs access to the /etc/ folder, and if I mount a volume there I will lose files and the service won't start.
Is there a way out of this?
Any ideas?
In case someone stumbles into this.
The solution (or workaround, call it as you please) was to set readonlyRootFilesystem to true for both the container and the sidecar (Datadog in this case) and use bind mounts.
The rules for monitoring ECS using Datadog can be found here.
The bind mounts you need to add for your service depend on how you have set up your Dockerfile.
In my case it was about adding a volume for downloading data.
Moreover, since ECS Exec (SSM) does not work with a read-only filesystem, if you want it you also have to add mounts: I added two mounts at /var/lib/amazon and /var/log/amazon. This keeps SSM working (basically a docker exec into your container).
As for Datadog, I just needed to fix the mounts so that the agent could work. In my case, since it was again a custom image, I mounted a volume on /etc/datadog-agent.
happy days!
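For anyone who wants a concrete starting point, here is a minimal, illustrative sketch of the approach above, registered with the AWS CLI (family, image and volume names are placeholders; adjust the mount points to whatever your Dockerfile actually writes to):
# write an illustrative task definition with a read-only root filesystem and bind mounts
cat > taskdef.json <<'EOF'
{
  "family": "my-service",
  "requiresCompatibilities": ["FARGATE"],
  "networkMode": "awsvpc",
  "cpu": "256",
  "memory": "512",
  "volumes": [
    { "name": "tmp" },
    { "name": "ssm-lib" },
    { "name": "ssm-log" }
  ],
  "containerDefinitions": [
    {
      "name": "app",
      "image": "my-registry/my-app:latest",
      "readonlyRootFilesystem": true,
      "mountPoints": [
        { "sourceVolume": "tmp",     "containerPath": "/tmp" },
        { "sourceVolume": "ssm-lib", "containerPath": "/var/lib/amazon" },
        { "sourceVolume": "ssm-log", "containerPath": "/var/log/amazon" }
      ]
    }
  ]
}
EOF
aws ecs register-task-definition --cli-input-json file://taskdef.json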

ECS instance metadata files for EKS

I know that in the Amazon ECS container agent, setting the variable ECS_ENABLE_CONTAINER_METADATA=true makes ECS metadata files be created for the containers.
https://docs.aws.amazon.com/AmazonECS/latest/developerguide/container-metadata.html
Is there any similar feature for EKS? I would like to retrieve instance metadata from a file inside the container instead of using the IMDSv2 API.
You simply can't; you still need to use the IMDSv2 API in your service if you want to get instance metadata.
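For reference, the IMDSv2 flow from inside a pod looks roughly like this (note that the node's HttpPutResponseHopLimit usually has to be 2 for a pod on the pod network to reach the endpoint, or the pod needs hostNetwork):
# request a session token, then use it to query the instance metadata service
TOKEN=$(curl -s -X PUT "http://169.254.169.254/latest/api/token" \
  -H "X-aws-ec2-metadata-token-ttl-seconds: 21600")
curl -s -H "X-aws-ec2-metadata-token: $TOKEN" \
  "http://169.254.169.254/latest/meta-data/instance-id"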
If you're looking for Pod metadata, see https://kubernetes.io/docs/tasks/inject-data-application/environment-variable-expose-pod-information/
Or you can use pod labels too...
Try adding this as part of the user data:
"echo 'ECS_ENABLE_CONTAINER_METADATA=1' > /etc/ecs/ecs.config"
Found here: https://github.com/aws/amazon-ecs-agent/issues/1514

Kubernetes: run a job on node reboot only once

Is there a way to run a job only once upon machine reboot in Kubernetes?
I thought of running a CronJob as a static pod, but it seems the kubelet does not like it.
Edit: Based on the replies, I'd like to clarify: I'm only looking at doing this through native Kubernetes. I'm OK with writing a CronJob in Kubernetes, but I need it to run only once and upon node reboot.
Not sure about your platform.
For example, in AWS an EC2 instance has user_data (https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/user-data.html),
which will run commands on your Linux/Windows instance at launch.
You should be able to find similar solutions for other cloud providers or on-premise servers.
If I understand you correctly, you should consider using a DaemonSet:
A DaemonSet ensures that all (or some) Nodes run a copy of a Pod. As nodes are added to the cluster, Pods are added to them. As nodes are removed from the cluster, those Pods are garbage collected. Deleting a DaemonSet will clean up the Pods it created.
That way you could create a container with the job and have it run from a DaemonSet.
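A minimal sketch of that pattern, assuming kubectl access and using placeholder names and images: the init container does the one-off work every time the pod starts on a node (for example after a reboot, though also on first scheduling), and the pause container just keeps the pod alive.
kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: node-bootstrap
spec:
  selector:
    matchLabels:
      app: node-bootstrap
  template:
    metadata:
      labels:
        app: node-bootstrap
    spec:
      initContainers:
      - name: run-once
        image: busybox:1.36
        command: ["sh", "-c", "echo running one-time task on $(NODE_NAME)"]
        env:
        - name: NODE_NAME
          valueFrom:
            fieldRef:
              fieldPath: spec.nodeName
      containers:
      - name: pause
        image: registry.k8s.io/pause:3.9
EOF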
Alternatively you could consider DaemonJob:
This is an example CompositeController that's similar to Job, except that a pod will be scheduled to each node, similar to DaemonSet.
Also there are:
Kubebuilder
Kubebuilder is a framework for building Kubernetes APIs using custom resource definitions (CRDs).
and:
Metacontroller
Metacontroller is an add-on for Kubernetes that makes it easy to write and deploy custom controllers in the form of simple scripts.
But the first option that I have provided would be easier to implement in my opinion.
Please let me know if that helped.
Consider using a CronJob for this. It takes the same format as the normal cron scheduler on Linux.
Besides the usual format (minute / hour / day of month / month / day of week) that is widely used to indicate a schedule, the cron scheduler also allows the use of @reboot.
This directive, followed by the absolute path to a script, will cause it to run when the machine boots.
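As an example, on the node itself (not in a pod) a classic crontab entry would look like this; /usr/local/bin/one-time-task.sh is just a placeholder for your script:
# append an @reboot entry to the current user's crontab
(crontab -l 2>/dev/null; echo "@reboot /usr/local/bin/one-time-task.sh") | crontab -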

Terraform apply: how to increment count and add Kubernetes worker nodes to the existing workers?

I deployed a k8s cluster on bare metal using Terraform, following this repository on GitHub.
Now I have three nodes:
ewr1-controller, ewr1-worker-0, ewr1-worker-1
Next, I would like to run terraform apply and add worker nodes (ewr1-worker-3, ewr1-worker-4, ...) while keeping the existing controller and worker nodes.
I tried making count.index start from 3; however, it still overwrites the existing workers.
resource "packet_device" "k8s_workers" {
project_id = "${packet_project.kubenet.id}"
facilities = "${var.facilities}"
count = "${var.worker_count}"
plan = "${var.worker_plan}"
operating_system = "ubuntu_16_04"
hostname = "${format("%s-%s-%d", "${var.facilities[0]}", "worker", count.index+3)}"
billing_cycle = "hourly"
tags = ["kubernetes", "k8s", "worker"]
}
I haven't tried this, but if I do
terraform state rm 'packet_device.k8s_workers'
I am assuming these worker nodes will no longer be managed by the Kubernetes master. I don't want to create all the nodes at the beginning because the worker nodes I am adding will have different specs (instance types).
The entire script I used is available in this GitHub repository.
I'd appreciate it if someone could tell me what I am missing here and how to achieve this.
Thanks!
Node resizing is best addressed with an autoscaler. Using Terraform to scale a node pool is not the optimal approach, as the tool is meant to declare the state of the system rather than change it dynamically. The best approach for this is a cluster autoscaler.
On bare metal, you can implement a CloudProvider interface (like the ones provided by clouds such as AWS, GCP, Azure) as described here.
After implementing that, you need to determine whether your K8s implementation can be operated as a provider by Terraform, and if so, find the node pool autoscaler resource that enables autoscaling.
Wrapping up, Terraform is not meant to be used as an autoscaler, given its nature as a declarative language that describes the infrastructure.
The autoscaling features in K8s are meant to tackle this kind of requirement.
I solved this issue by modifying and removing modules and resources from terraform.state: editing it manually and using terraform state rm <--->.
Leaving out (removing from state) the sections that I want to keep as they are.
Modifying the sections in terraform.state that I want to change when new servers are added.
Incrementing the counter to add new resources; see terraform interpolation.
I am using a bare-metal cloud provider to deploy k8s, and it doesn't support k8s horizontal or vertical autoscaling. This may not be the optimal solution, as others have pointed out, but if it is not something you need to do very often, Terraform can do the job, albeit the hard way.
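Roughly, the state surgery looks like this (the resource address is illustrative; double-check the terraform state list output before removing anything):
# see which resources terraform currently tracks
terraform state list
# stop tracking a specific worker without destroying it
terraform state rm 'packet_device.k8s_workers[0]'
# after editing the config (counter / new resource blocks), review and re-apply
terraform plan
terraform apply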

terraforming with dependent providers

In my Terraform infrastructure, I spin up several Kubernetes clusters based on parameters, then install some standard contents into those Kubernetes clusters using the kubernetes provider.
When I change the parameters and one of the clusters is no longer needed, Terraform is unable to tear it down because the provider and the resources are both in the module. I don't see an alternative, however, because I create the Kubernetes cluster in that same module, and the kubernetes objects are all per Kubernetes cluster.
All solutions I can think of involve adding a bunch of boilerplate to my Terraform config. Should I consider generating my Terraform config from a script?
I made a git repo that shows exactly the problems I'm having:
https://github.com/bukzor/terraform-gke-k8s-demo
TL;DR
Two solutions:
Create two separate modules with Terraform
Use interpolations and depends_on between the code that creates your Kubernetes cluster and the kubernetes resources:
resource "kubernetes_service" "example" {
metadata {
name = "my-service"
}
depends_on = ["aws_vpc.kubernetes"]
}
resource "aws_vpc" "kubernetes" {
...
}
When destroying resources
You are encountering a dependency lifecycle issue.
PS: I don't know the code you've used to create / provision your Kubernetes cluster, but I guess it looks like this:
Write code for the Kubernetes cluster (creates a VPC)
Apply it
Write code for provisioning Kubernetes (create a Service that creates an ELB)
Apply it
Try to destroy everything => Error
What is happening is that by creating a LoadBalancer Service, Kubernetes will provision an ELB on AWS. But Terraform doesn't know that, and there is no link between the ELB created and any other resources managed by Terraform.
So when Terraform tries to destroy the resources in the code, it will try to destroy the VPC. But it can't, because there is an ELB inside that VPC that Terraform doesn't know about.
The first thing would be to make sure that Terraform "deprovisions" the Kubernetes cluster and then destroys the cluster itself.
Two solutions here:
Use different modules so there is no dependency lifecycle. For example, the first module could be k8s-infra and the other could be k8s-resources. The first one manages the whole skeleton of Kubernetes and is applied first / destroyed last. The second one manages what is inside the cluster and is applied last / destroyed first.
Use the depends_on parameter to write the dependency lifecycle explicitly
When creating resources
You might also run into a dependency issue when terraform apply cannot create resources even though nothing has been applied yet. I'll give another example with Postgres:
Write code to create an RDS PostgreSQL server
Apply it with Terraform
Write code, in the same module, to provision that RDS instance with the postgres terraform provider
Apply it with Terraform
Destroy everything
Try to apply everything => ERROR
By debugging Terraform a bit, I've learned that all the providers are initialized at the beginning of the plan / apply, so if one has an invalid config (wrong API keys / unreachable endpoint) then Terraform will fail.
The solution here is to use the -target parameter of a plan / apply command.
Terraform will then only initialize the providers that are related to the resources being applied.
Apply the RDS code with the AWS provider: terraform apply -target=aws_db_instance.<name>
Apply everything with terraform apply. Because the RDS instance is already reachable, the PostgreSQL provider can also initialize itself.
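In practice the two-step apply looks something like this (the resource address is illustrative):
# step 1: create only the database so its endpoint exists
terraform apply -target=aws_db_instance.example
# step 2: apply everything else; the postgresql provider can now reach the endpoint
terraform apply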