How can I add EC2 instances for different ECS clusters? - amazon-ecs

I want to set up 2 ECS clusters: TEST and PROD.
To allow EC2 instances to become part of your cluster, you need the ECS agent running together with an ECS role. How can I make sure that TEST and PROD run on separate servers (instances)?

Paste the following code in your EC2 User-data section:
#!/bin/bash
echo ECS_CLUSTER=your_cluster_name >> /etc/ecs/ecs.config
Source: http://docs.aws.amazon.com/AmazonECS/latest/developerguide/launch_container_instance.html
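For example, a minimal sketch assuming the clusters are named TEST and PROD: give each group of instances (e.g. each launch template or Auto Scaling group) its own user data, so an instance only ever registers with the cluster named in its own config. User data for the TEST instances:
#!/bin/bash
echo ECS_CLUSTER=TEST >> /etc/ecs/ecs.config
and for the PROD instances:
#!/bin/bash
echo ECS_CLUSTER=PROD >> /etc/ecs/ecs.config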

Related

Discover AWS ECS cluster association from running container (self managed cluster)

I'm working with ECS with self-managed, EC2-based clusters. We have 1 cluster for each env: dev/stage/prod.
I'm struggling to have my containers in ECS be aware of what cluster/environment they start in, so that at task start-up time they can properly configure themselves without having to bake the env-specific config into the images.
It would be really easy if there were some command to run inside the container that could return the cluster name. It seems like that should be easy. I can think of a few suboptimal ways to do this: get the container/host IP and look up the instance, try to grab /etc/ecs/ecs.config from the host instance, etc.
It seems like there should be a better way. My Google skills are failing... thx!
The ECS Task Metadata endpoint, available at ${ECS_CONTAINER_METADATA_URI_V4}/task within any ECS task, will return the cluster name, among other things.
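For example, a minimal sketch assuming curl and jq are available in the image:
# query the task metadata v4 endpoint from inside the container
# and pull out the Cluster field (the cluster name or ARN)
curl -s "${ECS_CONTAINER_METADATA_URI_V4}/task" | jq -r '.Cluster'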
Alternatively, if you were using an IaC tool such as Terraform or CloudFormation to build your ECS tasks, it would be trivial to inject the cluster name as an environment variable in the tasks.
Mark B's answer is better, but before I got that I found this solution:
Add ECS_ENABLE_CONTAINER_METADATA=true to the /etc/ecs/ecs.config file on the EC2 host, and a container metadata file will be made available inside the container, with its path exposed as an env var. See:
[Ecs Container Metadata File][1]
I think Mark's answer is better b/c this solution involves editing the user data script for the host instances.
[1]: https://docs.aws.amazon.com/AmazonECS/latest/developerguide/container-metadata.html
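A small sketch of using that file, assuming jq is installed in the container: once metadata is enabled, the file's path is exposed through the ECS_CONTAINER_METADATA_FILE env var and the cluster can be read from it.
# ECS_CONTAINER_METADATA_FILE points at the JSON metadata file inside the container
jq -r '.Cluster' "$ECS_CONTAINER_METADATA_FILE"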

Access AWS Tags inside ECS docker containers

I am creating an ECS service with a few resource tags through a CloudFormation template. Once the service is up and running, is there a way I can access these AWS tags from within the container?
I was wondering if there is a way to make them available as environment variables in the container.
Run one of the following commands from within the container:
list-tags-for-resource CLI command with the task ARN:
aws ecs list-tags-for-resource --resource-arn arn:aws:ecs:<region>:<Account_Number>:task/test/186de825c8EXAMPLE10bf1c3bb142
list-tags-for-resource CLI command with the service ARN:
aws ecs list-tags-for-resource --resource-arn arn:aws:ecs:<region>:<Account_Number>:service/test/service
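If you don't want to hard-code the ARN, a hedged sketch (assuming the AWS CLI and jq are in the image and the task role allows ecs:ListTagsForResource) is to pull the task's own ARN from the task metadata endpoint first:
# look up this task's ARN from the task metadata v4 endpoint
TASK_ARN=$(curl -s "${ECS_CONTAINER_METADATA_URI_V4}/task" | jq -r '.TaskARN')
# list the tags attached to the task
aws ecs list-tags-for-resource --resource-arn "$TASK_ARN"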

How to make a multi-regional Kafka/Zookeeper cluster using multiple Google Kubernetes Engine (GKE) clusters?

I have 3 GKE clusters sitting in 3 different regions on Google Cloud Platform.
I would like to create a Kafka cluster which has one Zookeeper and one Kafka node (broker) in every region (each GKE cluster).
This setup is intended to survive regional failure (I know a whole GCP region going down is rare and highly unlikely).
I am trying this setup using the Helm chart provided by Incubator.
I tried this setup manually on 3 GCP VMs following this guide and I was able to do it without any issues.
However, setting up a Kafka cluster on Kubernetes seems complicated.
As we know, we have to provide the IPs of all the ZooKeeper servers in each ZooKeeper configuration file, like below:
...
# list of servers
server.1=0.0.0.0:2888:3888
server.2=<IP of second server>:2888:3888
server.3=<IP of third server>:2888:3888
...
As I can see, the config-script.yaml file in the Helm chart has a script which creates the ZooKeeper configuration file for every deployment.
The part of the script which echoes the ZooKeeper servers looks something like below:
...
for (( i=1; i<=$ZK_REPLICAS; i++ ))
do
echo "server.$i=$NAME-$((i-1)).$DOMAIN:$ZK_SERVER_PORT:$ZK_ELECTION_PORT" >> $ZK_CONFIG_FILE
done
...
As of now, with one replica (replica here means Kubernetes Pod replicas), the configuration that this Helm chart creates contains only the ZooKeeper server below.
...
# "release-name" is the name of the Helm release
server.1=release-name-zookeeper-0.release-name-zookeeper-headless.default.svc.cluster.local:2888:3888
...
At this point I am clueless: what do I need to do so that all the ZooKeeper servers get included in the configuration file?
How should I modify the script?
I see you are trying to create a 3-node ZooKeeper cluster on top of 3 different GKE clusters.
This is not an easy task and I am sure there are multiple ways to achieve it,
but I will show you one way in which it can be done, and I believe it should solve your problem.
The first thing you need to do is create a LoadBalancer Service for every ZooKeeper instance (a minimal sketch follows this paragraph).
After the LoadBalancers are created, note down the IP addresses that got assigned
(remember that by default these IP addresses are ephemeral, so you might want to change them to static later).
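The sketch below shows one such per-pod Service; the Service name mirrors the chart's naming and is an assumption, but the statefulset.kubernetes.io/pod-name label is added to every pod by the StatefulSet controller, which is what lets a Service target a single ZooKeeper pod. You would create one of these per ZooKeeper pod in each GKE cluster:
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Service
metadata:
  name: release-name-zookeeper-0-lb   # hypothetical name; one Service per pod
spec:
  type: LoadBalancer
  selector:
    statefulset.kubernetes.io/pod-name: release-name-zookeeper-0
  ports:
    - name: client
      port: 2181
    - name: server
      port: 2888
    - name: leader-election
      port: 3888
EOF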
The next thing to do is to create a private DNS zone
on GCP and create an A record for every ZooKeeper LoadBalancer endpoint, e.g.:
release-name-zookeeper-1.zookeeper.internal.
release-name-zookeeper-2.zookeeper.internal.
release-name-zookeeper-3.zookeeper.internal.
In the GCP console, the Cloud DNS zone would then show these three A records.
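A hedged sketch of the corresponding gcloud commands (the zone name, the network, and the <LB_IP_n> placeholders are assumptions; substitute the LoadBalancer IPs you noted down):
# create a private Cloud DNS zone visible to the VPC network
gcloud dns managed-zones create zookeeper-internal \
  --dns-name="zookeeper.internal." \
  --visibility=private \
  --networks=default \
  --description="Private zone for cross-cluster ZooKeeper discovery"
# one A record per ZooKeeper LoadBalancer IP
gcloud dns record-sets create release-name-zookeeper-1.zookeeper.internal. \
  --zone=zookeeper-internal --type=A --ttl=300 --rrdatas=<LB_IP_1>
gcloud dns record-sets create release-name-zookeeper-2.zookeeper.internal. \
  --zone=zookeeper-internal --type=A --ttl=300 --rrdatas=<LB_IP_2>
gcloud dns record-sets create release-name-zookeeper-3.zookeeper.internal. \
  --zone=zookeeper-internal --type=A --ttl=300 --rrdatas=<LB_IP_3>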
After it's done, just modify this line:
...
DOMAIN=`hostname -d`
...
to something like this:
...
DOMAIN={{ .Values.domain }}
...
and remember to set the domain variable in the values file to zookeeper.internal,
so in the end it should look like this:
DOMAIN=zookeeper.internal
and it should generate the following config:
...
server.1=release-name-zookeeper-1.zookeeper.internal:2888:3888
server.2=release-name-zookeeper-2.zookeeper.internal:2888:3888
server.3=release-name-zookeeper-3.zookeeper.internal:2888:3888
...
Let me know if this is helpful.

Where does the kubectl command get executed?

I want to know where we usually execute the kubectl command.
Is it on the master node or a different node? I executed the kubectl command from one of the EC2 instances in AWS, and the master and worker nodes were completely different (3 nodes in total: 1 master and 2 worker nodes).
And when we create a cluster, does that cluster consist only of worker nodes, or does it include the master node too?
The kubectl command itself is a command-line utility that always executes locally; however, all it really does is issue commands against a Kubernetes server via its Kubernetes API.
Which Kubernetes server it acts against is determined by the local environment the command is run with. This is configured using a "kubeconfig" file, which is read from the path in the KUBECONFIG environment variable (or defaults to the file at $HOME/.kube/config). For more information see Configuring Access to Multiple Clusters.
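For example, a small sketch assuming the kubeconfig already has entries for your clusters; from whatever machine kubectl is installed on you can inspect and switch between them:
# show which contexts (cluster + user + namespace combinations) kubectl knows about
kubectl config get-contexts
# switch the active context to another cluster (context name here is hypothetical)
kubectl config use-context my-other-cluster
# or point a single invocation at a different kubeconfig file
KUBECONFIG=/path/to/other/kubeconfig kubectl get nodes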

best way to seed new machine with k8s/eks info

Say we have a couple of clusters on Amazon EKS. We have a new user or new machine that needs .kube/config to be populated with the latest cluster info.
Is there some easy way to get the context info from our clusters on EKS and put that info in the .kube/config file? Something like:
eksctl init "cluster-1-ARN" "cluster-2-ARN"
So after some web-sleuthing, I heard about:
aws eks update-kubeconfig
I tried that, and I get this:
$ aws eks update-kubeconfig
usage: aws [options] <command> <subcommand> [<subcommand> ...] [parameters]
To see help text, you can run:
  aws help
  aws <command> help
  aws <command> <subcommand> help
aws: error: argument --name is required
I would think it would just update for all clusters then, but it doesn't. So I put in the cluster names/ARNs, like so:
aws eks update-kubeconfig --name arn:aws:eks:us-west-2:913xxx371:cluster/eks-cluster-1
aws eks update-kubeconfig --name arn:aws:eks:us-west-2:913xxx371:cluster/ignitecluster
but then I get:
kbc stderr: An error occurred (ResourceNotFoundException) when calling the DescribeCluster operation: No cluster found for name: arn:aws:eks:us-west-2:913xxx371:cluster/eks-cluster-1.
kbc stderr: An error occurred (ResourceNotFoundException) when calling the DescribeCluster operation: No cluster found for name: arn:aws:eks:us-west-2:913xxx371:cluster/ignitecluster.
Hmmm, this is kinda dumb 😒 those cluster names exist... so what 🤷 do I do now?
So yeah those clusters I named don't actually exist. I discovered that via:
aws eks list-clusters
Ultimately, however, I still feel strongly that someone needs to make a tool that can just update your config with all the clusters that exist, instead of having you name them.
So to do this programmatically, it would be:
aws eks list-clusters | jq -r '.clusters[]' | while read -r c; do
  aws eks update-kubeconfig --name "$c"
done
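If the clusters span more than one region, a hedged variant (the region list is just an example) loops over regions as well:
for r in us-west-2 us-east-1; do   # example regions; adjust to yours
  aws eks list-clusters --region "$r" | jq -r '.clusters[]' | while read -r c; do
    aws eks update-kubeconfig --region "$r" --name "$c"
  done
done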
In my case, I was working with two AWS environments. My ~/.aws/credentials was pointing to one and had to be changed to point to the correct account. Once you change the account details, you can verify the change by running the following command:
eksctl get clusters
and then set the kubeconfig using the command below, after verifying the region:
aws eks --region your_aws_region update-kubeconfig --name your_eks_cluster
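If you juggle multiple AWS accounts, a sketch using named profiles (the profile name here is an assumption) avoids editing ~/.aws/credentials each time:
# list clusters in the other account without switching the default credentials
aws eks list-clusters --profile prod --region your_aws_region
# write the kubeconfig entry using that same profile
aws eks update-kubeconfig --profile prod --region your_aws_region --name your_eks_cluster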