AWS ECS Fargate: How to create a service with auto scaling using the service API - amazon-ecs

I do not understand how I can set up auto scaling for an ECS Fargate service with the API. I create my service with code like this, using the ECS CreateService API:
{
    "serviceName": "my-service",
    "cluster": "my-cluster",
    "taskDefinition": "my-task",
    "desiredCount": 1,
    "launchType": "FARGATE",
    "loadBalancers": [
        {
            "targetGroupArn": my_target_group_arn,
            "containerName": "my-container-nginx",
            "containerPort": 8090
        }
    ],
    "networkConfiguration": {
        "awsvpcConfiguration": {
            "subnets": settings.AWS_SUBNET_IDS,
            "securityGroups": [securitygroup_id]
        }
    },
}
How do I add the auto scaling that I can configure so easily in the AWS console in the browser? Do I have to create a capacityProvider?

This documentation walks you through configuring a service auto scaling policy for ECS via the CLI. Note that the console often provides a "macro" experience that makes all the steps easier, whereas doing the same with the CLI/API is often multiple steps (as outlined in the doc).
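You do not need a capacity provider for this; service auto scaling goes through the separate Application Auto Scaling API. As a rough boto3 sketch of those steps (the min/max capacity and target value are made-up numbers, and the cluster/service names just mirror your example), it looks something like:
import boto3

autoscaling = boto3.client("application-autoscaling")

# Register the service's DesiredCount as a scalable target.
autoscaling.register_scalable_target(
    ServiceNamespace="ecs",
    ResourceId="service/my-cluster/my-service",
    ScalableDimension="ecs:service:DesiredCount",
    MinCapacity=1,
    MaxCapacity=4,
)

# Attach a target-tracking policy that keeps average CPU around 75%.
autoscaling.put_scaling_policy(
    PolicyName="cpu-target-tracking",
    ServiceNamespace="ecs",
    ResourceId="service/my-cluster/my-service",
    ScalableDimension="ecs:service:DesiredCount",
    PolicyType="TargetTrackingScaling",
    TargetTrackingScalingPolicyConfiguration={
        "TargetValue": 75.0,
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ECSServiceAverageCPUUtilization"
        },
        "ScaleOutCooldown": 60,
        "ScaleInCooldown": 60,
    },
)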

Related

Why isn't my `KubernetesPodOperator` using the IRSA I've annotated worker pods with?

I've deployed an EKS cluster using the Terraform module terraform-aws-modules/eks/aws. I’ve deployed Airflow on this EKS cluster with Helm (the official chart, not the community one), and I’ve annotated worker pods with the following IRSA:
serviceAccount:
  # Specifies whether a ServiceAccount should be created
  create: true
  # The name of the ServiceAccount to use.
  # If not set and create is true, a name is generated using the release name
  name: "airflow-worker"
  # Annotations to add to worker kubernetes service account.
  annotations:
    eks.amazonaws.com/role-arn: "arn:aws:iam::123456789:role/airflow-worker"
This airflow-worker role has a policy attached to it to enable it to assume a different role.
I have a Python program that assumes this other role and performs some S3 operations. I can exec into a running BashOperator pod, open a Python shell, assume this role, and issue the exact same S3 operations successfully.
But, when I create a Docker image with this program and try to call it from a KubernetesPodOperator task, I see the following error:
botocore.exceptions.ClientError: An error occurred (AccessDenied) when calling the AssumeRole operation:
User: arn:aws:sts::123456789:assumed-role/core_node_group-eks-node-group-20220726041042973200000001/i-089c64b96cf7878d8 is not authorized to perform: sts:AssumeRole on resource: arn:aws:iam::987654321:role/TheOtherRole
I don't really know what this role is, but I believe it was created automatically by the Terraform module. However, when I kubectl describe one of these failed pods, I see this:
Environment:
...
...
...
AWS_ROLE_ARN: arn:aws:iam::123456789:role/airflow-worker
My questions:
Why is this role being used, and not the IRSA airflow-worker that I've specified in the Helm chart's values?
What even is this role? It seems the Terraform module creates a number of roles automatically, but it is very difficult to tell what their purpose is or where they're used from the Terraform documentation.
How am I able to assume this role and do everything the Dockerized Python program does when in a shell in the pod? Okay, this is because other operators (such as BashOperator) do use the airflow-worker role. Just not KubernetesPodOperators.
What is the AWS_ROLE_ARN environment variable, and why isn't it being used?
Happy to provide more context if it's helpful.
In order for the EKS pod to assume the other AWS role, you need to add this trust policy to that role, so that airflow-worker is allowed to assume it:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::123456789:role/airflow-worker"
            },
            "Action": "sts:AssumeRole"
        }
    ]
}
Here you can find some information about AWS Security Token Service (STS).
Tasks running in the worker pod will use the role automatically, but when you create a new pod it is separate from your worker pod, so you need to make it use the service account that has the role attached in order for the AWS role credentials to be injected into that pod.
This is pretty much by design. The non-KubernetesPodOperator operators use an auto-generated pod template file that takes the Helm chart values as default properties, while the KubernetesPodOperator needs its own pod template file. That, or it needs to essentially create one by passing arguments to KubernetesPodOperator(...).
I fixed the ultimate issue by passing service_account_name="airflow-worker" to KubernetesPodOperator(...).
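For illustration, a minimal sketch of such a task (the import path can vary by provider version, and the task_id, namespace, and image below are placeholders, not taken from the question):
from airflow.providers.cncf.kubernetes.operators.kubernetes_pod import (
    KubernetesPodOperator,
)

# Hypothetical task: run the Dockerized program with the IRSA-annotated
# service account so the airflow-worker web-identity credentials are mounted.
run_s3_program = KubernetesPodOperator(
    task_id="run_s3_program",
    name="run-s3-program",
    namespace="airflow",
    image="my-registry/my-program:latest",  # placeholder image
    service_account_name="airflow-worker",
    get_logs=True,
)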

AWS CDK Stack is deployed silently even if not explicitly specified

I have a multi-stack CDK setup: the core stack contains the VPC and EKS, and a "tenant" stack deploys some S3 buckets, Kubernetes namespaces, and some other tenant-related resources.
cdk ls is displaying all the existing stacks as expected.
- eks-stack
- tenant-a
- tenant-b
If I want to deploy only a single tenant stack, I run cdk deploy tenant-a. To my surprise, I see that in my k8s cluster the manifests of both tenant-a and tenant-b were deployed, not just tenant-a as I expected.
The CDK output on the CLI correctly reports that tenant-a was deployed and doesn't mention tenant-b at all. I also see that most of the changes happened inside the eks stack rather than in the tenant stack, since I am passing references.
# app.py
# ...
# EKS
eks_cluster_stack = EksStack(
    app,
    "eks-stack",
    stack_log_level="INFO",
)
# Tenant-specific stacks
tenants = ['tenant-a', 'tenant-b']
for tenant in tenants:
    tenant_stack = TenantStack(
        app,
        f"tenant-stack-{tenant}",
        stack_log_level="INFO",
        cluster=eks_cluster_stack.eks_cluster,
        tenant=tenant,
    )
--
#
# Inside TenantStack.py a manifest is applied to k8s
self.cluster.add_manifest(f'db-job-{self.tenant}', {
    "apiVersion": 'v1',
    "kind": 'Pod',
    "metadata": {"name": 'mypod'},
    "spec": {
        "serviceAccountName": "bootstrap-db-job-access-ssm",
        "containers": [
            {
                "name": 'hello',
                "image": 'amazon/aws-cli',
                "command": ['magic stuff ....']
            }
        ]
    }
})
I found out that when I import the cluster by its attributes instead of passing it by reference, e.g.
self.cluster = Cluster.from_cluster_attributes(
    self, 'cluster', cluster_name=cluster,
    open_id_connect_provider=eks_open_id_connect_provider,
    kubectl_role_arn=kubectl_role,
)
I can deploy tenant stacks a and b separately and my core eks stack stays untouched. Now, I have read that it's recommended to use references, as CDK can then automatically create dependencies and detect circular dependencies.
There is an option to exclude dependencies: use cdk deploy tenant-a --exclusively to deploy only the requested stack without its dependencies.

Is it possible to create a Cluster Autoscaler in a kubeadm cluster?

I have created a Kubernetes cluster with the kubeadm tool on AWS. What are all the possible ways to autoscale the nodes?
I don't know about all the ways to scale a Kubernetes cluster on AWS. However, I would argue the best approach is to use the Kubernetes Cluster Autoscaler. It can dynamically scale a cluster out based on the pods scheduled in the cluster (as opposed to something like Auto Scaling groups alone, which can only scale based on node resource usage). Even for AWS EKS it is now the documented approach to autoscaling.
If you have an Auto Scaling group for the worker nodes in your AWS cluster, then you can certainly install the Kubernetes Cluster Autoscaler.
Please check out this cluster-autoscaler bash script; although it is written for kops, you can use it to learn which steps and resources you have to configure to install the autoscaler.
First, create the IAM policy:
printf " a) Creating IAM policy to allow aws-cluster-autoscaler access to AWS autoscaling groups…\n"
cat > asg-policy.json << EOF
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"autoscaling:DescribeAutoScalingGroups",
"autoscaling:DescribeAutoScalingInstances",
"autoscaling:DescribeLaunchConfigurations",
"autoscaling:DescribeTags",
"autoscaling:SetDesiredCapacity",
"autoscaling:TerminateInstanceInAutoScalingGroup"
],
"Resource": "*"
}
]
}
EOF
Then create a role and attach the policy to it:
aws iam attach-role-policy --policy-arn $ASG_POLICY_ARN --role-name $IAM_ROLE
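If you are scripting these IAM steps from Python instead of the shell, a rough boto3 equivalent (the policy and role names below are placeholders) would be:
import boto3

iam = boto3.client("iam")

# Create the policy from the JSON document written above.
with open("asg-policy.json") as f:
    policy_document = f.read()

policy = iam.create_policy(
    PolicyName="aws-cluster-autoscaler-asg-access",  # placeholder name
    PolicyDocument=policy_document,
)

# Attach it to the IAM role used by the worker nodes.
iam.attach_role_policy(
    RoleName="my-worker-node-role",  # placeholder role name
    PolicyArn=policy["Policy"]["Arn"],
)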
Then download an example YAML manifest from here (pick the one you need) and edit it to replace --nodes=1:10:k8s-worker-asg-1 with your Auto Scaling group's min size, max size, and name.
For more details, please check this document.

Kubernetes REST API to change an existing secret/configmap in a Pod

I have deployed a Pod using the Kubernetes REST API: POST /api/v1/namespaces/{namespace}/pods
The request body contains the pod spec, with volumes, something like the below:
{
    "kind": "Pod",
    "apiVersion": "v1",
    "metadata": {
        "name": "test",
        "namespace": "default"
    },
    "spec": {
        "volumes": [
            {
                "name": "test-secrets",
                "secret": {
                    "secretName": "test-secret-one"
                }
            }
        ],
        "containers": [
            <<container json>>.........
        ]
    }
}
Now I want to change the secret name from test-secret-one to test-secret-two for the Pod.
How can I achieve this, and which REST API do I need to use?
With the Patch REST API I can change the container image, but it can't be used for volumes. If it can, can you give me an example or a reference?
Is there any Kubernetes REST API to restart the Pod? Note that we are not using a Deployment object model; it is deployed directly as a Pod, not as a Deployment.
Can anyone help here?
I'm posting the answer as Community Wiki, as the solution came from @Matt in the comments.
Volumes aren't updatable fields, you will need to recreate the pod
with the new spec.
The answer to most of your questions is: use a Deployment and patch it.
The Deployment will manage the updates and restarts for you.
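As an illustration only (the Deployment name, namespace, and volume name below are assumptions based on your pod spec), patching the volume in a Deployment's pod template with the official Python client could look like this; the Deployment then rolls the pods for you:
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() inside the cluster
apps = client.AppsV1Api()

# Strategic-merge patch: point the "test-secrets" volume at the new Secret.
patch = {
    "spec": {
        "template": {
            "spec": {
                "volumes": [
                    {
                        "name": "test-secrets",
                        "secret": {"secretName": "test-secret-two"},
                    }
                ]
            }
        }
    }
}
apps.patch_namespaced_deployment(name="test", namespace="default", body=patch)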
A different approach is also possible and was suggested by @Kitt:
If you only update the content of the Secret or ConfigMap instead of
renaming it, the mounted volumes will be refreshed by the kubelet
within the --sync-frequency interval (1m by default).
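A rough sketch of that in-place update with the Python client (the secret name matches your spec, but the data key and value are made up):
import base64
from kubernetes import client, config

config.load_kube_config()
core = client.CoreV1Api()

# Update the existing Secret's data in place; volumes that mount it are
# refreshed by the kubelet, so the pod does not need to be recreated.
new_value = base64.b64encode(b"new-password").decode()
core.patch_namespaced_secret(
    name="test-secret-one",
    namespace="default",
    body={"data": {"password": new_value}},
)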

Accessing Kubernetes API using username and password

I'm currently configuring a Heketi server (deployed on K8S cluster ClusterA) to interact with my GlusterFS cluster, which is deployed as a DaemonSet on another K8S cluster, ClusterB.
Part of the configuration required by Heketi to connect to the GlusterFS K8S cluster is:
"kubeexec": {
"host" :"https://<URL-OF-CLUSTER-WITH-GLUSTERFS>:6443",
"cert" : "<CERTIFICATE-OF-CLUSTER-WITH-GLUSTERFS>",
"insecure": false,
"user": "WHERE_DO_I_GET_THIS_FROM",
"password": "<WHERE_DO_I_GET_THIS_FROM>",
"namespace": "default",
"backup_lvm_metadata": false
},
As you can see, it requires a user and password. I have no idea where to get that from.
One thing that comes to mind is creating a service account on ClusterB and using its token to authenticate, but Heketi does not seem to accept that as an authentication mechanism.
The cert is something that I got from /usr/local/share/ca-certificates/kube-ca.crt, but I have no idea where to get the user/password from. Any idea what could be done?
If I do a kubectl config view I only see certificates for the admin user of my cluster.
That could only mean one thing: basic HTTP auth.
You can specify a username/password in a file when you start the kube-apiserver with the --basic-auth-file=SOMEFILE option.
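For reference, the file passed via --basic-auth-file is a CSV where each line is password,username,uid, optionally followed by a quoted list of groups; an entirely made-up example entry would be:
MySecretPassword,heketi-user,1001,"heketi-admins"
The username and password from such an entry are what would go into Heketi's "user" and "password" fields. Note that static password files were deprecated and then removed in Kubernetes 1.19, so this only applies to older clusters.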