Increase ECS Fargate ephemeral storage using EFS - amazon-ecs

One of my applications running on ECS (with Fargate) needs more storage; the 20 GB of ephemeral storage is not sufficient for my application, so I am planning to use an EFS volume:
volume {
  name = "efs-test-space"

  efs_volume_configuration {
    file_system_id     = aws_efs_file_system.efs_apache.id
    root_directory     = "/"
    transit_encryption = "ENABLED"
    # Mounted in the container at /home/user/efs/ via mountPoints
    # (containerPath) in the container definition.

    authorization_config {
      access_point_id = aws_efs_access_point.efs-access-point.id
      iam             = "ENABLED"
    }
  }
}
I can see it is mounted and my application is able to access the mounted folder. But for HA and parallelism my ECS task count is 6, and since I am using one EFS file system, the same volume is shared by all tasks. So the problem I am stuck on is providing a unique mounted EFS file path for each task.
I added something like /home/user/efs/{random_id}, but I want to make this part of the task lifecycle, i.e. this folder should get deleted when the task is stopped or destroyed.
So is there a way to mount EFS as a bind mount, or to have the folder deleted during the task destroy stage?

You can now increase your ephemeral storage size up to 200 GiB. All you need to do is set the ephemeralStorage parameter in the Fargate task definition:
"ephemeralStorage": {
    "sizeInGiB": 100
}
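If you are defining the task in Terraform, as in the question above, the same setting is exposed as an ephemeral_storage block on aws_ecs_task_definition. A minimal sketch, assuming a hypothetical task definition named "app" (the container definition here is only a placeholder):

resource "aws_ecs_task_definition" "app" {
  family                   = "app"
  requires_compatibilities = ["FARGATE"]
  network_mode             = "awsvpc"
  cpu                      = 1024
  memory                   = 4096

  # Placeholder container; keep your real container_definitions here.
  container_definitions = jsonencode([
    {
      name      = "app"
      image     = "nginx:latest"
      essential = true
    }
  ])

  # Task-scoped ephemeral storage, 21-200 GiB (the default is 20 GiB).
  ephemeral_storage {
    size_in_gib = 100
  }
}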

While this could be achieved in theory, there are a lot of moving parts to it, because the life cycle of EFS (and Access Points) is decoupled from the life cycle of tasks. This means you would need to create Access Points out of band and on the fly, AND these Access Points (and their data) are not automatically deleted when you tear down your tasks. The Fargate/EFS integration did not have this as a primary use case. The primary use cases were more around sharing the same data among different tasks (which is what you are observing, but it doesn't serve your use case!), in addition to providing persistency for a single task.
What you need to solve your problem easily is a new feature the Fargate team is working on right now that will allow you to expand your local ephemeral storage as a property of the task. I can't say more about the timing, but the feature is actively being developed, so you may want to consider waiting for it rather than building a complex workflow to achieve the same result.
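For completeness, if you did go the out-of-band route, the per-task isolation would look roughly like the sketch below: one aws_efs_access_point per task slot, each jailed to its own directory (the resource name, owner IDs, paths and the count of 6 are assumptions based on the question). Note that deleting an access point does not delete the data under its root directory, so cleanup on task teardown would still need a separate step.

# Hypothetical sketch: one access point per task slot, each rooted at its own directory.
resource "aws_efs_access_point" "per_task" {
  count          = 6  # matches the desired task count from the question
  file_system_id = aws_efs_file_system.efs_apache.id

  root_directory {
    path = "/tasks/task-${count.index}"
    creation_info {
      owner_uid   = 1000
      owner_gid   = 1000
      permissions = "750"
    }
  }
}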

Related

Persistent Volume and Persistent Volume Claim in Kubernetes

I want to know: can we create multiple disks on a K8s cluster? Currently I see the below configuration on my cluster:
"disks": [
  {
    "prefix": "abc-storage",
    "count": 1,
    "size": 10,
    "disk-type": "StandardSSD_LRS"
  }
]
And now the ask is to create a fresh storage disk with a 2 GB minimum size, with labels, and the same storage class as the existing "abc-storage".
Does it mean I will create one more similar disks block and change the size to 2, or change the existing block from 10 to 2?
Create with labels so that it is useful to refer to under the selector section? I am not able to understand this point.

Scale stateful set with shared volume per AZ

I would like to scale a Kubernetes stateful set in EKS which serves static data. The data should be shared where possible, which means per availability zone.
Can I specify a volume claim template to make this possible? Or is there another mechanism?
I also need to initialize the volume (in an init-container) when the first node joins. Do I need to provide some external locking mechanism, rather than just checking whether the volume is empty?
If you want your pods to share static data, then you can:
Put the data into the Persistent Volume
Mount the same volume into the Pods (with ReadOnlyMany)
A few comments:
In the Kubernetes documentation you can find the list of volume types, so you can choose the one that fits your needs
Since every pod serves the same data, you may use a Deployment instead of a StatefulSet; StatefulSets are for when each of your pods is different
If you need the first pod to initialize the data, then you can use an initContainer (and use ReadWriteMany instead of ReadOnlyMany). Depending on what exactly you are trying to do, you may be able to first initialize the data and then start your Deployment (and Pods); then you would not need to lock anything
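To make the shared read-only mount concrete, here is a minimal sketch using the Terraform Kubernetes provider (to stay consistent with the other snippets on this page); the storage class, image, sizes and names are assumptions, and the access mode must be supported by your storage backend:

# Hypothetical sketch: one PVC shared read-only by all replicas of a Deployment.
resource "kubernetes_persistent_volume_claim" "static_data" {
  metadata {
    name = "static-data"
  }
  spec {
    access_modes       = ["ReadOnlyMany"]
    storage_class_name = "efs-sc"  # assumption: a class whose backend supports shared mounts
    resources {
      requests = {
        storage = "5Gi"
      }
    }
  }
}

resource "kubernetes_deployment" "static_server" {
  metadata {
    name = "static-server"
  }
  spec {
    replicas = 3
    selector {
      match_labels = { app = "static-server" }
    }
    template {
      metadata {
        labels = { app = "static-server" }
      }
      spec {
        container {
          name  = "web"
          image = "nginx:1.25"
          # Every replica mounts the same claim read-only.
          volume_mount {
            name       = "static-data"
            mount_path = "/usr/share/nginx/html"
            read_only  = true
          }
        }
        volume {
          name = "static-data"
          persistent_volume_claim {
            claim_name = kubernetes_persistent_volume_claim.static_data.metadata[0].name
            read_only  = true
          }
        }
      }
    }
  }
}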

Is it possible to share secrets among ECS Container Definitions in Cloud-formation?

I have a requirement to create 5 container definitions which have common environment variables as secrets, but the count of those secrets is around 50. So instead of duplicating these 50 secrets, is there a way I can create all of them as a single resource and refer to them in all container definitions?
You could use AWS Systems Manager Parameter Store. It's intended for this kind of situation, for sharing config values between multiple containers, Lambda functions, etc.
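The question mentions CloudFormation, but the pattern is the same in any template language: define the Parameter Store references once and splice them into every container definition's secrets list. A rough sketch of that pattern in Terraform (the parameter names, ARNs, images and the execution role are assumptions; the execution role must be allowed to call ssm:GetParameters):

locals {
  # Define the shared secret references once; each entry points at an SSM parameter.
  shared_secrets = [
    for name in ["DB_PASSWORD", "API_KEY", "SIGNING_SECRET"] : {
      name      = name
      valueFrom = "arn:aws:ssm:us-east-1:123456789012:parameter/myapp/${name}"
    }
  ]
}

resource "aws_ecs_task_definition" "shared_secrets_example" {
  family                   = "shared-secrets-example"
  requires_compatibilities = ["FARGATE"]
  network_mode             = "awsvpc"
  cpu                      = 256
  memory                   = 512
  execution_role_arn       = aws_iam_role.task_execution.arn  # assumption: role with ssm:GetParameters

  container_definitions = jsonencode([
    {
      name      = "web"
      image     = "nginx:latest"
      essential = true
      secrets   = local.shared_secrets  # reused instead of duplicated
    },
    {
      name      = "worker"
      image     = "busybox:latest"
      essential = false
      secrets   = local.shared_secrets
    }
  ])
}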

How does Cloud Data Fusion decide which project network to use for the dataproc resources?

I have a project with 4 VPC networks. I created a GCDF instance and had expected that the "default" network would be picked, but I see that another one was picked (the first one alphabetically). Is this the algorithm, the alphabetical order of names?
Is there a way to specify the network to be used? That would be very useful, since I would like to isolate the network where those VMs run.
Your observation is correct: the current implementation selects the network alphabetically. To use a specific network, there are multiple options:
Create a Dataproc compute profile that uses the default or any other VPC network you have already created.
Set system.profile.properties.network=default as a system preference.

How to Use Data Disk from Service Fabric

I have a Service Fabric .NET application that needs more temporary storage than is available on the instance. The current instance allows for up to 4 data disks. I see how to allocate these from the scale set, but not how to specify mounts or permissions, or even how to properly access them from the file API. I was surprised to find very little in the way of documentation on this, so any help is appreciated.
Once you get into the instance of the scale set, you can add a new disk to the VM using the instructions here:
https://learn.microsoft.com/en-us/azure/virtual-machines/windows/attach-disk-ps
After that you can access the disks using standard IO, e.g. System.IO.File.ReadAllText.
However, it is not recommended to change the primary node type; for non-primary node types it is better to create a new VM scale set and gradually move services from the old scale set to the new one by updating placement properties.