I want to mount EFS to ECS, but I don't have the EFS volume type option when creating an ECS task definition for an EC2 cluster using the console - amazon-ecs

Info says: "Currently, only the bind mount volume type and EFS volume type are supported". But there is no option for EFS.
I read the AWS article https://aws.amazon.com/premiumsupport/knowledge-center/efs-mount-on-ecs-container-or-task/ :
"5. Enter the name of the volume, and then select EFS from the Volume types drop-down menu."
I created a security group for EFS that allows access on port 2049 from the security group of my EC2 cluster.
I created the EFS file system in the same VPC and subnets, and attached the EFS security group to all mount targets.
But that doesn't help: the console doesn't offer me the EFS volume type.

Now that the new ECS experience is live, the updated console also has EFS support. Fill in the required details in Step 1 and click Next.
In Step 2, under Storage, click "Add volume" and select EFS as the volume type.
The new ECS experience (toggle in the top-left corner) exposes a fairly half-baked UI for the available options. If you disable it, use the old experience, create a new task definition, and scroll all the way to the bottom, there is an option to add Volumes, where you can select EFS as the volume type.
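If the console keeps hiding the option, you can bypass it entirely and register the task definition through the CLI with an efsVolumeConfiguration. A minimal sketch, where the family, container details, and the fs-12345678 file system ID are placeholders for your own values:

```json
{
  "family": "efs-example",
  "containerDefinitions": [
    {
      "name": "app",
      "image": "nginx",
      "memory": 256,
      "mountPoints": [
        { "sourceVolume": "efs-data", "containerPath": "/mnt/efs" }
      ]
    }
  ],
  "volumes": [
    {
      "name": "efs-data",
      "efsVolumeConfiguration": {
        "fileSystemId": "fs-12345678",
        "rootDirectory": "/",
        "transitEncryption": "ENABLED"
      }
    }
  ]
}
```

Saved as taskdef.json, this can be registered with `aws ecs register-task-definition --cli-input-json file://taskdef.json`, independently of which console experience you are on.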


AWS ECS Fargate: enforce readonlyRootFilesystem

I need to enforce 'readonlyRootFilesystem' on my ECS Fargate services to reduce Security Hub findings.
I thought it would be an easy task: just set it to true in the task definition.
But it backfired: the service does not deploy, because the commands in the Dockerfile fail without write access to their folders, and the setting is also incompatible with SSM execute-command, so I can't get inside the container.
I managed to set readonlyRootFilesystem to true and get my service back up by mounting volumes: a tmp volume that the container uses to install dependencies at start, and a data volume to store data (updates).
So now, according to the documentation, the Security Hub finding should be fixed, as the rule only requires that variable not to be false, but Security Hub is still flagging the task as non-compliant.
---More update---
The task definition of my service also spins up a Datadog image for monitoring. That container also needs a read-only filesystem to satisfy Security Hub.
Here I cannot apply the same fix as above, because the Datadog agent needs access to the /etc/ folder, and if I mount a volume there I lose its files and the service won't start.
Is there a way out of this?
Any ideas?
In case someone stumbles into this.
The solution (or workaround, call it as you please) was to set readonlyRootFilesystem to true for both the container and the sidecar (Datadog in this case) and use bind mounts.
The rules for monitoring ECS using Datadog can be found here.
The bind mounts that you need to add for your service depend on how you have set up your Dockerfile.
In my case it was about adding a volume for downloading data.
Moreover, since ECS Exec (SSM) does not work with a read-only filesystem, if you want it you also have to add mounts: I added two mounts, /var/lib/amazon and /var/log/amazon. This allows SSM to work (basically a docker exec into your container).
As for Datadog, I just needed to fix the mounts so that the agent could work. In my case, since it was again a custom image, I mounted a volume on /etc/datadog-agent.
happy days!
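For reference, a sketch of what the relevant part of such a task definition can look like. The container name, image, and mount paths are illustrative (the SSM paths follow the answer above): readonlyRootFilesystem is enabled, and ephemeral task volumes are bind-mounted over the paths that still need to be writable.

```json
{
  "containerDefinitions": [
    {
      "name": "app",
      "image": "my-app-image",
      "readonlyRootFilesystem": true,
      "mountPoints": [
        { "sourceVolume": "tmp", "containerPath": "/tmp" },
        { "sourceVolume": "ssm-lib", "containerPath": "/var/lib/amazon" },
        { "sourceVolume": "ssm-log", "containerPath": "/var/log/amazon" }
      ]
    }
  ],
  "volumes": [
    { "name": "tmp" },
    { "name": "ssm-lib" },
    { "name": "ssm-log" }
  ]
}
```

On Fargate, volumes declared with only a name like this are ephemeral bind mounts scoped to the task, which is exactly what you want for scratch and agent state directories.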

CloudFormation template to attach multiple filesystems to a single EBS volume

Can someone point me in the right direction: how can I create multiple ext4 filesystems on the same EBS volume in a CloudFormation template?
Should I use AWS::CloudFormation::Init to run commands?
For a running instance, you can use AWS::EC2::VolumeAttachment, which:
"Attaches an Amazon EBS volume to a running instance and exposes it to the instance with the specified device name."
Obviously, attaching the volume is only the first step. If it's a brand-new volume, you have to format it and mount it from inside the instance.
For a new instance created with CFN, you can use AWS::CloudFormation::Init or User Data to run the commands that format and/or mount the volumes.
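A minimal sketch of that approach, with all names, the AMI ID, and device/mount paths as placeholders: one EBS volume is attached via AWS::EC2::VolumeAttachment, then partitioned into two ext4 filesystems from User Data.

```yaml
Resources:
  Instance:
    Type: AWS::EC2::Instance
    Properties:
      ImageId: ami-xxxxxxxx        # placeholder AMI
      InstanceType: t3.micro
      UserData:
        Fn::Base64: |
          #!/bin/bash
          # Wait for the attachment to appear, then split it into two partitions
          while [ ! -b /dev/xvdf ]; do sleep 5; done
          parted -s /dev/xvdf mklabel gpt mkpart p1 ext4 0% 50% mkpart p2 ext4 50% 100%
          mkfs -t ext4 /dev/xvdf1
          mkfs -t ext4 /dev/xvdf2
          mkdir -p /data1 /data2
          mount /dev/xvdf1 /data1
          mount /dev/xvdf2 /data2
  DataVolume:
    Type: AWS::EC2::Volume
    Properties:
      Size: 20
      AvailabilityZone: !GetAtt Instance.AvailabilityZone
  DataVolumeAttachment:
    Type: AWS::EC2::VolumeAttachment
    Properties:
      InstanceId: !Ref Instance
      VolumeId: !Ref DataVolume
      Device: /dev/xvdf
```

Note that the formatting/mounting only runs on first boot; for idempotent or repeatable configuration you'd move those commands into AWS::CloudFormation::Init.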

How to mimic Docker ability to pre-populate a volume from a container directory with Kubernetes

I am migrating my previous deployment made with docker-compose to Kubernetes.
In my previous deployment, some containers have data created at build time in certain paths, and those paths are mounted as persistent volumes.
Therefore, as the Docker volume documentation states, the persistent volume (not a bind mount) is pre-populated with the content of the container directory.
I'd like to achieve this behavior with Kubernetes and its persistent volumes. How can I do this? Do I need to add some kind of logic, using scripts, to copy my container's files to the mounted path the first time the container starts, when the data is not yet present?
Possibly related question: Kubernetes mount volume on existing directory with files inside the container
I think your options are
ConfigMap (are "some data" configuration files?)
Init containers (as mentioned)
CSI volume cloning (cloning combined with an init container or your first app container)
There used to be a gitRepo volume, deprecated in favour of init containers, from which you could clone your config and data
HostPath volume mount is an option too
An NFS volume is probably a very reasonable option and similar from an approach point of view to your Docker Volumes
Storage types: NFS, iSCSI, awsElasticBlockStore, gcePersistentDisk, and others can be pre-populated. There are constraints. NFS is probably the most flexible for sharing bits and bytes.
FYI
The subPath might be of interest too depending on your use case and
PodPreset might help in streamlining the op across the fleet of your pods
HTH
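The init-container option from the list above can be sketched like this. The image name, PVC name, and paths are made up; the idea is that the same image whose /app/data was populated at build time copies that data into the persistent volume before the app container starts:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: prepopulate-demo
spec:
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: my-pvc              # hypothetical existing PVC
  initContainers:
    - name: seed
      image: my-app-image              # image containing build-time data in /app/data
      # -a preserves attributes, -n avoids overwriting files already on the volume,
      # so re-running the pod doesn't clobber updated data
      command: ["sh", "-c", "cp -an /app/data/. /mnt/seed/"]
      volumeMounts:
        - name: data
          mountPath: /mnt/seed         # PV mounted somewhere other than /app/data
    - name: app
      image: my-app-image
      volumeMounts:
        - name: data
          mountPath: /app/data         # now pre-populated, mirroring Docker's behavior
```

The trick is that the init container mounts the volume at a different path than the original data, so the build-time files are still visible and can be copied across.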

Kubernetes: dynamic persistent volume provisioning using iSCSI and NFS

I am successfully using Kubernetes 1.4 persistent volume support with iSCSI/NFS PVs and PVCs in my containers. However, this requires provisioning the storage up front, specifying the capacity both at PV creation and when claiming the storage.
My requirement is to just provide storage to the cluster (without specifying the capacity up front) and let users/developers claim storage based on their requirements. So I need dynamic provisioning using a StorageClass: just declare the storage with its details and let developers claim it based on their needs.
However, I'm confused about using dynamic volume provisioning for iSCSI and NFS with a storage class, and I can't find exact steps to follow. As per the documentation, I need to use an external volume plugin for both of these types, and one has already been made available as part of the incubator project - https://github.com/kubernetes-incubator/external-storage/. But I don't understand how to load/run that external provisioner (I guess I need to run it as a container itself?) and then write the storage class with the details of the iSCSI/NFS storage.
Can somebody who has already done/used it can guide/provide pointers on this?
Thanks in advance,
picku
The project you pointed to is specific to iSCSI targets running targetd. You basically download the YAML files here https://github.com/kubernetes-incubator/external-storage/tree/master/iscsi/targetd/kubernetes, modify them with your storage provider's parameters, and deploy the pods using kubectl create. In your pods you need to specify a storage class. The storage class then specifies the iSCSI provisioner. There are more steps, but that's the gist of it.
See this link for more detailed instructions https://github.com/kubernetes-incubator/external-storage/tree/master/iscsi/targetd
The OpenEBS community has folks running this way, AFAIK. There is a blog here, for example, explaining one approach supporting WordPress: https://blog.openebs.io/setting-up-persistent-volumes-in-rwx-mode-using-openebs-142632244cb2
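For completeness, a sketch of the StorageClass the targetd provisioner expects once its pod is deployed. All values here (portal address, IQNs, volume group) are illustrative, modeled on the project's examples, so check them against the repository's README:

```yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: iscsi-targetd-vg-targetd
provisioner: iscsi-targetd           # must match the name the provisioner pod registers
parameters:
  targetPortal: 192.168.99.100:3260  # your targetd host and iSCSI port
  iqn: iqn.2003-01.org.example.server:targetd
  iscsiInterface: default
  volumeGroup: vg-targetd            # LVM volume group targetd carves LVs from
  initiators: iqn.2017-04.com.example:node1
  chapAuthDiscovery: "false"
  chapAuthSession: "false"
```

Developers then simply create a PVC with storageClassName: iscsi-targetd-vg-targetd and a requested size, and the provisioner creates a matching logical volume and PV on demand.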

glusterfs volume creation failed - brick is already part of volume

In a cloud, we have a cluster of GlusterFS nodes (participating in a gluster volume) and clients (that mount the gluster volumes). These nodes are created using HashiCorp's Terraform.
Once the cluster is up and running, if we want to change the gluster machine configuration, like increasing the compute size from 4 CPUs to 8, Terraform can recreate the nodes with the new configuration. So the existing gluster nodes are destroyed and new instances are created, but with the same IPs. On a newly created instance, the volume creation command fails, saying the brick is already part of a volume.
sudo gluster volume create VolName replica 2 transport tcp ip1:/mnt/ppshare/brick0 ip2:/mnt/ppshare/brick0
volume create: VolName: failed: /mnt/ppshare/brick0 is already part
of a volume
But no volumes are present in this instance.
I understand that if I have to expand or shrink a volume, I can add or remove bricks from the existing volume. Here, I'm changing the compute of the node, and hence it has to be recreated. I don't understand why it says the brick is already part of a volume, as it is a new machine altogether.
It would be very helpful if someone could explain why it says the brick is already part of a volume, and where the volume/brick information is stored, so that I can recreate the volume successfully.
I also tried the steps below from this link to clear the GlusterFS volume-related attributes from the mount, but no luck.
https://linuxsysadm.wordpress.com/2013/05/16/glusterfs-remove-extended-attributes-to-completely-remove-bricks/.
apt-get install attr
cd /glusterfs
for i in $(attr -lq .); do setfattr -x trusted.$i .; done
attr -lq /glusterfs (for testing; the output should be empty)
Simply put "force" at the end of the "gluster volume create ..." command.
Please check whether the directory /mnt/ppshare/brick0 already exists.
You should have /mnt/ppshare without the brick0 folder; the create command creates those folders. The error indicates that the brick0 folders are already present.
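If you would rather clean the reused brick than use force, note that gluster records brick membership as extended attributes and a hidden .glusterfs directory on the brick directory itself, which survive a node rebuild when the disk is reattached. A sketch of the cleanup, assuming the brick path from the question (run on each recreated node, as root, before "gluster volume create"):

```shell
# Brick path taken from the question; adjust to your layout.
BRICK=/mnt/ppshare/brick0

# Drop the gluster extended attributes that mark the directory as a brick
setfattr -x trusted.glusterfs.volume-id "$BRICK" 2>/dev/null || true
setfattr -x trusted.gfid "$BRICK" 2>/dev/null || true

# Remove gluster's internal metadata directory from the brick
rm -rf "$BRICK/.glusterfs"
```

After this, the create command should accept the brick again; appending "force" instead, as suggested above, simply skips the check.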