Is disk space shared between ECS containers? - amazon-ecs

I have a simple question but cannot seem to find an answer in the AWS docs.
I'm running container instances on AWS ECS. Can anyone tell me how hard disk space is allocated to containers by default? Do all containers simply share the available hard disk space from the underlying EC2 instance, or is this configurable somehow?
Thanks,

Different AMIs configure this differently. By default, the Amazon ECS-optimized Amazon Linux AMI ships with 30 GiB of total storage: an 8-GiB root volume for the operating system, plus an additional 22-GiB volume attached at /dev/xvdcz that Docker uses for image and metadata storage. You can extend the Docker logical volume (see link).
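
If you launch the container instances yourself, you can override that block device mapping to get a larger Docker storage volume. A minimal boto3 sketch, assuming the older ECS-optimized Amazon Linux AMI layout with the /dev/xvdcz volume (the AMI ID, instance type, and cluster name below are placeholders):

```python
import boto3

ec2 = boto3.client("ec2")

ec2.run_instances(
    ImageId="ami-0123456789abcdef0",   # placeholder: an ECS-optimized Amazon Linux AMI
    InstanceType="m5.large",
    MinCount=1,
    MaxCount=1,
    IamInstanceProfile={"Name": "ecsInstanceRole"},
    BlockDeviceMappings=[
        # Enlarge the Docker image/metadata volume from the default 22 GiB to 100 GiB.
        {"DeviceName": "/dev/xvdcz", "Ebs": {"VolumeSize": 100, "VolumeType": "gp2"}},
    ],
    # Register the instance with your ECS cluster (placeholder cluster name).
    UserData="#!/bin/bash\necho ECS_CLUSTER=my-cluster >> /etc/ecs/ecs.config",
)
```

All containers scheduled on that instance share this Docker storage volume unless you attach and mount additional volumes for specific containers.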

Related

How to have data in a database with FastAPI persist across multiple nodes?

If I use the https://github.com/tiangolo/full-stack-fastapi-postgresql project generator, how would one be able to persist data across multiple nodes (either with docker swarm or kubernetes)?
As I understand it, any PostgreSQL data in a volumes directory would be different for every node (e.g. every DigitalOcean droplet). In this case, a user may ask for their data, get directed by Traefik to a node with a different volumes directory, and receive different information than they would have if they had been directed to another node. Is this correct?
If so, what would be the best approach to have multiple servers running a database work together and have the same data in the database?
On Kubernetes, persistent volumes are used to attach storage to pods wherever they are scheduled in the cluster. They are managed by providing the cluster with storage classes, which map to volume drivers, which in turn map to some kind of SAN (or other networked) storage.
Docker / Docker Swarm has similar support in the form of Docker volume plugins, but with the ascendancy of K8s there are virtually no active open-source projects, and most of the former commercial SAN driver vendors have migrated to K8s instead.
Nonetheless, depending on your tolerance, you can use a mix of direct NFS / FUSE mounts; there are some not-entirely-abandoned Docker volume drivers available in the NFS / GlusterFS space.
The issue moby/moby #39624 tracks CSI support, which will hopefully land in 2021 and bring Swarm back in line with K8s.
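
For a concrete feel of the PV/PVC model mentioned above, here is a minimal sketch using the official Kubernetes Python client, assuming you already run an NFS export somewhere (the server address, export path, and names are placeholders):

```python
from kubernetes import client, config

config.load_kube_config()
core = client.CoreV1Api()

# An NFS-backed PersistentVolume; server and path point at a placeholder NFS export.
pv = client.V1PersistentVolume(
    metadata=client.V1ObjectMeta(name="pg-data"),
    spec=client.V1PersistentVolumeSpec(
        capacity={"storage": "10Gi"},
        access_modes=["ReadWriteOnce"],
        nfs=client.V1NFSVolumeSource(server="10.0.0.5", path="/exports/pg-data"),
    ),
)
core.create_persistent_volume(pv)

# A claim bound to that volume; the Postgres pod mounts the claim, so it sees the
# same data no matter which node it is scheduled on.
pvc = client.V1PersistentVolumeClaim(
    metadata=client.V1ObjectMeta(name="pg-data-claim"),
    spec=client.V1PersistentVolumeClaimSpec(
        access_modes=["ReadWriteOnce"],
        storage_class_name="",
        volume_name="pg-data",
        resources=client.V1ResourceRequirements(requests={"storage": "10Gi"}),
    ),
)
core.create_namespaced_persistent_volume_claim(namespace="default", body=pvc)
```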

Regarding mongodb deployment as container in AWS Fargate

I want to deploy the Mongo image on a container service like Amazon Fargate.
Can I write data to that container? If it is possible to write data to the container, where will the data be stored, and will it be charged as part of the task?
Each Fargate task (platform version 1.4) comes with 20 GB of ephemeral storage included in the price. You can extend it up to 200 GB (for an additional fee). See here. Again, this space is ephemeral: if the task shuts down, your disk is wiped.
One other option (in this case persistent) would be to mount an EFS volume to the Fargate task, but that is probably not a great fit for a database workload.
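
A rough boto3 sketch of both options when registering the task definition; the file system ID, image tag, names, and sizes are placeholders, not values from the question:

```python
import boto3

ecs = boto3.client("ecs")

ecs.register_task_definition(
    family="mongo-fargate",
    requiresCompatibilities=["FARGATE"],
    networkMode="awsvpc",
    cpu="1024",
    memory="2048",
    containerDefinitions=[{
        "name": "mongo",
        "image": "mongo:5.0",
        "essential": True,
        # Mount the EFS-backed volume at Mongo's data directory.
        "mountPoints": [{"sourceVolume": "mongo-data", "containerPath": "/data/db"}],
    }],
    # Option 1: enlarge the task's ephemeral storage (wiped when the task stops).
    ephemeralStorage={"sizeInGiB": 100},
    # Option 2: a persistent EFS volume (fileSystemId is a placeholder).
    volumes=[{
        "name": "mongo-data",
        "efsVolumeConfiguration": {
            "fileSystemId": "fs-0123456789abcdef0",
            "transitEncryption": "ENABLED",
        },
    }],
)
```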

Opensource Storage Options for Kubernetes Cluster running on bare metal

I have a 3-node cluster running on bare metal. It was set up using kubeadm.
Each node in the cluster has 100 GB of disk space, adding up to 300 GB in total.
I would like to utilize the 300 GB of disk space available on them to run stateful pods like MySQL, PostgreSQL, MongoDB, Cassandra, etc. What are the different open-source options available for creating persistent volumes?
I haven't yet used Kubernetes v1.14, which offers local persistent volumes out of the box. That would be one option.
A second option is to run an NFS server on each node and use the NFS share from the respective machine to create PVs.
Apart from these, what other options can be looked at? Please suggest.
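
For reference, the first option above (local persistent volumes) comes down to PVs pinned to a specific node's disk. A minimal sketch with the official Python client; the node name, path, and storage class name are placeholders:

```python
from kubernetes import client, config

config.load_kube_config()

# A local PV pinned to one node's disk; node name and path are placeholders.
pv = client.V1PersistentVolume(
    metadata=client.V1ObjectMeta(name="local-pv-node1"),
    spec=client.V1PersistentVolumeSpec(
        capacity={"storage": "100Gi"},
        access_modes=["ReadWriteOnce"],
        storage_class_name="local-storage",
        local=client.V1LocalVolumeSource(path="/mnt/disks/data"),
        # Node affinity is mandatory for local volumes.
        node_affinity=client.V1VolumeNodeAffinity(
            required=client.V1NodeSelector(
                node_selector_terms=[client.V1NodeSelectorTerm(
                    match_expressions=[client.V1NodeSelectorRequirement(
                        key="kubernetes.io/hostname", operator="In", values=["node1"],
                    )],
                )],
            ),
        ),
    ),
)
client.CoreV1Api().create_persistent_volume(pv)
```

You would pair this with a StorageClass using volumeBindingMode: WaitForFirstConsumer so pods get scheduled onto the node that actually holds the volume.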

GKE How many Persistent Disks could be attached to a single node?

Is it possible to attach ~30 persistent disks to single k8s node (e.g. n1-standard-4)?
According to the documentation, a 2-4 core node can support up to 64 attached disks in beta: Link.
Is it supported by GKE? Is there any limit in GKE Kubernetes?
GKE has the same limitations as vanilla Kubernetes on GCP. The Kubernetes volume limits for the largest public cloud providers are documented here.
You can also change those limits using the KUBE_MAX_PD_VOLS environment variable on the kube-scheduler (after restarting it). Unfortunately, you won't be able to change this on GKE yet, because GKE doesn't give you access to the master configuration.
Also documented there are dynamic volume limits, introduced in Kubernetes 1.11 and currently in beta.
I believe you self-answered your first question: the n1-standard-4 VM has 4 vCPUs, and per the link you provided you can attach up to 64 disks. So yes, you should be able to attach 30 persistent disks; each PVC/PV in the GCE storage class maps to a GCP VM disk.
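
If you want to confirm what your own nodes report, the per-node attachable disk limit is exposed in the node's allocatable resources under the attachable-volumes-gce-pd key; a quick sketch with the official Python client:

```python
from kubernetes import client, config

config.load_kube_config()

# Print each node's reported limit on attachable GCE persistent disks.
for node in client.CoreV1Api().list_node().items:
    allocatable = node.status.allocatable or {}
    print(node.metadata.name, allocatable.get("attachable-volumes-gce-pd"))
```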

Google Kubernetes storage in EC2

I started to use Docker and I'm trying out Google's Kubernetes project for my container orchestration. It looks really good!
The only thing I'm curious of is how I would handle the volume storage.
I'm using EC2 instances, and the containers mount volumes from the EC2 filesystem.
The only thing left is the way I have to deploy my application code into all those EC2 instances, right? How can I handle this?
It's somewhat unclear what you're asking, but a good place to start would be reading about your options for volumes in Kubernetes.
The options include using local EC2 disk with a lifetime tied to the lifetime of your pod (emptyDir), local EC2 disk with lifetime tied to the lifetime of the node VM (hostDir), and an Elastic Block Store volume (awsElasticBlockStore).
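
As a small sketch of those options through the official Python client (the image, names, and EBS volume ID are placeholders), a pod can mix a scratch emptyDir with a pre-created EBS volume:

```python
from kubernetes import client, config

config.load_kube_config()

# A pod with a scratch emptyDir plus a pre-created EBS volume (volume ID is a placeholder).
pod = client.V1Pod(
    metadata=client.V1ObjectMeta(name="app"),
    spec=client.V1PodSpec(
        containers=[client.V1Container(
            name="app",
            image="nginx",
            volume_mounts=[
                client.V1VolumeMount(name="scratch", mount_path="/tmp/scratch"),
                client.V1VolumeMount(name="data", mount_path="/var/lib/data"),
            ],
        )],
        volumes=[
            client.V1Volume(name="scratch", empty_dir=client.V1EmptyDirVolumeSource()),
            client.V1Volume(
                name="data",
                aws_elastic_block_store=client.V1AWSElasticBlockStoreVolumeSource(
                    volume_id="vol-0123456789abcdef0", fs_type="ext4",
                ),
            ),
        ],
    ),
)
client.CoreV1Api().create_namespaced_pod(namespace="default", body=pod)
```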
The Kubernetes Container Storage Interface (CSI) project is reaching maturity and includes a volume driver for AWS EBS that allows you to attach EBS volumes to your containers.
The setup is relatively advanced, but does work smoothly once implemented. The advantage of using EBS rather than local storage is that the EBS storage is persistent and independent of the lifetime of the EC2 instance.
In addition, the CSI plugin takes care of the disk creation -> mounting -> unmounting -> deletion lifecycle for you.
The EBS CSI driver has a simple example that could get you started quickly.
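
With the CSI route, you typically create a StorageClass backed by the EBS CSI driver and then claim storage from it; the driver handles the EBS lifecycle described above. A minimal sketch with the official Python client (the class name, volume type, and sizes are placeholders; ebs.csi.aws.com is the driver's provisioner name):

```python
from kubernetes import client, config

config.load_kube_config()

# StorageClass backed by the AWS EBS CSI driver.
sc = client.V1StorageClass(
    metadata=client.V1ObjectMeta(name="ebs-gp3"),
    provisioner="ebs.csi.aws.com",
    parameters={"type": "gp3"},
    volume_binding_mode="WaitForFirstConsumer",
)
client.StorageV1Api().create_storage_class(sc)

# A claim against that class; the driver creates, attaches, and deletes the EBS volume.
pvc = client.V1PersistentVolumeClaim(
    metadata=client.V1ObjectMeta(name="app-data"),
    spec=client.V1PersistentVolumeClaimSpec(
        access_modes=["ReadWriteOnce"],
        storage_class_name="ebs-gp3",
        resources=client.V1ResourceRequirements(requests={"storage": "20Gi"}),
    ),
)
client.CoreV1Api().create_namespaced_persistent_volume_claim(namespace="default", body=pvc)
```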