Mount an EBS snapshot back in Auto Scaling - PowerShell

I am using Auto Scaling with a Load Balancer and have attached 2 EBS volumes.
Now, whenever an instance is terminated, a snapshot of the EBS volumes is stored.
I have gone through several links but cannot find how to retrieve/mount the EBS volume when a Launch Configuration launches a new instance.
Can I get any reference or PowerShell script to identify a volume by tag name from the volume list and mount it when the instance is initializing?

There is no automatic facility to mount an existing EBS snapshot or volume when Auto Scaling launches an instance.
Best practice for Auto Scaling is to store data off-instance, such as in Amazon S3 or Amazon EFS. This way, the data is accessible to all instances simultaneously and can be used by new instances that are launched.
There is also no automatic facility to create an EBS snapshot when an Auto Scaling instance is terminated. Rather, there is the Delete on Termination option, which controls whether the EBS volume should be deleted when the instance is terminated. If this option is off, the EBS volumes will remain after an instance is terminated. You could write some code (e.g. in a User Data script) that re-attaches an EBS volume to a new instance launched by Auto Scaling, but this can get messy. (For example: which volume gets attached to which instance? What happens if more instances are launched?)
Bottom line: Yes, you could write a script to do this, but it is a poor architectural design.

Yes, you can attach (mount) an EBS volume to an EC2 instance using the AWS CLI. You would run this command in the EC2 User Data at instance launch.
Running Commands on Your Linux Instance at Launch
AWS CLI attach-volume
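For example, a rough User Data sketch (bash rather than PowerShell, though the same DescribeVolumes/AttachVolume API calls are exposed by the AWS Tools for PowerShell) that finds an available volume by tag and attaches it. The tag value, device name and mount point are placeholders, and the instance role must allow ec2:DescribeVolumes and ec2:AttachVolume:
#!/bin/bash
# Sketch only: locate an available EBS volume by tag and attach it at launch.
REGION=$(curl -s http://169.254.169.254/latest/meta-data/placement/availability-zone | sed 's/.$//')
INSTANCE_ID=$(curl -s http://169.254.169.254/latest/meta-data/instance-id)
# Placeholder tag Name=data-volume; only consider volumes not attached elsewhere
VOLUME_ID=$(aws ec2 describe-volumes --region "$REGION" \
  --filters "Name=tag:Name,Values=data-volume" "Name=status,Values=available" \
  --query "Volumes[0].VolumeId" --output text)
aws ec2 attach-volume --region "$REGION" --volume-id "$VOLUME_ID" \
  --instance-id "$INSTANCE_ID" --device /dev/sdf
aws ec2 wait volume-in-use --region "$REGION" --volume-ids "$VOLUME_ID"
# The device often appears as /dev/xvdf; adjust for NVMe instance types
mkdir -p /data && mount /dev/xvdf /data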
Note: There is a problem with this strategy. The ASG Launch Configuration creates new EC2 instances that are identical, which means you would be attempting to attach the same EBS volume to every instance, and that will fail. You may want to consider using EFS instead.
Amazon Elastic File System
Mount EFS on EC2 using the AWS CLI
Note: Use IAM roles to provide your instances with credentials instead of storing credentials on the EC2 instance.
Once you have configured your "master" EC2 instance, create a new AMI for your ASG launch configuration.
When mounted on Amazon EC2 instances, an Amazon EFS file system provides a standard file system interface and file system access semantics, allowing you to seamlessly integrate Amazon EFS with your existing applications and tools. Multiple Amazon EC2 instances can access an Amazon EFS file system at the same time, allowing Amazon EFS to provide a common data source for workloads and applications running on more than one Amazon EC2 instance.
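As a rough sketch, the User Data for your launch configuration could mount the EFS file system at boot; the file system ID, region and mount point below are placeholders:
#!/bin/bash
# Sketch: mount an EFS file system at instance launch (IDs are placeholders).
mkdir -p /mnt/efs
mount -t nfs4 -o nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2 \
  fs-12345678.efs.us-east-1.amazonaws.com:/ /mnt/efs
# Add an fstab entry so the mount survives reboots
echo "fs-12345678.efs.us-east-1.amazonaws.com:/ /mnt/efs nfs4 nfsvers=4.1,_netdev 0 0" >> /etc/fstab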

Related

Add EFS volume to ECS for persistent mongodb data

I believe this requirement is pretty straightforward for anyone trying to host their Tier 3 (i.e. database) in a container.
I have an MVP 3-tier MERN app using:
1x Container instance
3x ECS services (Frontend, Backend and Database)
3x Tasks (1x running task per service)
The database task (mongodb) has its task definition updated to use EFS, and I have tested stopping the task and starting a new one to verify data persistence.
Question - How do I ensure auto-mounting of the EFS volume on the ECS container host (a Spot instance)? If ECS leverages a CloudFormation template under the covers, do I need to update or modify this template to get this persistent EFS volume auto-mounted on all container EC2 instances? I have come across various articles talking about a script in the EC2 launch config, but I don't see any launch config created by ECS / CloudFormation.
What is the easiest and simplest way to achieve something as trivial as a persistent EFS volume across my container host instances? I am guessing the task definition alone doesn't solve this problem?
Thanks
Actually, I think the steps below achieved persistence for the db task using EFS:
Updated the task definition for the database container to use EFS.
Mounted the EFS volume on the container instance:
sudo mount -t efs -o tls fs-:/ /database/data
The above mount command did not add any entry to /etc/fstab, but the data still seems to persist on the new ECS Spot instance.
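If you want every new container host to mount it automatically rather than mounting by hand, one option (sketched below; the cluster name and file system ID are placeholders) is to put the mount in the Spot instance's User Data or launch template alongside the ECS cluster registration:
#!/bin/bash
# Sketch of container-instance User Data: join the ECS cluster and auto-mount EFS.
echo "ECS_CLUSTER=my-cluster" >> /etc/ecs/ecs.config
yum install -y amazon-efs-utils          # provides the "efs" mount helper
mkdir -p /database/data
mount -t efs -o tls fs-12345678:/ /database/data
# Persist the mount across reboots of the instance itself
echo "fs-12345678:/ /database/data efs _netdev,tls 0 0" >> /etc/fstab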

What is a preferable way to run a job that needs large tmp disk space?

I have a service that needs to scan large files, process them, and upload them back to the file server.
My problem is that the default available space in a pod is 10G, which is not enough.
I have 3 options:
use a hostPath/emptyDir volume, but this way I can't specify how much space I need; my pods could be scheduled to a node which doesn't have enough disk space.
use a hostPath persistent volume, but the documents say it is for "single node testing only".
use a local persistent volume, but according to the documentation dynamic provisioning is not supported yet; I would have to manually create a PV on each node, which does not seem acceptable to me, but if there are no other options this will be the only way to go.
Is there any simpler option than a local persistent volume?
Depending on your cloud provider, you can mount their block storage options, e.g. Persistent Disk on Google Cloud, Azure Disk on Azure, or Elastic Block Store on AWS.
This way you won't depend on a single node's local disk for storage. All of them are supported in Kubernetes via volume plugins and consumed through persistent volume claims. For example:
gcePersistentDisk
A gcePersistentDisk volume mounts a Google Compute Engine (GCE) Persistent Disk into your Pod. Unlike emptyDir, which is erased when a Pod is removed, the contents of a PD are preserved and the volume is merely unmounted. This means that a PD can be pre-populated with data, and that data can be "handed off" between Pods.
The same applies to awsElasticBlockStore and azureDisk.
If you want to use AWS S3, there is an S3 Operator which you may find interesting.
The AWS S3 Operator will deploy the AWS S3 Provisioner, which will dynamically or statically provision AWS S3 Bucket storage and access.
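If all you need is large scratch space for the job, a minimal sketch (assuming a StorageClass backed by your provider's block storage exists in the cluster; the names below are placeholders) is to request a dedicated PVC and mount it into the job's pod:
# Sketch: claim a 100Gi scratch disk via dynamic provisioning (names are placeholders).
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: scratch-space
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: standard      # a block-storage-backed class in your cluster
  resources:
    requests:
      storage: 100Gi
EOF
# The job's pod spec then mounts claimName: scratch-space at, say, /scratch.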

Move resources/volumes across contexts in Kubernetes clusters

I have a Kubernetes cluster which I have started with the context "dev1.k8s.local", and it has a stateful set with EBS PVs (persistent volumes).
Now we are planning to start another context, "dev2.k8s.local".
Is there a way I can move the dev1 context's EBS volumes to the "dev2.k8s.local" context?
I am using K8s 1.10 and kops 1.10.
A context is simply an entry in your Kubernetes configuration, typically ~/.kube/config. This file can have multiple configurations in it that are managed manually or with kubectl config.
When you provision a second Kubernetes cluster on AWS using kops, brand new resources are created that have no frame of reference about the other cluster. Your EBS volumes that were created for PVs in your original cluster cannot simply be transferred between clusters using a context entry in your configuration file. That's not how it is designed to work.
Aside from the design problem, there is also a serious technical hurdle involved. EBS volumes are ReadWriteOnce, meaning they can only be attached to a single node (and therefore its pods) at a time. The reason this constraint exists is that an EBS volume is block storage treated like a physical block device connected to the underlying worker node running your pod. That block device does not exist on the worker nodes in your other cluster, so it's impossible to simply move the pointer over.
The best way to accomplish this would be to back up and copy over the disk. How you handle this is up to your team. One way you could do it is by mounting both EBS volumes and copying the data over manually. You could also take a snapshot and restore the data to the other volume.
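A rough sketch of the snapshot route with the AWS CLI (the volume ID and availability zone are placeholders; the new volume must live in an AZ where the dev2 nodes run):
VOL_ID=vol-0123456789abcdef0        # placeholder: the EBS volume behind the dev1 PV
SNAP_ID=$(aws ec2 create-snapshot --volume-id "$VOL_ID" \
  --description "dev1 statefulset data" --query SnapshotId --output text)
aws ec2 wait snapshot-completed --snapshot-ids "$SNAP_ID"
# Restore into a new volume in an AZ used by the dev2 cluster, then reference it
# from a PersistentVolume in dev2 (e.g. awsElasticBlockStore.volumeID).
aws ec2 create-volume --snapshot-id "$SNAP_ID" \
  --availability-zone eu-west-1a --volume-type gp2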

Shell access to Persistent Volume in Google Cloud

I would like to get shell access to the Persistent Volume I created on the Google Cloud Platform.
I tried using Google Cloud Shell for this, but to do that I need to attach the Persistent Volume through the gcloud commands, and the command requires an instance name. However, I don't see the instance name of the Google Cloud Shell when I list the instance names (in gcloud).
Is it possible to get shell access over Google Cloud Shell to the persistent disks? If not how can I get access to the Persistent Volume that I created?
Yes, all disks need to be attached to an instance to allow access to them - you will need to create a compute instance and attach the persistent disk with gcloud compute instances attach-disk [INSTANCE_NAME] --disk [DISK_NAME].
Once you create the new instance, its name will become visible to you by running gcloud compute instances list.
You will then be able to access the disk by SSHing into the instance and mounting it.
The following will help with mounting:
https://cloud.google.com/compute/docs/disks/add-persistent-disk
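A rough end-to-end sketch (the instance, zone and disk names are placeholders, and the device name on the VM may differ):
gcloud compute instances create pv-inspector --zone us-central1-a
gcloud compute instances attach-disk pv-inspector --disk my-persistent-disk --zone us-central1-a
gcloud compute ssh pv-inspector --zone us-central1-a
# ...then on the instance (assuming the disk appears as /dev/sdb):
sudo mkdir -p /mnt/pv
sudo mount /dev/sdb /mnt/pv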
You don't see the Cloud Shell's instance name in your list of VMs because it isn't owned by your project (and thus, to answer your question, you won't have permission to attach your persistent disks to it). You can verify this by querying the "/zone" endpoint of the Cloud Shell's metadata server, via curl -H "Metadata-Flavor: Google" "http://metadata.google.internal/computeMetadata/v1/instance/zone", as described in the GCE docs.
As Ryank mentioned, you'd need to attach the disk to an instance owned by your project, then SSH into it.

Google Kubernetes storage in EC2

I started to use Docker and I'm trying out Google's Kubernetes project for my container orchestration. It looks really good!
The only thing I'm curious of is how I would handle the volume storage.
I'm using EC2 instances, and the containers mount volumes from the EC2 filesystem.
The only thing left, then, is how I deploy my application code onto all those EC2 instances, right? How can I handle this?
It's somewhat unclear what you're asking, but a good place to start would be reading about your options for volumes in Kubernetes.
The options include using local EC2 disk with a lifetime tied to the lifetime of your pod (emptyDir), local EC2 disk with lifetime tied to the lifetime of the node VM (hostDir), and an Elastic Block Store volume (awsElasticBlockStore).
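For instance, a minimal sketch of a pod using the in-tree awsElasticBlockStore volume type (the volume ID is a placeholder, and the EBS volume must be in the same availability zone as the node):
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: ebs-demo
spec:
  containers:
  - name: app
    image: nginx
    volumeMounts:
    - name: data
      mountPath: /data
  volumes:
  - name: data
    awsElasticBlockStore:
      volumeID: vol-0123456789abcdef0   # placeholder EBS volume ID
      fsType: ext4
EOF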
The Kubernetes Container Storage Interface (CSI) project is reaching maturity and includes a volume driver for AWS EBS that allows you to attach EBS volumes to your containers.
The setup is relatively advanced, but does work smoothly once implemented. The advantage of using EBS rather than local storage is that the EBS storage is persistent and independent of the lifetime of the EC2 instance.
In addition, the CSI plugin takes care of the disk creation -> mounting -> unmounting -> deletion lifecycle for you.
The EBS CSI driver has a simple example that could get you started quickly.
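As a sketch of what dynamic provisioning through the driver looks like (assuming the EBS CSI driver is already installed in the cluster; the names below are placeholders):
cat <<'EOF' | kubectl apply -f -
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ebs-sc
provisioner: ebs.csi.aws.com
volumeBindingMode: WaitForFirstConsumer
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-data
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: ebs-sc
  resources:
    requests:
      storage: 20Gi
EOF
# Any pod that references claimName: app-data will get a fresh EBS volume
# created, attached and mounted by the driver.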