How can I delete a Snapshot from DigitalOcean using Terraform?

First I create a Snapshot image using Packer, and then I create a Droplet with a linked Domain using Terraform.
After running terraform destroy, the Droplet and the Domain are deleted, but the Snapshot created with Packer isn't.
Is it possible to delete the Snapshot?

Terraform can only delete resources it created. Since your image was created outside of Terraform (by Packer), you have to delete it manually. The alternative is to import the image into Terraform state and then destroy it, but there is little point in that if all you want to do is delete it.
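If you go the manual route, a minimal sketch with doctl (assuming doctl is installed and authenticated; <snapshot-id> is a placeholder for whatever ID the list command shows for your Packer-built image):
# list snapshots to find the ID of the Packer-built image
doctl compute snapshot list
# delete the snapshot by its ID (doctl asks for confirmation unless you pass --force)
doctl compute snapshot delete <snapshot-id>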

Related

ECS instance metadata files for EKS

I know that with the Amazon ECS container agent, setting the variable ECS_ENABLE_CONTAINER_METADATA=true creates ECS metadata files for the containers.
https://docs.aws.amazon.com/AmazonECS/latest/developerguide/container-metadata.html
Is there any similar feature for EKS? I would like to retrieve instance metadata from a file inside the container instead of using the IMDSv2 API.
You simply can't; you still need to use the IMDSv2 API in your service if you want to get instance metadata.
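For reference, a minimal sketch of an IMDSv2 call from inside a container (this assumes the node's metadata hop limit is high enough for pods to reach 169.254.169.254):
# fetch a session token, then use it to query instance metadata (IMDSv2)
TOKEN=$(curl -s -X PUT "http://169.254.169.254/latest/api/token" -H "X-aws-ec2-metadata-token-ttl-seconds: 21600")
curl -s -H "X-aws-ec2-metadata-token: $TOKEN" http://169.254.169.254/latest/meta-data/instance-id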
If you're looking for Pod metadata instead, see https://kubernetes.io/docs/tasks/inject-data-application/environment-variable-expose-pod-information/
Or you can use pod labels too.
Try adding this as part of the user data:
"echo 'ECS_ENABLE_CONTAINER_METADATA=1' > /etc/ecs/ecs.config"
Found here: https://github.com/aws/amazon-ecs-agent/issues/1514

Is there an option to copy an image between nodes in a Kubernetes cluster?

I have a case where we have to patch the Docker image on a k8s node and retag it to replace the old one. This process isn't so easy and obvious, because I have several nodes.
Therefore, could I do the retag process on only one node and then copy the new image to the other nodes? If that's possible, should I delete the old image before copying over the retagged one?
I advise you to clone your deployment, use your retagged image in the new deployment, and scale down the old deployment that uses the old image tag.
It's not possible to do cluster-to-cluster copying directly. You'd need to use kubectl cp to copy the files locally, then copy them back:
kubectl cp <pod-name>:/tmp/test /tmp/test
kubectl cp /tmp/test <pod-name>:/tmp/test
If you are trying to share files between pods, and only one pod needs write access, you probably want to mount a read-only volume on multiple pods, or use an object store like S3. Copying files to and from pods really shouldn't be something you're doing often; that's an anti-pattern.
Best Practice:
could I do the retag process on only one node and then copy the new image to the other nodes?
Moreover, you can create a private registry and push/pull your Docker images from there.
So make the change in your image and push it to the registry; now all nodes will be able to pull the new image.
Ref: Setting Up a Private Docker Registry on Ubuntu 18.04
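A rough sketch of that flow, assuming a registry reachable at myregistrydomain.com:5000 and a placeholder image name my-app:
# tag the patched image for the private registry and push it
docker tag my-app:1.2 myregistrydomain.com:5000/my-app:1.2
docker push myregistrydomain.com:5000/my-app:1.2
# any node can now pull the new version
docker pull myregistrydomain.com:5000/my-app:1.2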
should I delete the old image before copying over the retagged one?
No, use image versioning.
Let's assume you are using image MyImage:1.1. Now you make some changes and build a new image with version 1.2, so your image will be MyImage:1.2.
Now change the image name in your deployment file to MyImage:1.2 and apply the deployment. Your deployment will be upgraded to the new image.
You can use the RollingUpdate strategy for the upgrade to get zero downtime.
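For instance, a minimal sketch of that upgrade (the deployment and container names my-deployment and my-container are placeholders, and the image name is lowercased since registries require lowercase repository names):
# switch the deployment to the new tag; the default RollingUpdate strategy replaces pods gradually
kubectl set image deployment/my-deployment my-container=myimage:1.2
# watch the rollout finish
kubectl rollout status deployment/my-deployment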
Moral:
In the modern IT world we mostly work with multiple clusters and many nodes, with regular changes or customizations as client demands or business needs are met. We can't just make a change on a single node and then push it to every other node one by one; trust me, it is very hectic.

How to add files/application to the persistent volume for my pods to read from

I have a complete, running setup for PHP, including the creation and mounting of a persistent volume. What I do not understand, and so far cannot find a tutorial on, is how to add my application code to the volume for my servers/pods to read from.
I am using AWS EKS to house my environment and I do see the volume being created.
My end goal would be to get code in an Azure repo onto the volume.
I used this tutorial to set up the environment - https://www.digitalocean.com/community/tutorials/how-to-deploy-a-php-application-with-kubernetes-on-ubuntu-16-04
It's a great guide, but it never tells you how to get your code onto the volume.
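One common approach, purely as a sketch and not something the tutorial covers: assuming some pod already mounts the volume at /var/www/html and its image contains tar (which kubectl cp needs), you can clone the Azure repo locally and copy the code in (the repo URL and pod name are placeholders):
# clone the code from the Azure DevOps repo
git clone https://dev.azure.com/<org>/<project>/_git/<repo> ./app
# copy it into a pod that mounts the persistent volume at /var/www/html
kubectl cp ./app <php-pod-name>:/var/www/html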

Best practices for storing images locally to avoid problems when the image source is down (security too)?

I'm using argocd and helm charts to deploy multiple applications in a cluster. My cluster happens to be on bare metal, but I don't think that matters for this question. Also, sorry, this is probably a pretty basic question.
I ran into a problem yesterday where one of the remote image sources used by one of my helm charts was down. This brought me to a halt because I couldn't stand up one of the main services for my cluster without that image and I didn't have a local copy of it.
So, my question is, what would you consider to be best practice for storing images locally to avoid this kind of problem? Can I store charts and images locally once I've pulled them for the first time so that I don't have to always rely on third parties? Is there a way to set up a pass-through cache for helm charts and docker images?
If your scheduled pods were unable to start on a specific node with a Failed to pull image "your.docker.repo/image" error, you should consider having these images already downloaded on the nodes.
Think about how you can docker pull the images onto your nodes. It could be a Linux cron job, a Kubernetes operator, or any other solution that ensures the presence of the Docker image on the node even when you have connectivity issues.
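As a sketch of the cron-job variant (the image path is a placeholder), an entry in /etc/cron.d on each node could look like this:
# /etc/cron.d/prepull-images: re-pull critical images every 6 hours so a cached copy stays on the node
0 */6 * * * root docker pull myregistrydomain.com:5000/my-app:1.2 >> /var/log/prepull.log 2>&1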
As one of the options:
1. Create your own helm chart repository to store helm charts locally (optional)
2. Create a local image registry and push the needed images there; tag them accordingly for future simplicity
3. On each node, add the insecure registry by editing /etc/docker/daemon.json and adding
{
"insecure-registries" : ["myregistrydomain.com:5000"]
}
4. Restart the docker service on each node to apply the changes
5. Change your helm chart templates to use the image path from the local registry
6. Recreate the chart with the new properties and (optionally) push it to the local helm repo created in step 1
7. Finally, install the chart; this time it should pick up the images from the local registry (a shell sketch of these steps follows below)
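A condensed shell sketch of the steps above (registry address, image, and chart names are placeholders; the --set flags assume your chart exposes image.repository and image.tag values):
# step 2: run a local registry and push the needed image into it
docker run -d -p 5000:5000 --restart=always --name registry registry:2
docker tag my-app:1.2 myregistrydomain.com:5000/my-app:1.2
docker push myregistrydomain.com:5000/my-app:1.2
# steps 3-4: after adding the insecure-registries entry to /etc/docker/daemon.json, restart docker on each node
sudo systemctl restart docker
# steps 5-7: point the chart at the local registry and install it
helm install my-release ./my-chart --set image.repository=myregistrydomain.com:5000/my-app --set image.tag=1.2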
You may also be interested in Kubernetes-Helm Charts pointing to a local docker image

Kubernetes persistent volume claim on /var/www/html problem

I have a Magento deployment on nginx which uses a persistent volume and a persistent volume claim. Everything works fine, but I am struggling with one problem. I am using an initContainer to install Magento via the CLI (which works fine), but as soon as my pod starts and mounts the PVC at /var/www/html (my webroot), the data previously installed in the initContainer is lost (or rather, replaced by the new mount). My workaround was to install Magento into /tmp/magento in the initContainer and, as soon as the "real" pod is up, copy the data from /tmp/magento to /var/www/html. As you can imagine, this takes a while and is kind of a permission hell, but it works.
Is there any way I can install my app directly into my target directory without the mount hiding my files? I have to use a PV/PVC because I am mounting the pod directory via NFS, and I also don't want to lose my files.
Update: The Magento deployment is inside a Docker image and is installed during the docker build. So if I install the data into the target location, the Kubernetes mount replaces it with an empty volume. That's the main reason for the workaround. The goal is to have the whole installation inside the image.
If Magento is already installed inside the image and located at some path (say /tmp/magento), but you want it to be accessible at /var/www/html/magento, why don't you just create a symlink pointing to the existing location?
So your Magento will be installed during the image build process, and in the entrypoint an additional command
ln -s /tmp/magento /var/www/html/magento
will be run before the Nginx server starts. No need for initContainers.
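A minimal sketch of such an entrypoint script (the paths follow the example above; the nginx flags assume nginx runs in the foreground as the container's main process):
#!/bin/sh
# entrypoint.sh: expose the Magento install baked into the image under the webroot
ln -sfn /tmp/magento /var/www/html/magento
# then start nginx in the foreground
exec nginx -g 'daemon off;'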