gcloud deployment manager - using snapshot as <sourceImage> - deployment

Questions regarding Google cloud deployments using deployment-manager:
1) Is it possible to use a snapshot as the sourceImage for a REPLICA_POOL deployment template?
2) If so, how does the "zone" of the snapshot affect the deployment zone? In other words, can I mount a snapshot taken in us-central1-a in europe-west1-a?
3) What would be the sourceImage URL for a snapshot?
Your input/comments would be much appreciated.

Ha! Exactly what I wanted to do:
https://developers.google.com/compute/docs/images#creating_an_image_from_a_root_persistent_disk
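The linked doc covers building an image from a root persistent disk; with today's gcloud CLI you can also create an image directly from a snapshot (a sketch only — my-image, my-snapshot and MY_PROJECT are placeholder names). Because both snapshots and images are global resources, the resulting image is not tied to the snapshot's original zone, so you can create a disk from it in any zone, including europe-west1-a:

```shell
# Create a reusable image from an existing snapshot
# (my-image and my-snapshot are hypothetical names)
gcloud compute images create my-image \
    --source-snapshot=my-snapshot

# The URL you would then reference as sourceImage looks like:
# https://www.googleapis.com/compute/v1/projects/MY_PROJECT/global/images/my-image
```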

Related

How do you deploy GeoIP on ECS Fargate?

How do I productionise https://hub.docker.com/r/fiorix/freegeoip so that it runs as a Fargate task, and how do I handle the geoipupdate functionality so that GeoLite2-City.mmdb is kept up to date inside the task?
I have the required environment variables (GEOIPUPDATE_ACCOUNT_ID, GEOIPUPDATE_LICENSE_KEY and GEOIPUPDATE_EDITION_IDS), but I couldn't work out the deployment flow, as there are two separate Dockerfiles/images: one for geoip and one for geoipupdate.
Has someone deployed this on Fargate? If so, could you please list the high-level steps? I have already researched whether such a thing has been deployed on ECS, but I can only find examples for Lambda and EC2.
Thanks
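One possible layout (a sketch only, untested; names, paths and image tags are placeholders): run geoipupdate as a second container inside the same Fargate task and share the database directory through a task volume, so the freegeoip container reads the refreshed GeoLite2-City.mmdb without a separate deployment:

```json
{
  "family": "freegeoip",
  "requiresCompatibilities": ["FARGATE"],
  "networkMode": "awsvpc",
  "cpu": "256",
  "memory": "512",
  "volumes": [{ "name": "geoip-data" }],
  "containerDefinitions": [
    {
      "name": "freegeoip",
      "image": "fiorix/freegeoip",
      "portMappings": [{ "containerPort": 8080 }],
      "mountPoints": [{ "sourceVolume": "geoip-data", "containerPath": "/data" }]
    },
    {
      "name": "geoipupdate",
      "image": "maxmindinc/geoipupdate",
      "environment": [
        { "name": "GEOIPUPDATE_ACCOUNT_ID", "value": "..." },
        { "name": "GEOIPUPDATE_LICENSE_KEY", "value": "..." },
        { "name": "GEOIPUPDATE_EDITION_IDS", "value": "GeoLite2-City" }
      ],
      "mountPoints": [{ "sourceVolume": "geoip-data", "containerPath": "/usr/share/GeoIP" }]
    }
  ]
}
```

Note that geoipupdate runs once and exits by default, so you'd still need a scheduled task (or a loop/cron inside the sidecar) to refresh the database periodically.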

Is it possible to have hot reload with Kubernetes?

I am trying to get to grips with Kubernetes, but I'm facing a problem with hot reload.
In development mode, while I'm working on the code, I need it synchronized with the pods directly, like in Docker when I use volumes to keep the state.
Is there any way to make this work with Kubernetes?
I would be thankful for any help with Kubernetes.
From the cloud-native (or Kubernetes) point of view, the infrastructure is immutable and Pods are the smallest deployable units, so you should replace a Pod rather than change it (your code is part of the pod/image). The correct process is: change code -> build image -> recreate pod in your environment. That said, your process could still work; it just doesn't follow cloud-native best practice. –
vincent pli
Also, you can try Ksync, which lets you synchronize application code between your local machine and a Kubernetes cluster. Please refer to the official documentation for more details.
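A minimal ksync workflow might look like the following (a sketch only; the selector app=my-app and the container path /app are hypothetical — substitute your own labels and paths):

```shell
# One-time: install ksync's cluster-side component (a DaemonSet)
ksync init

# Run the local sync daemon (keep it running in a separate terminal)
ksync watch &

# Map the current directory onto /app inside pods matching the selector
# (app=my-app and /app are placeholders)
ksync create --selector=app=my-app $(pwd) /app

# Check the sync status
ksync get
```

After this, edits to local files appear inside the matching pods without rebuilding the image, which approximates Docker-style volume mounts for development.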

How to generate an Azure Container Instance

A bit of context: I'm starting out with DevOps and created a docker-compose.yml to bring up two containers, one with my MongoDB and one with the Express-based admin UI mongo-express. Now I want to move this to my cloud in Azure, but the truth is that the documentation is very limited and doesn't give a real example of, say, how to deploy a Mongo DB so that its data is persistent.
So, has anyone done something similar? Or do you have any advice that you can give me?
I'd be really grateful.
You can follow the steps here to use a docker-compose file to deploy the containers to Azure Container Instances, and this document also shows an example. If you want the database data to persist, you can mount an Azure file share into the containers.
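A minimal docker-compose.yml sketch for the ACI context (untested; the share and storage account names are placeholders), mounting an Azure file share so the Mongo data survives container restarts:

```yaml
version: "3.8"
services:
  mongo:
    image: mongo
    volumes:
      - mongodata:/data/db
  mongo-express:
    image: mongo-express
    ports:
      - "8081:8081"
    environment:
      ME_CONFIG_MONGODB_SERVER: mongo
volumes:
  mongodata:
    driver: azure_file
    driver_opts:
      share_name: myshare                  # hypothetical file share
      storage_account_name: mystorageacct  # hypothetical storage account
```

With Docker's ACI integration you would then create an ACI context (docker context create aci ...) and run docker compose up against it; the azure_file driver options are only honored in that context, not by local Docker.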

Run out of storage on Service Fabric scale set

I've run out of storage on my Azure Service Fabric scale sets, so I can no longer deploy any updates. I'm guessing this is because SF keeps track of all the deployments and uses up space.
Can anyone tell me if there is:
1) A way to tell Service Fabric to delete old deployments (say, older than 10 days).
2) A way to increase the storage available on the scale sets (Service Fabric is currently using the OS disk for deployments).
Regarding your first question:
There is no way to tell SF to auto-delete old packages based on age, but you can either:
Do upgrades with the flag -UnregisterUnusedApplicationVersionsAfterUpgrade = $true when running the Deploy-FabricApplication.ps1 script, or
Update the Deploy-FabricApplication.ps1 script, or create a scheduled script, to check for unused packages older than a specific version, something like what is described in this SO answer.
Regarding the second question:
Yes, you can change the disk size via an ARM template update.
But the issue might also be the log size; take a look at this question, which might help you solve the problem without bigger disks.
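As a sketch of the manual cleanup (untested; the cluster endpoint, application type name and version are placeholders), the Service Fabric PowerShell cmdlets can unregister application type versions that no running application uses, which frees their packages from the image store and node disks:

```powershell
# Connect to the cluster (endpoint is a placeholder)
Connect-ServiceFabricCluster -ConnectionEndpoint "mycluster.westeurope.cloudapp.azure.com:19000"

# List all registered versions of an application type
Get-ServiceFabricApplicationType -ApplicationTypeName "MyAppType"

# Unregister a version no running application uses;
# this removes its package from the image store
Unregister-ServiceFabricApplicationType `
    -ApplicationTypeName "MyAppType" `
    -ApplicationTypeVersion "1.0.0" -Force
```

Wrapping the Get/Unregister pair in a scheduled script that keeps only the N most recent versions would approximate the age-based cleanup asked about above.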

Cannot create Service Fabric Cluster in Azure Portal

I can't start creating a Service Fabric cluster.
When I start the creation flow, the portal always shows a "rainy cloud" error and nothing can be entered.
Thanks for reporting this, we found a problem in the portal that may be causing this. We'll be rolling out a fix in the next few days.
BTW, we have a repo on GitHub that you can use to report issues like this for a faster response: https://github.com/Azure/service-fabric-issues