Updating Web Role applications (Azure) without deleting user data - deployment

I've got a Web Role on Azure with 2 Applications and 1 Virtual Directory.
One application is a backend where admins can upload files, which are stored in the virtual directory (accessed by both applications).
Every time I deploy a new version to Azure, all the uploaded content in the virtual directory is deleted - which is exactly what I don't want!
So how is it possible to publish a new version without deleting all my user generated files?
I've already managed to update the application with WebDeploy, but this only works for the "main" application, not for the second one (which is configured as a Virtual Application).
Thanks

You can't. The web role is recreated on deployment. The same thing can happen on hardware failure: if an instance fails, Azure redeploys a clean virtual machine and then deploys your app to it. You should never store data you want to keep on a web role; use Blob storage or similar persistent storage for files you want to persist.

Virtual directories are stored on the "Application" partition, which is recreated on each upgrade - see this for more information. So the virtual directory folder is not the right place to store anything you want preserved across upgrades. Note also that the "Application" partition has only 1 gigabyte of space, and some of that is used for your application binaries, so you may find yourself in a "disk full" situation at some point.
If you want to store data which you don't mind sacrificing on rare occasions - cached results, for example - you can use a "local resources" disk, which will survive in-place upgrades and reboots. However, it is not guaranteed to be preserved if your VM crashes; for that level of durability you have to use persistent storage, such as Blob storage.
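For illustration, here is a minimal sketch of pushing an uploaded file into Blob storage instead of writing it to the virtual directory. It uses the current Python storage SDK purely as an example - the connection string, container and file names are placeholders, and the actual application would do the equivalent with the .NET storage client:

from azure.storage.blob import BlobServiceClient  # pip install azure-storage-blob

# The connection string comes from the storage account in the Azure portal.
service = BlobServiceClient.from_connection_string("<storage-connection-string>")
container = service.get_container_client("user-uploads")

# Store the admin's upload in Blob storage so it survives redeployments and
# instance recycles, instead of writing it under the virtual directory.
with open("report.pdf", "rb") as data:
    container.upload_blob(name="admin/report.pdf", data=data, overwrite=True)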

Since you are talking about virtual directories and using Web Deploy to update the application outside of the usual Azure package deployment mechanism, it sounds like your architecture/application might be better suited to a persistent VM role than to a Web role. These are only available on Azure in preview at the moment.
http://www.windowsazure.com/en-us/home/scenarios/virtual-machines/
They let you have persistent storage that will survive a recycle. The storage is actually backed by blob storage, but it looks like a normal disk from inside the persistent VM.

Related

Without retention policy or lifecycle rules, would Google Cloud Storage automatically delete files?

My app uses Google Cloud Storage through Firebase with Java, Angular & Flutter. It stores pictures and such there. Now, a lot of older files recently disappeared from Google Cloud Storage. A test version of my app is probably the culprit. But I want to make sure that I got the storage bucket configured correctly.
Please note that I don't have object versioning enabled. From what I know, it would keep a copy of deleted files around. That's why I plan to enable it in the future. But it doesn't help me with files deleted in the past.
Right now, my storage bucket is configured as follows:
Default storage class: Standard
Object versioning: Off
Retention policy: None
Lifecycle rules: None
So with that configuration, would Google Cloud Storage automatically delete files? Like, say, after a year or so?
No. If you don't ask Cloud Storage to delete your files, your files will stay around forever. There's no expiration of any sort by default. Cloud Storage is a popular tool for long term storage/backup/retention.
If you want to be especially careful not to delete certain objects, retention policies and object holds can be used to make it harder to delete objects by accident. For example, if you wanted to temporarily ensure that your scripts would not delete your most important object, you could run:
gsutil retention temp set gs://my_bucket_name/my_important_file.txt
This would set a "temporary object hold" on the object, which would make it so that my_important_file.txt could not be deleted with the delete command until you released the hold.
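If you would rather do this from code than with gsutil, a rough equivalent with the google-cloud-storage Python client (same placeholder bucket and object names as above) looks like this:

from google.cloud import storage  # pip install google-cloud-storage

client = storage.Client()
bucket = client.bucket("my_bucket_name")

# Place a temporary hold on the important object (same effect as the gsutil command above).
blob = bucket.blob("my_important_file.txt")
blob.temporary_hold = True
blob.patch()

# Optionally turn on object versioning so future deletes keep a noncurrent copy.
bucket.versioning_enabled = True
bucket.patch()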

Shared Access for Home Directory in Google Cloud Shell

I am currently using the Google Cloud Shell, and I wish to access the persistent disk of another user. (Not using local shell)
More info on topic of inquiry: https://medium.com/google-cloud/no-localhost-no-problem-using-google-cloud-shell-as-my-full-time-development-environment-22d5a1942439
Cloud Shell is a micro VM dedicated to you, free, with a mounted personal disk.
EDITED: Thanks to #Johnhanley's comment, you can access someone else's Cloud Shell files with the code provided. However, you need the credentials of the target Cloud Shell environment, so it's neither very secure nor recommended.
Alternatively, you can mount a FUSE directory, and the other user can do the same. With FUSE, you navigate a bucket as if it were a directory. But be careful: a Storage bucket is not a file system, and performance and usage aren't the same. Moreover, FUSE doesn't guarantee data integrity when files are used simultaneously, especially with concurrent writes. Use it with caution.
But it does give you a common workspace, if that is your requirement.
If you use Cloud Shell as a dev environment, like a computer or a VM, the same best practices apply. The dev environment has to be considered ephemeral (a computer can fail or be lost/stolen, people can leave the company and you lose access to their Cloud Shell), so you have to save your sources frequently to a safe place (a Git repository, or Cloud Storage with FUSE).
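As a rough illustration of that last point - pushing files into a shared bucket without a FUSE mount - here is a small sketch with the google-cloud-storage Python client; the bucket name and file paths are hypothetical:

from google.cloud import storage  # pip install google-cloud-storage

client = storage.Client()
bucket = client.bucket("shared-workspace-bucket")  # hypothetical bucket both users can read/write

# One user saves a file from their Cloud Shell home directory into the shared bucket...
bucket.blob("notes/todo.txt").upload_from_filename("todo.txt")

# ...and the other user pulls it down into their own home directory.
bucket.blob("notes/todo.txt").download_to_filename("todo.txt")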

Copying directories into minikube and persisting them

I am trying to copy some directories into the minikube VM to be used by some of the pods that are running. These include API credential files and template files used at run time by the application. I have found you can copy files into the /home/docker/ directory using scp; however, these files are not persisted across reboots of the VM. I have read that files/directories are persisted if stored in the /data/ directory on the VM (among others); however, I get permission denied when trying to copy files to those directories.
Are there:
A: Any directories in minikube that will persist data that aren't protected in this way
B: Any other ways of doing the above without running into this issue (could well be going about this the wrong way)
To clarify, I have already been able to mount the files from /home/docker/ into the pods using volumes, so it's just persisting the data that I'm unclear about.
Kubernetes has dedicated object types for these sorts of things. API credential files you might store in a Secret, and template files (if they aren't already built into your Docker image) could go into a ConfigMap. Both of them can either get translated to environment variables or mounted as artificial volumes in running containers.
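For illustration only, here is one way the API credential file could be turned into a Secret with the official Kubernetes Python client; the file name, Secret name and namespace are placeholders:

from kubernetes import client, config  # pip install kubernetes

config.load_kube_config()  # uses your current kubectl context, e.g. minikube
v1 = client.CoreV1Api()

with open("api-credentials.json") as f:  # placeholder credential file
    creds = f.read()

secret = client.V1Secret(
    metadata=client.V1ObjectMeta(name="api-credentials"),
    string_data={"credentials.json": creds},
)
v1.create_namespaced_secret(namespace="default", body=secret)

Pods can then mount that Secret as a volume or read it through environment variables, so nothing has to be copied onto the minikube VM at all.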
In my experience, trying to store data directly on a node isn't a good practice. It's common enough to have multiple nodes, to not directly have login access to those nodes, and for them to be created and destroyed outside of your direct control (imagine an autoscaler running on a cloud provider that creates a new node when all of the existing nodes are 90% scheduled). There's a good chance your data won't (or can't) be on the host where you expect it.
This does lead to a proliferation of Kubernetes objects and associated resources, and you might find a Helm chart to be a good resource to tie them together. You can check the chart into source control along with your application, and deploy the whole thing in one shot. While it has a couple of useful features beyond just packaging resources together (a deploy-time configuration system, a templating language for the Kubernetes YAML itself) you can ignore these if you don't need them and just write a bunch of YAML files and a small control file.
For minikube, data kept in the $HOME/.minikube/files directory is copied by minikube into the / directory of the VM host.

Node app staging fails at Installing App Management stage

I've been trying to push a new build of an app to Bluemix, but staging keeps failing when it's at "Installing App Management" because it can't create regular files and directories due to the disk quota being exceeded.
I've already tried pushing it with "-k 2G", but it still fails.
Is there any way to find out how or why the disk quota keeps being exceeded? There's no way I'm near using 2GB of disk space.
Switching to npm v3 is a potential solution here, as it reduces the number of duplicated dependencies.
You can do that in your package.json, for example:
"engines": { "npm": "3.x" }
By design, Cloud Foundry applications on IBM Bluemix are limited to a disk quota of 2 GB (the default is 1 GB). Usually, if a cloud application needs more than 1 GB (and even 1 GB is a lot for a cloud application...), it should be redesigned according to cloud patterns: break it down into microservices, and use external storage services if it simply needs static storage (for example, the Object Storage service on Bluemix).
You also have to consider that a cloud application's filesystem is unreliable: the application itself could be automatically redeployed to a different virtual environment without any visible sign to end users.
Even logs should be collected by external services (by routing the log stream) if you need to keep them safe; otherwise they will be reset as soon as the application is restarted on a different cluster node.

Copying a virtual machine data drive in Microsoft Azure

Added more details at the bottom of the question.
We are testing deployment scenarios in Azure VM preview and have run into an issue.
Here is our scenario: we have a software stack that we use on all of our servers. We have created an image with all of that stack installed on an attached data drive, and an image of the VM that we can use as a template. Now what we want to do is create a VM based on that template, create a copy of the data drive, and attach it to the newly created VM, all in an automated manner.
Our problem is that while we have found lots of information about creating drives, we can't find any guidance on how to copy the data drive using Azure PowerShell.
Any thoughts, code, or RTFMs happily accepted.
Cheers,
Terence
We have successfully created an operating system image that we can use to create VMs. But there is a data disk that holds our standard software stack that we want to reuse by copying it across VMs. The scenario that we are trying to implement is:
1. Create a VM from a standard VM image - call it PBIMaster.
2. Attach a disk as F: to that VM, called PBIMasterDisk.
3. Install all of the software required for our app on F: (too big for the OS disk, and besides, sticking it on the OS disk seems messy).
4. Build an image from PBIMaster, call it PBIMasterImage, and save it.
5. Create a new VM from the PBIMaster image and call it Node1.
6. Copy PBIMasterDisk to a new Azure disk, call it Node1SoftwareDisk.
7. Attach Node1SoftwareDisk to Node1 as F:.
8. Since the image has the correct registry settings from the previous installs, our stack is ready to go.
9. Add appropriate endpoints.
10. Rinse and repeat for each additional node.
Hopefully that makes our scenario clearer.
Thanks.
If I understood your objective correctly, you have already uploaded two VHDs to your subscription and you have also created a VM based on your OS disk (VHD1):
OS Disk (VHD1)
Data Disk (VHD2)
Now you want to copy VHD2 to VHD3 and then attach VHD3 to your VM (which is based on the OS disk) via PowerShell.
As of now, there is no PowerShell command that will let you copy a data disk (VHD2) to another data disk (i.e. VHD3).
I haven't tried it, but you can use the approach in the following post to copy your data disk:
http://blogs.msdn.com/b/windowsazurestorage/archive/2012/06/12/introducing-asynchronous-cross-account-copy-blob.aspx
This method copies blobs directly at the cloud storage level, so there is no bandwidth usage toward on-premises and potentially zero cost if you stay within the same data center. Try using the same subscription and see if that solves your problem.
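For what it's worth, the operation described in that post is a server-side, asynchronous Copy Blob. Purely as an illustrative sketch (using today's Python storage SDK rather than PowerShell, with placeholder account, container and blob names), it looks roughly like this:

from azure.storage.blob import BlobServiceClient  # pip install azure-storage-blob

src = BlobServiceClient.from_connection_string("<source-account-connection-string>")
dst = BlobServiceClient.from_connection_string("<destination-account-connection-string>")

source_blob = src.get_blob_client(container="vhds", blob="PBIMasterDisk.vhd")
dest_blob = dst.get_blob_client(container="vhds", blob="Node1SoftwareDisk.vhd")

# Server-side, asynchronous copy: the data never leaves the data center,
# so nothing is downloaded to your machine. A private source blob would
# need a SAS token appended to source_blob.url.
copy = dest_blob.start_copy_from_url(source_blob.url)
print(copy["copy_status"])  # 'pending' until the service finishes the copy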