How to recover a specific file that was accidentally deleted in an Azure file share - azure-data-lake-gen2

What is the exact process for recovering only the deleted file(s) in an Azure file share on an Azure Storage (ADLS Gen2) account?

ADLS Gen2 now supports a soft delete feature which allows you to retain a deleted file share for at least 7 days; you can increase the retention period as per your requirement.
When you click on Show deleted shares you will be able to see the deleted shares in your storage account.
Click on the 3 dots on the right side of the share which has been deleted and click on the Undelete option.
You can also restore an entire file share or specific files from a restore point created by Azure Backup. Refer to Restore Azure file shares to know more.
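If you prefer the command line, a minimal sketch of the same recovery with the Azure CLI would look like this (the storage account name, share name, and deleted-version ID are placeholders; confirm the parameters with az storage share-rm restore --help):

# List shares, including soft-deleted ones, to find the deleted version ID
az storage share-rm list --storage-account mystorageaccount --include-deleted

# Undelete the share, passing the version ID reported above
az storage share-rm restore --storage-account mystorageaccount --name myshare --deleted-version 01D64EB9886F00C4

Note that soft delete restores the whole share; to get back a single file, undelete the share (or restore from an Azure Backup restore point) and then copy out the file you need.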

Related

Without a retention policy or lifecycle rules, would Google Cloud Storage automatically delete files?

My app uses Google Cloud Storage through Firebase with Java, Angular & Flutter. It stores pictures and such there. Now, a lot of older files recently disappeared from Google Cloud Storage. A test version of my app is probably the culprit. But I want to make sure that I got the storage bucket configured correctly.
Please note that I don't have object versioning enabled. From what I know, it would keep a copy of deleted files around. That's why I plan to enable it in the future. But it doesn't help me with files deleted in the past.
Right now, my storage bucket is configured as follows:
Default storage class: Standard
Object versioning: Off
Retention policy: None
Lifecycle rules: None
So with that configuration, would Google Cloud Storage automatically delete files? Like, say, after a year or so?
No. If you don't ask Cloud Storage to delete your files, your files will stay around forever. There's no expiration of any sort by default. Cloud Storage is a popular tool for long term storage/backup/retention.
If you want to be especially careful not to delete certain objects, retention policies and object holds can be used to make it harder to delete objects by accident. For example, if you wanted to temporarily ensure that your scripts would not delete your most important object, you could run:
gsutil retention temp set gs://my_bucket_name/my_important_file.txt
This would set a "temporary object hold" on the object, which would make it so that my_important_file.txt could not be deleted with the delete command until you released the hold.
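When you no longer need the protection, the hold can be released with the matching subcommand (a small sketch; worth confirming with gsutil help retention):

# Release the temporary hold so the object can be deleted again
gsutil retention temp release gs://my_bucket_name/my_important_file.txt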

Lifecycle Management Policy for Azure Data Lake Gen 2 does NOT remove folders?

I have some files stored in an Azure Data Lake Gen 2 storage account. I've set a lifecycle policy so that files are deleted after 2 days. The files are successfully deleted after that period, but of course, the folders aren't deleted, even when a folder ends up empty.
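For reference, a delete-after-2-days rule of the kind I'm describing looks roughly like this (a sketch only; the rule name, account, and resource group are placeholders):

# policy.json - delete blobs 2 days after their last modification
cat > policy.json <<'EOF'
{
  "rules": [
    {
      "enabled": true,
      "name": "delete-after-2-days",
      "type": "Lifecycle",
      "definition": {
        "actions": {
          "baseBlob": { "delete": { "daysAfterModificationGreaterThan": 2 } }
        },
        "filters": { "blobTypes": [ "blockBlob" ] }
      }
    }
  ]
}
EOF

az storage account management-policy create --account-name mydatalakeaccount --resource-group my-resource-group --policy @policy.json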
Is there some way to instruct the policy to delete the folder once it ends up empty? The problem is that I'm ending up with a lot of orphaned folders after the deletion takes place.
Thanks!

About deleting binary log files on Cloud SQL

I have a question about binary logs on Google Cloud SQL.
The storage used by my Cloud SQL instance is constantly increasing, so I want to delete the binary log files. I have read the documentation about it, but it is not clear whether, when I disable binary logging, the files will be deleted immediately or whether I have to wait about 7 days for them to be deleted. Thank you.
https://cloud.google.com/sql/docs/mysql/backup-recovery/pitr#disk-usage
According to the official documentation:
Disk usage and point-in-time recovery: "The binary logs are automatically deleted with their associated automatic backup, which generally happens after about 7 days."
Diagnosing issues with Cloud SQL instances: "Binary logs cannot be manually deleted. Binary logs are automatically deleted with their associated automatic backup, which generally happens after about seven days."
Therefore you have to wait about 7 days for the binary logs and their associated automatic backup to be deleted.
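If what you want is simply to stop new binary logs from accumulating, binary logging is disabled by patching the instance; a minimal sketch with the gcloud CLI (the instance name is a placeholder, and the flag is worth confirming with gcloud sql instances patch --help):

# Disable binary logging; existing logs are still removed only together
# with their associated automatic backups, roughly 7 days later
gcloud sql instances patch my-instance --no-enable-bin-log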

How can we do an automatic backup of a Compute Engine disk every day in Google Cloud?

I have created an instance in Compute Engine with Windows Server 2012. I can't see any option to take an automatic backup of the instance disk (database) every day. There is a snapshot option, but we need to run it manually. Please suggest a way to back up automatically that can be restored with a single click. If there is any other possibility using Cloud SQL, Cloud Storage, or any other storage, please recommend it.
thanks
There's an API to take snapshots; see the API section here:
https://cloud.google.com/compute/docs/disks/create-snapshots#create_your_snapshot
You can write a simple app or script triggered from cron (or something similar) to take a snapshot periodically.
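A minimal sketch of that cron-driven approach with the gcloud CLI (the disk name, zone, and schedule are placeholders; confirm the flags with gcloud compute disks snapshot --help):

# crontab entry: run the script every day at 02:00
# 0 2 * * * /usr/local/bin/snapshot-disk.sh

# snapshot-disk.sh: snapshot the disk under a dated name
gcloud compute disks snapshot my-windows-disk --zone=us-central1-a --snapshot-names=my-windows-disk-$(date +%Y%m%d)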
There is no built-in provision for automatic backups of Compute Engine disks, but you can do a manual disk backup by creating a snapshot.
The best alternative is to create a bucket and move your files there; Google Cloud buckets have an automated backup facility available.
Cloud Storage and Cloud SQL are your options for automated backups in Google Cloud.

Updating Web Role applications (Azure) without deleting user data

I've got a Web Role on Azure with 2 Applications and 1 Virtual Directory.
1 Application is a backend, where admins can upload files, which are stored in the virtual directory (which is accessed by both applications).
Every time I deploy a new version to Azure, all the uploaded content in the virtual directory is deleted - which is what I don't want!
So how is it possible to publish a new version without deleting all my user-generated files?
I've already managed to update the application with WebDeploy. But this is only possible for the "main" application, and not the 2nd application (which is configured as a Virtual Application).
Thanks
You can't. The web role is recreated on deployment. The same can also happen on hardware failure: Azure redeploys your system if an instance fails, provisioning a clean virtual machine and then deploying your app to it. You should never store data you want to keep on a web role. You need to use blob storage or similar to store files you want to persist.
Virtual directories are stored on the "Application" partition, which is recreated on each upgrade - see this for more information. So the virtual directory folder is not the right place to store stuff you want preserved across upgrades. BTW the "Application" partition only has 1 gigabyte of space, and some of that is used for storing your application binary code, so you may find yourself in a "disk full" situation at some point.
If you want to store some data which you don't mind sacrificing on rare occasions - like cached results - you may use the "local resources" disk for that, which will survive in-place upgrades and reboots. However, it is not guaranteed to be preserved if your VM crashes - for that level of preservation you have to use persistent storage, such as blob storage.
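As a small illustration of keeping user uploads off the role instance, copying a file into blob storage from the command line might look like this (a sketch using today's Azure CLI, which postdates this question; the account, container, and file names are placeholders):

# One-time: create a container for user uploads
az storage container create --name uploads --account-name mystorageaccount

# Upload a user-generated file so it survives redeployment of the role
az storage blob upload --account-name mystorageaccount --container-name uploads --name admin/report.pdf --file ./report.pdf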
Since you are talking about virtual directories and using web deploy to update application outside of the usual Azure package deployment mechanism, it sounds like your architecture/application might be more suited to a persistent VM role rather than a Web role. These are available on Azure in preview only at the moment.
http://www.windowsazure.com/en-us/home/scenarios/virtual-machines/
They let you have persistent storage that will survive a recycle. The storage is actually backed by blob storage, but it looks like a normal disk from the PVM.