Is the data on Google Nearline and Coldline redundantly stored? - google-cloud-storage

Google mentions an average durability of 99.999999999% for the Nearline and Coldline archival solutions, but it does not mention the geo-redundancy of the data. Is the data redundantly stored at multiple facilities? If so, is the redundancy greater or less than that of the 'Regional' or 'Multi-Regional' classes? Apologies if this information is mentioned somewhere in the documentation.

If you look at the Storage Classes page, it says:
All storage classes support:
Redundant storage. Cloud Storage is designed for 99.999999999% durability.
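Not from the thread, but a minimal sketch (assuming the google-cloud-storage Python client and a placeholder bucket name) of how you can check a bucket's storage class and location type yourself, which is where the regional vs. multi-regional distinction shows up:

```python
# Illustrative sketch: inspect a bucket's storage class and location type.
# "my-archive-bucket" is a placeholder name.
from google.cloud import storage

client = storage.Client()  # uses Application Default Credentials
bucket = client.get_bucket("my-archive-bucket")

# storage_class is e.g. "NEARLINE" or "COLDLINE";
# location_type is e.g. "region", "dual-region", or "multi-region",
# which tells you how the data is geo-replicated.
print(bucket.storage_class, bucket.location, bucket.location_type)
```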

Related

Does Google Cloud Storage offer support for QUIC or HTTP/3?

I'm wondering if it's possible to use GCS for low latency object storage. Specifically, does it support access over HTTP/3 and would this reduce latency?
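No answer was given for this one, but one rough way to probe it yourself is to look at the Alt-Svc response header, which is how servers advertise HTTP/3 support. The sketch below (using the Python requests library, which itself speaks HTTP/1.1) only shows what the endpoint advertises, not measured latency:

```python
# Unofficial check: servers that support HTTP/3 typically advertise it
# via the Alt-Svc response header (e.g. 'h3=":443"; ma=...').
import requests

resp = requests.head("https://storage.googleapis.com", timeout=10)
print(resp.headers.get("Alt-Svc"))
```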

Does Google Cloud Storage Backup and What Are Prices For 1Tb?

I am wondering whether Google Cloud backs up its servers. For example, has anyone ever lost their data, or is there any chance that this could happen? Does Google have a strategy to prevent this, or is it up to us to make backups?
Also, I have another question. I have a site with 1TB of downloadable files. What would the monthly storage cost and the bandwidth prices be?
Thanks
From the Storage Classes documentation page:
All storage classes support:
Redundant storage. Cloud Storage is designed for 99.999999999% durability.
As for pricing, you can plug your numbers into the Google Cloud Platform Pricing Calculator or look directly at the Google Cloud Storage Pricing page.
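As a back-of-the-envelope illustration (the per-GB rates below are placeholders, not actual Google prices; take current numbers from the pricing page or the calculator), the estimate is simple arithmetic:

```python
# Rough estimate only; the rates are hypothetical placeholders.
storage_gb = 1024        # ~1 TB of files
egress_gb = 500          # assumed monthly download volume, adjust to your traffic

storage_rate = 0.020     # hypothetical $/GB-month
egress_rate = 0.12       # hypothetical $/GB of internet egress

monthly_cost = storage_gb * storage_rate + egress_gb * egress_rate
print(f"Estimated monthly cost: ${monthly_cost:.2f}")
```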

In google cloud Compute engine persistent disk v/s cloud storage which should i use and why?

Hello all, I am doing a startup and am using Google Cloud Platform as the cloud server for launching my Android app. I was reading through Google's docs, but I can't figure out where to put my scripts, because I came across two things: Cloud Storage and Compute Engine's Persistent disks. I also googled this question and it led me here: https://cloud.google.com/compute/docs/faq#pdgcs. I read it, but now I am more confused. Initially I thought that when I create my instance there would be a memory selection and a disk selection field, so all of my data (my NoSQL data and my scripts) would live on the Compute Engine disk. But after reading about Cloud Storage, I'm wondering why there are two types of storage at all; wouldn't it be easier to keep storage in one place?
If anyone knows about this, please answer. If you think this question lacks detail, I'm sorry; I am a newbie to cloud server hosting, so I would really appreciate it if you could enlighten me here.
The question is whether you need global read/write access to this data or whether each Compute Engine instance will read/write its own data individually.
If you need global access, Cloud Storage is the solution. If only local access is needed, go with Persistent disks, as they have lower latency.
From what you described, it sounds like you probably want to go with Persistent disks.
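To make the distinction concrete, here is an illustrative sketch (not from the original answer; the bucket and file names are placeholders): a persistent disk is just a filesystem visible to one VM, while a Cloud Storage object is reachable from any instance, or from outside GCP, through the API:

```python
from google.cloud import storage

# Local access: a persistent disk is an ordinary filesystem attached to one VM.
with open("/mnt/disks/data/scores.json", "w") as f:
    f.write('{"player": 1, "score": 42}')

# Global access: a Cloud Storage object is reachable from anywhere via the API.
bucket = storage.Client().bucket("my-app-assets")
bucket.blob("scores.json").upload_from_filename("/mnt/disks/data/scores.json")
```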

Suggestions about file storage in Amazon AWS

I'm developing an ASP.NET MVC project that will be hosted on Amazon AWS, but I have some questions about storing clients' files. The documentation from Amazon is not clear to me, and I'm looking for some direction and experiences here.
1 - Each client has a few files with low disk space requirements and low update frequency but very high access frequency (like brand images and even sensitive files such as certificates). Is it appropriate to store these files in the app_data folder on the web server?
2 - The most critical for me are sensitive documents (from hundreds to tens of thousands per client, mostly signed XML files). These files have a medium read-access frequency but a very high rate of creation. One solution I found is MongoDB, which gives me some freedom to manage the storage policy and allows easy external backups, but I'm not sure about that. Other options are to use Amazon storage and handle all these files and GBs there with a lot of folders, or maybe use a regular database and save the files as XML or binary.
My concerns are the amount of data, security, and reliability in case of disaster, as most of these documents have legal value.
You could, but storing them locally violates the shared-nothing architecture and would limit your scaling options. Amazon S3 is a good option here. You can make some files public and serve them directly from S3 (or with CloudFront), and keep others private, providing access via signed URLs.
Again, you can put the files on S3 and make them private. You will still probably store references to the files in your database. Generally, it's not a great idea to store large blob files in a database, since databases are often not well optimized for accessing them.
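As a sketch of the signed-URL approach mentioned above (assuming boto3; the bucket and key names are placeholders), you would persist only the object key in your database and generate a short-lived URL on demand:

```python
import boto3

s3 = boto3.client("s3")

# The object key is what you store in your database, not the file itself.
key = "clients/42/certificates/signing-cert.xml"

# Hand out a short-lived signed URL for the private object.
url = s3.generate_presigned_url(
    "get_object",
    Params={"Bucket": "my-private-docs", "Key": key},
    ExpiresIn=3600,  # link valid for one hour
)
print(url)
```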

AWS (or other cloud solution) for migrating large data?

I have organized, non-relational data that lives in both a file system and a SQL database. There is an application that queries both sources.
What would be some cloud solutions for storing this data, which equates to about 1TB? I'd like to be able to migrate this data into the cloud solution and alter the application to query the data in the cloud.
So far, I've looked at AWS options: SimpleDB, DynamoDB, and MongoDB on an EC2 instance with EBS for increased storage.
I've also looked into Azure's Table Storage.
SimpleDB has a 10GB limit. DynamoDB is on SSD and might be overkill for my needs. Did I miss something? Are MongoDB on AWS or Azure Table Storage suitable options?
I think the solution depends heavily on your data access patterns.
I've used Azure Table Storage and it's great for many things. I've used DynamoDB and it's also good for quite a few things. Both are good table stores, but both have restrictions around read indexes, querying, and transactions. That's sometimes a show stopper. Both will require retooling your data and all the dependent applications.
For your file storage:
(Cheapest, slowest) Migrate your files to a blob store (Azure Blob Storage or AWS S3) and leave them there. Use S3 as a drive for file access. This is slow, but cheap.
(Performant) Use an EC2 instance with EBS drives and store your files there. Access the data on the local file system. This is durable and performant.
For your relational data, leave it relational and store it in a cloud relational database service (RDS + MySQL, RDS + SQL Server, SQL Azure, etc.).
There's no need to change your applications and their data access patterns when moving to the cloud.
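As a rough sketch of the "cheapest" file-storage option above (assuming boto3; the bucket name and local path are placeholders), the migration itself can be a simple walk-and-upload loop, with the relational data left in RDS unchanged:

```python
import os
import boto3

s3 = boto3.client("s3")
local_root = "/data/files"      # where the ~1 TB of files live today (placeholder)
bucket = "my-migrated-data"     # placeholder bucket name

for dirpath, _dirnames, filenames in os.walk(local_root):
    for name in filenames:
        path = os.path.join(dirpath, name)
        # Preserve the existing folder layout as S3 object keys.
        key = os.path.relpath(path, local_root)
        s3.upload_file(path, bucket, key)
```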