Does Google Cloud Storage offer support for QUIC or HTTP/3?

I'm wondering if it's possible to use GCS for low-latency object storage. Specifically, does it support access over HTTP/3, and would this reduce latency?
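One way to check this empirically: servers signal HTTP/3 support via the `alt-svc` response header (e.g. `h3=":443"`). Here's a minimal sketch; Node's built-in fetch only speaks HTTP/1.1 and HTTP/2, so this doesn't negotiate HTTP/3 itself, it only inspects what the server advertises to capable clients:

```typescript
// Quick check: does storage.googleapis.com advertise HTTP/3?
// Servers signal HTTP/3 support via the `alt-svc` response header.
async function checkAltSvc(url: string): Promise<void> {
  const res = await fetch(url, { method: "HEAD" });
  const altSvc = res.headers.get("alt-svc");
  console.log(altSvc ? `alt-svc: ${altSvc}` : "no alt-svc header; HTTP/3 not advertised");
}

checkAltSvc("https://storage.googleapis.com/").catch(console.error);
```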

Related

Does Google Cloud Storage Back Up Data, and What Are the Prices for 1 TB?

I am wondering: does Google Cloud back up its servers? For example, has anyone ever lost their data, or is there any off chance that this can happen? Does Google have a strategy to prevent this, or is it up to us to make backups?
Also, I have another question. I have a site with 1 TB of downloadable files. I'm wondering about the monthly cost and the bandwidth prices.
Thanks
From the Storage Classes documentation page:
All storage classes support:
Redundant storage. Cloud Storage is designed for 99.999999999% durability.
As for pricing, you can plug your numbers into the Google Cloud Platform Pricing Calculator or look directly at the Google Cloud Storage Pricing page.
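For a rough idea of the arithmetic, here's a back-of-the-envelope sketch. The rates below are illustrative assumptions only, not current prices; plug your real numbers and region into the pricing calculator:

```typescript
// Back-of-the-envelope cost estimate for 1 TB of downloadable files.
// Rates are illustrative placeholders; check the Cloud Storage pricing
// page for current numbers in your region.
const STORAGE_RATE_PER_GB = 0.026; // assumed Standard storage $/GB/month
const EGRESS_RATE_PER_GB = 0.12;   // assumed internet egress $/GB

function monthlyCost(storedGB: number, downloadedGB: number): number {
  return storedGB * STORAGE_RATE_PER_GB + downloadedGB * EGRESS_RATE_PER_GB;
}

// 1 TB stored, and (say) 1 TB downloaded per month:
console.log(`~$${monthlyCost(1024, 1024).toFixed(2)} / month`);
```

Note that for a download-heavy site, egress typically dominates the bill, not storage.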

Is the data on Google Nearline and Coldline redundantly stored?

Google mentions an average durability of 99.999999999% for the Nearline and Coldline archival solutions, but it does not mention the geo-redundancy of the data. Is the data redundantly stored at multiple facilities? If yes, is the redundancy greater or less than the 'Regional' or 'Multi-Regional' class? Sorry if this information is mentioned somewhere in the documentation.
If you look on the Storage Classes page, it says:
All storage classes support:
Redundant storage. Cloud Storage is designed for 99.999999999% durability.

In Google Cloud, Compute Engine Persistent Disk vs. Cloud Storage: which should I use and why?

Hello all. I am doing a startup, and I am using Google Cloud Platform as the cloud server to launch my Android app. I was reading through Google's docs, but I can't figure out where to put my scripts, because I came across two things: Cloud Storage and Compute Engine's Persistent Disks. I also googled this question and it led me here: https://cloud.google.com/compute/docs/faq#pdgcs. I read it, but now I am more confused. Initially I thought that when I create my instance there would be a memory selection and a disk selection field, so all of my data (my NoSQL data and my scripts) would live inside my Compute Engine disk. But then I read about Cloud Storage, and now I am wondering why they even have these two types of storage; wouldn't it be a lot easier to have storage in only one place?
If anyone knows about this, please answer. Sorry if this question is light on detail; I am a newbie in cloud server hosting, so I don't know anything about this. It would be really appreciated if you could enlighten me here.
The question is whether you need global read/write access to this data or whether each Compute Engine instance will read/write its own data individually.
If you need global access, Cloud Storage is the solution. If only local access is needed, go with Persistent Disks, as they have lower latency.
From what you described, it looks to me like you probably want to go with Persistent Disks.
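To make the difference concrete, here's a sketch of the two access patterns, assuming the official @google-cloud/storage Node.js client; the bucket name and paths are placeholders:

```typescript
import { Storage } from "@google-cloud/storage";
import { readFile } from "node:fs/promises";

// Globally shared data: any instance (or any machine with credentials)
// can read the same object through the Cloud Storage API.
async function readShared(): Promise<Buffer> {
  const storage = new Storage();
  const [contents] = await storage
    .bucket("my-app-assets")          // placeholder bucket name
    .file("scripts/init.js")          // placeholder object path
    .download();
  return contents;
}

// Instance-local data: only this VM sees its persistent disk, but access
// is a plain filesystem read with much lower latency.
async function readLocal(): Promise<Buffer> {
  return readFile("/mnt/disks/data/scripts/init.js");
}
```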

Blob Storage Server with REST API

I am looking for a solution similar to Amazon S3 or Azure Blob Storage that can be hosted internally instead of remotely. I don't necessarily need to scale out, but I'd like to create a central location where my growing stable of apps can take advantage of file storage. I would also like to formalize file access. Does anybody know of anything like the two services I mentioned above?
I could write this myself, but if something exists then I'd rather not reinvent the wheel, unless that wheel has corners :)
The only real alternative to services like S3 and Azure Blob Storage that I've seen is Swift, though if you don't plan to scale out it may be overkill for your specific scenario.
The OpenStack Object Store project, known as Swift, offers cloud storage software so that you can store and retrieve lots of data in virtual containers. It's based on the Cloud Files offering from Rackspace.
The OpenStack Object Storage API is implemented as a set of RESTful (Representational State Transfer) web services. All authentication and container/object operations can be performed with standard HTTP calls.
http://docs.openstack.org/developer/swift/
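To give a feel for that API, here's a minimal sketch of PUT/GET against a Swift endpoint. The endpoint URL and token below are placeholders; real code would obtain the token from Swift's auth service first:

```typescript
// Swift object API sketch: objects live at /v1/{account}/{container}/{object},
// authenticated with an X-Auth-Token header. Endpoint and token are placeholders.
const SWIFT_URL = "http://swift.example.internal:8080/v1/AUTH_myaccount";
const TOKEN = "placeholder-auth-token";

async function putObject(container: string, name: string, body: string): Promise<void> {
  const res = await fetch(`${SWIFT_URL}/${container}/${name}`, {
    method: "PUT",
    headers: { "X-Auth-Token": TOKEN },
    body,
  });
  if (res.status !== 201) throw new Error(`PUT failed: ${res.status}`);
}

async function getObject(container: string, name: string): Promise<string> {
  const res = await fetch(`${SWIFT_URL}/${container}/${name}`, {
    headers: { "X-Auth-Token": TOKEN },
  });
  if (!res.ok) throw new Error(`GET failed: ${res.status}`);
  return res.text();
}
```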

Sharing objects between Node.js servers with a memcached / Couchbase cluster

I was looking for a way to share objects across a cluster of several nodes, and after a bit of research I thought it would be best to use Redis pub/sub. Then I saw that Redis doesn't support clustering yet, which means that a system based on Redis will have a single point of failure. Since high availability is a key feature for me, this solution is not applicable.
At the moment, I am looking into 2 other solutions for this issue:
Memcached
Couchbase
I have 2 questions:
On top of which solution would it be more efficient to simulate pub/sub?
Which is better when keeping clusters in mind?
I was hoping that someone out there has faced similar issues and can share their experience.
I think it's a bad idea to use memcached or Couchbase for pub/sub. Neither solution provides built-in pub/sub functions, and implementing pub/sub on the app side can generate a lot of ops/sec against the memcached/Couchbase server, so you'll end up with slow performance.
Couchbase persists data to disk, so for temporary storage it's better to use memcached. It will be faster and won't load your disk.
If you can avoid that "pub/sub" and use memcached/Couchbase just as simple HA shared key-value storage, do it. It will be much better than pub/sub.
When you install Couchbase Server it provides 2 types of buckets: couchbase (with disk persistence, the ability to create views, etc.) and memcached (in-memory key-value storage only). Both types of buckets act the same way in clusters. Couchbase also supports memcached API calls, so you don't need to change code to test both variants.
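As a sketch of the plain key-value approach, assuming the memjs client (any memcached-protocol client, or a Couchbase memcached bucket, works the same way); the host names are placeholders:

```typescript
import { Client } from "memjs";

// Simple shared key-value usage (no pub/sub): every Node.js server in the
// cluster reads and writes the same keys through the memcached protocol.
const cache = Client.create("cache1.internal:11211,cache2.internal:11211");

async function demo(): Promise<void> {
  await cache.set("session:42", JSON.stringify({ user: "alice" }), { expires: 600 });
  const { value } = await cache.get("session:42");
  console.log(value ? JSON.parse(value.toString()) : "miss");
}

demo().finally(() => cache.close());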
I've tried to use a memcached provider for socket.io "pub/sub" sharing, but as I mentioned before, it's ugly. In my case there were a few Node.js servers with socket.io, so instead of sharing I implemented something like "p2p messaging" between the servers on top of sockets.
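A minimal sketch of what such p2p messaging could look like, using node:net with newline-delimited JSON; hosts and ports are placeholders, and real code would need reconnects and error handling:

```typescript
import net from "node:net";

// Each server listens on a TCP port and pushes newline-delimited JSON
// messages directly to its known peers, instead of going through a
// shared store. Peer addresses are placeholders.
const PEERS = [{ host: "10.0.0.2", port: 7000 }, { host: "10.0.0.3", port: 7000 }];

// Receive messages from peers.
net.createServer((socket) => {
  socket.on("data", (chunk) => {
    for (const line of chunk.toString().split("\n").filter(Boolean)) {
      console.log("peer message:", JSON.parse(line));
    }
  });
}).listen(7000);

// Broadcast a message to every peer.
function broadcast(msg: object): void {
  for (const peer of PEERS) {
    const socket = net.connect(peer.port, peer.host, () => {
      socket.end(JSON.stringify(msg) + "\n");
    });
    socket.on("error", () => { /* peer down; skip */ });
  }
}

broadcast({ type: "session-update", room: "lobby", users: 3 });
```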
UPD: If you have such a big amount of data, it may be better not to have one shared storage, but to use something like sharding with "predictable" data locations.
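A sketch of that idea: hash each key to pick a shard, so any server can compute where a key lives without a shared lookup table (shard addresses are placeholders):

```typescript
import { createHash } from "node:crypto";

// "Predictable data location": the shard for a key is a pure function of
// the key, so every server agrees on it independently.
const SHARDS = ["cache1.internal:11211", "cache2.internal:11211", "cache3.internal:11211"];

function shardFor(key: string): string {
  const digest = createHash("md5").update(key).digest();
  return SHARDS[digest.readUInt32BE(0) % SHARDS.length];
}

console.log(shardFor("session:42")); // always the same shard for this key
```

Note that plain modulo remaps most keys whenever the shard list changes; consistent hashing avoids that if you expect to add or remove shards.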