In Google Cloud, Compute Engine persistent disk vs. Cloud Storage: which should I use and why? - google-cloud-storage

Hello all. I am doing a startup, and for the cloud server I am using Google Cloud Platform to launch my Android app. I was reading through Google's docs, but I can't figure out where I am going to put my scripts, because I came across two things: Cloud Storage and Compute Engine's persistent disks. I also googled this question, and it led me here: https://cloud.google.com/compute/docs/faq#pdgcs. I read it, but now I am more confused. Initially I thought that when I create my instance there would be a memory selection field and also a disk selection field, so all of my data (my NoSQL data and my scripts) would live on my Compute Engine disk. But after reading about Cloud Storage, I am wondering why they even have these two types of storage; wouldn't it be a lot easier to keep storage in one place?
If anyone knows about this, please answer. Also, if you think this question is lacking in detail, I'm sorry; I am a newbie at cloud server hosting, so I don't know anything about this. I would really appreciate it if you could enlighten me here.

The question is whether you need global read/write access to this data or whether each Compute Engine instance will read/write its own data individually.
If you need global access, Cloud Storage is the solution. If only local access is needed, go with persistent disks, as they have lower latency.
From what you described, it looks to me that you probably want to go with Persistent disks.
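To make the difference concrete, here is a minimal PHP sketch of what "local" versus "global" access looks like in code. The mount path, bucket name, and the use of the google/cloud-storage Composer package are assumptions on my part, not something from your setup:

```php
<?php
require 'vendor/autoload.php'; // assumes the google/cloud-storage package

use Google\Cloud\Storage\StorageClient;

$json = json_encode(['score' => 42]); // example payload

// A persistent disk is attached to a single instance and mounted into its
// filesystem, so you use it with ordinary file I/O (mount path hypothetical):
file_put_contents('/mnt/disks/data/user-scores.json', $json);

// Cloud Storage is a global object store reached over HTTP; any instance
// (or any machine with credentials) sees the same objects:
$storage = new StorageClient();
$storage->bucket('my-app-bucket')->upload($json, ['name' => 'user-scores.json']);
```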

Related

Does Google Cloud Storage Back Up Data and What Are Prices for 1 TB?

I am wondering whether Google Cloud backs up its servers. For example, has anyone ever lost their data, or is there any off chance that this can happen? Does Google have a strategy to prevent this, or is it up to us to make backups?
Also, I have another question. I have a site with 1 TB of downloadable files. I'm wondering about the cost each month and the bandwidth prices.
Thanks
From the Storage Classes documentation page:
All storage classes support:
Redundant storage. Cloud Storage is designed for 99.999999999% durability.
As for pricing, you can plug your numbers into the Google Cloud Platform Pricing Calculator or look directly at the Google Cloud Storage Pricing page.

How to store and organize uploaded images on a web server?

I am writing a server that allows users to upload images. It appears that most people tend to store those files on the filesystem directly.
My question is whether that really is the way to do it. I'm not familiar with the capacity of a server, but what I'm curious about is, for example, how to make sure that the server does not run out of (hard drive) space.
I would also like to know how one would organize those files for many different users. Is it enough to just store them like war/images/<user-database-id>/<uuid-for-image>.(jpeg|png), using the user ID from the database, or are there a lot more things to consider when it comes to storing images?
I think your best bet would be to use a cloud storage system such as Amazon S3, Google Cloud Storage, Rackspace, or MS Azure.
Using a path like the one you suggested ought to be possible, but you could also omit the user-database-id if the database already gives you a list of objects owned by that user.
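If you do go with the filesystem, here is a minimal sketch of the layout you suggested. The base directory, form field name, and the use of random_bytes() in place of a real UUID library are hypothetical choices for illustration:

```php
<?php
// Build a path like <base>/<user-database-id>/<uuid-for-image>.jpeg and
// move an uploaded file there.
$baseDir = '/var/www/app/images';     // hypothetical base directory
$userId  = 42;                        // the user's database ID
$uuid    = bin2hex(random_bytes(16)); // stands in for a real UUID generator
$ext     = 'jpeg';                    // derive from the validated file type

$dir = sprintf('%s/%d', $baseDir, $userId);
if (!is_dir($dir)) {
    mkdir($dir, 0750, true); // create the per-user directory on first upload
}

move_uploaded_file($_FILES['image']['tmp_name'], sprintf('%s/%s.%s', $dir, $uuid, $ext));
```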

Uploading images to a PHP app on GCE and storing them on GCS

I have a PHP app running on several instances of Google Compute Engine (GCE). The app allows users to upload images of various sizes, resizes the images, and then stores the resized images (and their thumbnails) on the storage disk and their metadata in the database.
What I've been trying to find is a method for storing the images on Google Cloud Storage (GCS) through the PHP app running on GCE instances. A similar question was asked here, but no clear answer was given there. Any hints or guidance on the best way to achieve this are highly appreciated.
You have several options, all with pros and cons.
Your first decision is how users upload data to your service. You might choose to have customers upload their initial data to Google Cloud Storage, where your app would then fetch it and transform it, or you could choose to have them upload it directly to your service. Let's assume you choose the second option, and you want users to stream data directly to your service.
Your service then transforms the data into a different size. Great. You now have a new file. If this was video, you might care about streaming the data to Google Cloud Storage as you encode it, but for images, let's assume you want to process the whole thing locally and then store it in GCS afterwards.
Now we have to get a file into GCS. It's a PHP app, and so, as you have identified, your three main options are:
Invoke the GCS JSON API through the Google API PHP client.
Invoke either the GCS XML or JSON API via custom code.
Use gsutil.
Using gsutil will be the easiest solution here. On GCE, it automatically picks up appropriate credentials for your service account, and it's got several useful performance optimizations and tuning that a raw use of the API might not do without extra work (for example, multithreaded uploads). Plus it's already installed on your GCE instances.
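For instance, here is a rough sketch of shelling out to gsutil from PHP once the resized image has been written locally (the paths and bucket name are placeholders):

```php
<?php
// Copy a locally written file to GCS by shelling out to gsutil, which is
// preinstalled on GCE and already carries the instance's credentials.
$local  = '/tmp/resized/photo-123.jpg';       // hypothetical local path
$remote = 'gs://my-app-images/photo-123.jpg'; // hypothetical bucket/object

exec(
    sprintf('gsutil cp %s %s 2>&1', escapeshellarg($local), escapeshellarg($remote)),
    $output,
    $exitCode
);

if ($exitCode !== 0) {
    error_log('gsutil upload failed: ' . implode("\n", $output));
}
```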
The upside of the PHP API is that it's in-process and offers more fine-grained, programmatic control. As your logic gets more complicated, you may eventually prefer this approach. Getting it to perform as well as gsutil may take some extra work, though.
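As a sketch of the in-process route, here is roughly what an upload looks like with the standalone google/cloud-storage Composer package (a different package from the general Google API PHP client mentioned above, which drives the same JSON API at a lower level; the bucket and file names are placeholders):

```php
<?php
require 'vendor/autoload.php'; // assumes the google/cloud-storage package

use Google\Cloud\Storage\StorageClient;

// On GCE the client picks up the instance's service-account credentials
// automatically, much as gsutil does.
$storage = new StorageClient();
$bucket  = $storage->bucket('my-app-images'); // hypothetical bucket

// Stream the resized image straight from disk into GCS.
$bucket->upload(
    fopen('/tmp/resized/photo-123.jpg', 'r'),
    ['name' => 'photo-123.jpg']
);
```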
This choice is comparable to copying files via SCP with the "scp" command line application or by using the libssh2 library.
tl;dr: Using gsutil is a good idea unless you need to handle interactions with GCS more directly.

A Local version of Azure Table Storage

OK, first of all, I love Azure and Table Storage.
We're starting a new greenfield project which will be hosted as a SaaS model in the cloud. Azure Table Storage is ideal for what we need, but one thing stopping us from taking this route is the possibility of someone having to have the application deployed to their local web server rather than a cloud deployment.
This is something I'd rather avoid personally, but unfortunately some people insist that their local setup is more secure than any data centre out there.
What I'd really like to know is whether someone has created a local implementation of Azure Table Storage. I know Microsoft has the emulator, which in theory could be used (it stores the data in SQL, which may be slow).
Anyone used the emulator for an internal deployment?
I'm happy to look at creating a wrapper for Azure Table Storage using their REST APIs, but I didn't want to do something that's already been done.
Alternatively, can anyone recommend an alternative? I know there's RavenDB and MongoDB, which also look good, but I've not had exposure to how well they handle load or when to scale them out.
The emulator is designed to simplify testing - it is definitely not intended to be used as part of a production deployment.
Is it possible to embrace both a cloud-only design (Azure web role and storage) and a hybrid design whereby your application can be hosted within your own web server yet still use Azure Storage?
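To illustrate that hybrid angle, a minimal sketch: the same code can target the emulator in development and real Azure Storage in production purely by swapping the connection string (the environment-variable name and account values here are hypothetical; UseDevelopmentStorage=true is the emulator's well-known shortcut):

```php
<?php
// Hybrid deployment sketch: one codebase, two storage backends, switched
// purely by connection string. Env-var name and account values hypothetical.
$connectionString = getenv('APP_ENV') === 'production'
    ? 'DefaultEndpointsProtocol=https;AccountName=myaccount;AccountKey=<key>'
    : 'UseDevelopmentStorage=true'; // routes to the local storage emulator

// Hand this string to whichever Table Storage client library you use; the
// rest of the application code stays identical in both environments.
```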
Jason

Blob Storage Server with REST API

I am looking for a solution similar to Amazon S3 or Azure Blob Storage that can be hosted internally instead of remotely. I don't necessarily need to scale out, but I'd like to create a central location where my growing stable of apps can take advantage of file storage. I would also like to formalize file access. Does anybody know of anything like the two services I mentioned above?
I could write this myself, but if something exists then I'd rather not reinvent the wheel, unless that wheel has corners :)
The only real alternative to services like S3 and Azure Blob Storage that I've seen is Swift, though if you don't plan to scale out, it may be overkill for your specific scenario.
The OpenStack Object Store project, known as Swift, offers cloud storage software so that you can store and retrieve lots of data in virtual containers. It's based on the Cloud Files offering from Rackspace.
The OpenStack Object Storage API is implemented as a set of ReSTful (Representational State Transfer) web services. All authentication and container/object operations can be performed with standard HTTP calls.
http://docs.openstack.org/developer/swift/
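To give a feel for how direct those HTTP calls are, here is a rough PHP/cURL sketch against a Swift endpoint using TempAuth; the host, account, user, key, container, and object names are all hypothetical placeholders:

```php
<?php
// 1. Authenticate: Swift (TempAuth) returns a storage URL and a token
//    in response headers.
$auth = curl_init('http://swift.example.com/auth/v1.0');
curl_setopt_array($auth, [
    CURLOPT_HTTPHEADER     => ['X-Auth-User: myaccount:myuser', 'X-Auth-Key: mykey'],
    CURLOPT_RETURNTRANSFER => true,
    CURLOPT_HEADER         => true, // we need the headers, not just the body
]);
$response = curl_exec($auth);
preg_match('/X-Storage-Url: (\S+)/i', $response, $storageUrl);
preg_match('/X-Auth-Token: (\S+)/i', $response, $token);

// 2. Upload an object: PUT {storage-url}/{container}/{object}.
$put = curl_init($storageUrl[1] . '/myfiles/hello.txt');
curl_setopt_array($put, [
    CURLOPT_CUSTOMREQUEST  => 'PUT',
    CURLOPT_HTTPHEADER     => ['X-Auth-Token: ' . $token[1]],
    CURLOPT_POSTFIELDS     => 'Hello, Swift!',
    CURLOPT_RETURNTRANSFER => true,
]);
curl_exec($put);
```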