What is "Globally" Unique for GCS Bucket Names? - google-cloud-storage

Question
When Google Cloud Storage says that all GCS Buckets share a common namespace (paragraph 2 here)
When you create a bucket, you specify a globally-unique name, a geographic location where the bucket and its contents are stored, and a default storage class. The default storage class you choose applies to objects added to the bucket that don't have a storage class specified explicitly.
and (bullet 1 here)
Bucket names reside in a single Cloud Storage namespace.
This means that:
Every bucket name must be unique.
Bucket names are publicly visible.
If you try to create a bucket with a name that already belongs to an existing bucket, Cloud Storage responds with an error message. However, once you delete a bucket, you or another user can reuse its name for a new bucket. The time it takes a deleted bucket's name to become available again is typically on the order of seconds; however, if you delete the project that contains the bucket, which effectively deletes the bucket as well, the bucket name may not be released for weeks or longer.
Do "single namespace" and "globally" literally mean that, across the entire Google Cloud, regardless of your organization, project, or region, you cannot create any bucket that shares a name with any existing bucket anywhere on the planet?
I have only ever worked on GCP within one organization, and we usually prefix our bucket names with the organization name, though not always. I am not concerned about running out of names or anything like that; I am simply curious what is meant by "globally" and "single namespace", and whether they mean what I think they do.
Given that buckets are usually referenced only by name with the gs:// prefix, I can see how literal global uniqueness is important for a consistent access experience that does not require knowing things like project or organization IDs. Can anybody find a source that confirms this?
Odd Implication/thought experiment
If this is the case, something I do wonder, given that...
There appears to be no cost for creating empty buckets you do not use, up to 5,000 buckets a month; beyond that it is $0.05 per 10,000 buckets (source)
There is no limit to the number of buckets you can create in a project (source)
... what is to stop me from creating free-tier projects and iterating over ALL possible GCS bucket names (obviously it would take forever and be quite impractical), occupying every bucket name (or at least every human-readable one), and selling those names to organizations that wish to buy them? I suppose the number of possibilities is astronomical, so even at $0.05 per 10,000 (with the first 5,000 a month free), profitability would require knowing the ratio of names that would be bought to names available in order to set rational pricing, and there is not enough money in the world to create all of the buckets at once anyway. Still, I think about these things.
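For what it's worth, the arithmetic can be sketched. This is purely illustrative: it assumes the pricing quoted above ($0.05 per 10,000 buckets per month beyond the first 5,000 free) and restricts the name space to 6-character lowercase-plus-digit names, a tiny slice of what bucket naming actually allows:

```python
# Back-of-the-envelope cost of squatting on bucket names.
# Assumptions (for illustration only): $0.05 per 10,000 buckets per
# month beyond the first 5,000 free, and only names that are exactly
# 6 characters drawn from lowercase letters and digits.
ALPHABET = 26 + 10   # lowercase letters plus digits
NAME_LEN = 6

total_names = ALPHABET ** NAME_LEN   # 36^6 possible 6-character names
free_per_month = 5_000
cost_per_10k = 0.05

billable = max(total_names - free_per_month, 0)
monthly_cost = billable / 10_000 * cost_per_10k

print(total_names)              # 2176782336
print(round(monthly_cost, 2))   # 10883.89
```

Even this tiny slice of the namespace runs to billions of names and five figures a month, which supports the "not enough money in the world" intuition.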

Yes, "single namespace" and "globally" mean what you said: all GCS buckets must have unique names, regardless of organization, project, or region.
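One practical consequence: because the namespace is flat, a gs:// URI needs no project or organization component; the bucket name alone identifies the bucket. A minimal sketch of splitting such a URI (the helper name is my own, not part of any SDK):

```python
def parse_gs_uri(uri: str) -> tuple[str, str]:
    """Split a gs://bucket/object URI into (bucket, object_name).

    This works precisely because bucket names are globally unique:
    no project or organization qualifier appears in the URI.
    """
    if not uri.startswith("gs://"):
        raise ValueError(f"not a gs:// URI: {uri!r}")
    bucket, _, object_name = uri[len("gs://"):].partition("/")
    return bucket, object_name

print(parse_gs_uri("gs://my-org-data/raw/2020/logs.json"))
# ('my-org-data', 'raw/2020/logs.json')
```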

Related

Google Storage - Backup Bucket with Different Key For Each File

I need to backup a bucket in which every file is encrypted with a different key to a different bucket on Google Storage.
I want to create a daily snapshot of the data so that, in case the data is deleted, I can easily recover it.
My Research:
Using gsutil cp -r: because every file has a different key, it does not work
Using Google Transfer | cloud: does not work on such buckets for the same reason
Listing all the files in the bucket, fetching all the keys from the database, and copying each file: this will probably be very expensive because I have a lot of files and I want to do it daily
Object versioning - Does not cover a case where the bucket has been completely deleted
Are there any other solutions for that problem?
Unfortunately, as you mentioned, the only option would indeed be your third choice. As clarified in the official documentation here, downloading encrypted data is a restricted feature, so you won't be able to download/snapshot the data without fetching the keys and then copying the files.
Indeed, this will probably have a big impact on your quota and pricing, since you will be performing multiple operations every day, for multiple files, which affects several aspects of the pricing. However, this seems to be the only available way right now. In addition, I would recommend you raise a Feature Request in Google's Issue Tracker, so they can look into the possibility of implementing this in the future.
Let me know if this clarified your doubts!
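A sketch of the bookkeeping for that third option. The key store here is a plain mapping standing in for the database; the actual per-object copy (download with the object's key, re-upload to the backup bucket) is only described in a comment, since the exact client calls depend on your setup:

```python
def plan_backup(object_names, key_store):
    """Pair every object in the source bucket with its per-object key.

    object_names: iterable of object names (a real job would page
    through the bucket listing to get these).
    key_store: mapping of object name -> encryption key, standing in
    for the database mentioned in the question.
    Returns (name, key) copy tasks; fails loudly if a key is missing,
    since an object without its key cannot be copied at all.
    """
    tasks = []
    for name in object_names:
        if name not in key_store:
            raise KeyError(f"no encryption key recorded for {name!r}")
        # The actual copy would download the object with this key and
        # re-upload it to the backup bucket (e.g., via the storage
        # client's per-object encryption-key support); omitted here.
        tasks.append((name, key_store[name]))
    return tasks

print(plan_backup(["a.bin", "b.bin"], {"a.bin": b"k1", "b.bin": b"k2"}))
```

Failing before any copy starts when a key is missing keeps a half-finished snapshot from silently dropping objects.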

Always read latest folder from s3 bucket in spark

Below is what my S3 bucket folder structure looks like:
s3://s3bucket/folder1/morefolders/$folder_which_I_want_to_pick_latest/
$folder_which_I_want_to_pick_latest - This folder can always have an incrementing number for every new folder that comes in, like randomnumber_timestamp
Is there a way I can automate this process by always reading the most recent folder in S3 from Spark in Scala?
The best way to work with that kind of "behavior" is to structure your data with a partitioned approach, like year=2020/month=02/day=12, where every partition is a folder (in the AWS console). That way you can use a simple filter in Spark to determine the latest one. (more info: https://www.datio.com/iaas/understanding-the-data-partitioning-technique/)
However, if you are not allowed to restructure your bucket, the solution could be costly if you don't have a specific identifier and/or reference you can use to calculate your newest folder. Remember that in S3 there is no concept of a folder, only object keys (this is where the / comes from, and the AWS console visualizes the keys as folders), so calculating the highest incremental id in $folder_which_I_want_to_pick_latest will eventually check all the objects stored in the bucket, and every object request in S3 costs. More info: https://docs.aws.amazon.com/AmazonS3/latest/user-guide/using-folders.html.
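If restructuring is off the table, the selection logic itself is simple once you have the keys; the listing is the part that costs. A sketch, assuming the folder names end in a sortable timestamp as in the question (the key list here is hand-written; a real job would page through the bucket listing):

```python
def latest_prefix(keys, base="folder1/morefolders/"):
    """Pick the most recent first-level 'folder' under base.

    keys: object keys as returned by an S3 listing.
    Folder names are assumed to look like <randomnumber>_<timestamp>,
    so sorting by the trailing timestamp picks the newest one.
    """
    folders = set()
    for key in keys:
        if key.startswith(base):
            first = key[len(base):].split("/", 1)[0]
            if first:
                folders.add(first)
    # '123_20200212' -> compare by the part after the last underscore
    return max(folders, key=lambda f: f.rsplit("_", 1)[-1])

keys = [
    "folder1/morefolders/7_20200210/part-0",
    "folder1/morefolders/3_20200212/part-0",
]
print(latest_prefix(keys))   # 3_20200212
```

The returned prefix can then be fed to Spark's reader as the input path.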
Here's one option. Consider writing a Lambda function that either runs on a schedule (say if you knew that your uploads always happen between 1pm and 4pm) or is triggered by an S3 object upload (so it happens for every object uploaded to folder1/morefolders/).
The Lambda would write the relevant part(s) of the S3 object prefix into a simple DynamoDB table. The client that needs to know the latest prefix would read it from DynamoDB.

Is it better to store 1 email/file in Google Cloud Storage or multiple emails in one large file?

I am trying to do analytics on emails for some users. To achieve this, I am trying to store the emails on Cloud Storage so I can run Hadoop jobs on them. (Earlier I tried App Engine Datastore, but it had a hard time scaling over that many users' data: hitting various resource limits, etc.)
Is it better to store 1 email/file in Cloud Storage or all of a user's emails in one large file? In many examples about cloud storage, I see folks operating on large files, but it seems more logical to keep 1 file/email.
From a GCS scaling perspective there's no advantage to storing everything in one object vs many objects. However, listing the objects in a bucket is an eventually consistent operation. So, if your computation would proceed by first uploading (say) 1 million objects to a bucket, and then immediately starting a computation that lists the objects in the bucket and computing over their content, it's possible the listing would be incomplete. You could address that problem by maintaining a manifest of objects you upload and passing the manifest to the computation instead of having the computation list the objects in the bucket. Alternatively, if you load all the emails into a single file and upload it, you wouldn't need to perform a bucket listing operation.
If you plan to upload the data once and then run a variety of analytics computations (or rev a single computation and run it a number of times), uploading a large number of objects and depending on listing the bucket from your analytics computation would not be a problem, since the eventual consistency problem really only impacts you in the case where you list the bucket shortly after uploading.
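The manifest idea from the first paragraph can be sketched in a few lines: record the name of every object you upload, and hand that list (not a bucket listing) to the computation. The file location and names here are illustrative:

```python
import json
import tempfile
from pathlib import Path

def write_manifest(uploaded_names, path):
    """Record the names of the objects we uploaded, so a later
    computation reads exactly this list instead of performing a
    bucket listing."""
    Path(path).write_text(json.dumps(sorted(uploaded_names)))

def read_manifest(path):
    return json.loads(Path(path).read_text())

manifest = Path(tempfile.gettempdir()) / "upload_manifest.json"
write_manifest(["mail/0002.eml", "mail/0001.eml"], manifest)
print(read_manifest(manifest))   # ['mail/0001.eml', 'mail/0002.eml']
```

The manifest itself could of course live in the bucket too, as long as the computation reads it by its known name rather than discovering it via a listing.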

Getting large rows out of SQL Azure - but where to go? Tables, Blob or something like MongoDB?

I read through a lot of comparisons between Azure Table/Blob/SQL storage and I think I have a good understanding of all of those ... but still, I'm unsure where to go for my specific needs. Maybe someone with experience in similar scenarios is able to make a recommendation.
What I have
A SQL Azure DB that stores articles in raw HTML inside a varchar(max) column. Each row also has many metadata columns and many indexes for easy querying. The table contains many references to Users, Subscriptions, Tags and more - so a SQL DB will always be needed for my project.
What's the problem
I already have about 500,000 articles in this table and I expect it to grow by millions of articles per year. Each article's HTML content can be anywhere between a few KB and 1 MB or, in very few cases, larger than 1 MB.
Two problems arise: since Azure SQL storage is expensive, sooner rather than later I'll shoot myself in the foot with the costs of storing this. I will also hit the 150 GB DB size limit sooner rather than later. Those 500,000 articles already consume 1.6 GB of DB space now.
What I want
It's clear the HTML content has to get out of the SQL DB. While the article table itself has to remain for joining to users, subscriptions, tags and more for fast relational discovery of the needed articles, at least the column that holds the HTML content could be outsourced to cheaper storage.
At first sight, Azure Table storage seems like the perfect fit
Terabytes of data in one large table for very cheap prices and fast queries - it sounds perfect to have a single Table Storage table holding the article contents as an add-on to the SQL DB.
But reading through comparisons here shows it might not even be an option: 64 KB per column would be enough for 98% of my articles, but there are those 2% left where, for some single articles, even the whole 1 MB row limit might not be enough.
Blob storage sounds completely wrong, but ...
So there's just one option on Azure left: Blobs. Now, it might not be as wrong as it sounds. In most of the cases, I would need the content of only a single article at once. This should work fine and fast enough with Blob storage.
But I also have queries where I would need 50, 100 or even more rows at once, INCLUDING the content. So I would have to run the SQL query to fetch the needed articles and then fetch every single article out of Blob storage. I have no experience with that, but I can't believe I'd be able to stay in millisecond territory for those queries. And queries that take multiple seconds are an absolute no-go for my project.
So it does not seem to be an appropriate solution either.
Do I look like a guy with a plan?
At least I have something like a plan. I thought about only "exporting" appropriate records into Table Storage and/or Blob Storage.
Something like "as long as the content is < 64 KB export it to table storage, else keep it in the SQL table (or even export this single XL record into BLOB storage)"
That might work well enough. But it makes things complicated and perhaps unnecessarily error-prone.
Those other options
There are some other NoSQL DBs like MongoDB and CouchDB that seem to fit my needs better (at least from my naive point of view as someone who has just read the specs on paper; I don't have experience with them). But they'd require self-hosting, something I'd like to avoid if possible. I'm on Azure to do as little as needed in terms of self-hosting servers and services.
Did you really read until here?
Then thank you very much for your valuable time and thinking about my problems :)
Any suggestions would be greatly appreciated. As you see, I have my ideas and plans, but nothing beats experience from someone who walked down the road before :)
Thanks,
Bernhard
I signed up solely to help with this question. In the past I have found useful answers to my problems on Stack Overflow - thank you, community - so I thought it would only be fair (perhaps "fair" is an understatement) to attempt to give something back with this question, as it falls right up my alley.
In short, considering all the factors stated in the question, Table Storage may be the best option - if you can properly estimate transactions per month: a nice article on this.
You can solve the two limitations you mentioned, the row and column limits, by splitting the document/html/data (as plain text, or by serializing it). Speaking from experience with 40 GB+ of data stored in Table Storage, our app frequently retrieves more than 10 rows per page visit in milliseconds - no argument here! If you need 50+ rows at times, you are looking at low single-digit seconds, or you can fetch them in parallel (and further split the data across different partitions), or in some async fashion. Or read about the multi-level caching suggested below.
A bit more detail. I tried SQL Azure, Blob (both page and block), and Table Storage. I cannot speak for MongoDB since, partially for the reasons already mentioned here, I did not want to go that route.
Table Storage is fast: in the range of 20-50 milliseconds, and sometimes even faster (it depends; for instance, within the same data center I have seen it go as low as 10 milliseconds) when querying by partition and row key. You may also split the data further into several partitions, in some fashion based on your data and your knowledge of it.
It scales better in terms of GBs, but not transactions.
The row and column limitations you mentioned are a burden, agreed, but not a show stopper. I have written my own solution to split entities; you can too, easily, or you can use this already-written solution (it does not solve the whole problem but it is a good start): https://code.google.com/p/lokad-cloud/wiki/FatEntities
Also keep in mind that uploading data to Table Storage is time consuming, even when batching entities, due to other limitations (i.e., request size less than 4 MB, upload bandwidth, etc.).
But using Table Storage alone may not be the best solution (thinking about growth and economics). The best solution we ended up implementing used multi-level caching/storage, starting from static classes, Azure Role Based Cache, Table Storage, and Block Blobs. Let's call these, for readability purposes, levels 1A, 1B, 2 and 3 respectively. Using this approach, we are running a medium single instance (2 CPU cores and 3.5 GB RAM - my laptop has better performance), and are able to process/query/rank 100 GB+ of data in seconds (95% of cases in under 1 second). I believe this is fairly impressive given that we check all "articles" before displaying them (4+ million "articles").
First, this is tricky and may or may not be possible in your case. I do not have sufficient knowledge about the data and its query/processing usage, but if you can find a way to organize the data well this may be ideal. I will make an assumption: it sounds like you are trying to search through and find relevant articles given some information about a user and some tags (a variant of a news aggregator perhaps, just got a hunch for that). This assumption is made for the sake of illustrating the suggestion, so even if not correct, I hope it will help you or trigger new ideas on how this could be adopted.
Level 1A data.
Identify and add key entities or their properties in a static class (periodically, depending on how you foresee updates). Say we identify user preferences (e.g., demographics, interests, etc.) and tags (tech, politics, sports, etc.). This will be used to quickly retrieve who the user is, his/her preferences, and any tags. Think of these as key/value pairs; for instance, the key being a tag and its value being a list of article IDs, or a range of them. This solves a small piece of the problem: given a set of keys (user prefs, tags, etc.), what articles are we interested in! This data should be small in size if organized properly (e.g., instead of storing an article path, you can store just a number). *Note: the problem with data persistence in a static class is that the application pool in Azure, by default, resets after every 20 minutes or so of inactivity, so the data in your static class is no longer persistent - also, sharing it across instances (if you have more than one) can become a burden. Welcome level 1B to the rescue.
Level 1B data
A solution we used is to keep the level 1A data in Azure Cache, for the sole purpose of re-populating the static entity when and if needed. Level 1B data solves this problem. Also, if you face issues with the application pool reset timing, you can change it programmatically. So levels 1A and 1B hold the same data, but one is faster than the other (a close-enough analogy: CPU cache and RAM).
Discussing level 1A and 1B a bit
One may point out that it is overkill to use both a static class and the cache, since it uses more memory. But the problem we found in practice is that, first, it is faster with statics. Second, the cache has some limitations (i.e., 8 MB per object). With big data, that is a small limit. By keeping data in a static class one can have objects larger than 8 MB and store them in the cache by splitting them (i.e., currently we have over 40 splits). BTW, please vote to increase this limit in the next release of Azure, thank you! Here is the link: www.mygreatwindowsazureidea.com/forums/34192-windows-azure-feature-voting/suggestions/3223557-azure-preview-cache-increase-max-item-size
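The 1A/1B relationship can be sketched generically: an in-process map that, when found empty (for example after an app-pool recycle), repopulates itself from the slower shared cache. The class and names below are illustrative, not the Azure API:

```python
class TwoLevelLookup:
    """Level 1A: an in-process dict (fastest, lost on an app-pool reset).
    Level 1B: a shared cache used only to repopulate level 1A."""

    def __init__(self, shared_cache):
        self._shared = shared_cache   # stands in for the distributed cache
        self._local = {}              # stands in for the static class

    def get(self, key):
        if not self._local:           # e.g., right after a process recycle
            self._local = dict(self._shared)
        return self._local[key]

shared = {"tag:tech": [101, 102], "tag:sports": [103]}
lookup = TwoLevelLookup(shared)
print(lookup.get("tag:tech"))   # [101, 102]
```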
Level 2 data
Once we get the values from the key/value entity (level 1A), we use the value to retrieve the data from Table Storage. The value should tell you what partition and row key you need. Problem being solved here: you only query the rows relevant to the user/search context. As you can see, having level 1A data is about minimizing row querying from Table Storage.
Level 3 data
Table Storage data can hold a summary of your articles, or the first paragraph, or something of that nature. When the whole article needs to be shown, you get it from the Blob. Table Storage should also have a column that uniquely identifies the full article in blob storage. In the blob you may organize the data in the following manner:
Split each article in separate files.
Group n articles in one file.
Group all articles in one file (not recommended although not as bad as the first impression one may get).
For the 1st option you would store, in Table Storage, the path of the article, then just grab it directly from the Blob. Because of the levels above, you should only need to read a few full articles here.
For the 2nd and 3rd options you would store, in Table Storage, the path of the file plus the start and end positions from which to read and where to stop reading, using seek.
Here is sample code in C#:
    // Byte offsets for this article, previously stored in Table Storage.
    YourBlobClientWithReferenceToTheFile.Seek(TableStorageData.start, SeekOrigin.Begin);
    int numBytesToRead = (int)TableStorageData.end - (int)TableStorageData.start;
    byte[] bytes = new byte[numBytesToRead];   // buffer sized to the requested range
    int numBytesRead = 0;
    while (numBytesToRead > 0)
    {
        // Read may return fewer bytes than requested; keep reading until done.
        int n = YourBlobClientWithReferenceToTheFile.Read(bytes, numBytesRead, numBytesToRead);
        if (n == 0)
            break;
        numBytesRead += n;
        numBytesToRead -= n;
    }
I hope this didn't turn into a book, and hope it was helpful. Feel free to contact me if you have follow up questions or comments.
Thanks!
The proper storage for a file is a blob. But if your query needs to return dozens of blobs at the same time, it will be too slow as you are pointing out. So you could use a hybrid approach: use Azure Tables for 98% of your data, and if it's too large, use a Blob instead and store the Blob URI in your table.
Also, are you compressing your content at all? I sure would.
My thoughts on this: going the MongoDB (or CouchDB) route is going to cost you extra Compute, as you'll need to run a few servers (for high availability). And depending on the performance needed, you may end up running 2- or 4-core boxes. Three 4-core boxes will cost more than your SQL DB (plus there's the cost of storage, and MongoDB etc. will back their data in an Azure blob for durable storage).
Now, as for storing your html in blobs: this is a very common pattern, to offload large objects to blob storage. The GETs should be doable in a single call to blob storage (single transaction) especially with the file size range you mentioned. And you don't have to retrieve each blob serially; you can take advantage of TPL to download several blobs to your role instance in parallel.
One more thing: how are you using the content? If you're streaming it from your role instances, then what I said about TPL should work nicely. If, on the other hand, you're injecting hrefs into your output page, you can just put the blob URL directly into your HTML page. And if you're concerned about privacy, make the blobs private and generate a short-TTL "shared access signature" granting access for a small time window (this only applies if you're inserting blob URLs into some other HTML page; it doesn't apply if you're downloading to the role instance and then doing something with it there).
You could use MongoDB's GridFS feature: http://docs.mongodb.org/manual/core/gridfs/
It splits the data into 256 KB chunks by default (configurable up to 16 MB) and lets you use the sharded database as a filesystem in which to store and retrieve files. If a file is larger than the chunk size, the MongoDB drivers handle splitting up / re-assembling the data when the file needs to be retrieved. To add additional disk space, simply add additional shards.
You should be aware, however, that only some MongoDB drivers support this; it is a driver convention, not a server feature, that allows for this behavior.
A few comments:
What you could do is ALWAYS store the HTML content in blob storage and store the blob's URL in table storage. I personally don't like the idea of storing data conditionally, i.e. storing the HTML in blob storage only if it is more than 64 KB and otherwise using table storage. Another advantage of this approach is that you can still query the data; if you stored everything in blob storage, you would lose the querying capability.
As far as other NoSQL stores are concerned, the only problem I see with them is that they are not natively supported on Windows Azure, so you would be responsible for managing them as well.
Another option would be to store your files as a VHD image in blob storage. Your roles can mount the VHD to their filesystem, and read the data from there.
The complication seems to be that only one VM can have read/write access to the VHD. The others can create a snapshot and read from that, but they won't see updates. Depending on how frequently your data is updated, that could work. E.g., if you update the data at well-known times, you could have all the clients unmount, take a new snapshot, and remount to get the new data.
You can also share out a VHD using SMB sharing as described in this MSDN blog post. This would allow full read/write access, but might be a little less reliable and a bit more complex.
You don't say, but if you are not compressing your articles, compression probably solves your issue; then just use table storage.
Otherwise, just use table storage with a unique partition key for each article. If an article is too big, put it in two rows; as long as you query by partition key you'll get both rows. Then use the row key as the index indicating how the pieces fit back together.
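The two-row idea generalizes to fixed-size chunks. A sketch of the split and reassembly (the 64 KB figure mirrors the per-property limit discussed in other answers; all names here are illustrative):

```python
CHUNK = 64 * 1024   # mirrors the per-property size limit discussed above

def split_article(article_id, content):
    """Break one article into numbered rows that share a partition key.

    Querying by the partition key alone returns every piece, and the
    zero-padded row key orders the pieces for reassembly.
    """
    pieces = [content[i:i + CHUNK] for i in range(0, len(content), CHUNK)] or [b""]
    return [(article_id, f"{n:04d}", piece) for n, piece in enumerate(pieces)]

def join_article(rows):
    return b"".join(piece for _, _, piece in sorted(rows, key=lambda r: r[1]))

rows = split_article("article-42", b"x" * (CHUNK + 10))
print(len(rows))                                     # 2
print(join_article(rows) == b"x" * (CHUNK + 10))     # True
```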
One idea I have would be to use a CDN to store your article content and link to it directly from the client side, instead of a multi-phase operation of getting the data from SQL and then going to some storage.
It would be something like
http://<cdnurl>/<container>/<articleId>.html
In fact, the same thing can be done with Blob storage too.
The advantage here is that this becomes insanely fast.
The disadvantage is that the security aspect is lost.
Something like a Shared Access Signature could be explored for security, but I am not sure how helpful it would be for client-side links.

How to organize large number of objects in cloud storage?

I'm looking for suggestions on how to organize large number of objects.
Assuming the incoming rate is about 60,000,000 files per day and I would like to keep them for 180 days.
With hourly partition, there will be 4320 (24 * 180) directories at the top level. And each directory will contain ~2,500,000 files on average.
If I only need to fetch the files individually by their full path and I do not need to list the contents of the directory, is there any issue with leaving all 2,500,000 of them at the same level?
Or should I hash the filenames and store them in multiple sub directories? (like how it's typically done if stored on a traditional file system)
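The numbers above can be checked in a few lines (the key layout at the end is just an example of an hourly partition name):

```python
# Check the partitioning arithmetic from the question.
files_per_day = 60_000_000
retention_days = 180

hourly_partitions = 24 * retention_days     # top-level "directories"
files_per_partition = files_per_day // 24   # average files per partition

print(hourly_partitions)      # 4320
print(files_per_partition)    # 2500000

# An hourly-partitioned object name; in GCS this whole string would be
# one flat key, with "/" acting as an ordinary character.
print(f"2020/02/12/13/{123456789}.log")
```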
There's no limit on the number of objects you can store in a bucket, and breaking objects into more "subdirectories" doesn't make any scalability or performance difference. To the Google Cloud Storage service all object names are flat: the "/" in the path just looks like any other character in the object name.
Mike Schwartz, Google Cloud Storage Team