Implementing blob storage - REST

I'm looking for a way to implement (provide) blob storage for an application I'm building.
What I need is the following:
Access is done using simple keys (like primary keys; I don't need a hierarchy);
Blob sizes will range from 1 KiB to 1 GiB. Both ends of that range must be fast and well supported (so systems that work on large blocks, like I believe Hadoop does, are out);
Streaming access to blobs (i.e. the ability to read random parts of a blob; a quick sketch of what I mean follows this list);
Access over REST;
No eventual consistency.
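
To make the streaming requirement concrete, here is a minimal sketch of the access pattern I need, assuming a hypothetical REST endpoint that honours HTTP Range requests:

    # read a random slice of a blob via an HTTP Range request
    # (the endpoint URL is hypothetical)
    import requests

    resp = requests.get(
        "http://blobstore.internal/blobs/some-key",
        headers={"Range": "bytes=1048576-2097151"},  # the second MiB only
    )
    assert resp.status_code == 206  # 206 Partial Content
    chunk = resp.content  # exactly the requested byte range
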
My infrastructure requirements are as follows:
Horizontally scalable; manual sharding is OK (so the system need not natively support horizontal scaling);
High availability (so replication and automatic failover);
I can't use Azure or Google blob storage; this is a private cloud application.
I'm prepared to implement such a system myself, but I would prefer an out-of-the-box system that implements this, or at least parts of it.
I have looked at Hadoop, for example, but it has eventual consistency, so it's out. There seem to be a number of Linux DFS implementations, but these all work by mounting a filesystem, and I just need REST access. Also, it looks like the wide range of blob sizes makes things difficult.
What system could I use for this?

It's a pretty old post, but I'm looking for pretty much the same thing. I've found the stack of GridFS plus an nginx-based HTTP access module.
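
In case it helps the next reader, a minimal sketch of the GridFS side with PyMongo (the database name and key are made up); GridFS read handles support seek(), which covers the random-access requirement above:

    import gridfs
    from pymongo import MongoClient

    db = MongoClient("mongodb://localhost:27017").blobstore  # hypothetical DB
    fs = gridfs.GridFS(db)

    # store a blob under a key of our choosing
    file_id = fs.put(open("big.bin", "rb"), filename="my-key")

    # random access: seek into the stored blob and read a slice
    blob = fs.get(file_id)
    blob.seek(1024 * 1024)   # jump to the 1 MiB offset
    part = blob.read(65536)  # read 64 KiB from there
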

Related

What is the advantage of storing images directly in MongoDB instead of a server-side folder?

I suppose that storing images (or any binary data - PDFs, movies, etc.) outside of the DB (MongoDB in my case) and putting them in a public server folder would be at least faster (no encoding, decoding and things around that).
But since there is such an option in MongoDB, I'd like to know the advantages of using it, and the use cases where that approach is recommended.
Replication: It is pretty easy to set up a highly available replica set. So even if one machine goes down, the files will still be available. While this is possible to achieve by various means for a plain filesystem as well, the overhead of doing so may well eliminate the performance advantage (if there is any: MongoDB has quite sophisticated internal caching going on). Furthermore, setting up DRBD and ensuring consistency and availability requires considerably more knowledge and administrative effort than with MongoDB. Plus, you'd need your DB to be highly available as well.
Scalability: It can get quite complicated and/or costly when your files exceed the storage capacity of a single node. While in theory you can scale vertically, there is a certain point where the bang you get for the buck decreases and scaling horizontally makes more sense. However, with a filesystem approach, you'd have to manage which file is located at which node, how and when to balance and whatnot. MongoDB's GridFS in a sharded environment does this for you automatically and – more important – transparently. You neither have to reinvent the wheel nor maintain it.
Query by metadata: While in theory you can do this with a database holding links to a filesystem, GridFS comes with the means to insert arbitrary metadata and query by it. Again, this saves you from reinventing the wheel. An interesting example: finding duplicates is quite easy with GridFS, since a hash sum is automatically calculated for each stored file. With a rather simple aggregation, you can find dupes and then deal with them accordingly.
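
As a sketch of that dedup aggregation with PyMongo (assuming the md5 field the drivers store in fs.files; the database name is invented):

    from pymongo import MongoClient

    files = MongoClient().mydb.fs.files  # hypothetical database name
    pipeline = [
        # group files by their stored hash and collect their ids
        {"$group": {"_id": "$md5",
                    "ids": {"$push": "$_id"},
                    "count": {"$sum": 1}}},
        # keep only hashes that occur more than once
        {"$match": {"count": {"$gt": 1}}},
    ]
    for dupe in files.aggregate(pipeline):
        print(dupe["_id"], dupe["ids"])  # one line per duplicated file
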
When you have a large amount of binary data and want to take advantage of sharding, you can store the binary data in MongoDB using GridFS. But from a performance point of view, as you pointed out, storing the images in a file system is obviously the better way.

MongoDB/CouchDB for storing files + replication?

If I would like to store a lot of files and replicate the DB, which NoSQL database would be best for this kind of job?
I was testing MongoDB and CouchDB, and these DBs are really nice and easy to use. If possible, I would use one of them for storing files. I see the differences between Mongo and Couch, but I cannot tell which one is better for storing files. When I talk about storing files, I mean files of 10-50 MB, but maybe also files of 50-500 MB - and possibly a lot of updates.
I found here a nice table:
http://weblogs.asp.net/britchie/archive/2010/08/17/document-databases-compared-mongodb-couchdb-and-ravendb.aspx
Still not sure which of these properties matter most for file storage and replication. Or maybe I should choose another NoSQL DB?
That table is way out of date:
Master-slave replication has been deprecated in favour of replica sets, for starters, and the consistency column is wrong as well. You will want to completely re-read the replication section of the MongoDB docs.
Map/Reduce is JavaScript only; there are no other languages.
I have no idea what that table means by attachments, but GridFS is a storage standard built into the drivers to make storing large files in MongoDB easier. Metadata is also supported through this method.
MongoDB is on version 2.2, so anything it mentions about earlier versions is now obsolete (i.e. sharding and single-server durability).
I do not have personal experience with CouchDB's interface for storing files; however, I wouldn't be surprised if there were hardly any differences between the two. I think this part is too subjective for us to answer, and you will need to just go with whichever one suits you better.
It is actually possible to build multi-regional MongoDB clusters (which S3 buckets are not, and cannot be replicated as such without extra work) and to use MongoDB to replicate the files most accessed in a specific part of the world to those clusters.
The main upshot I have found is that MongoDB can act like S3 and CloudFront put together, which is great since you get both redundant storage and the ability to distribute your data.
That being said, S3 is a very valid option here and I would seriously give it a try; you might not be looking for the same things as me in a content network.
Database storage of files does not come without serious downsides. However, speed shouldn't be a huge problem here, since you should get about the same speed from a non-CloudFront-fronted S3 as from MongoDB (remember, S3 is a redundant storage network, not a CDN).
If you were to use S3, you would then store a row in your database that points to the file and houses metadata about it.
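
A rough sketch of that pattern with boto3 and PyMongo (bucket, key, and field names are all invented):

    import boto3
    from pymongo import MongoClient

    # upload the file itself to S3
    s3 = boto3.client("s3")
    s3.upload_file("report.pdf", "my-bucket", "files/report.pdf")

    # the database row that points at the file and carries its metadata
    MongoClient().mydb.files.insert_one({
        "bucket": "my-bucket",
        "s3_key": "files/report.pdf",
        "content_type": "application/pdf",
        "size_bytes": 1048576,
    })
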
There is a project called CBFS by Dustin Sallings (one of the Couchbase founders, and creator of spymemcached and core contributor of memcached) and Marty Schoch that uses Couchbase and Go.
It's an infinite-node file store with redundancy and replication - basically your very own S3 that supports lots of different hardware and sizes. It uses REST (HTTP PUT/GET/DELETE, etc.), so it's very easy to use. Very fast, very powerful.
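
As an illustration of that REST interface, a sketch with Python's requests library (the host and port are assumptions; check the protocol docs linked below):

    import requests

    base = "http://cbfs-node:8484"  # assumed CBFS endpoint

    # upload a blob
    with open("cat.jpg", "rb") as f:
        requests.put(base + "/photos/cat.jpg", data=f)

    # fetch it back
    resp = requests.get(base + "/photos/cat.jpg")
    open("cat-copy.jpg", "wb").write(resp.content)

    # delete it
    requests.delete(base + "/photos/cat.jpg")
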
CBFS on Github: https://github.com/couchbaselabs/cbfs
Protocol: https://github.com/couchbaselabs/cbfs/wiki/Protocol
Blog Post: http://dustin.github.com/2012/09/27/cbfs.html
Diverse Hardware: https://plus.google.com/105229686595945792364/posts/9joBgjEt5PB
Other Cool Visuals:
http://www.youtube.com/watch?v=GiFMVfrNma8
http://www.youtube.com/watch?v=033iKVvrmcQ
Contact me if you have questions and I can put you in touch.
Have you considered Amazon S3 as an option? It's highly available, proven, has redundant storage, etc.
CouchDB, even though I personally like it a lot as it works very well with node.js, has the disadvantage that you need to compact it regularly if you don't want to waste too much disk space. In your case, if you are going to be doing a lot of updates to the same documents, that might be an issue.
I can't really comment on MongoDB as I haven't used it, but again, if file storage is your main concern, then have a look at S3 and similar services, as they are completely focused on file storage.
You could combine the two, storing your metadata in a NoSQL or SQL datastore and your actual files in a separate file store, but keeping those two stores in sync and replicated might be tricky.

memcached-like software with disk persistence

I have an application that runs on Ubuntu Linux 12.04 and needs to store and retrieve a large number of large serialized objects. Currently the store is implemented by simply saving the serialized streams as files, with filenames equal to the MD5 hash of the serialized object. However, I would like to speed things up by replacing the file store with one that does in-memory caching of recently read/written objects, and preferably does the hashing for me.
The design of my application should not get any more complicated. Hence, what would be preferable is a storage back-end that manages a key-value database and caching in an abstracted, efficient way. I am a bit lost among all of the key/value stores out there, and much of the information seems outdated. I was initially looking at something like memcached+membase, but maybe there are better solutions. I looked into Redis, MongoDB, and CouchDB, but it is not quite clear to me whether they fit my needs.
My most important requirements:
Transparent saving to a persistent store, such that the most recently written/read objects are quickly available thanks to automatic in-memory caching.
The store should survive a reboot, so in-memory objects must be saved to disk ASAP.
Currently I am calculating the MD5 manually. It would be nicer if the back-end did this for me: the ability to get the hash key when an object is stored, and to retrieve the object later using that hash key.
A big plus would be packages available for Ubuntu 12.04, whether in universe, through Launchpad, or whatever.
Other than this, the software should preferably be lightweight and no more complicated than necessary (I don't need distributed map-reduce jobs, etc.).
Thanks for any advice!
I would normally suggest Redis because it is fast, in-memory, and has asynchronous persistence. Plus you'll find you can use its different data types for other purposes, so it's not as single-purpose as memcached. As for auto-hashing, I don't think it does that, as you define your own keys when you store objects (as in most of these stores).
One downside to Redis is that if you're storing a TON of binary objects, you'll be limited to the RAM available (unless you shard), so you could hit performance limitations. In that case you could store the objects on the file system, hash them, store the keys in Redis, and map each key to the filename on the file server - and you'd be fine.
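
A minimal sketch of that approach with redis-py - you compute the MD5 yourself and use it as the key (small objects are stored inline here; swap the value for a filename if you move the blobs to disk):

    import hashlib
    import redis

    r = redis.Redis(host="localhost", port=6379)

    def put(blob: bytes) -> str:
        key = hashlib.md5(blob).hexdigest()  # Redis won't hash for you
        r.set(key, blob)  # or: r.set(key, path_on_fileserver)
        return key

    def get(key: str) -> bytes:
        return r.get(key)
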
--
An alternative option is ElasticSearch, which is like Mongo in that it stores objects natively as JSON, but adds the Lucene search engine on top with a RESTful API. It "warms up" data in memory for fast responses, but is also a persistent store, and the nicest part is that it auto-shards and auto-clusters, using multicast to find other nodes.
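
A quick sketch of what that looks like over the REST API (the index, type, and field names are invented; the URL follows the classic index/type/id scheme):

    import requests

    # index a JSON document under an explicit id
    requests.put(
        "http://localhost:9200/objects/obj/42",
        json={"name": "monster-object", "payload_ref": "abc123"},
    )

    # fetch it back by id
    doc = requests.get("http://localhost:9200/objects/obj/42").json()
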
--
Hope that helps and if so, share the love! ;-)
I'd look at MongoDB. It caches things efficiently by letting your OS page data in and out, and it is pretty simple to set up. Redis and memcached won't be good solutions for you because they keep everything in RAM. Other, simpler solutions like LevelDB or BDB would probably also be suitable. I don't think any database is going to compute hashes automatically for you, but it sounds like you already have code for that.

Is Cassandra good for storing files?

I'm developing a PHP platform that will make heavy use of images, documents, and any file format that comes to mind, so I was wondering whether Cassandra is a good choice for my needs.
If not, can you tell me how I should store files? I'd like to keep using Cassandra because it's fault-tolerant and auto-replicates among nodes.
Thanks for the help.
From the Cassandra wiki:

"Cassandra's public API is based on Thrift, which offers no streaming abilities: any value written or fetched has to fit in memory. This is inherent to Thrift's design and is therefore unlikely to change. So adding large object support to Cassandra would need a special API that manually split the large objects up into pieces. A potential approach is described in http://issues.apache.org/jira/browse/CASSANDRA-265. As a workaround in the meantime, you can manually split files into chunks of whatever size you are comfortable with -- at least one person is using 64 MB -- and make a file correspond to a row, with the chunks as column values."
So if your files are under 10 MB you should be fine; just make sure to limit the file size, or break large files up into chunks.
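
A minimal sketch of that chunking workaround (pure Python; the actual Cassandra writes are omitted) - one row per file, one column per chunk:

    CHUNK_SIZE = 8 * 1024 * 1024  # 8 MiB, an arbitrary comfortable size

    def file_chunks(path):
        """Yield (column_name, value) pairs for one file/row."""
        with open(path, "rb") as f:
            index = 0
            while True:
                data = f.read(CHUNK_SIZE)
                if not data:
                    break
                yield ("chunk-%06d" % index, data)
                index += 1

    # each file becomes one row keyed by its name; chunks become columns,
    # e.g. (hypothetical Thrift client call):
    #   client.insert(row_key="movie.mp4", column=name, value=data)
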
You should be OK with files of 10 MB. In fact, DataStax Brisk puts a filesystem on top of Cassandra, if I'm not mistaken: http://www.datastax.com/products/enterprise.
(I'm not associated with them in any way- this isn't an ad)
As more recent information: Netflix provides utilities in their Cassandra client, Astyanax, for storing files as chunked objects. A description and examples can be found here. It can be a good starting point to write some tests using Astyanax and evaluate Cassandra as a file store.

Difference between Memcached and Hadoop?

What is the basic difference between Memcached and Hadoop? Microsoft seems to do memcached with the Windows Server AppFabric.
I know memcached is a giant key-value hashing system spread across multiple servers. What is Hadoop, and how is it different from memcached? Is it used to store data? Objects? I need to save giant in-memory objects, but it seems like I need some way of splitting these giant objects into "chunks", as people keep mentioning. When I look into splitting an object into bytes, Hadoop keeps popping up.
I have a giant class in memory, upwards of 100 MB. I need to replicate this object and cache it in some fashion. When I look into caching this monster object, it seems like I need to split it the way Google does. How does Google do this, and how can Hadoop help me in this regard? My objects are not simple structured data; they have references up and down between classes, etc.
Any idea, pointers, thoughts, guesses are helpful.
Thanks.
memcached [ http://en.wikipedia.org/wiki/Memcached ] is a single-purpose, focused distributed caching technology.
Apache Hadoop [ http://hadoop.apache.org/ ] is a framework for distributed data processing, targeted at Google/Amazon scale: many terabytes of data. It includes sub-projects for the different areas of this problem: a distributed database, an algorithm for distributed processing, reporting/querying, and a data-flow language.
The two technologies tackle different problems. One is for caching (small or large items) across a cluster. And the second is for processing large items across a cluster. From your question it sounds like memcached is more suited to your problem.
Memcached won't work due to its limit on the size of stored values (1 MB by default); see the memcached FAQ. I read somewhere that this limit can be increased to 10 MB, but I am unable to find the link.
For your use case I suggest giving MongoDB a try; see the MongoDB FAQ. MongoDB can be used as an alternative to memcached, and it provides GridFS for storing large files in the DB.
You need to use pure Hadoop for what you need (no HBase, Hive, etc.). HDFS will split your object into many chunks (blocks) and store them across the cluster; MapReduce is the mechanism for processing those chunks, and a tutorial for it is here. However, don't forget that Hadoop is, in the first place, a solution for massive compute and storage. In your case I would also recommend checking out Membase, which is an implementation of memcached with additional storage capabilities. You will not be able to map-reduce with memcached/Membase, but they are still distributed, and your object can be cached in a cloud fashion.
Picking a good solution depends on the requirements of the intended use - say, the difference between storing legal documents forever and running a free music service. For example, can the objects be recreated, or are they uniquely special? Would they require further processing steps (i.e., MapReduce)? How quickly does an object (or a slice of it) need to be retrieved? Answers to these questions will affect the solution set widely.
If objects can be recreated quickly enough, a simple solution might be to use memcached as you mentioned, across many machines totaling sufficient RAM. For adding persistence to this later, Couchbase (formerly Membase) is worth a look; it is used in production for very large game platforms.
If objects CANNOT be recreated, determine whether S3 and other cloud file providers would meet your requirements for now. For high-throughput access, consider one of the several distributed, parallel, fault-tolerant filesystem solutions: DDN (which sells GPFS and Lustre gear) or Panasas (pNFS). I've used DDN gear, and it had a better price point than Panasas. Both provide good solutions that are much more supportable than a DIY Backblaze.
There are some mostly free implementations of distributed, parallel filesystems, such as GlusterFS and Ceph, that are gaining traction. Ceph touts an S3-compatible gateway and can use BTRFS as its backing store; it is a candidate future replacement for Lustre and is getting closer to production-ready (see the Ceph architecture docs and presentations). Gluster's advantage is the option of commercial support, although there may also be vendors supporting Ceph deployments. Hadoop's HDFS may be comparable, but I have not evaluated it recently.
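
Because that gateway speaks the S3 protocol, an ordinary S3 client pointed at it should just work; a sketch with boto3 (the endpoint URL and credentials are placeholders):

    import boto3

    # point a standard S3 client at the Ceph RADOS gateway instead of AWS
    s3 = boto3.client(
        "s3",
        endpoint_url="http://ceph-gateway.internal:7480",  # placeholder
        aws_access_key_id="ACCESS",
        aws_secret_access_key="SECRET",
    )
    s3.create_bucket(Bucket="blobs")
    s3.put_object(Bucket="blobs", Key="some-key", Body=b"hello")
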