Which one would you choose as your surrogate key implementation?
Local UUID
It is generated locally in the application, with no network trip to retrieve it.
But it is long, which can inflate your storage usage.
It results in a lengthy URL when the long UUID is embedded in it.
There is the tiniest fear that a UUID collision will happen.
Or... a network-unique counter ID (not sure what the proper term for this is)
I imagine a remote Redis with the atomic INCR, or Mongo with $inc.
It costs a network round trip per ID.
But the ID is much shorter, takes up less space, and results in a much shorter URL.
No fear of collision, even across clustered applications.
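For concreteness, here is a minimal sketch of the counter option in Python, assuming a redis-py client; the key name and the base62 step (used only to keep the resulting URL short) are illustrative choices, not part of any standard:

import redis

ALPHABET = "0123456789abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ"

def to_base62(n: int) -> str:
    """Encode a positive integer as a short base62 string for use in URLs."""
    if n == 0:
        return ALPHABET[0]
    out = []
    while n > 0:
        n, rem = divmod(n, 62)
        out.append(ALPHABET[rem])
    return "".join(reversed(out))

r = redis.Redis(host="localhost", port=6379)

def next_short_id() -> str:
    # INCR is atomic, so concurrent application nodes never receive the same value.
    counter = r.incr("url:id:counter")
    return to_base62(counter)

The trade-off described above is visible here: every call to next_short_id() pays one network round trip, but the resulting key is only a handful of characters.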
If you are using MongoDB, you should look into using BSON ObjectIDs:
http://www.mongodb.org/display/DOCS/Object+IDs
They are created by default as the _id field unless you specify otherwise and create the _id field yourself (which can also be an ObjectID, just created by you). No fear of collision, and you could get a natively supported ID type in the DB that you can also use in your application. Seems like a win-win, as long as you use MongoDB of course ;)
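A quick sketch with pymongo of both ways of getting an ObjectId as the _id; the connection string and the database/collection names are placeholders:

from bson import ObjectId
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")
urls = client["shortener"]["urls"]

# Either let the driver fill in _id automatically...
urls.insert_one({"url": "http://example.com/a-very-long-path"})

# ...or create the ObjectId yourself so the application knows the key up front.
doc_id = ObjectId()
urls.insert_one({"_id": doc_id, "url": "http://example.com/another-path"})
print(doc_id)  # 12-byte id, rendered as a 24-character hex string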
You can combine both approaches. Have a look at Twitter's Snowflake algorithm. It produces globally unique 64-bit integers without any coordination: a purely local algorithm.
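A rough sketch of the idea in Python, using the commonly described bit layout (41-bit millisecond timestamp, 10-bit worker id, 12-bit per-millisecond sequence); the epoch constant and class name are illustrative, not Twitter's actual code:

import time

EPOCH_MS = 1_288_834_974_657  # any fixed point in the past works

class Snowflake:
    def __init__(self, worker_id: int):
        assert 0 <= worker_id < 1024  # must fit in 10 bits
        self.worker_id = worker_id
        self.sequence = 0
        self.last_ms = -1

    def next_id(self) -> int:
        now = int(time.time() * 1000)
        if now == self.last_ms:
            self.sequence = (self.sequence + 1) & 0xFFF  # 12-bit sequence
            if self.sequence == 0:
                # Sequence exhausted within this millisecond: wait for the next one.
                while now <= self.last_ms:
                    now = int(time.time() * 1000)
        else:
            self.sequence = 0
        self.last_ms = now
        return ((now - EPOCH_MS) << 22) | (self.worker_id << 12) | self.sequence

gen = Snowflake(worker_id=1)
print(gen.next_id())  # 64-bit integer, no network trip, unique per worker

The only coordination needed is assigning each application node a distinct worker_id once, at deployment time.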
For a low-concurrency app, you can probably use a network counter ID.
But apart from the shorter URL, there is little interest in it at low concurrency (= not a lot of data).
Under heavy concurrent access, meaning a lot of data and a lot of cluster nodes, your Redis engine plus the associated network round trips will probably be too slow for this solution.
In conclusion:
- a network counter seems sexy but is, in my opinion, useless with MongoDB.
Regarding MongoDB collisions: due to the generation algorithm, the collision probability is near zero. To explain: part of the ObjectId is built from the machine address, which should be unique, and you can verify this address before putting your cluster into production.
I have read in a couple of places that some databases use bloom filters for finding a match when querying a database. In my example I'm using Postgresql which is one of those databases.
My question comes up when talking about implementing a bloom filter with Redis, which has a module that allows you to use a bloom filter when adding a member to a set. (Keep in mind I care about the complexity of the lookup process here, not about retrieving that value from disk.)
Now, the benefit of using Redis is that values are stored in memory, so retrieving a value is more performant than looking it up in an RDBMS, where that value is stored on disk.
In my example, say I'm checking if a username already exists: is it still worth using the Redis in-memory solution with a bloom filter vs just checking with a PostgreSQL query?
My flow is something like:
CheckIfUserExists() // using Redis bloom filter
If TRUE then confirm with rdbms // due to the x% probability of false positives from the bloom filter
If rdbms == MATCH then reply with "User does exist"
Else don't check rdbms at all // due to the 0% probability of false negatives from the bloom filter
This flow is supposed to be more performant because you're not querying the RDBMS for most misses; the quick in-memory lookup returns false more efficiently.
However since all I care about is whether a member exists or not, to increase performance on replying with false, does the Redis step really help? Because if Postgresql is already querying a table using a bloom filter then the performance should already be relatively fast.
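A minimal sketch of that flow in Python, assuming the RedisBloom module is loaded and that there is a users table with a username column; all names and connection parameters are placeholders:

import redis
import psycopg2

r = redis.Redis()
pg = psycopg2.connect("dbname=app user=app")

def user_exists(username: str) -> bool:
    maybe = r.execute_command("BF.EXISTS", "usernames", username)
    if not maybe:
        return False  # bloom filters have no false negatives
    # Possible false positive: confirm against the source of truth.
    with pg.cursor() as cur:
        cur.execute("SELECT EXISTS(SELECT 1 FROM users WHERE username = %s)", (username,))
        return cur.fetchone()[0]

def register_user(username: str) -> None:
    r.execute_command("BF.ADD", "usernames", username)
    # ...also insert into PostgreSQL; keeping the filter and the table in sync is on you.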
I have read in a couple of places that some databases use bloom filters for finding a match when querying a database. In my example I'm using Postgresql which is one of those databases.
Could you provide links to the things you were reading?
PostgreSQL does have a bloom filter index type extension (in "contrib"), but the index has to be created explicitly and it would not be useful for your use case anyway. It answers the question, for each individual row, "does this row satisfy a set of conditions". It does not answer the question "does any row in this table satisfy this one single condition".
PostgreSQL also has a C-language bloom data structure for internal use, but again your requirement is not one of those things it is used for.
Bloom filters of the type you want would be hard to implement in the face of PostgreSQL's ACID/MVCC model and its storage model.
If you really need this (and I doubt you do) then Redis seems like a good tool for the job. But how will you keep the two in sync?
Don't add Redis unless you actually have performance issues. It adds a second datastore and development will be significantly slowed down and see more bugs because every data change needs to be synchronised between the two.
Redis also doesn't help much when it just replaces a function that's at the core of what Postgres also does (well). Postgres also uses memory to cache data. Just make sure it's configured with appropriate memory limits.
Where Redis can be useful is in caching "derived" data, i.e. anything the database and your application have spent significant time computing.
Bloom filters are similar: do use them when you need hundreds of thousands of lookups every second, but default to using your existing database. Postgres is capable of rather decent performance; I've been surprised by it quite a few times.
Only optimise after measuring and identifying a problem.
I'll explain the use cases first.
High read rates (10,000+ per second), large dataset (lots of string codes (think promo codes) being looked up for matches, strings 10-20 chars). Needs fast response times.
My first thought was Memcached. However, to combat downtime if the cache goes down and has to be repopulated from a DB like MySQL, I was thinking of Redis for automatic repopulation of the cache.
Is it true that Redis does not persist to disk on its own, but instead a flush needs to be called for it to be backed up?
My hope is to use the code string as the key, making lookups super quick. The value will be an id linking it to a DB record that's not needed by the API.
If I had to guess how many unique strings will be stored... 10M+ after a few months.
I've also looked briefly at Cassandra and MongoDB. I'm thinking MongoDB will not be enough since it doesn't keep the entire dataset in memory?
Any insight into these systems is very helpful. I feel like I'm going around in circles.
The API is made in Node.js (if it matters).
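A minimal sketch of the "code string as key" idea with redis-py; the key prefix and the example codes are made up:

import redis

r = redis.Redis()

def load_codes(codes_to_ids: dict) -> None:
    # Bulk-load with a pipeline to avoid one round trip per code.
    pipe = r.pipeline()
    for code, record_id in codes_to_ids.items():
        pipe.set(f"promo:{code}", record_id)
    pipe.execute()

def lookup(code: str):
    value = r.get(f"promo:{code}")
    return int(value) if value is not None else None

load_codes({"SUMMER15": 101, "WELCOME10": 102})
print(lookup("SUMMER15"))  # 101
print(lookup("NOPE"))      # None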
10K/s is definitely not a high rate for a DB like Cassandra, provided that your schema is designed wisely. I bet it's the same for the others.
10M unique strings over a few months is peanuts for modern big data systems.
Whichever big data solution you pick, you will have to design the schema according to the type of data and operational needs.
IMO, the important ones are the following two questions:
What do you mean by "looking for matches"?
If you need indexing and search using substrings or regexps, you need a search engine: Elasticsearch or Solr are great. Be warned that Elasticsearch does replication and sharding, but its distribution model is still not 100% safe.
None of the other systems you mentioned will provide the search reactivity you seem to be looking for.
If you will query using static strings: a key-value store or a column-oriented database like Cassandra will be a perfect fit. So all of them are good fits.
What is a fast response time?
By selecting the right technology and appropriate schemas, all those systems will give you great response times, under hundreds of milliseconds, but will that be fast enough for you?
Redis and Memcached, being in-memory, will provide the fastest responses.
As a conclusion, the API being in Node.js is irrelevant to the choice of your storage and indexing technology. If you want to stick with JavaScript for everything and MongoDB is friendlier to you, it can be a decent candidate depending on your search use cases.
How do I enforce a unique constraint in Key-Value store where the unique data is longer than the key length limit?
I currently use CouchBase to store the document below:
{
    url: "http://google.com",
    siteName: "google.com",
    data:
    {
        //more properties
    }
}
The unique constraint is defined on url + siteName. However, I can't use those properties as the key, since their combined length can exceed Couchbase's key length limit.
I currently have two solutions in mind but I think that both are not good enough.
Solution 1
Document key is the SHA1 hash of url + siteName.
Advantages: easy to implement
Disadvantages: collisions can occur
Solution 2
Document key is the hash(url + siteName) + index.
This is the same as Solution 1, but the key includes an index in case a collision occurs.
To add a document, the application server:
Set index to 0
Store the document with key = hash(url + siteName) + index
If a duplicate-key conflict occurs, read the existing document back
Does the existing document have the same url and siteName as the one we are storing?
If yes, throw an exception if duplicates aren't allowed
If no, increment index and go back to step 2
This is currently my favorite solution because it can handle collisions (see the sketch below).
I am a NoSQL n00b! How can I enforce unique constraints in a key-value store?
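For illustration, a rough sketch of the Solution 2 insert loop in Python. The key-value client interface (insert/get) and the duplicate-key exception are stand-ins for whatever your SDK provides, not the actual Couchbase API:

import hashlib

class DuplicateKeyError(Exception):
    """Stand-in for whatever duplicate-key error your KV client raises."""

def make_key(url: str, site_name: str, index: int) -> str:
    digest = hashlib.sha1((url + site_name).encode("utf-8")).hexdigest()
    return f"{digest}:{index}"

def store(client, doc: dict) -> str:
    index = 0
    while True:
        key = make_key(doc["url"], doc["siteName"], index)
        try:
            client.insert(key, doc)          # fails if the key already exists
            return key
        except DuplicateKeyError:
            existing = client.get(key)
            if existing["url"] == doc["url"] and existing["siteName"] == doc["siteName"]:
                raise ValueError("duplicate url + siteName")  # duplicates not allowed
            index += 1                       # genuine hash collision: try the next slot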
After reading your question, here are my thoughts/opinions, which I think should help give rationale for choosing your first option.
Couchbase is an in-memory cache/dictionary. To store many (read "very large incomprehensible number") values, it requires both RAM and disk space. Regardless of how much space each document occupies, all of the document keys are stored in RAM. If you were therefore permitted to store an arbitrarily large value for the key, your server farm would consume RAM faster than you could supply it, and your design would fall apart.
With item #1 being the case, your application needs to be designed such that key sizes are as small as practicable. Computing the dictionary key/hash value is left to the application (in the same way that the .NET or Java APIs compute hashes on string inputs). The same method of producing the hash should be used regardless of input, for the sake of consistency.
The SHA1 hash has an extremely low collision probability, and it is designed that way to make "breaking" the hash computationally infeasible. This is the same idea behind the cryptographic "fingerprints" used in Bitcoin (which uses SHA-256). See here and here for tasty reading on the topic.
Given what I know about hashes, and given the fact that URLs always start with the same set of characters, this theoretically lowers the likelihood of collision even further.
If you are, in fact, storing enough documents that the odds of a SHA1 collision are significant, then there are almost certainly at least a dozen other issues that will affect your application's usability and reliability in a more significant way, and you should devote your energy to thinking about those things.
The hard part about being an engineer is recognizing the need to take a step back from the engineering and say when "good" is "good enough." That being said, option 1 looks like the best choice: it's simple and consistent. If properly applied, that's all you need. Check the box on this one and move on to your next issue.
I'd go for solution 1, however for choosing the hashing function you should consider the following things:
How much data do you have? => How large should the generated hash be in order to reduce the probability of collisions to a minimum? Here the best might be SHA-512, which has a 512-bit output, compared to the 160 bits of SHA-1.
What performance do you need from the hashing function? The SHA family is pretty slow compared to MD5, and depending on the number of items you want to store, MD5 could be good enough as well.
In the end you can also use a combination (sketched below): use siteName + url as the key if it is short enough, switch to siteName + hash(url) if that combination is short enough, and only as a last resort hash both together.
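A small sketch of that combination in Python; the 250-byte limit is used here purely for illustration:

import hashlib

KEY_LIMIT = 250

def sha512_hex(s: str) -> str:
    return hashlib.sha512(s.encode("utf-8")).hexdigest()  # 128 hex chars

def make_key(site_name: str, url: str) -> str:
    natural = f"{site_name}::{url}"
    if len(natural.encode("utf-8")) <= KEY_LIMIT:
        return natural                      # keep the natural key when it fits
    partial = f"{site_name}::{sha512_hex(url)}"
    if len(partial.encode("utf-8")) <= KEY_LIMIT:
        return partial                      # hash only the long part
    return sha512_hex(natural)              # last resort: hash everything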
On a related note, I also found this question http://www.couchbase.com/communities/q-and-a/key-size-limits-couchbasemembase-again where one answer suggests compressing the keys if that is possible for you.
You could actually use normal gzip compression and encode the text. I'm not sure how well this would work for your use case, you'll have to check it, but I used it for JSON files and managed to reduce them to ~20% of their original size. However, that was a huge 8 MB file, so the compression gains on your key might be much lower.
Considering that a UUID (RFC 4122, 16 bytes) is much larger than a MongoDB ObjectId (12 bytes), I am trying to find out how their collision probabilities compare.
I know the answer is something along the lines of "quite unlikely", but in my case most ids will be generated within a large number of mobile clients, not within a limited set of servers. I wonder if in this case, there is a justified concern.
Compared to the normal case where all ids are generated by a small number of clients:
It might take months after document creation to detect a collision
IDs are generated from a much larger client base
Each client has a lower ID generation rate
in my case most ids will be generated within a large number of mobile clients, not within a limited set of servers. I wonder if in this case, there is a justified concern.
That sounds like very bad architecture to me. Are you using a two-tier architecture? Why would the mobile clients have direct access to the db? Do you really want to rely on network-based security?
Anyway, some deliberations about the collision probability:
Neither UUIDs nor ObjectIds rely on sheer size alone, i.e. neither is just a random number; both follow a scheme that tries to systematically reduce collision probability. In the case of ObjectIds, their structure is:
4 byte seconds since unix epoch
3 byte machine id
2 byte process id
3 byte counter
This means that, contrary to UUIDs, ObjectIds are monotonic (except within a single second), which is probably their most important property. Monotonic indexes will cause the B-Tree to be filled more efficiently, it allows paging by id and allows a 'default sort' by id to make your cursors stable, and of course, they carry an easy-to-extract timestamp. These are the optimizations you should be aware of, and they can be huge.
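A small illustration of those properties using the bson package that ships with pymongo:

from bson import ObjectId

a = ObjectId()
b = ObjectId()

print(a < b)                 # True: within one process, later ids sort later
print(a.generation_time)     # timezone-aware datetime taken from the first 4 bytes

# Range-query "by creation time" without storing a separate timestamp field:
since = ObjectId.from_datetime(a.generation_time)
query = {"_id": {"$gte": since}}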
As you can see from the structure of the other three components, collisions become likely only if you're doing > 16M inserts/s in a single process (not really possible, not even from a server), if the number of machines grows into the thousands (see the birthday problem applied to the 3-byte machine hash), or if the number of processes on a single machine grows too large (then again, those aren't random numbers, but they are truly unique on a machine; they just must be shortened to two bytes).
Naturally, for a collision to occur, they must match in all these aspects, so even if two machines have the same machine hash, it'd still require a client to insert with the same counter value in the exact same second and the same process id, but yes, these values could collide.
Let's look at the spec for "ObjectId" from the documentation:
Overview
ObjectId is a 12-byte BSON type, constructed using:
a 4-byte value representing the seconds since the Unix epoch,
a 3-byte machine identifier,
a 2-byte process id, and
a 3-byte counter, starting with a random value.
So let us consider this in the context of a "mobile client".
Note: The context here does not mean using a "direct" connection of the "mobile client" to the database. That should not be done. But the "_id" generation can be done quite simply.
So the points:
Value for the "seconds since epoch". That is going to be fairly random per request. So minimal collision impact just on that component. Albeit in "seconds".
The "machine identifier". So this is a different client generating the _id value. This is removing possibility of further "collision".
The "process id". So where that is accessible to seed ( and it should be ) then the generated _id has more chance of avoiding collision.
The "random value". So another "client" somehow managed to generate all of the same values as above and still managed to generate the same random value.
Bottom line is, if that is not a convincing enough argument to digest, then simply provide your own "uuid" entries as the "primary key" values.
But IMHO, that should be a fair convincing argument to consider that the collision aspects here are very broad. To say the least.
The full topic is probably just a little "too-broad". But I hope this moves consideration a bit more away from "Quite unlikely" and on to something a little more concrete.
I'm building a system that tracks and verifies ad impressions and clicks. This means that there are a lot of insert commands (about 90/second average, peaking at 250) and some read operations, but the focus is on performance and making it blazing-fast.
The system is currently on MongoDB, but I've been introduced to Cassandra and Redis since then. Would it be a good idea to go to one of these two solutions, rather than stay on MongoDB? Why or why not?
Thank you
For a harvesting solution like this, I would recommend a multi-stage approach. Redis is good at real-time communication. Redis is designed as an in-memory key/value store and inherits some very nice benefits of being a memory database: O(1) list operations. For as long as there is RAM to use on a server, Redis will not slow down pushing to the end of your lists, which is good when you need to insert items at such an extreme rate. Unfortunately, Redis can't operate with data sets larger than the amount of RAM you have (it only writes to disk; reading from disk happens when restarting the server or after a system crash), and scaling has to be done by you and your application. (A common way is to spread keys across numerous servers, which is implemented by some Redis drivers, especially those for Ruby on Rails.) Redis also has support for simple publish/subscribe messaging, which can be useful at times as well.
In this scenario, Redis is "stage one." For each specific type of event you create a list in Redis with a unique name; for example, we have "page viewed" and "link clicked." For simplicity we want to make sure the data in each list has the same structure: "link clicked" may have a user token, link name and URL, while "page viewed" may only have the user token and URL. Your first concern is just recording the fact that it happened; whatever absolutely necessary data you need is pushed.
Next we have some simple processing workers that take this frantically inserted information off of Redis' hands, by asking it to take an item off the end of the list and hand it over. The worker can make any adjustments/deduplication/ID lookups needed to properly file the data and hand it off to a more permanent storage site. Fire up as many of these workers as you need to keep Redis' memory load bearable. You could write the workers in anything you wish (Node.js, C#, Java, ...) as long as it has a Redis driver (most web languages do now) and one for your desired storage (SQL, Mongo, etc.)
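A minimal sketch of this producer/worker split with redis-py; the list name and event shape are made up for illustration:

import json
import redis

r = redis.Redis()

def record_page_view(user_token: str, url: str) -> None:
    # Producer side: one O(1) push, no other work on the hot path.
    r.rpush("events:page_viewed", json.dumps({"user": user_token, "url": url}))

def worker_loop(handle_event) -> None:
    # Worker side: block until an event arrives, then hand it to the
    # deduplication / lookup / permanent-storage step.
    while True:
        _key, raw = r.blpop("events:page_viewed")
        handle_event(json.loads(raw))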
MongoDB is good at document storage. Unlike Redis, it is able to deal with databases larger than RAM, and it supports sharding/replication on its own. An advantage of MongoDB over SQL-based options is that you don't have to have a predetermined schema; you're free to change the way data is stored however you want at any time.
I would, however, suggest Redis or Mongo only for the "stage one" phase of holding data for processing, and use a traditional SQL setup (Postgres or MSSQL, perhaps) to store post-processed data. Tracking client behavior sounds like relational data to me, since you may want to ask "Show me everyone who viewed this page" or "How many pages did this person view on this given day" or "What day had the most viewers in total?". There may be even more complex joins or queries for analytic purposes that you come up with, and mature SQL solutions can do a lot of this filtering for you; NoSQL (Mongo or Redis specifically) can't do joins or complex queries across varied sets of data.
I currently work for a very large ad network and we write to flat files :)
I'm personally a Mongo fan, but frankly, Redis and Cassandra are unlikely to perform either better or worse. I mean, all you're doing is throwing stuff into memory and then flushing to disk in the background (both Mongo and Redis do this).
If you're looking for blazing fast speed, the other option is to keep several impressions in local memory and then flush them to disk every minute or so. Of course, this is basically what Mongo and Redis do for you. Not a real compelling reason to move.
All three solutions (four if you count flat-files) will give you blazing fast writes. The non-relational (nosql) solutions will give you tunable fault-tolerance as well for the purposes of disaster recovery.
In terms of scale, our test environment, with only three MongoDB nodes, can handle 2-3k mixed transactions per second. At 8 nodes, we can handle 12k-15k mixed transactions per second. Cassandra can scale even higher. Your peak of 250 per second is (or should be) no problem.
The more important question is, what do you want to do with this data? Operational reporting? Time-series analysis? Ad-hoc pattern analysis? real-time reporting?
MongoDB is a good option if you want the ability to do ad-hoc analysis based on multiple attributes within a collection. You can put up to 40 indexes on a collection, though the indexes will be stored in-memory, so watch for size. But the result is a flexible analytical solution.
Cassandra is a key-value store. You define a static column or set of columns that will act as your primary index right up front. All queries run against Cassandra should be tuned to this index. You can put a secondary index on it, but that's about as far as it goes. You can, of course, use MapReduce to scan the store for non-key attributes, but it will be just that: a serial scan through the store. Cassandra also doesn't have the notion of "like" or regex operations on the server nodes. If you want to find all customers whose first name starts with "Alex", you'll have to scan through the entire collection, pull the first name out of each entry and run it through a client-side regex.
I'm not familiar enough with Redis to speak intelligently about it. Sorry.
If you are evaluating non-relational platforms, you might also want to consider CouchDB and Riak.
Hope this helps.
Just found this: http://blog.axant.it/archives/236
Quoting the most interesting part:
This second graph is about Redis RPUSH vs Mongo $PUSH vs Mongo insert, and I find this graph to be really interesting. Up to 5000 entries mongodb $push is faster even when compared to Redis RPUSH, then it becomes incredibly slow, probably because the mongodb array type has linear insertion time and so it becomes slower and slower. mongodb might gain a bit of performance by exposing a constant-time insertion list type, but even with the linear-time array type (which can guarantee constant-time look-up) it has its applications for small sets of data.
I guess everything depends at least on data type and volume. The best advice probably would be to benchmark on your typical dataset and see for yourself.
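A bare-bones benchmark skeleton in Python along those lines, assuming local Redis and MongoDB instances; results will vary wildly with hardware, drivers and write concerns, so treat it only as a starting point:

import time
import redis
from pymongo import MongoClient

N = 10_000
r = redis.Redis()
coll = MongoClient()["bench"]["events"]

start = time.perf_counter()
for i in range(N):
    r.rpush("bench:list", i)
print(f"Redis RPUSH:  {N / (time.perf_counter() - start):,.0f} ops/s")

start = time.perf_counter()
for i in range(N):
    coll.insert_one({"v": i})
print(f"Mongo insert: {N / (time.perf_counter() - start):,.0f} ops/s")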
According to Benchmarking Top NoSQL Databases (download here), I recommend Cassandra.
If you have the choice (and need to move away from flat files), I would go with Redis. It's blazingly fast and will comfortably handle the load you're talking about, but more importantly you won't have to manage the flushing/IO code. I understand it's pretty straightforward, but less code to manage is better than more.
You will also get horizontal scaling options with Redis that you may not get with file based caching.
I can get around 30k inserts/sec with MongoDB on a simple $350 Dell. If you only need around 2k inserts/sec, I would stick with MongoDB and shard it for scalability. Maybe also look into doing something with Node.js or something similar to make things more asynchronous.
The problem with inserts into databases is that they usually require writing to a random block on disk for each insert. What you want is something that only writes to disk every 10 inserts or so, ideally to sequential blocks.
Flat files are good. Summary statistics (eg total hits per page) can be obtained from flat files in a scalable manner using merge-sorty map-reducy type algorithms. It's not too hard to roll your own.
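A tiny sketch of rolling your own summary in Python, assuming a log file with one URL per line:

from collections import Counter

def hits_per_page(path: str) -> Counter:
    counts = Counter()
    with open(path) as f:         # stream the log once; no need to hold it in RAM
        for line in f:
            counts[line.strip()] += 1
    return counts

# print(hits_per_page("impressions.log").most_common(10))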
SQLite now supports Write Ahead Logging, which may also provide adequate performance.
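Enabling it from Python is a one-liner; the file and table names below are just examples:

import sqlite3

conn = sqlite3.connect("impressions.db")
conn.execute("PRAGMA journal_mode=WAL")  # writers append to the WAL instead of rewriting pages in place
conn.execute("CREATE TABLE IF NOT EXISTS hits (page TEXT, ts INTEGER)")
conn.execute("INSERT INTO hits VALUES (?, ?)", ("/landing", 1700000000))
conn.commit()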
I have hands-on experience with MongoDB, CouchDB and Cassandra. I converted a lot of files to base64 strings and inserted these strings into each NoSQL store.
MongoDB was the fastest. Cassandra was the slowest. CouchDB was slow too.
I think MySQL would be much faster than all of them, but I haven't tried MySQL for my test case yet.