I'm using MongoDB, and I would like to generate unique and cryptic IDs for blog posts (to be used in RESTful URLs), such as s52ruf6wst or xR2ru286zjI.
What do you think is the best and most scalable way to generate these IDs?
I was thinking of the following architecture:
a periodic (daily?) batch that generates a lot of random, unique IDs and inserts them into a dedicated MongoDB collection with InsertIfNotPresent
each time I want to create a new blog post, I take an ID from this collection and mark it as "taken" with an atomic UpdateIfCurrent operation
WDYT?
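(For reference, the "take an ID and mark it as taken" step can be done in one atomic call. Here is a rough sketch with the Ruby mongo driver, assuming a dedicated ids collection with a taken flag; the collection and field names are just placeholders.)

require 'mongo'

client = Mongo::Client.new(['127.0.0.1:27017'], database: 'blog')

# Atomically claim one unused ID; returns the matched document, or nil if the pool is empty.
claimed = client[:ids].find_one_and_update({ taken: false }, { '$set' => { taken: true } })
post_id = claimed && claimed['_id']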
This is exactly why the developers of MongoDB constructed their ObjectIDs (the _id) the way they did ... to scale across nodes, etc.
A BSON ObjectID is a 12-byte value consisting of a 4-byte timestamp (seconds since epoch), a 3-byte machine id, a 2-byte process id, and a 3-byte counter. Note that the timestamp and counter fields must be stored big endian, unlike the rest of BSON. This is because they are compared byte-by-byte and we want to ensure a mostly increasing order.
Here's the schema:
bytes 0-3    time
bytes 4-6    machine
bytes 7-8    pid
bytes 9-11   inc
Traditional databases often use monotonically increasing sequence numbers for primary keys. In MongoDB, the preferred approach is to use Object IDs instead. Object IDs are more synergistic with sharding and distribution.
http://www.mongodb.org/display/DOCS/Object+IDs
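If you want to poke at those bytes yourself, here's a small sketch using the Ruby bson gem, purely as an illustration of the layout described above:

require 'bson'

oid = BSON::ObjectId.new
oid.to_s              # => 24 hex characters, e.g. "4d128b6ea794fc13a8000001"
oid.generation_time   # => the Time encoded in the leading 4 timestamp bytes

# The timestamp can also be read straight off the first 8 hex characters:
Time.at(oid.to_s[0, 8].to_i(16))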
So I'd say just use the ObjectIDs.
They are not that bad when converted to a string (these were inserted right after each other) ...
For example:
4d128b6ea794fc13a8000001
4d128e88a794fc13a8000002
They look at first glance to be "guessable" but they really aren't that easy to guess ...
4d128 b6e a794fc13a8000001
4d128 e88 a794fc13a8000002
And for a blog, I don't think it's that big of a deal ... we use it in production all over the place.
What about using UUIDs?
http://www.famkruithof.net/uuid/uuidgen as an example.
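In Ruby, for example, generating one is a one-liner with the standard library:

require 'securerandom'

SecureRandom.uuid   # => e.g. "2d931510-d99f-494a-8c67-87feb05e1594"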
Make a web service that returns a globally-unique ID so that you can have many webservers participate and know you won't hit any duplicates?
What if your daily batch didn't allocate enough items? Do you run it again midday?
I would implement the web-service client as a local queue that a background process can watch and refill as needed (when the server is less busy), keeping enough items in the queue that it never has to run during peak usage. Makes sense?
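A rough Ruby sketch of that client-side queue; the web-service call is only a placeholder here:

require 'securerandom'

ID_BUFFER = SizedQueue.new(1_000)

# Placeholder for the actual call to the ID web service.
def fetch_ids_from_service(count)
  Array.new(count) { SecureRandom.alphanumeric(10) }
end

# Background refiller: SizedQueue#<< blocks once the buffer is full,
# so refilling naturally happens when consumption slows down.
Thread.new do
  loop do
    fetch_ids_from_service(100).each { |id| ID_BUFFER << id }
  end
end

# When publishing a blog post, just take the next pre-allocated ID:
new_post_id = ID_BUFFER.pop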
This is an old question, but for anyone who might be searching for another solution:
One way is to use a simple and fast substitution cipher. (The code below is based on someone else's code -- I forgot where I took it from, so I cannot give proper credit.)
class Array
  def shuffle_with_seed!(seed)
    prng = seed.nil? ? Random.new : Random.new(seed)
    size = self.size
    while size > 1
      # random index
      a = prng.rand(size)
      # last index
      b = size - 1
      # switch last element with random element
      self[a], self[b] = self[b], self[a]
      # reduce size and do it again
      size = b
    end
    self
  end

  def shuffle_with_seed(seed)
    self.dup.shuffle_with_seed!(seed)
  end
end

class SubstitutionCipher
  def initialize(seed)
    normal   = ('a'..'z').to_a + ('A'..'Z').to_a + ('0'..'9').to_a + [' ']
    shuffled = normal.shuffle_with_seed(seed)
    # build the encrypt/decrypt lookup tables
    @map = normal.zip(shuffled).inject(:encrypt => {}, :decrypt => {}) do |hash, (a, b)|
      hash[:encrypt][a] = b
      hash[:decrypt][b] = a
      hash
    end
  end

  def encrypt(str)
    str.split(//).map { |char| @map[:encrypt][char] || char }.join
  end

  def decrypt(str)
    str.split(//).map { |char| @map[:decrypt][char] || char }.join
  end
end
You use it like this:
MY_SECRET_SEED = 3429824
cipher = SubstitutionCipher.new(MY_SECRET_SEED)
id = hash["_id"].to_s
encrypted_id = cipher.encrypt(id)
decrypted_id = cipher.decrypt(encrypted_id)
Note that it'll only encrypt a-z, A-Z, 0-9, and spaces, leaving other chars intact. That's sufficient for BSON ids.
The "correct" answer, which is not really a great solution IMHO, is to generate a random ID, and then check the DB for a collision. If it is a collision, do it again. Repeat until you've found an unused match. Most of the time the first will work (assuming that your generation process is sufficiently random).
It should be noted that this process is only necessary if you are concerned about the security implications of a time-based UUID or a counter-based ID. Either of these will lead to "guessability", which may or may not be an issue in any given situation. I would consider a time-based or counter-based ID to be sufficient for blog posts, though I don't know the details of your situation and reasoning.
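A sketch of that generate-and-check loop in Ruby; the collection and field names are assumptions, and a unique index on the field is still advisable to guard against races:

require 'securerandom'
require 'mongo'

posts = Mongo::Client.new(['127.0.0.1:27017'], database: 'blog')[:posts]

# Keep generating until we find an ID nobody is using yet.
def unused_slug(collection)
  loop do
    candidate = SecureRandom.alphanumeric(10)   # e.g. "s52Ruf6wSt"
    return candidate if collection.find(slug: candidate).limit(1).first.nil?
  end
end

slug = unused_slug(posts)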
Related
I wish to store many (N ≈ 150) boolean values for a web app's "environment" variables.
What is the proper way to get them stored?
creating N columns and one (1) row of data,
creating two (2) or three (3) columns (id smallserial, name varchar(255), value boolean) with N rows of data,
by using the jsonb data type,
by using the array data type,
by using a bit string, bit varying(n),
by another way (please advise)
Note: the names may be quite long.
TIA!
Could you perhaps use a bit string? https://www.postgresql.org/docs/7.3/static/datatype-bit.html. (Set the nth bit to 1 when the nth attribute would have been "true")
It depends on how you want to access them in normal usage.
Do you need to access one of these values at a time? In that case JSONB is a really good way; it's easy and quick to find a record. Or do you need to get all of them in one call? In that case bit string types are the best, but you need to be really careful about ordering and transcription when writing and reading.
Any of the options will do, depending on your circumstances. There is little need to optimise storage if you have only 150 values. Unless, of course, there can be a very large number of these sets of 150 values, or you are working in a very restricted environment like an embedded system (in which case a full-blown database client is probably not what you're looking for).
There is no definite answer here, but I will give you a few guidelines to consider. As from experience:
You don't want to have an anonymous string of values that is interpreted in code. When you change anything later on, your 1101011 or 0x12f08a will become a fascinatingly enigmatic problem.
When the number of your fields starts to grow, you will regret it if they are all stored in a single cell on a single row, because you will either be writing some obscure SQL or transferring a larger-than-needed dataset from the server.
When you feel that boolean values are really not enough, you start to wonder if there is a possibility to store something else too.
Settings and environmental properties are seldom subject to processor or data intensive processing, so follow the easiest path.
As my recommendation based on the given information and some educated guessing, you'll probably want to store your information in a table like
key (string)     | set_idx (integer) | value (string)
-----------------+-------------------+----------------
use.the.force    | 1899              | 1
home.directory   | 1899              | /home/dvader
use.the.force    | 1900              | 0
home.directory   | 1900              | /home/yoda
Converting a 1 to boolean true is cheap, and if you have only one set of values, you can ignore the set index.
User.find(:all, :order => "RANDOM()", :limit => 10) was the way I did it in Rails 3.
User.all(:order => "RANDOM()", :limit => 10) is how I thought Rails 4 would do it, but this is still giving me a Deprecation warning:
DEPRECATION WARNING: Relation#all is deprecated. If you want to eager-load a relation, you can call #load (e.g. `Post.where(published: true).load`). If you want to get an array of records from a relation, you can call #to_a (e.g. `Post.where(published: true).to_a`).
You'll want to use the order and limit methods instead. You can get rid of the all.
For PostgreSQL and SQLite:
User.order("RANDOM()").limit(10)
Or for MySQL:
User.order("RAND()").limit(10)
As the random function varies between databases, I would recommend using the following code:
User.offset(rand(User.count)).first
Of course, this is useful only if you're looking for only one record.
If you want to get more than one, you could do something like:
User.offset(rand(User.count) - 10).limit(10)
The - 10 is to assure you get 10 records in case rand returns a number greater than count - 10.
Keep in mind you'll always get 10 consecutive records.
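One caveat: if rand returns a value below 10, the offset goes negative; clamping it at zero keeps the same idea safe:

offset = [rand(User.count) - 10, 0].max
User.offset(offset).limit(10)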
I think the best solution is really ordering randomly in the database.
But if you need to avoid the database-specific random function, you can use the pluck-and-shuffle approach.
For one record:
User.find(User.pluck(:id).shuffle.first)
For more than one record:
User.where(id: User.pluck(:id).sample(10))
I would suggest making this a scope as you can then chain it:
class User < ActiveRecord::Base
scope :random, -> { order(Arel::Nodes::NamedFunction.new('RANDOM', [])) }
end
User.random.limit(10)
User.active.random.limit(10)
While not the fastest solution, I like the brevity of:
User.ids.sample(10)
The .ids method yields an array of User IDs and .sample(10) picks 10 random values from this array.
I strongly recommend this gem for random records; it is specially designed for tables with lots of rows:
https://github.com/haopingfan/quick_random_records
All the other answers perform badly with a large database, except this gem:
quick_random_records only cost 4.6ms totally.
the accepted answer User.order('RAND()').limit(10) cost 733.0ms.
the offset approach cost 245.4ms totally.
the User.all.sample(10) approach cost 573.4ms.
Note: My table only has 120,000 users. The more records you have, the bigger the performance difference will be.
UPDATE:
Performance on a table with 550,000 rows:
Model.where(id: Model.pluck(:id).sample(10)) cost 1384.0ms
gem: quick_random_records only cost 6.4ms totally
For MySQL this worked for me:
User.order("RAND()").limit(10)
You could call .sample on the records, like: User.all.sample(10)
@maurimiranda's answer, User.offset(rand(User.count)).first, is not good if we need to get 10 random records, because User.offset(rand(User.count) - 10).limit(10) will return a sequence of 10 records starting from a random position; they are not "totally random", right? So we would need to call that function 10 times to get 10 truly random records.
Besides that, offset is also not good if the random function returns a high value. If your query looks like offset: 10000 and limit: 20, it generates 10,020 rows and throws away the first 10,000 of them, which is very expensive. So calling offset/limit 10 times is not efficient.
So I think that if we just want to get one random user, then User.offset(rand(User.count)).first may be better (at least we can improve it by caching User.count).
But if we want 10 random users or more, then User.order("RAND()").limit(10) should be better.
Here's a quick solution. I'm currently using it with over 1.5 million records and getting decent performance. The best solution would be to cache one or more random record sets, and then refresh them with a background worker at a desired interval.
Create a random_records_helper.rb file:
module RandomRecordsHelper
  def random_user_ids(n)
    user_ids = []
    user_count = User.count
    n.times { user_ids << rand(1..user_count) }
    user_ids
  end
end
in the controller:
@users = User.where(id: random_user_ids(10))
This is much quicker than the .order("RANDOM()").limit(10) method - I went from a 13 sec load time down to 500ms.
I am working on a database that (hopefully) will end up using a primary key with both numbers and letters in its values to track lots of agricultural product. Due to the way in which the weighing of product takes place at more than one facility, I have no other option but to keep the same base number and add letters to it to denote split portions of each lot of product. The problem is that after I create record number 99, the number 100 suddenly sorts up underneath 10. This makes it difficult to maintain consistency and forces me to replace this alphanumeric lot ID with a strictly numeric value in order to keep it sorted (for which I use "autonumber" as the data type). Either way, I need the alphanumeric lot ID, so having two IDs for the same lot can be confusing for anyone entering values into the form. Is there a way around this that I am just not seeing?
If you're using a query as the data source, you could try sorting by the string converted to a number, something like:
SELECT id, field1, field2, ..
ORDER BY CLng(YourAlphaNumericField)
Edit: you may also try the Val function instead of CLng; it should not fail on non-numeric input.
Why not format your key properly before saving, e.g. "0000099"? You will avoid a costly conversion later.
Alternatively, you could use 2 fields as the composite PK. One with the Number (as Long) and one with the Location (as String).
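Purely as an illustration of the zero-padding idea above (shown in Ruby; in Access the same formatting could be done in VBA or at data entry): once every key has the same width, string order matches numeric order.

format("%07d", 99)    # => "0000099"
format("%07d", 100)   # => "0000100", which sorts after "0000099" as a string too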
I have an email token collection in my MongoDB database for a Meteor app, and I stick these email tokens in the reply address of my email (e.g. #example.com) so that when I parse it I know what it relates to.
The problem I have is that the email token uses the default _id algorithm to generate a unique id and that algorithm generates a string that is a mixture of upper case and lower case characters.
However, I've discovered that some email clients lowercase the entire reply address, which means that I can only identify the addresses case-insensitively.
I guess now I have two options.
1) The easiest option would be to match the email tokens with the reply address case insensitively. What would be the chance of clashes in that respect?
2) Make the email token some sort of GUID and generate this GUID independently of the MongoDB ID creation.
Yes, you would get issues. Meteor uses both upper- and lower-case characters in its 17-character id values. You can have a look at the code in the Random package: https://github.com/meteor/meteor/tree/devel/packages/random.
So it would be possible to get two distinct values of which the differences could only be casing. This could cause mixups if your client's email applications convert the address to lowercase characters.
In your case it is best not to use Random.id(); rather, make up your own random character generator. Something like this might work:
var lowerCaseId = function() {
  var digits = [];
  for (var i = 0; i < 17; i++) {
    digits[i] = Random.choice("23456789abcdefghijkmnopqrstuvwxyz");
  }
  return digits.join("");
};
Also of note: the Meteor _id value is built up of 'unmistakable characters'; there are no characters that can cause confusion, such as 0 vs O or 1 vs I.
If you don't use it in your _id field, you will have to generate a value with this and check that it does not exist in your database before inserting it, or use a unique index for it.
Additionally, be aware that there will be a significant decrease in entropy, since the number of possible combinations drops with the loss of the uppercase characters. If this is significant to you, you could increase the number of digits from 17 in the code above.
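As a rough back-of-the-envelope comparison (assuming the full mixed-case Meteor alphabet has 55 characters, versus the 33 lowercase-safe characters used above):

Math.log2(55 ** 17)   # => ~98 bits of entropy for a standard 17-character id
Math.log2(33 ** 17)   # => ~86 bits with the lowercase-only alphabet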
Meteor generates its own ids, which are different from the MongoDB ObjectIds. As noted, these would be subject to clashes when converting case or checking case-insensitively. This is kind of interesting, and I'm not sure of the project's reasons for it.
Under the hood, however, Meteor uses the MongoDB native Node.js driver, so the ObjectId creation functions should be available if you want to use them.
https://github.com/mongodb/js-bson/blob/master/lib/bson/objectid.js#L68-L74
The important part is in these calls:
value.toString(16)
So the radix here is set to 16 for hex, i.e. all the characters 0-9a-f.
You can also note that drivers will check ObjectId strings with a regex like this:
^[0-9a-fA-F]{24}$
So it would seem that case sensitivity is not an issue.
Still, if you want to use something alternative, there is a section in the documentation that might serve as a useful guide:
http://docs.mongodb.org/manual/core/document/#the-id-field
I'm just trying to get a grip on when you would need to use a hash and when it might be better to use an array. What kind of real-world object would a hash represent, say, in the case of strings?
I believe sometimes a hash is referred to as a "dictionary", and I think that's a good example in itself. If you want to look up the definition of a word, it's nice to just do something like:
definition['pernicious']
Instead of trying to figure out the correct numeric index that the definition would be stored at.
This answer assumes that by "hash" you're basically just referring to an associative array.
I think you're looking at things in the wrong direction. It is not the object that determines whether you should use a hash but the manner in which you are accessing it. A common use of a hash is a lookup table. If your objects are strings and you want to check whether they exist in a dictionary, looking them up will (assuming the hash works properly) be O(1). With sorting, the time would instead be O(log n), which may not be acceptable.
Thus, hashes are ideal for use with Dictionaries (hashmaps), sets (hashsets), etc.
They are also a useful way of representing an object without storing the object itself (for passwords).
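A minimal Ruby sketch of the dictionary and set uses described above:

require 'set'

# Dictionary / lookup table: key -> value, O(1) lookup on average
definition = { "pernicious" => "having a harmful effect" }
definition["pernicious"]     # => "having a harmful effect"

# Set: fast membership test, no values attached
words = Set.new(["cat", "dog"])
words.include?("cat")        # => true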
The phone book - key = name, value = phone number.
I also think of the old World Book Encyclopedias (actual books). Each article is "hashed" into a single book (cat goes in the "C" volume).
Any time you have data that is well served by a 1-to-1 map.
For example, grades in a class:
"John Smith" => "B+"
"Jacob Jenkens" => "C"
etc
In general, hashes are used to find things fast: a hash map can be used to associate one thing with another fast, and a hash set will just store things "fast".
Please also consider the hash function's complexity and cost when deciding whether a hash container or an ordinary less-than-ordered container is better: the additional size of the hash value, the time needed to compute a "perfect" hash, and the time needed to make a 1:1 comparison at the end in case of a hash collision may in fact be a lot higher than just walking a tree structure of logarithmic depth using less-than comparisons.
When you need to associate one variable with another. There isn't a "type limit" to what can be a key/value in a hash.
Hashes have many uses. Aside from cryptographic uses, they are commonly used for quick lookups of information. To get similarly quick lookups using an array you would need to keep the array sorted and then use a binary search. With a hash you get the fast lookup without having to sort. This is the reason most scripting languages implement hashing under one name or another (dictionaries, et al.).
I use one often for a "dictionary" of settings for my app.
Setting | Value
I load them from the database or a config file into a hashtable for use by my app.
Works well, and is simple.
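A sketch of that pattern, assuming a simple key=value config file (the file name and keys are made up):

settings = {}
File.readlines("app_settings.conf").each do |line|
  key, value = line.strip.split("=", 2)
  settings[key] = value unless key.nil? || key.empty?
end

settings["cache_enabled"]   # => "true", say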
One example could be zip code associated with an area, city or any postal address.
A good example is a cache with lots of elements in it. You have some identifier by which you want to look up a value (say a URL, and you want to find the corresponding cached web page). You want these lookups to be as fast as possible and don't want to search through all the stored pages every time some URL is requested. A hash table is a great data structure for a problem like this.
One real-world example I just wrote is adding up the amounts people spent on meals when filing expense reports. I needed to get a daily total with no idea how many items would exist on a particular day and no idea what the date range for the expense report would be. There are restrictions on how much a person can expense, with many variables (what city, weekend, etc.).
The hash table was the perfect tool to handle this. The key was the date; the value was the receipt amount (converted to USD). The receipts could come in in any order; I just kept fetching the value for that date and adding to it until the job was done. Displaying was easy as well.
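A sketch of that daily-totals pattern in Ruby (receipts, date, and amount_usd are just assumed names):

# Hash with a default of 0, keyed by date; the value is the running total in USD.
totals = Hash.new(0)
receipts.each do |receipt|
  totals[receipt.date] += receipt.amount_usd
end
# Receipts can arrive in any order; totals ends up mapping each date to that day's sum.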
(php code)
$david = new stdclass();
$david->name = "david";
$david->age = 12;
$david->id = 1;
$david->title = "manager";
$joe = new stdclass();
$joe->name = "joe";
$joe->age = 17;
$joe->id = 2;
$joe->title = "employee";
// option 1: lets put users by index
$users[] = $david;
$users[] = $joe;
// option 2: lets put users by title
$users[$david->title] = $david;
$users[$joe->title] = $joe;
now the question: who is the manager?
answer:
$users["manager"]