How can I set up LRU eviction in Redis not at the instance level but on some particular structure?
Let's say a hash.
I am using the hash key as one cache bucket, the field as the key, and the value as the value.
So it is like:
Redis's key-field-value is type-key-value for me.
If that is not straightforward, then I would like to use a DB-level LRU instead.
(One Redis instance has 16 DBs; I would like to use DB 1 as LRU. That means everything that goes into DB 1 will follow LRU, while whatever goes into DB 2, 3, ... will not.)
I implemented it with Hash + Sorted Set.
In my case the equivalent in Redis terms is:
Hash
key, field, value = type, key, value
Sorted Set:
key, score, value = type, lru_counter, key
(Take the lowest range: if you want to remove 5 elements, zrange(type, 0, 4) will give you the 5 least recently used keys.)
The hash stores the actual cache.
The sorted set stores just the keys (as members) with scores. Every put and get on the hash (for any key) increments lru_counter (an integer variable) and writes an entry for that key into the sorted set with lru_counter as the score (type, lru_counter, key).
Hence the most recently put/get key has the highest score (lru_counter) in the sorted set.
When it comes to removal, I take out the lowest-scored members (via zrange; each member is a field of the hash) and remove them from both the sorted set and the hash.
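For illustration, here is a minimal redis-py sketch of this scheme. The function names and the ":lru"/":lru_counter" key suffixes are my own choices (the sorted set needs its own key name, since a single Redis key cannot hold both a hash and a sorted set):

import redis

r = redis.Redis(decode_responses=True)   # assumes a local Redis instance

def _touch(bucket, key):
    # Bump the global counter and record this key's recency in the sorted set.
    counter = r.incr(bucket + ":lru_counter")
    r.zadd(bucket + ":lru", {key: counter})

def cache_put(bucket, key, value):
    r.hset(bucket, key, value)
    _touch(bucket, key)

def cache_get(bucket, key):
    value = r.hget(bucket, key)
    if value is not None:
        _touch(bucket, key)
    return value

def cache_evict(bucket, n=5):
    # Lowest scores = least recently used; remove from both structures.
    victims = r.zrange(bucket + ":lru", 0, n - 1)
    if victims:
        r.hdel(bucket, *victims)
        r.zrem(bucket + ":lru", *victims)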
I have some keys stored in a database that need to be reset periodically.
It goes like this:
{
    name: "Bob",
    limits: {
        one: 1000,
        two: 14,
        three: 19
    }
}
I looked for a query that assigns values to all keys inside an object; at first I thought $each would help me, but it has a totally different purpose.
What I want is to assign, let's say, 0 to all the keys inside limits.
Note: the key names inside limits aren't constant across users and aren't always known.
It would be exactly the same as Object.keys(limit).forEach(itm => limit[itm] = 0) does in regular JS, but in Mongo's context.
You're halfway there:
You can iterate the collection and apply that same logic to every cursor document:
db.collection.find().forEach(function(d) {
    // reset every key under limits to 0, then write the document back
    Object.keys(d.limits).forEach(item => d.limits[item] = 0);
    db.collection.save(d); // deprecated in newer shells; db.collection.replaceOne({_id: d._id}, d) also works
})
Caution: this can be very slow.
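One way to mitigate the slowness, sketched below with PyMongo under assumed database/collection names, is to batch the per-document updates into bulk writes and $set only the limits fields instead of saving whole documents back:

from pymongo import MongoClient, UpdateOne

client = MongoClient()                      # assumes a local mongod
coll = client["test"]["users"]              # hypothetical db/collection names

ops = []
for doc in coll.find({}, {"limits": 1}):    # fetch only the limits field
    zeroed = {"limits." + k: 0 for k in doc.get("limits", {})}
    if zeroed:
        ops.append(UpdateOne({"_id": doc["_id"]}, {"$set": zeroed}))
    if len(ops) >= 1000:                    # flush in batches of 1000
        coll.bulk_write(ops)
        ops = []
if ops:
    coll.bulk_write(ops)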
I've created a Redis key/value index this way:
set 7:12:321 '{"some":"JSON"}'
The key is delimited by colons; each part of the key represents a level in a hierarchic index.
get 7:12:321 means that I know the exact hierarchy and want only one single item.
scan 7:12:* means that I want every item under id 7 at the first level of the hierarchy and id 12 at the second level.
The problem is: if I want the JSON values, I have to scan first (~50,000 entries in a few ms) and then get every key returned by the scan one by one (800 ms).
This is not very efficient, and this is the only answer I found on Stack Overflow when searching for "scanning Redis values".
1/ Is there another way of scanning Redis to get values or key/value pairs and not only keys? I tried hscan as follows:
hset myindex 7:12:321 '{"some":"JSON"}'
hscan myindex 0 MATCH 7:12:*
But it destroys the performance (almost 4 s for the 50,000 entries).
2/ Is there another data structure in Redis I could use in the same way, but which could "scan for values" (hset?)
3/ Should I go with another data storage solution (PostgreSQL ltree, for instance?) to better suit my use case at this scale?
I must be missing something really obvious, because this sounds like a common use case.
Thanks for your answers.
Optimization for your current solution
Instead of getting every key returned by scan one by one, you should use mget to fetch the key-value pairs in batches, or use a pipeline to reduce round-trip time (RTT).
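A minimal redis-py sketch of that batching, assuming a local instance and the same key layout:

import redis

r = redis.Redis(decode_responses=True)   # assumes a local Redis instance

# SCAN still iterates the keyspace, but MGET fetches all the
# matching values in a single round trip instead of one GET per key.
keys = list(r.scan_iter(match="7:12:*", count=1000))
values = r.mget(keys) if keys else []
for key, value in zip(keys, values):
    print(key, value)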
Efficiency problem of your current solution
The scan command iterates over all keys in the database, even if only a few keys match the pattern, so performance degrades as the total number of keys grows.
Another solution
Since each hierarchic index is an integer, you can encode the indexes into a single number and use that number as the score of a sorted set. That way, instead of searching by pattern, you search by score range, which is very fast with a sorted set. Take the following as an example.
Say the first (right-most) hierarchic index is less than 1000 and the second is less than 100. Then you can encode the index (e.g. 7:12:321) into a score: 321 + 12 * 1000 + 7 * 100 * 1000 = 712321. Then add the score and the value to a sorted set: zadd myindex 712321 '{"some":"JSON"}'.
When you want to search for keys that match 7:12:*, just use the zrangebyscore command to get data with a score between 712000 and 712999: zrangebyscore myindex 712000 712999 withscores.
In this way, you get the key (decoded from the returned score) and the value together. It should also be faster than the scan solution.
UPDATE
The solution has a little problem: members of a sorted set must be unique, so you cannot have 2 keys with the same value (i.e. the same JSON string).
// inserts a new member
zadd myindex 712321 '{"the_same":"JSON"}'
// does NOT insert a second member; it only updates the score of the existing one
zadd myindex 712322 '{"the_same":"JSON"}'
In order to solve this problem, you can prefix the JSON string with the key to make each member unique:
zadd myindex 712321 '7:12:321-{"the_same":"JSON"}'
zadd myindex 712322 '7:12:322-{"the_same":"JSON"}'
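Putting the pieces together, a small redis-py sketch of the encode-and-prefix scheme (the encode helper and the "-" separator are my own choices, not part of Redis):

import redis

r = redis.Redis(decode_responses=True)   # assumes a local Redis instance

def encode(a, b, c):
    # assumes the right-most index is < 1000 and the middle one is < 100
    return c + b * 1000 + a * 100 * 1000

# Prefix each member with its key so identical JSON payloads stay unique.
r.zadd("myindex", {'7:12:321-{"the_same":"JSON"}': encode(7, 12, 321)})
r.zadd("myindex", {'7:12:322-{"the_same":"JSON"}': encode(7, 12, 322)})

# Everything under 7:12:* falls in the score range [712000, 712999].
for member, score in r.zrangebyscore("myindex", 712000, 712999, withscores=True):
    key, _, json_value = member.partition("-")
    print(key, int(score), json_value)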
You could consider using a sorted set and lexicographical ranges, as long as you only need to perform prefix searches. For more information about this and about indexing in general, refer to http://redis.io/topics/indexes
Updated with an example:
Consider the following:
$ redis-cli
127.0.0.1:6379> ZADD anotherindex 0 '7:12:321:{"some":"JSON"}'
(integer) 1
127.0.0.1:6379> ZRANGEBYLEX anotherindex [7:12: [7:12:\xff
1) "7:12:321:{\"some:\"JSON\"}"
Now go and read about this so you 1) understand what it does and 2) know how to avoid possible pitfalls :)
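For completeness, the same prefix lookup from redis-py (a sketch; the trailing \xff byte just serves as a high upper bound for the prefix):

import redis

r = redis.Redis(decode_responses=True)   # assumes a local Redis instance

# Score 0 for every member, so ordering is purely lexicographical.
r.zadd("anotherindex", {'7:12:321:{"some":"JSON"}': 0})

# "[" makes the bound inclusive; the 0xff byte caps the 7:12: prefix.
matches = r.zrangebylex("anotherindex", b"[7:12:", b"[7:12:\xff")
print(matches)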
I have a collection of 15,000 documents. Some documents have sr_no with numeric values and others have no sr_no at all.
Now I want all documents that have sr_no to come first, sorted ascending, followed by all the others.
I tried .find().sort({sr_no: 1}), but it returns all the null entries first and then the sr_no entries in ascending order.
This question seems very close to a duplicate, but it differs slightly in that the key is numeric.
I answered it with the hack below.
I used a dirty hack for this.
The MongoDB docs define a fixed comparison order across BSON types: in ascending order, null (a missing key is treated as null) sorts before numbers, and numbers sort before strings.
So when I sort ascending, all the null entries come first, and only then the numeric entries.
What is the hack here?
Store sr_no: "" as an empty-string default on documents that lack the field.
Now the sort returns the numeric values first, then the strings.
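A quick PyMongo sketch of the hack (database/collection names are placeholders):

from pymongo import MongoClient, ASCENDING

client = MongoClient()                    # assumes a local mongod
coll = client["test"]["items"]            # hypothetical db/collection names

# Backfill the empty-string default on documents that lack sr_no.
coll.update_many({"sr_no": {"$exists": False}}, {"$set": {"sr_no": ""}})

# In BSON comparison order numbers sort before strings, so the
# numeric sr_no documents now come first in an ascending sort.
for doc in coll.find().sort("sr_no", ASCENDING):
    print(doc.get("sr_no"))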
How are compound shard keys used to generate new chunks? I mean, if I have a shard key like
{key1 : 1, key2 : 1}
and the collection is being populated as we go, how does MongoDB create the new chunk boundaries given that there are two keys? I can see them in the config server BUT I cannot read them. They look like
[min key1,min key2] ---> [max key1, max key2]
and quite often, min key2 > max key2. How does that make sense?
In other words, how are a chunk's min and max set on new chunks given that the shard key is compound?
key1 is of type string and key2 is of type int.
I would appreciate it if you could explain it by an example.
The initial chunk's boundary runs from negative to positive infinity (MinKey to MaxKey); as you insert, MongoDB splits that initial chunk into smaller ones. Compound boundaries are compared field by field: key1 is compared first, and key2 only breaks ties when the key1 values are equal, which is why a boundary can show min key2 > max key2 and still make sense.
Here is a thread which should answer your question.
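To see why min key2 > max key2 is fine, note that compound bounds compare field by field, exactly like Python tuples; a toy illustration:

# Compound chunk bounds compare field by field, like Python tuples:
# key1 decides first, key2 only breaks ties on equal key1 values.
lower = ("a", 50)    # chunk min: key1 = "a", key2 = 50
upper = ("b", 10)    # chunk max: key1 = "b", key2 = 10

print(lower < upper)        # True: "a" < "b", so key2 is never consulted
print(("a", 99) < upper)    # True: inside the chunk despite 99 > 10
print(("b", 5) < upper)     # True: equal key1, so 5 < 10 decides
print(("b", 10) < upper)    # False: chunk ranges are [min, max), max exclusive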