Namespaces in Redis?

Is it possible to create namespaces in Redis?
From what I found, all the global commands (e.g. counting keys with DBSIZE, or deleting everything with FLUSHALL) operate on all the objects in the database. Is there a way to create sub-spaces so that these commands are limited in scope?
I don't want to set up different Redis servers for this purpose.
I assume the answer is "No", and wonder why this wasn't implemented, as it seems to be a useful feature without too much overhead.

A Redis server can handle multiple databases, which are numbered; it provides 16 of them by default (numbered 0 through 15). You can access them using the -n option to the redis-cli command-line tool, through similar options in a client's connection arguments, or by using a select() method on its connection objects (in this case .select() is the method name for the Python Redis module; I presume it's named similarly in other libraries and interfaces).
There's a databases option in the configuration file for the Redis server daemon to control how many separate databases you want as well. I don't know what the upper limit would be, and there doesn't seem to be a way to change it dynamically (in other words, it seems you'd have to shut down and restart the server to add additional DBs). Also, there doesn't seem to be a way to associate these DB numbers with any sort of name, nor to impose separate ACLs or even different passwords on them. Redis, of course, is schema-less as well.
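To make the scoping concrete, here is a minimal sketch using the Jedis client for Java (the client choice is my own assumption; any client exposes the same SELECT and FLUSHDB commands):

import redis.clients.jedis.Jedis;

public class RedisDatabasesExample {
    public static void main(String[] args) {
        try (Jedis jedis = new Jedis("localhost", 6379)) {
            jedis.select(0);                 // work in database 0
            jedis.set("counter", "1");

            jedis.select(1);                 // switch this connection to database 1
            jedis.set("counter", "99");
            jedis.flushDB();                 // FLUSHDB clears database 1 only;
                                             // FLUSHALL would wipe every database

            jedis.select(0);
            System.out.println(jedis.get("counter")); // prints "1": db 0 is untouched
        }
    }
}

Note that SELECT is per-connection state, so with a connection pool each connection needs its own SELECT (or a client configured with the target DB up front).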

If you are using Node, ioredis has transparent key prefixing, which works by having the client prepend a given string to each key in a command. It works in the same way that Ruby's redis-namespace does. This client-side approach still puts all your keys into the same database, but at least you add some structure, and you don't have to use multiple databases or servers.
var fooRedis = new Redis({ keyPrefix: 'foo:' });
fooRedis.set('bar', 'baz'); // Actually sends SET foo:bar baz

If you use Ruby you can look at these gems:
https://github.com/resque/redis-namespace
https://github.com/jodosha/redis-store


Is it possible when using MongoTemplate to dynamically set read preference for a particular query?

In our application, we manage a number of MongoTemplate instances, each representing a client database. For the majority of database operations, we want to use the secondaryPreferred read preference in order to leverage our cluster's read replicas and distribute load. However, in at least one case we need to read from the primary to get the most recent data. I don't see any way to override the read preference for this single query. I see this issue on the JIRA board, but it's been open for 6 years and the associated StackOverflow link is dead. Assuming that won't be implemented, I'm trying to figure out some alternate solutions. Does this seem like a correct assessment of the possible options?
1) Create two MongoClients with different read preferences, and use them to create a separate set of MongoTemplates for primary and secondary reads. I'm concerned that this probably creates double the number of connections to the cluster (although perhaps it's not a concern if the additional connections all go to the secondaries).
2) Use the MongoTemplate.setReadPreference() method to temporarily change the read preference before performing the operation, then reset it once finished. It seems like this would be vulnerable to race conditions, however.
3) Sidestep the Spring Data framework and use executeCommand() directly, which supports a readPreference argument. This means we'd lose all the benefits and abstraction of Spring Data and have to manipulate BSON objects directly.
The Query class has a slaveOk() method, but this is the inverse of what I'm looking for and it seems like it's deprecated.
Any further information is appreciated as well. Thanks!
As a workaround, you can override the prepareCollection(MongoCollection<Document> collection) method in MongoTemplate, change the read preference for the needed query alone, and let the rest of the cases follow the default read preference.
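A minimal sketch of that workaround, assuming a thread-local flag to mark the operations that must read from the primary (the subclass name and the readFromPrimary helper are hypothetical):

import com.mongodb.ReadPreference;
import com.mongodb.client.MongoCollection;
import org.bson.Document;
import org.springframework.data.mongodb.MongoDatabaseFactory;
import org.springframework.data.mongodb.core.MongoTemplate;

public class ReadPreferenceAwareMongoTemplate extends MongoTemplate {

    // Hypothetical flag: set only around the operations that need fresh data.
    private static final ThreadLocal<ReadPreference> OVERRIDE = new ThreadLocal<>();

    // The factory argument is MongoDbFactory in older Spring Data versions.
    public ReadPreferenceAwareMongoTemplate(MongoDatabaseFactory factory) {
        super(factory);
    }

    public static void readFromPrimary(Runnable action) {
        OVERRIDE.set(ReadPreference.primary());
        try {
            action.run();
        } finally {
            OVERRIDE.remove();
        }
    }

    @Override
    protected MongoCollection<Document> prepareCollection(MongoCollection<Document> collection) {
        // Called by MongoTemplate before every collection operation, so the
        // override applies to exactly the queries wrapped in readFromPrimary().
        MongoCollection<Document> prepared = super.prepareCollection(collection);
        ReadPreference preference = OVERRIDE.get();
        return preference != null ? prepared.withReadPreference(preference) : prepared;
    }
}

Because the flag is thread-local and cleared in a finally block, this avoids the race condition that calling setReadPreference() on a shared template would create.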
As a side note, it seems that slaveOk() literally does nothing; see:
https://github.com/spring-projects/spring-data-mongodb/blob/f00991dc293dceee172b1ece6613dde599a0665d/spring-data-mongodb/src/main/java/org/springframework/data/mongodb/core/MongoTemplate.java#L3328
switch (option) {
    case NO_TIMEOUT:
        cursorToUse = cursorToUse.noCursorTimeout(true);
        break;
    case PARTIAL:
        cursorToUse = cursorToUse.partial(true);
        break;
    case SECONDARY_READS:
    case SLAVE_OK:
        break;
    default:
        throw new IllegalArgumentException(String.format("%s is no supported flag.", option));
}
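Note how the SECONDARY_READS and SLAVE_OK cases fall straight through to break without touching the cursor, which is why setting the flag has no effect.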

MongoDB 2.6: maxTimeMS on an instance/database level

MongoDB 2.6 introduced the .maxTimeMS() cursor method, which allows you to specify a maximum running time for each query. This is awesome for ad-hoc queries, but I wondered if there was a way to set this value at a per-instance or per-database (or even per-collection) level, to try to prevent locking in general.
And if so, could that value then be overridden on a per-query basis? I would love to set an instance-level timeout of 3000 ms or thereabouts (since that would be a pretty extreme running time for queries issued by my application), but then be able to ignore it if I had a report to run.
Here's the documentation from mongodb.org, for reference: http://docs.mongodb.org/manual/reference/method/cursor.maxTimeMS/#behaviors
Jason,
Currently MongoDB does not support a global / universal maxTimeMS; at the moment this option applies to individual operations only. If you wish to have such a global setting available in MongoDB, I would suggest raising a SERVER ticket at https://jira.mongodb.org/browse/SERVER along with use-cases that can take advantage of such a setting.
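For reference, here is what the per-operation setting looks like; a minimal sketch with the MongoDB Java driver (the driver choice and the database/collection names are my own assumptions, since the question itself is driver-agnostic):

import com.mongodb.client.FindIterable;
import com.mongodb.client.MongoClient;
import com.mongodb.client.MongoClients;
import com.mongodb.client.MongoCollection;
import com.mongodb.client.model.Filters;
import org.bson.Document;
import java.util.concurrent.TimeUnit;

public class MaxTimeExample {
    public static void main(String[] args) {
        try (MongoClient client = MongoClients.create("mongodb://localhost:27017")) {
            MongoCollection<Document> orders =
                    client.getDatabase("shop").getCollection("orders");

            // Normal application query: abort server-side after 3000 ms.
            FindIterable<Document> quick = orders.find(Filters.eq("status", "open"))
                    .maxTime(3000, TimeUnit.MILLISECONDS);

            // Long-running report: pass a larger limit (or omit maxTime entirely).
            FindIterable<Document> report = orders.find()
                    .maxTime(10, TimeUnit.MINUTES);

            for (Document doc : quick) {
                System.out.println(doc.toJson());
            }
        }
    }
}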
I know this is old, but I came here with a similar question and decided to post my findings. The timeout, as a global parameter, is supported by the drivers as part of the connection string, which makes sense because it's the driver that controls these global parameters. You can find documentation about this at https://docs.mongodb.com/manual/reference/connection-string/, but each driver can handle it slightly differently (C#, for example, uses a settings object for this; Python takes them as parameters in the constructor; etc.).
To test this you can start a test server using mtools like this:
mlaunch --replicaset --name testrepl --nodes 3 --port 27000
Then an example in Python looks like this:
from pymongo import MongoClient
c = MongoClient(host="mongodb://localhost:27000,localhost:27001,localhost:27002/?replicaSet=testrepl&wtimeoutMS=2000&w=3")
c.test_database.col.insert({ "name": "test" })
I'm using the URI method so this can be adapted to other drivers, but Python also supports the w and wtimeout parameters directly. In this example, all write operations default to a 2-second timeout and must be acknowledged by 3 nodes before returning. If you restart the database and use a wtimeout of 1 (meaning 1 ms), you will see an exception, because replication takes a bit longer to initialize the first time you execute the Python script.
Hope this helps others who come here with the same question.

mongodb why do we need getSisterDB

While playing with the mongodb console help, I found the db.getSisterDB() method.
And I am curious what is the purpose of this method. Looking through mongodb documentation and a quick google search did not yield satisfactory results.
Typing db.getSisterDB.help generates an error, and typing db.getSisterDB gives the following definition of this method:
function ( name ){
    return this.getMongo().getDB( name );
}
which suggests that this is just a wrapper around getDB. My guess is that it is used to access databases in a replica set, but I would like to hear from someone who can give a more thorough explanation.
In the shell, db is a reference to the current database. If you want to query against a different DB in the same mongod instance, the way to get a proper reference to it is to use this method (which has a more gender-neutral alias, getSiblingDB).
If you wanted to use the longer syntax, you could: db.getMongo().getDB(name) gets you the same thing as db.getSiblingDB(name) or db.getSisterDB(name), but it's longer to type.
All of the above work the same way in standalone mongod as well as replica sets (and sharded clusters).
I'm going to add to the accepted answer because I did not find what I wanted as a first result.
getSiblingDB exists for scripting, where the use helper is not available.
getSiblingDB is the newer of the two identical methods, so use it; getSisterDB no longer appears in the documentation.
When used in the shell, getSiblingDB serves the purpose of getting a database without changing the db variable.
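Outside the shell there is no mutable db variable at all; drivers simply hand out independent database handles, which is the same call the shell wrapper delegates to. A minimal sketch with the MongoDB Java driver (the driver choice is mine, purely for illustration):

import com.mongodb.client.MongoClient;
import com.mongodb.client.MongoClients;
import com.mongodb.client.MongoDatabase;

public class SiblingDbExample {
    public static void main(String[] args) {
        try (MongoClient client = MongoClients.create("mongodb://localhost:27017")) {
            // Like db.getSiblingDB("reporting") in the shell: get a handle to
            // another database on the same connection without any notion of a
            // "current" database being switched.
            MongoDatabase app = client.getDatabase("app");
            MongoDatabase reporting = client.getDatabase("reporting");
            System.out.println(app.getName() + " / " + reporting.getName());
        }
    }
}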

Mapping to legacy MongoDB store

I'm attempting to write up a Yesod app as a replacement for a Ruby JSON service that uses MongoDB on the backend and I'm running into some snags.
The sql=foobar syntax in the models file does not seem to affect which collection Persistent.MongoDB uses. How can I change that?
Is there a way to easily configure mongodb (preferably through the yaml file) to be explicitly read-only? I'd take more comfort deploying this knowing there was no possible way the app could overwrite or damage production data.
Is there any way I can get Persistent.MongoDB to ignore fields it doesn't know about? This service only needs a fraction of the fields in the collection in question. In order to keep the code as simple as possible, I'd really like to just map to the fields I care about and have Yesod ignore everything else. Instead it complains that the fields don't match.
How does one go about defining instances for models, such as ToJSON? I'd like to customize how that JSON gets rendered, but I get the following error:
Handler/ProductStat.hs:8:10:
    Illegal instance declaration for `ToJSON Product'
      (All instance types must be of the form (T t1 ... tn)
       where T is not a synonym.
       Use -XTypeSynonymInstances if you want to disable this.)
    In the instance declaration for `ToJSON Product'
1) It seems that sql= is not hooked up to Mongo. Since the SQL backends already do this, it shouldn't be difficult to add for Mongo.
2) You can change the function that runs the queries. In persistent/persistent-mongoDB/Database/Persist there is a runPool function of PersistConfig, which gets used in the Yesod defaults. We should probably change the loadConfig function to check a readOnly setting.
3) I am OK with changing the reorder function to allow for ignoring fields, although in the future (if MongoDB returns everything in order) that may have performance implications, so ideally you would list the ignored columns.
4) This shouldn't require changes to Persistent. Did you try turning on TypeSynonymInstances?
I have several other Yesod/Persistent priorities to attend to before these changes; please roll up your sleeves and let me know what help you need making them. I can change 2 & 3 myself fairly soon if you are committed to testing them.

Using several database setups with Lucene.Net

Hi
I am developing a search function for a web application with Lucene.Net and NHibernate.Search. The application is used by a lot of companies but runs as a single service, using a different database for each company. Therefore I need an index directory per database rather than one directory for the entire application. Is there a way to achieve this in Lucene.Net?
I have also considered storing the indexes for each company in their respective databases, but I haven't found any satisfactory components for this. I have read about Compass and JdbcDirectory for Java, but I need something for C# or NHibernate. Does anyone know if there is a port of JdbcDirectory or something similar for C#?
Hmm, it looks like you can't change anything at the session-factory level using normal NHibernate.Search. You may need separate Configuration instances, or maybe try something along the lines of Fluent NHibernate Search to ease the pain.
Piecing it together from the project's wiki it appears you could do something like this to spin up separate session factories pointing to different databases / index directories:
Fluently.Configure()
    .Database(SQLiteConfiguration.Standard.InMemory())
    .Search(s => s.DefaultAnalyzer().Standard()
        .DirectoryProvider().FSDirectory()
        .IndexBase("~/Index")
        .IndexingStrategy().Event()
        .MappingClass<LibrarySearchMapping>())
    .BuildConfiguration()
    .BuildSessionFactory();
The "IndexBase" property and the connection are the parts you'll need to define per customer. Once you get the session factories set up you could resolve them using whatever strategy you use currently.
Hope it helps