MongoDB: Using *NOT-dot* notation in sort

I'm having an issue with the Mongo sort on a nested collection, and a Google search didn't help:
Dot notation works (returns first element from sorted collection):
db.myCollection.find().sort({ 'comments.Comment' : -1 })[0]
Array (not-dot) notation doesn't work (always returns the first element from the un-sorted collection):
db.myCollection.find().sort({ "comments['Comment']" : -1 })[0]
For some business reasons I would like my app to be dynamic and handle spaces, pluses, and a few other non-standard characters as keys in the documents.
So far this has worked, but sort always returns the first (unordered) result if it can't understand the key I want to sort on.

Simply put:
"For some business reasons I would like my app to be dynamic and handle spaces/pluses/and few more un-standard characters as the keys in the documents"
Yeah, well, bad luck: that's not valid JSON notation. It may be JavaScript notation, but that doesn't make it valid JSON, and the BSON spec derives from JSON.
You have dot (.) notation and that is it. So your sort condition is parsed as invalid and is ignored, hence the results are not sorted the way you expect.
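For what it's worth, the restriction is on the bracket syntax, not on the characters inside the key: dot notation in a sort spec tolerates spaces, pluses, and similar characters, since only the dot separates path components. A minimal mongo shell sketch, assuming a hypothetical nested key named "Comment Text":
// The bracket form is not understood, so the sort spec is ignored and results come back unsorted
db.myCollection.find().sort({ "comments['Comment Text']": -1 })[0]
// Plain dot notation still works even though the key contains a space
db.myCollection.find().sort({ 'comments.Comment Text': -1 })[0]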
Feel free to raise a JIRA issue with MongoDB if you believe this is important.

Related

Gremlin: Is there a way to find the character based on the index of a string?

I have a vertex "office" with a property "name" in OrientDB. I want to find the offices, by name, where the name does not have a "-" as the third character of the string. I imagine this would require some Java code within the Gremlin query. This is my best attempt, but it is returning office names that do in fact have a "-" as their third character.
g.V().hasLabel('office')
.where(values('name').map{it.get().charAt(2)}.is(neq('-')))
.project('Office Name')
.by(values('name'))
Since Gremlin doesn't support String operations (like split, charAt, etc.), your only option is a lambda. It seems like you figured that out already, but your solution looks overcomplicated to me. You can use something much simpler, like:
g.V().hasLabel('office').
has('name', filter {it.get()[2] != '-'}).
project('Office Name').
by('name')
However, note that this filter will throw an exception if the office name has fewer than 3 characters, so you should also check that the String is long enough:
g.V().hasLabel('office').
has('name', filter {it.get().length() > 2 && it.get()[2] != '-'}).
project('Office Name').
by('name')
...or use RegEx pattern matching (which is pretty nice and easy in Groovy):
g.V().hasLabel('office').
has('name', filter {it.get() ==~ /.{2}-.*/}).
project('Office Name').
by('name')
The main reason your traversal didn't work, though, is that charAt returns a Character, which is then compared to the String "-"; since the two types are never equal, every office name passes the neq filter.

How does MongoDB sort/compare objects

I couldn't find any clear documentation on how MongoDB compares/sorts complex objects. I've tried some examples and found that property order matters and property names matter too.
Examples:
Order matters
{"name": {"first": "A", "last": "B"}} != {"name": {"last": "B", "first": "A"}}
Values matter
{"name": {"first": "A"}} < {"name": {"first": "B"}}
Property names matter
{"name": {"first": "A"}} < {"name": {"girst": "A"}}
So I'm wondering how exactly this works; I'm sure things like missing properties also affect it.
If you sort on an embedded object field like name, the sort comparisons are done at the binary representation (BSON object) level, which isn't very useful.
What you typically want to do instead is identify the specific fields within those objects using dot notation, listing them in the order you want:
// Sort on last name, and then first name
db.test.find().sort({'name.last': 1, 'name.first': 1})
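To make the difference visible, here is a minimal mongo shell sketch with a couple of hypothetical documents:
// Hypothetical sample data
db.test.insertMany([
    { name: { first: 'A', last: 'Jones' } },
    { name: { last: 'Smith', first: 'B' } }
])
// Sorting on the whole embedded object compares the raw BSON, so field order and names matter
db.test.find().sort({ name: 1 })
// Sorting on specific nested fields compares just those values
db.test.find().sort({ 'name.last': 1, 'name.first': 1 })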

MongoDB - Using regex wildcards for search that properly filter results

I have a Mongo search set up that goes through my entries based on numerous criteria.
Currently the easiest way (I know it's not performance-friendly due to using wildcards, but I can't figure out a better way to do this due to case insensitivity and users not putting in whole words) is to use regex wildcards in the search. The search ends up looking like this:
{ gender: /Womens/i, designer: /Voodoo Girl/i } // Should return ~200 results
{ gender: /Mens/i, designer: /Voodoo Girl/i } // Should return 0 results
In the example above, both searches are returning ~200 results ("Voodoo Girl" is a womenswear label and all corresponding entries have a gender: "Womens" field.). Bizarrely, when I do other searches, like:
{ designer: /Voodoo Girl/i, store: /Store XYZ/i } // should return 0 results
I get the correct number of results (0). Is this an order thing? How can I ensure that my search only returns results that match all of my wildcarded queries?
For reference, the queries are being made in nodeJS through a simple db.products.find({criteria}) lookup.
To answer the aside real fast, something like ElasticSearch is a wonderful way to get more powerful, performant searching capabilities in your app.
Now, the reason that your searches are returning results is that "mens" is a substring of "womens"! You probably want either /^Mens/i and /^Womens/i (if Mens starts the gender field), or /\bMens\b/ if it can appear in the middle of the field. The first form will only match the given field from the beginning of the string, while the second form looks for the given word surrounded by word boundaries (that is, not as a substring of another word).
If you can use the /^Mens/ form (note the lack of the /i), it's advisable, as anchored case-sensitive regex queries can use indexes, while other regex forms cannot.
$regex can only use an index efficiently when the regular expression has an anchor for the beginning (i.e. ^) of a string and is a case-sensitive match.
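To make that concrete, a rough sketch of the three query shapes in the mongo shell (the same regexes apply to a Node.js db.products.find() call):
// Unanchored and case-insensitive: /Mens/i also matches "Womens"
db.products.find({ gender: /Mens/i, designer: /Voodoo Girl/i })
// Anchored at the start of the string: "Womens" no longer matches
db.products.find({ gender: /^Mens/i, designer: /Voodoo Girl/i })
// Anchored and case-sensitive: the only form that can use an index on gender efficiently
db.products.find({ gender: /^Mens/, designer: /Voodoo Girl/ })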

Mongoid: find an object by searching by part of the id?

I want to be able to search for my objects by searching for the last 4 characters of the id. How can I do that?
Book.where(_id: params[:q])
Where the param would be something like a3f4, and in this case the actual id for the object that I want to be found would be:
bc313c1f5053b66121a8a3f4
Notice the last four characters are what we searched for. How can I search for just part of my object's id, instead of having my user type in the entire id?
I found in MongoDB's help docs, that I can provide a regex:
db.x.find({someId : {$regex : "123\\[456\\]"}}) // use "\\" to escape
Is there a way for me to search using the regular mongo ruby driver and not using Mongoid?
Usually, in Mongoid you can search with a regexp just like you would with a string in your call to where(), e.g.:
Book.where(:title => /^Alice/) # returns all books with titles starting with 'Alice'
However, this doesn't work in your case, because the _id field is not stored as a string but as an ObjectID. Instead, you could add (and index) a field on your models to provide this functionality, populating it in an after_create callback.
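As a rough mongo shell sketch of that approach (the short_id field name is hypothetical, populated by your callback):
// Regex matching only applies to string values, so a $regex against the ObjectID _id finds nothing
db.books.find({ _id: { $regex: 'a3f4$' } })
// With a plain string field holding, say, the last 4 characters of the id, an ordinary
// (and indexable) equality query does the job
db.books.find({ short_id: 'a3f4' })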
<shameless_plug>
Alternatively, if you're just looking for a shorter solution to the default Mongoid IDs, I could suggest something like mongoid_token which makes it pretty easy to add shorter tokens/ids to your Mongoid documents.
</shameless_plug>

TermQuery not returning on a known search term, but WildcardQuery does

I'm hoping someone with enough insight into the inner workings of Lucene might be able to point me in the right direction =)
I'll skip most of the surrounding irrelevant code and cut right to the chase. I have a Lucene index, to which I am adding the following field (variables replaced by their literal values):
document.Add( new Field("Typenummer", "E5CEB501A244410EB1FFC4761F79E7B7",
Field.Store.YES , Field.Index.UN_TOKENIZED));
Later, when I search my index (using other types of queries), I am able to verify that this field does indeed appear in my index - like when looping through all Fields returned by Document.GetFields()
Field: Typenummer, Value: E5CEB501A244410EB1FFC4761F79E7B7
So far so good :-)
Now the real problem is: why can I not use a TermQuery to search against this value and actually get a result?
This code produces 0 hits:
// Returns 0 hits
bq.Add( new TermQuery( new Term( "Typenummer",
"E5CEB501A244410EB1FFC4761F79E7B7" ) ), BooleanClause.Occur.MUST );
But if I switch this to a WildcardQuery (with no wildcards), I get the 1 hit I expect.
// returns the 1 hit I expect
bq.Add( new WildcardQuery( new Term( "Typenummer",
"E5CEB501A244410EB1FFC4761F79E7B7" ) ), BooleanClause.Occur.MUST );
I've checked field lengths, I've checked that I am using the same Analyzer, and so on, and I am still at square one as to why this happens.
Can anyone point me in a direction I should be looking?
I finally figured out what was going on. I'm expanding the tags for this question as it, much to my surprise, actually turned out to be an issue with the CMS this particular problem exists in. In summary, the problem came down to this:
The field is stored UN_TOKENIZED, meaning Lucene will store it exactly "as-is"
The BooleanQuery I pasted snippets from gets sent to the Sitecore SearchManager inside a PreparedQuery wrapper
The behaviour I expected from this was, that my query (having already been prepared) would go - unaltered - to the Lucene API
Turns out I was wrong. It passes through a RewriteQuery method that copies my entire set of nested queries as-is, with one exception - all the Term arguments are passed through a LowercaseStrategy()
As I indexed an UPPERCASE Term (UN_TOKENIZED), and Sitecore changes my PreparedQuery to lowercase - 0 results are returned
I am not going to start an argument about whether this behaviour of the Lucene wrapper API is "by design" or a design flaw - I'll just note that rewriting my query when using the PreparedQuery overload is... to me... unexpected ;-)
A further lesson from this: storing the field as TOKENIZED would eliminate this problem too, as the StandardAnalyzer by default lowercases all tokens.