Transform text value to lower case when indexing - google-cloud-firestore

Any idea how I can transform a text value to lower case when indexing in a Firestore function? I have tried this:
const newRef = db.collection("user").where("profile.companyName".toLowerCase(), ">=", data.value.toLowerCase())
but it gives me an error at this part:
"profile.companyName".toLowerCase()
Is it even possible to do this? I really don't want to have to save all names in lowercase just to be able to index them properly, and then capitalize them again.

Firestore doesn't have an API or mechanism to do this for you. You will have to store the lowercased version of the string in a field, and use that field to make your query.
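
For example, a minimal sketch using the Node.js Admin SDK (the profile.companyNameLower field name and the saveUser/searchByCompanyName helpers are illustrative, not part of any Firestore API):

const admin = require("firebase-admin");
admin.initializeApp();
const db = admin.firestore();

// Store a lowercased copy of the name next to the display value.
async function saveUser(id, companyName) {
  await db.collection("user").doc(id).set({
    profile: {
      companyName,                                  // original casing, for display
      companyNameLower: companyName.toLowerCase(),  // lowercased copy, for querying
    },
  }, { merge: true });
}

// Query against the lowercased field, lowercasing the search term as well.
// The "\uf8ff" upper bound is the usual trick to turn the pair of range
// filters into a prefix match.
function searchByCompanyName(value) {
  const term = value.toLowerCase();
  return db.collection("user")
    .where("profile.companyNameLower", ">=", term)
    .where("profile.companyNameLower", "<=", term + "\uf8ff")
    .get();
}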

Related

Firestore query on maps

In a Firestore query, how do I check whether an element is a key in a map or not?
For example, I have this document:
I want to check if the user's UID matches one of the UIDs in the "authors" map data structure. All the answers that I've seen so far use "where", but I don't think that's allowed syntax for Firestore queries anymore?
You can't query on keys like that (at least not that I know of).
I instead recommend adding a field authorUids that is an array of the UIDs of the authors. With that array, you can then use the array-contains operator:
collectionRef.where('authorUids', 'array-contains', 'ppGr1M8s...');
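
For completeness, a sketch of the write side (the collection name, document shape, and addAuthor helper are illustrative): keep the array in sync with the map whenever an author is added, so the array-contains query above stays accurate.

const admin = require("firebase-admin");
admin.initializeApp();
const db = admin.firestore();

// Add an author: update the map entry and the flat UID array together.
async function addAuthor(docId, uid, authorData) {
  await db.collection("posts").doc(docId).set({
    authors: { [uid]: authorData },                          // map keyed by UID, as in the question
    authorUids: admin.firestore.FieldValue.arrayUnion(uid),  // queryable flat array
  }, { merge: true });
}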
Can't imagine how you got the impression that "where" is no longer valid (it is) - but in particular, "where" is a test on the value of a field (not its existence), AND there is no test for "null" nor "not equal to".
BUT - speculating a tad here - you might be able to fake a non-null test in your case:
collectionRef.where('authors.' + userUID + '.0', '>', '\u0000')
(fix the notation as needed) meaning
setting fieldPath to the concatenated string authors.ppGr1M8sQWVrrsna6MlcQqxzLA3.0 in your example
and the field value to the Unicode character value 0 (i.e. the minimal lexical value possible for a string)
so ANY value of the string is greater than that, if the field exists at all.
The Firestore documentation states that documents which do not contain the specified fieldPath will not be returned, but you still need a valid test on the value. I strongly suspect this will result in creating a lot of inefficient indexes, and it is highly NOT recommended.
It would be an interesting exercise to see if this approach actually functions (I haven't tried it and don't intend to) - but really, find another structure; the convoluted explanation of the hack shows what a poor idea it really is.
The most important decisions, especially for a NoSQL database, are your structure/schema decisions - don't put too much effort into forcing yourself to work around bad schema/structure.

Mongoid Data Type for TEXT

What's the best Mongoid data type to use in place of the regular SQL TEXT data type?
Wondering why Mongoid doesn't have a TEXT data type.
Is it okay to use the String type to store large amounts of data?
P.S. Coming from a SQL background.
According to the Mongoid documentation, all fields are strings unless we explicitly specify another data type. Unlike SQL's varchar/text distinction, strings in Mongo have no length limitation (the only limit is the 16MB maximum document size), so there is no need to worry about size.
Yes, strings in MongoDB have unlimited length (up to the 16MB maximum document size, of course), so there was no reason to introduce a separate TEXT column type the way relational databases do.
Just use string type.
You can just use the String data type. There is no reason to use any other type, since String gives you effectively unlimited length. You can use String directly for MongoDB text fields.

REST interface and leading zeroes

I'm using Mongo's simple REST interface to query data in my collection.
One field I'm searching on is a mixture of numeric and character data, e.g. 000107011JXK
If I do the following query:
http://[server]:[port]/[db]/[collection]/?filter_[field]=000107011JXK
... Mongo removes the leading zeroes and only searches on the numerical part of the criteria (e.g. 107011), which obviously does not bring back the required results.
Is there any way I can get around this issue?
Thanks in advance for any assistance.

search in a map field

I'm using Morphia. As you know, for a simple search I can use this:
q.field("fieldname").containsIgnoreCase(texttosearch);
But my field's type is a map, so I must change it like this (using dot notation):
q.field("mapname.fieldname").containsIgnoreCase(texttosearch);
But I want to search across all of the map's fields. I could simply do this by repeating the query for each field, but the problem is that my field count is not static.
How can I solve this?
You should store it not as a map, but as an array/list of key/value pairs. Then you can search the value field, and index it. This is not something Morphia does for you at the moment, but it could be an alternate storage format for Map. It would be a very different format.
Take a look at this discussion for more background
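
To make the suggestion concrete, here is a sketch in mongo-shell JavaScript (the collection name docs and the entries/k/v field names are illustrative):

// Before: values sit under dynamic keys, so each key needs its own query.
// { _id: 1, mapname: { title: "Foo", note: "Bar" } }

// After: key/value pairs in an array; one query covers every entry.
// { _id: 1, entries: [ { k: "title", v: "Foo" }, { k: "note", v: "Bar" } ] }

// Exact-match and prefix lookups on values can now use a single index:
db.docs.createIndex({ "entries.v": 1 });

// Case-insensitive containment search over all values, however many pairs exist:
db.docs.find({ "entries.v": { $regex: "texttosearch", $options: "i" } });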

Can Kinosearch do mathematical comparisons on numbers like "greater-than"?

I am using Perl's KinoSearch module to index a bunch of text.
Some of the text has numeric fields associated with each word. For example, the word "Pizza" in the index may have a dollar field value like "5.50" (dollars).
How can I write a query in KinoSearch that will find all words that have a dollar value greater than 5?
I'm not even sure if a full-text search engine can do this kind of thing. It seems more like a SQL query.
After a bunch of searching (heh, heh), I found this in the docs: RangeQuery
I may be able to make that work, but it appears the newly required classes are not part of the standard release yet.