Use of collation in mongodb $regex - mongodb
Since v3.4, collations are available for find operations, which matters especially where matches on diacritic characters are concerned. While a find query with a definite value (the $eq operator or the corresponding construct) will match letters and their diacritic variants, the same is not true if a $regex is used to achieve a match on a partial search string (a 'LIKE').
Is there a way to make the $regex query use the collation in the same way the $eq query does?
Consider the example collection testcoll:
{ "_id" : ObjectId("586b7a0163aff45945462bea"), "city" : "Antwerpen" },
{ "_id" : ObjectId("586b7a0663aff45945462beb"), "city" : "Antwërpen" }
This query will find both records:
db.testcoll.find({city: 'antwerpen'}).collation({"locale" : "en_US", "strength" : 1});
The same query using a regex will not (it finds only the record with 'Antwerpen'):
db.testcoll.find({city: /antwe/i}).collation({"locale" : "en_US", "strength" : 1});
I faced this same issue today and searched the Internet like crazy trying to find a solution. I didn't find any, so I came up with my own solution, a little Frankenstein that worked for me.
I created a function that strips all diacritics from the string, escapes every regexp reserved character, and then replaces each plain letter with a character class containing that letter and its accented variants. At the end I add the 'i' option to also cover capitalized strings in my DB.
export const convertStringToRegexp = (text: string) => {
let regexp = '';
const textNormalized = text
.normalize('NFD')
.replace(/[\u0300-\u036f]/g, '') // remove all accents
.replace(/[|\\{}()[\]^$+*?.]/g, '\\$&') // escape all regexp reserved chars
.toLowerCase();
regexp = textNormalized
.replace(/a/g, '[a,á,à,ä,â,ã]')
.replace(/e/g, '[e,é,ë,è,ê]')
.replace(/i/g, '[i,í,ï,ì,î]')
.replace(/o/g, '[o,ó,ö,ò,õ,ô]')
.replace(/u/g, '[u,ü,ú,ù,û]')
.replace(/c/g, '[c,ç]')
.replace(/n/g, '[n,ñ]')
.replace(/[ªº°]/g, '[ªº°]');
return new RegExp(regexp, 'i'); // "i" -> ignore case
};
And in my find() method, I just use this function with the $regex option, like this:
db.testcoll.find({city: {$regex: convertStringToRegexp('twerp')} })
/*
Output:
[
{ "_id" : ObjectId("586b7a0163aff45945462bea"), "city" : "Antwerpen" },
{ "_id" : ObjectId("586b7a0663aff45945462beb"), "city" : "Antwërpen" }
]
*/
I also created a .spec.ts file (using Chai) to test this function. Of course you could adapt it to Jest.
describe('ConvertStringToRegexp', () => {
it('should convert all "a" to regexp', () => {
expect(convertStringToRegexp('TAÁdaáh!')).to.deep.equal(
/t[a,á,à,ä,â,ã][a,á,à,ä,â,ã]d[a,á,à,ä,â,ã][a,á,à,ä,â,ã]h!/i
);
});
it('should convert all "e" to regexp', () => {
expect(convertStringToRegexp('MEÉeéh!')).to.deep.equal(
/m[e,é,ë,è,ê][e,é,ë,è,ê][e,é,ë,è,ê][e,é,ë,è,ê]h!/i
);
});
it('should convert all "i" to regexp', () => {
expect(convertStringToRegexp('VÍIiishí!')).to.deep.equal(
/v[i,í,ï,ì,î][i,í,ï,ì,î][i,í,ï,ì,î][i,í,ï,ì,î]sh[i,í,ï,ì,î]!/i
);
});
it('should convert all "o" to regexp', () => {
expect(convertStringToRegexp('ÓOoóhhhh!!!!')).to.deep.equal(
/[o,ó,ö,ò,õ,ô][o,ó,ö,ò,õ,ô][o,ó,ö,ò,õ,ô][o,ó,ö,ò,õ,ô]hhhh!!!!/i
);
});
it('should convert all "u" to regexp', () => {
expect(convertStringToRegexp('ÚUhuuúll!')).to.deep.equal(
/[u,ü,ú,ù,û][u,ü,ú,ù,û]h[u,ü,ú,ù,û][u,ü,ú,ù,û][u,ü,ú,ù,û]ll!/i
);
});
it('should convert all "c" to regexp', () => {
expect(convertStringToRegexp('Cacacacaca')).to.deep.equal(
/[c,ç][a,á,à,ä,â,ã][c,ç][a,á,à,ä,â,ã][c,ç][a,á,à,ä,â,ã][c,ç][a,á,à,ä,â,ã][c,ç][a,á,à,ä,â,ã]/i
);
});
it('should remove all special characters', () => {
expect(
convertStringToRegexp('hello 123 °º¶§∞¢£™·ª•*!##$%^WORLD?.')
).to.deep.equal(
/h[e,é,ë,è,ê]ll[o,ó,ö,ò,õ,ô] 123 [ªº°][ªº°]¶§∞¢£™·[ªº°]•\*!##\$%\^w[o,ó,ö,ò,õ,ô]rld\?\./i
);
});
it('should accept all regexp reserved characters', () => {
expect(
convertStringToRegexp('Olá [-[]{}()*+?.,\\o/^$|#s] Mundo! ')
).to.deep.equal(
/* eslint-disable @typescript-eslint/no-explicit-any */
/[o,ó,ö,ò,õ,ô]l[a,á,à,ä,â,ã] \[-\[\]\{\}\(\)\*\+\?\.,\\[o,ó,ö,ò,õ,ô]\/\^\$\|#s\] m[u,ü,ú,ù,û][n,ñ]d[o,ó,ö,ò,õ,ô]! /i
);
});
});
From the documentation:
Case insensitive regular expression queries generally cannot use indexes effectively. The $regex implementation is not collation-aware and is unable to utilize case-insensitive indexes.
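To illustrate the index point with the collection above (this example is mine, not from the documentation): an anchored, case-sensitive prefix regex can use an ordinary index, while the case-insensitive variant cannot use it effectively.
db.testcoll.createIndex({ city: 1 });
// can use the { city: 1 } index: anchored, case-sensitive prefix
db.testcoll.find({ city: /^Antw/ }).explain("executionStats");
// cannot use tight index bounds: case-insensitive and unanchored
db.testcoll.find({ city: /antwe/i }).explain("executionStats");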
There is no need to use collation on top of a regex. You can implement this behaviour functionally by using the right regex.
For the Antwerpen example, the following regex gives you all the matches in the database:
/antw[eë]rpen/i
To generate the above regex, you first have to regex-replace your search string using the following replacement:
str.replace(/e/ig, '[eë]')
And of course you have to do the same for every other diacritic character. Alternatively, you can simply use the following library: diacritic-regex.
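Putting it together for the Antwerpen example, here is a minimal sketch of my own (the character classes below cover only a handful of common diacritics and would need extending, or could be generated by the library mentioned above):
// sketch: expand plain vowels into character classes, then match case-insensitively
function toDiacriticInsensitiveRegex(str) {
  const expanded = str
    .replace(/e/ig, '[eèéêë]')
    .replace(/a/ig, '[aàáâãä]')
    .replace(/o/ig, '[oòóôõö]')
    .replace(/u/ig, '[uùúûü]')
    .replace(/i/ig, '[iìíîï]');
  return new RegExp(expanded, 'i');
}
db.testcoll.find({ city: toDiacriticInsensitiveRegex('antwerpen') });
// matches both "Antwerpen" and "Antwërpen"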
Related
Find records where field ends with other field
I have title, director and englishTitle fields:
{ title: "Iron Man", director: "Someone Important", englishTitle: "Iron Man Someone Important" }
I need to find all the records whose englishTitle ends with the director's value. How can I perform such a query with MongoDB?
As described here, you can use regex: https://docs.mongodb.com/manual/reference/operator/query/regex/ If the suffix were a fixed string, it would simply be
{ englishTitle: { $regex: /^.*director$/ } }
For comparing against the value of the director field, I suppose you can use $where: https://docs.mongodb.com/manual/reference/operator/query/where/
db.myCollection.find( function() {
  var possibleDirector = this.englishTitle.substr(this.englishTitle.length - this.director.length);
  return (possibleDirector === this.director);
});
(it may still require a little polishing, like checking the lengths so substr does not get a negative value)
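As a side note (mine, not part of the answer above): on MongoDB 3.6+ the same field-to-field comparison can be expressed with $expr and string aggregation operators, avoiding $where's JavaScript execution. A sketch, assuming englishTitle is always at least as long as director:
db.myCollection.find({
  $expr: {
    $eq: [
      "$director",
      { $substrCP: [
          "$englishTitle",
          { $subtract: [ { $strLenCP: "$englishTitle" }, { $strLenCP: "$director" } ] },
          { $strLenCP: "$director" }
      ] }
    ]
  }
});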
MongoDB Full and Partial Text Search
Env: MongoDB (3.2.0) with Mongoose
Collection: users
Text index creation:
BasicDBObject keys = new BasicDBObject();
keys.put("name","text");
BasicDBObject options = new BasicDBObject();
options.put("name", "userTextSearch");
options.put("unique", Boolean.FALSE);
options.put("background", Boolean.TRUE);
userCollection.createIndex(keys, options); // using MongoTemplate
Document: {"name":"LEONEL"}
Queries:
db.users.find( { "$text" : { "$search" : "LEONEL" } } )  => FOUND
db.users.find( { "$text" : { "$search" : "leonel" } } )  => FOUND (search caseSensitive is false)
db.users.find( { "$text" : { "$search" : "LEONÉL" } } )  => FOUND (search diacriticSensitive is false)
db.users.find( { "$text" : { "$search" : "LEONE" } } )   => FOUND (partial search)
db.users.find( { "$text" : { "$search" : "LEO" } } )     => NOT FOUND (partial search)
db.users.find( { "$text" : { "$search" : "L" } } )       => NOT FOUND (partial search)
Any idea why I get 0 results using "LEO" or "L" as the query?
Regex with text index search is not allowed:
db.getCollection('users')
  .find({ "$text" : { "$search" : "/LEO/i", "$caseSensitive": false, "$diacriticSensitive": false }})
  .count() // 0 results
db.getCollection('users')
  .find({ "$text" : { "$search" : "LEO", "$caseSensitive": false, "$diacriticSensitive": false }})
  .count() // 0 results
MongoDB documentation: Text Search, $text, Text Indexes, Improve Text Indexes to support partial word match
As at MongoDB 3.4, the text search feature is designed to support case-insensitive searches on text content with language-specific rules for stopwords and stemming. Stemming rules for supported languages are based on standard algorithms which generally handle common verbs and nouns but are unaware of proper nouns.
There is no explicit support for partial or fuzzy matches, but terms that stem to a similar result may appear to be working as such. For example: "taste", "tastes", and "tasteful" all stem to "tast". Try the Snowball Stemming Demo page to experiment with more words and stemming algorithms.
Your results that match are all variations on the same word "LEONEL", and vary only by case and diacritic. Unless "LEONEL" can be stemmed to something shorter by the rules of your selected language, these are the only types of variations that will match.
If you want to do efficient partial matches you'll need to take a different approach. For some helpful ideas see:
- Efficient Techniques for Fuzzy and Partial matching in MongoDB by John Page
- Efficient Partial Keyword Searches by James Tan
There is a relevant improvement request you can watch/upvote in the MongoDB issue tracker: SERVER-15090: Improve Text Indexes to support partial word match.
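To make the stemming point concrete, here is a small shell experiment of my own (the collection name is made up for illustration):
db.snacks.createIndex({ description: "text" });
db.snacks.insertOne({ description: "a taste of summer" });
db.snacks.find({ $text: { $search: "tastes" } }); // matches: "tastes" stems to the same root as "taste"
db.snacks.find({ $text: { $search: "tas" } });    // no match: text search does not do partial word matching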
As Mongo currently does not support partial search by default... I created a simple static method.
import mongoose from 'mongoose'

const PostSchema = new mongoose.Schema({
  title: { type: String, default: '', trim: true },
  body: { type: String, default: '', trim: true },
});

PostSchema.index({ title: "text", body: "text" }, { weights: { title: 5, body: 3 } })

PostSchema.statics = {
  searchPartial: function(q, callback) {
    return this.find({
      $or: [
        { "title": new RegExp(q, "gi") },
        { "body": new RegExp(q, "gi") },
      ]
    }, callback);
  },
  searchFull: function (q, callback) {
    return this.find({
      $text: { $search: q, $caseSensitive: false }
    }, callback)
  },
  search: function(q, callback) {
    this.searchFull(q, (err, data) => {
      if (err) return callback(err, data);
      if (!err && data.length) return callback(err, data);
      if (!err && data.length === 0) return this.searchPartial(q, callback);
    });
  },
}

export default mongoose.models.Post || mongoose.model('Post', PostSchema)

How to use:
import Post from '../models/post'

Post.search('Firs', function(err, data) {
  console.log(data);
})
Without creating an index, we could simply use (case insensitive):
db.users.find({ name: /<full_or_partial_text>/i })
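Applied to the question's example document (my own illustration):
// no text index needed; a case-insensitive substring regex matches partial input
db.users.find({ name: /LEO/i }); // matches { "name" : "LEONEL" }
db.users.find({ name: /nel/i }); // also matches, anywhere in the string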
If you want to use all the benefits of MongoDB's full-text search AND want partial matches (maybe for auto-complete), the n-gram based approach mentioned by Shrikant Prabhu was the right solution for me. Obviously your mileage may vary, and this might not be practical when indexing huge documents. In my case I mainly needed the partial matches to work for just the title field (and a few other short fields) of my documents.
I used an edge n-gram approach. What does that mean? In short, you turn a string like "Mississippi River" into a string like "Mis Miss Missi Missis Mississ Mississi Mississip Mississipp Mississippi Riv Rive River".
Inspired by this code by Liu Gen, I came up with this method:

function createEdgeNGrams(str) {
  if (str && str.length > 3) {
    const minGram = 3
    const maxGram = str.length
    return str.split(" ").reduce((ngrams, token) => {
      if (token.length > minGram) {
        for (let i = minGram; i <= maxGram && i <= token.length; ++i) {
          ngrams = [...ngrams, token.substr(0, i)]
        }
      } else {
        ngrams = [...ngrams, token]
      }
      return ngrams
    }, []).join(" ")
  }
  return str
}

let res = createEdgeNGrams("Mississippi River")
console.log(res)

Now to make use of this in Mongo, I add a searchTitle field to my documents and set its value by converting the actual title field into edge n-grams with the above function. I also create a "text" index for the searchTitle field. I then exclude the searchTitle field from my search results by using a projection:

db.collection('my-collection')
  .find({ $text: { $search: mySearchTerm } }, { projection: { searchTitle: 0 } })
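For context, a minimal sketch of how the searchTitle field and its text index could be wired up with the Node.js driver (the collection name, field names and sample data are assumptions for illustration, not from the answer above):
// inside an async function; `db` is a connected Db instance
const docs = [{ title: "Mississippi River" }, { title: "Amazon River" }];
await db.collection("my-collection").insertMany(
  docs.map(d => ({ ...d, searchTitle: createEdgeNGrams(d.title) }))
);
await db.collection("my-collection").createIndex({ searchTitle: "text" });
// "Missi" now matches via the text index
const hits = await db.collection("my-collection")
  .find({ $text: { $search: "Missi" } }, { projection: { searchTitle: 0 } })
  .toArray();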
I wrapped @Ricardo Canelas' answer in a mongoose plugin, published here on npm.
Two changes made:
- Uses promises
- Searches on any field with type String
Here's the important source code:

// mongoose-partial-full-search
module.exports = exports = function addPartialFullSearch(schema, options) {
  schema.statics = {
    ...schema.statics,
    makePartialSearchQueries: function (q) {
      if (!q) return {};
      const $or = Object.entries(this.schema.paths).reduce((queries, [path, val]) => {
        val.instance == "String" &&
          queries.push({ [path]: new RegExp(q, "gi") });
        return queries;
      }, []);
      return { $or }
    },
    searchPartial: function (q, opts) {
      return this.find(this.makePartialSearchQueries(q), opts);
    },
    searchFull: function (q, opts) {
      return this.find({
        $text: { $search: q }
      }, opts);
    },
    search: function (q, opts) {
      return this.searchFull(q, opts).then(data => {
        return data.length ? data : this.searchPartial(q, opts);
      });
    }
  }
}

exports.version = require('../package').version;

Usage:
// PostSchema.js
import addPartialFullSearch from 'mongoose-partial-full-search';
PostSchema.plugin(addPartialFullSearch);

// some other file.js
import Post from '../wherever/models/post'
Post.search('Firs').then(data => console.log(data));
If you are using a variable to store the string or value to be searched, it will work with a regex built like this:
collection.find({ <name_of_mongodb_field>: new RegExp(variable_name, 'i') })
Here, the 'i' is the ignore-case option.
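One caveat worth adding (my note, not part of the answer above): if variable_name comes from user input, escape regex metacharacters before building the RegExp, otherwise input such as "a+(" will throw or match unexpectedly. A minimal sketch with a hypothetical helper:
// hypothetical helper: escape regex metacharacters in user input
function escapeRegex(text) {
  return text.replace(/[-[\]{}()*+?.,\\^$|#\s]/g, '\\$&');
}
collection.find({ city: new RegExp(escapeRegex(variable_name), 'i') }); // field name assumed for illustration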
The quick and dirty solution that worked for me: use text search first and, if nothing is found, make another query with a regexp. In case you don't want to make two queries, $or works too, but it requires all fields in the query to be indexed. Also, you'd better not use a case-insensitive regex, because it can't rely on indexes. In my case I made lowercase copies of the fields used.
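A minimal sketch of that fallback with the Node.js driver (collection and field names are assumptions; cityLower stands for the lowercased copy mentioned above):
// assumes a text index exists, e.g. createIndex({ cityLower: "text" }),
// and that cityLower holds a lowercased copy of the original field
async function search(db, term) {
  const q = term.toLowerCase(); // for real input, also escape regex metacharacters
  let results = await db.collection("items").find({ $text: { $search: q } }).toArray();
  if (results.length === 0) {
    // fallback: substring match on the lowercased copy, no "i" flag needed
    results = await db.collection("items").find({ cityLower: { $regex: q } }).toArray();
  }
  return results;
}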
A good n-gram based approach for fuzzy matching is explained here (it also explains how to score results higher using prefix matching): https://medium.com/xeneta/fuzzy-search-with-mongodb-and-python-57103928ee5d
Note: n-gram based approaches can be storage intensive, and the MongoDB collection size will increase.
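For readers unfamiliar with the term, a tiny illustration of my own of what fixed-size n-grams look like (the linked article builds its index from grams like these):
// all n-grams of a fixed size (here 3)
function nGrams(text, n = 3) {
  const s = text.toLowerCase();
  const grams = [];
  for (let i = 0; i <= s.length - n; i++) grams.push(s.slice(i, i + n));
  return grams;
}
nGrams("Antwerpen"); // [ 'ant', 'ntw', 'twe', 'wer', 'erp', 'rpe', 'pen' ]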
I create an additional field which combines all the fields within a document that I want to search. Then I just use regex:

user = {
  firstName: 'Bob',
  lastName: 'Smith',
  address: {
    street: 'First Ave',
    city: 'New York City',
  },
  notes: 'Bob knows Mary'
}

// add combined search field with '+' separator to preserve spaces
user.searchString = `${user.firstName}+${user.lastName}+${user.address.street}+${user.address.city}+${user.notes}`

db.users.find({searchString: {$regex: 'mar', $options: 'i'}})
// returns Bob because 'mar' matches his notes field
// TODO write a client-side function to highlight the matching fragments
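If the documents are written through Mongoose, a pre-save hook is one way to keep such a combined field in sync; a sketch under that assumption (schema and field names are made up to mirror the example above):
import mongoose from 'mongoose'

const userSchema = new mongoose.Schema({
  firstName: String,
  lastName: String,
  address: { street: String, city: String },
  notes: String,
  searchString: String, // combined, regex-searchable copy
});

// rebuild the combined field before every save
userSchema.pre('save', function (next) {
  this.searchString = [
    this.firstName,
    this.lastName,
    this.address && this.address.street,
    this.address && this.address.city,
    this.notes,
  ].filter(Boolean).join('+');
  next();
});

const User = mongoose.model('User', userSchema);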
Full/partial search in MongoDB for a "pure" Meteor project
I adapted flash's code to use it with Meteor collections and simpleSchema but without mongoose (which means: remove the use of the .plugin() method and schema.path (although that looks to be a simpleSchema attribute in flash's code, it did not resolve for me)) and return the result array instead of a cursor. Thought that this might help someone, so I share it.

export function partialFullTextSearch(meteorCollection, searchString) {
  // builds an "or" mongoDB query for all fields with type "String", with a regEx as search parameter
  const makePartialSearchQueries = () => {
    if (!searchString) return {};
    const $or = Object.entries(meteorCollection.simpleSchema().schema())
      .reduce((queries, [name, def]) => {
        def.type.definitions.some(t => t.type === String) &&
          queries.push({[name]: new RegExp(searchString, "gi")});
        return queries
      }, []);
    return {$or}
  };

  // returns a promise with the result as an array
  const searchPartial = () => meteorCollection.rawCollection()
    .find(makePartialSearchQueries(searchString)).toArray();

  // returns a promise with the result as an array
  const searchFull = () => meteorCollection.rawCollection()
    .find({$text: {$search: searchString}}).toArray();

  return searchFull().then(result => {
    if (result.length === 0) throw null
    else return result
  }).catch(() => searchPartial());
}

This returns a Promise, so call it like this (e.g. as the return of an async Meteor method searchContact on the server side). It implies that you attached a simpleSchema to your collection before calling this method.

return partialFullTextSearch(Contacts, searchString).then(result => result);
import re
db.collection.find({"$or": [
    {"your field name": re.compile(text, re.IGNORECASE)},
    {"your other field name": re.compile(text, re.IGNORECASE)}
]})
How to properly instantiate a Waterline Model Object from a sails-mongo native result?
I am using SailsJS on a project and I need to use native() for certain queries. The problem I have is that I can't find a proper way to instantiate a Waterline model object from the mongo collection find result. I have been searching for information about this and the only thing I have found is the following:

var instance = new Model._model(mongo_result_item);

This should work properly, but when I do instance.save(function(err, ins){}); the model throws an error because of the "_id" field, which should be "id". I have taken a look into the sails-mongo code and I found that for the "find" method they do this:

// Run Normal Query on collection
collection.find(where, query.select, queryOptions).toArray(function(err, docs) {
  if(err) return cb(err);
  cb(null, utils.normalizeResults(docs, self.schema));
});

So normalizeResults does the magic with the "_id" attribute, and other stuff. The way I am doing this right now is to require the sails-mongo utils.js file to have access to this method. Full sample:

var mongoUtils = require('sails-mongo/lib/utils.js');

SampleModel.native(function(nativeErr, collection){
  collection.find({ 'field' : value }).toArray(function(collectionErr, results){
    if (!results || results.length == 0) return res.restfullInvalidFieldValue({ msg : 'INVALID_VALUE' });
    var norm_results = mongoUtils.normalizeResults(results);
    var instance = new SampleModel._model(norm_results[0]);
  });
});

Is there a better / proper way to achieve this? I need to do a native search because I have found a problem with Waterline's find() method using strings, where the search should be case sensitive. Every string field on the model is being used as a regular expression match of the form:

/^{string}$/i

Searching by a regular expression with the case-insensitive flag will give me problems. On the other hand, doing

{ field : { $regex : new RegExp('^'+regexp_escaped_string+'$') } }

could be possible, but I think it will perform worse than { field : value }. If someone has found a different workaround for the case-insensitivity problem, please point me in the right direction. Thanks in advance.
$regex might help you to search a case-insensitive string by using the option parameter "i"; you can also specify a custom regex instead. For more information see the $regex MongoDB documentation.

/**
 * PetController
 *
 * @description :: Server-side logic for managing pets
 * @help        :: See http://links.sailsjs.org/docs/controllers
 */
module.exports = {
  searchByName: function (req, res) {
    Pet.native(function(err, collection) {
      if (err) return res.serverError(err);
      collection.find(
        {
          name: { $regex: /like-my-name/, $options: "i" } // here option "i" means case insensitive
        },
        { name: true }
      ).toArray(function (err, results) {
        if (err) return res.serverError(err);
        return res.ok(results);
      });
    });
  }
};

See here for more on native mongo queries: https://stackoverflow.com/a/54830760/1828637
How to search for text or expression in multiple fields
db.movies.find({"original_title" : {$regex: input_data, $options:'i'}}, function (err, datares){ if (err || datares == false) { db.movies.find({"release_date" : {$regex: input_data + ".*", $options:'i'}}, function (err, datares){ if(err || datares == false){ db.movies.find({"cast" : {$regex: input_data, $options:'i'}}, function (err, datares){ if(err || datares == false){ db.movies.find({"writers" : {$regex: input_data, $options:'i'}}, function (err, datares){ if(err || datares == false){ db.movies.find({"genres.name" : {$regex: input_data, $options:'i'}}, function (err, datares){ if(err || datares == false){ db.movies.find({"directors" : {$regex: input_data, $options:'i'}}, function (err, datares){ if(err || datares == false){ res.status(451); res.json({ "status" : 451, "error code": "dataNotFound", "description" : "Invalid Data Entry." }); return; } else{ res.json(datares); return; } }); } else { res.json(datares); return; } }); } else { res.json(datares); return; } }); } else { res.json(datares); return; } }); } else { res.json(datares); return; } }); } else { res.json(datares); return; } }); I am trying to implement a so called "all-in-one" search so that whenever a user types in any kind of movie related information, my application tries to return all relevant information. However I have noticed that this transaction might be expensive on the backend and sometimes the host is really slow. How do I smoothly close the db connection and where should I use it? I read here that it is best not to close a mongodb connection in node.js >>Why is it recommended not to close a MongoDB connection anywhere in Node.js code? Is the a proper way to implement a all-in-one search kind of a thing by using nested find commands?
Your current approach is full of problems and it is not necessary to do it this way. All you are trying to do is search for what I can gather is a plain string within a number of fields in the same collection. It may possibly be a regular expression construct, but I'm basing the two possibilities here on a plain text search that is case insensitive.
Now I am not sure if you came to running one query dependent on the results of another because you didn't know another way or thought it would be better. Trust me on this, that is not a better approach than anything listed here, nor is it really required, as will be shown:

Regex query all at once

The first basic option here is to continue your $regex search, but just in a singular query with the $or operator:

db.movies.find(
  {
    "$or": [
      { "original_title" : { "$regex": input_data, "$options": "i" } },
      { "release_date" : { "$regex": input_data, "$options": "i" } },
      { "cast" : { "$regex": input_data, "$options": "i" } },
      { "writers" : { "$regex": input_data, "$options": "i" } },
      { "genres.name" : { "$regex": input_data, "$options": "i" } },
      { "directors" : { "$regex": input_data, "$options": "i" } }
    ]
  },
  function(err,result) {
    if(err) {
      // respond error
    } else {
      // respond with data or empty
    }
  }
);

The $or condition here effectively works like "combining queries", as each argument is treated as a query in itself as far as document selection goes. Since it is one query, all the results are naturally together.

Full text Query, multiple fields

If you are not really using a "regular expression" built from regular expression operations, i.e. ^(\d+)\bword$, then you are probably better off using the "text search" capabilities of MongoDB. This approach is fine as long as you are not looking for things that would be generally excluded, but your data structure and subject actually suggest this is the best option for what you are likely doing here.
In order to be able to perform a text search, you first need to create a "text index"; specifically here you want the index to span multiple fields in your document. Dropping into the shell for this is probably easiest:

db.movies.createIndex({
  "original_title": "text",
  "release_date": "text",
  "cast" : "text",
  "writers" : "text",
  "genres.name" : "text",
  "directors" : "text"
})

There is also an option to assign a "weight" to fields within the index, as you can read in the documentation. Assigning a weight gives "priority" to the terms listed in the search for the field they match in. For example, "directors" might be assigned more "weight" than "cast", and matches for "Quentin Tarantino" would therefore "rank higher" in the results where he was a director (and also a cast member) of the movie and not just a cast member (as in most Robert Rodriguez films).
But with this in place, performing the query itself is very simple:

db.movies.find(
  { "$text": { "$search": input_data } },
  function(err,result) {
    if(err) {
      // respond error
    } else {
      // respond with data or empty
    }
  }
);

Almost too simple really, but that is all there is to it. The $text query operator knows to use the required index (there can only be one text index per collection) and it will then just look through all of the defined fields. This is why I think this is the best fit for your use case here.

Parallel Queries

The final alternative I'll give here is if you still insist that you need to run separate queries.
I still deny that you need to query only if the previous query does not return results, and I also re-assert that the above options should be considered "first", with preference to text search.
Writing dependent or chained asynchronous functions is a pain, and very messy. Therefore I suggest leaning on a little help from another library dependency and using the node-async module here. This provides an async.map() method, which is perfectly suited to "combining" results by running things in parallel:

var fields = [ "original_title", "release_date", "cast", "writers", "genres.name", "directors" ];

async.map(
  fields,
  function(field,callback) {
    var search = {},
        cond = { "$regex": input_data, "$options": "i" };

    search[field] = cond; // assigns the field to search

    db.movies.find(search,callback);
  },
  function(err,result) {
    if(err) {
      // respond error
    } else {
      // respond with data or empty
    }
  }
);

And again, that is it. The .map() operator takes each field and transposes that into the query, which in turn returns its results. Those results are then accessible after all queries are run in the final section, "combined" as if they were a single result set, just as the other alternatives do here.
There is also a .mapSeries() variant that runs each query in series, or .mapLimit() if you are otherwise worried about using database connections and concurrent tasks, but for this small size this should not be a problem.
I really don't think that this option is necessary; however, if the Case 1 regular expression statements still apply, this "may" possibly provide a little performance benefit due to running queries in parallel, but at the cost of increased memory and resource consumption in your application.
Anyhow, the round-up here is "Don't do what you are doing". You don't need to, and there are better ways to handle the task you want to achieve. And all of them are cleaner and easier to code.
Meteor & mongoDB LIKE query
I'm trying to produce a query from what the user has searched for. I have an array of strings which I just want to send through in the mongoDB selector. My problem is with the /text/ syntax; it works perfectly from the mongoDB console like this:

Items.find({ $or: [{name: /doc/}, {tags: /doc/}, {name: /test/}, {tags: /test/}] });

But I cannot manage to write the same syntax in javascript; I've tried several versions:

var mongoDbArr = [];

searchArray.forEach(function(text) {
  mongoDbArr.push({name: /text/});
  mongoDbArr.push({tags: /text/});
});

return Items.find({ $or: mongoDbArr});

But it only searches for "text" and not what's in the variable. And like this:

var mongoDbArr = [];

searchArray.forEach(function(text) {
  mongoDbArr.push({name: "/" + text + "/"});
  mongoDbArr.push({tags: "/" + text + "/"});
});

return Items.find({ $or: mongoDbArr});

But that doesn't give me any results back. What am I missing?
You have to build your regular expressions with javascript:

var mongoDbArr = [];

searchArray.forEach(function(text) {
  mongoDbArr.push({name: new RegExp(text)});
  mongoDbArr.push({tags: new RegExp(text, "i")});
});

return Items.find({ $or: mongoDbArr});

Or use a regular expression query with mongodb:

mongoDbArr.push({name: { $regex: text, $options: "i" } });
mongoDbArr.push({tags: { $regex: text, $options: "i" } });

To escape special characters you can do this before you use text (from jQuery UI's source):

text = text.replace(/[\-\[\]{}()*+?.,\\\^$|#\s]/g, "\\$&");
Try this in the publish function on the server:

return YourCollection.find({
  '$or': [
    { 'field1': { '$regex': searchString } },
    { 'field2': { '$regex': searchString } },
    { 'field3': { '$regex': searchString } },
  ]
});