How to index files such as .txt, .pdf, .doc, etc. using Lucene.Net?

I am new to Lucene.Net. How do I index files such as .txt, .pdf, .doc, etc. using Lucene.Net? And which file types can be indexed with Lucene.Net?

Lucene.Net is agnostic to particular file formats; you must extract the content and index it yourself.
I would use IFilters to pull the text out of a document and then use Lucene.Net to create the search index.
You can search codeproject.com for multiple articles about using IFilters and Lucene.Net.
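For illustration, here is a minimal indexing sketch using the Java Lucene API, which Lucene.Net mirrors closely. The index path, file path, and field names are hypothetical, and the extracted text is assumed to come from an IFilter or similar extractor:

```java
import java.nio.file.Paths;
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;
import org.apache.lucene.document.StringField;
import org.apache.lucene.document.TextField;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.index.IndexWriterConfig;
import org.apache.lucene.store.FSDirectory;

public class Indexer {
    public static void main(String[] args) throws Exception {
        // Open (or create) an index directory on disk.
        try (IndexWriter writer = new IndexWriter(
                FSDirectory.open(Paths.get("index")),
                new IndexWriterConfig(new StandardAnalyzer()))) {
            // Lucene only sees strings: the text is assumed to have been
            // extracted already (e.g. by an IFilter or Tika).
            Document doc = new Document();
            doc.add(new StringField("path", "C:\\docs\\report.pdf", Field.Store.YES));
            doc.add(new TextField("content", "extracted text goes here", Field.Store.NO));
            writer.addDocument(doc);
        }
    }
}
```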

Before you index files you need to extract their text properly. Neither Lucene nor Lucene.Net does that for you. For text extraction on Windows you can use IFilters, but IFilters may not be stable, and you need to go through COM, which has threading issues. In addition, using different IFilters with different document versions is real trouble.
http://www.codeproject.com/Articles/13391/Using-IFilter-in-C
www.ifilter.org
There are commercial alternatives for text extraction but they are really expensive.
http://www.isys-search.com/products/document-filters
http://www.oracle.com/us/technologies/embedded/025613.htm
Apache Tika is a good open-source alternative to the commercial ones. It is written in Java.
http://tika.apache.org/
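As a rough sketch of how little code text extraction takes with Tika's facade API (the file name here is hypothetical):

```java
import java.io.File;
import org.apache.tika.Tika;

public class ExtractText {
    public static void main(String[] args) throws Exception {
        // Tika detects the file type (.pdf, .doc, .txt, ...) and
        // extracts plain text with a single call.
        Tika tika = new Tika();
        String text = tika.parseToString(new File("report.pdf"));
        System.out.println(text);
    }
}
```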
I strongly recommend using Apache Solr/Lucene with a good Solr .NET client instead of Lucene.Net. Solr has Tika integration built in, which will achieve what you want. You don't need to know Java in order to use Solr: it is a standalone web service that can run on a lightweight application server.
If you build a document search solution with Lucene.Net, you will hit many problems that have already been solved in Solr.
http://www.lucidimagination.com/devzone/technical-articles/content-extraction-tika
http://wiki.apache.org/solr/ExtractingRequestHandler
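A minimal SolrJ sketch of pushing a binary file through Solr's ExtractingRequestHandler; the core name, file, and document id below are hypothetical:

```java
import java.io.File;
import org.apache.solr.client.solrj.SolrClient;
import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.client.solrj.request.ContentStreamUpdateRequest;

public class ExtractToSolr {
    public static void main(String[] args) throws Exception {
        // Assumes a Solr core named "docs" running locally.
        SolrClient solr = new HttpSolrClient.Builder("http://localhost:8983/solr/docs").build();
        // /update/extract is Solr's built-in Tika handler (ExtractingRequestHandler).
        ContentStreamUpdateRequest req = new ContentStreamUpdateRequest("/update/extract");
        req.addFile(new File("report.pdf"), "application/pdf");
        req.setParam("literal.id", "report-1"); // unique key for the new document
        req.setParam("commit", "true");
        req.process(solr);
        solr.close();
    }
}
```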
There is a good discussion about Lucene vs. Solr here:
Search Engine - Lucene or Solr

Related

Is there a direct comparison between Lucene.Net syntax and Amazon Cloud Search syntax

I have a large application with hundreds of lines of complex Lucene.Net queries, and I want to be able to move to Amazon CloudSearch.
Instead of rewriting all the queries, I was thinking of writing some sort of converter. Before I do, though, I want to make sure there is a direct equivalent for every type of Lucene query (things like inner clauses, etc.).
Better yet, is there already a library that does it?
I am aware that there is a .NET library for querying CloudSearch, and also the AWS SDK, but I want something that allows easy switching between local Lucene.Net and ACS.
It's way easier than that: just select CloudSearch's Lucene query parser by sending q.parser=lucene with your queries. http://docs.aws.amazon.com/cloudsearch/latest/developerguide/searching.html
lucene: specify search criteria using the Apache Lucene query parser syntax. If you currently use the Lucene syntax, using the lucene query parser enables you to migrate your search services to an Amazon CloudSearch domain without having to completely rewrite your search queries in the Amazon CloudSearch structured search syntax.
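As a rough illustration, a Lucene-syntax query against a CloudSearch domain is just an HTTP request with q.parser=lucene; the endpoint below is a made-up placeholder for your own domain's search endpoint:

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.URL;
import java.net.URLEncoder;

public class CloudSearchLucene {
    public static void main(String[] args) throws Exception {
        // Hypothetical search endpoint; substitute your own domain's.
        String endpoint = "https://search-mydomain-abc123.us-east-1.cloudsearch.amazonaws.com";
        String query = URLEncoder.encode("title:(lucene AND search)", "UTF-8");
        // q.parser=lucene tells CloudSearch to interpret q with the Lucene query parser.
        URL url = new URL(endpoint + "/2013-01-01/search?q=" + query + "&q.parser=lucene");
        try (BufferedReader in = new BufferedReader(new InputStreamReader(url.openStream()))) {
            String line;
            while ((line = in.readLine()) != null) {
                System.out.println(line);
            }
        }
    }
}
```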

Running Lucene-based search on Grails application using MongoDB

Currently I am investigating in ways to implement a Lucene-based search on a Grails application using MongoDB.
Requirements include the following:
The data to index is stored in a MongoDB
Data only gets inserted (no updates, no deletions)
The application has to run on the CloudBees platform
The search should be implemented without any external services like Searchly or WebSolr
So far this does not seem very complicated, as there are Grails plugins. However, the main problem I am facing is that my application uses dynamic MongoDB collections, so I do not have a domain class for each and every collection. Instead, the collections that should be indexed can have arbitrary names and schemas. As a result I cannot use Grails plugins like searchable, as these seem to work only on fixed domain classes (or am I wrong about that?).
Does anybody have experience on how to implement a search in such a context? Any tips, links, hints, or recommendations?
You can use one index with multiple types for your dynamic MongoDB collections. However, you will have to code that logic yourself, since the integration modules were written with domain-class indexing in mind.
For ElasticSearch you can use Jest from Groovy: https://github.com/searchbox-io/Jest
Searchly offers MongoDB integration out of the box, but unfortunately only for a single collection. Therefore, for now, you also need to query MongoDB yourself (the collection you created dynamically), index the data under a new type, and query that.
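A minimal sketch of that approach with Jest, assuming a local ElasticSearch node; the index name, type (collection) name, and document are hypothetical:

```java
import io.searchbox.client.JestClient;
import io.searchbox.client.JestClientFactory;
import io.searchbox.client.config.HttpClientConfig;
import io.searchbox.core.Index;

public class DynamicIndexer {
    public static void main(String[] args) throws Exception {
        JestClientFactory factory = new JestClientFactory();
        factory.setHttpClientConfig(
                new HttpClientConfig.Builder("http://localhost:9200").build());
        JestClient client = factory.getObject();

        // One index, with the (dynamic) MongoDB collection name used as the type.
        String collectionName = "orders_2013_05"; // hypothetical dynamic collection
        String json = "{\"customer\":\"acme\",\"total\":42.0}";
        client.execute(new Index.Builder(json)
                .index("app-search")
                .type(collectionName)
                .build());
    }
}
```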

Full text search options for MongoDB setup

We are planning to store millions of documents in MongoDB, and full-text search is very much required. I have read that Elasticsearch and Solr are the best available solutions for full-text search.
Is Elasticsearch mature enough to be used for MongoDB full-text search? We will also be sharding the collections. Does Elasticsearch work with sharded collections?
What are the advantages and disadvantages of using Elasticsearch or Solr?
Is MongoDB capable of doing full-text search?
There are some search capabilities in MongoDB, but they are not as feature-rich as those of dedicated search engines.
http://www.mongodb.org/display/DOCS/Full+Text+Search+in+Mongo
We use Mongo with Solr to make content searchable. We prefer Solr because:
It is easy to configure and customize
It has a large community (really helpful when you are working with open-source tools)
Since we didn't work with ES, I cannot say much about it. You can find some discussions about Solr vs. ES at the links below.
Solr vs ES 1
Solr vs ES 2
Solr vs ES 3
I have a professional experience with both Solr/MySQL and ElasticSearch/MongoDB.
If you are going to query your search engine a lot, and you already shard your MongoDB (that is, if you want to shard your search engine too), you should use ElasticSearch, unless what you want to do can't be done with ElasticSearch. And you should use it even if you are not going to shard.
ElasticSearch is a new project on top of Lucene that brings in a sharding mechanism, built by someone who is used to distributed environments and search (Shay Banon made Compass and worked for GigaSpaces, the data-grid vendor).
ElasticSearch is as easy to shard as MongoDB; I think it is even simpler, and the defaults work great for most cases.
I don't like Solr so much.
The query language is not structured at all (though that comes from Lucene, and I think you can use the same unstructured query language with ES too)
I don't think there is a proper Solr client. The Solr Java client sucks, and I hear PHP folks complaining too, while the ElasticSearch Java client is very nice, much more typesafe, and offers async support (nice if you use Netty, for example). With Solr, you will do a LOT of string concatenation.
Less easy to scale
Not such a new project; I felt the technical debt it carries. ElasticSearch was born from the Compass experience, so I guess all that technical debt was dropped in favor of a fresh new approach.
Concerning data importing, I have experience with both Solr DataImportHandler and ElasticSearch rivers (CouchDB and MongoDB). What I can tell you is:
Solr permits you to do more things, but in a very unstructured XML way, and the documentation doesn't help you much to understand what is really happening once you move past the hello world and try to use advanced features.
The ElasticSearch approach is simpler and also more limited, but it has out-of-the-box support for some technologies, while the DataImportHandler seems friendlier to complex SQL.
With my Solr project I had to index some documents manually, but that was mostly because of the impossibility of denormalizing the needed data into a document (the Solr project uses MySQL).
There is also a new MongoDB connector for both Solr and ElasticSearch which I need to test asap :)
http://blog.mongodb.org/post/29127828146/introducing-mongo-connector
So in the end, I'll definitely choose ElasticSearch, because:
It now has a great community
Many people I know with experience with Solr like ElasticSearch
The client side is safer and structured, and provides async with Java Futures
Both can probably import data from MongoDB easily with the new connector
As far as I know, it can do almost everything Solr does (in my experience, but I'm not a search engine expert)
It adds sharding out of the box
It adds percolation, which can help to build realtime scalable applications (but you'll probably need an additional messaging technology)
The source code I read has nearly no technical debt compared to Solr (at least on the client side), and it seems easy to create plugins.
In terms of MongoDB natively, no, it doesn't have full-text search support. You can see that it is a popular feature request:
https://jira.mongodb.org/browse/SERVER-380
From what I know of the ES river plugin for MongoDB, it tails the oplog for its functionality. A sharded setup would have multiple oplogs, and there would be no easy way to alter that code to connect via a mongos.
Similarly for Solr, the examples I have seen usually involve similar behavior to the ES plugin. Some more solid info here:
http://blog.knuthaugen.no/2010/04/cooking-with-mongodb-and-solr.html
I have not got any experience using one but others have made comparisons before, take a look here:
Solr vs. ElasticSearch
ElasticSearch, Sphinx, Lucene, Solr, Xapian. Which fits for which usage?
MongoDB can't do efficient full-text search. You can do wildcard searches on fields, but I don't think these use indexes efficiently.
I would recommend using the river functionality of ElasticSearch to automatically push the documents from MongoDB to ElasticSearch.
elasticsearch-river-mongodb is a MongoDB-to-ElasticSearch river: ElasticSearch monitors the oplog, and when a document changes in MongoDB it automatically updates its index.
This minimises the problem of keeping the two datastores in sync, since ElasticSearch is only watching Mongo's replication log (the oplog).
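A sketch of registering such a river (note the _river API belongs to the older ElasticSearch versions this thread discusses); the database, collection, and index names below are hypothetical:

```java
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;

public class RegisterMongoRiver {
    public static void main(String[] args) throws Exception {
        // River config: which Mongo db/collection to tail, and which ES index to fill.
        String config = "{"
                + "\"type\":\"mongodb\","
                + "\"mongodb\":{\"db\":\"mydb\",\"collection\":\"articles\"},"
                + "\"index\":{\"name\":\"articles\",\"type\":\"article\"}"
                + "}";
        // Registering a river is just a PUT to the _river meta index.
        HttpURLConnection conn = (HttpURLConnection)
                new URL("http://localhost:9200/_river/mongodb/_meta").openConnection();
        conn.setRequestMethod("PUT");
        conn.setDoOutput(true);
        try (OutputStream out = conn.getOutputStream()) {
            out.write(config.getBytes("UTF-8"));
        }
        System.out.println("HTTP " + conn.getResponseCode());
    }
}
```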
Mongo is not at all good for full-text search.
Obviously you need to index your fields for fast searching, and indexing fields containing big values (very long strings) will fail in Mongo: index keys are limited to about 1 KB, and any value over that limit is left out of the index and will not show up in your search results. So if you are trying to run full-text search over your articles, Mongo is not a good choice.
As of MongoDB 2.4.6, there now IS full-text search in MongoDB, and it is more feature-rich than in previous versions. The capabilities of the new functionality are described at http://docs.mongodb.org/manual/core/text-search/.
Worth mentioning:
It tokenizes and stems the search term(s) during both index creation and execution of the text command. It assigns a score to each document that contains the search term in the indexed fields; the score determines the relevance of a document to a given search query.
However, in this answer (from September 2013) https://stackoverflow.com/a/18631775/1920149 you can see that Mongo still warned against using this functionality in production; it was still in beta at that stage.
Full-text search became possible in production environments with MongoDB as of version 2.6, by creating a text index on the required fields.
See: text indexes in MongoDB
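A minimal sketch using the modern MongoDB Java driver; the database, collection, field, and search term are hypothetical:

```java
import com.mongodb.client.MongoClient;
import com.mongodb.client.MongoClients;
import com.mongodb.client.MongoCollection;
import com.mongodb.client.model.Filters;
import com.mongodb.client.model.Indexes;
import org.bson.Document;

public class TextSearch {
    public static void main(String[] args) {
        try (MongoClient client = MongoClients.create("mongodb://localhost")) {
            MongoCollection<Document> articles =
                    client.getDatabase("mydb").getCollection("articles");
            // Create a text index on the field(s) to be searched.
            articles.createIndex(Indexes.text("content"));
            // $text query; results can also be sorted by the textScore meta field.
            for (Document d : articles.find(Filters.text("coffee"))) {
                System.out.println(d.toJson());
            }
        }
    }
}
```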

advanced searching mongodb using mongomapper, sunspot/solr or sphinx?

I am using MongoDB with MongoMapper to store all my products. Each product belongs to multiple categories that have many levels, i.e. category, sub-category, etc.
Each product has many search fields that are embedded documents in product.
All this is working and I now want to add search to the app.
The search system needs text search: multiple, dynamic, faceted search including min/max range search.
I have been looking into the Sunspot gem but am having difficulty setting it up in dev, let alone trying to run it in production! I have also looked at Sphinx.
But I am wondering whether using just MongoMapper/MongoDB will be quick enough and the best way to go, as it's quite a complex search system?
Any help / suggestions / experiences / tutorials and examples on this would be most appreciated.
I've been involved with a very large Sphinx-powered search and I think it's awful: very difficult to configure if you want anything past a very simple full-text search. Solr/Lucene, on the other hand, is incredibly flexible and was unbelievably easier to set up and get running.
I am now using Solr in conjunction with MongoDB to power full-text search with all the extra goodies, like facets, etc. Depending on how you configure Solr, you may not even need to hit MongoDB for data. Or, you may tell Solr to index fields but not store them, and instead store just the ObjectIds that correspond to the data inside MongoDB.
If your search truly is a complex search system, I very strongly recommend that you do not use MongoDB for search and go with Solr. One big reason is that MongoDB doesn't have a full-text feature; instead, it has regular-expression matches. The regex matches work wonderfully, but will only use indexes in certain cases.
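A small sketch of that caveat with the MongoDB Java driver: a left-anchored, case-sensitive regex can walk an ordinary index, while an unanchored one cannot (collection and field names are hypothetical):

```java
import com.mongodb.client.MongoClient;
import com.mongodb.client.MongoClients;
import com.mongodb.client.MongoCollection;
import com.mongodb.client.model.Filters;
import com.mongodb.client.model.Indexes;
import org.bson.Document;

public class RegexSearch {
    public static void main(String[] args) {
        try (MongoClient client = MongoClients.create("mongodb://localhost")) {
            MongoCollection<Document> products =
                    client.getDatabase("shop").getCollection("products");
            products.createIndex(Indexes.ascending("name"));
            // Anchored, case-sensitive prefix regex: can use the index.
            products.find(Filters.regex("name", "^lap")).forEach(System.out::println);
            // Unanchored regex: forces a scan of the whole index or collection.
            products.find(Filters.regex("name", "top")).forEach(System.out::println);
        }
    }
}
```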

NoSQL (MongoDB) vs Lucene (or Solr) as your database [closed]

With the NoSQL movement growing on the back of document-based databases, I've been looking at MongoDB lately. I have noticed a striking similarity in how items are treated as "documents", just like in Lucene (and for users of Solr).
So, the question: Why would you want to use NoSQL (MongoDB, Cassandra, CouchDB, etc) over Lucene (or Solr) as your "database"?
What I am (and I am sure others are) looking for in an answer is some deep-dive comparisons of them. Let's skip over relational database discussions all together, as they serve a different purpose.
Lucene gives some serious advantages, such as powerful searching and weighting systems. Not to mention facets in Solr (and Solr is being integrated into Lucene soon, yay!). You can use Lucene documents to store IDs and access the documents as such, just like MongoDB. Mix it with Solr and you get a WebService-based, load-balanced solution.
You can even throw in a comparison of out-of-proc cache providers such as Velocity or MemCached when talking about similar data storing and scalability of MongoDB.
The restrictions around MongoDB remind me of using MemCached, but I can use Microsoft's Velocity and get more grouping and list-collection power than with MongoDB (I think). You can't get any faster or more scalable than caching data in memory. Even Lucene has a memory provider.
MongoDB (and others) do have some advantages, such as the ease of use of their API. New up a document, create an id, and store it. Done. Nice and easy.
This is a great question, something I have pondered over quite a bit. I will summarize my lessons learned:
You can easily use Lucene/Solr in lieu of MongoDB for pretty much all situations, but not vice versa. Grant Ingersoll's post sums it up here.
MongoDB etc. seem to serve a purpose where there is no requirement for searching and/or faceting. It appears to be a simpler, and arguably easier, transition for programmers detoxing from the RDBMS world. Unless one is used to them, Lucene and Solr have a steeper learning curve.
There aren't many examples of using Lucene/Solr as a datastore, but the Guardian has made some headway and summarizes it in an excellent slide deck; even they are non-committal about jumping totally onto the Solr bandwagon and are "investigating" combining Solr with CouchDB.
Finally, I will offer our experience; unfortunately I cannot reveal much about the business case. We work at the scale of several TB of data, in a near-real-time application. After investigating various combinations, we decided to stick with Solr. No regrets thus far (six months and counting) and we see no reason to switch.
Summary: if you do not have a search requirement, Mongo offers a simple & powerful approach. However if search is key to your offering, you are likely better off sticking to one tech (Solr/Lucene) and optimizing the heck out of it - fewer moving parts.
My 2 cents, hope that helped.
You can't partially update a document in Solr; you have to re-post all of the fields in order to update a document.
And performance matters: if you do not commit, your change to Solr does not take effect; if you commit every time, performance suffers.
There are no transactions in Solr.
As Solr has these disadvantages, sometimes NoSQL is a better choice.
UPDATE: Solr 4+ started supporting commits and soft commits. Refer to the latest documentation: https://lucene.apache.org/solr/guide/8_5/
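Solr 4 also added atomic updates, which address the partial-update complaint above. A minimal SolrJ sketch, assuming a core named products whose fields are stored:

```java
import java.util.Collections;
import org.apache.solr.client.solrj.SolrClient;
import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.common.SolrInputDocument;

public class AtomicUpdate {
    public static void main(String[] args) throws Exception {
        SolrClient solr = new HttpSolrClient.Builder("http://localhost:8983/solr/products").build();
        // Atomic update: only the "price" field is changed; Solr reconstructs
        // the rest of the document from its stored fields.
        SolrInputDocument doc = new SolrInputDocument();
        doc.addField("id", "1");
        doc.addField("price", Collections.singletonMap("set", 99.0));
        solr.add(doc);
        solr.commit(); // or rely on autoSoftCommit for near-real-time visibility
        solr.close();
    }
}
```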
We use MongoDB and Solr together and they perform well. You can find my blog post here, where I described how we use these technologies together. Here's an excerpt:
[...] However, we observed that the query performance of Solr decreases as the index size increases. We realized that the best solution is to use Solr and MongoDB together: we store the content in MongoDB and create the index with Solr for full-text search. We store only the unique id of each document in the Solr index, and retrieve the actual content from MongoDB after searching in Solr. Getting documents from MongoDB is faster than getting them from Solr because there are no analyzers, no scoring, etc. [...]
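A minimal sketch of that pattern: query Solr for ids only, then fetch the documents from MongoDB. All names are hypothetical, and the Mongo _id values are assumed to be plain strings:

```java
import java.util.ArrayList;
import java.util.List;
import com.mongodb.client.MongoClient;
import com.mongodb.client.MongoClients;
import com.mongodb.client.MongoCollection;
import com.mongodb.client.model.Filters;
import org.apache.solr.client.solrj.SolrClient;
import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.common.SolrDocument;
import org.bson.Document;

public class HybridSearch {
    public static void main(String[] args) throws Exception {
        // 1. Full-text search in Solr; only the id field is stored there.
        SolrClient solr = new HttpSolrClient.Builder("http://localhost:8983/solr/articles").build();
        List<String> ids = new ArrayList<>();
        for (SolrDocument d : solr.query(new SolrQuery("content:coffee")).getResults()) {
            ids.add((String) d.getFieldValue("id"));
        }
        solr.close();
        // 2. Fetch the actual content from MongoDB by id.
        try (MongoClient mongo = MongoClients.create("mongodb://localhost")) {
            MongoCollection<Document> articles =
                    mongo.getDatabase("mydb").getCollection("articles");
            articles.find(Filters.in("_id", ids)).forEach(System.out::println);
        }
    }
}
```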
Also please note that some people have integrated Solr/Lucene into Mongo by having all indexes be stored in Solr and also monitoring oplog operations and cascading relevant updates into Solr.
With this hybrid approach you can really have the best of both worlds with capabilities such as full text search and fast reads with a reliable datastore that can also have blazing write speed.
It's a bit technical to setup but there are lots of oplog tailers that can integrate into solr. Check out what rangespan did in this article.
http://denormalised.com/home/mongodb-pub-sub-using-the-replication-oplog.html
From my experience with both, Mongo is great for simple, straightforward usage. The main Mongo disadvantage we've suffered is poor performance on unanticipated queries (you cannot create Mongo indexes for all possible filter/sort combinations; you simply can't).
And here is where Lucene/Solr prevails big time, especially with FilterQuery caching: performance is outstanding.
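For illustration, filter queries in SolrJ are added as fq clauses, each of which Solr caches independently in its filterCache, so repeated filters are served from memory; the field names and values here are hypothetical:

```java
import org.apache.solr.client.solrj.SolrQuery;

public class FilteredQuery {
    public static void main(String[] args) {
        SolrQuery q = new SolrQuery("laptop");
        // Each fq clause is cached separately in Solr's filterCache.
        q.addFilterQuery("category:electronics");
        q.addFilterQuery("price:[100 TO 500]");
        // Prints the URL-encoded parameters, roughly:
        // q=laptop&fq=category:electronics&fq=price:[100+TO+500]
        System.out.println(q);
    }
}
```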
Since no one else mentioned it, let me add that MongoDB is schema-less, whereas Solr enforces a schema. So, if the fields of your documents are likely to change, that's one reason to choose MongoDB over Solr.
#mauricio-scheffer mentioned Solr 4 - for those interested in that, LucidWorks is describing Solr 4 as "the NoSQL Search Server" and there's a video at http://www.lucidworks.com/webinar-solr-4-the-nosql-search-server/ where they go into detail on the NoSQL(ish) features. (The -ish is for their version of schemaless actually being a dynamic schema.)
If you just want to store data in key-value format, Lucene is not recommended, because its inverted index will waste too much disk space. And with the data saved on disk, its performance is much slower than NoSQL databases such as Redis, because Redis keeps data in RAM. The biggest advantage of Lucene is that it supports a wide range of queries, including fuzzy queries.
MongoDB Atlas will have a Lucene-based search engine soon. The big announcement was made at this week's MongoDB World 2019 conference. This is a great way to encourage more usage of their high-revenue MongoDB Atlas product.
I was hoping to see it rolled into the MongoDB Enterprise version 4.2 but there's been no news of bringing it to their on-prem product line.
More info here: https://www.mongodb.com/atlas/full-text-search
Third-party solutions, like tailing the mongo oplog, are attractive. Some thoughts and questions remain about whether the solutions could be tightly integrated, from a development/architecture perspective. I don't expect to see a tightly integrated solution for these features, for a few reasons (somewhat speculative, subject to clarification, and not up to date with development efforts):
mongo is c++, lucene/solr are java
maybe lucene could use some mongo libs
maybe mongo could rewrite some lucene algorithms, see also:
http://clucene.sourceforge.net/
http://lucy.apache.org/
lucene supports various doc formats
mongo is focused on JSON (BSON)
lucene uses immutable documents
single field updates are an issue, if they are available
lucene indexes are immutable with complex merge ops
mongo queries are javascript
mongo has no text analyzers / tokenizers (AFAIK)
mongo doc sizes are limited, that might go against the grain for lucene
mongo aggregation ops may have no place in lucene
lucene has options to store fields across docs, but that's not the same thing
solr somehow provides aggregation/stats and SQL/graph queries