I know MongoDB has index creation to speed up queries (https://docs.mongodb.org/v3.0/indexes/). What does Elasticsearch provide for the same purpose? I googled it but was unable to find any suitable information. In MongoDB I created indexes on the most frequently used fields to speed up queries, and now I want to do the same in Elasticsearch. Is there anything Elasticsearch offers for this? Thanks.
Elasticsearch also has indices: https://www.elastic.co/blog/what-is-an-elasticsearch-index
They are also used as part of the database's key features to provide swift search capabilities.
It is unfortunate that "index" means something different in ES than in many other databases. I'm not that familiar with MongoDB, so I'll rely on their documentation at v3.0/core/index-types.
Basically, Elasticsearch was designed to serve efficient "filtering" (yes/no queries) and "scoring" (relevance ranking via tf-idf etc.), and it uses Lucene's inverted index under the hood.
MongoDB index types and their ES counterparts (a mapping sketch follows this list):
Single Field Index: trivially supported, perhaps as not_analyzed fields for exact matching
Compound Index: Lucene applies AND filter conditions via efficient bitmaps, so it can merge any "single field" indexes ad hoc
Multikey Index: supported transparently; there is no difference between a single value and an array of values
Geospatial Index: directly supported via geo-shapes
Text Index: this is the use case ES is optimized for, via analyzed (full-text) fields
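To make the ES side concrete, here is a minimal mapping-and-filter sketch using the official Python client. The index name, field names, and the ES 7+ style types (`keyword`/`text`/`geo_shape` instead of the older `not_analyzed` string fields) are assumptions on my part, not something prescribed by the answers above.

```python
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")  # assumed local node

# Hypothetical index: exact-match fields, an analyzed text field, and a geo field.
es.indices.create(
    index="places",
    body={
        "mappings": {
            "properties": {
                "country":  {"type": "keyword"},    # "single field index", exact matching
                "tags":     {"type": "keyword"},    # arrays need no special mapping (multikey)
                "review":   {"type": "text"},       # analyzed, i.e. the "text index" case
                "location": {"type": "geo_shape"},  # geospatial
            }
        }
    },
)

# Ad-hoc AND filtering over several fields, similar in spirit to a compound index.
hits = es.search(
    index="places",
    body={
        "query": {
            "bool": {
                "filter": [
                    {"term": {"country": "DE"}},
                    {"term": {"tags": "museum"}},
                ]
            }
        }
    },
)
print(hits["hits"]["total"])
```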
In my view, in search applications relevance is more important than plain filtering of the results, since some words occur in almost every document and are therefore less relevant when searching.
Elasticsearch has other very useful concepts as well, such as aggregations, nested documents and parent/child relationships.
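As a hint of what aggregations look like, here is a small sketch with the Python client; the index and field names are hypothetical.

```python
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")  # assumed local node

# Count documents per country, with a per-city sub-aggregation inside each bucket.
resp = es.search(
    index="places",   # hypothetical index
    body={
        "size": 0,    # we only want the aggregation, not the hits themselves
        "aggs": {
            "by_country": {
                "terms": {"field": "country"},
                "aggs": {"by_city": {"terms": {"field": "city"}}},
            }
        },
    },
)
for bucket in resp["aggregations"]["by_country"]["buckets"]:
    print(bucket["key"], bucket["doc_count"])
```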
Related
I have an application that needs to filter data based on 7+ fields.
2+ of these fields are arrays and are currently stored in MongoDB (each of them individually stores up to thousands of hexadecimal ids). In MongoDB it is not possible to create parallel indexes on these array fields (for very understandable reasons), so I am only able to index on one single field. A similar issue has already been discussed in the following thread:
elasticsearch v.s. MongoDB for filtering application
The answer provides some good insights into how Elasticsearch differs from NoSQL databases, but I'm still unsure whether Elasticsearch will perform well if I just create nested mappings for the two array fields.
Will the described "Vector Space Model" help me filter on multiple array fields with good performance when I do exact match / range searches?
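For what it's worth, exact-match filtering on two array fields does not involve scoring at all; below is a sketch of how it might look with the Python client. The index, field names, and values are my own assumptions, and note that arrays of plain ids usually map to `keyword` fields rather than `nested` objects (nested is only needed for arrays of sub-documents).

```python
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")  # assumed local node

# Hypothetical mapping: two keyword array fields plus a numeric field.
es.indices.create(
    index="items",
    body={"mappings": {"properties": {
        "group_ids": {"type": "keyword"},
        "tag_ids":   {"type": "keyword"},
        "price":     {"type": "integer"},
    }}},
)

es.index(
    index="items",
    id="1",
    body={"group_ids": ["a1f3", "b2c4"], "tag_ids": ["09ff", "77aa"], "price": 12},
)
es.indices.refresh(index="items")

# Exact-match filters on both arrays plus a range; filters are cached, not scored.
resp = es.search(
    index="items",
    body={
        "query": {
            "bool": {
                "filter": [
                    {"term": {"group_ids": "a1f3"}},
                    {"term": {"tag_ids": "09ff"}},
                    {"range": {"price": {"lte": 50}}},
                ]
            }
        }
    },
)
print(resp["hits"]["total"])
```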
This question is about choosing the type of database to run queries on for an application. Keeping other factors aside for the moment, and given that the choice is between MongoDB and Elasticsearch, the key criterion is that queries should be resolved in near real time. The queries will be ad hoc, so they can touch any of the fields in the JSON objects, and will likely contain aggregations and sub-aggregations. Furthermore, there will be no nested objects and none of the fields will contain 'descriptive' text (like movie reviews etc.); i.e., all the fields will be keyword-type fields like State, Country, City, Name etc.
Now, I have read that elasticsearch performance is near real time and that elasticsearch uses inverted indices and creates them automatically for every field.
Given all the above, my questions are as follows.
(there is a similar question posted in stack but I do not think it answers my questions
elasticsearch v.s. MongoDB for filtering application)
1) Since the fields in the use case I mentioned do not contain descriptive text, and hence would not require the full-text search capability and other additional features that elastic provides (especially for text search), what would be the better choice between elastic and mongo? How would Elasticsearch and MongoDB query/aggregation performance compare if I were to create single-field indexes on all the available fields in mongo?
2) I am not familiar with advanced indexing, so I am assuming that it would be possible to create indices on all available fields in mongo (either using multiple single field indices or maybe compound indices?). I understand that this will come with a cost for storage and write speed which is true for elastic as well.
3) Also, in elastic the user can trade off write speed (indexing rate) with the speed with which the written document becomes available (refresh_interval) for a query. Is there a similar feature in mongo?
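For reference, the Elasticsearch side of that trade-off in 3) is just an index setting; below is a minimal sketch with the Python client (the index name and interval values are my own assumptions). I'll leave the MongoDB half of the question to others.

```python
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")  # assumed local node

# During bulk loading: refresh less often (or disable with -1) to trade
# document visibility for higher indexing throughput.
es.indices.put_settings(index="events", body={"index": {"refresh_interval": "30s"}})

# ... index lots of documents here ...

# Back to a short interval so new documents become searchable within ~1 second.
es.indices.put_settings(index="events", body={"index": {"refresh_interval": "1s"}})
```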
I think the size of your data set is also a very important aspect of choosing a DB engine. According to this benchmark (2015), if you have over 10 million documents, Elasticsearch could be the better choice. If your data set is small, there should be no obvious difference in performance between Elasticsearch and MongoDB.
I have a sparse database. Some fields are of Boolean type (these fields should be indexed), some other fields are of Nominal type (again, these should also be indexed), while other fields are of Text type (those should not be indexed). I would like to save my data in a database so that I can search on any combination of the indexed fields and get back the results. Should I consider Elasticsearch, MongoDB, or another database?
Any help is appreciated.
Based on the description above, I suggest MongoDB is the best fit for your requirement, as MongoDB has powerful index management and supports multiple types of indexes.
Indexes allow MongoDB to process and fulfill queries quickly by creating small and efficient representations of the documents in a collection.
For a more detailed description of the index types in MongoDB, please refer to the documentation at the following URL:
https://docs.mongodb.org/manual/core/index-types/
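To illustrate, here is a minimal sketch with pymongo; the database, collection, and field names are hypothetical, and `sparse=True` simply means documents lacking the field are skipped by the index, which suits a sparse data set.

```python
from pymongo import MongoClient, ASCENDING

client = MongoClient("mongodb://localhost:27017")  # assumed local server
coll = client["mydb"]["records"]                   # hypothetical db/collection

# Single-field sparse indexes on the Boolean and Nominal fields; the Text
# fields are simply left unindexed.
coll.create_index([("is_active", ASCENDING)], sparse=True)
coll.create_index([("country", ASCENDING)], sparse=True)

# A query combining the indexed fields.
for doc in coll.find({"is_active": True, "country": "DE"}):
    print(doc["_id"])
```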
I'm familiar with MongoDB.
As you may know, there are many index types in MongoDB, such as:
multikey index: http://docs.mongodb.org/manual/core/index-multikey/, which is very useful for keyword search; I once used it to build a simple search engine.
the compound index is also very useful in MongoDB: http://docs.mongodb.org/manual/tutorial/create-a-compound-index/, which is used for multi-field queries (a small sketch of both follows).
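Here is a brief pymongo sketch of the two index types just mentioned; the collection, field names, and sample document are hypothetical.

```python
from pymongo import MongoClient, ASCENDING

client = MongoClient("mongodb://localhost:27017")  # assumed local server
docs = client["mydb"]["articles"]                  # hypothetical db/collection

docs.insert_one({"title": "hello", "keywords": ["search", "engine"], "year": 2015})

# Multikey index: indexing an array field indexes every element of the array,
# which is what makes simple keyword lookups possible.
docs.create_index([("keywords", ASCENDING)])

# Compound index: one index covering queries on (year, title) together.
docs.create_index([("year", ASCENDING), ("title", ASCENDING)])

print(docs.find_one({"keywords": "search", "year": 2015}))
```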
But I need to migrate my database from MongoDB to HBase. Is there a similar kind of index in HBase that can provide the same functionality as MongoDB's multikey and compound indexes?
HBase doesn't support secondary indexes; that's one of the trade-offs made in order to scale to massive data sets. These are the options you have:
http://hbase.apache.org/book/secondary.indexes.html
It all depends on the amount of data you're going to handle and your access patterns. For me, dual writing to "index" tables and to summary tables are the best approaches; just keep in mind that this has to be done manually.
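As a rough illustration of the "dual write to an index table" approach, here is a sketch using the happybase client. The table names, column family, and row-key layout are all my own assumptions (both tables are assumed to already exist with a `d` column family), and keeping the two writes consistent is entirely the application's job.

```python
import happybase

conn = happybase.Connection("localhost")   # assumed local HBase Thrift server
users = conn.table("users")                # main table, row key = user id
by_email = conn.table("users_by_email")    # hand-maintained "index" table

def save_user(user_id: bytes, email: bytes, name: bytes) -> None:
    # 1) write the entity itself
    users.put(user_id, {b"d:email": email, b"d:name": name})
    # 2) dual-write the secondary-index row: row key = email, value = user id
    by_email.put(email, {b"d:user_id": user_id})

def find_user_by_email(email: bytes):
    idx = by_email.row(email)
    if not idx:
        return None
    return users.row(idx[b"d:user_id"])

save_user(b"u42", b"jane@example.com", b"Jane")
print(find_user_by_email(b"jane@example.com"))
```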
There is no concept of secondary indexing in HBase as of now. I know there is some demand within the community for it. But there are other projects that provide indexing on top of HBase; one in particular that I looked at was Huawei's hindex.
If you have an RDBMS, you probably have to use Solr to index your relational tables into fully nested documents.
I'm new to non-SQL databases like MongoDB, CouchDB and Cassandra, but it seems to me that the data you save is already in a document structure, like the documents stored in Solr/Lucene.
Does this mean that you don't have to use Solr/Lucene when using these databases?
Is it already indexed so that you can do full-text search?
It depends on your needs. They have full-text search. In CouchDB the search is Lucene-based (same as Solr). Unfortunately, this is just a full-text index; if you need complex scoring or DisMax-type searching, you'll likely want the added capabilities of an independent Solr index.
Solr (Lucene) uses a ranking algorithm to return relevant documents for a query. It returns a score indicating how relevant each document is to the query.
This is different from what a database (relational or not) does, which is to return results that either match a query or don't.
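To make that contrast concrete, here is a small sketch; the Solr core name, the MongoDB collection, and the field names are my assumptions. The Solr query ranks hits by a relevance score, while the MongoDB query simply returns whatever matches.

```python
import pysolr
from pymongo import MongoClient

# Solr: every hit comes back with a relevance score, best matches first.
solr = pysolr.Solr("http://localhost:8983/solr/articles")  # hypothetical core
for doc in solr.search("title:(quick brown fox)", fl="id,title,score"):
    print(doc["id"], doc["score"])

# MongoDB: a plain query either matches a document or it doesn't; no ranking.
coll = MongoClient("mongodb://localhost:27017")["mydb"]["articles"]
for doc in coll.find({"title": "quick brown fox"}):
    print(doc["_id"])
```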