I have a question about Druid queries.
According to the official documentation, there are two query languages: Druid SQL and native queries.
In my case, I feel more comfortable using Druid SQL because I don't know native queries well and the code stays simpler.
But I don't know the difference in performance between the two query languages.
Is there a large difference in performance between them?
I saw a Druid forum post from July 2019. That post recommends using Druid SQL on Druid 0.15.0 or later. (As of now, the latest Druid version is 0.23.0.)
Which is better to use, Druid SQL or native queries?
But I don't know the difference in performance between the two query languages.
Is there a large difference in performance between them?
Every Druid SQL query gets translated into a native query. According to the docs, there's a slight overhead in translating the query from SQL to native, but that's the only minor performance penalty of using Druid SQL compared to native queries.
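If you're curious what your SQL actually turns into, you can ask the broker to explain the plan. Here's a minimal sketch in Python, assuming a local broker on the default port and a hypothetical "wikipedia" datasource:

```python
# Sketch: ask a Druid broker how it translates a SQL query into a native query.
# The broker URL and the "wikipedia" datasource are placeholders for your setup.
import json
import requests

BROKER = "http://localhost:8082"  # default broker port; adjust for your cluster

sql = "EXPLAIN PLAN FOR SELECT channel, COUNT(*) AS edits FROM wikipedia GROUP BY channel"

resp = requests.post(f"{BROKER}/druid/v2/sql", json={"query": sql})
resp.raise_for_status()

# The response contains the native query that the SQL layer generated for this statement.
print(json.dumps(resp.json(), indent=2))
```

Running the same statement without the EXPLAIN PLAN FOR prefix returns the actual query results, so you can compare the generated native query against one you would have written by hand.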
To specifically answer your question of
Which is better to use, Druid SQL or native queries?
continue using whichever you are comfortable with.
If you'd like to learn more about best practices and native queries, the linked doc goes into quite a bit of detail.
Related
I have a large application that has hundreds of lines of complex queries in Lucene.NET, and I want to be able to move to Amazon CloudSearch.
Instead of re-writing all the queries, I was thinking of writing some sort of converter. Before I do, though, I thought I would make sure that there is a direct equivalent for every type of Lucene query (things like inner clauses, etc.).
Better yet, is there already a library that does it?
I'm aware that there is a .NET library for querying CloudSearch, and also the AWS SDK, but I want something that allows easy switching between local Lucene.NET and ACS.
It's way easier than that -- just select CloudSearch's Lucene query parser via the parameter q.parser=lucene with your queries. http://docs.aws.amazon.com/cloudsearch/latest/developerguide/searching.html
lucene—specify search criteria using the Apache Lucene query parser syntax. If you currently use the Lucene syntax, using the lucene query parser enables you to migrate your search services to an Amazon CloudSearch domain without having to completely rewrite your search queries in the Amazon CloudSearch structured search syntax.
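From code, that just means selecting the lucene parser on each search call. A rough sketch with boto3 (the endpoint URL, region, and field names are placeholders for your own domain):

```python
# Sketch: run an existing Lucene-syntax query against a CloudSearch domain.
# The endpoint URL, region, and the "title"/"year" fields are placeholders.
import boto3

client = boto3.client(
    "cloudsearchdomain",
    region_name="us-east-1",
    endpoint_url="https://search-yourdomain-xxxxxxxx.us-east-1.cloudsearch.amazonaws.com",
)

response = client.search(
    query="title:star AND year:[2000 TO 2010]",  # plain Lucene syntax, unchanged
    queryParser="lucene",                        # equivalent to q.parser=lucene
    size=10,
)

for hit in response["hits"]["hit"]:
    print(hit["id"], hit.get("fields"))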
For external reasons we are thinking of switching from MongoDB to Cassandra. Cassandra scales well, writes fast, and reads well. But where we are really stuck is query features. We use MongoDB's query features actively, and we also use its aggregation framework very heavily. So could you please point me to an alternative technology that could compensate for MongoDB's rich queries and aggregation framework? Could it be Hadoop or Spark?
Apache Spark is the most powerful Cassandra complement. With Spark you can group, join, sort, filter, and whatever else you can imagine. There are some projects that build an abstraction layer in Spark over Cassandra and let you apply these operations (see the sketch after the list below).
Two common projects are:
Stratio Deep
Datastax Connector
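As a rough sketch of what the DataStax connector gives you from PySpark (the keyspace, table, and column names are made up, and the spark-cassandra-connector package has to be on Spark's classpath, e.g. via --packages):

```python
# Sketch: group/filter a Cassandra table from Spark using the DataStax
# spark-cassandra-connector. Keyspace, table, and column names are made up.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = (
    SparkSession.builder
    .appName("cassandra-aggregations")
    .config("spark.cassandra.connection.host", "127.0.0.1")
    .getOrCreate()
)

orders = (
    spark.read.format("org.apache.spark.sql.cassandra")
    .options(keyspace="shop", table="orders")
    .load()
)

# The kind of grouping/filtering you would otherwise do with MongoDB's aggregation pipeline:
top_customers = (
    orders
    .filter(F.col("status") == "PAID")
    .groupBy("customer_id")
    .agg(F.sum("amount").alias("total_spent"), F.count("*").alias("order_count"))
    .orderBy(F.desc("total_spent"))
)

top_customers.show()
```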
I have been studying NoSQL and Hadoop for data warehousing; however, I have never worked with these technologies before, and I would like to ask whether the following is possible, to check if I got my understanding of them right.
If I have my data stored in MongoDB, can I use Hadoop with Hive to run HiveQL queries directly against MongoDB and store the output of those queries as views back in MongoDB again, instead of in HDFS?
Also, if I understand correctly, most NoSQL databases don't support joins and aggregates, but it's possible to implement them through map-reduce. If HiveQL queries are map-reduce jobs, then when I do a join in HiveQL, would it already be automatically "joining" the MongoDB data in map-reduce for me, with no need to worry about the lack of support for joins and aggregates in MongoDB?
MongoDB does have very good support for aggregation-style functions. There are no joins, of course. The way a MongoDB schema is usually designed is such that you would typically not need a join.
HiveQL operates on 'tables' in HDFS. That's the default behavior.
But there is a MongoDB-Hadoop Connector (http://docs.mongodb.org/ecosystem/tools/hadoop/) which will let you query MongoDB data from within Hadoop.
To use map-reduce, you can do that with MongoDB itself (without Hadoop).
See this: http://docs.mongodb.org/manual/core/map-reduce/
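To give a feel for the aggregation support mentioned above, here is a rough sketch of a join-free aggregate from Python with pymongo (the database, collection, and field names are made up):

```python
# Sketch: a typical join-free aggregate using MongoDB's aggregation framework.
# Database, collection, and field names are made up for illustration.
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")
orders = client.shop.orders

pipeline = [
    {"$match": {"status": "shipped"}},        # filter first
    {"$group": {
        "_id": "$customer_id",                # group key
        "total": {"$sum": "$amount"},
        "order_count": {"$sum": 1},
    }},
    {"$sort": {"total": -1}},
]

for doc in orders.aggregate(pipeline):
    print(doc)
```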
I am planning to build a data warehouse in MongoDB for the first time. It has been suggested to me that I should use Hadoop for map-reduce in case I need some more complex analyses of the datasets.
Having discovered Hive, I liked the idea of doing map-reduce through a language similar to SQL. But my question is: can I run HiveQL queries directly against MongoDB without needing to build a Hive DW on top of Hadoop? In all the use cases I found, it seems to only work on data stored in Hadoop's HDFS.
You could use the MongoDB Connector for Hadoop:
http://docs.mongodb.org/ecosystem/tools/hadoop/
MongoDB on its own has a map-reduce paradigm too:
http://docs.mongodb.org/manual/core/map-reduce/
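As a small sketch of what MongoDB's own map-reduce looks like from Python, without any Hadoop involved (the collection and field names are made up; the map and reduce functions are JavaScript executed by the server):

```python
# Sketch: MongoDB's built-in map-reduce, run without Hadoop.
# Collection and field names are made up for illustration.
from bson.code import Code
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")
db = client.shop

mapper = Code("function () { emit(this.customer_id, this.amount); }")
reducer = Code("function (key, values) { return Array.sum(values); }")

result = db.command(
    "mapReduce",
    "orders",            # source collection
    map=mapper,
    reduce=reducer,
    out={"inline": 1},   # return the results in the command response
)

for doc in result["results"]:
    print(doc["_id"], doc["value"])
```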
I've been reading up on MongoDB. I am particularly interested in the aggregation framework's capabilities. I am looking at taking multiple datasets consisting of at least 10 million rows per month and creating aggregations from this data. This is time series data.
Example: using Oracle OLAP, you can load data at the second/minute level and have it roll up to hours, days, weeks, months, quarters, years, etc. Simply define your dimensions and go from there. This works quite well.
So far I have read that MongoDB can handle the above using its map-reduce functionality. Map-reduce can be implemented so that it updates results incrementally. This makes sense, since I would be loading new data, say, weekly or monthly, and I would expect to only have to process the new data being loaded.
I have also read that map-reduce in MongoDB can be slow. To overcome this, the idea is to use cheap commodity hardware and spread the load across multiple machines.
So here are my questions.
How good (or bad) does MongoDB handle map reduce in terms of performance? Do you really need a lot of machines to get acceptable performance?
In terms of workflow, is it relatively easy to store and merge the incremental results generated by map reduce?
How much of a performance improvement does the aggregation framework offer?
Does the aggregation framework offer the ability to store results incrementally, in a similar manner to the existing map/reduce functionality?
I appreciate your responses in advance!
How good (or bad) does MongoDB handle map reduce in terms of performance? Do you really need a lot of machines to get acceptable performance?
MongoDB's Map/Reduce implementation (as of 2.0.x) is limited by its reliance on the single-threaded SpiderMonkey JavaScript engine. There has been some experimentation with the V8 JavaScript engine, and improved concurrency and performance are overall design goals.
The new Aggregation Framework is written in C++ and has a more scalable implementation including a "pipeline" approach. Each pipeline is currently single-threaded, but you can run different pipelines in parallel. The aggregation framework won't currently replace all jobs that can be done in Map/Reduce, but does simplify a lot of common use cases.
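For the hourly/daily rollups you describe, a pipeline along these lines is the aggregation-framework way to do it (the collection and field names "metrics", "ts", and "value" are made up; the date operators used here are part of the framework):

```python
# Sketch: rolling second-level time series data up into hourly buckets with
# the aggregation framework. Collection and field names are made up.
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")
metrics = client.tsdb.metrics

hourly = metrics.aggregate([
    {"$group": {
        "_id": {                        # bucket key: year/month/day/hour of the timestamp
            "year":  {"$year": "$ts"},
            "month": {"$month": "$ts"},
            "day":   {"$dayOfMonth": "$ts"},
            "hour":  {"$hour": "$ts"},
        },
        "total":   {"$sum": "$value"},
        "average": {"$avg": "$value"},
        "samples": {"$sum": 1},
    }},
    {"$sort": {"_id": 1}},
])

for bucket in hourly:
    print(bucket)
```

Coarser rollups (day, week, month) are just a matter of dropping fields from the bucket key.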
A third option is to use MongoDB for storage in combination with Hadoop via the MongoDB Hadoop Connector. Hadoop currently has a more scalable Map/Reduce implementation and can access MongoDB collections for input and output via the Hadoop Connector.
In terms of workflow, is it relatively easy to store and merge the incremental results generated by map reduce?
Map/Reduce has several output options, including merging the incremental output into a previous output collection or returning the results inline (in memory).
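The incremental pattern typically looks like this: select only the documents that arrived since the last run and re-reduce them into the existing output collection. A sketch (collection names and the checkpoint value are made up):

```python
# Sketch: incremental map-reduce in MongoDB: process only new documents and
# fold them into an existing output collection. Names and dates are made up.
from datetime import datetime

from bson.code import Code
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")
db = client.tsdb

mapper = Code("function () { emit(this.sensor_id, this.value); }")
reducer = Code("function (key, values) { return Array.sum(values); }")

last_run = datetime(2012, 7, 1)  # checkpoint saved from the previous run (placeholder)

db.command(
    "mapReduce",
    "metrics",
    map=mapper,
    reduce=reducer,
    query={"ts": {"$gt": last_run}},   # only the data loaded since the last run
    out={"reduce": "sensor_totals"},   # re-reduce new results into the existing output
)
```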
How much of a performance improvement does the aggregation framework offer?
This really depends on the complexity of your Map/Reduce. Overall the aggregation framework is faster (and in some cases, significantly so). You're best doing a comparison for your own use case(s).
MongoDB 2.2 isn't officially released yet, but the 2.2rc0 release candidate has been available since mid-July.
Does the aggregation framework offer the ability to store results incrementally, in a similar manner to the existing map/reduce functionality?
The aggregation framework is currently limited to returning results inline so you have to process/display the results when they are returned. The result document is also restricted to the maximum document size in MongoDB (currently 16MB).
There is a proposed $out pipeline command (SERVER-3253) which will likely be added in future for more output options.
Some further reading that may be of interest:
a presentation at MongoDC 2011 on Time Series Data Storage in MongoDB
a presentation at MongoSF 2012 on MongoDB's New Aggregation Framework
capped collections, which could be used similarly to RRD
Couchbase map reduce is designed for building incremental indexes, which can then be dynamically queried for the level of rollup you are looking for (much like the Oracle example you gave in your question).
Here is a write up of how this is done using Couchbase: http://www.couchbase.com/docs/couchbase-manual-2.0/couchbase-views-sample-patterns-timestamp.html