Which NoSQL solution fits my application: HBase, Hypertable, or Cassandra? - nosql

I have an application with 100 million records and growing. I want to scale out before it hits a wall.
I have been reading about NoSQL technologies that can handle Big Data efficiently.
My needs:
There are more reads than writes, but writes are also significant in number (read:write = 4:3).
Can you please explain difference among HBase, Hypertable and Cassandra? Which one fits my requirements?

Both HBase and Hypertable require Hadoop. If you're not already using Hadoop for other reasons (e.g. to solve MapReduce-related problems), I'd go with Cassandra, as it is stand-alone.
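To illustrate the stand-alone point, here is a minimal sketch using the DataStax Python driver, assuming a single-node local cluster; the keyspace and table names are hypothetical:

    # Minimal stand-alone Cassandra write (pip install cassandra-driver).
    # Assumes a single-node cluster on localhost; keyspace/table are hypothetical.
    from uuid import uuid4
    from cassandra.cluster import Cluster

    cluster = Cluster(["127.0.0.1"])
    session = cluster.connect()
    session.execute("""
        CREATE KEYSPACE IF NOT EXISTS app
        WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 1}
    """)
    session.execute(
        "CREATE TABLE IF NOT EXISTS app.events (id uuid PRIMARY KEY, payload text)"
    )

    # Writes go straight to the cluster; there is no Hadoop/HDFS layer underneath.
    session.execute(
        "INSERT INTO app.events (id, payload) VALUES (%s, %s)",
        (uuid4(), "hello"),
    )
    cluster.shutdown()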

If you already have data, Hive is the best solution for your application; or, if you are developing the app from scratch, look into the links below, which give an overview of the NoSQL world:
http://nosql-database.org/
http://www.vineetgupta.com/2010/01/nosql-databases-part-1-landscape/

Related

Storing IoT data in MongoDB

I am currently streaming IoT data to my MongoDB instance, which is running in a Docker container (hosted on AWS). I am getting a couple of thousand data points per day.
I will be using the gathered data for some intensive data analysis and ML, which will run on a day-to-day basis.
So is this how big data is normally stored? What are the industry standards and best practices?
It depends on a lot of factors: for example, the type of data you are analyzing, how much data you have, and how quickly you need it.
For applications such as user behavior analysis, a relational DB is best.
If the data fits into a spreadsheet, then it is better suited to a SQL-type database such as Postgres or BigQuery, since relational databases are good at analyzing data in rows and columns.
For semi-structured data (think social media, text, or geographical data that requires a large amount of text mining or image processing), a NoSQL database such as MongoDB or CouchDB works best.
Relational databases also have the advantage that you can use SQL to query them. SQL is well-known among data analysts and engineers and is easier to learn than most programming languages.
Databases that are commonly used in the industry to store Big Data include:
Relational Database Management Systems: as the storage engine, these platforms employ the B-Tree structure. B-Trees are used to organize the index and the data, and reads and writes take logarithmic time.
MongoDB: you can use this platform if you need to de-normalize tables. It is apt if you want documents that keep all the related nested structures in a single document for maintaining consistency (see the sketch after this list).
Cassandra: this database platform is perfect for fast writes and for queries known upfront; ad-hoc query performance is weaker. The fast write path makes it ideal for time-series data. Cassandra uses the Log-Structured Merge-Tree (LSM-tree) format in its storage engine.
Apache HBase: this data management platform is similar to Cassandra in its storage format, and it comes with comparable performance characteristics.
OpenTSDB: this platform is perfect for IoT use cases where thousands of data points arrive within seconds and the collected data feeds dashboards.
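To make the MongoDB option above concrete, here is a minimal sketch with pymongo; the connection string, database, and field names are hypothetical:

    # Storing IoT readings with pymongo (pip install pymongo); names are hypothetical.
    from datetime import datetime, timezone
    from pymongo import MongoClient, ASCENDING, DESCENDING

    readings = MongoClient("mongodb://localhost:27017").iot.readings

    # One document per data point; nested fields keep related values together.
    readings.insert_one({
        "device_id": "sensor-42",
        "ts": datetime.now(timezone.utc),
        "metrics": {"temperature": 21.7, "humidity": 0.55},
    })

    # Compound index so per-device, time-ordered queries stay fast as data grows.
    readings.create_index([("device_id", ASCENDING), ("ts", DESCENDING)])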
Hope it helps.

Hadoop with MongoDB and Hadoop vs MongoDB

I'm trying to understand the key differences between MongoDB and Hadoop.
I understand that MongoDB is a database, while Hadoop is an ecosystem that contains HDFS. There are some similarities in the way data is processed using either technology, but major differences as well.
I'm confused as to why someone would use MongoDB over a Hadoop cluster; mainly, what advantages does MongoDB offer over Hadoop? Both perform parallel processing, both can be used with Spark for further data analytics, so what is the value-add of one over the other?
Now, if you were to combine both, why would you want to store data in MongoDB as well as HDFS? MongoDB has map/reduce, so why would you want to send data to Hadoop for processing? And again, both are compatible with Spark.
First, let's look at what we're talking about:
Hadoop - an ecosystem. Its two main components are HDFS and MapReduce.
MongoDB - a document-oriented NoSQL database.
Let's compare them on two types of workloads:
High latency, high throughput (batch processing) - dealing with the question of how to process and analyze large volumes of data. Processing is done in a parallel and distributed way in order to produce results as efficiently as possible. Hadoop is the best way to deal with such a problem, managing and processing data in a distributed and parallel way across several servers.
Low latency, low throughput (immediate access to data, real-time results, many concurrent users) - when you need to show immediate results as quickly as possible, or run small parallel computations that yield near-real-time (NRT) results for several concurrent users, a NoSQL database is the best way to go.
A simple example in a stack would be to use Hadoop to process and analyze massive amounts of data, then store your end results in MongoDB (sketched in code below) in order for you to:
Access them in the quickest way possible
Reprocess them now that they are on a smaller scale
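A hedged sketch of that hand-off: take the (hypothetical) tab-separated output of a Hadoop batch job and load it into MongoDB for fast serving.

    # Load Hadoop batch output into MongoDB for serving (pip install pymongo).
    # "part-r-00000" is a typical reducer output file; the column layout is assumed.
    import csv
    from pymongo import MongoClient

    results = MongoClient("mongodb://localhost:27017").analytics.daily_aggregates

    with open("part-r-00000", newline="") as f:
        docs = [{"key": key, "count": int(count)}
                for key, count in csv.reader(f, delimiter="\t")]

    if docs:
        results.insert_many(docs)  # end results are small, so one bulk insert is fine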
The bottom line is that you shouldn't look at Hadoop and MongoDB as competitors. Each one has its own best use case and approach to data; they complement and complete each other in your work with data.
Hope this makes sense.
Firstly, we should know what these two terms mean.
HADOOP
Hadoop is an open-source tool for Big Data analytics developed by the Apache Software Foundation. It is the most widely used tool for both storing and analyzing Big Data, and it uses a clustered architecture for both.
Hadoop has a vast ecosystem, and this ecosystem comprises some robust tools.
MongoDB
MongoDB is an open-source, general-purpose, document-based, distributed NoSQL database built for storing Big Data. MongoDB has a very rich query language, which makes for high performance. Being document-based, it stores data in JSON-like documents.
DIFFERENCES
Both these tools are good enough for harnessing Big Data; it depends on your requirements. For some projects Hadoop is the better option, and for others MongoDB fits well.
Hope this helps you to distinguish between the two.

How to transport and index Cassandra data in Elasticsearch?

I'm starting a Node.js application where I want to index Cassandra data in Elasticsearch; what would be the best way to do that? I took a look at Storm to accomplish this, but it doesn't seem to be the solution. Initially, I was thinking of using one client for Cassandra and one client for Elasticsearch and applying every insert/update/delete twice in my application, once per client, but that doesn't appear to be the way to go, and I'm worried about the consistency of it. Is there a better way to transport Cassandra data to be indexed in Elasticsearch? Would Storm help me accomplish that? Could someone suggest techniques for transporting one database's data to another? I'm really in doubt here, with nowhere to look.
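For concreteness, the dual-write approach described above looks roughly like the following (a Python sketch rather than Node.js; the keyspace, table, index, and field names are hypothetical, and recent versions of the Cassandra and Elasticsearch clients are assumed). It also makes the consistency worry visible: if the second write fails after the first succeeds, the two stores diverge.

    # Naive dual-write: apply each mutation to both stores, one after the other.
    from cassandra.cluster import Cluster
    from elasticsearch import Elasticsearch

    session = Cluster(["127.0.0.1"]).connect("app")   # hypothetical keyspace
    es = Elasticsearch("http://localhost:9200")

    def save_user(user_id: str, name: str) -> None:
        session.execute(
            "INSERT INTO users (id, name) VALUES (%s, %s)", (user_id, name)
        )
        # If this second write fails, Cassandra and Elasticsearch are out of sync.
        es.index(index="users", id=user_id, document={"name": name})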
Do you want to move the data from Cassandra to Elasticsearch once and only once, or do you want to keep the two in sync?
In both cases, I think Storm is a good fit. I used it in the past to move data from our RDBMS into Apache Solr. One thing to keep in mind is the limit on how many writes Solr/Elasticsearch can handle: if you increase the parallelism too far, you will bring them to their knees.
Another option could be Apache Hadoop, but it is only suitable for a one-time copy, or for copying the data (yesterday's data plus whatever is new) every day.
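For the one-time copy case, a hedged sketch: page through a Cassandra table and bulk-index the rows into Elasticsearch, keeping chunk sizes modest so the search cluster is not brought to its knees. Keyspace, table, index, and column names are hypothetical.

    # One-time copy from Cassandra to Elasticsearch (hypothetical names).
    from cassandra.cluster import Cluster
    from elasticsearch import Elasticsearch
    from elasticsearch.helpers import bulk

    session = Cluster(["127.0.0.1"]).connect("app")
    es = Elasticsearch("http://localhost:9200")

    rows = session.execute("SELECT id, name FROM users")  # the driver pages transparently

    bulk(
        es,
        ({"_index": "users", "_id": str(row.id), "_source": {"name": row.name}}
         for row in rows),
        chunk_size=500,  # keep batches modest to avoid overloading Elasticsearch
    )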

Log viewing utility database choice

I will be implementing a log viewing utility soon, but I am stuck on the DB choice. My requirements are as follows:
Store 5 GB data daily
Total size of 5 TB data
Search in this log data in less than 10 sec
I know that PostgreSQL will work if I partition the tables, but will I be able to get the performance described above? As I understand it, NoSQL is the better choice for storing logs, since logs are not very structured. I saw the example below using hadoop-hbase-lucene, and it seems promising:
http://blog.mgm-tp.com/2010/03/hadoop-log-management-part1/
But before deciding, I wanted to ask whether anybody has made a choice like this before and could give me an idea. Which DBMS would fit this task best?
My logs are very structured :)
I would say you don't need a database, you need a search engine:
Solr - based on Lucene; it packages everything you need together.
ElasticSearch - another Lucene-based search engine.
Sphinx - a nice feature is that you can use multiple sources per search index, so you can enrich your raw logs with other events.
Scribe - the Facebook way to collect and aggregate logs.
Update for #JustBob:
Most of the mentioned solutions can work with flat files without affecting performance. All of them need an inverted index, which is the hardest part to build and maintain. You can update the index in batch mode or online. The index can be stored in an RDBMS, in NoSQL, or in a custom "flat file" storage format (custom meaning maintained by the search engine application).
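To make the inverted-index point concrete, a toy sketch over log lines; real engines add ranking, compression, and incremental updates on top of this idea:

    # Toy inverted index: map each term to the set of log lines containing it.
    from collections import defaultdict

    logs = [
        "2013-01-01 ERROR disk full on node-3",
        "2013-01-01 INFO backup finished",
        "2013-01-02 ERROR timeout talking to node-3",
    ]

    index = defaultdict(set)
    for line_no, line in enumerate(logs):
        for term in line.lower().split():
            index[term].add(line_no)

    # Query: lines containing both "error" and "node-3" (intersect the postings).
    hits = index["error"] & index["node-3"]
    print([logs[i] for i in sorted(hits)])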
You can find a lot of information here:
http://kkovacs.eu/cassandra-vs-mongodb-vs-couchdb-vs-redis
See which fits your needs.
Anyway, for such a task NoSQL is the right choice.
You should also consider the learning curve: MongoDB and CouchDB, even though they don't perform as well as Cassandra or Hadoop, are easier to learn.
MongoDB is used by Craigslist to store old archives: http://www.10gen.com/presentations/mongodb-craigslist-one-year-later

HBase, Cassandra, CouchDB, MongoDB... any fundamental difference?

I just wanted to know if there is a fundamental difference between HBase, Cassandra, CouchDB and MongoDB. In other words, are they all competing in the exact same market and trying to solve the exact same problems, or do they fit best in different scenarios?
All this comes down to the question: what should I choose, and when? Is it a matter of taste?
Thanks,
Federico
Those are some long answers from #Bohzo (but they are good links).
The truth is, they're "kind of" competing. But they definitely have different strengths and weaknesses and they definitely don't all solve the same problems.
For example, Couch and Mongo both provide map-reduce engines as part of the main package. HBase is (basically) a layer on top of Hadoop, so you also get M-R via Hadoop. Cassandra is highly focused on being a key-value store and has plug-ins to "layer" Hadoop on top (so you can map-reduce).
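To illustrate Mongo's built-in engine, a sketch of a map-reduce tag count via pymongo. Note this uses the pymongo 3.x map_reduce call with JavaScript map/reduce functions; MongoDB has since deprecated mapReduce in favor of the aggregation pipeline, so treat it as historical. The collection and field names are hypothetical.

    # Map-reduce in MongoDB via pymongo 3.x; the map/reduce functions are JavaScript.
    from bson.code import Code
    from pymongo import MongoClient

    posts = MongoClient("mongodb://localhost:27017").blog.posts  # hypothetical

    mapper = Code("function () { this.tags.forEach(function (t) { emit(t, 1); }); }")
    reducer = Code("function (key, values) { return Array.sum(values); }")

    # Results are written to the "tag_counts" collection and returned for reading.
    tag_counts = posts.map_reduce(mapper, reducer, "tag_counts")
    for doc in tag_counts.find():
        print(doc["_id"], doc["value"])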
Some of the DBs provide MVCC (Multi-version concurrency control). Mongo does not.
All of these DBs are intended to scale horizontally, but they do it in different ways. All of these DBs are also trying to provide flexibility in different ways. Flexible document sizes or REST APIs or high redundancy or ease of use, they're all making different trade-offs.
So to your question: In other words, are they all competing in the exact same market and trying to solve the exact same problems?
Yes: they're all trying to solve the issue of database-scalability and performance.
No: they're definitely making different sets of trade-offs.
What should you start with?
Man, that's a tough question. I work for a large company pushing tons of data, and we've been through a few of these systems over the years. We tried Cassandra at one point a couple of years ago and it couldn't handle the load. We're using Hadoop everywhere, but it definitely has a steep learning curve and it hasn't worked out in some of our environments. More recently we've tried to do Cassandra + Hadoop, but it turned out to be a lot of configuration work.
Personally, my department is moving several things to MongoDB. Our reasons for this are honestly just simplicity.
Setting up Mongo on a Linux box takes minutes and doesn't require root access, a change to the file system, or anything fancy. There are no crazy config files or Java recompiles required. So from that perspective, Mongo has been the easiest "gateway drug" for getting people onto KV/document stores.
CouchDB and MongoDB are document stores
Cassandra and HBase are key-value based
Here is a detailed comparison between HBase and Cassandra
Here is a (biased) comparison between MongoDB and CouchDB
Short answer: test before you use in production.
I can offer my experience with both HBase (extensive) and MongoDB (just starting).
Even though they are not the same kind of stores, they solve the same problems:
scalable storage of data
random access to the data
low latency access
We were very enthusiastic about HBase at first. It is built on Hadoop (which is rock-solid), it is under Apache, it is active... what more could you want? Our experience:
HBase is fragile
an administrator's nightmare (full of configuration settings whose defaults are less than perfect, nontransparent configuration, changes from version to version, ...)
loses data (unless you have set configuration X and changed Y to... you get the point :) - we found that out when HBase crashed and we lost 2 hours (!!!) of data because the WAL was not set up properly; see the sketch after this list
lacks secondary indexes
lacks any way to back up the database without shutting it down
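On the WAL point above: with the happybase client, for example, a write can explicitly skip the write-ahead log, trading durability for speed. A hedged sketch, assuming an HBase Thrift server on localhost and a hypothetical table:

    # HBase write via happybase (pip install happybase); needs a Thrift server.
    import happybase

    connection = happybase.Connection("localhost")
    table = connection.table("events")  # hypothetical table with column family 'cf'

    # wal=True (the default) makes the write recoverable after a crash;
    # wal=False is faster, but the mutation is lost if the region server dies.
    table.put(b"row-1", {b"cf:payload": b"hello"}, wal=True)
    connection.close()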
All in all, HBase was a nightmare. Wouldn't recommend it to anyone except to our direct competitors. :)
MongoDB solves all these problems and many more. It is a delight to set up, it makes administration a simple and transparent job, and the default configuration settings actually make sense. You can perform (hot) backups and you can have secondary indexes. From what I've read, I wouldn't recommend MapReduce on MongoDB (JavaScript, only 1 thread per node), but you can use Hadoop for that.
And it is also VERY active when compared to HBase.
Also:
http://www.google.com/trends?q=HBase%2CMongoDB
Need I say more? :)
UPDATE: many months later I must say MongoDB delivered on all accounts and more. The only real downside is that hosting companies do not offer it the way they offer MySQL. ;)
It also looks like MapReduce is bound to become multi-threaded in 2.2. Still, I wouldn't use MR this way. YMMV.
Cassandra is good at writing data: it has the advantage that "writes never fail", and it has no single point of failure.
HBase is very good for data processing. HBase is based on the Hadoop Distributed File System (HDFS), so HBase doesn't need to worry about data replication or data consistency. HBase does have a single point of failure, though. I am not really sure what that implies; if it has a single point of failure, then it is somehow similar to an RDBMS, where we also have a single point of failure. I might be wrong about this, since I am quite new.
How about Riak? Does anyone have experience using Riak? I read somewhere that you need to pay for it, but I am not sure; I'd appreciate an explanation.
One more thing: which one would you prefer when your only concern is reading a lot of data, with no concern about writes? Imagine you have a petabyte-scale database and you want to make fast searches; which NoSQL database would you prefer?