Design of a real-time sentiment analysis system - MongoDB

We're trying to design a real time sentiment analysis system (on paper) for a school project. We got some (very vague) negative feedback on how we store our data, but it isn't fully clear why this would be a bad idea or how we'd best improve this.
The setup of the system is as follows:
Data from real-time news RSS feeds is gathered in a Kafka messaging queue, which connects to our preprocessing platform. This preprocessing step transforms each news article into semi-structured data on which we can run sentiment analysis.
We then want to store both the sentiment analysis and the preprocessed, semi-structured news article for reference.
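For concreteness, here is a minimal sketch of the consumer side of that pipeline, assuming kafka-python; the topic name and the score_sentiment/store helpers are hypothetical placeholders, not part of our actual design:

import json
from kafka import KafkaConsumer

def score_sentiment(text):
    # Placeholder for whatever sentiment model we end up using.
    return 0.0

def store(doc):
    # Placeholder for the database write (MongoDB, Cassandra, ...).
    print(doc)

consumer = KafkaConsumer(
    'news-articles',  # hypothetical topic name
    bootstrap_servers=['localhost:9092'],
    value_deserializer=lambda m: json.loads(m.decode('utf-8')),
)

for message in consumer:
    article = message.value  # one preprocessed, semi-structured article
    article['sentiment'] = score_sentiment(article.get('body', ''))
    store(article)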
We were thinking of using MongoDB as the database for this, since it gives you a lot of freedom in defining different fields within the value (of the key:value pair you store), as opposed to Cassandra (which would be faster).
The basic use case is for people to look up an institution and get the sentiment analysis of a bunch of news articles in a certain timeframe.
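With that use case, the lookup would be a single range query. A sketch with pymongo, where the field names (institution, published_at, sentiment) are assumptions rather than a settled schema:

from datetime import datetime
from pymongo import MongoClient

articles = MongoClient()['sentiment_db']['articles']

cursor = articles.find(
    {
        'institution': 'Example Corp',  # made-up institution
        'published_at': {
            '$gte': datetime(2016, 1, 1),
            '$lt': datetime(2016, 2, 1),
        },
    },
    {'title': 1, 'sentiment': 1, 'published_at': 1},
).sort('published_at', 1)

for doc in cursor:
    print(doc['published_at'], doc['sentiment'], doc['title'])

A compound index on (institution, published_at) would presumably keep this query fast.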
As a possible improvement: do we need to use a NoSQL database, or would a SQL database make sense? I think our system could benefit from being denormalized (as NoSQL is by default), and we wouldn't need operations such as joins, which are significantly faster in SQL systems.
Does anyone know of existing systems that do similar things, for comparison?
Any input would be highly appreciated.

Related

Does NoSQL Fit the Bill for Reporting Software?

I am currently developing a software system that imports and normalizes historical data in various formats (XML, JSON, CSV). Right now we are using SQL Server and are looking for the best replacement for this tool (Postgres or NoSQL). 90% of the time, the (archived/historical/static) data is accessed via a web client and used in a read-only fashion, with users picking and choosing canned reports. Changes to the data only occur to update incorrect information.
The replacement DB must be able to store and report on tens of millions of rows very quickly and scale across multiple servers with ease (data replication, clustering, etc.). There must also be data integrity, so if I update one KPI (let's say Cost per Hr), then all the reports that rely on that KPI are updated accordingly.
Having no prior experience with NoSQL databases, I am wondering whether it is even the right choice for reporting software. We would like to allow users to create their own custom reports, and that means being able to query any data as opposed to our canned reports, but I don't know if this throws a wrench into the comparison between SQL and NoSQL.
There are a few too many variables in the question to comfortably answer it in its entirety, but here's an attempt.
Your choice of SQL vs. NoSQL should be based on data structure. Scalability is generally a second-tier concern; it comes somewhat easier on some NoSQL platforms, but as always, it isn't free.
If you're querying tens of millions of rows 'very quickly', you are seriously testing the limits of either platform. An RDBMS gives you a plethora of options at the cost of speed, while a NoSQL store, although quite fast at ingesting at that scale, would make you code most of the RDBMS smartness into your application. Choose your poison.
Updating a metric and 'automagically' updating reports is clearly business logic, and shouldn't be tied down to platform selection.
PostgreSQL has, in the recent past, picked up a lot of arsenal for dealing with file formats (JSON et al.) and is clearly worth a try (sans easy scalability).
Having said that, you should really investigate Postgres' otherwise-forgotten asset: foreign data wrappers (FDWs). You could consider using a NoSQL setup to ingest large volumes of unstructured data, and then use Postgres' powerful semantics on top of it to create an asynchronous yet structured backend for your application. Done well, that could mean the best of both worlds.
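As a rough sketch of that FDW idea, assuming the mongo_fdw extension is installed and driving it from Python with psycopg2 (every name below is illustrative, not a recommendation):

import psycopg2

conn = psycopg2.connect('dbname=reporting')
with conn, conn.cursor() as cur:
    cur.execute("CREATE EXTENSION mongo_fdw")
    cur.execute(
        "CREATE SERVER mongo_srv FOREIGN DATA WRAPPER mongo_fdw "
        "OPTIONS (address '127.0.0.1', port '27017')"
    )
    cur.execute("CREATE USER MAPPING FOR CURRENT_USER SERVER mongo_srv")
    cur.execute(
        "CREATE FOREIGN TABLE raw_imports (_id NAME, cost_per_hr NUMERIC) "
        "SERVER mongo_srv OPTIONS (database 'ingest', collection 'imports')"
    )
    # The unstructured ingest now queries and joins like any local table.
    cur.execute("SELECT avg(cost_per_hr) FROM raw_imports")
    print(cur.fetchone())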

Item recommendation service

I'm supposed to build a book recommendation service using MyMediaLite. So far I have collected books from a website using the Nutch crawler and am storing the info in HBase. The problem is that I don't fully understand how this all works. Going by the examples, I have to pass in test data and training data files with user-item ID pairs and a rating. But what about the other information about a book, like categories and authors? How is it possible to find "similar" books by their information etc., without information about users (so far)? And is it possible to pass data directly from HBase, without storing it to a file and then loading it in?
Or would Apache Mahout or LibRec suit this job better?
User-item-rating information, often in a matrix, is the basis for collaborative filtering algorithms (user-user CF, item-item CF, matrix factorization, and others). You're using other people's opinions to form recommendations. There's no innate recognition of the content of the items themselves. For that, you'll need some sort of content-based filtering algorithm or data mining technique. These are often used in the "user cold start" scenario you described: you have lots of information about items but not about a particular user's preferences.
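As a toy illustration of that distinction (numpy assumed, ratings invented): item-item similarity can be computed from the rating matrix alone, with no book metadata involved.

import numpy as np

# rows = users, columns = books; 0 means "not rated"
R = np.array([
    [5, 4, 0, 1],
    [4, 5, 1, 0],
    [0, 1, 5, 4],
    [1, 0, 4, 5],
], dtype=float)

# cosine similarity between item (column) vectors
norms = np.linalg.norm(R, axis=0)
sim = (R.T @ R) / np.outer(norms, norms)

# books most similar to book 0, based purely on other people's ratings
print(np.argsort(-sim[0])[1:])  # -> [1 3 2]

For the content-based side (categories, authors), you would build item feature vectors instead and compute similarity over those.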
First, think about your end goal and the data you have. Based on your product needs and available data, you can choose the right algorithm for your purposes. I highly recommend the RecSys course on Coursera for learning more: https://www.coursera.org/learn/recommender-systems. It's taught by a leader in the field.

What is the best data structure to store road network data

I am working on a map-matching/trajectory-matching project. What I am unsure about, after reading a number of research papers, is which data structure is most efficient for storing the road network (described by a weighted directed graph) so as to facilitate fast real-time searching. I keep coming across things like grids, M-trees, and quadtrees. Do I need a database in the backend for these? I am working in MATLAB at the moment but can switch languages. What programming languages are used in actual satellite navigators?
Help will be much appreciated
There are many such index structures, like k-d trees, R-trees, etc., that can be built on the data you store in the database. You could store your data in SQL Server and use these indexes. You might also want to take a look at SQL Server's spatial tools, which help you perform spatial queries on your data.
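As a small illustration of the spatial-index idea (SciPy assumed, coordinates made up): a k-d tree gives fast nearest-node lookups, which is typically the first step in snapping a GPS trace onto the network.

import numpy as np
from scipy.spatial import cKDTree

# (x, y) positions of road-network nodes, e.g. projected lon/lat
nodes = np.array([[0.0, 0.0], [1.0, 0.2], [2.1, 0.1], [2.0, 1.5]])
tree = cKDTree(nodes)

gps_trace = np.array([[0.9, 0.3], [2.0, 0.2]])
dists, idx = tree.query(gps_trace)  # nearest network node per GPS fix
print(idx)  # -> [1 2]

The actual route matching would then be a search over the weighted directed graph, starting from those candidate nodes.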

NoSQL for time series/logged instrument reading data that is also versioned

My Data
It's primarily monitoring data, passed in the form of Timestamp: Value, for each monitored value, on each monitored appliance. It's regularly collected over many appliances and many monitored values.
Additionally, it has the quirky feature of many of these data values being derived at the source, with the calculation changing from time to time. This means that my data is effectively versioned, and I need to be able to simply call up only data from the most recent version of the calculation. Note: This is not versioning where the old values are overwritten. I simply have timestamp cutoffs, beyond which the data changes its meaning.
My Usage
Downstream, I'm going to have various undefined data mining/machine learning uses for the data. It's not really clear yet what those uses are, but it is clear that I will be writing all of the downstream code in Python. Also, we are a very small shop, so I can really only deal with so much complexity in setup, maintenance, and interfacing to downstream applications. We just don't have that many people.
The Choice
I am not allowed to use a SQL RDBMS to store this data, so I have to find the right NoSQL solution. Here's what I've found so far:
Cassandra
Looks totally fine to me, but it seems like some of the major users have moved on. It makes me wonder if it's just not going to be that much of a vibrant ecosystem. This SE post seems to have good things to say: Cassandra time series data
Accumulo
Again, this seems fine, but I'm concerned that this is not a major, actively developed platform. It seems like this would leave me a bit starved for tools and documentation.
MongoDB
I have a, perhaps irrational, intense dislike for the Mongo crowd, and I'm looking for any reason to discard this as a solution. It seems to me like the data model of Mongo is all wrong for things with such a static, regular structure. My data even comes in (and has to stay in) order. That said, everybody and their mother seems to love this thing, so I'm really trying to evaluate its applicability. See this and many other SE posts: What NoSQL DB to use for sparse Time Series like data?
HBase
This is where I'm currently leaning. It seems like the successor to Cassandra with a totally usable approach for my problem. That said, it is a big piece of technology, and I'm concerned about really knowing what it is I'm signing up for, if I choose it.
OpenTSDB
This is basically a time-series specific database, built on top of HBase. Perfect, right? I don't know. I'm trying to figure out what another layer of abstraction buys me.
My Criteria
Open source
Works well with Python
Appropriate for a small team
Very well documented
Has specific features to take advantage of ordered time series data
Helps me solve some of my versioned data problems
So, which NoSQL database actually can help me address my needs? It can be anything, from my list or not. I'm just trying to understand what platform actually has code, not just usage patterns, that support my super specific, well understood needs. I'm not asking which one is best or which one is cooler. I'm trying to understand which technology can most natively store and manipulate this type of data.
Any thoughts?
It sounds like you are describing one of the most common use cases for Cassandra. Time series data in general is often a very good fit for the Cassandra data model. More specifically, many people store metric/sensor data like you are describing. See:
http://rubyscale.com/blog/2011/03/06/basic-time-series-with-cassandra/
http://www.datastax.com/dev/blog/advanced-time-series-with-cassandra
http://engineering.rockmelt.com/post/17229017779/modeling-time-series-data-on-top-of-cassandra
As far as your concerns with the community go, I'm not sure what is giving you that impression, but there is quite a large community (see IRC, the mailing lists) as well as a growing number of Cassandra users.
http://www.datastax.com/cassandrausers
Regarding your criteria:
Open source
Yes
Works well with Python
http://pycassa.github.com/pycassa/
Appropriate for a small team
Yes
Very well documented
http://www.datastax.com/docs/1.1/index
Has specific features to take advantage of ordered time series data
See above links
Helps me solve some of my versioned data problems
If I understand your description correctly, you could solve this in multiple ways. You could start writing a new row when the version changes. Alternatively, you could use composite columns to store the version along with the timestamp/value pair.
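As a hedged sketch of the first option ("new row per version"), using the DataStax Python driver and CQL rather than pycassa; the table and column names are illustrative:

from cassandra.cluster import Cluster

session = Cluster(['127.0.0.1']).connect('metrics_ks')
session.execute("""
    CREATE TABLE IF NOT EXISTS readings (
        appliance text,
        metric    text,
        version   int,      -- bumped whenever the source calculation changes
        ts        timestamp,
        value     double,
        PRIMARY KEY ((appliance, metric, version), ts)
    ) WITH CLUSTERING ORDER BY (ts DESC)
""")

# "Only data from the most recent version" is then a single-partition,
# time-ordered query:
rows = session.execute(
    "SELECT ts, value FROM readings "
    "WHERE appliance=%s AND metric=%s AND version=%s",
    ('pump-7', 'flow_rate', 3),
)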
I'll also note that Accumulo, HBase, and Cassandra all have essentially the same data model. You will still find small differences around the data model in regards to specific features that each database offers, but the basics will be the same.
The bigger difference between the three will be the architecture of the system. Cassandra takes its architecture from Amazon's Dynamo: every server in the cluster is the same, and it is quite simple to set up. HBase and Accumulo are more direct clones of BigTable. These have more moving parts and will require more setup and more types of servers - for example, setting up HDFS, ZooKeeper, and the HBase/Accumulo-specific server types.
Disclaimer: I work for DataStax (we work with Cassandra)
I only have experience with Cassandra and MongoDB, but my experience might add something.
So you're basically doing time-based metrics?
OK, if I understand right: you use the timestamp as a versioning mechanism, so that you query by a certain timestamp - say, to get the latest calculation used, you go by the metric ID (or whatever), sort by ts DESC, and take the first row?
It sounds like a versioned key-value store at times.
With this in mind I probably would not recommend either of the two I have used.
Cassandra is too rigid and too hierarchical, too based around how you query, to the point where you can only make one pivot of graph data from the column family (I presume you would want to graph these metrics), which is crazy - hence why I dropped it. As for searching (which Facebook uses it for, and only for that), it's not that impressive either.
MongoDB - well, I love MongoDB and I am an active member of the user group, and it could work here if you didn't use a key-value storage policy. But at the end of the day, if your mind is not set on it and you don't like the tech, then let me be the very first to say: don't use it! You will be no good with a tech that you don't like, so stay away from it.
Though I would picture this happening in Mongo much like this:

{
    _id: ObjectId(),
    metricId: 'AvailableMessagesInQueue',
    formula: '4+5/10.01',
    result: NaN,
    ts: ISODate()
}
And you query for the latest version of your calculation by:
var results = db.metrics.find({ 'metricId': 'AvailableMessagesInQueue' }).sort({ ts: -1 });
var latest = results.next();
This would output the doc structure you see above. Without knowing more about exactly how you wish to query, and the general server and app scenario etc., that's the best I can come up with.
I found this thread on HBase though: http://mail-archives.apache.org/mod_mbox/hbase-user/201011.mbox/%3C5A76F6CE309AD049AAF9A039A39242820F0C20E5#sc-mbx04.TheFacebook.com%3E
It might be of interest; it seems to support the argument that HBase is a good time-based key-value store.
I have not personally used HBase so do not take anything I say about it seriously....
I hope I have added something, if not you could try narrowing your criteria so we can answer more dedicated questions.
Hope it helps a little,
Not a plug for any particular technology, but this article on time-series storage using MongoDB might provide another way of thinking about the storage of large amounts of "sensor" data.
http://www.10gen.com/presentations/mongodc-2011/time-series-data-storage-mongodb
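For reference, the bucketing pattern commonly recommended for sensor data in MongoDB (I can't vouch for that exact talk) looks roughly like this in pymongo: one document per sensor per hour, one sub-field per minute, so an hour of readings is a single in-place update target. All names here are illustrative.

from datetime import datetime
from pymongo import MongoClient

readings = MongoClient()['telemetry']['readings']

def record(sensor_id, ts, value):
    hour = ts.replace(minute=0, second=0, microsecond=0)
    readings.update_one(
        {'sensor': sensor_id, 'hour': hour},  # one bucket per sensor-hour
        {'$set': {'values.%d' % ts.minute: value}},
        upsert=True,
    )

record('s1', datetime(2013, 5, 1, 14, 7), 21.5)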
Axibase Time-Series Database
Open source
There is a free Community Edition
Works well with Python
https://github.com/axibase/atsd-api-python. There are also other language wrappers, for example ATSD R client.
Appropriate for a small team
Built-in graphics and rule engine make it productive for building an in-house reporting, dashboarding, or monitoring solution with less coding.
Very well documented
It's hard to beat IBM Redbooks, but we're trying. The API, configuration, and administration are documented in detail and with examples.
Has specific features to take advantage of ordered time series data
It's a time-series database from the ground up, so aggregation, filtering, and non-parametric ARIMA and Holt-Winters forecasts are available.
Helps me solve some of my versioned data problems
ATSD supports versioned time-series data natively in SE and EE editions. Versions keep track of status, change-time and source changes for the same timestamp for audit trails and reconciliations. It's a useful feature to have if you need clean, verified data with tracing. Think energy metering, PHMR records. ATSD schema also supports series tags, which you could use to store versioning columns manually if you're on CE edition or you need to extend default versioning columns: status, source, change-time.
Disclosure - I work for the company that develops ATSD.

MongoDB for Forex

I was wondering: can MongoDB be used to store Forex data which would later be presented in client applications as real-time data, with analysis in the form of graphs? I will have different sources with different feeds which cannot be obtained from mainstream data providers.
Look at these papers coming out of the MongoSF conference, particularly the ones about analytics. Be aware that data storage is only one aspect of - in this case - a very complex system design.
MongoDB can be used to store Forex data, the same as any database (that I can think of) would be able to. I think the big question is: what do you want to get out of your data storage?
If you are after high performance, then NoSQL is certainly a good direction to go in, as they typically provide better speeds on large datasets when the table relationships get complex.
To be honest though, regardless of feeds, Forex data can typically be stored as DateTime/High/Low/Open/Close/Currency/Interval, right? I use SQL Server to do something very similar to what you described, and accessing the stored data is NOT the performance bottleneck. It's when you start translating the data into graphs, adding indicators, etc. that the good design decisions pay off.
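For what it's worth, that flat bar structure maps directly onto one Mongo document per interval. A minimal pymongo sketch, with the field names from the list above and everything else assumed:

from datetime import datetime
from pymongo import ASCENDING, MongoClient

bars = MongoClient()['forex']['bars']
# compound index so "one pair over one time range" chart queries stay cheap
bars.create_index([('currency', ASCENDING), ('datetime', ASCENDING)])

bars.insert_one({
    'datetime': datetime(2015, 6, 1, 9, 0),
    'open': 1.0921, 'high': 1.0934, 'low': 1.0917, 'close': 1.0929,
    'currency': 'EURUSD',
    'interval': '1m',
})

window = bars.find({
    'currency': 'EURUSD',
    'datetime': {'$gte': datetime(2015, 6, 1), '$lt': datetime(2015, 6, 2)},
}).sort('datetime', 1)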
MongoDB can be used to store Forex data just like the rest of the database systems can. However, if one is after high performance, NoSQL is a better option because it provides better speeds on large sets of data.
A little information on MongoDB and use with financial markets:
https://www.mongodb.com/blog/post/mongodb-single-platform-all-financial-data-ahl
Arctic is a great open source datastore solution which uses MongoDB and Python:
http://www.slideshare.net/JamesBlackburn1/2015-pydata-highperformance-iot-and-financial-data-storage-with-python-and-mongodb
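For a flavor of it, a minimal Arctic example, assuming a MongoDB instance on localhost and a made-up library/symbol name:

import pandas as pd
from arctic import Arctic

store = Arctic('localhost')
store.initialize_library('FX')  # one-time setup
library = store['FX']

ticks = pd.DataFrame(
    {'bid': [1.0921, 1.0922], 'ask': [1.0923, 1.0924]},
    index=pd.to_datetime(['2015-06-01 09:00', '2015-06-01 09:01']),
)
library.write('EURUSD', ticks)  # chunked and compressed into MongoDB
print(library.read('EURUSD').data)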