Is there any GIS extension for Apache Cassandra?

I want to use Cassandra for my web application because it will manage a lot of information. The problem is that it will also handle a lot of geographical data, so I need a GIS (http://en.wikipedia.org/wiki/Geographic_information_system) extension for Cassandra to capture, store, manipulate, analyze, manage, and present all types of geographical data.
Something like PostGIS for PostgreSQL. Does such an extension already exist? Is there something similar? Any suggestions?
Thanks for your help in advance :)

Well, one of our clients at PlayOrm (a client on top of Cassandra with its own command-line tool) is heavy into GIS, so we are going to be adding features to store GIS data, though I think some already exist. I am meeting with someone next week about this, so in the meantime you may want to check out PlayOrm.
Data will be read from Cassandra and displayed on one of the largest monitors I have seen, with some big huge machine(s) backing it and tons of graphics cards... pretty cool setup.
Currently, PlayOrm does joins and Scalable-SQL, but it is very likely we will add spatial queries to the mix if we continue to work with GIS data.
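Until such features land, a common do-it-yourself approximation on plain Cassandra is to quantize coordinates into grid cells and use the cell as the partition key; a nearby-points query then fans out over the neighboring cells. Here is a minimal sketch using the Python cassandra-driver; the keyspace, table name, and cell size are all assumptions:

```python
# Minimal sketch: grid-cell bucketing to approximate spatial queries on
# plain Cassandra. Keyspace, table, and cell size are assumptions.
from uuid import uuid4
from cassandra.cluster import Cluster

CELL = 0.01  # roughly 1 km cells at mid latitudes; tune for your data

def cell_of(lat, lon):
    """Quantize a coordinate into a grid-cell partition key like '48.85:2.35'."""
    return f"{lat // CELL * CELL:.2f}:{lon // CELL * CELL:.2f}"

cluster = Cluster(["127.0.0.1"])
session = cluster.connect("gisdemo")  # assumed, pre-created keyspace
session.execute("""
    CREATE TABLE IF NOT EXISTS points_by_cell (
        cell text, id uuid, lat double, lon double, name text,
        PRIMARY KEY (cell, id))""")

def insert_point(lat, lon, name):
    session.execute(
        "INSERT INTO points_by_cell (cell, id, lat, lon, name) "
        "VALUES (%s, %s, %s, %s, %s)",
        (cell_of(lat, lon), uuid4(), lat, lon, name))

def points_near(lat, lon):
    """Fetch the 3x3 block of cells around a coordinate; filter client-side."""
    rows = []
    for dy in (-CELL, 0, CELL):
        for dx in (-CELL, 0, CELL):
            rows.extend(session.execute(
                "SELECT lat, lon, name FROM points_by_cell WHERE cell = %s",
                (cell_of(lat + dy, lon + dx),)))
    return rows
```

This only gives bounding-box-style lookups, not true spatial predicates, but it scales with Cassandra's partitioning model.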

Related

Conceptual Help needed with Database Queries

So, I don't have a specific issue here other than a general lack of knowledge, and I am hoping that the big brains on here can give me a nudge in the right direction or maybe refer me to an online resource that could help. Here is the general problem I am trying to solve.
I have a mongo database that holds a handful of collections where I store data retrieved from some software that we use for our day to day operations. We are grabbing this data from the API and storing it in Mongo to build up a historical source of data (the API is limited in the timeframe of data that can be retrieved.)
The window for the historical data from the API is 7 days.
The data in question has a unique id for each item we pull from the API, which allows us to grab a record, store it, and modify it as required if it changes over time. This worked just fine for our needs until we started to notice a few discrepancies between the data we stored in Mongo and what we would get out of our software when we ran "reports." After digging into it, it turns out there are a few edge cases where a record is deleted from the software, but if we had already grabbed that record through the API, it would remain in our Mongo database.
I'm looking for any advice on how to handle this situation. Ideally, I suppose, we would like to remove these deleted records from our MongoDB in order to match what is in the software... but I'm having trouble dreaming up the process to make this happen. Apparently this is one of many gaping holes in my entirely self-taught knowledge of this stuff.
Thanks for any assistance.
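For what it's worth, one hedged way to tackle the deletions: on every sync, pull the full set of ids the API still reports inside its 7-day window, then flag any stored records from that same window that no longer appear. A sketch with pymongo; fetch_live_ids and the field names ("item_id", "fetched_at") are hypothetical:

```python
# Sketch of a reconciliation pass. Assumes pymongo and a hypothetical
# fetch_live_ids() returning every item id the API reports for its
# 7-day window. Field names are assumptions.
from datetime import datetime, timedelta
from pymongo import MongoClient

def reconcile(collection, fetch_live_ids):
    cutoff = datetime.utcnow() - timedelta(days=7)
    live_ids = set(fetch_live_ids())  # ids the API still knows about
    # Only touch records inside the API's window: anything older can no
    # longer be checked against the API and must be left alone.
    stale = collection.find(
        {"fetched_at": {"$gte": cutoff}, "item_id": {"$nin": list(live_ids)}},
        {"item_id": 1})
    stale_ids = [doc["item_id"] for doc in stale]
    # Flag rather than delete, so one bad API response can't destroy history.
    collection.update_many(
        {"item_id": {"$in": stale_ids}},
        {"$set": {"deleted_in_source": True,
                  "deleted_seen_at": datetime.utcnow()}})

client = MongoClient()
reconcile(client.ops_history.items, fetch_live_ids=lambda: [])  # wire in your API call
```

Flagging instead of deleting also lets your "reports" exclude removed records while keeping the historical archive intact.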

Content revision history: moving it to another database?

I keep a content revision history for a certain content type. It's stored in MongoDB, but since the data is not frequently accessed I don't really need it there taking up memory. I'd rather put it in a slower, disk-based database.
Which database should I put it in? I'm looking for something really cheap with cloud hosting available, and I don't need speed. I'm looking at SimpleDB, but it doesn't seem very popular. An RDBMS doesn't seem easy to work with, since my data is structured into documents. What are my options?
Thanks
It depends on how often you want to look at this old data.
Why don't you mongodump it to your local disk and mongorestore it when you want it back? (See the mongodump documentation.)
OR
Set up a local mongo instance and clone the database (the MongoDB docs describe how).
Based on your questions and comments, you might not find the perfect solution. You want free or dirt cheap storage, and you want to have your data available online.
There is only one solution I can see feasible:
Stick with MongoDB. SimpleDB does not allow you to store documents, only key-value pairs.
You could create a separate collection for your history and use a cloud service that gives you a free tier. For example, http://MongoLab.com gives you a 240 MB free tier.
If you exceed the free tier, you can look at discarding the oldest data, moving it to offline storage, or start paying for what you are using.
If your data grows a lot, you will have to decide whether to pay for it, keep it available online or offline, or discard it.
If you are dealing with a lot of large objects (BLOBs or CLOBs), you can also store the 'non-indexed' data separate from the database. This keeps the database both cheap and fast. The large objects can be retrieved from any cheap storage when needed.
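The separate-collection idea above can be a simple periodic job: copy revisions older than some cutoff into an archive collection (possibly on a cheaper deployment), then delete them from the hot collection. A sketch with pymongo; the collection names, field names, and cutoff are assumptions:

```python
# Sketch: move revisions older than a cutoff into a cheaper "archive"
# collection, e.g. one living on a free-tier cloud deployment.
# Collection and field names are assumptions.
from datetime import datetime, timedelta
from pymongo import MongoClient

client = MongoClient()
live = client.cms.revisions
archive = client.cms_archive.revisions  # point this at the cheap deployment

def archive_old_revisions(days=90):
    cutoff = datetime.utcnow() - timedelta(days=days)
    old = list(live.find({"created_at": {"$lt": cutoff}}))
    if old:
        archive.insert_many(old)  # copy first...
        live.delete_many({"_id": {"$in": [d["_id"] for d in old]}})  # ...then remove
```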
Cloudant.com is pretty cool for hosting your DB in the cloud; it runs BigCouch, which is a NoSQL thing. I'm using it for my social site in the works, since CouchDB (BigCouch) similarly has an open-ended structure and you talk to it via JSON. It's pretty awesome stuff, though it's weird to move from SQL to using map-reduce; once you do, it's worth it. I did some research because I've been a .NET guy for a long time, but I'm moving to Linux and Node.js, partly out of boredom and the love of JavaScript. These things just fit together, because Node.js is all JavaScript on the backend and talks seamlessly to CouchDB, and the whole thing scales like crazy.

Is NoSQL the right tool for a multi-level, forum-like comment system?

I want to build a web app similar to Reddit.com, with multiple levels of comments and lots of reads and writes. I was wondering if NoSQL, and MongoDB in particular, is the right tool for this?
Comments are really a natural fit for a NoSQL database, no doubt: you avoid joining a comments table to itself multiple times, and that means your system can scale out!
With MongoDB you can store the whole hierarchy within one document. Some people will say there will be problems with atomic updates, but I guess that's not a problem, because you can load and save back the entire comment tree. In any case, you can easily redesign your system later to support atomic updates and avoid concurrency issues.
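To make the load-and-save-back idea concrete, here is a small pymongo sketch; the document schema is an assumption:

```python
# Sketch: a whole comment tree stored in a single MongoDB document and
# updated by load-modify-save, as described above. Schema is an assumption.
from pymongo import MongoClient

client = MongoClient()
threads = client.forum.threads

threads.insert_one({
    "_id": "post-42",
    "comments": [
        {"author": "alice", "text": "First!", "replies": [
            {"author": "bob", "text": "Welcome.", "replies": []},
        ]},
    ],
})

def add_reply(thread_id, path, reply):
    """path is a list of indexes into nested 'replies' arrays, e.g. [0, 0]."""
    doc = threads.find_one({"_id": thread_id})
    node = {"replies": doc["comments"]}  # treat the top level like a replies array
    for i in path:
        node = node["replies"][i]
    node["replies"].append(reply)
    # Save the entire tree back; simple, but last-writer-wins under concurrency.
    threads.replace_one({"_id": thread_id}, doc)

add_reply("post-42", [0], {"author": "carol", "text": "Me too.", "replies": []})
```

The trade-off is exactly the one mentioned above: the whole-document rewrite is trivially simple, but concurrent editors can overwrite each other, so heavy write traffic would push you toward per-comment documents and atomic $push updates.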
Reddit itself uses Cassandra. If you want something "similar to reddit.com," maybe you should look at their source -- https://github.com/reddit/reddit/wiki.
Here's what David King (ketralnis) said earlier this year about the Cassandra 0.7 release: "Running any large website is a constant race between scaling your user base and scaling your infrastructure to support it. Our traffic more than tripled this year, and the transparent scalability afforded to us by Apache Cassandra is in large part what allowed us to do it on our limited resources. Cassandra v0.7 represents the real-life operations lessons learned from installations like ours and provides further features like column expiration that allow us to scale even more of our infrastructure."
However, Rick Branson notes that Reddit doesn't take full advantage of Cassandra's features, so if you were to start from scratch, you'd want to do some things differently.

I need suggestions for a distributed media storage data store

I want to develop a multimedia system. The system needs to store millions of videos and images, so I want to select a distributed storage subsystem. Can anyone give me some suggestions? Thanks!
I guess the best option for the 'millions of videos and images' is a content distribution/delivery network (CDN):
A CDN is a server setup which allows for faster, more efficient delivery of your media files. It does this by maintaining copies of your media at different points of presence (POPs) along a global network to ensure quick client access and the fastest delivery possible.
If you use a CDN you don't need to worry about many of these problems (distribution, fast access). Integration with a CDN should also be very simple.
@yi_H:
You can configure your writes to be replicated to multiple nodes before the call returns to the client. Whether that is needed is of course up to the use case, and it definitely involves a performance hit: if you are implementing a write-heavy analytical database, it will have a significant impact on write throughput.
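In MongoDB, for instance, that knob is the write concern. A quick pymongo illustration; the replica-set name, database, and numbers are assumptions:

```python
# Sketch: require acknowledgement from 2 replica-set members before a write
# returns, trading write latency for durability. Names are assumptions.
from pymongo import MongoClient
from pymongo.write_concern import WriteConcern

client = MongoClient("mongodb://localhost:27017/?replicaSet=rs0")
media = client.mediastore.get_collection(
    "metadata", write_concern=WriteConcern(w=2, wtimeout=5000))
media.insert_one({"path": "/videos/clip.mp4", "bytes": 1048576})
```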
As for all the other points you make about the question's lack of requirements etc., I second those.
Having a replicated file system with the metadata in a NoSQL database is a very common way of doing things. Have you considered this kind of approach?
Have you taken a look at MongoDB GridFS? I have never used it, but it is something I would look at to see if it gives you any ideas.
You gave us (near) zero information about what your requirements are, e.g.:
Do you want atomic transactions?
Is the system read or write heavy?
Do you need fast queries or want to batch-process the data set?
How big are the videos?
Do you want to distribute data locally (on a LAN) or spanning multiple data centers / continents?
How are we supposed to pick the right tool if we don't know what it needs to support?
Without any knowledge of the system, I would advise using some kind of FS replication for the videos and images and then storing the metadata associated with the items in either MongoDB, MySQL master-master replication, or MySQL Cluster.
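That split might look like this: the video bytes live on the replicated filesystem, and the database holds only a small metadata document pointing at them. A pymongo sketch; the paths and fields are assumptions:

```python
# Sketch: videos live on a replicated filesystem; MongoDB stores only a
# small metadata document per item. Paths and fields are assumptions.
from datetime import datetime
from pymongo import MongoClient

client = MongoClient()
metadata = client.mediastore.items

def register_video(item_id, fs_path, size_bytes, codec):
    metadata.insert_one({
        "_id": item_id,
        "fs_path": fs_path,        # e.g. a path on the replicated FS
        "size_bytes": size_bytes,
        "codec": codec,
        "uploaded_at": datetime.utcnow(),
    })

def locate_video(item_id):
    doc = metadata.find_one({"_id": item_id})
    return doc["fs_path"] if doc else None
```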
Distributed related to what?
If you are talking of replication to distribute:
MongoDB is restricted to master-slave replication, so only one node is able to accept writes, which leaves you with a single point of failure for a truly distributed system.
CouchDB is able to peer-to-peer replicate.
You can find a very good comparison here, and here it is also compared with HBase.
With CouchDB you also have to be aware that you are going to talk HTTP to the database, which has built-in web services.
Regards,
Chris
An alternative is to use MongoDB's GridFS, serving as a (very easily manageable) redundant and distributed filesystem.
Some will say that it's slow on reads (and it is, mostly because of the nature of its design), but that doesn't have to be a dealbreaker for your system as a whole, because if you need performance later on, you can always put Varnish or Squid in front of the filesystem tier.
For all I know, Squid also supports an on-disk cache for all the less-hot files.
Sources:
http://www.mongodb.org/display/DOCS/GridFS
http://www.squid-cache.org/Doc/config/cache_dir/
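A minimal GridFS round trip with pymongo looks like this (the database and file names are assumptions):

```python
# Sketch: storing and retrieving a file through GridFS with pymongo.
# Database name and filename are assumptions.
import gridfs
from pymongo import MongoClient

client = MongoClient()
fs = gridfs.GridFS(client.mediastore)

# Store the file; GridFS chunks it across documents automatically.
with open("clip.mp4", "rb") as f:
    file_id = fs.put(f, filename="clip.mp4")

# Stream it back (e.g. from behind Varnish/Squid as suggested above).
data = fs.get(file_id).read()
```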

How to collect statistics about data usage in an enterprise application?

For optimization purposes, I would like to collect statistics about data usage in an enterprise Java application. In practice, I would like to know which database tables, and moreover which individual records, are accessed most frequently.
What I thought I could do is write an aspect that records all data access and asynchronously writes the results to a database, but I feel I would be re-inventing the wheel by doing so. So, are there any existing open-source frameworks that already tackle this problem, or is it somehow possible to squeeze this information directly from MySQL?
This might be useful - have you seen the UserTableMonitoring project?
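On the "squeeze it directly from MySQL" angle: MySQL 5.6+ keeps per-table I/O counters in performance_schema, which covers the table-level half of the question (per-record statistics would still need something like the aspect approach). A sketch, assuming performance_schema is enabled and using PyMySQL; connection details are placeholders:

```python
# Sketch: pull per-table access counters straight from MySQL's
# performance_schema (MySQL 5.6+, performance_schema enabled).
# Connection details are assumptions; any client library would do.
import pymysql

conn = pymysql.connect(host="localhost", user="monitor",
                       password="secret", database="performance_schema")
with conn.cursor() as cur:
    cur.execute("""
        SELECT OBJECT_SCHEMA, OBJECT_NAME, COUNT_READ, COUNT_WRITE
        FROM table_io_waits_summary_by_table
        WHERE OBJECT_SCHEMA NOT IN ('mysql', 'performance_schema')
        ORDER BY COUNT_READ + COUNT_WRITE DESC
        LIMIT 20""")
    for schema, table, reads, writes in cur.fetchall():
        print(f"{schema}.{table}: {reads} reads, {writes} writes")
conn.close()
```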