Azure Table vs MongoDB on Azure

I want to use a NoSQL database on Windows Azure, and the data volume will be very large. Which offers better performance and scalability: Azure Table storage, or a MongoDB database running in a Worker role? Has anyone run MongoDB on Azure using a Worker role? Please share your thoughts on using MongoDB on Azure versus Azure Table storage.

Table Storage is a core Windows Azure storage feature, designed to be scalable (500 TB per account), durable (triple-replicated in the data center, optionally geo-replicated to another data center), and schemaless (each row may contain any properties you want). A row is located by partition key + row key, providing very fast lookup. All Table Storage access is via a well-defined REST API usable from any language (with SDKs, built on top of the REST APIs, already in place for .NET, PHP, Java, Python & Ruby).
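To make that lookup model concrete, here is a minimal sketch using the classic .NET storage SDK (Microsoft.WindowsAzure.Storage); the table name, entity type, and properties are illustrative assumptions, not from the question:

```csharp
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Table;

public class CustomerEntity : TableEntity
{
    public CustomerEntity() { } // required by the SDK
    public CustomerEntity(string lastName, string firstName)
    {
        PartitionKey = lastName; // rows are located by partition key...
        RowKey = firstName;      // ...plus row key, giving very fast point lookups
    }
    public string Email { get; set; } // schemaless: each row may carry its own properties
}

class Program
{
    static void Main()
    {
        var account = CloudStorageAccount.Parse("<connection-string>");
        var table = account.CreateCloudTableClient().GetTableReference("people");
        table.CreateIfNotExists();

        table.Execute(TableOperation.InsertOrReplace(
            new CustomerEntity("Smith", "Ada") { Email = "ada@example.com" }));

        // Point query by partition key + row key: the fast path Table Storage optimizes for
        var result = table.Execute(TableOperation.Retrieve<CustomerEntity>("Smith", "Ada"));
        var customer = (CustomerEntity)result.Result;
    }
}
```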
MongoDB is a document-oriented database. To run it in Azure, you need to install MongoDB on a web/worker role or Virtual Machine, point it at a Cloud Drive (which provides a drive letter) or an attached disk (for Windows/Linux Virtual Machines), optionally turn on journaling (which I'd recommend), and optionally define an external endpoint for your use (or access it via virtual network). The Cloud Drive / attached disk, by the way, is actually stored in an Azure Blob, giving you the same durability and geo-replication as Azure Tables.
When comparing the two, remember that Table Storage is Storage-as-a-Service: you simply access a well-known REST endpoint. With MongoDB, you're responsible for maintaining the database (e.g. whenever MongoDB Inc (formerly 10gen) pushes out a new version of MongoDB, you'll need to update your server accordingly).
Regarding MongoDB Inc's alpha version pointed to by jtoberon: If you take a close look at it, you'll see a few key things:
The setup is for a standalone MongoDB instance, without replica sets or shards. Even without replica sets, you still get several durability benefits with the standalone version, due to the way Blob storage works.
To provide high-availability, you can run with multiple instances. In this case, only one instance serves the database, and one is a 'warm-standby' that launches the mongod process as soon as the other instance fails (for maintenance reboot, hardware failure, etc.).
While 10gen's Windows Azure wrapper is still considered 'alpha,' mongod.exe is not. You can launch mongod.exe just like you'd launch any other Windows exe. It's only the management code around the launching that's new, and that's what the alpha implementation demonstrates.
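As an illustration of how little ceremony that launch involves, here is a hedged C# sketch of a worker role starting mongod.exe against a mounted drive; the paths and port are hypothetical, and real management code would also handle shutdown and monitoring:

```csharp
using System.Diagnostics;

class MongoLauncher
{
    static void Main()
    {
        // Hypothetical paths: in a real role these would come from role
        // configuration and the drive letter returned when the Cloud Drive
        // or data disk is mounted.
        var mongod = new ProcessStartInfo
        {
            FileName = @"E:\approot\mongodb\bin\mongod.exe",
            // --dbpath points at the mounted drive; --journal enables
            // journaling, as recommended above.
            Arguments = @"--dbpath F:\mongodata --journal --port 27017",
            UseShellExecute = false
        };

        using (var process = Process.Start(mongod))
        {
            process.WaitForExit(); // keep the worker role alive while mongod runs
        }
    }
}
```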
EDIT 2011-12-8: This is no longer in an alpha state. You can download the latest MongoDB+Windows Azure project here, which provides replica-set support.
For performance, I think you'll need to do some benchmarking. Having said that, consider the following:
When accessing either Table Storage or MongoDB from, say, a Web Role, you're still reaching out to the Windows Azure Storage system.
MongoDB uses lots of memory for its own cache. For this reason, lots of high-scale MongoDB systems are deployed to larger instance sizes. For Table Storage access, you won't have the same memory-size consideration.
EDIT April 7, 2015
If you want to use a document-based database as-a-service, Azure now offers DocumentDB.

I have used both.
Azure Tables: dead simple, fast, really hard to write even simple queries (illustrated in the sketch below).
Mongo: runs nicely, lots of querying capabilities, requires several instances to be reliable.
In a nutshell,
if your queries are really simple (key->value), run a cost comparison (mainly the number of transactions against the storage versus the cost of hosting Mongo on Azure). I would lean toward Table Storage for that one.
If you need more elaborate queries and don't want to go to SQL Azure, Mongo is likely your best bet.
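To illustrate the querying gap, here is a hedged C# sketch of the same "recent large orders for a customer" query in both stores; the collection/table names and fields are illustrative assumptions, not from the answers above:

```csharp
using Microsoft.WindowsAzure.Storage.Table;
using MongoDB.Bson;
using MongoDB.Driver;

class QueryComparison
{
    static void Main()
    {
        // MongoDB (2.x C# driver): a rich, indexed query reads naturally.
        var collection = new MongoClient("mongodb://localhost:27017")
            .GetDatabase("shop").GetCollection<BsonDocument>("orders");
        var mongoOrders = collection
            .Find(Builders<BsonDocument>.Filter.Eq("customerId", "42") &
                  Builders<BsonDocument>.Filter.Gt("total", 100))
            .Sort(Builders<BsonDocument>.Sort.Descending("created"))
            .Limit(10)
            .ToList();

        // Table Storage: anything beyond PartitionKey/RowKey becomes a
        // hand-built string filter, and ordering beyond key order is on you.
        var filter = TableQuery.CombineFilters(
            TableQuery.GenerateFilterCondition(
                "PartitionKey", QueryComparisons.Equal, "42"),
            TableOperators.And,
            TableQuery.GenerateFilterConditionForDouble(
                "Total", QueryComparisons.GreaterThan, 100));
        var tableQuery = new TableQuery<DynamicTableEntity>()
            .Where(filter).Take(10);
        // results come back in key order; any other sort happens client-side
    }
}
```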

I realize that this question is dated. I'd like to add the following info for those who may come upon this question in their searches.
Note that MongoDB is now offered as a fully managed service on Azure (officially in beta as of Apr '15).
See:
http://www.mongodb.com/partners/cloud/microsoft
or
https://azure.microsoft.com/en-us/blog/announcing-new-mongodb-instances-on-microsoft-azure/
See (including pricing):
https://azure.microsoft.com/en-us/marketplace/partners/mongolab/mongolab/

My first choice is Azure Tables, because of the SaaS model, the low cost, and the 99.99% SLA: http://alexandrebrisebois.wordpress.com/2013/07/09/what-if-20000-windows-azure-storage-transactions-per-second-isnt-enough/
Some limits:
http://msdn.microsoft.com/en-us/library/windowsazure/jj553018.aspx
http://www.windowsazure.com/en-us/pricing/calculator/?scenario=data-management
Or Azure SQL for small business, or DocumentDB:
http://azure.microsoft.com/en-us/documentation/services/documentdb/
http://azure.microsoft.com/en-us/documentation/articles/documentdb-limits/
Second choice: other cloud providers; for example, Amazon offers S3,
or Google BigQuery: https://developers.google.com/bigquery/pricing
The nth choice is managing the whole show myself with MongoDB and getting no sleep; I would rather look again at the first two SaaS options.
In short: if I am running in the "cloud", I will go for the SaaS model as much as possible and "rent it". The question is what my app needs: Azure Tables, DocumentDB, or Azure SQL.
DocumentDB documentation:
http://azure.microsoft.com/en-us/documentation/services/documentdb/
How Azure pricing works:
http://azure.microsoft.com/en-us/pricing/details/documentdb/
This is fun:
http://www.documentdb.com/sql/demo

At Build 2016 it was announced that DocumentDB would support all MongoDB drivers. This addresses some of DocumentDB's tooling gaps and also makes it easier to migrate Mongo apps.
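In practice that means connecting with the standard MongoDB C# driver looks the same as for any Mongo deployment; a hedged sketch with a placeholder connection string (take the real one from the Azure portal):

```csharp
using MongoDB.Driver;

// Placeholder account/credentials: substitute the connection string shown
// in the Azure portal for your DocumentDB account.
var client = new MongoClient(
    "mongodb://<account>:<password>@<account>.documents.azure.com:10255/?ssl=true");
var database = client.GetDatabase("mydb");
```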

The answers above are all good, but the real answer depends on your requirements. You need to understand what size of data you are processing and what types of operations you want to perform on it, and then select the solution that best meets your needs.

One thing to remember is that Azure Table Storage doesn't support complex data types: every property on an entity must be a string, number, boolean, date, and so on.
You can't store an arbitrary object against a key, which I feel is a must for a NoSQL DB.
https://learn.microsoft.com/en-us/rest/api/storageservices/fileservices/understanding-the-table-service-data-model (scroll to Property Types)
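A common workaround, sketched below with hypothetical types, is to serialize the object into a string property yourself (here with Json.NET); you lose the ability to query inside the object, which is exactly the limitation described above:

```csharp
using Microsoft.WindowsAzure.Storage.Table;
using Newtonsoft.Json;

public class Address            // the "complex" object Table Storage can't hold natively
{
    public string Street { get; set; }
    public string City { get; set; }
}

public class CustomerEntity : TableEntity
{
    public string Name { get; set; }

    // Persisted column: the object flattened to JSON text.
    public string AddressJson { get; set; }

    // Convenience accessor; [IgnoreProperty] keeps it out of the stored entity.
    [IgnoreProperty]
    public Address Address
    {
        get { return JsonConvert.DeserializeObject<Address>(AddressJson); }
        set { AddressJson = JsonConvert.SerializeObject(value); }
    }
}
```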

Related

How to achieve synchronization and backup using Realm Sync?

I am planning to develop a cross-platform app for iOS and Android and would like to sync the data via Realm Sync. Realm Sync is serverless. Let's assume the following case: A user only has one device, uses the app, saves data and the device breaks down. Then there would be no way for that user to recover their data, since Realm Sync is just a sync and not a backup, right? But how can you implement synchronization and backup within the framework of Realm? What role does MongoDB Atlas play?
Many thanks in advance!
Note: I am an undergraduate student.
Realm Sync is cloud based - meaning there's a cloud based server that stores the app data remotely
The data is stored locally first and then automatically synced to the cloud at a later time (usually within seconds or faster). That's why Realm is considered an "offline first" database; data is stored locally (offline) and then copied/stored online.
Realm can either be a purely local database with no cloud sync, or it can be synchronized online as a "real-time" cloud database (or both!).
When sync is used, the database that backs Realm is MongoDB, a NoSQL database.
Generally speaking, when a Realm client app is developed, the SDK you choose is the layer between your code and objects and the MongoDB back-end database.
The SDK allows you to code in an object-oriented way without the need to directly work with the low level NoSQL objects.
You create models, relationships and the UI and the SDK takes the app data, massages it, and stores it in MongoDB as NoSQL.
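As a flavor of that object-oriented layer, here is a hedged sketch using Realm's .NET SDK; the model and field names are made up, and the local-only case is shown because the sync configuration depends on your Atlas app:

```csharp
using Realms;

// A model class: the SDK maps this to the underlying NoSQL storage for you.
public class Note : RealmObject
{
    public string Title { get; set; }
    public string Body { get; set; }
}

class Program
{
    static void Main()
    {
        // Local-only Realm: data is written to disk on the device first.
        // With Realm Sync configured, the same Write call would also be
        // synced to MongoDB Atlas in the background.
        var realm = Realm.GetInstance();
        realm.Write(() =>
        {
            realm.Add(new Note { Title = "hello", Body = "offline first" });
        });

        var notes = realm.All<Note>(); // query back via the SDK, not raw NoSQL
    }
}
```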
As a followup so future readers don't have to read through all the comments:
Q) what does the term "serverless" mean?
A) I like this definition
Serverless offloads all management responsibility for backend cloud infrastructure and operations tasks - provisioning, scheduling, scaling, patching and more - to the cloud provider
and the more straightforward and to-the-point:
Serverless is a cloud development model that allows developers to build and run applications without having to manage servers.
Q) Doesn't that mean that the data is then only temporarily stored in the cloud?
A) Not at all. As mentioned above, Realm data is always written locally first and then the SDK sync's it to the cloud. If you totally erase your device, once you reinstall and run the app, Realm will pull down the data from the server (sync)
Q) Because the prices for Realm Sync also do not include storage space costs: Pricing
A) That link includes storage and costs per plan. In summary:
the Shared plan is up to 5 GB, Serverless is up to 1 TB, and Dedicated is 4 TB per shard. Shared is $0/month, Serverless is $0.30 per million reads, and Dedicated is a flat $57/month.
Q) Is it then possible to save the user data on this (an Atlas cluster)?
A) YES! That's the whole idea! BUT: if you're developing in Realm, it's your job to craft a great app using the SDK and let the SDK interface with Atlas (MongoDB) on the backend. Depending on your use case, you may rarely need to do anything with Atlas directly.
The big picture is that when coding with Realm, you work with objects, structures, and relationships in a much more natural object-oriented way; the SDK does the heavy lifting of taking that data and 'converting' it for storage on the Realm Sync server (MongoDB Atlas), where it ends up as NoSQL.

How do I populate a Google Bigtable instance with data using an external URL?

I have a Google Bigtable instance that needs to be populated with data that lives in a Postgres database. My product team gave me a URL that allows me to replicate the database. In simple words, I need to duplicate the Postgres database into the Google instance, and the way my product team gave me to do it is this URL. How can I do this? Is there any tutorial that can help me?
If you are already running PostgreSQL and would like to have a mirror of it on Google Cloud Platform, the best and simplest approach may be to run your own PostgreSQL instance on a Google Compute Engine virtual machine which can be done via several approaches, e.g.,
tutorial for launching PostgreSQL, or
click-to-deploy solution for PostgreSQL by Bitnami
Then, you would want to continuously mirror data from your local instance to the PostgreSQL instance running in Google Cloud to be able to query it. Another SO answer suggests that there are two major approaches to this:
Master/Master replication (Bucardo)
Master/Slave replication (Slony)
Based on your use case where you want to keep your local PostgreSQL instance as the canonical one, and just replicate to Google Cloud for the purpose of querying it, you want a Master/Slave replication, and have the PostgreSQL instance be the read-only replica, so you probably want to use the Slony approach.
For a more in-depth look at PostgreSQL solutions for high availability, load balancing, and replication, see the comparison in the manual.

Google Cloud SQL read replicas in other regions

We are currently investigating options for a partial switch to Google Cloud SQL. What we are looking for is a setup in which data is available for reading in multiple regions, to increase the speed of the web application. Writing from multiple regions would of course be great, but that's not really something MySQL does when you also want speed on your side :-)
What we would like to setup is a master-slave setup through which the Master would be in Europe and slaves (for reading) would be available in the US and Asia. This way we can provide information to our customers from a VM + SQL instance in Asia without having to connect to a database in Europe.
As far as I am aware it is not possible to currently add a read-instance outside of the region of the master. Is that correct?
Or, would it be possible to create our own MySQL read-only instance and have it replicate from a Google Cloud SQL instance? This would not be preferable (database administration, server administration) but is of course an option.
You can do cross-region replication in Cloud SQL, although it is not straightforward, because the performance will not be great. You have to create a master in Cloud SQL, then create a replica with an external master pointing at the master you created: https://cloud.google.com/sql/docs/replication#external-master
You can go in the other direction as well: https://cloud.google.com/sql/docs/replication#replication-external
These features are only supported for first generation of Cloud SQL.
Cloud Spanner is a relational database that supports transactional consistency on a global scale. It is a SQL database and works great in a multi-region environment, so it can be a good choice for your case. For more info, please check https://cloud.google.com/spanner/

OrientDB in Azure

We would like to use OrientDB Graph in an Azure environment. Does anybody have experience using it? We would also like to know whether high availability from OrientDB is required under the Azure cloud. Azure already offers high availability for Azure Storage, Azure Drive, and SQL; I understand they have replication and load balancing built in.
This is super important because we prefer not to get into the business of replications and infrastructure management.
Thanks
You can spin up 2 or more machines, install OrientDB on them, and then configure them together as a distributed cluster. I haven't been able to find any simpler way to do it; I am interested in this topic too.
Azure does have features such as geo-replication, which protects your data against a major data-center incident, but it doesn't provide any performance benefit and will not make OrientDB highly available.
Although Azure is pretty reliable, Microsoft will occasionally reboot servers for updates, so to protect against downtime you can use affinity groups so that, of your 2 or more servers, one will always be online. This, however, needs to be used in conjunction with database replication and, ideally, load balancing.
It's also worth noting that OrientDB recommends clusters have an odd number of servers as this can prevent conflicts when synchronising data after a communication issue between the servers.
I am using it on Amazon, and I had to create a Java project to monitor HTTP requests, inserts, and queries. The queries are very fast, but inserting data takes longer.
I recommend this type of graph database model to decrease query times. Also, OrientDB handles empty fields very well compared to other databases.
If you need help with the Java project, you can reply to this post and I'll help you.
I hope it helps. Good luck.

Windows Azure TDS emulation on a production non-Azure IIS server

I am developing a c# web application that will be hosted in Windows Azure and use Table Data Storage (TDS).
I want to architect my application such that I can also (as an option) deploy the application to a traditional IIS server with some other NoSql back-end. Basically, I want to give my customers the option to either pay me in the software as a service model, OR purchase a license of my application that they can install on a (non-azure) production server of their own.
How can I best architect my data layer and middle tier to achieve both goals?
I will likely need a Windows Azure Worker Role and an Azure Queue. How complicated is it to replicate these? Can I substitute a custom Windows Service and some other queuing technology?
How can the entities in my data model be written such that I can deploy to Azure TDS, or to some other storage when not deploying to Azure? Would MongoDB or similar be useful for this?
Surely there is a way to architect for Azure without being married to it.
I will likely need a Windows Azure Worker Role and an Azure Queue. How complicated is it to replicate these? Can I substitute a custom Windows Service and some other queuing technology?
Yes - a Windows service with some other queuing technology would fit this reasonably well - and worker roles have a main/Run loop which is easy to use within a Windows Service.
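For example, here is a hedged sketch of the kind of seam that makes the substitution possible; the interface and the on-premises alternative are illustrative choices, not something from the answer:

```csharp
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Queue;

// The seam: middle-tier code only sees this interface.
public interface IWorkQueue
{
    void Enqueue(string message);
    string TryDequeue(); // returns null when the queue is empty
}

// Azure implementation, used when deployed to Windows Azure.
public class AzureWorkQueue : IWorkQueue
{
    private readonly CloudQueue _queue;

    public AzureWorkQueue(string connectionString, string queueName)
    {
        _queue = CloudStorageAccount.Parse(connectionString)
            .CreateCloudQueueClient().GetQueueReference(queueName);
        _queue.CreateIfNotExists();
    }

    public void Enqueue(string message)
    {
        _queue.AddMessage(new CloudQueueMessage(message));
    }

    public string TryDequeue()
    {
        var msg = _queue.GetMessage();
        if (msg == null) return null;
        _queue.DeleteMessage(msg);
        return msg.AsString;
    }
}

// The on-premises variant would implement the same interface over, say,
// MSMQ or a database table, and be selected by configuration at startup.
```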
How can the entities in my data model be written such that I can deploy to Azure TDS, or to some other storage when not deploying to Azure? Would MongoDB or similar be useful for this?
NoSQL is a general term encapsulating lots of different technologies. I think Azure TDS currently belongs to the key-value store family of NoSQL, while MongoDB is more of a document database offering much richer functionality than TDS - see http://en.wikipedia.org/wiki/NoSQL_(concept). For mimicking Azure TDS, I think maybe a variant of something like Redis might work (although I believe Redis itself has wider functionality than TDS currently).
In general, it depends on the shape of your data, but I suspect if you can fit it in Azure TDS, then you'll be able to fit it into your choice of other storage too.
Surely there is a way to architect for Azure without being married to it.
Yes - as you've suggested in your question, you can architect your app so it can work on other technologies instead. In fact, this is quite similar to the traditional SQL data abstraction challenge. However, I think there are a few places where you'll find TDS pushing you in certain directions which won't fit well with other stores - e.g. Azure pushes you much more towards data replication; has very specific rules on keys; offers high performance using very specific mechanisms; and offers limited transaction integrity in very specific situations. These factors may mean that you do indeed have to change some middle-tier layers as well as some data layers in order to get the most out of your app in both its Azure and non-Azure variations.
One other thought: it might be easier to offer your clients a multi-tenant SaaS version on Azure and a single-tenant version also hosted on Azure - but this does depend on the clients!
I found a viable solution: I can use EF Code First with SQL Server or SQL CE if I design my entities with the same PartitionKey & RowKey compound-key structure that Azure Table Storage requires.
With a little help from Lokad Cloud (http://code.google.com/p/lokad-cloud/) to perform the interaction with Azure Table Storage, I was able to craft a common DataContext that provides CRUD operations against either EF's DbContext or Lokad's TableStorageProvider.
I even found a nice way to manage relationships between entities and lazy-load them properly.
The solution is a bit complex and needs more testing. I will blog about it and post the link here when ready.
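For anyone attempting the same thing, here is a minimal sketch of the entity shape this implies, using standard EF Code First composite-key annotations; the actual DataContext described above isn't public, so the names here are hypothetical:

```csharp
using System.ComponentModel.DataAnnotations;
using System.ComponentModel.DataAnnotations.Schema;
using System.Data.Entity;

// Every entity carries the same compound key Azure Table Storage requires,
// so the model round-trips between EF (SQL Server / SQL CE) and Table Storage.
public class Customer
{
    [Key, Column(Order = 0)]
    public string PartitionKey { get; set; }

    [Key, Column(Order = 1)]
    public string RowKey { get; set; }

    public string Email { get; set; }
}

public class AppContext : DbContext
{
    public DbSet<Customer> Customers { get; set; }
}
```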