How do I populate a Google Bigtable instance with data using an external URL? - postgresql

I have a Google Bigtable instance that needs to be populated with data that lives in a Postgres database. My product team gave me a URL that allows me to replicate the database. In simple terms, I need to duplicate the Postgres database into the Google instance, and the way my product team gave me to do it is through this URL. How can I do this? Is there any tutorial that can help me?

If you are already running PostgreSQL and would like to have a mirror of it on Google Cloud Platform, the best and simplest approach may be to run your own PostgreSQL instance on a Google Compute Engine virtual machine, which can be done via several approaches, e.g.:
a tutorial for launching PostgreSQL, or
the click-to-deploy solution for PostgreSQL by Bitnami
Then, you would want to continuously mirror data from your local instance to the PostgreSQL instance running in Google Cloud to be able to query it. Another SO answer suggests that there are two major approaches to this:
Master/Master replication (Bucardo)
Master/Slave replication (Slony)
Based on your use case, where you want to keep your local PostgreSQL instance as the canonical one and replicate to Google Cloud just for querying, you want Master/Slave replication with the Google Cloud instance as the read-only replica, so you probably want the Slony approach.
For a more in-depth look at PostgreSQL solutions for high availability, load balancing, and replication, see the comparison in the manual.
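If a one-off copy is enough to get started before continuous replication is in place, and assuming the URL your product team gave you is a standard Postgres connection string (both URLs below are hypothetical placeholders), a minimal sketch of duplicating the database with pg_dump and pg_restore driven from Python might look like this:

```python
import subprocess

# Hypothetical connection strings -- substitute the URL your product
# team provided and the instance you created on Compute Engine.
SOURCE_URL = "postgres://user:password@source-host:5432/products"
TARGET_URL = "postgres://user:password@gce-instance:5432/products"

def duplicate_database():
    """One-off copy: dump the source database and restore it into the target."""
    dump = subprocess.Popen(
        ["pg_dump", "--format=custom", "--dbname", SOURCE_URL],
        stdout=subprocess.PIPE,
    )
    subprocess.run(
        ["pg_restore", "--no-owner", "--dbname", TARGET_URL],
        stdin=dump.stdout,
        check=True,
    )
    dump.stdout.close()
    if dump.wait() != 0:
        raise RuntimeError("pg_dump failed")

if __name__ == "__main__":
    duplicate_database()
```

This gives you a snapshot only; for a continuously updated mirror you would still layer Slony (or Bucardo) on top, as described above.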

Related

Two-way replication between MongoDB and DynamoDB

We definitely want to use separate databases, since the front-end team finds MongoDB Atlas robust to work with and our AWS cloud architects find DynamoDB easy to work with.
Our architecture:
Web application uses MongoDB to insert, update and retrieve data.
The MongoDB is synced in real-time with DynamoDB.
Background AWS services use DynamoDB for inserting, updating and retrieving data.
The changes in either DynamoDB or MongoDB are replicated to each other.
Tried so far:
We currently have a sync in place that uses DynamoDB Streams and a MongoDB Atlas trigger to listen for changes on each database and forward them to the other. We use Lambdas for this (a sketch of one direction follows this list), but our replication logic is not robust yet.
AWS Database Migration Service with ongoing replication has been suggested, but we haven't been able to get it to work in our use case. Perhaps this is one option.
3rd party services like: https://www.cdata.com/sync/
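For reference, here is a minimal sketch of the DynamoDB-Streams-to-MongoDB direction mentioned above. It assumes a Lambda subscribed to the table's stream with the NEW_IMAGE (or NEW_AND_OLD_IMAGES) view type; the MONGO_URI environment variable and the appdb/orders namespace are hypothetical:

```python
import os

from boto3.dynamodb.types import TypeDeserializer
from pymongo import MongoClient

# Hypothetical connection details -- replace with your Atlas cluster.
client = MongoClient(os.environ["MONGO_URI"])
collection = client["appdb"]["orders"]
deserializer = TypeDeserializer()

def handler(event, context):
    """Forward DynamoDB stream records to MongoDB."""
    for record in event["Records"]:
        keys = {k: deserializer.deserialize(v)
                for k, v in record["dynamodb"]["Keys"].items()}
        if record["eventName"] == "REMOVE":
            collection.delete_one(keys)
        else:  # INSERT or MODIFY
            image = {k: deserializer.deserialize(v)
                     for k, v in record["dynamodb"]["NewImage"].items()}
            collection.replace_one(keys, image, upsert=True)
```

The hard part, as noted, is robustness: without tagging each write with its origin, an update replicated from MongoDB comes back through the stream and echoes forever, so some loop-prevention marker is needed in both directions.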
Ideal Fit
The most ideal solution would be an AWS-based solution if not a reliable 3rd party service.
Greatly appreciate any resources or thoughts on this! :)

Google Cloud SQL read replicas in other regions

We are currently investigating options for a partial switch to Google Cloud SQL. What we are looking for is a setup in which data is available for reading in multiple regions, to increase the speed of the web application. Writing from multiple regions would of course be great, but that's not really something MySQL does when you also want to have speed on your side :-)
What we would like to set up is a master-slave configuration in which the master would be in Europe and slaves (for reading) would be available in the US and Asia. That way we can serve information to our customers from a VM + SQL instance in Asia without having to connect to a database in Europe.
As far as I am aware, it is not currently possible to add a read instance outside of the master's region. Is that correct?
Or would it be possible to create our own MySQL read-only instance and let it replicate from a Google Cloud SQL instance? This would not be preferable (database administration, server administration) but is of course an option.
You can do cross-region replication in Cloud SQL, although it is not straightforward, and the performance will not be great. You have to create a master in Cloud SQL, then create a replica with an external master pointing at the master you created: https://cloud.google.com/sql/docs/replication#external-master
You can go in the other direction as well: https://cloud.google.com/sql/docs/replication#replication-external
These features are only supported for the first generation of Cloud SQL.
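Whichever direction you replicate in, cross-region latency makes it worth monitoring replication lag. A minimal sketch, assuming you can connect to the MySQL replica directly; the host and credentials are placeholders:

```python
import pymysql

# Placeholder credentials -- point this at your read replica.
conn = pymysql.connect(host="replica-host", user="monitor",
                       password="secret", database="mysql")

def replication_lag_seconds():
    """Return Seconds_Behind_Master as reported by the replica."""
    with conn.cursor(pymysql.cursors.DictCursor) as cur:
        cur.execute("SHOW SLAVE STATUS")
        status = cur.fetchone()
        if status is None:
            raise RuntimeError("This server is not configured as a replica")
        return status["Seconds_Behind_Master"]

print(replication_lag_seconds())
```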
Cloud Spanner is a relational database that supports transactional consistency on a global scale. It is a SQL database and works great in a multi-region environment, so it can be a good choice for your case. For more info, please check https://cloud.google.com/spanner/

Multiple server instance load and database balancing

Please correct me if I am wrong, but I gather that handling more requests and load by adding more machines, and balancing the load between multiple servers, is horizontal scaling. So, if I add more servers, how do I distribute the database? Do I create only one database to hold the user records, shared by all the servers? Or do I split the database too? Then what about database integrity? How do I synchronize it? I am a newbie and really confused, but eager to learn. I would like to use PostgreSQL for my project and would like to know some basic things before I start. What I want to do is create two servers and balance the application load between them (please correct me if that's not what I should do). How will I manage the database across them without breaking database integrity? Do I have to replicate the data between the two servers? How do you all run multiple instances and manage the database? Do I need to go through sharding for this? What would be the best approach to having many instances without losing database integrity with PostgreSQL? I would really appreciate it if anyone could explain this to me. Thank you!
I'm not sure whether you are just looking for a service that gives you what you need so you don't have to spend time on it, or whether, on the other hand, you would like to implement it on your end, which I guess could be quite complex.
If you are looking for your own solution, maybe you should take a look at Postgres-XC, a project that provides a database cluster based on PostgreSQL.
On the other hand, if you are just interested in the development process and don't want to spend time on this when you can have it on the cloud, maybe you would like to take a look at EnterpriseDB which provides PostgreSQL on the cloud.
For your application, you can also use a cloud service on which you can even auto-scale your app depending on some parameters, as explained here.
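To make the "two app servers, one source of truth" idea concrete, here is a minimal sketch of the usual starting point: all writes go to a single primary, and reads are spread round-robin across read replicas. It uses psycopg2, and all host names and credentials are placeholders:

```python
import itertools

import psycopg2

# Placeholder DSNs: one primary for writes, replicas for reads.
PRIMARY_DSN = "host=primary dbname=app user=app password=secret"
REPLICA_DSNS = [
    "host=replica-1 dbname=app user=app password=secret",
    "host=replica-2 dbname=app user=app password=secret",
]

primary = psycopg2.connect(PRIMARY_DSN)
replicas = itertools.cycle([psycopg2.connect(d) for d in REPLICA_DSNS])

def create_user(name):
    """Writes always go to the primary, so integrity has one owner."""
    with primary, primary.cursor() as cur:
        cur.execute("INSERT INTO users (name) VALUES (%s)", (name,))

def list_users():
    """Reads are spread round-robin across the replicas."""
    conn = next(replicas)
    with conn.cursor() as cur:
        cur.execute("SELECT name FROM users")
        return [row[0] for row in cur.fetchall()]
```

Sharding (splitting rows across several databases) only becomes necessary once a single primary can no longer keep up with the write load; replication alone usually carries you a long way.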

How can I connect to Heroku Postgres from a Google Spreadsheet

I'd like to use a Google spreadsheet to display my database analytics
I'd like to be able to do summary queries on my Heroku Postgres database using Google Apps Script and then display and chart them in a Google spreadsheet.
Heroku offers a number of ways to connect to Heroku Postgres:
https://devcenter.heroku.com/articles/heroku-postgresql
Likewise, Google Apps Script offers access to a number of different external services:
https://developers.google.com/apps-script/defaultservices
I've never attempted this before and so am interested in what is simplest.
JDBC seems possible, but are there any other options?
As far as I can see, the only overlap between the two is JDBC, which I have no experience with, but it feels like a bit of a heavyweight third protocol to use to get between the systems.
Is JDBC the best way to get the data across, or is there something simpler I'm missing?
Set up a dataclip at dataclips.heroku.com with your desired data described as a SQL query.
Append .csv to the resulting URL.
Use that URL in the Google spreadsheet's importData function, like so:
=importData("https://dataclips.heroku.com/[your-dataclip].csv")
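Since the dataclip is just a CSV file behind a URL, you can also sanity-check it outside the spreadsheet. A small sketch in Python (the dataclip URL is a placeholder):

```python
import csv
import io

import requests

# Placeholder -- use your own dataclip URL with .csv appended.
URL = "https://dataclips.heroku.com/your-dataclip.csv"

response = requests.get(URL, timeout=30)
response.raise_for_status()

for row in csv.reader(io.StringIO(response.text)):
    print(row)
```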
I prefer to use Skyvia for connecting Google Sheets and Heroku Postgres without coding. Here is how I do it: https://skyvia.com/data-integration/integrate-google-sheets-heroku-postgres. All I need is to specify the connections to Google Sheets and Heroku Postgres and select the data to replicate. Skyvia copies the specified Google Sheets data to Heroku Postgres and keeps this copy up to date automatically with incremental updates.
QueryClips is exactly what you need. This is its primary use case.

Azure Table Vs MongoDB on Azure

I want to use a NoSQL database on Windows Azure, and the data volume will be very large. Would Azure Table storage or a MongoDB database running in a worker role offer better performance and scalability? Has anyone run MongoDB on Azure using a worker role? Please share your thoughts on using MongoDB on Azure over Azure Table storage.
Table Storage is a core Windows Azure storage feature, designed to be scalable (500TB per account), durable (triple-replicated in the data center, optionally geo-replicated to another data center), and schemaless (each row may contain any properties you want). A row is located by partition key + row key, providing very fast lookup. All Table Storage access is via a well-defined REST API usable from any language (with SDKs, built on top of the REST APIs, already in place for .NET, PHP, Java, Python & Ruby).
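To make the programming model concrete, here is a minimal sketch using Python and the azure-data-tables SDK (which post-dates this answer; the SDKs listed above expose the same REST API). The connection string, table, and entity are hypothetical:

```python
from azure.data.tables import TableServiceClient

# Placeholder connection string -- use your storage account's.
service = TableServiceClient.from_connection_string("<connection-string>")
table = service.create_table_if_not_exists("customers")

# Schemaless: each entity just needs PartitionKey + RowKey.
table.create_entity({
    "PartitionKey": "europe",
    "RowKey": "customer-42",
    "Name": "Contoso",
    "ActiveUsers": 1250,
})

# Lookup by partition key + row key is the fast path.
entity = table.get_entity(partition_key="europe", row_key="customer-42")
print(entity["Name"])
```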
MongoDB is a document-oriented database. To run it in Azure, you need to install MongoDB on a web/worker role or Virtual Machine, point it at a Cloud Drive (thereby providing a drive letter) or attached disk (for Windows/Linux Virtual Machines), optionally turn on journaling (which I'd recommend), and optionally define an external endpoint for your use (or access it via virtual network). The Cloud Drive / attached disk, by the way, is actually stored in an Azure Blob, giving you the same durability and geo-replication as Azure Tables.
When comparing the two, remember that Table Storage is Storage-as-a-Service: you simply access a well-known REST endpoint. With MongoDB, you're responsible for maintaining the database (e.g. whenever MongoDB Inc (formerly 10gen) pushes out a new version of MongoDB, you'll need to update your server accordingly).
Regarding MongoDB Inc's alpha version pointed to by jtoberon: If you take a close look at it, you'll see a few key things:
The setup is for a standalone MongoDB instance, without replica sets or shards. Regarding replica sets, you still get several of the benefits with the standalone version, due to the way Blob storage works.
To provide high-availability, you can run with multiple instances. In this case, only one instance serves the database, and one is a 'warm-standby' that launches the mongod process as soon as the other instance fails (for maintenance reboot, hardware failure, etc.).
While 10gen's Windows Azure wrapper is still considered 'alpha,' mongod.exe is not. You can launch mongod.exe just like you'd launch any other Windows exe. It's just the management code around the launching that the alpha implementation is demonstrating.
EDIT 2011-12-8: This is no longer in an alpha state. You can download the latest MongoDB+Windows Azure project here, which provides replica-set support.
For performance, I think you'll need to do some benchmarking. Having said that, consider the following:
When accessing either Table Storage or MongoDB from, say, a Web Role, you're still reaching out to the Windows Azure Storage system.
MongoDB uses lots of memory for its own cache. For this reason, lots of high-scale MongoDB systems are deployed to larger instance sizes. For Table Storage access, you won't have the same memory-size consideration.
EDIT April 7, 2015
If you want to use a document-based database as-a-service, Azure now offers DocumentDB.
I have used both.
Azure Tables : dead simple, fast, really hard to write even simple queries.
Mongo : runs nicely, lots of querying capabilities, requires several instances to be reliable.
In a nutshell:
If your queries are really simple (key->value), run a cost comparison (mainly the number of transactions against the storage versus the cost of hosting Mongo on Azure); see the sketch below. I would rather go with Table Storage for that one.
If you need more elaborate queries and don't want to go to SQL Azure, Mongo is likely your best bet.
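A back-of-the-envelope way to run that cost comparison is sketched below. Every rate is an illustrative placeholder, not a real price; plug in current pricing for your region before drawing conclusions:

```python
# Every number below is an illustrative placeholder, not a quoted price;
# substitute current Azure rates before drawing any conclusion.
PRICE_PER_100K_TABLE_TXNS = 0.01  # hypothetical storage-transaction rate ($)
MONGO_VM_HOURLY = 0.20            # hypothetical compute rate per VM ($/hour)
MONGO_VM_COUNT = 2                # e.g. one active + one warm standby
HOURS_PER_MONTH = 730

def table_storage_monthly(txns_per_month):
    """Pay-per-use: cost scales with transaction volume."""
    return txns_per_month / 100_000 * PRICE_PER_100K_TABLE_TXNS

def mongo_hosting_monthly():
    """Flat cost: you pay for the instances whether or not they are busy."""
    return MONGO_VM_COUNT * MONGO_VM_HOURLY * HOURS_PER_MONTH

for txns in (10**6, 10**8, 10**10):
    print(f"{txns:>14,} txns/month: "
          f"tables ${table_storage_monthly(txns):>12,.2f} vs "
          f"mongo ${mongo_hosting_monthly():,.2f}")
```

The crossover point is the thing to look for: below some transaction volume the pay-per-transaction model wins, and above it the flat cost of self-hosted instances does.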
I realize that this question is dated. I'd like to add the following info for those who may come upon this question in their searches.
Note that now, MongoDB is offered as a fully managed service on Azure. (officially in Beta as of Apr '15)
See:
http://www.mongodb.com/partners/cloud/microsoft
or
https://azure.microsoft.com/en-us/blog/announcing-new-mongodb-instances-on-microsoft-azure/
See (including pricing):
https://azure.microsoft.com/en-us/marketplace/partners/mongolab/mongolab/
My first choice is Azure Tables because of the SaaS model, the low cost, and the 99.99% SLA: http://alexandrebrisebois.wordpress.com/2013/07/09/what-if-20000-windows-azure-storage-transactions-per-second-isnt-enough/
Some limits:
http://msdn.microsoft.com/en-us/library/windowsazure/jj553018.aspx
http://www.windowsazure.com/en-us/pricing/calculator/?scenario=data-management
Or Azure SQL for small business.
Or DocumentDB:
http://azure.microsoft.com/en-us/documentation/services/documentdb/
http://azure.microsoft.com/en-us/documentation/articles/documentdb-limits/
My second choice is one of the many other cloud providers' offerings; Amazon, for example, offers S3,
and Google offers BigQuery: https://developers.google.com/bigquery/pricing
My nth choice is managing the whole show myself with MongoDB and getting no sleep; I would rather look again at the first two SaaS options.
If I am running in the "cloud," I will go for the SaaS model as much as possible: "rent it."
The question is what your app needs: Azure Tables, DocumentDB, or Azure SQL.
DocumentDB documentation
http://azure.microsoft.com/en-us/documentation/services/documentdb/
How Azure pricing works
http://azure.microsoft.com/en-us/pricing/details/documentdb/
This is fun:
http://www.documentdb.com/sql/demo
At Build 2016 it was announced that DocumentDB would support all MongoDB drivers. This solves some of the tooling gaps in DocumentDB and also makes it easier to migrate Mongo apps; see the sketch below.
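To illustrate, a minimal sketch with pymongo; the account name and key are hypothetical placeholders, and the host/port follow the MongoDB-protocol endpoint format DocumentDB exposed at the time:

```python
from pymongo import MongoClient

# Hypothetical connection string copied from the Azure portal; because
# DocumentDB speaks the MongoDB wire protocol, pymongo works unchanged.
client = MongoClient(
    "mongodb://myaccount:<key>@myaccount.documents.azure.com:10255/?ssl=true"
)
db = client["appdb"]
db["users"].insert_one({"name": "Ada"})
print(db["users"].find_one({"name": "Ada"}))
```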
The answers above are all good, but the real answer depends on what your requirements are. You need to understand what size of data you are processing and what types of operations you want to perform on it, and then select the solution that meets your needs.
One thing to remember is that Azure Table Storage doesn't support complex data types. Every property in an entity must be a string, number, boolean, date, etc.
You can't store an object against a key, which I feel is a must for a NoSQL DB.
https://learn.microsoft.com/en-us/rest/api/storageservices/fileservices/understanding-the-table-service-data-model (scroll to "Property Types")
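A common workaround is to serialize the object to JSON and store it in a string property, at the cost of not being able to query inside the blob. A minimal sketch (same hypothetical azure-data-tables setup as the earlier sketch; note that string properties are capped at 64 KB and whole entities at 1 MB):

```python
import json

from azure.data.tables import TableServiceClient

# Placeholder connection string and table -- same SDK as the sketch above.
service = TableServiceClient.from_connection_string("<connection-string>")
table = service.create_table_if_not_exists("orders")

order = {"items": [{"sku": "A1", "qty": 2}], "shipping": {"city": "Oslo"}}

# Store the nested object as a JSON string, since properties are scalar.
table.upsert_entity({
    "PartitionKey": "customer-42",
    "RowKey": "order-7",
    "Payload": json.dumps(order),
})

entity = table.get_entity(partition_key="customer-42", row_key="order-7")
restored = json.loads(entity["Payload"])
print(restored["shipping"]["city"])
```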