Is it possible to have an app running on AWS EC2 and its database running on Heroku's Postgres?
In case it is, what are the downsides I should consider?
Since Heroku is hosted on AWS, is there a way to know the location of the machine running my database?
Would hosting my app in the same region as the database help keep performance up?
I would like to hear some opinions about this; I've been searching the topic without much success.
You can determine the public-facing location of your Heroku DB at any given time with a traceroute ... but there's no guarantee that it'll stay at that location, or that there isn't any internal re-routing going on. You'd probably want to speak directly with Heroku support about ways to make sure your Heroku DB instances are local to your AWS application instances, as that certainly would benefit performance. See if you can find out which availability zone, or at least which major region, they run the DB in, and whether you can "pin" your database instance to a given region/zone.
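If you'd rather script this than eyeball a traceroute, one rough approach is to resolve the database host from your Heroku DATABASE_URL and match the IP against Amazon's published ip-ranges.json; this only reveals the public-facing region, not the availability zone. A minimal Python sketch, with a made-up hostname standing in for your real one:

```python
import json
import socket
import ipaddress
from urllib.request import urlopen

# Hypothetical host taken from a Heroku DATABASE_URL; replace with your own.
db_host = "ec2-1-2-3-4.compute-1.amazonaws.com"
db_ip = ipaddress.ip_address(socket.gethostbyname(db_host))

# AWS publishes all of its public IP ranges together with the region they belong to.
ranges = json.load(urlopen("https://ip-ranges.amazonaws.com/ip-ranges.json"))

regions = {
    p["region"]
    for p in ranges["prefixes"]
    if db_ip in ipaddress.ip_network(p["ip_prefix"])
}
print(f"{db_host} ({db_ip}) appears to live in: {regions or 'unknown'}")
```

Keep in mind this is only a snapshot; as noted above, there's no guarantee the instance stays there.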
Amazon's RDS looks OK, but it doesn't support PostgreSQL. Please keep nagging them to add it.
I'd probably just run the DB on AWS if performance wasn't particularly important. Use a RAID 10 of provisioned IOPS EBS volumes on an EBS-optimized instance and you'll get kind-of-OK performance (but at a really big price); alternatively, you can use non-crash-safe SSD-based instance-store servers and rely on replication and backups to keep your data safe.
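For illustration only, here is a rough boto3 sketch of provisioning the volumes for such a RAID 10 set; the region, availability zone, size, and IOPS figures are placeholders, and attaching the volumes and assembling the array (e.g. with mdadm) is left out:

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # assumed region

# Four provisioned-IOPS volumes for a RAID 10 array (two mirrored stripes).
volume_ids = []
for _ in range(4):
    vol = ec2.create_volume(
        AvailabilityZone="us-east-1a",  # must match the instance's AZ
        Size=100,                        # GiB, placeholder
        VolumeType="io1",                # provisioned IOPS volume type
        Iops=1000,                       # placeholder IOPS figure
    )
    volume_ids.append(vol["VolumeId"])

print("Created volumes:", volume_ids)
# Next steps (not shown): attach each volume to the EBS-optimized instance
# and assemble them with mdadm into a RAID 10 device for the Postgres data dir.
```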
I don't have any experience with Heroku PostgreSQL.
Generally, of course, you can run your own service on Amazon EC2 and use Heroku's managed database services.
Downsides might be:
Nobody guarantees that Heroku exclusively uses AWS, and you probably can't determine the physical location of the Heroku service within the cloud, so you will have to deal with network latencies (a quick latency check is sketched after this list).
In addition to your external traffic fees, you'll have to pay for the database traffic unless you talk to a server in the same availability zone in the same region.
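A quick way to put a number on that latency is to time a trivial query from the EC2 box against the Heroku database. A minimal sketch, assuming psycopg2 is installed and DATABASE_URL holds the Heroku connection string:

```python
import os
import time
import statistics
import psycopg2

# DATABASE_URL is the connection string Heroku provides for its Postgres add-on.
conn = psycopg2.connect(os.environ["DATABASE_URL"])
cur = conn.cursor()

samples = []
for _ in range(20):
    start = time.perf_counter()
    cur.execute("SELECT 1")
    cur.fetchone()
    samples.append((time.perf_counter() - start) * 1000)  # milliseconds

print(f"median round trip: {statistics.median(samples):.1f} ms")
conn.close()
```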
My suggestion (without knowing any details about the pros of Heroku):
Have a look at Amazon RDS if you don't want to run a database server on your own.
http://aws.amazon.com/de/rds/
I have been operating around 70 server instances on AWS, both RDS and EC2, for more than a year now, and I can't imagine any simpler way to keep your stuff running.
Related
I am building my own web app, which requires a huge database. I want to build and manage my own Mongo database on AWS rather than using MongoDB Atlas. Which will be more cost-saving? And should I go for MongoDB Atlas? What would be its advantages over my own database?
There are pros and cons for both approaches:
Running MongoDB on AWS
Pros:
Complete control over how you run the database and how resources are allocated on the server. It could even run alongside an application server on the same EC2 instance, depending on your traffic and load. This might help with cost savings if your database is huge but isn't likely to see much traffic.
Cons:
You will be responsible for ensuring database availability and applying security patches as and when they become available. You may also have to set up firewalls and protect the EC2 instance and database in other ways that would be trivial on a hosted service like Atlas.
Data sharding and clustering can be a real pain to manage by yourself.
Running on Atlas
Pros:
Completely managed service where you don't have to be concerned about performance optimization or scalability. You pay for the service and MongoDB takes care of the rest.
You can focus on building a great application instead of spending your time on administering the database and the EC2 instance on which the database runs.
Cons:
You will be constrained by the options offered by Atlas. For most use cases this should be fine, but if you really want a specific change, it would be difficult to implement if MongoDB doesn't already support it as part of Atlas.
Think of it as running your application on EC2 vs. buying an on-premises server and running your application on that.
Being a managed service, costs might also be higher if your database does not see much traffic.
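To make the trade-off concrete on the code side: the connection string is about the only thing that differs in the application. A minimal pymongo sketch with made-up hostnames and credentials, one URI for a self-managed replica set on EC2 and one for an Atlas cluster (which uses the mongodb+srv:// scheme):

```python
from pymongo import MongoClient

# Self-managed replica set on EC2: you list the members and name the set yourself.
SELF_HOSTED_URI = (
    "mongodb://app_user:secret@ec2-host-1:27017,ec2-host-2:27017/"
    "mydb?replicaSet=rs0"
)

# Atlas: a single SRV-record URI; Atlas manages the member list, TLS and failover.
# (The mongodb+srv scheme requires the dnspython package to be installed.)
ATLAS_URI = "mongodb+srv://app_user:secret@cluster0.example.mongodb.net/mydb"

# Swapping between the two is just a matter of which URI you hand to the client;
# the application code that follows is identical either way.
client = MongoClient(SELF_HOSTED_URI)
client.mydb.users.insert_one({"name": "alice"})
```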
HOSTING yourself: You can get one or more AWS EC2 instances (which are VMs) where you install and run MongoDB yourself and manage it as you want, making sure that you spin up more instances when the workload grows and that instances are up and running at all times to enable high availability.
Cost (high) - Management responsibilities (lots) - Full MongoDB functionality
MongoDB Atlas is a managed service: you don't need to worry about management tasks like scaling your database or keeping it highly available when one or more instances die. You pay a very low cost for it; it is run by MongoDB themselves on AWS, Azure, and Google Cloud.
Cost (low) - Management responsibilities (some) - Full MongoDB functionality
Now AWS has its own Mongo-compatible database called DocumentDB. This is also a managed database, so you don't need to worry about scalability, high availability, etc. It is only available on AWS, so it is super simple and convenient.
Cost (low) - Management responsibilities (minimal) - Limited MongoDB functionality
I have a scenario as follows:
One cloud server is running an application with PostgreSQL as the DB
Multiple local servers are running the same application with PostgreSQL as the DB
Users may access the cloud server to read/write data
Users may access any of the local servers to read/write data
What I need is synchronisation between all these databases. The synchronisation can be done live if connectivity is available, or as soon as connectivity becomes available again.
Please guide me with some input on where I can start.
Rethink your requirements.
Multimaster replication is full of pitfalls, and it is easy to get your databases out of sync unless you plan carefully. You'd probably be better off with a single master node.
That said, you could look at BDR by 2ndQuadrant which provides such functionality.
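If a single master (the cloud server) with read-mostly local servers is acceptable, PostgreSQL's built-in logical replication (version 10 and later) covers a lot of ground without BDR. A rough sketch via psycopg2, with placeholder hosts and credentials, assuming wal_level=logical is already set on the master:

```python
import psycopg2

# Placeholder connection details; the cloud server acts as the single master.
MASTER_DSN = "host=cloud.example.com dbname=appdb user=repl_admin password=secret"
LOCAL_DSN = "host=local-server-1 dbname=appdb user=repl_admin password=secret"

# On the master: publish the tables the local servers should follow.
master = psycopg2.connect(MASTER_DSN)
master.autocommit = True  # run the DDL outside an explicit transaction
master.cursor().execute("CREATE PUBLICATION app_pub FOR ALL TABLES")
master.close()

# On each local server: subscribe to the master's publication.
local = psycopg2.connect(LOCAL_DSN)
local.autocommit = True  # CREATE SUBSCRIPTION cannot run inside a transaction block
local.cursor().execute(
    "CREATE SUBSCRIPTION app_sub "
    "CONNECTION 'host=cloud.example.com dbname=appdb user=repl_admin password=secret' "
    "PUBLICATION app_pub"
)
local.close()
```

Note that this only replicates from the master outward; writes made on a local server while offline are not merged back, which is exactly the pitfall the answer above warns about.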
I am currently working on my next project, which runs 100% on Mongo.
My past projects used SQL + Mongo, for which I used AWS RDS + AWS EC2 and could connect them both over AWS internal IPs, which gave me a much faster connection.
Now for Mongo there are a lot of fancy cloud services like mLab and MongoDB Atlas, which are actually cheaper than AWS.
My concern is that moving back to an external DB connection will be slower and consume more network than the internal connection with RDS.
Has anyone experienced such an issue? Maybe the difference isn't as big as I make it out to be, but I need it to be optimized.
This depends on your setup. Many of the "fancy" services also host stuff on AWS, so latency is minimal. Some even offer "private environments" or such, so you can hide your databases from public view.
The only thing left to care about is the amount of network traffic. But this will be your problem regardless of your database host. You can test this relatively easily (e.g. get a trial from one of the providers and test for throughput, or spin up your own MongoDB Docker cluster to use as a test, etc.) just to get an idea of the performance range you'll be in.
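As a starting point for such a test, a hedged pymongo sketch that times both round-trip latency and a small insert burst; the URI is a placeholder for whichever trial cluster or Docker instance you point it at:

```python
import time
import statistics
from pymongo import MongoClient

# Placeholder URI: point it at the trial cluster or your own test instance.
client = MongoClient("mongodb+srv://user:secret@testcluster.example.mongodb.net/test")
db = client.test

# Round-trip latency: time a trivial server command a few dozen times.
pings = []
for _ in range(30):
    start = time.perf_counter()
    db.command("ping")
    pings.append((time.perf_counter() - start) * 1000)
print(f"median ping: {statistics.median(pings):.1f} ms")

# Rough write throughput: insert a small batch of documents and time it.
docs = [{"n": i, "payload": "x" * 512} for i in range(1000)]
start = time.perf_counter()
db.latency_test.insert_many(docs)
print(f"1000 inserts took {time.perf_counter() - start:.2f} s")
db.latency_test.drop()
```

Run the same script from your EC2 instance and from wherever else matters to you, and compare the numbers against your current internal RDS setup.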
We're about to dive into Odoo (OpenERP). We're planning on using Amazon EC2 for the actual installation and putting the PostgreSQL database server on Amazon RDS (like this guide: http://toolkt.com/site/install-openerp-server-and-postgresql-on-separate-servers/ ).
If the RDS instance is only allowed to talk to the EC2 server, does this mitigate any security issues compared to a regular Odoo installation (where the database and the front-facing web server are on the same machine)? Is this an advisable setup?
The input data in your post is too vague to give you an exact answer, but you may consider the following:
RDS can talk to EC2 or any other clients and application servers. Connectivity only depends on your configuration. You can set up a VPC and configure/restrict access to your database and application servers there.
Depending on the size of your system (in terms of I/O, number of users, etc.), you may of course want separate database instances and application servers. At scale this separation is important.
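For the "RDS only allowed to talk to the EC2 server" part, the usual pattern is a security group rule that permits PostgreSQL traffic only from the app server's security group. A hedged boto3 sketch with placeholder group IDs and region:

```python
import boto3

ec2 = boto3.client("ec2", region_name="eu-west-1")  # assumed region

APP_SG = "sg-0123456789abcdef0"  # security group attached to the Odoo EC2 instance
RDS_SG = "sg-0fedcba9876543210"  # security group attached to the RDS instance

# Allow PostgreSQL (port 5432) into the RDS security group only from the app group;
# with no other ingress rules, nothing else can reach the database.
ec2.authorize_security_group_ingress(
    GroupId=RDS_SG,
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 5432,
        "ToPort": 5432,
        "UserIdGroupPairs": [{"GroupId": APP_SG}],
    }],
)
```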
In short, there is neither any disadvantage nor any security issue.
In detail, for Odoo with AWS EC2:
We, the "SnippetBucket.com" team, have already implemented RDS and know Odoo security well.
RDS is quite expensive.
Make RDS private instead of public in AWS so it is completely secured.
In addition, AWS security groups provide extra protection with inbound and outbound port rules. Totally safe.
Note: AWS "RDS Aurora-PostgreSQL" is 4X faster than official PostgreSQL. AWS RDS supports only specific versions chosen by AWS.
Was wondering if anyone out there has any experience deploying a Zend community app to the cloud (e.g. AWS or similar)?
I'm new to cloud hosting, having always been fortunate enough in the past to work for folks who had dedicated servers. My main concern (non-Zend-specific) is how you manage resilience at the database level. For example, in a traditional setup I would have two boxes running the DB (MySQL) in master/slave mode, with the master replicating to the slave. In the event of a hard-drive failure on the master, I could swap the DB connection over from the master to the slave and rebuild the master at a later point. Is this done differently in the cloud?
Any help/pointers greatly appreciated.
It depends on the type of cloud service that you use. If you're using AWS to get your own virtual machine (Amazon EC2), then it's basically the same as having a dedicated server, and you can keep a master/slave setup and work it much the same way.
However, if you plan on using Amazon's cloud database service (Amazon SimpleDB), then you don't have to worry about masters and slaves, since Amazon does this for you and makes sure that you always have access to your data. The only thing is that it's in beta.
One of the points of the cloud is to take your mind off the hardware. Amazon worries about that.
You might still want to have two virtual machines in case Amazon is doing maintenance that might cause your VM to become unavailable; however, Amazon stresses that it will be highly available and essentially never go down, so long as you pay.