Hosting a Windows service

I'd like to host a Windows service application in the cloud, e.g. on Azure or AWS. I know I can modify my application to be an Azure worker role. Can I do the same thing with AWS, or is there a better hosting provider? My application uses a database, preferably SQL Server.

AWS is IaaS (Infrastructure as a Service): you can take a Windows Server 2008 R2 EC2 instance, treat it much like your own computer or server (just connect via Remote Desktop), and deploy your Windows service on it.
Regarding the DB, you can install SQL Server or MySQL on the same instance (if you are fine with the performance) or put the database server on a separate EC2 instance. If you are worried about managing more EC2 instances, RDS is also an option.
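If you prefer to script the provisioning rather than click through the console, a minimal boto3 sketch of launching a Windows instance might look like the following; the AMI ID and key pair name are placeholders, and any Windows Server AMI for your region would do:

# Hypothetical sketch: launch a Windows Server EC2 instance with boto3.
# The AMI ID and key pair name below are placeholders, not real values.
import boto3

ec2 = boto3.resource("ec2", region_name="us-east-1")

instances = ec2.create_instances(
    ImageId="ami-0123456789abcdef0",  # placeholder: a Windows Server AMI for your region
    InstanceType="t3.medium",
    KeyName="my-windows-keypair",     # placeholder: used to decrypt the Administrator password
    MinCount=1,
    MaxCount=1,
)
print("Launched instance:", instances[0].id)

Once the instance is running, you would retrieve the Administrator password with the key pair, connect over Remote Desktop, and install your service as you would on any Windows box.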

Since the AWS AMI (Amazon Machine Image) is a standard Windows 2008 operating system (various editions) you can create and deploy a Windows service as you normally would - and in the .NET world, Windows Service is still the way to go. Azure worker roles only work in Windows Azure.
With SQL Server on AWS you can use an AMI with SQL Server pre-installed or install your own (see the pricing options for Windows/SQL instances), but you will not get failover functionality, as only the Express and Standard editions are available. To have redundant SQL Servers you would need to set up something like database mirroring or replication and keep it running yourself. At the time of writing there is no managed database service for SQL Server on AWS like there is with RDS (MySQL and Oracle) or SQL Azure.

Related

RESTful services and MySQL deployment in the cloud

I have developed RESTful services with ASP.NET Web API 2.0 and MySQL.
What are my options for deploying this to the cloud? I don't want a complete EC2 instance or Azure Virtual Machine.
Are there any cloud platform services where I can get just an IIS server and a MySQL database?
See below for good links on Azure and AWS options. Since you mention IIS, Azure may be your best bet. Keep in mind you should try to keep your API and DB in the same cloud data center to improve performance and reduce ingress and egress costs.
From an Azure perspective:
Take a look at their MySQL as a service offering (in preview)
And then you can host your code in a couple of ways.
ASP.NET in an App Service
An Azure Function
Using a combination of the above you can leverage PaaS and avoid having to manage your own VMs.
Further, look into using a consumption plan to pay for only what you use.
From an AWS perspective:
Use Amazon RDS (MySQL)
Use Lambda to host your API
Again, you won't need to manage servers here either (see the sketch below).
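Your stack is ASP.NET, but as a language-neutral illustration of the Lambda + RDS pattern, here is a minimal Python sketch of a handler behind API Gateway; the environment variable names, the pymysql dependency, and the items table are assumptions for illustration only:

import json
import os

import pymysql  # assumption: packaged with the deployment artifact

def lambda_handler(event, context):
    # Connection settings come from Lambda environment variables (names are hypothetical)
    conn = pymysql.connect(
        host=os.environ["DB_HOST"],  # the RDS endpoint
        user=os.environ["DB_USER"],
        password=os.environ["DB_PASSWORD"],
        database=os.environ["DB_NAME"],
    )
    try:
        with conn.cursor() as cur:
            cur.execute("SELECT COUNT(*) FROM items")  # hypothetical table
            (count,) = cur.fetchone()
    finally:
        conn.close()
    # API Gateway proxy integration expects this response shape
    return {"statusCode": 200, "body": json.dumps({"itemCount": count})}

Keeping the function and the RDS instance in the same region follows the same data-locality advice given above.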

Postgres Databases synchronization

I have to deploy Odoo with a Postgres database on the Amazon cloud. That I can do by simply setting up an EC2 server and setting up Odoo on it. If the internet is down, I want to be able to access the same services and the already-saved data offline as well. For that I plan to install Odoo with a Postgres database on my local machine (in my office) as well. Now I can access Odoo services from anywhere (using the cloud) when internet is available. But if the internet is down, I must be able to use the locally installed Odoo to get the same services. For that purpose I need two things:
Both databases should be exact replicas of each other.
On recovering from an internet outage, I want the changes made in my local (office) database to be reflected in the database on the Amazon cloud.
I am new to this stuff; kindly suggest the best possible approach (architecture) for this scenario.

Setting up a strategy for backing up a PostgreSQL database on Cloud Foundry

We have set up a community PostgreSQL service on Cloud Foundry (IBM Bluemix). This is a free service, and no automated backup and recovery is supported out of the box.
Is there a way to set up a standby server or a regular backup in case there is any data corruption/failure?
IBM Compose and ElephantSQL can provide this service at a cost, but we are not ready for that yet.
PostgreSQL there is an experimental service, and it lacks the dashboard and other advanced features (daily backups, for example) that you can find in the other services you mentioned. If you want backups you could write an ad-hoc script that saves/exports all the tables as you want and run it every day.
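A minimal sketch of such an ad-hoc script, assuming pg_dump is available where it runs; the connection URL below is a placeholder (on Cloud Foundry you would pull it from VCAP_SERVICES for the bound service):

import datetime
import subprocess

DATABASE_URL = "postgres://user:password@host:5432/dbname"  # placeholder

def dump_database():
    # Write a dated SQL dump so daily runs don't overwrite each other
    outfile = "backup-%s.sql" % datetime.date.today().isoformat()
    subprocess.run(
        ["pg_dump", "--no-owner", "--file", outfile, DATABASE_URL],
        check=True,
    )
    return outfile

if __name__ == "__main__":
    print("Wrote", dump_database())

Scheduled once a day (cron, or whatever scheduler you have), this gives you the regular dumps the free tier lacks.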
If you need managed PostgreSQL you can create a PostgreSQL by Compose service ($17.50/mo for the first GB and $12 per extra GB).
We used PostgreSQL Studio and deployed it on IBM Bluemix. The database service was connected to the pgstudio interface (this restricts access to only the connected databases). We also had to make minor changes to pgstudio so that we could use pg_dump with the interface.
The result: we could manually dump the data. This solution works well, as we could take regular dumps (though manually).
You are right in saying that you can't get backups in the free tier. Those features are available only in the Compose for PostgreSQL service, but that's a paid service.

Any disadvantages or security issues for having website and databases on separate servers?

We're about to dive into Odoo (OpenERP). We're planning on using Amazon EC2 for the actual installation and putting the PostgreSQL database server on Amazon RDS (like this guide: http://toolkt.com/site/install-openerp-server-and-postgresql-on-separate-servers/ ).
If the RDS instance is only allowed to talk to the EC2 server, does this mitigate any security issues compared to a regular Odoo installation (where the database and the front-facing web server are on the same machine)? Is this an advisable setup?
The information in your post is too vague to give an exact answer, but you may consider the following:
RDS can talk to EC2 or any other client or application server; connectivity depends only on your configuration. You can set up a VPC and configure/restrict access to your database and application servers there (see the sketch below).
Depending on the size of your system (in terms of I/O, number of users, etc.), you may also want separate database and application servers; at scale this separation is important.
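As a sketch of that security-group restriction, assuming the RDS instance and the Odoo EC2 instance each have their own security group (the group IDs below are placeholders):

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Allow PostgreSQL traffic to the RDS security group only from the Odoo
# server's security group, instead of opening port 5432 to the internet.
ec2.authorize_security_group_ingress(
    GroupId="sg-0aaaaaaaaaaaaaaaa",  # placeholder: the RDS instance's security group
    IpPermissions=[
        {
            "IpProtocol": "tcp",
            "FromPort": 5432,
            "ToPort": 5432,
            "UserIdGroupPairs": [{"GroupId": "sg-0bbbbbbbbbbbbbbbb"}],  # placeholder: the Odoo EC2 group
        }
    ],
)

With a rule like this in place, the database is unreachable from anywhere except the application server, which is the main security gain of the two-server setup.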
In short: neither any disadvantage nor any security issue.
In detail, for Odoo with AWS EC2:
Our "SnippetBucket.com" team has already implemented RDS and knows Odoo security well.
RDS is fairly expensive.
Make the RDS instance private instead of public in AWS to keep it fully secured.
AWS security groups add extra protection with inbound and outbound port rules.
Note: AWS advertises RDS Aurora (PostgreSQL-compatible) as several times faster than stock PostgreSQL, and RDS supports only specific versions.

Elastic Beanstalk Deployment with MongoDB

I would really appreciate some suggestions or resources on how to properly deploy on Elastic Beanstalk with the following stack:
MongoDB
Rails (Puma)
Sidekiq/Redis
Elasticsearch
Do I need to get all these things set up in ebextensions files? Or is it a matter of setting things up manually in AWS and then routing them together properly somewhere?
You definitely don't want to run all those on your Elastic Beanstalk servers. Elastic Beanstalk will automatically add or remove servers based on your traffic/server load. You don't want your database to be on one of those servers when it gets deleted.
Elastic Beanstalk is a Platform as a Service that is great for running web servers. There are other services on AWS such as ElastiCache (Redis/Memcached as a service) and Elasticsearch as a service. There are also third parties that provide services that run on AWS such as RedisLabs (Redis as a service) and MongoLab (MongoDB as a service).
You can decide to use any of these services to reduce the amount of system administration work you have to do yourself. Or you can manually set up EC2 Linux servers (outside of Elastic Beanstalk) and install things like Rails, MongoDB, and Elasticsearch on them and manage them yourself.
For your case I would recommend something like the following:
Rails: ElasticBeanstalk
MongoDB: MongoLab
Redis: RedisLabs
Elasticsearch: AWS Elasticsearch Service
You would want to set up each of those services and then simply add the connection information for each of them to your Elastic Beanstalk environment so Rails can use them.
Edit:
Here are the best instructions on setting up MongoDB on EC2 manually: https://docs.mongodb.org/ecosystem/platforms/amazon-ec2/
For ElastiCache and Elasticsearch, you just click around in the AWS console to provision a Redis server and get the URLs to connect to. Once you have set all these things up, you just need to put the connection parameters in your ElasticBeanstalk environments as custom environment variables, something like:
MONGO_DB_URL="Your MongoDB EC2 internal IP address"
REDIS_URL="the url ElastiCache provided you"
Then read those environment variables in your application when creating connections to those services.
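As a sketch of that last step (in Python for illustration; the app in question is Rails, and the pymongo/redis-py clients plus the variable names above are assumptions):

import os

import redis
from pymongo import MongoClient

# Read the connection settings Beanstalk injects as environment variables
mongo = MongoClient(os.environ["MONGO_DB_URL"])  # the MongoDB EC2 internal address
cache = redis.Redis.from_url(os.environ["REDIS_URL"])  # the ElastiCache endpoint URL

# Quick connectivity checks
print(mongo.admin.command("ping"))
print(cache.ping())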
Also, you are going to have to learn about setting up your VPC and security groups to enable everything to connect. For example, you will want your Elastic Beanstalk servers in one security group and your MongoDB server(s) in another. Then you will have to configure the MongoDB security group to allow access from the Beanstalk group on the MongoDB port. It's similar for ElastiCache. I think for Elasticsearch you will have to create an IAM role with access to the Elasticsearch API and then assign that role to your Beanstalk servers.
Of course there are also the administrative tasks of setting up Linux servers for your MongoDB cluster, configuring clustering, fail-over, automated backups, log archives, periodic security updates, etc. I know you have all this AWS credit, but you should weigh moving everything over to AWS against the cost of all the administrative tasks you will be spending time on. Elastic Beanstalk, Elasticsearch, and ElastiCache are a no-brainer if you are getting them for free, but my MongoLab bill would have to be fairly high to justify setting all that up and managing it myself.