Windows Azure production vs staging server and Facebook integration

We use Windows Azure Cloud Services to host our application. One of the great features of Windows Azure is the production/staging model. You can have the clients of your application routed to your production server while you test your new code on a staging server. For example, you can configure Azure to point the production server at http://www.coolapp.com while the staging server for the same app gets a URL like http://7f8e9d5ba73a4f7ea9ebd65a02ee195d.cloudapp.net.
Physically, both of these servers are publicly facing. If you knew the cryptic URL of the staging server, you could browse to the app just as easily as you could browse to www.coolapp.com. However, the GUID in the URL makes it virtually impossible to guess, which effectively makes the staging server "private". This gives the developers of an application a nice mechanism to deploy and test new bits on the staging server before releasing them to the public. Once they are sure things look good, they swap the two servers with the flip of a switch, making the staging server the production server and vice versa.
This model creates a small problem for us in relation to Facebook integration. To integrate Facebook plugins you have to register your app with Facebook, which then issues AppId and AppSecret keys. These keys are tied to the URL of your application. So for my app to work with FB plugins, I need one set of keys tied to 7f8e9d5ba73a4f7ea9ebd65a02ee195d.cloudapp.net and another set tied to www.coolapp.com.
Everything I read about Windows Azure urges developers to treat the staging and production servers as the same; the only difference between them should be the URL. In other words, Azure recommends against basing your app logic on which server the code happens to be running on, as the code has no inherent way of knowing this. Staging vs. production is just a handy "abstraction", if you will. I guess you see the problem here. In the example above, I have to use one set of FB-issued keys or the other depending on which URL (production vs. staging) my app is running at. I assume I am not the first one to run into this problem. What are the correct ways of handling it? One obvious way is to sniff the URL property of the Request object and branch my logic that way. However, intuition tells me this is a hack. Any other ideas?
Regards,
Archil

The mechanisms I know of are:
using "production" within a totally separate service account to "testing" - this leaves "staging" within the production service to be used as an area for "deployment candidates" and provides a separate clean testing domain with a non-changing URL for earlier "dev and test" work.
using different .cscfg files for staging and production - and being careful to update this .cscfg before you do any live switching.
sniffing the incoming URL - as you suggest
Personally, I use the first of these techniques - it's easy and it helps prevent nasty accidents.
As an aside, one of the techniques we've used for "removing" the GUID from staging is to point a CNAME record at the GUID URL with a really short TTL on the DNS - this allows us to quickly and automatically update the CNAME record for the staging server when we deploy.
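A minimal sketch of those last two mechanisms, assuming a classic web role and the standard RoleEnvironment API; the setting names ("FacebookAppId", "FacebookAppSecret") and the placeholder key values are made up for illustration:

    using System;
    using System.Web;
    using Microsoft.WindowsAzure.ServiceRuntime;

    public static class FacebookKeys
    {
        // Mechanism 2: keep a different pair of values in the production and
        // staging .cscfg files and read them at runtime. The setting names are
        // hypothetical - declare whatever names you like in the .csdef.
        public static string AppId
        {
            get { return RoleEnvironment.GetConfigurationSettingValue("FacebookAppId"); }
        }

        public static string AppSecret
        {
            get { return RoleEnvironment.GetConfigurationSettingValue("FacebookAppSecret"); }
        }

        // Mechanism 3: sniff the incoming host and branch. It works, but it
        // couples the code to the URLs, which is why it feels like a hack.
        public static string AppIdForRequest(HttpRequest request)
        {
            return request.Url.Host.Equals("www.coolapp.com", StringComparison.OrdinalIgnoreCase)
                ? "<production AppId>"
                : "<staging AppId>";
        }
    }

The catch with the .cscfg route is exactly the one noted above: configuration travels with the deployment, not with the URL, so you must update the settings before (or immediately after) a swap, or production will briefly run with staging's keys.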

Related

Automate publishing .NET Core App to multiple subdomains

I have a .NET Core application used by many clients. Each client has its own subdomain. Each client has its own database. All databases are identical. Different data, same schema. The subdomains and database names look something like this.
client1.myapp.com Client1Db
client2.myapp.com Client2Db
client3.myapp.com Client3Db
I use Visual Studio Web Deploy to publish. For each client, I have a set of publish settings.
Each set of publish settings specifies the proper subdomain, the database name for that subdomain, and the Entity Framework migration info for that subdomain.
I end up with a list of .pubxml files in my VisualStudioSolution\Properties\PublishProfiles folder that looks kind of like this...
client1.myapp.com.pubxml
client1.myapp.com.pubxml.user
client2.myapp.com.pubxml
client2.myapp.com.pubxml.user
client3.myapp.com.pubxml
client3.myapp.com.pubxml.user
It was not a big deal to publish these individually when I only had a few subdomains to publish, but now I'm getting enough clients that it is becoming a pain.
Can someone recommend a way to automate this process? I'm not even sure where to start. Thanks!
You could use a DevOps pipeline - for example, Jenkins X - and deploy a single app to multiple servers.
You can start with: Jenkins jobs in multiple servers
There are other CD (continuous delivery) platforms and services, but this question may have a better home on devops.stackexchange.
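In the meantime, if you want something lighter than a full CD platform, a small tool that loops over the publish profiles you already have can take the pain out of the manual clicking. A minimal sketch, assuming the folder layout from the question; note that profiles using Web Deploy/MSDeploy may need full MSBuild (/p:DeployOnBuild=true /p:PublishProfile=...) on Windows rather than dotnet publish, depending on the setup:

    // Enumerates every .pubxml in the PublishProfiles folder and shells out
    // to "dotnet publish" for each one, so all subdomains deploy in one go.
    using System;
    using System.Diagnostics;
    using System.IO;

    class PublishAll
    {
        static void Main()
        {
            // Adjust to your solution layout; this path is an assumption.
            const string profileDir = @"VisualStudioSolution\Properties\PublishProfiles";

            foreach (var profile in Directory.GetFiles(profileDir, "*.pubxml"))
            {
                var name = Path.GetFileNameWithoutExtension(profile); // e.g. client1.myapp.com
                Console.WriteLine($"Publishing {name}...");

                var psi = new ProcessStartInfo("dotnet",
                    $"publish -c Release -p:PublishProfile={name}")
                {
                    UseShellExecute = false
                };

                using (var process = Process.Start(psi))
                {
                    process.WaitForExit();
                    if (process.ExitCode != 0)
                        Console.Error.WriteLine($"Publish failed for {name}");
                }
            }
        }
    }

From there it is a short step to running the same loop from a CI job, so that a push to a release branch publishes every subdomain automatically.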

How Do Service Connections Work For On-Prem Agents Connecting To On-Prem Services?

This question is purposefully general because I'm trying to understand things more from an architectural perspective, because that will impact which group I need to contact. My team is using Azure DevOps (cloud) with on-prem build agents. The agents connect to ADO via a proxy.
We use several tools in-house provided by vendors with ADO plugins in the Marketplace that require us to set up service connections. Because the services are installed on-prem, the endpoints we enter are not available via the Web (e.g. https://vendor-product.my-company.com).
If I log into the build machine and open up IE, I am able to connect to the service endpoint URL. However, whenever I try to run a task from ADO, it fails with some kind of connection-related issue ("The underlying connection was closed: An unexpected error occurred on a send", "Task ended with an exception: Error: read ECONNRESET", etc.).
The way I thought it worked, all the work takes place on the build machine itself, so the calls would be going from my-build-server.my-company.com to https://vendor-product.my-company.com. Those error messages though make me wonder if the connection is actually coming from https://dev.azure.com.
So the questions I have are:
For situations like this, is the connection to a service endpoint going to be seen as coming from my on-prem build agent, or from ADO (or does it vary based on how the vendor writes their plugin)?
If the answer to #1 is "it varies", is there any way for me to tell just from the plugin itself without having to contact the vendor? (In my experience some of the vendor reps don't understand how the cloud works.)
and/or
Because my build agent was configured to use a proxy when I set it up, is it going to use that proxy for all connections, even internal ones? I think I can set up a proxy bypass list for the agents but I presently only have read access to the build box. I can request temporary elevated access but I'd need some level of confidence that's what the issue is.
Hope I explained the situation clearly, thanks in advance for any insight.
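For what it's worth: tasks from Marketplace extensions normally execute on the agent itself (agentless "server" tasks are the exception), so the connection should originate from your build server, not from dev.azure.com. However, an agent configured with --proxyurl does route task traffic through that proxy by default, which would explain connection resets for internal hosts. The agent supports a bypass list: a .proxybypass file in the agent's root directory, one regular expression per line, matched against request URLs. A sketch for the example endpoint above:

    vendor-product\.my-company\.com

Adding the file does require write access to the agent's directory, but it is a small, low-risk change to request.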

AWS deployment without using SSH

I've read some articles recently on setting up AWS infrastructure without enabling SSH on EC2 instances. My web app requires a binary to run. So how can I deploy my application to an EC2 instance without using SSH?
This was the article in question.
http://wblinks.com/notes/aws-tips-i-wish-id-known-before-i-started/
Although doable, as the article says, it requires you to think of servers as ephemeral. A good example of this is web services that scale up and down depending on demand. If something goes wrong with one of the servers, you can just terminate it and spin up another one.
Generally, you can accomplish this using a pull model. For example, at boot the instance pulls your code from a git/mercurial repository and then executes scripts to set itself up. The scripts also set up all the monitoring required to determine whether your server and application are up and running appropriately. You would still need an SSH client on the instance if you want to pull your code over ssh (although you could also do it over HTTPS).
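The pull itself can be a few lines in whatever you bake into the image. Here is a minimal sketch of the boot-time pull in C#; the repo URL and script path are made-up placeholders, and cloning over HTTPS avoids needing SSH at all:

    // Hypothetical bootstrapper run once at instance startup (e.g. from the
    // instance's init system): pull the latest code over HTTPS, then hand
    // off to the repository's own setup script.
    using System.Diagnostics;

    class Bootstrap
    {
        static void Run(string fileName, string arguments)
        {
            var psi = new ProcessStartInfo(fileName, arguments) { UseShellExecute = false };
            using (var process = Process.Start(psi))
            {
                process.WaitForExit();
            }
        }

        static void Main()
        {
            Run("git", "clone --depth 1 https://github.com/example/myapp.git /srv/myapp");
            Run("/srv/myapp/deploy/setup.sh", ""); // installs deps, starts the app, wires up monitoring
        }
    }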
You can also use configuration management tools that don't use ssh at all, like Puppet or Chef. Essentially, your node/server pulls all of its application and server configuration from the Puppet master or the Chef server, and the Puppet agent or Chef client then performs all the configuration/deployment/monitoring changes needed for your application to run.
If you go with this model, I think one of the most critical components is monitoring. You need to know at all times whether something is wrong with one of your servers, and when something does go wrong, discard the server and spin up a new one. (Even better if this whole process is automated.)
Hope this helps.

Umbraco on Azure: can I change hostname?

I've deployed a website made with Umbraco to Windows Azure, using the Windows Azure Accelerator for Umbraco.
For development and testing I used a test hostname. Now it's time to switch to the official DNS hostname.
How can I change the current hostname?
I configured the hostname at deployment time (the only way I know of to do this), but I can't simply deploy again, since many files have been changed while working on the website directly on Azure.
EDIT
Let me explain: at the Domain Name prompt during deployment I entered "test.mywebsite.com" and configured the real DNS accordingly.
Now the website is configured, so I'd like to make mywebsite.com point to that site.
But it isn't enough to just configure the DNS for mywebsite.com! Do I have to deploy again? And will I lose any of the changes I made?
I'd like to make two comments on your question:
1) In order to host your Azure application under a custom host name, you will need to sign up with a DNS provider that supports CNAME records (most do). I suggest someone like GoDaddy.com, because by default CNAME records can only resolve names like "www.domainname.com" and cannot do anything for queries where the "www." is dropped from the URL. DNS providers like GoDaddy also offer an option to redirect all traffic destined for "domainname.com" to a URL of your choice. This is a huge deal for Azure apps. Frankly speaking, it is somewhat disappointing that for all the PaaS and IaaS features of Azure, DNS was not included in the overall package.
2) I am a little worried when you say that you can no longer redeploy your app due to the changes made. Can you elaborate on that? Have you made changes to the application's code running on VMs in Azure without going through the redeployment process? If so, this is a huge no-no. Your VMs running in Azure are not "permanent". Microsoft and your redeployment process can (and will) re-stage those VMs to the original package at any given time. Microsoft will re-image your VMs at least once a month during their monthly OS upgrades, but they can also do so when they need to move your VM to another rack, etc. Whatever changes you make to your app must be stored either in source control before deployment or in a permanent storage facility like SQL Azure or Azure Storage.
HTH
Finally, I think the answers to my questions are:
- Do I have to deploy again? Yes, I must deploy again.
- Will I lose any of the changes I made? Many changes will be kept, since they are stored in the database, but I have to do a lot of work to make the new website function!
This answer confirms my theory:
In my case, I created and uploaded a site with a name, let's say http://www.contoso.com, and then bought a domain from a registrar, let's say http://www.example.com. When I mapped http://MyAcceleratorsService.cloudapp.net/ to my new domain (http://www.example.com) and tried to open that domain, I got the home page of the Accelerator and not the uploaded site.
I had to upload the site again to Azure (using UploadUmbracoSite.cmd from the Accelerator application) and, when uploading, enter the same domain name as the one I registered: http://www.example.com. Then I was able to browse my uploaded site as expected.
As for your question: I would upload the site again using UploadUmbracoSite.cmd (it's in the Setup folder) and enter the new domain name when requested.
Exactly what I was trying to avoid... but the only solution, I suppose.
Well, it was not easy to publish again; I got errors of many kinds (I suppose tied to components I installed after the original deployment, which are not installed in the newly deployed website). I'm working through them.
Edit
Completed my work:
- loads of different attempts; none worked
- CTP backup of the DB
- deleted the DB and the website
- new full deploy of Umbraco
- CTP restore of the DB
Finally:
- all work on content is OK
- all work on styles, pages and templates is lost
Changing the hostname is hard; don't use a test hostname - use the definitive hostname from the beginning.
If anyone has a suggestion, I'll be pleased to test it, anyway.
This is not really an answer to your question, but it might be a solution to your problem: use a CNAME record to make the production DNS name point to your development name, e.g. www.productionname.com would then point to www.testname.com. I am not sure if everything will just work out of the box, but it seems worth a try.
This requires that your hosting provider allows you to set up CNAME records.
http://en.wikipedia.org/wiki/CNAME_record

Sensible deployment using EC2

We're currently using RightScale, and every time we deploy, we execute a script on the server or server array that we want to update. It pulls the code from a GitHub repository, creates a new folder in /var/www/releases/TIMESTAMP, and symlinks the document root, /var/www/current, to that directory.
We're looking for a better deployment strategy, such as something where we SSH into one of the servers on the private network and run a command-line script to deploy what we want.
However, this means that this one server has to have its public key in the authorized_keys of all of the servers we want to deploy to. Is this safe? Wouldn't this create a single server through which all the other servers can be accessed?
What's the best way to approach this?
Thanks!
We use a similar strategy to deploy, though we're not with Rightscale anymore.
I think that approach is generally fine, and I'd be interested to learn what you think is not sensible about it.
If you want to do your ssh thing, then I'd go about it as follows:
Lock down ssh using security groups, e.g. open ssh only to specific IPs or to servers carrying a deploy security group, or similar. The disadvantage here is that you might lock yourself out when the other servers are down, etc.
I'd put public keys on each instance to allow password-less login. If you're security-conscious, rotate those keys on a monthly basis or, for example, when employees leave.
Use Fabric or Capistrano to log into your servers (from the deploy master) using ssh and do your deployment.
Again, I think RightScale's approach is not unique to them - a lot of services do it like that. The reason is that, for example, when you symlink and keep the previous version around, it's easier to roll back, and so on.
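To make that last point concrete, here is a minimal sketch of the release-folder-plus-symlink pattern in C#; the paths mirror the question, and Directory.CreateSymbolicLink needs .NET 6+ (on older stacks you would shell out to ln -sfn instead):

    // Each deploy gets its own timestamped folder under releases/, and the
    // "current" symlink is flipped to point at it. Rolling back is just
    // pointing "current" at the previous release again.
    using System;
    using System.IO;

    class SymlinkDeploy
    {
        static void Activate(string releaseDir)
        {
            const string current = "/var/www/current";

            // Remove the old link (this deletes the link itself, not its target).
            if (File.Exists(current) || Directory.Exists(current))
                File.Delete(current);

            Directory.CreateSymbolicLink(current, releaseDir);
        }

        static void Main()
        {
            var release = $"/var/www/releases/{DateTimeOffset.UtcNow.ToUnixTimeSeconds()}";
            // ... pull the code into 'release' here (e.g. git clone), then:
            Activate(release);
        }
    }

Note that this simple version is not atomic: there is a brief window between deleting and re-creating the link. Deployment tools usually create the new link under a temporary name and rename it over the old one.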