How to deploy a web application

We have an internal web system that handles the majority of our company's business. Hundreds of users rely on it throughout the day; it's very high priority and must always be running. We're looking at moving to ASP.NET MVC 2; at the moment we use Web Forms. The beauty of Web Forms is that we can instantaneously release a single web page rather than deploying the entire application.
I'm interested to know how others deploy their applications while still keeping them accessible to users. Using the deployment tool in Visual Studio would supposedly cause an outage. I'm looking for a method that's as quick as possible.
If you had a high-priority bug fix, for example, would it be wise to mix Web Forms with MVC and temporarily replace the view with a code-behind Web Form until the next proper release, which wouldn't be a Web Form?
I've also seen solutions where two copies of the same web application run side by side on the same server, and you either change the root directory in IIS or change the web.config to point to a different folder. The problem with this is that you have to do an entire build and deploy even for a simple bug fix.
EDIT: To elaborate, how do you deploy the application without causing any disruption to users?
How is everyone else doing it?

I guess you could also run the MVC application uncompiled and just replace .cs files/views and such on the fly?
A Web Setup uninstall/install is very quick, but it kills the application pool, which might cause problems depending on how your site is built.
The smoothest way is to run it on two servers and store the sessions in SQL Server or shared state. Then you can bring S1 down and patch it, bring S1 back up, then bring S2 down, patch it, and bring it up again. Although this might not work if you make any major changes to the session-related parts of the code.
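As a rough sketch of that two-server rotation (Python pseudocode purely for illustration; the lb.drain/lb.enable helpers are hypothetical, since the real calls depend entirely on your load balancer):

import time

def rolling_patch(servers, lb, deploy):
    # Patch each server in turn while the other keeps serving. Assumes
    # sessions live in SQL Server/shared state so users survive the move.
    for server in servers:          # e.g. ["S1", "S2"]
        lb.drain(server)            # stop sending new requests to it
        time.sleep(30)              # let in-flight requests finish
        deploy(server)              # install the new build
        lb.enable(server)           # put it back into rotation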

Have multiple instances of your website running on multiple servers. The best way to do it is to have a production environment, a test environment, and a development environment. You can create test cases and run the load tests every time you have a new build; if it gets through all the tests, move the version into production ;).

You could have two physical servers each running IIS and hosting a copy of the site. OR you could run two copies of the site under different IIS endpoints on the SAME server.
Whichever way you cut it, you are going to need at least two copies of the site in production.
I call this an A<->B switch method.
Firstly, have each production site on a different IP address. In your company's DNS, add an entry pointing at one of the IPs and give it a really short TTL. Then you can update site B and pre-test/warm up the site by hitting its IP address directly. When it's ready to go, switch the DNS entry to point at site B. Once the TTL has expired, you can take down site A and update it.
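For the pre-test/warm-up step, a small script that hits site B directly by IP (while the DNS name still points at site A) is enough. A sketch using the requests library; the IP, host name, and paths below are placeholders:

import requests

SITE_B_IP = "203.0.113.2"        # placeholder: site B's address
HOST = "intranet.example.com"    # placeholder: the name users resolve

# Request a few key pages via site B's IP, sending the production Host
# header so IIS serves the right site. This compiles the pages and fills
# caches before any real user arrives via the DNS switch.
for path in ("/", "/login", "/reports"):
    r = requests.get(f"http://{SITE_B_IP}{path}", headers={"Host": HOST}, timeout=10)
    print(path, r.status_code)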
Using shared session state will help make the transition between sites seamless for users.

Related

Continuous deployment: how to deploy new features that are affecting client and server at the same time?

The server holds the logic; the iOS/Android app holds the UI. A common case.
How am I supposed to deploy new features in this case with a continuous deployment methodology?
I assume that a server-side deploy looks like this:
I trigger the new feature deployment, and the load balancer starts redirecting 1% of all users to the server instance with the new feature. If everything goes smoothly, the load balancer moves on to 10%, 30%, and so on up to 100%.
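(Something like this hypothetical weighted choice, which a load balancer would do internally; hashing on the user rather than rolling dice per request keeps each user on one version:)

CANARY_PERCENT = 1  # start at 1%, then raise to 10, 30, ... 100

def pick_backend(user_id: int) -> str:
    # Bucket users 0-99 by id so a given user consistently sees either
    # the old or the new version during the rollout.
    bucket = user_id % 100
    return "new-feature-instance" if bucket < CANARY_PERCENT else "stable-instance"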
The same can be done for client apps using, say, CodePush.
So if I deploy the server without the app, the new feature simply won't be used, and the deployment causes no problems for sure.
So I probably have to deploy the app first and put in some kind of server version checker, so that if the server has the API for the new feature, the UI for that feature is shown, and if the app is connected to the wrong server, the new UI is hidden.
That seems primitive. I'd need to pin the socket connection to the same server to avoid hitting the wrong one, right? And what if an instance/zone/region goes down and the user is suddenly redirected to another zone/region whose server doesn't have the new feature API? Probably my assumption is wrong.
I would say that your question is really about version compatibility of the server/client API rather than about CD. We have a similar requirement where a server and its clients communicate and both are constantly enhanced with features. I don't know your production software architecture, which might change the requirements accordingly, but I'll try to come up with some ideas.
I'm going to describe two cases which might apply to you.
First case:
Things are easier when you do not face the situation where new client versions need to communicate with old server versions. The new server version is deployed first, and old clients simply do not use the new feature, as you've already pointed out. In this situation my recommendation is to deploy the server app first and then start to roll out the new client apps. If that's possible, I would do it. It applies only when the new feature doesn't force you to break the API.
Second case:
In the case that new client app versions need to talk to an old server app, which I would try to avoid at all costs, the new client needs a switch inside to deactivate a feature (e.g. feature B) when it's talking to an old server that doesn't support it. An API version counter could be the solution, but it requires the client to be able to distinguish between server versions. In REST you often see the .../v1/... inside the URL, but it could be solved differently as well. Hopefully the API provides some mechanism to discover which version the server speaks.
We faced both cases at the same time; the protocol changed over time, including breaking changes, so we needed to implement an API version negotiation mechanism.
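A sketch of that negotiation idea (the /api/versions endpoint and the version numbers here are hypothetical): the client asks the server which API versions it supports and gates the new UI on the result:

import requests

CLIENT_SUPPORTED = {1, 2}       # versions this app build can speak
FEATURE_B_MIN_VERSION = 2       # feature B needs API v2 on the server

def negotiate(base_url: str) -> int:
    # Hypothetical discovery endpoint returning e.g. {"versions": [1, 2]}
    server_versions = set(requests.get(f"{base_url}/api/versions").json()["versions"])
    common = CLIENT_SUPPORTED & server_versions
    if not common:
        raise RuntimeError("no common API version")
    return max(common)

def feature_b_enabled(base_url: str) -> bool:
    return negotiate(base_url) >= FEATURE_B_MIN_VERSION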

How to make sessions persistent in Scalatra?

I have a webapp using the Scala-based Scalatra web framework. The problem is, anytime the application is re-deployed, or anytime the app-server is rebooted, all session data is lost. This means (to name one downside) users must re-login every time we make an update to the site.
Some research reveals there are, apparently, "container-specific" ways to make sessions persist across app and server reboots (e.g., in the case of Tomcat), but this has two shortcomings:
If the app is not always deployed in the same container (and in the case of Scalatra, an embedded Jetty is used for dev purposes) then I'll need separate configuration for each container.
Using a server-local configuration file is much more fickle -- it's likely to get lost in server migrations, and it won't be automatically available to each instance (e.g., to each developer) of the app, whereas something stored with the core application code is much easier to test, retain, and generally keep track of.
So, to sum up...
Is there a generic, container-neutral way to make sessions persistent? Even if only by overriding appropriate methods in the Java/Servlet stack and storing the session data manually?
Barring that, is there a way to store relevant configuration for multiple containers (e.g., for both Jetty and Tomcat) in my application code (web.xml or similar)?
Thanks -- any insights appreciated!
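To illustrate the first option: storing the session data manually boils down to keeping only a random ID in the cookie and persisting the attributes yourself, which is container-neutral by construction. A rough sketch (in Python purely for illustration; every name here is hypothetical):

import json, sqlite3, uuid

# The cookie holds only a random session ID; the attributes live in a
# database, so they survive redeploys and server reboots.
db = sqlite3.connect("sessions.db")
db.execute("CREATE TABLE IF NOT EXISTS sessions (id TEXT PRIMARY KEY, data TEXT)")

def save_session(session_id, data):
    session_id = session_id or uuid.uuid4().hex
    db.execute("REPLACE INTO sessions VALUES (?, ?)", (session_id, json.dumps(data)))
    db.commit()
    return session_id   # set this as the cookie value

def load_session(session_id):
    row = db.execute("SELECT data FROM sessions WHERE id = ?", (session_id,)).fetchone()
    return json.loads(row[0]) if row else {}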

What are the limitations of the Flask built-in web server?

I'm a newbie in web server administration. I've read multiple times that the Flask built-in web server is not designed for "production" and must be used only for tests and debugging...
But what if my app serves only a thousand users who occasionally send data to the server?
If it works, when will I have to bother with configuring a more sophisticated web server? (I am looking for approximate metrics.)
In a nutshell, I would love to know what the built-in web server can do (with approximate thresholds) and what it cannot.
Thanks a lot!
There isn't one right answer to this question, but here are some things to keep in mind:
With the right amount of horizontal scaling, you could quite possibly keep scaling out the debug server forever. Exactly when you would need to start scaling (or switch to a "real" web server) also depends on the environment you are hosting in, the expectations of the users, etc.
The main issue you would probably run into is that the server is single-threaded: it handles each request one at a time, serially. This means that if you are trying to serve more than one request (including favicons, static items like images, CSS and JavaScript files, etc.), the requests will take longer. If any given request happens to take a long time (say, 20 seconds), then your entire application is unresponsive for that time. This is only the default, of course: you could bump the thread count (or have requests handled in other processes), which might alleviate some issues. But once again, it can still be slow under a "high" load, and what counts as "high" depends on your application and the maximum response time you consider acceptable.
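(For reference, turning that on in the development server is a one-liner; threaded=True is an argument Flask's app.run accepts, though it doesn't change the "not for production" advice:)

from flask import Flask

app = Flask(__name__)

@app.route("/")
def index():
    return "hello"

if __name__ == "__main__":
    # threaded=True makes the dev server handle each request in its own
    # thread, so one slow request no longer blocks all the others.
    app.run(threaded=True)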
Another issue is security: if you are concerned at ALL about security (and not just the security of the data in the application itself, but the security of the box that will be running it as well) then you should not use the development server. It is not ready to withstand any sort of attack.
Finally, the development server could just fail outright. It is not designed to be used as a long-running process (days, weeks, months), and so it has not been well tested to work in this capacity.
So, yes, it has limitations. Yes, you could still conceivably use it in production. And yes, I would still recommend using a "real" web server. If you don't like the idea of needing to install something like Apache or Nginx, you can go with a solution that is still as easy as "run a python script" by using one of the WSGI standalone servers, which can run a production-grade server with something as simple as python run_app.py on the command line. You typically just need a 4-5 line python script to import and create the server object, point it to your Flask app, and run it.
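For example, with waitress (one of those standalone WSGI servers), that whole script can look like this:

# run_app.py -- serve the Flask app with waitress, a pure-Python WSGI
# server intended for production use (pip install waitress)
from waitress import serve
from myproject import app  # "myproject" as in the gunicorn example below

if __name__ == "__main__":
    serve(app, host="0.0.0.0", port=8080)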
gunicorn could be run with only the following on the command line, no extra script needed:
gunicorn myproject:app
...where "myproject" is the Python package that contains the app Flask object. Keep in mind that one of developers of gunicorn would probably recommend against this approach. See https://serverfault.com/questions/331256/why-do-i-need-nginx-and-something-like-gunicorn.
The OP has long since moved on, but for those who encounter this question in the future I would just add that setting up an Apache server, even on a laptop, is free and pretty easy. It can readily be configured for as few or as many features as you want just by uncommenting or commenting out lines in the config file. There might be an even easier GUI method for doing that nowadays, but just editing the configs is simple.

Upgrading an app running on Lift Framework?

I've recently discovered the lift framework and have read that it's stateful.
Therefore, if I had a high-traffic site running on Lift - say something that was running a chat application that required users to be logged in - and I wanted to upgrade my app, would doing so kick everyone out of chat and make them have to log in again?
None of the previous answers are correct. Many of the artefacts held within the LiftSession are non-serializable, so they can't be stuffed into a database. You have two options for doing rolling upgrades of stateful applications:
1) Session bleeding. Basically you wean one deployment's sessions away until they have ended or some duration X has passed, and then you remove that app from production while automatically rerouting traffic to another instance of Lift. Google around for rolling upgrades using HAProxy, as this should help from the cluster perspective.
2) If your state is fairly trivial (mostly primitive-style types: ints, strings, etc.) then you could think about using ContainerVar/MigratableSession and clustering the state using Terracotta or similar. This comes with a range of limits, though, because it then uses the HTTPSession rather than the LiftSession.
You might want to check out chapter 15 of Lift in Action, which details the latter solution in a fair amount of depth.
If you keep your state in memory and redeploy the web application, that state will be lost. You could save it to a database or a file before redeploying though and read it back from there.
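If the state really is that simple to serialize, the save-and-restore can be as small as this sketch (the file path and the idea of wiring it into the app's shutdown/startup hooks are assumptions):

import json, os

STATE_FILE = "/var/lib/myapp/state.json"   # hypothetical location

def save_state(state):
    # Call just before redeploying: flush the in-memory state to disk.
    with open(STATE_FILE, "w") as f:
        json.dump(state, f)

def load_state():
    # Call on startup: read the state back if a previous run left it.
    if os.path.exists(STATE_FILE):
        with open(STATE_FILE) as f:
            return json.load(f)
    return {}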

Can Microsoft Windows Workflow route to specific workstations?

I want to write a workflow application that routes a link to a document. The routing is based upon machines not users because I don't know who will ever be at a given post. For example, I have a form. It is initially filled out in location A. I now want it to go to location B and have them fill out the rest. Finally, it goes to location C where a supervisor will approve it.
None of these locations has a known user; that is, I don't know who it will be. I only know that whoever it is is authorized (they are assigned to the workstation and approved to be there).
Will Microsoft Windows Workflow do this, or do I need to build my own workflow based on SQL Server, IP addresses, and so forth?
Also, how would the user at a workstation be notified that a document had been sent to their machine?
Thanks for any help.
I think if I were approaching this problem, Windows Workflow would work for it. What you want is a state machine with three states:
A Start
B Completing
C Approving
However, workflow needs to run in one central place (trust me on this: you only want one workflow runtime running at a time, otherwise the same bit of work can be done multiple times; see our questions on the MSDN forum). So a central server running the workflow is the answer.
How you present this to the users can be done in multiple ways. Dave suggested using an ASP.NET site to identify the machines that are doing the work, which is probably how I would do it. However, you could also write a Windows Forms client that does the same thing. This would require something like SOAP/WCF to facilitate communication between the client form applications and the central workflow service. It would have the advantage that you could use a system tray icon to alert the user.
You might also want to look at human workflow engines, as they are designed to do things such as this (and more); I'm most familiar with PNMsoft's Sequence.
You can design a generic "routing" workflow that will cause data to go to a workstation. The easiest way to do this would be to embed the workflow in an ASP.NET application. Each workstation should visit the application with a workstation ID in the querystring:
http://myapp/default.aspx?wid=01
When the form is filled out at workstation A, the workflow running in the web app can enter it into the "work bin" of the next workstation. Anyone sitting at the computer for which the form is destined will see it appear in their list of forms to review. You can use AJAX to make it slick and auto-updating.
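The work-bin idea itself is framework-neutral. A minimal sketch of it (in Python rather than ASP.NET, purely for illustration; all names are hypothetical):

from collections import defaultdict

# One queue of pending document links per workstation ID (the "wid"
# from the querystring).
work_bins = defaultdict(list)

def route_document(doc_link, next_wid):
    # Called when a form is completed at one location: drop a link to
    # the document into the next workstation's bin.
    work_bins[next_wid].append(doc_link)

def pending_for(wid):
    # What the page at default.aspx?wid=... would render, and what an
    # AJAX poll would refresh.
    return work_bins[wid]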