Remote application deployment

We have a C# .NET 4.0 Windows application (developed using VS 2010) which we deploy to a terminal server. This application makes use of several WCF services sitting on another server.
Our users access the front-end via remote desktop session. (They all have a .RDP file on their desktops.)
My question is regarding the deployment of this front-end. Currently, if we need to do an emergency deployment during business hours, we need to kick all the users who are hooked into the app off the server (as they are using the DLLs we need to replace). This is obviously not ideal. We work in quite a business-critical environment, so these deployments are unavoidable. I've investigated ClickOnce, but have read that you cannot use it with terminal services applications. (Which kind of makes sense, since it's essentially one app being "accessed" by several clients...)
I would like to be able to do a "silent" deployment whereby the user knows nothing about the fix until they restart their instance of the application. I'm not sure this is even possible?
I would appreciate any guidance or suggestions on this!

Yep, I do this all the time with an RD app; you just need to move or rename the DLLs instead of deleting them. Windows allows moves and renames while DLLs are in use, but prevents you from deleting them. If you use Windows Installer to deploy your app, it will do the moves automatically (and delete the old versions when the system is next rebooted).
Once you replace the DLLs this way, existing sessions will continue to use the old, renamed versions, and new sessions will use the new versions. Of course, depending on how many DLLs you have, how long it takes your app to load them into memory, and how much activity you have on your server, you could run into a scenario where the app loads some of the old DLLs and some of the new ones when you're in the middle of updating them, but that would likely be rare.
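Here's a minimal sketch of that move-then-copy swap as a standalone step; the AppDir and StagingDir paths are hypothetical, so adapt them to your environment:

    using System;
    using System.IO;

    class SilentDllSwap
    {
        // Hypothetical paths; adjust for your environment.
        const string AppDir = @"C:\Apps\FrontEnd";
        const string StagingDir = @"C:\Deploy\Staging";

        static void Main()
        {
            foreach (var newDll in Directory.GetFiles(StagingDir, "*.dll"))
            {
                var target = Path.Combine(AppDir, Path.GetFileName(newDll));
                if (File.Exists(target))
                {
                    // Renaming an in-use DLL succeeds even while sessions have it
                    // loaded; deleting it would fail with a sharing violation.
                    var parked = target + "." + DateTime.Now.ToString("yyyyMMddHHmmss") + ".old";
                    File.Move(target, parked);
                }
                File.Copy(newDll, target);
            }
            // Existing sessions keep running from the renamed copies; new sessions
            // load the fresh DLLs. Clean up the *.old files after the next reboot.
        }
    }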

Related

PhpStorm - debug on local, deploy on remote server

I'm working on a website using PhpStorm. For a long time I developed it locally, but then I got hosting and a remote FTP server.
I created a new project in PhpStorm with the settings for the remote host, and I found that deploying code takes a long time (over a minute) before I can see the result, which is quite uncomfortable when debugging.
Is there any way to work with the code on a local server and, when I think the project is ready to deploy, just send it to the remote server?
I understand that I could just work in two different projects and deploy the "ready" version to the server via FTP, but maybe there is a more comfortable way?
There are several answers to this question, and most of them are opinion-based, but I will try to keep this objective.
Case 1
A big corporation gives every developer a sandbox to test their code on, and requires every developer to keep their code on that sandbox.
Using mounted drives can be extremely slow, especially when PhpStorm is indexing.
Case 2
An easy way to keep an automatic backup of your code is to use the built-in (S)FTP(S) upload/deploy.
Solution
In both cases you can use the auto-deploy feature, which uploads every change to the server on save; that way the deploy doesn't take over a minute, and the file is usually there before you know it.
I cannot recommend using this deployment feature for production, as it bypasses your version control, SAT, security setup, etc. For production I would suggest something like Rocketeer instead.
EDIT:
As for the two projects: you can define two different deployment servers, use the default one for your testing (with auto-upload or similar), and select the other one from the deployment menu when you want to release.

TFS Intranet Automated Deploy Strategy

I have introduced branching/merging to my team and have talked before about how it would be great to automatically build and deploy code checked into the staging/master branches, but I'm a junior dev, not very ops-y.
The trouble I'm having is that we create intranet applications and store them on our own VMs, which we have access to, but we also have load balancing, which is causing me grief!
I can get a build to automate (well, I haven't got all the bugs figured out but I'm working my way through them) - and I can even get the build to automatically create a zip file ready for deployment.
Is it possible to configure several servers for deployment?
I.e.:
1) I check in some code to stage
***Automatically***
2) Code builds
3) Build completes, unit tests run and pass
4) Code is packaged into a .zip
5) .Zip is deployed across the three load balancing servers (all with the same file path).
***
It's maybe worth noting that our TFS server also runs Visual Studio, so the code is built on the same server where it is stored, but this is not the server we run live code from.
Any help or tutorials specific to my setup would be GREATLY appreciated, I really want to turn this departments releasing strategies around!
I am going to address only the deployment aspect. There are a lot of different ways that this can be handled, such as:
Customizing the build template
Writing custom .Net code and inserting it into the build template (which would also involve customizing the template)
Creating a batch or PowerShell script set to run after the build completes
Using a separate tool such as Octopus Deploy or Release Management to handle the deployments
The first thing you need to do is separate the build and deployment steps in your head. While they are tightly coupled in your model, they are two totally different tasks that need to be handled in different ways.
The second thing is to stop thinking like a developer when it comes to the deployment portion. While there will likely be a programmatic solution, you'll need to identify the manual steps first.
You stated that you're not very ops-y, by which I assume you mean you're more a developer than a systems analyst. If that is the case, then the third thing you'll need to do is get someone involved who is, such as your current release team.
There are 3 major things that need to be done then:
EVERYTHING needs to be standardized. If you can't standardize something, then standardize the way that it's non-standard. (Example: you have a bulk list of servers you need to deploy to, and you need to figure out which ones to deploy to based on their names, which can be anything. In that case, a rule needs to be put in place that all QA servers have QA in their name, User Acceptance servers UAT, Production PROD, etc.; see the sketch after this list.)
Figure out how you're going to communicate from the build to the deployment: which builds are going to be deployed, to which servers, and where the code is going to be picked up from.
You need to document every manual step, and every exception to those steps, and every exception to those exceptions.
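To make the first point concrete, here is a minimal sketch of selecting deployment targets by such a naming rule; the server names are hypothetical:

    using System;
    using System.Collections.Generic;
    using System.Linq;

    static class ServerSelector
    {
        // Hypothetical naming rule: the environment token (QA, UAT, PROD)
        // appears somewhere in each server's name.
        public static IEnumerable<string> ForEnvironment(
            IEnumerable<string> servers, string envToken)
        {
            return servers.Where(s =>
                s.IndexOf(envToken, StringComparison.OrdinalIgnoreCase) >= 0);
        }
    }

    class Demo
    {
        static void Main()
        {
            var all = new[] { "WEB-QA-01", "WEB-QA-02", "WEB-UAT-01", "WEB-PROD-01" };
            foreach (var s in ServerSelector.ForEnvironment(all, "QA"))
                Console.WriteLine(s); // prints WEB-QA-01 and WEB-QA-02
        }
    }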
Once you have all those pieces in place, you need to go through each manual step and automate it, whether that's through batch, PowerShell, or a custom-built application. Once you have all the steps automated, you'll have both the build and deploy pieces complete.
After you're able to execute a single "manual" automatic deployment to a single environment, you're then ready to figure out how you want to run it for multiple environments. This can range from an XML file that is iterated through to simply calling the same command multiple times with different parameters.
A quick summary of how I've done this at my current job (where using a third-party deployment tool was not an option):
Created a tool using .NET WinForms to allow us to "manually" run automated builds. (We use the interface to determine the input parameters, and the custom classes under the hood do all the heavy lifting. These custom classes live in a separate project that builds to its own DLL, which also lets us test tweaks and changes to the process in a testing environment before rolling them out to our production build server.)
Set up an XML file for each environment (QA, UAT, Prod, etc.) that contains all of the servers that need to be deployed to in that environment, including destination paths, scheduled tasks, and Windows services
Customized the TFS build template to include the custom classes created for the tool, which read the XML file and iterate through each server entry to perform the deployments (see the sketch below)
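As a rough illustration of that last step, here is a minimal sketch of the XML-driven loop; the schema, file names, and paths are all hypothetical:

    using System;
    using System.IO;
    using System.Xml.Linq;

    class XmlDrivenDeploy
    {
        // Assumed schema: <environment name="QA">
        //                   <server host="WEB-QA-01" path="\\WEB-QA-01\d$\Sites\MyApp" />
        //                 </environment>
        static void Main(string[] args)
        {
            var doc = XDocument.Load(args.Length > 0 ? args[0] : "qa-servers.xml");
            var zip = @"C:\Drops\MyApp\latest\MyApp.zip"; // package produced by the build

            foreach (var server in doc.Root.Elements("server"))
            {
                var host = (string)server.Attribute("host");
                var path = (string)server.Attribute("path");
                Console.WriteLine("Deploying to " + host + "...");
                File.Copy(zip, Path.Combine(path, Path.GetFileName(zip)), overwrite: true);
                // A real deployment would also stop services/scheduled tasks,
                // unzip the package, and start them again here.
            }
        }
    }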
I'm more than happy to help with more specific examples and assistance. I look at things a bit differently than most people, and it helps when it comes to release management.

How does a production deployment for cloud-based software happen?

Let's take the example of heavily used cloud-based software.
When a deployment happens, let's say users are online.
Won't the server require a stop and start after the deploy? How is service continuity maintained?
How will ongoing user sessions and unsaved data be carried over post-deploy?
How is the risk managed? (Let's say an issue comes up after the deploy and you need to revert to the older version; now imagine a user has already worked on the new version and saved some data with it, which is not compatible with previous versions.)
Whether the server requires a stop and start after a deploy depends on the technology being used. Any state information that is kept within the server can be externalized (i.e., written to disk or another store) for the purpose of updating the application; whether this is necessary, and whether it is done, again depends on the technology.
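As a toy illustration of externalizing state across a restart, here is a minimal sketch; the file path and the flat key/value shape of the session data are assumptions:

    using System;
    using System.Collections.Generic;
    using System.IO;

    // Minimal sketch of externalizing in-memory session state so a server
    // process can be stopped for an update and resume where it left off.
    class SessionStore
    {
        const string StateFile = @"C:\AppState\sessions.txt"; // hypothetical path
        public Dictionary<string, string> Sessions = new Dictionary<string, string>();

        // Called before the server process stops for the deploy.
        public void Save()
        {
            Directory.CreateDirectory(Path.GetDirectoryName(StateFile));
            using (var w = new StreamWriter(StateFile))
                foreach (var kv in Sessions)
                    w.WriteLine(kv.Key + "\t" + kv.Value);
        }

        // Called when the updated server starts back up.
        public void Load()
        {
            if (!File.Exists(StateFile)) return;
            foreach (var line in File.ReadAllLines(StateFile))
            {
                var parts = line.Split('\t');
                Sessions[parts[0]] = parts[1];
            }
        }
    }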

ClickOnce check for update without executing app

I work in a service organization where users of our internal tools are often disconnected. It is often the case that service engineers on service assignments are "stranded" with an outdated version of some internal tool.
These tools are deployed using ClickOnce publishing (VS 2010, .NET 4). If the users ran all their apps while still connected to the corporate network, they would get a notification that a new version was available. As the number of tools increases, so does the chance that some app is not updated.
Is it possible to automate this process, by a batch file or something?
So that the engineers just need to run one file when connected to the corporate network to get the newest versions of all their installed tools?
Added:
An easier way of saying it: have "something like Windows Update" operating on the corporate network, but for internal ClickOnce apps.
Very interesting question. I can't think of a quick way to do this, but it's definitely possible.
I would create another ClickOnce app whose job is to update the other ClickOnce apps. This app needs the URL of each app's .application file. If all engineers are supposed to have all apps, that's easy. If not, you could look through their Start Menu and find all the ClickOnce Application Reference files; those files contain the URL.
Next, just launch the URL and pass a query string argument...
http://server/MyApp/MyApp.application?UpdateOnly=true
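Here is a minimal sketch of that updater's core loop; the URLs shown are hypothetical, and in practice you would gather them from the Application Reference files:

    using System;
    using System.Diagnostics;

    class UpdateAllTools
    {
        static void Main()
        {
            // Hypothetical URLs; gather the real ones from each app's
            // ClickOnce Application Reference file.
            var appUrls = new[]
            {
                "http://server/MyApp/MyApp.application?UpdateOnly=true",
                "http://server/OtherTool/OtherTool.application?UpdateOnly=true",
            };

            foreach (var url in appUrls)
            {
                // Launching the .application URL makes ClickOnce check for,
                // download, and install any newer version before starting the app.
                var p = Process.Start(url);
                if (p != null) p.WaitForExit();
            }
        }
    }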
In the startup of your applications, you can check the query string argument and shut down the app if it's run with UpdateOnly=true.
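A minimal sketch of that startup check, assuming "Allow URL parameters to be passed to application" is enabled in the publish options and that MainForm is your app's existing main window:

    using System;
    using System.Deployment.Application; // reference System.Deployment
    using System.Web;                    // reference System.Web for HttpUtility
    using System.Windows.Forms;

    static class Program
    {
        [STAThread]
        static void Main()
        {
            // When launched via ...MyApp.application?UpdateOnly=true, ClickOnce
            // exposes the full activation URL, including the query string.
            if (ApplicationDeployment.IsNetworkDeployed &&
                ApplicationDeployment.CurrentDeployment.ActivationUri != null)
            {
                var query = HttpUtility.ParseQueryString(
                    ApplicationDeployment.CurrentDeployment.ActivationUri.Query);
                if ("true".Equals(query["UpdateOnly"], StringComparison.OrdinalIgnoreCase))
                    return; // the launch itself performed the update; exit silently
            }

            Application.EnableVisualStyles();
            Application.Run(new MainForm()); // MainForm: the app's existing main window
        }
    }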
One side note: if you set the minimum required version of each of your apps to the latest version, users won't be prompted with the new-version dialog. It seems like you'd want to do that, or the users would still have to pay attention and do a lot of clicking.

Deployment strategy for distributing application upgrades across large organization

We will be embarking on an application development project (.NET 3.5) for a large organization. As we start thinking about the upgrades we will be pushing out across machines, we are looking at options like ClickOnce.
What we need is a push model: as long as the client machine is connected to the network, the server can send updates. I believe ClickOnce is a pull model (although by specifying a minimum version we can kind of push). Also, ClickOnce downloads complete files only; it cannot download just the changed bytes (the byte difference) within a file.
Can anyone point me to a better tool that can be used here? Better strategies, if any, are also welcome; we are at a very early stage of the project.
I don't have a definitive answer on better options, but I've used ClickOnce and can offer some advice.
There are several update options with ClickOnce (before starting, after starting, check every time, check every X hours/days/weeks, etc.). You can also throw those out and write code to check for updates. It's not a "push" from the server, but your client could poll for updates, which would be the next best thing. Just remember, the application is going to have to restart after the update for the changes to take effect.
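Here's a minimal polling sketch using the System.Deployment API; the 15-minute interval and the restart-immediately behavior are assumptions you'd tune:

    using System;
    using System.Deployment.Application; // reference System.Deployment
    using System.Windows.Forms;

    static class UpdatePoller
    {
        // Call once from the UI thread after the main form is up.
        public static void Start()
        {
            if (!ApplicationDeployment.IsNetworkDeployed) return;

            var timer = new Timer { Interval = 15 * 60 * 1000 }; // poll every 15 minutes
            timer.Tick += (s, e) =>
            {
                var deployment = ApplicationDeployment.CurrentDeployment;
                if (deployment.CheckForUpdate())
                {
                    deployment.Update();   // downloads the new version
                    Application.Restart(); // changes only take effect after restart
                }
            };
            timer.Start();
        }
    }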
ClickOnce only downloads changed files. However, the progress dialog always shows the entire size of the application even if it's only downloading a single file. Everyone worries about that, but it's just a bug with the progress dialog.
Finally, I'm a big fan of keeping it simple. It's really easy to over-think these things and create a monstrosity that was never needed. We went through something similar at my company: we were so worried about users downloading unnecessary bytes that we broke our apps up into more, smaller assemblies. It turned into a nightmare; apps were harder to maintain and performed worse on the client. We finally undid it all and wasted weeks just to end up where we started.
I'm not saying you don't need the features you're asking for, I don't know your scenario. Just educate yourself first and know what you're getting yourself into.
We use ClickOnce at my company (about a few hundred geographically dispersed users of the app). By specifying the minimum version, we can make sure that every installation gets updated automatically after a deployment. You are right that ClickOnce downloads full files only, but it downloads only the files that have changed since the previous version. If that is still a concern, you can break your application into more, smaller assemblies. I think you can also use netmodules, but Visual Studio has no built-in support for those.
In general, ClickOnce has worked well for us.
I am just in the process of implementing such a service on top of my distributed application platform. In essence I have developed a "push" model for corporates that follows these basic principles:
Software upgrades are "managed" from the server, NOT from the client, which is in line with the deployment of corporate software as opposed to user software (this is a very important point)
Software upgrades can be customised per client application on the server, i.e. the server can deploy unique configurations to every client if required
Software upgrades can be deployed to clients at different times, or all at the same time, or any combination of the two
The software upgrade version can be specified per client, i.e. different versions can be deployed to different clients as required
All software upgrades for all clients can be "managed" from a single server, i.e. the software upgrading "service" is consistent across any application, and all applications can utilise the software upgrading "service"
Clients can implement a software upgrade policy of automatic (the application restarts as soon as the upgrade has been downloaded and is available at the client), manual (the application needs to be "sent" a custom "force upgrade" message), or on restart (the application upgrades on shutdown if an upgrade has been downloaded and is available); see the sketch at the end of this answer
All auto-upgrading functionality is transparent to any running applications as this is all performed in autonomous background threads and all inter-process communication and file transfer is handled by my framework
In essence this now allows me (or will allow me when I have tidied a few things up and thoroughly tested the implementation) to manage the version of any application developed by me from a central server after it has been initially installed, without any client intervention.
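To make the three client-side policies concrete, here is a minimal sketch of the upgrade agent's decision points; all of the messaging, download, and file-swap plumbing is assumed to live elsewhere in the framework:

    // Minimal sketch of the three client-side upgrade policies described above.
    enum UpgradePolicy { Automatic, Manual, OnRestart }

    class UpgradeAgent
    {
        readonly UpgradePolicy policy;
        volatile bool upgradeReady;

        public UpgradeAgent(UpgradePolicy policy) { this.policy = policy; }

        // Called by the background download thread once the new version is local.
        public void OnUpgradeDownloaded()
        {
            upgradeReady = true;
            if (policy == UpgradePolicy.Automatic)
                RestartIntoNewVersion();
        }

        // Called when the server sends a custom "force upgrade" message.
        public void OnForceUpgradeMessage()
        {
            if (upgradeReady)
                RestartIntoNewVersion();
        }

        // Called from the application's shutdown path.
        public void OnShutdown()
        {
            if (policy == UpgradePolicy.OnRestart && upgradeReady)
                ApplyPendingUpgrade();
        }

        void RestartIntoNewVersion() { /* swap binaries, then restart the process */ }
        void ApplyPendingUpgrade() { /* swap binaries so the next launch runs the new version */ }
    }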