Of all the programs I have written so far, if I want one to work on another workstation, I just have to copy the executable and the files needed to make it run (e.g. .o files, binary files...).
But programs built for commercial use always come with an installer - PC games, for example. So my question is: what are the main benefits/reasons for running an installation when we could simply copy the files over to the targeted workstation?
One of the reasons is probably to prevent piracy, but other than that, I'm sure there are stronger reasons?
The Complexity of Deployment
Only the simplest applications can work with a simple file copy, and even then you need a convenient way to actually download the files and copy them to the right location - and this is what a setup is for. The setup is also a marketing tool that can be used for branding and consistency across products, as well as allowing installation of a trial version of the product - a very important part of selling software.
Finally a setup provides upgrade and patching features for new versions as well as uninstall and cleanup of the system when the user wants to remove your software. A good setup may also be signed with digital certificates to ensure the file can not be hampered with in transit, and that the vendor is certifiable and hence serious. All of these things are crucial for a serious product.
It is important to remember that the setup experience is the user's first encounter with the quality of your product. If the setup fails, the product can't be evaluated at all. This would seem to be the most expensive error to make in software development.
Errors in deployment are cumulative in the sense that once you have a deployed error, you generally have no access to the machine in question for debugging - and the fix could easily do more damage. You are managing a delivery process, not just debugging code and binaries. Each delivery adds risk and complexity, and pretty soon you can have an unmaintainable product on your hands if you are not careful. Furthermore, every machine your setup runs on will almost certainly be in a different state from the next.
Deployment (setups) is therefore the complex process of migrating any computer from one stable state to another. This requires a disciplined approach. The setup should install all required files and settings and ensure the product is configured for first launch, or ready to be configured upon launch without failure. This can be a very complex task. The list of things a setup may need to do is growing all the time, and with every new version of Windows it seems new obstacles are put in place to make deployment harder. Such obstacles include UAC prompts, self-repair lockdown on terminal servers, changed core MSI caching behavior, new folder redirects, virtualization features, new and changed signing features with encryption and digital certificates, ActiveX killbit security lockdown, 64-bit complexities, etc... The list goes on.
Application virtualization is a big issue these days. It essentially encapsulates computer programs from the underlying operating system on which they are executed. This still involves a deployment package for your application, but a fully virtualized application is not installed in the traditional sense. The application behaves at runtime like it is directly interfacing with the original operating system and all the resources managed by it, but it can be isolated or sandboxed to varying degrees.
An Overview of Deployment Tasks
The tasks and features needed in a setup range from the very fundamental and basic with built-in Windows Installer or third party tool support, to the highly customized ad hoc solutions where you have to actually code something yourself to deal with unique deployment requirements.
Deployment tools really contain most of what you would ever need for any deployment, but certain things must still be coded on a case-by-case basis. These ad hoc solutions are implemented as "custom actions" in Windows Installer, and they are without a shadow of a doubt the leading cause of deployment failures. See the "Very Advanced" section for more on custom actions.
Overuse of custom actions and a lot of ad hoc coding tends to indicate flawed application design, but in certain cases you are just dealing with new technology and you have to roll your own solution to get your solution deployed. This is exactly what custom actions are for. Over time standardized solutions should be created and preferred. And small changes in application design can often eliminate complicated custom actions. This is a very important fact about software deployment - there are so many variables that one should opt for simplicity whenever possible.
At a basic overview level, deployment must account for:
Setup Fundamentals
All third party tools provide good support for these setup fundamentals, but there are some differences. The installation of prerequisites may be the area where third party tools and free frameworks like WiX differ the most in terms of ease of use - at the time of writing. The support is there, but it can be a little bit challenging to set up.
Check if the system is suitable for installation of the package in question (a minimal pre-flight sketch follows this list).
Disk space.
OS type & version.
Language version.
Computer architecture x86/x64.
Unsuitable platforms: Thin Client / Citrix / Terminal Services
Customized setup required due to custom lockdown.
Maybe even the malware situation (I wish we could check for it - malware can cause mysterious deployment problems).
etc...
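To illustrate, outside any particular deployment tool, here is a minimal PowerShell sketch of such a pre-flight suitability check - the thresholds and requirements shown are hypothetical examples, not recommendations:

    # Hypothetical pre-flight check run before launching a setup.
    $minDiskGB  = 2
    $requireX64 = $true
    $minOsBuild = 10240   # Windows 10 RTM, as an example

    $os   = Get-CimInstance Win32_OperatingSystem
    $disk = Get-CimInstance Win32_LogicalDisk -Filter "DeviceID='C:'"

    $problems = @()
    if (($disk.FreeSpace / 1GB) -lt $minDiskGB) { $problems += "Not enough free disk space on C:." }
    if ($requireX64 -and -not [Environment]::Is64BitOperatingSystem) { $problems += "64-bit Windows required." }
    if ([int]$os.BuildNumber -lt $minOsBuild) { $problems += "OS build $($os.BuildNumber) is too old." }

    if ($problems) {
        $problems | ForEach-Object { Write-Warning $_ }
        exit 1    # abort before the setup is even launched
    }
    Write-Host "System looks suitable - proceeding with setup."

In a real MSI these checks belong in launch conditions, so that they also run during silent installs.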
Scan for presence and if necessary install prerequisites and runtimes.
Allowing easy deployment of prerequisites and runtimes is a task with extensive support in third party deployment tools. There is limited support for this in Windows Installer itself. The basic feature for runtime distribution in Windows Installer is the merge module - essentially the "include file equivalent" for MSI files, and the standard way to deploy shared files. A merge module is compiled into your MSI at build time - sort of early binding in developer terms.
Some prerequisites are installed via Windows Installer merge modules. Others are generally installed using their own setup file (various formats).
Examples: Active X for games, Crystal Reports, Microsoft Report Viewer Runtime, MySQL, SQL Server Runtime, VB6 Runtime, ASP.NET MVC Runtime, Java Runtime, Silverlight, Microsoft XNA, VC++ Runtime, .NET runtime versions, Visual Studio Tools For Office Runtime, Visual F# Runtime, MSXML Runtime, MS Access Runtime, Apache Tomcat, Various Primary Interop Assemblies, PowerShell versions, etc...
Finally, several core Microsoft components such as Windows Installer versions and PowerShell versions generally come down via Windows Update and might be better excluded from your setup (just check for existence, and tell the user to run Windows Update if the component is missing). Actual practice here varies - a small detection sketch follows below.
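As a sketch of that "check for existence" approach, here is a hedged PowerShell example that detects the VC++ 2015-2022 x64 runtime via its documented registry key before invoking the redistributable installer - the prereqs path is a placeholder, and you should verify the key for the exact runtime versions you depend on:

    # Detect the VC++ 14.x (2015-2022) x64 runtime before installing it.
    $key = 'HKLM:\SOFTWARE\Microsoft\VisualStudio\14.0\VC\Runtimes\x64'
    $installed = (Get-ItemProperty -Path $key -ErrorAction SilentlyContinue).Installed -eq 1

    if (-not $installed) {
        Write-Host 'VC++ runtime missing - installing prerequisite...'
        # vc_redist.x64.exe supports quiet installation; the local path is an assumption.
        Start-Process -FilePath '.\prereqs\vc_redist.x64.exe' `
                      -ArgumentList '/install', '/quiet', '/norestart' -Wait
    }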
Provide a GUI suitable for input of required settings from the user.
It is common practice to enter and validate license keys in a setup.
Personally I think this is better done from the application itself for both practical and security reasons - making piracy more difficult, allowing trial installs, reducing excessive setup support calls (you wouldn't believe it...), etc...
For complex setups a lot of GUI could be required to gather deployment settings - particularly for server setups with IIS, MS SQL, COM+ and other advanced components.
Allow installation in silent mode for corporate use.
Extremely important - all corporate deployment is automatic and silent (no GUI shown during installation), except certain server installs.
Smaller companies may run your setup in GUI mode. In my experience they generally do.
Home users generally always run your setup in GUI mode.
Know your target group, and definitely make sure you support silent running if you target corporate customers. However all setups should work in silent mode, and if you follow MSI design rules and best practice it "comes for free".
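For reference, silent Windows Installer operation boils down to standard msiexec command lines like these - the file names, GUID and SERVERNAME property are placeholders:

    # Silent install with verbose logging; SERVERNAME is a hypothetical public property.
    msiexec /i MyProduct.msi /qn /l*v "$env:TEMP\MyProduct-install.log" SERVERNAME=prod01

    # Silent uninstall by product code (placeholder GUID).
    msiexec /x "{12345678-1234-1234-1234-123456789012}" /qn

    # Silent patch, reapplying the cached installation.
    msiexec /p MyProduct.msp /qn REINSTALL=ALL REINSTALLMODE=omus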
Adding Basic Stuff
These basic tasks have full support in the Windows Installer engine itself, and all third party tools provide fairly equivalent support for all of them despite variations in GUI features and ease of use.
Install files and registry settings.
Install ODBC data sources, file associations, shortcuts and icons.
Update application and system-wide path settings.
Update and merge text based files such as INI files.
Register COM files and enable .NET COM Interop if need be.
Install .NET assemblies to the GAC, and run custom .NET installer classes.
Install side-by-side windows assemblies to WinSxS.
Deliver signed and certified files (also applies to the setup file itself).
Install and control Windows services.
Install Control Panel Applets.
Update environment variables.
I won't dwell on these issues or flesh them out with too many details. All of these deployment tasks should be reasonably well supported in all deployment tools and frameworks available. However, many people mess up their deployment by not using the built-in deployment features, relying instead on custom actions for such trivial tasks. That is entirely added risk for no gain whatsoever.
In particular we often see custom actions used to install Windows services - and this is usually a sign of a very badly designed service, or at other times just ignorance of how to do deployment. Both issues together are also common. Deploying such a service often involves applying custom ACL permissioning and modified NT privileges to make a service run with user rights instead of as LocalSystem - which is generally the only correct way to run a Windows service. Running a service with user credentials is a "deployment anti-pattern" worth mentioning in passing (more on this later).
Another common custom action use that is always wrong is installing files to the GAC via a custom action. There is good built-in support for this in Windows Installer, and any excuse to install via a custom action almost certainly hides a bad design or some generalized madness :-). It is also a fact that many deploy far too many things to the GAC overall, but that is a development issue: When should I deploy my assemblies into the GAC?
Finally, .NET installer classes are intended for developers to test their components during development - they should not be used for deployment. They are essentially just the .NET equivalent of self-registration (which is also not acceptable for MSI - you need to extract the COM information and add it to the MSI tables - see link for details). An MSI is declarative - it should contain all changes to be applied to the system so that proper rollback and management can be ensured. So the message is that .NET installer classes should only be used for development and testing. Once you build an MSI to deploy your application you should use MSI constructs to achieve proper deployment with rollback support and intelligent management. We see these .NET installer classes used mostly for service and GAC install. In an MSI this translates to using the ServiceInstall and ServiceControl tables for services, and just marking a component for GAC install to install to the GAC (must be a signed assembly). Once you know how, it is easy and you won't miss the .NET installer classes because MSI works like "automagic" when you do this right. You get reliable rollback for free, with ease.
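To see how declarative this really is, you can open a built MSI and query its ServiceInstall table directly through the Windows Installer COM automation interface. A read-only PowerShell sketch (the MSI path is a placeholder):

    # Read an MSI's ServiceInstall table via the Windows Installer COM API -
    # service installation is declarative data in the file, not code.
    $installer = New-Object -ComObject WindowsInstaller.Installer
    $database  = $installer.GetType().InvokeMember('OpenDatabase', 'InvokeMethod', $null,
                     $installer, @('C:\path\to\MyProduct.msi', 0))   # 0 = read-only
    $view = $database.GetType().InvokeMember('OpenView', 'InvokeMethod', $null,
                     $database, @('SELECT `Name`, `DisplayName`, `StartType` FROM `ServiceInstall`'))
    $view.GetType().InvokeMember('Execute', 'InvokeMethod', $null, $view, $null)

    while ($record = $view.GetType().InvokeMember('Fetch', 'InvokeMethod', $null, $view, $null)) {
        $name  = $record.GetType().InvokeMember('StringData', 'GetProperty', $null, $record, @(1))
        $start = $record.GetType().InvokeMember('StringData', 'GetProperty', $null, $record, @(3))
        '{0} (StartType={1})' -f $name, $start
    }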
Adding Advanced Stuff (often server stuff)
Despite support in all deployment tools for most of these issues, I have often found that I needed to implement custom actions and ad hoc solutions to achieve proper deployment in certain cases. This is particularly the case for COM+ and IIS deployment. WiX provides highly customized support for both types of deployment, but I have limited experience using it.
The update and installation of XML files is a task supported by each deployment tool since there is no built-in support for this in the Windows Installer engine - which is quite amazing at this point.
With regards to database installation and particularly updates, I am on the fence: I tend to think they should be done from the application with proper user authentication and interactive use, instead of as a "one shot", impersonated deployment operation (which might fail without good exception management or recovery options). In other cases it seems updates should be a managed process involving users raising corporate tickets handled by professional DBOs. Some more details below.
Configuration of IIS, Apache, or other web servers.
This is a whole world of its own, particularly with regards to IIS. I have found deployment tools lacking in features to deploy sites as requested by developers and corporate teams.
Though largely untested by me, the WiX framework provides a very flexible implementation of IIS configuration and deployment.
I expect a lot of custom actions are in use to achieve special deployment configurations.
Run SQL server scripts against databases.
Create db, connect to db, update db, run stored procedures, maybe even trigger backups or schedule new tasks, etc... I don't know all that people do here.
Should this be done in the application instead, or by a DBO? That seems much more reliable. A setup is "one shot"; an application can be restarted so you can try again - with better exception handling.
Plus an MSI setup has a very limited GUI, severely restricted in the events it can handle due to the overall MSI design (proper Win32 dialogs can be spawned from the limited MSI GUI, but it takes a lot of effort - I have only done it once).
Crucially a setup can run with elevated rights, but that is just on the local machine. Authentication is still needed against the database (unless Windows Authentication is used).
A database update is a transaction on its own that would run as a part of the overall Windows Installer transaction. It is not obvious how to handle errors or what to do in terms of rollback if the installation fails.
Needless to say this can all get very complex to handle in your setup. It is an (enterprise) configuration task in my view, not just a deployment task. Insightful comments very welcome on this issue - I am on the fence with regards to best practice.
If you are delivering a client / server solution to your customers and need a way to set up the (server side?) databases "fresh" with defaults to help your customers "get going" with your solution, then database deployment definitely makes sense to me. But update scripts run as part of installation targeting existing databases would worry me in terms of reliability and management - not to mention safety.
For corporate database updates it would seem a proper process involving a DBO would be more secure. They can run a proper backup before updates are applied, and then true rollback is in place if problems are found in UAT (a small script sketch follows).
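If database updates are kept out of the setup, the managed process can be as simple as a script the DBO runs: back up first, then apply the versioned change script. A hedged sketch using the SqlServer module's Invoke-Sqlcmd - server, database and file names are placeholders:

    # Hypothetical managed database update run by a DBO - not by the installer.
    Import-Module SqlServer

    $server = 'PRODSQL01'
    $db     = 'MyAppDb'

    # Take a backup first so true rollback is possible if UAT finds problems.
    Invoke-Sqlcmd -ServerInstance $server -Query `
        "BACKUP DATABASE [$db] TO DISK = N'D:\Backups\$db-preupgrade.bak'"

    # Apply the versioned upgrade script; -ErrorAction Stop aborts on the first error.
    Invoke-Sqlcmd -ServerInstance $server -Database $db `
        -InputFile '.\upgrade-scripts\v3.2-upgrade.sql' -ErrorAction Stop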
Installing ActiveX browser components (certificate based through browser).
Install of signed CAB file downloaded from a Web page (admin only, can be captured as an MSI for mass deployment with elevated rights).
Defaults to install in "C:\Windows\Downloaded Installations".
Complications can arise if the version in the CAB file differs from the version requested by the Web page (triggers CONFLICT folders to be generated as installs keep re-running).
Update and merge XML files.
Advanced because it is (amazingly) not natively supported by Windows Installer.
Supported with extensions by both WiX and third party deployment tools.
Configure and control COM+ components.
Tech note: I have failed several times to achieve this properly with several third party tools. There seems to be an overall lack of required features.
I normally end up manually configuring the COM+ application and then exporting an MSI from the Component Services administrative tool that is then used for deployment.
This exported MSI is not good at all - fragile if you try to make any modifications. It contains an undocumented .apl file with the application's attributes and any dependent DLL or data files are not auto-included.
WiX provides support for COM+ (not tested at all by me). I hope it is good :-).
Just for reference: Understanding COM+ Application Installation.
Add custom event logs, set up performance monitors, add firewall rules, and other Windows extensions. Supported by most deployment tools these days - including WiX. These features are not natively supported by the Windows Installer engine.
Set up connections to mobile devices and deploy.
Can involve "some strangeness" and weird proprietary solutions.
A custom, native dll might be required to achieve smooth deployment (Pocket PC back in the day - not sure how things work these days).
Install drivers of various kinds.
Much easier and more reliable now for signed drivers than before.
Supported by all third party tools and WiX (using dpinst.exe in the background).
Hooking up the application to advanced server features (deployed separately).
Automatic update systems.
License servers. Floating licenses, or regular licenses.
Online resources of various types. Help, templates, discussions, SDKs, developer tools, etc...
Online stores.
Most of the time this just involves setting a link or registry key to point to the server resources, but sometimes it is more complex.
Adding Very Advanced Stuff (custom actions)
When there is no built-in support for a certain operation or task in Windows Installer itself, or in any of the various third party tools available, you are left having to implement the feature yourself.
When you use Windows Installer, this involves running custom actions of various types (Windows Installer's mechanism for running custom, executable installation logic during the installation).
Custom actions are purpose-built executables (binaries: dll, exe) and scripts capable of making advanced modifications to the system during installation that are not supported by Windows Installer natively or by the deployment tool in use (WiX, InstallShield, Advanced Installer, etc...).
Custom actions that make modifications to the system run with elevated rights so that changes can be made to the system even if the logged on user does not have admin rights. There is essentially no limit to what these custom actions can do. They are armed and dangerous.
Custom actions are the leading causes of deployment errors and failure.
Hands down. If an MSI install fails it is most often related to a failing custom action.
Custom actions are difficult to write and debug due to the complexity of Windows Installer. They must be used only when necessary and they must be written with full rollback support so that they are capable of undoing all changes that were applied to the system in case the installer fails and must roll back changes.
This is hard and difficult work and custom actions are a big, complex and error prone issue - a can of worms.
Often minor application design changes can allow custom actions to be replaced by standard MSI features, or various MSI extensions available in third party tools and in WiX.
Executables and scripts that run correctly on their own may fail when run as part of an MSI due to the complex impersonation, elevation and runtime design of Windows Installer. These are not trivial things to get right. An MSI install is an intricate transaction with elevated and impersonated sequences that is very hard to deal with.
Custom action types
Windows Installer supports custom actions implemented as purpose built, native (win32) executables and dlls as well as scripts such as JavaScript or VBScript.
Some even use .NET binaries (C#, VB.NET, DTF, etc...) to run custom actions - this is not recommended because they require the .NET Framework as a prerequisite. These binaries are referred to as "managed code" and can't run without the correct .NET Framework installed.
Finally there are PowerShell custom actions, which are both scripts and managed code combined - and they should not be used since they require the .NET Framework.
In the future, when the .NET Framework might be guaranteed to exist on all Windows computers, this managed code might be a viable option for general use, but as of now the consensus seems to be that these actions are too risky and unreliable.
Common sample custom actions (tasks frequently implemented as custom actions because they are often needed but not natively supported by Windows Installer):
Manage Windows Shares (usually create).
Apply custom ACL permissioning (there is some built-in MSI support for this).
Modify NT privileges.
Configure DCOM.
Manage groups and users.
Configure per-user Office Addins.
Persist installer properties (for repair and reinstall).
Custom and company specific launch conditions.
IP-configuration redirects for IIS.
Encrypt or obfuscate content for data security.
Etc...
Most of the custom functionalities mentioned above are now available in the WiX framework as a custom C++ dll - and other tools have some similar, custom features. You should always prefer these ready-made solutions to your own custom actions since rollback is properly implemented in WiX and the implementation is well tested.
Applying custom ACL permissions and modifying NT privileges are considered "deployment anti-patterns" by most deployment specialists. The requirement to do so indicates poor (lazy) application design.
Custom action summary.
Writing your own custom action should be a rare event, reserved for something unique that has not been done (better) before.
Minor application re-design can often eliminate unwise and complex deployment constructs. In fact, almost always.
For example: application configuration should happen on first application launch, and not during the setup.
The setup should prepare the application for first launch, and perform tasks that require elevated rights (only).
User data initialization is a particularly bad thing to use setup scripts to perform. All of this should be done in the application launch sequence.
You should enforce proper rollback support.
This is complex and hard work.
Almost all script custom actions I have seen do not implement rollback at all.
You should write with minimal dependencies.
Preferably use C++ or InstallScript, or maybe JavaScript (only for internal, corporate deployment in my view). Avoid VBScript, and definitely avoid .NET code in C#/DTF or PowerShell scripts. There is some discussion on the issue of managed code. MSI experts like Chris Painter believe C#/DTF custom actions are ready for prime time, whereas the general consensus seems to be to err on the side of caution and rely on C++ dlls until a proper .NET runtime environment can be guaranteed. Here is a long-winded "discussion" of this issue: Windows Installer fails on Win 10 but not Win 7 using WIX
Robust code is difficult to write in script. Scripts are fragile, hard to debug, lack advanced language features (particularly error handling) and are vulnerable to anti-virus blocking.
The only real advantages of scripts are that they are transparent and inspectable, and the whole source is embedded in the MSI file (no version control issues). Corporate teams that hand off work to each other frequently might use JavaScript (there is a lot of legacy VBScript use as well - but that language is very poor for error handling).
Managed code has runtime requirements that can't be guaranteed at the time of writing - and this has been the situation for a very long time now.
PowerShell is both managed code and a script. Avoid it. Installshield supports it as a type of custom action. It remains to be seen how successful it will be. I would never use it unless forced to.
And much more...
Additional Complications for Deployment
There are many additional complications when delivering a professional setup, such as delivering setups in different languages (localization), branding setups for different resellers (OEM), ensuring the setup works on all required operating systems in different language versions, delivering separate setups for x86 and x64 machines, delivering a scaled-down "viewer version" of the application, making combined setups for client and server installations (can be run on both the server and the client, installing different components - not recommended if you ask me - details), and not to mention deploying to different embedded devices such as phones, Pocket PCs, smartphones, etc...
Certain "Deployment Anti-Patters" are also problematic to deal with (the linked answer is an "experiment" and I am not too happy with it - a work in progress, but it is intended as a check list for developers for their deployment efforts to avoid really common problems). These are bad constructs required in setups to make poorly designed applications run properly. They include things such as applying custom permissioning (write access in otherwise locked down paths, etc...), customizing NT privileges (typically "run as service" for a user account, or much worse), or applying excessive use of complex custom actions that make unpredictable changes to the system (these can really be anything and be very dangerous). Messing up the silent install is also a huge, common problem - it is terrible for corporate use of your setup. Deploying excessive amounts of user-specific data with your setup can also be problematic (hard to control complications). And there are many other, more specific problems to relate to.
Here is a post with the overall issue of setup and deployment seen in the larger context of application marketing and sales.
Doing Your Own Deployment
You will need a tool or a framework to deliver your own setups. Here is an answer describing different tools used to create installers: What installation product to use? InstallShield, WiX, Wise, Advanced Installer, etc. All attempts have been made to make the descriptions as objective as possible - describing real world experience with positives and negatives.
The commercial tools described in the link above are excellent tools - and they tend to speed things up with good GUIs and ready-made solutions for common requirements, but developers should consider trying WiX - the new way to create MSI files. Please read this post for background information:
Windows Installer and the creation of WiX (read this if you are trying to "find your feet with WiX" and want to understand what the technology is all about and where it is coming from).
WiX has a learning curve but is "developer friendly" in many ways. For one it is a project type in Visual Studio (once you install it), and it allows a setup to be defined in XML and compiled to MSI as you would a normal binary. This allows proper source control, branching and collaboration. Plus it is free and open source. I feel it is OK to recommend a free framework, especially since it is well maintained. Expect a learning experience though. Here are some suggestions for a "flying start" with WiX.
Many programs make use of graphics, sound, and other drivers which are supplied and maintained by third parties. In many cases, these drivers may use underlying hardware or other system features in ways that Windows itself knows nothing about. If two programs, each with its own driver and unaware of the other's existence, tried to use the same hardware, they would likely interfere with each other in unpredictable and undesirable ways (e.g. one might overwrite graphical textures loaded by the other). To avoid such problems, Microsoft recommends that applications install drivers in such a way that two programs that need the same driver can share the same driver instance.
The approach Microsoft takes is not the only means of ensuring that multiple programs using the same hardware go through the same driver. A system could also have programs temporarily load drivers when they start, and have drivers automatically unload when they're done. The difficulty with that approach is that if a program which uses an old driver is launched, and while it is running a program which needs a newer version of that driver is launched, the new program would not be able to run unless or until the old program shuts down its driver and switches to using a new one. Such a hassle is probably unavoidable, but having to deal with such things every time a program is launched would probably be more bothersome than dealing with it only once when a program is installed.
All that having been said, while it may be helpful to be able to install a program once and have any "driver" issues taken care of once and for all, there's also something to be said for being able to simply run a program without having to make "permanent" modifications to the system. There shouldn't be any particular obstacles to programs being able to use either "temporary" or permanent drivers, but I know of no particular efforts to facilitate such designs.
Besides copying the files for you, the installer may also add registry entries needed by the program (if any), add values to environment variables (PATH), create icons on the desktop, etc., so you don't have to do this manually.
To quote Wikipedia, "Installation typically involves code being copied/generated from the installation files to new files on the local computer for easier access by the operating system." For simple programs, there is no need to install anything, but more complex ones can update, add links, etc. automatically if installed.
Related
I'm a complete novice regarding software deployment. I don't have the infrastructure to experiment with servers and networks, and so I don't really know how software is deployed in enterprise.
Let's suppose I've created a very simple Windows .exe program. (You open it, and it shows a "Hello World" message box, together with a 'close' button). The program has no external dependencies and therefore is fully 'portable'.
Now let's suppose my program is wanted by a variety of different companies. They each wish to deploy my program across all of their Windows machines. Each company has some mechanism by which it can deploy software automatically to all of their client machines, but this mechanism may be different in each case. (Speaking as a complete novice, I don't really know what types of 'mechanisms' exist.)
What should I do to prepare my program for easy deployment?
Also, could anyone please describe what the most common deployment 'mechanisms' are? Thanks!
I've worked for both ISVs (12 years) and enterprise IT departments (5 years), so I understand the nature of your question.
At a minimum, you need to create an installer that supports silent (non-interactive) installation, upgrading, and uninstall. You can technically do this using a wide variety of tools, but your customers are going to prefer that you create an MSI. They prefer this because Windows Installer (.MSI) provides a standardized mechanism with consistent command lines, logging, transactional installation (rollback of changes on failure), is rich in metadata (observable; no black box) and is transformable (the end user can modify the MSI using an onion-skin approach to do things like change the name, location or existence of a shortcut, install a service using a specific username/password, and so on).
So as long as you are creating properly authored MSI's you can ignore the deployment method to a certain degree because you are abstracted from it. The trick is to understand the deployment requirements of your application ( easy in your example) and how to implement these requirements in Windows Installer (a somewhat steep learning curve).
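In practice that abstraction shows up as a standard command line any deployment system can run - typically applying the corporate customization transform (MST) and installing silently. File names here are placeholders:

    # Typical corporate deployment command: apply a customization transform
    # and install silently with verbose logging.
    msiexec /i MyApp.msi TRANSFORMS=Corp.mst /qn /l*v "C:\Logs\MyApp.log" ALLUSERS=1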
I personally use InstallShield and Windows Installer XML (WiX). I also have written a tool called IsWiX (open source on CodePlex) that provides an InstallShield like authoring experience on top of WiX. The scenario you describe can easily be achieved using InstallShield LE (Limited Edition) which is free to Visual Studio customers. More complicated scenarios require more knowledge and advanced tools.
I'm new to SCCM 2012 and have a lot of applications with MSIs that I am creating SCCM applications for. I've also been able to create a few applications with EXEs. Is the only reason to use a package if you need to run multiple programs or apps in succession?
One of the reasons to use a package vs. the application model is handling configuration outside of the MSI / EXE. Say, for instance, your MSI / EXE does everything but it doesn't set any of the configuration items like the license information, or which server you need to connect to. Now most of these things can be handled via a custom action within your MST (transform) using InstallShield, but if you have an executable it gets a bit more complicated because there isn't really any way to "hook" into the installer to provide additional configuration items.
Recently, I've joined the Windows team in my enterprise and with my developer background (Java, .NET and Web in general), I was pretty quickly interested in PowerShell. I can see its value over plain old batch files, VB, ... which is why I'd like to promote its usage and, little by little, push people to favor it over the rest unless there's a reason not to do so.
Deploying PowerShell seems pretty straightforward since we can easily approve the relevant patches in WSUS and configure the execution policy via GPO for the AD integrated servers.
My questions are in fact more about the distribution and usage of PowerShell and PowerShell modules (e.g., PCSX, PowerShellPack, home made, ...).
For those of you who have already deployed PowerShell in your enterprise:
Do you have some sort of standard package for PowerShell with a set of modules that you deploy on each server? If you do, then how do you deploy new versions of the installed modules?
Did you put a central PowerShell repository in place where you store all your PowerShell modules? If so, is that repository accessible globally, or do you also have secondary repositories that you synchronize?
I'm pretty used to tools such as Maven, Ivy and other dependency management software, which is why I'm a bit disappointed by what PowerShell has to offer in this regard.
I've found a very nice article about this subject and will probably go down the same path, as it corresponds to my requirements.
Do you use WinRM? Do you connect directly from workstations or do you have central management servers? Did you limit access to WinRM to those management servers?
Do you use WinRM in a non-managed environment (servers not in an AD domain)? How do you configure WinRM?
We have a network zone in which the servers aren't part of an AD domain, thus I can't rely on Kerberos authentication for WinRM.
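A rough sketch of the client-side configuration being considered for such a zone - host names are placeholders, and since TrustedHosts weakens authentication checks, HTTPS listeners with valid certificates are preferable:

    # On the management workstation: trust the specific non-domain servers
    # (a weaker fallback - prefer HTTPS listeners with valid certificates).
    Set-Item WSMan:\localhost\Client\TrustedHosts -Value 'srv-dmz01,srv-dmz02' -Force

    # Connect with explicit local credentials over HTTPS (port 5986).
    $cred = Get-Credential
    Enter-PSSession -ComputerName srv-dmz01 -UseSSL -Credential $cred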
Globally, what is your experience, are you satisfied with the results?
Edit:
Regarding question 2, we've decided to put a central repository in place.
The idea will be to have a main repository which will be under version control (GIT) and to which we'll be the only ones to have write access.
From that repository, we will copy the modules using an rsync-like tool (in our case that'll be robocopy; a sketch follows below) to the secondary repositories (which will be read-only copies). Only those repositories will be accessible by the clients (we'll just have to update the PSModulePath on those clients to make sure they can access the repository).
We'll also stage our releases, thus in the repository, there'll be multiple versions available: Development, Integration and Production.
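The synchronization step will look roughly like this - the paths and stage names are placeholders:

    # Hypothetical sync from the version-controlled main repository to a
    # read-only secondary repository, one folder per stage.
    $main      = '\\filesrv01\PSModules-Main'
    $secondary = '\\filesrv02\PSModules'
    $stages    = 'Development', 'Integration', 'Production'

    foreach ($stage in $stages) {
        # /MIR mirrors the tree (including deletions); /R:2 /W:5 limits retries.
        robocopy (Join-Path $main $stage) (Join-Path $secondary $stage) /MIR /R:2 /W:5
    }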
Let's cover each issue by category.
Evangelism
To start off interest in PowerShell to your coworkers, I would suggest starting off with the bread and butter of automation. Find a common pain point that is relatively easy to implement (to get something out there in front of your coworkers quickly) and automate it with PowerShell. Then expand from there.
Another good idea is to start a "Script Club" at your office where you do some training and share ideas or problems about scripting in PowerShell. You can start out with once every few weeks and see how it goes. At my work, we have a book club where we go through various technical books on testing, design, and programming; it works well.
Packaging
Modules - PowerShell modules are the best form of packaging. They are quite easy to use and offer some nice features such as easy deployment and private variables/functions.
Scripts - Scripts are a good idea for work-in-progress or to start off since your coworkers will certainly be comfortable with scripts.
Deployment
There are a few options currently for deployment.
Deploy to every machine - This could avoid some network issues and gives you more flexibility with each machine, but the downside is that updating modules may be more of a pain.
Central repository (a.k.a. a network share) - You could also store all your modules on a central network share. This would avoid issues with deploying to every machine, and you can make the modules read-only if you want to control modifications. But you would still have to deploy a profile to each machine to add the network share to the $env:PSModulePath variable. It would be best to set this in the all-users-all-hosts profile. From then on you shouldn't need to update it unless the path changes (see the profile sketch after this list).
NuGet - NuGet is an open source project that is bringing package management to .NET development. What is nice about it is the ability to have public repositories and local/private repositories. There is already an initial version of a PowerShell module that will leverage NuGet for PowerShell module deployment.
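As a minimal sketch, the profile change for the central repository option could look like this - the share path is a placeholder:

    # In the all-users-all-hosts profile ($PSHOME\profile.ps1): append the
    # central share to PSModulePath if it is not already there.
    $repo = '\\filesrv01\PSModules\Production'
    if (($env:PSModulePath -split ';') -notcontains $repo) {
        $env:PSModulePath += ";$repo"
    }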
Remote Access
Being a build engineer, I have relatively few machines and full control over them. So I have remoting enabled. You would have to ask some IT guys for better advice.
We will be embarking on an application development project (.NET 3.5) for a large organization. As we started thinking about the upgrades we would be delivering across the machines, we began looking at options like ClickOnce.
What we need is a push model: as long as the client machine is connected to the network, the server can send updates. I believe ClickOnce is a pull model (although by specifying a minimum version we can kind of push). Also, ClickOnce downloads complete files only; it cannot download the change (byte difference) between files.
Can anyone point me to a better tool that can be used here? Better strategies, if any, are also welcome; we are at a very early stage of the project.
I don't have a definitive answer on better options, but I've used ClickOnce and can offer some advice.
There are several update options with ClickOnce (before starting, after starting, check every time, check every X Hours/Days/Weeks, etc). You can also throw those out and write code to check for updates. It's not a "push" from the server, but your client could poll for updates which would be the next best thing. Just remember, the application is going to have to restart after the update to see changes.
ClickOnce only downloads changed files. However, the progress dialog always shows the entire size of the application even if it's only downloading a single file. Everyone worries about that, but it's just a bug with the progress dialog.
Finally, I'm a big fan of keeping it simple. It's really easy to over-think these things and create a monstrosity that was never needed. We went through something similar at my company. We were so worried about users downloading unnecessary bytes, we broke our apps up into more, smaller assemblies. It turned into a nightmare; apps were harder to maintain and performed worse on the client. We finally undid it all and wasted weeks just to end up where we started.
I'm not saying you don't need the features you're asking for, I don't know your scenario. Just educate yourself first and know what you're getting yourself into.
We use ClickOnce at my company (a few hundred users for the app, geographically dispersed). By specifying the minimum version we can make sure that every app installation gets updated after deployment automatically. You are right that ClickOnce downloads full files only, but it only downloads files that have changed since the previous version. If that is still a concern you can break your application into more, smaller assemblies. I think you can also use netmodules, but Visual Studio has no built-in support for those.
In general ClickOnce has worked well for us.
I am just in the process of implementing such a service on top of my distributed application platform. In essence I have developed a "push" model for corporates that follows these basic principles:
Software upgrades are "managed" from the server, NOT from the client, which is in line with the deployment of corporate software as opposed to user software (this is a very important point)
Software upgrades can be customised per client application on the server, i.e. the server can deploy unique configurations to every client if required
Software upgrades can be deployed to clients at different times, or all at the same time, or any combination of the two
The software upgrade version can be specified per client, i.e. different versions can be deployed to different clients as required
All software upgrades for all clients can be "managed" from a single server, i.e. the software upgrading "service" is consistent across any application, and all applications can utilise the software upgrading "service"
Clients can implement a software upgrade policy of automatic (application restarts as soon as the upgrade has been downloaded and is available at the client), manual (application needs to be "sent" a custom "force upgrade" message), or on restart (application upgrades on shutdown if an upgrade has been downloaded and is available)
All auto-upgrading functionality is transparent to any running applications as this is all performed in autonomous background threads and all inter-process communication and file transfer is handled by my framework
In essence this now allows me (or will allow me when I have tidied a few things up and thoroughly tested the implementation) to manage the version of any application developed by me from a central server after it has been initially installed, without any client intervention.
What are the recommendations for including your compiler, libraries, and other tools in your source control system itself?
In the past, I've run into issues where, although we had all the source code, building an old version of the product was an exercise in scurrying around trying to get the exact correct configuration of Visual Studio, InstallShield and other tools (including the correct patch version) used to build the product. On my next project, I'd like to avoid this by checking these build tools into source control, and then build using them. This would also simplify things in terms of setting up a new build machine -- 1) install our source control tool, 2) point at the right branch, and 3) build -- that's it.
Options I've considered include:
Copying the install CD ISO to source control - although this provides the backup we need if we have to go back to an older version, it isn't a good option for "live" use (each build would need to start with an install step, which could easily turn a 1 hour build into 3 hours).
Installing the software to source control. ClearCase maps your branch to a drive letter; we could install the software under this drive. This doesn't take into account the non-file parts of installing your tools, like registry settings.
Installing all the software and setting up the build process inside a virtual machine, storing the virtual machine in source control, and figuring out how to get the VM to do a build on boot. While we capture the state of the "build machine" with ease, we get the overhead of a VM, and it doesn't help with the "make the same tools available to developers" issue.
It seems such a basic idea of configuration management, but I've been unable to track down any resources for how to do this. What are the suggestions?
I think the VM is your best solution. We always used dedicated build machines to get consistency. In the old COM DLL Hell days, there were dependencies (COMCAT.DLL, anyone?) on non-development software being installed (Office). Your first two options don't solve anything that has shared COM components. If you don't have any shared component issues, maybe they will work.
There is no reason the developers couldn't take a copy of the same VM to be able to debug in a clean environment. Your issues would be more complex if there are a lot of physical layers in your architecture, like mail server, database server, etc.
This is something that is very specific to your environment. That's why you won't see a guide to handle all situations. All the different shops I've worked for have handled this differently. I can only give you my opinion on what I think has worked best for me.
Put everything needed to build the application on a new workstation under source control.
Keep large applications out of source control - stuff like IDEs, SDKs, and database engines. Keep these in a directory as ISO files.
Maintain a text document, with the source code, that lists the ISO files needed to build the app (a verification sketch follows).
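To keep that text document honest, it can also list expected hashes, verified against the ISO directory before a rebuild. A small sketch assuming a hypothetical FileName,Sha256 CSV manifest:

    # Verify the tool ISOs against a checked-in manifest (hypothetical CSV
    # format: FileName,Sha256). Flags missing or replaced ISOs before building.
    $isoDir   = 'D:\BuildTools\ISOs'
    $manifest = Import-Csv '.\toolchain-manifest.csv'

    foreach ($entry in $manifest) {
        $path = Join-Path $isoDir $entry.FileName
        if (-not (Test-Path $path)) {
            Write-Warning "Missing: $($entry.FileName)"
        }
        elseif ((Get-FileHash $path -Algorithm SHA256).Hash -ne $entry.Sha256) {
            Write-Warning "Hash mismatch: $($entry.FileName)"
        }
    }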
I would definitely consider the legal/licensing issues surrounding the idea. Would it be permissible according to the various licenses of your toolchain?
Have you considered ghosting a fresh development machine that is able to build the release, if you don't like the idea of a VM image? Of course, keeping that ghosted image running as hardware changes might be more trouble than it's worth...
Just a note on the versioning of libraries in your version control system:
it is a good solution, but it implies packaging (i.e. reducing the number of files of that library to a minimum)
it does not solve the "configuration aspect" (that is, "what specific set of libraries does my 3.2 project need?").
Do not forget that the set will evolve with each new version of your project. UCM and its "composite baseline" might give the beginning of an answer for that.
The packaging aspect (minimum number of files) is important because:
you do not want to access your libraries through the network (like through a dynamic view), because the compilation times are much longer than when you use locally accessed library files.
you do want to get those libraries onto your disk, meaning a snapshot view, meaning downloading those files... and this is where you might appreciate the packaging of your libraries: the fewer files you have to download, the better off you are ;)
My organisation has a "read-only" filesystem, where everything is put into releases and versions. Releaselinks (essentially symlinks) point to the version being used by your project. When a new version comes along it is just added to the filesystem and you can swing your symlink to it. There is full audit history of the symlinks, and you can create new symlinks for different versions.
This approach works great on Linux, but it doesn't work so well for Windows apps that tend to like to use things local to the machine such as the registry to store things like configuration.
Are you using a continuous integration (CI) tool like NAnt to do your builds?
As a .Net example, you can specify specific frameworks for each build.
Perhaps the popular CI tool for whatever you're developing in has options that will allow you to avoid storing several IDEs in your version control system.
In many cases, you can force your build to use compilers and libraries checked into your source control rather than relying on global machine settings that won't be repeatable in the future. For example, with the C# compiler, you can use the /nostdlib switch and manually /reference all libraries to point to versions checked in to source control. And of course check the compilers themselves into source control as well.
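As a sketch, with a hypothetical tools layout checked into the repository:

    # Build using only tools and libraries checked into source control,
    # ignoring machine-wide framework settings (/noconfig /nostdlib).
    $csc = '.\tools\csc\csc.exe'
    & $csc /noconfig /nostdlib+ `
        /reference:.\tools\lib\mscorlib.dll `
        /reference:.\tools\lib\System.dll `
        /out:.\bin\App.exe `
        .\src\*.cs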
Following up on my own question, I came across this posting referenced in the answer to another question. Although more of a discussion of the issue than an answer, it does mention the VM idea.
As for "figuring out how to build on boot": I've developed using a build farm system custom-created very quickly by one sysadmin and one developer. Build slaves query a taskmaster for suitable queued build requests. It's pretty nice.
A request is 'suitable' for a slave if its toolchain requirements match the toolchain versions on the slave - including what OS, since the product is multi-platform and a build can include automated tests. Normally this is "the current state of the art", but doesn't have to be.
When a slave is ready to build, it just starts polling the taskmaster, telling it what it's got installed. It doesn't have to know in advance what it's expected to build. It fetches a build request, which tells it to check certain tags out of SVN, then run a script from one of those tags to take it from there. Developers don't have to know how many build slaves are available, what they're called, or whether they're busy, just how to add a request to the build queue. The build queue itself is a fairly simple web app. All very modular.
Slaves needn't be VMs, but usually are. The number of slaves (and the physical machines they're running on) can be scaled to satisfy demand. Slaves can obviously be added to the system any time, or nuked if the toolchain crashes. That's actually the main point of this scheme, rather than your problem with archiving the state of the toolchain, but I think it's applicable.
Depending how often you need an old toolchain, you might want the build queue to be capable of starting VMs as needed, since otherwise someone who wants to recreate an old build has to also arrange for a suitable slave to appear. Not that this is necessarily difficult - it might just be a question of starting the right VM on a machine of their choosing.