What are the recommendations for including your compiler, libraries, and other tools in your source control system itself?
In the past, I've run into issues where, although we had all the source code, building an old version of the product was an exercise in scurrying around trying to get the exact correct configuration of Visual Studio, InstallShield and other tools (including the correct patch version) used to build the product. On my next project, I'd like to avoid this by checking these build tools into source control, and then build using them. This would also simplify things in terms of setting up a new build machine -- 1) install our source control tool, 2) point at the right branch, and 3) build -- that's it.
Options I've considered include:
Copying the install CD ISO to source control - although this provides the backup we need if we have to go back to an older version, it isn't a good option for "live" use (each build would need to start with an install step, which could easily turn a 1 hour build into 3 hours).
Installing the software to source control. ClearCase maps your branch to a drive letter; we could install the software under this drive. This doesn't account for the non-file parts of installing your tools, like registry settings.
Installing all the software and setting up the build process inside a virtual machine, storing the virtual machine in source control, and figuring out how to get the VM to do a build on boot. While we capture the state of the "build machine" with ease, we get the overhead of a VM, and it doesn't help with the "make the same tools available to developers" issue.
It seems like such a basic idea of configuration management, but I've been unable to track down any resources on how to do this. Any suggestions?
I think the VM is your best solution. We always used dedicated build machines to get consistency. In the old COM DLL Hell days, there were dependencies (COMCAT.DLL, anyone?) on non-development software that happened to be installed (Office). Your first two options don't solve anything if shared COM components are involved. If you don't have any shared-component issues, maybe they will work.
There is no reason the developers couldn't take a copy of the same VM to be able to debug in a clean environment. Your issues would be more complex if there are a lot of physical layers in your architecture, like mail server, database server, etc.
This is something that is very specific to your environment. That's why you won't see a guide to handle all situations. All the different shops I've worked for have handled this differently. I can only give you my opinion on what I think has worked best for me.
Put everything needed to build the application on a new workstation under source control.
Keep large applications out of source control - stuff like IDEs, SDKs, and database engines. Keep these in a directory as ISO files.
Maintain a text document, with the source code, that lists the ISO files needed to build the app.
I would definitely consider the legal/licensing issues surrounding the idea. Would it be permissible according to the various licenses of your toolchain?
Have you considered ghosting a fresh development machine that is able to build the release, if you don't like the idea of a VM image? Of course, keeping that ghosted image running as hardware changes might be more trouble than it's worth...
Just a note on the versioning of libraries in your version control system:
it is a good solution, but it implies packaging (i.e. reducing the number of files in that library to a minimum)
it does not solve the 'configuration' aspect (that is, "what specific set of libraries does my '3.2' project need?").
Do not forget that this set will evolve with each new version of your project. UCM and its 'composite baseline' might be the beginning of an answer for that.
The packaging aspect (minimum number of files) is important because:
you do not want to access your libraries over the network (for example through a dynamic view), because compilation times are much longer than with locally accessed library files.
you do want to get those libraries onto your disk, meaning a snapshot view, meaning downloading those files... and this is where you will appreciate the packaging of your libraries: the fewer files you have to download, the better off you are ;)
My organisation has a "read-only" filesystem, where everything is put into releases and versions. Releaselinks (essentially symlinks) point to the version being used by your project. When a new version comes along it is just added to the filesystem and you can swing your symlink to it. There is full audit history of the symlinks, and you can create new symlinks for different versions.
This approach works great on Linux, but it doesn't work so well for Windows apps that tend to like to use things local to the machine such as the registry to store things like configuration.
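On Linux, the mechanics behind that are just versioned directories plus a symlink that gets swapped atomically. A tiny sketch (all paths invented) of swinging a project's link to a new tool version:

```python
# Sketch: point the project's "current" release link at a specific tool version.
import os

TOOLS = "/opt/tools/gcc"                        # hypothetical read-only tool tree
version = "4.8.2"                               # the version this project wants

link = "/opt/projects/myproject/gcc-current"    # the release link the build uses
tmp = link + ".tmp"
os.symlink(os.path.join(TOOLS, version), tmp)   # create the new link beside the old one
os.replace(tmp, link)                           # atomically swing it into place
```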
Are you using a continuous integration (CI) tool like NAnt to do your builds?
As a .Net example, you can specify specific frameworks for each build.
Perhaps the popular CI tool for whatever you're developing in has options that will allow you to avoid storing several IDEs in your version control system.
In many cases, you can force your build to use compilers and libraries checked into your source control rather than relying on global machine settings that won't be repeatable in the future. For example, with the C# compiler, you can use the /nostdlib switch and manually /reference all libraries to point to versions checked in to source control. And of course check the compilers themselves into source control as well.
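As a rough illustration (the repository layout, compiler location, and referenced assemblies below are all invented), a build step that uses only the checked-in toolchain might look something like this:

```python
# Sketch only: invoke the compiler stored in source control with explicit
# references, instead of relying on whatever is installed on the machine.
import subprocess
from pathlib import Path

repo = Path(r"C:\work\product")              # hypothetical checkout root
csc = repo / "tools" / "csc" / "csc.exe"     # compiler checked into source control
libs = repo / "libs"                         # referenced assemblies, also versioned

subprocess.run([
    str(csc),
    "/noconfig", "/nostdlib+",               # ignore machine-wide defaults
    f"/reference:{libs / 'mscorlib.dll'}",
    f"/reference:{libs / 'System.dll'}",
    "/target:exe", "/out:build/app.exe",
    "src/Program.cs",
], check=True)                               # fail the build on compiler errors
```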
Following up on my own question, I came across this posting referenced in the answer to another question. Although more of a discussion of the issue than an answer, it does mention the VM idea.
As for "figuring out how to build on boot": I've developed using a build farm system custom-created very quickly by one sysadmin and one developer. Build slaves query a taskmaster for suitable queued build requests. It's pretty nice.
A request is 'suitable' for a slave if its toolchain requirements match the toolchain versions on the slave - including what OS, since the product is multi-platform and a build can include automated tests. Normally this is "the current state of the art", but doesn't have to be.
When a slave is ready to build, it just starts polling the taskmaster, telling it what it's got installed. It doesn't have to know in advance what it's expected to build. It fetches a build request, which tells it to check certain tags out of SVN, then run a script from one of those tags to take it from there. Developers don't have to know how many build slaves are available, what they're called, or whether they're busy, just how to add a request to the build queue. The build queue itself is a fairly simple web app. All very modular.
Slaves needn't be VMs, but usually are. The number of slaves (and the physical machines they're running on) can be scaled to satisfy demand. Slaves can obviously be added to the system any time, or nuked if the toolchain crashes. That's actually the main point of this scheme, rather than your problem with archiving the state of the toolchain, but I think it's applicable.
Depending how often you need an old toolchain, you might want the build queue to be capable of starting VMs as needed, since otherwise someone who wants to recreate an old build has to also arrange for a suitable slave to appear. Not that this is necessarily difficult - it might just be a question of starting the right VM on a machine of their choosing.
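For what it's worth, the slave side of such a scheme can stay very small. A rough sketch of the polling loop (the taskmaster URL, the JSON shape, and the field names are all hypothetical):

```python
# Sketch of a build slave: advertise the installed toolchain, poll for a
# suitable request, check out the tag and run its build script.
import json
import subprocess
import time
import urllib.request

TASKMASTER = "http://taskmaster.example/api/next-build"   # hypothetical endpoint
TOOLCHAIN = {"os": "linux", "compiler": "gcc-4.8"}         # what this slave offers

while True:
    req = urllib.request.Request(
        TASKMASTER,
        data=json.dumps(TOOLCHAIN).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        job = json.loads(resp.read() or b"null")
    if not job:                               # nothing suitable queued right now
        time.sleep(60)
        continue
    subprocess.run(["svn", "checkout", job["tag_url"], "work"], check=True)
    subprocess.run([job["build_script"]], cwd="work", check=True)
```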
For all the programs I've written so far, if I want one to work on another workstation, I just have to copy the executable and the files needed to make it run (e.g. .o files, binary files...).
But programs built for commercial use always come with an installer - PC games, for example. So my question is: what are the main benefits of, and reasons for, doing an installation when we could simply copy the files over to the target workstation?
One of the reasons is probably to prevent piracy. But other than that, I'm sure there are stronger reasons?
The Complexity of Deployment
Only the simplest applications can work with a simple file copy, and even then you need a convenient way to actually download the files and copy them to the right location - and this is what a setup is for. The setup is also a marketing tool that can be used for branding and consistency across products, as well as allowing installation of a trial version of the product - a very important part of selling software.
Finally, a setup provides upgrade and patching features for new versions, as well as uninstall and cleanup of the system when the user wants to remove your software. A good setup may also be signed with digital certificates to ensure the file cannot be tampered with in transit, and that the vendor can be verified and hence is serious. All of these things are crucial for a serious product.
It is important to remember that the setup experience is the user's first encounter with the quality of your product. If the setup fails, the product can't be evaluated at all. This would seem to be the most expensive error to make in software development.
Errors in deployment are cumulative in the sense that once you have a deployed error, you generally have no access to the machine in question for debugging - and the fix could easily do more damage. You are managing a delivery process, not just debugging code and binaries. Each delivery adds risk and complexity, and pretty soon you can have an unmaintainable situation on your hands if you are not careful. Furthermore, every machine your setup runs on will almost certainly be in a totally different state from any other computer.
Deployment (setups) is therefore the complex process of migrating any computer from one stable state to another. This requires a disciplined approach. The setup should install all required files and settings and ensure the product is configured for first launch, or ready to be configured upon launch without failure. This can be a very complex task. The list of things a setup may need to do is growing all the time, and with every new version of Windows it seems new obstacles are put in place to make deployment harder. Such obstacles include UAC prompts, self-repair lockdown on terminal servers, changed core MSI caching behavior, new folder redirects, virtualization features, new and changed signing features with encryption and digital certificates, ActiveX killbit security lockdowns, 64-bit complexities, etc... The list goes on.
Application virtualization is a big issue these days. It essentially encapsulates computer programs from the underlying operating system on which they are executed. This still involves a deployment package for your application, but a fully virtualized application is not installed in the traditional sense. The application behaves at runtime as if it were directly interfacing with the original operating system and all the resources managed by it, but it can be isolated or sandboxed to varying degrees.
An Overview of Deployment Tasks
The tasks and features needed in a setup range from the very fundamental and basic with built-in Windows Installer or third party tool support, to the highly customized ad hoc solutions where you have to actually code something yourself to deal with unique deployment requirements.
Deployment tools really contain most of what you would ever need for any deployment, but certain things are still coded on a case-by-case basis. These ad hoc solutions are implemented as "custom actions" in Windows Installer, and they are without a shadow of a doubt the leading cause of deployment failures. See the "Very Advanced" section for more on custom actions.
Overuse of custom actions and a lot of ad hoc coding tends to indicate flawed application design, but in certain cases you are just dealing with new technology and you have to roll your own solution to get your solution deployed. This is exactly what custom actions are for. Over time standardized solutions should be created and preferred. And small changes in application design can often eliminate complicated custom actions. This is a very important fact about software deployment - there are so many variables that one should opt for simplicity whenever possible.
At a basic overview level, deployment must account for:
Setup Fundamentals
All third party tools provide good support for these setup fundamentals, but there are some differences. The installation of prerequisites may be the area where third party tools and free frameworks like WiX differ the most in terms of ease of use - at the time of writing. The support is there, but it can be a little bit challenging to set up.
Check if the system is suitable for installation of the package in question.
Disk space.
OS type & version.
Language version.
Computer architecture x86/x64.
Unsuitable platforms: Thin Client / Citrix / Terminal Services
Customized setup required due to custom lockdown.
Maybe even malware situation (I wish - can cause mysterious deployment problems).
etc...
Scan for the presence of prerequisites and runtimes, and install them if necessary.
Allowing easy deployment of prerequisites and runtimes is a task with extensive support in third party deployment tools. There is limited support for this in Windows Installer itself. The basic feature for runtime distribution in Windows Installer is the merge module - essentially the "include file equivalent" for MSI files. The standard way to deploy shared files. A merge module is compiled into your MSI at build time - sort of early binding in developer terms.
Some prerequisites are installed via Windows Installer merge modules. Others are generally installed using their own setup file (various formats).
Examples: Active X for games, Crystal Reports, Microsoft Report Viewer Runtime, MySQL, SQL Server Runtime, VB6 Runtime, ASP.NET MVC Runtime, Java Runtime, Silverlight, Microsoft XNA, VC++ Runtime, .NET runtime versions, Visual Studio Tools For Office Runtime, Visual F# Runtime, MSXML Runtime, MS Access Runtime, Apache Tomcat, Various Primary Interop Assemblies, PowerShell versions, etc...
Finally, several core Microsoft components such as Windows Installer versions and PowerShell versions generally come down via Windows Update and might be better to exclude from your setup (just check for their existence, and tell the user to run Windows Update if a component is missing). Actual practice here varies.
Provide a GUI suitable for input of required settings from the user.
It is common practice to enter and validate license keys in a setup.
Personally I think this is better done from the application itself for both practical and security reasons - making piracy more difficult, allowing trial installs, reducing excessive setup support calls (you wouldn't believe it...), etc...
For complex setups a lot of GUI could be required to gather deployment settings - particularly for server setups with IIS, MS SQL, COM+ and other advanced components.
Allow installation in silent mode for corporate use.
Extremely important - all corporate deployment is automatic and silent (no GUI shown during installation), except certain server installs.
Smaller companies may run your setup in GUI mode. In my experience they generally do.
Home users generally always run your setup in GUI mode.
Know your target group, and definitely make sure you support silent running if you target corporate customers. However all setups should work in silent mode, and if you follow MSI design rules and best practice it "comes for free".
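One cheap way to keep yourself honest here is to smoke-test the silent path as part of your own verification. A minimal sketch (the MSI name and the INSTALLDIR property are only examples; public property names vary by product):

```python
# Sketch: run the package the way a corporate deployment tool would - fully
# silent, with a verbose log and public properties instead of GUI input.
import subprocess

result = subprocess.run([
    "msiexec", "/i", "MyProduct.msi",
    "/qn",                          # no UI at all
    "/l*v", "install.log",          # verbose log for troubleshooting
    "INSTALLDIR=C:\\MyProduct",     # example public property
])

# 0 means success, 3010 means success but a reboot is required.
if result.returncode not in (0, 3010):
    raise SystemExit(f"Silent install failed with exit code {result.returncode}")
```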
Adding Basic Stuff
These basic tasks have full support in the Windows Installer engine itself, and all third party tools provide fairly equivalent support for all of them despite variations in GUI features and ease of use.
Install files and registry settings.
Install ODBC data sources, file associations, shortcuts and icons.
Update application and system-wide path settings.
Update and merge text based files such as INI files.
Register COM files and enable .NET COM Interop if need be.
Install .NET assemblies to the GAC, and run custom .NET installer classes.
Install side-by-side Windows assemblies to WinSxS.
Deliver signed and certified files (also applies to the setup file itself).
Install and control Windows services.
Install Control Panel Applets.
Update environment variables.
I won't dwell on these issues or flesh them out with too many details. All of these deployment tasks should be reasonably well supported in all deployment tools and frameworks available. However, many people mess up their deployment by not using the built-in deployment features and instead relying on custom actions for such trivial tasks. Entirely added risk for no gain whatsoever.
In particular, we often see custom actions used to install Windows services - and this is usually a sign of a very badly designed service, or at other times just ignorance of how to do deployment. Both issues together are also common. Deploying such a service often involves applying custom ACL permissioning and modified NT privileges to make the service run with user rights instead of as LocalSystem, which is generally the only correct way to run Windows services. Running a service with user credentials is a "deployment anti-pattern" worth mentioning in passing (more on this later).
Another common custom action use that is always wrong is installing files to the GAC via a custom action. There is good built-in support for this in Windows Installer, and any excuse to install via a custom action is almost certainly hiding a bad design or some generalized madness :-). It is also a fact that many deploy far too many things to the GAC overall, but that is a development issue: When should I deploy my assemblies into the GAC?
Finally, .NET installer classes are intended for developers to test their components during development - they should not be used for deployment. They are essentially just the .NET equivalent of self-registration (which is also not acceptable for MSI - you need to extract the COM information and add it to the MSI tables - see link for details). An MSI is declarative - it should contain all changes to be applied to the system so that proper rollback and management can be ensured. So the message is that .NET installer classes should only be used for development and testing. Once you build an MSI to deploy your application, you should use MSI constructs to achieve proper deployment with rollback support and intelligent management. We see these .NET installer classes used mostly for service and GAC installs. In an MSI this translates to using the ServiceInstall and ServiceControl tables for services, and simply marking a component for GAC install to install it to the GAC (it must be a signed assembly). Once you know how, it is easy, and you won't miss the .NET installer classes, because MSI works like "automagic" when you do this right. You get reliable rollback for free, with ease.
Adding Advanced Stuff (often server stuff)
Despite support in all deployment tools for most of these issues, I have often found that I needed to implement custom actions and ad hoc solutions to achieve proper deployment in certain cases. This is particularly the case for COM+ and IIS deployment. WiX provides highly customized support for both types of deployment, but I have limited experience using it.
The update and installation of XML files is a task supported by each deployment tool since there is no built-in support for this in the Windows Installer engine - which is quite amazing at this point.
With regards to database installation, and particularly updates, I am on the fence, thinking it should be done from applications with proper user authentication and interactive use, instead of as a "one shot", impersonated deployment operation (which might fail without good exception management or recovery options). Or, in other cases, it seems updates should be a managed process involving users raising corporate tickets handled by professional DBOs. Some more details below.
Configuration of IIS, Apache, or other web servers.
This is a whole world of its own, particularly with regards to IIS. I have found deployment tools lacking in features to deploy sites as requested by developers and corporate teams.
Though largely untested by me, the WiX framework provides a very flexible implementation of IIS configuration and deployment.
I expect a lot of custom actions are in use to achieve special deployment configurations.
Run SQL server scripts against databases.
Create db, connect to db, update db, run stored procedures, maybe even trigger backups or schedule new tasks, etc... I don't know all that people do here.
Should this be done in the application instead, or by a DBO? That seems much more reliable. A setup is "one shot"; an application can be restarted so you can try again - with better exception handling.
Plus, an MSI setup has a GUI that is severely limited in events due to the overall MSI design (proper Win32 dialogs can be spawned from the limited MSI GUI, but it takes a lot of effort - I have only done it once).
Crucially a setup can run with elevated rights, but that is just on the local machine. Authentication is still needed against the database (unless Windows Authentication is used).
A database update is a transaction on its own that would run as a part of the overall Windows Installer transaction. It is not obvious how to handle errors or what to do in terms of rollback if the installation fails.
Needless to say this can all get very complex to handle in your setup. It is an (enterprise) configuration task in my view, not just a deployment task. Insightful comments very welcome on this issue - I am on the fence with regards to best practice.
If you are delivering a client / server solution to your customers and need a way to set up the (server side?) databases "fresh" with defaults to help your customers "get going" with your solution, then database deployment definitely makes sense to me. But update scripts run as part of installation targeting existing databases would worry me in terms of reliability and management - not to mention safety.
For corporate database updates it would seem a proper process involving a DBO would be more secure. They can run a proper backup before updates are applied and then true rollback is in place if problems are found in UAT.
Installing ActiveX browser components (certificate based through browser).
Install of signed CAB file downloaded from a Web page (admin only, can be captured as an MSI for mass deployment with elevated rights).
Defaults to install in "C:\Windows\Downloaded Installations".
Complications can arise if the version in the CAB file differs from the version requested by the Web page (triggers CONFLICT folders to be generated as installs keep re-running).
Update and merge XML files.
Advanced because it is (amazingly) not natively supported by Windows Installer.
Supported with extensions by both WiX and third party deployment tools.
Configure and control COM+ components.
Tech note: I have failed several times to achieve this properly with several third party tools. There seems to be an overall lack of required features.
I normally end up manually configuring the COM+ application and then exporting an MSI from the Component Services administrative tool that is then used for deployment.
This exported MSI is not good at all - fragile if you try to make any modifications. It contains an undocumented .apl file with the application's attributes and any dependent DLL or data files are not auto-included.
WiX provides support for COM+ (not tested at all by me). I hope it is good :-).
Just for reference: Understanding COM+ Application Installation.
Add custom event logs, set up performance monitors, add firewall rules, and other Windows extensions. Supported by most deployment tools these days - including WiX. These features are not natively supported by the Windows Installer engine.
Set up connections to mobile devices and deploy.
Can involve "some strangeness" and weird proprietary solutions.
A custom, native dll might be required to achieve smooth deployment (Pocket PC back in the day - not sure how things work these days).
Install drivers of various kinds.
Much easier and more reliable now for signed drivers than before.
Supported by all third party tools and WiX (using dpinst.exe in the background).
Hooking up the application to advanced server features (deployed separately).
Automatic update systems.
License servers. Floating licenses, or regular licenses.
Online resources of various types. Help, templates, discussions, SDKs, developer tools, etc...
Online stores.
Most of the time this just involves setting a link or registry key to point to the server resources, but sometimes it is more complex.
Adding Very Advanced Stuff (custom actions)
When there is no built-in support for a certain operation or task in Windows Installer itself, or in any of the various third party tools available, you are left having to implement the feature yourself.
When you use Windows Installer, this involves running custom actions of various types (Windows Installer's mechanism for running custom, executable installation logic during installation).
Custom actions are purpose built executables (binaries: dll, exe) and scripts capable of making advanced modifications to the system during installation that are not supported by Windows installer natively or by the deployment tool in use (WiX, Installshield, Advanced Installer, etc...).
Custom actions that make modifications to the system run with elevated rights so that changes can be made to the system even if the logged on user does not have admin rights. There is essentially no limit to what these custom actions can do. They are armed and dangerous.
Custom actions are the leading cause of deployment errors and failures.
Hands down. If an MSI install fails, it is most often related to a failing custom action.
Custom actions are difficult to write and debug due to the complexity of Windows Installer. They must be used only when necessary and they must be written with full rollback support so that they are capable of undoing all changes that were applied to the system in case the installer fails and must roll back changes.
This is hard, demanding work, and custom actions are a big, complex and error-prone issue - a can of worms.
Often minor application design changes can allow custom actions to be replaced by standard MSI features, or various MSI extensions available in third party tools and in WiX.
Executables and scripts that run correctly on their own may fail when run as part of an MSI due to the complex impersonation, elevation and runtime design of Windows Installer. These are not trivial things to get right. An MSI install is an intricate transaction with elevated and impersonated sequences that is very hard to deal with.
Custom action types
Windows Installer supports custom actions implemented as purpose built, native (win32) executables and dlls as well as scripts such as JavaScript or VBScript.
Some even use .NET binaries (C#, VB.NET, DTF, etc...) to run custom actions - this is not recommended due to their prerequisite need for the .NET Framework. These binaries are referred to as "managed code" and can't run without the correct .NET framework installed.
Finally, there are PowerShell custom actions, which are both script and managed code combined - and they should not be used, since they require the .NET framework.
In the future, when the .NET framework might be guaranteed to exist on all Windows computers, this managed code might be a viable option for general use, but as of now the consensus seems to be that these actions are too risky and unreliable.
Common, sample custom actions (certain common tasks are frequently implemented as custom actions because they are frequently needed but not natively supported by Windows Installer).
Manage Windows Shares (usually create).
Apply custom ACL permissioning (there is some built-in MSI support for this).
Modify NT privileges.
Configure DCOM.
Manage groups and users.
Configure per-user Office Addins.
Persist installer properties (for repair and reinstall).
Custom and company specific launch conditions.
IP-Configuration redirects for IIS
Encrypt or obfuscate content for data security
Etc...
Most of the custom functionalities mentioned above are now available in the WiX framework as a custom C++ dll - and other tools have some similar, custom features. You should always prefer these ready-made solutions to your own custom actions since rollback is properly implemented in WiX and the implementation is well tested.
Applying custom ACL permissions and modifying NT privileges are considered "deployment anti-patterns" by most deployment specialists. The requirement to do so indicates poor (lazy) application design.
Custom action summary.
Writing a custom action on your own should be a rare event, reserved for something unique that has not been done (better) before.
Minor application re-design can often eliminate unwise and complex deployment constructs. In fact, almost always.
For example: application configuration should happen on first application launch, and not during the setup.
The setup should prepare the application for first launch, and perform tasks that require elevated rights (only).
User data initialization is a particularly bad thing to use setup scripts to perform. All of this should be done in the application launch sequence.
You should enforce proper rollback support.
This is complex and hard work.
Almost all script custom actions I have seen do not implement rollback at all.
You should write with minimal dependencies.
Preferably use C++ or Installscript, or maybe JavaScript (only for internal, corporate deployment in my view). Avoid VBScript, and definitely avoid .NET code in C#/DTF or PowerShell scripts. There is some discussion on the issue of managed code. MSI experts like Chris Painter believe C#/DTF custom actions are ready for prime time, whereas the general consensus seems to be to err on the side of caution and rely on C++ dlls until a proper .NET runtime environment can be guaranteed. Here is a long-winded "discussion" of this issue: Windows Installer fails on Win 10 but not Win 7 using WIX
Robust code is difficult to write in script. Scripts are fragile, hard to debug, lack advanced language features (particularly error handling) and are vulnerable to anti-virus blocking.
The only real advantages of scripts are that they are transparent and inspectable and the whole source is embedded in the MSI file (no version control issues). Corporate teams that hand off work to each other frequently might use JavaScript (there is a lot of legacy VB Script use as well - but that language is very poor for error handling).
Managed code has runtime requirements that can't be guaranteed at the time of writing - and this has been the situation for a very long time now.
PowerShell is both managed code and a script. Avoid it. Installshield supports it as a type of custom action. It remains to be seen how successful it will be. I would never use it unless forced to.
And much more...
Additional complications For Deployment
There are many additional complications when delivering a professional setup such as delivering setups in different languages (localization), branding setups for different resellers (OEM), ensuring the setup works on all required operating systems in different language versions, delivering separate setups for x86 and x64 machines, delivering a scaled down "viewer version" of the application, making combined setups for client and server installations (can be run on both the server and the client installing different components - not recommended if you ask me - details), and not to mention deploying to different embedded devices such as phones, pocket pcs, smart phones etc...
Certain "Deployment Anti-Patters" are also problematic to deal with (the linked answer is an "experiment" and I am not too happy with it - a work in progress, but it is intended as a check list for developers for their deployment efforts to avoid really common problems). These are bad constructs required in setups to make poorly designed applications run properly. They include things such as applying custom permissioning (write access in otherwise locked down paths, etc...), customizing NT privileges (typically "run as service" for a user account, or much worse), or applying excessive use of complex custom actions that make unpredictable changes to the system (these can really be anything and be very dangerous). Messing up the silent install is also a huge, common problem - it is terrible for corporate use of your setup. Deploying excessive amounts of user-specific data with your setup can also be problematic (hard to control complications). And there are many other, more specific problems to relate to.
Here is a post with the overall issue of setup and deployment seen in the larger context of application marketing and sales.
Doing Your Own Deployment
You will need a tool or a framework to deliver your own setups. Here is an answer describing different tools used to create installers: What installation product to use? InstallShield, WiX, Wise, Advanced Installer, etc. All attempts have been made to make the descriptions as objective as possible - describing real world experience with positives and negatives.
The commercial tools described in the link above are most excellent tools - and they tend to speed things up with good GUIs and ready-made solutions for common requirements, but developers should consider trying WiX - the new way to create MSI files. Please read this post for background information:
Windows Installer and the creation of WiX (read this if you are trying to "find your feet with WiX" and want to understand what the technology is all about and where it is coming from).
WiX has a learning curve but is "developer friendly" in many ways. For one it is a project type in Visual Studio (once you install it), and it allows a setup to be defined in XML and compiled to MSI as you would a normal binary. This allows proper source control, branching and collaboration. Plus it is free and open source. I feel it is OK to recommend a free framework, especially since it is well maintained. Expect a learning experience though. Here are some suggestions for a "flying start" with WiX.
Many programs make use of graphics, sound, and other drivers which are supplied and maintained by third parties. In many cases, these drivers may use underlying hardware or other system features in ways that Windows itself knows nothing about. If two programs, each with its own driver and unaware of the other's existence, tried to use the same hardware, they would likely interfere with each other in unpredictable and undesirable ways (e.g. one might overwrite graphical textures loaded by the other). To avoid such problems, Microsoft recommends that applications install drivers in such a way that two programs which need the same driver can share the same driver instance.
The approach Microsoft takes is not the only means of ensuring that multiple programs using the same hardware go through the same driver. A system could also have programs temporarily load drivers when they start, and have drivers automatically unload when they're done. The difficulty with that approach is that if a program which uses an old driver is launched, and while it is running a program which needs a newer version of that driver is launched, the new program would not be able to run unless or until the old program shuts down its driver and switches to using a new one. Such a hassle is probably unavoidable, but having to deal with such things every time a program is launched would probably be more bothersome than dealing with them only once, when a program is installed.
All that having been said, while it may be helpful to be able to install a program once and have any "driver" issues taken care of once and for all, there's also something to be said for being able to simply run a program without having to make "permanent" modifications to the system. There shouldn't be any particular obstacles to programs being able to use either "temporary" or permanent drivers, but I know of no particular efforts to facilitate such designs.
Besides copying the files for you, the installer may also add registry entries needed by the program (if any), add values to environment variables (PATH), create icons on the desktop, and so on, so you don't have to do this manually.
To quote Wikipedia, "Installation typically involves code being copied/generated from the installation files to new files on the local computer for easier access by the operating system." For simple programs, there is no need to install anything, but more complex ones can update, add links, etc. automatically if installed.
For our ERP platform (NetSuite), customization code lives in the cloud. We (different entities) can make changes to it directly, but there is no source control available to us in the cloud.
It is possible to fetch the code files through a SOAP API.
I was wondering if it is possible to get the files through the API using Apache Ant and push them into TFS/SVN?
I am not familiar with Apache Ant, so I do not know whether it is capable of fetching information through an API.
(you can also suggest any better approach to source control the code in cloud)
Ant has several third-party task plugins for version control tasks. Plus, you can always use the <exec/> task to build an equivalent command-line checkout. However, I do not recommend using Ant to fetch versions from your version control system. This ends up being a chicken-and-egg issue.
Your build script is in version control. You need to fetch it in order to run Ant against it. If you're fetching your build script, why not the rest of your project?
Once you checked out your project in a working directory, and want to do an update, why not let Ant do the update? Because your build script is also version controlled. Doing an update and build at the same time could have you running the wrong version of your build script against your build.
Maybe you're going to check in the files that were modified by the build system. Not a good idea. You should rarely, if ever, check in files you built. If you need them, rebuild them. Built files are usually binary in nature, and can vary greatly from one version to another. In most cases, your version control system will be checking in completely new versions of the built object instead of using diff format. That takes up a lot of room in your version control system.
Even worse, you can't diff the built objects, so you can't really verify their content or trace their history. And built objects tend to age very quickly. Something built last month is already obsolete. Within a year, the vast majority of the information in your version control system will be nothing but obsolete binaries, and very little of what is stored will be useful code.
Besides, your version control system has nothing to do with building your files. Imagine between Release 2.1 and Release 2.2, you change version control systems from Subversion to Git. Now, a bug in Release 2.1 needs to be fixed, and you need to create Release 2.1.1. Your checkout code in your build scripts will no longer work.
If you're using NetSuite IDE, you're using Eclipse, and Eclipse is great at handling version control. Eclipse can handle both SVN and TFS (although I don't know why anyone would use TFS). Eclipse tracks file changes quite nicely. In fact, Eclipse gets confused when you change files behind its back (like you do an update outside of Eclipse).
Let Eclipse handle your version control issues. It presents a common interface to almost all version control systems. This way, your build system can handle the builds.
I'm not sure what other requirements you might have, but if you use the NetSuite IDE (Eclipse + bundled plugin), you can use it to pull and push files to NetSuite. And then you can use any source control system you like (we use SVN, for instance).
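If you do end up scripting the sync yourself rather than going through the IDE, the overall shape is roughly the following. Note that fetch_files() is only a placeholder for whatever SOAP/SuiteTalk client wrapper you use; the SVN commands are the standard ones:

```python
# Sketch: mirror the cloud customization files into an existing SVN working
# copy and commit whatever changed.
import subprocess
from pathlib import Path

WORKING_COPY = Path("netsuite-code")     # an existing svn checkout

def fetch_files():
    """Placeholder - replace with the real SOAP/SuiteTalk call.
    Should return a mapping of relative path -> file contents (bytes)."""
    return {}

for rel_path, content in fetch_files().items():
    target = WORKING_COPY / rel_path
    target.parent.mkdir(parents=True, exist_ok=True)
    target.write_bytes(content)

# Add anything new, then commit the lot in one go.
subprocess.run(["svn", "add", "--force", "."], cwd=WORKING_COPY, check=True)
subprocess.run(["svn", "commit", "-m", "Sync from NetSuite"],
               cwd=WORKING_COPY, check=True)
```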
I'm working with an embedded system software project right now, and we're facing some problems dealing with some precompiled binaries living inside our repository.
We have several repositories for different parts of our project: One for the application itself, one for the OS, one for the bootloader and several libraries. All of them, except the one for our application, are shared with other teams, for other projects. We are using git (and changing is not an option right now), but I think we'd have the same problem with any VCS.
Right now, we have a precompiled binary for each of those components living inside our application repository. The idea was to speed up the build time, since the OS alone takes about 20 minutes to build from scratch and most guys work only with the application.
Problem is, there are several bugs/features in those binaries (and related application code) to be integrated at any time and, as you know, diffing and merging binaries won't work.
So, how do you guys do when you have to work with those external dependencies?
Thanks a lot =)
One viable solution is to use an external binary repository like Nexus.
It is not linked to a VCS, meaning you can easily clean up old versions of said binaries you don't need anymore
It is lightweight (a simple HTTP client-server protocol; no need to clone the whole repo with all the versioned binaries, as you would with a DVCS such as git or mercurial)
I'm just learning how to do things, and want to start using some sort of version control for a web app.
What's most appropriate for deploying a python or php web app on my own? I'm using linux and have a linux server.
Thanks!
SVN, but you need to be able to easily deploy your webapp with SVN.
Since that is not always a simple task, I'll just point out this article, which may be of interest for your project.
General principle:
Configure Apache on your development server so that it picks up your checked out working copies as separate subdomains. Using this, you can simply make a checkout of your project and it will automagically be up and running. No need to touch the Apache configuration. You need a DNS wildcard entry so that all subdomains of dev.example.org go to your development server.
The only problem with using the above Apache configuration locally is the DNS wildcard. Unless your desktop is assigned a hostname by your network's DNS server and you can set the wildcard there, you will have to make do with your localhost address. You can install dnsmasq to act as a local caching DNS server and put the wildcard on your own machine.
Use dnsmasq so you can achieve the same effect on your own development machine. That way you can develop your web applications locally and you won't need a central development server. In my examples I will be assuming you use subversion for your version control, but it works virtually the same with other version control packages, such as git or bazaar.
Note: (Humor)
This other question on Subversion allowed me to point to this article about publishing its (source-controlled) data into production, containing probably the ugliest diagram I ever saw on the topic ;-)
If I had not bumped into git, I would've doubtless gone with SVN. Having said that, I would recommend git.
Nowadays, I would certainly go with a distributed version control system. Setup is faster since you don't need to set up a version control server and everything, all you usually need to do is initialize a certain directory within your development box for version control and you're good to go. They also seem like the way to go these days. If it were 2001, I would recommend a centralized system like Subversion. But it's 2008, everyone is moving to distributed systems and user interfaces and supporting tools tend to get better.
Here are some suggestions for you:
Darcs: Easy to learn and has all the features you will usually need
Mercurial
Git: Powerful. May take some time to understand but evolves rapidly
All three of them should be readily available in your Linux-based OS through the usual package management solutions.
SVN is great.
Nowadays the hype is all around DVCS.
I prefer Bazaar.
Because of its name, the support, the feature set, and because it works well on my window$ machine too.
I'm using unfuddle.com and I love it. It's free for a one-person web app.
The answer really depends on your way of thinking. I personally had problems switching to Subversion from SourceSafe. If you come from a Microsoft shop, I'd suggest using SourceGear Vault; it is free for <=2 users. If you come from a non-Microsoft area, then using Subversion would be preferable. Also please consider git if working on Linux.
HTH, Valve.
Personally I use monotone, learning a DVCS is definitely the way forward.
For a one-man job, pretty much any revision control system will do the job. It's when you get into multiple people, and past that into multiple repositories, that there start to be differences.
Given that, I'd go with whatever Free Software system your development environment supports best. I see Subversion and Git mentioned and both are fine choices.
SVN would be my first choice. If I had to take a second choice, I would go with CVS.
One of the most popular options out there today is Subversion. It's generally easy to set up and configure and is able to handle multiple platforms.
SVN. If one does not need concurrent access (which is your case), it is VERY easy to set up, as no server is required at all. Definitely your weapon of choice.
I wholeheartedly agree with SVN. Command-line SVN is quite easy too.
While I like svn a lot, I've found mercurial handy for having the whole repository locally. (the same goes for git, but its interface is a little less polished in my opinion.)
I'm not able to answer the question as asked, because I don't develop on a Linux server.
But maybe this experience has a counterpart in Linux world.
I use a local-on-my-LAN-only IIS server (actually on an old laptop that no longer travels but works as a little server). I have VSS installed on that server too. There is an integration between the IIS Server, the FrontPage extensions on that server, and the VSS.
The upshot is that I can use FrontPage to build and edit my site and build a development image that is always backed up in VSS, and I can check out, check in, and do all of that from within FrontPage.
Now, the way I publish is I take advantage of the sharing capability of VSS, so I have a deployment image that shares with the project that is actually an IIS web site. I have a deployment-image directory that I can transfer the latest checked-in material to (material that has not changed is not updated). I then deploy the deployment image to the hosted, public web site using FTP (again, only transferring new and updated files).
I present all of these details to suggest what might be the use-case of interest, even though a different solution approach is needed with Linux.
If I wasn't using a tool that integrated with the web server and also the source control at the server, I could do something similar by checking the VSS material in and out of a local directory and then pushing the updated VSS project to the IIS server web-pages directory hierarchy. The workflow is a little more clumsy. In this case, I would not edit pages directly on the development web server unless I could lock checked-in pages as read-only or something.
Does this suggest anything that might be appealing in the Linux server case?
Definitely Mercurial is a good choice: quick, easy to use, perfect for working alone or with multiple other developers, fully multiplatform, handles merges, branches, etc. very simply, is plugin-based, and there are great tools out there such as nice IDE plugins (notably NetBeans and Eclipse).
Robust, it works just as you expect such a tool to work, not like SVN (and I have years of day-to-day experience with it)...
Sun, Xen and Mozilla all host their repos on Mercurial. We're currently moving from SVN to Mercurial after a 6-month daily test, without any regret.
I once used Perforce and was impressed with it. There are GUI and command-line versions, and it supports Windows, Linux, Mac and Unix for both the server and the client. It integrates with Eclipse and has APIs for writing your own client applications (C/C++, Ruby, Perl, Python). It only supports two users and five workspaces before you need to buy licenses, though (but that is within the scope of this question).
Subversion is a good choice. For the client, there's TortoiseSVN (http://tortoisesvn.tigris.org/), which integrates with the shell and lets you do things with a right click on a folder. For integration with Visual Studio (I'll assume that's your environment) there's VisualSVN (http://www.visualsvn.com/) and AnkhSVN (http://ankhsvn.open.collab.net/). For the server there's a one-click installer you can find here (http://svn1clicksetup.tigris.org/) that does the setup in a snap. VisualSVN also has a (free) server that you can use, which provides its own web access and security (rather than using Apache) and has an MMC snap-in for managing/creating repositories and users.
CVS - No, I'm not joking. Not that it is better (it is not) or the simplest (it isn't), but it really doesn't matter at the end of the day. The important thing is to get started with ANY version control system even if it is a one-developer shop, even if it is CVS.
How do you manage your project life cycle?
For example: Do you start with a template? Do you use versioning such as SVN as the authoritative source? Do you archive the projects, if so when and how? When a project is revived (work resumes), how’s that handled? Do you use automated scripts to do things such as create IIS sites, DBs, archive, launch, etc?
Of particular interest is management of many projects at varying points of development.
Development: We do not start with a template, because the world changes quickly enough to make template maintenance a full-time job. We do encourage everybody to use the same IDE (Eclipse), so that they can help each other with their environments.
Project Management: We are using GForge to manage our projects. Sourceforge is slightly better, but GForge is much cheaper and has a different licensing fee model. GForge incorporates CVS, SVN, Document storage, Issue trackers and integrates everything nicely. This makes it easy to see where the project is at. Open issues, and closed issues with connected code changes, everything is integrated.
Versioning: Although we tried SVN, we switched back to CVS because it fits our needs better and works fine.
Backups: Our GForge server, housing all our projects and source code, is running on a VMware ESX server. Backups are done daily at the VM level, and we make VM snapshots if we feel that we need more frequent restore points for some reason.
Reviving projects: This is very common in our business. Every project has all its libraries and build requirements in CVS. The project always has an up-to-date development manual which describes all the steps to get a development environment running, and has a chapter on all the non-default things to pay attention to. We try to build software in an as-default-as-possible environment so that developers don't have to spend days tweaking their settings.
Nearly all projects are built using Maven, which also makes life easy for our developers. Usually reviving a project only takes a few steps:
Download eclipse
Connect to CVS over SSH (extssh is built into Eclipse)
Check out the project (default "Check Out" option)
Run "Maven Eclipse" and refresh Eclipse
Run the unit tests in Eclipse to see if everything is working.
Builds: All our projects are built on a separate build server. Every morning the build server does a complete build and tags CVS if all unit tests succeed. During the day, hourly builds are made, and when there are failures the team automatically gets an email. Usually we use one build server per project, and it is a simple Luntbuild server (Linux, Tomcat, Luntbuild).
Both the build server and sometimes even the developer machines are VMs. This makes reviving a project really easy. Get the VM from the file server, start it up, and you're good to go.
The build server creates daily sites which show unit test coverage statistics, complexity measurements, CVS activity and developer activity (who changed what and when).
All our software comes with self-building database scripts built in. Point the config file to the database, start the software, and it figures out what it needs to do to the database itself. This really comes in handy because the buildserver can just start the software. No special steps needed. Our customers are also happy, they never need to worry about their database, or upgrade scripts.
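The pattern itself is simple enough to sketch. The table name and migration statements below are invented, but the idea is just "check a version table at startup and apply whatever is missing":

```python
# Sketch of the "self-building database" idea: the application checks a
# schema_version table on startup and applies any missing migration scripts
# itself, so nobody ever runs upgrade scripts by hand.
import sqlite3

MIGRATIONS = {
    1: "CREATE TABLE customer (id INTEGER PRIMARY KEY, name TEXT)",
    2: "ALTER TABLE customer ADD COLUMN email TEXT",
}

def upgrade(conn):
    conn.execute("CREATE TABLE IF NOT EXISTS schema_version (version INTEGER)")
    current = conn.execute("SELECT MAX(version) FROM schema_version").fetchone()[0] or 0
    for version in sorted(MIGRATIONS):
        if version > current:
            conn.execute(MIGRATIONS[version])                          # apply the step
            conn.execute("INSERT INTO schema_version VALUES (?)", (version,))
    conn.commit()

upgrade(sqlite3.connect("app.db"))
```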
The whole project lifecycle is managed, documented and tracked in GForge, with the addition of some external spreadsheets for budget tracking because that's simply easier.
Whether you have an integrated project server or not, I think it is really important to have a system. This enables you to switch developers between projects without them getting lost. It saves time. Particularly when a customer comes back to you after 2 or 3 years for modifications on old software (yes, that happens).
All the stuff we use is open source (you can even use an open source fork of GForge). It's not in the tools, it's how you use them.
It would depend on the nature of the work. When working at home for private clients, I start by opening a folder for the client with a bunch of standard documents, which I customize, such as contracts, invoices, reports, requirements, testing, code repository, etc. As the project develops, I add/modify the directory as required.
If I had to go back to a project, I would reopen that directory, and for any non-common components, create a new directory. For example, if my client had a web application built, and now they need a second application, I would use the same directories for invoices and contracts and create new directories for the code base, requirements and testing.
In terms of backup, I archive the work at any point where I've reached a milestone, with the exception of code, which I back up daily at a minimum. At the end of each project, when I close a contract, I take the entire directory and compress it and store it on a remote server.
I create folders containing the project stages, like "initialize software process", where we place docs like the business proposal; we use another one for requirements, another for construction, releases, one for meeting minutes, and so on.
We keep those under a Subversion repository, but it really depends on what methodology you are using; it also depends on how you handle configuration management and how organized you want to be. And yes, we use templates for most of our artifacts, so we assure in some way the quality of our products.
As for source code, we have it all in a Subversion repository. After each release, we make a branch - new features only get added to the current branch (on which the next release will be based), critical bug fixes are done in the current and the old branch (so we can deliver hotfixes for the version the customers currently have).
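In practice that branching pattern boils down to a couple of Subversion commands per release. A sketch (the repository URL, branch layout, and revision numbers are made up):

```python
# Sketch: create a release branch server-side, then port a critical fix from
# the current branch into the old release branch.
import subprocess

REPO = "https://svn.example.org/myproduct"   # hypothetical repository URL and layout

# After release 2.1 ships, branch it off; new features continue on the current branch.
subprocess.run(["svn", "copy",
                f"{REPO}/branches/current", f"{REPO}/branches/release-2.1",
                "-m", "Branch for release 2.1"], check=True)

# Later, a critical fix lands on the current branch as (say) r1234; port it to 2.1.
subprocess.run(["svn", "checkout", f"{REPO}/branches/release-2.1", "wc-2.1"], check=True)
subprocess.run(["svn", "merge", "-c", "1234", f"{REPO}/branches/current", "wc-2.1"],
               check=True)
subprocess.run(["svn", "commit", "-m", "Hotfix: port r1234 to release 2.1"],
               cwd="wc-2.1", check=True)
```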
As for all documents belonging to a release - from the planning & resource sheets to specifications, test cases, user and technical manuals for the software we create, etc. - we store them in a SharePoint portal site. The advantage of this SharePoint site is that users have access via a website (so no need to grant management access to your repository ;-), you can finely control the access rights, and you can turn on versioning. We also use tagging to mark whether a document belongs to a specific release (e.g. service pack xy) or product, or whether it is generally valid.
Concerning scripts, we use several to perform e.g. nightly builds plus unit tests (we usually do that for the last and the current release), and also to deploy the complete software solution (including IIS site creation, database data model upgrade, ...) on our test servers. These are NAnt scripts using lots of variables for paths, version numbers, etc., so it is very easy to copy and modify them for a new release.