What does a standard "installation" actually do?

I'm just a hobbyist programmer more or less and have grown up coding-wise in the .NET ClickOnce world.
When one "installs" a program, what actually happens?!
Also: Some little apps/tools just run from the exe. Why do most programs need a fancy installation process? What are the advantages, disadvantages, pros & cons? Is installation usually necessary or more like standard practice?
Apologies for the extra questions. I'm just hoping for a plain-English more-or-less layman's explanation of the key factors.

You're really looking at a lot of legacy reasons all rolled into what has become standard practice in the Windows world.
First, some contrast, because it isn't always this way. An "application" in Mac OS X is simply a directory with a certain structure inside it, named with a .app extension. Installing an application is as simple as dragging it (just the app icon) to your Applications folder, and uninstalling involves dragging it to the trash. That's it, no fancy installer is (usually) necessary.
On Windows, applications are typically built from independent components which need to be "registered". This involves the installer writing some bits and pieces to the Windows registry, to tell Windows where to find the components. Yes, the application probably should know where to find them (since they're all installed in the same place), but years of legacy and different ways of hooking up components have got us where we are today.
Typically, an installation program on Windows:
copies files
registers components
sets security permissions (if appropriate)
adds icons to the Start menu and/or desktop
writes more stuff to the registry to tell Windows to add the program to "Add or Remove Programs"
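To make that last bullet concrete: the "Add or Remove Programs" list is populated from a well-known registry location. Below is a minimal Python sketch of the kind of entry an installer writes; the product name and paths are invented for illustration, and writing under HKLM requires administrator rights.

    # Sketch: register an application with "Add or Remove Programs".
    # Product name and paths are made up; run with admin rights.
    import winreg

    PRODUCT = "ExampleApp"  # hypothetical product name
    UNINSTALL_KEY = (r"SOFTWARE\Microsoft\Windows\CurrentVersion"
                     rf"\Uninstall\{PRODUCT}")

    with winreg.CreateKeyEx(winreg.HKEY_LOCAL_MACHINE, UNINSTALL_KEY,
                            0, winreg.KEY_WRITE) as key:
        winreg.SetValueEx(key, "DisplayName", 0, winreg.REG_SZ, "Example App")
        winreg.SetValueEx(key, "DisplayVersion", 0, winreg.REG_SZ, "1.0.0")
        winreg.SetValueEx(key, "Publisher", 0, winreg.REG_SZ, "Example Corp")
        winreg.SetValueEx(key, "UninstallString", 0, winreg.REG_SZ,
                          r"C:\Program Files\ExampleApp\uninstall.exe")

Windows reads these values back to build the list, and runs UninstallString when the user clicks "Remove".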

The program tries to modify the computer in such a way that it works and all competing products fail. On Windows, this means:
Modifying arbitrary keys in the registry until it becomes slow and full of broken entries
Replacing DLLs with the single ancient version that your software can use
Spreading as many files in as many places as possible
Creating an uninstall script to maintain the illusion that the user can get rid of the software without a reinstall of the OS. In the unlikely case that the user tries to run this script, you can educate him/her not to ever do this again with questions like "The file .... might be used by other applications. Do you really want to delete it? Yes/No/Maybe/Any answer/All answers are correct"
Installing hooks in obscure places so your software runs when the computer boots. That may slow down the boot process but your software will start in an instant, so it's a small price to pay ... for you.
Doing obscure things which take a long time but no one can tell what you do (what does "Setup is preparing the install" do for 15 minutes?)
Checking whether there is enough disk space, but using 32-bit integers to make sure that it can't be installed on 1 TB disks.
An important task is to fail with the installation and print the error: "Installation failed. This might be because there is an antivirus software installed. Please deactivate it and try again." This will make sure that users will start to distrust their anti-virus (especially when the install succeeds during the second run since the obscure bugs in the installer weren't triggered) and a lot of them will forget to enable the virus scanner again or even uninstall the damn thing.
Virus authors all over the world are people, too! Spam makes up most of the traffic on the Internet, which must mean that it's important, and who wouldn't want to be part of the biggest community on earth? On top of that, you can make big money this way. All you need is a weak conscience and/or some criminal energy.
A very important part of your installer is to increase the registry key HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Installer\UserData\S-7-9-23-64738-1349283462-3754093625-4491\IsYourWindowWideEnough\NotGivenUpYetHuh\GoAway\ImportantSystemInformation\Let See How You Can Handle Spaces\DamnIGottaStopSincePathsCanHaveOnl\ReinstalCtr
This important system counter will help to create the illusion of instability for the user until they feel a strong urge to reinstall the whole system. This will help the professional IT industry to sell support hours, sell new computers, more RAM, bigger hard disks, or new Windows versions (they must be better, right?).
Note: If you take this text seriously, seek professional help.
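(One grain of truth worth salvaging from the joke: the 32-bit disk-space check really is a classic bug. Python integers don't overflow, so the sketch below masks the value explicitly to show what a careless C installer effectively does with ~1 TB of free space.)

    # Why a signed 32-bit free-space counter breaks on large disks.
    free_bytes = 1_000_000_000_000       # ~1 TB reported by the OS

    wrapped = free_bytes & 0xFFFFFFFF    # truncate to 32 bits...
    if wrapped >= 2**31:                 # ...and reinterpret as signed
        wrapped -= 2**32

    print(wrapped)  # -727379968: "negative" free space, install refused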

The reasons for using a "fancy" install process
to record the installation process
so it can be replayed (repair) or
undone (uninstall)
to perform actions beyond simple file copy (create registry keys, register components, perform other arbitrary actions)
The "standard" option on most installs will be "all the bits usually wanted, in a standard location, like Program Files" install with no customisation, possibly without some expert-level features enabled.

An installer abstracts the process of deploying complex pieces of software infrastructure, which is usually contained within an archive, through a convenient, self-sufficient user interface.
This UI can be graphical, or text-based, output on a command line such as the Unix shell (e.g. bash). Graphical installers most often use a so-called installation bootstrapper; command-line installers use installation scripts, which can be bash scripts, Microsoft batch files, or any other scripting language that runs on a command line.
In the simplest case an application is simply an executable file, with the operating system knowing what to do with the file in order to run it. The application file may reside in a folder with subfolders and other auxiliary files, packed into one archive. In this case no installer may be needed.
For complex software, entire software platforms and tight integration with the underlying operating system infrastructure may be desirable, for instance to enforce the copyright of a software product.
Many installers on Windows provide an /e or /extract flag (e.g. setup.exe /e) to allow extraction of the archive's contents without the installer running its installation script.
I recently needed to do just that.
Shifts in Mindset
Installers have almost become the norm for delivering professional software, no matter how simple the underlying software assets. With an increasing number of computer-savvy users and the desire to migrate one's applications from one desktop to the next, portable software, often delivered in a simple archive, is becoming increasingly popular.
(I don't know how much time in total I have spent on installers, but it is definitely on the order of days.)
Tasks the installer may handle are:
unpacking (often using exotic, high compression archivers)
ensuring system hardware requirements
ensuring sufficient hard-disk space
ensuring software platform runtime requirements (e.g. 'redistributables')
checking for newer software updates
downloading the software from a remote repository
creating and/or updating program files and folders
creating configuration files, registry entries or environment variables
installing software drivers, mounting or unmounting devices
increasing accessibility for everyday users, by explaining installation steps and creating links and shortcuts
promoting the vendor's own software through bookmarks, etc.
creating an incentive for the user to actually start the software, by presenting the key points of the software during the installation, slide by slide
creating additional revenue through software bundling
configuring kernel modules and automatically running components (e.g. daemons, Windows services)
automatically patching the software
setting folder, file and user permissions
creating UUID references to couple the software to an installation instance and prevent portability
PS: If you can think of other points, let me know and I will incorporate them.
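To make a few of the bullets above concrete, here is a bare-bones Python sketch of the "check disk space, unpack, write a config file" core of an installer. The archive name and install path are invented for the example.

    # Bare-bones installer core: disk-space check, unpack, config file.
    import configparser, os, shutil, sys, zipfile

    ARCHIVE = "exampleapp.zip"                 # hypothetical payload
    TARGET = os.path.expandvars(r"%LOCALAPPDATA%\ExampleApp")
    REQUIRED_BYTES = 50 * 1024 * 1024          # assume we need 50 MB

    # Ensure sufficient hard-disk space (64-bit values, no 1 TB surprises).
    free = shutil.disk_usage(os.path.dirname(TARGET) or ".").free
    if free < REQUIRED_BYTES:
        sys.exit("Not enough disk space.")

    # Unpack the payload into the target folder.
    os.makedirs(TARGET, exist_ok=True)
    with zipfile.ZipFile(ARCHIVE) as zf:
        zf.extractall(TARGET)

    # Create an initial configuration file.
    config = configparser.ConfigParser()
    config["general"] = {"installed_to": TARGET, "version": "1.0.0"}
    with open(os.path.join(TARGET, "app.ini"), "w") as f:
        config.write(f)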

Wikipedia tells us that a typical installer creates or modifies the following:
Shared and non-shared program files
Folders/directories
Windows registry entries (Windows only)
Configuration file entries
Environment variables
Links or shortcuts
So if your program needs one or more of these modifications, you should create an installer which does that job.
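As an example of the "environment variables" item: on Windows, per-user environment variables live in the registry under HKCU\Environment, so an installer can set one persistently along these lines (a sketch; the variable name and value are invented, and only programs started after the broadcast see the new value):

    # Sketch: persistently set a per-user environment variable on Windows.
    import ctypes, winreg

    NAME, VALUE = "EXAMPLEAPP_HOME", r"C:\Program Files\ExampleApp"

    with winreg.OpenKey(winreg.HKEY_CURRENT_USER, "Environment",
                        0, winreg.KEY_SET_VALUE) as key:
        winreg.SetValueEx(key, NAME, 0, winreg.REG_SZ, VALUE)

    # Broadcast WM_SETTINGCHANGE so newly launched programs pick it up.
    HWND_BROADCAST, WM_SETTINGCHANGE, SMTO_ABORTIFHUNG = 0xFFFF, 0x001A, 0x0002
    result = ctypes.c_size_t()
    ctypes.windll.user32.SendMessageTimeoutW(
        HWND_BROADCAST, WM_SETTINGCHANGE, 0, "Environment",
        SMTO_ABORTIFHUNG, 5000, ctypes.byref(result))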

Depends on the program you are installing. An "installation" can range from simply copying the (relatively small) executable to a directory, to setting up shared libraries, doing patch-level checks ("I am designed to run on SP2 or higher - do I have SP2 or higher?") and changing the system's configuration, either for the current user or for all users. Most installers also register the installation with a package manager so that you can easily uninstall at a later point.

Related

Is there software for jar file copy protection, or licensing software?

Is there a good copy-protection mechanism to protect my jar file from being copied off the DVD, or licensing that can limit the number of machines it is installed on?
I have software which I need to distribute on DVD. I want my clients to simply run the application on their machines.
Thanks in advance for your help.
Copy protection can't prevent your file from being copied, but you can make it fail when it is run on unlicensed machines. Selecting a copy-protection mechanism is a matter of budget and time. There are many companies with many products.
You can go with not-yet-cracked technologies like iLok etc. This is applicable for very high-revenue cases. But if your target audience is fewer than 1000 people, then a simple in-house solution might work. The consequence of a complex method is typically many calls to the support line and unhappy customers.
Things got a lot easier once clients were all connected to the internet. You can make some simple parts run on a server, which a cracker would never dive into for a replacement. Or the app might download its contents from the server after installation. The latter is what I did for my app, and I never had any problems.
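A minimal sketch of that "run some part on a server" idea: the client sends a machine fingerprint to a licensing endpoint and refuses to start unless the server approves. The URL is a placeholder, and a real scheme would at least sign the server's response so it can't be spoofed.

    # Sketch: startup license check against a hypothetical license server.
    import hashlib, platform, sys, uuid
    from urllib.parse import urlencode
    from urllib.request import urlopen

    def machine_fingerprint() -> str:
        # uuid.getnode() is the MAC address; hash it with the hostname.
        raw = f"{uuid.getnode()}-{platform.node()}"
        return hashlib.sha256(raw.encode()).hexdigest()

    def check_license(key: str) -> bool:
        query = urlencode({"key": key, "machine": machine_fingerprint()})
        # license.example.com is a placeholder, not a real service.
        with urlopen(f"https://license.example.com/check?{query}") as resp:
            return resp.read().strip() == b"OK"

    if not check_license("ABCD-1234"):
        sys.exit("This copy is not licensed for this machine.")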

Multiple OS vs single OS phone and server development

A few friends and I run a little app-creation business in our spare time. Our current development environment is three MacBook laptops running just Snow Leopard, four Asus laptops dual-booting Windows 7 and Ubuntu, and a rubbish test server box that is similar to our VPS.
Our setup currently works okay-ish, with a few minor issues, like not knowing which version of the software we are working on (caused by continually switching operating systems) and lost productivity from being too lazy to switch the laptop we are working on, having to unplug it and plug in the new one, including the second monitor, keyboard and mouse.
Our system is far from professional and we are looking to upgrade. This is because we wish to increase our staff and we have some cash saved up, so why not. The phone platforms we are targeting are iOS, Android and Windows Phone 7. Our servers are written in PHP and serve JSON. So my question is basically: how do you guys manage with all these multiple operating systems?
iOS requires Mac OS X
Android can use all
the PHP/JSON server work requires Linux/Mac OS X
Windows Phone 7 requires Windows
Do you guys use some form of virtualization?
Or do you try those libraries that compile to each phone's native binary, such as Unity?
There are many many different ways to solve this and you may have to find what works best for you. Here are some suggestions though.
Using the macbooks, set up bootcamp so you can dual boot to OSX or Windows. This will mean you can use the Macbook for all development without having to bother swapping monitors, etc. Doing this will leave your other Windows laptops spare which you can use for the next suggestion....
Set up a central repository for your source code. Use one of the servers you have, or re-purpose one of the other machines and install a decent source-code repository system. CVS, Git, etc. There's plenty of resources about these. This will allow you to keep your code in one place so it won't matter which machine you are working on - you can always get the most recent code. Plus it will help you track your code changes. Oh, and don't forget that having it all in one place will be much easier for backups (you do do backups, don't you....?)
Don't fall into the trap of upgrading hardware just because you have some money floating around. You may just need to use the hardware you have more wisely. You mention what you have is "far from professional". You don't need the latest, greatest hardware and software to do development. I've done iOS development on a 4-year-old MacBook Pro, used an 8-year-old PC as a server for web and database, and still use Windows XP every day.
Depending on how many of you there are, you may not have enough Macbooks. If this is the case, then perhaps you have some who are specialists in the server-side stuff (ie they don't do iOS development and so don't need the Macs).
Virtualisation - using VMware or similar tools is an excellent way of getting more from what you have. For example, you could have a couple of test servers that aren't very heavily utilised. Using virtualisation, you could put both of these servers onto one machine. This will then free up the other box for something else. It also makes it very easy to backup (you are doing backups, aren't you...?) an entire server and recover it back to the exact state in the case of a hardware failure. You can also very easily create a server tailored for each client/project and switch between them quickly without having to maintain lots of other stuff (think if you had a web server configured for one project and you then work on another project that needs a different configuration and you change it, then you need to change it back, etc).
EDIT: Update in response to comments.
If using Bootcamp isn't an option, then consider running a Windows and/or Linux virtual machine inside OSX. Depending on the spec of your macbooks and as long as you don't need very low-level hardware access on Windows, then this would probably work as well and not need to switch in and out using BootCamp. Same goes for the Linux virtual machine. I'm a big fan of using Virtual Machines on development environments as it allows you to copy around and switch in and out servers without having to rely on physical hardware connections. And you can very easily return to a known state with the server configuration and data.
With regard to source control "in the cloud": I'm not a fan of this approach. It's my source code and I want to control it. I don't want to be reliant on some other company and I don't want to hope I've read some Terms and Conditions correctly and I'm not handing over my code to some other company to do what they want with it. Aside from that, what happens if your internet access goes down and you absolutely must get some coding done for a customer? If you are relying on another service, then you are risking problems. Yes, it has advantages for multi-site, they do the backups for you, etc. But it really isn't a problem unless you have lots of developers spread all across the world. And even then it isn't necessarily a problem. You could always do a backup of your code to some package file, encrypt it and then throw that up in the cloud for backup storage (as well as burning it to disc, writing to another external hard drive and storing them off-site). But I certainly wouldn't want to rely on an external source control unless I was doing open source stuff.
There's sooooo much more to these subjects and there are many other subjects you will probably encounter along the way of building up your business.
One of the most important things about software development is to keep it organised and to get that organisation part done at the start. If you are just each keeping a copy of the code on local drives, then changing code and hoping that you haven't changed the same file as someone else, then this will just lead to pain. The source control aspect is key from the start.
Oh, and did I mention backups?
I would also consider the IDE you're using as part of the equation. For instance, a good cross-platform IDE (like Qt 4+) and a centralised code repository on a server will go a long way towards mitigating your working problems. Eclipse, NetBeans and Qt 4+ are cross-platform and will work with all three systems. Virtualisation, as you mentioned, is an option, but first I would decide on the IDE platforms to use before worrying about your dev infrastructure setup.
Bro, I'm not a pro, but you have two options:
Either multi-boot your system by installing multiple OSes... (obviously, you need a separate MacBook)
Or use Virtual Machines like VMWare etc.
Personally, I haven't heard much about libraries like Unity etc.
Go for dedicated systems & not just libraries.

How can I deploy an Adobe Air application including the runtime in a single file executable as a portable app?

I am using 7zsfx [Link] to package Java apps together with a JRE as a single file executable. Is it possible to do something similar with Adobe AIR apps if I get a license from Adobe to distribute the runtime?
Also, does anybody have any alternative ideas for deploying Adobe Air apps with an embedded runtime? (Reason: Target computers may not have the Air runtime installed, and target users may not have permissions to download and install the runtime.)
VMWare ThinApp (formerly Thinstall) and MoleBox are viable solutions IF they support your distribution format. They should so long as it is a PE - the Windows standard EXE/DLL form. Then they will allow installation of the runtime EVEN IF THE USER DOES NOT HAVE RIGHTS TO INSTALL IT.
I authored the first of this type product many years ago, called PEBundle. I didn't really follow up on it much, and discontinued it later. The idea is to bundle a bunch of files together, then emulate them in memory in a virtual filesystem and virtual registry (for Windows cases). So, it appears the user has full administrative rights as far as your app is concerned and the AIR runtime installs and runs great. Meanwhile, no real changes to the system occur.
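The bundling half of that idea is easy to play with in any language; the hard part is the API hooking that makes the bundle look like a real filesystem to the application. As a toy analogy only (assuming nothing about how PEBundle or ThinApp actually implement it), here is a Python sketch of reading a bundled file from an in-memory archive without ever unpacking it to disk:

    # Toy analogy of a bundle read from memory, never touching disk.
    import io, zipfile

    # Build a bundle in memory (a real product ships this as one file).
    buf = io.BytesIO()
    with zipfile.ZipFile(buf, "w") as bundle:
        bundle.writestr("config/settings.xml", "<settings/>")

    # Later, the "runtime" serves file requests straight from the bundle.
    with zipfile.ZipFile(io.BytesIO(buf.getvalue())) as bundle:
        data = bundle.read("config/settings.xml")
        print(data)  # b'<settings/>'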
However, if you work out an affordable solution, licensing may be better NOT asked about. I don't see why Adobe would really care if you redistribute their runtime, but it might be better not to ask. Asking might complicate matters and get lawyers involved, when really management doesn't care. I mean, if you were in their shoes, would you care if your freely distributed runtime was redistributed by someone using it? Nah, you'd just be glad they are using AIR. If you are some lawyer, you might care because of some technicality, though.

PLC Version Control

I need to come up with a CM process for PLC code.
Currently, the system is developed using RSLogix 5000. The build product is a monolithic file that can be loaded onto a PLC for execution and edited directly in the development environment. With multiple developers, this has become a problem: they're stepping on each other's changes.
As an analogy, it's as if, when doing Java development, the only way to edit and save the source would be to load a *.jar file into your IDE, make the change, and then save it back to the jar file. This is less than ideal.
How can I coordinate changes between multiple developers working with PLC's?
If we are talking about one big binary file, then a VCS (centralized or decentralized) is not the best tool for the job.
An external reference store (a shared disk, for instance) where a batch job copies and labels the current PLC state is better.
See "Tracking Software History"
To avert discontinuities in the historical record of revisions, old versions of programs must be stored.
“We take it a step further, though. Using our MDT AutoSave, we actually go out and interrogate the equipment. Overnight or at whatever frequency is specified, the software reads the programs in the PLCs and then compares that information to the last known program. The version-control software will copy the new program and store it and [then] compare it to the last one.”
Launching version control is fairly simple. Required is software installation and then hardware configuration. “You would need a server and a couple of weeks of engineering and you’re good to go,” Perysyn says. However, his company uses a “shrink-wrap approach” that involves installing the software and then customization by users filling in the blanks.
That being said, when you have multiple changes from multiple developers, you need an integration environment where a first delivery can be done and validated, before pushing it to the actual server.
See also this post.
I use Unity Pro, so this may not apply for other brands.
Unity can export an "archive" file which is XML which describes the PLC program and IO setup in its entirety. After commissioning changes, I create an export and check it in to my local Git repo. This gets me an annotated history of changes, but no visual comparison. I can always use UnityDiff for comparison.
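If you want to automate the check-in step, a small script can stage whatever the IDE exported and commit it with a timestamp. This sketch assumes the exports land in one known folder, that the folder is already a Git repository, and that plain git is on the PATH:

    # Sketch: commit freshly exported PLC archive files to a local Git repo.
    import subprocess
    from datetime import datetime
    from pathlib import Path

    EXPORT_DIR = Path(r"C:\plc\exports")   # wherever the IDE writes exports

    def commit_exports():
        subprocess.run(["git", "add", "."], cwd=EXPORT_DIR, check=True)
        # "git diff --cached --quiet" exits non-zero when changes are staged.
        staged = subprocess.run(["git", "diff", "--cached", "--quiet"],
                                cwd=EXPORT_DIR)
        if staged.returncode != 0:
            message = f"PLC export {datetime.now():%Y-%m-%d %H:%M}"
            subprocess.run(["git", "commit", "-m", message],
                           cwd=EXPORT_DIR, check=True)

    commit_exports()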
Check out http://www.mdtsoft.com/ also
You need specialized versioning system for PLCs like VersionDog.
From the manufacturer:
"Special support with Smart Compares for SIMATIC S5, SIMATIC S7,
SIMATIC PCS 7, WinCC, WinCC flexible, InTouch, CoDeSys, TwinCAT,
Phoenix PC WORX, RSLogix, Schneider Modsoft, Schneider Concept,
Schneider Unity, SINUMERIK 840D, Bosch IndraWorks and more. Also robot
programs from ABB and Kuka and office related data formats like
Microsoft Word, Microsoft Excel and Adobe PDF are perfectly supported
by versiondog.
Update: Here is a screenshot showing ladder version compare. I guess that's what most PLC folks are interested in. We also use it to schedule an e-mail report if the PLC's offline and online application versions aren't a match, as an alarm that something has been changed in the PLC but not put into the version-control server.
About RSLogix5000 specifically, I have seen developers use an emulated PLC and make their changes online. The final product once developed is then put together with all the comments (as they are not contained in the PLC) and then commissioned. There are issues with changes that cannot be done online, such as AOIs. There are tools in place to stop two people editing the same logic online at once and to take ownership of sections. Backups can be done in the form of uploads, but there isn't any way to track changes.
It is a messy problem, messier still when you are maintaining a system, as you want an .ACD that you can go online with; unless you are somehow doing a diff with the RSLogix compare tool, you just see unreadable machine code like "+|Éû³´¬ÙÆW×晵‚>Ù,"
The most common revision control I have seen (sadly) is just saving the latest file, then taking a copy and adding the current date to the file name, like the recommended control.com post described.
RSLogix5000 has always prohibited multiple users from opening and editing the same .ACD simultaneously. However, if multiple users have identical .ACD files, open them, and all make connections to the same target controller, they can each edit on the controller simultaneously, but only if they are working on different routines. Others' edits appear automatically if they look at another programmer's routine.
Note that working online like this is usually done with the PLC running, sometimes even with the target system (some kind of machine) operating. This kind of arrangement exists for the purpose of completing work faster, or in some cases because the system is huge. No one develops like this; it is really a debug tool and impractical for significant changes.
If one programmer finishes, and another is not done, the unfinished work of the other will be saved to the first programmer's .ACD when they save. Whoever saves last will have everyone's work.
Like others have mentioned in this thread, using file date is fairly reasonable. Some companies use a version control variable that is usually displayed on a connected HMI. Other companies use a separate document that documents who and what changes. Sometimes version notes are placed in a lengthy rung comment in the main routine.
My company uses a separate change log, and dated archive copies are maintained. Multiple programmers are only used in the most extreme cases. Someone is always designated to maintain the offline file integrity, usually the person who will be working the longest, or the project manager.
It is important to note that rung comments are not carried from one user to another before RSLogix5000 v21 because previous versions didn't store comments on the controller.
All this said, you might be trying to manage offline development. I haven't seen any sophisticated methods for this. Usually programmers write the needed routines separately, and a project manager will assemble them into a single project. The cleanest approach I've seen is where a project manager will create an architecture with global functionality, and assign routine work to others, giving them a copy of the .ACD to work with. They return the .ACD with changes, and the project manager copies and pastes their routines into the "master" project.
This is a very good question and it really depends on what you want it to do.
If you are only using Rockwell equipment it might be helpful to look at their solution; I think it's called FactoryTalk AssetCentre.
Currently I am looking into using Bazaar from Canonical.
One thing that VonC pointed out is that a piece of software that can interrogate the PLC is a definite plus - not a must, in my opinion, but it sure as hell helps.
Am I reading your question properly - you have multiple developers working on the same PLC code at the same time? It's a scary thought, but I know it sometimes needs to happen. Siemens PLCs are a bit easier to program with multiple developers, but I would assign one person to consolidate and test all the changes before committing to the PLC. Any VCS will let you create branches for every developer, but how you would get them to consolidate their changes is the million-dollar question.
Bart.
A simple thing to do would be to do a text diff on the .l5k files so you can easily see whether a developer has been messing with part of the file that is outside of their scope.
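Since .l5k exports are plain text, even the diffing built into Python's standard library produces a readable change report, which is enough to spot out-of-scope edits (the file names here are invented):

    # Sketch: text diff of two RSLogix .L5K export files.
    import difflib

    with open("project_old.L5K") as f:
        old = f.readlines()
    with open("project_new.L5K") as f:
        new = f.readlines()

    for line in difflib.unified_diff(old, new,
                                     fromfile="project_old.L5K",
                                     tofile="project_new.L5K"):
        print(line, end="")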
I saw this question just now from a link at stack exchange: Are There Realistic/Useful Solutions for Source Control for Ladder Logic Programs. Rather than have a link only answer, I'll dupe my answer here:
There is actually a canned solution - from GE-IP of all places. Check out Proficy Change Management. This product does version control from a PLC control systems point of view, rather than a pure version control of files point of view - it works as a layer sitting on top of a VCS (the scary part is that originally this VCS was Visual SourceSafe) and handles rights management, reporting and checkout/checkin.
While the product is from GE-IP, it is designed to support a variety of PLC and HMI systems out of the box.
Full disclosure: I used to work for a company selling and installing PCM (but that was 7 years ago). So if you ask me what it was like back then, I'm likely to tell you where it all went wrong!
In my company we just started a trial with Copia.io
Check it out. Our first tests look very promising!
It brings branching, merging, ladder diff etc. for multiple PLC platforms (Rockwell, Siemens, Codesys).
PS: I work for a company that builds machines; we were looking for versiondog-like solutions with a bit more power in collaboration and diffing capabilities. I have used tools like Mercurial, Git and Tortoise in past companies (not for PLCs, though).

Deploying .EXE to network drive? [closed]

What are the problems with deploying an .EXE to a network drive and having users execute the .EXE over the network?
The advantage is that upgrades only need to be made to the one location. What are the disadvantages?
I would instead consider creating an MSI (http://en.wikipedia.org/wiki/Windows_Installer) file for your application and a Group Policy to facilitate distribution throughout your company (http://support.microsoft.com/kb/816102).
There are a number of freeware MSI tools. Good ones that come to mind are http://www.advancedinstaller.com/ and http://wix.codeplex.com/
The EXE is one thing, but you also need to consider any DLLs and other shared resources that may be associated with the app.
Some DLLs may be shipped with the EXE - you'd have to put those on the remote drive with the EXE, which would cause additional network traffic if it needed to use them.
Other DLLs may be part of Windows, but there could be versioning issues here if your workstations have different versions of Windows, or even different service packs or patches, while they're all running a common version of the app.
And what about licensing? Does the app's license actually allow you to install it on a network drive - many software companies are very specific about this sort of thing, so you need to really be careful if you don't want to get caught out.
In short, it sounds like a good idea to get a quick win for your deployment management, but it probably causes far more issues than it solves.
If you really want to go down this path, you maybe should consider alternatives like remote desktop (eg Citrix or Terminal Server) or something like that - there are much better ways of achieving your goals than just sticking everything on a network drive.
One problem is file locking. In a Windows environment, if a user executes the application directly from a network share, the application's files are locked. This prevents the application from being updated with a newer version if someone has left the application open.
You can get around this by disabling the network share before updating the app and then enabling it again.
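A common workaround exploits the fact that Windows usually locks a running EXE against deletion and overwriting but not against renaming: rename the old binary aside, copy the new one into place, and clean up on a later update. A sketch with invented paths (test this against your own share before relying on it):

    # Sketch: update an in-use EXE on a share by renaming it aside first.
    import os, shutil, time

    DEPLOYED = r"\\server\apps\ExampleApp.exe"   # invented UNC path
    NEW_BUILD = r"C:\build\ExampleApp.exe"

    # Rename the (possibly locked) old binary aside; renaming a running
    # EXE is usually allowed even though overwriting it is not.
    aside = f"{DEPLOYED}.{int(time.time())}.old"
    os.replace(DEPLOYED, aside)

    # Drop the new build into place under the original name.
    shutil.copy2(NEW_BUILD, DEPLOYED)

    # Stale *.old copies can be deleted once nobody is running them.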
If you write your application using an Object Capability Security model, as defined in Mark S. Miller's Ph.D. thesis, Robust Composition: Towards a Unified Approach to Access Control and Concurrency Control, then you will not have any security drawbacks.
On the other hand, the "disadvantage" is that you must now manage access control via the object graph. The application should only have access to whatever permissions you give it. As some have already mentioned, Windows has a basic protection policy which locks the application files and thus prevents anyone from modifying the EXE until the application instance(s) is closed.
Really, the key issue here is you have to ask yourself what authority the program and its component parts should have. If it requires local user permission, then you will either have to design around that or give the program permission.
Understanding the implications of this, and doing it well, is not an easy task.
For our program we decided against a shared exe. We thought it would be harder to support (IT needs to kill user sessions to unlock files before updates, users won't know where the exe is on the network, share/network file permissions need to be modified by IT, etc.) and that we should emulate the behavior of other programs where possible (client software is normally installed on the clients).
The main disadvantage would be the network drive being unavailable.
Then the language the EXE is written in (which you didn't specify) matters, as .NET has some security issues when running from a network drive.
It depends on what the application does. My application would be a problematic over-the-network deployment because the configuration files it uses are all in the same folder as the EXE, or in a subfolder. If every user runs off of the network, they could potentially modify the configuration files and screw things up for everyone else.
Thankfully, my app is only going to be deployed on separate workstations. :)
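The usual defence against that, incidentally, is to keep per-user settings out of the EXE's folder entirely and write them to each user's profile instead. A sketch using a made-up app name and the per-user %APPDATA% folder:

    # Sketch: per-user settings under %APPDATA% instead of the EXE folder.
    import configparser, os

    cfg_dir = os.path.join(os.environ["APPDATA"], "ExampleApp")
    os.makedirs(cfg_dir, exist_ok=True)

    config = configparser.ConfigParser()
    config["ui"] = {"theme": "dark"}           # whatever the app stores
    with open(os.path.join(cfg_dir, "settings.ini"), "w") as f:
        config.write(f)

Each user then gets their own copy of the settings, and the shared EXE's folder stays read-only.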
They might not have all the files your app needs installed. If they don't, you'll need to create a setup. If they do and it works and everyone's drives are mapped correctly, you should be fine.
I run a vendor's app like this at work. They didn't design for it, but it works without an issue. I have all the shortcuts pointing to the UNC path. This particular app doesn't use files in the exe directory, so file locking isn't an issue. It's also hooked up to SQL Server for the data, so the data store isn't an issue either. (It would be a major problem if the app used a local SQLite, Access, or some other file-based DB.)
If your app is a .NET app, this WILL NOT work without some major modifications to each machine's security settings, which is probably a bad idea anyway. If you're talking about a .NET app, you should use ClickOnce. I use it for a few apps at work as well, and it's great, and easy to use.
The problem is there isn't a definitive answer to your question, just a bunch of "it depends" qualifications. The big issues, AFAIK, are using local files for data storage, be they text files or databases. It is awesome for updates, though, which is why the app mentioned above is run like this.
This is perfectly doable. Be sure to set the "Run from CD-ROM" (I think?) flag in the Visual Studio settings when compiling -- this prevents the image from being backed directly by the binary, so you can upgrade it while people are running it. I am not running Windows at the moment, so I can't check, but you may be able to set this flag for DLLs, too.
One problem with doing this is that if your program associates itself with files, when the network changes and computers are renamed everybody's PC starts to run like a dog. Explorer has a tendency to query these things at funny times.
Another more serious problem is that if somebody accidentally deploys a broken version, it's not just the early adopters who get stuffed!
For an easy life, personally I recommend XCOPY deployment...
For .NET applications, we have observed BadImageFormatException, which we have come to believe is from network glitches (or computers losing network connectivity at key moments, for example over WiFi) while reading the EXE or DLL files.
IMHO this is a really bad design decision. We have a third party application in our company which is designed exactly like this.
In order for the program to run properly it requires full sharing for that folder; in this case the worst part was that the program had the freaking DATABASE in the same shared folder (yeah, I was shocked too when I found out)!!! It didn't take long till someone wiped every file that was not in use from that folder, including the database of course :)
I really recommend a client-server approach, even if you have to buy/build a smart installer with auto-update features to overcome deployment issues.