Can I control the location of .NET user settings to avoid losing settings on application upgrade? - .net-2.0

I'm trying to customize the location of the user.config file. Currently it's stored with a hash and version number
%AppData%\[CompanyName]\[ExeName]_Url_[some_hash]\[Version]\
I want it to be agnostic to the version of the application
%AppData%\[CompanyName]\[ProductName]\
Can this be done and how? What are the implications? Will the user lose their settings from the previous version after upgrading?

I wanted to add this quoted text as a reference for when I have this problem in the future. Supposedly you can instruct the ApplicationSettings infrastructure to copy settings from a previous version by calling Upgrade:
Properties.Settings.Default.Upgrade();
From Client Settings FAQ blog post: (archive)
Q: Why is there a version number in the user.config path? If I deploy a new version of my application, won't the user lose all the settings saved by the previous version?
A: There are a couple of reasons why the user.config path is version-sensitive.
(1) To support side-by-side deployment of different versions of an application (you can do this with ClickOnce, for example). It is possible for different versions of the application to have different settings saved out.
(2) When you upgrade an application, the settings class may have been altered and may not be compatible with what's saved out, which can lead to problems.
However, we have made it easy to upgrade settings from a previous version of the application to the latest. Simply call ApplicationSettingsBase.Upgrade() and it will retrieve settings from the previous version that match the current version of the class and store them out in the current version's user.config file. You also have the option of overriding this behavior either in your settings class or in your provider implementation.
Q: Okay, but how do I know when to call Upgrade?
A: Good question. In ClickOnce, when you install a new version of your application, ApplicationSettingsBase will detect it and automatically upgrade settings for you at the point settings are loaded. In non-ClickOnce cases, there is no automatic upgrade - you have to call Upgrade yourself. Here is one idea for determining when to call Upgrade:
Have a boolean setting called CallUpgrade and give it a default value of true. When your app starts up, you can do something like:
if (Properties.Settings.Default.CallUpgrade)
{
    Properties.Settings.Default.Upgrade();
    Properties.Settings.Default.CallUpgrade = false;
}
This will ensure that Upgrade() is called only the first time the application runs after a new version is deployed.
I don't believe for a second that it could actually work - there's no way Microsoft would provide this ability - but the method is there just the same.
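For what it's worth, a variant of the flag trick compares the last-upgraded version against the running assembly version, so nothing has to be reset between releases. This is only a sketch; it assumes you add a user-scoped string setting named LastUpgradedVersion with an empty default value:

string current = System.Reflection.Assembly.GetExecutingAssembly().GetName().Version.ToString();
if (Properties.Settings.Default.LastUpgradedVersion != current)
{
    Properties.Settings.Default.Upgrade();                       // pull values from the previous version's user.config
    Properties.Settings.Default.LastUpgradedVersion = current;   // remember that this version has already upgraded
    Properties.Settings.Default.Save();
}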

To answer the first question, you technically can put the file wherever you want; however, you will have to code it yourself, as the default location is the first of your two examples. (link to how to do it yourself)
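If you do go down the "code it yourself" road, the usual mechanism is a custom SettingsProvider attached to the generated Settings class with [SettingsProvider(typeof(PortableSettingsProvider))]. The following is only a rough sketch under assumed names (PortableSettingsProvider, MyCompany, MyProduct); it ignores roaming profiles, type conversion and error handling, and is not a drop-in replacement for the default provider:

using System;
using System.Collections.Specialized;
using System.Configuration;
using System.IO;
using System.Xml;

public class PortableSettingsProvider : SettingsProvider
{
    public override string ApplicationName
    {
        get { return "MyProduct"; }   // hypothetical product name
        set { }
    }

    public override void Initialize(string name, NameValueCollection config)
    {
        if (string.IsNullOrEmpty(name)) name = "PortableSettingsProvider";
        base.Initialize(name, config);
    }

    // %AppData%\MyCompany\MyProduct\user.config - no hash, no version number
    private static string SettingsPath
    {
        get
        {
            string dir = Path.Combine(
                Environment.GetFolderPath(Environment.SpecialFolder.ApplicationData),
                @"MyCompany\MyProduct");
            Directory.CreateDirectory(dir);
            return Path.Combine(dir, "user.config");
        }
    }

    public override SettingsPropertyValueCollection GetPropertyValues(
        SettingsContext context, SettingsPropertyCollection properties)
    {
        XmlDocument doc = new XmlDocument();
        if (File.Exists(SettingsPath))
            doc.Load(SettingsPath);

        SettingsPropertyValueCollection values = new SettingsPropertyValueCollection();
        foreach (SettingsProperty property in properties)
        {
            SettingsPropertyValue value = new SettingsPropertyValue(property);
            XmlNode node = doc.SelectSingleNode("/settings/" + property.Name);
            if (node != null)
                value.SerializedValue = node.InnerText;   // stored value wins
            else
                value.SerializedValue = property.DefaultValue;
            values.Add(value);
        }
        return values;
    }

    public override void SetPropertyValues(
        SettingsContext context, SettingsPropertyValueCollection values)
    {
        XmlDocument doc = new XmlDocument();
        XmlElement root = doc.CreateElement("settings");
        doc.AppendChild(root);

        foreach (SettingsPropertyValue value in values)
        {
            XmlElement node = doc.CreateElement(value.Name);
            node.InnerText = Convert.ToString(value.SerializedValue);
            root.AppendChild(node);
        }
        doc.Save(SettingsPath);
    }
}

Note that the Upgrade() behaviour quoted above comes from the default LocalFileSettingsProvider; with a custom provider like this you handle version migration yourself (or it becomes unnecessary, since the path no longer contains the version).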
As for the second question, it depends on how you deploy the application. If you deploy via an .msi, then there are two GUIDs in the properties of the setup project (that the MSI is built from): the 'upgrade code' and the 'product code'. These determine how the MSI can be installed, and whether it upgrades, overwrites, or installs beside any other version of the same application.
For instance, if you have two versions of your software and they have different 'upgrade' codes, then to Windows they are completely different pieces of software regardless of the name. However, if the 'upgrade' code is the same but the 'product' code is different, then when you try to install the second MSI it will ask you if you want to upgrade, at which time it is supposed to copy the values from the old config to a new config. If both values are the same and the version number didn't change, the new config will be in the same location as the old config, and it won't have to do anything. MSDN Documentation
ClickOnce is a little bit different, because it's based more on the ClickOnce version number and URL path; however, I have found that as long as you continue to 'Publish' to the same location, the new version of the application will continue to use the existing config. (link to how ClickOnce handles updates)
I also know there is a way to manually merge configs during the install of the msi using custom install scripts, but I don't remember the exact steps to do it... (see this link for how to do it with a web.config)

The user.config file is stored at
<C:\Documents and Settings>\<username>\[Local Settings\]Application Data\<companyname>\<appdomainname>_<eid>_<hash>\<version>
<C:\Documents and Settings> is the user data directory, either non-roaming (Local Settings above) or roaming.
<username> is the user name.
<companyname> is the CompanyNameAttribute value, if available. Otherwise, ignore this element.
<appdomainname> is the AppDomain.CurrentDomain.FriendlyName. This usually defaults to the .exe name.
<eid> is the URL, StrongName, or Path, based on the evidence available to hash.
<hash> is a SHA1 hash of evidence gathered from the CurrentDomain, in the following order of preference:
1. StrongName
2. URL
If neither of these is available, use the .exe path.
<version> is the AssemblyInfo's AssemblyVersionAttribute setting.
The full description is here: http://msdn.microsoft.com/en-us/library/ms379611.aspx
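If you just need to discover where the runtime actually put the file for the current user and version (rather than reconstructing the hash by hand), you can ask the configuration system directly. A small sketch (it assumes a console app and needs a project reference to System.Configuration.dll):

using System;
using System.Configuration;

class ShowUserConfigPath
{
    static void Main()
    {
        // Open the per-user, per-version configuration and print where it lives on disk.
        Configuration config = ConfigurationManager.OpenExeConfiguration(
            ConfigurationUserLevel.PerUserRoamingAndLocal);
        Console.WriteLine(config.FilePath);
    }
}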

(I'd add this as a comment to #Amr's answer, but I don't have enough rep to do that yet.)
The info in the MSDN article is very clear and appears to still apply. However, it fails to mention that the SHA1 hash is written out base-32 encoded, rather than the more typical base 16.
I believe the algorithm being used is implemented in ToBase32StringSuitableForDirName, which can be found here in the Microsoft Reference Source.


How to reference a specific DLL for functionality in said DLL

Good day,
I have an application that I developed that transfers files between two machines ("site" and "server"). This application was set to target .NET 3.5. Furthermore, I am using Renci.SshNet to handle the connections between the machines and the transferring of said files.
The issue that I am facing currently, though, is that about 70% of the "site" machines do not have a standard .NET installation and are also quite old; thus these machines do not support all the required functionality, as the external DLL makes calls to System.Threading.WaitHandle.WaitOne() and System.Threading.WaitHandle.WaitAny(WaitHandle[], Int32) and other overloads of these methods.
The workaround that I have for this is to install netfx20SP2 or netfx30SP1, yet I am not in a position to perform this update on all machines, as they are scattered across the country and have data limitations (bandwidth and cap).
What I possibly want to do is to embed the System.Threading DLL that I have downloaded so that the application uses those classes instead, or alternatively just point the application to use the said DLL.
Is this at all possible, or do you have to load the DLL into the GAC? And also, will it be possible to "run" this higher version of System.Threading in the application while the system itself is on a lower framework version? Something is telling me that the best bet will be to actually run the service pack installation to avoid unnecessary coding, but I'm not sure exactly how to approach this.
Thank you in advance for any assistance / suggestions,
JD
To allow the execution of an application that targets, say, .NET 4 while the machine itself only has, say, .NET 3.5 installed, you can redirect Windows to check the local (executing) directory for DLLs that should provide the required symbols, instead of the ones loaded by default upon execution (the default being the NetFx installed on the machine - I believe the highest available version of the framework that is lower than or equal to the targeted framework).
The file's contents (myApp.exe.local) are ignored. It is just there to tell Windows to look in that folder first for the applicable DLLs; if they are not found there, the system falls back to loading them from the NetFx directory.
Read more at Microsoft Dev Center - Docs (the link is attached to the following paragraph, which is a copy-paste of a section of that document).
To use DLL redirection, create a redirection file for your application. The redirection file must be named as follows: App_name.local. For example, if the application name is Editor.exe, the redirection file should be named Editor.exe.local. You must install the .local file in the application directory. You must also install the DLLs in the application directory.
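If you instead go the route the question mentions and embed the downloaded System.Threading assembly in your executable, the usual wiring is an AppDomain.AssemblyResolve handler. This is only a sketch under assumptions: the resource name "MyApp.System.Threading.dll" is made up, and the handler only helps for assemblies the loader fails to locate on its own; it cannot replace types that are baked into mscorlib.

using System;
using System.IO;
using System.Reflection;

static class EmbeddedAssemblyLoader
{
    // Call this once at the very start of Main, before any type from the
    // embedded assembly is referenced.
    public static void Install()
    {
        AppDomain.CurrentDomain.AssemblyResolve += delegate(object sender, ResolveEventArgs args)
        {
            if (!args.Name.StartsWith("System.Threading"))
                return null;

            // "MyApp.System.Threading.dll" is a hypothetical embedded-resource name.
            using (Stream s = Assembly.GetExecutingAssembly()
                .GetManifestResourceStream("MyApp.System.Threading.dll"))
            {
                if (s == null)
                    return null;
                byte[] raw = new byte[s.Length];
                s.Read(raw, 0, raw.Length);
                return Assembly.Load(raw);   // load the embedded copy into the current AppDomain
            }
        };
    }
}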

SourceTree 2 - How to change install directory on Windows

By default, SourceTree 2.x installs silently and chooses to sit in the C:\Users\<user>\AppData\Local\SourceTree directory.
Forcefully moving it does not break it, but the updater does not work correctly: it still looks for the version in the old location, and it tries to launch various tools from the original location.
I don't want it to sit in C:\* but in E:\; I don't want its settings to sit somewhere deep in C:\*; I want it to be compact and understandable. But it seems that the new version (2.x) does not allow us to change anything.
What needs to be done to:
install it in specific directory;
move from one location to another without breaking its access to its tools and updater;
change the location of options\settings?
Use the Enterprise Installer
https://www.sourcetreeapp.com/enterprise
This lets you pick the installation directory.
Still not possible.
An issue has been created in the bug tracker, but the priority is set to low.
See: https://jira.atlassian.com/browse/SRCTREEWIN-7168
Currently the only solution to this is to install a version prior to 2.0. You can find the link to the latest 1.9 version here:
https://www.sourcetreeapp.com/update/windowsupdates.txt

How can I deploy an upgrade of my product which contains a file with version lower than the already deployed file?

I have an MSI component which deploys a file, MyFile.dll. I have a test machine on which my product already deployed MyFile.dll with version 09.99.99.99.
Now I'm writing a major upgrade which deploys a new MyFile.dll with version 05.23.76.123. After execution on the test machine, MyFile.dll is removed... I need to run a change or repair to get it correctly deployed.
How can I force the deployment of MyFile.dll regardless of the version number stamped into it?
PS: This is happening on our test machines only. The product we delivered to our users has files with version numbers consistent with release history.
There are several ways to do this in Windows Installer, but they all have their complications. IMO I would just rebuild the same source code as the old DLL but with a newer, higher version, and keep it simple.
This is perfectly possible. As said here, you may specify the REINSTALLMODE property and set it to "amus" or "dmus" depending on whether you want to always overwrite files or simply overwrite files with different version:
<Wix ...>
  <Product ...>
    <Property Id="REINSTALLMODE" Value="amus" />
Note that you'll get this warning when compiling your installer though:
warning LGHT1076: ICE40: REINSTALLMODE is defined in the Property table. This may cause difficulties.
Downgrading a file isn't really straightforward and has issues. As pointed out earlier, you can change the component GUIDs and get this to work. However, it really depends on where your RemoveExistingProducts action is sequenced. If it's sequenced at a point where the older product is removed before the newer product is installed, then it might work.
There is not really a straightforward and documented way; all the available options are just hacks.
Is this just for your test environment?
If yes, then you could use REINSTALLMODE="amus" in the Property table and achieve what you are looking for.
However, this is just for your testing and is not something to suggest to your end users.
Regards,
Kiran Hegde

How to customize Powerbuilder Specify Version Information

I do a lot of application deployments, and it is very time-consuming and uncomfortable to overwrite the default PowerBuilder version information every time, so usually I do not bother with it. But I thought there could be a way to give this "default" information to PowerBuilder somehow, so that after the default values are filled in I would only need to overwrite the exact version number. Does anybody have an idea? Thanks in advance!
Have you looked at VersionEdit from ECrane?
We actually use PowerGen from them as well for production PowerBuilder compiles and the same capability is in there.
http://ecrane.com/
A free tool is Stampver; see http://www.rgagnon.com/pbdetails/pb-0120.html for a short tutorial.
I recommend you build your application with PBORCA (my preference) or OrcaScript. Both support setting the executable properties. PBORCA can read the properties from an INI file.
We use a small Java program to get the next version number from a database and update the INI file. If you're doing any more than clicking a button to build and deploy your app, you're working too hard. We have Jenkins jobs that watch the SVN repository and build our projects automatically.
This is what one of the INI files for PBORCA looks like before the version is set:
[exeproperties]
companyname=BogoSoft
productname=BogoMojo
description=BogoMojo is BogoMojo
copyright=Reserved Rights
fileversion=5.01
fileversionnum=2
;productversion=
productversionnum=0002
;manifestinfo=
[config]
targetname=bogo.pbt
exename=bogomojo.exe
iconname=images\\bogo1.ico
pbrname=bogo.pbr
The script for PBORCA has:
profile string %%WORKSPACE%%\build.ini, config, targetname
profile string %%WORKSPACE%%\build.ini, config, exename
profile string %%WORKSPACE%%\build.ini, config, iconname
profile string %%WORKSPACE%%\build.ini, config, pbrname
profile exeinfo %%WORKSPACE%%\build.ini, exeproperties

Storing third-party framework/middleware into source control that needs to alter your compiler/IDE

I know there are posts that ask how one stores third-party libraries into source control (such as this and this). While those are great answers, I still can't find the answer to this:
How do you store third-party middleware/framework binaries that need to alter your compiler/IDE for the library to work properly? Note: for my needs, I don't need to store the middleware source; I only store header files / lib / JAR files so that they're ready to be linked.
Typically, you simply link libraries to your app, and you are good. But what about middleware / frameworks that need more?
Specific examples:
Qt moc pre-processor.
ZeroC Ice Slice (ice) compiler (similar to CORBA IDL preprocessor).
Basically these frameworks/middleware need to generate their own code before your application can link to it.
From the point of view of the developer, ideally he wants to just check out, and everything should be ready to go. But then my IDE/compiler will not be set up properly yet, so the compilation will fail.
What do you think?
Back up everything, including the setup of the IDE, operating system, etc. This is what I do:
1) Store all 3rd-party libraries in source control. I have a branch for all the libraries.
2) Back up the entire tool chain which was used to build. This includes every tool. Each tool is installed into the same directory on each developer's computer, so this makes it simple to set up a developer's machine remotely.
3) This is the most hardcore, but prepare one perfect, clean developer IDE setup, then make a VMware / VirtualPC image out of it. This will be useful when you can't seem to get the installers to work in the future.
I learned this lesson the painful way because I often have to wade through Visual Studio 6 code that doesn't build properly.
I think that a better solution is to make sure that the build is self-contained and downloads all necessary software for itself unless you tell it otherwise. This is the way Maven works, and it is really handy. The downside is that it sometimes needs to download an application server or similar, which is highly impractical, but at least the build succeeds and it becomes the new developer's responsibility to improve the build if needed.
This does of course not work well if your software needs attended installs, but I would try to avoid any such dependencies in any case. You can add alternative routes (e.g. the Ant script compiles the code if Eclipse hasn't done it yet). If this is not feasible, an alternative option is to fail with a clear indication of what went wrong (e.g. "CORBA_COMPILER_HOME not set, please set it and try again").
All that said, the most complete solution is of course to ship everything with your app (i.e. OS, IDE, the works), but I doubt that is applicable in the general case; how would you feel about that kind of requirement to build a software product? It also limits people who want to adapt your software to new platforms.
What about adding one step?
A NAnt script which is started with a bat file. The developer would only have to execute one .bat file; the bat file could start NAnt, and the NAnt script could be made to do anything you need.
This is actually a pretty subtle question. You're talking about how to manage features of the environment which are necessary in order to allow your build to proceed. In this case it's the top level of your code toolchain, but the problem can be generalised to include the entire toolchain, and even key aspects of the operating system.
In my place of work, we have various requirements of the underlying operating system before our code will successfully run. This includes machine-specific configurations as well as ensuring correct versions of system libraries and language runtimes are present. We've dealt with this by maintaining a standard generic build machine image which contains the toolchain requirements we need. We can push this out to a virgin machine and get a basic environment that contains the complete toolchain and any auxiliary programs.
We then use fsvs to version control any additional configuration, which can be layered on to specific groups of machines as needed.
Finally, we use custom scripts hooked in to our CI server (we use Hudson) to perform any pre-processing steps required for specific projects.
The main advantages of this approach for us are:
We can build and deploy developer and production machines very easily (and have IT handle this side of the problem).
We can easily replace failed machines.
We have a known environment for testing (we install everything to a simulated 'production server' before going live).
We (the software team) version control critical configuration details and any explicit pre-processing steps.
I would outsource the task of building the middleware to a specialized build server and only include the binary output as regular 3rd-party dependencies under source control.
Whether this strategy can be successfully applied depends on whether all developers need to be able to change middleware code and recompile it frequently. But this issue could also be solved via a continuous integration server like TeamCity that allows you to create private builds.
Your build process would look like the following:
Middleware repo containing middleware code
Build server, building middleware
Push middleware build output to project repository as 3rd party references
Update: This doesn't really answer how to modify the IDE. It's just a sort-of Maven replacement thingy for C++/Python/Java. You shouldn't need to modify the IDE to build stuff; if you do, you need a different IDE or a system that generates/modifies IDE files for you. (See CMake for a cross-platform C/C++ project file generator.)
I've written a system (first in Ant/BeanShell at two different places, then rewritten in Python at my current job) where third-party libraries are compiled separately (by someone), stored and shared via HTTP.
Somewhat hurried description follows:
Upon start, the build system looks through all modules in the repo and executes each module's setup target, which downloads the specific version of a third-party lib or app that the current code revision uses. These are then unzipped, PATH/INCLUDE etc. are extended (or, for small libs, the files are copied to a single directory for the current repo), and then Visual Studio is launched with /useenv.
Each module's file checks for the stuff that it needs; anything that needs installing and licensing, such as Visual Studio, Matlab or Maya, must already be on the local computer. If it's not there, the cmd file will fail with a nice error message. This way, you can also check that the correct version is in there.
So there are a number of directories on the local disk involved. %work% needs to be set using a global environment variable, preferably on a different disk than the system or the source checkout, at least if doing heavy C++.
%work% <- local store for all temp files, unzip, and for each working copy's temp files
%work%/_cache <- downloaded zips (2 gb)
%work%/_local <- local zips (for development or retrieved in other manners while travelling)
%work%/_unzip <- unzips of files in _cache (10 gb)
%work%/_content <- textures/3d models and other big files (synchronized manually; this is 5 gb today, not suitable for VC either)
%work%/D_trunk/ <- store for working copy checked out to d:/trunk
%work%/E_branches/v2 <- store for working copy checked out to e:/branches/v2
So, if trunk uses Boost 1.37 and branches/v2 uses 1.39, both boost-1.39 and boost-1.37 reside in /_cache/ (as zips) and /_unzip/ (as raw files).
When starting Visual Studio using bat files from d:/trunk/BuildSystem/Visual Studio.cmd, INCLUDE points to /_unzip/boost-1.37, while if running e:/branches/v2/BuildSystem/Visual Studio.cmd, INCLUDE points to /_unzip/boost-1.39.
In the repo, only a small set of bootstrap binaries needs to be stored (i.e. wget and 7z).
We currently download about 2 gb of packed data, which is unzipped to 10 gb (pdb files are huge!), so keeping this out of source control is essential. Having this system allows us to keep the repo size small enough to use a DVCS such as Mercurial (or Git) instead of SVN, which is very nice. (I'm thinking of using Mercurial's bigfiles extension or file sharing instead of a separately HTTP-served directory.)
It works flawlessly. Developers only need to check out, set an environment variable for their local cache, and then run Visual Studio via a specific batch file in the repo. No unzipping or compiling or stuff. A new developer can set up his computer in no time. (Installing Visual Studio takes an order of magnitude more time.)
The first time on a new computer takes a while, but after that it's fast, only a few seconds. Downloads/unzips are shared on the local computer, so checking out additional branches/versions does not occupy more space. Working offline is also possible; you just need to get the zip files manually if new ones have been uploaded. (This mechanism is essential to test new versions/compilations of third-party libraries.)
The basics are in a repo on Bitbucket, but it needs more work before it's ready for the public. Apart from docs and polish, I plan to:
extend it to use CMake instead of raw vcproj files, to make it more cross-platform;
script the entire process from checkout/download of third-party packages to building and zipping them (including storing the download in a local repo) ... currently that's on my dev computer. Not good. Will fix. :)
As for moc, we use Qt's Visual Studio add-in, which stores this in the .vcproj files. It works well. I do think that CMake is one of the best answers for this, though.