How do I shim executables with the same names in different directories?

I am creating a Chocolatey package for internal team usage. (In this case, the package is for Microsoft's Windows Debuggers.)
The Windows Debuggers distribution contains two folders: an x86 folder for 32-bit executables and an x64 folder for 64-bit executables.
The executable names are identical.
x86\adplus.exe
x64\adplus.exe
After installation it looks like the shim created by Chocolatey is indeed starting one of the adplus instances successfully. But sometimes I need the 32-bit version and sometimes I need the 64-bit version.
So here is the question: When there are two identically named executables in different directories, how do I tell Chocolatey to create different shims for the executables in each directory?

The short answer is that you can't have two identically named shims in the Chocolatey shim folder ($env:ChocolateyInstall\bin).
Windows requires that every file/folder in a directory have a unique name, and that is what you are running into. Shims get dropped into the $env:ChocolateyInstall\bin folder, which is on the PATH, so every shim becomes available automatically (this lets folks install all kinds of things without overloading the PATH environment variable).
You can create an empty file ending in .ignore (e.g. x86\adplus.exe.ignore) next to the executable you don't want shimmed. This is documented on the wiki. You can even do it programmatically during install based on something like OS architecture.
It sounds like you need one of them some of the time and the other at other times on the SAME machine. I would suggest .ignore files for both executables, and likely using Get-BinRoot to push the files to a tools folder (you get to define where this location is). Then you can set the process PATH temporarily for whichever one you need, and it won't persist to the actual PATH. You can even put one on the PATH permanently and override it when you want the other.
Since the automation scripts are just PowerShell, you have all kinds of options here.
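For illustration, here is a minimal sketch of what the chocolateyInstall.ps1 could do, assuming the package's tools directory carries the x86 and x64 folders (the paths and the use of [Environment]::Is64BitOperatingSystem are my assumptions, not from the original package):

# Sketch of a chocolateyInstall.ps1 (illustrative paths)
$toolsDir = Split-Path -Parent $MyInvocation.MyCommand.Definition
# Drop empty .ignore markers so neither copy of adplus.exe gets shimmed.
New-Item "$toolsDir\x86\adplus.exe.ignore" -ItemType File -Force | Out-Null
New-Item "$toolsDir\x64\adplus.exe.ignore" -ItemType File -Force | Out-Null
# Or ignore only the copy that doesn't match the OS architecture:
# $skip = if ([Environment]::Is64BitOperatingSystem) { 'x86' } else { 'x64' }
# New-Item "$toolsDir\$skip\adplus.exe.ignore" -ItemType File -Force | Out-Null

At run time you can then prepend whichever folder you need to the process PATH, e.g. $env:Path = "$toolsDir\x64;$env:Path", and the change disappears when the session ends.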

Related

Call a Chocolatey package in PowerShell based on its package name

I recently set up a new machine and installed/enabled Chocolatey. As far as I can remember, I was able to call a package via PowerShell based on the package name. For instance, if I wanted to install MongoDB, I used to type choco install mongodb, and was able to call the mongo client by simply typing mongo in the PowerShell console. Is there a way to see if something is bound to a specific shim? Or is there an option to enable it?
I don't think there is a way to match packages with shims, but you can check the executable a shim points to, along with general information about it and what would happen if you run the shim:
shimname.exe --shimgen-noop
I tried crafting a command to check all the shims in the $env:ChocolateyInstall\bin directory, but there's no guarantee that the executables there are going to be shims. I tried filtering out the known Chocolatey executables as well, but some packages (like putty) drop their real executables right in the bin folder, and those won't respond to the shim parameters like you'd expect.
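For what it's worth, the sweep looks roughly like this (the filter list is my guess at the Chocolatey-owned executables and is not exhaustive):

# Ask every exe in the bin folder what it would run; non-shims
# (choco.exe itself, real executables like putty.exe) won't honor this.
Get-ChildItem "$env:ChocolateyInstall\bin" -Filter *.exe |
    Where-Object { $_.Name -notin @('choco.exe', 'chocolatey.exe') } |
    ForEach-Object { & $_.FullName --shimgen-noop }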
Looking at the Install-BinFile cmdlet, it doesn't look like Chocolatey provides a way to track shims at all; it doesn't even do this itself. It appears to use the same name-generation logic to find automatically generated shims at package uninstall time, but any shims explicitly created with Install-BinFile also need to have Uninstall-BinFile called in the associated chocolateyUninstall.ps1 script, or the shim won't be removed when the package is uninstalled.
Short of crawling the $env:ChocolateyInstall\lib\packageName directory for potential automatic shim names, or the chocolateyInstall.ps1/chocolateyUninstall.ps1 scripts for explicit shims, you're not going to be able to match a shim to a package.

What is the difference between installing a Perl module and copying the whole folder?

I have installed a Perl module, say XYZ, and a folder was created that contains many .pm files. I copied the folder and put it on another system where XYZ is not installed, and I am able to use the XYZ module's methods on both systems. I am unable to find the difference between these two approaches, but I think there must be some. What I know is that when we install a Perl module, its dependencies also get installed. Am I right? Can anyone mention any other differences between the two?
A few off the top of my head:
In the case of an XS module, the code is compiled for the local platform.
Installing a module via cpan usually runs the test suite, so if there is any reason beyond dependencies why it wouldn't work, you're told so (I guess that's very rare, though).
Regular installation automatically goes to a directory where your perl can find modules.
Of course you can take care of all of this yourself. These days chances are pretty good you're running either Linux or Windows on something x86-ish, and as long as you only copy Linux to Linux and Windows to Windows, to the same place as on the source system, you'll be fine. Basically that's what binary Linux distributions and ActivePerl packages do too, and it may make sense, for example, if you want to avoid installing a whole bunch of compile-time dependencies on all target systems. Just make sure you don't get yourself into a mess by writing to system directories (e.g. /usr/share/perl5) that are supposed to be managed by your system's package manager.
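If you do go the copy-the-folder route, the third point above is the one you have to emulate by hand: tell perl where the copied tree lives. A minimal sketch, where C:\perllibs is an illustrative path of my choosing and XYZ is the placeholder module name from the question:

# Point perl at the copied module tree for this session only.
$env:PERL5LIB = 'C:\perllibs'
perl -MXYZ -e 'print "XYZ loaded OK\n"'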

Strong naming for Microsoft Enterprise Library

I am using Microsoft Enterprise Library in one of my projects. I need to strong-name one of the DLLs, Microsoft.Practices.EnterpriseLibrary.Common, but it is not working.
When I decompile it using ILDASM, it generates 3 files:
IL file
.RESOURCES file
Common resource script file
How do I compile it with the key file? Which ILASM command should I use?
The DLLs are distributed in the original install in a few different forms. One set of files is already signed, so you need to find that set and use the files from it.
When you install the EntLib package, you get the compiled binaries (some of which are signed) AND you get the source code, from which you can compile the DLLs yourself (not signed).
My guess is that you are using the unsigned files (compiled from the source code on your local machine) instead of the signed ones.
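If you do insist on re-signing the unsigned build yourself, the ILDASM/ILASM round trip looks roughly like this (sgKey.snk is a key file name of my own choosing; note the result has a different identity than Microsoft's signed assembly, so anything compiled against the original will not bind to it):

sn -k sgKey.snk
ildasm Microsoft.Practices.EnterpriseLibrary.Common.dll /out:Common.il
ilasm Common.il /dll /key=sgKey.snk /resource=Common.res /output=Microsoft.Practices.EnterpriseLibrary.Common.dll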

How to manage development of PowerShell snap-ins with x86 and x64 versions

I am currently writing a PowerShell snapin that has specific dependencies on mixed-mode assemblies (assemblies containing native code) that specifically target x64 or x86. I have both versions of the dependent assembly, but I am wondering how best to manage the build and deployment of this snapin, specifically:
1. Is it necessary to have two versions of the snapin, one x86 and one x64, and use the two different versions of installutil to install it, once for each architecture?
2. Assuming #1 is true, is it recommended to install the two different versions of the snapin in the different "Program Files" and "Program Files (x86)" directories?
3. What is the ideal (least hassle) way to structure a pair of projects that share everything but a single reference, in order to build for the two different architectures?
4. If the snapin is compiled as "AnyCpu" and the dependent DLLs are both loaded into the GAC, will the runtime load the correct assembly from the GAC based on the architecture of the currently running PowerShell host?
5. Is there a slick way to dynamically, at run time, choose which dependent DLL to load (if it cannot, for various reasons, be installed in the GAC) without running into headaches with assembly load contexts?
Mark, we have this very situation with the PowerShell Community Extensions, with 32-bit and 64-bit versions of 7zip.dll. You can pretty easily work around this by P/Invoking LoadLibrary early in your snapin startup (or at least before you need to call out to the native DLL). You can test whether you're a 32-bit or 64-bit process (IntPtr.Size) and then manually load the correct DLL with the LoadLibrary P/Invoke. After that, the DllImport("YourNative.dll") will notice that the DLL is already loaded and use it.
Take a look at these two PSCX source code files:
http://pscx.codeplex.com/SourceControl/changeset/view/74794?ProjectName=Pscx#1358100
http://pscx.codeplex.com/SourceControl/changeset/view/74794?ProjectName=Pscx#1358102
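The same idea can be sketched from PowerShell itself; the PSCX code linked above is C#, and the DLL path below is purely illustrative:

# Preload the matching native DLL so later DllImport("7z.dll") calls
# bind to the copy that is already in the process.
Add-Type -Namespace Native -Name Methods -MemberDefinition @'
[DllImport("kernel32.dll", SetLastError = true)]
public static extern IntPtr LoadLibrary(string fileName);
'@
$arch = if ([IntPtr]::Size -eq 8) { 'x64' } else { 'x86' }
$handle = [Native.Methods]::LoadLibrary("C:\tools\$arch\7z.dll")  # illustrative path
if ($handle -eq [IntPtr]::Zero) { throw "Could not load the $arch native DLL" }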
I ended up creating a module (thanks, Richard!), but that didn't solve the problems related to processor architecture. To solve those, I put both versions of the dependent DLL in the module directory, and in each cmdlet's constructor I put some initialization code (which only runs once) to load the appropriate version of the dependent DLL.
Thanks, all, for the pointers.

How could I share a workspace between Ubuntu and Windows XP?

I am using Ubuntu 8.04 and Windows XP. I mounted the FAT32 disk which contains my Eclipse workspace in Ubuntu, but I find I cannot use the workspace; maybe I don't have the rights to use it.
The FAT32 disk I mounted has 755 permissions. I tried to use chmod to change it to 777, but that failed. I tried to mount it in 777 mode, but I couldn't find anything about modes in the vfat options.
What should I do next? How can I share the workspace? Thanks.
Instead of trying to share the raw workspace data between two different systems, I suggest doing it the way typical big software development projects do: use a version control system to store your code, and commit/update to and from that version control system instead of sharing files.
This may not be the answer you were originally interested in, but rest assured, you will notice many advantages of that version control system after some time, including:
Easily get back to the code version before today's "genius" changes which didn't really work in the end
There is a backup of your project in case your workstation dies
You may even access your project from a completely different machine/location.
If your project is going to be open source, you can even use public services like Sourceforge.net.
I believe FAT32 doesn't support the same kind of permissions as the Linux filesystems you are familiar with. Once you have sorted out the mount options (rw, plus the vfat uid/gid/umask options) in /etc/fstab, I think you will have a better time.
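For vfat, permissions are synthesized at mount time, so the fix is mount options rather than chmod. An /etc/fstab entry might look like this (the device and mount point are illustrative):

/dev/sda5  /mnt/winshare  vfat  rw,uid=1000,gid=1000,umask=000  0  0

umask=000 makes everything world-readable and writable; substitute your own uid/gid and a tighter umask if that is too permissive.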
However, the step after that is to have two different installations of Eclipse working on the same workspace.
I haven't had a lot of success with this (though I haven't tried your exact scenario), but I would be careful to:
keep the Eclipse versions in sync
only use relative paths, relative to the workspace. This is probably good practice anyway, but is worth repeating.
If all goes well, then you should be sharing everything, including preferences across both installations.
There are two refinements I can think of, which may be useful to reason about, if not actually do:
you could probably share most of the Eclipse installation (the plugins and features directories, if not the config.ini and eclipse.ini files). If you can't put both executables in the same directory, consider the -install and -configuration runtime options (see the example after this list).
if you can't do any of these things, then you may need to work in two parallel workspaces. You can keep them in sync with tools such as rsync, or even a distributed source control system like Mercurial.
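To illustrate the -configuration option mentioned above, launching each OS's Eclipse with its own configuration area but the shared workspace might look like this (all paths are illustrative):

eclipse -configuration /path/to/ubuntu-config -data /mnt/winshare/workspace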
I agree with bananeweizen.myopenid, and have the following tip to add:
When creating your build path entries, reference all outside resources (e.g., JAR files) using classpath variables. This will allow you to move the .classpath file between environments (or even check it into source control, if you're the sole developer) without running into problems with pathnames.
To reference a JAR file via a variable, go into the "Libraries" tab of the Build Path, remove any existing reference to the library, and click "Add Variable...". You will need to define common variables, such as M2_REPO or LOCAL_LIBS, and you will need to make sure those definitions are available in all your environments.
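The resulting entry in the .classpath file ends up as a kind="var" entry, something like the following (the variable name and JAR path are illustrative); it stays valid on any machine where LOCAL_LIBS is defined:

<classpathentry kind="var" path="LOCAL_LIBS/junit/junit-4.8.2.jar"/>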
Perhaps the problem you're having is with capitalization. Be sure to create the workspace in Ubuntu first. This should rule out any filename capitalization issues.