I'm a PowerShell newbie and I'm having some problems with utility scripts that are "included" (dot-sourced) in other files; the problem is with paths. Let me explain the issue with a simple example:
Imagine you have a utility script named utility.ps1 located under /tools, and you want to invoke it from a build.ps1 placed under /build. Now imagine that utility.ps1 invokes another utility script in the same folder, called utility2.ps1. So,
utility.ps1:
[...]
.".\utility2.ps1"
[...]
build.ps1:
[...]
."..\tools\utility.ps1"
[...]
The problem here is that when build.ps1 calls utility.ps1, and utility.ps1 in turn calls utility2.ps1, PowerShell tries to load utility2.ps1 from the current directory (build) instead of the tools directory.
My current workaround is to pushd . before calling and popd after that, but it smells bad, I know; on the other hand, if some day I move the scripts to another location, I'd need to update every script that uses the moved one... a mess.
So, I'm doing something very wrong; what would be the proper way to do this, making the client script unaware of and independent of the utility script's location?
The answer by PetSerAl in the comments is perfect, but it only works for PowerShell 3+. If for some reason you want your script to be backward compatible as well, use the code snippet below to find the $ScriptRootPath (which is automatically populated via $PSScriptRoot in PS 3+):
$ScriptRootPath = Split-Path $MyInvocation.MyCommand.Path -Parent
For PS 3+, populate this variable from the automatic variable:
$ScriptRootPath = $PSScriptRoot
and then use the solution as described by PetSerAl,
i.e. for utility.ps1: . "$ScriptRootPath\utility2.ps1"
for build.ps1: . "$ScriptRootPath\..\tools\utility.ps1"
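Putting the two together, here is a minimal, version-agnostic sketch of utility.ps1 (the explicit version check is my own addition, not part of PetSerAl's answer):

# utility.ps1 -- compute this script's own folder on both PS 2.0 and PS 3+.
# On PS 3+ $PSScriptRoot is populated automatically; on older versions we
# fall back to $MyInvocation, which must be read at script (not function) scope.
if ($PSVersionTable.PSVersion.Major -ge 3) {
    $ScriptRootPath = $PSScriptRoot
} else {
    $ScriptRootPath = Split-Path $MyInvocation.MyCommand.Path -Parent
}

# Dot-source the sibling script relative to this script's folder, so the
# caller's current directory no longer matters.
. "$ScriptRootPath\utility2.ps1"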
Also refer to this question on SO.
I need some help. We run PowerShell scripts using a configuration script.
e.g. .\folderA\TheConfigurationScript .\scriptThatDoesAllTheWork.ps1
The issue is that there are a number of other scripts in the directory where scriptThatDoesAllTheWork.ps1 is located.
If I accidentally add a space before scriptThatDoesAllTheWork.ps1, all the scripts located in that folder are executed,
e.g. .\folderA\TheConfigurationScript
.\ scriptThatDoesAllTheWork.ps1
All the variables defined in folderA\TheConfigurationScript are therefore available to scriptThatDoesAllTheWork.ps1, which does all the work.
Is there any way to avoid this behaviour?
Thanks in advance
Thanks to mclayton. I had a Get-ChildItem that would return all the files when the stray space was there. In case this helps anyone.
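For anyone hitting the same thing, a hypothetical guard along these lines (the parameter name and layout are illustrative, not our actual configuration script) makes the configuration script fail fast instead of running everything Get-ChildItem matches:

param([string]$ScriptPath)

# A stray space turns the argument into ".\", which matches every file here.
$scripts = @(Get-ChildItem -Path $ScriptPath -File)
if ($scripts.Count -ne 1) {
    throw "Expected exactly one script for '$ScriptPath' but found $($scripts.Count); check for a stray space in the argument."
}
& $scripts[0].FullName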
I found a snippet of code that, if I paste it into PowerShell, displays all of my Windows path variables on one line. What would the syntax be for adding this code to my profile?
Push-Location env:
(ls path).value.split(";")
Pop-Location
Put this function in your PowerShell profile:
function Get-Path {
$env:Path.Split(";")
}
After this function is defined, you can type Get-Path to see the list of directories in your Path.
If you don't already have a profile script, run the command help about_Profiles for more information.
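If you'd rather script that step, here is a small sketch (it assumes the default $PROFILE location; see about_Profiles for the variants):

# Create the profile script if it doesn't exist yet (-Force also creates
# the containing folder), then append the Get-Path function to it.
if (-not (Test-Path $PROFILE)) {
    New-Item -ItemType File -Path $PROFILE -Force | Out-Null
}
Add-Content -Path $PROFILE -Value @'
function Get-Path {
    $env:Path.Split(";")
}
'@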
Not sure why you'd prefer to see the PATH variable on one line, but here's the code to do it:
C:\>(ls Env:\Path).Value
I prefer separate lines:
C:\>(ls Env:\Path).Value.Split(';')
As far as your PowerShell Profile goes, open PowerShell and run:
C:\>$profile
It will tell you the path to your profile (create the file and the directory if they don't exist).
Then, copy + paste the code above into it.
It will run whenever you open PowerShell.
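To create the file from the console, a quick sketch (New-Item's -Force also creates the missing directory):

C:\>if (-not (Test-Path $profile)) { New-Item -ItemType File -Path $profile -Force }
C:\>notepad $profile   # paste the code above into the file and save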
I am currently trying to write a batch program that installs a module named SetConsolePath.psm1 at the correct location. I am a beginner with batch and I have absolutely no PowerShell experience.
From the internet, I have learned how to display PSModulePath with powershell -command "echo $env:PSModulePath".
How can I, via a .bat file, move SetConsolePath.psm1 from the desktop to the location displayed by powershell -command "echo $env:PSModulePath"?
Thank you in advance, and I apologize for my lack of experience.
Before I answer, I must point out that you do not want to copy PowerShell module files directly to the path pointed to by PSModulePath. You really want to create a folder inside PSModulePath and copy the files there instead.
The prefix env in a PowerShell variable indicates an environment variable: $env:PSModulePath is actually referring to the PSMODULEPATH environment variable. On the command line, and in batch files, environment variables can be displayed by placing the name between percent symbols. (In fact, you could have displayed this value by typing echo %PSMODULEPATH% instead.)
To reference the desktop folder, have a look at this answer, which shows you how to use another environment variable, USERPROFILE.
Therefore, to copy the file from the desktop directory to the path specified in PSModulePath, you would do this:
COPY "%USERPROFILE%\Desktop\SetConsolePath.psm1" "%PSMODULEPATH%"
And, as I warned earlier, you really should copy the file to a folder underneath PSModulePath. So what you really want is:
IF NOT EXIST "%PSMODULEPATH%\MyNewFolder" MKDIR "%PSMODULEPATH%\MyNewFolder"
COPY "%USERPROFILE%\Desktop\SetConsolePath.psm1" "%PSMODULEPATH%\MyNewFolder"
Is there a way to specify the current working directory for the system command executed by the function module SXPG_COMMAND_EXECUTE?
I do not see any parameter which would allow me to do that, either when defining the command in transaction SM69 or in the list of IMPORTING parameters in SE37.
It looks like by default such commands are started in DIR_HOME, which can be viewed in transaction AL11. Do I have any control over that?
There isn't a way of doing it via SM69, unfortunately. I think the only solution is to create a script and call that.
I was going to suggest wrapping the statements in an SM69 command defined as a call to sh with parameters of -c 'cd <dir> && /path/to/command', but unfortunately that doesn't work. According to note 401095, wildcards are not permitted. When I tested, && was translated into a single &, causing the command to fail.
It would be good to fetch this information using the function module FILE_GET_NAME_USING_PATH (exporting the script name for which you want to find the physical directory).
The path it returns can then be used in SXPG_COMMAND_EXECUTE.
Because the external commands I called were actually .bat files, I solved this by putting the following expression at the beginning of each and every one (%~dp0 expands to the drive and path of the batch file itself, and /d lets cd switch drives if needed):
cd /d %~dp0
This Stack Overflow question actually helped a lot.
Okay, this is and isn't programming related, I guess...
I've got a whole bunch of little useful console utilities scattered across a suite of projects that I wrote and I want to dump them all to a single directory to make using them simpler. The only issue is that I have them all compiled in both Debug and Release mode.
Given that I only want the Release-mode versions in my utilities directory, what switch would allow me to specify that I want all executables from my tree structure, but only from within Release folders?
Example:
Projects\
    Project1\
        Bin\
            Debug\
                Project1.exe
            Release\
                Project1.exe
    Project2\
    etc etc...
To
Utilities\
    Project1.exe
    Project2.exe
    Project3.exe
    Project4.exe
    ...
    etc etc...
I figured this would be a cinch with xcopy, but it doesn't seem to allow me to exclude the Debug directories, or rather, to only include items in my Release directories.
Any ideas?
You can restrict it to only the Release executables with the following. However, I do not believe the other requirement, flattening, is possible using xcopy alone. To do the restriction:
First create a file such as exclude.txt and put this inside:
\Debug\
Then use the following command:
xcopy /e /EXCLUDE:exclude.txt *.exe C:\target
You can, however, accomplish what you want using xxcopy (free for non-commercial use). Read technical bulletin #16 for an explanation of the flattening features.
If the claim in that technical bulletin is correct, then it confirms that flattening cannot be accomplished with xcopy alone.
The following command will do exactly what you want using xxcopy:
xxcopy /sgfo /X:*\Debug\* .\Projects\*.exe .\Utilities
I recommend reading the technical bulletin, however, as it gives more sophisticated options for the flattening. I chose one of the most basic above.
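For completeness, the same filter-and-flatten is a single pipeline in PowerShell; here is a sketch, assuming the Projects\ and Utilities\ layout from the question:

# Create the flat target folder, then find every .exe under Projects whose
# path goes through a Release folder and copy each one into it.
New-Item -ItemType Directory -Path .\Utilities -Force | Out-Null
Get-ChildItem -Path .\Projects -Recurse -Filter *.exe |
    Where-Object { $_.FullName -match '\\Release\\' } |
    Copy-Item -Destination .\Utilities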
Sorry, I haven't tried it yet, but shouldn't you be using:
xcopy release*.exe d:\destination /s
I am currently on my Mac, so I can't really check to be sure.
This might not help you with assembling them all in one place now, but going forward, have you considered adding a post-build event to the projects in Visual Studio (I'm assuming you are using it, based on the directory names)?
xcopy /Y /I "$(TargetDir)$(TargetFileName)" "c:\somedirectory\"
OK, this is probably not going to work for you since you seem to be on a Windows machine.
Here goes anyway, for the logic.
# From the base directory
mkdir Utilities
find . -type f -name '*.exe' | grep -w Release > utils.txt
while read -r f; do cp "$f" Utilities/; done < utils.txt
You can combine the find and cp lines into one, I split them for readability.
To do this on a Windows machine you'll need Cygwin or some such set of Unix utilities handy.
Maybe there are tools in the Windows shell to do this...
This may help get you started:
C:\>for /d %i in (*) do dir "%~fi\*.exe"
Used as a modifier to i, ~f expands to the fully qualified path of each folder found by (*), and the /d switch makes for iterate over directories rather than files. If I run the above in a folder that has several subfolders containing executables, I get a dir list of all of the executables in each folder.
You should be able to modify that to append \bin\release\ after the %~fi portion and change dir to xcopy. A little experimentation should make it pretty easy.
To use the for statement above in a batch file, change '%' to '%%' in both places.