Test-Path is returning True when it should be False - powershell

I am writing a module to run logparser queries. I wrote a function that checks if LogParser is in the system root so that I can run logparser as a command in subsequent functions.
My code:
function Add-LogParser
{
    if (Test-Path "C:\Windows\System32\LogParser.exe")
    {
        Write-Host -ForegroundColor Cyan "Log Parser is in System Root"
    }
    else
    {
        Copy-Item "C:\Program Files (x86)\WindowsPowerShell\Modules\Dealogic.LogAnalysis\LogParser.exe" -Destination "C:\Windows\System32\" -Force
        if (Test-Path "C:\Windows\System32\LogParser.exe")
        {
            Write-Host -ForegroundColor Cyan "Log Parser has been added to System Root"
        }
        else
        {
            Write-Host -ForegroundColor Red "Unable to add Log Parser to System Root. This is a requirement of the Dealogic Log Analysis Module. Please verify you have write access to copy to the C:\Windows\System32\ folder."
            break
        }
    }
}
I ran the function and the first time it added LogParser to the system root fine. I ran the function again, and the logic that checks whether it is already in the root also worked fine. Then I deleted LogParser, expecting that the third run would see it was missing and add it back, but instead it thinks LogParser is still there. Even if I start a new PowerShell session and just tab-complete over to the path, it thinks it's there.
Even outside of my code this command is not working properly:
Test-Path -LiteralPath C:\Windows\System32\LogParser.exe
Is this because it is in the system root? Is that cached in the PowerShell profile or something? Since adding it to the root is a one-time thing, I don't know that it affects my script, but I was surprised to see this behavior.

This seems to be a common pitfall when developing in a situation where the developer can accidentally or unwittingly switch between 32-bit and 64-bit PowerShell environments.
I conducted the following tests:
Test: Create the file in system32 only and check both system32 and syswow64 from both 32-bit and 64-bit PowerShell.
Result: The 32-bit session returned FALSE for both. The 64-bit session returned TRUE for system32 and FALSE for syswow64.
Test: Create the file in syswow64 only and check both paths from both sessions.
Result: The 32-bit session returned TRUE for both. The 64-bit session returned FALSE for system32 and TRUE for syswow64.
Test: Create the file in both locations and check both paths from both sessions.
Result: Both sessions returned TRUE for both paths.
Test: Create the file in both locations and delete it from system32 only.
Result: The 32-bit session returned TRUE for both. The 64-bit session returned TRUE for syswow64 only.
Test: Create the file in both locations and delete it from syswow64 only.
Result: The 32-bit session returned FALSE for both. The 64-bit session returned TRUE for system32 only.
From this testing it appears that the 64-bit session accurately checks for files in both system32 and syswow64. The 32-bit session appears to be redirected to syswow64: if the file is there, it returns TRUE regardless of what is in system32, and if the file is not there, it returns FALSE regardless of what is in system32.
Thanks to @Mathias R. Jessen for asking whether the file exists in the syswow64 directory, as that reminded me that I've seen this before.
It looks like this all relates to redirection under WOW64: the file system redirector transparently maps System32 to SysWOW64 for 32-bit processes, analogous to the redirection and reflection of registry keys. For more info, search the Microsoft documentation for "File System Redirector" and "Registry Keys Affected by WOW64". This article https://support.microsoft.com/en-us/help/305097/how-to-view-the-system-registry-by-using-64-bit-versions-of-windows
contains some related info and includes this interesting line: "To support the co-existence of 32-bit and 64-bit COM registration and program states, WOW64 presents 32-bit programs with an alternate view of the registry."
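The redirection described above can be observed directly. The following is a minimal sketch (the file name is taken from the question); the Sysnative alias is a virtual directory that Windows exposes only to 32-bit processes on 64-bit Windows, and it bypasses the redirector:

```powershell
# Run this from a 32-bit PowerShell session on 64-bit Windows.
$file = 'LogParser.exe'

# "System32" is silently redirected to SysWOW64 for 32-bit processes:
Test-Path "$env:windir\System32\$file"   # actually checks SysWOW64

# Checking SysWOW64 explicitly returns the same answer:
Test-Path "$env:windir\SysWOW64\$file"

# "Sysnative" bypasses the redirector and sees the real 64-bit System32.
# (Sysnative does not exist for 64-bit processes.)
Test-Path "$env:windir\Sysnative\$file"
```

From a 64-bit session, only the first two paths exist, and System32 is the real System32.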

Related

How to read a text file to a variable in batch and pass it as a parameter to a powershell script

I have a PowerShell script that generates a report, and I have connected it to an IO.FileSystemWatcher. I am trying to improve the error handling. The report generation function (which only takes a file path) is already inside a try-catch block that kills Word, Excel and PowerPoint and tries again if it fails. This seems to work well, but I want to embed within it another try-catch block that will restart the computer and generate the report after reboot if it fails a second consecutive time.
I decided to try and modify the registry after reading this article: https://cmatskas.com/configure-a-runonce-task-on-windows/
My plan would be: within the second try-catch block I will create a text file called RecoveredPath.txt with the file path as its only contents, and then, before rebooting, add something like:
Set-ItemProperty "HKLM:\Software\Microsoft\Windows\CurrentVersion\RunOnce" -Name '!RecoverReport' -Value "C:\...EmergencyRecovery.bat"
Within the batch file I have:
set /p RecoveredDir=<RecoveredPath.txt
powershell.exe -File C:\...Report.ps1 %RecoveredDir%
When I try to run the batch script, it doesn't yield any errors but doesn't seem to do anything. I added an echo statement and confirmed it is storing the value of the text file in a variable, but it doesn't seem to be passing it to PowerShell correctly. I also tried adding -Path %RecoveredDir%, but that yielded an error (the parameter in Report.ps1 is named $Path).
What am I doing incorrectly?
One potential problem is that not enclosing %RecoveredDir% in "..." breaks with paths containing spaces and other special characters.
However, the bigger problem is that using mere file name RecoveredPath.txt means that the file is looked for in whatever the current directory happens to be.
In a comment you state that both the batch file and input file RecoveredPath.txt are located in your desktop folder.
However, it is not the batch file's location that matters, it's the process' current directory - and that is most likely not your desktop when your batch file auto-runs on startup.
Given that the batch file and the input file are in the same folder and that you can refer to a batch file's full folder path with %~dp0 (which includes a trailing \), modify your batch file to look as follows:
set /p RecoveredDir=<"%~dp0RecoveredPath.txt"
powershell.exe -File C:\...Report.ps1 "%RecoveredDir%"

How to detect if a special file is running or not with powershell?

Get-Process only gives a result if a process such as notepad.exe is running, but I want to know whether a specific file (index.txt) in some folder is currently open, using PowerShell.
You can use the MainWindowTitle property and then select the names of the running processes. Something like this -
get-Process notepad | where-Object {$_.mainWindowTitle} | Select-Object id, name, mainwindowtitle
This will give you the list of running notepad processes, and if you find your file index.txt under the MainWindowTitle header, you can confirm that your file is indeed open.
Get-Process gets all running processes.
A text file is not a process; it is an object opened by / in a process (whether PowerShell started it or not): notepad, winword, etc.
PowerShell can be used to start a process, say notepad, but PowerShell does not own it; the exe does.
So a file, in the context you are asking about, is never "running" in PowerShell. The process (running on your system) can be looked up using Get-Process (or the old Tasklist tool, which Get-Process replaces), as can the path information of the running process.
Start notepad manually and open a text file.
Run Get-Process and ask for all values of the notepad process.
You will see the Get-Process brings back a whole lot of info for you to select from.
Note that it is the MainWindowTitle that shows which file the Notepad process has open, but nowhere in these results does it say where that file is located (its path).
Get-Process Notepad | Select-Object *
Name : notepad
Id : 20516
...
Path : C:\WINDOWS\system32\notepad.exe
Company : Microsoft Corporation
CPU : 2.515625
ProductVersion : 10.0.17134.1
Description : Notepad
Product : Microsoft® Windows® Operating System
__NounName : Process
...
MainWindowTitle : news-stuff.txt - Notepad
MainModule : System.Diagnostics.ProcessModule (notepad.exe)
...
Note:
This answer tells you if a given file is currently held open by someone.
If you also need to know who (what process) has it open, see the answers to this related question, but note that they require either installation of a utility (handle.exe) or prior configuration of the system with administrative privileges (openfiles).
If you want a conveniently packaged form of the technique presented in this answer, you can download function Get-OpenFiles from this Gist, which supports finding all open files in a given directory [subtree].
Files, unlike processes, aren't running, so I assume that you meant to test if a file is currently open (has been opened, possibly by another process, for reading and/or writing, and hasn't been closed yet).
The following snippet detects if file someFile.txt in the current dir. is currently open elsewhere:
$isOpen = try {
[IO.File]::Open("$PWD/someFile.txt", 'Open', 'Read', 'None').Close()
$false # file NOT open elsewhere
}
catch {
# See if the exception is a sharing / locking error, which indicates that the file is open.
if (
$_.Exception.InnerException -is [System.IO.IOException] -and
($_.Exception.InnerException.HResult -band 0x21) -in 0x21, 0x20
) {
$true # file IS open elsewhere
}
else {
Throw # unexpected error, relay the exception
}
}
$isOpen # echo the result
Note the $PWD/ before someFile.txt, which explicitly prepends the path to the current directory so as to pass a full filename. This is necessary, because the .NET framework typically has a different current directory. Prepending $PWD/ doesn't work in all situations, however; you can read more about it and find a fully robust solution here.
The code tries to open the file for reading with an exclusive lock (a sharing mode of None), which fails if the file is currently open.
Note, however, that this only works if you have permission to at least read the file.
If you don't, the test cannot be performed, and the code relays the [System.UnauthorizedAccessException] that occurred; similarly, exceptions from other unexpected conditions are relayed, such as the specified file not existing ([System.IO.FileNotFoundException]).
[System.IO.IOException] can indicate a range of error conditions (and operator -is also matches derived classes such as [System.IO.FileNotFoundException]), so in order to specifically detect sharing/locking errors, the code must test the .HResult property of the exception for containing Win32 API error codes ERROR_SHARING_VIOLATION (0x20) or ERROR_LOCK_VIOLATION (0x21).
Taking a step back: If the intent is ultimately to process the file [content] yourself, it's better not to perform a separate test beforehand, because the file may get opened again between performing your test and your subsequent attempt to open it; instead, simply try to open the file yourself and handle any failure - see this (C#-based) answer.
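That step-back advice can be sketched as follows, reusing the open-with-exclusive-lock technique from the snippet above (the file name is hypothetical):

```powershell
# Instead of testing first and opening later (a race condition),
# just try to open the file and handle failure.
try {
    # Open for reading with an exclusive lock (sharing mode 'None').
    $stream = [System.IO.File]::Open("$PWD/someFile.txt", 'Open', 'Read', 'None')
    try {
        # ... process the file via $stream here ...
    }
    finally {
        $stream.Close()   # always release the handle
    }
}
catch [System.IO.IOException] {
    # Covers sharing/locking violations (and, e.g., file-not-found).
    Write-Warning "Could not open the file: $_"
}
```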

Powershell file path with space, multiple drives

I'm trying to use the call operator (&) to run an R script, and for some reason I am unable to direct to the right path on the D:\ drive, but it works fine on the C:\ drive (copied the R folder from D:\ to C:\ for testing).
The D:\ drive error appears like a space error, even though there are quotes around the string/variable.
Double spacing between "Program" and "Files", the call command reads correctly.
Ideally I would like to call to Rscript.exe on the D:\ drive, but I don't know why it's giving me an error - especially when the C:\ drive works fine and double spacing reads correctly.
Also worth noting "D:\Program Files (x86)" doesn't read correctly either, with similar symptoms.
Update: running
gci -r d:\ -include rscript.exe | % fullname
returns:
D:\Program Files\R\R-3.2.3\bin\i386\Rscript.exe
D:\Program Files\R\R-3.2.3\bin\x64\Rscript.exe
D:\Program Files\R\R-3.2.3\bin\Rscript.exe
The last of which is what my variable $RscriptD is set to.
The first error message in your image is:
Rscript.exe : The term 'D:\Program' is not recognized as an internal or external command
This message means that the call operator (&) started Rscript.exe, but Rscript.exe itself then failed while using 'D:\Program'.
I don't know the exact details of Rscript.exe's internal behavior; however, I think Rscript.exe tried to run D:\Program Files\R\R-3.2.3\bin\i386\Rscript.exe or D:\Program Files\R\R-3.2.3\bin\x64\Rscript.exe but could not handle the whitespace in Program Files. The manual says:
Sub-architectures are also used on Windows, but by selecting executables within the appropriate bin directory, R_HOME/bin/i386 or R_HOME/bin/x64. For backwards compatibility there are executables R_HOME/bin/R.exe and R_HOME/bin/Rscript.exe: these will run an executable from one of the subdirectories, which one being taken first from the R_ARCH environment variable, then from the --arch command-line option and finally from the installation default (which is 32-bit for a combined 32/64 bit R installation).
According to this, I think it is better to call i386/Rscript.exe or x64/Rscript.exe directly rather than bin/Rscript.exe, which exists only for backwards compatibility.
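A minimal sketch of that suggestion: quote the arch-specific path and invoke it with the call operator. The Rscript.exe path is taken from the question; the .R script path is a hypothetical placeholder.

```powershell
# Call the 64-bit Rscript.exe directly, bypassing the bin\Rscript.exe
# compatibility shim that re-launches an executable under "Program Files".
$RscriptD = 'D:\Program Files\R\R-3.2.3\bin\x64\Rscript.exe'

# The call operator (&) handles the space in the quoted path;
# script arguments are passed after the executable path.
& $RscriptD 'C:\scripts\analysis.R'
```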

Issue when running external command

I'm using backticks to run an external command in perl, but I've got a problem.
What I want to do is to run
`mount /dev/sdb2 /mnt`
But the sdb2 is only the right parameter when I'm running it with this disk, I want to be able to run the script with any disk.
The script gets information about the source disk that I'm using (in this case sdb) and stores it as "$source". But when I try:
`mount $source /mnt`
it says "mount: you must specify the filesystem type".
In this case the program is asking for the "2".
Any idea how to make the script find the required partition number, or at least how to add a "2" after "$source", so that
$source = /dev/sdb2 and not /dev/sdb?
Use curly braces when expanding the variable:
`mount ${source}2 /mnt`
NB. Make sure you validate $source's value, so as not to introduce code-injection vulnerabilities.

Where does CGI.pm normally create temporary files?

On all my Windows servers, except for one machine, when I execute the following code to allocate a temporary files folder:
use CGI;
my $tmpfile = new CGITempFile(1);
print "tmpfile='", $tmpfile->as_string(), "'\n";
The variable $tmpfile is assigned the value '.\CGItemp1' and this is what I want. But on one of my servers it's incorrectly set to C:\temp\CGItemp1.
All the servers are running Windows 2003 Standard Edition, IIS6 and ActivePerl 5.8.8.822 (upgrading to later version of Perl not an option). The result is always the same when running a script from the command line or in IIS as a CGI script (where scriptmap .pl = c:\perl\bin\perl.exe "%s" %s).
How can I fix this Perl installation and force it to return '.\CGItemp1' by default?
I've even copied the whole Perl folder from one of the working servers to this machine but no joy.
@Hometoast:
I checked the 'TMP' and 'TEMP' environment variables and also $ENV{TMP} and $ENV{TEMP} and they're identical.
From command line they point to the user profile directory, for example:
C:\DOCUME~1\[USERNAME]\LOCALS~1\Temp\1
When run under IIS as a CGI script they both point to:
c:\windows\temp
In registry key HKEY_USERS/.DEFAULT/Environment, both servers have:
%USERPROFILE%\Local Settings\Temp
The ActiveState implementation of CGITempFile() is clearly using an alternative mechanism to determine how it should generate the temporary folder.
@Ranguard:
The real problem is with the CGI.pm module and attachment handling. Whenever a file is uploaded to the site CGI.pm needs to store it somewhere temporary. To do this CGITempFile() is called within CGI.pm to allocate a temporary folder. So unfortunately I can't use File::Temp. Thanks anyway.
@Chris:
That helped a bunch. I did have a quick scan through the CGI.pm source earlier but your suggestion made me go back and look at it more studiously to understand the underlying algorithm. I got things working, but the oddest thing is that there was originally no c:\temp folder on the server.
To obtain a temporary fix I created a c:\temp folder and set the relevant permissions for the website's anonymous user account. But because this is a shared box I couldn't leave things that way, even though the temp files were being deleted. To cut a long story short, I renamed the c:\temp folder to something different and magically the correct '.\' folder path was being returned. I also noticed that the customer had enabled FrontPage extensions on the site, which removes write access for the anonymous user account on the website folders, so this permission needed re-applying. I'm still at a loss as to why at the start of this issue CGITempFile() was returning c:\temp, even though that folder didn't exist, and why it magically started working again.
The name of the temporary directory is held in $CGITempFile::TMPDIRECTORY and initialised in the find_tempdir function in CGI.pm.
The algorithm for choosing the temporary directory is described in the CGI.pm documentation (search for -private_tempfiles).
IIUC, if a C:\Temp folder exists on the server, CGI.pm will use it. If none of the directories checked in find_tempdir exist, then the current directory "." is used.
I hope this helps.
Not the direct answer to your question, but have you tried using File::Temp?
It is specifically designed to work on any OS.
If you're running this script as you, check the %TEMP% environment variable to see if it differs.
If IIS is executing, check the values in registry for TMP and TEMP under
HKEY_USERS/.DEFAULT/Environment