Issue when running external command - perl

I'm using backticks to run an external command in Perl, but I've got a problem.
What I want to do is to run
`mount /dev/sdb2 /mnt`
But sdb2 is only the right parameter when I'm running it with this particular disk; I want to be able to run the script with any disk.
The script gets information about the source disk that I'm using (in this case sdb) and stores it in "$source". But when I try:
`mount $source /mnt`
It says "mount: you must specify the filesystem type"
In this case the program is asking for the "2".
Any idea how to make the script find the number that is required, or at least how to add a "2" after "$source", so that
$source = /dev/sdb2 and not /dev/sdb?

Use curly braces when expanding the variable:
`mount ${source}2 /mnt`
NB: make sure you validate $source's value, so as not to introduce code injection vulnerabilities.


Powershell compare command for 2 folders having corrupt folders

I am using the command below to compare 2 paths, and I get an error message when it gets to a folder whose name ends with a period, e.g. "Folder123."
When I manually try to open those folders I get an error, so I think they are corrupt. How can I skip all folders that end with a period or at least ignore the errors so that my processing can finish?
Compare (Get-ChildItem -r Y:\Ftp\BFold\Final) (Get-ChildItem -r Y:\Dest\TFold\Temp)
You're getting that error because of the Naming Files, Paths, and Namespaces limitations in Windows. One or several of the tools you're using are not able to handle this special case.
Do not end a file or directory name with a space or a period. Although the underlying file system may support such names, the Windows shell and user interface does not. However, it is acceptable to specify a period as the first character of a name. For example, ".temp".
You could either filter the list of folders or use the -ErrorAction parameter to change what happens on an error. Depending on what you're seeing, the error might already be purely cosmetic.
For filtering, you could use Where-Object, for example with -NotMatch ".*\.$".
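A minimal sketch combining both suggestions, assuming the paths from the question (the trailing-period folders themselves remain unreadable; their errors are simply suppressed):
# Enumerate both trees, suppressing errors from unreadable folders,
# and skip any item whose name ends with a period.
$left = Get-ChildItem -Recurse Y:\Ftp\BFold\Final -ErrorAction SilentlyContinue |
    Where-Object { $_.Name -NotMatch '\.$' }
$right = Get-ChildItem -Recurse Y:\Dest\TFold\Temp -ErrorAction SilentlyContinue |
    Where-Object { $_.Name -NotMatch '\.$' }
Compare-Object -ReferenceObject $left -DifferenceObject $right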

How to read a text file to a variable in batch and pass it as a parameter to a powershell script

I have a PowerShell script that generates a report, and I have connected it to an IO.FileSystemWatcher. I am trying to improve the error handling capability. I already have the report generation function (which only takes in a file path) within a try-catch block that basically kills Word, Excel and PowerPoint and tries again if it fails. This seems to work well, but I want to embed within that another try-catch block that will restart the computer and generate the report after reboot if it fails a second consecutive time.
I decided to try and modify the registry after reading this article: https://cmatskas.com/configure-a-runonce-task-on-windows/
My plan would be: within the second try-catch block, I will create a text file called RecoveredPath.txt with the file path as its only contents, and then, before rebooting, add something like:
Set-ItemProperty "HKLMU:\Software\Microsoft\Windows\CurrentVersion\RunOnce" -Name '!RecoverReport' -Value "C:\...EmergencyRecovery.bat"
Within the batch file I have:
set /p RecoveredDir=<RecoveredPath.txt
powershell.exe -File C:\...Report.ps1 %RecoveredDir%
When I try to run the batch script, it doesn't yield any errors but doesn't seem to do anything. I tried adding an echo statement, and it is storing the value of the text file as a variable, but it doesn't seem to be passing it to PowerShell correctly. I also tried adding -Path %RecoveredDir%, but that yielded an error (the param in Report.ps1 is named $Path).
What am I doing incorrectly?
One potential problem is that %RecoveredDir% is not enclosed in "...", which breaks with paths containing spaces and other special characters.
However, the bigger problem is that using mere file name RecoveredPath.txt means that the file is looked for in whatever the current directory happens to be.
In a comment you state that both the batch file and input file RecoveredPath.txt are located in your desktop folder.
However, it is not the batch file's location that matters, it's the process' current directory - and that is most likely not your desktop when your batch file auto-runs on startup.
Given that the batch file and the input file are in the same folder and that you can refer to a batch file's full folder path with %~dp0 (which includes a trailing \), modify your batch file to look as follows:
set /p RecoveredDir=<"%~dp0RecoveredPath.txt"
powershell.exe -File C:\...Report.ps1 "%RecoveredDir%"
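For reference, a minimal sketch of the receiving end in Report.ps1; the parameter name $Path comes from the question, but the attribute details and body are assumptions. With -File, the quoted "%RecoveredDir%" value is passed as the first positional argument and therefore binds to $Path:
param(
    # First positional argument; receives the path read from RecoveredPath.txt (assumed declaration)
    [Parameter(Mandatory = $true, Position = 0)]
    [string]$Path
)
# ... report generation for $Path goes here ...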

Powershell file path with space, multiple drives

I'm trying to use the call operator (&) to run an R script, and for some reason I am unable to direct to the right path on the D:\ drive, but it works fine on the C:\ drive (copied the R folder from D:\ to C:\ for testing).
The D:\ drive error looks like a whitespace issue, even though there are quotes around the string/variable.
With double spacing between "Program" and "Files", the call command reads correctly.
Ideally I would like to call Rscript.exe on the D:\ drive, but I don't know why it's giving me an error - especially when the C:\ drive works fine and double spacing reads correctly.
Also worth noting "D:\Program Files (x86)" doesn't read correctly either, with similar symptoms.
Update: running
gci -r d:\ -include rscript.exe | % fullname
returns:
D:\Program Files\R\R-3.2.3\bin\i386\Rscript.exe
D:\Program Files\R\R-3.2.3\bin\x64\Rscript.exe
D:\Program Files\R\R-3.2.3\bin\Rscript.exe
The last of which is what my variable $RscriptD is set to.
The first error message in your image is:
Rscript.exe : The term 'D:\Program' is not recognized as an internal or external command
This message means that the call operator (&) successfully invoked Rscript.exe, but that Rscript.exe itself then failed when using 'D:\Program'.
I don't know the exact details of Rscript.exe's internal behavior; however, I think Rscript.exe tried to run D:\Program Files\R\R-3.2.3\bin\i386\Rscript.exe or D:\Program Files\R\R-3.2.3\bin\x64\Rscript.exe but could not handle the whitespace in Program Files, because the manual says:
Sub-architectures are also used on Windows, but by selecting executables within the appropriate bin directory, R_HOME/bin/i386 or R_HOME/bin/x64. For backwards compatibility there are executables R_HOME/bin/R.exe and R_HOME/bin/Rscript.exe: these will run an executable from one of the subdirectories, which one being taken first from the R_ARCH environment variable, then from the --arch command-line option and finally from the installation default (which is 32-bit for a combined 32/64 bit R installation).
According to this, I think it is better to call i386/Rscript.exe or x64/Rscript.exe directly, rather than bin/Rscript.exe, which exists only for backwards compatibility.
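A minimal sketch of that suggestion, using the x64 path from the gci output above (the .R script path is a placeholder):
# Call the architecture-specific executable directly; the embedded space in
# "Program Files" is preserved because the path is a single quoted string.
$RscriptD = 'D:\Program Files\R\R-3.2.3\bin\x64\Rscript.exe'
& $RscriptD 'C:\path\to\script.R'  # placeholder .R script path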

Powershell: Copy-Item -Recurse -Force is not copying all sub files

I have a one-liner that is baked into a larger script for some high-level forensics. It is just a simple Copy-Item command that writes the destination folder and its contents back to my server. The code works great, BUT even with the switches:
-Recurse -Force
it is not returning the file with an extension of .dat. As you can guess, I need the .dat file for analysis. I am running this from a privileged account. My only thought was that it is a read/write conflict and the host system was currently utilizing it (or another system file). What switch am I missing? The "mode" for the file that will not copy over is -a---. Not hidden, just not copying. Suggestions elsewhere have said to use xcopy/robocopy; if possible I do not want to call another dependency - I'm already using PowerShell for the majority of the script, and I'd prefer to stick with it. Any thoughts? Thanks in advance, this one has been tickling my brain for a little while...
The only way to copy a file in use is to find the locking handle, close it, and then retry the copy operation (e.g., with handle.exe).
From your question it looks like you are trying to remotely copy user profiles, which include ntuser.dat and other files that would be needed to keep the profile working properly. Even if you did manage to find a way to unload the .dat file(s), you would have to consider the impact that would have on the remote system.
Shadow copy is typically used by backup programs to copy files in use, so your best bet would be to find the latest backup of each remote computer and then try to extract the needed files from the backed-up copies, or maybe wait for the users to log off and then try.
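For the handle-finding step, a hedged sketch using Sysinternals handle.exe (the tool's location is an assumption, and closing profile handles carries the risks described above):
# List processes holding ntuser.dat open; handle.exe searches for handles
# whose object name contains the given substring. (Path to the tool is assumed.)
& 'C:\Tools\Sysinternals\handle.exe' ntuser.dat
# A specific handle can then be closed with: handle.exe -c <hexHandle> -p <pid>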

AgeStore Fails to Remove Expired Debug Symbol Files

I'm trying to use AgeStore to remove some expired symbol files. I've written a PowerShell script in which the AgeStore command works sometimes, but not always.
For example, my symbol store contains symbol files dating back to 2010. I'd like to clean out the "expired" symbols because they are no longer needed. To that end, I use the -date command-line argument to specify "-date=10-01-2010". Additionally, I use the "-l" switch, which, per the documentation:
Causes AgeStore not to delete any files, but merely to list all the files that would be deleted if this same command were run without the -l option.
Here’s a snippet of the script code that runs…
$AgeStore = "$DebuggingToolsPath\AgeStore"
$asArgs = "`"$SymbolStorePath`" -date=$CutoffDate -s -y "
if ($WhatIf.IsPresent) { $asArgs += "-l" }
# determine size of the symbol store before delete operation.
Write-Verbose ">> Calculating current size of $SymbolStorePath before deletion.`n" -Verbose
">> $SymbolStorePath currently uses {0:0,0.00} GB`n" -f (((Get-ChildItem -R $SymbolStorePath | measure-object length -Sum ).Sum / 1GB))
Write-Verbose ">> Please wait...processing`n`n" -Verbose
& $AgeStore $asArgs
When the above code runs, it returns the following output…
processing all files last accessed before 10-01-2010 12:00 AM
0 bytes would be deleted
The program 'RemoveOldDebugSymbols.ps1: PowerShell Script' has exited with code 0 (0x0).
I have verified that there are symbol files with dates earlier than "10-01-2010" in the symbol store. I've subsequently tried the same experiment with a different cutoff date, "11-01-2015", and the output indicates that there are several files it would have deleted, but not those from 2010. I'm at a loss as to what may cause the discrepancy.
Has anyone tried to delete symbol files from a symbol store using AgeStore? If so, have you run into this problem? How did you resolve it?
I’ve tried to resolve this many different ways using AgeStore. For the sake of moving forward with a project, I’ve decided to rewrite the script to use the SymStore command with a delete transaction. Basically, I created a list of the debug symbol transactions that should be removed and wrote a loop that iterates over the list and deletes each entry one at a time.
Hope this is helpful for anyone who runs into the same problems.
EDIT: Per request: I cannot post the entire script, but I used the following code in a loop as a replacement for the AgeStore command.
# Delete a single transaction (by ID) from the symbol store.
$ssArgs = ".\symstore.exe del /i $SymbolEntryTransactionID /s `"$SymbolStorePath`""
Invoke-Expression $ssArgs
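A hedged sketch of the surrounding loop described above; how the transaction IDs are gathered is not shown in the question, so the list here is illustrative, and the call operator is used instead of Invoke-Expression to avoid re-parsing the command string:
# Illustrative transaction IDs; in practice these might be parsed from the
# store's 000Admin\history.txt file (assumption, not shown in the question).
$expiredTransactionIDs = @('0000000042', '0000000043')
foreach ($SymbolEntryTransactionID in $expiredTransactionIDs) {
    # symstore del removes a single transaction from the symbol store.
    & .\symstore.exe del /i $SymbolEntryTransactionID /s "$SymbolStorePath"
}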