if/then/else not seeing the else branch - PowerShell

I'm trying to teach myself some PowerShell scripting to automate tasks at work.
The latest task I tried to automate was creating a copy of user files in a network folder, so that users can easily relocate their files when swapping computers.
Problem is that my script always takes the first branch; it never picks the else option.
I'll walk you through part of the script. (I translated some words to make it easier to read.)
#the script asks whether you want to create a copy, or put a copy back
$question1 = Read-Host "What would you like to do with your backup? make/put back"
if ($question1 -match 'put back')
{
    Write-Host ''
    Write-Host 'Checking for backup'
    Write-Host ''
    #check for existing backup
    if (-Not (Test-Path -LiteralPath "G:\backupfolder"))
    {
        Write-Host "no backup has been found"
    }
    elseif (Test-Path -LiteralPath "G:\backupfolder")
    {
        Write-Host "a backup has been found."
        Copy-Item -Path "G:\backupfolder\pictures\" -Destination "C:\Users\$env:USERNAME\ ....
    }
}
Above is the part that runs when a user wants to put a backup back.
It checks whether a backup exists on the G: drive. If the script doesn't see a backup folder, it says so. If it DOES see the backup, it should copy the contents of the folders on the G: drive to the similarly named folders in the user's profile folder. Problem is: so far it always acts as if there is no G:\backupfolder to be found. It seems I'm doing something wrong with if/then/else.
I tried if-->else and if-->elseif, but neither works.
I also thought it could be Test-Path, so I tried adding -LiteralPath, but to no avail.
There is more to the script, but it's just more if/then/else. If I can get it to work on this part, I should be able to get the rest working. What am I not seeing/doing wrong?

Related

How do I run a PowerShell script on a file from the context menu?

I have written a PS script which replaces a specific string at the beginning of a file, adds another string to the end of the file, and finally writes out an XML file.
My code might be ugly (I am not a programmer/engineer or anything, just trying to make life easier for some family members who run a small business), but it works:
$content = Get-Content -Path 'C:\Users\blabla\Desktop\4440341930.txt'
$newContent = $content -replace 'text to be replaced','this is going to replace stuff'
$newContent | Set-Content -Path 'C:\Users\blabla\Desktop\4440341930.txt'
Add-Content C:\Users\blabla\Desktop\4440341930.txt '</Items>'
$x = [xml](Get-Content "C:\Users\blabla\Desktop\4440341930.txt")
$x.Save("C:\Users\blabla\Desktop\4440341930.xml")
I would like them to be able to run this script from the context menu, by right-clicking on a txt file. I did a little research and I roughly get what I have to add to the Registry; however, I'm not sure how to make it work. Since the path of each file they right-click on will be different, the hard-coded path in $content is not going to work.
What do I have to modify in my code to be able to add it to the Registry?
To accomplish this you need to:
1. Create a shortcut in the SendTo folder: "$DestinationPath\AppData\Roaming\Microsoft\Windows\SendTo"
2. Set the shortcut's target to: "C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe"
3. Set the arguments to: -File "d:\path\your PS1 file"
Then, in your program, read the file name passed by Explorer as:
Param
(
    [Parameter(Mandatory=$false)]
    [String] $FilePath
)
I've written a Setup Function that accomplishes steps 1-3 that I include in all my programs that I want on the context menu and then just run the program with the -Setup switch. We're not supposed to post developed code here, but if you can't figure it out let me know and I'll post it and hope I don't get killed for it. LOL!
UPDATE:
If you want to pass more than one file, you need to process the files a little differently. Delete the Param block above and use this type of code to retrieve the files:
# $MsgBox, $Buttons, $MBIcons and Show-PowerShell are assumed to be defined
# elsewhere in the author's script (e.g. WinForms MessageBox type accelerators).
If ($Args.Count -eq 0) {
    $Message = "No Files were passed from File Explorer."
    [Void]$MsgBox::Show(
        "$Message", "System Exit", $Buttons::OK, $MBIcons::Stop)
    Show-PowerShell
    Exit  # Comment out for testing from ISE!
}
Else {
    $FilesToCopy = $Args
}

Moving content from one external storage location to a network with email confirmation

This is my first non-very-basic attempt at PowerShell scripting, so please excuse any poor etiquette.
I have a need to transfer approximately 30GB of video data from USB-attached storage to a local network share. As I started this little project, I quickly identified that the checks I perform naturally when doing this task by hand need to be accounted for in the script. So my question is: how do I lay this all out to achieve my end goal of a simple copy while allowing for those checks?
This is what I have thus far:
### (The purpose of this script is to automate the offloading of the Driver Cameras during FP1 and FP2 on a Friday.
### Please remember that the cameras still need to be cleared after fire-ups on a Thursday.)
Write-Host -NoNewLine "This script will move the footage captured on the SD cards of the FWD and RWD cameras and copy them to a defined network share" -ForegroundColor Green `n
Start-Sleep -Seconds 10
# Execute the copy from the forward facing camera to the network and remove the local files once complete
Write-Host "FWD Camera copy in progress" -ForegroundColor White -BackgroundColor Magenta `n
Start-Sleep -Seconds 5
Get-ChildItem -Path "SourcePath" -Recurse |
Move-Item -destination "DestinationPath" -Verbose
# Execute the copy from the rearward facing camera to the network and remove the local files once complete
Write-Host "RWD Camera copy in progress" -ForegroundColor White -BackgroundColor Magenta `n
Start-Sleep -Seconds 5
Get-ChildItem -Path "SourcePath" -Recurse |
Move-Item -destination "DestinationPath" -Verbose
Write-Host "Sending email confirmation" -ForegroundColor Green `n
Send-MailMessage -smtpserver ServerIP -To "EmailAddress" -From "EmailAddress" -Subject "Camera offload" -body BodyHere -bodyasHTML
Write-Host "All tasks have completed" -ForegroundColor Green `n
Read-Host "Press any key to exit..."
exit
What I'd like to add is fault tolerance, with failures communicated dynamically via email. Find my criteria below:
There's a chance the cable connecting the storage to the machine running the script could become disconnected after only some of the items have moved; can I add something to handle this?
If a file transfer fails, how do I restart and track this? Can I add a loop to confirm all the items have been moved?
How do I reference a fault code to dynamically update the content of the email sent to the end user?
Finally, are there any other common-practice references I've missed that need to be included?
Many thanks in advance for any help.
This topic is a bit broad, but let me try to address your questions to help you get started. Of course I won't give you the whole code, just an explanation of what to use and why.
There's a chance the cable connecting the storage to the machine running the script could become disconnected after only some of the items have moved; can I add something to handle this?
First of all, as vonPryz said in the comments, use robocopy!
It should survive network interruptions (e.g. check this post). As a general approach, I'd first make sure the content was successfully copied before deleting it. For example, you could use something like this:
(Get-ChildItem -Path "C:\old\path" -Recurse).FullName.Replace("C:\old\path","D:\new\path") | Test-Path
Of course, the above only checks that a file with the same name exists, not that it has the same content. To compare whether the files are actually identical you could use Get-FileHash.
If a file transfer fails, how do I restart and track this? Can I add a loop to confirm all the items have been moved?
Partially answered above. Robocopy has this feature built in (see its /R and /W retry switches). And yes, you can add a loop to check, as sketched below.
How do I reference a fault code to dynamically update the content of the email sent to the end user?
Check the example from here:
robocopy b:\destinationdoesnotexist C:\documents /MIR
if ($lastexitcode -eq 0)
{
    Write-Host "Success"
}
else
{
    Write-Host "Failure with exit code:" $lastexitcode
}
Also, there's an article on the Microsoft side listing all the exit codes, which might be helpful for handling exceptions (note that robocopy exit codes below 8 indicate success; 8 and above indicate failures). All you have to do is add $LASTEXITCODE to the email body.

PowerShell script for copying and logging

I have searched for similar answers to this and I am still going round in circles.
I am new to any form of scripting, so this is a bastardised script. It basically copies log files and data from several locations to a remote server, making an append log each time it runs, but for the life of me I can't get it to work over the network; it only works locally, by changing $dirName to "D:\${env:computername}".
I would appreciate any feedback and help. This came about from a batch file I created, and I thought I'd try to progress in the dark arts.
The script is going to be scheduled to run when a machine connects to the network.
Thanks in advance.
UPDATE:
I get no output or error message in the log file at all, no text or data of any type. I am trying to copy from a local machine to a server in a VM scenario and it will not run, but if I run it on the local machine it copies C: to D: no problem. As I said, complete novice. The error is:
missing function body in function declaration
at line:2 char:1
<<<< c:\script\copy_log.ps1
+ CategoryInfo : ParserError: (:) [], ParentContainsErrorRecordException
+ FullyQualifiedErrorId : MissingFunctionBody
Apologies for the format; I had to type it out as I can't c+p from the unit.
UPDATE:
Figured out that the share on the other server was not set up correctly; fixed this, but the script still does not create a log file.
function CopyLogFiles ($sourcePackage) { #used this syntax as I couldn't get anything else to work and took it from here
    $dirName = "\\server\$sourcePackage" #server it is going to
    if (!(Test-Path $dirName)) { mkdir $dirName }
    Copy-Item -Path "C:\Program Files (x86)\ESS-T\$sourcePackage\Logs" -Destination $dirName -Recurse -Force
}

CopyLogFiles AppLauncher_V2.0.0.7
CopyLogFiles MMA_V2.0.0.12
CopyLogFiles MML_V2.0.0.4
CopyLogFiles SerialDataReader_V2.0.0.5

function Log-Write {
    Param ([string]$LogString)
    Add-Content $LogFile -Value $LogString
}

$LogFile = "C:\Program Files (x86)\ESS-T\.log"
Don't reinvent the wheel. Copy-Item is convenient for small cases, but Windows has shipped robocopy with every install since Windows Vista, and it's faster, more robust, and has logging built in via the /log:FILENAME switch.
https://technet.microsoft.com/en-us/library/cc733145.aspx
Go ahead and test for the existence of your destination and create it in your PowerShell script, but leave the logging of the copy operation to robocopy.
Edit: You aren't creating the log file because you don't define $LogFile until after the rest of your code has already run (and nothing ever calls Log-Write).

PowerShell Delete Locked File But Keep In Memory

Until recently, we've been deploying .exe applications by simply copying them manually to the destination folder on the server. Often, though, the file was already running at the time of deployment (it's called from a SQL Server job), sometimes even as multiple concurrent instances. We don't want to kill the process while it's running, and we can't wait for it to finish because it keeps being invoked.
As a workaround, what we've done is a "cut and paste" via Windows Explorer on the .exe file into another folder. Apparently this moves the file (effectively a delete) but keeps it in RAM, so the processes using it can continue without issues. Then we'd put the new file in its place, and it would be picked up by any later invocations.
We've now moved to an automated deploy tool and we need an automated way of doing this.
Stop-Process -name SomeProcess
in PowerShell would kill the process, which I don't want to do.
Is there a way to do this?
(C# would also be OK.)
Thanks,
function moverunningprocess($process, $path)
{
    if ($path.Substring($path.Length-1,1) -eq "\") { $path = $path.Substring(0,$path.Length-1) }
    $fullpath = $path + "\" + $process
    $movetopath = $path + "--Backups\$(Get-Date -f MM-dd-yyyy_HH_mm_ss)"
    $moveprocess = $false
    $runningprocess = Get-WmiObject Win32_Process -Filter "name = '$process'" | Select-Object CommandLine
    foreach ($tp in $runningprocess)
    {
        if ($tp.CommandLine -ne $null) {
            $p = $tp.CommandLine.Replace('"','').Trim()
            if ($p -eq $fullpath) { $moveprocess = $true }
        }
    }
    if ($moveprocess -eq $true)
    {
        New-Item -ItemType Directory -Force -Path $movetopath
        Move-Item -Path "$path\*.*" -Destination "$movetopath\"
    }
}

moverunningprocess "processname.exe" "D:\Programs\ServiceFolder"
Since you're using SQL Server to call the EXE, why not add a table that contains the path to the latest version of the file and modify the code that fires the EXE? That way, when a new version is rolled out, you can create a new folder, place the file in it, and update the table to point to it. Any still-active threads keep access to the old version, and any new threads pick up the new executable. You can then delete the old file once it's no longer needed.

PowerShell running under a service hangs on *.zip CopyHere

I'm running a Windows service (Hudson) which in turn spawns a PowerShell process to run my custom PowerShell commands. Part of my script unzips a file using CopyHere. When I run this script locally, I see a progress dialog pop up as the files are extracted and copied. However, when it runs under the service, it hangs at the point where the dialog would otherwise appear.
Here's the unzip portion of my script.
# Extract the contents of a zip file to a folder
function Extract-Zip {
    param([string]$zipFilePath, [string]$destination)

    if (Test-Path $zipFilePath) {
        $shellApplication = New-Object -com shell.application
        $zipFile = Get-Item $zipFilePath
        $zipFolder = $shellApplication.NameSpace($zipFile.FullName)
        $destinationFile = Get-Item $destination
        $destinationFolder = $shellApplication.NameSpace($destinationFile.FullName)
        $destinationFolder.CopyHere($zipFolder.Items())
    }
}
I suspect that because it's running under a headless service process (no interaction with the desktop), it's somehow stuck trying to display the dialog.
Is there a way around this?
If it's still relevant, I managed to fix this by passing 1564 as the CopyHere flags parameter.
So in my case the extract-zip function looks like:
function Expand-ZIPFile {
    param(
        $file, $destination
    )
    $shell = New-Object -com shell.application
    $zip = $shell.NameSpace($file)
    foreach ($item in $zip.Items())
    {
        $shell.NameSpace($destination).CopyHere($item, 1564)
        "$($item.Path) extracted"
    }
}
The description of 1564 can be found here - http://msdn.microsoft.com/en-us/library/windows/desktop/bb787866(v=vs.85).aspx. It is the sum of the following flags (4 + 8 + 16 + 512 + 1024 = 1564):
(4) Do not display a progress dialog box.
(8) Give the file being operated on a new name in a move, copy, or rename operation if a file with the target name already exists.
(16) Respond with "Yes to All" for any dialog box that is displayed.
(512) Do not confirm the creation of a new directory if the operation requires one to be created.
(1024) Do not display a user interface if an error occurs.
If this is running on Vista or Windows 7, UI popped up from a service isn't going to be seen by the end user, as you suspected. See this paper on Session 0 Isolation. However, does the progress dialog require user input? If not, I wouldn't think it would cause the service to hang. I would look for an option to disable the progress display. If you can't find one, try switching to another ZIP extractor. PSCX 1.2 comes with an Expand-Archive cmdlet, and I'm sure there are others available.
Looking at the PowerShell documentation, it looks like the -NonInteractive option may help here.