PowerShell script for copying and logging

I have searched for similar answers to this and I am still going round in circles.
I am new to any form of scripting, so this is a bastardised script. It basically copies log files and data from various locations to a remote server and appends to a log each time it runs, but for the life of me I can't get it to work over the network, only locally (by changing the destination to $dirName = "D:\${env:computername}").
I would appreciate any feedback and help. This came about from a batch file I created; I thought I'd try to progress in the dark arts.
The script is going to run as a scheduled task when a machine connects to the network.
Thanks in advance.
update
I get no output or error message in the log file at all, no text or data of any type. As for error messages: I am trying to copy from a local machine to a server in a VM scenario and it will not run, but if I run it on the local machine it will copy C: to D: no problem. As I said, complete novice.
Missing function body in function declaration
At line:2 char:1
<<<< C:\script\copy_log.ps1
+ CategoryInfo : ParserError: (:) [], ParentContainsErrorRecordException
+ FullyQualifiedErrorId : MissingFunctionBody
Apologies for the format; I had to type it out as I can't copy and paste from the unit.
UPDATE
Figured out that the share on the other server was not set up correctly. Fixed this, but the script still does not create a log file.
function CopyLogFiles ($sourcePackage) { # used this syntax as I couldn't get anything else to work, and took it from here
    $dirName = "\\server\$sourcePackage" # server it is going to
    if (!(Test-Path $dirName)) { mkdir $dirName }
    Copy-Item -Path "C:\Program Files (x86)\ESS-T\$sourcePackage\Logs" -Destination $dirName -Recurse -Force
}

CopyLogFiles AppLauncher_V2.0.0.7
CopyLogFiles MMA_V2.0.0.12
CopyLogFiles MML_V2.0.0.4
CopyLogFiles SerialDataReader_V2.0.0.5

function Log-Write {
    Param ([string]$LogString)
    Add-Content $LogFile -Value $LogString
}

$LogFile = "C:\Program Files (x86)\ESS-T\.log"

Don't reinvent the wheel. Copy-Item is convenient for small cases, but Windows has shipped robocopy with every install since Windows Vista, and it's faster, more robust, and has logging built in via the /log:FILENAME switch.
https://technet.microsoft.com/en-us/library/cc733145.aspx
Go ahead and test for the existence of your destination and create it manually in your PowerShell script, but leave the logging of the copy operation to robocopy.
Edit: You aren't creating the log file because you don't define $LogFile until after the rest of your code runs (and Log-Write is never actually called).
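A minimal sketch of that restructuring, reusing the share and package names from the question (untested against the original environment; the log path is an assumption):

$LogFile = "C:\Temp\ess-t-copy.log" # define the log path first; a writable location is assumed

function CopyLogFiles ($sourcePackage) {
    $dirName = "\\server\$sourcePackage"
    if (!(Test-Path $dirName)) { New-Item -ItemType Directory -Path $dirName | Out-Null }
    # /E copies subfolders (including empty ones); /log+: appends to the log instead of overwriting it
    robocopy "C:\Program Files (x86)\ESS-T\$sourcePackage\Logs" "$dirName\Logs" /E /log+:$LogFile
}

CopyLogFiles AppLauncher_V2.0.0.7
CopyLogFiles MMA_V2.0.0.12
CopyLogFiles MML_V2.0.0.4
CopyLogFiles SerialDataReader_V2.0.0.5

With /log+: every scheduled run appends to the same file, so no separate logging function is needed.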

Related

if then else not seeing else argument

I'm trying to teach myself some PowerShell scripting to automate tasks at work.
The latest task I tried to automate was creating a copy of user files to a network folder, so that users can easily relocate their files when swapping computers.
The problem is that my script always takes the first branch of the whole shebang; it never picks the "else" option.
I'll walk you through part of the script. (I translated some words to make it easier to read.)
# the script asks whether you want to create a copy, or put a copy back
$question1 = Read-Host "What would you like to do with your backup? make/put back"
if ($question1 -match 'put back')
{
    Write-Host ''
    Write-Host 'Checking for backup'
    Write-Host ''
    # check for existing backup
    if (-Not (Test-Path -LiteralPath "G:\backupfolder"))
    {
        Write-Host "no backup has been found"
    }
    ElseIf (Test-Path -LiteralPath "G:\backupfolder")
    {
        Write-Host "a backup has been found."
        Copy-Item -Path "G:\backupfolder\pictures\" -Destination "C:\Users\$env:USERNAME\ ....
    }
}
Above you see the part where a user would choose to put a backup back.
It checks whether a backup exists on the G: drive. If the script doesn't see a backup folder, it says so. If the script DOES see one, it should copy the contents of the folders on the G: drive to the similarly named folders in the user's profile folder. The problem is: so far it only acts as if there is never a G:\backupfolder to be found. It seems I'm doing something wrong with if/then/else.
I tried if-->Else, and if-->ElseIf, but neither works.
I also thought it could be Test-Path, so I tried adding -LiteralPath, but to no avail.
There is more to the script, but it's just more if/then/else. If I can get it to work on this part, I should be able to get the rest working. What am I not seeing/doing wrong?
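For reference, a minimal, self-contained version of the inner check (same G:\ path as in the question; Test-Path already returns $true or $false, so a plain else covers the second case):

if (Test-Path -LiteralPath 'G:\backupfolder')
{
    Write-Host 'a backup has been found.'
    # the Copy-Item calls would go here
}
else
{
    Write-Host 'no backup has been found'
}

If this snippet on its own still reports no backup, the problem is the path itself rather than the if/else structure, e.g. the G: drive mapping not existing in the session the script runs in (such as an elevated session).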

How to move file on remote server to another location on the same remote server using PowerShell

Currently, I run the following command to fetch the files to my local system:
Get-SCPFile -ComputerName $server `
    -Credential $credential `
    -RemoteFile ($origin + $target + ".csv") `
    -LocalFile ($destination + $target + ".csv")
It works as I'd like (although it sucks that I can't copy multiple files by regex and/or wildcard). However, after the operation has been carried out, I'd like to move the remote files to another directory on the remote server: instead of residing in $origin on $server, I want them placed in $origin + "/done" on the same server. Today I have to use PuTTY for that, but it would be so much more convenient to do it from PS.
Googling gave me a lot of material, but I couldn't make it work. At the moment I'm not sure whether I'm specifying the path incorrectly somehow, or whether it's simply not possible to use the plain commands when working against an external, secured Unix server.
For copying files, I can't use Copy-Item, hence the function Get-SCPFile. I imagine that remotely moving, renaming and listing items isn't possible either, for the same reason (whatever that reason is).
This example as well as this one produce the error "cannot find path", despite the same value being used successfully to copy the file with the script at the top. I'm pretty sure it's a misleading error message (not being entirely sure, though).
$file = "\\" + $server + "" + $origin + "" + $target + ".csv"
# \\L234231.vds.afm.se/var/trans/ut/drish/sxx/meta001.csv
Remove-Item $file -force
Many answers (like this one) are very simple, which supports my theory that the combination of Unix and security raises an extra challenge. Perhaps I'm wording the question insufficiently well.
There are also more advanced examples, still not working, just hanging the window with no error messages. I feel my competence prevents me from estimating the degree of screwuppiness in this approach.
In PowerShell you can create a PowerShell session (PSSession) from your system remotely on another system (and into another session on your own system, but that's details...) and execute your commands there.
You can create a PSSession with New-PSSession, but a lot of cmdlets have a -ComputerName parameter (or something similar) so that they can be executed remotely without creating a PSSession first.
A PSSession can be used with Enter-PSSession to get an interactive session, or with Invoke-Command to execute a ScriptBlock. That way you could test your Remove-Item command directly on the target server. Depending on the setup, you might need to use Linux syntax within the remote session.
Here is more info about PSSessions (about_PSSessions) and about using them over SSH to connect to Linux.
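A rough sketch of what that could look like over SSH, reusing the variable names from the question. This assumes PowerShell 6+ on your side and an SSH subsystem with PowerShell installed on the Unix server; the user name is a placeholder. Treat it as a starting point, not a tested solution:

# open an SSH-based remote session on the Unix server (hypothetical user name)
$session = New-PSSession -HostName $server -UserName 'transferuser'

# move the fetched file into the done directory, using Unix-style paths remotely
Invoke-Command -Session $session -ScriptBlock {
    Move-Item -Path ($using:origin + $using:target + ".csv") -Destination ($using:origin + "/done/")
}

Remove-PSSession $session

If PowerShell can't be installed on the Unix box, an alternative within the Posh-SSH module (the same module that provides Get-SCPFile) is to open an SSH session and run a plain mv command on the server.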

using invoke-command to create files on a remote server

I am new to PowerShell and all sorts of scripting, and have been landed with the following task.
I need to create a file on a remote server, based on a filename picked up on the local server, using Invoke-Command.
WinRM is configured and running on the remote server.
What I need to happen is the following:
On Server1, a trigger file is placed in a folder. PowerShell on Server1 passes the filename to PowerShell on Server2. PowerShell on Server2 then creates a file based on that name.
My head's been melted trawling through forums looking for inspiration; any help would be greatly appreciated.
Many thanks
Paul
I think if you're new to scripting, something that will add a lot of extra complexity is storing and handling credentials for Invoke-Command. It would be easier if you could make a shared folder on Server2 and just have one PowerShell script writing to that.
Either way, a fairly simple approach is a scheduled task on Server1 which runs a PowerShell script, with its own service user account, every 5 minutes (a sketch of registering such a task follows the script below).
The script does something like:
# Check the folder where the trigger file appears;
# assumes there will only ever be one file there, or nothing.
$triggerFile = Get-ChildItem -LiteralPath "c:\triggerfile\folder\path"

# if there was something found
if ($triggerFile)
{
    # do whatever your calculation is for the new filename "based on"
    # the trigger filename, and store the result. Here, just cutting
    # off the first character as an example.
    $newFileName = $triggerFile.Name.Substring(1)

    # if you can avoid Invoke-Command, directly make the new file on Server2
    New-Item -ItemType File -Path '\\server2\share\' -Name $newFileName
    # end here

    # if you can't avoid Invoke-Command, you need to have
    # pre-saved credentials, e.g. https://www.jaapbrasser.com/quickly-and-securely-storing-your-credentials-powershell/
    $Credential = Import-CliXml -LiteralPath "${env:userprofile}\server2-creds.xml"

    # and you need a script block to run on Server2 to make the file,
    # and it needs to reference the new filename from *this* side ("$using:")
    $scriptBlock = {
        New-Item -ItemType File -Path 'c:\destination' -Name $using:newFileName
    }

    # and then invoke the script block on Server2 with the credentials
    Invoke-Command -ComputerName 'Server2' -Credential $Credential -ScriptBlock $scriptBlock
    # end here

    # either way, remove the original trigger file afterwards, ready for the next run
    Remove-Item -LiteralPath $triggerFile.FullName -Force
}
(Untested)
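And the scheduled task mentioned above: a hedged sketch, in which the script path, task name, and service account are all assumptions (requires the ScheduledTasks module, Windows 8 / Server 2012 or later):

# run the watcher script every 5 minutes under a dedicated service account
$action  = New-ScheduledTaskAction -Execute 'powershell.exe' -Argument '-NoProfile -File C:\scripts\Watch-TriggerFile.ps1'
$trigger = New-ScheduledTaskTrigger -Once -At (Get-Date) -RepetitionInterval (New-TimeSpan -Minutes 5)
Register-ScheduledTask -TaskName 'TriggerFileWatcher' -Action $action -Trigger $trigger -User 'DOMAIN\svc-trigger' -Password 'REDACTED'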

An exception occurred during a WebClient request (Powershell)

I'm trying to copy a directory from our HTTP server using PowerShell; I would like to copy its entire contents, including subfolders, onto the local drive of my current server. The point of this is server deployment automation, so that my boss can run my PowerShell script and have an entire server set up with all our folders copied to its C: drive. This is the code I have:
$source = "http://servername/serverupdates/deploy/Program%20Files/"
$destination = "C:\Program Files"
$client = new-object System.Net.WebClient
$client.DownloadFile($source, $destination)
When I run the script in PowerShell ISE as admin, I get the error message:
Exception calling "DownloadFile" with "2" argument(s): "An exception occurred during a WebClient request."
Any suggestions on what could be going on?
I have also tried this block of code, but nothing happens when I run it, no errors or anything.
$source = "http://serverName/serverupdates/deploy/Program%20Files/"
$webclient = New-Object system.net.webclient
$destination = "c:/users/administrator/desktop/test/"
Function Copy-Folder([string]$source, [string]$destination, [bool]$recursive) {
if (!$(Test-Path($destination))) {
New-Item $destination -type directory -Force
}
# Get the file list from the web page
$webString = $webClient.DownloadString($source)
$lines = [Regex]::Split($webString, "<br>")
# Parse each line, looking for files and folders
foreach ($line in $lines) {
if ($line.ToUpper().Contains("HREF")) {
# File or Folder
if (!$line.ToUpper().Contains("[TO PARENT DIRECTORY]")) {
# Not Parent Folder entry
$items =[Regex]::Split($line, """")
$items = [Regex]::Split($items[2], "(>|<)")
$item = $items[2]
if ($line.ToLower().Contains("<dir&gt")) {
# Folder
if ($recursive) {
# Subfolder copy required
Copy-Folder "$source$item/" "$destination$item/" $recursive
} else {
# Subfolder copy not required
}
} else {
# File
$webClient.DownloadFile("$source$item", "$destination$item")
}
}
}
}
}
System.Net.WebClient.DownloadFile expects the second parameter to be a filename, not a directory. It can't download a directory recursively; it can only download a single file.
For the second part, run it line by line and see what happens. (Note that as posted, Copy-Folder is only ever defined, never called, which would explain seeing no output and no errors.) But parsing HTML to get paths is error-prone and generally advised against.
My advice: don't use HTTP for this. Copy the stuff from a file share; it's only one line and saves you a lot of trouble. If you have to use HTTP, download an archive and extract it in the target directory.
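The file-share one-liner could look like this (the UNC path is an assumption based on the URL in the question):

Copy-Item -Path '\\servername\serverupdates\deploy\Program Files\*' -Destination 'C:\Program Files' -Recurse -Force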
Besides @Gerald Schneider's answer, beware that the same WebException might also occur if the client process does not have the permissions needed to create the output file.
I would suggest the following strategy:
1. Download the file under a unique temporary name (e.g. with a .tmp extension) in the Windows temp folder, to avoid write-permission and other permission issues
2. Move the temporary file to the destination folder
3. Rename the temporary file to the destination filename
Hope it helps :-)
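A short sketch of that strategy, reusing $source from the question; the final file name is a placeholder:

# download to a unique temp file first, then move and rename in one step
$tmp = Join-Path $env:TEMP ([System.IO.Path]::GetRandomFileName() + '.tmp')
$client = New-Object System.Net.WebClient
$client.DownloadFile($source, $tmp)
Move-Item -Path $tmp -Destination 'C:\Program Files\myfile.zip' -Force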
In addition to the other answers, the error might also occur if you've run out of disk space.

Powershell Delete Locked File But Keep In Memory

Until recently, we'd been deploying .exe applications by simply copying them manually to the destination folder on the server. Often, though, the file was already running at the time of deployment (it is called from a SQL Server job), sometimes even in multiple instances. We don't want to kill the process while it's running, and we can't wait for it to finish, because it keeps being invoked, sometimes multiple times concurrently.
As a workaround, what we've done is a "cut and paste" via Windows Explorer on the .exe file into another folder. Apparently, what this does is move the file (effectively a delete) while keeping it in RAM, so that the processes using it can continue without issues. Then we'd put the new files there, to be picked up whenever a later program calls them.
We've now moved to an automated deploy tool and need an automated way of doing this.
Stop-Process -name SomeProcess
in PowerShell would kill the process, which I don't want to do.
Is there a way to do this?
(C# would also be OK.)
Thanks,
function moverunningprocess($process, $path)
{
    # normalize the folder path (strip a trailing backslash)
    if ($path.Substring($path.Length - 1, 1) -eq "\") { $path = $path.Substring(0, $path.Length - 1) }

    $fullpath = $path + "\" + $process
    $movetopath = $path + "--Backups\$(Get-Date -f MM-dd-yyyy_HH_mm_ss)"
    $moveprocess = $false

    # find running instances of the exe and check whether any was started from this folder
    $runningprocess = Get-WmiObject Win32_Process -Filter "name = '$process'" | Select-Object CommandLine
    foreach ($tp in $runningprocess)
    {
        if ($tp.CommandLine -ne $null)
        {
            $p = $tp.CommandLine.Replace('"', '').Trim()
            if ($p -eq $fullpath) { $moveprocess = $true }
        }
    }

    if ($moveprocess -eq $true)
    {
        # move the folder contents aside; the running image stays usable in memory
        New-Item -ItemType Directory -Force -Path $movetopath
        Move-Item -Path "$path\*.*" -Destination "$movetopath\"
    }
}

moverunningprocess "processname.exe" "D:\Programs\ServiceFolder"
Since you're utilizing SQL Server to call the EXE, why not add a table that contains the path to the latest version of the file, and modify the code that fires the EXE accordingly? That way, when a new version is rolled out, you can create a new folder, place the file in it, and update the table to point to it. Any still-active threads keep access to the old version, and any new threads pick up the new executable. You can then delete the old file once it's no longer needed.
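A hedged sketch of the launcher side of that idea; the server, database, table, and column names are all hypothetical, and Invoke-Sqlcmd comes from the SqlServer module:

# resolve the current exe path from a version table, then start it
$row = Invoke-Sqlcmd -ServerInstance 'SQLSERVER01' -Database 'Ops' `
    -Query "SELECT TOP 1 ExePath FROM dbo.ExeVersions ORDER BY DeployedOn DESC"
Start-Process -FilePath $row.ExePath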