I have a sample script for my diploma project and it uses a keypress mechanism, like so:
echo "A text prompting you to press Enter"
$key = [System.Windows.Input.Key]::Enter
do
{
$isCtrl = [System.Windows.Input.Keyboard]::IsKeyDown($key)
if ($isCtrl)
{
$query = Get-Childitem 'D:\' -Recurse | Where-Object {$_.Name -match "controller.st$"}
$name = $query.FullName  # full path of the matched file
(Get-Content $name).Replace('A := AND55_OUT OR AND56_OUT OR AND61_OUT;', 'A := AND55_OUT') | Set-Content $name
#this actually does the replacing in a file
echo "sample text"
Start-Sleep -Seconds 30
echo "sample text"
break
}
} while ($true)
and the rest of the script continues.
I use this converting script:
function Convert-PowerShellToBatch
{
param
(
[Parameter(Mandatory,ValueFromPipeline,ValueFromPipelineByPropertyName)]
[string]
[Alias("FullName")]
$Path
)
process
{
$encoded = [Convert]::ToBase64String([System.Text.Encoding]::Unicode.GetBytes((Get-Content -Path $Path -Raw -Encoding UTF8)))
$newPath = [Io.Path]::ChangeExtension($Path, ".bat")
"#echo off`npowershell.exe -NoExit -encodedCommand $encoded" | Set-Content -Path $newPath -Encoding Ascii
}
}
Get-ChildItem -Path C:\path\to\powershell\scripts -Filter *.ps1 |
Convert-PowerShellToBatch
I modified this for my case, but when I run the batch file, I get the following error:
Unable to find type [System.Windows.Input.Keyboard].
At line:7 char:11
+ $isCtrl = [System.Windows.Input.Keyboard]::IsKeyDown($key)
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+ CategoryInfo : InvalidOperation: (System.Windows.Input.Keyboard:TypeName) [], RuntimeException
+ FullyQualifiedErrorId : TypeNotFound
What can I do to overcome this?
tl;dr
As Jeroen Mostert notes in a comment, you need to load the assembly that contains the System.Windows.Input.Key and System.Windows.Input.Keyboard types into your session before you can use them, using Add-Type:
Add-Type -AssemblyName PresentationCore
echo "A text prompting you to press Enter"
$key = [System.Windows.Input.Key]::Enter
# ... the rest of your script.
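If you want to check whether these types are already available in a given session - for instance, to see the difference between hosts for yourself - one way is the -as [Type] conversion; a minimal sketch:
# Loads PresentationCore only if the WPF input types aren't already available;
# the -as [Type] conversion returns $null instead of throwing when a type is unknown.
if (-not ('System.Windows.Input.Keyboard' -as [Type])) {
    Add-Type -AssemblyName PresentationCore
}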
The conversion script is nifty, but it cannot detect such problems.
Before converting, you should always run your script in a new session started with -NoProfile, so as to ensure that the script doesn't accidentally rely on side effects from the current session, such as the required assembly having been loaded by different code beforehand:
# Run the script in a pristine session to ensure that it is self-contained.
powershell -noprofile -file somescript.ps1
In particular, use of the ISE may have hidden the problem in your case, as these types are available in ISE sessions by default (because the ISE is built on WPF, which these types belong to) - unlike in regular console windows.
This is just one of the behavioral differences between the ISE and regular console windows - differences that make the ISE, as convenient as it is, problematic.
Additionally, the ISE is no longer actively developed and cannot run PowerShell (Core) v6+ - see the bottom section of this answer for details.
Consider Visual Studio Code, combined with its PowerShell extension, as the actively maintained alternative.
Related
I have been given the task to write a PS script that will, from a list of machines in a text file:
Output the IP address of the machine
Get the version of the SCCM client on the machine
Produce a GPResult HTML file
OR
Indicate that the machine is offline
With a final stipulation of running the script in the background (Job)
I have the scriptblock that will do all of these things, and even have the output formatted like I want. What I cannot seem to do is get the scriptblock to call the source file from within the same directory as the script. I realize that I could simply hard-code the directories, but I want to be able to run this on any machine, in any directory, as I will need to use the script in multiple locations.
Any suggestions?
Code is as follows (Note: I am in the middle of trying stuff I gathered from other articles, so it has a fragment or two in it [most recent attempt was to specify working directory], but the core code is still there. I also had the idea to declare the scriptblock first, like you do with variables in other programming languages, but more for readability than anything else):
# List of commands to process in job
$ScrptBlk = {
param($wrkngdir)
Get-Content Hostnames.txt | ForEach-Object {
# Check to see if Host is online
IF ( Test-Connection $_ -count 1 -Quiet) {
# Get IP address, extracting only IP value
$addr = (test-connection $_ -count 1).IPV4Address
# Get SCCM version
$sccm = (Get-WmiObject -NameSpace Root\CCM -Class Sms_Client).ClientVersion
# Generate GPResult HTML file
Get-GPResultantSetOfPolicy -computer $_.name -reporttype HTML -path ".\GPRes\$_ GPResults.html"}
ELSE {
$addr = "Offline"
$sccm = " "}
$tbl = New-Object psobject -Property @{
Computername = $_
IPV4Address = $addr
SCCM_Version = $sccm}}}
# Create (or clear) output file
Echo "" > OnlineCheckResults.txt
# Create subdirectory, if it does not exist
IF (-Not (Get-Item .\GPRes)) { New-Item -ItemType dir ".\GPRes" }
# Get current working directory
$wrkngdir = $PSScriptRoot
# Execute script
Start-Job -name "OnlineCheck" -ScriptBlock $ScrptBlk -ArgumentList $wrkngdir
# Let job run
Wait-Job OnlineCheck
# Get results of job
$results = Receive-Job OnlineCheck
# Output results to file
$results >> OnlineCheckResults.txt | FT Computername,IPV4Address,SCCM_Version
I appreciate any help you may have to offer.
Cheers.
~DavidM~
EDIT
Thanks for all the help. Setting the working directory works, but I am now getting a new error. It has no line reference, so I am not sure where the problem might be. New code below. I have moved the scriptblock to the bottom, so it is separate from the rest of the code. I thought that might be a bit tidier. I do apologize for my earlier code formatting. I will attempt to do better with the new example.
# Store working directory
$getwkdir = $PWD.Path
# Create (or clear) output file
Write-Output "" > OnlineCheckResults.txt
# Create subdirectory, if it does not exist. Delete and recreate if it does
IF (Get-Item .\GPRes) {
Remove-Item -ItemType dir "GPRes"
New-Item -ItemType dir "GPRes"}
ELSE{
New-Item -ItemType dir "GPRes"}
# Start the job
Start-Job -name "OnlineCheck" -ScriptBlock $ScrptBlk -ArgumentList $getwkdir
# Let job run
Wait-Job OnlineCheck
# Get results of job
$results = Receive-Job OnlineCheck
# Output results to file
$results >> OnlineCheckResults.txt | FT Computername,IPV4Address,SCCM_Version
$ScrptBlk = {
param($wrkngdir)
Set-Location $wrkngdir
Get-Content Hostnames.txt | ForEach-Object {
IF ( Test-Connection $_ -count 1 -Quiet) {
# Get IP address, extracting only IP value
$addr = (test-connection $_ -count 1).IPV4Address
# Get SCCM version
$sccm = (Get-WmiObject -NameSpace Root\CCM -Class Sms_Client).ClientVersion
Get-GPResultantSetOfPolicy -computer $_.name -reporttype HTML -path ".\GPRes\$_ GPResults.html"}
ELSE {
$addr = "Offline"
$sccm = " "}
$tbl = New-Object psobject -Property @{
Computername = $_
IPV4Address = $addr
SCCM_Version = $sccm}}}
Error text:
Cannot validate argument on parameter 'ComputerName'. The argument is null or empty. Provide an argument that
is not null or empty, and then try the command again.
+ CategoryInfo : InvalidData: (:) [Test-Connection], ParameterBindingValidationException
+ FullyQualifiedErrorId : ParameterArgumentValidationError,Microsoft.PowerShell.Commands.TestConnectionCommand
+ PSComputerName : localhost
As Theo observes, you're on the right track by trying to pass the desired working directory to the script block via -ArgumentList $wrkngdir, but you're then not using that argument inside your script block.
All it takes is to use Set-Location at the start of your script block to switch to the working directory that was passed:
$ScrptBlk = {
param($wrkngdir)
# Change to the specified working dir.
Set-Location $wrkngdir
# ... Get-Content Hostnames.txt | ...
}
# Start the job and pass the directory in which this script is located as the working dir.
Start-Job -name "OnlineCheck" -ScriptBlock $ScrptBlk -ArgumentList $PSScriptRoot
In PSv3+, you can simplify the solution by using the $using: scope, which allows you to reference variables in the caller's scope directly. Here's a simplified example that you can run directly from the prompt ($PWD serves as the desired working directory, because $PSScriptRoot isn't defined at the prompt, i.e. in the global scope):
Start-Job -ScriptBlock { Set-Location $using:PWD; Get-Location } |
Receive-Job -Wait -AutoRemove
If you invoke the above command from, say, C:\tmp, the output will reflect that path too, proving that the background job ran in the same working directory as the caller.
Working directories in PowerShell background jobs:
Before PowerShell 7.0, starting background jobs with Start-Job uses the directory returned by [environment]::GetFolderPath('MyDocuments') as the initial working directory, which on Windows is typically $HOME\Documents, whereas it is just $HOME on Unix-like platforms (in PowerShell Core).
Setting the working directory for the background job via Start-Job's -InitializationScript script-block argument with a $using: reference - e.g., Start-Job -InitializationScript { Set-Location $using:PWD } { ... } - should work, but doesn't in Windows PowerShell v5.1 / PowerShell (Core) 6.x, due to a bug (the bug is still present in PowerShell 7.0, but there you can use -WorkingDirectory instead).
In PowerShell (Core) 7+, Start-Job now sensibly defaults to the caller's working directory and also supports a -WorkingDirectory parameter to simplify specifying a working directory (illustrated in the sketch further below).
In PowerShell (Core) 6+ you can alternatively start background jobs with a post-positional & - the same way that POSIX-like shells such as bash do - in which case the caller's working directory is inherited; e.g.:
# PS Core only:
# Outputs the caller's working dir., proving that the background job
# inherited the caller's working dir.
(Get-Location &) | Receive-Job -Wait -AutoRemove
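For completeness, here is what the PowerShell 7+ -WorkingDirectory parameter mentioned above looks like in use - a minimal sketch that simply echoes the job's working directory:
# PowerShell 7+ only: -WorkingDirectory sets the job's initial location directly.
Start-Job -WorkingDirectory $PWD -ScriptBlock { Get-Location } |
    Receive-Job -Wait -AutoRemoveJob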
If I understand correctly, I think the issue you are having is that the working directory path is different inside the execution of the script block. This commonly happens when you execute scripts from scheduled tasks or pass scripts to powershell.exe.
To prove this, let's do a simple PowerShell code:
# Change the current directory to the root of C: to illustrate what's going on
cd C:\
Get-Location
Path
----
C:\
#Execute Script Block
$ScriptBlock = { Get-Location }
$Job = Start-Job -ScriptBlock $ScriptBlock
Receive-Job $Job
Path
----
C:\Users\HAL9256\Documents
As you can see, the current path inside the execution of the script block is different from where you executed it. Inside scheduled tasks I have also seen paths like C:\Windows\System32.
Since you are trying to reference everything by relative paths inside the script block, it won't find anything. One solution is to use the passed parameter to change your working directory to something known first.
Also, I would use $PWD.Path to get the current working directory instead of $PSScriptRoot, as $PSScriptRoot is empty if you run the code from the console.
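Putting those two points together, a minimal sketch (the parameter name is illustrative):
# Pass the caller's working directory into the job and switch to it first,
# so that relative paths (like .\Hostnames.txt) resolve as expected.
$ScriptBlock = {
    param($workingDir)    # illustrative parameter name
    Set-Location $workingDir
    Get-Location          # proves the job now runs in the caller's directory
}
Start-Job -Name "OnlineCheck" -ScriptBlock $ScriptBlock -ArgumentList $PWD.Path |
    Receive-Job -Wait -AutoRemoveJob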
I have a cute little script in my $PROFILE that helps me start Notepad++ with a file of my choosing.
function Edit{
param([string]$file = " ")
Start-Process "C:\Program Files\Notepad++\notepad++.exe" -ArgumentList $file
}
It's worked great until recently, when I started jumping between different systems. I discovered that NPP is installed in C:\Program Files on some systems but in C:\Program Files (x86) on others. I can edit the script to adapt it, but having done so a gazillion times (i.e. 5 to this point), I got sick and tired of it and realized that I have to automate this insanity.
Knowing little about scripting, I wonder what I should Google for. Does best practice dictate using exception handling in such a case or is it more appropriate to go for conditional expressions?
According to Get-Host | Select-Object Version I'm running version 5.1, if that's of any significance. Perhaps there's an even neater method I'm unaware of? Relying on an environment variable? I'd also prefer not to use a method that works but belongs to an older version of PS if there's a more convenient approach in a later one. (And given my experience on the subject, I can't tell a duck from a goose.)
I would use conditionals for this one.
One option is to test the path directly if you know for certain it is in a particular location.
Hard coded paths:
function Edit{
param([string]$file = " ")
$32bit = "C:\Program Files (x86)\Notepad++\notepad++.exe"
$64bit = "C:\Program Files\Notepad++\notepad++.exe"
if (Test-Path $32bit) {Start-Process -FilePath $32bit -ArgumentList $file}
elseif (Test-Path $64bit) {Start-Process -FilePath $64bit -ArgumentList $file}
else {Write-Error -Exception "NotePad++ not found."}
}
Another option is pulling path information from registry keys, if they're available:
function Edit{
param([string]$file = " ")
$32bit = (Get-ItemProperty -Path 'HKLM:\SOFTWARE\Notepad++\' -ErrorAction SilentlyContinue).("(default)")
$64bit = (Get-ItemProperty -Path 'HKLM:\SOFTWARE\WOW6432Node\Notepad++\' -ErrorAction SilentlyContinue).("(default)")
if ($32bit) {Start-Process -FilePath "$32bit\notepad++.exe" -ArgumentList $file}
elseif ($64bit) {Start-Process -FilePath "$64bit\notepad++.exe" -ArgumentList $file}
else {Write-Error -Exception "NotePad++ not found."}
}
Based on the great help from @BoogaRoo (who should get some +1 for effort), and asked by the same to post my own version of the answer, I'm going against my reluctance to post answers to my own questions, despite a strong sensation of tackiness.
Here is my final version, which takes into account systems that lack NP++ but should still open an editor of some kind:
function Edit{
param([string]$file = " ")
$executable = "Notepad++\notepad++.exe"
$32bit = "C:\Program Files (x86)\" + $executable
$64bit = "C:\Program Files\" + $executable
$target = "notepad"
if(Test-Path $32bit) { $target = $32bit }
if(Test-Path $64bit) { $target = $64bit }
Start-Process $target -ArgumentList $file
}
Let me offer a streamlined version that also supports passing multiple files:
function Edit {
param(
# Allow passing multiple files, both with explicit array syntax (`,`-separated)
# or as indiv. arguments.
[Parameter(ValueFromRemainingArguments)]
[string[]] $File
)
# Construct the potential Notepad++ paths.
# Note: `-replace '$'` is a trick to append a string to each element
# of an array.
$exePaths = $env:ProgramFiles, ${env:ProgramFiles(x86)} -replace '$', '\Notepad++\notepad++.exe'
# See which one, if any, exists, using Get-Command.
$exeToUse = Get-Command -ErrorAction Ignore $exePaths | Select-Object -First 1
# Fall back to Notepad.
if (-not $exeToUse) { $exeToUse = 'notepad.exe' }
# Invoke whatever editor was found with the optional file(s).
# Note that both Notepad++ and NotePad can be invoked directly
# without blocking subsequent commands, so there is no need for `Start-Process`,
# whose argument processing is buggy.
& $exeToUse $File
}
An array of potential executable paths is passed to Get-Command, which returns a command-info object for each actual executable found, if any.
-ErrorAction Ignore quietly ignores any errors.
Select-Object -First 1 extracts the first command-info object, if present, from the Get-Command output; this is necessary to guard against the (perhaps unlikely) case where the executable exists in both locations.
$exeToUse receives $null (effectively) if Get-Command produces no output, in which case Boolean expression -not $exeToUse evaluates to $true, causing the fallback to notepad.exe to take effect.
Both command names (strings) and command-info objects (instances of System.Management.Automation.CommandInfo or derived classes, as returned by Get-Command) can be executed via &, the call operator.
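As a quick usage sketch (the file names are hypothetical), ValueFromRemainingArguments means both of the following invocations work:
# Opens both (hypothetical) files in Notepad++, or in Notepad as the fallback.
Edit .\notes.txt .\report.txt     # individual arguments
Edit .\notes.txt, .\report.txt    # explicit array syntax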
What is the best way to execute a .ps1 script?
Say I'm a user who has no idea about PowerShell, but I want to just double-click the script and let it run.
Could you please advise on the best practice?
Thank you.
[CmdletBinding()]
Param(
[Parameter(Mandatory = $False,
ValueFromPipeline = $True,
ValueFromPipelineByPropertyName = $True,
HelpMessage = "Provide Files Path.")]
[string]$FilesPath = 'C:\Users\myusername\OneDrive - mycompany\mycompany\Sales & Marketing\Sales Reports',
[Parameter(Mandatory = $False,
ValueFromPipeline = $True,
ValueFromPipelineByPropertyName = $True,
HelpMessage = "Provide Sheet Name.")]
[string]$SheetName = 'Expenses'
)
Try
{
$Files = Get-ChildItem -Path $FilesPath -Include *.xlsx, *.xls, *.xlsm -Recurse
$Counter = $Files.Count
$Array = @()
$OutPutFilePath = (Join-Path $FilesPath -ChildPath "Exported-ExcelData.csv")
Remove-Item -Path $OutPutFilePath -Force -ErrorAction SilentlyContinue
ForEach ($File In $Files)
{
Write-Verbose -Message "Accessing File $($File.Name) and Exporting Data from Sheet $SheetName. Remaining $Counter Files." -Verbose
$Counter -= 1
$AllData = Import-Excel -Path $File.FullName -WorksheetName $SheetName -NoHeader
$i = 0
ForEach ($Data In $AllData)
{
$ArrayData = "" | Select-Object "P1", "P2", "P3", "P4", "P5", "P6"
$ArrayData.P1 = $Data[0].P1
$ArrayData.P2 = $Data[0].P2
$ArrayData.P3 = $Data[0].P3
$ArrayData.P4 = $File.Name
$ArrayData.P5 = $File.FullName
$ArrayData.P6 = ($i += 1)
$Array += $ArrayData
}
}
$Array | Export-Csv -Path $OutPutFilePath -Append -NoTypeInformation
}
Catch
{
$ErrorLog = "Error On " + (Get-Date) + ";$($_.Exception.Message) - Line Number: $($_.InvocationInfo.ScriptLineNumber)"
Write-Error "$($_.Exception.Message) - Line Number: $($_.InvocationInfo.ScriptLineNumber)"
}
Finally
{
Write-Host "Process has been completed!" -ForegroundColor Green
Read-Host "Press any key to continue..."
}
Create a batch file.
Launching your PS1 file directly can be problematic because of the PowerShell execution policy, and PS1 scripts won't run by default when double-clicked on a Windows machine.
However, if you create a batch file that references your PS1 script, then you're all set.
The -ExecutionPolicy Bypass switch ignores the current system policy, so your script can run without issues.
PowerShell -NoProfile -ExecutionPolicy Bypass -Command "& 'C:\Users\SO\Desktop\YourScript.ps1'"
Below are two batch commands using a relative path.
:: Capture the folder this batch file lives in (trailing backslash included)
SET mypath=%~dp0
:: Launch the script
PowerShell -NoProfile -ExecutionPolicy Bypass -Command "& '%mypath:~0,-1%\data\install.ps1'"
:: Same thing, but force the script to run as admin
PowerShell -NoProfile -ExecutionPolicy Bypass -Command "& {Start-Process PowerShell -ArgumentList '-NoProfile -ExecutionPolicy Bypass -NoExit -File ""%mypath:~0,-1%\data\install.ps1""' -Verb RunAs}"
In these two variants, the PS1 I am launching is in a subfolder called data and the script is called install.ps1.
Therefore, I do not have to hardcode the script path in the batch file (you could put the PS1 at the same level as the batch file). In my example, I actually wanted only the batch file at the first folder level and everything else "hidden from view" in a subfolder, so the user does not have to think about which file to execute.
I would recommend compiling the script; I believe that is the only way to reliably make a script double-clickable. There are other ways, but they require weakening security and, partly as a result, are hard to run across different systems.
The easiest way I've found to compile PowerShell is by using PowerGUI. You can paste your script and compile a portable .exe file that can be double-clicked. It doesn't require elevated permissions. Their website no longer hosts it, so you will need to find a 3rd-party download.
You can also find a Visual Studio extension or one of Sapien's products, however both will cost money and aren't any better for just creating a basic double-clickable script.
Additionally, you will need to modify your script to prompt for any input it needs - assuming you wrote your code, that should be easy to accomplish. Basically, if you were to click Run in the ISE, the script should never require you to type in anything that it did not directly ask you for.
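For example, here is a hedged sketch of what such a prompt could look like for the script in the question (it reuses the question's $FilesPath parameter; the prompt wording is illustrative):
# If no folder was supplied, ask for it interactively so the compiled .exe
# can simply be double-clicked without any command-line arguments.
if ([string]::IsNullOrWhiteSpace($FilesPath)) {
    $FilesPath = Read-Host "Enter the folder that contains the Excel files"
}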
I have created a PowerShell script, but for some reason the "Start-Process"-cmdlet does not seem to be behaving correctly. Here is my code:
[string]$ListOfProjectFiles = "project_file*."
[string]$arg = "project_file"
[string]$log = "C:\Work\output.log"
[string]$error = "C:\Work\error.log"
Get-ChildItem $PSScriptRoot -filter $ListOfProjectFiles | `
ForEach-Object {
[string]$OldFileName = $_.Name
[string]$Identifier = ($_.Name).Substring(($_.Name).LastIndexOf("_") + 1)
Rename-Item $PSScriptRoot\$_ -NewName "project_file"
Start-Process "$PSScriptRoot\MyExecutable.exe" ` #This line causes my headaches.
-ArgumentList $arg `
-RedirectStandardError $error `
-RedirectStandardOutput $log `
-Wait
Remove-Item "C:\Work\output.log", "C:\Work\error.log"
Rename-Item "$PSScriptRoot\project_file" -NewName $OldFileName
}
The main issue is, is that on my machine the program runs, but only after I added the -Wait switch. I found out that if I stepped through my code in the PowerShell-ISE, MyExecutable.exe did recognise the argument and ran the program properly, while if I just ran the script without breakpoints, it would error as if it could not parse the $arg value. Adding the -Wait switch seemed to solve the problem on my machine.
On the machine of my colleague, MyExecutable.exe does not recognise the output of the -ArgumentList $arg part: it just quits with an error stating that the required argument (which should be "project_file") could not be found.
I have tried to hard-code the "project_file" part, but that is no success. I have also been playing around with the other switches for the Start-Process-cmdlet, but nothing works. I am a bit at a loss, quite new to PowerShell, but totally confused why it behaves differently on different computers.
What am I doing wrong?
If you do not use the -Wait switch, your script continues to run while MyExecutable.exe is still executing. In particular, you can rename the file back (Rename-Item "$PSScriptRoot\project_file" -NewName $OldFileName) before your program opens it.
You pass plain project_file as the argument to your program. What if the current working directory is not $PSScriptRoot? Is MyExecutable.exe designed to look for files in the directory where the .exe is located, in addition to or instead of the current working directory? I recommend supplying the full path instead:
[string]$arg = "`"$PSScriptRoot\project_file`""
Do not just convert FileInfo or DirectoryInfo objects to strings; that is not guaranteed to return the full path or just the file name. Explicitly ask for the Name or FullName property value, depending on what you want.
Rename-Item $_.FullName -NewName "project_file"
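Putting these points together, here is a hedged sketch of how the loop from the question could look (paths and names are taken from the question; adjust as needed):
Get-ChildItem -Path $PSScriptRoot -Filter 'project_file*.' | ForEach-Object {
    $OldFileName = $_.Name                      # ask for the property explicitly
    Rename-Item -Path $_.FullName -NewName 'project_file'

    # Supply the full path as the argument and wait for the program to finish,
    # so the file isn't renamed back while MyExecutable.exe is still using it.
    Start-Process -FilePath "$PSScriptRoot\MyExecutable.exe" `
                  -ArgumentList "`"$PSScriptRoot\project_file`"" `
                  -RedirectStandardOutput 'C:\Work\output.log' `
                  -RedirectStandardError 'C:\Work\error.log' `
                  -Wait

    Remove-Item 'C:\Work\output.log', 'C:\Work\error.log'
    Rename-Item -Path "$PSScriptRoot\project_file" -NewName $OldFileName
}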
Had a strange issue crop up last week and can't find a solution. I have a modified version of this script: http://technet.microsoft.com/en-us/magazine/2009.07.heyscriptingguy.aspx running my log backups across 500 servers. Last week, 6 servers in the EMEA region decided that they didn't want to allow the files to be copied over.
The script processes one server at a time and is failing at the Copy-Item cmdlet. The file does exist on the remote server. Take a look at the output below:
PS C:\> BackUpAndClearEventLogsDebug.ps1 -computers bud1s001 -LogsArchive "\\srv1s001\d$\Log_Backups\Test"
+ Processing bud1s001
This is the Backupeventlogs function and we're about to copy \\bud1s001\c$\Windows\temp\bud1s001\Application.evt
This is the Copyeventlogs function and we're about to copy \\bud1s001\c$\Windows\temp\bud1s001\Application.evt
Copy-Item : Cannot find path '\\bud1s001\c$\Windows\temp\bud1s001\Application.evt' because it does not exist.
At C:\Sched\LOG_BACKUPS\BackUpAndClearEventLogsDebug.ps1:132 char:2
+ Copy-Item -path $path -dest "$LogsArchive\$folder"
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+ CategoryInfo : ObjectNotFound: (\\bud1s001\c$\W...Application.evt:String) [Copy-Item], ItemNotFoundException
+ FullyQualifiedErrorId : PathNotFound,Microsoft.PowerShell.Commands.CopyItemCommand
PS C:\> ls \\bud1s001\c$\Windows\temp\bud1s001\Application.evt
Directory: \\bud1s001\c$\Windows\temp\bud1s001
Mode LastWriteTime Length Name
---- ------------- ------ ----
-a--- 2/17/2014 10:17 AM 5242836 Application.evt
If I set the variables in an interactive shell I can run the copy-item cmd with no problems. If I use the same arguments but not variables I can copy the files. It is only when run within the script that the failure happens.
I've tried running under different accounts, all of which are administrators. I changed the -path argument to -LiteralPath with no change. I've added output to make sure that the object isn't getting its value changed, and everything looks good. The servers that are failing could have different language settings since they are not in the US, but the variable looks good. I'm only having issues with Windows 2003 servers, no issues with 2008 servers.
I'm out of ideas, open to suggestions.
Here is the exact function from the script with calling argument
Copy-EventLogsToArchive -path $path -Folder $Folder
Function Copy-EventLogsToArchive($path, $folder)
{
Write-Host " "
write-host "This is the Copyeventlogs function and we're about to copy $path"
Copy-Item -path $path -dest "$LogsArchive\$folder"
} # end Copy-EventLogsToArchive
Have you tried using Test-Path to verify that the source file path exists before calling Copy-Item?
Function Copy-EventLogsToArchive
{
[CmdletBinding()]
param ( [string] $Path, [string] $folder)
Write-Host -Object "`nThis is the Copyeventlogs function and we're about to copy $path";
if (Test-Path -Path $Path) {
Copy-Item -Path $Path -Destination "$LogsArchive\$folder";
}
} # end Copy-EventLogsToArchive
Copy-EventLogsToArchive -Path $Path -Folder $Folder;
The behavior you are describing is rather odd. How about bypassing the Copy-Item cmdlet, and going straight to the .NET Framework to copy the file?
[System.IO.File]::Copy($Path, "$LogsArchive\$Folder");
Side note: you should generally put your function calls after the function definition in PowerShell. If the call occurs before the definition and you change the definition, then re-running the script in the same session will call the old definition that is still in memory from the previous run.