I've written a PowerShell script to automate a process we'd been doing manually, but for some reason it doesn't work with a .csv of machines exported from AD. I query AD for machines with a certain property, output the results to a .csv, read that into an array, and then run a cmdlet against the machines in the array. The cmdlet doesn't fail, but it always returns a negative result. For instance, Test-Connection -ComputerName $computer returns the proper results when I run it against a computer manually, against a manually created array, or against an array built from a manually created .csv. However, if I run the same command against a .csv created by the Get-ADComputer cmdlet, every machine returns false even when it's up and running. Any ideas what would cause this?
Thanks for the responses. I was actually able to get assistance elsewhere and figure it out. Because the hostnames vary in length, Get-ADComputer was padding them with trailing spaces so they were all the same length, which wasn't visible in the .csv. After removing the trailing spaces from each hostname, it worked normally.
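For anyone hitting the same padding problem, a minimal sketch of the fix is to trim each hostname as it comes out of Import-Csv. The CSV below is generated on the fly just to simulate a padded export; the Name column is an assumption, so adjust it to match your file:

```powershell
# Build a sample CSV whose names carry trailing spaces, like the padded export.
$csvPath = Join-Path $env:TEMP 'machines.csv'
@'
Name
host1   
host2 
'@ | Set-Content $csvPath

# Trim each imported name before using it with any cmdlet.
$computers = Import-Csv $csvPath | ForEach-Object { $_.Name.Trim() }

# $computers now holds clean names, safe for something like:
# foreach ($c in $computers) { Test-Connection -ComputerName $c -Count 1 -Quiet }
```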
Related
I have a PS script that collects server metrics. It works absolutely fine when I run it single-threaded. However, when I set it up to run multi-threaded, I receive the error below.
An error occurred while starting the background process. Error reported: The filename or extension is too long.
$jobs = $serverList | ForEach-Object {
    Start-Job -InitializationScript $functions -ArgumentList $_ -ScriptBlock {
        # Script Line-1
        # Script Line-2
        # Script Line-3
        # Script Line-4
        # Script Line-5
        # Script Line-6
        # Script Line-7
    }
}
$functions has 10 functions, totaling about 350 lines.
Based on the research I have done, this error comes up because the initialization script has too many lines. Let me know how I can fix this without truncating any of the script.
First update:
I was able to resolve the issue by reducing the size of the scriptblock. Based on this Site, we can only send about 12 KB of data in the scriptblock. Pretty weird issue. Any suggestions on how to overcome this size limitation?
For now the issue is resolved by removing unwanted spaces and some additional logging that was being written. However, if the script needs to be extended further, this workaround may not hold. Looking forward to suggestions.
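One way around the limit (a sketch, not the original script) is to skip -InitializationScript entirely and dot-source the shared functions from a .ps1 file inside each job, so only a short file path travels to the background process instead of all 350 lines. The paths and the Get-ServerMetrics function below are assumptions for illustration:

```powershell
# Write a stand-in functions file; in practice this would be your real
# 350-line library saved once to disk.
$functionsPath = Join-Path $env:TEMP 'Functions.ps1'
'function Get-ServerMetrics { param($ComputerName) "metrics for $ComputerName" }' |
    Set-Content $functionsPath

$serverList = 'server01', 'server02'   # placeholder server names

$jobs = $serverList | ForEach-Object {
    Start-Job -ArgumentList $_, $functionsPath -ScriptBlock {
        param($server, $fnPath)
        . $fnPath                              # load shared functions inside the job
        Get-ServerMetrics -ComputerName $server
    }
}

$results = $jobs | Wait-Job | Receive-Job
$jobs | Remove-Job
```

Because each job loads the functions from disk in its own session, the payload sent to the background process stays tiny no matter how large the library grows.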
I am not a scripter at all. Someone else had created this script for me and it has previously worked. The only thing that has changed is the drive letter (which I did change in the script - it is currently drive E). But it is not working now. All it is supposed to do is pull back a list of files in a specified folder and save it as a text file in that directory; in this case, it's my karaoke song collection.
When I run the script now, I get:
Get-Process : A positional parameter cannot be found that accepts argument Get-ChildItem.
Here is the original script:
PS C:\Users\Tina> Get-ChildItem "F:\My Music\Karaoke\*.*" | Set-Content "F:\My Music\Karaoke\test.txt"
I'd like to make it so that it just pulls back all .mp3's, if that's possible, too. Thanks in advance for your help!
Since you appear to be copying and pasting this into the command line, I'll assume a typo caused the issue. After a couple of quick tests to guess what the accident might have been, I wasn't able to replicate it exactly. Not being a scripter might make this harder, but I recommend saving this code to a .ps1 file so that you can just double-click it.
Get-ChildItem "F:\My Music\Karaoke\*.mp3" | Set-Content "F:\My Music\Karaoke\test.txt"
Warning
In order for this file to work for you, you have to allow PowerShell to execute it. Run the shell as administrator once and run this code:
Set-ExecutionPolicy RemoteSigned
It will allow your script to run. Keep in mind this is a site for scripters to get help, so you should expect answers like this.
I'm in the process of creating a PowerShell script to check OU users against users already configured for file share archiving, but I've hit a stumbling block. I can query AD to get a list of users per OU along with their home directories, dumping all of the details out to text files for logging and for basing subsequent queries on. Once I have these details, I try to run a console command, (Enterprise Vault) ArchivePoints.exe, passing variables to it. The command would usually be:
Archivepoints.exe find \\fopserver045v\ouone_users$
When I try to run the following code I get an error.
$app="D:\Enterprise Vault\ArchivePoints.exe"
$EVArg = "find"
$VolLine = "\\fopserver045v\ouone_users_r$"
Invoke-Item "$app $EVArg $VolLine"
Invoke-Item : Cannot find path 'D:\Enterprise Vault\ArchivePoints.exe find \fopserver045v\ouone_users_r$' because it does not exist.
At first I thought it was missing the first backslash of the UNC path that was causing the issue but I'm no longer sure.
The script and command run on the EV server, and the UNC path doesn't actually go to the server; it's only a reference path within EV, so it's not a credentials issue.
I need to be able to log the output to file too if possible.
What should the code look like and should I be using invoke-command or Invoke-Expression instead ?
Thanks
Don't use Invoke-Item. External commands should be run using the call operator (&). You can use splatting for the argument list.
$app="D:\Enterprise Vault\ArchivePoints.exe"
$arguments = "find", "\\fopserver045v\ouone_users_r$"
& $app @arguments
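Since the question also asks about logging, one hedged approach is to pipe the external command's output through Tee-Object, which both displays it and writes it to a file. The log path below is an assumption, and the actual call is left commented out because ArchivePoints.exe only exists on the EV server:

```powershell
$app = 'D:\Enterprise Vault\ArchivePoints.exe'
$arguments = 'find', '\\fopserver045v\ouone_users_r$'

# On the EV server, uncomment this line. 2>&1 merges stderr into stdout,
# and Tee-Object shows the output while also saving it to the log file.
# & $app @arguments 2>&1 | Tee-Object -FilePath 'C:\Logs\ArchivePoints.log'
```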
I'm thinking this would be easy, but I want to make sure it's possible.
Is there an easy way to grab a list of all Hyper-V Servers/VM's on a machine and maybe export it to a CSV File? (excel spreadsheet).
Get-VM returns quite a bit of information, but is there any way to split that? Maybe store them into an array?
This is my first time with PowerShell, so I mainly want to make sure this is an easily doable task.
Problem is, the machine is a Windows Server 2008 R2 box that I believe doesn't support the Hyper-V modules (I think only Windows 8 does), so I've been remoting into it. Can I use PowerShell to remote into it and run this script?
I tried running Get-VM from the command line using Invoke-Command, but it complains about FilePath, even though I was just trying to run Get-VM.
Sounds to me like you need to take some smaller steps first. In this case, I would avoid PowerShell remoting. If you have the modules installed and loaded in your PowerShell session, you should be able to point Get-VM at a remote host without having to do anything crazy.
As for filtering output for a CSV, you should use Select-Object (to select certain properties of the object and ignore others) and Where-Object (to filter which objects are returned, based on properties of the object).
Get-VM -ComputerName HyperVHost.contoso.local | Where-Object {$_.OS -match "2008"} | Select-Object Name, Host | Export-Csv C:\csv.csv
The above example was fictional, but it's the syntax I would use to filter input before putting it into a CSV. Again, it sounds like you're going to need a more basic understanding of how to work with the pipeline, understand what options you have to work with objects, etc.
myscript.ps1
Param(
[Parameter(Mandatory=$true,Position=1)]
[string]$computerName
)
echo "arg0: " $computerName
CMD.exe
C:\> myscript.ps1 -computerName hey
Output:
cmdlet myscript.ps1 at command pipeline position 1
Supply values for the following parameters:
computerName: ddd
arg0:
ddd
I'm simply trying to work with PowerShell parameters in CMD, and I can't seem to get a script to take one. I see sites saying to precede the script with .\ but that doesn't help. I added the Mandatory line to see whether PowerShell was reading a parameter or not, and it's clearly not. The parameter computerName is obviously supposed to be the word "hey". The Param block is the very first thing in the script. PowerShell appears to recognize a parameter computerName, but no matter how I enter the command, it never thinks I'm actually entering a parameter.
What the heck's wrong with my syntax?
By default, Powershell will not run scripts that it just happens to find in your current directory. This is intended by Microsoft as a security feature, and I believe that it mimics behavior found in unix shells.
Powershell will run scripts that it finds in your search path. Your search path is stored in $env:path.
I suspect that you have a script named "myscript.ps1" in some other directory that is on your search path.
I have had this happen to me before. The symptom I saw was that the parameter list seemed different than what I had defined. Each script had a different parameter list, so the script bombed when I fed it a parameter list intended for the other script. My habit is to not rely on parameter position, so this problem was easy to find.
The addition of ".\" to the script ".\myscript.ps1" should force the shell to use the .ps1 file in your current directory. As a test, I would specify the full path to the file you are trying to execute (If there are spaces in the path, be sure to wrap the path in "quotes") or change it to some totally crazy name that won't be duplicated by some other file (like "crazyfishpants.ps1") and see if the shell still finds the file.
You can get into similar problems if you have a function ("Get-Foo") that is loaded out of a module or profile with the same name as a script file ("Get-Foo.ps1"). You may wind up running something other than what you intend.
Position values should be 0-based (0 for the first parameter). That said, I can't duplicate what you're seeing on either PowerShell 2.0 or 3.0.
Thank you all for your very informative responses. It looks like my question was slightly edited after I submitted it, in that the text leads you to believe I was entering this command directly in PowerShell.
I was actually running the command for the script in CMD, which totally explains why it was not passing parameters to the PowerShell script. Whoever green-lighted my question probably changed C:\> to PS> thinking that I made a typo.
I assumed that if I could run the script straight from CMD, I could send parameters to it on CMD's command line, but apparently that's not the case. If I run the script in PowerShell, I now see that it works just fine.
My ultimate goal was to allow users to run the Powershell script from CMD. It's looking like I can make a batch file that accepts parameters, and then start powershell and send those parameters to the PS script. And so, in the batch file, I should do something like:
powershell -File C:\myscript.ps1 -computerName %1
This enigma was probably solved 100 times over on this site, and I apologize for the confusion. Thank you again, for your responses.