I have the code below, to be used in PowerShell; it works well, except that I need the output file to also include the error messages whenever an IP did not resolve to a name.
Get-Content inputfile.txt |
foreach-object { [System.Net.Dns]::GetHostEntry($_) } |
out-file -filepath outputfile.txt
At the moment, I'm able to see the red error messages displayed in the PowerShell window, but I want these to appear in the output file along with the results for each item listed in the input file.
Thanks in advance!
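For reference, one minimal way to get the failures into the same output file, while keeping the plain-text format of the original pipeline, is to wrap the lookup in a per-item try/catch; a rough sketch:
Get-Content inputfile.txt |
    ForEach-Object {
        $ip = $_
        try {
            # same lookup as before; emit the resolved entry on success
            [System.Net.Dns]::GetHostEntry($ip)
        }
        catch {
            # on failure, emit the error message next to the IP it belongs to
            "$ip : $($_.Exception.Message)"
        }
    } |
    Out-File -FilePath outputfile.txt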
Since .GetHostEntry(..) doesn't give you a clear hint as to which IP failed to resolve, it's better to create an object that associates the IP address you're trying to resolve with the method call. This also allows you to use a better export format: instead of a plain .txt file, you can export your objects as .csv with Export-Csv.
The example below uses .GetHostEntryAsync(..), which allows us to query multiple hosts in parallel!
using namespace System.Collections.Generic
using namespace System.Collections.Specialized

(Get-Content inputfile.txt).ForEach{
    begin { $tasks = [List[OrderedDictionary]]::new() }
    process {
        $tasks.Add([ordered]@{
            Input    = $_
            Hostname = [System.Net.Dns]::GetHostEntryAsync($_)
        })
    }
    end {
        do {
            $id = [System.Threading.Tasks.Task]::WaitAny($tasks.Hostname, 200)
            if ($id -eq -1) { continue }
            $thisTask = $tasks[$id]
            $thisTask['Hostname'] = try {
                $thisTask.Hostname.GetAwaiter().GetResult().HostName
            }
            catch { $_.Exception.Message }
            $tasks.RemoveAt($id)
            [pscustomobject] $thisTask
        } while ($tasks)
    }
} | Export-Csv outputfile.csv -NoTypeInformation
I am attempting to run a foreach loop over a Get-Content and ConvertFrom-Json command. I'm aware this potentially has issues because the variable holds multiple results; I'm wondering how I can continue to pass this info to the rest of the script.
$testconv = Get-Device * | select ID
$testid = $testconv.id
$conv = foreach ($id in $testid)
{
    Get-Content "\\HDC-PRTG-03\System Information Database\Services\Device$id.Services" | ConvertFrom-Json
}
$rpccheck = $conv.message
$snmpcheck = $conv.message
$svcname = $conv.data.displayname
$svcstate = $conv.data.properties.state
if ($RPCon = $rpccheck | Select-String -Pattern RPC -AllMatches) {
    Write-Host "RPC Not enabled"
}
else {
    Write-Host "No RPC Enabled - Moving to Services List"
}
Now, when I run that without the $conv = making it a variable, it returns:
kind : Services
recievetime : 29-01-2018 14:43:32
error : 106
Message : SNMP Channels Not Available.
Which is what I expect. However, when I define it as a variable with $conv =, it just starts to say it cannot find the file paths, which I find an odd error to throw, but hey ho.
Do any of you smart guys have any pointers for how I can keep these from-JSON objects in memory so I can continue to run foreach loops against them? The ultimate function of this script is to query a local .services file for what services are running on the device and then create sensors to monitor them within our PRTG installation. Therefore I need to be able to reference the device ID and apply things to it.
I suspect I may be using too many foreach loops in the whole script, but frankly I am 100% out of my depth.
Any guidance hugely appreciated.
Sam
If I understand correctly, you should have JSON files for all device IDs. If the file for a particular device is missing, you will get the 'File not found' error.
As for the code, you can try this:
$testconv = Get-Device * | select ID
$testid = $testconv.id
$oldErrorAction = $ErrorActionPreference
$ErrorActionPreference = 'Stop'
foreach ($id in $testid) {
try {
$conv = Get-Content -Path "\\HDC-PRTG-03\System Information Database\Services\Device$id.Services" -Raw | ConvertFrom-Json
$rpccheck = $conv.message # These look the same to me...
$snmpcheck = $conv.message # These look the same to me...
$svcname = $conv.data.displayname
$svcstate = $conv.data.properties.state
$Matches = ($rpccheck | Select-String -Pattern "RPC*" -AllMatches)
if ($Matches.Matches.Count) {
Write-Host "RPC Not enabled"
}
else {
Write-Host "No RPC Enabled - Moving to Services List "
}
}
catch {
Write-Warning $_.Exception.Message
}
}
$ErrorActionPreference = $oldErrorAction
Instead of the try{}..catch{}, you could also first test whether a file with that name is present using Test-Path, directly before doing the Get-Content.
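A sketch of that variant, reusing the path and variables from the code above:
foreach ($id in $testid) {
    $path = "\\HDC-PRTG-03\System Information Database\Services\Device$id.Services"
    if (-not (Test-Path -Path $path)) {
        Write-Warning "No services file found for device $id"
        continue   # skip this device instead of throwing
    }
    $conv = Get-Content -Path $path -Raw | ConvertFrom-Json
    # ... same processing as in the try block above ...
}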
I need to create a list of IP addresses and DNS names. I am trying to get DNS names from IP addresses. I have tried two ways:
try/catch, but then the script ends afterwards.
Without try/catch, but then it just outputs DNS names that I can't relate to the IP addresses.
Here's what I have so far:
#try {
Get-Content C:\Users\pintose\Documents\IP_Address.txt | ForEach-Object
{([system.net.dns]::GetHostByAddress($_)).hostname >> C:\Users\user\Documents\hostname.txt}
# }
# catch {
if ($_.Exception.Message -like "*The requested name is valid*") {
Write-Output "UNREACHABLE" | Out-File C:\Users\user\Documents\hostname.txt }
# }
Try this solution:
$outFile = "C:\Users\user\Documents\hostname.txt"
Get-Content C:\Users\pintose\Documents\IP_Address.txt | ForEach-Object {
    $hash = @{
        IPAddress = $_
        hostname  = "n/a"
    }
    $hash.hostname = ([system.net.dns]::GetHostByAddress($_)).hostname
    $object = New-Object psobject -Property $hash
    Export-Csv -InputObject $object -Path $outFile -Append -NoTypeInformation
}
We create objects that have the IP address in them and a hostname of "n/a" if it cannot be resolved. Then each object gets exported into the file. You'll get something like:
192.0.0.1; Server1
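If the red console errors are also unwanted, the lookup can be wrapped in a try/catch so the "n/a" default survives quietly; a sketch of the same approach:
$outFile = "C:\Users\user\Documents\hostname.txt"
Get-Content C:\Users\pintose\Documents\IP_Address.txt | ForEach-Object {
    $hash = @{ IPAddress = $_; hostname = "n/a" }
    try {
        $hash.hostname = ([System.Net.Dns]::GetHostByAddress($_)).HostName
    }
    catch {
        # lookup failed; keep the "n/a" placeholder
    }
    Export-Csv -InputObject (New-Object psobject -Property $hash) -Path $outFile -Append -NoTypeInformation
}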
This uses a workflow so it can do a parallel foreach:
Workflow Get-DNSNames([string[]]$IPAddresses) {
    foreach -parallel ($IP in $IPAddresses) {
        try {
            @{ $IP = $(([system.net.dns]::GetHostByAddress($IP)).hostname) }
        }
        catch {
            @{ $IP = "N/A" }
        }
    }
}

$List = Get-DNSNames -IPAddresses $(Get-Content "C:\IPAddresses.txt").Split("[\r\n]")
$List | Out-File "C:\IPAddresses_Complete.txt"
You might want to try the other solutions offered here, but here are some things you might want to think about.
First, I'd recommend not putting the try{}catch{} around the whole of the first command. If you are looping through data and just one of them causes an exception, you risk not completing the task. Put the try{}catch{} around just the "risky" line of code:
Get-Content C:\Users\pintose\Documents\IP_Address.txt | Foreach-Object {
try {
([system.net.dns]::GetHostByAddress($_)).hostname >> C:\Users\user\Documents\hostname.txt
}
catch {
if ($_.Exception.Message -like "*The requested name is valid*") {
Write-Output "UNREACHABLE" | Out-File C:\Users\user\Documents\hostname.txt
}
}
}
When you catch the exception, you only write to the text file in the case that "the requested name is valid" (do you mean invalid?). You never write anything to the file otherwise. Thus, going back to your original code:
IF there is an exception caused by ANY of the IP addresses
AND the exception is NOT "the requested name is valid" (which I think might be a typo?)
THEN no error gets written to the file and the script ends without necessarily completing all the IP addresses.
Other things:
You use two methods to write to the file: >> and Out-File. Probably better to use the PowerShell cmdlet but with the -Append switch to ensure you append to the end of the file:
([system.net.dns]::GetHostByAddress($_)).hostname | Out-File C:\Users\user\Documents\hostname.txt -Append
Write-Output "UNREACHABLE" | Out-File C:\Users\user\Documents\hostname.txt -Append
@restless1987 has suggested a way to ensure you write both the IP address and the hostname (if determined) to the output file. I'd have a look at that to work out what is going on.
My final tip would be to be wary of reading in from .txt files with Get-Content. They often have trailing (blank) lines and you might want to try to ignore such blanks. Probably not a big issue in this case as it will just mean a failed DNS attempt, but I have seen such things wreak havoc on every mailbox in a (very) large company when used with other commands.
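For example, a simple guard against blank lines might look like this (a sketch):
Get-Content C:\Users\pintose\Documents\IP_Address.txt |
    Where-Object { $_.Trim() } |   # drop empty or whitespace-only lines
    ForEach-Object { <# ... lookup as above ... #> }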
Another way...
$ip_list = Get-Content ./IP_Address.txt
foreach ($ip in $ip_list) {
try {
([system.net.dns]::GetHostByAddress($ip)).hostname |
Out-File -FilePath ./hostname.txt -Append
}
catch {
if ($_.Exception.Message -like "*The requested name is valid*") {
Write-Output "UNREACHABLE" | Out-File -FilePath './hostname.txt' -Append }
}
}
There are many tools that can accomplish this, but if you need a quick and dirty solution that you can run just about anywhere this will get the job done.
Using the eternally useful PsTools, for example psloggedon /accepteula \\computername (or IP address), you can see who is currently logged on and check whether this is the correct machine. For example:
c:\pstools>psloggedon /accepteula \\10.0.0.10
loggedon v1.33 - See who's logged on
Copyright © 2000-2006 Mark Russinovich
Sysinternals - www.sysinternals.com
Users logged on locally:
Error: could not retrieve logon time
NT AUTHORITY\LOCAL SERVICE
Error: could not retrieve logon time
NT AUTHORITY\NETWORK SERVICE
1/12/2015 8:06:51 AM DOMAIN\user
Error: could not retrieve logon time
NT AUTHORITY\SYSTEM
Users logged on via resource shares:
1/17/2015 2:26:43 PM DOMAIN\user
Now that you have confirmed that this IP address is the correct one for this user, we just need to look up the IP address using nslookup.
C:\>nslookup 10.0.0.10
Server: server.domain.com
Address: 10.10.0.1
Name: Workstation07.server.domain.com
Address: 10.0.0.10
Now we know that the computer name for that computer is Workstation07.
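Since the rest of this thread is PowerShell-centric, the same reverse lookup can also be done without nslookup; a minimal sketch:
# Reverse-resolve the address confirmed above
[System.Net.Dns]::GetHostEntry('10.0.0.10').HostName

# Or, where the DnsClient module is available (Windows 8 / Server 2012 and later):
Resolve-DnsName 10.0.0.10 | Select-Object NameHost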
I am writing a Powershell 2.0 script that parses a folder of offline .evtx event logs and generates a csv with specified output. I am only looking for certain event IDs and I am only outputting certain fields to the csv. The problem I run into arises when a specified event ID does not exist in a .evtx file; it generates an error: "Get-WinEvent: No events were found that match the specified selection criteria" (NoMatchingEventsFound). This makes sense, but is there a way to write it in a way to "not care" if the event ID exists and continue to parse? (I am only looking for specific existences of these chosen IDs, and I don't care if they don't exist) Here is my code:
$ReviewFile = ("\\thepath\" + (Get-Date).tostring("dd-MMM-yy") + " Review.csv")
If (Test-Path $Reviewfile){Remove-Item $ReviewFile}
$EventLogIDs = "4476","4741","4742" # etc...I have quite a number of IDs
Get-WinEvent -FilterHashtable @{Path="\\evtxpath\" + (Get-Date).tostring("yyyyMMdd") + "\*Security*";id=@($EventLogIDs);}|
Select-Object Id,LevelDisplayName,Message,MachineName,RecordId,TaskDisplayName | Export-CSV $ServerReviewFile
I have tried adding -ErrorAction SilentlyContinue, but it skips the entire hashtable.
I was thinking along the lines of maybe looping through an event ID array with a try and catch within the loop. Would that work? How would that affect the syntax of the hashtable generation code? Any other suggestions? Thank you for your advice.
Sure, you can use a try catch:
$EventLogIDs = @("4476", "4741", "4742")

ForEach ($EvtxFileInfo in (Get-ChildItem 'C:\EvtxFolder\')) {
    $ReviewFile = ("\\ThePath\" + (Get-Date).tostring("dd-MMM-yy") + " Review.csv")
    if (Test-Path $ReviewFile) { Remove-Item $ReviewFile }
    try {
        Get-WinEvent -FilterHashtable @{
            Path = $EvtxFileInfo.FullName; # or your way
            Id   = $EventLogIDs;
        } |
            Select Id, LevelDisplayName, Message, MachineName, RecordId, TaskDisplayName |
            Export-Csv $ReviewFile -NoTypeInformation
    }
    catch { # plain catch = catch everything
        # do nothing
    }
}
More info about: ForEach
More info about: Try Catch (Finally)
Reference: Get-ChildItem
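One refinement worth considering: Get-WinEvent reports "No events were found..." as a non-terminating error, so it may never reach the catch block unless the call specifies -ErrorAction Stop. With that in place, the catch can also be narrowed so only the "no matching events" case is swallowed and real failures still surface; a sketch of the try/catch from above:
try {
    Get-WinEvent -ErrorAction Stop -FilterHashtable @{
        Path = $EvtxFileInfo.FullName
        Id   = $EventLogIDs
    } |
        Select Id, LevelDisplayName, Message, MachineName, RecordId, TaskDisplayName |
        Export-Csv $ReviewFile -NoTypeInformation
}
catch {
    # ignore only the "no matching events" case; rethrow anything else
    if ($_.FullyQualifiedErrorId -notlike 'NoMatchingEventsFound*') { throw }
}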
I'm writing a script to load computer names from a CSV file, then look up their IP addresses.
The script creates an error when using the name from the CSV.
If I run the script in ISE, the error shows up but the result still comes through. If I run the script from PowerShell, it errors and the result is null.
If I replace $PCname = $_.System with $PCname = "Computer01", everything works fine.
If I Write-Host $_.System, it displays "Computer01". How can I get this to work in PowerShell?
$file = "\\server.contoso.net\private$\Systems.csv";
$CSV = Import-CSV $file;
$CSV | % {
    if ($_.Skip -eq 0)
    {
        $PCname = $_.System
        # $PCname = "Computer01"
        write-host $PCname
        try
        {
            $ipaddress = [System.Net.Dns]::GetHostByName($PCname).AddressList[0].IpAddressToString
        }
        Catch [system.exception]
        {
            if (1)
            { $error[0].tostring() }
        }
    }
}
Error displayed is:
Exception calling "GetHostByName" with "1" argument(s): "The requested name is valid, but no data of the requested type was found"
It turns out that the values in the CSV at some point had whitespace added after them, which caused a name lookup error. I'm not sure why ISE would still be able to look up the host, but removing the whitespace fixed the issue.
Thanks to sha, his recommendation helped me see the whitespace.
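A one-line guard against that kind of stray whitespace, using the variable from the script above (a sketch):
$PCname = $_.System.Trim()   # strip leading/trailing whitespace before the DNS lookup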
I have written a simple PowerShell filter that pushes the current object down the pipeline if its date is between the specified begin and end date. The objects coming down the pipeline are always in ascending date order, so as soon as the date exceeds the specified end date I know my work is done, and I would like to tell the pipeline that the upstream commands can abandon their work so that the pipeline can finish. I am reading some very large log files and I will frequently want to examine just a portion of the log. I am pretty sure this is not possible, but I wanted to ask to be sure.
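For context, the kind of filter being described looks roughly like this (the names and the TimeGenerated property are illustrative, not the asker's actual code):
filter Select-DateRange ([datetime]$Begin, [datetime]$End) {
    # pass through only objects whose timestamp falls inside the range
    if ($_.TimeGenerated -ge $Begin -and $_.TimeGenerated -le $End) { $_ }
    # once past $End there is nothing left to emit, but the upstream command
    # keeps producing objects -- that is the behaviour the question wants to avoid
}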
It is possible to break a pipeline with anything that would otherwise break an outside loop or halt script execution altogether (like throwing an exception). The solution then is to wrap the pipeline in a loop that you can break if you need to stop the pipeline. For example, the below code will return the first item from the pipeline and then break the pipeline by breaking the outside do-while loop:
do {
Get-ChildItem|% { $_;break }
} while ($false)
This functionality can be wrapped into a function like this, where the last line accomplishes the same thing as above:
function Breakable-Pipeline([ScriptBlock]$ScriptBlock) {
do {
. $ScriptBlock
} while ($false)
}
Breakable-Pipeline { Get-ChildItem|% { $_;break } }
It is not possible to stop an upstream command from a downstream command. It will continue to filter out objects that do not match your criteria, but the first command will process everything it was set to process.
The workaround is to do more filtering on the upstream cmdlet or function/filter. Working with log files makes it a bit more complicated, but perhaps using Select-String and a regular expression to filter out the undesired dates might work for you.
Unless you know how many lines you want to take and from where, the whole file will be read to check for the pattern.
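A rough sketch of that idea, assuming the log lines begin with a sortable date (the file name and pattern below are illustrative):
# keep only lines dated 2010-01-10 through 2010-01-15, then hand just those on
Select-String -Path .\large.log -Pattern '^2010-01-1[0-5]' |
    ForEach-Object { $_.Line }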
You can throw an exception when ending the pipeline.
gc demo.txt -ReadCount 1 | %{$num=0}{$num++; if($num -eq 5){throw "terminated pipeline!"}else{write-host $_}}
or
Look at this post about how to terminate a pipeline: https://web.archive.org/web/20160829015320/http://powershell.com/cs/blogs/tobias/archive/2010/01/01/cancelling-a-pipeline.aspx
Not sure about your exact needs, but it may be worth your time to look at Log Parser to see if you can't use a query to filter the data before it even hits the pipe.
If you're willing to use non-public members here is a way to stop the pipeline. It mimics what select-object does. invoke-method (alias im) is a function to invoke non-public methods. select-property (alias selp) is a function to select (similar to select-object) non-public properties - however it automatically acts like -ExpandProperty if only one matching property is found. (I wrote select-property and invoke-method at work, so can't share the source code of those).
# Get the System.Management.Automation assembly
$script:smaa = [appdomain]::CurrentDomain.GetAssemblies() |
    ? location -like "*system.management.automation*"

# Get the StopUpstreamCommandsException class
$script:upcet = $smaa.GetTypes() | ? name -like "*StopUpstreamCommandsException*"

function stop-pipeline {
    # Create a StopUpstreamCommandsException
    $upce = [activator]::CreateInstance($upcet, @($pscmdlet))

    $PipelineProcessor = $pscmdlet.CommandRuntime | select-property PipelineProcessor
    $commands = $PipelineProcessor | select-property commands
    $commandProcessor = $commands[0]
    $ci = $commandProcessor | select-property commandinfo
    $upce.RequestingCommandProcessor | im set_commandinfo @($ci)
    $cr = $commandProcessor | select-property commandruntime
    $upce.RequestingCommandProcessor | im set_commandruntime @($cr)
    $null = $PipelineProcessor |
        invoke-method recordfailure @($upce, $commandProcessor.command)

    if ($commands.count -gt 1) {
        $doCompletes = @()
        1..($commands.count - 1) | % {
            write-debug "Stop-pipeline: added DoComplete for $($commands[$_])"
            $doCompletes += $commands[$_] | invoke-method DoComplete -returnClosure
        }
        foreach ($DoComplete in $doCompletes) {
            $null = & $DoComplete
        }
    }

    throw $upce
}
EDIT: per mklement0's comment:
Here is a link to the Nivot Ink blog post about the "poke" module, which similarly gives access to non-public members.
As far as additional comments, I don't have meaningful ones at this point. This code just mimics what a decompilation of select-object reveals. The original MS comments (if any) are of course not in the decompilation. Frankly I don't know the purpose of the various types the function uses. Getting that level of understanding would likely require a considerable amount of effort.
My suggestion: get Oisin's poke module. Tweak the code to use that module. And then try it out. If you like the way it works, then use it and don't worry how it works (that's what I did).
Note: I haven't studied "poke" in any depth, but my guess is that it doesn't have anything like -returnClosure. However, adding that should be as easy as this:
if (-not $returnClosure) {
$methodInfo.Invoke($arguments)
} else {
{$methodInfo.Invoke($arguments)}.GetNewClosure()
}
Here's an - imperfect - implementation of a Stop-Pipeline cmdlet (requires PS v3+), gratefully adapted from this answer:
#requires -version 3
Filter Stop-Pipeline {
$sp = { Select-Object -First 1 }.GetSteppablePipeline($MyInvocation.CommandOrigin)
$sp.Begin($true)
$sp.Process(0)
}
# Example
1..5 | % { if ($_ -gt 2) { Stop-Pipeline }; $_ } # -> 1, 2
Caveat: I don't fully understand how it works, though fundamentally it takes advantage of Select -First's ability to stop the pipeline prematurely (PS v3+). However, in this case there is one crucial difference to how Select -First terminates the pipeline: downstream cmdlets (commands later in the pipeline) do not get a chance to run their end blocks.
Therefore, aggregating cmdlets (those that must receive all input before producing output, such as Sort-Object, Group-Object, and Measure-Object) will not produce output if placed later in the same pipeline; e.g.:
# !! NO output, because Sort-Object never finishes.
1..5 | % { if ($_ -gt 2) { Stop-Pipeline }; $_ } | Sort-Object
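If an aggregating cmdlet is needed, one simple workaround (assuming the Stop-Pipeline filter above) is to split the work into two pipelines so the aggregation runs against the already-collected results:
$collected = 1..5 | % { if ($_ -gt 2) { Stop-Pipeline }; $_ }
$collected | Sort-Object   # now produces output: 1, 2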
Background info that may lead to a better solution:
Thanks to PetSerAl, my answer here shows how to produce the same exception that Select-Object -First uses internally to stop upstream cmdlets.
However, there the exception is thrown from inside the cmdlet that is itself connected to the pipeline to stop, which is not the case here:
Stop-Pipeline, as used in the examples above, is not connected to the pipeline that should be stopped (only the enclosing ForEach-Object (%) block is), so the question is: How can the exception be thrown in the context of the target pipeline?
Try these filters. They'll force the pipeline to stop after the first object (or the first n elements) and store it (or them) in a variable; you need to pass the name of the variable. If you don't, the object(s) are pushed out but cannot be assigned to a variable.
filter FirstObject ([string]$vName = '') {
if ($vName) {sv $vName $_ -s 1} else {$_}
break
}
filter FirstElements ([int]$max = 2, [string]$vName = '') {
if ($max -le 0) {break} else {$_arr += ,$_}
if (!--$max) {
if ($vName) {sv $vName $_arr -s 1} else {$_arr}
break
}
}
# can't assign to a variable directly
$myLog = get-eventLog security | ... | firstObject
# pass the varName
get-eventLog security | ... | firstObject myLog
$myLog
# can't assign to a variable directly
$myLogs = get-eventLog security | ... | firstElements 3
# pass the number of elements and the varName
get-eventLog security | ... | firstElements 3 myLogs
$myLogs
####################################
get-eventLog security | % {
if ($_.timegenerated -lt (date 11.09.08) -and`
$_.timegenerated -gt (date 11.01.08)) {$log1 = $_; break}
}
#
$log1
Another option would be to use the -file parameter on a switch statement. Using -file will read the file one line at a time, and you can use break to exit immediately without reading the rest of the file.
switch -file $someFile {
# Parse current line for later matches.
{ $script:line = [DateTime]$_ } { }
# If less than min date, keep looking.
{ $line -lt $minDate } { Write-Host "skipping: $line"; continue }
# If greater than max date, stop checking.
{ $line -gt $maxDate } { Write-Host "stopping: $line"; break }
# Otherwise, date is between min and max.
default { Write-Host "match: $line" }
}