I am a network engineer and my scripting skills are basically nonexistent. I have worked in an enterprise data center for only six months, and we keep an IP spreadsheet. With that spreadsheet come errors, and if they aren't corrected they snowball and become bigger than they needed to be.
I have been tasked with finding the host name assigned to each IP across our whole enterprise. Doing this by hand, one at a time, would take months. Is there a way to write a script to do it for me? I'm looking at thousands of IPs, if not tens of thousands, so any help would be greatly appreciated.
What I need is a script that will either ping or query DNS to find the host name for any given IP address, and then write the output to a file so the IP spreadsheet can be updated.
I'm at a loss here. Like I said, any help would be greatly appreciated.
Give this a go:
Get-Content C:\temp\IP_Addresses.txt | ForEach-Object { try { "$_,$(([System.Net.Dns]::GetHostByAddress($_)).HostName)" } catch { "$_,UNRESOLVED" } } | Set-Content C:\temp\hostname.txt
Found it on TechNet (I've added a try/catch so one unresolvable IP doesn't clutter the output, and the IP is written next to each name). You'll need to extract your IPs into a text file first, and it doesn't de-duplicate, but it should be a nice starting point.
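If you'd rather not use PowerShell, the same reverse lookup is easy to sketch in Python. This version also de-duplicates the list and records failures instead of skipping them; the file names in the usage comment are just examples:

```python
import socket

def resolve_ips(ips, lookup=socket.gethostbyaddr):
    """Reverse-resolve each unique IP; value is the hostname, or None if no PTR."""
    results = {}
    for ip in dict.fromkeys(ips):  # de-duplicate while preserving order
        try:
            results[ip] = lookup(ip)[0]
        except OSError:
            results[ip] = None     # no PTR record, or DNS failure
    return results

# Example usage:
#   ips = [line.strip() for line in open("IP_Addresses.txt") if line.strip()]
#   for ip, name in resolve_ips(ips).items():
#       print(f"{ip},{name or 'UNRESOLVED'}")
```

The `lookup` parameter exists so you can swap in a different resolver (or a stub for testing) without touching the loop.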
Anyone know an easy way to force PTR record generation for existing A records that currently do not have them?
I have a couple of scenarios where this would be beneficial to me:
1. Members of my team have created records over time using the DNS MMC snap-in but forgot to check the "Update PTR record" option, and
2. I have many reverse lookup zones for individual subnets that I'm trying to collapse into an existing parent reverse zone, e.g. 1.192.10.in-addr.arpa -> 192.10.in-addr.arpa.
My thought process was to simply run through the existing A records and call a pre-built DNS cmdlet with some sort of -UpdatePtr type flag, similar to what can be done in the MMC snap-in, but I guess nothing like that exists in PowerShell.
The Add-DnsServerResourceRecord cmdlet has the -CreatePtr parameter, which sounds ideal, but that seems to be supported only when creating new records.
Is my only option to manually create the PTR records using Add-DnsServerResourceRecordPtr or something similar, or perhaps even to delete each A record entirely and recreate it with -CreatePtr?
I can do the former by following something similar to this: https://gist.github.com/msoler8785/498332c622f93ace02b5d05e47845001. However, I have hundreds of these zones that I'm trying to clean up after years of acquisitions, which would make the code a lot more complex: I'd need to determine the correct reverse zone for each IP address, whereas that code uses a static zone.
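The part that makes the code more complex, mapping each IP to the most specific reverse zone that actually exists, is mechanical enough to sketch. Here it is in Python (the same logic ports directly to PowerShell); the set of existing zones would come from something like Get-DnsServerZone, and the zone names follow the example above:

```python
import ipaddress

def candidate_reverse_zones(ip):
    """Possible IPv4 reverse zone names, most specific (/24) to least (/8)."""
    octets = str(ipaddress.IPv4Address(ip)).split(".")
    return [".".join(reversed(octets[:n])) + ".in-addr.arpa" for n in (3, 2, 1)]

def pick_zone(ip, existing_zones):
    """Choose the most specific reverse zone that actually exists on the server."""
    for zone in candidate_reverse_zones(ip):
        if zone in existing_zones:
            return zone
    return None   # no matching reverse zone: record a warning, don't create a PTR
```

With that in hand, the loop becomes: enumerate A records, call pick_zone on each address, then create the PTR in the returned zone.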
Anyway, hoping I'm just missing something, or someone else has already figured this out.
::EDIT::
I found this, which uses WMI/PowerShell to do the first method, but I imagine it could easily be converted to the native cmdlets.
https://serverfault.com/questions/163612/create-ptr-records-from-existing-a-records-windows-dns
Again, hoping someone has already figured this out.
I'm using Satellite 6 to manage EL 5, 6, and 7.x hosts.
I've been trying to run a job invocation (via Monitor -> Jobs -> Run Jobs) against a set of servers, based on a custom fact that I wrote (the fact is called ad_domain and basically tells you whether the host is Active Directory joined or not).
However, I can't figure out how to do this. Is this even possible?
I'm a Satellite newbie; I don't even know what parameters I can use in the search query. Can anyone enlighten me? Is it possible to specify a Facter fact value (or values) in the search query so that it resolves only to hosts matching that value?
Appreciate your help in advance,
Sue
You can try
facts.ad_domain = value
I'm running the MATLAB extension for NetLogo in headless (non-GUI) mode. I've downloaded the extension source and am trying to access the command-line arguments from the Java code in the extension. The command-line arguments are stored in LabInterface.Settings, and I would like to be able to access that object in the extension's Java code. I've been working on this for a couple of days but have had no success. It seems the extension mechanism is designed to create primitives to be used inside NetLogo; those primitives have knowledge of the different NetLogo objects, but there is no way for the extension's Java code to access them. I would appreciate any help.
I would like to be able to run multiple NetLogo-MATLAB analyses with varying parameters, in batch mode, across multiple machines, perhaps a Flux cluster. I need to run headless because of the batch nature. Sometimes the runs will be on the same machine, sometimes split across multiple machines, Flux or Condor. I know similar functionality exists in NetLogo for running varying parameters in a single session. Is there some way to split these across multiple machines?
Currently, I create a series of setup files for NetLogo. Each setup file represents the parameters that vary for that run, and I submit each NetLogo/setup-file combination as a single run. Each run can be farmed out to a separate machine or processor. Adding the MATLAB extension complicates this: the extension connects its server to port 9999, and with multiple servers running they all try to attach to port 9999, which causes problems. I was hoping to use the setup-file name to create independent port numbers tied to each setup file. That way I could create a unique socket, and hence a unique server connection, for each NetLogo run.
NetLogo doesn't provide a facility for distributing model runs on a cluster, but various people have done it anyway. See:
http://ccl.northwestern.edu/netlogo/docs/faq.html#cluster
https://github.com/jurnix/netlogo-cluster
http://mass.aitia.ai/index.php/intro/meme
and past threads about it on the netlogo-users group. There is no single standard solution.
As for getting access to LabInterface.Settings, it appears to me from looking through the NetLogo source code that the settings object isn't actually stored anywhere. It's just handed off from method to method, ultimately to lab.Lab.run, without ever actually being kept. So trying to get access to the name of the setup file won't work.
So you'll need some other way to make the extension generate unique port numbers. Seems to me there are any number of possible solutions. At the time you generate the setup file you know its name, so you could generate a port number at the same time and include it in the experiment definition contained in the file. Or you could pass a port number in a Java system property (using -D) when you start NetLogo. Or you could derive a port number from the process ID of the JVM. Or you could have the extension try port 9999, and if it's already in use, try a different port. That's just a few ideas; I could probably come up with ten more.
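The first idea, deriving the port from the setup-file name, can be sketched in a few lines. The key point is to use a stable checksum (CRC-32 here) rather than a salted hash, so every machine and every run maps the same file name to the same port; the base/span values are arbitrary examples:

```python
import zlib

def port_for_setup_file(name, base=10000, span=1000):
    """Derive a stable port in [base, base+span) from the setup-file name.

    CRC-32 is deterministic across processes and machines, unlike Python's
    built-in hash(), which is randomized per interpreter run.
    """
    return base + zlib.crc32(name.encode("utf-8")) % span
```

Two runs using different setup files would then almost always get distinct ports; if you're worried about the rare collision, fall back to the try-the-next-port idea.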
I wrote a PowerCLI script that can automatically deploy a new VM with some given parameters.
In a few words, the script connects to a given vCenter and starts the deployment from an existing template.
Can I limit the number of instances of my script that will run on the same computer?
Can I limit the number of instances of my script that will run on different computers when both instances are connected to the same vCenter?
To resolve the issue I thought of developing a server-side application that each instance of my script would connect to, and the server would then handle all the instances, but I am not sure if such a thing is possible in PowerCLI/PowerShell.
Virtually anything is poshable, or so they say. What you're describing may be overkill, however, depending on your scenario. Multiple instances of the same script will each run in their own PowerShell process, and vCenter allows hundreds of simultaneous connections. Of course, the content or context of your script might dictate that it shouldn't run in simultaneous instances. I haven't experimented, but it seems there are ways to determine the names of running PowerShell scripts, so if you keep the script name consistent on each computer, you could probably build in some checks along the lines of the linked answer.
But depending on your particulars, it might be easier to go a different way. For example, if you don't want the script to run simultaneously because you have hard-coded the name of a New-OSCustomizationSpec, a simple/kludgey solution might be to check for that spec and disconnect/exit/roll back if it exists. A better solution might be to give the new spec a unique name. But the devil is in the details. Hope that helps a bit.
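That check-and-bail-out idea generalizes to a simple single-instance lock. Here's a sketch in Python of the underlying pattern, atomically creating a marker file so two instances can't both proceed (PowerShell can do the equivalent with New-Item on a lock file, or a named System.Threading.Mutex for the same-computer case); the lock path is a placeholder:

```python
import os

def acquire_lock(path):
    """Try to take an exclusive lock by atomically creating a marker file.

    Returns a file descriptor on success, or None if another instance
    already holds the lock (the file exists).
    """
    try:
        return os.open(path, os.O_CREAT | os.O_EXCL | os.O_WRONLY)
    except FileExistsError:
        return None

def release_lock(fd, path):
    """Release the lock so the next instance can run."""
    os.close(fd)
    os.remove(path)
```

For the cross-computer case, put the lock file on a share all the machines can reach (or use your "check for the spec in vCenter" variant, where vCenter itself is the shared state).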
I've written a Perl script to query devices (switches) on the network; it's used to find a MAC address on the LAN. I would like to improve it. Right now I have to give my script these parameters:
The MAC address being searched for
The switch's IP
The SNMP community
What can I do so I only have to give the IP and community?
I know that it depends on my network topology.
There is a main stack of three switches (Cisco 3750s), and other switches (2960s) are linked off it in cascade.
Anyone have an idea?
Edit: I would like to not have to specify the switch.
Just give the MAC and the community.
You have to solve two problems. First, where will the script send the first query? Then, suppose you discover that a MAC address was learned through port 1/2/1 on that switch, and that port is connected to another switch: somehow your script must be smart enough to query the switch attached to port 1/2/1. Continue the same algorithm until there is no further switch to query.
What you are asking for is possible, but it would require you to either give the script network topology information in advance, or discover it dynamically with CDP or LLDP. CDP always carries the neighbor's IP address; sometimes you can get that from LLDP. Both CDP and LLDP have MIB objects you can query.
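Once you can answer "which port was this MAC learned on, and is there a switch behind that port?" for a single switch, the chase loop itself is short. A language-neutral sketch (Python here rather than Perl; `query` is a made-up stand-in for your SNMP walk of the bridge forwarding table plus the CDP/LLDP neighbor table):

```python
def trace_mac(mac, start_switch, query):
    """Follow a MAC from switch to switch until it is found on an edge port.

    `query(switch, mac)` must return (port, neighbor), where neighbor is the
    downstream switch seen via CDP/LLDP on that port, or None for an edge port.
    """
    switch, visited = start_switch, set()
    while switch is not None and switch not in visited:
        visited.add(switch)
        port, neighbor = query(switch, mac)
        if neighbor is None:
            return switch, port      # no switch behind this port: the host is here
        switch = neighbor
    return None                      # loop or dead end: MAC not located
```

The `visited` set guards against topology loops so a bad neighbor table can't make the script spin forever.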
You'll basically need two scripts. You already have a script to gather your data, but it takes too long to find a single MAC. Presumably you have a complete list of every switch and its IP address. Loop over them all, building a database of the CAM tables. Then when you need to find a MAC, just query your pre-built database. Update it about once an hour or so and you should maintain pretty accurate results. You can speed up querying several devices by running multiple SNMP walks in parallel.
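The pre-built database with parallel walks might look like this (a Python sketch of the structure; `walk_cam` is a placeholder for whatever your Perl script already does to dump one switch's CAM table over SNMP):

```python
from concurrent.futures import ThreadPoolExecutor

def build_cam_database(switch_ips, walk_cam, workers=8):
    """Walk every switch's CAM table in parallel and invert the results
    into one {mac: (switch_ip, port)} lookup table.

    `walk_cam(ip)` must return {mac: port} for the switch at `ip`.
    """
    db = {}
    with ThreadPoolExecutor(max_workers=workers) as pool:
        # pool.map preserves input order, so zip pairs each result
        # back with the switch it came from.
        for ip, cam in zip(switch_ips, pool.map(walk_cam, switch_ips)):
            for mac, port in cam.items():
                db[mac] = (ip, port)
    return db
```

One caveat: a MAC will appear in the CAM table of every switch along the path, so in practice you'd filter out uplink/trunk ports (or prefer entries seen on access ports) to keep only the edge location.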