SNMP: How to find a MAC address in the network? - Perl

I've written a Perl script that queries devices (switches) on the network; it's used to find a MAC address on the LAN. I would like to improve it. Right now I have to give my script these parameters:
The MAC address to search for
The switch's IP
The SNMP community
How can I make it work by giving just the IP and the community?
I know that it depends on my network topology.
There is a main stack of three switches (Cisco 3750), which is then linked to other switches (2960) in cascade.
Does anyone have an idea?
Edit: I would like to not have to specify the switch, just give the MAC address and the community.

You have to solve two problems. First, where will the script send the initial query? Then, suppose you discover that the MAC address was learned through port 1/2/1 on that switch, and that port is connected to another switch. Somehow your script must be smart enough to query the switch attached to port 1/2/1, and continue the same algorithm until there is no downstream switch left to query.
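For the per-switch step, here is a minimal sketch in Perl (the question's language) of asking one switch which bridge port learned a given MAC, via the standard BRIDGE-MIB. The host, community and MAC below are placeholders, and it assumes the Net::SNMP CPAN module:

use strict;
use warnings;
use Net::SNMP;

my ($host, $community, $mac) = ('192.0.2.1', 'public', '00:11:22:33:44:55');

# dot1dTpFdbPort (BRIDGE-MIB) is indexed by the MAC as six decimal octets
my $index = join '.', map { hex } split /:/, $mac;
my $oid   = "1.3.6.1.2.1.17.4.3.1.2.$index";

my ($session, $error) = Net::SNMP->session(
    -hostname  => $host,
    -community => $community,   # on Cisco, "community@vlan" queries one VLAN
    -version   => 'snmpv2c',
);
die "session: $error\n" unless defined $session;

my $result = $session->get_request(-varbindlist => [$oid]);
die 'request: ' . $session->error . "\n" unless defined $result;

# the value is a bridge port number; if needed, map it to an interface via
# dot1dBasePortIfIndex (1.3.6.1.2.1.17.1.4.1.2) and then to a name via ifName
print "$mac learned on bridge port $result->{$oid}\n";
$session->close;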
What you are asking for is possible, but it would require you to either give the script network topology information in advance, or to discover it dynamically with CDP or LLDP. CDP always carries the neighbor's IP address; sometimes you can get that from LLDP as well. Both CDP and LLDP have MIB objects you can query.
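And a rough sketch of the dynamic-discovery side, walking the CDP cache (CISCO-CDP-MIB) to get the management IP of the neighbor behind each local interface; again, the host and community are placeholders:

use strict;
use warnings;
use Net::SNMP;

my ($host, $community) = ('192.0.2.1', 'public');
my $cdp_addr = '1.3.6.1.4.1.9.9.23.1.2.1.1.4';   # cdpCacheAddress

my ($session, $error) = Net::SNMP->session(
    -hostname => $host, -community => $community, -version => 'snmpv2c');
die "session: $error\n" unless defined $session;

my $table = $session->get_table(-baseoid => $cdp_addr);
die 'walk: ' . $session->error . "\n" unless defined $table;

for my $oid (sort keys %$table) {
    # index is <local ifIndex>.<device index>; the value is the neighbor's
    # IP as four raw octets (Net::SNMP renders non-printable values as hex)
    my ($ifindex) = $oid =~ /\Q$cdp_addr\E\.(\d+)\./;
    my $raw = $table->{$oid} =~ /^0x/
            ? pack 'H*', substr($table->{$oid}, 2)
            : $table->{$oid};
    printf "ifIndex %s -> CDP neighbor %s\n", $ifindex, join('.', unpack 'C4', $raw);
}
$session->close;

If the port your MAC was learned on has a CDP neighbor that is a switch, repeat the BRIDGE-MIB query against that neighbor; when there is no switch neighbor, you have found the edge port.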

You'll basically need two scripts. You already have a script to gather your data, but it takes too long to find a single MAC. Presumably you have a complete list of every switch and its IP address. Loop over them all, building a database of the CAM tables. Then, when you need to search for a MAC, just query your pre-built database. Update it about once an hour or so and you should maintain reasonably accurate results. You can speed up the querying of several devices by running multiple SNMP walks in parallel.
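A rough sketch of the gathering script, again with Net::SNMP (the switch list and community are placeholders; in real use you would persist %cam to a file or database, and could fork the walks with something like Parallel::ForkManager to run them in parallel):

use strict;
use warnings;
use Net::SNMP;

my $community = 'public';                           # placeholder
my @switches  = qw(192.0.2.1 192.0.2.2 192.0.2.3);  # your full switch list
my %cam;                                            # mac => "switch:port"

for my $sw (@switches) {
    my ($session, $error) = Net::SNMP->session(
        -hostname => $sw, -community => $community, -version => 'snmpv2c');
    next unless defined $session;
    # dot1dTpFdbPort: one entry per learned MAC, value is the bridge port
    my $table = $session->get_table(-baseoid => '1.3.6.1.2.1.17.4.3.1.2');
    if (defined $table) {
        while (my ($oid, $port) = each %$table) {
            # the OID tail is the MAC address as six decimal octets
            my $mac = join ':', map { sprintf '%02x', $_ }
                      $oid =~ /(\d+)\.(\d+)\.(\d+)\.(\d+)\.(\d+)\.(\d+)$/;
            $cam{$mac} = "$sw:$port";
        }
    }
    $session->close;
}

# later, lookups are instant instead of a live crawl of the network
print $cam{'00:11:22:33:44:55'} // 'not found', "\n";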

Related

PowerShell Ping script for gathering host names

I am a network engineer and my scripting skills are basically nonexistent. I work in an enterprise data center (for only six months now) and we have an IP spreadsheet. With that spreadsheet come errors, and if they are not corrected they snowball and become bigger than they needed to be.
I have been tasked with finding the host name assigned to a given IP across our whole enterprise. Doing this by hand, one by one, would take months. Is there a way to write a script to do it for me? I am looking at thousands of IPs, if not tens of thousands, so any help would be greatly appreciated.
What I need is a script that will either ping or look in DNS to find the host name for any given IP address, and then write the output to a file so the IP spreadsheet can be updated.
I am at a loss here; like I said, any help would be greatly appreciated.
Give this a go:
Get-Content C:\temp\IP_Addresses.txt | ForEach-Object {([system.net.dns]::GetHostByAddress($_)).hostname >> c:\temp\hostname.txt}
Found it on TechNet. You will need to extract your IPs into a text list, and it does not de-duplicate, but it should be a nice starting point. Note that GetHostByAddress raises an error for any address with no reverse DNS entry, so those addresses simply won't appear in the output file.
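If you would rather do the same thing in Perl (to match the scripting elsewhere in this digest), here is a rough equivalent that also skips duplicates and marks addresses that do not resolve; the file paths are placeholders:

use strict;
use warnings;
use Socket qw(inet_aton AF_INET);

my %seen;
open my $in,  '<', 'C:/temp/IP_Addresses.txt' or die "input: $!";
open my $out, '>', 'C:/temp/hostname.txt'     or die "output: $!";
while (my $ip = <$in>) {
    chomp $ip;
    next if $ip eq '' or $seen{$ip}++;            # skip blanks and duplicates
    my $packed = inet_aton($ip) or next;          # skip malformed addresses
    my $name = gethostbyaddr($packed, AF_INET);   # reverse DNS lookup
    printf {$out} "%s,%s\n", $ip, $name // 'UNRESOLVED';
}
close $in;
close $out;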

IBM Datastage reports failure code 262148

I realize this is a bad question, but I don't know where else to turn.
Can someone point me to where I can find the list of "reports failure" codes for IBM? I've tried searching for it in the IBM documentation and with a general Google search, but this particular error is unique and I've never seen it before.
I'm trying to find out what code 262148 means.
Background:
I built a DataStage job that has:
ORACLE CONNECTOR -> TRANSFORMER -> HIERARCHICAL DATA
The intent is to pull data from an Oracle table and output the response of the select statement into a JSON file. I'm using the Hierarchical Data stage to produce it. When I test within the stage there are no problems; I see the JSON output.
However, when I run the job, it squawks:
reports failure code 262148
then the job aborts. There are no warnings, no signs, no errors prior to this line.
Until I know what it is, I can't troubleshoot.
If someone can point me to where the list of failure codes is, I can proceed.
Thanks!
Can someone point me to where I can find the list of reports failure codes for IBM?
Here you go:
https://www.ibm.com/support/knowledgecenter/en/ssw_ibm_i_73/rzahb/rzahbsrclist.htm
This list does not include your specific error code, but it does categorize many other codes and explains how the code breakdown works. It is not specifically for DataStage either; in my experience, though, IBM conventions are generally consistent across products. In that list, every code that starts with a 2 is a disk failure, so maybe run a disk checker. That's the best I've got as far as error codes.
Without knowledge of the inner workings of the product, there is not much more you can do beyond checking system health in general (especially disk, network, and permissions in this case). Personally, I prefer to go after internal knowledge whenever external knowledge proves insufficient. I would start with a network capture, as I'm sure there's a socket involved in the connection between the layers. Compare a capture taken while the select statement is run from within the Hierarchical Data stage with one taken while it runs from the job. There may be clues in there, like reset or refused connections.

accessing command line arguments for headless NetLogo in the Matlab extension

I'm running the MATLAB extension for NetLogo in headless (non-GUI) mode. I've downloaded the extension source and am trying to access the command-line arguments from the Java code in the extension. The command-line arguments are stored in LabInterface.Settings, and I would like to be able to access that object from the extension's Java code. I've been working on this for a couple of days but have had no success. It seems the extension mechanism is designed to create primitives to be used inside NetLogo. These primitives have knowledge of the different NetLogo objects, but there is no way for the extension's Java code to access them. I would appreciate any help.
I would like to be able to run multiple NetLogo-MATLAB analyses with varying parameters, in batch mode, across multiple machines, perhaps a Flux cluster. I need to run headless because of the batch nature. Sometimes the runs will be on the same machine, sometimes split across multiple machines, Flux or Condor. I know similar functionality exists in NetLogo for running varying parameters in a single session. Is there some way to split these across multiple machines?
Currently, I create a series of setup files for NetLogo. Each setup file represents the parameters that vary for that run. Then I submit each NetLogo/setup-file combination as a single run. Each run can be farmed out to a separate machine or processor. Adding the MATLAB extension complicates this. The MATLAB extension binds its server to port 9999; with multiple servers running, they all try to attach to port 9999, and this causes problems. I was hoping to use information from the setup-file name to create independent port numbers tied to the setup files. This way I could create a unique socket for each setup file, and hence a unique server connection for each NetLogo run.
NetLogo doesn't provide a facility for distributing model runs on a cluster, but various people have done it anyway. See:
http://ccl.northwestern.edu/netlogo/docs/faq.html#cluster
https://github.com/jurnix/netlogo-cluster
http://mass.aitia.ai/index.php/intro/meme
and past threads about it on the netlogo-users group. There is no single standard solution.
As for getting access to LabInterface.Settings, it appears to me from looking through the NetLogo source code that the settings object isn't actually stored anywhere. It's just handed off from method to method, ultimately to lab.Lab.run, without ever actually being kept. So trying to get access to the name of the setup file won't work.
So you'll need some other way to make the extension use unique port numbers. It seems to me there are any number of possible solutions. At the time you generate the setup file you know its name, so you could generate a port number at the same time and include it in the experiment definition contained in the file. Or you could pass a port number in a Java system property (using -D) when you start NetLogo. Or you could derive a port number from the process ID of the JVM process. Or you could have the extension try port 9999, see if it's already in use, and if it is, try a different port. That's just a few ideas... I could probably come up with ten more.
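For instance, the -D route needs only a tiny launcher. Here is a sketch in Perl that derives a port from the setup-file name and passes it to headless NetLogo; the property name matlab.port, the jar path and the model name are all made up for illustration, and on the extension side the Java code would read the value with System.getProperty("matlab.port", "9999"). Two different file names can hash to the same port, so the try-and-fall-back check above remains a sensible safety net:

use strict;
use warnings;

my $setup_file = shift @ARGV or die "usage: $0 setup-file.xml\n";

# derive a stable port in the range 10000-19999 from the setup-file name
my $port = 10000 + unpack('%32C*', $setup_file) % 10000;

# hypothetical property name and paths; adjust to your installation
exec 'java', "-Dmatlab.port=$port",
     '-cp', 'NetLogo.jar', 'org.nlogo.headless.Main',
     '--model', 'model.nlogo',
     '--setup-file', $setup_file;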

managing instances of a PowerCLI script

I wrote a PowerCLI script that can automatically deploy a new VM with some given parameters.
In a few words, the script connects to a given vCenter and starts the deployment from an existing template.
Can I regulate the number of instances of my script that run on the same computer?
Can I regulate the number of instances of my script that run on different computers when both instances are connected to the same vCenter?
To resolve the issue I thought of developing a server-side application that each instance of my script would connect to, so the server could coordinate all the instances, but I am not sure whether such a thing is possible in PowerCLI/PowerShell.
Virtually anything is poshable, or so they say. What you're describing may be overkill, however, depending on your scenario. Multiple instances of the same script will each run in their own PowerShell process, and Virtual Center allows hundreds of simultaneous connections. Of course, the content or context of your script might dictate that it shouldn't run in simultaneous instances. I haven't experimented, but it seems there are ways to determine the names of running PowerShell scripts, so if you keep the script name consistent on each computer, you could probably build in some checks along the lines of the linked answer.
But depending on your particulars, it might be easier to go a different way. For example, if you don't want the script to run simultaneously because you have hard-coded the name of a New-OSCustomizationSpec, a simple/kludgey solution might be to check for that new spec and disconnect/exit/roll back if it exists. A better solution might be to give the new spec a unique name. But the devil is in the details. Hope that helps a bit.

PLC Data Logging System: Some basic questions

I am currently trying to work with a PLC. I am using the Kepware data logger to collect the PLC log data. The output looks like this:
Time Stamp        Signal                                         Signal O/P
20130407104040.2  Channel2.Device1.Group1-RBT1_Y_WORK_COMP_RST   1
20130407104043.1  Channel2.Device1.Group1-RBT2_Y_WORK_COMP_RST   0
...
I have a few questions:
1) What do 'Channel', 'Device', 'Group', and 'RBT1_Y_WORK_COMP_RST' mean? What I got from the PLC class presentation is that RBT1 (which refers to a robot) is a machine, 'Y_WORK_COMP_RST' is one of its signals, and 1/0 is the signal state at a particular timestamp (like 20130407104040.2). But I could not work out from the log data file what 'Channel', 'Device1' and 'Group1' mean.
2) I learned in class that 'a PLC is a hard real-time system'. However, in the log data file I see that the cycle time often differs; sometimes it takes (say) 5 seconds, sometimes 7. Why is that?
3) Is the log data collected by Kepware the actual machine output, or is it taken from the PLC program?
NB: I am very new to this field and have taken very few classes, so my questions may be stupid. Please help me by giving basic, not-too-technical answers.
1) Channel2.Device1.Group1... is the path where your Kepware data logger can find your RBT1. If you add another device using another technology, you should get something like Channel3.Device1.Group1....
This is totally internal to the Kepware data logger and has nothing to do with your PLC. What interests you is the last part of the path: RBT1_Y_WORK_COMP_RST.
2) Are your PLC and the PC running the Kepware data logger time-synchronized?
3) You are connected to a PLC, so the Kepware data logger takes its data from the PLC; the PLC in turn has to be set up to collect the output of your machine, if that is what you want.
1) The channel is the type of communication; it may be any of several communication protocols, like Modbus or DeviceNet or whatever Kepware supports.
The device is the device Kepware communicates with, and the group is just a way to sort your items.
Items refer to your PLC addresses and let you name each item as you wish. This way you get an easy-to-read alias for your address.
2) 'Hard real-time system' means the PLC must react to a change on its inputs within a certain amount of time (ref: Wikipedia). Most of the time PLCs are programmed in Ladder; Ladder is sequential, and depending on the path the program takes, a scan may be longer or shorter. Also, the timestamp comes from Kepware, not the PLC, so it depends on Kepware's scan time as well.
3) Kepware connects to the PLC and requests the PLC addresses along with their output status.