Managing instances of a PowerCLI script - PowerShell

I wrote a PowerCLI script that can automatically deploy a new VM with some given parameters.
In short, the script connects to a given vCenter (VC) and starts the deployment from an existing template.
Can I regulate the number of instances of my script that run on the same computer?
Can I regulate the number of instances of my script that run on different computers when both instances are connected to the same VC?
To resolve the issue I thought of developing a server-side application that each instance of my script connects to, so the server can then handle all the instances, but I am not sure whether such a thing is possible in PowerCLI/PowerShell.

Virtually anything is poshable, or so they say. What you're describing may be overkill, however, depending on your scenario. Multiple instances of the same script will each run in their own PowerShell process, and Virtual Center allows hundreds of simultaneous connections. Of course, the content or context of your script might dictate that it shouldn't run in simultaneous instances. I haven't experimented, but there seem to be ways to determine the names of running PowerShell scripts, so if you keep the script name consistent on each computer, you could probably build in some checks along the lines of the linked answer.
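For example, here is a minimal sketch of such a check, assuming Windows PowerShell (powershell.exe; adjust the process name if you use pwsh) and that the script file name is kept consistent on every machine:

# Look for another powershell process whose command line mentions this script's file name.
$self = $PID
$scriptName = Split-Path -Leaf $PSCommandPath
$others = Get-CimInstance Win32_Process -Filter "Name = 'powershell.exe'" |
    Where-Object { $_.ProcessId -ne $self -and $_.CommandLine -match [regex]::Escape($scriptName) }
if ($others) {
    Write-Warning "Another instance of $scriptName is already running; exiting."
    exit 1
}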
But depending on your particulars, it might be easier to go a different way. For example, if you don't want the script to run simultaneously because you have hard-coded the name passed to New-OSCustomizationSpec, a simple/kludgey solution might be to do a check for that new spec, and disconnect/exit/roll back if it exists. A better solution might be to give the new spec a unique name. But the devil is in the details. Hope that helps a bit.
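To make that concrete, here is a sketch of both options; 'TempSpec' stands in for whatever name your script hard-codes:

# Kludgey check: if the hard-coded spec already exists, assume another
# instance is mid-deployment and back out.
if (Get-OSCustomizationSpec -Name 'TempSpec' -ErrorAction SilentlyContinue) {
    Write-Warning "Spec 'TempSpec' already exists; another deployment may be in flight."
    Disconnect-VIServer -Confirm:$false
    exit 1
}
# Better: build a unique name per run and pass it to New-OSCustomizationSpec
# wherever your script currently uses the hard-coded one.
$specName = 'deploy-{0}-{1}' -f $env:COMPUTERNAME, (Get-Date -Format yyyyMMddHHmmss)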

Related

How to have playwright workers execute separate logical paths in an NUnit test?

I have a Playwright test which I'm running via the following command:
dotnet test -- NUnit.NumberOfTestWorkers=2
From what I can gather, this will execute the same test in parallel with 2 workers. I'm curious if there's any way to have each worker go down a separate logical path, perhaps depending upon a worker id or something similar? E.g.:
if (workerId == 1)
    // do something
else if (workerId == 2)
    // do something else
What would be the best way to do this?
As to why I want this: I have a Blazor Server app which is a chat room, and I want to test the text updating from separate users (which would be represented by different worker ids, for example). I'd also like the solution to be scalable, e.g. I could specify 5000 or so workers to test how the chat room holds up at scale.
You appear to have misunderstood what the NumberOfTestWorkers setting does. It simply tells NUnit how many test workers to set up; it has no impact on how NUnit allocates tests among its workers when running in parallel, and it does not cause an individual test to run more than once.
In general, the kind of load testing you are trying to do isn't directly supported by NUnit. You would have to build something of your own, possibly using NUnit, or try a framework intended for that kind of testing.
A long time ago, there was something called pnunit, but I don't believe it is kept up to date any longer. See https://docs.plasticscm.com/technical-articles/pnunit-parallel-nunit
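If you do roll your own, one crude workaround is to drive the parallelism from outside the test runner: launch the same test several times and hand each process a user id through an environment variable. This is only a sketch; CHAT_USER_ID and the ChatRoomTest filter are invented names, the test would read the id back with Environment.GetEnvironmentVariable, and at 5000 users a dedicated load-testing tool would serve you better.

# Launch 5 copies of the same Playwright/NUnit test, each with its own id.
$procs = 1..5 | ForEach-Object {
    $env:CHAT_USER_ID = "$_"               # inherited by the child process
    Start-Process -FilePath 'dotnet' -ArgumentList 'test', '--filter', 'ChatRoomTest' -PassThru
}
$procs | Wait-Process                      # block until every run finishes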

Running NetLogo headless on the cloud

I've written a NetLogo model of agent movement in a landscape. I'd like to run this model from the command prompt, using AWS/Google Compute. The model uses about 500 MB of input rasters and shapefiles and writes rasters and CSV files. It also uses the gis, rnd, cf, table and csv extensions.
Would this be possible using the Controlling API? (https://github.com/NetLogo/NetLogo/wiki/Controlling-API). Can I just use the steps listed in the link? I have not tried running NetLogo from the command prompt before.
Also, I do not want to run BehaviorSpace as it is not relevant to this model.
A BehaviorSpace experiment can consist of only a single run, so BehaviorSpace may actually be relevant to you here. You only need to write one short XML file (or no new files at all, if the experiment setup you want is already part of the model) to do it this way.
Whereas if you go through the controlling API, you will have to write and compile Java (or Scala) code, which is a substantially more complex task.
But if you decide to go the controlling API route: yes, that works too, and it is documented, as you've already noticed.
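For reference, the BehaviorSpace route boils down to a single command; in this sketch the model file, setup file, output name and memory size are all placeholders to adapt:

# Run one BehaviorSpace experiment headlessly, reading the experiment
# definition from an XML setup file and writing results to CSV.
java -Xmx2048m -Dfile.encoding=UTF-8 -cp NetLogo.jar org.nlogo.headless.Main --model movement.nlogo --setup-file experiment.xml --table results.csv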

Stop 2 Conflicting Scripts Running At The Same Time

I have two scripts that do the same thing but for different companies, and during the process they both use the same tables.
It's imperative that only one script runs at a time; the timings sometimes vary greatly, and they are purposely scheduled rather close together. My question is: what is the best method to ensure these scripts do not run together? I tried a global field, set to 1 at the beginning of the script and 0 at the end, so that when the second script runs, it exits if the global field = 1.
This did not work, as both scripts are scheduled server-side, and I have read that a global field's value is local to the session in this instance.
I assume we are talking about FileMaker Server schedules.
Global variables are reset every time you run a scheduled script, and every scheduled script runs in its own session, so you cannot use them to ensure the scripts do not clash.
As far as I know, FileMaker Server does not run two schedules at the same time; the second script will be delayed until the first one finishes.
FileMaker Server can run simultaneous schedules if they are script schedules, so an overlap can occur.
What you need to do is set a field that is not a global, so that the schedules can check against its value.
A single-record table would be ideal for this.
Make sure that you commit after setting the field, or you may get record-locking issues.
Create an OS-level script that uses the fmsadmin command line to run one script, then run the second.
Set the FM Server schedule to run the OS script (which then runs the PSoS scripts).
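A sketch of that OS-level wrapper (the schedule numbers and credentials are invented; list your real ones with fmsadmin list schedules). One caveat: fmsadmin run schedule returns once the schedule is triggered, not when the script finishes, so the wait step below is the crude part to adapt:

fmsadmin run schedule 3 -u admin -p secret   # company A's script schedule
Start-Sleep -Seconds 600                     # crude: wait (or poll) until A is done
fmsadmin run schedule 4 -u admin -p secret   # company B's script schedule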

Accessing command-line arguments for headless NetLogo in the MATLAB extension

I'm running the MATLAB extension for NetLogo in headless (non-GUI) mode. I've downloaded the extension source and am trying to access the command-line arguments from the Java code in the extension. The command-line arguments are stored in LabInterface.Settings, and I would like to be able to access that object in the Java code of the extension. I've been working on this for a couple of days but have had no success. It seems the extension mechanism is designed to create primitives to be used inside NetLogo; these primitives have knowledge of the different NetLogo objects, but there is no way for the extension's Java code to access that settings object. I would appreciate any help.
I would like to be able to run multiple NetLogo-MATLAB analyses with varying parameters, in batch mode across multiple machines, perhaps a Flux cluster. I need to run headless because of the batch nature. Sometimes the runs will be on the same machine, sometimes split across multiple machines, Flux or Condor. I know similar functionality exists in NetLogo for running varying parameters in a single session. Is there some way to split these across multiple machines?
Currently, I create a series of setup files for NetLogo. Each setup file represents the parameters that vary for that run. Then I submit each NetLogo + setup-file combination as a single run. Each run can be farmed out to a separate machine or processor. Adding the MATLAB extension complicates this: the extension connects its server to port 9999, and with multiple servers running they all get attached to port 9999, which causes problems. I was hoping to get information from the setup-file name to create independent port numbers tied to the setup file names. That way I could create a unique socket for each setup file, and hence a unique server connection for each NetLogo run.
NetLogo doesn't provide a facility for distributing model runs on a cluster, but various people have done it anyway. See:
http://ccl.northwestern.edu/netlogo/docs/faq.html#cluster
https://github.com/jurnix/netlogo-cluster
http://mass.aitia.ai/index.php/intro/meme
and past threads about it on the netlogo-users group. There is no single standard solution.
As for getting access to LabInterface.Settings, it appears to me from looking through the NetLogo source code that the settings object isn't actually stored anywhere. It's just handed off from method to method, ultimately to lab.Lab.run, without ever actually being kept. So trying to get access to the name of the setup file won't work.
So you'll need some other way to make the extension generate unique port numbers. It seems to me there are any number of possible solutions. At the time you generate the setup file you know its name, so you could generate a port number at the same time and include it in the experiment definition contained in the file. Or you could pass a port number in a Java system property (using -D) when you start NetLogo. Or you could generate a port number based on the process ID of the JVM process. Or you could have the extension try port 9999, see if it's already in use, and if so try a different port number. That's just a few ideas... I could probably come up with ten more.
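As a sketch of the first idea, here is a hypothetical helper for the stage where you generate the setup files: it hashes the file name into a port in a private range, which you would then write into the experiment definition and hand to the extension. Collisions are unlikely for a handful of files, but worth checking across your generated list:

function Get-PortForSetupFile {
    param([string]$SetupFileName)
    # Fold a simple string hash into the range 40000-49999.
    $hash = 0
    foreach ($ch in $SetupFileName.ToCharArray()) {
        $hash = ($hash * 31 + [int]$ch) % 10000
    }
    return 40000 + $hash
}
Get-PortForSetupFile 'run-017.xml'   # the same name always yields the same port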

SNMP: How to find a MAC address in the network?

I've written a Perl script to query devices (switches) on the network; it's used to find a MAC address on the LAN. But I would like to improve it. Currently I have to give my script these parameters:
The MAC address searched for
The switch's IP
The community string
What can I do to only have to give the IP and community?
I know that it depends on my network topology.
There is a main stack of three switches (Cisco 3750), which is then linked to other switches (2960) in cascade.
Does anyone have an idea?
Edit: I would like to not specify the switch, just give the MAC address and the community.
You have to solve two problems... where will the script send the first query? Then, suppose you discover that a MAC address was learned through port 1/2/1 on that switch, and that port is connected to another switch. Somehow your script must be smart enough to query the switch attached to port 1/2/1, and to continue the same algorithm until there is no further switch to query.
What you are asking for is possible, but it would require you either to give the script network topology information in advance, or to discover it dynamically with CDP or LLDP. CDP always carries the neighbor's IP address; sometimes you can get that from LLDP. Both CDP and LLDP have MIB objects you can query.
You'll need two scripts, basically. You already have a script to gather your data, but it takes too long to find a single MAC. Presumably you have a complete list of every switch and its IP address. Loop over them all, building a database of the CAM tables. Then, when you need to search for a MAC, just query your pre-built database. Update it about once an hour or so and you should maintain pretty accurate results. You can speed up the querying of several devices by running multiple SNMP walks in parallel.
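Here is a rough sketch of building that database, assuming the Net-SNMP snmpwalk tool is on the path and that plain BRIDGE-MIB works on your switches (on Cisco gear you may need the community@vlan indexing trick to see per-VLAN tables); the switch IPs and community are placeholders:

$community = 'public'
$switches  = @('10.0.0.1', '10.0.0.2', '10.0.0.3')
$fdbPortOid = '.1.3.6.1.2.1.17.4.3.1.2'   # dot1dTpFdbPort: MAC (as OID suffix) -> bridge port

# One background job per switch, so the walks run in parallel.
$jobs = foreach ($switch in $switches) {
    Start-Job -ArgumentList $switch, $community, $fdbPortOid -ScriptBlock {
        param($switch, $community, $oid)
        # Each output line ends like: ...17.4.3.1.2.0.17.34.51.68.85 = INTEGER: 12
        snmpwalk -v2c -c $community -On $switch $oid | ForEach-Object {
            if ($_ -match '\.(\d+\.\d+\.\d+\.\d+\.\d+\.\d+) = INTEGER: (\d+)') {
                $mac = ($Matches[1] -split '\.' | ForEach-Object { '{0:X2}' -f [int]$_ }) -join ':'
                [pscustomobject]@{ Mac = $mac; Switch = $switch; Port = $Matches[2] }
            }
        }
    }
}
$camTable = $jobs | Wait-Job | Receive-Job
$camTable | Where-Object Mac -eq '00:11:22:33:44:55'   # lookups are now instant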