Satellite 6 Job Invocation search query: using facts (faster)

I'm using Satellite 6 to manage EL 5, 6, and 7.x hosts.
I've been trying to perform a Job Invocation (via Monitor -> Jobs -> Run Jobs) on a set of servers, based on a custom fact that I wrote (the fact is called ad_domain and basically tells you whether the host is Active Directory-joined or not).
However, I can't figure out how to do this. Is this even possible?
I'm a Satellite newbie and don't even know what parameters I can use in the Search Query. Can anyone enlighten me? Is it possible to specify a Facter fact value (or values) in the Search Query so that it resolves only to hosts matching that value?
Appreciate your help in advance,
Sue

You can try
facts.ad_domain = value
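For example, assuming your fact reports the domain name (the value `example.com` and the `os` filter below are illustrative, not from the original thread), queries in the Search Query box might look like:

```
facts.ad_domain = example.com
facts.ad_domain = example.com and os = RedHat
```

The same `facts.<name>` syntax works in the host search on the Hosts page, so you can test the query there before running the job against the matching hosts.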

Related

Create Windows DNS PTR records for existing A records using Powershell

Does anyone know an easy way to force PTR record generation for existing A records that currently don't have them?
I have a couple of scenarios where this would be beneficial to me.
Members of my team have created records over time using the DNS MMC snap-in, but forgot to check the "Update PTR record" option, and
I have many reverse lookup zones for individual subnets that I'm trying to collapse into an also existing parent reverse zone... e.g. 1.192.10.in-addr.arpa -> 192.10.in-addr.arpa.
My thought process was to simply run through the existing A records and call a built-in DNS cmdlet with some sort of -UpdatePtr-type flag, similar to what can be done in the MMC snap-in, but I guess nothing like that exists in PowerShell.
The Add-DnsServerResourceRecord cmdlet has the -CreatePtr parameter, which sounds ideal, but that seems to be supported only when creating new records.
Is my only option to manually create the PTR records using Add-DnsServerResourceRecordPtr or something similar, or perhaps even to delete each A record entirely and recreate it with -CreatePtr?
I can do the former by following something similar to this:
https://gist.github.com/msoler8785/498332c622f93ace02b5d05e47845001, but I have hundreds of these zones I'm trying to clean up from years of acquisitions, etc. That code uses a static zone, so mine would have to determine the correct reverse zone from each IP address, which makes it considerably more complex.
Anyway, hoping I'm just missing something, or someone else has already figured this out.
::EDIT::
I found this, which uses WMI/PowerShell to do the first method, but I imagine it could easily be converted to the native cmdlets.
https://serverfault.com/questions/163612/create-ptr-records-from-existing-a-records-windows-dns
Again, hoping someone has already figured this out.
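The tricky part the asker mentions, picking the correct reverse zone for each IP when zones are being collapsed, is separable from the DNS calls themselves. Here is a sketch of that matching logic in Python (illustrative only; it assumes octet-boundary in-addr.arpa zones and stands apart from the actual record creation, which would still go through Add-DnsServerResourceRecordPtr):

```python
import ipaddress

def ptr_name(ip):
    """Full PTR record name for an IPv4 address, e.g. 5.1.192.10.in-addr.arpa."""
    return ipaddress.ip_address(ip).reverse_pointer

def match_reverse_zone(ip, zones):
    """Pick the most specific existing reverse zone that contains this address.
    Returns (zone, relative_record_name) or (None, None) if no zone matches.
    Only classful octet-boundary zones are considered here."""
    labels = ptr_name(ip).split(".")
    # Candidates, most specific first: 1.192.10.in-addr.arpa, 192.10..., 10...
    for i in range(1, 4):  # strip 1..3 leading octets
        candidate = ".".join(labels[i:])
        if candidate in zones:
            return candidate, ".".join(labels[:i])
    return None, None
```

With the zone and relative name in hand, the per-record creation would be a call along the lines of `Add-DnsServerResourceRecordPtr -ZoneName $zone -Name $rel -PtrDomainName $fqdn` inside the loop over A records.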

Managing instances of a PowerCLI script

I wrote a PowerCLI script that automatically deploys a new VM with some given parameters.
In a few words, the script connects to a given vCenter and starts the deployment from an existing template.
Can I regulate the number of instances of my script that run on the same computer?
Can I regulate the number of instances of my script that run on different computers but are connected to the same vCenter?
To resolve the issue I thought of developing a server-side application that each instance of my script would connect to, with the server then handling all the instances, but I'm not sure whether such a thing is possible in PowerCLI/PowerShell.
Virtually anything is poshable, or so they say. What you're describing may be overkill, however, depending on your scenario. Multiple instances of the same script will each run in their own PowerShell process, and vCenter allows hundreds of simultaneous connections. Of course, the content or context of your script might dictate that it shouldn't run in simultaneous instances. I haven't experimented, but there seem to be ways to determine the names of running PowerShell scripts, so if you keep the script name consistent on each computer, you could probably build in some checks along the lines of the linked answer.
But depending on your particulars, it might be easier to go a different way. For example, if you don't want the script to run simultaneously because you have hard-coded the name of an OSCustomizationSpec created with New-OSCustomizationSpec, a simple/kludgey solution might be to check for that spec and disconnect/exit/roll back if it already exists. A better solution might be to give the new spec a unique name. But the devil is in the details. Hope that helps a bit.
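The "check and bail out if another instance is already at work" pattern the answer describes can be sketched with an exclusive lock file. This is illustrative Python (the path and function names are mine, not from the thread); the same atomic create-if-absent idea ports to PowerShell with `New-Item` or `[System.IO.File]::Open` in exclusive mode:

```python
import os, sys

LOCK_PATH = "/tmp/deploy-vm.lock"  # illustrative path

def acquire_lock(path=LOCK_PATH):
    """Return a file descriptor if we are the only instance, else None."""
    try:
        # O_EXCL makes creation atomic: open fails if the file already exists.
        return os.open(path, os.O_CREAT | os.O_EXCL | os.O_WRONLY)
    except FileExistsError:
        return None

def release_lock(fd, path=LOCK_PATH):
    os.close(fd)
    os.remove(path)

fd = acquire_lock()
if fd is None:
    sys.exit("another deployment is already in progress")
try:
    pass  # ... connect to vCenter and run the deployment here ...
finally:
    release_lock(fd)
```

This only serializes instances on one computer; coordinating across computers would need the lock to live somewhere shared, which is where the asker's server-side idea (or simply a unique spec name per run) comes back in.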

Microsoft Dynamics CRM: find cases that have been closed within 2 days

I'm trying to generate a list of all cases in the system that have been closed within 2 days, but I don't know the best way to do it, apart from running through all the cases and comparing the created-on date with the resolved-on date. Are there any other ways to do it? Is there a built-in function for such a trivial task?
Thank you
When a Case/Incident is resolved, a new record is created: Case Resolution.
This is a semi-hidden CRM record type.
You can use Advanced Find to create a view over these Case Resolution records.
Unfortunately you can't then include this view in your list of Incident views (along with Active Cases, Resolved Cases, etc.).
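Once the Case Resolution records (logical name `incidentresolution`, if memory serves) are retrieved alongside the case creation dates, the "closed within 2 days" filter itself is just a date comparison. A minimal sketch in Python, with field names chosen for illustration rather than taken from the thread:

```python
from datetime import datetime, timedelta

def closed_within(cases, days=2):
    """Keep cases whose resolution came within `days` of creation.
    `cases` is a list of dicts with 'createdon' and 'resolvedon' datetimes,
    e.g. joined from Incident and Case Resolution records."""
    limit = timedelta(days=days)
    return [c for c in cases if c["resolvedon"] - c["createdon"] <= limit]
```

Doing the comparison client-side avoids depending on whether your CRM version supports comparing two columns directly in a query.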

Job Shop: Arena

I'll try and keep it simple: I've started using Arena Simulation for study purposes, and up until now I've been unable to find any conclusive documentation or tutorial on how to create a job shop. If you could direct me to specific, practical documentation, or otherwise a helpful example that could get me started, that would be most helpful.
My problem: a given number of jobs must be processed through a given number of resources (machines); each job takes a different route, and each has a different work time depending on the resource it is using.
Ex: for job_1 to be finished, it must first use resource_1 with 5 seconds of execution time, then resource_3 with 3 seconds, and finally resource_9 with 1 second. Of course, a different job has a totally different route and different execution times.
Here's an MS thesis I found...
http://www.scribd.com/doc/54342479/Simulation-of-Job-Shop-using-Arena-Mini-Project-Report
ADDENDUM:
The basic idea is to use ASSIGN to label the jobs with attribute variables reflecting their routing requirements. Those attributes can be read and used by decision blocks to route the job to the appropriate next workstation or to the exit. Perhaps these notes will be more useful to you than the MS thesis cited above. That's about all I can give you since I haven't used Arena for several years now -- I no longer have access to it and can't put together any specific examples.
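The ASSIGN-then-DECIDE idea from the answer can be illustrated outside Arena. In this Python sketch (job and resource names taken from the question's example; the structure is mine), each job carries a route attribute that a dispatcher reads to send it to the next station:

```python
# Each job is "assigned" a route attribute: an ordered list of
# (resource, processing_time) pairs, mirroring Arena's ASSIGN block.
routes = {
    "job_1": [("resource_1", 5), ("resource_3", 3), ("resource_9", 1)],
    "job_2": [("resource_2", 4), ("resource_1", 2)],
}

def process(job, routes):
    """Walk a job through its route, the way DECIDE blocks read the route
    attribute to pick the next station, and return total processing time."""
    total = 0
    for resource, seconds in routes[job]:
        # In Arena this step is a PROCESS block that seizes the resource,
        # delays for `seconds`, then releases it.
        total += seconds
    return total
```

This ignores queueing and contention between jobs, which is exactly what Arena's seize/delay/release mechanics add on top of the routing logic.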

SNMP: how to find a MAC address on the network?

I've written a Perl script to query devices (switches) on the network; it's used to find a MAC address on the LAN. But I would like to improve it. Right now I have to give my script these parameters:
The MAC address to search for
The switch's IP
The community string
What can I do so that I only have to give the IP and community?
I know the answer depends on my network topology.
There is a main stack of 3 switches (Cisco 3750), with other switches (2960) linked off it in cascade.
Does anyone have an idea?
Edit: I would like to not have to specify the switch at all; just give the MAC address and the community.
You have to solve two problems. First, where will the script send the initial query? Then, suppose you discover that the MAC address was learned through port 1/2/1 on that switch, and that port is connected to another switch: somehow your script must be smart enough to query the switch attached to port 1/2/1. Continue the same algorithm until there is no further switch to query.
What you are asking for is possible, but it would require you to either give the script network topology information in advance, or discover it dynamically with CDP or LLDP. CDP always carries the neighbor's IP address; sometimes you can get that from LLDP as well. Both CDP and LLDP have MIB objects you can query.
You'll need two scripts, basically. You already have a script to gather your data, but it takes too long to find a single MAC. Presumably you have a complete list of every switch and its IP address. Loop over them all, building a database of the CAM tables; then, when you need to search for a MAC, just query your pre-built database. Update it about once an hour or so and you should maintain pretty accurate results. You can speed up the querying of several devices by running multiple SNMP walks in parallel.
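The switch-to-switch chase described in the first answer is a short recursive walk once the SNMP data is in hand. This sketch is in Python rather than the asker's Perl, and the tables are stubbed dicts standing in for real queries (CAM entries would come from BRIDGE-MIB's dot1dTpFdbPort, neighbor info from the CDP or LLDP MIBs); all addresses and port names are invented for illustration:

```python
# Stub topology: per-switch CAM tables (MAC -> learning port) and
# CDP/LLDP neighbor tables ((switch, port) -> downstream switch IP).
cam = {
    "10.0.0.1": {"aa:bb:cc:dd:ee:ff": "Gi1/0/1"},
    "10.0.0.2": {"aa:bb:cc:dd:ee:ff": "Gi0/5"},
}
neighbors = {
    ("10.0.0.1", "Gi1/0/1"): "10.0.0.2",  # uplink to a cascaded 2960
}

def trace_mac(mac, switch):
    """Follow a MAC from switch to switch until the learning port has no
    switch neighbor; that port is (probably) the host's access port."""
    port = cam[switch].get(mac)
    if port is None:
        return None
    downstream = neighbors.get((switch, port))
    if downstream:
        return trace_mac(mac, downstream)
    return switch, port
```

With the second answer's pre-built database approach, the same function runs against cached tables instead of live SNMP, so a lookup is instant and only the hourly refresh pays the walking cost.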