SCOM Rule for Fake Alerts

I am working on a tool to generate fake data for System Center Operations Manager for internal testing purposes. I wrote a script as part of a discovery that is able to create an instance of any class I want and make SCOM fake-discover it. Currently, I'm using a class for AD Printer. Now the next step is to somehow create alerts on behalf of the Printer. For this, I wrote a rule targeted at the AD Printer, which reads from the logs to detect when it should be fired. The logs are being written to from a PowerShell script. However, I see no results. But when I target the same rule to All Windows Computers, I see the alerts.
From what I understand, the rule will run on all agents that have an instance of the target class. Since I fake-discovered the AD Printer on this agent (which also happens to be the Management Server), shouldn't the rule run on it?
Any other suggestions on how I can achieve this are welcome as well.
PS. I probably cannot share any of my code as I am under an NDA, but I can clarify my approach further, if needed.

Yes, the PowerShell script should run on the agents which have instances of the AD Printer. I recommend checking the OperationsManager event log for script errors. The easiest way to generate (fake) alerts is to set up a simple, event-based text log monitor: one specific word can trigger the unhealthy state (which in turn generates an alert), while another word resets the monitor to the healthy state. You can specify criteria for both events. Look at this blog post for further details.
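As a rough sketch of the kind of script that feeds such a text log monitor (the log path and the trigger/reset keywords are placeholders and must match whatever the monitor's data source is configured to look for):

```powershell
# Minimal sketch: write trigger/reset keywords to the text log a SCOM
# text-log monitor is configured to watch. Path and keywords are
# placeholders - they must match what the monitor expects.
$logFile = 'C:\FakeAlerts\printer.log'

# Make sure the log directory exists
New-Item -ItemType Directory -Path (Split-Path $logFile) -Force | Out-Null

function Set-FakePrinterState {
    param(
        [ValidateSet('Unhealthy', 'Healthy')]
        [string]$State
    )
    # 'PRINTER_ERROR' flips the monitor to unhealthy (raising the alert),
    # 'PRINTER_OK' resets it to healthy - both keywords are placeholders.
    $keyword = if ($State -eq 'Unhealthy') { 'PRINTER_ERROR' } else { 'PRINTER_OK' }
    "$(Get-Date -Format o) $keyword fake event for AD Printer" | Add-Content -Path $logFile
}

# Raise a fake alert, then clear it a minute later
Set-FakePrinterState -State 'Unhealthy'
Start-Sleep -Seconds 60
Set-FakePrinterState -State 'Healthy'
```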

Related

Way to pull Exchange permissions

Maybe an easy question for someone who knows PowerShell and O365 well. Is there a way to configure things so that when a command is run, for example to pull all access to a shared mailbox, either a service account or the user running the script gets permissioned each time just for that query? I looked at connecting an SA to the script, but giving it the specific permissions permanently would leave it with too much access to O365. So the account is not permissioned for the access by default, but every time the script/command is run it gets permissioned for that inquiry, returns the results, and then loses access again until the next time it's called.
Looking to add this type of function to a script so that only the helpdesk people see the information, and only when they run the script and the specific command in the script.
Hopefully that's explained clearly enough :)
Thanks all.
I don't think there is a way to do that natively. You could fiddle something together with Azure PIM, but that's more for one-off operations than minute actions that are done often.
You could however circumvent that by making some sort of web interface that triggers commands on another server using a privileged SA and returns the output through the web interface. You can just make it so that the interface can only request one specific command to be run, and the only thing you have to worry about is sanitizing your parameters well to avoid unwanted injection.
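As a rough sketch of what that single back-end command could look like (this assumes the ExchangeOnlineManagement module and certificate-based app authentication for the service account; the domain pattern, environment variable names, and filter below are all placeholders):

```powershell
# Minimal sketch of the one command the back end runs under the privileged
# service account. Only one query shape is allowed, and the mailbox
# identity is validated before it reaches any cmdlet.
param(
    [Parameter(Mandatory)]
    [ValidatePattern('^[A-Za-z0-9._-]+@contoso\.com$')]  # placeholder domain
    [string]$Mailbox
)

# Assumption: the SA authenticates with a certificate-based app registration.
Import-Module ExchangeOnlineManagement
Connect-ExchangeOnline -CertificateThumbprint $env:EXO_CERT_THUMBPRINT `
                       -AppId $env:EXO_APP_ID `
                       -Organization 'contoso.onmicrosoft.com' -ShowBanner:$false

try {
    # The only operation this endpoint ever performs
    Get-MailboxPermission -Identity $Mailbox |
        Where-Object { $_.User -notlike 'NT AUTHORITY\*' } |
        Select-Object User, AccessRights, IsInherited
}
finally {
    Disconnect-ExchangeOnline -Confirm:$false
}
```

The helpdesk user never touches the SA credentials: the web interface just passes the mailbox parameter through and renders whatever the script returns.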
Alternatively, what are you trying to protect against by restricting access so much? Isn't it something that could be done more easily using a read-only account and some clearly defined policy? If your helpdesk people overstep their allowed scope, that's a management/HR problem as much as a technical one.

How to automate bots to monitor for successful queues on Orchestrator?

I have a project that deals with queues being loaded successfully and unsuccessfully. At the moment I monitor this manually, which is tedious and also produces false positives: Orchestrator can state that new queue items have been added, but when I access the actual job (process), nothing has been added.
I would like to know, is there a way to monitor queue success and failure rates in Orchestrator instead of monitoring it manually?
You can access pretty much any information via the Orchestrator API.
You can find the "Orchestrator HTTP Request" activity, which will allow you to access any relevant endpoint.
Note that the provisioned Robot in Orchestrator needs to have the right access permissions, so please have a look at what roles are associated with the Robot user.
The API reference can be found here:
https://docs.uipath.com/orchestrator/reference
You will see it mentions Swagger, which in turn will give you all the information you need to access the relevant APIs.

Azure WebJob Logging/Emailing

I've converted a console app into a scheduled WebJob. All is working well, but I'm having a little trouble figuring out how to accomplish the error logging/emailing I'd like to have.
1.) I am using Console.WriteLine and Console.Error.WriteLine to create log messages. I see these displayed in the portal when I go to WebJob Run Details. Is there any way to have these logs saved to files somewhere? I added my storage account connection string as AzureWebJobsDashboard and AzureWebJobsStorage. But this appears to have just created an "azure-webjobs-dashboard" blob container that only has a "version" file in it.
2.) Is there a way to get line numbers to show up for exceptions in the WebJob log?
3.) What is the best way to send emails from within the WebJob console app? For example, if a certain condition occurs, I may want to have it send me and/or someone else (depending on what the condition is) an email along with logging the condition using Console.WriteLine or Console.Error.WriteLine. I've seen info on triggering emails via a queue or triggering emails on job failure, but what is the best way to just send an email directly in your console app code when it's running as a WebJob?
How is your job being scheduled? It sounds like you're using the WebJobs SDK - are you using the TimerTrigger for scheduling (from the Extensions library)? That extensions library also contains a new SendGrid binding that you can use to send emails from your job functions. We plan on expanding on that to also facilitate failure notifications like you describe, but it's not there yet. Nothing stops you from building something yourself however, using the new JobHostConfiguration.Tracing.Trace to plug in your own TraceWriter that you can use to catch errors/warnings and act as you see fit. All of this is in the beta1 pre-release.
Using that approach of plugging in a custom TraceWriter, I've been thinking of writing one that allows you to specify an error threshold/sliding window, and if the error rate exceeds, an email or other notification will be sent. All the pieces are there for this, just haven't done it yet :)
Regarding logging, the job logs (including your Console.WriteLines) are actually written to disk in your Web App (details here). You should be able to see them if you browse your site log directory. However, if you're using the SDK and Dashboard, you can also use the TextWriter/TraceWriter bindings for logging. These logs will be written to your storage account and will show up in the Dashboard Functions page per invocation. Here's an example.
Logs to files: You can use a custom TraceWriter https://gist.github.com/aaronhoffman/3e319cf519eb8bf76c8f3e4fa6f1b4ae
Exception Stack Trace Line Numbers: You will need to make sure your project is built with debug info set to "full" (more info http://aaron-hoffman.blogspot.com/2016/07/get-line-numbers-in-exception-stack.html)
Sending email: SendGrid, Amazon Simple Email Service (SES), etc.

Using PowerShell to create automated systems

I'm looking to develop an automated notification and log-off system that notifies and logs off accounts on a computer. So far I have planned an example: when a class is scheduled, all accounts are logged off except those registered for the scheduled class. It would notify the logged-in users a certain period of time before the class time and log them off just before the class time. Or it could limit their access, for example to the printer, once the class has started.
So my question is: can I use PowerShell to develop this project? How far can it be useful, or should I think about using Python instead?
Thanks Fellas!
I'm not sure PowerShell brings anything special to the party. What you are talking about would require a PowerShell session running in the background, perhaps even tying into some sort of eventing, such as the timer class. It might be just as easy to automate something using the Task Scheduler: at the appointed time, check the logged-on users and, if they don't meet the requirement, log them off. You could use PowerShell to create the tasks and handle the processing, or really any other language.
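As a rough illustration of that task-scheduler approach (the allowed-user list is a placeholder, and the quser parsing below is deliberately naive):

```powershell
# Rough sketch: log off every interactive user who is not registered for
# the class. The allowed-user list is a placeholder.
$allowedUsers = @('student01', 'student02', 'teacher01')

# Naive quser parsing - assumes active sessions where the SESSIONNAME
# column is present (disconnected sessions shift the columns).
$sessions = quser 2>$null | Select-Object -Skip 1 | ForEach-Object {
    $parts = ($_ -replace '^>', '').Trim() -split '\s+'
    [pscustomobject]@{ User = $parts[0]; Id = $parts[2] }
}

foreach ($session in $sessions) {
    if ($allowedUsers -notcontains $session.User) {
        Write-Output "Logging off $($session.User) (session $($session.Id))"
        logoff $session.Id
    }
}
```

The script could then be registered with Register-ScheduledTask (or schtasks.exe) to run at the class start time, and a similar task a few minutes earlier could warn users, for example with msg.exe.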

Can Microsoft Windows Workflow route to specific workstations?

I want to write a workflow application that routes a link to a document. The routing is based upon machines not users because I don't know who will ever be at a given post. For example, I have a form. It is initially filled out in location A. I now want it to go to location B and have them fill out the rest. Finally, it goes to location C where a supervisor will approve it.
None of these locations has a known user. That is, I don't know who it will be. I only know that whoever it is is authorized (they are assigned to the workstation and are approved to be there).
Will Microsoft Windows Workflow do this or do I need to build my own workflow based on SQL Server, IP Addresses, and so forth?
Also, how would the user at a workstation be notified that a document had been sent to their machine?
Thanks for any help.
I think if I were approaching this problem, Workflow would work for it. What you want is a state machine that has three states:
A Start
B Completing
C Approving
However, Workflow needs to run in one central place (trust me on this: you only want one workflow runtime running at once, otherwise the same bit of work can be done multiple times; see our questions on the MSDN forum). So a central server running the workflow is the answer.
How you present this to the users can be done in multiple ways. Dave suggested using an ASP.NET site to identify the machines that are doing the work, which is probably how I would do it. However, you could also write a Windows Forms client that does the same thing. This would require something like SOAP / WCF to facilitate communication between the client form applications and the central workflow service. It would have the advantage that you could use a system tray icon to alert the user.
You might also want to look at human workflow engines, as they are designed to do things such as this (and more). I'm most familiar with PNMsoft's Sequence.
You can design a generic "routing" workflow that will cause data to go to a workstation. The easiest way to do this would be to embed the workflow in an ASP.NET application. Each workstation should visit the application with a workstation ID in the querystring:
http://myapp/default.aspx?wid=01
When the form is filled out at workstation A, the workflow running in the web app can enter it into the "work bin" of the next workstation. Anyone sitting at the computer for which the form is destined will see it appear in their list of forms to review. You can use AJAX to make it slick and auto-updating.