Azure WebJob Logging/Emailing

I've converted a console app into a scheduled WebJob. All is working well, but I'm having a little trouble figuring out how to accomplish the error logging/emailing I'd like to have.
1.) I am using Console.WriteLine and Console.Error.WriteLine to create log messages. I see these displayed in the portal when I go to WebJob Run Details. Is there any way to have these logs saved to files somewhere? I added my storage account connection string as AzureWebJobsDashboard and AzureWebJobsStorage. But this appears to have just created an "azure-webjobs-dashboard" blob container that only has a "version" file in it.
2.) Is there a way to get line numbers to show up for exceptions in the WebJob log?
3.) What is the best way to send emails from within the WebJob console app? For example, if a certain condition occurs, I may want to have it send me and/or someone else (depending on what the condition is) an email along with logging the condition using Console.WriteLine or Console.Error.WriteLine. I've seen info on triggering emails via a queue or triggering emails on job failure, but what is the best way to just send an email directly in your console app code when it's running as a WebJob?

How is your job being scheduled? It sounds like you're using the WebJobs SDK - are you using the TimerTrigger for scheduling (from the Extensions library)? That extensions library also contains a new SendGrid binding that you can use to send emails from your job functions. We plan on expanding that to also facilitate failure notifications like you describe, but it's not there yet. Nothing stops you from building something yourself, however, using the new JobHostConfiguration.Tracing.Trace to plug in your own TraceWriter that can catch errors/warnings and act as you see fit. All of this is in the beta1 pre-release.
Using that approach of plugging in a custom TraceWriter, I've been thinking of writing one that lets you specify an error threshold/sliding window, and if the error rate exceeds that threshold, an email or other notification is sent. All the pieces are there for this; I just haven't done it yet :)
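To give an idea of the plumbing, here is a minimal sketch of a custom TraceWriter wired into the host. The Trace(TraceEvent) override matches the shape of later 1.x SDK releases, and the EmailNotifier helper is hypothetical - the exact members available in the beta1 bits may differ slightly:
```csharp
using System;
using System.Diagnostics;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Host;

// A TraceWriter that reacts to errors (sketch; member names follow later 1.x SDK releases).
public class ErrorNotifyingTraceWriter : TraceWriter
{
    public ErrorNotifyingTraceWriter(TraceLevel level) : base(level) { }

    public override void Trace(TraceEvent traceEvent)
    {
        // Persist or forward the message however you like (file, table storage, etc.).
        Console.WriteLine("[{0}] {1}", traceEvent.Level, traceEvent.Message);

        if (traceEvent.Level == TraceLevel.Error)
        {
            // Hypothetical helper - e.g. the SendGrid/SMTP sketch further down.
            // EmailNotifier.SendAsync("ops@example.com", "WebJob error", traceEvent.Exception?.ToString()).Wait();
        }
    }
}

class Program
{
    static void Main()
    {
        var config = new JobHostConfiguration();
        // beta1 exposed config.Tracing.Trace; later releases use a Tracers collection instead.
        config.Tracing.Tracers.Add(new ErrorNotifyingTraceWriter(TraceLevel.Warning));
        new JobHost(config).RunAndBlock();
    }
}
```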
Regarding logging, the job logs (including your Console.WriteLines) are actually written to disk in your Web App (details here). You should be able to see them if you browse your site log directory. However, if you're using the SDK and Dashboard, you can also use the TextWriter/TraceWriter bindings for logging. These logs will be written to your storage account and will show up in the Dashboard Functions page per invocation. Here's an example.
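A minimal sketch of the TextWriter binding, assuming a queue-triggered function (the queue name is hypothetical); anything written to the log parameter ends up in the Dashboard for that invocation:
```csharp
using System.IO;
using Microsoft.Azure.WebJobs;

public class Functions
{
    // "input-queue" is a hypothetical queue name. Anything written to 'log' is stored in the
    // AzureWebJobsDashboard storage account and shows up on the Dashboard Functions page per invocation.
    public static void ProcessQueueMessage([QueueTrigger("input-queue")] string message, TextWriter log)
    {
        log.WriteLine("Processing message: {0}", message);
    }
}
```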

Logs to files: You can use a custom TraceWriter https://gist.github.com/aaronhoffman/3e319cf519eb8bf76c8f3e4fa6f1b4ae
Exception Stack Trace Line Numbers: You will need to make sure your project is built with debug info set to "full" (more info http://aaron-hoffman.blogspot.com/2016/07/get-line-numbers-in-exception-stack.html)
Sending email: use a provider such as SendGrid, Amazon Simple Email Service (SES), etc. (sketch below)
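For the email question, a minimal sketch of sending mail directly from the console code, assuming the SendGrid v3 C# client (the "SendGrid" NuGet package); System.Net.Mail.SmtpClient against an SMTP relay works just as well if you don't want a third-party dependency. The addresses and environment variable name are placeholders:
```csharp
using System;
using System.Threading.Tasks;
using SendGrid;
using SendGrid.Helpers.Mail;

public static class EmailNotifier
{
    // Sketch using the SendGrid v3 client; sender address and API key variable are placeholders.
    public static async Task SendAsync(string toAddress, string subject, string body)
    {
        var client = new SendGridClient(Environment.GetEnvironmentVariable("SENDGRID_API_KEY"));
        var message = MailHelper.CreateSingleEmail(
            new EmailAddress("webjob@example.com"),
            new EmailAddress(toAddress),
            subject,
            plainTextContent: body,
            htmlContent: body);
        await client.SendEmailAsync(message);
    }
}
```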

Related

how do you dynamically insert console logs on a development server

When you're developing on localhost, you have full access to a terminal and can log anything you want. But in a project I work on (I'm new to team collaboration as a whole), they use something called Weavescope to view the logs that developers added at the time of coding.
The difference from logging locally is that every time you change the code, you have to send a pull request, get it approved, merged, and deployed before you finally see the output in the log. Sometimes the state of local and deployed environments doesn't match, and it really makes us want to add log statements on the development server dynamically without going through all those cycles again. Is there any existing solution that lets us insert quick log statements without the routine PR, merge, deploy cycle?
EDIT: From the discussions below, I think the tool I am looking for is more or less a logging-statement code injection tool: a tool that keeps track of the log statements I insert into the production code and can toggle them on/off with a single command.
This seems like something that logging levels can help with (unless I'm misunderstanding). Something I typically do is leave debug-level log messages on commonly problematic or complex functions, but raise the logging level when I move out of local. Depending on the app and your access, these can sometimes be configured per environment rather than in the build.
For example, there are Spring libraries that let you import a static logger and set the level of each message you log. Locally you can keep the level at DEBUG, in UAT the level can be INFO, and if you only want ERROR or WARN messages in prod you can separate that too. At deployment time you set which environment it is and keep a separate app.properties or yml file per environment storing the desired level for each.
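A minimal sketch of what those per-environment files might look like, assuming Spring Boot-style logging properties (file names and package are illustrative; the active file is chosen via spring.profiles.active at deployment time):
```properties
# application-local.properties - verbose logging while developing
logging.level.root=INFO
logging.level.com.example.orders=DEBUG

# application-prod.properties - only warnings and errors in production
logging.level.root=WARN
logging.level.com.example.orders=ERROR
```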
Of course there is a solution for fast-paced code changes.
Maybe this kind of hot reloading is what you're looking for. This way you can insert new calls to a logger or console.log quickly.
Although it does come with a disclaimer from the author.
I honestly haven't looked into whether this method of hot reloading would provide stable production zero-downtime deploys, however my "gut feel" says don't do it. And production deployments are probably one area where we should stick to known, trusted procedures unless we have good reason.

how to automate bots to monitor for successful queues on orchestrator?

I have a project that deals with queue items being loaded successfully and unsuccessfully. At the moment I monitor this manually, which is tedious and also prone to false positives: Orchestrator can state that new queue items have been added, but when I access the actual job (process), nothing has been added.
Is there a way to monitor queue success and failure rates in Orchestrator instead of monitoring it manually?
You can access pretty much any information via the Orchestrator API.
You can find the "Orchestrator HTTP Request" activity, which will allow you to access any relevant endpoint.
Note that the provisioned Robot in Orchestrator needs to have the right access permissions, so have a look at which roles are associated with the Robot user.
The API reference can be found here:
https://docs.uipath.com/orchestrator/reference
You will see it mentions Swagger, which in turn will give you all the information you need to access the relevant APIs.
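To show the shape of such a call outside the activity, here is a rough C# sketch querying the OData QueueItems endpoint; the base URL, bearer token, and filter value are placeholders, and how you authenticate depends on your Orchestrator setup. The "Orchestrator HTTP Request" activity ultimately calls the same kind of endpoint:
```csharp
using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Threading.Tasks;

class QueueStats
{
    // Queries the OData QueueItems endpoint for failed items (placeholders throughout).
    static async Task Main()
    {
        using (var http = new HttpClient { BaseAddress = new Uri("https://orchestrator.example.com/") })
        {
            http.DefaultRequestHeaders.Authorization =
                new AuthenticationHeaderValue("Bearer", "<token>");

            var failed = await http.GetStringAsync("odata/QueueItems?$filter=Status eq 'Failed'");
            Console.WriteLine(failed); // JSON with a "value" array of failed queue items
        }
    }
}
```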

SCOM Rule for Fake Alerts

I am working on a tool to generate fake data for System Center Operations Manager for internal testing purposes. I wrote a script as part of a discovery that is able to create an instance of any class I want and make SCOM fake-discover it. Currently, I'm using a class for AD Printer. Now the next step is to somehow create alerts on behalf of the Printer. For this, I wrote a rule targeted at the AD Printer, which reads from the logs to detect when it should be fired. The logs are being written to from a PowerShell script. However, I see no results. But when I target the same rule to All Windows Computers, I see the alerts.
From what I understand the rule will run on all agents that have an instance of the target class. Since I fake-discovered the AD Printer on this agent (which also happens to be the Management Server), should the rule not run on this?
Any other suggestions on how I can achieve this are welcome as well.
PS. I probably cannot share any of my code as I am under an NDA, but I can clarify my approach further, if needed.
Yes, the PowerShell script should run on the agents that have instances of the AD Printer. I recommend checking the OperationsManager event log for script errors. The easiest way to generate (fake) alerts is to set up a simple event-based text log monitor: one specific word triggers the unhealthy state (which in turn generates an alert), while another word resets the monitor to the healthy state. You can specify criteria for both events. Look at this blog post for further details.
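The monitor itself is configured in the SCOM console; the only code the fake-alert side needs is something that writes the trigger words into the log file the monitor watches. A minimal C# sketch (the path and trigger words are hypothetical, and the same one-liner works from the PowerShell script you already have):
```csharp
using System;
using System.IO;

class FakeAlertWriter
{
    // Hypothetical path watched by the SCOM text log monitor.
    const string LogPath = @"C:\FakeAlerts\printer.log";

    static void Main(string[] args)
    {
        // "PRINTER_ERROR" flips the monitor to unhealthy (raising an alert); "PRINTER_OK" resets it.
        // The words must match the criteria configured on the monitor's two events.
        var trigger = args.Length > 0 ? args[0] : "PRINTER_ERROR";
        File.AppendAllText(LogPath, string.Format("{0:s} {1}{2}", DateTime.Now, trigger, Environment.NewLine));
    }
}
```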

Insert message into a process running in gwt-console-server from external application?

I'm a jBPM noob running jBPM 5.4 in AS7. I have tried posting this question on the jBPM discussion board, but no luck, so I thought I'd try here on Stack.
My Goal: Create the process in guvnor, run it in gwt-console-server, have my java application feed information to the process, and follow the current state in the jbpm Console.
So far, I have installed the jbpm console and console server as well as Guvnor and designer on jBOSS AS7. I am able to create a process in Guvnor and run and monitor that process from the jbpm Console. The missing piece is that I do not understand how to externally insert messages to the process that is running.
Using eclipse and the jBPM example, I can run a process and insert messages, but my goal is to use the jbpm console to monitor the processes.
I assume I need to access the knowledge session running in the gwt-console-server, but I'm not sure how to do that. Is it safe to access/modify a session that is persisted out to a database (i.e., both gwt-console-server and my custom app would be able to modify it) and have the jbpm console read from it?
I see in the BPM Console reference (https://community.jboss.org/wiki/BPMConsoleReference) that there is an Integration Layer, but there is nothing about how to leverage it, and the link in the doc is broken :(
Can someone point me to an example of an external application feeding messages to a jbpm process that is being monitored by jbpm-console or suggest ways to accomplish this?
Thanks very much for any insight.
-J
PS. I have the new jBPM Developer's Guide, but can't find anything in it to help me with this (so if I am missing something, I can handle a reference back to that guide).
The jBPM console has a REST api that exposes a subset of the functionality. For example, if you model this feeding of information as the start of a process, or the sending of a signal, you could use the signal REST method to send this information to the console for processing.
It's also fine to use an external ksession to update a process instance. As long as they are using the same database to store the information, everything should be fine.
It turns out that the console is just using the logs, so as long as you log to the same DB the console is using (with JPAWorkingMemoryDbLogger) everything pretty much automagically works. You can use either JBPMHelper.newStatefulKnowledgeSession(kbase) or JBPMHelper.loadStatefulKnowledgeSession(kbase, sessionId) depending on if you want to use the knowledge session started from the Console. Also, if you borrow the Console's session, don't dispose it of course.
I read somewhere that you can give the session a business id (and soon do the same from your own code so that they automatically use the same session), but currently when I want to borrow the Console's session I use a kludge that just assumes the highest session is the one I want (it will be as long as the console is already running).

mqsvc.exe pegs cpu at full usage when deploying nservicebus to production

When I deployed my site that uses NServiceBus to a new production box, it was unusably slow...
After some debugging I discovered that mqsvc.exe was taking up 50% of the CPU usage and the other 50% was being taken up by w3wp.exe
I found this post here:
http://geekswithblogs.net/michaelstephenson/archive/2010/05/07/139717.aspx
which recommended the following:
Make sure you set the windows service for NserviceBus Generic Host to the right credentials
Make sure you have the queue set with the right permissions
Make sure you turn on the right logging configuration in NServiceBus
So I figured the issue was something related to permissions, but even after trying to set the permissions correctly (I thought) I still wasn't able to resolve the issue.
If you allow NServiceBus to create its own queues, then it will create them with the correct permissions it needs.
The problem comes in when you set up a web application, and then the queues are created, and then the identity the application runs under changes. Then you get exactly this problem. NServiceBus tries to check the queue for a message, it does not have access to do so, so it immediately retries over and over, and you spike the processor.
The fix: Delete the queue. Restart the web application. NServiceBus takes over.
Edit: As noted in the comments, NServiceBus 3.x doesn't invoke the installers by default, which means queues are not automatically created in production unless you ask it to. See the documentation page on Installers for more detail.
For a web application (or any other situation where you're not using NServiceBus.Host) you can invoke the installers as part of the fluent config. There is a full example in the NServiceBus download, but here is a link to the relevant file on GitHub.
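As a rough sketch (not the exact file from the download), the NServiceBus 3.x fluent configuration for a web app can run the installers at startup; the serializer/transport calls here are just typical defaults and may differ from your setup:
```csharp
using NServiceBus;

public static class ServiceBusBootstrapper
{
    public static IBus Bus { get; private set; }

    // Typically called from Application_Start in Global.asax.
    public static void Init()
    {
        Bus = Configure.With()
            .DefaultBuilder()
            .XmlSerializer()
            .MsmqTransport()
            .UnicastBus()
            .CreateBus()
            // Running the installers (re)creates the queues with the permissions NServiceBus needs,
            // using the identity the application currently runs under.
            .Start(() => Configure.Instance
                .ForInstallationOn<NServiceBus.Installation.Environments.Windows>()
                .Install());
    }
}
```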
The issue did end up being that the website needed to be granted explicit permissions to the queues.
I found a number of resources online telling me this, but I still had to spend a good amount of time monkeying around with exactly WHICH account needed access... turned out that since my application pools were set to run as ApplicationPoolIdentity, I needed to grant permissions by adding the following account to the NServiceBus queue:
IIS AppPool\{APP POOL NAME}
I granted full access rights, though I'm sure you could refine that a bit if you needed to.
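If you'd rather script that grant than click through the MSMQ management console, here is a small sketch using System.Messaging (the queue path and app pool name are placeholders):
```csharp
using System.Messaging;

class GrantQueuePermissions
{
    static void Main()
    {
        // Placeholder queue path and app pool identity - adjust to your environment.
        var queue = new MessageQueue(@".\private$\myapp.queue");
        queue.SetPermissions(@"IIS AppPool\MyAppPool",
            MessageQueueAccessRights.FullControl,
            AccessControlEntryType.Allow);
    }
}
```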
Hopefully, this will help anyone who runs into the same issues.
(This is my first attempt at the "Answer your own question" mechanism so please let me know if I am doing something wrong..)