How to get PostSharp to log aspect-caught exceptions to a rolling text file vs. the event log

I have googled much and found nothing, so please bear with a possibly silly question. I have my own logging of events and stats that writes to the Event Log. I would like to log long, verbose error information to a 30-day rolling text file. How do I do this?

To log with PostSharp you can either use the included Diagnostics Pattern Library or create your own custom aspect.
The diagnostics library can log the names of methods being invoked together with parameter/return values. The actual logging messages are sent to one of the supported logging back-ends (Console, System.Diagnostics.Trace, Log4Net, NLog, EnterpriseLibrary).
You can follow the PostSharp docs to add logging with the chosen back-end first, and then set up that back-end to write messages to a rolling text file. The configuration depends on the specific back-end; there are examples for log4net, NLog, etc.
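For example, assuming NLog is the chosen back-end, a file target along these lines keeps roughly 30 days of daily archives (the target and file names here are illustrative, not prescribed by PostSharp):

```xml
<nlog xmlns="http://www.nlog-project.org/schemas/NLog.xsd"
      xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
  <targets>
    <!-- Daily rolling file; keep the last 30 archived days -->
    <target xsi:type="File" name="rollingFile"
            fileName="logs/app.log"
            archiveEvery="Day"
            archiveNumbering="Date"
            maxArchiveFiles="30" />
  </targets>
  <rules>
    <logger name="*" minlevel="Error" writeTo="rollingFile" />
  </rules>
</nlog>
```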
If you want to write more custom information to the log, then it would be better to create your own logging aspect. You can start with the example in the PostSharp docs. Again it would be better to prepare your message and then just pass it on to the logging library that will handle writing to the rolling text file. That way you get the powerful configuration options provided by the library and don't need to re-implement low-level details.
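A minimal sketch of such a custom aspect, assuming PostSharp's OnExceptionAspect and an NLog logger (the message format is an assumption; adapt it to whatever verbose detail you want in the file):

```csharp
using System;
using NLog;
using PostSharp.Aspects;

[Serializable]
public class LogExceptionAspect : OnExceptionAspect
{
    private static readonly Logger Log = LogManager.GetCurrentClassLogger();

    public override void OnException(MethodExecutionArgs args)
    {
        // Build a verbose message, then hand it to the logging library,
        // which handles writing to the rolling text file per its config.
        Log.Error(args.Exception, "Exception in {0}", args.Method.Name);
    }
}
```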

Related

Is it possible to track down very rare failed requests using linkerd?

Linkerd's docs explain how to track down failing requests using the tap command, but in some cases the success rate might be very high, with only a single failed request every hour or so. How is it possible to track down those requests that are considered "unsuccessful"? Perhaps a way to log them somewhere?
It sounds like you're looking for a way to configure Linkerd to trap requests that fail and dump the request data somewhere, which is not supported by Linkerd at the moment.
You do have a couple of options with the current functionality to derive some of the info you're looking for. The Linkerd proxies record error rates as Prometheus metrics, which are consumed by Grafana to render the dashboards. When you observe one of these infrequent errors, you can use the time-window functionality in Grafana to find the precise time the error occurred, then refer to the service log to see whether there are any corresponding error messages there. If the error is coming from the service itself, you can add as much logging info about the request as you need to help solve the problem.
Another option, which I haven't tried myself, is to integrate linkerd tap into your monitoring system to collect the request info and save the data for the requests that fail. One caveat: be careful about leaving a tap command running, because it continuously collects data from the tap control-plane component, which adds load to that service.
Perhaps a more straightforward approach would be to ensure that all the proxy logs and service logs are written to a long-term store like Splunk, an ELK stack (Elasticsearch, Logstash, and Kibana), or Loki. Then you can set up alerting (Prometheus Alertmanager, for example) to send a notification when a request fails, and match the time of the failure with the logs that have been collected.
You could also look into adding distributed tracing to your environment. Depending on the implementation that you use (Jaeger, Zipkin, etc.), I think the interface will allow you to inspect the details of the request for each trace.
One final thought: since Linkerd is an open source project, I'd suggest opening a feature request with specifics on the behavior that you'd like to see and work with the community to get it implemented. I know the roadmap includes plans to be able to see the request bodies using linkerd tap and this sounds like a good use case for having those bodies.

Plain console.warn() shows up in logs with severity "ERROR"

When I log something with console.warn() it seems to appear in the Stackdriver logs with severity "ERROR". The Stackdriver Error Reporting does not show these errors, so it seems there they are not considered errors. This makes it impossible to filter the logs to only show me errors.
Reading the Stackdriver logging docs I get the impression that I'm not supposed to use the plain JavaScript console functions but instead use Bunyan. Is that correct? I didn't read anywhere that I shouldn't.
Cloud Functions only distinguishes between stdout & stderr.
The docs on Writing, Viewing, and Responding to Logs say that "Cloud Functions includes simple logging by default. Logs written to stdout or stderr will appear automatically". The logging docs page that you referenced mentions the same thing about stdout & stderr being automatic for Cloud Functions.
My interpretation is that console.warn() goes to stderr, and once there the distinction between warn and error is lost. I suspect you'd see the same for console.debug() showing up as INFO. I have seen this behavior in VMs when stderr is used, but I think App Engine does not have this problem.
I don't think the logging docs page is suggesting Bunyan specifically. It addresses Winston similarly, as well as a client library (in which case authentication should just work).
Error Reporting has a specific notion of what constitutes an "error" to be captured: https://cloud.google.com/error-reporting/docs/formatting-error-messages
I have since learned from Google support that things changed (at least for Cloud Functions) since Node 10. Node 8 still logged correctly, with console.info getting level info and console.warn getting level warning, and that aligns with my experience.
In recent versions of firebase-functions there is the logger library, which you should use for writing logs. For non-Firebase environments you can use @google-cloud/logging, which is essentially the same thing. You then have full control over the severity level, as well as the ability to log an extra JSON payload as the second parameter.
So in other words, don't use the native Javascript console methods.
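As an illustration of why severity survives when you log structured entries: Cloud Logging parses a JSON line written to stdout and honors its "severity" field, which the bare console methods have no way to express. This helper is a hypothetical sketch, not part of any library:

```javascript
// Hypothetical helper (not a library API): Cloud Logging parses JSON
// lines written to stdout and uses the "severity" field, so the
// warn/error distinction survives the trip through a plain stream.
function logEntry(severity, message, payload = {}) {
  const entry = { severity, message, ...payload };
  console.log(JSON.stringify(entry));
  return entry;
}

logEntry('WARNING', 'quota nearly exhausted', { remaining: 42 });
```

The firebase-functions logger and @google-cloud/logging do essentially this for you, with a richer API.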
If your logs are showing up in Stackdriver Logging, then Error Reporting is at least able to see them. From there, there are some more requirements that depend on exactly what you're using (e.g. if you're just logging JSON, it may need a reportLocation with serviceContext).
This might be useful: https://cloud.google.com/error-reporting/docs/formatting-error-messages
On the other hand, if you're just trying to view severity ERROR logs, just using the advanced filter in Logging for severity=ERROR might do what you're looking for?

Azure WebJob Logging/Emailing

I've converted a console app into a scheduled WebJob. All is working well, but I'm having a little trouble figuring out how to accomplish the error logging/emailing I'd like to have.
1.) I am using Console.WriteLine and Console.Error.WriteLine to create log messages. I see these displayed in the portal when I go to WebJob Run Details. Is there any way to have these logs saved to files somewhere? I added my storage account connection string as AzureWebJobsDashboard and AzureWebJobsStorage. But this appears to have just created an "azure-webjobs-dashboard" blob container that only has a "version" file in it.
2.) Is there a way to get line numbers to show up for exceptions in the WebJob log?
3.) What is the best way to send emails from within the WebJob console app? For example, if a certain condition occurs, I may want to have it send me and/or someone else (depending on what the condition is) an email along with logging the condition using Console.WriteLine or Console.Error.WriteLine. I've seen info on triggering emails via a queue or triggering emails on job failure, but what is the best way to just send an email directly in your console app code when it's running as a WebJob?
How is your job being scheduled? It sounds like you're using the WebJobs SDK - are you using the TimerTrigger for scheduling (from the Extensions library)? That extensions library also contains a new SendGrid binding that you can use to send emails from your job functions. We plan on expanding on that to also facilitate failure notifications like you describe, but it's not there yet. Nothing stops you from building something yourself, however, using the new JobHostConfiguration.Tracing.Trace to plug in your own TraceWriter that you can use to catch errors/warnings and act as you see fit. All of this is in the beta1 pre-release.
Using that approach of plugging in a custom TraceWriter, I've been thinking of writing one that allows you to specify an error threshold/sliding window, and if the error rate exceeds it, an email or other notification will be sent. All the pieces are there for this, just haven't done it yet :)
Regarding logging, the job logs (including your Console.WriteLines) are actually written to disk in your Web App (details here). You should be able to see them if you browse your site log directory. However, if you're using the SDK and Dashboard, you can also use the TextWriter/TraceWriter bindings for logging. These logs will be written to your storage account and will show up in the Dashboard Functions page per invocation. Here's an example.
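A sketch of the custom TraceWriter idea described above, assuming the beta1-era WebJobs SDK surface (the class name and notification logic are illustrative, and the exact wire-up property varies by SDK version):

```csharp
using System.Diagnostics;
using Microsoft.Azure.WebJobs.Host;

public class NotifyingTraceWriter : TraceWriter
{
    public NotifyingTraceWriter(TraceLevel level) : base(level) { }

    public override void Trace(TraceEvent traceEvent)
    {
        if (traceEvent.Level == TraceLevel.Error)
        {
            // Act on the error here: send an email (e.g. via SendGrid)
            // and/or persist the message to your own store.
        }
    }
}
```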
Logs to files: You can use a custom TraceWriter https://gist.github.com/aaronhoffman/3e319cf519eb8bf76c8f3e4fa6f1b4ae
Exception Stack Trace Line Numbers: You will need to make sure your project is built with debug info set to "full" (more info http://aaron-hoffman.blogspot.com/2016/07/get-line-numbers-in-exception-stack.html)
Sending email: SendGrid, Amazon Simple Email Service (SES), etc.

How to make Bonita BPM to show an error?

I have an assignment where before I get a message from a server and tweet it, I have to check if an error occurs. If it does, it says that I have to "show with a human task an error message specifying a number and the error received. After that, the process ends".
In another part of the workflow I do check for errors, but I'm not required to show anything, and frankly I do not understand how that would work. I believe my mistake is that I might be thinking too literally, too close to code showing errors and such.
Any help or place to look for information?
The answer to this question will vary depending on the edition of Bonita BPM that you are using.
With Community edition:
Note that error management will impact process design.
You can implement the following scenario:
retrieve the error (this can be done by using a custom connector output).
store the error details in a process variable.
have an exclusive gateway with a condition that branches to an optional human task that shows the error in a form.
With Performance edition:
There is a built-in error management feature in Bonita BPM Portal. As an administrator you may review stack traces associated with connector execution failures, edit some settings, and replay the connectors.
All of this is done without impacting the process design.

How to read the Windows Event Log without an EventMessageFile?

I have code that reads the Windows Event Log. It uses OpenEventLog, ReadEventLog and gets the event source and event ID. Then it looks up the source under the
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Eventlog\Application
key, loads the appropriate DLL(s) according to what is listed in EventMessageFile and finally uses FormatMessage to merge the event strings with the message DLL content to get the final event message text. This is the recommended way, and although a bit of a pain, it works great.
Until... I go look up the source and find it doesn't have an EventMessageFile, but rather a ProviderGuid entry. This seems to be the new way (they show up on Vista and Windows 2008). Ugh -- nothing to pass to FormatMessage for looking up the message text and merging in the data strings :(
Searching the registry for the GUID does lead to references to other files (http.sys in the case of the HTTP source), but I can never get the complete message text. Do I have to use those EvtOpenSession APIs? I'm hoping not, since I already have the EVENTLOGRECORD* from a call to ReadEventLog, and the software needs to run on Windows 2003, where EvtOpenSession isn't supported (it's only available on Vista and Windows 2008). NOTE: Some sources on Vista have ProviderGuid and others have EventMessageFile, so the old method is still viable.
So what I'm after is a way to look at the ProviderGuid and get the DLL that needs to be passed to FormatMessage for displaying the complete event log message text.
Thanks for any input.
The APIs that Richard links to are for the new style Eventing system (code-named Crimson, sometimes called Manifest Based Providers) introduced in Vista/Server 2K8. One of the artifacts of this new system is new APIs to consume these logs, another is the ProviderGuid key for certain EventSources that produce events using this new framework.
I think you should use the new functions on Windows Vista and later to consume these logs; they should handle the work for you. You can use the EvtFormatMessage function to format the strings. I believe these APIs will also read events produced by "Classic" providers.
If you're consuming these messages from a .NET app you can use types in the System.Diagnostics.Eventing.Reader namespace, introduced in .NET 3.5.
There are Win32 APIs for reading/expanding event log entries.
See MSDN: http://msdn.microsoft.com/en-us/library/aa385780(VS.85).aspx
Anything else, and you are likely to find problems with patches, let alone service packs or new versions.