activemq-artemis log monitoring and alerting

I have been trying to find an alternative way to monitor the warnings/errors that appear in the logs. Currently I parse the logs for the error codes, which keep changing with every update. Is there another way to get them?

Apache ActiveMQ Artemis uses the JBoss Logging framework for its logging, which is configurable via logging.properties; see the documentation.
You could use a custom handler to intercept warnings/errors.
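For illustration, a minimal sketch of such a handler. The class and handler names are hypothetical, and a plain java.util.logging.Handler is used here for simplicity (the JBoss Log Manager accepts these); the commented lines show one way to register it in logging.properties.

import java.util.logging.Handler;
import java.util.logging.Level;
import java.util.logging.LogRecord;

// Hypothetical registration in logging.properties:
//   logger.handlers=CONSOLE,FILE,ALERT
//   handler.ALERT=org.example.AlertHandler
//   handler.ALERT.level=WARN
public class AlertHandler extends Handler {
    @Override
    public void publish(LogRecord record) {
        // JBoss Logging maps WARN/ERROR onto the JUL levels WARNING/SEVERE.
        if (record.getLevel().intValue() >= Level.WARNING.intValue()) {
            notifyMonitoring(record.getLevel(), record.getLoggerName(), record.getMessage());
        }
    }

    private void notifyMonitoring(Level level, String logger, String message) {
        // Placeholder: push to an HTTP endpoint, JMX notification, SNMP trap, etc.,
        // instead of parsing log files after the fact.
        System.err.printf("[ALERT] %s %s: %s%n", level, logger, message);
    }

    @Override public void flush() { }
    @Override public void close() { }
}

This avoids depending on the exact error codes in the log text: the handler sees the level and logger name directly, so alerts keep working across updates.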

Related

Is it possible to send liberty batch logs to console

Liberty has rich support for logging and tracing as outlined in https://www.ibm.com/support/knowledgecenter/SSEQTP_liberty/com.ibm.websphere.wlp.doc/ae/rwlp_logging.html
Based on that, I am able to get audit, messaging, and trace logs sent to the console. However, one part of our system is using Liberty's JBatch implementation and these logs are getting lost inside our containers' ephemeral filesystem.
Is there a way to force Liberty to send these logs to the console with the rest of our output (so that it will get shipped to our log analysis service)?
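For reference, these are the kinds of bootstrap.properties settings (assuming a reasonably recent Liberty version) that route the standard log sources to the console, which matches what the question describes as already working; whether the JBatch job logs can be added to this set is exactly the open question, since they are written separately under the joblogs directory.

# Route selected log sources to the console (stdout) instead of files.
com.ibm.ws.logging.console.source=message,trace,accessLog,ffdc,audit
com.ibm.ws.logging.console.format=json
com.ibm.ws.logging.console.log.level=INFO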

FlowForce - monitoring and alerting tool

In the past I had to configure AppDynamics alerts for Java applications I worked on.
I also heard of Nagios, but I am not very sure how that works.
Now, I need to configure alerts for a FlowForce Server, but I don't believe it can be integrated with AppDynamics or Nagios.
I saw that FlowForce allows me to send some alerts, such as when a step of a job fails, but I would also like to have server-level alerts, for instance if the license expires and, as a result, the server is automatically shut down.
I am wondering about the best way to achieve this.
I am running it on a Windows environment BTW.
Suggestions are welcome.
Thank you in advance!
I found my answer in the FlowForce online help (https://manual.altova.com/flowforceserver/flowforceserver/)
FlowForce is deployed as two servers which, in a Windows environment, can be started and stopped as Windows services (found under "Control Panel" > "Administrative Tools" > "Services"). With this information, I can monitor them via Nagios.
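For illustration, a minimal sketch of the kind of check a Nagios agent (e.g. via NRPE or NSClient++) could run on the Windows host: query each service with sc query and map the result to Nagios exit codes. The service names below are assumptions; check the Services console for the exact names on your install.

import java.io.BufferedReader;
import java.io.InputStreamReader;

public class CheckFlowForceServices {
    public static void main(String[] args) throws Exception {
        // Hypothetical Windows service names for the two FlowForce servers.
        String[] services = { "FlowForceServer", "FlowForceWebServer" };
        for (String service : services) {
            Process p = new ProcessBuilder("sc", "query", service).start();
            boolean running = false;
            try (BufferedReader r = new BufferedReader(new InputStreamReader(p.getInputStream()))) {
                String line;
                while ((line = r.readLine()) != null) {
                    if (line.contains("RUNNING")) {
                        running = true;
                    }
                }
            }
            p.waitFor();
            if (!running) {
                System.out.println("CRITICAL - service " + service + " is not running");
                System.exit(2); // Nagios CRITICAL
            }
        }
        System.out.println("OK - FlowForce services running");
        System.exit(0); // Nagios OK
    }
}

A check like this would also catch the license-expiry case indirectly, since a shut-down server stops its services.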

looking for current example of MDB consuming messages from remote queue in Wildfly 10

I have a Wildfly 10 instance which defines a queue, publishes to that queue, and receives from that queue via an MDB.
That has been accomplished.
Now I want to add a second Wildfly 10 instance, running on another machine, which will also receive messages from that same (remote) queue defined in the first instance.
I've spent two days looking for a current example of how to do this.
There are tons of questions and some outdated answers.
It seems like one of the most trivial things to expect from a queue implementation, yet I cannot find any example.
Would someone please refer me to a good and current example (Wildfly 10) of what needs to be done as far as annotation of the MDB, configuration of standalone-full.xml, and security requirements?
I looked into a similar scenario and likewise had trouble finding good documentation.
There are several ways to connect JMS queues together:
JMS core bridges
JMS bridges
Connections to a remote server (using a remote connector or properties directly in your MDB; see the sketch below)
JMS clustering
… ?
I created a demo project on GitHub which uses JMS bridges to forward messages to another server. The project also uses remote connections to listen for messages from a remote server. The readme explains step by step how I would configure Wildfly 10 servers so that they use the same destination for JMS messages.
The best source of information on this topic seems to be the Messaging documentation of the Red Hat JBoss Enterprise Application Platform 7.0, which is also valid for Wildfly 10.
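To make the remote-connection option concrete, here is a sketch of how the consuming MDB on the second instance could look, assuming standalone-full.xml defines a pooled-connection-factory named "remote-artemis" whose connector points at the first instance; the queue lookup name and credentials are placeholders.

import javax.ejb.ActivationConfigProperty;
import javax.ejb.MessageDriven;
import javax.jms.JMSException;
import javax.jms.Message;
import javax.jms.MessageListener;
import javax.jms.TextMessage;
import org.jboss.ejb3.annotation.ResourceAdapter;

@MessageDriven(activationConfig = {
    @ActivationConfigProperty(propertyName = "destinationType", propertyValue = "javax.jms.Queue"),
    @ActivationConfigProperty(propertyName = "destinationLookup", propertyValue = "jms/queue/MyQueue"),
    @ActivationConfigProperty(propertyName = "user", propertyValue = "jmsuser"),
    @ActivationConfigProperty(propertyName = "password", propertyValue = "jmspassword")
})
// Point the MDB at the remote pooled-connection-factory instead of the local one.
@ResourceAdapter("remote-artemis")
public class RemoteQueueMDB implements MessageListener {
    @Override
    public void onMessage(Message message) {
        try {
            if (message instanceof TextMessage) {
                System.out.println("Received: " + ((TextMessage) message).getText());
            }
        } catch (JMSException e) {
            throw new RuntimeException(e);
        }
    }
}

The security side is then handled by creating the corresponding application user on the first instance and granting it consume permissions on the queue's address.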

Is it possible to issue log messages from Neko to Apache logs?

When using the nekotools server for development, I get all messages in the console. Is it possible to have those messages written to the Apache logs on a server, for debugging purposes?
Answering my own question since there are no answers yet.
In the Haxe API I found neko.Web.logMessage(msg:String):Void, which allows sending custom messages to the web server's log files. Not exactly what I was looking for, though.

How does an out-of-process semantic logging service receive events?

The reason I'm asking is I would like to use the out-of-proc mode, but I cannot install a service on each user's workstation, only on a central server. Is the communication between event source and listener service an ETW thing, or is there some kind of RPC I could use?
Yes, the out-of-process mode works by using ETW. All ETW events are system-wide, so the service just has to listen for ETW events.
ETW only works locally and does not offer a remote solution you could use. Your options are to install a service on each workstation that listens to ETW events and forwards them to your server with an RPC solution you build yourself (using MSMQ comes to mind), or to have your application forward the events to your server directly so you don't need the service. Either way, you will have to build it yourself.
To add to Lars' answer, you could also log to SQL. There is a SQL sink you can use, but like everything else, to get the most customized fit you would build your own (or inherit from another class to give you a good starting point). Be careful, though: not all sinks are created equal, and they all have their pros and cons. For example, with the SQL and Azure sinks you have to worry about high latency, and the XML formatter doesn't write the root starting and ending nodes, so the output is not well-formed XML; whatever reads that file would have to provide them. Good luck!