Insert message into a process running in gwt-console-server from external application? - drools

I'm a jBPM noob running jBPM 5.4 in AS7. I have tried posting this question on the jBPM discussion board, but no luck, so I thought I'd try here on Stack.
My Goal: Create the process in guvnor, run it in gwt-console-server, have my java application feed information to the process, and follow the current state in the jbpm Console.
So far, I have installed the jbpm console and console server as well as Guvnor and designer on jBOSS AS7. I am able to create a process in Guvnor and run and monitor that process from the jbpm Console. The missing piece is that I do not understand how to externally insert messages to the process that is running.
Using eclipse and the jBPM example, I can run a process and insert messages, but my goal is to use the jbpm console to monitor the processes.
I assume I need to access the knowledge session running in the gwt-console-server, but I'm not sure how to do that. Is it safe to access/modify a session that is persisted out to a database (i.e., both gwt-console-server and my custom app would be able to modify it) and then have the jbpm console read from it?
I see in the BPM Console reference (https://community.jboss.org/wiki/BPMConsoleReference) that there is an Integration Layer, but there is nothing about how to leverage it - and the link in the doc is broken :(
Can someone point me to an example of an external application feeding messages to a jbpm process that is being monitored by jbpm-console or suggest ways to accomplish this?
Thanks very much for any insight.
-J
PS. I have the new jBPM Developer's Guide, but can't find anything in it to help me with this (so if I am missing something, I can handle a reference back to that guide).

The jBPM console has a REST API that exposes a subset of the functionality. For example, if you model this feeding of information as the start of a process, or the sending of a signal, you could use the signal REST method to send this information to the console for processing.
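As a minimal sketch, here is what signaling over that REST interface could look like from plain Java. The URL path, instance id, and parameter names below are assumptions, not confirmed endpoints - check the REST section of the BPM Console reference for your version before relying on them:

import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;

public class ConsoleSignalClient {
    public static void main(String[] args) throws Exception {
        // Hypothetical endpoint; verify the path against your console's REST docs.
        URL url = new URL("http://localhost:8080/gwt-console-server/rs/process/instance/42/signal");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("POST");
        conn.setDoOutput(true);
        conn.setRequestProperty("Content-Type", "application/x-www-form-urlencoded");
        // Signal name and payload as form parameters (names are assumptions).
        byte[] body = "signal=MySignal&data=hello".getBytes("UTF-8");
        try (OutputStream out = conn.getOutputStream()) {
            out.write(body);
        }
        System.out.println("HTTP " + conn.getResponseCode());
        conn.disconnect();
    }
}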
It's also fine to use an external ksession to update a process instance. As long as they are using the same database to store the information, everything should be fine.

It turns out that the console just uses the logs, so as long as you log to the same DB the console is using (with JPAWorkingMemoryDbLogger), everything pretty much automagically works. You can use either JBPMHelper.newStatefulKnowledgeSession(kbase) or JBPMHelper.loadStatefulKnowledgeSession(kbase, sessionId), depending on whether you want to use the knowledge session started from the Console. Also, if you borrow the Console's session, don't dispose of it, of course.
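A minimal sketch of the fresh-session variant, assuming a kbase already built (e.g. from your Guvnor package) and the same persistence configuration the console uses; the process id is a placeholder:

import org.drools.KnowledgeBase;
import org.drools.runtime.StatefulKnowledgeSession;
import org.jbpm.process.audit.JPAWorkingMemoryDbLogger;
import org.jbpm.test.JBPMHelper;

public class ExternalSessionExample {
    public static void run(KnowledgeBase kbase) {
        // Start a fresh session backed by the same database as the console.
        StatefulKnowledgeSession ksession = JBPMHelper.newStatefulKnowledgeSession(kbase);
        // Log audit events to the same DB so the console can monitor the instance.
        new JPAWorkingMemoryDbLogger(ksession);
        ksession.startProcess("com.sample.myProcess"); // hypothetical process id
    }
}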
I read somewhere that you can give the console's session a business id (and that soon you'll be able to do the same from your own code, so that both automatically use the same session), but currently, when I want to borrow the Console's session, I use a kludge that just assumes the highest session id is the one I want (it will be, as long as the console is already running). A sketch of that kludge follows.
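This sketch assumes the default jBPM 5 persistence schema, where session metadata lives in the SessionInfo table; the table and column names may differ with your JPA mapping:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;
import org.drools.KnowledgeBase;
import org.drools.runtime.StatefulKnowledgeSession;
import org.jbpm.test.JBPMHelper;

public class BorrowConsoleSession {
    public static StatefulKnowledgeSession borrow(KnowledgeBase kbase,
            String jdbcUrl, String user, String password) throws Exception {
        int sessionId;
        try (Connection con = DriverManager.getConnection(jdbcUrl, user, password);
             Statement st = con.createStatement();
             ResultSet rs = st.executeQuery("SELECT MAX(id) FROM SessionInfo")) {
            rs.next();
            sessionId = rs.getInt(1);
        }
        // Kludge: assumes the console started last, so the highest id is its session.
        return JBPMHelper.loadStatefulKnowledgeSession(kbase, sessionId);
    }
}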

Related

Azure WebJob Logging/Emailing

I've converted a console app into a scheduled WebJob. All is working well, but I'm having a little trouble figuring out how to accomplish the error logging/emailing I'd like to have.
1.) I am using Console.WriteLine and Console.Error.WriteLine to create log messages. I see these displayed in the portal when I go to WebJob Run Details. Is there any way to have these logs saved to files somewhere? I added my storage account connection string as AzureWebJobsDashboard and AzureWebJobsStorage. But this appears to have just created an "azure-webjobs-dashboard" blob container that only has a "version" file in it.
2.) Is there a way to get line numbers to show up for exceptions in the WebJob log?
3.) What is the best way to send emails from within the WebJob console app? For example, if a certain condition occurs, I may want to have it send me and/or someone else (depending on what the condition is) an email along with logging the condition using Console.WriteLine or Console.Error.WriteLine. I've seen info on triggering emails via a queue or triggering emails on job failure, but what is the best way to just send an email directly in your console app code when it's running as a WebJob?
How is your job being scheduled? It sounds like you're using the WebJobs SDK - are you using the TimerTrigger for scheduling (from the Extensions library)? That Extensions library also contains a new SendGrid binding that you can use to send emails from your job functions. We plan on expanding on that to also facilitate failure notifications like you describe, but it's not there yet. Nothing stops you from building something yourself, however, using the new JobHostConfiguration.Tracing.Trace to plug in your own TraceWriter that you can use to catch errors/warnings and act as you see fit. All of this is in the beta1 pre-release.
Using that approach of plugging in a custom TraceWriter, I've been thinking of writing one that allows you to specify an error threshold/sliding window and sends an email or other notification if the error rate exceeds the threshold. All the pieces are there for this, I just haven't done it yet :)
Regarding logging, the job logs (including your Console.WriteLines) are actually written to disk in your Web App (details here). You should be able to see them if you browse your site log directory. However, if you're using the SDK and Dashboard, you can also use the TextWriter/TraceWriter bindings for logging. These logs will be written to your storage account and will show up in the Dashboard Functions page per invocation. Here's an example.
1.) Logs to files: You can use a custom TraceWriter https://gist.github.com/aaronhoffman/3e319cf519eb8bf76c8f3e4fa6f1b4ae
2.) Exception stack trace line numbers: You will need to make sure your project is built with debug info set to "full" (more info http://aaron-hoffman.blogspot.com/2016/07/get-line-numbers-in-exception-stack.html)
3.) Emails: SendGrid, Amazon Simple Email Service (SES), etc.

Is there a way to enable an SQL log to see/optimize my queries using CloudSQL

I started my test of using a Google CloudSQL instance with a desktop-based application, and so far I am impressed with the performance; even if it is a bit laggy, it does the job. My next step is to see what simple modifications I can make to my application, mostly aimed at reducing access to the database, and to optimize further if there is more to do.
How can I log the SQL commands sent to the database in order to check what queries are being issued? My app uses ODBC drivers on Windows.
Regards
What you probably want is to turn on the general log. Unfortunately, that requires SUPER privileges, and those were removed some time ago (announcement). We are going to provide a way to tweak parameters like that via the Cloud SQL API. For now, the best solution is to set up a local server and use the logging on that one. If you really want it on production, ping us on the google-cloud-sql-discuss Google group and we'll enable SUPER for your instance.
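On a local MySQL server, enabling the general log is a couple of lines in the server config - a minimal sketch for my.cnf (the log file path is an assumption; adjust for your install):

[mysqld]
# Log every statement the server receives, including those sent via ODBC.
general_log = 1
general_log_file = /var/log/mysql/general.log

Restart the server afterwards and every query your application sends will show up in that file.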

mqsvc.exe pegs cpu at full usage when deploying nservicebus to production

When I deployed my site that uses NServiceBus to a new production box, it was unusably slow...
After some debugging I discovered that mqsvc.exe was taking up 50% of the CPU usage and the other 50% was being taken up by w3wp.exe
I found this post here:
http://geekswithblogs.net/michaelstephenson/archive/2010/05/07/139717.aspx
which recommended the following:
Make sure you set the windows service for NserviceBus Generic Host to the right credentials
Make sure you have the queue set with the right permissions
Make sure you turn on the right logging configuration in NServiceBus
So I figured the issue was something related to permissions, but even after trying to set the permissions correctly (I thought) I still wasn't able to resolve the issue.
If you allow NServiceBus to create its own queues, then it will create them with the correct permissions it needs.
The problem comes in when you set up a web application, and then the queues are created, and then the identity the application runs under changes. Then you get exactly this problem. NServiceBus tries to check the queue for a message, it does not have access to do so, so it immediately retries over and over, and you spike the processor.
The fix: Delete the queue. Restart the web application. NServiceBus takes over.
Edit: As noted in the comments, NServiceBus 3.x doesn't invoke the installers by default, which means queues are not automatically created in production unless you ask it to. See the documentation page on Installers for more detail.
For a web application (or any other situation where you're not using NServiceBus.Host) you can invoke the installers as part of the fluent config. There is a full example in the NServiceBus download, but here is a link to the relevant file on GitHub.
The issue did end up being that the website needed to be granted explicit permissions to the queues.
I found a number of resources online telling me this, but I still had to spend a good amount of time monkeying around with exactly WHICH account needed access... it turned out that since my application pools were set to run as ApplicationPoolIdentity, I needed to grant permissions on the NServiceBus queue to the following account:
IIS AppPool\{APP POOL NAME}
I granted full access rights, though I'm sure you could refine that a bit if you needed to.
Hopefully, this will help anyone who runs into the same issues.
(This is my first attempt at the "Answer your own question" mechanism so please let me know if I am doing something wrong..)

Embedding Openfire

Is it possible to embed an Openfire server (version 3.7.0) in a Java application?
I am trying to run integration tests on the server in Eclipse. However, because Openfire is in Standalone Mode (the condition for this being that it can find its ServerStarter bootstrap class), when the server tries to shutdown, it calls System.exit(0) which I do not want to happen.
Is there any way to stop this from happening, i.e. without just deliberately preventing Openfire from finding its bootstrap class?
I have a successful approach, which is fairly straightforward and much easier than trying to manually set up Openfire.
Install Openfire onto a machine (Mac, PC, etc.), set it up with the admin console using the embedded database, and then comment out the adminConsole from openfire.xml if you'd like.
Copy the directory to a location you want to run your unit tests from. If you want to ensure exact repeatability, then it would be wise to zip and unzip the directory every time you run the tests.
Ensure all the jars (openfire, hsqldb, mail, bouncycastle, jasper, etc.) are added.
Now you should be able to start and stop normally. Openfire does have one quirk: because it's singleton-oriented, even after you shut down, that singleton instance stays around. So if you want to use it in something like a unit test, you'll have to call XMPPServer.getInstance() to check whether an instance already exists, and call the constructor only if getInstance() returns null.
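A minimal sketch of that start/stop cycle, assuming the openfireHome system property is how your copy locates the pre-configured directory from step 2 (the path is a placeholder):

import org.jivesoftware.openfire.XMPPServer;

public class EmbeddedOpenfire {
    public static XMPPServer start(String openfireHome) {
        System.setProperty("openfireHome", openfireHome);
        // Reuse the lingering singleton if a previous test already created one.
        XMPPServer server = XMPPServer.getInstance();
        if (server == null) {
            server = new XMPPServer(); // the constructor boots the server
        }
        return server;
    }

    public static void stop(XMPPServer server) {
        server.stop();
    }
}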
I hope that helps.

Continuation of a process after a system crash/restart - Drools Flow

I've been playing with examples I downloaded with the book Drools JBoss Rules 5.0. To my relief they work :) Drools Flow has been my point of interest as a possible workflow engine replacement.
As I'm trying to wrap my head around things, I've been wondering how a premature death of a ruleflow process gets restarted. What I mean is: say a process is bouncing from one node to another as expected, then the containing process dies due to a crash, restart, or whatever. Is the current node/place of the ruleflow process retained, and can it just continue from that point on system restart? If so, how?
The group I work for is very Java EE centric with JBoss being our favorite application server. I see examples of Drools leveraging Spring's persistence and bean lookup support.
Are there examples of doing the same with JBoss?
If you persist the state of the process instances and tasks in the database, then even if the VM goes down and is restarted, you can retrieve the process instances.

Use the JPAKnowledgeService.

To create the session:

ksession = JPAKnowledgeService.newStatefulKnowledgeSession(kbase, null, env);

To load the session with the session id:

ksession = JPAKnowledgeService.loadStatefulKnowledgeSession(sessionId, kbase, null, env);

You only need to know the session id. Session information will be stored in the SessionInfo table. Download the example project below.
http://dl.dropbox.com/u/2634115/drools-test.zip
The example uses BTM with an H2 database; it also works well with mysql-connector-java-5.1.13 and BTM. Note that processes that are complete will be automatically deleted from the database.
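For reference, a minimal sketch of how the env used in the snippets above is typically built in Drools 5 with BTM; the persistence unit name here is the standard one from the jBPM persistence setup, but yours may differ:

import javax.persistence.EntityManagerFactory;
import javax.persistence.Persistence;
import org.drools.KnowledgeBaseFactory;
import org.drools.runtime.Environment;
import org.drools.runtime.EnvironmentName;
import bitronix.tm.TransactionManagerServices;

public class EnvFactory {
    public static Environment create() {
        // Persistence unit defined in persistence.xml (name is an assumption).
        EntityManagerFactory emf =
            Persistence.createEntityManagerFactory("org.jbpm.persistence.jpa");
        Environment env = KnowledgeBaseFactory.newEnvironment();
        env.set(EnvironmentName.ENTITY_MANAGER_FACTORY, emf);
        // BTM supplies the JTA transaction manager used by the session.
        env.set(EnvironmentName.TRANSACTION_MANAGER,
                TransactionManagerServices.getTransactionManager());
        return env;
    }
}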
You are looking at the basic concept of process migration. During what is known as strong migration, a process can be stopped on one machine and the entire state of the process migrated to another machine (including the program counter and all existing stacks). Before you go thinking that this is completely insane, think about it from a JVM perspective: since your application is already being run on virtual hardware, it isn't hard to stop the application and pick it back up where it left off, since it is completely virtualized.
If you would like another example, look at VMWare; an entire machine can be paused and migrated to another machine and started again. It's very interesting stuff and usually relates mainly to Distributed Computing where you might have hundreds of agents that need to migrate from machine to machine as some go down for maintenance.
I realize that I didn't give an example of this through JBoss, but giving a background on what exactly you're looking for can give you much better insight into what to look for going forward.