JBoss Bean Statistics in Web-Console

I would like to use the JBoss web-console to view bean statistics. I read in a book ("JBoss: A Developer's Notebook") that bean invocation statistics are visible when drilling down through the web-console's tree. There's even a tantalizing screenshot of individual bean data showing that "the total number of invocations processed is recorded, along with the minimum, maximum, and average processing time." However, when I drill down to my EJB module, I cannot get all the way to my beans. At the EJB module level, I see XML on the right side of the screen, and the line "Provides Statistics" shows false. I have scoured the net for some way to 'turn on' beans so that they provide statistics. Is there an element for this in my application's jboss.xml?
Oh, I should say that I'm using JBoss AS 5.1.
Thanks for any help.
Josh


IBM Datastage reports failure code 262148

I realize this is a bad question, but I don't know where else to turn.
Can someone point me to where I can find the list of failure codes for IBM products? I've tried searching the IBM documentation and doing general Google searches, but this particular error is unique and I've never seen it before.
I'm trying to find out what code 262148 means.
Background:
I built a datastage job that has:
ORACLE CONNECTOR --> TRANSFORMER --> HIERARCHICAL DATA
The intent is to pull data from an Oracle table and output the response of the select statement into a JSON file. I'm using the Hierarchical Data stage to set it up. When tested within the stage, there are no problems; I see the JSON output.
However, when I run the job, it squawks:
reports failure code 262148
then the job aborts. There are no warnings, no signs, no errors prior to this line.
Until I know what it is, I can't troubleshoot.
If someone can point me to where the list of failure codes is, I can proceed.
Thanks!
Can someone point me to where I can find the list of failure codes for IBM products?
Here you go:
https://www.ibm.com/support/knowledgecenter/en/ssw_ibm_i_73/rzahb/rzahbsrclist.htm
While this list does not include your specific error code, it does categorize many other codes and explains how the code breakdown works. And while this list is not specifically for DataStage, in my experience IBM standards are generally consistent across different products. In this list, every code that starts with a 2 is a disk failure, so maybe run a disk checker. That's the best I've got as far as error codes.
Without knowledge of the inner workings of the product, there is not much more you can do beyond checking system health in general (especially disk, network, and permissions in this case). Personally, I prefer to go after internal knowledge whenever external knowledge proves insufficient. I would start with a network capture, as I'm sure there's a socket involved in the connection between the layers. Compare a capture taken when the select statement is run from the Hierarchical Data stage's test mode with one taken when it is run from the job. There may be clues in there, like reset or refused connections.

How to debug drools decision table

Could someone help me with debugging a decision table in Drools? For our project we are creating a decision table with more than 1000 rules. Whenever there is a mistake in a rule, the whole spreadsheet stops working, and it doesn't display where the exact error is.
Drools: version 7.15.0.Final
I currently follow two approaches for debugging decision tables:
Compilation phase
In my case, I have to serialize the decision tables to save time. Normally they're converted into .drl files, which are then evaluated; I skip that step, compile them directly into knowledge bases, and serialize those. My application then uses these serialized knowledge bases.
Sometimes my decision tables fail to compile.
I debug them by generating the .drl file. The errors that the DRL parser reports are mostly identifiable from the generated .drl file.
Here is a code snippet for converting a Drools decision table into its corresponding DRL file. This is a minimal sketch assuming Drools 7.x with the drools-decisiontables module on the classpath; the spreadsheet path is a placeholder:
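import java.io.FileInputStream;
import java.io.InputStream;

import org.drools.decisiontable.InputType;
import org.drools.decisiontable.SpreadsheetCompiler;

public class DecisionTableToDrl {
    public static void main(String[] args) throws Exception {
        // Placeholder path; point this at your decision table spreadsheet.
        try (InputStream xls = new FileInputStream("rules/MyDecisionTable.xls")) {
            SpreadsheetCompiler compiler = new SpreadsheetCompiler();
            // Expands the spreadsheet into the DRL source it compiles to.
            String drl = compiler.compile(xls, InputType.XLS);
            // Dump the DRL; parser errors are much easier to locate here.
            System.out.println(drl);
        }
    }
}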
Runtime phase
Sometimes, even if my decision tables have compiled successfully, they have runtime issues: some rules don't fire as expected. To debug these, I've found AgendaEventListener helpful. Drools provides two helpful event listener implementations for debugging purposes out of the box: DebugAgendaEventListener and DebugRuleRuntimeEventListener.
There are two variations of DebugAgendaEventListener and DebugRuleRuntimeEventListener. The ones from the org.drools.core.event package log events through a Logger instance, whereas the ones from the org.kie.api.event.rule package write to stderr. However, both have the exact same functionality.
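For illustration, here is a minimal sketch of attaching both listeners (it assumes a classpath-based KieContainer with a default KieSession configured in kmodule.xml):

import org.kie.api.KieServices;
import org.kie.api.event.rule.DebugAgendaEventListener;
import org.kie.api.event.rule.DebugRuleRuntimeEventListener;
import org.kie.api.runtime.KieContainer;
import org.kie.api.runtime.KieSession;

public class ListenerDebugging {
    public static void main(String[] args) {
        KieContainer container = KieServices.Factory.get().getKieClasspathContainer();
        KieSession session = container.newKieSession();

        // Prints agenda events (matches created/cancelled, rules fired) ...
        session.addEventListener(new DebugAgendaEventListener());
        // ... and working-memory events (facts inserted/updated/deleted).
        session.addEventListener(new DebugRuleRuntimeEventListener());

        // session.insert(yourFact); // insert the facts under test here
        session.fireAllRules();
        session.dispose();
    }
}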
Moreover, the KIE event model can be leveraged for custom debugging and to get more information out. More information can be found in the Drools 7.15.0.Final docs.
Additional links and references:
https://javadude.wordpress.com/2012/03/06/debugging-drools-rules/

Spring Batch architecture

Hi
I am a novice in the Spring Batch world, and over the last few days I've spent time watching Michael Minella's YouTube video, reading some documentation, and successfully running some demo projects I found on the internet. I think Spring Batch is a hot candidate for our needs. But here is our story.
I am working at a company that developed its own scheduling and batch framework, more than a decade ago, for its business department. The framework is capable of running DB stored procedures, DB functions, and dynamic SQL. Needless to say, it is very challenging to maintain, since too many people with various development skills did the coding and they don't work here anymore. Like Spring Batch, our framework can run jobs and steps sequentially as well as asynchronously. We also have a Job Repository where we store whole job definitions (users create new jobs via a GUI) and job instances with their context (in case the server goes down; when the server is back up, it will resume running the job).
My questions are following:
Can we create new Spring Batch jobs dynamically (either via XML or code) and, via standard SB interfaces, store them in the JobRepository DB?
Today, at certain time periods, we have up to a hundred job executions running simultaneously. They also reuse a connection pool to the DB. Older Spring Batch reference documentation states that JobFactory will create a fresh ApplicationContext for each job execution. How can we achieve reuse of connection pools if this is the case in Spring Batch?
I know there is support for continuing failed steps, but what if the server/app goes down? Will I be able to restart my app and retrieve the job instance with its context from the JobRepository in order to continue from the failed step?
Can a "step1.1" in "job1" be dependent on "step 2.1" from "job2" finishing within last hour? In such scenarios I may be using a step listener on "step1.1" to accomplish this?
Kind regards
Toto
You have a lot of material here to cover, so let me respond one point at a time:
Can we create new Spring Batch jobs dynamically (either via XML or code) and, via standard SB interfaces, store them in the JobRepository DB?
Can you generate a job definition dynamically? Yes. We do it in Spring XD with regard to the job orchestration piece (the composed job DSL is used to generate an XML file, for example).
Does Spring Batch provide facilities to do this? No. You'd have to code it yourself.
Also note that you'd have to store the definition in your own table (the schema defined by Spring Batch doesn't have a table for this).
Today, at certain time periods, we have up to a hundred job executions running simultaneously. They also reuse a connection pool to the DB. Older Spring Batch reference documentation states that JobFactory will create a fresh ApplicationContext for each job execution. How can we achieve reuse of connection pools if this is the case in Spring Batch?
You can use parent/child context configurations to reuse beans including a DataSource. Define the DataSource in the parent and then the jobs that depend on it in child contexts.
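As a minimal sketch of that idea (the configuration class names are hypothetical, and an in-memory DataSource stands in for a real connection pool):

import javax.sql.DataSource;

import org.springframework.context.annotation.AnnotationConfigApplicationContext;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.jdbc.datasource.DriverManagerDataSource;

public class ParentChildContextSketch {

    @Configuration
    static class InfrastructureConfig {
        @Bean
        public DataSource dataSource() {
            // Stand-in for a pooled DataSource (HikariCP, DBCP, etc.).
            return new DriverManagerDataSource("jdbc:hsqldb:mem:batch", "sa", "");
        }
    }

    public static void main(String[] args) {
        AnnotationConfigApplicationContext parent =
                new AnnotationConfigApplicationContext(InfrastructureConfig.class);

        // One child context per job; its beans can inject the parent's
        // DataSource, so every job execution shares the same pool.
        AnnotationConfigApplicationContext jobContext = new AnnotationConfigApplicationContext();
        jobContext.setParent(parent);
        // jobContext.register(JobOneConfig.class); // hypothetical job configuration
        jobContext.refresh();

        DataSource shared = jobContext.getBean(DataSource.class); // resolved from the parent
        System.out.println(shared != null);
        jobContext.close();
        parent.close();
    }
}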
I know there is a support for continuing failed steps but what if the server/app goes down, will I be able to restart my app and retrieve job instance with its context from JobRepository in order to continue from failed step?
This is really an orchestration concern. Spring Batch, by design, does not take the orchestration of jobs into consideration. This allows you to orchestrate them however you want.
The way I'd recommend handling this is via Spring XD or (depending on your timelines) Spring Cloud Data Flow. These tools provide orchestration capabilities, including the redeployment of a job if it goes down. That being said, they won't restart a job that was running if it fails, because that typically requires some form of human decision based on the use case. However, Spring XD currently has (and Spring Cloud Data Flow will have) the capability to implement something like this in a pretty straightforward way.
Can a "step1.1" in "job1" be dependent on "step 2.1" from "job2" finishing within last hour? In such scenarios I may be using a step listener on "step1.1" to accomplish this?
In cases like this, I'd start to question how your job is configured. You can use a JobExecutionDecider to decide whether a step should be executed or not, if it still makes sense.
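As a rough sketch, a decider for this kind of dependency might look like the following (the dependency check itself is hypothetical; in practice you would query the job repository, e.g. via JobExplorer, for job2's last successful execution):

import org.springframework.batch.core.JobExecution;
import org.springframework.batch.core.StepExecution;
import org.springframework.batch.core.job.flow.FlowExecutionStatus;
import org.springframework.batch.core.job.flow.JobExecutionDecider;

public class DependencyFreshnessDecider implements JobExecutionDecider {

    @Override
    public FlowExecutionStatus decide(JobExecution jobExecution, StepExecution stepExecution) {
        // Route the flow based on whether the upstream step finished recently.
        return upstreamFinishedWithinLastHour()
                ? new FlowExecutionStatus("CONTINUE")
                : new FlowExecutionStatus("SKIP");
    }

    private boolean upstreamFinishedWithinLastHour() {
        // Placeholder: look up job2/step2.1's last completion time here.
        return true;
    }
}

In the job definition, the decider's status strings ("CONTINUE"/"SKIP") are then matched with on(...) transitions in the flow.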
All things considered, while you can accomplish most of what you're looking for with Spring Batch, using something like Spring XD or Spring Cloud Data Flow will make your life a lot easier.
Can we create new Spring Batch jobs dynamically (either via XML or code) and, via standard SB interfaces, store them in the JobRepository DB?
It is easy to use StepBuilderFactory, FlowBuilder, etc. to programmatically build the Spring Batch artifacts. You'll probably want to back those artifacts with Spring beans (to get nice facilities like the step/job Spring scopes, injection, and so on), and for that you can use prototype, execution-scoped, and job-scoped beans, or even use facilities such as BeanDefinitionBuilder to dynamically create beans.
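For illustration, here is a minimal sketch of building a job programmatically with the builder factories (the job and step names are placeholders; in a real application the factories come from @EnableBatchProcessing):

import org.springframework.batch.core.Job;
import org.springframework.batch.core.Step;
import org.springframework.batch.core.configuration.annotation.JobBuilderFactory;
import org.springframework.batch.core.configuration.annotation.StepBuilderFactory;
import org.springframework.batch.core.step.tasklet.Tasklet;
import org.springframework.batch.repeat.RepeatStatus;

public class DynamicJobSketch {

    // Both factories would normally be injected by @EnableBatchProcessing.
    public Job buildJob(JobBuilderFactory jobs, StepBuilderFactory steps) {
        Tasklet tasklet = (contribution, chunkContext) -> {
            System.out.println("running a dynamically built step");
            return RepeatStatus.FINISHED;
        };

        Step step = steps.get("dynamicStep")
                .tasklet(tasklet)
                .build();

        return jobs.get("dynamicJob")
                .start(step)
                .build();
    }
}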
Older Spring Batch reference documentation states that JobFactory will create a fresh ApplicationContext for each job execution. How can we achieve reuse of connection pools if this is the case in Spring Batch?
The GenericApplicationContextFactory creates a child application context. You can have the "global" beans in the parent application context.
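A small sketch of that mechanism (ParentConfig and JobConfig are hypothetical configuration classes; the context injected into the factory becomes the parent of every context it creates):

import org.springframework.batch.core.configuration.support.GenericApplicationContextFactory;
import org.springframework.context.ConfigurableApplicationContext;
import org.springframework.context.annotation.AnnotationConfigApplicationContext;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

public class ChildContextFactorySketch {

    @Configuration
    static class ParentConfig {
        @Bean
        public String sharedResource() { // stands in for a shared DataSource/pool
            return "shared-connection-pool";
        }
    }

    @Configuration
    static class JobConfig { // hypothetical per-job configuration
    }

    public static void main(String[] args) {
        AnnotationConfigApplicationContext parent =
                new AnnotationConfigApplicationContext(ParentConfig.class);

        GenericApplicationContextFactory factory =
                new GenericApplicationContextFactory(JobConfig.class);
        factory.setApplicationContext(parent); // created contexts become children of 'parent'

        ConfigurableApplicationContext jobContext = factory.createApplicationContext();
        System.out.println(jobContext.getBean(String.class)); // resolved from the parent
        jobContext.close();
        parent.close();
    }
}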
I know there is a support for continuing failed steps but what if the server/app goes down, will I be able to restart my app and retrieve job instance with its context from JobRepository in order to continue from failed step?
Yes, but not that easily.
Can a "step1.1" in "job1" be dependent on "step 2.1" from "job2" finishing within last hour? In such scenarios I may be using a step listener on "step1.1" to accomplish this?
A JobExecutionDecider will likely be the best option there.

How to Query a Web Service (XML) From a Subreport?

I apologize if this has already been asked, but my searches have had little luck. I've also tried the MSDN forums, but it's obvious that I need the big guns for this one ;)
I am using VS2008 (SSRS 2008 R2) to create a series of subreports. Each Subreport queries 1 or more Web Methods from a WCF Web Service.
When I run an .rdl as a stand-alone report, everything renders properly. When I run that .rdl as a subreport, I receive an error recommending that I check the log (details and steps to reproduce below).
Simple Test (No Subreports):
Using the instructions found in the article Reporting Services: Using XML and Web Service Data Sources I was able to create the necessary Shared Datasets for each web method.
I successfully created a report (SubTest.rdl) utilizing a Shared Dataset for a Table.
The dataset's underlying web method contains no parameters (trying to keep it simple).
SubTest.rdl renders correctly!
So far so good.
Test 2: Master/Subreport structure
created a Parent/Master report (MasterTest.rdl)
added a Subreport Report Item, and specified "SubTest.rdl"
Note: No Report parameters are specified, as SubTest does not have any parameters defined.
I receive the following error during the rendering of the MasterTest.rdl report:
Warning 1 [rsErrorExecutingSubreport] An error occurred while executing the subreport 'Subreport1' (Instance: 5iS0): Data retrieval failed for the subreport, 'Subreport1', located at: /SubTest. Please check the log files for more information.
Additional Testing:
To ensure that my subreport was properly defined in MasterTest.rdl, I altered SubTest.rdl: I removed the DataSource, DataSet, and Table from SubTest.rdl and inserted a TextBox filled with the words "Output From Subreport". This rendered properly in the master report, indicating that the problem specifically relates to my Web Service DataSource/DataSet.
Questions: :(
Is there a way to accomplish this task?
If this is not possible, can anyone suggest a workaround for providing Web Service xml to a subreport?
Also, per the error message: any idea where I can find this log? (Because this is running in Visual Studio, checking the SSRS logs folder on my local machine did not help, nor did running VS with logging enabled.)
A workaround that I could not get to work:
I tried to follow the instructions in the linked article for passing XML to a subreport as a parameter, but:
1. The master passes the XML as a scalar string. Because I am querying a web service, and not using a dataset where each row contains a column holding the XML, I only have the resultant dataset to work with. Basically, I need to convert a dataset to a scalar.
2. I had difficulty following the instructions (even if I could solve problem 1, I'm not even sure that I properly defined the dataset and parameter - how do I get fields when the data is not known until runtime?)
Thank you for any help you can give. This has been driving me nuts for days!

xlang/s engine event log entry: Failed while creating a X service. Object of type 'Y' cannot be converted to type 'Y'

xlang/s engine event log entry: Failed while creating a X service. Object of type 'Y' cannot be converted to type 'Y'.
This event log entry appears to be the same as what is discussed here:
Microsoft.XLANGs.Core.ServiceCreationException : Failed while creating a ABC service
I've investigated the 2 solutions offered in this post, but neither fixed my problem.
I'm running BizTalk 2010 and am seeing the issue with a uniform sequential convoy. Each instance of the orchestration is initially activated as expected. All the shapes before the second receive shape execute without issue. The problem occurs when the orchestration instance receives its second message. Execution does not proceed beyond the receive shape that corresponds to this second message.
Using the Group Hub page, I can see that the second message is associated with the correct service instance. This service instance is suspended and the error message shown above appears in the event log.
Occasionally (about 1 out of every 5 times), the problem mentioned above does NOT occur. That is, subsequent messages are processed by the orchestration. I'm feeding in the same test files each time. Even more interesting: the problem NEVER occurs if I set a breakpoint (in the Orchestration Debugger) on a Listen shape just before the second receive shape.
The fact that I don't see the problem when using the debugger makes me wonder if this is a timing issue. Unfortunately, it doesn't seem like I would have much control over the timing.
Does anyone have any idea about how to prevent this problem from occurring?
Thanks
Is there only a single BizTalk host server involved? It wouldn't surprise me if the issue was related to difficulty loading a required assembly from the GAC. If there were multiple BizTalk servers involved, it could be that one of them is the culprit (or only one of them isn't). Of course, it may not be that easy.
An alternative is the second answer on the other question to which you linked, stating to check that a required schema is not deployed more than once. I have had this problem before, and about the only way to figure out that this is what's going on is to look in the BizTalk Admin Console under BizTalk Group > Applications > <AllArtifacts> > Schemas and sort by the Target Namespace to see if there are any two (or more) rows with the same combination of Target Namespace and Root Name.
The issue could also be caused by a schema mismatch, where perhaps an older/different version of a schema is deployed than expected, and a field that is only sometimes there (hence why it sometimes works) causes a mismatch.
These are, of course, just theories, without the ability to look into your environment and see the actual BizTalk artifacts.
I filed this issue with Microsoft. It turns out that "the behavior is actually an existing design limitation with the way the XLANG compiler relies on type wrappers." The issue resulted from a very specific scenario: we had an orchestration with one message variable directly referencing a schema and another message variable referencing a multi-part message type based on the same schema. The orchestration, schema, and multi-part message type were each defined in different projects.
Microsoft suggested that we modify one of the variables so that both referenced the schema or both referenced the MMT. Unfortunately, keeping the variables as they were was critical for us. We discovered (and Microsoft confirmed) that moving the definition of the MMT into the same project as the orchestration resolved the issue as well.