When using the On SubJob Error trigger, I would like to know which component failed inside the subjob. I have read that you can check the error message of each component and select the one that is not null, but that feels like bad practice. Is there any variable that stores the identity of the component that failed?
I may be wrong, but I'm afraid there isn't. This is because global variables are component-scoped (i.e. they are get/set by the components themselves), not subjob-scoped (that would require them to be set by Talend itself). When the subjob-error signal is triggered, you lose any component-based data coming from tFileInputDelimited. For this design reason, I don't think you will be able to solve your problem without iterating over the globalMap, searching for the error strings here and there.
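For illustration, a minimal tJava sketch of that globalMap scan; the <component>_ERROR_MESSAGE key naming is the usual Talend convention, but verify it against your own components:

    // tJava linked via On SubJob Error: scan globalMap for non-null error messages.
    // The "<component>_ERROR_MESSAGE" key pattern is assumed; check your component names.
    for (java.util.Map.Entry<String, Object> entry : globalMap.entrySet()) {
        if (entry.getKey().endsWith("_ERROR_MESSAGE") && entry.getValue() != null) {
            String component = entry.getKey().replace("_ERROR_MESSAGE", "");
            System.err.println("Failed component: " + component + " -> " + entry.getValue());
        }
    }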
Alternatively you can use tLogCatcher, which has an 'origin' column, to spot the offending component and possibly route to different recovery subjobs depending on which component threw the exception. This is not a design I trust too much, actually, because tLogCatcher is job-scoped, while OnSubjobError is directly linked to a specific subjob only. But it could work in simple cases.
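If you go the tLogCatcher route, a tJavaRow behind it could inspect the origin column, roughly like this (a sketch; the component name in the comparison is just an example):

    // tJavaRow after tLogCatcher (sketch): 'origin' and 'message' are standard
    // tLogCatcher schema columns; the component name below is an example.
    if ("tFileInputDelimited_1".equals(input_row.origin)) {
        System.err.println("Recoverable error in " + input_row.origin + ": " + input_row.message);
    }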
I am trying to simulate vehicles parking in a company parking lot. I have planned and unplanned cars. I have a schedule set up for the planned cars, which is working fine. The planned cars, with their arrival and departure times and some other parameters, are stored in the AnyLogic database.
For the source "unplannedCars" I wanted to inject them via the inject() function call. The unplannedCar's parameters would be set on exit. But as soon as I try to inject some unplanned cars, the model gives me a NullPointerException. Does this have something to do with my model taking the values from the database?
I tried to fix it by adding the cars manually to a population and using the Enter block, but then I had problems using the Road Traffic Library in combination with it.
Edit: I noticed the NullPointerException happens only if the "eOrV" block is used.
Edit 2: I also tried to set default values for the agent and for the database. Now I get the following NullPointerException (see the second error screenshot).
Help is appreciated.
[Screenshots: unplannedCars source, plannedCars source, error message, model, second error message, SelectOutput block]
The NPE in the SelectOutput is telling you that there is no field motortype in the incoming agent.
Most likely your Car agent type does not have such a field, or the field is indeed null, i.e. the String was never initialized.
Make sure that the agents passing through the SelectOutput have a field motortype of type String and that it actually contains a value.
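Until the root cause is fixed, you can also make the condition itself null-safe; a sketch, with the compared value assumed:

    // SelectOutput "Condition" (Java expression, sketch): check for null before
    // comparing, so a missing/uninitialized motortype no longer throws an NPE.
    agent.motortype != null && agent.motortype.equals("electric")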
I managed to create a workaround for the problem. As I have a database and a schedule for my agents, I also wanted to control them manually, so I needed to use both the schedule and the inject() function. By using the schedule and setting the parameters from the database, I was able to solve the NullPointerException. I still don't know what caused it, as I had default values set for my agents.
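For reference, a rough sketch of the combination, with block and field names assumed:

    // In a schedule or event action (sketch): inject one unplanned car into the Source.
    unplannedCars.inject(1);

    // In the Source's "On exit" action: initialize the DB-backed parameters so
    // nothing is left null (field name and value assumed).
    agent.motortype = "combustion";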
So here is a flow wherein the first component executes a database query to find a QAR_ID (a single row); if it is found, all is well. I am trying to add error handling to this. When no rows are found, the flow is directed to tJava_11, which raises a Java exception that then gets logged by another tJava component.
The problem I am facing is that when the job takes the error-handling flow, it logs the error and then goes straight to the post-job section. However, I want Talend to take the OnSubJobOk route so that it continues with the other steps instead of jumping directly to the post-job section.
I know this is possible using subjobs, but I don't want to keep creating 'n' subjobs.
Is there any way this can be done in the same job?
You could remove the RunIf and handle both scenarios in the get_QAR_ID into context component, i.e. query the database component's NB_LINE 'after' variable: if it's < 1, raise the error; otherwise set the value. Your job would then flow on through the OnSubjobOk.
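A rough tJava sketch of that (component and context variable names assumed):

    // tJava inside the same subjob (sketch): branch on the DB component's
    // NB_LINE "after" variable instead of using a RunIf.
    Integer nbLine = (Integer) globalMap.get("tDBInput_1_NB_LINE");
    if (nbLine == null || nbLine < 1) {
        System.err.println("No QAR_ID found");             // log the error case
    } else {
        context.QAR_ID = (String) globalMap.get("QAR_ID"); // however you stored the value
    }

The job then continues through OnSubjobOk in both cases.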
You can do something like this:
In tJava_1 you do your error logging if no row is returned by your query, then continue to the next subjob. There is no need to throw an exception here only to catch it immediately afterwards.
If a row is found, you continue to the next subjob (tJava_2) via an If trigger, as sketched below.
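The If trigger condition could look like this (component name assumed; null-checked so the trigger itself can't fail):

    // Condition on the If trigger (sketch): only follow this link when a row was found.
    globalMap.get("tDBInput_1_NB_LINE") != null
        && ((Integer) globalMap.get("tDBInput_1_NB_LINE")) > 0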
I have put an InitializeCorrelation activity at the beginning of the workflow, and then I want to correlate on different keys, so I've put in another InitializeCorrelation activity with different keys. But I am getting this error:
The execution of an InstancePersistenceCommand was interrupted because the instance key 'a765c209-5adc-4f03-9dd2-1af5e33aab3b' was not associated to an instance. This can occur because the instance or key has been cleaned up, or because the key is invalid. The key may be invalid if the message it was generated from was sent at the wrong time or contained incorrect correlation data.
So, is it possible to change the correlation after the workflow has started or not?
To answer the question explicitly: yes, you can change the data the correlation is based on. You can do it not only within a sequence, but also with different correlation data in each branch of a Parallel activity. Correlation can be initialized using an InitializeCorrelation or a SendReply activity, as described here: http://msdn.microsoft.com/en-us/library/ee358755(v=vs.100).aspx.
As the Workflow Designer is not the strongest part of Visual Studio (XPath queries are never checked, sometimes even build errors are not reflected on activities, etc.), it is not always obvious what the problem is. So, I suggest the following:
initialize a CorrelationHandle only once, with the correlation type Query correlation, for a specific piece of correlation data
initialize a new CorrelationHandle instance for different correlation data
once a CorrelationHandle is initialized, it can be used multiple times afterwards by different Receive activities (Receive.CorrelatesOn, Receive.CorrelatesWith)
if correlation does not work, it can be because of wrong XPath queries; these are not refreshed automatically when the OperationName or parameter names change, so it is advisable to regenerate them after renaming
it can be a good idea to turn off workflow persistence and NLB while you are testing, so you can concentrate on the correlation-related problems
Have a look at the Instances table in the database where you store persisted instances. One of the entries will probably be in the suspended state, and there is also a column with an error description. What caused this error? Have you made changes to the workflow and redeployed it?
I have a requirement to allow a user to specify the value of an InArgument / property from a list of valid values (e.g. a combobox). The list of valid values is determined by the value of another InArgument (the value of which will be set by an expression).
For instance, at design time:
User enters a file path into workflow variable FilePath
The DependedUpon InArgument is set to the value of FilePath
The file is queried and a list of valid values is displayed to the user to select the appropriate value (presumably via a custom PropertyValueEditor).
Is this possible?
Considering this is being done at design time, I'd strongly suggest you put all of this logic in the designer, rather than in the Activity itself.
Design-time logic shouldn't be contained within your Activity. Your Activity should be able to run independent of any designer. Think about it this way...
You sit down and design your workflow using Activities and their designers. Once done, you install/xcopy the workflows to a server somewhere else. When the server loads that Activity prior to executing it, what happens when your design logic executes in CacheMetadata? Either it is skipped, using some heuristic to determine that you are not running at design time, or you include extra logic to skip the code when it is unable to locate that file. Either way, why is a server executing this design-time code at all? The answer is that it shouldn't be; that code belongs with the designers.
This is why, if you look at the framework, you'll see that Activities and their designers exist in different assemblies. Your code should be organized the same way: design-centric code delivered in separate assemblies from your Activities, so that you may deliver both to designers, and only the Activity assemblies to your application servers.
When do you want to validate this, at design time or run time?
Design time is limited because the user can use an expression that depends on another variable, and you can't read that variable's value at design time. You can, however, look at the expression and possibly deduce an invalid combination that way. In that case you need to add code to the CacheMetadata function.
At run time you can get the actual values and validate them in the Execute function.
xlang/s engine event log entry: Failed while creating a X service. Object of type 'Y' cannot be converted to type 'Y'.
This event log entry appears to be the same as what is discussed here:
Microsoft.XLANGs.Core.ServiceCreationException : Failed while creating a ABC service
I've investigated the two solutions offered in that post, but neither fixed my problem.
I'm running BizTalk 2010 and am seeing the issue with a uniform sequential convoy. Each instance of the orchestration is initially activated as expected. All the shapes before the second receive shape execute without issue. The problem occurs when the orchestration instance receives its second message. Execution does not proceed beyond the receive shape that corresponds to this second message.
Using the Group Hub page, I can see that the second message is associated with the correct service instance. This service instance is suspended and the error message shown above appears in the event log.
Occasionally (about 1 out of every 5 times), the problem mentioned above does NOT occur; that is, subsequent messages are processed by the orchestration. I'm feeding in the same test files each time. Even more interesting: the problem NEVER occurs if I set a breakpoint (in the Orchestration Debugger) on a Listen shape just before the second receive shape.
The fact that I don't see the problem when using the debugger makes me wonder if this is a timing issue. Unfortunately, it doesn't seem like I would have much control over the timing.
Does anyone have any idea about how to prevent this problem from occurring?
Thanks
Is there only a single BizTalk host server involved? It wouldn't surprise me if the issue was related to difficulty loading a required assembly from the GAC. If there were multiple BizTalk servers involved, it could be that one of them is the culprit (or only one of them isn't). Of course, it may not be that easy.
An alternative is the second answer on the other question you linked to, which says to check that a required schema is not deployed more than once. I have had this problem before, and about the only way to figure out that this is what's going on is to look in the BizTalk Admin Console under BizTalk Group > Applications > <AllArtifacts> > Schemas and sort by Target Namespace to see if two (or more) rows share the same combination of Target Namespace and Root Name.
The issue could also be caused by a schema mismatch, where an older or different version of a schema than expected is deployed, and a field that is only sometimes present (hence why it sometimes works) causes a mismatch.
These are, of course, just theories, without the ability to look into your environment and see the actual BizTalk artifacts.
I filed this issue with Microsoft. It turns out that "the behavior is actually an existing design limitation with the way the XLANG compiler relies on type wrappers." The issue resulted from a very specific scenario: we had an orchestration with one message variable directly referencing a schema and another message variable referencing a multi-part message type based on the same schema. The orchestration, schema, and multi-part message type were each defined in different projects.
Microsoft suggested that we modify one of the variables so that both referenced the schema or both referenced the MMT. Unfortunately, keeping the variables as they were was critical for us. We discovered (and Microsoft confirmed) that moving the definition of the MMT into the same project as the orchestration resolved the issue as well.