xlang/s engine event log entry: Failed while creating a X service. Object of type 'Y' cannot be converted to type 'Y' - service

xlang/s engine event log entry: Failed while creating a X service. Object of type 'Y' cannot be converted to type 'Y'.
This event log entry appears to be the same as what is discussed here:
Microsoft.XLANGs.Core.ServiceCreationException : Failed while creating a ABC service
I've investigated the two solutions offered in that post, but neither fixed my problem.
I'm running BizTalk 2010 and am seeing the issue with a uniform sequential convoy. Each instance of the orchestration is initially activated as expected. All the shapes before the second receive shape execute without issue. The problem occurs when the orchestration instance receives its second message. Execution does not proceed beyond the receive shape that corresponds to this second message.
Using the Group Hub page, I can see that the second message is associated with the correct service instance. This service instance is suspended and the error message shown above appears in the event log.
Occasionally (about 1 out of every 5 times), the problem mentioned above does NOT occur. That is, subsequent messages are processed by the orchestration. I'm feeding in the same test files each time. Even more interesting: the problem NEVER occurs if I set a breakpoint (in Orchestration Debugger) on a Listen shape just before the second receive shape.
The fact that I don't see the problem when using the debugger makes me wonder if this is a timing issue. Unfortunately, it doesn't seem like I would have much control over the timing.
Does anyone have any idea about how to prevent this problem from occurring?
Thanks

Is there only a single BizTalk host server involved? It wouldn't surprise me if the issue was related to difficulty loading a required assembly from the GAC. If there were multiple BizTalk servers involved, it could be that one of them is the culprit (or only one of them isn't). Of course, it may not be that easy.
An alternative is the second answer on the other question to which you linked, which suggests checking that a required schema is not deployed more than once. I have had this problem before, and about the only way to figure out that this is what's going on is to look in the BizTalk Admin Console under BizTalk Group > Applications > <AllArtifacts> > Schemas and sort by Target Namespace to see whether there are two (or more) rows with the same combination of Target Namespace and Root Name.
The issue could also be caused by a schema mismatch, where perhaps an older/different version of a schema is deployed than expected, and a field that is only sometimes there (hence why it sometimes works) causes a mismatch.
These are, of course, just theories, without the ability to look into your environment and see the actual BizTalk artifacts.

I filed this issue with Microsoft. It turns out that "the behavior is actually an existing design limitation with the way the XLANG compiler relies on type wrappers." The issue resulted from a very specific scenario. We had an orchestration with a message variable directly referencing a schema and another message variable referencing a multi-part message type (MMT) based on the same schema. The orchestration, schema, and multi-part message type were each defined in different projects.
Microsoft suggested that we modify one of the variables so that both referenced the schema or both referenced the MMT. Unfortunately, keeping the variables as they were was critical for us. We discovered (and Microsoft confirmed) that moving the definition of the MMT into the same project as the orchestration resolved the issue as well.


Why is my CTP not getting any data for one of the tables I am subscribing to?

I have a few CTPs (chained tickerplants) subscribing to a TP (tickerplant).
The subscription is established with no problems, but data doesn't seem to hit one of my CTPs.
I have two CTPs subscribing to the same table; one is getting data, the other isn't.
I checked .u.w and I can see the handles being open for the said table, but when I check the upd on my CTP, it receives all other tables except this one.
The upd on my CTP is a simple insert. I cannot see any data at all for the table; the t parameter is never set to the name of the table I am interested in. I don't know what else to check. Any suggestions would be greatly appreciated. The pub logic is the default pub logic.
There are no errors in the TP.
UPDATE 1: I can send other messages and I receive data from the TP for other tables. The issue doesn't seem to persist in DR, just prod. I cannot debug much in prod.
Without seeing more of your code it's hard to give a good answer.
A couple of things you could try:
Check if you can send a generic message (e.g. h(+;1;2)) from your TP to the CTP via the handle in .u.w; this will make sure the connection is ok.
If you can send a message, then you can check whether the issue is in your CTP. You can see exactly what is being sent by adding some logging to your upd function, or, if you think the message isn't getting that far, to your .z.ps message handler, e.g. .z.ps:{0N!x;value x} will perform some very basic client-side logging (see the sketch after this list).
If you can't send a message down the handle in the TP, then it's possible there are other network issues at play (although I expect you would be seeing errors in your TP if that were the case). You could check .z.W in your CTP in this case to see if the corresponding handle for the TP is present there.
You can also send a test update to your tickerplant and add logging at each step of the way if you really want to see the chain of events, but this could be quite invasive.
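For reference, a minimal sketch of the CTP-side logging described above. It assumes a standard kdb+tick-style CTP and uses a hypothetical table name `trade; adapt the names to your environment:
/ illustrative sketch only: wrap the existing upd on the CTP to log what arrives
.log.upd:upd;                                / keep a reference to the original upd
upd:{[t;x] 0N!(`upd;t;count x); .log.upd[t;x]};
/ very basic logging of every incoming async message, as suggested above
.z.ps:{0N!(`zps;x); value x};
/ on the TP side, .u.w maps each table to its (handle;syms) subscriber pairs, so
/ h:first first .u.w[`trade]; h(+;1;2)       / sends a generic test message down that handle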

IBM Datastage reports failure code 262148

I realize this is a bad question, but I don't know where else to turn.
Can someone point me to where I can find the list of failure codes for IBM? I've tried searching for it in the IBM documentation and in a general Google search, but this particular error is unique and I've never seen it before.
I'm trying to find out what code 262148 means.
Background:
I built a datastage job that has:
ORACLE CONNECTOR --> TRANSFORMER --> HIERARCHICAL DATA
The intent is to pull data from an Oracle table and output the response of the select statement into a JSON file. I'm using the HIERARCHICAL DATA stage to set that up. When tested within the stage, there are no problems; I see the JSON output.
However, when I run the job, it squawks:
reports failure code 262148
then the job aborts. There are no warnings, no signs, no errors prior to this line.
Until I know what it is, I can't troubleshoot.
If someone can point me to where the list of failure codes is, I can proceed.
Thanks!
Can someone point me to where I can find the list of failure codes for IBM?
Here you go:
https://www.ibm.com/support/knowledgecenter/en/ssw_ibm_i_73/rzahb/rzahbsrclist.htm
While this list does not list your specific error code, it does categorize many other codes, and explains how the code breakdown works. While this list is not specifically for DataStage, in my experience IBM standards are generally compatible across different products. In this list every code that starts with a 2 is a disk failure, so maybe run a disk checker. That's the best I've got as far as error codes.
Without knowledge of the inner workings of the product, there is not much more you can do beyond checking system health in general (especially disk, network, and permissions in this case). Personally, I prefer to go get internal knowledge whenever external knowledge proves insufficient. I would start with a network capture, as I'm sure there's a socket involved in the connection between the layers. Compare a capture taken when the select statement is run from within the HIERARCHICAL DATA stage with one taken when it is run from the job. There may be clues in there, like reset or refused connections.

How can I know what component failed?

When using the On SubJob Error trigger, I would like to know what component failed inside the subjob. I have read that you can check the error message of each component and select the one that is not null. But I feel this practice is bad. Is there any variable that stores the identity of the component that failed?
I may be wrong, but I'm afraid there isn't. This is because globalVar elements are component-scoped (i.e. they are get/set by the components themselves), not subjob-scoped (that would mean being set by Talend itself, or something similar). When the subjobError signal is triggered, you lose any component-based data coming from tFileInputDelimited. For this design reason, I don't think you will be able to solve your problem without iterating over the globalMap and searching for the error strings here and there (see the sketch below).
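As an illustration, here is a minimal sketch of that globalMap scan, for example in a tJava component inside the error-handling subjob. The "_ERROR_MESSAGE" key suffix and the component placement are assumptions; check which keys your components actually publish:
// Illustrative sketch only: scan the globalMap for per-component error entries.
// Assumes components publish keys like "tFileInputDelimited_1_ERROR_MESSAGE"
// and that this runs in a tJava component where globalMap is in scope.
for (java.util.Map.Entry<String, Object> entry : globalMap.entrySet()) {
    String key = entry.getKey();
    Object value = entry.getValue();
    if (key.endsWith("_ERROR_MESSAGE") && value != null) {
        String component = key.substring(0, key.length() - "_ERROR_MESSAGE".length());
        System.err.println("Failing component: " + component + " -> " + value);
    }
}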
Alternatively, you can use tLogCatcher, which has an 'origin' column, to spot the offending component and then route to different recoverable subjobs depending on which component threw the exception. This is not a design I trust too much, actually, because tLogCatcher is job-scoped, while OnSubjobError is directly linked to a specific subjob only. But it could work in simple cases.

Is it possible to change the correlation in the middle of the workflow?

I have put an InitializeCorrelation activity at the beginning of the workflow, and then I want to correlate on different keys, so I've put in another InitializeCorrelation activity with different keys, but I am getting this error:
The execution of an InstancePersistenceCommand was interrupted because the instance key 'a765c209-5adc-4f03-9dd2-1af5e33aab3b' was not associated to an instance. This can occur because the instance or key has been cleaned up, or because the key is invalid. The key may be invalid if the message it was generated from was sent at the wrong time or contained incorrect correlation data.
So, is it possible to change the correlation after the workflow started or not?
To answer the question explicitly: yes, you can change the data the correlation is based on. You can do it not only within a sequence, but you can also use different correlation data within each branch of a Parallel activity. Correlation can be initialized using an InitializeCorrelation or a SendReply activity as described here: http://msdn.microsoft.com/en-us/library/ee358755(v=vs.100).aspx.
As the Workflow Designer is not the strongest part of Visual Studio (XPath queries are never checked, sometimes even build errors are not reflected on activities, etc.), it is not always obvious what the problem is. So, I suggest the following:
initialize a CorrelationHandle only once, with the correlation type Query correlation, for a specific piece of correlation data
initialize a new CorrelationHandle instance for different correlation data
once a CorrelationHandle is initialized, it can be used multiple times later for different Receive activities (Receive.CorrelatesOn, Receive.CorrelatesWith)
if correlation does not work, it can be because of wrong XPath queries. These are not refreshed automatically if the OperationName or parameter names are changed. It is advisable to regenerate them after renaming
it can be a good idea to turn off workflow persistence and NLB while you are testing, to let yourself concentrate on correlation-related problems
Have a look into the Instances table in the database where you store persisted instances. One of the entries will probably be in the suspended state, and there is also a column with some error description. What has caused this error? Have you made some changes to the workflow and redeployed it?

SSRS Error: "One or more parameters required to run the report have not been specified. (rsParametersNotSpecified)"

Okay, there are similar questions to this, but this is NOT a duplicate. This error seems to come up when you have parameters referencing a dataset which is shared. Deleting the report from the server and redeploying does not fix it in my case.
So I am developing on VS 2010 Professional with Business Intelligence Development Studio (BIDS), which is under source control with Team Foundation Server. I am deploying to a 2008R2 server, which I thought might be the issue. The workaround is to change the dataset references to be embedded instead, which stops this error dead in its tracks, but that is pretty poor in my opinion and I would ultimately like to have this work with shared datasets.
Things I have tried:
Ensured the naming of the dataset matches the reference, e.g. the name is ClientQuery and the shared dataset is ClientQuery.
Ensured the naming on the server matches the references in step 1.
Ensured that this is what is breaking it by removing the reference to the shared dataset; it works right away then.
Ensured that the shared dataset is not enabling some type of caching on the server.
I had a filter on a second shared dataset limiting scope; I removed that and there was still an error.
Removed all parameters and only added a single shared dataset; it gives the error right away.
Added an option to the parameter bindings to say "Allow Empty values". Did this with Nulls as well.
Recreated EVERYTHING, a whole brand new RDL file, and copied and pasted only the elements on the body of the report, but explicitly created the parameters and the datasets, and this STILL HAPPENED.
UPDATED - I have done the old "destroy the RDL and then redeploy" trick; I found that suggested a lot online. That does not work in this case. It is almost like this reference in the RDL:
<DataSet Name="ClientQuery">
  <SharedDataSet>
    <SharedDataSetReference>ClientQuery</SharedDataSetReference>
  </SharedDataSet>
  <Fields>
    <Field Name="CUSTOMER_ID">
      <DataField>CUSTOMER_ID</DataField>
      <rd:TypeName>System.String</rd:TypeName>
    </Field>
    <Field Name="CUSTOMER_NAME">
      <DataField>CUSTOMER_NAME</DataField>
      <rd:TypeName>System.String</rd:TypeName>
    </Field>
  </Fields>
</DataSet>
It appears that somehow the mention of this reference causes havoc. I would examine my bin (environment) directory under my project. (I deploy for multiple environments and set up QA, UAT, PROD, etc. under solution configurations.) Each time, the RDL is getting updated as it should and contains the updates I am describing. I think 'rebuild' is a lot of the issue at times when people see their report files not updating on a server; in my case a rebuild usually gets the updates into the RDL, versus just hitting deploy first.
While all of this is happening, the hard part is that it works seamlessly through every change in BIDS. So the error relates entirely to what the source server believes the RDL data to represent.
Any help is much appreciated. I would rate myself advanced at SSRS, but this one has me stumped as to what the error is referencing that it is not getting.
I know this is an old question, but I just ran across this and was able to resolve my issue, so I thought an updated option was warranted for others struggling with it. My issue had to do with the parameter settings in the Shared Dataset properties dialog.
Specifically, make sure that you check the "Allows null value" option where needed. This instantly resolved my issue, where a dataset would not work when pointing to a shared dataset but embedding the dataset did.
Okay, so the answer Jeroen and others proposed is half right. My issue was that my source code was under an older SVN source control and deployed to an SSRS 2008 server, and then we migrated the code base to TFS source control. The issue appears to be that the Shared Datasets were believed to have different identifiers than they actually had. The simple workaround, IN ADDITION to deleting the files, is to redeploy the shared datasets as well. In my case I went into my project settings and deployed them to a different location entirely under the report structure, to keep them in the same area: Reports/Datasets instead of just Datasets. This seems to clear up the issue in my case, so I believe this was just a perfect storm. When in doubt with SSRS, just delete everything and start from the ground up, I guess.