1-Hour Timeout on SSAS 2014 + ADOMD.Net - but no Timeouts Set to an Hour - ado.net

I've run into a mystifying XMLA timeout error when running an ADOMD.Net command from a .Net application. The Visual Basic routine iterates over a list of mining models residing on a SQL Server Analysis Services 2014 instance and performs a cross-validation test on each one. Whenever the time elapsed on the cross-validation test reaches the 60-minute mark, the XML for Analysis parser throws an error saying that the request timed out. For any routine operations taking less than one hour, I can use the same ADOMD.Net connections with the same server and application without any hitches. The culprit in such cases is often the ExternalCommandTimeout setting on the server, which defaults to 3600 seconds, i.e. one hour. In this case, however, all of the following timeout properties on the server are set to zero: CommitTimeout, ExternalCommandTimeout, ExternalConnectionTimeout, ForceCommitTimeout, IdleConnectionTimeout, IdleOrphanSessionTimeout, MaxIdleSessionTimeout and ServerTimeout.
There are only three other timeout properties available, none of which is set to one hour: MinIdleSessionTimeout (currently at 2700), DatabaseConnectionPoolConnectTimeout (now at 60 seconds) and DatabaseConnectionPoolTimeout (at 120000). The MSDN documentation lists another three timeout properties that aren't visible even with Advanced Properties checked in SQL Server Management Studio 2017:
AdminTimeout, DefaultLockTimeoutMS and DatabaseConnectionPoolGeneralTimeout. The first two default to no timeout and the third defaults to one minute. MSDN also mentions a few "forbidden" timeout properties, like SocketOptions\LingerTimeout, InitialConnectTimeout, ServerReceiveTimeout and ServerSendTimeout, all of which carry the warning, "An advanced property that you should not change, except under the guidance of Microsoft support." I see no means of setting these through the SSMS 2017 GUI, though.
Since I've literally run out of timeout settings to try, I'm stumped as to how to correct this behavior and allow my .Net app to wait on those cross-validations through ADOMD. Long ago I was able to solve a few arcane SSAS timeout issues by appending certain property settings to the connection strings, such as "Connect Timeout=0;CommitTimeout=0;Timeout=0" and so on. Nevertheless, attempting to assign an ExternalCommandTimeout value through the connection string in this manner results in the XMLA error "The ExternalCommandTimeout property was not recognized." I have not tested each and every one of the SSAS server timeouts in this manner, but this exception suggests that ADOMD.Net connection strings accept only a subset of the server's timeout properties.
Am I missing a timeout setting somewhere? Does anyone have any ideas on what else could cause this kind of esoteric error? Thanks in advance. I've put this issue on the back burner about as long as I can and really need to get it fixed now. I wonder whether ADOMD.Net has its own separate timeout settings, perhaps going by different names, but I can't find any documentation to that effect...

I tracked down the cause of this error: buried deep in the VB.Net code on the front end was a line that set the CommandTimeout property of the ADOMD.Net Command object to 3600 seconds. This overrode the connection string settings mentioned above, as well as all of the server-level settings. The problem was masked by the fact that cross-validation retrieval operations were also timing out in the Visual Studio 2017 GUI. That occurred because the VS instance was only recently installed and the Connection and Query Timeouts hadn't yet been set to 0 under Options menu > Business Intelligence Designers > Analysis Services Designers > General.
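For reference, here is a minimal C# sketch of the fix (the original code was VB.Net, but the property behaves the same way; the server name, catalog, and cross-validation call are placeholders). The key point is that AdomdCommand.CommandTimeout silently overrides both the connection string and the server-level settings, so it has to be set on the command object itself:

```csharp
// Hedged sketch, not the original code: connection details and the
// stored-procedure call below are hypothetical placeholders.
using Microsoft.AnalysisServices.AdomdClient;

class CrossValidationRunner
{
    static void Main()
    {
        using (var conn = new AdomdConnection(
            "Data Source=localhost;Catalog=MyMiningDb;Connect Timeout=0"))
        {
            conn.Open();
            using (var cmd = conn.CreateCommand())
            {
                // 0 means wait indefinitely. The buggy code set this to
                // 3600, which produced the one-hour XMLA timeout no matter
                // what the connection string or server properties said.
                cmd.CommandTimeout = 0;

                // Placeholder for the long-running cross-validation call.
                cmd.CommandText = "CALL SystemGetCrossValidationResults(...)";
                cmd.ExecuteNonQuery();
            }
        }
    }
}
```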

Related

Can you force Npgsql to reset all pooled connections or update search path for active connections after some trigger?

Here's our issue. Every day, we update our search path by replacing a schema with another.
So if today our search path would be public, alpha, tomorrow it will be public, beta, then back to public, alpha the day after that. We do this because we want our users to get data from the latest schema, while we do some work on the previous day's data.
Our problem is that whenever we switch the search path, we have to wait a while before the connections in Npgsql's pool are closed and replaced by connections that pick up the updated search path. And if a user spams our API continuously, we might end up with a connection that keeps using the stale search path for a lot longer.
Is there a way to update the search path for the whole pool using some kind of trigger? I know that we could set a lifetime for each connection and allow for something like 30 minutes for a connection until it's closed, but I was hoping there was a better solution.
Instead of "switching the search path" (more detail is needed on what exactly that means), you can simply include the search path in the connection string, meaning that you'd be alternating between two connection strings. Since each connection string gets its own pool, there's no problem. The older pool would gradually empty thanks to connection pruning.
Otherwise, a connection pool can be emptied by calling NpgsqlConnection.ClearPool (or ClearAllPools).
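A sketch of both approaches (host, database, and credentials are placeholders): baking the search path into the connection string gives each schema its own pool, so "switching" is just handing out the other string, while ClearPool forcibly drains a pool on demand.

```csharp
// Hedged sketch; connection details are hypothetical.
using Npgsql;

class SearchPathSwitcher
{
    // Approach 1: put the search path in the connection string. Npgsql
    // pools per distinct connection string, so the old pool simply goes
    // idle and is pruned over time.
    static string ConnStringFor(string schema) =>
        $"Host=localhost;Database=mydb;Username=app;Search Path=public,{schema}";

    static void Main()
    {
        // Today's active schema; tomorrow you'd hand out "beta" instead.
        using (var conn = new NpgsqlConnection(ConnStringFor("alpha")))
        {
            conn.Open();
            // ... queries here resolve against public,alpha ...
        }

        // Approach 2: empty the pool for yesterday's string immediately.
        NpgsqlConnection.ClearPool(new NpgsqlConnection(ConnStringFor("beta")));
        // Or, more bluntly, drop every pooled connection:
        NpgsqlConnection.ClearAllPools();
    }
}
```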

"QueryFailedError{Message: workflow must handle at least one decision task before it can be queried}" when trying to run a workflow

I get the aforementioned error when trying to start a workflow (and query it for init). It took me a while to get into the names of things - after reading this it is clearer what a decision task really is, though I think I am still a bit lost in the terminology. I believe that in my case the decision task takes longer than 1 sec (queryFirstDecisionTaskWaitTime). Is this wait time in any way configurable? Has anyone experienced a similar issue?
Yes, you should be able to "configure" the queryFirstDecisionTaskWaitTime by setting request timeout.
For example, in golang that's just the context timeout when sending the query requests to Cadence server.
Tested in CLI:
% date ; ~/cadence/cadence --ct 10 --do qlong wf query -w helloworld_b721724d-11f9-4b5b-a158-2bda4a230297 --query_type "__stack_trace" ; date
Thu Oct 8 14:46:47 PDT 2020
Error: Query workflow failed.
Error Details: QueryFailedError{Message: workflow must handle at least one decision task before it can be queried}
('export CADENCE_CLI_SHOW_STACKS=1' to see stack traces)
Thu Oct 8 14:46:56 PDT 2020
Note: --ct 10 means we use 10 seconds as the context timeout for this command.
The minimum, defaultQueryFirstDecisionTaskWaitTime, is one second. Currently there is no way to change this lower bound, and I don't think we need to, since you can configure the timeout per request :D
BTW, thank you for asking questions on Stack Overflow; that helps us preserve knowledge for the community better.
Oh, I figured out what it was - my tasklist was not configured properly. That was why the workflow was stuck in the DecisionTaskScheduled state.

Meaning of SessionStatistics in AEM's JMX console

After a few days, my AEM server becomes unresponsive and crashes. As per this article - https://helpx.adobe.com/experience-manager/kb/check-and-analyze-if-JCR-session-leaks-in-your-AEM-instance.html - on checking http://localhost:4502/system/console/jmx I found that there are more than 60,000 SessionStatistics objects. I would like to know what these represent. Are these active sessions, or is this the list of all the sessions ever created on the AEM server?
I would like to know what these represent. Are these active sessions, or is this the list of all the sessions ever created on the AEM server?
Yes, these are active open sessions running currently on your AEM server - created since you last started your instance. You can find the last started time from /system/console/vmstat and all the session objects will have a timestamp after the Last Started time. You'll notice the timestamp against the session name. Something similar to this.
"communities-user-admin#session-1132#25/10/2018 5:03:26 PM"
The link you've posted already indicates potential fixes for open sessions.
Another possible reason for the build-up of session objects is inefficient, long-running JCR queries (queries without indexes, very broad predicates, etc.). This can lead to increased garbage collection because of increased memory usage (if memory parameters are not specified in the start script); analysing gc.log might provide some insights. If you know for certain that queries are causing the build-up of session objects, you can use these parameters in your start script to limit the resources being used:
-Doak.queryLimitInMemory=1000 -Doak.queryLimitReads=1000 -Dupdate.limit=1000 -Doak.fastQuerySize=true
To find the location of gc.log, use lsof:
lsof -p ${JAVA PID} | grep gc.log

What can cause an inability to set QRYTIMLMT in DB2 from .NET?

We are using IBM's data provider from C# .NET 4.5 to query an iSeries DB2 database. Normally this works very well, but for some queries, DB2 reports error "SQL0666 - SQL query exceeds specified time limit or storage limit".
I have tried setting the command timeout to 0, but to no effect. I have also tried to execute, in the manner explained here, the CHGQRYA command to set the QRYTIMLMT value to *NOMAX (or some other large value), but seemingly to no effect. However, if I use the same command to set the QRYSTGLMT (storage limit), it takes effect. Thus, I know that I am using the command correctly, and that it gets interpreted and executed by the database.
So, what can cause my inability to set the QRYTIMLMT value?
Also, our "DBA" has set the limit to *NOMAX on his end, and for queries not running through the .NET provider, everything works fine.
We're using IBM's Client Tools version 6r1 with service pack SI42423.
OK, so after lots of testing, I found the problem.
We're using the DeriveParameters() method to set the parameter types correctly, and if this method is called before setting CommandTimeout, the latter has no effect(!). The solution was simply to reverse the order of these two statements.
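A minimal sketch of the fix (connection string, library, and procedure name are placeholders, and the exact DeriveParameters overload may differ by provider version): the essential point is that CommandTimeout must be assigned before the parameters are derived.

```csharp
// Hedged sketch assuming IBM's iSeries ADO.NET provider; names below are
// hypothetical. In the provider version we used, calling DeriveParameters()
// first caused later CommandTimeout assignments to be ignored.
using IBM.Data.DB2.iSeries;

class QueryRunner
{
    static void Main()
    {
        using (var conn = new iDB2Connection("DataSource=MYSYSTEM;"))
        {
            conn.Open();
            using (var cmd = conn.CreateCommand())
            {
                cmd.CommandType = System.Data.CommandType.StoredProcedure;
                cmd.CommandText = "MYLIB.MYPROC";

                cmd.CommandTimeout = 0;                    // must come first
                iDB2CommandBuilder.DeriveParameters(cmd);  // then derive

                cmd.ExecuteNonQuery();
            }
        }
    }
}
```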

xlang/s engine event log entry: Failed while creating a X service. Object of type 'Y' cannot be converted to type 'Y'

xlang/s engine event log entry: Failed while creating a X service. Object of type 'Y' cannot be converted to type 'Y'.
This event log entry appears to be the same as what is discussed here:
Microsoft.XLANGs.Core.ServiceCreationException : Failed while creating a ABC service
I've investigated the 2 solutions offered in this post, but neither fixed my problem.
I'm running BizTalk 2010 and am seeing the issue with a uniform sequential convoy. Each instance of the orchestration is initially activated as expected. All the shapes before the second receive shape execute without issue. The problem occurs when the orchestration instance receives its second message. Execution does not proceed beyond the receive shape that corresponds to this second message.
Using the Group Hub page, I can see that the second message is associated with the correct service instance. This service instance is suspended and the error message shown above appears in the event log.
Occasionally (about 1 out of every 5 times), the problem mentioned above does NOT occur; that is, subsequent messages are processed by the orchestration. I'm feeding in the same test files each time. Even more interesting: the problem NEVER occurs if I set a break point (in Orchestration Debugger) on a Listen shape just before the second receive shape.
The fact that I don't see the problem when using the debugger makes me wonder if this is a timing issue. Unfortunately, it doesn't seem like I would have much control over the timing.
Does anyone have any idea about how to prevent this problem from occurring?
Thanks
Is there only a single BizTalk host server involved? It wouldn't surprise me if the issue was related to difficulty loading a required assembly from the GAC. If there were multiple BizTalk servers involved, it could be that one of them is the culprit (or only one of them isn't). Of course, it may not be that easy.
An alternative is the second answer on the other question to which you linked, stating to check that a required schema is not deployed more than once. I have had this problem before, and about the only way to figure out that this is what's going on is to look in the BizTalk Admin Console under BizTalk Group > Applications > <AllArtifacts> > Schemas and sort by the Target Namespace to see if there are any two (or more) rows with the same combination of Target Namespace and Root Name.
The issue could also be caused by a schema mismatch, where perhaps an older/different version of a schema is deployed than expected, and a field that is only sometimes there (hence why it sometimes works) causes a mismatch.
These are, of course, just theories, without the ability to look into your environment and see the actual BizTalk artifacts.
I filed this issue with Microsoft. It turns out that "the behavior is actually an existing design limitation with the way the XLANG compiler relies on type wrappers." The issue resulted from a very specific scenario: we had an orchestration with one message variable directly referencing a schema and another message variable referencing a multi-part message type based on the same schema. The orchestration, schema, and multi-part message type were each defined in different projects.
Microsoft suggested that we modify one of the variables so that both referenced the schema or both referenced the MMT. Unfortunately, keeping the variables as they were was critical for us. We discovered (and Microsoft confirmed) that moving the definition of the MMT into the same project as the orchestration resolved the issue as well.