What can cause an inability to set QRYTIMLMT in DB2 from .NET?

We are using IBM's data provider from C# .NET 4.5 to query an iSeries DB2 database. Normally this works very well, but for some queries, DB2 reports error "SQL0666 - SQL query exceeds specified time limit or storage limit".
I have tried setting the command timeout to 0, but to no effect. I have also tried to execute, in the manner explained here, the CHGQRYA command to set the QRYTIMLMT value to *NOMAX (or some other large value), but seemingly with no effect. However, if I use the same command to set the QRYSTGLMT (storage limit), it takes effect. Thus, I know that I am using the command correctly and that it gets interpreted and executed by the database.
So, what can cause my inability to set the QRYTIMLMT value?
Also, our "DBA" has set the limit to *NOMAX on his end, and for queries not running through the .NET provider, everything works fine.
We're using IBM's client tools, version V6R1 with service pack SI42423.

OK, so after lots of testing, I found the problem.
We're using the DeriveParameters() method to set the parameter types correctly, and if this method is called before setting CommandTimeout, the latter has no effect(!). The solution was to reverse the order of the two statements: set CommandTimeout first, then call DeriveParameters().
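For illustration, a minimal sketch of the working order, assuming the IBM.Data.DB2.iSeries provider (the iDB2* class names, the procedure name, and the parameter value are placeholders; adapt them to your actual code):

using System.Data;
using IBM.Data.DB2.iSeries;

using (var conn = new iDB2Connection(connectionString))
using (var cmd = new iDB2Command("MYLIB.MYPROC", conn))
{
    conn.Open();
    cmd.CommandType = CommandType.StoredProcedure;

    // Set the timeout first; 0 means "no limit" at the ADO.NET level.
    cmd.CommandTimeout = 0;

    // Derive parameters AFTER setting CommandTimeout; in our case the
    // reverse order caused the timeout value to be silently ignored.
    iDB2CommandBuilder.DeriveParameters(cmd);

    cmd.Parameters[0].Value = someValue;
    cmd.ExecuteNonQuery();
}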

Related

Postgres AUTONOMOUS_TRANSACTION equivalent on the same DB

I'm currently working on a Spring Batch application that should insert some logs in case a certain type of error happens. The problem is that if the batch job fails, it automatically rolls back everything it has done, which is fine, but it also rolls back the error logs.
I need to achieve something similar to Oracle's AUTONOMOUS_TRANSACTION while using PostgreSQL (14).
I've looked at DBLINK and it seems to be the closest alternative, but I have found some problems:
I need to avoid hard-coding the connection string because the database host/port/name changes between environments. Is that possible? I need to persist the data in the same database, so technically I don't need to connect to any other database, just use the calling connection.
Is it possible to create a function/procedure that takes care of all of this, so that I only have to call it from the Java side? Maybe that way I could pass the connection data as a parameter, in case it is not possible to avoid it.
In a best case scenario I would be able to do something like:
dblink_exec(text sql);
That is, without connection arguments it would act on the same database in which it is being executed.
The problem is that I need this to be done without specifying any connection data. This will live inside a function on the executing database, in the same schema. That function will be promoted from one environment to the next, and the code needs to be identical, so any host/user/password must be avoided since they change per environment; and since it runs in the same database and schema, they can technically be inferred.
Thanks in advance!
At the moment I haven't tried anything; I'm trying to gather some information first.

db2 update dbm cfg immediate

I am looking at the following article:
https://www.ibm.com/support/knowledgecenter/en/SSEPGG_10.1.0/com.ibm.db2.luw.admin.cmd.doc/doc/r0001988.html
I would like to ask about the IMMEDIATE and DEFERRED parts. Sorry, I am still confused and do not really understand them.
In the IMMEDIATE part, it explains that "IMMEDIATE is the default, but it requires an instance attachment to be effective." What does "requires an instance attachment to be effective" mean? I thought it should take effect right away after I run the command?
For example:
db2 update dbm cfg using diaglevel 4 immediate
Does this take effect directly on my db2diag log files?
Take care to read the Db2 Knowledge Center version that matches your Db2-version. Maybe you are using a more recent version of Db2, like V10.5 or V11.1.
For the DIAGLEVEL parameter, you can change it on the fly, i.e. without needing to bounce the Db2-instance. The new value is effective immediately and you can see this in the db2diag log (which will grow quickly in size because of all the extra messages that will appear).
"Instance attachment" means that you can run the db2 attach ... command before running the db2 update dbm cfg ... command. The details are here.
However, if you are running as the Db2-instance owner and you are on the Db2-server directly (e.g. via ssh), then the instance-attachment is not necessary in this specific case. The instance-attachment is necessary when the instance is remote, or is not the current instance, or you are not running as the instance-owner, etc.
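For example (a sketch only; MYINST is a placeholder for your instance name), an explicit attachment looks like this:

db2 attach to MYINST
db2 update dbm cfg using DIAGLEVEL 4 immediate
db2 get dbm cfg show detail
db2 detach

The get dbm cfg show detail output lets you compare the current value with the delayed value, so you can confirm that DIAGLEVEL really changed online.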

How can I set sql_mode to a list of values

I am trying to use the second-generation Cloud SQL and would like to change the SQL mode. In the UI, I can only set sql_mode to one value from a drop-down list, but not to multiple values (e.g. "STRICT_MODE_TRANS,ALLOW_INVALID_DATES"). What would be the best way to accomplish that?
Cheers,
Andres
I know this post is 1 year old, but I stumbled upon it now when I had a problem with sql_mode while migrating a database from MySQL 5.5 to Google Cloud SQL on 5.7. Though I know that we could SET GLOBAL sql_mode='' to any valid value we want, it took me hours before I gave up and concluded that we could not set multiple values on Google Cloud SQL.
Google only allows one value to be set on the sql_mode flag for now. If your problem is about removing ONLY_FULL_GROUP_BY (the OP does not mention why he wants to customize the values) without removing the rest of the sql_mode values, using the value TRADITIONAL in the Console, or gcloud sql instances patch <instance_name> --database-flags sql_mode=TRADITIONAL, will remove that value but keep the rest of the string.
From MySQL 5.7 Documentation:
Before MySQL 5.7.4, and in MySQL 5.7.8 and later, TRADITIONAL is equivalent to STRICT_TRANS_TABLES, STRICT_ALL_TABLES, NO_ZERO_IN_DATE, NO_ZERO_DATE, ERROR_FOR_DIVISION_BY_ZERO, NO_AUTO_CREATE_USER, and NO_ENGINE_SUBSTITUTION.
I would have only added this as a comment above, but I can't add one yet due to lacking points.
This is not supported right now by Google Cloud SQL. You can only set one value.
Another potential solution is to set the sql_mode to HIGH_NOT_PRECEDENCE
Once set in Cloud SQL the string for sql_mode will become:
HIGH_NOT_PRECEDENCE
All other flags are removed!
I was coming from an older project so this solution might not work for all, but seems to be working well for us, plus it's something that can be tried quickly.
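Whichever single flag value you end up with, you can confirm what it actually expanded to on the instance from any MySQL client (standard MySQL syntax, nothing Cloud SQL specific):

-- Server-wide value, as set via the Cloud SQL flag
SELECT @@GLOBAL.sql_mode;

-- Value in effect for the current session
SELECT @@SESSION.sql_mode;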

Solr AutoCommit not working with Postgresql

I am using Solr 4.10.0 with PostgreSQL 9.3. I am able to configure my Solr core properly using data-config.xml and search across the different database tables. However, I am not able to set up the autoCommit feature. Whenever any row gets added to a table, I expect it to start appearing in the results after the maxTime (1 minute), but that doesn't happen. I have to explicitly rebuild the index by doing a full data-import, and then everything works fine.
My solrconfig.xml is:
<updateHandler class="solr.DirectUpdateHandler2">
<autoCommit>
<maxTime>60000</maxTime>
<openSearcher>true</openSearcher>
</autoCommit>
<autoSoftCommit>
<maxTime>${solr.autoSoftCommit.maxTime:-1}</maxTime>
</autoSoftCommit>
</updateHandler>
Is there something extra that needs to be done to use autoCommit here? I checked my log files as well, but there is no error / exception. What am I missing?
Please see the link below:
SOLR: What does an autoSoftCommit maxtime of -1 mean?
I think this is what is happening in your case.
First off, you can see the expression ${solr.autoSoftCommit.maxTime:-1} within the tag. This allows you to make use of Solr's variable substitution; that feature is described in detail here in the reference. If that variable has not been substituted by any of those means, -1 is taken as the value for that configuration.
Setting that maxTime to -1 effectively turns autoSoftCommit off.
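As an illustration of the two ways to give that placeholder a real value (60000 ms is just an example figure): either hard-code the interval in solrconfig.xml, or keep the placeholder and pass the value as a JVM system property (e.g. -Dsolr.autoSoftCommit.maxTime=60000) when starting Solr.
<autoSoftCommit>
<!-- make newly added documents visible at most every 60 seconds -->
<maxTime>60000</maxTime>
</autoSoftCommit>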

Issue with Entity Framework 4.2 Code First taking a long time to add rows to a database

I am currently using Entity Framework 4.2 with Code First. I currently have a Windows 2008 application server and a database server running on Amazon EC2. The application server has a Windows Service installed that runs once per day. The service executes the following code:
// returns between 2000-4000 records
var users = userRepository.GetSomeUsers();
// do some work
foreach (var user in users)
{
    var userProcessed = new UserProcessed { User = user };
    userProcessedRepository.Add(userProcessed);
}
// Calls SaveChanges() on DbContext
unitOfWork.Commit();
This code takes a few minutes to run. It also maxes out the CPU on the application server. I have tried the following measures:
Removed the unitOfWork.Commit() call to see if the problem was network-related when the application server talks to the database. This did not change the outcome.
Changed my application server from a medium instance to a high-CPU instance on Amazon to see if the problem was resource-related. This stopped the server from maxing out the CPU, and the execution time improved slightly. However, the execution time was still a few minutes.
As a test, I modified the above code to run three times to see the execution time for the second and third loops using the same DbContext. Every consecutive loop took longer to run than the previous one, but that could be related to using the same DbContext.
Am I missing something? Is it really possible that something as simple as this takes minutes to run? Even if I don't commit to the database after each loop? Is there a way to speed this up?
Entity Framework (as it stands) isn't really well suited to this kind of bulk operation. Are you able to use one of the bulk insert methods with EC2? Otherwise, you might find that hand-coding the T-SQL INSERT statements is significantly faster. If performance is important then that probably outweighs the benefits of using EF.
My guess is that your ObjectContext is accumulating a lot of entity instances. SaveChanges seems to have a phase whose time is linear in the number of entities loaded, which is likely why it takes longer and longer.
A way to resolve this is to use multiple, smaller ObjectContexts so that you get rid of old entity instances.
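A rough sketch of that approach, assuming a DbContext-based setup like the one in the question (the context type, entity properties, and batch size below are illustrative, not from the original code):

const int batchSize = 500;

var users = userRepository.GetSomeUsers();   // 2000-4000 records

var context = new MyDbContext();
try
{
    // Optional: skip automatic change detection on every Add.
    context.Configuration.AutoDetectChangesEnabled = false;

    int pending = 0;
    foreach (var user in users)
    {
        // Set the foreign key instead of the navigation property so the
        // User entity does not have to be attached to each new context.
        context.Set<UserProcessed>().Add(new UserProcessed { UserId = user.Id });

        if (++pending == batchSize)
        {
            context.SaveChanges();        // flush this batch
            context.Dispose();            // drop the tracked entities
            context = new MyDbContext();
            context.Configuration.AutoDetectChangesEnabled = false;
            pending = 0;
        }
    }

    context.SaveChanges();                // flush the remainder
}
finally
{
    context.Dispose();
}

Dropping the context after each batch keeps the change tracker small, which is what stops SaveChanges from slowing down as more entities accumulate.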