Default value of Statement QueryTimeout on DB2 V9.7.7? - db2

What is the default value of the Statement QueryTimeout in DB2 V9.7.7?
I googled a lot but could not find this specific information. I found that the value is 0 in DB2 V9.7.4, but I am having a hard time finding it for version 9.7.7.
Please help me out.

There is no default query timeout within the DB2 Server or DB2 Client – DB2 will allow a query to run until completion.
Applications can (optionally) set the QueryTimeout attribute to control this. If you are using .NET, you might also check to see if your application sets the CommandTimeout property.

Related

Does Orion Context Broker allow setting a read preference for a MongoDB replica set?

I'm reading the documentation of Orion Context Broker, and among the command line arguments I don't see any argument to set the read preference for my MongoDB replica set. In my application I need the read preference set to nearest to avoid bottlenecks during periods of high query traffic. Does anyone know if this is possible?
Current Orion version (3.3.1) doesn't allow you to set the read preference. There is an open issue in the Orion repository about implementing the -mongoUri CLI parameter to allow setting the MongoDB connection URI (so you could, for instance, add &readPreference=secondary to it).
Alternatively, you could hack the Orion source code to build a specific version with the readPreference value you want. Have a look at the composeMongoUri() function. It seems it is just a matter of adding uri += optionPrefix + "readPreference=<whatever you want>"; at the end.
It is not an elegant solution (it is not flexible, and you would need to rebuild Orion whenever you want to change the setting), but it could be a valid workaround until -mongoUri gets implemented.
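To make the suggested hack concrete, here is a small sketch (in Go rather than Orion's C++, purely to illustrate the string manipulation that composeMongoUri() would need; the function name is mine, not Orion's):

```go
package main

import (
	"fmt"
	"strings"
)

// appendReadPreference tacks a readPreference option onto an existing
// MongoDB connection URI, choosing the separator depending on whether
// the URI already carries query options.
func appendReadPreference(uri, pref string) string {
	sep := "?"
	if strings.Contains(uri, "?") {
		sep = "&"
	}
	return uri + sep + "readPreference=" + pref
}

func main() {
	fmt.Println(appendReadPreference("mongodb://localhost/orion", "nearest"))
	// mongodb://localhost/orion?readPreference=nearest
}
```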

Go database/sql - Issue Commands on Reconnect

I have a small application written in Go that connects to a PostgreSQL database on another server, utilizing database/sql and lib/pq. When I start the application, it goes through and establishes that all the database tables and indexes exist. As part of this process, it issues a SET search_path TO preferredschema,public command. Then, for the remainder of the database access, I do not have to specify the schema.
From what I've determined from debugging it, when database/sql reconnects (no network is perfect), the application begins failing because the search path isn't set. Is there a way to specify commands that should be executed when it reconnects? I've searched for an event that might be able to be leveraged, but have come up empty so far.
Thanks!
From the fine manual:
Connection String Parameters
[...]
In addition to the parameters listed above, any run-time parameter that can be set at backend start time can be set in the connection string. For more information, see http://www.postgresql.org/docs/current/static/runtime-config.html.
Then if we go over to the PostgreSQL documentation, you'll see various ways of setting connection parameters such as config files, SET commands, command line switches, ...
While the desired behavior isn't exactly spelled out, it is suggested that you can put anything you'd SET right into the connection string:
connStr := "dbname=... user=... search_path=preferredschema,public"
// -----------------------------^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
and since that's all there is for configuring the connection, it should be used for every connection (including reconnects).
The Connection String Parameters section of the pq documentation also tells you how to quote and escape things if whatever preferredschema really is needs it or if you have to grab a value at runtime and add it to the connection string.
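As a sketch of putting this together with lib/pq (the dbname and user values are placeholders, and the quoting helper follows the backslash/single-quote escaping rules described in pq's Connection String Parameters section):

```go
package main

import (
	"fmt"
	"strings"
)

// quote escapes a value for a lib/pq key=value connection string:
// wrap it in single quotes, escaping backslashes and single quotes.
func quote(v string) string {
	v = strings.ReplaceAll(v, `\`, `\\`)
	v = strings.ReplaceAll(v, `'`, `\'`)
	return "'" + v + "'"
}

func main() {
	schema := "preferredschema,public"
	connStr := fmt.Sprintf("dbname=mydb user=myuser search_path=%s", quote(schema))
	fmt.Println(connStr)
	// dbname=mydb user=myuser search_path='preferredschema,public'

	// sql.Open("postgres", connStr) would then apply search_path on
	// every connection the pool makes, including reconnects.
}
```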

Setting up MongoDB environment requirements for Parse Server

I have my instance running and am able to connect remotely; however, I'm stuck on where to set this parameter to false, since the documentation states that its default is true:
failIndexKeyTooLong
Setting failIndexKeyTooLong is a three-step process:
1. Go to the command console in the Tools menu item for the admin database of your database instance. This command will only work on the admin database.
2. Once there, pick any command from the list; it will give you a short JSON text for that command.
3. Erase the command they provide (I chose 'ping') and enter the following JSON:
{
"setParameter" : 1,
"failIndexKeyTooLong" : false
}
Note for MongoLab users: this will NOT work on a free plan; it only works with paid plans. On the free plan you will not even see the admin database. HOWEVER, I contacted MongoLab and here is what they suggested:
Hello,
First of all, welcome to MongoLab. We'd be happy to help.
The failIndexKeyTooLong=false option is only necessary when your data
include indexed values that exceed the maximum key value length of
1024 bytes. This only occurs when Parse auto-indexes certain
collections, which can actually lead to incorrect query results. Parse
has updated their migration guide to include a bit more information
about this, here:
https://parse.com/docs/server/guide#database-why-do-i-need-to-set-failindexkeytoolong-false-
Chances are high that your migration will succeed without this
parameter being set. Can you please give that a try? If for any reason
it does fail, please let us know and we can help you on potential next
steps.
Our Dedicated and Shared Cluster plans
(https://mongolab.com/plans/pricing/) do provide the ability to toggle
this option, but because our free Sandbox plans are running on shared
server processes, with other Sandbox users, this parameter is not
configurable.
When launching your MongoDB server, you can set this parameter to false:
mongod --setParameter failIndexKeyTooLong=false
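If your plan gives you access to the admin database from a mongo shell, the same parameter can also be flipped at runtime; the console steps above boil down to the equivalent of:

```
use admin
db.adminCommand({ "setParameter": 1, "failIndexKeyTooLong": false })
```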
I have written an article that will help you set up Parse Server and all its dependencies on your own server:
https://medium.com/@jcminarro/run-parse-server-on-your-own-server-using-digitalocean-b2a7d66e1205

How do you enable the logging of all queries to postgreSQL AWS RDS instance?

I want to monitor all queries to my PostgreSQL instance. Following these steps, I created a custom DB parameter group, set log_statement to all and log_min_duration_statement to 1, applied the parameter group to my instance, and rebooted it. Then I fired a POST request at the application, but no record of the query was to be found in the Recent Events & Logs tab of my instance. Doing a SELECT * FROM table query in psql, however, shows that the resource was created and the POST request worked. What am I missing in order to see the logs?
Setting log_min_duration_statement to 1 tells Postgres to only log queries that take longer than 1 ms. If you set it to 0 instead, all queries will be logged (the default, no logging, is -1).
You followed the right steps; all that is left is to make sure the parameter group is properly applied to your Postgres instance. Look at the Parameter Group in the Configuration Details tab of your instance, and make sure it shows the right group name followed by "in-sync".
A reboot is usually required when changing parameter groups on an instance, this may be what is (was?) missing in your case.
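Once the instance shows the group as "in-sync" and has been rebooted, you can double-check from psql that the settings actually took effect:

```
SHOW log_statement;               -- 'all' if the group applied
SHOW log_min_duration_statement;  -- 0 means every statement's duration is logged
```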

How to determine if current User is SQL Server Analysis Server Admin

I'd like to scope some cells to particular values if the current user is an SSAS admin.
I'm not sure where I'd even begin to determine that kind of introspection. Any ideas would be welcome please.
Note that I'm using UDM and not Tabular models
As a workaround, you can append ";EffectiveUserName=" to the connection string, then try to connect and check for an exception: this connection can only be established with administrative rights.
See here for how to use the SCOPE function in an IF clause. Inside the clause, the USERNAME() function can be used to return the current user (see here). Hope this helps.
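As an illustrative, untested MDX-script sketch of that approach (the measure and the admin account name below are placeholders, not anything from the question):

```
// In the cube's MDX script. [Measures].[Amount] and the account are illustrative.
SCOPE ( [Measures].[Amount] );
    THIS = IIF (
        USERNAME() = "MYDOMAIN\ssasadmin",  // the account to treat as admin
        [Measures].[Amount],                // admins see the real value
        NULL                                // everyone else gets no value
    );
END SCOPE;
```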