DB2 Registry and Configuration Settings

I am setting up the DB2 10.1 (FP2) environment on AIX 7.1 for IBM Worklight 5.0.5.
Are the following registry settings acceptable?
DB2_SKIPINSERTED=YES
DB2_OPTPROFILE=YES
DB2_INLIST_TO_NLJN=YES
DB2_MINIMIZE_LISTPREFETCH=YES
DB2_EVALUNCOMMITTED=YES
DB2_ANTIJOIN=EXTEND
DB2_SKIPDELETED=YES
I could not find recommendations for DB2 settings, so I am using WCS settings as a starting point.
Are there any recommendations for dbm and db configuration settings for Worklight?
Thanks
Sathyaram

As to whether these are set correctly, the answer, as usual, is ...it depends. ;-)
These enhance concurrency, since a connection is less affected by uncommitted rows of another connection (with certain isolation levels). Whether this is desirable depends on the type of work being done. See http://publib.boulder.ibm.com/infocenter/db2luw/v10r1/topic/com.ibm.db2.luw.admin.perf.doc/doc/c0012389.html
DB2_SKIPINSERTED=YES
DB2_SKIPDELETED=YES
DB2_EVALUNCOMMITTED=YES
Another important setting for enhanced concurrency is the DB CFG parameter CUR_COMMIT - which is ON by default now.
This is obsolete now and refers to the use of Optimization Profiles (sort of like Hints for DB2). Search the Info Center on this topic.
DB2_OPTPROFILE=YES
These are among the registry variables that can change the behavior of optimizer decisions (usually by restricting the optimizer from making its own decisions). Generally, they should only be set when recommended by a particular application (such as Worklight or SAP, etc.) or by IBM Support as the result of a performance engagement. Note that the effects of these variables should always be rechecked when moving to a different DB2 release (e.g., v9 to v10), as there are always improvements to the optimizer, and thus the significance of these variables can change.
DB2_INLIST_TO_NLJN=YES
DB2_MINIMIZE_LISTPREFETCH=YES
DB2_ANTIJOIN=EXTEND
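If you do keep any of these, a rough sketch of how they would be applied and verified (WLDB is a placeholder database name): the registry variables are set per instance with db2set and most of them only take effect after the instance is recycled, while CUR_COMMIT is a database configuration parameter:
db2set DB2_SKIPINSERTED=YES                      # repeat for each registry variable you decide to keep
db2set -all                                      # list what is currently set
db2stop
db2start
db2 get db cfg for WLDB | grep -i cur_commit     # confirm CUR_COMMIT is already ON
db2 update db cfg for WLDB using CUR_COMMIT ON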

Related

Postgres auto tuning

As we all know, Postgres performance depends heavily on config params. E.g. if I have an SSD drive or more RAM, I need to tell Postgres that by changing the relevant config param.
I wonder if there is any tool (for Linux) which can suggest the best Postgres configuration for the current hardware?
I'm aware of websites (e.g. pgtune) where I can enter the server spec and they suggest the best config.
However, every hardware setup is different (e.g. I might have a better RAID controller, or some processes that consume more RAM, etc.). My wish would be for Postgres to do self-tuning, analysing query execution times, available resources, etc.
I understand there is no such mechanism, so maybe there is some tool/script I can run which can do this job for me (checking e.g. sequential/random disk reads, available memory, etc.) and tell me what to change in the config.
There are parameters that you can tweak to get better performance from PostgreSQL.
This article is a good read about that.
There are a few scripts that can do that. One that is mentioned in the Postgres wiki is this one.
To get a better idea of what further tuning your database needs, you need to log its requests and performance; after analysing those logs you can tune more params. For this there is the pgBadger log analyzer.
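pgBadger can only report on what actually gets logged, so the first step is usually to make the logging verbose enough. As a rough, hedged sketch (the values are illustrative, not tuned recommendations), the logging-related settings in postgresql.conf typically look something like:
log_min_duration_statement = 0      # log every statement with its duration (expensive; raise the threshold in production)
log_line_prefix = '%t [%p]: user=%u,db=%d,app=%a,client=%h '
log_checkpoints = on
log_connections = on
log_disconnections = on
log_lock_waits = on
log_temp_files = 0
log_autovacuum_min_duration = 0
Then run pgbadger against the resulting log files and tune based on what it reports.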
After using the database in production, you get a better idea of what your requirements are and how to approach them, rather than making changes based only on the OS or hardware configuration.

What is new in MongoDB server monitoring and discovery? What were the limitations of the old one? How were they overcome?

I am trying to update my node-mongodb-native, but it is asking me to add the useUnifiedTopology option. I just want to know what the issues were with the old monitoring implementation and how they were solved.
The useUnifiedTopology option was introduced in the MongoDB 3.2.1 Node.js driver and will become the default behaviour in the 4.x driver release later this year. This represents a change to the Node driver's Server Discovery and Monitoring (SDAM) implementation rather than a change to the MongoDB SDAM specification.
The 3.2.1 release notes include an overview of the changes:
In this release we are very excited to announce the immediate availability of a complete rewrite of the driver's "topology" layer. This is the core brains of the driver responsible for things like server selection, server discovery and monitoring. This work combines the three existing topology concepts (Mongos, ReplSet, and Server) into a single type Topology. The new Topology type uses the same machinery to represent all three types, greatly improving our ability to maintain the code, and reducing the chance for bug duplication.
The Topology class no longer uses a callback store, instead relying on a server selection loop for operation execution. This means failed operations will fail faster, with more accurate stack traces and specifics about the failure. It generally makes it much easier to reason about what the driver is doing each time an operation is executed.
There are also some features in newer 3.x Node driver releases that require the useUnifiedTopology option, such as the SRV Polling for Sharded Clusters introduced in v3.3.0.
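In practical terms, opting in just means passing the flag when constructing the client. A minimal sketch (the connection string is a placeholder):
const { MongoClient } = require('mongodb');

// Opt in to the new unified topology; this also silences the 3.x deprecation warning
const client = new MongoClient('mongodb://localhost:27017/test', {
  useNewUrlParser: true,
  useUnifiedTopology: true
});

client.connect()
  .then(() => console.log('connected'))
  .catch(err => console.error(err));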

Inserting into table Postgres 9.4 vs 9.1

My team is thinking of switching from 9.1 to 9.4, and as part of the evaluation we would like to measure how much of an improvement there is for INSERT INTO TABLE ... where there are 3-4 columns of fixed-length types like INT and DOUBLE PRECISION. We are using unbatched INSERTs, and the tables are logged and not temporary. fsync is set to on.
Q0: Are there any grounds to think that 9.4 would be faster than 9.1 on this particular statement?
For example based on improved WAL performance:
https://momjian.us/main/writings/pgsql/features.pdf
Clearly the best answer would be to run an experiment against our own data, but let's allow some speculation.
Q1: Are there performance evaluations that you are aware of?
Q2: How much of an INSERT is taken up by WAL?
Settings on the server (copied verbatim from the 9.1 config file):
#fsync = off
#synchronous_commit = on
#wal_sync_method = fsync
#full_page_writes = on
#wal_buffers = -1
#wal_writer_delay = 200ms
shared_buffers = 15GB
temp_buffers = 1024MB
work_mem = 1024MB
The link you provided is based on information found in Section E.2.3.1.2. General Performance - a great read, by the way. Based on your suggested test, I would not expect any real performance difference, because you will not take advantage of parallel or partial writes (regarding WAL files). That said, you might run across this in the future. Also, 9.4 (well, post-9.1 really) provides many useful tools and performance enhancements that, in my opinion, justify a switch from 9.1 to 9.4. For instance, since 9.2, JSON is a datatype, index-only scans are possible, and in-memory sorting has been improved by as much as 25%. 9.3 saw the introduction of materialized views (with concurrent refresh in 9.4) and updatable "simple" views (the definition of "simple" expanded slightly in 9.4). In 9.4, aggregates are enhanced and ALTER SYSTEM is introduced: the ability to change config settings with a SQL command, with the changes written to postgresql.auto.conf, which is read last and therefore overrides values from postgresql.conf.
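For example, a hedged sketch of ALTER SYSTEM in 9.4 (work_mem is just an illustrative parameter; shared_buffers and other restart-only settings still need a restart):
ALTER SYSTEM SET work_mem = '256MB';   -- written to postgresql.auto.conf
SELECT pg_reload_conf();               -- re-read the configuration without a restart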
Of note, default logging has also changed. For instance, when creating a table, you won't receive messages about implicit index and sequence creation (set the log level to DEBUG1 to get them back - it drove me crazy when I switched from 9.1 to 9.3, especially during lectures).
In regards to question 1, I'd run a TPC benchmark test (TPC-C and TPC-VMS are the only ones that aren't free). For question 2, that really depends on your WAL settings, but with what I see from your config file, it shouldn't matter in regards to version performance. I'd also run pgtune on your system (link below) to ensure your config file is as optimal as possible before testing.
As with the other commenters, just build it out and see what happens. You might not get much, if any, difference with straight inserts, so I'd try large multi-table joins, huge sorts, and lots of transaction simulation (e.g., lots of inserts, updates, and deletes - just use plpgsql for simplicity) - the TPC queries will also do a pretty good job of performance testing.
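For a quick version-to-version comparison before building anything custom, pgbench (shipped with PostgreSQL) is an easy starting point; this is only a sketch, and testdb and my_inserts.sql are placeholders:
pgbench -i -s 100 testdb                           # initialize the standard pgbench tables at scale factor 100
pgbench -c 16 -j 4 -T 600 testdb                   # 16 client sessions, 4 worker threads, 10-minute run
pgbench -n -f my_inserts.sql -c 16 -T 600 testdb   # or drive your own INSERT script with -f (-n skips vacuuming)
Run the same workload against the 9.1 and 9.4 clusters and compare the reported transactions per second.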
Links:
To find the "What's new" PostgreSQL wiki pages, add the version number to the end of the following URL.
You can find a GUI version of pgtune at pgtune.leopard.in.ua; the standalone download is hit or miss from pgfoundry because it always seems to be down.

What is the default timeout for MongoDB operations (CRUD and aggregate)?

I didn't find any information about the default timeout for executing an operation in MongoDB. Some of my aggregate commands take minutes (very large reports). It is OK for me to wait that long, but I'm afraid of getting an error.
I know that I can set it, but a lot of my software's users use their own servers - of course with default settings.
Until this feature is implemented, this will essentially be a driver/client level setting. The query will run until completion on the server, though eventually it might time out a cursor - see the cursorinfo command for more there.
To figure out what your settings are, you will need to consult the relevant driver documentation. There may be multiple settings that apply depending on what you are looking for, like the various options in the Java driver, for example.
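As a hedged illustration of what a driver/client level setting looks like, most drivers accept timeout-related options on the connection string (the host and database names here are placeholders; socketTimeoutMS=0 disables the socket timeout entirely):
mongodb://reports-host:27017/reports?connectTimeoutMS=30000&socketTimeoutMS=0
Whether these or separate driver-level options apply to your long-running aggregations depends on the specific driver, so check its documentation as noted above.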

Is it possible to default all MongoDB writes to safe? What is the performance hit from doing this?

For MongoDB 2.2.2, is it possible to default all writes to safe, or do you have to include the right flags for each write operation individually?
If you use safe writes, what is the performance hit?
We're using MongoMapper on Rails.
If you are using the latest version of the 10gen official drivers, then the default actually is safe, not fire-and-forget (which used to be the default).
You can read this blog post by the 10gen co-founder and CTO, which explains some of the history and announces that all MongoDB drivers as of late November use "safe" mode by default rather than "fire-and-forget".
MongoMapper is built on top of the 10gen-supported Ruby driver, and they have also updated their code to be consistent with the new defaults. You can see the check-in and comments here for the master branch. Since I'm not certain what their release schedule is, I would recommend you ask on the MongoMapper mailing list.
Even prior to this change, you could set the "safe" value at the connection level in MongoMapper, which is as good as global. Starting with 0.11, you can do it in the mongo.yml file. You can see how in the release notes.
The bottom line is that you don't have to specify safe mode for each write; you can still specify higher or lower durability than the default for each individual write operation if you wish, but once you switch to the latest versions of everything, you will be using "safe writes" globally by default.
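As a rough, hedged illustration of a connection-level default (host and database are placeholders), the write concern can also be expressed directly in the connection string, which drivers of that era generally understand; whether MongoMapper passes it through depends on your version, so treat it as illustrative:
mongodb://apphost:27017/mydb?w=1   (acknowledged, "safe" writes by default)
mongodb://apphost:27017/mydb?w=0   (the old fire-and-forget behaviour)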
I do not use MongoMapper, so I can only answer a little.
In terms of the database, it depends. A safe write basically (basically being the key word) waits for the database to do the work it would normally do anyway after handing you the default "I am done" response of a fire-and-forget write.
There is more work depending on how safe you want the write to be. A good example is a write to a single node versus one to many nodes. If you write to that single node, you will get a quicker response from the database than if you wish to replicate the command (safely) to other nodes in the network.
Any level of safe write does, of course, cause a performance hit in the number of writes you can send to the server, since more work is required before a response is given, which means fewer writes can be thrown at the database. The key thing is getting the balance right for your application, between speed and durability of your data.
Many drivers now (including MongoDB PHP 1.3, using a write concern of 1: http://php.net/manual/en/mongo.writeconcerns.php) are starting to use safe writes by default, and fire-and-forget queries are starting to be abolished as the default.
Looking at the MongoMapper documentation: http://mongomapper.com/documentation/plugins/safe.html it seems you must still add the flags everywhere.