(MON_OBJ_METRICS) in db2

I am very new to db2. My requirement is to remove the tracking system on a user table, that is, tracking of all kinds of DML operations on a particular table. From what I read on Google, my understanding is that this can be done with the (MON_OBJ_METRICS) parameter at the database level. Is my understanding correct?
How can I disable this parameter? If I disable it at the database level, are all the tables under that particular database excluded from tracking?
On SQL Server we can do this with the set_change_tracking parameter. I am looking for the same functionality in db2luw.
Kindly help me out, and please excuse me, as I am a beginner in db2.

MON_OBJ_METRICS (and DFT_MON_TABLE, from your other post) are used for performance monitoring only. They keep track of the number of rows read, inserted, deleted and updated (among other things), but do not track actual data changes.
Change tracking in MS SQL Server is used for data capture to feed replication processes, not for monitoring.
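If the goal is simply to stop collecting those monitoring metrics, the parameter can be turned off for the whole database from the DB2 command line. A minimal sketch, assuming a database named mydb (the parameter accepts NONE, BASE and EXTENDED):
-- Stop collecting object-level monitoring metrics for every table in mydb;
-- this has no effect on the data or on DML processing itself:
UPDATE DB CFG FOR mydb USING MON_OBJ_METRICS NONE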
In DB2 you can set the DATA CAPTURE table parameter to NONE, but be aware that this is its default value. You can check the current value for a table with the following query:
SELECT datacapture
FROM syscat.tables
WHERE tabschema = '<schema>'
AND tabname = '<tablename>'
You can modify the setting using:
ALTER TABLE <schema>.<tablename> DATA CAPTURE NONE

In addition to the DATA CAPTURE option that Ian described, DB2 v10.1 and newer can also track changes made to tables that are enabled for system-period temporal versioning. System-period versioning is not enabled by default, but if it is, every DML change to the table is automatically captured into a designated history table with the same column layout. Unlike SQL Server's retention policy for tracking changes, the DBA is responsible for pruning old row versions out of DB2 temporal history tables.
The SYSCAT.TABLES view will reveal which tables are enabled for system-period versioning:
SELECT tabschema, tabname FROM syscat.tables
WHERE temporaltype IN ( 'B', 'S' )
The following statement will disable system-period versioning on such a table (the history table itself is kept, but it stops receiving changes):
ALTER TABLE <schema>.<tablename> DROP VERSIONING
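For context, enabling system-period versioning looks roughly like this; the schema, table, and column names here are hypothetical:
-- The base table carries a SYSTEM_TIME period plus a transaction-start column:
CREATE TABLE myschema.orders (
    id        INT NOT NULL PRIMARY KEY,
    amount    DECIMAL(10,2),
    sys_start TIMESTAMP(12) NOT NULL GENERATED ALWAYS AS ROW BEGIN,
    sys_end   TIMESTAMP(12) NOT NULL GENERATED ALWAYS AS ROW END,
    trans_id  TIMESTAMP(12) GENERATED ALWAYS AS TRANSACTION START ID IMPLICITLY HIDDEN,
    PERIOD SYSTEM_TIME (sys_start, sys_end)
);
-- The history table must have the same column layout:
CREATE TABLE myschema.orders_history LIKE myschema.orders;
-- This is the switch that starts capturing every DML change:
ALTER TABLE myschema.orders ADD VERSIONING USE HISTORY TABLE myschema.orders_history;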

How to get all external tables in db2 using system tables information

I need to get a complete list of external tables in a db2 database. I have a schema called DB2INST1. How can I get the complete list of external tables using the system tables?
For Db2-LUW databases that support external tables, the information lives in syscat.tables: external tables have the value Y in position 27 of the PROPERTY column.
This example query returns the fully qualified names of all external tables:
select trim(tabschema)||'.'||rtrim(tabname)
from syscat.tables
where substr(property,27,1)='Y'
with ur;
In general, the best and most reliable way to retrieve all information and to recreate the DDL statements for tables is to use the db2look tool. If you want to extract the metadata on your own, there are some catalog views to start with:
SYSCAT.TABLES holds the table information. Look at the PROPERTY column to check whether a table is external.
SYSCAT.COLUMNS has the basic column information, but there are more related views depending on types and attributes.
SYSCAT.EXTERNALTABLEOPTIONS shows the actual options for an external table, i.e., the settings it has in addition to those of a regular table (see the sketch after this list).
There are many more tables to hold table properties, depending on the complexity of the table and column definitions.
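As a starting point, here is a sketch that joins the first and third of those views; it assumes SYSCAT.EXTERNALTABLEOPTIONS exposes OPTION/SETTING pairs per table, following the usual layout of the catalog's *OPTIONS views:
-- List every external table together with its external-table options:
SELECT t.tabschema, t.tabname, o.option, o.setting
FROM syscat.tables t
JOIN syscat.externaltableoptions o
  ON o.tabschema = t.tabschema
 AND o.tabname = t.tabname
WHERE substr(t.property, 27, 1) = 'Y'
ORDER BY t.tabschema, t.tabname;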

Is there a way to access one database from another in Postgresql? [duplicate]

I'm going to guess that the answer is "no" based on the below error message (and this Google result), but is there any way to perform a cross-database query using PostgreSQL?
databaseA=# select * from databaseB.public.someTableName;
ERROR: cross-database references are not implemented:
"databaseB.public.someTableName"
I'm working with some data that is partitioned across two databases, although the data is really shared between the two (userid columns in one database come from the users table in the other database). I have no idea why these are two separate databases instead of schemas, but c'est la vie...
Note: As the original asker implied, if you are setting up two databases on the same machine you probably want to make two schemas instead - in that case you don't need anything special to query across them.
postgres_fdw
Use postgres_fdw (foreign data wrapper) to connect to tables in any Postgres database - local or remote.
Note that there are foreign data wrappers for other popular data sources. At this time, only postgres_fdw and file_fdw are part of the official Postgres distribution.
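A minimal setup sketch, run from databaseA; the server name, host, and credentials are assumptions, and IMPORT FOREIGN SCHEMA needs Postgres 9.5+:
CREATE EXTENSION postgres_fdw;
CREATE SERVER databaseb_srv FOREIGN DATA WRAPPER postgres_fdw
    OPTIONS (host 'localhost', dbname 'databaseB');
CREATE USER MAPPING FOR CURRENT_USER SERVER databaseb_srv
    OPTIONS (user 'myuser', password 'mypass');
-- Map the remote table locally, then query it like any local table:
IMPORT FOREIGN SCHEMA public LIMIT TO (someTableName)
    FROM SERVER databaseb_srv INTO public;
SELECT * FROM someTableName;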
For Postgres versions before 9.3
Versions this old are no longer supported, but if you need to do this in a pre-2013 Postgres installation, there is a function called dblink.
I've never used it, but it is maintained and distributed with the rest of PostgreSQL. If you're using the version of PostgreSQL that came with your Linux distro, you might need to install a package called postgresql-contrib.
dblink() -- executes a query in a remote database
dblink executes a query (usually a SELECT, but it can be any SQL
statement that returns rows) in a remote database.
When two text arguments are given, the first one is first looked up as
a persistent connection's name; if found, the command is executed on
that connection. If not found, the first argument is treated as a
connection info string as for dblink_connect, and the indicated
connection is made just for the duration of this command.
A good example:
SELECT *
FROM table1 tb1
LEFT JOIN (
    SELECT *
    FROM dblink('dbname=db2', 'SELECT id, code FROM table2')
        AS tb2(id int, code text)
) AS tb2 ON tb2.id = tb1.id;
Note: I am giving this information for future reference.
I have run into this before and came to the same conclusion about cross-database queries as you. What I ended up doing was using schemas to divide the table space; that way I could keep the tables grouped but still query them all.
Just to add a bit more information.
There is no way to query a database other than the current one. Because PostgreSQL loads database-specific system catalogs, it is uncertain how a cross-database query should even behave.
contrib/dblink allows cross-database queries using function calls. Of course, a client can also make simultaneous connections to different databases and merge the results on the client side.
PostgreSQL FAQ
Yes, you can, by using dblink (PostgreSQL only), DBI-Link (which allows foreign cross-database queries), and TDS_Link, which allows queries to be run against MS SQL Server.
I have used DB-Link and TDS-Link before with great success.
I have checked and tried to create foreign key relationships between 2 tables in 2 different databases, using both dblink and postgres_fdw, but without success.
Having read other people's feedback on this, for example here and here and in some other sources, it looks like there is currently no way to do that:
dblink and postgres_fdw indeed enable one to connect to and query tables in other databases, which is not possible with standard Postgres, but they do not allow establishing foreign key relationships between tables in different databases.
If performance is important and most queries are read-only, I would suggest replicating data over to the other database. While this seems like unneeded duplication of data, it might help if indexes are required.
This can be done with simple ON INSERT triggers which in turn call dblink to update the other copy, as sketched below. There are also full-blown replication options (like Slony), but that's off-topic.
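A minimal sketch of such a trigger, assuming a hypothetical users table with id and name columns and a mirror table users_copy in a database called reportdb:
CREATE EXTENSION IF NOT EXISTS dblink;

CREATE FUNCTION mirror_insert() RETURNS trigger AS $$
BEGIN
    -- Push each newly inserted row to the copy in the other database:
    PERFORM dblink_exec(
        'dbname=reportdb user=myuser password=mypass',
        format('INSERT INTO users_copy (id, name) VALUES (%L, %L)',
               NEW.id, NEW.name));
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER users_mirror
    AFTER INSERT ON users
    FOR EACH ROW EXECUTE FUNCTION mirror_insert();
(EXECUTE FUNCTION needs PostgreSQL 11+; older versions spell it EXECUTE PROCEDURE.)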
see https://www.cybertec-postgresql.com/en/joining-data-from-multiple-postgres-databases/ [published 2017]
These days you also have the option to use https://prestodb.io/
You can run SQL on that PrestoDB node and it will distribute the SQL query as required. It can connect to the same node twice for different databases, or it might be connecting to different nodes on different hosts.
It does not support:
DELETE
ALTER TABLE
CREATE TABLE (CREATE TABLE AS is supported)
GRANT
REVOKE
SHOW GRANTS
SHOW ROLES
SHOW ROLE GRANTS
So you should only use it for SELECT and JOIN needs; connect directly to each database for the operations above. (It looks like you can also INSERT or UPDATE, which is nice.)
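For example, a federated join might look like this, assuming two Postgres catalogs named pg1 and pg2 have been configured on the Presto node (catalog, schema, and table names are hypothetical):
-- Presto resolves each catalog.schema.table to its own backing database:
SELECT o.userid, u.name
FROM pg1.public.orders o
JOIN pg2.public.users u ON u.id = o.userid;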
Client applications connect to PrestoDB primarily using JDBC, but other types of connection are possible, including a Tableau-compatible web API.
This is an open source tool governed by the Linux Foundation and Presto Foundation.
The founding members of the Presto Foundation are: Facebook, Uber,
Twitter, and Alibaba.
The current members are: Facebook, Uber, Twitter, Alibaba, Alluxio,
Ahana, Upsolver, and Intel.
In case someone needs a more involved example of how to do cross-database queries, here's an example that cleans up the databasechangeloglock table in every database that has it:
CREATE EXTENSION IF NOT EXISTS dblink;

DO
$$
DECLARE
    database_name TEXT;
    conn_template TEXT;
    conn_string   TEXT;
    table_exists  BOOLEAN;
BEGIN
    conn_template = 'user=myuser password=mypass dbname=';
    FOR database_name IN
        SELECT datname FROM pg_database WHERE datistemplate = false
    LOOP
        conn_string = conn_template || database_name;
        table_exists = (SELECT table_exists_
                        FROM dblink(conn_string,
                             'SELECT count(*) > 0 FROM information_schema.tables WHERE table_name = ''databasechangeloglock''')
                          AS (table_exists_ BOOLEAN));
        IF table_exists THEN
            PERFORM dblink_exec(conn_string, 'delete from databasechangeloglock');
        END IF;
    END LOOP;
END
$$;

DB2 iseries materialized view refresh

I have created the following materialized query table:
CREATE TABLE SCHEMA.TABLE AS
(SELECT * FROM SCHEMA.TABLEEXAMPLE)
DATA INITIALLY DEFERRED
REFRESH DEFERRED
MAINTAINED BY USER
DISABLE QUERY OPTIMIZATION;
When I execute REFRESH TABLE SCHEMA.TABLE, the table gets locked for other users reading from it.
Reading this doc from IBM https://www.ibm.com/support/knowledgecenter/en/SSEPGG_9.7.0/com.ibm.db2.luw.sql.ref.doc/doc/r0000977.html
I tried to execute this statement:
REFRESH TABLE SCHEMA.TABLE ALLOW READ ACCESS
But I get the following error: SQL State: 42601 Unexpected keyword ALLOW
What am I missing in the statement? Is there another way to allow read access to a materialized query table while it is being updated?
MQTs on Db2 for IBM i lag behind the functionality available in Db2 for LUW.
I've never bothered with them; instead, an encoded vector index (EVI) with computed columns has met every need I've ever considered. (Note that Db2 LUW doesn't have EVIs.)
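For illustration, an EVI with aggregate (computed) columns looks roughly like this on IBM i; the names are hypothetical, and the INCLUDE clause requires a fairly recent IBM i release:
-- Grouped aggregates over customer_id can then be answered from the index,
-- covering many of the cases an MQT would otherwise serve:
CREATE ENCODED VECTOR INDEX myschema.orders_evi
    ON myschema.orders (customer_id)
    INCLUDE (SUM(amount), COUNT(*));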
Per Mao's comment, you might try deleting and recreating the MQT with the following:
CREATE TABLE SCHEMA.TABLE AS
(SELECT * FROM SCHEMA.TABLEEXAMPLE)
DATA INITIALLY DEFERRED
REFRESH DEFERRED
MAINTAINED BY USER
DISABLE QUERY OPTIMIZATION
with NC;
But I think a refresh would still require exclusive access to the MQT.
The only options I can think of for "refreshing" an MQT while it is being used:
- programmatically, using either triggers on the base tables or a process that uses SQL to update a few rows at a time.
- removing DISABLE QUERY OPTIMIZATION and not accessing the MQT directly; instead, depend on the optimizer to access it when appropriate. You can then create a new version of it every few hours, and the Db should start using the newer version for new queries. Once the older one is no longer being used, you delete it (or REFRESH it).

Alter Table Set Statistics requires table lock

I have run into a case where Pg always prefers a sequential scan for a table that has around 70M rows. (An index scan is ideal for that query, and I have confirmed it by setting enable_seq_scan=off; speed improved by 200x.)
So, in order to help Pg understand my data better, I executed this:
ALTER TABLE tablename ALTER COLUMN columnname SET STATISTICS 1000;
Unfortunately this requires a SHARE UPDATE EXCLUSIVE lock, which locks the entire table (too much locking).
Is there a solution to avoid locking for this statement?
Data sharding is done for this table based on the primary key range, so I would like Pg to understand my PK better, so that it knows which user has a lot of data. Would it also help to increase the statistics target of the primary key column?
From the very docs you linked
SET STATISTICS
This form sets the per-column statistics-gathering target for subsequent ANALYZE operations. The target can be set in the range 0 to 10000; alternatively, set it to -1 to revert to using the system default statistics target (default_statistics_target). For more information on the use of statistics by the PostgreSQL query planner, refer to Section 14.2.
SET STATISTICS acquires a SHARE UPDATE EXCLUSIVE lock.
And, on the docs for Explicit Locking
SHARE UPDATE EXCLUSIVE
Conflicts with the SHARE UPDATE EXCLUSIVE, SHARE, SHARE ROW EXCLUSIVE, EXCLUSIVE, and ACCESS EXCLUSIVE lock modes. This mode protects a table against concurrent schema changes and VACUUM runs.
Acquired by VACUUM (without FULL), ANALYZE, CREATE INDEX CONCURRENTLY, and ALTER TABLE VALIDATE and other ALTER TABLE variants (for full details see ALTER TABLE).
So while the statistics target is being changed you can't alter the schema or run VACUUM on that table. So what? SET STATISTICS should complete very fast, almost instantly.
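One defensive pattern, reusing the placeholder names from the question: bound the lock wait with lock_timeout, so the ALTER fails fast instead of queuing behind a long transaction and blocking other sessions:
SET lock_timeout = '1s';
ALTER TABLE tablename ALTER COLUMN columnname SET STATISTICS 1000;
-- The new target only takes effect at the next ANALYZE:
ANALYZE tablename;
RESET lock_timeout;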

db2 reorganize a table

When I alter a table in db2, I have to reorganize it, so I execute the following query:
Call Sysproc.admin_cmd ('reorg Table myTable');
I am searching for an appropriate way to reorganize a table when it is altered, or to reorganize the whole schema after making various modifications.
You can determine when tables will require a REORG by looking at SYSIBMADM.ADMINTABINFO:
select tabschema, tabname
from sysibmadm.admintabinfo
where reorg_pending = 'Y'
You may also want to look at the NUM_REORG_REC_ALTERS column, as it may show you additional tables where ALTER TABLE statements have made a reorganization advisable even though it is not strictly required.
The reorg operation is similar to defragmenting a hard disk. It frees empty space in pages, and it can also reorder data according to an index. Depending on the features in use, it creates the compression dictionary and compresses data.
As you can see, reorg is an administrative task, and it is not necessary each time data is modified. A database can run without reorgs.
To ease this, DB2 includes autonomic features (like automatic backup and automatic reorganization); however, this doesn't directly answer your question, since automatic maintenance will only trigger a reorg on tables that need it.
To reorg a table explicitly, you need to execute the REORG command: http://publib.boulder.ibm.com/infocenter/db2luw/v10r1/topic/com.ibm.db2.luw.admin.cmd.doc/doc/r0001966.html
or call it via ADMIN_CMD: http://publib.boulder.ibm.com/infocenter/db2luw/v10r1/topic/com.ibm.db2.luw.sql.rtn.doc/doc/r0023582.html
In the db2 database configuration we have:
Automatic reorganization (AUTO_REORG) = OFF
We can set AUTO_REORG to ON.
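A minimal sketch from the DB2 command line, assuming a database named mydb; AUTO_REORG only takes effect when its parent maintenance switches are also on:
-- Enable automatic table maintenance, including automatic reorganization:
UPDATE DB CFG FOR mydb USING AUTO_MAINT ON AUTO_TBL_MAINT ON AUTO_REORG ON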