How to clear monitoring statistics in IBM DB2 9.7 - db2

I am monitoring query information on my IBM DB2 9.7 database, such as how long some queries take to execute. But how do I reset this information and clear the monitors? Apparently they are reset when the whole DB instance is reset, but that also forces all connections to close on the other databases in the instance (not good). Any ideas on how to reset the monitor statistics on a particular DB only? Thanks.

That is correct, these monitors cannot be reset in DB2 V9.7. However, you can simulate a reset by following the steps in this article: http://www.ibm.com/developerworks/data/library/techarticle/dm-1009db2monitoring1/
You create a set of objects that keep track of the monitor values; when you want to "reset", you store the values as they are at that moment, and from then on you report the difference between the stored values and the most recent values.
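A minimal sketch of that idea, assuming the DB2 9.7 MON_GET_WORKLOAD table function and a couple of its metric columns; the monitor_baseline table and monitor_delta view are made-up names for illustration, not something taken from the article:

-- baseline table holding the metric values captured at "reset" time
CREATE TABLE monitor_baseline (
    workload_name        VARCHAR(128) NOT NULL,
    total_cpu_time       BIGINT,
    act_completed_total  BIGINT
);

-- "reset": snapshot the current monitor values
DELETE FROM monitor_baseline;
INSERT INTO monitor_baseline (workload_name, total_cpu_time, act_completed_total)
    SELECT workload_name, total_cpu_time, act_completed_total
      FROM TABLE(MON_GET_WORKLOAD(NULL, -2));

-- report: current values minus the values stored at the last "reset"
CREATE VIEW monitor_delta AS
    SELECT w.workload_name,
           w.total_cpu_time      - b.total_cpu_time      AS total_cpu_time,
           w.act_completed_total - b.act_completed_total AS act_completed_total
      FROM TABLE(MON_GET_WORKLOAD(NULL, -2)) AS w
      JOIN monitor_baseline AS b ON b.workload_name = w.workload_name;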

Related

How to speed up insert in cloudsql

I cannot change this global parameter in Google Cloud:
set global innodb_flush_log_at_trx_commit = 0
How can I speed up inserts in Cloud SQL?
I have tried bulk inserts and various other flags, but it does not work.
You cannot change every parameter you want in Cloud SQL, since it is a Google-managed service. In any case, the innodb_flush_log_at_trx_commit parameter should generally be kept at 1, because it is what keeps InnoDB ACID compliant. If you modify this parameter you risk losing some data from your transactions.
Going back to your issue, here you can find a set of tips for improving performance of your Cloud SQL instance.
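One of the usual tips, sketched here purely as an illustration (the table and column names are placeholders): batch many rows into a single multi-row INSERT inside one transaction, so the per-statement and per-commit overhead is paid once rather than once per row.

START TRANSACTION;
INSERT INTO my_table (id, payload) VALUES
    (1, 'row 1'),
    (2, 'row 2'),
    (3, 'row 3');  -- keep batching, e.g. a few hundred rows per statement
COMMIT;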
If you really want to have full control over your databases, you can always opt for setting your databases on a Compute Engine instance.

monitor Postgres table activity

I need to monitor my Postgres server. I need to get an alarm if there is no change in certain tables after a given time. I've been trying to get Xymon and Nagios to do this and have not been able to. Please help.
You probably want to look at pg_stat_user_tables and note whether the statistics for row insertion/deletion/updates have changed for the table. That's the easiest way to check for this sort of activity in monitoring software.
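For example, a check along these lines (the table name is a placeholder) returns the running counters; if they have not moved between two polls, the table has seen no write activity:

SELECT relname, n_tup_ins, n_tup_upd, n_tup_del
  FROM pg_stat_user_tables
 WHERE relname = 'my_table';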
You might also get ideas in this area by looking at the source code of the best PostgreSQL monitoring plug-in, the Nagios one: check_postgres
First, create a trigger on the table that activates on any modification statement (INSERT/UPDATE/DELETE). This trigger should update a "last-changed" timestamp somewhere (e.g. a field in some other control table).
Then, you'll need a separate process that is started regularly by some external means (e.g. cron on Unix). This process is run e.g. every 10 minutes, or every hour -- whatever granularity you need. It simply checks the last-changed timestamp to determine whether there has been any activity in the period since the last check.
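A rough sketch of that approach, assuming PostgreSQL with PL/pgSQL; the table, function, and trigger names are made up:

-- control table holding a "last changed" timestamp per watched table
CREATE TABLE table_activity (
    table_name    text PRIMARY KEY,
    last_changed  timestamp NOT NULL DEFAULT now()
);
INSERT INTO table_activity (table_name) VALUES ('watched_table');

-- trigger function that bumps the timestamp on any modification
CREATE OR REPLACE FUNCTION touch_table_activity() RETURNS trigger AS $$
BEGIN
    UPDATE table_activity SET last_changed = now()
     WHERE table_name = TG_TABLE_NAME;
    RETURN NULL;  -- AFTER trigger: return value is ignored
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER watched_table_touch
AFTER INSERT OR UPDATE OR DELETE ON watched_table
FOR EACH STATEMENT EXECUTE PROCEDURE touch_table_activity();

-- the cron-driven check then only needs to read:
SELECT table_name, last_changed FROM table_activity;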
It's not a free solution, but LogicMonitor's postgres monitoring can do this trivially.
If you have a means to get an alert when a file does not change in some time, then I have a less elegant, but probably simpler solution: try to find out the filename where Postgres stores the table in question (someone should dig into system tables in Postgres - maybe ask this in a separate question) and then have your monitoring tool set up to watch the modify time of that file.
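If you want to try that route, here is a sketch of digging the location out of the system catalogs ('my_table' is a placeholder); newer PostgreSQL versions can also do this in one call with pg_relation_filepath(). Keep in mind that the relfilenode changes whenever the table is rewritten (e.g. TRUNCATE or CLUSTER), so the watched filename is not stable forever.

SELECT d.oid AS database_oid, c.relfilenode
  FROM pg_class c, pg_database d
 WHERE c.relname = 'my_table'
   AND d.datname = current_database();
-- the table's main data file is then $PGDATA/base/<database_oid>/<relfilenode>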

what happens to my dataset in case of unexpected failure

I know this has been asked here, but my question is slightly different. When the DataSet was designed with the disconnected principle in mind, what feature was provided to handle unexpected termination of the application, say a power failure, a Windows hang, or a system exception leading to a restart? Say the user has entered some 100 rows and they exist only in the DataSet. Usually the DataSet is written back to the database when the application closes or at regular intervals.
In the old days, programming with VB 6.0, all interaction took place directly with the database, so each successful transaction committed itself automatically. How can that be done using DataSets?
DataSets are never meant for direct access to the database; they are a disconnected model only. There is no intent that they be able to recover from machine failures.
If you want to work live against the database you need to use DataReaders and issue DbCommands directly against the database for changes. This will, of course, increase the load on your database server.
You have to balance the two for most applications. If you know a user just entered vital data as a new row, execute an insert command to the database, and put a copy in your local cached DataSet. Then your local queries can run against the disconnected data, and inserts are stored immediately.
A DataSet can be serialized very easily, so you could implement your own regular backup to disk by using serialization of the DataSet to the filesystem. This will give you some protection, but you will have to write your own code to check for any data that your application may have saved to disk previously and so on...
You could also ignore DataSets and use SqlDataReaders and SqlCommands for the same sort of 'direct access to the database' you are describing.

Database last updated?

I'm working with SQL 2000 and I need to determine which of these databases are actually being used.
Is there a SQL script I can use to tell me the last time a database was updated? Read? Etc.?
I Googled it, but came up empty.
Edit: the following targets the issue of finding, after the fact, the last access date. With regard to figuring out who is using which databases, this can be monitored reliably with the right filters in SQL Profiler. Beware, however, that Profiler traces can get quite big (and hence slow/hard to analyze) when the filters are not adequate.
Changes to the database schema, i.e. the addition of tables, columns, triggers and other such objects, typically leave "dated" tracks in the system tables/views (I can provide more detail about that if need be).
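For example, SQL 2000's sysobjects records a creation date (crdate) but not a modification date, so a query like this only tells you when objects were created:

SELECT name, xtype, crdate
  FROM sysobjects
 WHERE xtype IN ('U', 'P', 'V', 'TR')  -- user tables, procedures, views, triggers
 ORDER BY crdate DESC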
However, unless the data itself includes timestamps of sorts, there are typically very few sure-fire ways of knowing when data was changed, short of a recovery model that keeps all such changes in the log. In that case you need some tools to "decompile" the log data...
With regard to detecting "read" activity... a tough one. There may be some computer-forensics-like tricks, but again, no easy solution I'm afraid (beyond the ability to see in the server activity the very last query for all still-active connections; obviously a very transient thing ;-) )
I typically run the profiler if I suspect the database is actually used. If there is no activity, then simply set it to read-only or offline.
You can use a transaction log reader to check when data in a database was last modified.
With SQL 2000, I do not know of a way to know when the data was read.
What you can do is put a trigger on login to the database, track when the login is successful, and record associated variables to find out who / what application is using the DB.
If your database is fully logged, create a new transaction log backup and check its size. The log backup will have a fixed, small length when no changes have been made to the database since the previous transaction log backup, and it will be larger when there were changes.
This is not a very exact method, but it is easy to check, and it might work for you.
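One way to compare the sizes without digging through the file system is the backup history in msdb, for example (the database name is a placeholder):

-- size in bytes of the most recent transaction log backups for a database
SELECT TOP 5 backup_finish_date, backup_size
  FROM msdb.dbo.backupset
 WHERE database_name = 'MyDatabase'
   AND type = 'L'                    -- L = transaction log backup
 ORDER BY backup_finish_date DESC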

Sybase SQL Anywhere: check if synchronization is needed?

I have a Sybase SQL Anywhere 11.0.1 database that I am using to sync with an Oracle Consolidated Database.
I know that the SQL Anywhere database keeps track of all of the changes that are made to it so that it knows what to synchronize with the consolidated database. My question is whether or not there is a SQL command that will tell you if the database has changes to sync.
I have a mobile application and I want to show a little flag to the user anytime they have made changes to the handheld that need to be synced. I could just create another table to track that stuff myself but I would much rather just ping the database and ask it if it has changes that need to be synced.
There's nothing automatic to tell you that there is data to synchronize. In addition to Ben's suggestion, another idea would be to query the SYS.SYSSYNC table at the remote database to get an idea of whether there might be changes. The following statement returns a result set that shows a simple status of your last synchronization:
select ss.site_name, sp.publication_name, ss.log_sent, ss.progress
from sys.syssync ss, sys.syspublication sp
where ss.publication_id = sp.publication_id
and ss.publication_id is not null
and ss.site_name is not null
If progress < log_sent, then the status of the last synchronization is unknown. The last upload may or may not have been applied at the consolidated, because the upload was sent, but no response was received from the MobiLink server. In this case, suggesting a synch isn't a bad idea.
If progress = log_sent, then the last synch was successful. Knowing this, you could check the value of db_property('CurrentRedoPos'), which will return the current log offset of the remote database. If this value is significantly higher than the progress value, there have been many operations applied to the database since the last synchronization, so there's a good chance that there is data to synchronize. However, there are lots of reasons why even a large difference between progress and db_property('CurrentRedoPos') could result in no actual data needing synchronization:
- The download from the ML Server is applied by dbmlsync after the progress value at the remote is updated by dbmlsync when the upload is confirmed by the ML Server. Operations applied in the download by dbmlsync are not synchronized back to the ML Server, so the entire offset range could just be the last download that was applied. This could be worked around by tracking the current log offset in the sp_hook_dbmlsync_end hook when the 'exit code' value in the #hook_dict table is zero (see the sketch after this list). That would tell you the log offset of the database after the download was applied, and you could then compare the saved value with the current log offset.
- All the operations in the transaction log could be operations on tables that are not synchronized.
- All the operations in the transaction log could have been rolled back.
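A rough sketch of that hook idea; sync_progress is a made-up tracking table, and while sp_hook_dbmlsync_end and the 'exit code' entry in #hook_dict follow the dbmlsync event-hook convention mentioned above, treat this as untested pseudocode rather than a verified implementation:

create table sync_progress (
    last_applied_offset numeric(20)  -- log offset right after the last successful sync
);
insert into sync_progress values (0);

create procedure sp_hook_dbmlsync_end()
begin
    -- only record the offset when dbmlsync reports a clean exit
    if exists (select 1 from #hook_dict
                where name = 'exit code' and value = '0') then
        update sync_progress
           set last_applied_offset = cast(db_property('CurrentRedoPos') as numeric(20));
    end if;
end;

-- later, to decide whether to suggest a sync, compare:
select cast(db_property('CurrentRedoPos') as numeric(20)) as current_offset,
       last_applied_offset
  from sync_progress;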
My solution is not ideal. Tracking the changes to synchronized tables yourself is the best solution, but I thought I could offer an alternative that might be OK for your needs, with the advantage that you are not triggering an extra action on every operation performed on a synchronized table.
The mobile database doesn't keep track of when the last sync was, the MobiLink server keeps all of that information in the MobiLink tables of the consolidated database.
Since synchronization only transfers necessary information, you could simply initiate a sync. If there's nothing to sync, then very little data will be used by your application.
As a side note, SQL Anywhere has its own SO clone which is monitored by Sybase engineers. If anyone knows for sure, it'll be them.
As of SQL Anywhere 17, SAP PM maps to a local Sybase database that contains a TTRANSACTION_UPLOAD table, so to determine if a synchronization is necessary we simply query this table to see if it has any records that need to be sync'd to the HANA consolidation database.
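Assuming that TTRANSACTION_UPLOAD table, the check itself can be as simple as the following, run against the local PM database; a non-zero count means there are pending changes to upload:

SELECT COUNT(*) AS pending_changes
  FROM TTRANSACTION_UPLOAD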