Performance Tuning - oracle10g

How can I check which query has been running for a long time, and what are the steps to tune it? (Oracle)

Run EXPLAIN PLAN FOR SELECT ... to see what Oracle is doing with your query.
Post your query here so that we can look at it and help you out.
Check out the Oracle Performance Tuning FAQ for some tricks-of-the-trade, if you will.
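For example, a minimal sketch of the EXPLAIN PLAN workflow (the orders table here is hypothetical):

explain plan for
select * from orders where status = 'OPEN';

-- display the plan that was just generated
select * from table(dbms_xplan.display);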

You can capture the query by selecting from v$sql or v$sqltext.
If you are not familiar with it, look up 'Explain Plan' in the Oracle
documentation. There should be plenty on it in the performance tuning
guide.
Have a look at Quest Software's Toad for a third-party tool that helps in this area too.
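As a starting point, a sketch along these lines pulls the most expensive statements out of v$sql (elapsed_time is cumulative, in microseconds):

-- top statements by cumulative elapsed time
select sql_id, executions,
       elapsed_time / 1e6 as total_elapsed_sec,
       sql_text
from v$sql
order by elapsed_time desc;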

Unfortunately your question is not expressed clearly. The other answers have already tackled the issue of tuning a known bad query, but another interpretation is that you want to monitor your database to find poorly performing queries.
If you don't have Enterprise Edition with the Diagnostics pack - and not many of us do - your best bet is to run statspack snapshots on a regular basis. This will give you a lot of information about your system, including which queries take a long time to complete and which queries consume a lot of your system's resources. You can find out more about statspack here.
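For reference, the basic statspack workflow is roughly this (assuming statspack is already installed via spcreate.sql and you are connected as the PERFSTAT user):

-- take a snapshot; schedule this periodically, e.g. with dbms_job
exec statspack.snap

-- later, report on the interval between two snapshots
@?/rdbms/admin/spreport.sql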

If you do not want to use OEM, then you can query and find out.
First find the long-running query. If it is currently executing, you can join gv$session to gv$sql to see which sessions have been running for a long time and what SQL they are executing; the last_call_et column in gv$session shows how long a session has been in its current call. If the SQL executed some time in the past, you can use the dba_hist_snapshot, dba_hist_sqlstat, and dba_hist_sqltext views to find the offending SQL.
Once you have the query, you can check which plan it is picking: from dba_hist_sql_plan if the SQL executed in the past, or from gv$sql_plan if it is currently executing.
Now analyze the execution plan and see whether it is using the right indexes, join methods, and so on.
If not, tune those.
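For the currently-executing case, a sketch of such a query might look like this (the one-hour threshold is an arbitrary example):

-- sessions active for more than an hour, with the SQL they are running
select s.inst_id, s.sid, s.serial#, s.username,
       s.last_call_et, q.sql_id, q.sql_text
from gv$session s
join gv$sql q
  on q.sql_id = s.sql_id
 and q.inst_id = s.inst_id
 and q.child_number = s.sql_child_number
where s.status = 'ACTIVE'
  and s.last_call_et > 3600
order by s.last_call_et desc;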
Let me know which step you have a problem with; I can help you with those.


Postgres: monitor what part of the execution plan is running for long queries

The information in pg_stat_activity is rather scarce, and does not give progress information for long queries.
This information is sort of available in v$session_longops in Oracle, which shows which object is being processed (target), the number of items it needs to go through (totalwork), and the number of items done so far (sofar). One can then use that to infer what part of the execution plan the engine is in. This information is available in Spark and Flink as well.
I was wondering if there is a way to get access to that in Postgres, either in system tables or by observing the processes, or where one might look in the internals if one wants to implement a patch.
Cheers!
AFAIK there is no existing feature to monitor long-running queries in detail (there is such a feature only for 3 DDL statements).
A patch was proposed 3 years ago; it looks like it was not integrated.
See discussion in hackers mailing list:
https://www.postgresql.org/message-id/CADdR5nxQUSh5kCm9MKmNga8+c1JLxLHDzLhAyXpfo9-Wmc6s5g#mail.gmail.com
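For what it's worth, a coarse list of long-running statements (without any plan-level progress) is available from pg_stat_activity; a minimal sketch, with an arbitrary five-minute threshold:

-- currently active statements running longer than five minutes
SELECT pid,
       now() - query_start AS runtime,
       state,
       query
FROM pg_stat_activity
WHERE state = 'active'
  AND now() - query_start > interval '5 minutes'
ORDER BY runtime DESC;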

Time of first occurrence of the query from pg_stat_statements. Possible to get?

I'm debugging a DB performance issue. There's a lead that the issue was introduced after a certain deploy, e.g. when the DB started to serve some new queries.
I'm looking to correlate the deployment time with the performance issues, and would like to identify the queries that are causing this.
Using pg_stat_statements has been very handy so far. Unfortunately, it does not store the timestamp of the first occurrence of each query.
Are there any auxiliary tables I could look into to see the time of first occurrence of queries?
Ideally, if this information were available in pg_stat_statements, I'd write a query like this:
select queryid from pg_stat_statements where date(first_run) = '2020-04-01';
Additionally, it'd be cool to see last_run as well, to filter out old queries that no longer execute at all but remain in pg_stat_statements. That's more of a nice-to-have than a necessity, though.
This information is not stored anywhere, and indeed it would not be very useful. If the problem statement is a new one, you can easily identify it in your application code. If it is not a new statement, but something made the query slower, knowing when the query was first executed won't help you.
Is your source code not under version control?
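If an approximation going forward is acceptable, one workaround is to snapshot pg_stat_statements yourself on a schedule; a hypothetical sketch (the statement_snapshots table is invented for illustration):

-- one-time setup
CREATE TABLE statement_snapshots AS
SELECT now() AS captured_at, queryid, query, calls
FROM pg_stat_statements;

-- run periodically (e.g. from cron or pg_cron)
INSERT INTO statement_snapshots
SELECT now(), queryid, query, calls
FROM pg_stat_statements;

-- approximate first and last sighting of each statement
SELECT queryid,
       min(captured_at) AS first_seen,
       max(captured_at) AS last_seen
FROM statement_snapshots
GROUP BY queryid;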

Db2 performance - many batch programs inserting rows in the same table

Hi there, I'm looking for advice from someone who knows IBM Db2 performance well.
I have a situation in which many batch tasks are massively inserting rows into the same Db2 table at the same time.
This situation looks potentially bad. I don't think Db2 is able to resolve the many requests quickly enough, which causes the concurrent tasks to take longer to finish and even causes some of them to abend with a -904 or -911 SQLCODE.
What do you guys think? Should situations like these be avoided? Are there techniques that could improve the performance of the batch tasks and keep them from abending or running too slowly?
Thanks.
Inserting should not be a big problem; ETL workloads (e.g. with DataStage) do this all the time.
I suggest running
ALTER TABLE <tabname> APPEND ON
This avoids the free-space search - details can be found here.
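As a sketch (Db2 LUW syntax; the table name is a placeholder), append mode can be switched on for the batch window and off again afterwards:

-- new rows go at the end of the table, skipping the free-space search
ALTER TABLE sales_staging APPEND ON;
-- ... run the batch inserts ...
-- restore normal insert behavior
ALTER TABLE sales_staging APPEND OFF;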
The information provided about the reported errors is not sufficient to determine their cause.
There are several things to consider. What indexes are on the table is one.
Append mode works well to relieve last-page contention, but there are also cases where you can see contention on the statement itself for the variation lock. You can also have issues with the transaction logs if they are not fast enough.
Tell us the table, the indexes, and the statement, and maybe we can come up with how to do it. What hardware are you using, and what I/O subsystem is used for the transaction logs and the database tablespaces?

Query log equivalent for Progress/OpenEdge

Short story: a report running against a Progress database (OpenEdge Release 10.1C03) takes hours to complete. I suspect that it does not take advantage of existing data indexes. I would like to understand how it scans the data, so I can then try to add an index that will make it run faster.
Source code of the report is not available. The code is native Progress 4GL, not SQL.
If it were an SQL database, I would dump the SQL queries and go from there. With 4GL I did not find any such functionality. Is it possible to somehow peek at what gets executed at the low level?
What else can be done if there is no source code?
Thanks!
There are several things you can do:
If I recall correctly 10.1C should have the _usertablestat and _userindexstat virtual system tables available. These allow you to observe, at runtime, what tables and indexes are being accessed by a particular session. You can either write your own 4GL program to query them or you can use the screens in PROMON, R&D, 3 "Other Displays", 5 "I/O Operations by User by Table" and 6 "I/O Operations by User by Index". That will show you what tables and indexes are actually in use and how much use they are getting. If the observed data seems wrong it will probably give you a clue. (If the VSTs are missing it might be because the db was upgraded from an older version -- add them with proutil dbname -C updatevsts.)
You could also use the session startup parameters -clientlog "filename" and -logentrytypes QryInfo to obtain more detailed information about the queries being executed.
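For example, a client session might be started along these lines (the script and program names are placeholders; -clientlog and -logentrytypes are the parameters mentioned above):

pro dbname -p report.p -clientlog query.log -logentrytypes QryInfo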
Keep in mind that Progress is not SQL. Unlike most SQL databases, the 4GL uses a static, compile-time optimizer. Index selection happens when the code is compiled. So unless you can recompile (and you seem not to have source, so that seems unlikely) you won't be able to improve things by adding missing indexes. You might, however, at least be able to show the person who does have the source where the problem is.
Another tool that can help is the profiler. This will identify where in the code the time is being spent. That can also be good information to provide to the original vendor if they need help finding the problem. For more information on the profiler: http://dbappraise.com/ppt/profiler.pptx

SQL Server 2008 R2 table access times

Does SQL Server maintain statistics for each table on read, write, update times, etc.?
We want to know in which tables our ERP applications spend the most time, so we can begin looking for ways to optimize those tables.
Well, SQL Server doesn't keep track of those statistics by table name. But you could look at DMVs like sys.dm_exec_query_stats to see which queries are taking the longest.
SELECT
    -- extract the individual statement from the batch text
    [sql] = SUBSTRING
    (
        st.[text],
        (s.statement_start_offset / 2) + 1,
        (CASE s.statement_end_offset
            WHEN -1 THEN DATALENGTH(CONVERT(NVARCHAR(MAX), st.[text]))
            ELSE s.statement_end_offset
         END - s.statement_start_offset) / 2
    ),
    s.*
FROM sys.dm_exec_query_stats AS s
CROSS APPLY sys.dm_exec_sql_text(s.[sql_handle]) AS st
WHERE s.execution_count > 1
  AND st.[dbid] = DB_ID('Your_ERP_Database_Name')
-- average CPU time per execution
ORDER BY total_worker_time * 1.0 / execution_count DESC;
Of course you can order by any metrics you want, and quickly eyeball the first column to see if you identify anything that looks suspicious.
You can also look at sys.dm_exec_procedure_stats to identify procedures that are consuming high duration or reads.
Keep in mind that these and other DMVs reset for various events including reboots, service restarts, etc. So if you want to keep a running history of these metrics for trending / benchmarking / comparison purposes, you're going to have to snapshot them yourself, or get a 3rd party product (e.g. SQL Sentry Performance Advisor) that can help with that and a whole lot more.
Disclaimer: I work for SQL Sentry.
You could create a SQL Server Audit as per the following link:
http://msdn.microsoft.com/en-us/library/cc280386(v=sql.105).aspx
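As a rough, hypothetical sketch of that approach (all names and the file path are placeholders, and the database audit specification must be created inside the target database):

-- server-level audit target
CREATE SERVER AUDIT erp_audit TO FILE (FILEPATH = 'C:\Audits\');
ALTER SERVER AUDIT erp_audit WITH (STATE = ON);

-- inside the ERP database: record DML against all of its objects
CREATE DATABASE AUDIT SPECIFICATION erp_table_access
FOR SERVER AUDIT erp_audit
ADD (SELECT, INSERT, UPDATE, DELETE ON DATABASE::[Your_ERP_Database_Name] BY public)
WITH (STATE = ON);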
SQL Server does capture the information you're asking about, but it's on a per index basis, not per table - look in sys.dm_db_index_operational_stats and sys.dm_db_index_usage_stats. You'll have to aggregate the data based on object_id to get table information. However, there are caveats - for example, if an index is not used (no reads and no writes), it won't show up in the output. These statistics are reset on instance restart, and there's a bug that causes them to be reset in index_usage_stats when an index is rebuilt (https://connect.microsoft.com/SQLServer/feedback/details/739566/rebuilding-an-index-clears-stats-from-sys-dm-db-index-usage-stats). And, there are notable differences between the outputs from the DMVs - check out Craig Freedman's post for more information (http://blogs.msdn.com/b/craigfr/archive/2008/10/30/what-is-the-difference-between-sys-dm-db-index-usage-stats-and-sys-dm-db-index-operational-stats.aspx).
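For example, a sketch that rolls the index-level counters up to table level for the current database (tables whose indexes have no recorded activity will simply be absent):

-- reads vs. writes per table, aggregated from index-level counters
SELECT OBJECT_NAME(ius.[object_id]) AS table_name,
       SUM(ius.user_seeks + ius.user_scans + ius.user_lookups) AS total_reads,
       SUM(ius.user_updates) AS total_writes
FROM sys.dm_db_index_usage_stats AS ius
WHERE ius.database_id = DB_ID()
GROUP BY ius.[object_id]
ORDER BY total_reads DESC;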
The bigger question is, what problem are you trying to solve by having this information? I would agree with Aaron that finding queries that are taking a long time is a better place to start in terms of optimization. But, I wanted you to be aware that SQL Server does have this information.
We use sp_WhoIsActive from Adam Machanic's blog.
It gives us a snapshot of what is currently going on on the server, and which execution plans the statements are using.
It is easy to use and free of charge.
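A minimal usage sketch (the @get_plans parameter, if memory serves, pulls in the execution plans):

-- one row per active request
EXEC sp_WhoIsActive;
-- optionally include query plans (heavier to collect)
EXEC sp_WhoIsActive @get_plans = 1;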