Clear Oracle cache between queries - oracle10g

I want to measure the real execution time of my query with different hints and without them. But Oracle caches the query after its first execution, so the second time it executes quickly. How can I clear this cache after each query execution?

ALTER SYSTEM FLUSH BUFFER_CACHE
More details in the manual:
http://download.oracle.com/docs/cd/B19306_01/server.102/b14200/statements_2013.htm#i2053602
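For repeatable timing tests, the buffer cache is usually flushed together with the shared pool (which holds parsed statements and execution plans). A minimal sketch, to be run as a privileged user and never on a production system:

```sql
-- Run as SYS or another suitably privileged user; not for production systems.
ALTER SYSTEM FLUSH BUFFER_CACHE;  -- discards cached data blocks
ALTER SYSTEM FLUSH SHARED_POOL;   -- discards parsed statements and cached plans
```

Note that even after flushing, timings on an idle test instance may still differ from production, where the caches are warm with other workloads.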

Related

Query Tuning PostgreSQL stored procedure which has 1000 queries Inside

I want to tune my PostgreSQL stored procedure, which has 1000 queries inside. It has suddenly started to perform poorly.
How can I debug this SP to find which query inside it is hurting performance? EXPLAIN ANALYZE doesn't really show much detail for statements inside an SP.
Thanks for your help.
Your best option is to use auto_explain with auto_explain.log_nested_statements, auto_explain.log_analyze and auto_explain.log_buffers turned on.
Then the execution plans of all SQL statements are logged together with their duration.
I think that if you have a single function with 1000 different SQL statements inside, your design could be improved.
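A minimal sketch of the auto_explain setup described above; these are the module's documented parameters, loaded here per-session (loading it via shared_preload_libraries in postgresql.conf works too):

```sql
-- Requires superuser when loaded per-session
LOAD 'auto_explain';
SET auto_explain.log_min_duration = 0;        -- log a plan for every statement
SET auto_explain.log_nested_statements = on;  -- include statements run inside functions
SET auto_explain.log_analyze = on;            -- actual run times, not just estimates
SET auto_explain.log_buffers = on;            -- buffer usage per plan node

-- Now call the function; each of its statements is logged with its plan and timing
SELECT my_procedure();  -- hypothetical function name
```

With log_analyze on, every statement is effectively run under EXPLAIN ANALYZE, which adds measurable overhead, so this is best done on a test system or temporarily.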

Are query execution plans stored anywhere in postgresql?

I am trying to figure out if PostgreSQL query execution plans are stored somewhere (possibly as complementary to pg_stat_statements and pg_prepared_statements) in a way that makes them available for longer than the duration of the session. I understand that PREPARE does cache a SQL statement in pg_prepared_statements, though the plan itself does not seem to be available in any view as far as I can tell.
I am not sure if there is a doc explaining the life cycle of a query plan in PostgreSQL, but from the EXPLAIN documentation it sounds like PostgreSQL does not cache query plans at all. Is this accurate?
Thanks!
PostgreSQL has no shared storage for execution plans, so they cannot be reused across database sessions.
There are two ways to cache an execution plan within a session:
Use prepared statements with the SQL statements PREPARE and EXECUTE. The plan will be cached for the lifetime of the prepared statement, usually until your session ends.
Use PL/pgSQL functions. The plans for all static SQL statements (that is, statements that are not run with EXECUTE) in such a function will be cached for the session lifetime.
Other than that, execution plans are not cached in PostgreSQL.
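The prepared-statement approach from the first point can be sketched like this (the table and query are hypothetical):

```sql
-- Parse and plan once; the plan lives until DEALLOCATE or session end
PREPARE get_user(integer) AS
    SELECT * FROM users WHERE id = $1;

EXECUTE get_user(42);  -- can reuse the cached plan
EXECUTE get_user(99);

DEALLOCATE get_user;   -- drops the statement and its plan
```

One caveat: PostgreSQL may generate custom plans for the first few executions and only switch to a cached generic plan once that looks cheap enough, so "cached" does not always mean the very first plan is reused.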

Determine locks during process

I have a huge process (a program using ActiveRecord) which locks different tables for a certain amount of time.
Now I want to check all locks taken during the process: which tables are locked and for how long. I could use the Activity Monitor, but I need more information.
Is there a tool like the SQL Server Profiler which lists all locks during a process? Or is there a log table somewhere that I can check?
Further Information:
There is a process in our program which uses half of the tables in our database: it creates new rows, updates existing rows, selects information, and so on. The process currently runs only during the night. Now they want to run it during the day, and I have to evaluate the feasibility of that request. I have already checked the source code, but I also want to check the database for long-held locks, table locks and similar issues, just to be sure. The idea is to start the process in our test environment and collect all lock information. But I don't see all locks in the Activity Monitor, and I can't watch the Activity Monitor for an hour.
There are many DMVs which will help you gather lock statistics. Run this query at whatever frequency you need through a SQL Agent job, and log the output to a table for later analysis.
--This shows all the locks involved in each session
SELECT resource_type, resource_associated_entity_id,
       request_status, request_mode, request_session_id,
       resource_description
FROM sys.dm_tran_locks
WHERE resource_database_id = DB_ID();

--You can also use the sys.dm_exec_requests DMV to gather blocking and wait_type details
SELECT status, wait_type, last_wait_type, txt.text
FROM sys.dm_exec_requests ec
CROSS APPLY sys.dm_exec_sql_text(ec.sql_handle) txt;
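The "log this to a table" part could look like the following sketch; dbo.lock_log is a hypothetical table whose column types mirror sys.dm_tran_locks, and the INSERT would be the step scheduled in the SQL Agent job:

```sql
-- One-time setup: a hypothetical table to collect lock snapshots
CREATE TABLE dbo.lock_log (
    captured_at                   datetime DEFAULT GETDATE(),
    resource_type                 nvarchar(60),
    resource_associated_entity_id bigint,
    request_status                nvarchar(60),
    request_mode                  nvarchar(60),
    request_session_id            int,
    resource_description          nvarchar(256)
);

-- Scheduled job step: snapshot current locks in this database
INSERT INTO dbo.lock_log (resource_type, resource_associated_entity_id,
                          request_status, request_mode,
                          request_session_id, resource_description)
SELECT resource_type, resource_associated_entity_id,
       request_status, request_mode,
       request_session_id, resource_description
FROM sys.dm_tran_locks
WHERE resource_database_id = DB_ID();
```

Running this every few seconds during the test run gives a timeline of which sessions held which locks, without having to watch the Activity Monitor live.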

SQL queries running slowly or stuck after DBCC DBReindex or Alter Index

All,
SQL 2005 SP3, database is about 70 GB in size. Once in a while when I reindex all of the indexes in all of my tables, the front end seems to freeze up or run very slowly. These are queries coming from the front end, not stored procedures in SQL Server. The front end is using a jTDS JDBC connection to access the SQL Server. If we stop and restart the web services sending the queries, the problem seems to go away. It is my understanding that we have a connection pool in which we reuse connections and don't establish a new connection each time.
This problem does not happen every time we reindex. I have tried both ways, with DBCC DBREINDEX and with ALTER INDEX using ONLINE = ON and SORT_IN_TEMPDB = ON.
Any insight into why this problem occurs once in a while and how to prevent this problem would be very helpful.
Thanks in advance,
Gary Abbott
When this happens next time, look into sys.dm_exec_requests to see what is blocking the requests from the clients. The blocking_session_id column will indicate who is blocking, and the wait_type and wait_resource columns will indicate what it is blocked on. You can also use the Activity Monitor to the same effect.
On a pre-grown database an online index rebuild will not block normal activity (select/insert/update/delete). The load on the server may increase as a result of the online index rebuild and this could result in overall slower responses, but it should not cause blocking.
If the database is not pre-grown, though, then the extra allocations of the index rebuild will trigger database growth events, which can be very slow if left at the default 10% increments and without instant file initialisation enabled. During a database growth event all activity in that database is frozen, and this may be your problem even if the indexes are rebuilt online. Again, Activity Monitor and sys.dm_exec_requests would both clearly show this happening.
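A minimal sketch of the sys.dm_exec_requests check described above, run while the slowdown is happening:

```sql
-- Show blocked sessions, who is blocking them, and what they are waiting on
SELECT r.session_id,
       r.blocking_session_id,  -- the session holding the contended resource
       r.wait_type,
       r.wait_resource,
       t.text AS current_sql
FROM sys.dm_exec_requests r
CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) t
WHERE r.blocking_session_id <> 0;  -- 0 means the request is not blocked
```

If the rows point at a growth event, the wait_type is typically an IO-related wait on the data or log file being extended; if they point at the rebuild itself, the blocking_session_id will be the session running the ALTER INDEX.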

Slow query / disable cache - Sybase Adaptive Server

This query seems to be running incredibly slowly (25 seconds for 4 million records!) on Sybase v10 on a client's database:
Select max(tnr) from myTable;
With tnr being the primary key.
If I run it 1000x on our server however, it seems to go fast (15 ms...) which makes me think it's because the query result is cached. Is there a way to disable the cache for this query (or entire database) in Sybase to reproduce this problem?
I tried:
call sa_flush_cache ();
call sa_flush_statistics ();
But didn't seem to do the trick.
Try dbcc cacheremove
Unfortunately dbcc cacheremove will not work, as it does not clear the pages from cache but rather removes the descriptor and places it back on the free chain.
Aside from restarting the data server, the only way to do this is to bind the object to a cache, run your tests, and then unbind the object, which will remove all of its pages from cache.
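The bind/unbind approach could look like the following sketch for Sybase ASE, assuming hypothetical cache, database and table names (and note the question mentions v10, where named caches may not be available; sa_flush_cache is a SQL Anywhere procedure, not ASE):

```sql
-- Create a named cache to bind the table to
sp_cacheconfig 'test_cache', '50M'
go
-- Bind the table so its pages are kept only in that cache
sp_bindcache 'test_cache', 'mydb', 'myTable'
go
-- ... run the timing tests here ...
-- Unbinding removes the table's pages from cache
sp_unbindcache 'mydb', 'myTable'
go
```

After the unbind, the next run of the query has to read the pages from disk again, which reproduces the cold-cache timing seen on the client's server.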