Long runtime when query is executed the first time in RedShift - amazon-redshift

I noticed that the first time I run a query on Redshift, it takes 3-10 seconds. When I run the same query again, even with different arguments in the WHERE condition, it runs fast (0.2 sec).
The query I am talking about runs on a table of ~1M rows, on 3 integer columns.
Is this huge difference in execution times caused by the fact that Redshift compiles the query the first time it is run, and then reuses the compiled code?
If yes, how can I keep this cache of compiled queries warm at all times?
One more question:
Given queryA and queryB.
Let's assume queryA was compiled and executed first.
How similar should queryB be to queryA, such that execution of queryB will use the code compiled for queryA?

The answer to the first question is yes. Amazon Redshift compiles code for the query and caches it. The compiled code is shared across sessions in a cluster, so the same query will run faster in a different session, even with different parameters, because there is no compilation overhead.
AWS also recommends using the result of the second execution of a query for benchmarks.
The answer to this question, with more details, is at the following link:
http://docs.aws.amazon.com/redshift/latest/dg/c-compiled-code.html
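
One way to confirm this behavior on your own cluster is Redshift's SVL_COMPILE system view, which records, for each query segment, whether compiled code was reused from the cache. A minimal sketch; the query ID is a placeholder you would take from STL_QUERY or SVL_QLOG:

-- compile = 1 means the segment was compiled; 0 means cached code was reused.
SELECT query,
       segment,
       compile,
       datediff(ms, starttime, endtime) AS compile_ms
FROM svl_compile
WHERE query = 12345
ORDER BY segment;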

Related

How to: Change actual execution method from "row" to "batch" - Azure SQL Server

I am having some major issues. When inserting data into my database, I am using an INSTEAD OF INSERT trigger which performs a query.
On my TEST database, this query takes much less than 1 second for insert of a single row. In production however, this query takes MUCH longer (> 30 seconds for 1 row).
When comparing the execution plans for both of them, there seem to be some CLEAR differences:
Test has: "Actual Execution Method: Batch"
Prod has: "Actual Execution Method: Row"
Test has: "Actual number of rows: 1"
Prod has: "Actual number of rows 92.000.000"
Less than a week ago production was running similar to test. But not anymore - sadly.
Can any of you help me figure out why?
I believe that if I can just get the same execution plan for both, there should be no problem.
Sometimes the query hint OPTION (HASH JOIN) helps force a query plan that uses batch processing mode. The following query, which uses the AdventureWorks2012 sample database, demonstrates what I mean.
SELECT s.OrderDate, s.ShipDate,
       SUM(d.OrderQty), AVG(d.UnitPrice), AVG(d.UnitPriceDiscount)
FROM Demo d
JOIN Sales.SalesOrderHeader s
    ON d.SalesOrderID = s.SalesOrderID
WHERE d.OrderQty > 500
GROUP BY s.OrderDate, s.ShipDate
The above query uses row mode. With the query hint added, it uses batch mode:
SELECT s.OrderDate, s.ShipDate,
       SUM(d.OrderQty), AVG(d.UnitPrice), AVG(d.UnitPriceDiscount)
FROM Demo d
JOIN Sales.SalesOrderHeader s
    ON d.SalesOrderID = s.SalesOrderID
WHERE d.OrderQty > 500
GROUP BY s.OrderDate, s.ShipDate
OPTION (HASH JOIN)
You don't get to force row vs. batch processing directly in SQL Server; it is a cost-based decision in the optimizer. You can (as you have noticed) force a previously generated plan that uses batch mode. However, there is deliberately no "always use batch mode" option, because batch mode is not always the fastest. Batch mode execution is like a turbo on a car engine: it works best when you are processing larger sets of rows, and it can be slower on small-cardinality OLTP queries.
If the same query processes 1 row in one case and 92M rows in another, then you have a bigger problem: high variance in the number of rows processed. That can make it very hard to find a plan that is optimal for all scenarios, whether the variance comes from parameter sensitivity or from the shape of the query plan itself. Ultimately, the solutions for this kind of problem are either to use OPTION (RECOMPILE), if the cost of compiling on every execution is far less than the cost of running with a bad plan, or (as you have done) to find a specific plan in the Query Store that works well enough for all cases and force it.
Hope that helps explain what is happening under the hood.
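For the OPTION (RECOMPILE) route mentioned above, a minimal sketch using the same example query; a fresh plan is compiled on every execution, so each run gets a plan suited to its actual row counts:

SELECT s.OrderDate, s.ShipDate,
       SUM(d.OrderQty), AVG(d.UnitPrice), AVG(d.UnitPriceDiscount)
FROM Demo d
JOIN Sales.SalesOrderHeader s
    ON d.SalesOrderID = s.SalesOrderID
WHERE d.OrderQty > 500
GROUP BY s.OrderDate, s.ShipDate
OPTION (RECOMPILE)  -- compile a fresh plan for this execution only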
I have found a somewhat satisfying solution to my problem.
By going into the Query Store of the database, using Microsoft SQL Server Management Studio, I was able to force a specific plan for a specific query, but only if that plan had already been generated for the query.
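The same forcing can also be done in T-SQL through the Query Store procedures; a sketch, where the query and plan IDs are placeholders you would first look up in the Query Store views:

-- Find candidate plans for the problem query.
SELECT q.query_id, p.plan_id, p.is_forced_plan
FROM sys.query_store_query AS q
JOIN sys.query_store_plan AS p ON p.query_id = q.query_id;

-- Force the plan that performed well (IDs are placeholders).
EXEC sp_query_store_force_plan @query_id = 42, @plan_id = 7;

-- To undo it later:
-- EXEC sp_query_store_unforce_plan @query_id = 42, @plan_id = 7;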

Benchmarking Redshift Queries

I want to know how long my queries take to execute, so that I can see whether my changes improve the runtime or not.
Simply timing the execution of the whole query is unsuitable, since this also includes the (highly variable) time spent waiting in an execution queue.
Redshift provides the STL_WLM_QUERY table that contains separate columns for queue wait time and execution time. However, my queries do not reliably show up in this table. For example if I execute the same query multiple times the number of corresponding rows in STL_WLM_QUERY is often much smaller than the number of repetitions. Sometimes, but not always, only one row is generated no matter how often I run the query. I suspect some caching is going on.
Is there a better way to find the actual execution time of a Redshift query, or can someone at least explain under what circumstances exactly a row in STL_WLM_QUERY is generated?
My tips:
- If possible, ensure that your query has not waited at all; if it has waited, there will be a row in stl_wlm_query. If it did wait, rerun it.
- Run the query once to compile it, then a second time to benchmark it; compile time can be significant.
- Disable the new query result caching feature, if you have it yet (you probably don't; see the sketch after this list): https://aws.amazon.com/about-aws/whats-new/2017/11/amazon-redshift-introduces-result-caching-for-sub-second-response-for-repeat-queries/
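
A minimal sketch of the last two tips; the query ID is a placeholder, and enable_result_cache_for_session only exists once the result caching feature has reached your cluster:

-- Turn off result caching for this session so reruns actually execute.
SET enable_result_cache_for_session TO off;

-- Queue wait vs. actual execution time (microsecond columns shown as seconds).
SELECT query,
       service_class,
       total_queue_time / 1000000.0 AS queue_seconds,
       total_exec_time  / 1000000.0 AS exec_seconds
FROM stl_wlm_query
WHERE query = 12345;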

PostgreSQL approaching plan caching on identical queries?

I am running some benchmark tests on a lot of queries. I have a set of queries, and they will be run multiple times after each other. I know that PostgreSQL caches query plans, so this is important to consider, but as far as I know this does not always happen.
So I have two approaches. I am considering either (a) forcing the query plan to be generated each time I run a query, or (b) 'warming up' a bit so that a plan is cached and reused each time. How can I do either, and what precautions can I take to ensure that one or the other is happening?
It would be great if I could monitor plans in the cache but I am not sure if it is possible.
UPDATE: My queries are complex SELECTs that retrieve data, no DELETEs/INSERTs etc. Does this mean I should worry less about plan caching in my benchmarks?
PostgreSQL only caches query plans if:
- you use prepared statements, or
- the statement is executed inside a PL/pgSQL function.
So if you want to benchmark how much faster your queries become when you avoid the overhead of planning, you should create a prepared statement and execute it at least six times (because the first five runs will always generate a custom plan).
If your queries are complex, odds are that you might even lose out by caching query plans, particularly if the runtime of the queries is long. In such cases it is usually better to spend more effort planning each query. The biggest win from prepared statements comes when the execution time of the queries is low.
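
A minimal sketch of that benchmark flow; the table and column are hypothetical stand-ins for your own query:

-- The first five EXECUTEs generate custom plans; after that, PostgreSQL
-- may switch to a cached generic plan.
PREPARE bench_query(int) AS
    SELECT count(*) FROM orders WHERE customer_id = $1;

EXECUTE bench_query(42);                  -- repeat at least six times
EXPLAIN ANALYZE EXECUTE bench_query(42);  -- inspect the plan actually used

DEALLOCATE bench_query;

On PostgreSQL 12 and later, the plan_cache_mode setting can also force generic or custom plans outright, which suits exactly this kind of benchmark.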

Executing the same query gives different runtimes in PostgreSQL

I just want to know the reason for getting different execution times when running the same query in PostgreSQL.
For example: select * from datas;
The first time it takes 45 ms. The second time the same query takes 55 ms, and the next time it takes yet another value. Can anyone say why the time is not constant?
Simple: every time, the database has to read the whole table and retrieve the rows. There might be a hundred different things happening in the database that cause a difference of a few milliseconds. There is no need to panic; this is bound to happen. You can expect the operation to take the same time to within a few milliseconds. If there is a huge difference, then it is something that has to be looked into.
Have you applied indexing to your table? It can also increase speed a great deal!
Compiling the explanations from other references:
As matt b referenced, the EXPLAIN statement displays the execution plan that the PostgreSQL planner generates for the supplied statement. The execution plan shows how the table(s) referenced by the statement will be scanned (by plain sequential scan, index scan, etc.) and, if multiple tables are referenced, what join algorithms will be used to bring together the required rows from each input table.
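A quick sketch, using the table from the question above:

EXPLAIN ANALYZE SELECT * FROM datas;

Unlike plain EXPLAIN, EXPLAIN ANALYZE actually executes the statement and reports real row counts and timings, which is what you want when chasing variable runtimes.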
And as Pablo Santa Cruz referenced:
You need to change your PostgreSQL configuration file and enable this property:
log_min_duration_statement = -1 # -1 is disabled, 0 logs all statements
# and their durations, > 0 logs only
# statements running at least this number
# of milliseconds
After that, execution times will be logged, and you will be able to figure out exactly how badly (or well) your queries are performing.
Well, that's the case with just about every app on every computer. Sometimes the operating system is busier than at other times, so it takes more time to hand over the memory you ask for, or your app gets fewer CPU time slices, or whatever.

PostgreSQL. Slow queries in log file are fast in psql

I have an application built on Play Framework 1.2.4 with Hibernate (default C3P0 connection pooling) and a PostgreSQL database (9.1).
Recently I turned on slow query logging (>= 100 ms) in postgresql.conf and found some issues.
But when I tried to analyze and optimize one particular query, I found that it is blazing fast in psql (0.5-1 ms), compared with 200-250 ms in the log. The same thing happened with the other queries.
The application and database server is running on the same machine and communicating using localhost interface.
JDBC driver - postgresql-9.0-801.jdbc4
I wonder what could be wrong, because the query duration in the log is calculated from database processing time only, excluding external things like network round trips.
Possibility 1: If the slow queries occur occasionally or in bursts, it could be checkpoint activity. Enable checkpoint logging (log_checkpoints = on), make sure the log level (log_min_messages) is 'info' or lower, and see what turns up. Checkpoints that are taking a long time or happening too often suggest you probably need some checkpoint/WAL and bgwriter tuning. This isn't likely to be the cause if the same statements are always slow and others always perform well.
Possibility 2: Your query plans are different because you're running them directly in psql while Hibernate, via PgJDBC, will at least sometimes be doing a PREPARE and EXECUTE (at the protocol level so you won't see actual statements). For this, compare query performance with PREPARE test_query(...) AS SELECT ... then EXPLAIN ANALYZE EXECUTE test_query(...). The parameters in the PREPARE are type names for the positional parameters ($1,$2,etc); the parameters in the EXECUTE are values.
If the prepared plan is different from the one-off plan, you can set PgJDBC's prepare threshold via connection parameters to tell it never to use server-side prepared statements.
This difference between the plans of prepared and unprepared statements should go away in PostgreSQL 9.2. It's been a long-standing wart, but Tom Lane dealt with it for the upcoming release.
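A sketch of that comparison in psql; the table, column, and parameter value are placeholders for your own slow query:

-- Plan the statement the way PgJDBC would: prepared, with a positional parameter.
PREPARE test_query(int) AS SELECT * FROM some_table WHERE id = $1;
EXPLAIN ANALYZE EXECUTE test_query(42);

-- Compare against the one-off plan with the literal inlined.
EXPLAIN ANALYZE SELECT * FROM some_table WHERE id = 42;

DEALLOCATE test_query;

If the prepared plan is the slow one, PgJDBC's prepareThreshold connection parameter (for example, prepareThreshold=0 in the JDBC URL) should stop the driver from using server-side prepared statements.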
It's very hard to say for sure without knowing all the details of your system, but I can think of a couple of possibilities:
The data needed by the query is cached. If you run the same query twice in a short space of time, it will almost always complete much more quickly on the second pass, because PostgreSQL keeps recently retrieved data in its buffer cache for just this purpose. If you are pulling the queries from the tail of your log and executing them immediately, this could be what's happening.
Other processes are interfering. The execution time of a query varies depending on what else is going on in the system. If the queries take 100 ms during peak hours, when a lot of users are connected to your website, but only 1 ms when you try them again late at night, this could be what's happening.
The point is you are correct that the query duration isn't affected by which library or application is calling it, so the difference must be coming from something else. Keep looking, good luck!
There are several possible reasons. First, if the database was very busy when the slow queries executed, they may have run more slowly. So you may want to observe the load on the OS at that moment for further analysis.
Second, the historical plan for the SQL may differ from the plan in the current session. You may need to install auto_explain to see the actual plan of the slow query.
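A minimal postgresql.conf sketch for auto_explain, assuming the same 100 ms threshold as the slow query log:

# Load auto_explain at server start (requires a restart).
shared_preload_libraries = 'auto_explain'

# Log the plan of any statement running for at least 100 ms.
auto_explain.log_min_duration = 100
auto_explain.log_analyze = on    # include actual row counts and timings (adds overhead)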