Optimizing a really long query with a lot of JOINs on Redshift - amazon-redshift

I need help with a really long existing query that runs on Amazon Redshift. It is hitting a WLM timeout, so I need to optimize or improve it somehow so that it no longer times out.
Here is the code:
https://drive.google.com/file/d/1CsgFNisb77qWc3t5KUVQkVGA4iPtk5ER/view?usp=sharing
Any ideas?
Thank you.
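As a first diagnostic (a hedged sketch, not specific to the linked SQL), the Redshift system tables show whether the time is going to the WLM queue or to execution; 12345 below stands in for the actual query id from stl_query:

-- Times in stl_wlm_query are in microseconds.
SELECT query,
       service_class,
       total_queue_time / 1000000.0 AS queue_seconds,
       total_exec_time  / 1000000.0 AS exec_seconds
FROM   stl_wlm_query
WHERE  query = 12345;

If most of the time is execution rather than queueing, running EXPLAIN on the statement and looking for nested-loop joins or DS_BCAST_INNER redistribution between the large tables is usually the next step; if most of it is queueing, the WLM configuration itself is the bottleneck.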

Related

Improving Postgres query performance by decreasing the cost of the query

I am tuning one of the existing queries in my project, and its execution time has become quite a bit longer than usual.
Below is a screenshot that shows the performance of the query.
Could someone please help me with a way to improve the performance of this query?
Any small suggestion would be a great help. Thanks in advance.

Db2 performance - many batch programs inserting rows in the same table

Hi there, I'm looking for advice from someone who is good at IBM Db2 performance.
I have a situation in which many batch tasks are massively inserting rows into the same Db2 table at the same time.
This situation looks potentially bad. I don't think Db2 is able to resolve the many requests quickly enough, causing the concurrent tasks to take longer to finish and even causing some of them to abend with a -904 or -911 SQLCODE.
What do you think? Should situations like these be avoided? Are there techniques that could improve the performance of the batch tasks and keep them from abending or running too slowly?
Thanks.
Inserting should not be a big problem; ETL workloads (e.g. with DataStage) do this all the time.
I suggest running
ALTER TABLE <tabname> APPEND ON
This avoids the free-space search - details can be found here.
As for the errors reported, the information provided is not sufficient to determine their cause.
There are several things to consider. The indexes on the table are one. Append mode works well to relieve last-page contention, but you could also see contention on the statement itself for the variation lock, and you could have issues with the transaction logs if they are not fast enough.
What are the table, indexes, and statement? With those we may be able to work out how to do it. What hardware are you using, and what I/O subsystem is used for the transaction logs and the database tablespaces?
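As a hedged sketch (assuming Db2 for LUW; on z/OS the catalog views differ, e.g. SYSIBM.SYSINDEXES), with MYSCHEMA.MYTAB as a placeholder table:

-- What indexes exist on the table? Each one is extra work per INSERT.
SELECT indname, uniquerule, colnames
FROM   syscat.indexes
WHERE  tabschema = 'MYSCHEMA'
  AND  tabname   = 'MYTAB';

-- Relieve last-page contention by appending new rows at the end of the table.
ALTER TABLE myschema.mytab APPEND ON;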

PostgreSQL - query behaves inconsistently - what is causing this?

I have a query in PostgreSQL (rather a function call) which normally returns in, say, 5-6 seconds. This happens in most cases (90%+, I think). Sometimes, though, this same function call takes 10-20 minutes or even 1-2 hours. The parameters passed to the function in this "slower case" are the same as in the "faster case".
What could be causing this? Is it possible that PostgreSQL picks a different execution plan even though the parameters are exactly the same?
Since I will be asked about the overall server load... I don't think it's related. I believe I have seen cases where my func call is slow even without any significant additional load on the server (by other client sessions).
So when the query will be slow seems completely random to me. But logically speaking, I know it cannot be random; it must be influenced by some factor.
That's exactly my point here: what is this factor? This seems like a deep issue, so any good suggestions or hints would be highly appreciated.
Many thanks in advance.
I added ANALYZE "tbl" statements for all temp tables used in my function after inserting the data into them. I also added a few indexes to some of the largest temp tables.
This seems to have fixed the issue. I guess I will never know what exactly the issue was, but it seems to me Postgres was picking a different execution plan even for the same func arguments.
Now that I explicitly say "go and analyze these temp tables", Postgres seems to always pick the same, fast execution plan/path.
Just posting this answer here so that others may try it too, if they run into a similar problem.
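For reference, a minimal sketch of that pattern (the function, tables, and columns below are made up for illustration):

-- Hypothetical function: ANALYZE the temp table right after filling it,
-- so the planner has fresh statistics when the temp table is joined later.
CREATE OR REPLACE FUNCTION report_totals()
RETURNS bigint
LANGUAGE plpgsql
AS $$
DECLARE
    result bigint;
BEGIN
    CREATE TEMP TABLE tmp_orders ON COMMIT DROP AS
        SELECT id, customer_id, amount
        FROM   orders
        WHERE  created_at > now() - interval '30 days';

    CREATE INDEX ON tmp_orders (customer_id);  -- for the largest temp tables

    ANALYZE tmp_orders;  -- the key line

    SELECT count(*) INTO result
    FROM   tmp_orders t
    JOIN   customers c ON c.id = t.customer_id;

    RETURN result;
END;
$$;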
Yes, Postgres can pick a different execution plan, which may be the cause of the issue, but that seems unlikely to me.
Have you looked at pg_stat_activity while the query is running for a long time? Check to see that it is not stuck waiting on some other process that has obtained a lock.
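For reference, something along these lines while the call is slow (the wait_event columns exist in PostgreSQL 9.6 and later):

-- Non-idle sessions, what they are waiting on, and how long they have run.
SELECT pid,
       state,
       wait_event_type,
       wait_event,
       now() - query_start AS runtime,
       left(query, 80)     AS query
FROM   pg_stat_activity
WHERE  state <> 'idle'
ORDER  BY runtime DESC;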

MongoDB gets slow when a large amount of data is already stored?

I am a new user of MongoDB and am currently doing a stress test: about 100 thousand documents are inserted every 5 seconds using 10 threads, and we have already stored a few hundred million documents. The DB is getting gravely slow. When I restart the computer it gets faster for a while, but it slows down again after a short period. Why is that? Can I do something to avoid it?
Please share more information. What queries do you have? Please don't take this as a complete answer, because it is just a suggestion (I cannot add comments, unfortunately). You may try this:
http://docs.mongodb.org/manual/reference/command/repairDatabase/

Performance Tuning

How can I find a query that has been running for a long time, and what are the steps for tuning it? (Oracle)
Run EXPLAIN PLAN FOR SELECT ... to see what Oracle is doing with your query.
Post your query here so that we can look at it and help you out.
Check out the Oracle Performance Tuning FAQ for some tricks-of-the-trade, if you will.
You can capture the query by selecting from v$sql or v$sqltext.
If you are not familiar with it, look up 'Explain Plan' in the Oracle documentation; there should be plenty on it in the Performance Tuning Guide.
Have a look at Quest Software's Toad for a third-party tool that helps in this area too.
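As a rough sketch of both suggestions (the ORDERS table and the LIKE pattern are only placeholders):

-- Find candidate statements in the shared pool.
SELECT sql_id, executions, elapsed_time, sql_text
FROM   v$sql
WHERE  sql_text LIKE '%ORDERS%';

-- Explain a statement and display its plan.
EXPLAIN PLAN FOR
SELECT * FROM orders WHERE order_date > SYSDATE - 7;

SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);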
Unfortunately your question is not expressed clearly. The other answers have already tackled the issue of tuning a known bad query, but another interpretation is that you want to monitor your database to find poorly performing queries.
If you don't have Enterprise Edition with the Diagnostics Pack - and not many of us do - your best bet is to run statspack snapshots on a regular basis. This will give you a lot of information about your system, including which queries take a long time to complete and which queries consume a lot of your system's resources. You can find out more about statspack here.
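Taking a snapshot is just a call to the package (a sketch, assuming statspack is installed under the usual PERFSTAT schema and the call is scheduled, e.g. hourly, via DBMS_JOB or DBMS_SCHEDULER):

-- From SQL*Plus: take a snapshot now.
EXEC statspack.snap;

The report between two snapshot ids is then built with the spreport.sql script in $ORACLE_HOME/rdbms/admin.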
If you do not want to use OEM, you can query the data dictionary and find out.
First, find the long-running query. If it is currently executing, you can query gv$session to find which sessions have been running for a long time (look at the last_call_et column) and join to gv$sql for the SQL details. If the SQL executed some time in the past, you can use the dba_hist_snapshot, dba_hist_sqlstat, and dba_hist_sqltext tables to find the offending SQL.
Once you have the query, you can check what plan it is picking: from the dba_hist_sql_plan table if the SQL executed in the past, or from gv$sql_plan if it is currently executing.
Now analyze the execution plan and see whether it is using the right indexes, joins, etc. If not, tune those.
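For example (a sketch; the 600-second threshold and the 'USER' filter are arbitrary choices):

-- Sessions active for more than 10 minutes, with their current SQL.
SELECT s.inst_id,
       s.sid,
       s.username,
       s.last_call_et AS seconds_active,
       q.sql_id,
       q.sql_text
FROM   gv$session s
JOIN   gv$sql q
       ON  q.sql_id       = s.sql_id
       AND q.inst_id      = s.inst_id
       AND q.child_number = s.sql_child_number
WHERE  s.status       = 'ACTIVE'
  AND  s.type         = 'USER'
  AND  s.last_call_et > 600
ORDER  BY s.last_call_et DESC;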
Let me know which step you have a problem with; I can help you with those.