Is postgres caching my query?

I have a pretty simple snippet of Python code to run a Postgres query then send the results to a dashboard. I'm using psycopg2 to periodically run the same query. Let's not worry about the looping mechanism for now.
conn = psycopg2.connect(<connection info>)
while True:
    # Run query and update dashboard
    cur = conn.cursor()
    cur.execute(q_tcc)
    query_results = cur.fetchall()
    update_dashboard(query_results)
    time.sleep(5)
For reference, the actual query is:
q_tcc = """SELECT client_addr, application_name, count(*) cnt FROM pg_stat_activity
GROUP BY client_addr, application_name ORDER BY cnt DESC;"""
When I run this, I keep getting the same results even though they should be changing. If I move the psycopg2.connect() line into the loop with a conn.close(), everything works fine. According to the connection and cursor docs, however, I should be able to keep using the same cursor (and, therefore, connection) the whole time.
Does this mean Postgres is caching my query on a per-client-connection basis?

PostgreSQL doesn't have a query cache.
However, if you're using SERIALIZABLE isolation, you might be seeing the same snapshot of the data, since you appear to do all your queries within a single transaction.
You should really commit (or roll back) the transaction after each query in your loop, e.g. with conn.rollback().
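A minimal sketch of the same loop with the transaction ended after every fetch (q_tcc, update_dashboard, and the connection placeholder are as defined in the question):

import time
import psycopg2

conn = psycopg2.connect("<connection info>")

while True:
    cur = conn.cursor()
    cur.execute(q_tcc)
    query_results = cur.fetchall()
    cur.close()
    conn.rollback()   # end the transaction so the next execute() gets a fresh snapshot
    update_dashboard(query_results)
    time.sleep(5)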


Postgresql - Unable to see exact Query or Transaction that blocked my actual Transaction

Note: I'm using DBeaver 21.1.3 as my PostgreSQL development tool.
For my testing, I have created a table:
CREATE TABLE test_sk1(n numeric(2));
Then I disabled auto-commit in DBeaver to verify whether I could see the blocking query for my other transaction.
I then executed an insert on my table:
INSERT INTO test_sk1(n) values(10);
Now this insert transaction is uncommitted, so it holds a lock on the table.
Then I opened another SQL window and tried an ALTER command on the table.
ALTER TABLE test_sk1 ADD COLUMN v VARCHAR(2);
Now I see the alter transaction got blocked.
But when I check the locks, I see that this ALTER is blocked by the "SHOW search_path;" query, whereas I expected the "INSERT ..." statement as the blocking query.
I used the query below to fetch the lock information:
SELECT p1.pid,
p1.query as blocked_query,
p2.pid as blocked_by_pid,
p2.query AS blocking_query
FROM pg_stat_activity p1, pg_stat_activity p2
WHERE p2.pid IN (SELECT UNNEST(pg_blocking_pids(p1.pid)))
AND cardinality(pg_blocking_pids(p1.pid)) > 0;
Why is this happening on our databases?
Try using psql for such experiments. DBeaver and many other tools will execute many SQL statements that you didn't intend to run.
The query that you see in pg_stat_activity is just the latest query done by that process, not necessarily the one locking any resource.
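If you want to see this from SQL, here is a hedged sketch (psycopg2 is used only to keep the example in Python; you can paste the same statement into psql) that extends the lock query from the question with the blocker's state and xact_start. The blocking backend shows up as idle in transaction: its last statement happens to be DBeaver's SHOW search_path, but the lock is held by the still-open transaction that ran the INSERT.

import psycopg2

# Same idea as the query in the question, plus the blocker's state and
# transaction start time so the idle-in-transaction session is obvious.
LOCK_QUERY = """
SELECT p1.pid,
       p1.query      AS blocked_query,
       p2.pid        AS blocking_pid,
       p2.query      AS last_query_of_blocker,
       p2.state      AS blocker_state,
       p2.xact_start AS blocker_xact_start
FROM pg_stat_activity p1
JOIN pg_stat_activity p2
  ON p2.pid = ANY (pg_blocking_pids(p1.pid))
WHERE cardinality(pg_blocking_pids(p1.pid)) > 0;
"""

with psycopg2.connect("dbname=test") as conn:   # connection string assumed
    with conn.cursor() as cur:
        cur.execute(LOCK_QUERY)
        for row in cur.fetchall():
            print(row)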

Postgres statement_timeout does not work as expected

I am working on a use case where, in my current API application, I need to kill any query that has been running for more than 30 seconds (my server has a timeout of 30 seconds, but the query keeps running on Postgres).
After some searching I came across the statement_timeout setting in Postgres and implemented it in my SQLAlchemy code like this:
from contextlib import contextmanager
from sqlalchemy import create_engine
from sqlalchemy.orm import scoped_session, sessionmaker

@contextmanager
def db_session():
    """Executes the query."""
    import os
    from my_aws import secretsmanager
    secret_name = f'<my_secret_key>'
    secret = secretsmanager.get_secret(secret_name)
    conn = f'{secret["dbname"]}://{secret["username"]}:{secret["password"]}@' \
           f'{secret["host"]}:{secret["port"]}/{secret["dbname"]}'
    # Ask the server to cancel any statement running longer than 30 seconds
    eng = create_engine(
        conn,
        connect_args={'options': '-c statement_timeout=30s'})
    connection = eng.connect()
    db_session = scoped_session(sessionmaker(autocommit=False, autoflush=True, bind=eng))
    yield db_session
    db_session.close()
    connection.close()
My expectation here was that any query which cannot complete within 30 seconds should time out and return an error.
To test this, I place a lock on one of my tables to delay my queries:
BEGIN WORK;
LOCK TABLE <schema>.<table_name> IN ACCESS EXCLUSIVE mode;
Then I trigger an API call which queries the locked table (from above). The API does not respond, as expected, because the query cannot complete within 30 seconds.
However, the query does not terminate, and I can still see it running in pg_stat_activity:
SELECT pid, age(clock_timestamp(), query_start), usename, query
FROM pg_stat_activity
WHERE query != '<IDLE>' AND query NOT ILIKE '%pg_stat_activity%' and usename='api_user'
ORDER BY query_start desc;
The above query gives this response:
pid |age |usename |query
----|---------------|--------|-------------------------------
3334|00:05:17.962059|api_user|SELECT count(*) AS count_1 ¶FRO
1752|00:05:22.577919|api_user|COMMIT
1754|00:05:22.627446|api_user|COMMIT
3270|00:05:22.791417|api_user|SELECT count(*) AS count_1 ¶FRO
1755|00:05:23.058261|api_user|COMMIT
1753|00:05:23.123582|api_user|COMMIT
1689|00:05:24.149163|api_user|SELECT count(*) AS count_1 ¶FRO
1759|00:05:24.579171|api_user|SELECT DISTINCT sum(public.dema
1760|00:05:24.631371|api_user|SELECT count(*) AS count_1 ¶FRO
As you can see, the queries on the locked table have been waiting for more than 5 minutes.
Is there something wrong with my understanding of statement_timeout here?
FYI: I can see that the timeout is set on Postgres, as shown by this query:
show statement_timeout;
Result:
statement_timeout|
-----------------|
30s |
I recommend that you set the parameter in postgresql.conf (then it is valid for the whole PostgreSQL server) or with ALTER DATABASE (then it is valid only for new connections to that database).
If that does not do the trick, the setting must be overridden somewhere. To debug, run the following SQL statement using SQLAlchemy:
SELECT current_setting('statement_timeout');
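For example, a minimal sketch using the db_session helper from the question (text() is the standard SQLAlchemy way to run a literal SQL string):

from sqlalchemy import text

with db_session() as session:
    # If this does not print the value you expect (e.g. "30s"), something else
    # is overriding the option passed via connect_args.
    timeout = session.execute(text("SELECT current_setting('statement_timeout')")).scalar()
    print(timeout)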
However, when I look at your query, perhaps everything is working anyway: add the state column to your pg_stat_activity query and check whether the state is indeed active. Perhaps the query has already been canceled, and the state is idle or idle in transaction (aborted) (note that query shows the last query on that connection, which need not still be running).
I think the statement_timeout should be a value in milliseconds. If you are really passing in 30s, that might be the wrong parameter value. Try using 30000 for 30 seconds:
eng = create_engine(
    conn,
    connect_args={'options': '-c statement_timeout=30000'})

Postgres: Perform statement and limit

I have the following query which I run every night.
perform distinct fn_debtor_summary( clientacc) from client where not clientacc is null;
However, because the function is quite slow, when debugging I like to work off a small subset of the data, so I use the following query:
perform distinct fn_debtor_summary( clientacc) from client where not clientacc is null limit 10;
However, I find that the LIMIT doesn't work: it runs the function against the whole table.
Any ideas why this is happening and how I could run it against a small subset of the data without creating temporary tables?
PostgreSQL runs the function on every row in the PERFORM query before applying the LIMIT. So even though it returns only 10 rows, it will still run the function more than 10 times.
The solution is to use a subquery. Interestingly, PERFORM doesn't work here, but a plain SELECT works just as well:
select fn_debtor_summary( limitclients.clientacc) from (select clientacc from client limit 1) limitclients;

SQL Server OpenQuery() behaving differently than a direct query from TOAD

The following query works efficiently when run directly against Oracle 11 using TOAD (with native Oracle drivers)
select ... from ... where ...
and srvg_ocd in (
select ocd
from rptofc
where eff_endt = to_date('12/31/9999','mm/dd/yyyy')
and rgn_nm = 'Boston'
) ...
;
The exact same query "never" returns if passed from SQL Server 2008 to the same Oracle database via openquery(). SQL Server has a link to the Oracle database using an Oracle Provider OLE DB driver.
select * from openquery( servername, '
select ... from ... where ...
and srvg_ocd in (
select ocd
from rptofc
where eff_endt = to_date(''12/31/9999'',''mm/dd/yyyy'')
and rgn_nm = ''Boston''
) ...
');
The query doesn't return in a reasonable amount of time, and the user kills the query. I don't know if it would eventually return with the correct result.
This result where the direct TOAD query works efficiently and the openquery() version "never" returns is reproducible.
A small modification to the openquery() gives the correct efficient result: Change eff_endt to trunc(eff_endt).
That is well and good, but it doesn't seem like the change should be necessary.
openquery() is supposed to be pass through, so how can there be a difference between the TOAD and openquery() behavior?
The reason we care is because we frequently develop complex queries with TOAD directly accessing Oracle. Once we have the query functioning and optimized, we convert it to an openquery() string for use in a SQL Server application. It is extremely aggravating to have a query suddenly fail with openquery() when we know it worked as a direct query. Then we have to search for a work-around through trial and error.
I would like to see the Oracle trace files for the two scenarios, but the Oracle server is within another organization, and we are not getting cooperation from the Oracle DBAs.
Does anyone know of any driver, or TOAD, or ??? issues that could account for the discrepancy? Is there any way to eliminate the problem such that both methods always give the same result?
I know you asked this a while ago but I just came across your question.
I agree, they should be the same. Obviously there is a difference. We need to find out where the difference is.
I am thinking out loud as I type...
What happens if you specify just a few columns instead of select * from openquery?
How many rows are supposed to be returned?
What if, in the oracle select, you limit the returned rows?
How quickly does the openquery timeout?
Are TOAD and SS on the same machine? Are you RDPing into the SS and running toad from there?
Are they using the same drivers, including bitness (32/64) and version?
Are they using the same account on oracle?
It is interesting that using the trunc() makes a difference. I assume [eff_endt] is one of the returned fields?
I am wondering if SS is getting all the rows back but is choking on the date conversions. The date type in Oracle may need to be converted to a SS date type before SS shows it to you.
What if you insert the rows from the openquery into a table where the date field is just a (n)varchar? I am thinking SS might just dump the date it is getting back from Oracle into that text field without trying to convert it.
something like:
insert into mytable(f1,f2,f3,datetimeX)
select f1,f2,f3,datetimeX from openquery( servername, '
select f1,f2,f3,datetimeX from ... where ...
and srvg_ocd in (
select ocd
from rptofc
where eff_endt = to_date(''12/31/9999'',''mm/dd/yyyy'')
and rgn_nm = ''Boston''
) ...
');
What if TOAD or SS is modifying the query statement before sending it to Oracle? You could fire up Wireshark and see what TOAD and SS are actually sending.
I would be very curious to see if you get this resolved. I link SS to Oracle often and have not run into this issue.
Here are basic things you can check for to see what the database is doing after it receives the query. First, check that the execution plans are the same in TOAD as when the query runs using openquery. You could plan the query yourself in TOAD using:
explain plan set statement_id = 'openquery_test' for <your query here>;
select *
from table(dbms_xplan.display(statement_id => 'openquery_test'));
then have someone initiate the query using openquery() and have someone with permission to view the v$ tables run:
select sql_id from v$session where username = '<user running the query>';
(If there's more than one connection with the same user, you'll have to find an additional attribute to isolate the row representing the session running the query.)
select *
from table(dbms_xplan.display_cursor('<value from query above>'));
If those look the same then I'd move on to checking database waits and see what it's stuck on.
select se.username
, sw.event
, sw.p1text
, sw.p2text
, sw.p3text
, sw.wait_time_micro/1000000 as seconds_in_wait
, sw.state
, sw.time_since_last_wait_micro/1000000 as seconds_since_last_wait
from v$session se
inner join
v$session_wait sw
on se.sid = sw.sid
where se.username = '<user running the query>'
;
(again, if there's more than one session with the same username, you'll need another attribute to whittle it down to the one you're interested in.)
If the plans are different, then you need to find out why, or if they're the same, look into what it's waiting on (e.g. SQL*Net message to client ?) and why.
I noticed a difference using OLEDB through MS Access (2013) connecting to Oracle 10g & 11g tables, in that it did not always recognize indexes or primary keys on the Oracle tables properly. The same query through an MS Access 2000 database (using odbc) worked fine / had no problem with indexes & keys. The only way I found to fix the OLEDB version was to include all of the key fields in the SELECT -- which was not a satisfying answer, but it's all I could find. This might be an option to try through SSMS / OpenQuery(...) as well.
Besides that... you can try some alternatives to OPENQUERY, such as:
4-part names: SELECT ... FROM Server..Schema.Table
Execute AT: EXEC ('select...') at linked server
But as for why the OLEDB provider works differently than the native Oracle Provider -- the providers are not identical, and the native provider would be more likely to pave-over Oracle quirks than the more generic OLEDB provider would.

Npgsql with PostgreSQL: Can't see uncommitted changes with UNCOMMITTED READ

I'm using Npgsql with PostgreSQL. I want to see uncommitted changes of one transaction in a different one.
This is how I create my connection and transaction:
// create connection
m_Connection = new NpgsqlConnection(connectionString);
m_Connection.Open();
//create transaction
m_Transaction = m_Connection.BeginTransaction(IsolationLevel.ReadUncommitted);
In one thread I insert a row like so:
NpgsqlCommand command = CreateCommand("INSERT INTO TABLEA ....", parameters, commandTimeout)
command.ExecuteNonQuery();
and process something else without committing or rolling back the transaction.
In a different thread I read a row like so:
NpgsqlCommand command = CreateCommand("SELECT COUNT(*) FROM TABLEA", parameters, commandTimeout);
command.ExecuteScalar();
but somehow I don't see the results of the first INSERT. I don't see the results of the insert in pgAdmin either (even after running SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED).
What am I doing wrong? Any help will be appreciated.
PostgreSQL does not support uncommitted reads.
You will never be able to see changes from other transactions that are not committed.
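As a hedged illustration (psycopg2 rather than Npgsql, and the table and connection details are assumed), PostgreSQL accepts a request for READ UNCOMMITTED but silently treats it as READ COMMITTED, so the uncommitted row never becomes visible:

import psycopg2
from psycopg2 import extensions

writer = psycopg2.connect("dbname=test")   # connection strings assumed
reader = psycopg2.connect("dbname=test")
reader.set_isolation_level(extensions.ISOLATION_LEVEL_READ_UNCOMMITTED)

with writer.cursor() as cur:
    cur.execute("INSERT INTO tablea(n) VALUES (1)")   # left uncommitted on purpose

with reader.cursor() as cur:
    cur.execute("SELECT count(*) FROM tablea")
    print(cur.fetchone()[0])   # does not include the uncommitted row

writer.rollback()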