Code in my MATLAB app:
query = sprintf('select *,st_askml(line) from %s;', table_name);
var = fetch(connection, query);
This completes successfully, I get the data, and the app continues running. However, if I separately (in this case, from a Python script) run DROP TABLE IF EXISTS on the same table, it won't work because the table is locked.
How should I change my select query in MATLAB so that it finishes gracefully? (By the way, the lock is released when I close the app.)
I am open to hearing more details about this, but I solved the problem by adding
execute(connection,'COMMIT');
after my select query.
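A note on what is likely going on (an assumption, not verified against the MATLAB Database Toolbox internals): with autocommit off, fetch leaves a transaction open that holds an ACCESS SHARE lock on the table, while DROP TABLE needs an ACCESS EXCLUSIVE lock, so the Python side blocks until the MATLAB session commits. If you want the Python side to fail fast instead of hanging, here is a minimal sketch; psycopg2, the connection string, and the table name are assumptions:

import psycopg2

conn = psycopg2.connect("dbname=mydb user=me")  # hypothetical connection string
conn.autocommit = True
with conn.cursor() as cur:
    # Give up after 2 seconds instead of blocking on the open
    # transaction the MATLAB session is holding.
    cur.execute("SET lock_timeout = '2s'")
    cur.execute("DROP TABLE IF EXISTS my_table;")  # hypothetical table name
conn.close()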
What I am trying to achieve is to have multiple instances of the same application running at the same time, but with only one of those instances running a given cron job, enforced by a lock in a Postgres database.
My solution so far:
Running a cron on all the instances.
Inserting a row in a table cron_lock with a unique identifier for the cron.
If I have an error while running the insert query, it is most likely because the row already exists (the cron identifier is the primary key of the table). If that is the case, I do nothing, and I exit.
If I don't have an error while running the insert query, then the application instance will run the cron process.
At the end of my process, I delete the row with the unique identifier.
This solution works, but I am wondering whether Postgres offers another locking mechanism, in particular one that would not require me to execute queries that deliberately produce errors.
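As an aside, the insert-based approach itself can be made error-free with INSERT ... ON CONFLICT DO NOTHING (PostgreSQL 9.5+), checking the affected row count instead of catching an error. A minimal sketch in Python, assuming psycopg2 and a cron_lock table whose primary key column is named cron_id (the column name is a hypothetical placeholder):

import psycopg2

def try_acquire(conn, cron_id):
    # Returns True if this instance won the lock; no error is raised
    # when the row already exists, the insert simply affects zero rows.
    with conn.cursor() as cur:
        cur.execute(
            "INSERT INTO cron_lock (cron_id) VALUES (%s) "
            "ON CONFLICT DO NOTHING",
            (cron_id,),
        )
        conn.commit()
        return cur.rowcount == 1

def release(conn, cron_id):
    # Delete the row at the end of the process, as described above.
    with conn.cursor() as cur:
        cur.execute("DELETE FROM cron_lock WHERE cron_id = %s", (cron_id,))
        conn.commit()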
Thanks to @Belayer I found a nice way to do it with advisory locks.
Here is my solution:
Each of my crons has an associated and unique ID (an integer).
All the crons start on all the different servers, but before running its main function, each cron tries to take an advisory lock on its unique ID in the database. If it gets the lock, it runs the main function and then frees the lock; otherwise it just stops.
And here is some pseudocode if you want to implement it in a language of your choice:
enum Cron {
    Echo = 1,
    Test = 2
}

function uniqueCron(id, mainFunction) {
    result = POSTGRES('SELECT pg_try_advisory_lock($id) AS "should_run"')
    if (result == FALSE) { return }
    mainFunction()
    POSTGRES('SELECT pg_advisory_unlock($id)')
}

cron(* * * * *) do {
    uniqueCron(Cron.Echo, (echo "Unique cron"))
}

cron(*/5 * * * *) do {
    uniqueCron(Cron.Test, (echo "Test"))
}
Running this process many times, or on many different servers, all against the same database, results in only one mainFunction being executed at a time, provided all the crons are launched at the same time (same time/timezone on the different servers). A main function that finishes too quickly might cause problems: one server could try to get the lock after another has already released it, and the function would run twice. In that case, wait a little before releasing the lock.
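For a concrete version of the pseudocode above, here is a minimal sketch in Python, assuming psycopg2; the connection string and job bodies are hypothetical placeholders. Note that the advisory lock is session-level, so it must be released on the same connection that acquired it:

import psycopg2

CRON_ECHO = 1
CRON_TEST = 2

def unique_cron(conn, lock_id, main_function):
    with conn.cursor() as cur:
        # Only the first session to ask gets the lock; everyone else sees false.
        cur.execute("SELECT pg_try_advisory_lock(%s)", (lock_id,))
        if not cur.fetchone()[0]:
            return
        try:
            main_function()
        finally:
            # Release on the same connection, since the lock is session-level.
            cur.execute("SELECT pg_advisory_unlock(%s)", (lock_id,))

conn = psycopg2.connect("dbname=mydb user=me")  # hypothetical
conn.autocommit = True
unique_cron(conn, CRON_ECHO, lambda: print("Unique cron"))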
I am using SQL Workbench/J to query Redshift data. I am facing an issue where tables get locked whenever I query them; it happens even for simple select statements. I know this is happening because Workbench implicitly adds a begin to every statement to take care of any changes happening to the data, so for every query I need to write an end transaction.
Is there any option to disable the begin statement or to add the end transaction statement automatically in SQL Workbench/J?
When you set up the Redshift connection, tick the "Autocommit" option.
See here for more detailed instructions, especially point 10:
https://docs.aws.amazon.com/redshift/latest/mgmt/connecting-using-workbench.html
I am currently comparing the performance of PostgreSQL with several other SQL systems. I am aware of the \timing option to turn on timing of queries. However, I would very much like to automate the process of logging each statement executed together with its execution time below it. I imagine there is a simple way to log this?
Let's say I run:
CREATE TABLE t1 AS
SELECT itemID, prodCategory
FROM products
WHERE prodCategory = 'footwear';
I want to automatically save into a text file:
CREATE TABLE t1 AS
SELECT itemID, prodCategory
FROM products
WHERE prodCategory = 'footwear';
SELECT 7790
Time: 10.884 ms
If OS specifics are needed: I am using macOS.
I just learned that you can use the:
script filename
command to save everything that is printed to your screen. If \timing is on, the queries and their timing outputs are recorded.
To stop recording, simply type exit.
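If you would rather automate the logging outside of psql entirely, here is a minimal sketch in Python that runs each statement and appends it, with its elapsed time, to a text file; psycopg2, the connection string, and the statement list are assumptions:

import time
import psycopg2

# Hypothetical list of statements to benchmark.
statements = [
    "CREATE TABLE t1 AS SELECT itemID, prodCategory "
    "FROM products WHERE prodCategory = 'footwear'",
]

conn = psycopg2.connect("dbname=benchdb user=me")  # hypothetical
conn.autocommit = True
with conn.cursor() as cur, open("query_log.txt", "a") as log:
    for sql in statements:
        start = time.perf_counter()
        cur.execute(sql)
        elapsed_ms = (time.perf_counter() - start) * 1000
        # Write the statement followed by its timing, psql-style.
        log.write(f"{sql}\nTime: {elapsed_ms:.3f} ms\n\n")
conn.close()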
I have the following query which I run every night.
perform distinct fn_debtor_summary( clientacc) from client where not clientacc is null;
However, because the function is quite slow, when debugging I like to run it against a small subset of the data, so I use the following query.
perform distinct fn_debtor_summary( clientacc) from client where not clientacc is null limit 10;
However, I find that the limit doesn't work: it runs the function against the whole table.
Any ideas why this is happening and how I could run it against a small subset of the data without creating temporary tables?
PostgreSQL evaluates the function for every row of the PERFORM query before applying the LIMIT. So even though only 10 rows are returned, the function still runs more than 10 times.
The solution is to use a subquery. Interestingly, PERFORM doesn't work here, but a plain SELECT works just as well:
select fn_debtor_summary(limitclients.clientacc) from (select clientacc from client limit 1) limitclients;
I have started REINDEX on my PostgreSQL database. The GUI showed it processing a number of tables and then stopped responding. It looks like it is still working, even after two hours. The GUI is unresponsive, and its last row says: NOTICE: table "public.res_request_history" was reindexed.
Can I safely stop REINDEX? What can I do to actually make REINDEX work?
Thanks.
Yes, you can safely cancel it with pg_cancel_backend(pid); REINDEX runs inside a transaction, so the work done so far is simply rolled back. You can find the PID by querying the pg_stat_activity view.
For example:
-- Display running queries and their corresponding PIDs
SELECT query, pid FROM pg_stat_activity;
-- Cancel one of them by calling this function with its pid
SELECT pg_cancel_backend(<pid>);