I am using PostgreSQL 10 on RDS (AWS), so note that I don't have full permissions to do whatever I want.
In PostgreSQL I have some functions written in PL/pgSQL.
From my experience, in these functions I cannot start/commit/rollback transactions. I cannot do that in a DO block either.
Is that correct? What is the logic behind this? It seems PostgreSQL expects each function to be called in the context of an existing transaction. Right?
But what if I want every statement in my function to be executed in a separate (short) transaction, i.e. to get behavior like AUTOCOMMIT = ON?
I found something which may do that, but I am not sure, and I don't know if it's relevant:
https://www.postgresql.org/docs/10/ecpg-sql-set-autocommit.html
Isn't there a standard way of doing this in Postgres without the need to download and install additional packages/extensions?
Again: I want every statement in my function to be executed in a separate (short) transaction, i.e. behavior like AUTOCOMMIT = ON.
So I want something like this:
https://learn.microsoft.com/en-us/sql/t-sql/statements/set-implicit-transactions-transact-sql?view=sql-server-2017
All statements in a function run in the same transaction, and no plugin can change that.
You can use procedures from v11 on, but you still have to explicitly manage transactions then.
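For illustration, here is a minimal sketch of what that looks like on v11 or later; the procedure and table names (do_work, test_tab) are made up:

CREATE PROCEDURE do_work() LANGUAGE plpgsql AS $$
BEGIN
    INSERT INTO test_tab VALUES (1);
    COMMIT;   -- ends the first transaction and starts a new one
    INSERT INTO test_tab VALUES (2);
    COMMIT;   -- so each statement effectively runs in its own short transaction
END;
$$;

CALL do_work();  -- must not be called inside an explicit transaction block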
I suspect that the best thing would be to run this logic on the database client, where you get autocommit automatically, rather than as a function in the database.
I have a stored procedure in Postgres which processes a large amount of data and takes a long time to complete.
In my application there is a chance that two processes or schedulers run this procedure at the same time. I want to know if there is a built-in mechanism to allow only one instance of this procedure to run, enforced at the database level.
I searched the internet, but didn't find anything concrete.
There is nothing built in to define a procedure (or function) so that concurrent execution is prevented.
But you can use advisory locks to achieve something like that.
At the very beginning of the procedure, you can add something like:
perform pg_advisory_lock(987654321);
which will then wait to get the lock. If a second session invokes the procedure it will have to wait.
Make sure you release the lock at the end of the procedure using pg_advisory_unlock(), as advisory locks are not released when the transaction is committed.
If you use advisory locks elsewhere, make sure you use a key that can't be used in other places.
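A minimal sketch of that pattern (assuming v11+ procedures; the procedure name is made up, the lock key is the one from above):

CREATE PROCEDURE process_large_data() LANGUAGE plpgsql AS $$
BEGIN
    PERFORM pg_advisory_lock(987654321);    -- blocks until no other session holds this lock
    -- ... the long-running processing goes here ...
    PERFORM pg_advisory_unlock(987654321);  -- session-level advisory locks survive COMMIT, so release explicitly
END;
$$;

If the body can raise an exception, wrap the work in a BEGIN ... EXCEPTION block so the unlock still runs.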
I need to implement transparent data encryption (TDE) in Postgres. To do this, I found out which functions are called when INSERT and SELECT are executed; I used LLDB on SELECT.
I'm trying to do the same with INSERT, but it does not work:
the backend process stops and does not allow the insert. I followed this guide: https://eax.me/lldb/.
What could be wrong? How can I find out which functions are called on insert (in the case of SELECT it is secure_read, etc.)? And does anyone know how to change the function code in the source?
First, the client and server are located on the same machine, and the same user adds and reads the data.
Unfortunately I do not have enough reputation to add screenshots.
The SQL statements are the wrong level to start debugging. You should look at the code where blocks are read and written. That would be in src/backend/storage/smgr.
Look at the functions mdread and mdwrite in md.c. This is probably where you'd start hacking.
PostgreSQL v12 has introduced “pluggable storage”, so you can write your own storage manager. See the documentation. If you don't want to patch PostgreSQL, but rather want an extension that works with standard PostgreSQL, that would be the direction to take.
So far I have only covered block storage, but you must not forget WAL. Encrypting that will require hacking PostgreSQL.
This is a complex question which you should post to the pgsql-hackers mailing list: https://www.postgresql.org/list/pgsql-hackers/.
You could start by setting a GDB breakpoint on ExecutorStart in execMain.c.
I need to write an update script that will check to see if certain tables, indexes, etc. exist in the database, and if not, create them. I've been unable to figure out how to do these checks, as I keep getting Syntax Error at IF messages when I type them into a query window in PgAdmin.
Do I have to do something like write a stored procedure in the public schema that does these updates using Pl/pgSQL and execute it to make the updates? Hopefully, I can just write a script that I can run without creating extra database objects to get the job done.
If you are on PostgreSQL 9.1 or later, you can use CREATE TABLE ... IF NOT EXISTS.
On 9.0 you can wrap your IF condition code into a DO block: http://www.postgresql.org/docs/current/static/sql-do.html
For anything before that, you will have to write a function to achieve what you want.
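A rough sketch of that DO-block approach, with invented table and index names:

DO $$
BEGIN
    IF NOT EXISTS (SELECT 1 FROM pg_tables
                   WHERE schemaname = 'public' AND tablename = 'my_table') THEN
        CREATE TABLE public.my_table (id integer, val text);
    END IF;

    IF NOT EXISTS (SELECT 1 FROM pg_indexes
                   WHERE schemaname = 'public' AND indexname = 'my_table_val_idx') THEN
        CREATE INDEX my_table_val_idx ON public.my_table (val);
    END IF;
END;
$$;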
Have you looked into pg_tables?
select * from pg_tables;
This will return (among other things) the schemas and tables that exist in the database. Without knowing more of what you're looking for, this seems like a reasonable place to start.
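In an update script you would typically narrow that down to an existence check, for example (schema and table name are placeholders):

select exists (select 1 from pg_tables
               where schemaname = 'public' and tablename = 'my_table');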
I have a .NET application which executes a statement like this:
SELECT ST_GeomFromKML('
<LineString>
<coordinates>-71.1663,42.2614
-71.1667,42.2616</coordinates>
</LineString>');
There is no need for tables or a WHERE clause; I'm basically using it as a converter.
So my question is: does my application hit the database when I issue this command, or does the local Postgres DLL take care of it in memory?
It will hit the database, which basically means that it will be much slower than it needs to be.
You should try to write a method that performs the conversion without using the database, and call that method instead.
It will hit the database; however, the overhead is not so huge, and usually you won't notice it.
Does anyone know how I can set up an insert trigger so that when I perform an insert from my application, the data gets inserted and Postgres returns, even before the trigger finishes executing?
There is no built-in support for this; you will have to hack something up. Options include:
Write the trigger in C, Perl, or Python and have it launch a separate process to do the things you want. This can get tricky and possibly slightly dangerous to your database system, and it only works if the things you want to do are outside of the database.
Write a lightweight trigger function that only records an entry into a log or task table, and have a separate job or daemon that looks into that table on its own schedule and executes things from there. That's more or less how Slony works.
The question is: why do you need it? Triggers should be fast. If you need to do something complicated, write a trigger that sends a notification to a daemon that does the complex part, for example using the LISTEN/NOTIFY feature of PostgreSQL.
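A minimal sketch of that lightweight-trigger idea, combining a task table with a NOTIFY; all names here (task_queue, enqueue_task, my_table, the new_task channel) are invented:

CREATE TABLE task_queue (
    id      bigserial PRIMARY KEY,
    payload jsonb,
    created timestamptz DEFAULT now()
);

CREATE FUNCTION enqueue_task() RETURNS trigger LANGUAGE plpgsql AS $$
BEGIN
    INSERT INTO task_queue (payload) VALUES (to_jsonb(NEW));  -- cheap: just record the work to do
    PERFORM pg_notify('new_task', NEW.id::text);              -- assumes my_table has an id column; wakes the daemon
    RETURN NEW;  -- the trigger itself stays fast; the daemon does the slow part later
END;
$$;

CREATE TRIGGER my_table_enqueue
    AFTER INSERT ON my_table
    FOR EACH ROW EXECUTE PROCEDURE enqueue_task();

Note that the notification is only delivered when the inserting transaction commits, which is usually what you want here.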