How does a stored procedure handle indexes? - tsql

Does a stored procedure use newly created indexes? I have a table which is dropped and recreated every time before the procedure executes. Do I have to recompile the procedure, or will it pick up the new index and execution plan?
sp_recompile

If you drop and recreate an object on which the stored procedure depends, the stored procedure will be recompiled automatically and can potentially use any new indexes that have been created.
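If you ever need to force a fresh plan explicitly, sp_recompile marks a procedure (or every procedure and trigger referencing a table) for recompilation on its next execution. A minimal sketch; the procedure and table names are hypothetical:

```sql
-- Mark a single procedure for recompilation on its next execution
EXEC sp_recompile N'dbo.usp_GetData';

-- Or mark every procedure and trigger that references a table
EXEC sp_recompile N'dbo.MyTable';
```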

Related

Insert into Memory Optimized Table from non optimized

I have two databases.
The primary one has DDL triggers, so I can't create memory-optimized tables there. So I created a secondary database and created a table with memory optimization on in it. Now, in a procedure on the primary database, I need to copy data from another table into this memory-optimized table.
For example:
INSERT INTO InMemory.dbo.DestTable_InMem SELECT * FROM #T;
And I get:
A user transaction that accesses memory optimized tables or natively compiled modules cannot access more than one user database or databases model and msdb, and it cannot write to master.
Are there any workarounds for this?
I cannot move my procedure to the second database.
There is no other way than using a natively compiled procedure to INSERT, UPDATE or DELETE in an in-memory table.
See: A Guide to Query Processing for Memory-Optimized Tables
To move data from one DB to the other, the source table must exist locally.
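One workaround that stays within this restriction is to stage the rows in a regular (disk-based) table inside the secondary database, which an ordinary cross-database INSERT can reach, and then move them into the memory-optimized table in a separate transaction that touches only that database. A hedged sketch; the staging-table name is hypothetical:

```sql
-- Step 1 (run from the primary database): cross-database insert into a
-- regular disk-based staging table is allowed, since no memory-optimized
-- table participates in this transaction.
INSERT INTO InMemory.dbo.Staging_DestTable
SELECT * FROM #T;

-- Step 2 (run inside the InMemory database, in its own transaction):
-- the transaction now accesses only one user database, so the
-- memory-optimized table can be written to.
INSERT INTO dbo.DestTable_InMem
SELECT * FROM dbo.Staging_DestTable;

TRUNCATE TABLE dbo.Staging_DestTable;
```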

COPY support with PostgreSQL v12 triggers

We have a trigger/function pair that we have used on our PostgreSQL database for the longest time. The trigger fires each time there is a new record in the main table, and each row is inserted into the monthly partition individually. This is the trigger definition:
CREATE TRIGGER partition_mic_teams_endpoint_trg1
BEFORE INSERT ON "mic_teams_endpoint"
FOR EACH ROW EXECUTE
PROCEDURE trg_partition_mic_teams_endpoint('month');
The function we have creates monthly partitions based on a timestamp field in each row.
I have two questions:
Even if I COPY a bunch of rows from a CSV into the main table, is this trigger/function going to insert each row individually? Is this efficient?
If that is the case, is it possible to COPY data into the partitions instead of using INSERT?
Thanks,
Note: I am sorry if I did not provide enough information for an answer
Yes, a row level trigger will be called for each row separately, and that will make COPY quite a bit slower.
One thing you could try is a statement level AFTER trigger that uses a transition table, so that you can
INSERT INTO destination SELECT ... FROM transition_table;
That should be faster, but you should test it to be certain.
See the documentation for details.
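A minimal sketch of that statement-level approach, assuming the main table acts as the staging area and the rows are forwarded to a hypothetical partition table mic_teams_endpoint_current; the transition-table name is whatever you pick in the REFERENCING clause (requires PostgreSQL 10 or later):

```sql
-- Fires once per COPY/INSERT statement instead of once per row;
-- all affected rows are visible through the transition table.
CREATE OR REPLACE FUNCTION trg_partition_stmt() RETURNS trigger AS $$
BEGIN
    -- One set-based insert for the whole statement
    INSERT INTO mic_teams_endpoint_current
    SELECT * FROM new_rows;
    RETURN NULL;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER partition_mic_teams_endpoint_stmt
AFTER INSERT ON mic_teams_endpoint
REFERENCING NEW TABLE AS new_rows
FOR EACH STATEMENT
EXECUTE PROCEDURE trg_partition_stmt();
```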

PostgreSQL: out of shared memory due to max_locks_per_transaction when using temporary tables

I am using PostgreSQL 9.6.
I have a stored proc which is called from Scala. This stored proc is a wrapper, i.e. it calls another stored proc once for each element of the input list passed to the wrapper. For example, if the wrapper receives a list of 100 elements, the internal stored proc is called 100 times, once per element.
The internal proc is data-heavy: it creates 4-5 temp tables, processes the data and returns.
The wrapper then collects all the data and finally completes.
get_data_synced(date, text, integer[])
Here the text parameter is a comma-separated list of items (10-1000 depending on the use case).
Basically, the problem is that if I pass a larger list of 100-200 items, i.e. we call the internal proc that many times in a loop, it throws the error:
SQL execution failed (Reason: ERROR: out of shared memory
Hint: You might need to increase max_locks_per_transaction.
I understand that CREATE TEMP TABLE inside the internal function takes locks. But each time the proc is called, the first thing it does is DROP and then CREATE the temp tables:
DROP TABLE IF EXISTS _temp_data_1;
CREATE TEMP TABLE _temp_data_1 AS (...);
DROP TABLE IF EXISTS _temp_data_2;
CREATE TEMP TABLE _temp_data_2 AS (...);
...
So even if the proc is called 1000 times, the first thing it does is drop the tables (which should release the locks?) and then create them again.
The max_locks_per_transaction is set to 256.
Now, the transaction is not over until my wrapper function (the outer function) finishes, right?
So does that mean that even though I am dropping the temp tables, the locks are not released?
Is there a way to release the locks on the temp tables immediately once my function completes?
Your diagnosis is correct: the locks survive until the end of the transaction, even if the table was dropped in the same transaction that created it, and even if it is a temp table. Perhaps this could be optimized, but that is currently how it works.
As a workaround, why not just truncate the table if it already exists, rather than dropping and re-creating it?
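A sketch of that workaround inside the internal function; the table name and columns are placeholders:

```sql
-- Create the temp table only on the first call in this session, then
-- reuse it. TRUNCATE empties the same existing relation, so the set of
-- locked objects does not grow with every call, whereas DROP/CREATE
-- produces a brand-new relation (and a new lock entry) each time.
CREATE TEMP TABLE IF NOT EXISTS _temp_data_1 (id int, payload text);
TRUNCATE _temp_data_1;
INSERT INTO _temp_data_1
SELECT g, 'row ' || g FROM generate_series(1, 10) AS g;
```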

How to use a stored procedure inside another stored procedure in PostgreSQL

I have been working on my code for an activity in our major comp-sci subject. The task asks us to update a certain field in a table in PostgreSQL using a stored procedure.
I have already created gettopemp() to retrieve the data from the table, and I want to use the result of gettopemp() in my new stored procedure updatetopemp(). How do I use a stored procedure inside another stored procedure?
If you want to pass a function name as a parameter and call that in your code, you'll have to use dynamic SQL.
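A minimal sketch of that dynamic-SQL approach in PL/pgSQL, where the function name to call is passed as a parameter. gettopemp() and updatetopemp() are the names from the question; the employees table, its columns, and the assumption that gettopemp() returns an integer id are all hypothetical:

```sql
CREATE OR REPLACE FUNCTION updatetopemp(fn_name text) RETURNS void AS $$
DECLARE
    top_id integer;
BEGIN
    -- Call the function whose name was passed in, via dynamic SQL;
    -- format() with %I quotes the identifier safely.
    EXECUTE format('SELECT %I()', fn_name) INTO top_id;

    UPDATE employees SET is_top = true WHERE emp_id = top_id;
END;
$$ LANGUAGE plpgsql;

-- Usage:
-- SELECT updatetopemp('gettopemp');
```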

map stored procedure to entity returning temporary table data

I have a stored procedure that returns data from a temporary table, because I have used dynamic queries. When I tried to map the stored procedure using complex types, it returned no columns.
How do I handle temporary-table column names in complex types?
It is not supported by default because EF always executes SET FMTONLY ON before executing your stored procedure. This option turns off logic execution: EF only asks for metadata, but with logic execution turned off no temporary table is created and no column metadata exists.
There are some workarounds.
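One commonly suggested workaround is to switch FMTONLY off at the top of the procedure so the temp table really gets created during EF's metadata probe; another is to use a table variable, whose definition is visible without running the dynamic part. A hedged sketch (procedure, table and column names are hypothetical; note that with SET FMTONLY OFF the body actually executes during EF's mapping call, so it must be safe to run):

```sql
CREATE PROCEDURE dbo.GetDynamicData
AS
BEGIN
    -- Force real execution even when the client probes for metadata only
    SET FMTONLY OFF;

    CREATE TABLE #result (Id INT, Name NVARCHAR(100));
    INSERT INTO #result
    EXEC (N'SELECT Id, Name FROM dbo.SomeTable');

    SELECT Id, Name FROM #result;
END
```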