Decide which procedure to call, depending on field value - postgresql

I have about 20 (and there will be more) specific stored procedures in my PostgreSQL 9.2 DB. They are used to make some calculations, a kind of financial "report" (unfortunately, I can't just store the data in tables and implement the algorithms in the program's code).
The procedures are very different from one another: they operate on different tables and columns, implement different algorithms, etc.
Every procedure returns the same data type (a numeric value).
Now my client wants functionality where the user can select a specific procedure (or a combination of them, e.g. 10% of procedure 1's return value + 90% of procedure 2's return value) and use it as a "base" for later modeling.
He also wants the user to be able to change that selection later, without calling the programmers every time. ;-)
I thought about making some tables:
Table: base_models
id <PK>
user_id <FK from users table>
model_name (varchar, or something)
Table: base_models_algorithms
base_model_id <FK from base_models>
algorithm_id <FK from algorithms>
percent_value (percent share of the specific algorithm in the model, e.g. 10)
...and then I also need to store my algorithm names (= stored procedure names) in some table:
Table: algorithms
name (stored procedure name, varchar?)
... and that's where the problem is.
Of course I could later create some view with a column calculated by another procedure (model_current_value :-)) and decide which procedure to call depending on the name stored in my algorithms table, but that looks awful to me. :(
There would be no data control (you could write anything into the algorithms table, and there is no way to ensure that the string is the name of a procedure returning the correct data type, etc.).
Of course I can fill the table myself and not let anyone change its data :-)
But maybe there is a more elegant way to do the whole thing?

It sounds like you want the PL/pgSQL EXECUTE statement. You can use it to invoke dynamic SQL, e.g. for a 2-argument procedure with a dynamic name:
EXECUTE format('SELECT %I($1,$2)', func_name) USING arg1, arg2;
This is a PL/pgSQL statement; it is not available in regular SQL. You can of course create a simple PL/pgSQL function that runs this statement and call that from SQL.
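For the model use case above, a minimal sketch of such a wrapper (assuming the tables sketched in the question, an id column in algorithms that algorithm_id references, and zero-argument algorithm functions returning numeric):

CREATE OR REPLACE FUNCTION model_current_value(p_model_id integer)
RETURNS numeric
LANGUAGE plpgsql AS $$
DECLARE
    r     record;
    part  numeric;
    total numeric := 0;
BEGIN
    FOR r IN
        SELECT a.name, bma.percent_value
        FROM   base_models_algorithms bma
        JOIN   algorithms a ON a.id = bma.algorithm_id
        WHERE  bma.base_model_id = p_model_id
    LOOP
        -- %I quotes the function name taken from the algorithms table
        EXECUTE format('SELECT %I()', r.name) INTO part;
        total := total + part * r.percent_value / 100.0;
    END LOOP;
    RETURN total;
END;
$$;

With that in place, SELECT model_current_value(1); returns the weighted value, and the user changes the mix simply by editing base_models_algorithms.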

Related

Dynamic SQL for trigger or rules for computed columns?

I would like to maintain some computed columns in several tables and columns (PostgreSQL). The calculation is the same, but there might be practically any number of tables and columns. So if the input is table1.field1, then the calculated field is table1.field1_calculated, and so forth. I was thinking of using triggers to maintain this, one trigger per field. This would require passing the table and the field name as arguments into the trigger function, assembling a SQL statement and executing it. I do not want to rely on plpgsql, as it might not be available everywhere, and everything I can find about dynamic SQL in PostgreSQL (a) is old or (b) says you need plpgsql for it.
So:
1. Is there a way in PostgreSQL 9.4+ or perhaps 9.5+ to create and execute dynamic SQL without plpgsql?
2. Is there a better way than triggering this function containing said dynamic SQL?
PS. I am aware of the function/column notation trick, but that doesn't really work here. For 2, I suspect rules might work, but I can't figure out how.

User defined function : insert into a table statement forbidden

I'd like to write a user-defined function (UDF) in my SQL database in order to write procedure logging into a specific table among the dbo tables.
I'd like this specific function to be callable by any stored procedure inside my database.
I have no idea what kind of solution I could use. I've learned that only 3 kinds of UDFs are available.
Any suggestions?
Thanks.
A UDF always has to be side-effect free. That means that you cannot change data in a table from within a function.
If you are looking to call your logger from stored procedures, why not implement it as a stored procedure too?
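A minimal T-SQL sketch of that idea (the dbo.ProcedureLog table and all names here are hypothetical):

CREATE PROCEDURE dbo.usp_WriteLog
    @ProcName sysname,
    @Message  nvarchar(4000)
AS
BEGIN
    SET NOCOUNT ON;
    -- A plain INSERT is allowed here because this is a procedure, not a UDF.
    INSERT INTO dbo.ProcedureLog (LogTime, ProcName, Message)
    VALUES (GETDATE(), @ProcName, @Message);
END
GO

Any other stored procedure can then call it with, for example, EXEC dbo.usp_WriteLog @ProcName = N'dbo.MyProc', @Message = N'step 1 done';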

why use stored procedure instead of query directly to db?

My company is introducing a new policy because it wants to obtain certification for some international standards. That policy says the DBAs are not allowed to query the database directly, like:
select * from some_table, update some_table, etc.
We have to use stored procedures for those queries.
Regarding my last question in here : Postgres pl/pgsql ERROR: column "column_name" does not exist
I'm wondering, do we have to create a stored procedure per table, or per condition?
Is there any way to create stored procedures more efficiently?
Thanks for your answer before..
and sorry for my bad english.. :D
Some reasons to use stored procedures are:
They have presumably undergone some testing to ensure that they do not allow business rules to be broken, as well as some optimization for performance.
They ensure consistency in results. Every time you are asked to perform task X, you run the stored procedure associated with task X. If you write the query by hand, you may not write it the same way every time; maybe one day you forget something silly like forcing text to the same case before a comparison, and something gets missed.
They start off taking somewhat longer to write than just a query, but running that stored procedure takes less time than writing the query again. Run it enough times and it becomes more efficient to have written the stored procedure.
They reduce or eliminate the need to know the relationships of the underlying tables.
You can grant permissions to execute the stored procedures (with SECURITY DEFINER), but deny permissions on the underlying tables.
Programmers (if you separate DBAs and programmers) can be provided an API, and that's all they need to know. So long as you maintain the API while changing the database, you can make any changes necessary to the underlying relations without breaking their software; indeed, you don't even need to know what they have done with your API.
You will likely end up making one stored procedure per query you would otherwise execute.
I'm not sure why you consider this inefficient, or particularly time-consuming as compared to just writing the query. If all you are doing is putting the query inside of a stored procedure, the extra work should be minimal.
CREATE OR REPLACE FUNCTION aSchema.aProcedure (
    IN  var1 text,
    IN  var2 text,
    OUT col1 text,
    OUT col2 text
)
RETURNS setof record
LANGUAGE plpgsql
VOLATILE
CALLED ON NULL INPUT
SECURITY DEFINER
SET search_path = aSchema, pg_temp
AS $body$
BEGIN
    RETURN QUERY /*the query you would have written anyway*/;
END;
$body$;
GRANT EXECUTE ON FUNCTION aSchema.aProcedure(text, text) TO public;
As you used in your previous question, the function can be even more dynamic by passing columns/tables as parameters and using EXECUTE (though this increases how much the person executing the function needs to know about how the function works, so I try to avoid it).
If the "less efficient" is coming from additional logic that is included in the function, then the comparison to just using queries isn't fair, as the function is doing additional work.

Creating Stored Procedures that can work with different tables

I need to use the same stored procedures against many tables, all with the same structure, in my DB. This is data loaded from customers, with one table per customer, and the data needs calculations/checks run before it's loaded into our data warehouse.
So far these are the options and issues I've found and I'm looking for a better pattern/approach.
1. Create a view that points to the table I want to process; the SPs then talk to that view. This works well (especially once I'd worked out how to create views 'automagically' based on their columns), but the view can only be used with one table at a time, forcing the system to deal with one customer at a time.
2. Use dynamic SQL within each SP - makes the SPs much harder to read/debug and for those reasons has been ruled out.
3. Create a partitioned view across all the tables and then use a parameterised table function to return just the data we're interested in - ah, but then I can't update the data, as the function returns a table that can only be used for SELECT.
4. Use dynamic SQL inside a function (can't be done) to create a view (which also can't be done)... give up.
5. Within the SP, create a temp table over the target table using dynamic SQL; but then the temp table only exists in the session that runs the dynamic SQL, not the 'parent' session that's running the SP... give up.
6. Create a global temp table using dynamic SQL to avoid the scope issue of 5, then run the SP against the global temp table. Still runs into the single-customer issue.
7. Create the view as in 1 within a transaction, run all the SPs, and then commit - works fine for one user, but any others are now blocked trying to create a new view of the same name.
8. Use a temporary view... can't in T-SQL.
9. Move all the code into .NET - but we have environment issues where T-SQL is much easier to host/run.
I know I'm not the only person who has this problem. Have any of you good people solved it? Please help.
Maybe your approach is wrong. I will go into the details in a while, but it seems that your problem can be solved using SSIS.
-- Updated answer:
First, the big picture:
The most affordable way to process the tables dynamically is to use a script instead of a stored procedure. If the table to access is chosen at runtime, you will not get any of the performance advantages of stored procedures, i.e. cached execution plans. A SQL script can easily be pointed at one table at runtime by using placeholders and replacing them before executing.
The script can be loaded from the filesystem, a variable, a text column in a table, etc. The loading process consists of reading the script content into a string variable. This step occurs once.
The next step is the preparation stage. This step is executed for each table to be processed. Its main job is to replace the table placeholders with the name of the table currently being processed. It is also possible to set parameter values here, like any parameter you might need to pass into the SP you already wrote.
The last step is the execution of the script. Since it is already loaded into a variable and the placeholders have been replaced with the current table name, you can safely call an Execute SQL Task with the SQL variable as its input. This happens, of course, for each table you want to process.
Ok. Now let's see this in action.
This is a sample database model:
CREATE TABLE [dbo].[t_n](
    [id] [int] IDENTITY(1,1) NOT NULL,
    [name] [varchar](50) NOT NULL,
    [start] [datetime] NULL,
    CONSTRAINT [PK_t_n] PRIMARY KEY CLUSTERED ([id] ASC)
) ON [PRIMARY]
where t_n represents any table (t_1, t_2, t_3, etc).
This is your current stored procedure:
CREATE PROCEDURE SpProcessT_n
AS
BEGIN
    SET NOCOUNT ON;
    SELECT * FROM [t_1];
END
GO
Now, transform this stored procedure into a SQL script, placing a placeholder where the table name was:
SET NOCOUNT ON;
SELECT * FROM [$table_name];
I chose to save this in a .sql file in the filesystem to keep the POC as simple as possible.
Next, create an SSIS package like this:
These are the settings I chose to set up the loop:
And this is how you assign the table name to a variable appropriately called table_name:
This is the setup of the script task; note that the variable table_name has read-only access, while a new variable called SqlExec has read/write access:
And this is its Main function:
// Inside the SSIS Script Task; requires:
//   using System.IO;
//   using System.Text.RegularExpressions;
public void Main()
{
    String Table_Name = Dts.Variables["table_name"].Value.ToString();
    String SqlScript;
    Regex reg = new Regex(@"\$table_name", RegexOptions.Compiled);

    using (var f = File.OpenText(@"c:\sqlscript.sql"))
    {
        SqlScript = f.ReadToEnd();
    }

    // Replace the placeholder with the current table name and hand the
    // finished script to the Execute SQL Task via the SqlExec variable.
    SqlScript = reg.Replace(SqlScript, Table_Name);
    Dts.Variables["SqlExec"].Value = SqlScript;

    Dts.TaskResult = (int)ScriptResults.Success;
}
Notice that the DTS variable SqlExec contains the SQL script that will be executed. Now you can set the following options in your Execute SQL Task:
Successfully tested on MSSQL 2008; if you put an INSERT inside the script file, you will see new rows in each table.
Hope this helps!
If your application can afford to be one cut-off day late, then you can have a nightly scheduled job run an SSIS package that consolidates all 150+ tables into one single huge table. Since the freshness of query results against that huge table will then be one 'date' behind, this solution will not include any rows that have been loaded recently.
You can actually time the run of this package. If it is still amazingly fast, say within 30 minutes, then you can opt to run it every few hours, for example at the start of the work day, during the lunch break, and at the end of the day. This way you have nearly fresh data to query.
Write a partitioned view including table names?
SELECT 'TableName', t.* FROM TableName t
UNION ALL
SELECT 'TableName2', t.* FROM TableName2 t
Then write a single INSTEAD OF trigger which uses dynamic SQL for the writes (less testing is involved with that use of dynamic SQL, because you'd just write the simple CRUD operations once for all tables, I'd think), as in the sketch below.
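A minimal sketch of that pattern (table and column names are hypothetical; only UPDATE is shown, INSERT/DELETE would follow the same shape):

CREATE VIEW dbo.AllCustomers AS
SELECT 'TableName'  AS SourceTable, t.* FROM TableName  t
UNION ALL
SELECT 'TableName2' AS SourceTable, t.* FROM TableName2 t;
GO

CREATE TRIGGER dbo.trg_AllCustomers_Upd
ON dbo.AllCustomers
INSTEAD OF UPDATE
AS
BEGIN
    SET NOCOUNT ON;

    -- Dynamic SQL runs in a child scope, so stage the rows in a temp table it can see.
    SELECT * INTO #ins FROM inserted;

    DECLARE @tbl sysname, @sql nvarchar(max);
    DECLARE cur CURSOR LOCAL FAST_FORWARD FOR
        SELECT DISTINCT SourceTable FROM #ins;
    OPEN cur;
    FETCH NEXT FROM cur INTO @tbl;
    WHILE @@FETCH_STATUS = 0
    BEGIN
        -- One UPDATE per source table, written once for all tables.
        SET @sql = N'UPDATE t SET SomeColumn = i.SomeColumn
                     FROM ' + QUOTENAME(@tbl) + N' t
                     JOIN #ins i ON i.id = t.id
                     WHERE i.SourceTable = ' + QUOTENAME(@tbl, '''') + N';';
        EXEC sys.sp_executesql @sql;
        FETCH NEXT FROM cur INTO @tbl;
    END
    CLOSE cur;
    DEALLOCATE cur;
END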
I would not do this with SQL. What you are describing sounds like a traditional ETL situation.
Since all of the customer tables are the same, I would create a table in the data warehouse with all the columns from the client table, a surrogate key column, and a type identifier. You have the option of creating a "staging" table here that will only hold data during the ETL process, or of just working on a single "live" table. I would create the staging table (a sketch follows the E/T/L steps below).
Then within the SSIS package (don't worry, you can still schedule it from SQL Server Agent; it hasn't totally left the DB server), start the ETL process...
E(xtract): copy the data from your source into the staging table in the data warehouse. You most likely want to use a sub-package within a foreach loop, changing the name of the table that you want to process from an external store (most people would say put this in the warehouse, but it's up to you).
T(ransform): run the calculations/checks you were talking about, but do it on the whole set...
L(oad): copy it to your real table within the data warehouse.
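A minimal sketch of such a staging table (column names are hypothetical):

CREATE TABLE dbo.stg_CustomerData (
    stg_id       bigint IDENTITY(1,1) NOT NULL PRIMARY KEY, -- surrogate key
    customer_id  int           NOT NULL,                    -- type identifier: which customer/table the row came from
    -- ...columns copied from the customer tables...
    id           int           NOT NULL,
    [name]       varchar(50)   NOT NULL,
    [start]      datetime      NULL
);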
There are a couple of things I would NOT do:
1. Modify the data in the source table.
2. Try to do this in T-SQL. It's just not what T-SQL is good at.
If you need more detail on this approach, I would probably ask the question with some Business Intelligence tags. I'll be traveling for the next week or so, but I will try to look at the comments to clear anything up if you need me to.
I am fairly certain that the standard way to solve this is using dynamic SQL in each sp (your option 2), which has already been ruled out.
Your goal is to make generic, multi-table SQL. I don't see how you intend to accomplish that without sacrificing some efficiency and readability.

Best use of indices on temporary tables in T-SQL

If you're creating a temporary table within a stored procedure and want to add an index or two on it, to improve the performance of any additional statements made against it, what is the best approach? Sybase says this:
"the table must contain data when the index is created. If you create the temporary table and create the index on an empty table, Adaptive Server does not create column statistics such as histograms and densities. If you insert data rows after creating the index, the optimizer has incomplete statistics."
but recently a colleague mentioned that if I create the temp table and indices in a different stored procedure from the one which actually uses the temporary table, then the Adaptive Server optimiser will be able to make use of them.
On the whole, I'm not a big fan of wrapper procedures that add little value, so I've not actually got around to testing this, but I thought I'd put the question out there, to see if anyone had any other approaches or advice?
A few thoughts:
If your temporary table is so big that you have to index it, then is there a better way to solve the problem?
You can force it to use the index (if you are sure that the index is the correct way to access the table) by giving an optimiser hint, of the form:
SELECT *
FROM #table (index idIndex)
WHERE id = @id
If you are interested in performance tips in general, I've answered a couple of other questions about that at some length here:
Favourite performance tuning tricks
How do you optimize tables for specific queries?
What's the problem with adding the indexes after you put data into the temp table?
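For what it's worth, a minimal sketch of that order of operations (table and column names are hypothetical), matching the hint syntax shown above:

-- Populate the temp table first, then build the index so the optimizer
-- has real statistics to work with. (In ASE, SELECT INTO needs the
-- 'select into' database option; otherwise use CREATE TABLE + INSERT.)
SELECT id, amount
INTO   #work
FROM   dbo.SourceTable
WHERE  amount > 0

CREATE INDEX idIndex ON #work (id)

-- Later statements can now use (or be hinted to use) idIndex.
SELECT * FROM #work WHERE id = @id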
One thing you need to be mindful of is the visibility of the index to other instances of the procedure that might be running at the same time.
I like to add a guid to these kinds of temp tables (and to the indexes), to make sure there is never a conflict. The other benefit of this approach is that you could simply make the temp table a real table.
Also, make sure that you will need to query the data in these temp tables more than once during the running of the stored procedure, otherwise the cost of index creation will outweigh the benefit to the select.
In Sybase, if you create a temp table and then use it in one proc, the plan for the select is built using an estimate of 100 rows in the table. (The plan is built when the procedure starts, before the tables are populated.) This can result in the temp table being table-scanned since it is only "100 rows". Calling another proc causes Sybase to build the plan for the select with the actual number of rows; this allows the optimizer to pick a better index to use. I have seen significant improvements using this approach, but test on your database, as sometimes there is no difference.
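To illustrate, a minimal sketch of that split (names are hypothetical); the inner proc's plan is built at call time, when #work already holds its real rows:

-- Inner proc: selects from the temp table created by its caller.
-- (Depending on the server settings, #work may need to exist in the
-- session at the moment this proc is created.)
CREATE PROCEDURE inner_report
AS
    SELECT w.id, w.amount
    FROM   #work w
    WHERE  w.amount > 1000
GO

-- Outer proc: creates and fills #work, then calls the inner proc so its
-- plan is compiled against the actual row count.
CREATE PROCEDURE outer_report
AS
    CREATE TABLE #work (id int, amount money)

    INSERT #work (id, amount)
    SELECT id, amount FROM dbo.SourceTable

    EXEC inner_report
GO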