DataAdapter.Update() and its use of the DataSet relationship schema (ADO.NET)

I have a dataset with multiple tables and the necessary relations in place to call SQL statements in the proper order.
When the Adapter.Update() method is called, I presume it scours the relationships between all tables to determine the order in which it makes SQL calls.
For example:
A delete in Table A requires first a delete in Table B.
An insert in Table B first requires an insert into Table A.
How can I leverage the mechanism it uses to implement my own update strategy?
The reason is that rather than letting the adapter perform the updates itself, I need to call stored procedures.
* * * * * * EDIT * * * * * *
The dataSet is passed from the UI client to a back-end server component.
On the back-end server, DataAdapter.Update(dataSet) is called.

Maybe you could use the RowUpdating event on your tables and call your stored procedure from there. Also set SqlRowUpdatingEventArgs.Status to UpdateStatus.SkipCurrentRow to prevent the standard update SQL command from being triggered, and call SqlRowUpdatingEventArgs.Row.AcceptChanges() to set the RowState back to Unchanged.
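A minimal sketch of that approach for the DELETE case (the adapter variable, the Id column, and the stored procedure name usp_DeleteA are hypothetical placeholders, not from the original post):

using System.Data;
using System.Data.SqlClient;

// Intercept each pending statement before the adapter executes it.
adapter.RowUpdating += (object sender, SqlRowUpdatingEventArgs e) =>
{
    if (e.StatementType == StatementType.Delete)
    {
        using (var cmd = new SqlCommand("usp_DeleteA", e.Command.Connection))
        {
            cmd.CommandType = CommandType.StoredProcedure;
            // A deleted row only exposes its original values.
            cmd.Parameters.AddWithValue("@Id", e.Row["Id", DataRowVersion.Original]);
            cmd.ExecuteNonQuery();
        }
        e.Status = UpdateStatus.SkipCurrentRow; // suppress the adapter's own DELETE
        e.Row.AcceptChanges();                  // reset the RowState
    }
};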

Postgres: how does a default value get populated?

To be clear, I want to know what mechanism is used to populate the default value, not the SQL syntax needed to create the default value constraint on the table.
Does Postgres use some kind of trigger that fills in the default value when it is missing, or something else? I couldn't find an explanation on the official website.
This happens in the rewriteTargetListIU function in src/backend/rewrite/rewriteHandler.c. The comment says it all:
/*
* rewriteTargetListIU - rewrite INSERT/UPDATE targetlist into standard form
*
* This has the following responsibilities:
*
* 1. For an INSERT, add tlist entries to compute default values for any
* attributes that have defaults and are not assigned to in the given tlist.
* (We do not insert anything for default-less attributes, however. The
* planner will later insert NULLs for them, but there's no reason to slow
* down rewriter processing with extra tlist nodes.) Also, for both INSERT
* and UPDATE, replace explicit DEFAULT specifications with column default
* expressions.
So this happens during query rewrite, which is the step between parsing the SQL string and optimizing it.
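To make that concrete, here is a minimal illustration (the table and column names are made up for the example). Conceptually, the rewriter turns the first INSERT below into the second before the planner ever sees it:

CREATE TABLE t (id int, created date DEFAULT current_date);

INSERT INTO t (id) VALUES (1);
-- is rewritten, in effect, into:
INSERT INTO t (id, created) VALUES (1, current_date);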

Why can't you define a materialized view using bound parameters in PostgreSQL?

I've tracked where the error message comes from in the source:
/*
* A materialized view would either need to save parameters for use in
* maintaining/loading the data or prohibit them entirely. The latter
* seems safer and more sane.
*/
if (query_contains_extern_params(query))
    ereport(ERROR,
            (errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
             errmsg("materialized views may not be defined using bound parameters")));
Permalink: https://github.com/postgres/postgres/blob/ef3109500030030b0e8d3c1d7f2b409d702cc49a/src/backend/parser/analyze.c#L2538
Why is this? Why would a materialized view need to save parameters?
I'm using Elixir and I can't create the view from Ecto using:
Repo.query("CREATE MATERIALIZED VIEW $1 AS
SELECT * FROM tasks WHERE
resource_type = $2 AND
task_type = $3
", [view_name, resource_type, task_type])
but
Repo.query("CREATE MATERIALIZED VIEW \"#{view_name}\" AS
SELECT * FROM tasks WHERE
resource_type = '#{resource_type}' AND
task_type = '#{task_type}'
", [])
works fine.
Please tell me what I'm missing, if you can.
In the first case you are using a prepared SQL statement with placeholders, and you provide the values to fill those placeholders separately. A materialized view would have to store those values in order to rerun the query later. (Note also that a placeholder such as $1 can only ever stand for a value, never for an identifier like a view name, so the first statement could not work regardless.)
In the second case you build the query string yourself in your programming language by interpolating the values into it, and you pass the finished string, without placeholders or separate values, to PostgreSQL. The materialized view can simply store that string as-is and is able to rerun the query from it.
The problem with the second case is that you might allow SQL injection to happen, so the interpolated values must be escaped or validated.
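As a small illustration of the value-vs-identifier distinction (the parameter values here are made up):

-- Placeholders stand for values only, so a prepared statement like this is fine:
PREPARE find_tasks AS
    SELECT * FROM tasks WHERE resource_type = $1 AND task_type = $2;
EXECUTE find_tasks('machine', 'cleanup');
-- An identifier such as a view or table name can never be a placeholder;
-- it has to be spliced into the SQL string (and quoted) on the client side.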

Postgres Rules Preventing CTE Queries

Using Postgres 9.3:
I am attempting to automatically populate a table when an insert is performed on another table. This seems like a good use for rules, but after adding the rule to the first table, I am no longer able to perform inserts into the second table using the writable CTE. Here is an example:
CREATE TABLE foo (
    id INT PRIMARY KEY
);
CREATE TABLE bar (
    id INT PRIMARY KEY REFERENCES foo
);
CREATE RULE insertFoo AS ON INSERT TO foo DO INSERT INTO bar VALUES (NEW.id);
WITH a AS (SELECT * FROM (VALUES (1), (2)) b)
INSERT INTO foo SELECT * FROM a
When this is run, I get the error "ERROR: WITH cannot be used in a query that is rewritten by rules into multiple queries".
I have searched for that error string, but am only able to find links to the source code. I know that I can perform the above using row-level triggers instead, but it seems like I should be able to do this at the statement level. Why can I not use the writable CTE, when queries like this can (in this case) be easily re-written as:
INSERT INTO foo SELECT * FROM (VALUES (1), (2)) a
Does anyone know of another way to accomplish what I am attempting, other than 1) using rules, which prevents the use of WITH queries, or 2) using row-level triggers? Thanks,
        
TL;DR: use triggers, not rules.
Generally speaking, prefer triggers over rules, unless rules are absolutely necessary. (Which, in practice, they never are.)
Using rules introduces heaps of problems which will needlessly complicate your life down the road. You've run into one here. Another (major) one is that the number of affected rows reported will correspond to that of the very last query -- if you're relying on FOUND somewhere and your query incorrectly reports that no rows were affected, you'll be in for painful bugs.
Moreover, there's occasional talk of deprecating Postgres rules outright:
http://postgresql.nabble.com/Deprecating-RULES-td5727689.html
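For the tables in the question, a trigger-based version is a small sketch like this (Postgres 9.3 syntax; the function and trigger names are made up):

DROP RULE insertFoo ON foo;  -- the old rule must go first

CREATE FUNCTION copy_to_bar() RETURNS trigger AS $$
BEGIN
    INSERT INTO bar VALUES (NEW.id);
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER copy_to_bar_trg
AFTER INSERT ON foo
FOR EACH ROW EXECUTE PROCEDURE copy_to_bar();

-- The writable CTE from the question now works as expected:
WITH a AS (SELECT * FROM (VALUES (1), (2)) b)
INSERT INTO foo SELECT * FROM a;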
Like the other answer, I definitely recommend using INSTEAD OF triggers rather than RULEs.
However, if for some reason you don't want to change existing VIEW RULEs and still want to use WITH, you can do so by wrapping the INSERT in a function:
create function insert_foo(int) returns void as $$
    insert into foo values ($1);
$$ language sql;
WITH a AS (SELECT * FROM (VALUES (1), (2)) b)
SELECT insert_foo(a.column1) from a;
This can be useful when accessing a legacy database through a system that wraps statements in CTEs.

Using schemas in PostgreSQL

I have developed an application using PostgreSQL and it works well.
Now I need to create several instances of the same application, but I have only one database. So I am thinking about using schemas, so that I can group each instance's tables in a different schema.
Now, I wouldn't like to rewrite all my functions and scripts, so I am wondering if I can just use some directive to instruct the database to operate on a specific schema. To make it clearer: do you know how in C++ you write
using namespace std;
so that you can use cout instead of std::cout? I would like to use something similar if possible.
The parameter you are looking for is search_path - that lists the schemas a query will look in. So, you can do something like:
CREATE TABLE schema1.tt ...
CREATE TABLE schema2.tt ...
CREATE FUNCTION schema1.foo() ...
CREATE FUNCTION schema2.foo() ...
SET search_path = schema1, something_else;
SELECT * FROM tt; -- schema1.tt
SELECT * FROM schema2.tt -- schema2.tt
SELECT foo(); -- calls schema1.foo
SELECT schema2.foo(); -- calls schema2.foo
Note that if a query's plan gets saved inside the body of foo(), you may get unexpected results. I would recommend you always explicitly list schemas for referenced tables in plpgsql functions if you are using duplicated tables. If not, make sure you have testing in place to check behaviour with a changing search_path.
Oh - you can explicitly set search_path for a function's body too - see the manual's CREATE FUNCTION reference for details.
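A minimal sketch of that, reusing the tables above (the function body is made up for the example):

CREATE FUNCTION schema1.count_tt() RETURNS int AS $$
    SELECT count(*)::int FROM tt;  -- tt resolves against the pinned path, i.e. schema1.tt
$$ LANGUAGE sql
SET search_path = schema1;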

Materializing related entities from SPs using EF Extensions

I have a stored procedure that returns a collection of my entity class objects. Since I want the navigation properties populated as well, I'm using EF Extensions and I've written my own Materializer class.
But there's a catch: my entity class has a navigation property that points to a different entity, and the stored procedure of course returns only an ID from the lookup table (the foreign key).
I would like to populate it as if I were eagerly loading the related entity. How do I do that in my Materializer? Can I do it without having a stored procedure that returns two result sets?
I would like to implement something similar to what the Include() extension method does on the source selection in a LINQ statement.
I solved this myself some time ago; I'm answering my own question in case anyone else needs it.
So what kind of results should the stored procedures return? It depends hugely on the relation type. Let's say we have two tables: TableOne and TableTwo.
1:0..1 relation
In this case stored procedure should return both at once:
select t1.*, t2.*
from TableOne t1
[left] join TableTwo t2
on t2.key = t1.key
When the relation is 1:1, you can simply omit the left and use an inner join.
1:MANY relation
In this case it's much easier to write stored procedures that return multiple result sets, starting with the one side so those entities are already materialized when you bind the many-side table.
/* relation one */
select *
from TableOne
/* relation many */
select *
from TableTwo
But if you would still like to return a single result set, you need to check for each record whether you already have the corresponding entity loaded; there's a method called FindOrAttach() that can help you. Each row then carries both entities, and you have to check whether they are loaded and materialize them if they are not. But as mentioned, it's much easier to return two result sets.
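For illustration only, this is roughly how the two result sets might be consumed with plain ADO.NET (the entity classes, column ordinals, and procedure name are hypothetical, and the actual EF Extensions materializer calls are omitted):

using System.Collections.Generic;
using System.Data;
using System.Data.SqlClient;

class TableOneEntity { public int Key; public List<TableTwoEntity> Twos = new List<TableTwoEntity>(); }
class TableTwoEntity { public int Key; public TableOneEntity One; }

static void LoadBoth(SqlConnection connection)
{
    var ones = new Dictionary<int, TableOneEntity>();
    using (var cmd = new SqlCommand("usp_GetOneAndMany", connection))
    {
        cmd.CommandType = CommandType.StoredProcedure;
        using (var reader = cmd.ExecuteReader())
        {
            while (reader.Read())   // first result set: TableOne
                ones[reader.GetInt32(0)] = new TableOneEntity { Key = reader.GetInt32(0) };

            reader.NextResult();    // second result set: TableTwo
            while (reader.Read())
            {
                var two = new TableTwoEntity { Key = reader.GetInt32(0) };
                // The one side is already loaded, so the navigation
                // properties can be wired up straight from the FK column.
                two.One = ones[reader.GetInt32(1)];
                two.One.Twos.Add(two);
            }
        }
    }
}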
MANY:MANY relation
Here, too, the stored procedure should return multiple result sets, in this case three of them.
/* first table */
select *
from TableOne
/* second table */
select *
from TableTwo
/* relation *:* */
select *
from TableOne2TableTwo
Materialize the first two tables as usual, then call Attach for each record from the third result set, loading the entities by their keys from TableOne and TableTwo. This will also populate the entity set navigation properties.
I hope this will help others as it helped me.