In my LiveAppDB we have a load of views which reference a larger LiveProduction DB. What I'd like to do is switch the code to look at the TestProduction DB, depending on which application DB the view is running on:
-- VIEW CAN RUN ON LiveApplicationDB or TestApplicationDB
SELECT COL1, COL2
FROM (CASE WHEN DB_NAME() = 'LiveApplicationDB' THEN LIVEPRODUCTION.DB.DBTABLE ELSE TESTPRODUCTION.DB.DBTABLE END) AS tabl1 -- CASE TO DETERMINE WHICH PRODUCTION DB TO USE
INNER JOIN dbo.ThisDBTable BRA
ON tabl1.product COLLATE Latin1_General_BIN = BRA.product
WHERE (tabl1.COL1 IS NOT NULL)
In a bid to help clarify this…
If the view is running on LiveAppDB, use LiveProductionDB; if it is running on TestAppDB, use TestProductionDB.
Obviously you can’t use variables in a View.
Any help much appreciated.
Create a synonym in LiveAppDB pointing to LIVEPRODUCTION.DB.DBTABLE and a synonym with the same exact name in TestAppDB pointing to TESTPRODUCTION.DB.DBTABLE. Use the synonym in your query. You probably should put something like this in your deployment script:
IF DB_NAME() = 'LiveApplicationDB'
    CREATE SYNONYM dbo.DBTABLE FOR LIVEPRODUCTION.DB.DBTABLE;
IF DB_NAME() = 'TestApplicationDB'
    CREATE SYNONYM dbo.DBTABLE FOR TESTPRODUCTION.DB.DBTABLE;
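With the synonym in place, the view itself no longer needs a CASE. A rough sketch of the query from the question rewritten against the synonym (column names taken from the original):

SELECT tabl1.COL1, tabl1.COL2
FROM dbo.DBTABLE AS tabl1 -- synonym resolves to the correct production DB
INNER JOIN dbo.ThisDBTable BRA
    ON tabl1.product COLLATE Latin1_General_BIN = BRA.product
WHERE tabl1.COL1 IS NOT NULL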
FYI: Most compare tools I've tried ignore the destination of a synonym and thus this will not be considered a difference.
EDIT: I'd recommend never using an object in another database directly. For example: when dbA needs some objects in dbB, I'd create a schema in dbA called dbB and place synonyms in this schema referencing the objects in dbB from dbA. In most cases I'd also create a schema called dbA in dbB and place views and sprocs there, to be used only by dbA. dbA is only allowed to use objects placed in the dbA schema in dbB. To further explain the reasoning behind this approach (see the sketch after this list):
All dependencies on dbB from dbA are clearly stated in both databases.
Moving dbB to another server is a breeze: simply create a linked server and modify all synonyms. No sproc needs to be modified or tested.
Data model changes in dbB don't necessarily mean you'd need to make changes in dbA; just make sure the interfaces of the objects in schema dbA in dbB remain the same.
All code that matters is exactly the same between test and production, which benefits deployments and code compares.
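A minimal sketch of that convention, with hypothetical object names (the first batch runs in dbA, the second in dbB):

-- In dbA: a schema named after dbB, containing only synonyms
CREATE SCHEMA dbB;
GO
CREATE SYNONYM dbB.CustomerList FOR dbB.dbA.CustomerList;
GO

-- In dbB: a schema named after dbA, containing the interface objects dbA may use
CREATE SCHEMA dbA;
GO
CREATE VIEW dbA.CustomerList AS
SELECT CustomerId, CustomerName -- columns are illustrative
FROM dbo.Customer;
GO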
If the servers are linked, then you can use a four-part naming convention to point to your test instance - just qualify the name like so:
(CASE WHEN DB_NAME() = 'LiveApplicationDB' THEN LiveServer.LIVEPRODUCTION.DB.DBTABLE ELSE TestServer.TESTPRODUCTION.DB.DBTABLE END)
But you'll have to get your DBA to agree to link the servers, or do it yourself, if you have the knowledge/power.
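If you do end up creating the link yourself, a minimal call to link another SQL Server instance looks like this (the server name here is just an illustration):

EXEC sp_addlinkedserver @server = N'TestServer';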
I'd approach this slightly differently to how you're doing it. You're always using the table dbo.ThisDBTable, so use this as the base table, then join to the others with a DB_NAME() check in the ON clause:
-- VIEW CAN RUN ON LiveApplicationDB or TestApplicationDB
SELECT BRA.COL1
,COALESCE(tab1.COL2, tab2.COL2) COL2
FROM dbo.ThisDBTable BRA
LEFT JOIN LIVEPRODUCTION.DB.DBTABLE tab1
ON BRA.product = tab1.product COLLATE Latin1_General_BIN
AND DB_NAME() = 'LiveApplicationDB'
LEFT JOIN TESTPRODUCTION.DB.DBTABLE tab2
ON BRA.product = tab2.product COLLATE Latin1_General_BIN
AND DB_NAME() = 'TestApplicationDB'
WHERE COALESCE(tab1.COL1, tab2.COL1) IS NOT NULL
I've been reading a lot about PostgreSQL's TOAST, and there's one thing I seem to be missing. They mention in the documentation that, "there are four different strategies for storing TOAST-able columns on disk," those being: PLAIN, EXTENDED, EXTERNAL, and MAIN. They also have a very clear way to define which strategy to use for your column, which can be found here. Essentially, it would be something like this:
ALTER TABLE table_name ALTER COLUMN column_name SET STORAGE EXTERNAL
The one thing I don't see is how to easily retrieve that setting. My question is, is there a simple way (either through commands or pgAdmin) to retrieve the storage strategy being used by a column?
This is stored in pg_attribute.attstorage, e.g.:
select att.attname,
       case att.attstorage
           when 'p' then 'plain'
           when 'm' then 'main'
           when 'e' then 'external'
           when 'x' then 'extended'
       end as attstorage
from pg_attribute att
  join pg_class tbl on tbl.oid = att.attrelid
  join pg_namespace ns on tbl.relnamespace = ns.oid
where tbl.relname = 'table_name'
  and ns.nspname = 'public'
  and not att.attisdropped;
Note that attstorage is only meaningful for varlena columns, i.e. where attlen is -1; fixed-length types are always stored plain.
While I like @a_horse_with_no_name's method, after I posted this question I expanded my search to general table information and found that if you use psql, you can use the command described here; the result is a table listing all of the columns, their types, modifiers, storage types, stats targets, and descriptions.
So, using psql this info can be found with:
\d+ table_name
I just figured I'd post this in case anyone wanted another solution.
Using Postgres 9.3:
I am attempting to automatically populate a table when an insert is performed on another table. This seems like a good use for rules, but after adding the rule, I am no longer able to perform inserts into the rule's table using a writable CTE. Here is an example:
CREATE TABLE foo (
id INT PRIMARY KEY
);
CREATE TABLE bar (
id INT PRIMARY KEY REFERENCES foo
);
CREATE RULE insertFoo AS ON INSERT TO foo DO INSERT INTO bar VALUES (NEW.id);
WITH a AS (SELECT * FROM (VALUES (1), (2)) b)
INSERT INTO foo SELECT * FROM a
When this is run, I get the error:
ERROR: WITH cannot be used in a query that is rewritten by rules into multiple queries
I have searched for that error string, but am only able to find links to the source code. I know that I can perform the above using row-level triggers instead, but it seems like I should be able to do this at the statement level. Why can I not use the writable CTE, when queries like this can (in this case) be easily re-written as:
INSERT INTO foo SELECT * FROM (VALUES (1), (2)) a
Does anyone know of another way to accomplish what I am attempting, other than 1) using rules, which prevents the use of WITH queries, or 2) using row-level triggers? Thanks,
TL;DR: use triggers, not rules.
Generally speaking, prefer triggers over rules, unless rules are absolutely necessary. (Which, in practice, they never are.)
Using rules introduces heaps of problems which will needlessly complicate your life down the road. You've run into one here. Another (major) one is, for instance, that the number of affected rows will correspond to that of the very last query -- if you're relying on FOUND somewhere and your query is incorrectly reporting that no rows were affected by a query, you'll be in for painful bugs.
Moreover, there's occasional talk of deprecating Postgres rules outright:
http://postgresql.nabble.com/Deprecating-RULES-td5727689.html
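For the example in the question, the rule could be replaced by a row-level trigger along these lines (a minimal sketch; the function and trigger names are mine):

CREATE FUNCTION insert_foo_fn() RETURNS trigger AS $$
BEGIN
    INSERT INTO bar VALUES (NEW.id);
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER insert_foo
AFTER INSERT ON foo
FOR EACH ROW EXECUTE PROCEDURE insert_foo_fn();

With the rule dropped and this trigger in place, the writable CTE from the question runs without the rewrite error.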
Like the other answer, I definitely recommend using INSTEAD OF triggers in preference to RULEs.
However, if for some reason you don't want to change existing VIEW RULEs and still want to use WITH, you can do so by wrapping the VIEW in a stored procedure:
create function insert_foo(int) returns void as $$
insert into foo values ($1)
$$ language sql;
WITH a AS (SELECT * FROM (VALUES (1), (2)) b)
SELECT insert_foo(a.column1) from a;
This could be useful when using some legacy db through some system that wraps statements with CTEs.
For example, the following is my query:
WITH stkpos as (
select * from mytbl
),
updt as (
update stkpos set field=(select sum(fieldn) from stkpos)
)
select * from stkpos
ERROR: relation "stkpos" does not exist
Unlike MS-SQL and some other DBs, PostgreSQL's CTE terms are not treated like a view. They're more like an implicit temp table - they get materialized, the planner can't push filters down into them or pull filters up out of them, etc.
One consequence of this is that you can't update them, because they're a copy of the original data, not just a view of it. You'll need to find another way to do what you want.
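One such way, sketched against the example above, is to update the base table directly and keep the CTE read-only (assuming the intent was to write the column sum back to mytbl):

WITH totals AS (
    SELECT sum(fieldn) AS total FROM mytbl
)
UPDATE mytbl
SET field = totals.total
FROM totals;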
If you want help with that you'll need to post a new question that has a clear and self contained example (with create table statements, etc) showing a cut down version of your real problem. Enough to actually let us understand what you're trying to accomplish and why.
Is it possible to declare a variable within a View? For example:
Declare @SomeVar varchar(8) = 'something'
gives me the syntax error:
Incorrect syntax near the keyword 'Declare'.
You are correct. Local variables are not allowed in a VIEW.
You can set a local variable in a table-valued function, which returns a result set (like a view does).
http://msdn.microsoft.com/en-us/library/ms191165.aspx
e.g.
CREATE FUNCTION dbo.udf_foo()
RETURNS @ret TABLE (col INT)
AS
BEGIN
    DECLARE @myvar INT;
    SELECT @myvar = 1;
    INSERT INTO @ret SELECT @myvar;
    RETURN;
END;
GO
SELECT * FROM dbo.udf_foo();
GO
You could use WITH to define your expressions. Then do a simple Sub-SELECT to access those definitions.
CREATE VIEW MyView
AS
WITH MyVars (SomeVar, Var2)
AS (
SELECT
'something' AS 'SomeVar',
123 AS 'Var2'
)
SELECT *
FROM MyTable
WHERE x = (SELECT SomeVar FROM MyVars)
EDIT: I tried using a CTE in my previous answer, which was incorrect, as pointed out by @bummi. This option should work instead:
Here's one option using CROSS APPLY to work around this problem:
SELECT st.Value, Constants.CONSTANT_ONE, Constants.CONSTANT_TWO
FROM SomeTable st
CROSS APPLY (
SELECT 'Value1' AS CONSTANT_ONE,
'Value2' AS CONSTANT_TWO
) Constants
@datenstation had the correct concept. Here is a working example that uses a CTE to cache the variables:
CREATE VIEW vwImportant_Users AS
WITH params AS (
SELECT
varType='%Admin%',
varMinStatus=1)
SELECT status, name
FROM sys.sysusers, params
WHERE status > varMinStatus OR name LIKE varType
SELECT * FROM vwImportant_Users
also via JOIN
WITH params AS ( SELECT varType='%Admin%', varMinStatus=1)
SELECT status, name
FROM sys.sysusers INNER JOIN params ON 1=1
WHERE status > varMinStatus OR name LIKE varType
also via CROSS APPLY
WITH params AS ( SELECT varType='%Admin%', varMinStatus=1)
SELECT status, name
FROM sys.sysusers CROSS APPLY params
WHERE status > varMinStatus OR name LIKE varType
Yes, this is correct: you can't have variables in views
(there are other restrictions too).
Views can only be used for cases where the result can be expressed as a single SELECT statement.
Using functions as spencer7593 mentioned is a correct approach for dynamic data. For static data, a more performant approach, and one consistent with SQL data design (versus the anti-pattern of writing massive procedural code in sprocs), is to create a separate table with the static values and join to it. This is extremely beneficial from a performance perspective, since the SQL engine can build effective execution plans around a JOIN, and you have the potential to add indexes as well if needed.
The disadvantage of using functions (or any inline calculated values) is that the callout happens for every potential row returned, which is costly. Why? Because SQL has to first create a full dataset with the calculated values and then apply the WHERE clause to that dataset.
Nine times out of ten you should not need dynamically calculated cell values in your queries. It's much better to figure out what you will need, design a data model that supports it, populate that data model with semi-dynamic data (via batch jobs, for instance), and let the SQL engine do the heavy lifting via standard SQL.
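A minimal sketch of that static-values-table approach, reusing the example data from the CTE answer above (the table, view, and column names are mine):

CREATE TABLE dbo.ViewParams (
    varType varchar(20) NOT NULL,
    varMinStatus int NOT NULL
);
INSERT INTO dbo.ViewParams (varType, varMinStatus) VALUES ('%Admin%', 1);
GO
CREATE VIEW vwImportant_Users2 AS
SELECT u.status, u.name
FROM sys.sysusers u
CROSS JOIN dbo.ViewParams p
WHERE u.status > p.varMinStatus OR u.name LIKE p.varType;
GO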
What I do is create a view that performs the same select as the table variable, and link that view into the second view. So a view can select from another view; this achieves the same result.
How often do you need to refresh the view? I have a similar case where the new data comes once a month; then I have to load it, and during the loading process I have to create new tables. At that moment I alter my view to reflect the changes.
I used as base the information in this other question:
Create View Dynamically & synonyms
In there, it is proposed to do it two ways:
1) Using synonyms.
2) Using dynamic SQL to create the view (this is what helped me achieve my result; see the sketch below).
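A minimal sketch of the dynamic-SQL route, borrowing the database names from the first question (the view and column names are illustrative):

DECLARE @targetDb sysname, @sql nvarchar(max);

SET @targetDb = CASE WHEN DB_NAME() = 'LiveApplicationDB'
                     THEN 'LIVEPRODUCTION' ELSE 'TESTPRODUCTION' END;

SET @sql = N'ALTER VIEW dbo.MyView AS
             SELECT COL1, COL2 FROM ' + QUOTENAME(@targetDb) + N'.DB.DBTABLE;';

EXEC sp_executesql @sql;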
Basically, what I want in my stored procedure is to return a list of tables and store this list in a variable; I need to go through every item in my list to recursively call this stored procedure. In the end I need an overall listOfTables built up by this recursion.
Any help would be most appreciated.
You should take a look at Common Table Expressions if you're on SQL 2005 or higher (not sure if they can help in your specific situation, but they're an important alternative to most recursive queries). Recursive procedures cannot nest more than 32 levels deep and are not very elegant.
You can use CTEs:
WITH q (column1, column2) AS (
SELECT *
FROM table
UNION ALL
SELECT *
FROM table
JOIN q
ON …
)
SELECT *
FROM q
However, there are some limitations: within the recursive part you cannot use aggregates, analytic functions, the TOP clause, etc.
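A concrete instance of that pattern, using a hypothetical self-referencing table dbo.TableList (TableName, ParentName):

WITH q (TableName, Level) AS (
    SELECT t.TableName, 0
    FROM dbo.TableList t
    WHERE t.ParentName IS NULL         -- anchor member: top-level rows
    UNION ALL
    SELECT c.TableName, q.Level + 1    -- recursive member: walk the children
    FROM dbo.TableList c
    JOIN q ON c.ParentName = q.TableName
)
SELECT TableName, Level FROM q;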
Are you after recursion, or just a loop through all tables? If you are using SQL Server 2005 and want to loop through all tables, you can use a table variable in your SP; try something along these lines:
declare @TableList as table (
    ID int identity (1,1),
    TableName varchar(500)
)

insert into @TableList (TableName)
select name
from sys.tables

declare @count int
declare @limit int
declare @TableName varchar(500)

set @count = 1
select @limit = max(ID) from @TableList

while @count <= @limit
begin
    select @TableName = TableName from @TableList where ID = @count

    print @TableName --replace with call to SP

    set @count = @count + 1
end
Replace the print @TableName with the call to the SP, and if you don't want this to run on every table in the DB, then change the query select name from sys.tables to only return the tables you are after.
Most likely a CTE would answer your requirement.
If you really must use a stored procedure rather than a query, then all you have to do is iterate through the table list and call the procedure for each entry, using your code of choice. And Macros already posted how to do that as I was typing, lol. And as Mehrdad already told you, there is a limit on the number of nested call levels SQL Server allows, and it is rather shallow. I'm not convinced from your explanation that you need a recursive call; it looks more like a simple iteration over a list. But if you do indeed need recursion, then remember your CS 101 class: any recursive algorithm can be transformed into a non-recursive one by using a loop and a stack.
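A minimal T-SQL sketch of that loop-plus-stack transformation (the seeding query and the processing step are placeholders):

declare @stack table (ID int identity (1,1), TableName sysname)
declare @id int
declare @name sysname

insert into @stack (TableName)
select name from sys.tables -- seed the stack

while exists (select 1 from @stack)
begin
    select top (1) @id = ID, @name = TableName
    from @stack
    order by ID desc -- pop the most recently pushed entry

    delete from @stack where ID = @id

    print @name -- process here; "recurse" by inserting more rows into @stack
end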
Stored procedures are very useful. BUT.
I recently had to work on a system that was heavily dependent on stored procedures. It was a nightmare. Half the business logic was in one language (Java, in this case), and the other half was in the database in stored procedures. Worse yet, half the application was under source code control and the other half was one database crash from being lost forever (bad backup processes). Plus, all those lovely little tools I have for scanning, analyzing and maintaining source code can't work with sources inside the database.
I'm not inherently anti-stored-procedure, but oh, how they can be abused. Stored procedures are excellent for when you need to enforce rules against data coming from a multiplicity of sources, and there's no better way to offload heavy-duty record access off the webservers (and onto the DBMS server). But for the most part, I'd rather use a View than a Stored Procedure and an application programming language for the business logic. I know it makes some things a little more complex. But it can make life a whole lot easier.