Is updating the catalog table directly on Postgres 11 a good idea, since it runs quicker? I also noticed that this catalog update seems to apply only to future data, not to existing rows; is this true in all cases?
UPDATE pg_catalog.pg_attribute SET attnotnull = TRUE
WHERE attrelid = 'tttt2'::regclass
AND attname = 'b';
Modifying catalogs like that is a bad idea. It can lead to data corruption unless you know exactly what you are doing.
In this special case, I recommend that you create a NOT VALID check constraint and use ALTER TABLE to validate it later. That avoids a long ACCESS EXCLUSIVE lock.
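For example, a minimal sketch using the table and column from the question (the constraint name is arbitrary):
ALTER TABLE tttt2 ADD CONSTRAINT b_not_null CHECK (b IS NOT NULL) NOT VALID;  -- fast, only a brief lock
ALTER TABLE tttt2 VALIDATE CONSTRAINT b_not_null;  -- scans existing rows later, with a weaker lock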
Thanks for the responses.
I don't dare to update the catalog in prod without knowing the impact, so I followed the NOT VALID approach (also suggested by @laurenz). I will try to spend some time on this and share the catalog impact once I have done some research.
In my LiveAppDB we have a load of views which reference a larger LiveProduction DB. What I'd like to do is switch the code to look at the TestProduction DB, depending on which application DB the view is running on.
-- VIEW CAN RUN ON LiveApplicationDB or TestApplicationDB
SELECT COL1, COL2
FROM (CASE WHEN DB_NAME() = 'LiveApplicationDB' THEN LIVEPRODUCTION.DB.DBTABLE ELSE TESTPRODUCTION.DB.DBTABLE END) AS tabl1 -- CASE TO DETERMINE WHICH PRODUCTION DB TO USE
INNER JOIN dbo.ThisDBTable BRA
ON tabl1.product COLLATE Latin1_General_BIN = BRA.product
WHERE (tabl1.COL1 IS NOT NULL)
In a bid to help clarify this:
If the view runs on LiveAppDB, use LiveProductionDB; if it runs on TestAppDB, use TestProductionDB.
Obviously you can't use variables in a view.
Any help much appreciated.
Create a synonym in LiveAppDB pointing to LIVEPRODUCTION.DB.DBTABLE and a synonym with the exact same name in TestAppDB pointing to TESTPRODUCTION.DB.DBTABLE. Use the synonym in your query. You probably should put something like this in your deployment script:
IF DB_NAME() = 'LiveApplicationDB'
CREATE SYNONYM dbo.DBTABLE FOR LIVEPRODUCTION.DB.DBTABLE;
ELSE
CREATE SYNONYM dbo.DBTABLE FOR TESTPRODUCTION.DB.DBTABLE;
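The view itself then only references the synonym; a sketch based on the query from the question:
SELECT tabl1.COL1, tabl1.COL2
FROM dbo.DBTABLE AS tabl1
INNER JOIN dbo.ThisDBTable BRA
ON tabl1.product COLLATE Latin1_General_BIN = BRA.product
WHERE tabl1.COL1 IS NOT NULL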
FYI: Most compare tools I've tried ignore the destination of a synonym and thus this will not be considered a difference.
EDIT: I'd recommend never using an object in another database directly. For example: when dbA needs some objects in dbB, I'd create a schema in dbA called dbB and place synonyms in this schema referencing the objects in dbB from dbA. In most cases I'd also create a schema called dbA in dbB and place views and sprocs there, to be used only by dbA. dbA is only allowed to use objects placed in the dbA schema in dbB (see the sketch after the list below). To further explain the reasoning behind this approach:
All dependencies on dbB from dbA are clearly stated in both databases.
Moving dbB to another server is a breeze, simply create a linked server and modify all synonyms. No sproc needs to be modified or tested.
Data model changes in dbB don't necessarily mean you need to make changes in dbA; just make sure the interfaces of the objects in schema dbA in dbB remain the same.
All code that matters is exactly the same between test and production which benefits deployments and code compares
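A minimal sketch of that layout, using the hypothetical object names SomeTable and SomeData:
-- In dbB: a schema named after the consumer, exposing only the intended interface
USE dbB;
GO
CREATE SCHEMA dbA;
GO
CREATE VIEW dbA.SomeData AS SELECT COL1, COL2 FROM dbo.SomeTable;
GO
-- In dbA: a schema named after the source, containing synonyms to that interface
USE dbA;
GO
CREATE SCHEMA dbB;
GO
CREATE SYNONYM dbB.SomeData FOR dbB.dbA.SomeData;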
If the servers are linked, then you can use a four-part naming convention to point to your test instance - just qualify the name like so:
(CASE WHEN DB_NAME() = 'LiveApplicationDB' THEN LiveServer.LIVEPRODUCTION.DB.DBTABLE ELSE TestServer.TESTPRODUCTION.DB.DBTABLE END)
But you'll have to get your DBA to agree to link the servers, or do it yourself, if you have the knowledge/power.
Donna
I'd approach this slightly differently from how you're doing it. You're always using the table dbo.ThisDBTable, so use that as the base table, then join to the others with the DB_NAME() check in the ON clause:
-- VIEW CAN RUN ON LiveApplicationDB or TestApplicationDB
SELECT BRA.COL1
,COALESCE(tab1.COL2, tab2.COL2) COL2
FROM dbo.ThisDBTable BRA
LEFT JOIN LIVEPRODUCTION.DB.DBTABLE tab1
ON BRA.product = tab1.product COLLATE Latin1_General_BIN
AND DB_NAME() = 'LiveApplicationDB'
LEFT JOIN TESTPRODUCTION.DB.DBTABLE tab2
ON BRA.product = tab2.product COLLATE Latin1_General_BIN
AND DB_NAME() = 'TestApplicationDB'
WHERE tab1.COL1 IS NOT NULL
Given a client library that can only execute one statement in a batch, if you run
query.exec_sql("SELECT * FROM (" + sql + ")")
Are there any vectors where sql can run anything but a SELECT?
Are there any other ways to temporarily de-elevate a connection so it can only perform SELECT?
Note: It looks like SET ROLE solves this problem, but the issue I have is that I am unable to create a role upfront in an easy way.
While you can put data-modifying statements in queries by embedding INSERT/UPDATE/DELETE statements in CTEs, they're only allowed at the top level, so that's not an issue.
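For example (using a hypothetical accounts table), a caller who tries to smuggle a data-modifying CTE through the wrapper is rejected at parse time:
SELECT * FROM (
WITH x AS (DELETE FROM accounts RETURNING *) SELECT * FROM x
) AS sub;
-- ERROR:  WITH clause containing a data-modifying statement must be at the top level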
You can, however, invoke a function, which could contain just about anything. Even if you ran this in a read-only transaction, a function could potentially elevate it to read-write.
But the solution is simple: If you don't want to allow the caller to do something, don't give them permission to do it. Create a user with only the GRANTs they need, and you can execute sql as-is.
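A rough sketch of such a SELECT-only user (role, database, and schema names are made up):
CREATE ROLE readonly_user LOGIN PASSWORD 'secret';
GRANT CONNECT ON DATABASE mydb TO readonly_user;
GRANT USAGE ON SCHEMA public TO readonly_user;
GRANT SELECT ON ALL TABLES IN SCHEMA public TO readonly_user;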
Without the ability to define permissions, the closest you're going to get is probably a read-only transaction and/or an explicit rollback after the query, but there will still be holes you can't plug (e.g. you can't roll back a setval() call).
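A sketch of that fallback, with the caveats above still applying:
START TRANSACTION READ ONLY;
-- run the caller-supplied SELECT here
ROLLBACK;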
If the sql string came from a third party, then it can be used for SQL injection. I'm not sure if that is what you are asking, because it seems too basic for a 56k-point user to ask; sorry if that is not the case. The string could be:
some_table; insert into user_table (user_id, admin_privilege) values (1, true);
I was reading the PostgreSQL documentation here, and came across the following code snippet:
EXECUTE 'SELECT count(*) FROM mytable WHERE inserted_by = $1 AND inserted <= $2'
INTO c
USING checked_user, checked_date;
The documentation states that "This method is often preferable to inserting data values into the command string as text: it avoids run-time overhead of converting the values to text and back, and it is much less prone to SQL-injection attacks since there is no need for quoting or escaping".
Can you show me how this code is prone to SQL injection at all?
Edit: in every other RDBMS I have worked with, this would completely prevent SQL injection. What is implemented differently in PostgreSQL?
The quick answer is that it isn't, by itself, prone to SQL injection. As I understand your question, you are asking why we don't just say so. Since you are looking for scenarios where this might lead to SQL injection, consider that mytable might be a view, and so could have additional functions behind it. Those functions might be vulnerable to SQL injection.
So you can't look at a query and conclude that it is definitely not susceptible to SQL injection. The best you can do is indicate that this specific level of your application does not, by itself, raise SQL injection concerns.
Here is an example of a case where sql injection might very well happen.
CREATE OR REPLACE FUNCTION ban_user() returns trigger
language plpgsql security definer as
$$
begin
insert into banned_users (username) values (new.username);
execute 'alter user ' || new.username || ' WITH VALID UNTIL ''YESTERDAY''';
return new;
end;
$$;
Note that utility statements like ALTER USER cannot be parameterized the way the query above is, and we forgot to use quote_ident() around new.username, thus making the field vulnerable.
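For illustration only, the vulnerable line could have been written defensively with quote_ident(), leaving the rest of the function unchanged:
execute 'alter user ' || quote_ident(new.username) || ' WITH VALID UNTIL ''YESTERDAY''';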
CREATE OR REPLACE VIEW banned_users_today AS
SELECT username FROM banned_users where banned_date = 'today';
CREATE TRIGGER i_banned_users_today INSTEAD OF INSERT ON banned_users_today
FOR EACH ROW EXECUTE PROCEDURE ban_user();
EXECUTE 'insert into banned_users_today (username) values ($1)'
USING 'postgres with password ''boo''; drop function ban_user() cascade; --';
So no, it doesn't completely solve the problem even if used everywhere it can be used. And proper use of quote_literal() and quote_ident() doesn't always solve the problem either.
The thing is that the problem can always be at a lower level than the query you are executing.
The bound parameters prevent garbage from manipulating the statement into doing anything other than what it's intended to do.
This guarantees no possibility for SQL-injection attacks short of a Postgres bug. (See H2C03's link for examples of what could go wrong.)
I imagine the "much less prone to SQL-injection attacks" amounts to CYA verbiage in case such a thing were ever to arise.
SQL injection is usually associated with large data dumps on pastebin.com, and such a scenario won't work here even if the example used concatenation rather than variables. That's because the COUNT(*) will aggregate all the data you'd be trying to steal.
But I can imagine scenarios where the count of arbitrary records would be sufficiently valuable information - e.g. the number of a competitor's clients, the number of products sold, etc. And actually, recalling some of the really tricky blind SQL injection methods, it might be possible to build a query that, using COUNT alone, would allow an attacker to iteratively recover actual text from the database.
It would also be much easier to exploit on a database sufficiently old and misconfigured to allow the ; separator, in which case the attacker might just append a completely separate query.
Well I've been using #temp tables in standard T-SQL coding for years and thought I understood them.
However, I've been dragged into a project based in MS Access, utilizing pass-through queries, and found something that has really got me puzzled.
Though maybe it's the inner workings of Access that has me fooled !?
Here we go: under normal usage, I understand that if I create a temp table in a sproc, its scope ends with the end of the sproc, and it is dropped by default.
In the Access example, I found it was possible to do this in one Query:
select top(10) * into #myTemp from dbo.myTable
And then this in second separate query:
select * from #myTemp
How is this possible ?
If a temp table dies with the current session, does this mean that Access keeps a single session open, and uses that session for all Queries executed ?
Or has my fundamental understanding of scope been wrong all this time ?
Hope someone out there can help clarify what is occurring under the hood !?
Many Thanks
I found this answer to a somewhat similar question:
A temp table is stored in tempdb until the connection is dropped (or, in the case of a global temp table, when the last connection using it is dropped). You can also (and it is good practice to do so) manually drop the table when you are finished using it, with a DROP TABLE statement.
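For instance, using the names from the question (the global temp table name is just to illustrate the double-hash variant):
DROP TABLE #myTemp;  -- explicit cleanup once you're done with it
SELECT TOP (10) * INTO ##myGlobalTemp FROM dbo.myTable;  -- global temp table, visible to other connections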
I hope this helps out.
I use both Firebird embedded and Firebird Server, and from time to time I need to reindex the tables using a procedure like the following:
CREATE PROCEDURE MAINTENANCE_SELECTIVITY
AS
DECLARE VARIABLE S VARCHAR(200);
BEGIN
FOR select RDB$INDEX_NAME FROM RDB$INDICES INTO :S DO
BEGIN
S = 'SET statistics INDEX ' || s || ';';
EXECUTE STATEMENT :s;
END
SUSPEND;
END
I guess this is normal using embedded, but is it really needed using a server? Is there a way to configure the server to do it automatically when required or periodically?
First, let me point out that I'm no Firebird expert, so I'm answering on the basis of how SQL Server works.
In that case, the answer is both yes, and no.
The indexes are of course updated on SQL Server, in the sense that if you insert a new row, all indexes for that table will contain that row, so it will be found. So basically, you don't need to keep reindexing the tables for that part to work. That's the "no" part.
The problem, however, is not with the index, but with the statistics. You're saying that you need to reindex the tables, but then you show code that manipulates statistics, and that's why I'm answering.
The short answer is that statistics slowly go out of whack as time goes by. They might not deteriorate to a point where they're unusable, but they will deteriorate from the perfect level they're at when you recreate/recalculate them. That's the "yes" part.
The main problem with stale statistics is that if the distribution of the keys in the indexes changes drastically, the statistics might not pick that up right away, and thus the query optimizer will pick the wrong indexes, based on the old, stale, statistics data it has on hand.
For instance, let's say one of your indexes has statistics that says that the keys are clumped together in one end of the value space (for instance, int-column with lots of 0's and 1's). Then you insert lots and lots of rows with values that make this index contain values spread out over the entire spectrum.
If you now do a query that uses a join from another table, on a column with low selectivity (also lots of 0's and 1's) against the table with this index of yours, the query optimizer might deduce that this index is good, since it will fetch many rows that will be used at the same time (they're on the same data page).
However, since the data has changed, it'll jump all over the index to find the relevant pieces, and thus not be so good after all.
After recalculating the statistics, the query optimizer might see that this index is sub-optimal for this query, and pick another index instead, which is more suited.
Basically, you need to recalculate the statistics periodically if your data is in flux. If your data rarely changes, you probably don't need to do it very often, but I would still add a maintenance job with some regularity that does this.
As for whether or not it is possible to ask Firebird to do this on its own, again I'm on thin ice, but I suspect there is something similar. In SQL Server you can set up maintenance jobs that do this on a schedule, and at the very least you should be able to kick off a batch file from the Windows scheduler to do something like it.
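In SQL Server terms, the manual equivalent of such a job would be something like this (the table name is just an example):
UPDATE STATISTICS dbo.myTable WITH FULLSCAN;
-- or, for every table in the current database:
EXEC sp_updatestats;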
That does not reindex; it recomputes the selectivity weights for indexes, which the optimizer uses to pick the most suitable index. You don't need to do that unless the index size changes a lot. If you create the index before you add data, though, you do need to do the recalculation.
Embedded and Server should have exactly the same functionality apart from the process model.
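For a single index, the manual equivalent is (the index name is just an example):
SET STATISTICS INDEX IDX_CUSTOMER_NAME;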
I wanted to update this answer for newer Firebird. Here is the updated DSQL:
SET TERM ^ ;
CREATE OR ALTER PROCEDURE NEW_PROCEDURE
AS
DECLARE VARIABLE S VARCHAR(300);
begin
FOR select 'SET statistics INDEX ' || RDB$INDEX_NAME || ';'
FROM RDB$INDICES
WHERE RDB$INDEX_NAME <> 'PRIMARY' INTO :S
DO BEGIN
EXECUTE STATEMENT :s;
END
end^
SET TERM ; ^
GRANT EXECUTE ON PROCEDURE NEW_PROCEDURE TO SYSDBA;
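It can then be run on demand (or from an external scheduler) with:
EXECUTE PROCEDURE NEW_PROCEDURE;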