I want to be able to create an SQL schema via <cfquery>. I know this is not safe:
<cfquery dataSource="#form.datasource#">
CREATE SCHEMA [#form.schema#] AUTHORIZATION [dbo]
</cfquery>
And this throws an error:
<cfquery dataSource="#form.datasource#">
CREATE SCHEMA <cfqueryparam CFSQLType="cf_sql_varchar" value="#form.schema#"> AUTHORIZATION [dbo]
</cfquery>
And stored procedures are not an option, because the stored procedure would have to be part of the schema, which doesn't yet exist.
It's only "not safe" if you don't verify it's safe before using it. I imagine you'd be fine if you simply validate that the form.schema value is a sequence of safe characters and nothing else. That's a simple regex: ^\w+$ (allows A-Z, a-z, 0-9, and underscore).
And you can't use a <cfqueryparam> as those are for parameter values, not random bits of the SQL statement. Ref: "What one can and cannot do with <cfqueryparam>"
I am working with the SCD Type 2 transformation in SAS Data Integration Studio (4.905) and using Postgres (12) as the database.
I am facing the following error when I try to execute a query via passthrough:
When using passthrough in Postgres, the SCD Type 2 transformation doesn't enclose the table name in quotes (which would preserve the uppercase name, since Postgres folds all unquoted identifiers to lowercase) and so doesn't find the table, as you can see.
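For illustration, this is the stock Postgres case-folding behaviour (the table name here is just a made-up example):

CREATE TABLE "MYTABLE" (id int);  -- quoted: the uppercase name is kept
SELECT * FROM MYTABLE;            -- fails: the unquoted name is folded to mytable
SELECT * FROM "MYTABLE";          -- works: the quoted name keeps its case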
My questions are:
Is there a way to make the SCD2 transformation quote the table name used via passthrough?
Is there a way to make the SCD2 transformation create the intermediate tables' names in lowercase, so that the reference is not lost when doing passthrough?
Is there a global option in DI Studio that allows us to modify/edit temporary table names?
Source and target tables are PostgreSQL tables, with table and column names in lowercase:
If anyone has faced this problem before or knows what is missing, please let me know.
To solve this issue, we have to select the following (source and target) table options, which results in quotes around the source/target table names:
Then the SCD2 transformation automatically puts quotes around table and column names, as you can see:
I'm trying to get a basic plain SQL example working in Slick 3 on Postgres, but with a custom DB schema, say local instead of the default public one. I'm having a hard time inserting a row, as executing the following
sqlu"INSERT INTO schedule(user_id, product_code, run_at) VALUES ($userId, $code, $nextRun)"
says
org.postgresql.util.PSQLException: ERROR: relation "schedule" does not exist
The table is in place, because when I prefix schedule with local. in the insert statement it works as expected. How can I get the correct schema provided to this query?
I'm using it as part of an akka-projection handler, and all the projection internals like maintaining offsets work as expected on the local schema.
I cannot simply put the schema in as a variable, as it errors while resolving parameters:
sqlu"INSERT INTO ${schema}.schedule(user_id, product_code, run_at) VALUES ($userId, $code, $nextRun)"
You can insert the schema name using #${value} (note that #$ splices the value into the statement literally, without escaping, so it must come from a trusted source):
sqlu"INSERT INTO #${schema}.table ..."
I have schema1 in database1. I want to move all the functions of schema1 to schema2, which is present in database2. I have restored a backup file of database1 into database2 and changed the schema name. The schema name in the function call automatically got changed, but within the function definition the schema name is not changed. For example:
CREATE OR REPLACE FUNCTION schema2.execute(..)
BEGIN
select schema1."VALIDATE_SESSION"(....)
end
How can I change "schema1" to "schema2" automatically?
I have tried to store the current schema name in a variable and append it to the table, but calling current_schema() returns "public". How do I get the current schema created by the user? Because every time I need to change the schema name while generating the script.
The essential detail that is missing in your dummy function is the single quotes (or dollar-quotes, all the same) around the function body. Meaning, function bodies are saved as strings. See:
What are '$$' used for in PL/pgSQL
To contrast, consider a reference to a table (or more verbosely: schema.table(column)) in a FK constraint. Object names are resolved to the internal OID of the table (and a column number) at creation time. "Early binding". When names (including schema names) are changed later, that has no effect on the FK at all. It feels like the involved names are changed dynamically, but really, actual names just don't matter after the object has been created. So you can rename schemas all day without side effects for the FK.
Names in a function body are stored as strings and interpreted at call time. "Late binding". Those names are not changed dynamically.
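A minimal sketch of the difference (schema, table, and function names are made up):

CREATE SCHEMA s1;
CREATE TABLE s1.parent (id int PRIMARY KEY);
CREATE TABLE s1.child  (parent_id int REFERENCES s1.parent(id));
CREATE FUNCTION s1.parent_count() RETURNS bigint
  LANGUAGE sql AS 'SELECT count(*) FROM s1.parent';

ALTER SCHEMA s1 RENAME TO s2;
-- The FK still works: it was bound to the table's OID at creation time.
SELECT s2.parent_count();  -- fails at call time: relation "s1.parent" does not exist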
Meaning, you'll have to actually edit all function bodies that include a hard-coded schema name. A possible alternative is to rely on the search_path instead and not use schema names in function bodies to begin with. There are various ways to set it. See:
How does the search_path influence identifier resolution and the "current schema"
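For example, a function can leave schema names out of its body and carry its own search_path setting instead; a minimal sketch (names are made up):

CREATE FUNCTION parent_count() RETURNS bigint
  LANGUAGE plpgsql
  SET search_path = schema2   -- applied for the duration of every call
AS $$
BEGIN
  RETURN (SELECT count(*) FROM parent);  -- no hard-coded schema name
END
$$;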
But that's not always acceptable.
You could hack the dump. Or use string manipulation inside Postgres to update affected function bodies. Find affected functions with a meta-query like:
SELECT *
FROM pg_catalog.pg_proc
WHERE prosrc ~ '\mschema1\M'; -- not bullet-proof!
Either way, be wary of false matches if the schema name can be part of other strings or pop up as column name etc. And dynamic SQL can concatenate strings in arbitrary ways. If you have such evil trickery in your functions, you need to deal with it appropriately.
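A rough sketch of that string-manipulation route, generating DDL to review rather than running anything directly (same pattern and names as above; not bullet-proof either):

SELECT replace(pg_get_functiondef(oid), 'schema1.', 'schema2.') AS new_ddl
FROM   pg_catalog.pg_proc
WHERE  pronamespace = 'schema2'::regnamespace  -- functions already moved to the new schema
AND    prosrc ~ '\mschema1\M';
-- Review the generated statements before running them (e.g. with \gexec in psql).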
I have multiple tables that I would like users to be able to update through the REST API, and many (if not all) have columns with sensible defaults.
The web app itself can be designed to hide these columns, but I want to allow direct access to the api as well so that others can make use of the data however they see fit.
Unfortunately, this means they can set the defaulted columns explicitly (set timestamp columns to 1972, or set id columns to arbitrary values).
What mechanisms are available to restrict this on the backend (Postgres 9.4)?
You should do this at the API level.
If anybody issues a malformed request (e.g. they try to overwrite an ID or a timestamp), answer with a proper status code (perhaps 400), accompanied by a meaningful message, for instance "Hey, you tried to update <field>, which is read only."
If you really insist on handling it at the DB level, here they suggest that:
The easiest way is to create a BEFORE UPDATE trigger that will compare the OLD and NEW rows and RAISE EXCEPTION if the change to the row is forbidden.
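A minimal sketch of that trigger approach, assuming a made-up table mytable with read-only columns id and created_at:

CREATE FUNCTION protect_readonly_columns() RETURNS trigger
LANGUAGE plpgsql AS $$
BEGIN
  IF NEW.id IS DISTINCT FROM OLD.id
     OR NEW.created_at IS DISTINCT FROM OLD.created_at THEN
    RAISE EXCEPTION 'id and created_at are read only';
  END IF;
  RETURN NEW;
END
$$;

CREATE TRIGGER protect_readonly_columns
BEFORE UPDATE ON mytable
FOR EACH ROW EXECUTE PROCEDURE protect_readonly_columns();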
I've had some luck experimenting with Postgres' column-level grants. It's important in a development environment to make sure that your database user isn't a superuser (if it is, create a second superuser, then revoke superuser from the dev account with ALTER ROLE).
Then, commands similar to these can be run on a table:
revoke all on schema.table from dev_user;
grant select, delete, references on schema.table to dev_user;
grant update (col1, col2) on schema.table to dev_user;
grant insert (col1, col2) on schema.table to dev_user;
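With those grants in place, an attempt to touch a column outside the grant lists fails outright (col3 here is a made-up, non-granted column):

UPDATE schema.table SET col3 = 'x';  -- fails with a "permission denied" error for dev_user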
Some caveats:
Remember to grant "references" as well if another table will fkey to it.
Remember to give the columns that are not in those grant lists sane defaults, because the API will be unable to set them at all.
DO NOT FORGET TO CREATE A SECOND SUPERUSER ACCOUNT BEFORE REVOKING SUPERUSER STATUS FROM THE DEV ACCOUNT. It is possible to recover from this, but it's a big pain in the ass.
Also, if you're keeping these grant/revocations in the same file as the create table statement, the following form might be of use:
do $$begin execute 'grant select, delete, references on schema.table to ' || current_user; end$$;
This way the statements will translate correctly to production, which may not use the same username as in development.
PostgreSQL since version 9.3 supports updatable views, so instead of exposing the actual table you can expose a view with a limited subset of columns:
CREATE TABLE foo (id SERIAL, name VARCHAR, protected NUMERIC DEFAULT 0);
CREATE VIEW foo_v AS SELECT name FROM foo;
Now you can do things like:
INSERT INTO foo_v VALUES ('foobar');
UPDATE foo_v SET name = 'foo' WHERE name = 'foobar';
If you need more, you can use an INSTEAD INSERT/UPDATE rule or an INSTEAD OF INSERT trigger.
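For instance, a minimal sketch of an INSTEAD OF INSERT trigger on the view above (only needed once the view stops being automatically updatable):

CREATE FUNCTION foo_v_insert() RETURNS trigger
LANGUAGE plpgsql AS $$
BEGIN
  INSERT INTO foo (name) VALUES (NEW.name);  -- protected columns keep their defaults
  RETURN NEW;
END
$$;

CREATE TRIGGER foo_v_insert
INSTEAD OF INSERT ON foo_v
FOR EACH ROW EXECUTE PROCEDURE foo_v_insert();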
I need to make a function that would be triggered after every UPDATE and INSERT operation and would check the key fields of the table that the operation is performed on against some conditions.
The function (and the trigger) needs to be a universal one; it shouldn't have the table name / field names hardcoded.
I got stuck on the part where I need to access the table name and its schema, and check which fields are part of the PRIMARY KEY.
After getting the primary key info as already posted in the first answer, you can check the code in http://github.com/fgp/pg_record_inspect to get record field values dynamically in PL/pgSQL.
Have a look at How do I get the primary key(s) of a table from Postgres via plpgsql? The answer in that one should be able to help you.
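The query from that answer looks roughly like this (mytable is a placeholder):

SELECT a.attname
FROM   pg_index i
JOIN   pg_attribute a ON a.attrelid = i.indrelid
                     AND a.attnum = ANY(i.indkey)
WHERE  i.indrelid = 'mytable'::regclass  -- inside a trigger, TG_RELID can be used instead
AND    i.indisprimary;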
Note that you can't easily access record fields dynamically in PL/pgSQL; it's too strongly-typed a language for that. You'll have more luck with PL/Perl, in which you can access a hash of the columns and use regular Perl accessors to check them. (PL/Python would also work, but sadly that's an untrusted language only. PL/Tcl works too.)
In 8.4 you can use EXECUTE 'something' USING NEW, which in some cases is able to do the job.
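Putting the two together, a rough sketch of a generic trigger function that reads the primary-key values off NEW with EXECUTE ... USING (not a complete solution, just the dynamic-access part):

CREATE FUNCTION check_key_fields() RETURNS trigger
LANGUAGE plpgsql AS $$
DECLARE
  col text;
  val text;
BEGIN
  FOR col IN
    SELECT a.attname
    FROM   pg_index i
    JOIN   pg_attribute a ON a.attrelid = i.indrelid
                         AND a.attnum = ANY(i.indkey)
    WHERE  i.indrelid = TG_RELID
    AND    i.indisprimary
  LOOP
    EXECUTE 'SELECT ($1).' || quote_ident(col) || '::text' INTO val USING NEW;
    -- val now holds the value of one key column; check it against your conditions here
  END LOOP;
  RETURN NEW;
END
$$;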