Why would a defined variable stop working in SQLDeveloper? - oracle-sqldeveloper

I'm doing something like this to create copies of my schemas:
define dir="E:\Users\Phil\Documents\DDL_Scripts"
alter session set current_schema = AAA_TEST_1;
#"&dir\AAA\AAA_DDL.sql";
alter session set current_schema = AAA_TEST_2;
#"&dir\AAA\AAA_DDL.sql";
alter session set current_schema = BBB_TEST_1;
#"&dir\BBB\BBB_DDL.sql";
alter session set current_schema = BBB_TEST_2;
#"&dir\BBB\BBB_DDL.sql";
alter session set current_schema = CCC_TEST_1;
#"&dir\CCC\CCC_DDL.sql";
alter session set current_schema = CCC_TEST_2;
#"&dir\CCC\CCC_DDL.sql";
alter session set current_schema = DDD_TEST_1;
#"&dir\DDD\DDD_DDL.sql";
alter session set current_schema = DDD_TEST_2;
#"&dir\DDD\DDD_DDL.sql";
It all runs fine for the first five - AAA_TEST_1, AAA_TEST_2, BBB_TEST_1, BBB_TEST_2, and CCC_TEST_1.
Then, it starts failing:
Error starting at line : 213 in command -
#"&dir\CCC\CCC_DDL.sql"
Error report -
SP2-0310: Unable to open file: "&dir\CCC\CCC_DDL.sql"
If I run the whole script again, they all fail with the same problem. I have to close the connection and re-connect in order for the DEFINE to work.
OK, I've found the problem now, but I'll post this anyway for future reference.

The problem lies in CCC_DDL.sql itself, which contains the following line:
set define off;
So from that point on, the &dir variable is no longer substituted. I should have thought to check inside the script earlier.
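A defensive fix, once you know a called script can flip the setting, is to switch substitution back on after each call. A sketch (assuming SQL Developer honors SET DEFINE the same way SQL*Plus does):

```sql
alter session set current_schema = CCC_TEST_1;
@"&dir\CCC\CCC_DDL.sql"
-- the called script may have run SET DEFINE OFF; restore substitution
set define on
```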

Related

How to unset (remove, not reset) a custom config parameter?

I am using set_config to set some context on the session (i.e., user context).
Once I set the context for a parameter, I can't seem to get rid of it. RESET / SET param TO DEFAULT will empty the value but not remove it altogether.
select current_setting('my.test1'); -- SQL Error [42704]: ERROR: unrecognized configuration parameter "my.test1"
select set_config('my.test1','123',false);
select current_setting('my.test1'); -- returns 123
set my.test1 to default; --same as reset my.test1
select current_setting('my.test1'); --returns an empty string rather than an exception
How can I remove it (so that the exception is raised again)?
I am catching 42704, but it won't be thrown if I just "reset" the parameter.
P.S. I assume pg_reload_conf might help, but it seems too aggressive for this simple task.
Thanks.
The answer is that you cannot (in Postgres 10) if you are in the same session.
Those empty parameters ONLY go away if you exit the session and open a new one. pg_reload_conf() has no effect on custom variables that have been set in a session, or locally in a transaction, and does not remove the parameter. They just stay as '' ... an empty string.
For me this is a very legitimate question and issue as well.
I have been seeing the same behaviour with custom (i.e. name_one.name_two) configuration parameters while developing a configuration-setting wrapper to overlay into individual schemas.
Once the parameter has been set locally with e.g. set_config(_name_, _value_, TRUE), or at session level with set_config(_name_, _value_, FALSE), it is not removed by setting it to NULL, or by RESET, or by SET ... TO DEFAULT. I have found no way around this, after testing and testing and questioning my own perception of my slightly nested functions and scoping. So my only answer has been to alter one of my pure SQL language functions to PL/pgSQL and test explicitly for the particular parameter that I was relying on as not existing, because the call that allows missing_ok, current_setting('_pre._global_int_', TRUE), does not return NULL if the parameter has been set at any earlier point in the session, locally or not.
It had been frustrating me too, and I was very happy to find that this question had already been asked, so here I give the answer:
it cannot be done in the same session in PG 10
(I have not tried it yet in 11, 12, or 13.)
UPDATE:
I just found this answer, https://stackoverflow.com/a/50929568/14653862, by Laurenz Albe, which also says that within the same session you cannot.
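The behaviour, and the usual workaround, fit in a few lines. A sketch (my.test1 is the parameter name from the question; NULLIF is one way to treat the leftover empty string as "not set"):

```sql
select set_config('my.test1', '123', false);   -- session-level set
reset my.test1;                                -- empties the value, does not remove it
select current_setting('my.test1', true);      -- returns '', not NULL

-- workaround: map the leftover empty string back to NULL
select nullif(current_setting('my.test1', true), '');
```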

How to get permission to change run-time parameter?

This wonderful answer proposes the GUC pattern of using run-time parameters to detect the current user inside a trigger (as one solution). It seemed to suit me too. But there is a problem: when I declare the variable in postgresql.conf it is usable inside the trigger and I can access it from queries, but I can't change it:
# SET rkdb.current_user = 'xyzaaa';
ERROR: syntax error at or near "current_user"
LINE 1: SET rkdb.current_user = 'xyzaaa';
The error message is misleading, so I did not dig into it for a while, but now it seems this user (the database owner) has no permission to change params set in the global configuration.
I can set any other params:
# SET jumala.kama = 24;
SET
And read it back:
# SHOW jumala.kama;
jumala.kama
-------------
24
(1 row)
I can't SHOW globally set params:
# SHOW rkdb.current_user;
ERROR: syntax error at or near "current_user"
LINE 1: SHOW rkdb.current_user;
^
but I can reach it with current_setting() function:
# select current_setting('rkdb.current_user');
current_setting
-----------------
www
(1 row)
So my guess is that my database owner does not have permission to access this param. How could I:
set the needed permissions?
or, even better,
set run-time params with database owner rights?
current_user is an SQL standard function, so your use of that name confuses the parser.
Either use a different name or surround it with double quotes like this:
rkdb."current_user"

How to remove configuration parameter

In Postgres it is possible to create your own configuration parameters, something like a "cookie" that persists for the duration of either a session or a transaction.
This is done like that:
SELECT set_config(setting_name, new_value, is_local)
or
SET [ SESSION | LOCAL ] configuration_parameter { TO | = } { value | 'value' | DEFAULT }
A LOCAL setting is supposed to persist only for the duration of the transaction, but it affects the configuration parameter even after the transaction ends: instead of said parameter being unrecognized again, it will now be set to an empty string.
Question
How to make said parameter unrecognized again, without reconnecting?
This does not work:
SELECT set_config('settings.myapp.value', null, true);
RESET settings.myapp.value;
This will not return NULL, instead it gives empty string:
SELECT current_setting('settings.myapp.value', true);
I can of course work around this, but I would like to know if I can somehow revert state of configuration parameter back to what it was before "transaction only" change.
SELECT nullif(current_setting('settings.myapp.value', true), '');
You cannot do that.
If you create a new parameter, it is created as a “placeholder” parameter.
If you later load a module that defines that parameter, it will be converted to a “real” parameter on the fly.
But there is no way to delete a parameter during the lifetime of a database session.

Easiest way to silence "word is too long to be indexed" notices in PostgreSQL

I have some SQL statements that cause this to happen:
NOTICE: word is too long to be indexed
DETAIL: Words longer than 2047 characters are ignored.
What's the easiest way to not have these notices be generated in the first place? (It's a long story why I'd want to do it that way.)
An example of such a statement is this:
update rev set html = regexp_replace(html,
'***=<a href="' || old.url || '">',
'<a href="' || new.url || '">',
'gi')
where id in (
select id
from rev
where to_tsvector('tags_only', html) @@
plainto_tsquery('tags_only','<a href="' || old.url || '">')
)
It's not the A tags with long urls or anything causing the problem. It's probably embedded CDATA-style graphics. I don't care that they're not indexed, whatever they are. I just want these notices to not occur.
If you don't mind suppressing all notices, just change PostgreSQL's error-reporting level. client_min_messages defines the lowest level of error/warning/notice messages sent to the client; log_min_messages does the same for messages saved in the log. Possible values are: DEBUG5, DEBUG4, DEBUG3, DEBUG2, DEBUG1, INFO, NOTICE, WARNING, ERROR, LOG, FATAL, PANIC.
edit:
Disable for this query only: SET LOCAL client_min_messages TO WARNING;
Disable for this session only: SET SESSION client_min_messages TO WARNING;
Disable for this user: ALTER ROLE username SET client_min_messages TO WARNING;
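Applied to the statement in the question, the transaction-scoped form would look something like this (a sketch; SET LOCAL only has an effect inside an explicit transaction):

```sql
begin;
set local client_min_messages to warning;  -- NOTICEs suppressed for this transaction only
-- run the UPDATE from the question here; the "word is too long to be indexed"
-- notices will not be sent to the client
commit;
```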

SQLAlchemy, Psycopg2 and Postgresql COPY

It looks like Psycopg has a custom command for executing a COPY:
psycopg2 COPY using cursor.copy_from() freezes with large inputs
Is there a way to access this functionality from within SQLAlchemy?
The accepted answer is correct, but if you want more than just EoghanM's comment to go on, the following worked for me for COPYing a table out to CSV...
from sqlalchemy import create_engine
from sqlalchemy.orm import sessionmaker  # sessionmaker lives in sqlalchemy.orm

eng = create_engine("postgresql://user:pwd@host:5432/db")
ses = sessionmaker(bind=eng)

dbcopy_f = open('/tmp/some_table_copy.csv', 'wb')
copy_sql = 'COPY some_table TO STDOUT WITH CSV HEADER'
fake_conn = eng.raw_connection()
fake_cur = fake_conn.cursor()
fake_cur.copy_expert(copy_sql, dbcopy_f)
dbcopy_f.close()
The sessionmaker isn't necessary, but if you're in the habit of creating the engine and the session at the same time, you'll need to separate them to use raw_connection (unless there is some way to access the engine through the session object that I don't know about). The SQL string provided to copy_expert is also not the only way to do it; there is a basic copy_to function that you can use with a subset of the parameters that you could pass to a normal COPY TO query. Overall performance of the command seems fast to me, copying out a table of ~20,000 rows.
http://initd.org/psycopg/docs/cursor.html#cursor.copy_to
http://docs.sqlalchemy.org/en/latest/core/connections.html#sqlalchemy.engine.Engine.raw_connection
If your engine is configured with a psycopg2 connection string (which is the default, so either "postgresql://..." or "postgresql+psycopg2://..."), you can create a psycopg2 cursor from an SQL Alchemy session using
cursor = session.connection().connection.cursor()
which you can use to execute
cursor.copy_from(...)
The cursor will be active in the same transaction as your session currently is. If a commit or rollback happens, any further use of the cursor will throw a psycopg2.InterfaceError; you would have to create a new one.
You can use:
import io

def to_sql(engine, df, table, if_exists='fail', sep='\t', encoding='utf8'):
    # Create table
    df[:0].to_sql(table, engine, if_exists=if_exists)
    # Prepare data
    output = io.StringIO()
    df.to_csv(output, sep=sep, header=False, encoding=encoding)
    output.seek(0)
    # Insert data
    connection = engine.raw_connection()
    cursor = connection.cursor()
    cursor.copy_from(output, table, sep=sep, null='')
    connection.commit()
    cursor.close()
I insert 200,000 rows in 5 seconds instead of 4 minutes.
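The buffer-staging step above can be exercised without a database; this stdlib-only sketch (hypothetical rows standing in for the DataFrame) shows the file-like object that copy_from would then consume:

```python
import csv
import io

# Stage rows in an in-memory, tab-separated buffer, mirroring what
# df.to_csv(output, sep='\t', header=False) produces above.
rows = [(1, 'alice'), (2, 'bob')]  # hypothetical data
output = io.StringIO()
writer = csv.writer(output, delimiter='\t', lineterminator='\n')
writer.writerows(rows)
output.seek(0)  # rewind so copy_from() reads from the start

# cursor.copy_from(output, 'some_table', sep='\t', null='') would read it here
print(output.read(), end='')
```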
It doesn't look like it.
You may have to just use psycopg2 to expose this functionality and forego the ORM capabilities. I guess I don't really see the benefit of ORM in such an operation anyway since it's a straight bulk insert and dealing with individual objects a la an ORM would not really make a whole lot of sense.
If you're starting from SQLAlchemy, you need to first get to the connection engine (also known by the property name bind on some SQLAlchemy objects):
engine = create_engine('postgresql+psycopg2://myuser:password@localhost/mydb')
# or
engine = session.get_bind()
# or any other way you know to get to the engine
From the engine you can isolate a psycopg2 connection:
# get a psycopg2 connection
connection = engine.connect().connection
# get a cursor on that connection
cursor = connection.cursor()
Here are some templates for the COPY statement to use with cursor.copy_expert(), a more complete and flexible option than copy_from() or copy_to(), as indicated here: https://www.psycopg.org/docs/cursor.html#cursor.copy_expert.
# to dump to a file
dump_to = """
COPY mytable
TO STDOUT
WITH (
FORMAT CSV,
DELIMITER ',',
HEADER
);
"""
# to copy from a file:
copy_from = """
COPY mytable
FROM STDIN
WITH (
FORMAT CSV,
DELIMITER ',',
HEADER
);
"""
Check out what the options above mean, and others that may be of interest to your specific situation, at https://www.postgresql.org/docs/current/static/sql-copy.html.
IMPORTANT NOTE: The documentation of cursor.copy_expert() indicates using STDOUT to write out to a file and STDIN to copy from a file. But if you look at the syntax in the PostgreSQL manual, you'll notice that you can also specify the file to write to or read from directly in the COPY statement. Don't do that: that form is read and written by the server process with the server's permissions, so you're likely just wasting your time if you're not running as root (and who runs Python as root during development?). Just do what's indicated in psycopg2's docs and specify STDIN or STDOUT in your statement with cursor.copy_expert(); it should be fine.
# running the copy statement
with open('/path/to/your/data/file.csv') as f:
    cursor.copy_expert(copy_from, file=f)
# don't forget to commit the changes.
connection.commit()
You don't need to drop down to psycopg2, use raw_connection nor a cursor.
Just execute the sql as usual, you can even use bind parameters with text():
engine.execute(
    text("copy some_table from :csv delimiter ',' csv").execution_options(autocommit=True),
    csv='/tmp/a.csv')
You can drop the execution_options(autocommit=True) if this PR is accepted.