How can I turn off case-sensitive matching in Vertica? Either globally or per session - unicode

Can I turn off case-sensitive data handling in Vertica per session? I want it to depend on the user, who may want to keep it case-sensitive or not.
Also, is there any setting to be modified while logging in to mark the session for Unicode data handling?

There are indeed ways. I did not test them fully, so there might be corner cases I am not aware of. The keyword you are looking for is collation. Specifically, you want to set the colstrength keyword, and I believe you want a value of 1 (case and accents are ignored).
You can do it in a few ways:
vsql only: \locale en_US#colstrength=1
from anywhere, including via ODBC/JDBC statements: SET LOCALE TO 'en_US#colstrength=1';
by overriding the Locale value in your DSN (not tested), usually in /etc/odbc.ini for ODBC
To show the effect, here is an example, first with the default, then after changing the locale:
\locale
en_US#collation=binary
select 'me' = 'ME';
?column?
----------
f
(1 row)
SET LOCALE TO 'en_US#colstrength=1';
\locale
en_US#colstrength=1
select 'me' = 'ME';
?column?
----------
t
(1 row)
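To also check the accent part of "case and accents are ignored", a quick comparison in the same session might look like this (untested sketch; the expected result is based on the colstrength=1 behaviour described above):
SELECT 'café' = 'CAFE';
-- expected: t, since colstrength=1 folds accent differences as well as case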
I am pretty sure there is more to it, but this should get you started.

Why is this empty?

Does anyone know why this:
SELECT to_tsvector('an');
returns nothing but
SELECT to_tsvector('nn');
or
SELECT to_tsvector('n');
or
SELECT to_tsvector('aa');
do?
I am testing this on PostgreSQL 13 running on Supabase.
Thanks
Because "an" is a stop word in your current setup (probably English, the default).
From the documentation
The to_tsvector function internally calls a parser which breaks the document text into tokens and assigns a type to each token. For each token, a list of dictionaries (Section 12.6) is consulted, where the list can vary depending on the token type.
And (emphasis mine)...
Some words are recognized as stop words (Section 12.6.1), which causes them to be ignored since they occur too frequently to be useful in searching.
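A quick way to see that it is the text search configuration that matters: the built-in simple configuration does not discard stop words, so the same word survives there (exact output depends on your default_text_search_config):
SELECT to_tsvector('english', 'an');  -- empty: 'an' is an English stop word
SELECT to_tsvector('simple', 'an');   -- 'an':1  (the simple configuration keeps stop words)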

How to get permission to change run-time parameter?

In this wonderful answer a GUC pattern is proposed that uses run-time parameters to detect the current user inside a trigger (as one solution). It seemed to suit me too. But the problem is: when I declare the variable in postgresql.conf, it is usable inside the trigger and I can read it from queries, but I can't change it:
# SET rkdb.current_user = 'xyzaaa';
ERROR: syntax error at or near "current_user"
LINE 1: SET rkdb.current_user = 'xyzaaa';
The error message is misleading, so I did not dig into it for a while, but now it seems this user (the database owner) has no permission to change parameters set in the global configuration.
I can set any other parameters:
# SET jumala.kama = 24;
SET
And read it back:
# SHOW jumala.kama;
jumala.kama
-------------
24
(1 row)
I can't SHOW globally set params:
# SHOW rkdb.current_user;
ERROR: syntax error at or near "current_user"
LINE 1: SHOW rkdb.current_user;
^
but I can reach it with the current_setting() function:
# select current_setting('rkdb.current_user');
current_setting
-----------------
www
(1 row)
So my guess is that my database owner does not have permission to access this parameter. How could I:
set needed permissions?
or even better
set run-time params with database owner rights?
current_user is an SQL standard function, so your use of that name confuses the parser.
Either use a different name or surround it with double quotes like this:
rkdb."current_user"
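With the quoted name, all three forms should address the same setting (a quick sketch, not tested against your exact configuration):
SET rkdb."current_user" = 'xyzaaa';
SHOW rkdb."current_user";
SELECT current_setting('rkdb.current_user');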

Transact-SQL DEFAULT keyword deprecated? Why?

In the MSDN article
Deprecated Database Engine Features in SQL Server 2016
there is a statement on the deprecation of the DEFAULT keyword (among others).
Quoted from the table:
Category: Transact-SQL
Deprecated feature: Use of DEFAULT keyword as default value.
Replacement: Do not use the word DEFAULT as a default value.
Feature name: DEFAULT keyword as a default value.
Feature ID: 187.
What is the logic behind this change? I find nothing wrong with
CREATE FUNCTION dbo.GetFirstIdByCode(@Code nvarchar(20), @ExcludeThisId int)
and in most cases, where I don't use the 2nd parameter, call it like
IF dbo.GetFirstIdByCode(@Id, DEFAULT) = 0 -- etc...
Of course, I can replace DEFAULT with NULL at every call of the function. To me, this looks like anything but progress. Why is this planned?
How should I adjust my coding style preparing for this?
The wording was incorrect: Erland raised a Connect item for this; please see that Connect item for more details.
Pasting the relevant parts from the Connect item:
The deprecated feature is:
using the word DEFAULT as the DEFAULT value.
Example:
CREATE TABLE T1
(Col1 int PRIMARY KEY,
Status varchar(10) DEFAULT 'DEFAULT' )
or
CREATE DEFAULT phonedflt AS 'DEFAULT'
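In other words, on my reading of the Connect item the entry targets the literal string 'DEFAULT' used as a default value (and the old CREATE DEFAULT statement), not the DEFAULT keyword itself, so something like this is unaffected:
CREATE TABLE T1
(Col1 int PRIMARY KEY,
 Status varchar(10) DEFAULT 'unknown')  -- DEFAULT keyword with an ordinary literal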

Setting application_name on Postgres/SQLAlchemy

Looking at the output of select * from pg_stat_activity;, I see a column called application_name, described here.
I see psql sets this value correctly (to psql...), but my application code (psycopg2/SQLAlchemy) leaves it blank.
I'd like to set this to something useful, like web.1, web.2, etc, so I could later on correlate what I see in pg_stat_activity with what I see in my application logs.
I couldn't find how to set this field using SQLAlchemy (and if push comes to shove, even with raw SQL; I'm using PostgreSQL 9.1.7 on Heroku, if that matters).
Am I missing something obvious?
The answer to this is a combination of:
http://initd.org/psycopg/docs/module.html#psycopg2.connect
Any other connection parameter supported by the client library/server can be passed either in the connection string or as keywords. The PostgreSQL documentation contains the complete list of the supported parameters. Also note that the same parameters can be passed to the client library using environment variables.
where the variable we need is:
http://www.postgresql.org/docs/current/static/runtime-config-logging.html#GUC-APPLICATION-NAME
The application_name can be any string of less than NAMEDATALEN characters (64 characters in a standard build). It is typically set by an application upon connection to the server. The name will be displayed in the pg_stat_activity view and included in CSV log entries. It can also be included in regular log entries via the log_line_prefix parameter. Only printable ASCII characters may be used in the application_name value. Other characters will be replaced with question marks (?).
combined with :
http://docs.sqlalchemy.org/en/rel_0_8/core/engines.html#custom-dbapi-args
String-based arguments can be passed directly from the URL string as query arguments: (example...) create_engine() also takes an argument connect_args which is an additional dictionary that will be passed to connect(). This can be used when arguments of a type other than string are required, and SQLAlchemy’s database connector has no type conversion logic present for that parameter
from that we get:
e = create_engine("postgresql://scott:tiger@localhost/test?application_name=myapp")
or:
e = create_engine("postgresql://scott:tiger@localhost/test",
                  connect_args={"application_name": "myapp"})
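And since the question mentions raw SQL as a fallback: application_name is an ordinary run-time setting, so it can also be changed after connecting, per session (a sketch; the value only lasts for that connection):
SET application_name = 'web.1';
SHOW application_name;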
If you're using the asyncpg driver, you should use
conn = await asyncpg.connect(server_settings={'application_name': 'foo'})
src - https://github.com/MagicStack/asyncpg/issues/204#issuecomment-333917251

How to handle backslash (\) in encrypt/decrypt

I am using an UPDATE query, i.e.:
UPDATE tbl_ecpuser
SET ecpuser_fullname = 'Operator',
    ecpuser_password = encrypt(E'Op1111/1\1/1\1', 'ENCRYPE_KEY', 'ENCRYPE_ALGORITHM')
WHERE ecpuser_key = '0949600348'
The query executes successfully.
But when I try to retrieve the value of the column ecpuser_password, it comes back with some extra characters (i.e. 00).
The query to retrieve the password is:
SELECT
decrypt(ecpuser_password,'ENCRYPE_KEY','ENCRYPE_ALGORITHM') AS PASSWORD
FROM tbl_ecpuser
WHERE
ecpuser_key = '0949600348'
This query returns
"Op1111/1\001/1\001"
but it should return "Op1111/1\1/1\1", and I need that.
Can anybody help me with this?
Thanks.
One place where PostgreSQL was not conforming to the SQL standard was the treatment of a backslash in string literals.
Since 8.2, a configuration property standard_conforming_strings is available that configures PostgreSQL to comply with the standard here.
If you set that to on, '\1' is correctly treated as a string of two characters (one backslash and the character 1).
However, the E prefix enables escape sequences again, even when that setting is on.
So (if I understand your problem correctly) you should set standard_conforming_strings = on and specify the string literal without the leading E.
Seems like E'\1' is treated as chr(1) and returned accordingly.
You probably want: E'\\1'.
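A quick way to see what each variant actually produces (assuming standard_conforming_strings = on, the default since PostgreSQL 9.1):
SELECT length('a\1b')   AS plain,    -- 4: the backslash is kept literally
       length(E'a\1b')  AS escaped,  -- 3: \1 is interpreted as the octal escape chr(1)
       length(E'a\\1b') AS doubled;  -- 4: \\ is an escaped backslash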