Why does PostgreSQL not like UPPERCASE table names? - postgresql

I recently tried to create some tables in PostgreSQL with all-uppercase names. However, in order to query them I have to put the table name inside quotation marks: "TABLE_NAME". Is there any way to avoid this and tell Postgres to treat the uppercase name as normal?
UPDATE
This query creates a table with the lowercase name table_name:
create table TABLE_NAME
(
    id integer,
    name varchar(255)
)
However, this query creates a table with uppercase name "TABLE_NAME"
create table "TABLE_NAME"
(
    id integer,
    name varchar(255)
)
The problem is that the quotation marks now seem to be part of the name!
In my case I do not create the tables manually; another application creates them, and the names are in capital letters. This causes problems when I want to use CQL filters via GeoServer.

Put the table name in double quotes if you want Postgres to preserve the case of relation names.
Quoting an identifier also makes it case-sensitive, whereas unquoted
names are always folded to lower case. For example, the identifiers
FOO, foo, and "foo" are considered the same by PostgreSQL, but "Foo"
and "FOO" are different from these three and each other. (The folding
of unquoted names to lower case in PostgreSQL is incompatible with the
SQL standard, which says that unquoted names should be folded to upper
case. Thus, foo should be equivalent to "FOO" not "foo" according to
the standard. If you want to write portable applications you are
advised to always quote a particular name or never quote it.)
from the docs (emphasis mine)
example with quoting:
t=# create table "UC_TNAME" (i int);
CREATE TABLE
t=# \dt+ UC_TNAME
Did not find any relation named "UC_TNAME".
t=# \dt+ "UC_TNAME"
List of relations
Schema | Name | Type | Owner | Size | Description
--------+----------+-------+----------+---------+-------------
public | UC_TNAME | table | postgres | 0 bytes |
(1 row)
example without quoting:
t=# create table UC_TNAME (i int);
CREATE TABLE
t=# \dt+ UC_TNAME
List of relations
Schema | Name | Type | Owner | Size | Description
--------+----------+-------+----------+---------+-------------
public | uc_tname | table | postgres | 0 bytes |
(1 row)
So if you created the table with quotes, you cannot skip the quotes when querying it. But if you skipped the quotes when creating the object, the name was folded to lowercase, and the uppercase name in your query will be folded the same way; that way you "won't notice" it.

The question implies that double quotes, when used to force PostgreSQL to recognize casing for an identifier name, actually become part of the identifier name. That's not correct. What does happen is that if you use double quotes to force casing, then you must always use double quotes to reference that identifier.
Background:
In PostgreSQL, names of identifiers are always folded to lowercase unless you surround the identifier name with double quotes. This can lead to confusion.
Consider what happens if you run these two statements in sequence:
CREATE TABLE my_table (
t_id serial,
some_value text
);
That creates a table named my_table.
Now, try to run this:
CREATE TABLE My_Table (
t_id serial,
some_value text
);
PostgreSQL ignores the uppercasing (because the table name is not surrounded by quotes) and tries to make another table called my_table. When that happens, it throws an error:
ERROR: relation "my_table" already exists
To make a table with uppercase letters, you'd have to run:
CREATE TABLE "My_Table" (
t_id serial,
some_value text
);
Now you have two tables in your database:
Schema | Name | Type | Owner
--------+---------------------------+-------+----------
public | My_Table | table | postgres
public | my_table | table | postgres
The only way to ever access My_Table is to then surround the identifier name with double quotes, as in:
SELECT * FROM "My_Table"
If you leave the identifier unquoted, then PostgreSQL would fold it to lowercase and query my_table.

In simple words, Postgres treats identifiers in double quotes ("") as case-sensitive, and everything else as lowercase.
Example: we can create two columns named DETAILS and details. When querying,
select "DETAILS"
returns the DETAILS column's data, while
select details/DETAILS/Details/"details"
all return the details column's data.
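A minimal sketch of that example (the table name t is an assumption):

```sql
-- Two distinct columns that differ only in case:
CREATE TABLE t ("DETAILS" text, details text);

SELECT "DETAILS" FROM t;  -- the uppercase column
SELECT details   FROM t;  -- the lowercase column
SELECT DETAILS   FROM t;  -- folded to lowercase: also the lowercase column
```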


Are table names case sensitive in Heroku Postgres add-on? [duplicate]

I have a db table in Postgres, say persons, handed down by another team, that has a column named "first_Name". Now I am trying to use PG Commander to query this table on this column name.
select * from persons where first_Name="xyz";
And it just returns
ERROR: column "first_Name" does not exist
Not sure if I am doing something silly or is there a workaround to this problem that I am missing?
Identifiers (including column names) that are not double-quoted are folded to lowercase in PostgreSQL. Column names that were created with double-quotes and thereby retained uppercase letters (and/or other syntax violations) have to be double-quoted for the rest of their life:
"first_Name"
Values (string literals / constants) are enclosed in single quotes:
'xyz'
So, yes, PostgreSQL column names are case-sensitive (when double-quoted):
SELECT * FROM persons WHERE "first_Name" = 'xyz';
Read the manual on identifiers here.
My standing advice is to use legal, lower-case names exclusively so double-quoting is never required.
To quote the documentation:
Key words and unquoted identifiers are case insensitive. Therefore:
UPDATE MY_TABLE SET A = 5;
can equivalently be written as:
uPDaTE my_TabLE SeT a = 5;
You could also write it using quoted identifiers:
UPDATE "my_table" SET "a" = 5;
Quoting an identifier makes it case-sensitive, whereas unquoted names are always folded to lower case (unlike the SQL standard where unquoted names are folded to upper case). For example, the identifiers FOO, foo, and "foo" are considered the same by PostgreSQL, but "Foo" and "FOO" are different from these three and each other.
If you want to write portable applications you are advised to always quote a particular name or never quote it.
Column names which are mixed-case or uppercase have to be double-quoted in PostgreSQL, so the best convention is to use all lowercase with underscores.
If you use JPA, I recommend changing schema, table and column names to lowercase. You can use the following query to help you find them:
select
    psat.schemaname,
    psat.relname,
    pa.attname,
    psat.relid
from
    pg_catalog.pg_stat_all_tables psat,
    pg_catalog.pg_attribute pa
where
    psat.relid = pa.attrelid;
change schema name:
ALTER SCHEMA "XXXXX" RENAME TO xxxxx;
change table names:
ALTER TABLE xxxxx."AAAAA" RENAME TO aaaaa;
change column names:
ALTER TABLE xxxxx.aaaaa RENAME COLUMN "CCCCC" TO ccccc;
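The renames above can also be automated. A hedged PL/pgSQL sketch (the schema name myschema is an assumption) that lowercases every table name containing uppercase letters; format() with %I takes care of any needed double-quoting:

```sql
-- Sketch: rename every table in schema myschema (assumed name) whose
-- name contains uppercase letters to its lowercase form.
DO $$
DECLARE
    r record;
BEGIN
    FOR r IN
        SELECT tablename
        FROM pg_tables
        WHERE schemaname = 'myschema'
          AND tablename <> lower(tablename)
    LOOP
        -- %I double-quotes the identifier only when necessary
        EXECUTE format('ALTER TABLE myschema.%I RENAME TO %I',
                       r.tablename, lower(r.tablename));
    END LOOP;
END $$;
```

A similar loop over information_schema.columns would handle the column renames.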
You can try this example for table and column naming with capital letters (PostgreSQL):
// SQL
create table "Test"
(
    "ID" integer,
    "NAME" varchar(255)
)
//C#
string sqlCommand = @"create table ""TestTable"" (
""ID"" integer GENERATED BY DEFAULT AS IDENTITY primary key,
""ExampleProperty"" boolean,
""ColumnName"" varchar(255))";

Update an array of text with itself to run a "trigger" function in PostgreSQL

Full-text search in PostgreSQL is new to me. So bear with me. I'm working on an existing PostgreSQL table that has a column of type text[] (named tags) that I'd like to implement full-text search on. I've added another column of type ts_vector (named tags_tsv) that will contain the indexable lexemes from the tags column. I also have an index on tags and a trigger to update tags_tsv any time there are updates or inserts applied to tags.
In order to update the existing rows in this table, I need to fire this trigger by updating the tags column with itself. I ran update tableName set tags = tags; but got the error ERROR: column "tags" is not of a character type.
Okay. To get the tags array into text I figured I needed to use array_to_string, like so: update tableName set tags = array_to_string(tags, ','); but then I get You will need to rewrite or cast the expression.
I'm lost. What am I missing?
I'm doing all this work in psql and below is the table definition:
Column | Type |
----------+----------|
id | integer |
title | text |
tags | text[] |
title_tsv | tsvector |
tags_tsv | tsvector |
Indexes:
"test_pkey" PRIMARY KEY, btree (id)
"title_idx" gin (title_tsv)
"tags_idx" gin (tags_tsv)
Triggers:
tsvectorupdate BEFORE INSERT OR UPDATE ON test FOR EACH ROW EXECUTE PROCEDURE tsvector_update_trigger('title_tsv', 'pg_catalog.english', 'title')
tsvectorupdate2 BEFORE INSERT OR UPDATE ON test FOR EACH ROW EXECUTE PROCEDURE tsvector_update_trigger('tags_tsv', 'pg_catalog.english', 'tags')
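The error column "tags" is not of a character type comes from the trigger itself: tsvector_update_trigger() only accepts character-type columns, so it fails on the text[] column tags no matter how the UPDATE is written. One possible fix, sketched with an assumed function name, is a hand-written trigger function:

```sql
-- Sketch (function name tags_tsv_update is an assumption): the built-in
-- tsvector_update_trigger() only handles character-type columns, so a
-- text[] column needs a custom trigger function instead.
CREATE OR REPLACE FUNCTION tags_tsv_update() RETURNS trigger AS $$
BEGIN
    NEW.tags_tsv := to_tsvector('pg_catalog.english',
                                array_to_string(NEW.tags, ' '));
    RETURN NEW;
END
$$ LANGUAGE plpgsql;

DROP TRIGGER tsvectorupdate2 ON test;
CREATE TRIGGER tsvectorupdate2 BEFORE INSERT OR UPDATE ON test
    FOR EACH ROW EXECUTE PROCEDURE tags_tsv_update();

-- With the trigger replaced, the self-update backfills the existing rows:
UPDATE test SET tags = tags;
```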

At what level do Postgres index names need to be unique?

In Microsoft SQL Server and MySQL, index names need to be unique within the table, but not within the database. This doesn't seem to be the case for PostgreSQL.
Here's what I'm doing: I made a copy of a table using CREATE TABLE new_table AS SELECT * FROM old_table etc and need to re-create the indexes.
Running a query like CREATE INDEX idx_column_name ON new_table USING GIST(column_name) causes ERROR: relation "idx_column_name" already exists
What's going on here?
Indexes and tables (and views, and sequences, and...) are stored in the pg_class catalog, and they're unique per schema due to a unique key on it:
# \d pg_class
Table "pg_catalog.pg_class"
Column | Type | Modifiers
----------------+-----------+-----------
relname | name | not null
relnamespace | oid | not null
...
Indexes:
"pg_class_oid_index" UNIQUE, btree (oid)
"pg_class_relname_nsp_index" UNIQUE, btree (relname, relnamespace)
Per @wildplasser's comment, you can omit the name when creating the index, and PG will assign a unique name automatically.
Names are unique within the schema. A schema is basically a namespace for tables, constraints, indexes, functions, etc.
Cross-schema constraints are allowed.
Indexes share their namespace (i.e. the schema) with tables; for Postgres, an index is a relation just like a table.
(IIRC) the SQL standard does not define indexes; use constraints whenever you can (the GiST index in the question is probably an exception).
Ergo, you'll need to invent another name,
or omit it: the system can invent a name if you don't supply one.
The downside of this: you can create multiple indexes with the same definition (their names will be suffixed with _1, _2, IIRC).
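A sketch of the omit-the-name variant for the index from the question:

```sql
-- Let PostgreSQL pick a unique name (typically new_table_column_name_idx):
CREATE INDEX ON new_table USING GIST (column_name);
```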

Double quote in the name of table in select query of PostgreSQL

I am running following simple select query in PostgreSQL:
SELECT * FROM "INFORMATION_SCHEMA.KEY_COLUMN_USAGE"
It gives me following error report:
ERROR: relation "INFORMATION_SCHEMA.KEY_COLUMN_USAGE" does not exist
LINE 1: SELECT * FROM "INFORMATION_SCHEMA.KEY_COLUMN_USAGE"
^
********** Error **********
ERROR: relation "INFORMATION_SCHEMA.KEY_COLUMN_USAGE" does not exist
SQL state: 42P01
Character: 15
But when I am running the following query it runs successfully:
SELECT * FROM INFORMATION_SCHEMA.KEY_COLUMN_USAGE
Again when I select from a table created by me the situation is reversed. Following one fails:
SELECT * FROM countryTable
while following one runs successfully.
SELECT * FROM "countryTable"
Why is it happening? What is the problem?
You probably created your table like this:
CREATE TABLE "countryTable" (
id SERIAL NOT NULL,
country TEXT NOT NULL,
PRIMARY KEY(id)
);
which creates a table whose name is wrapped in "". In general you shouldn't use double quotes in Postgres for table or column names; try without the double quotes:
CREATE TABLE countryTable (
id SERIAL NOT NULL,
country TEXT NOT NULL,
PRIMARY KEY(id)
);
And then you can use the query you already have: SELECT * FROM countryTable
While my personal advice is to use legal, lower-case names exclusively and never use double-quote, it is no problem per se.
When you look at the table definition in psql (\d tbl), or at table names in the system catalog pg_class or column names in pg_attribute or any of the information schema views, you get identifiers in their correct spelling (and with all other oddities that may have been preserved by double-quoting them). You can use quote_ident() to quote such names automatically as needed - it only adds double quotes if necessary.
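A short sketch of quote_ident() at work:

```sql
SELECT quote_ident('my_table');  -- my_table   (no quoting needed)
SELECT quote_ident('My_Table');  -- "My_Table" (mixed case is preserved)
SELECT quote_ident('select');    -- "select"   (reserved word)
```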
Postgres itself isn't foolish enough to use CaMeL case names. All objects in the information schema or in the system catalog are lower-cased (the names of the system tables and columns, not the names of user tables they carry as data).
Start at the basics, read the manual about identifiers.

Why is expanding a varchar column so slow?

Hi,
We need to modify a column of a big product table. Usually normal DDL statements execute fast, but the DDL statement below takes about 10 minutes. I would like to know the reason!
I just want to expand a varchar column. The following are the details.
--table size
wapreader_log=> select pg_size_pretty(pg_relation_size('log_foot_mark'));
pg_size_pretty
----------------
5441 MB
(1 row)
--table ddl
wapreader_log=> \d log_foot_mark
Table "wapreader_log.log_foot_mark"
Column | Type | Modifiers
-------------+-----------------------------+-----------
id | integer | not null
create_time | timestamp without time zone |
sky_id | integer |
url | character varying(1000) |
refer_url | character varying(1000) |
source | character varying(64) |
users | character varying(64) |
userm | character varying(64) |
usert | character varying(64) |
ip | character varying(32) |
module | character varying(64) |
resource_id | character varying(100) |
user_agent | character varying(128) |
Indexes:
"pk_log_footmark" PRIMARY KEY, btree (id)
--alter column
wapreader_log=> \timing
Timing is on.
wapreader_log=> ALTER TABLE wapreader_log.log_foot_mark ALTER column user_agent TYPE character varying(256);
ALTER TABLE
Time: 603504.835 ms
ALTER ... TYPE requires a complete table rewrite; that's why it might take some time to complete on large tables. If you don't need a length constraint, then don't use the constraint. Drop these constraints once and for all, and you will never run into new problems because of obsolete constraints. Just use TEXT or VARCHAR.
When you alter a table, PostgreSQL has to make sure the old version doesn't go away in some cases, to allow rolling back the change if the server crashes before it's committed and/or written to disk. For those reasons, what it actually does here even on what seems to be a trivial change is write out a whole new copy of the table somewhere else first. When that's finished, it then swaps over to the new one. Note that when this happens, you'll need enough disk space to hold both copies as well.
There are some types of DDL changes that can be made without making a second copy of the table, but this is not one of them. For example, you can add a new column that defaults to NULL quickly. But adding a new column with a non-NULL default requires making a new copy instead.
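For example (the column names here are made up for illustration):

```sql
-- Catalog-only change: a nullable column with no default is added instantly.
ALTER TABLE log_foot_mark ADD COLUMN note varchar(64);

-- A non-NULL default forces every existing row to be rewritten
-- (on the PostgreSQL versions discussed here):
ALTER TABLE log_foot_mark ADD COLUMN flag integer DEFAULT 0;
```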
One way to avoid a table rewrite is to use SQL domains (see CREATE DOMAIN) instead of varchars in your table. You can then add and remove constraints on a domain.
Note that this does not work instantly either, since all tables using the domain are checked for constraint validity, but it is less expensive than full table rewrite and it doesn't need the extra disk space.
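A sketch of the domain approach (domain and constraint names are assumptions):

```sql
-- The length limit lives in the domain, not the table:
CREATE DOMAIN user_agent_t AS text
    CONSTRAINT user_agent_len CHECK (char_length(VALUE) <= 128);

-- ... with the table column declared as:  user_agent user_agent_t

-- Relaxing the limit later touches the domain, not the 5 GB table:
ALTER DOMAIN user_agent_t DROP CONSTRAINT user_agent_len;
ALTER DOMAIN user_agent_t ADD CONSTRAINT user_agent_len
    CHECK (char_length(VALUE) <= 256);
```

As noted above, the ADD CONSTRAINT step still scans the tables that use the domain, but it avoids the full rewrite and the extra disk space.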
Not sure if this is any faster, but you may have to test it out.
Try this until PostgreSQL can handle the type of ALTER you want without rewriting the entire stinking table:
ALTER TABLE log_foot_mark RENAME refer_url TO refer_url_old;
ALTER TABLE log_foot_mark ADD COLUMN refer_url character varying(256);
Then using the indexed primary key or unique key of the table do a looping transaction. I think you will have to do this via Perl or some language that you can do a commit every loop iteration.
WHILE (end < MAX_RECORDS) LOOP
    BEGIN TRANSACTION;
    UPDATE log_foot_mark
    SET refer_url = refer_url_old
    WHERE id >= start AND id <= end;
    COMMIT TRANSACTION;
END LOOP;
ALTER TABLE log_foot_mark DROP COLUMN refer_url_old;
Keep in mind that the loop logic will need to be in something other than PL/pgSQL to get it to commit every loop iteration. Test it with no loop at all, then with a transaction size of 10k, 20k, 30k, etc., until you find the sweet spot.
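This answer predates stored procedures; since PostgreSQL 11 a procedure may COMMIT inside a loop, so the same batching can be sketched server-side (procedure name and batch handling are assumptions):

```sql
CREATE PROCEDURE backfill_refer_url(batch_size integer)
LANGUAGE plpgsql AS $$
DECLARE
    cur_id integer := 0;
    max_id integer;
BEGIN
    SELECT max(id) INTO max_id FROM log_foot_mark;
    WHILE cur_id < max_id LOOP
        UPDATE log_foot_mark
        SET refer_url = refer_url_old
        WHERE id > cur_id AND id <= cur_id + batch_size;
        COMMIT;  -- allowed in procedures, not in functions
        cur_id := cur_id + batch_size;
    END LOOP;
END
$$;

CALL backfill_refer_url(20000);
```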