Postgres: unique on array and varchar column

I would like to ask whether it is possible to set UNIQUE on an array column; I need the array items to be unique.
Secondly, I also want a second column to be included in the constraint.
To illustrate what I need, an example: imagine you have entries with domains and aliases. The column domain is a varchar holding the main domain, and alias is an array which can be empty. Logically, nothing in the domain column may also appear in alias, and vice versa, across all rows.
If there is any way to do this, I would be glad to see how. Best of all would be to also show how to do it in SQLAlchemy (table declaration, used with TurboGears).
PostgreSQL: 9.2
SQLAlchemy: 0.7
UPDATE:
I have found how to do a multi-column unique constraint in SQLAlchemy; however, it does not work on arrays:
client_table = Table('client', metadata,
    Column('id', types.Integer, autoincrement=True, primary_key=True),
    Column('name', types.String),
    Column('domain', types.String),
    Column('alias', postgresql.ARRAY(types.String)),
    UniqueConstraint('domain', 'alias', name='domains')
)
Then the table description:
wb=# \d+ client
                                        Table "public.client"
 Column |        Type         |                      Modifiers                       | Storage  | Description
--------+---------------------+------------------------------------------------------+----------+-------------
 id     | integer             | not null default nextval('client_id_seq'::regclass) | plain    |
 name   | character varying   |                                                      | extended |
 domain | character varying   | not null                                             | extended |
 alias  | character varying[] |                                                      | extended |
Indexes:
    "client_pkey" PRIMARY KEY, btree (id)
    "domains" UNIQUE CONSTRAINT, btree (domain, alias)
And a select (after test inserts) shows that the constraint compares the array as one opaque value, and treats NULLs as distinct, so the same domain slips through next to a different alias:
wb=# select * from client;
 id | name  |    domain     |          alias
----+-------+---------------+--------------------------
  1 | test1 | www.test.com  | {www.test1.com,test.com}
  2 | test2 | www.test1.com |
  3 | test3 | www.test.com  |
Thanks in advance.

Figure out how to do this in pure PostgreSQL syntax, then use SQLAlchemy's DDL construct to emit it exactly.
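In pure PostgreSQL, a btree unique constraint cannot compare array elements, so the cross-row check usually ends up in a trigger. A minimal sketch, assuming the client table from the question (function and trigger names are illustrative, and a row's own domain-vs-alias consistency is not checked):
CREATE OR REPLACE FUNCTION check_client_domains() RETURNS trigger AS $$
BEGIN
    -- Reject the new row if its domain or any of its aliases collide
    -- with the domain or aliases of any other row.
    IF EXISTS (
        SELECT 1 FROM client c
        WHERE c.id IS DISTINCT FROM NEW.id
          AND (c.domain = NEW.domain
               OR c.domain = ANY (NEW.alias)
               OR NEW.domain = ANY (c.alias)
               OR c.alias && NEW.alias)
    ) THEN
        RAISE EXCEPTION 'domain or alias of % already in use', NEW.domain;
    END IF;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER client_domains_check
BEFORE INSERT OR UPDATE ON client
FOR EACH ROW EXECUTE PROCEDURE check_client_domains();
On the SQLAlchemy 0.7 side, the two statements above can be wrapped in sqlalchemy.DDL objects attached to the table's after-create event, so the exact SQL is emitted when the table is created. Note that the trigger alone does not stop two concurrent transactions from inserting conflicting rows; a hard guarantee would also need a table lock or serializable isolation.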

Related

Postgres Alter Column Datatype & Update Values

I am new to writing Postgres queries and I am stuck on a problem: the price is a string with a '$' prefix. I want to change the data type of the column price to float and update the values by removing the '$' prefix. Can someone help me do that?
bootcamp=# SELECT * FROM car;
 id |  make   |        model        |   price
----+---------+---------------------+-----------
  1 | Ford    | Explorer Sport Trac | $92075.96
  2 | GMC     | Safari              | $81521.80
  3 | Mercury | Grand Marquis       | $64391.84
(3 rows)
bootcamp=# \d car
                                Table "public.car"
 Column |         Type          | Collation | Nullable |             Default
--------+-----------------------+-----------+----------+---------------------------------
 id     | bigint                |           | not null | nextval('car_id_seq'::regclass)
 make   | character varying(50) |           | not null |
 model  | character varying(50) |           | not null |
 price  | character varying(50) |           | not null |
Thanks
You can clean up the string while altering the table:
alter table car
alter column price type numeric using substr(price, 2)::numeric;
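To double-check the conversion afterwards (a quick psql session; output omitted):
bootcamp=# \d car
bootcamp=# SELECT id, make, model, price FROM car;
The USING expression rewrites every existing value during the ALTER, so no separate UPDATE pass is needed.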
Alternatively, do it in two steps. (Note: SET SQL_SAFE_UPDATES is a MySQL setting; PostgreSQL has no safe-update mode, so an UPDATE without a WHERE clause runs as-is.) First remove the '$' from all rows:
UPDATE car SET price = replace(price, '$', '');
Then change the column type (PostgreSQL requires a USING expression here, since there is no implicit cast from varchar to numeric):
ALTER TABLE car ALTER COLUMN price TYPE numeric USING price::numeric;

PSQL import fails: ERROR: invalid input syntax for type uuid: "id"

I exported the users table from my Heroku-hosted sql db. The export looks fine, but when I try to import it, I get ERROR: invalid input syntax for type uuid: "id"
This is the command used, per the Heroku site:
\copy users FROM ~/user_export.csv WITH (FORMAT CSV);
EDIT:
I didn't include this, but the error also includes:
CONTEXT: COPY users, line 1, column id: "id"
I had done programmer math and translated that to a zero-based format, but maybe it's the header that's the issue?
-- ANSWER: YES. argh.
/EDIT
I've found some posts in different places that seem to involve JSON fields, but the schema is fairly simple, and only simple objects are used:
Table "public.users"
Column | Type | Collation | Nullable | Default
------------------+--------------------------+-----------+----------+---------------------
id | uuid | | not null |
name | text | | not null |
username | text | | not null |
password_hash | text | | not null |
created_at | timestamp with time zone | | |
updated_at | timestamp with time zone | | |
tournament_id | uuid | | |
background | text | | |
as_player | boolean | | |
as_streamer | boolean | | |
administrator | administrator | | not null | 'no'::administrator
creator | boolean | | not null | false
creator_approved | boolean | | not null | true
Indexes:
"users_pkey" PRIMARY KEY, btree (id)
"uq:users.username" UNIQUE CONSTRAINT, btree (username)
Referenced by:
TABLE "tokens" CONSTRAINT "tokens_userID_fkey" FOREIGN KEY ("userID") REFERENCES users(id) ON DELETE CASCADE
TABLE "tournament_player_pivot" CONSTRAINT "tournament_player_pivot_playerID_fkey" FOREIGN KEY ("playerID") REFERENCES users(id)
The table the data was exported from and the table I'm trying to import to have the identical schema. I've come across suggestions that there is a specific single-quoted format for UUID fields, but manually modifying that has no effect.
What is the problem here?
This is a sample from the export file using a testing user:
id,name,username,password_hash,created_at,updated_at,tournament_id,background,as_player,as_streamer,administrator,creator,creator_approved
ad5230b4-2377-4a8d-8725-d49cd78121af,z9,z9#test.com,$2b$12$97GXVp1p.nfke8L4EYK2Fuev9IE3k0WFAf4O3NvziYHjogFCAppO6,2022-05-07 06:03:44.020019+00,2022-05-07 06:03:44.020019+00,,,f,f,no,f,t
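As the EDIT above concludes, the first line of the export is a CSV header row, and COPY was trying to parse the literal string "id" as a uuid. Telling COPY about the header makes it skip that line:
\copy users FROM ~/user_export.csv WITH (FORMAT CSV, HEADER);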

postgREST can't find relation

I'm trying to set up PostgREST, following the tutorial at http://postgrest.org/en/v5.1/tutorials/tut0.html. Here is what I see. First, the schemas:
carlson=# \dn
  List of schemas
  Name  |  Owner
--------+---------
 api    | carlson
 public | carlson
Then a table:
carlson=# \d api.todos
                              Table "api.todos"
 Column |           Type           | Collation | Nullable |                Default
--------+--------------------------+-----------+----------+---------------------------------------
 id     | integer                  |           | not null | nextval('api.todos_id_seq'::regclass)
 done   | boolean                  |           | not null | false
 task   | text                     |           | not null |
 due    | timestamp with time zone |           |          |
Indexes:
    "todos_pkey" PRIMARY KEY, btree (id)
Finally, some data:
carlson=# select * from api.todos;
 id | done |       task        | due
----+------+-------------------+-----
  1 | f    | finish tutorial 0 |
  2 | f    | pat self on back  |
(2 rows)
But then I get this:
$ curl http://localhost:3000/todos
{"hint":null,"details":null,"code":"42P01","message":"relation
\"api.todos\" does not exist"}
Which is consistent with this:
carlson=# \d
Did not find any relations.
What am I doing wrong?
PS: I don't see which database this schema belongs to.
It seems you're targeting the wrong database: check the db-uri config value and confirm through psql that the database it points to actually contains the api.todos table.
Also, to clarify: PostgREST sets the search_path itself on each request, so ALTERing the search_path of your connection user has no effect on the schemas PostgREST searches.
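One way to verify is to connect with the exact URI from the config and look for the table (the URI below is illustrative):
$ psql "postgres://authenticator:password@localhost:5432/carlson"
carlson=# \dt api.*
If todos does not show up there, PostgREST and your interactive session are connected to different databases.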

Postgres varchar column giving error "invalid input syntax for integer"

I'm trying to use an INSERT INTO ... SELECT statement to copy columns from one table into another table, but I am getting an error message:
gis=> INSERT INTO places (SELECT 0 AS osm_id, 0 AS code, 'country' AS fclass, pop_est::numeric(10,0) AS population, name, geom FROM countries);
ERROR: invalid input syntax for integer: "country"
LINE 1: ...NSERT INTO places (SELECT 0 AS osm_id, 0 AS code, 'country' ...
The SELECT statement by itself gives a result like I expect:
gis=> SELECT 0 AS osm_id, 0 AS code, 'country' AS fclass, pop_est::numeric(10,0) AS population, name, geom FROM countries LIMIT 1;
osm_id | code | fclass | population | name | geom
--------+------+---------+------------+-------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
0 | 0 | country | 103065 | Aruba | 0106000000010000000103000000010000000A000000333333338B7951C0C8CCCCCC6CE7284033333333537951C03033333393D82840CCCCCCCC4C7C51C06066666686E0284000000000448051C00000000040002940333333333B8451C0C8CCCCCC0C18294099999999418351C030333333B3312940333333333F8251C0C8CCCCCC6C3A294000000000487E51C000000000A0222940333333335B7A51C00000000000F62840333333338B7951C0C8CCCCCC6CE72840
(1 row)
But somehow the fclass column seems to be treated as an integer when, in fact, it is a character varying(20):
gis=> \d+ places
Unlogged table "public.places"
Column | Type | Modifiers | Storage | Stats target | Description
------------+------------------------+------------------------------------------------------+----------+--------------+-------------
gid | integer | not null default nextval('places_gid_seq'::regclass) | plain | |
osm_id | bigint | | plain | |
code | smallint | | plain | |
fclass | character varying(20) | | extended | |
population | numeric(10,0) | | main | |
name | character varying(100) | | extended | |
geom | geometry | | main | |
Indexes:
"places_pkey" PRIMARY KEY, btree (gid)
"places_geom" gist (geom)
I've tried casting all of the columns to the exact types they need to be for the destination table, but that doesn't seem to have any effect.
All of the other instances of this error message I can find online appear to be people trying to use empty strings as an integer, which isn't relevant here because I'm selecting a constant string as fclass.
You need to specify the column names you are inserting into:
INSERT INTO places (osm_id, code, fclass, population, name, geom) SELECT ...
Without specifying them individually, it is assumed that all columns are to be inserted into, including gid, which you want to auto-populate. So 'country' is actually being lined up with code by your current INSERT statement.
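Applied to the statement from the question, that gives:
INSERT INTO places (osm_id, code, fclass, population, name, geom)
SELECT 0, 0, 'country', pop_est::numeric(10,0), name, geom FROM countries;
With the explicit column list, gid is filled from its sequence default and every SELECT column lines up with its intended target.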

Why psql can't find relation name for existing table?

Here's my current state.
Eonil=# \d+
                      List of relations
 Schema |    Name    | Type  | Owner |    Size    | Description
--------+------------+-------+-------+------------+-------------
 public | TestTable1 | table | Eonil | 8192 bytes |
(1 row)
Eonil=# \d+ TestTable1
Did not find any relation named "TestTable1".
Eonil=#
What is the problem and how can I see the table definition?
psql folds unquoted identifiers to lower case, so names containing capital letters must be double-quoted. This works:
Eonil=# \d+ "TestTable1"
Table "public.TestTable1"
Column | Type | Modifiers | Storage | Description
--------+------------------+-----------+----------+-------------
ID | bigint | not null | plain |
name | text | | extended |
price | double precision | | plain |
Indexes:
"TestTable1_pkey" PRIMARY KEY, btree ("ID")
"TestTable1_name_key" UNIQUE CONSTRAINT, btree (name)
Has OIDs: no
Eonil=#
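The quoting is only needed because the table (and its "ID" column) were created with quoted, mixed-case names. Creating them unquoted folds everything to lower case and avoids the problem entirely; a sketch:
CREATE TABLE testtable1 (
    id    bigint PRIMARY KEY,
    name  text UNIQUE,
    price double precision
);
-- \d+ testtable1, \d+ TestTable1 and \d+ TESTTABLE1 all find this table,
-- because unquoted identifiers are folded to lower case before lookup.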