I have the following table definition:
foo=# \d+ tag
Table "public.tag"
Column | Type | Modifiers | Storage | Stats target | Description
-------------+------------------------+--------------------------------------------------+----------+--------------+-------------
id | integer | not null default nextval('tag_id_seq'::regclass) | plain | |
name | character varying(255) | not null | extended | |
version | integer | not null | plain | |
description | character varying(255) | not null | extended | |
active | boolean | not null | plain | |
Indexes:
"tag_pkey" PRIMARY KEY, btree (id)
"unique_tag" UNIQUE CONSTRAINT, btree (name, version)
I am trying to insert a row into it as follows:
foo=# insert into tag (name, version, description, active) values ("scala", 1, "programming language", true);
ERROR: column "scala" does not exist
LINE 1: ... tag (name, version, description, active) values ("scala", 1...
I took this command from the manual, but it doesn't work. What am I doing wrong? It's a simple thing, but I'm stumped. This is my first time using Postgres.
Postgres uses single quotes for string literals. Double quotes are for identifiers such as column names, which is why the error complains that the column "scala" does not exist.
insert into tag (name, version, description, active) values ('scala', 1, 'programming language', true);
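To illustrate the distinction, double quotes go around identifiers (here, the column names), while values stay in single quotes; the statement below is equivalent to the one above:
insert into tag ("name", "version", "description", "active") values ('scala', 1, 'programming language', true);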
I exported the users table from my Heroku-hosted SQL database. The export looks fine, but when I try to import it, I get ERROR: invalid input syntax for type uuid: "id"
This is the command used, per the Heroku site:
\copy users FROM ~/user_export.csv WITH (FORMAT CSV);
EDIT:
I didn't include this, but the error also includes:
CONTEXT: COPY users, line 1, column id: "id"
I had done programmer math and translated "line 1" to a zero-based index, but maybe it's the header row that's the issue?
-- ANSWER: YES. argh. (Corrected command below, after the sample.)
/EDIT
I've found some posts in different places that seem to involve JSON fields, but the schema is fairly simple, and only simple objects are used:
Table "public.users"
Column | Type | Collation | Nullable | Default
------------------+--------------------------+-----------+----------+---------------------
id | uuid | | not null |
name | text | | not null |
username | text | | not null |
password_hash | text | | not null |
created_at | timestamp with time zone | | |
updated_at | timestamp with time zone | | |
tournament_id | uuid | | |
background | text | | |
as_player | boolean | | |
as_streamer | boolean | | |
administrator | administrator | | not null | 'no'::administrator
creator | boolean | | not null | false
creator_approved | boolean | | not null | true
Indexes:
"users_pkey" PRIMARY KEY, btree (id)
"uq:users.username" UNIQUE CONSTRAINT, btree (username)
Referenced by:
TABLE "tokens" CONSTRAINT "tokens_userID_fkey" FOREIGN KEY ("userID") REFERENCES users(id) ON DELETE CASCADE
TABLE "tournament_player_pivot" CONSTRAINT "tournament_player_pivot_playerID_fkey" FOREIGN KEY ("playerID") REFERENCES users(id)
The table the data was exported from and the table I'm trying to import into have identical schemas. I've come across suggestions that there is a specific single-quoted format for UUID fields, but manually modifying that has no effect.
What is the problem here?
This is a sample from the export file using a testing user:
id,name,username,password_hash,created_at,updated_at,tournament_id,background,as_player,as_streamer,administrator,creator,creator_approved
ad5230b4-2377-4a8d-8725-d49cd78121af,z9,z9#test.com,$2b$12$97GXVp1p.nfke8L4EYK2Fuev9IE3k0WFAf4O3NvziYHjogFCAppO6,2022-05-07 06:03:44.020019+00,2022-05-07 06:03:44.020019+00,,,f,f,no,f,t
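As noted in the EDIT, the header row was the culprit; a sketch of the corrected command (the HEADER option tells COPY to treat the first line of the file as a header and skip it):
\copy users FROM ~/user_export.csv WITH (FORMAT CSV, HEADER);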
I have a table with a compound primary key:
Table "account.enum"
Column | Type | Modifiers
-----------+------------------------+-----------
classname | character varying(256) | not null
name | character varying(64) | not null
active | boolean | not null
Indexes: "enum_pkey" PRIMARY KEY, btree (classname, name)
Values:
classname | name | active
--------------+--------+--------
CURRENCY | EUR | t
CURRENCY | USD | t
MUTATIONTYPE | CREDIT | t
MUTATIONTYPE | DEBET | t
Another table, account.mutation, uses this table:
Table "account.mutation"
Column | Type | Modifiers
-----------------+------------------------+-------------------------------------------------------
id | bigint | not null default nextval('mutation_id_seq'::regclass)
accountnumber | character varying(9) | not null
interestdate | date | not null
balancebefore | numeric(10,2) | not null
balanceafter | numeric(10,2) | not null
transactiondate | date | not null
amount | numeric(10,2) | not null
description | character varying(512) | not null
ordernumber | smallint | default (-1)
mutationtype | character varying(64) |
currency | character varying(64) |
I want to add foreign key constraints (for mutationtype and currency):
alter table mutation add constraint FK_mutationtype foreign key('MUTATIONTYPE', mutationtype) references enum(classname, name);
alter table mutation add constraint FK_currency foreign key('CURRENCY', currency) references enum(classname, name);
However, the string literals are not accepted.
What am I doing wrong? Is what I want possible in Postgres?
You cannot do this because foreign keys can only be defined on columns, not on expressions like the literal 'MUTATIONTYPE'.
You could introduce a column like mutation_class in mutation that is always set to 'MUTATIONTYPE', but that sounds wasteful, redundant and silly.
I think you should solve the problem by having different lookup tables for the different enumerations, as sketched below; then the difficulty would just vanish, and the whole design would look more reasonable from a relational perspective.
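A rough sketch of that design, assuming one lookup table per enumeration (names are illustrative, schema qualification omitted):
CREATE TABLE currency (
    name   character varying(64) PRIMARY KEY,
    active boolean NOT NULL
);
CREATE TABLE mutationtype (
    name   character varying(64) PRIMARY KEY,
    active boolean NOT NULL
);
ALTER TABLE mutation
    ADD CONSTRAINT fk_currency FOREIGN KEY (currency) REFERENCES currency (name);
ALTER TABLE mutation
    ADD CONSTRAINT fk_mutationtype FOREIGN KEY (mutationtype) REFERENCES mutationtype (name);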
I have a table described as following:
Table "public.lead"
Column | Type | Modifiers
-----------------------------+--------------------------------+-----------------------------------------
id | character varying(36) | not null
reference_code | character varying(20) | not null
country_id | character varying(36) | not null
language_id | character varying(36) | not null
locale_id | character varying(36) | not null
from_country_id | character varying(36) | not null
to_country_id | character varying(36) | not null
customer_id | character varying(36) | not null
user_id | character varying(36) |
from_date | date | not null
from_date_type | smallint | not null default (0)::smallint
from_street | character varying(200) |
from_postalcode | character varying(25) |
from_city | character varying(100) |
from_country | character varying(50) |
from_apartment_type | character varying(255) | not null default '0'::character varying
from_floor | smallint |
from_rooms | numeric(3,1) |
from_people | integer |
from_squaremeter | integer |
from_elevator | smallint | not null
I am trying to create foreign keys for (country_id, from_country_id, to_country_id).
As you can see, all three of these fields reference the country table.
But when I try to create these foreign keys, I get the following error.
ERROR: insert or update on table "lead" violates foreign key constraint "lead_to_country_id" Detail: Key (to_country_id)=(United Kingdom) is not present in table "country".
Details
This error is usually caused by a missing key.
When you add a foreign key constraint AFTER rows have already been inserted, Postgres checks that every value in the referencing column exists in the table that holds the primary key (PK).
For example:
table_with_PK
 col1 (PK) | col2 | coln ...
-----------+------+----------
 id_1      | foo  | bar ...
 id_2      | nan  | ana ...

table_connected_to_table_with_PK
 col1 (FK) | col2 | etc...
-----------+------+---------
 id_1      |      |
 id_2      |      |
 id_3      |      |    <- error: id_3 is not present in table_with_PK
So first create the table that has your primary keys and populate it.
Then create the table with the foreign key (FK) constraints and populate/update it, so that your database stays consistent.
Check the PostgreSQL documentation on constraints: https://www.postgresql.org/docs/current/static/ddl-constraints.html
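A minimal sketch of that order of operations, reusing the illustrative names from the example above:
CREATE TABLE table_with_PK (
    col1 text PRIMARY KEY,
    col2 text
);
INSERT INTO table_with_PK (col1, col2) VALUES ('id_1', 'foo'), ('id_2', 'nan');

CREATE TABLE table_connected_to_table_with_PK (
    col1 text REFERENCES table_with_PK (col1)
);
-- id_1 and id_2 are accepted; inserting 'id_3' would fail because it is not present in table_with_PK
INSERT INTO table_connected_to_table_with_PK (col1) VALUES ('id_1'), ('id_2');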
The error message pretty much says it: you are trying to set the to_country_id column value to 'United Kingdom', which does not exist in the referenced country table. Insert that value into the country table and retry.
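To see every value that would violate the constraint before adding it, a query along these lines should work (this assumes the constraint references country(id); adjust the column name if it differs):
SELECT DISTINCT l.to_country_id
FROM lead l
LEFT JOIN country c ON c.id = l.to_country_id
WHERE l.to_country_id IS NOT NULL
  AND c.id IS NULL;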
I'm trying to use an INSERT INTO ... SELECT statement to copy columns from one table into another, but I am getting an error message:
gis=> INSERT INTO places (SELECT 0 AS osm_id, 0 AS code, 'country' AS fclass, pop_est::numeric(10,0) AS population, name, geom FROM countries);
ERROR: invalid input syntax for integer: "country"
LINE 1: ...NSERT INTO places (SELECT 0 AS osm_id, 0 AS code, 'country' ...
The SELECT statement by itself is giving a result like I expect:
gis=> SELECT 0 AS osm_id, 0 AS code, 'country' AS fclass, pop_est::numeric(10,0) AS population, name, geom FROM countries LIMIT 1;
osm_id | code | fclass | population | name | geom
--------+------+---------+------------+-------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
0 | 0 | country | 103065 | Aruba | 0106000000010000000103000000010000000A000000333333338B7951C0C8CCCCCC6CE7284033333333537951C03033333393D82840CCCCCCCC4C7C51C06066666686E0284000000000448051C00000000040002940333333333B8451C0C8CCCCCC0C18294099999999418351C030333333B3312940333333333F8251C0C8CCCCCC6C3A294000000000487E51C000000000A0222940333333335B7A51C00000000000F62840333333338B7951C0C8CCCCCC6CE72840
(1 row)
But somehow it looks like it's getting confused, thinking that the fclass column should be an integer when, in fact, it is a character varying(20):
gis=> \d+ places
Unlogged table "public.places"
Column | Type | Modifiers | Storage | Stats target | Description
------------+------------------------+------------------------------------------------------+----------+--------------+-------------
gid | integer | not null default nextval('places_gid_seq'::regclass) | plain | |
osm_id | bigint | | plain | |
code | smallint | | plain | |
fclass | character varying(20) | | extended | |
population | numeric(10,0) | | main | |
name | character varying(100) | | extended | |
geom | geometry | | main | |
Indexes:
"places_pkey" PRIMARY KEY, btree (gid)
"places_geom" gist (geom)
I've tried casting all of the columns to the exact types they need to be for the destination table, but that doesn't seem to have any effect.
All of the other instances of this error message I can find online appear to be people trying to use empty strings as integers, which isn't relevant here because I'm selecting a constant string as fclass.
You need to specify the column names you are inserting into:
INSERT INTO places (osm_id, code, fclass, population, name, geom) SELECT ...
Without specifying them individually, it is assumed that all columns are to be inserted into - including gid, which you want to have auto-populate. So, 'country' is actually being inserted into code by your current INSERT statement.
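Applied to the query from the question, that would look something like this:
INSERT INTO places (osm_id, code, fclass, population, name, geom)
SELECT 0, 0, 'country', pop_est::numeric(10,0), name, geom
FROM countries;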
Here's my current state.
Eonil=# \d+
List of relations
Schema | Name | Type | Owner | Size | Description
--------+------------+-------+-------+------------+-------------
public | TestTable1 | table | Eonil | 8192 bytes |
(1 row)
Eonil=# \d+ TestTable1
Did not find any relation named "TestTable1".
Eonil=#
What is the problem and how can I see the table definition?
Postgres folds unquoted identifiers to lowercase, so names containing capital letters have to be double-quoted in psql.
Eonil=# \d+ "TestTable1"
So this works well.
Eonil=# \d+ "TestTable1"
Table "public.TestTable1"
Column | Type | Modifiers | Storage | Description
--------+------------------+-----------+----------+-------------
ID | bigint | not null | plain |
name | text | | extended |
price | double precision | | plain |
Indexes:
"TestTable1_pkey" PRIMARY KEY, btree ("ID")
"TestTable1_name_key" UNIQUE CONSTRAINT, btree (name)
Has OIDs: no
Eonil=#
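The same folding rule applies to ordinary SQL, not just to \d+; a quick illustration against the table above:
SELECT * FROM TestTable1;   -- looked up as testtable1: ERROR: relation "testtable1" does not exist
SELECT * FROM "TestTable1"; -- quoting preserves the exact case, so this works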