I'm trying to import a CSV file into Postgres and generate a uuid with uuid_generate_v4() during the import to populate a table.
Postgres version: 14
Postgres Table
Country
id (primary key)
country_name
country_ISO_Code
Psql import
\copy "Country" (select uuid_generate_v4() as "id", "country_name", "country_ISO_code") FROM 'C:\Users\.......\data.csv' (format csv, header, delimiter ',')
However, that throws an error: \copy: parse error at "as".
How do I properly instruct psql to use uuid_generate_v4() for the id column?
You can't COPY from a SELECT statement. You need to define a default value for your primary key instead (gen_random_uuid() is built into Postgres 13 and later, so unlike uuid_generate_v4() it doesn't need the uuid-ossp extension):
create table country
(
id uuid primary key default gen_random_uuid(),
country_name text not null,
country_iso_code text not null
);
Then tell the \copy command that your input file only contains two columns:
\copy country (country_name, country_iso_code) from 'data.csv' (format csv, header, delimiter ',')
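For illustration, with a hypothetical two-column data.csv such as:
country_name,country_iso_code
Germany,DE
France,FR
every imported row receives a generated UUID, which you can confirm afterwards:
select * from country;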
I had a look at the IP2Location database for SQL Server 2019; BCP import, T-SQL OPENROWSET, and the Import Data wizard all fail.
I had no luck with the FMT file as it's the wrong version; no problem, I figured I'd generate it myself using:
bcp [ip2location].[dbo].[IP2LOCATION-LITE-DB5] format nul -T -N -f D:\IP2LOCATION-LITE-DB5.CSV\DB5.fmt
The issue I have is an error:
Cannot bulk load CSV file. Invalid field parameters are specified for source column number 1 in the format file "D:\IP2LOCATION-LITE-DB5.CSV\DB5.FMT". All data fields must be either character or Unicode characters with terminator when CSV format is specified.
The SQL I use to test:
select top(10) *
from openrowset(
    BULK N'D:\IP2LOCATION-LITE-DB5.CSV\IP2LOCATION-LITE-DB5.CSV',
    FORMATFILE = N'D:\IP2LOCATION-LITE-DB5.CSV\DB5.FMT',
    FORMAT = 'CSV'
) AS DATA
I can't seem to import IP2LOCATION-LITE-DB5.CSV.
Based on the FAQ page https://www.ip2location.com/faqs/db5-ip-country-region-city-latitude-longitude#database, you can create the table and import as below:
CREATE DATABASE ip2location
GO
USE ip2location
GO
CREATE TABLE [ip2location].[dbo].[ip2location_db5](
[ip_from] bigint NOT NULL,
[ip_to] bigint NOT NULL,
[country_code] nvarchar(2) NOT NULL,
[country_name] nvarchar(64) NOT NULL,
[region_name] nvarchar(128) NOT NULL,
[city_name] nvarchar(128) NOT NULL,
[latitude] float NOT NULL,
[longitude] float NOT NULL
) ON [PRIMARY]
GO
CREATE CLUSTERED INDEX [ip_to] ON [ip2location].[dbo].[ip2location_db5]([ip_to]) ON [PRIMARY]
GO
BULK INSERT [ip2location].[dbo].[ip2location_db5]
FROM 'D:\IP2LOCATION-LITE-DB5.CSV\IP2LOCATION-LITE-DB5.CSV'
WITH
(
FORMAT = 'CSV',
FIELDQUOTE = '"',
FIELDTERMINATOR = ',',
ROWTERMINATOR = '0x0D0A',
TABLOCK
)
GO
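Once the data is loaded, a lookup finds the row whose range covers the IP number; the clustered index on ip_to makes that search cheap. A minimal sketch (8.8.8.8 is just a hypothetical example address; a dotted quad a.b.c.d converts to an IP number as a*16777216 + b*65536 + c*256 + d):
-- 8.8.8.8 -> 8*16777216 + 8*65536 + 8*256 + 8 = 134744072
SELECT TOP(1) *
FROM [ip2location].[dbo].[ip2location_db5]
WHERE [ip_to] >= 134744072
ORDER BY [ip_to]
-- double-check that ip_from <= 134744072 on the returned row
GO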
I want to import a CSV containing Postgres arrays into a Postgres table.
This is my table:
create table dbo.countries (
id char(2) primary key,
name text not null,
elements text[],
CONSTRAINT const_dbo_countries_unique1 unique (id),
CONSTRAINT const_dbo_countries_unique2 unique (name)
);
and I want to insert into that a csv which looks like this:
AC,ac,{xx yy}
When I run copy dbo.countries from '/home/file.csv' delimiter ',' csv; the array is read as a single string: {"xx yy"}.
How do I change the default separator for arrays from , to a space?
You cannot change the array's separator symbol. You can load the data as it is, and then run an update on the table:
UPDATE dbo.countries
SET elements = string_to_array(elements[1], ' ')
WHERE strpos(elements[1], ' ') > 0;
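As a quick sanity check, string_to_array splits on whatever delimiter you pass it:
SELECT string_to_array('xx yy', ' ');
-- returns {xx,yy}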
I have an empty table in an SQLite database whose first column is an autoincrement primary key. I would like to import a CSV file which has no id, that is: it has as many columns as the table but one (the id). How can I import the values in the CSV file into the corresponding columns of the table?
Here is a minimal example (Windows 7, DOS):
REM -- Create csv-file
type NUL > data.csv
echo 'a1','b1' >> data.csv
echo 'a2','b2' >> data.csv
type data.csv
'a1','b1'
'a2','b2'
REM -- create database and tables
sqlite3 test.db "SELECT 1;"
sqlite3 test.db "CREATE TABLE tab0 (va TEXT, vb TEXT);"
REM this table has 3 columns:
sqlite3 test.db "CREATE TABLE tab1 (id INTEGER NOT NULL PRIMARY KEY, va TEXT, vb TEXT);"
REM -- Import csv file where number of columns are equal
sqlite3 -separator "," test.db ".import 'data.csv' tab0"
sqlite3 test.db
sqlite> select * from tab0;
'a1'|'b1'
'a2'|'b2'
REM -- Import csv file where number of columns are NOT equal
sqlite3 -separator "," test.db ".import 'data.csv' tab1"
data.csv:1: expected 3 columns but found 2 - filling the rest with NULL
data.csv:1: INSERT failed: datatype mismatch
data.csv:2: expected 3 columns but found 2 - filling the rest with NULL
data.csv:2: INSERT failed: datatype mismatch
Any help appreciated.
Update: Is there a way to avoid a temporary table?
You could use tab0 to fill tab1. Something like
INSERT INTO tab1 (va, vb)
SELECT va, vb
FROM tab0;
should work.
Use tab0 as a temporary table from which to insert into tab1.
INSERT INTO tab1(va, vb) SELECT * FROM tab0;
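If tab0 is only a staging table for the import, you can drop it once the rows have been copied over:
DROP TABLE tab0;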
Check out this answer https://stackoverflow.com/a/15998236/2591314
You can move the autoincrement field to the end of the table, where the missing value will be filled with NULL on import and thus replaced by the autoincrement value (NOT NULL is implied for an INTEGER PRIMARY KEY, so you don't need it):
CREATE TABLE tab1 (va TEXT, vb TEXT, id INTEGER PRIMARY KEY)
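With that column order the same import goes through; a sketch of the session, assuming tab1 was recreated as above and the same data.csv is used:
REM -- the missing id is filled in by the autoincrement
sqlite3 -separator "," test.db ".import 'data.csv' tab1"
sqlite3 test.db "SELECT * FROM tab1;"
'a1'|'b1'|1
'a2'|'b2'|2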
I have data something like this:
Akhoond,1,Akhoond,"{""Akhund"", ""Akhwan""}",0
pgAdmin's import is rejecting this. What format does the text[] column need to take in the CSV?
I also tried this:
Akhoond,1,Akhoond,"{Akhund, Akhwan}",0
Here's the table create:
CREATE TABLE private."Titles"
(
"Abbrev" text NOT NULL,
"LangID" smallint NOT NULL REFERENCES private."Languages" ("LangID"),
"Full" text NOT NULL,
"Alt" text[],
"Affix" bit
)
WITH (
OIDS=FALSE
);
ALTER TABLE private."Titles" ADD PRIMARY KEY ("Abbrev", "LangID");
CREATE INDEX ix_titles_alt ON private."Titles" USING GIN ("Alt");
ALTER TABLE private."Titles"
OWNER TO postgres;
The best way to find out is to create a table with the desired values and COPY ... TO STDOUT to see:
craig=> CREATE TABLE copyarray(a text, b integer, c text[], d integer);
CREATE TABLE
craig=> insert into copyarray(a,b,c,d) values ('Akhoond',1,ARRAY['Akhund','Akhwan'],0);
INSERT 0 1
craig=> insert into copyarray(a,b,c,d) values ('Akhoond',1,ARRAY['blah with spaces','blah,with,commas''and"quotes'],0);
INSERT 0 1
craig=> \copy copyarray TO stdout WITH (FORMAT CSV)
Akhoond,1,"{Akhund,Akhwan}",0
Akhoond,1,"{""blah with spaces"",""blah,with,commas'and\""quotes""}",0
So it looks like "{Akhund,Akhwan}" is fine. Note the second example I added, showing how to handle commas, quotes, and spaces in the array text.
This works with the psql \copy command; if it doesn't work with PgAdmin-III then I'd suggest using psql and \copy.
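Going the other way, the same format loads back with \copy; a minimal sketch, assuming the two rows above were saved to a file data.csv:
\copy copyarray FROM 'data.csv' WITH (FORMAT CSV)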
I have the following simple table in PostgreSQL:
CREATE TABLE data ( id bigint NOT NULL, text_column text );
The values of text_column, as I see them on the phpPgAdmin web site, are (long) numbers.
From what I read, PostgreSQL keeps a pointer to the actual data.
How can I fetch the actual string value of the text_column?
Doing:
select text_column from data
returns numbers...
Thanks
The following helped us:
select convert_from(loread(lo_open(value::int, x'40000'::int), x'40000'::int), 'UTF8') from t_field;
where value is the field containing the text, and t_field is the name of the table. (x'40000' is 262144: the INV_READ flag for lo_open, reused here as the maximum number of bytes loread will return.)
From psql, run \lo_export ID FILE, where ID is the number stored in the text column of your table and FILE is the path and filename for the result. The number is a reference to the large object table. You can list the database's large objects by running \lo_list.
Provided text_column is text and actually holds an oid, this should work too (lo_get is available since Postgres 9.4):
select convert_from(lo_get(text_column::oid), 'UTF8') from data;
It works fine for me; maybe your field values really are just numbers stored as text:
> \d+ type
 Column  |  Type
---------+---------
 name    | text
 test_id | integer

select name from type;
 name
------
 AAA