I generated a (UTF-8) file with an external program for import into PostgreSQL 9.6.1. The problem is the bytea field (PWHASH).
Snippet from this file (using TAB as the delimiter):
COPY USERS (ID,CODE,PWHASH,EMAIL) FROM stdin;
7 test1 E'\\\\x657B954D27B4AC56FA997D24A5FF2563' test@amce.org
\.
When importing with
psql mydb myrole -f test.sql
Everything goes well.
However, if I query the result, the byte array is not 16 bytes but 37 bytes:
select passwordhash,length(passwordhash) from users;
passwordhash | length
------------------------------------------------------------------------------+--------
\x45275c78363537423935344432374234414335364641393937443234413546463235363327 | 37
What is the correct syntax for this?
The format of the input file is wrong. It should be like this:
7 test1 \\x657B954D27B4AC56FA997D24A5FF2563 test@amce.org
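For reference, a complete corrected file would look like this (a sketch using the question's table, TAB-delimited as before; in COPY's text format a literal backslash is written as \\, and the resulting \x... string is then parsed by bytea's hex input format):
COPY USERS (ID,CODE,PWHASH,EMAIL) FROM stdin;
7 test1 \\x657B954D27B4AC56FA997D24A5FF2563 test@amce.org
\.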
I will have to "prepare" the data, I believe. Something like this:
t=# insert into u select 'x657B954D27B4AC56FA997D24A5FF2563';
INSERT 0 1
Time: 5990.809 ms
t=# select b from u;
b
----------------------------------------------------------------------
\x783635374239353444323742344143353646413939374432344135464632353633
(1 row)
Time: 0.234 ms
t=# insert into u select decode('657B954D27B4AC56FA997D24A5FF2563','hex');
INSERT 0 1
Time: 62.767 ms
t=# select b from u;
b
----------------------------------------------------------------------
\x783635374239353444323742344143353646413939374432344135464632353633
\x657b954d27b4ac56fa997d24a5ff2563
(2 rows)
Time: 0.208 ms
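A side note: the first row above is just the ASCII bytes of the string literal itself. You can make that visible with encode(b, 'escape'), which renders printable bytes as text; for that row it should print the original x657B... text back:
t=# select encode(b, 'escape') from u;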
So in your case you can:
create table t as select ID,CODE,PWHASH::text,EMAIL from users where false;
COPY t (ID,CODE,PWHASH,EMAIL) FROM stdin;
insert into users select ID,CODE,decode(substr(PWHASH,4),'hex'),EMAIL from t;
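Whichever route you take, it is worth verifying the result afterwards; for rows loaded this way the hash should now come out as 16 bytes:
select PWHASH, octet_length(PWHASH) from users;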
Test tables are as below:
create table test(col1 int, col2 varchar,col3 date);
insert into test values (1,'abc','2015-09-10');
insert into test values (1,'abd2','2015-09-11');
insert into test values (21,'xaz','2015-09-12');
insert into test values (2,'xyz','2015-09-13');
insert into test values (3,'tcs','2015-01-15');
insert into test values (3,'tcs','2016-01-18');
I'm using a bash script to read the rows of a PostgreSQL SELECT into an array res.
#!/bin/bash
res_temp=$(psql -tAq postgresql://"$db_user":"$db_password"@localhost:"$db_port"/"$db_name" << EOF
SELECT "col1","col2" FROM "test" WHERE "col2" LIKE '%a%';
EOF
)
read res <<< $res_temp
# should be 3, but outputs 1
echo ${#res[@]}
for i in "${!res[@]}"; do
printf "%s\t%s\n" "$i" "${res[$i]}"
done
The output is as below:
1
0 1|abc
Expected output is:
3
0 1|abc
1 1|abd2
2 21|xaz
Where is the problem?
This is going wrong because of the read res <<< $res_temp. What do you expect to retrieve from that?
I've fixed your script and added an example of how to create the array directly (which I think is what you're trying to do). I don't have PostgreSQL running at the moment, but SQLite behaves the same way.
How I created the data:
$ sqlite ./a.test
sqlite> create table test(col1 int, col2 varchar(100),col3 varchar(100));
sqlite> insert into test values (1,'abc','2015-09-10');
sqlite> insert into test values (1,'abd2','2015-09-11');
sqlite> insert into test values (21,'xaz','2015-09-12');
sqlite> insert into test values (2,'xyz','2015-09-13');
sqlite> insert into test values (3,'tcs','2015-01-15');
sqlite> insert into test values (3,'tcs','2016-01-18');
sqlite> SELECT col1,col2 FROM test WHERE col2 LIKE '%a%';
Your solution, fixed:
#! /bin/bash
res_tmp=$(sqlite ./a.test "SELECT col1,col2 FROM test WHERE col2 LIKE '%a%';")
read -a res <<< ${res_tmp[@]}
echo ${#res[@]}
for i in "${!res[@]}"; do
printf "%s\t%s\n" "$i" "${res[$i]}"
done
exit 0
Creating the array directly:
#! /bin/bash
res=($(sqlite ./a.test "SELECT col1,col2 FROM test WHERE col2 LIKE '%a%';"))
echo ${#res[@]}
for i in "${!res[@]}"; do
printf "%s\t%s\n" "$i" "${res[$i]}"
done
exit 0
Oh, and output of both:
3
0 1|abc
1 1|abd2
2 21|xaz
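If you want something less dependent on word splitting, bash's built-in mapfile (also spelled readarray, bash 4+) fills an array with one element per output line; a minimal sketch, assuming the same psql invocation as in the question:
#!/bin/bash
# one array element per result row; -t strips each trailing newline
mapfile -t res < <(psql -tAq "postgresql://$db_user:$db_password@localhost:$db_port/$db_name" \
  -c "SELECT col1, col2 FROM test WHERE col2 LIKE '%a%';")
echo "${#res[@]}"
for i in "${!res[@]}"; do
  printf "%s\t%s\n" "$i" "${res[$i]}"
done
Unlike the whitespace-splitting versions above, this keeps a row containing embedded spaces as a single array element.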
Using YugabyteDB 2.5.3.1 (PostgreSQL 11.2).
I currently have this table:
create table bum2(id int, the_t text);
I'm looking to import the string "a"bc" from a CSV file into the text column.
I tried with this CSV file:
6,""a""bc""
And:
\copy bum2 from data.csv WITH (FORMAT csv);
And I'm getting:
yugabyte=# select * from bum2;
id | the_t
----+-------
6 | abc
(1 row)
You can use additional quotes to escape the quotes. The CSV file below works:
6,"""a""bc"""
yugabyte=# \copy bum2 from data.csv WITH (FORMAT csv);
COPY 1
yugabyte=# select * from bum2;
id | the_t
----+--------
6 | "a"bc"
(1 row)
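That doubling is standard CSV quoting, and you can watch PostgreSQL produce the same encoding by exporting the row again; a quick round-trip check against the table from the question should print 6,"""a""bc""" back:
yugabyte=# \copy (select * from bum2) to stdout with (format csv)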
I have a table X with column Y (IBM Db2), where column Y is a string of length less than 2048 characters. Some values in column Y contain a string like ID some_value. I would like to remove all those keys and values. For example:
Row before update:
some text a ba ba b a ID sffjhdsf32484 further part etc etc
Row after update:
some text a ba ba b a further part etc etc
How can I achieve that?
I have the following code so far:
BEGIN
declare aaa anchor X.Y;
declare cur CURSOR for
SELECT Y
from X for update of Y;
open cur;
fetch cur into aaa;
update X
set Y = ... -- update logic
where current of cur;
close cur;
END;
Unfortunately, it updates only the first row in the table.
Does this help?
$ db2 "create table t(v varchar(2048))"
DB20000I The SQL command completed successfully.
$ db2 "insert into t values 'some text a ba ba b a ID sffjhdsf32484 further part etc etc'"
DB20000I The SQL command completed successfully.
$ db2 "update t set v = REGEXP_REPLACE(v,' ID sffjhdsf32484')"
DB20000I The SQL command completed successfully.
$ db2 "select v::varchar(60) from t"
1
------------------------------------------------------------
some text a ba ba b a further part etc etc
1 record(s) selected.
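The value does not need to be hardcoded, since the second argument is a pattern; a sketch, assuming the ID token itself never contains spaces (\S+ matches the run of non-space characters after ID):
$ db2 "update t set v = REGEXP_REPLACE(v, ' ID \S+')"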
Use REGEXP_REPLACE function as in the following example:
SELECT REGEXP_REPLACE('String containing ID skskskk999s inside',
'ID\s.*\s', '',1,1,'c')
FROM sysibm.sysdummy1
The result will be:
1
------------------------
String containing inside
Knowing how REGEXP_REPLACE works, you can now use it in an UPDATE statement or any other statement you need. For example:
UPDATE TBL SET SPECIFIC_COLUMN = REGEXP_REPLACE(
SPECIFIC_COLUMN,'ID\s.*\s', '',1,1,'c')
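One caveat: .* is greedy, so if more whitespace-separated text follows the value, 'ID\s.*\s' removes everything up to the last whitespace in the string. A tighter pattern that only consumes the single non-space token after ID (again assuming the value contains no spaces, with the trailing whitespace optional in case the token ends the string) would be:
UPDATE TBL SET SPECIFIC_COLUMN = REGEXP_REPLACE(
    SPECIFIC_COLUMN, 'ID\s\S*\s?', '', 1, 1, 'c')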
I have a schema like this (simplified):
CREATE TABLE users (
id SERIAL PRIMARY KEY,
name text NOT NULL
);
CREATE INDEX users_idx
ON users
USING GIN (to_tsvector('finnish', name));
But I'm getting completely invalid results with my queries:
# select name from users where to_tsvector('finnish', name) @@ to_tsquery('lemmin');
name
------
(0 rows)
# select name from users where to_tsvector('finnish', name) @@ to_tsquery('lemmink');
name
--------------------
Riitta ja Lemminki
Riitta ja Lemminki
(2 rows)
# select name from users where name ilike 'lemmink%';
name
----------------------
Lemminkäinen Matilda
Lemminkäinen Matias
Lemminkäinen Kyösti
Lemminkäinen Tuomas
(4 rows)
Another example:
# select name from users where to_tsvector('finnish', name) @@ to_tsquery('partu');
name
----------
Partuuna
(1 row)
# select name from users where to_tsvector('finnish', name) @@ to_tsquery('partur');
name
------------------------
Parturi-Kampaamo Raija
Parturi-Kampaamo Siema
(2 rows)
I was expecting to get the bottom two results on both queries...
Using the following version:
psql (9.4.6, server 9.5.2)
WARNING: psql major version 9.4, server major version 9.5.
Some psql features might not work.
I don't speak Finnish, but this looks like the expected result. FTS matches lexemes, not parts of words. E.g., do is not a lexeme for dog, but dog is for dogs:
t=# select to_tsvector('english', 'Dogs eats bone') @@ to_tsquery('do');
NOTICE: text-search query contains only stop words or doesn't contain lexemes, ignored
?column?
----------
f
(1 row)
t=# select to_tsvector('english', 'Dogs eats bone') @@ to_tsquery('dog');
?column?
----------
t
(1 row)
So I believe that in Parturi the final i is an optional ending, right?
Update:
From https://en.wiktionary.org/wiki/parturi :
partur[i], partur[eita] => the lexeme will be partur
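If what you are actually after is prefix matching rather than lexeme matching, tsquery supports that explicitly with the :* notation; a sketch against the question's schema, which should also match the Lemminkäinen rows:
select name from users
where to_tsvector('finnish', name) @@ to_tsquery('finnish', 'lemmink:*');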
I'm loading a flat file into a Postgres table. I need to do a few transformations while reading the file and loading it, such as:
-->Check for certain characters; if present, default to some value. REGEXP can be used in Oracle; how can such functions be called in the syntax below?
-->TO_DATE conversion from text format
-->Check for NULL and default to some value
-->Trim functions
-->Only a few columns from the source file should be loaded
-->Defaulting values: say the source file has only 3 columns but we need to load 4; one column should be defaulted with some value
LOAD CSV
FROM 'filename'
INTO postgresql://role@host:port/database_name?tablename
TARGET COLUMNS
(
alphanm,alphnumnn,nmrc,dte
)
WITH truncate,
skip header = 0,
fields optionally enclosed by '"',
fields escaped by double-quote,
fields terminated by '|',
batch rows = 100,
batch size = 1MB,
batch concurrency = 64
SET work_mem to '32 MB', maintenance_work_mem to '64 MB';
How can this be accomplished using pgloader?
Thanks
Here's a self-contained test case for pgloader that reproduces your use-case, as best as I could understand it:
/*
Sorry pgloader version "3.3.2" compiled with SBCL 1.2.8-1.el7 Doing kind
of POC, to implement in real time work. Sample data from file:
raj|B|0.5|20170101|ABCD Need to load only first,second,third and fourth
column; Table has three column, third column should be defaulted with some
value. Table structure: A B C-numeric D-date E-(Need to add default value)
*/
LOAD CSV
FROM inline
(
alphanm,
alphnumnn,
nmrc,
dte [date format 'YYYYMMDD'],
other
)
INTO postgresql:///pgloader?so.raja
(
alphanm,
alphnumnn,
nmrc,
dte,
col text using "constant value"
)
WITH truncate,
fields optionally enclosed by '"',
fields escaped by double-quote,
fields terminated by '|'
SET work_mem to '12MB',
standard_conforming_strings to 'on'
BEFORE LOAD DO
$$ drop table if exists so.raja; $$,
$$ create table so.raja (
alphanm text,
alphnumnn text,
nmrc numeric,
dte date,
col text
);
$$;
raj|B|0.5|20170101|ABCD
Now here's the extract from running the pgloader command:
$ pgloader 41287414.load
2017-08-15T12:35:10.258000+02:00 LOG Main logs in '/private/tmp/pgloader/pgloader.log'
2017-08-15T12:35:10.261000+02:00 LOG Data errors in '/private/tmp/pgloader/'
2017-08-15T12:35:10.261000+02:00 LOG Parsing commands from file #P"/Users/dim/dev/temp/pgloader-issues/stackoverflow/41287414.load"
2017-08-15T12:35:10.422000+02:00 LOG report summary reset
table name read imported errors total time
----------------------- --------- --------- --------- --------------
fetch 0 0 0 0.007s
before load 2 2 0 0.016s
----------------------- --------- --------- --------- --------------
so.raja 1 1 0 0.019s
----------------------- --------- --------- --------- --------------
Files Processed 1 1 0 0.021s
COPY Threads Completion 2 2 0 0.038s
----------------------- --------- --------- --------- --------------
Total import time 1 1 0 0.426s
And here's the content of the target table when the command is done:
$ psql -q -d pgloader -c 'table so.raja'
alphanm │ alphnumnn │ nmrc │ dte │ col
═════════╪═══════════╪══════╪════════════╪════════════════
raj │ B │ 0.5 │ 2017-01-01 │ constant value
(1 row)