Postgres copy data & evaluate expression - postgresql

Is it possible for a COPY command to evaluate expressions upon insertion?
For example, consider the following table:
create table test1 ( a int, b int)
and we have a file to import
5 , case when b = 1 then 100 else 101
25 , case when b = 1 then 100 else 101
145, case when b = 1 then 100 else 101
The following command will fail
COPY test1 FROM 'file' USING DELIMITERS ',';
with the following error
ERROR: invalid input syntax for integer
which means that it cannot evaluate the CASE expression. Is there any workaround?

The COPY command only copies data (obviously) and does not evaluate SQL code, as explained in the documentation: http://www.postgresql.org/docs/9.3/static/sql-copy.html
As far as I know there is no workaround to make COPY evaluate SQL code.
You must preprocess your CSV file and convert it to a standard SQL script with INSERT statements of this form:
INSERT INTO your_table VALUES(145, CASE WHEN 1 = 1 THEN 100 ELSE 101 END);
Then execute the SQL script with the client you are using, e.g. with psql you would use the -f option:
psql -d your_database -f your_sql_script
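For the sample file in the question, the preprocessed script (your_sql_script above) might look like the following; this is only a minimal sketch that mirrors the INSERT form shown earlier, with the CASE condition being whatever your preprocessor substitutes for "b = 1":

-- one INSERT per line of the original file
INSERT INTO test1 VALUES (5,   CASE WHEN 1 = 1 THEN 100 ELSE 101 END);
INSERT INTO test1 VALUES (25,  CASE WHEN 1 = 1 THEN 100 ELSE 101 END);
INSERT INTO test1 VALUES (145, CASE WHEN 1 = 1 THEN 100 ELSE 101 END);

Save this as your_sql_script and run it with the psql command shown above.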

Related

Postgres - error after executing the same query many times

I have a table fepu00 and a trigger on it.
The part of the code causing the problem looks as follows:
if (old_record.potime = NEW.putime  -- new AP (read in statement above)
    or old_record.pupisb::NUMERIC != NEW.pupisb::NUMERIC) then
    insert into dl356_table
    values (old_record.poid, old_record.poidma, old_record.ponmaf,
            old_record.adstr, old_record.adpsc, old_record.adcit, old_record.adidze,
            NEW.pupisb::numeric,
            case when old_record.potime = NEW.putime then '1' else '2' end,
            NEW.putime);
end if;
The query I'm executing is really simple:
update fepu00 set pudas = ? where puid = ?
It works, but only a certain number of times. Then it throws:
ERROR: type of parameter 25 (numeric) does not match that when preparing the plan (text)
Where: PL/pgSQL function dl356_trigger() line 75 at IF
The number of updated records varies from a few to a few hundred.
When I run the same query (with the same parameters) again, it works properly until the next failure.
Thanks for any suggestions.

Converting function instr from Oracle to PostgreSQL (sql only)

I am working on converting something from Oracle to PostgreSQL. In the Oracle file there is a function:
instr(string,substring,starting point,nth location)
Example:
instr(text, '$', 1, 3)
In PostgreSQL this does not exist, so I looked for an equivalent function (the 4th parameter is important).
I found that the function strpos(str, sub) in Postgres is the equivalent of instr(str, sub) in Oracle. I tried options via split_part, but it didn't work out.
I need the same result using only standard Postgres functions (no custom function).
Maybe someone can offer options, even if the code is redundant.
This may be done in pure SQL using string_to_array.
with tab(val) as (
  select 'qwe$rty$123$456$78'
  union all
  select 'qwe$rty$123$'
  union all
  select '123$456$'
  union all
  select '123$456'
)
select
  val
  /* Oracle's signature: instr(string, substring [, position [, occurrence ]]) */
  , case
      when array_length(
             string_to_array(substr(val /*string*/, 1 /*position*/), '$' /*substring*/),
             1
           ) <= 3 /*occurrence*/
      then 0
      else length(array_to_string(
             (string_to_array(substr(val /*string*/, 1 /*position*/), '$' /*substring*/))[:3 /*occurrence*/],
             '$' /*substring*/)
           ) + 1
    end as instr
from tab
val                | instr
-------------------+-------
qwe$rty$123$456$78 |    12
qwe$rty$123$       |    12
123$456$           |     0
123$456            |     0
Postgres: fiddle
Oracle: fiddle
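If the starting position is not 1 (Oracle's third argument), the same pattern should still apply; below is a minimal sketch under the assumption that the search simply starts at that offset and the index found in the tail is shifted back by (position - 1) so it refers to the whole string. The example emulates instr(val, '$', 5, 2):

select
  val
  , case
      when array_length(string_to_array(substr(val, 5 /*position*/), '$'), 1) <= 2 /*occurrence*/
      then 0
      else length(array_to_string(
             (string_to_array(substr(val, 5 /*position*/), '$'))[:2 /*occurrence*/],
             '$')
           ) + 1
           + (5 - 1)  -- shift the index back to the original string
    end as instr
from (values ('qwe$rty$123$456$78')) as tab(val)
-- expected result: 12 (the second '$' at or after position 5)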

IBM db2 update substring in column

I have a table X with column Y (IBM Db2) where column Y is a string of length less than 2048 characters. Some values in column Y contain a string like ID some_value. I would like to remove all those keys and values. For example:
Row before update:
some text a ba ba b a ID sffjhdsf32484 further part etc etc
Row after update:
some text a ba ba b a further part etc etc
How can I achieve that?
I have the following code so far:
BEGIN
  declare aaa anchor X.Y;
  declare cur CURSOR for
    SELECT Y
    from X for update of Y;
  open cur;
  fetch cur into aaa;
  update X.Y
    set Y = //update logic
    where current of cur;
  close cur;
END;
Unfortunately, it updates only the first row in the table.
Does this help?
$ db2 "create table t(v varchar(2048))"
DB20000I The SQL command completed successfully.
$ db2 "insert into t values 'some text a ba ba b a ID sffjhdsf32484 further part etc etc'"
DB20000I The SQL command completed successfully.
$ db2 "update t set v = REGEXP_REPLACE(v,' ID sffjhdsf32484')"
DB20000I The SQL command completed successfully.
$ db2 "select v::varchar(60) from t"
1
------------------------------------------------------------
some text a ba ba b a further part etc etc
1 record(s) selected.
Use the REGEXP_REPLACE function as in the following example:
SELECT REGEXP_REPLACE('String containing ID skskskk999s inside',
'ID\s.*\s', '',1,1,'c')
FROM sysibm.sysdummy1
The result will be:
1
------------------------
String containing inside
Knowing how REGEXP_REPLACE works, you can now use it in an UPDATE statement or any other statement you need. For example:
UPDATE TBL SET SPECIFIC_COLUMN = REGEXP_REPLACE(
SPECIFIC_COLUMN,'ID\s.*\s', '',1,1,'c')
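Applied to the table from the question (table X, column Y), a minimal sketch might look like the one below. The pattern ' ID \S+' is an assumption that the value following ID never contains spaces, which keeps the match from swallowing more of the text than the greedy '.*' above could:

UPDATE X
SET Y = REGEXP_REPLACE(Y, ' ID \S+', '', 1, 1, 'c')  -- assumes the ID value has no spaces
WHERE Y LIKE '% ID %'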

The SQL works fine but with Python it doesn't insert values into the table

I'm trying to use Python to run stored procedures in SQL. I have tested my SQL code and it works fine, but when I execute it via Python the values are not inserted into my table.
Note: I don't get any errors when executing.
My code below:
import psycopg2

con = psycopg2.connect(dbname='dbname'
                       , host='host'
                       , port='5439', user='username', password='password')

def executeScriptsfromFile(filename):
    # Open and read the file as a single buffer
    cur = con.cursor()
    fd = open(filename, 'r')
    sqlFile = fd.read()
    fd.close()
    # all SQL commands (split on ';')
    sqlCommands = filter(None, sqlFile.split(';'))
    # Execute every command from the input file
    for command in sqlCommands:
        # This will skip and report errors
        # For example, if the tables do not yet exist, this will skip over
        # the DROP TABLE commands
        try:
            cur.execute(command)
            con.commit()
        except Exception as inst:
            print("Command skipped:", inst)
    cur.close()

executeScriptsfromFile('filepath.sql')
The INSERT command in the SQL file:
INSERT INTO schema.users
SELECT
UserId
,Country
,InstallDate
,LastConnectDate
FROM #Source;
Note: As I said, the SQL works perfectly fine when I tested it.

Comparing integers to return Boolean in BigQuery

I'm running a BigQuery SELECT CASE WHEN query from the command line. I'm looking in a string for a numeric value and casting that to an integer; this needs to be compared to a value and return a boolean so that the CASE statement works.
bq query SELECT case when integer(right(strWithNumb,8))> 10000000 then right(strWithNumb,8) else "no" end FROM [Project:bucket.mytable]
returned
"CASE expects the WHEN expression to be boolean."
I tried:
boolean(integer(right(strWithNumb,8))> 10000000)
but got
" Was expecting: "WHEN" ..."
Even though your original query works in the Web UI, it DOES fail in the bq command line tool depending on your environment, for example if you are on a Windows PC.
Try escaping the > character with ^ and wrapping the whole query in double quotes, as in the example below. Please also note the escaping of " in "no".
bq query "SELECT case when integer(right(strWithNumb,8)) ^> 10000000 then right(strWithNumb,8) else \"no\" end FROM [Project:bucket.mytable]"
You can avoid the latter by changing " to ':
bq query "SELECT case when integer(right(strWithNumb,8)) ^> 10000000 then right(strWithNumb,8) else 'no' end FROM [Project:bucket.mytable]"
A little more explanation:
When you execute your original command (on a PC, for example via the Google Cloud SDK Shell), your actual query becomes the following:
SELECT case when integer(right(strWithNumb,8)) then right(strWithNumb,8) else "no" end FROM [Project:bucket.mytable]
As you can see, the > 10000000 part of the query gets lost, making the WHEN expression an INTEGER instead of the expected BOOLEAN.
Hope this helped
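On a Linux or macOS shell the ^ escape should not be needed; a minimal sketch, assuming the same table, is to wrap the whole query in single quotes so the shell does not treat > as output redirection:

bq query 'SELECT case when integer(right(strWithNumb,8)) > 10000000 then right(strWithNumb,8) else "no" end FROM [Project:bucket.mytable]'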