SET ls_total = ls_concat1 || ls_concat2;
I'm getting an "is too long" error.
Note: ls_concat1, ls_concat2 and ls_total are CLOB datatypes.
The CLOB data type has a defined length in Db2, and a variable must obviously be long enough to hold the result value.
There is no error in the example below, but you get one if you define ls_total as CLOB (9) - with insufficient length.
BEGIN
DECLARE ls_total CLOB (10);
--DECLARE ls_total CLOB (9); -- with this declaration instead, the "too long" error appears
DECLARE ls_concat1, ls_concat2 CLOB (5);
SET ls_concat1 = 'ABCDE', ls_concat2 = 'ABCDE';
SET ls_total = ls_concat1 || ls_concat2;
END#
If this doesn't answer your question, then post a reproducible example that returns the error you mentioned.
I need to import data from Excel into an MS SQL database and I thought using OPENROWSET would be a good idea... well, it is not bad, but it has some side effects.
The data I'm receiving is not always 100% correct. By correct I mean cells that should be NULL (and in Excel are empty) sometimes contain the string "NULL" or some other junk like whitespace. I tried to fix it with this script:
GO
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
ALTER FUNCTION [dbo].[NullIfEmpty](@input nvarchar)
RETURNS nvarchar(max)
AS
BEGIN
if (@input = '' or @input = 'NULL')
begin
return NULL
end
return @input
END
But strange things happen. This gives me the string "NULL" instead of a real NULL, so after querying the database the grid cell isn't yellow but contains normal text, even though the target column allows NULL.
A simple test with:
select dbo.nullifempty('NULL')
or
select dbo.nullifempty(null)
also yields a string.
Do you know why this is happening and how I can fix it?
To get NULL for empty strings or for strings that are the word NULL, you can nest two NULLIF calls (note that a COALESCE over two separate NULLIFs would hand the original string back again, so nesting is needed):
NULLIF(NULLIF(@input, 'NULL'), '')
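A couple of quick checks with literal values (illustrative only):
SELECT NULLIF(NULLIF(N'NULL', 'NULL'), '')  -- NULL
SELECT NULLIF(NULLIF(N'', 'NULL'), '')      -- NULL
SELECT NULLIF(NULLIF(N'abc', 'NULL'), '')   -- N'abc'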
Please note that the problem in your original code is that you didn't specify the length of the @input parameter - so SQL Server created it as nvarchar(1).
You should always specify length for char/varchar/nchar and nvarchar.
From the nchar and nvarchar documentation page remarks:
When n is not specified in a data definition or variable declaration statement, the default length is 1. When n is not specified with the CAST function, the default length is 30.
(n referring to the n in nchar(n) or nvarchar(n))
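A minimal sketch of that pitfall (values are illustrative):
DECLARE @x nvarchar = N'ABCDE';              -- no length given, so nvarchar(1)
SELECT @x;                                   -- returns N'A' - silently truncated
SELECT CAST(REPLICATE('x', 40) AS nvarchar); -- CAST without n defaults to 30 characters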
repleace lines with 'ALTER"
ALTER FUNCTION [dbo].[NullIfEmpty](@input nvarchar(max))
and the line with 'if':
if (LTRIM(RTRIM(@input)) = '' or @input IS NULL)
You should reassign the value of the declared variable using SET:
BEGIN
if (@input = '' or @input = 'NULL')
begin
set @input = NULL
end
select @input
END
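Putting those pieces together, the corrected function might look like this (a sketch combining the fixes above):
ALTER FUNCTION [dbo].[NullIfEmpty](@input nvarchar(max))
RETURNS nvarchar(max)
AS
BEGIN
    if (LTRIM(RTRIM(@input)) = '' or @input = 'NULL' or @input IS NULL)
    begin
        set @input = NULL
    end
    return @input
END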
I would like to know how I can insert an image ("bytea") into a table of my PostgreSQL database. I've been searching forums for hours and have seen the same question posted dozens of times, but I have yet to find a single answer. All I see is how to insert .jpeg's into an oid column, which isn't what I need.
Here's the database table:
create table category (
"id_category" SERIAL,
"category_name" TEXT,
"category_image" bytea,
constraint id_cat_pkey primary key ("id_category"))without oids;
and when I add a new line, it doesn't work :
insert into category(category_name,category_image) values('tablette', lo_import('D:\image.jpg'));
If the column type is bytea then you can simply use pg_read_binary_file.
Example: pg_read_binary_file('/path-to-image/')
Check the PostgreSQL documentation of pg_read_binary_file.
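For example, against the category table from the question (a sketch; pg_read_binary_file reads files on the database server and needs the appropriate privileges, and the path is illustrative):
insert into category(category_name, category_image)
values ('tablette', pg_read_binary_file('/var/lib/postgresql/image.jpg'));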
insert into category(category_name,category_image) values('tablette', bytea('D:\image.jpg'));
The above runs if the column type is bytea, but note that bytea('D:\image.jpg') stores the path string itself as bytes, not the contents of the file; use pg_read_binary_file or lo_import to store the actual image data.
insert into category(category_name,category_image) values('tablette', lo_import('D:\image.jpg'));
The above solution works if column type is oid i.e., Blob
insert into category(category_name,category_image) values('tablette', decode('HexStringOfImage', 'hex'));
The decode function takes two parameters. The first parameter is the hex string of the image; the second is the format, 'hex'. decode converts the hex string to bytes, which are stored in the bytea column in Postgres.
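A minimal illustration (the hex string here is just the first four bytes of a JPEG header, not a full image):
select encode(decode('ffd8ffe0', 'hex'), 'hex');  -- round-trips to 'ffd8ffe0'
insert into category(category_name, category_image)
values ('tablette', decode('ffd8ffe0', 'hex'));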
None of the above examples worked well for me, and on top of that I needed to add many images at once.
Full working example (python 3) with explanations:
With get_binary_array we get the value of the image (or file) as a binary array, using its path and file name as parameter (ex: '/home/Pictures/blue.png').
With send_files_to_postgresql we send all the images at once.
I previously created the database with one sequential 'id' that will automatically be incremented (but you can use your own homemade id) and one bytea 'image' field
import psycopg2

def get_binary_array(path):
    with open(path, "rb") as image:
        f = image.read()
        b = bytes(f).hex()
        return b

def send_files_to_postgresql(connection, cursor, file_names):
    # 'table' stands in for your actual table name here
    query = "INSERT INTO table(image) VALUES (decode(%s, 'hex'))"
    mylist = []
    for file_name in file_names:
        # executemany expects a sequence of parameter tuples
        mylist.append((get_binary_array(file_name),))
    try:
        cursor.executemany(query, mylist)
        connection.commit()  # committing the changes is advised for big files, see the documentation
        count = cursor.rowcount  # check that the images were all successfully added
        print(count, "Records inserted successfully into table")
    except (Exception, psycopg2.DatabaseError) as error:
        print(error)

def get_connection_cursor_tuple():
    connection = None
    try:
        params = config()  # config() is assumed to read connection parameters, as in the psycopg2 tutorials
        print('Connecting to the PostgreSQL database...')
        connection = psycopg2.connect(**params)
        cursor = connection.cursor()
    except (Exception, psycopg2.DatabaseError) as error:
        print(error)
    return connection, cursor

connection, cursor = get_connection_cursor_tuple()
img_names = ['./blue.png', './landscape.jpg']
send_files_to_postgresql(connection, cursor, img_names)
Something like this function (slightly adapted from here) could work out.
create or replace function img_import(filename text)
returns void
volatile
as $$
declare
  content_ bytea;
  loid oid;
  lfd integer;
  lsize integer;
begin
  loid := lo_import(filename);
  lfd := lo_open(loid, 262144);   -- INV_READ (0x40000); reading requires read mode
  lsize := lo_lseek(lfd, 0, 2);   -- seek to the end to find the size
  perform lo_lseek(lfd, 0, 0);    -- seek back to the start
  content_ := loread(lfd, lsize);
  perform lo_close(lfd);
  perform lo_unlink(loid);
  insert into category (category_name, category_image)
  values ('tablette', content_);
end;
$$ language plpgsql;
Use it like select * from img_import('D:\image.jpg');
or rewrite it as a procedure if you feel like it.
Create the function below:
create or replace function bytea_import(p_path text, p_result out bytea)
language plpgsql as $$
declare
  l_oid oid;
begin
  select lo_import(p_path) into l_oid;  -- import the file as a temporary large object
  select lo_get(l_oid) into p_result;   -- read its contents into the bytea out parameter
  perform lo_unlink(l_oid);             -- delete the temporary large object
end;$$;
and use it like this:
insert into table values(bytea_import('C:\1.png'));
For Linux users, this is how to add the path to the image:
insert into blog(img) values(bytea('/home/samkb420/Pictures/Sam Pics/sam.png'));
create table images (imgname text, img bytea);
insert into images(imgname,img) values ('MANGO', pg_read_binary_file('path_of_image')::bytea);
Use SQL Workbench: open the Database Explorer, insert a row, and follow the dialogue...
We have fields with varying lengths and want to right-pad them with spaces to the field length defined in the schema.
The following statement is working:
SELECT RPAD(field, LENGTH(field), ' ') AS field FROM schema.table;
The following, however, produces SQL error 206 with SQLSTATE 42703, "... is not valid in the context where it is used":
// Our application resolves the prepared statement's ? - this is working fine
INSERT INTO schema.table (field) VALUES (RPAD(?, LENGTH(field), ' '));
The same happens with:
INSERT INTO schema.table (field) VALUES (RPAD(?, LENGTH(schema.table.field), ' '));
Is there any possibility to avoid hardcoding the field length?
Your problem is that scalar functions operate on rows; LENGTH(field) only works within a statement that returns rows, such as a select statement. To understand why, imagine putting some other function in place of LENGTH(). LCASE(field), for example, takes the lowercase of the string in a particular row. It wouldn't make sense applied generically to a column. Even LENGTH() can vary row-by-row in some cases: if the column is of type VARCHAR, LENGTH() returns the length of the actual string.
The solution is to select any row, perform the LENGTH() operation on the field, and store the result in a variable:
CREATE OR REPLACE VARIABLE field_length INTEGER;
SET field_length = (
SELECT LENGTH(field) FROM schema.table
WHERE field IS NOT NULL
FETCH FIRST ROW ONLY
);
You only need to do this once in your code. Then, whenever you need to use the length:
INSERT INTO schema.table (field) VALUES (RPAD(?, field_length, ' '));
Note that this solution depends on field being defined as a CHAR(x) rather than a VARCHAR(x). If you had to do this with a VARCHAR, you could find out the length of the field from the syscat.columns system table.
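For example (a sketch; substitute your own names, which Db2 stores in uppercase in the catalog):
SELECT length
FROM syscat.columns
WHERE tabschema = 'MYSCHEMA'
  AND tabname = 'MYTABLE'
  AND colname = 'FIELD';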
EDIT: added handling of null values since LENGTH() could return null if the value in field is null.
If you want a fixed length column, why are you using VARCHAR? Use CHAR - DB2 will automatically pad the values for you.
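A quick illustration of the CHAR behavior (table and values are made up):
CREATE TABLE pad_demo (c CHAR(10));
INSERT INTO pad_demo VALUES ('abc');  -- stored as 'abc' padded with seven trailing blanks
SELECT LENGTH(c) FROM pad_demo;       -- returns 10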
I am trying to check whether a value is null: if so, select null, else cast it to numeric. But it throws an error. This is actually part of an insert statement:
INSERT into someTable(name,created,power)
SELECT 'xyz',now(),
case when :power ='null' then NULL else cast(:power as numeric) end from abc
The error that I get is:
Error: ERROR: invalid input syntax for type numeric: "null"
:power is a variable that can be given any value using Java code. If I give it a value of null, it gives an error.
In code, I get the following error from the Java stack trace:
org.postgresql.util.PSQLException: ERROR: cannot cast type bytea to numeric
Error:
SELECT CASE WHEN 'null' = 'null' THEN NULL ELSE cast('null' AS numeric) END
No error:
DO $$
DECLARE
power text := 'null';
BEGIN
PERFORM CASE WHEN power = 'null' THEN NULL ELSE cast(power AS numeric) END;
END;
$$
Explanation:
If you build a query string, the expression cast('null' AS numeric) or simply 'null'::numeric always raises an exception, even in an ELSE block that is never executed, because it is invalid input syntax and the exception is raised during the syntax check (like the error message implies), not during execution.
A CASE statement like the one you display only makes sense with a parameter or variable, not with literals. The second instance of the literal has no connection to the first instance whatsoever after the query string has been assembled.
For dynamic SQL like that, you need to check the value before you build the query string. Or you use a function or prepared statement and pass the value as a parameter. That would work, too.
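For instance, with a server-side prepared statement the value arrives as a typed parameter, so no 'null' literal ever enters the query text (a sketch using the table names from the question):
PREPARE ins_power(numeric) AS
    INSERT INTO someTable(name, created, power)
    SELECT 'xyz', now(), $1 FROM abc;
EXECUTE ins_power(NULL);   -- a real NULL, no cast of a 'null' string needed
EXECUTE ins_power(42.5);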
More advice after comment:
In your particular case you could check the value in the app and build a query string like this:
INSERT INTO tbl(name, abc_id, created, power)
SELECT 'xyz'
, abc_id
, now()
, <insert_value_of_power_or_NULL_here> -- automatically converted to numeric
FROM abc
You may be interested in a different approach to INSERT data from a file conditionally.
Use COPY for files local to the server or psql's meta-command \copy for files local to the client.
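COPY can also map a placeholder string to NULL directly (a sketch; the file path and column list are assumptions):
COPY someTable (name, created, power)
FROM '/path/to/data.csv' WITH (FORMAT csv, NULL 'null');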
If the field value is null and you want in that case to map it to some value, you can use coalesce(field_name, 'Some value') or coalesce(field_name, 123).
For full documentation see here.
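For example (column names are illustrative):
SELECT name, coalesce(power, 0) AS power FROM someTable;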
You have to check with the IS operator, not with the equals operator, when you are dealing with NULL:
INSERT into someTable(name,created,power)
SELECT 'xyz',now(),
case when :power IS null then NULL else cast(:power as numeric) end from abc
INSERT into someTable(name,created,power) SELECT 'xyz',now(),
case :power when 'null' then NULL else :power end::numeric from abc
I was trying to do something similar in order to update/insert some records where a numeric value can be null or not.
You can validate a variable before you send it to the function, or inside the function, depending on the value passed.
(For me, using a variable is better than using CASE WHEN ... THEN ... ELSE ... END CASE every time you need to validate the value.)
So, to work with NULL values using a regular comparison operator in order to find a record to update, you can turn transform_null_equals to ON.
I hope this helps someone.
CREATE OR REPLACE FUNCTION update_insert_transaction(vcodaccount integer, vcodaccountaux text,
    vdescription text, vcodgroup integer)
RETURNS integer AS $$
DECLARE
    n integer = 0;
    vsql text = 'NULL';
BEGIN
    IF vcodaccountaux <> '' THEN
        vsql = quote_literal(vcodaccountaux);  -- quote the text value for safe inclusion in the dynamic SQL
    END IF;
    SET LOCAL transform_null_equals TO ON;
    EXECUTE 'UPDATE account_import_conf SET (codaccount, codaccountaux, description, codgroup) =
        ('||vcodaccount||','||vsql||',trim('||quote_literal(vdescription)||'),'||vcodgroup||')
        WHERE codaccount='||vcodaccount||' AND codaccountaux = '||vsql||' RETURNING * ';
    GET DIAGNOSTICS n = ROW_COUNT;
    IF n = 0 THEN
        EXECUTE 'INSERT INTO account_import_conf (codaccount, codaccountaux, description, codgroup)
            SELECT '||vcodaccount||','||vsql||' ,trim('||quote_literal(vdescription)||'),'||vcodgroup||';';
    END IF;
    RETURN n;
END;$$
LANGUAGE plpgsql;
We have decided to move from OIDs in our PostgreSQL 9.0 database and use bytea columns instead. I'm trying to copy the data from one column to the other, but I can't figure out the right query. This is the closest I've gotten to:
update user as thistable set pkcs_as_bytea = (select array_agg(mylargeobject.data) from
(select * from pg_largeobject where loid = thistable.pkcs12_as_oid order by pageno) as mylargeobject) where thistable.pkcs12 is not null
And that gives me the following error message:
ERROR: column "pkcs_as_bytea" is of type bytea but expression is of type bytea[]
What would be the right query then?
Another way which doesn't require a custom function is to use the loread(lo_open(...)) combination, like:
UPDATE user SET pkcs_as_bytea = loread(lo_open(pkcs12_as_oid, 262144), 1000000) WHERE pkcs12_as_oid IS NOT NULL
There is a problem with this code: the loread function requires, as its second parameter, the maximum number of bytes to read (the 1000000 above), so you should use a really big number here if your data is big. Otherwise the content will be trimmed after that many bytes, and you won't get all the data back into the bytea field.
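If you are on PostgreSQL 9.4 or later (the question mentions 9.0, which predates this), lo_get avoids guessing the size because it reads the whole large object:
UPDATE user SET pkcs_as_bytea = lo_get(pkcs12_as_oid) WHERE pkcs12_as_oid IS NOT NULL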
If you want to convert from OID to a text field, you should also use a conversion function, as in:
UPDATE user SET pkcs_as_text = convert_from(loread(lo_open(pkcs12_as_oid, 262144), 1000000), 'UTF8')
(262144 is a flag for the open mode, 0x40000 in hex, which means "open read-only")
Here is a stored procedure that does the magic:
CREATE OR REPLACE FUNCTION merge_oid(val oid)
returns bytea as $$
declare
    merged bytea;
    arr bytea;
BEGIN
    FOR arr IN SELECT data FROM pg_largeobject WHERE loid = val ORDER BY pageno LOOP
        IF merged IS NULL THEN
            merged := arr;
        ELSE
            merged := merged || arr;
        END IF;
    END LOOP;
    RETURN merged;
END;
$$ LANGUAGE plpgsql;
Well, I did something like this. I have an attachment table and a content column with data of type oid. I migrated with four actions:
ALTER TABLE attachment add column content_bytea bytea
UPDATE attachment SET content_bytea = lo_get(content)
ALTER TABLE attachment drop column content
ALTER TABLE attachment rename column content_bytea to content
You would need something like array_to_string(anyarray, text), but for bytea - an array_to_bytea(largeobjectarray) that concatenates all sections. You have to create such a function yourself, or handle this in application logic.
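On recent PostgreSQL versions you may not even need a custom function, since string_agg has a bytea variant (an assumption to verify against your version), e.g.:
UPDATE user u SET pkcs_as_bytea = (
    SELECT string_agg(lo.data, ''::bytea ORDER BY lo.pageno)
    FROM pg_largeobject lo
    WHERE lo.loid = u.pkcs12_as_oid
) WHERE u.pkcs12 IS NOT NULL;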
This is what you can do.
--table thistable --
ALTER TABLE thistable add column se_signed_bytea bytea;
UPDATE thistable SET se_signed_bytea = lo_get(pkcs_as_bytea);
ALTER TABLE thistable drop column pkcs_as_bytea;
ALTER TABLE thistable rename column se_signed_bytea to pkcs_as_bytea;