PostgreSQL - Inserting a combination of values into a table (Example Included)

For example, I calculate a value which is stored in value, and I would like to insert it into the table as A100.
I tried this at first: insert into t values('A' + value);
This didn't work. Does anyone know how I might be able to do this?

You can use the concat function to do this:
insert into t values(concat('A',value::character varying));

You can use concat or ||
insert into t values(concat('A', value));
insert into t values('A' || value);
In both cases you can, but do not need to, cast the int to character varying, because concat will:
Concatenate the text representations of all the arguments
and || casts non-string arguments to text.
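For instance, here is a minimal sketch (the table t with a single text column and the integer 100 standing in for value are assumptions for illustration):
-- hypothetical single-column table, with 100 standing in for the computed value
CREATE TABLE t (col text);
-- concat() builds the text representation of every argument itself
INSERT INTO t VALUES (concat('A', 100));
-- || also works here because one operand is text, so the integer is cast to text
INSERT INTO t VALUES ('A' || 100);
SELECT * FROM t;  -- both rows contain 'A100'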


How to get a list of quoted strings from the output of a SELECT query that has that list of quoted strings in it, but is of type string?

The following code is not a complete setup that you can run to check; it is only meant to make the question clearer.
Take an example function like this (the variadic pattern is taken from PostgreSQL inserting list of objects into a stored procedure or PostgreSQL - Passing Array to Stored Function):
CREATE OR REPLACE function get_v_from_v(variadic _v text[]) returns table (v varchar(50)) as
$F$
declare
begin
return query
select t.v
from test t
where t.v = any(_v);
end;
$F$
language plpgsql
;
If you copy the one-value output of a select string_agg... query, 'x','y','z', by hand and put it as the argument of the function, the function works:
SELECT v FROM get_v_from_v(
'x','y','z'
);
The 'x','y','z' gets read into the function as variadic _v text[] so that the function can check its values with where t.v = any(_v).
If you instead put the (select string_agg...) query that is behind that 'x','y','z' output in the same place, the function does not work:
select v from get_v_from_v(
(select string_agg(quote_literal(x.v), ',') from (select v from get_v_from_name('something')) as x)
);
That means: the "one-value output field" 'x','y','z' that comes from the (select string_agg...) query is not the same as the text[] list type: 'x','y','z'.
With get_v_from_name('something') being another function that returns a single-column table with the "v" values in its rows, running string_agg() on its output gives you the 'x','y','z' output. I learnt about string_agg() at How to make a list of quoted strings from the string values of a column in postgresql?. The full list of such string functions is in the PostgreSQL documentation at 9.4. String Functions and Operators.
I guess that the output of the select query is just one string, not a list, so that the input is not seen as a list of quoted strings but rather as a single string as a whole: ''x','y','z''. The get_v_from_v argument, however, must not be one string of all values but a list of quoted strings, since the function parameter is of type text[] - which is a list.
This does not seem to depend on the particular query behind the output. It seems to be a general issue: a value taken from a table tuple and passed as a function argument is not the same as that same output hand-copied into the argument position.
Therefore the question: what needs to be done so that the output of the select query behaves like a hand-copy of that output, i.e. so that the argument is just the list 'x','y','z', as if it had been copied and pasted?
PS: I guess that building lists of quoted strings from a one-column table output only to pass them to a function is not best practice. For example, in T-SQL/SQL Server you would pass "table-valued parameters", i.e. you pass the values as a table that you select from within the function, see How do I pass a list as a parameter in a stored procedure?. I am not sure how this is done in PostgreSQL, but it might be what is needed here.
CREATE OR REPLACE function get_v_from_v(_v text[]) returns table (v varchar(50)) as
$F$
declare
begin
return query
select t.v
from test t
where t.v = any((select * from unnest(_v)));
end;
$F$
language plpgsql
;
With get_v_from_name('something') being the function from the question that returns a single-column table with the "v" values in its rows, the following works:
select v from get_v_from_v(
(select array_agg(x.v) from (select v from get_v_from_name('something')) as x)
);
Side remark:
array_agg(quote_literal(x.v), ',') instead of array_agg(x.v) fails because array_agg() does not accept a second argument.
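To see why the string_agg() version cannot work while array_agg() does, compare the types the two aggregates produce (a minimal sketch, assuming a table test(v) holding the rows 'x', 'y' and 'z'):
-- hypothetical table, only for illustration
CREATE TABLE test (v varchar(50));
INSERT INTO test VALUES ('x'), ('y'), ('z');
-- string_agg returns ONE text value containing the characters 'x','y','z'
SELECT string_agg(quote_literal(v), ',') FROM test;
-- array_agg returns a real array, {x,y,z}, which matches the text[] parameter
SELECT array_agg(v) FROM test;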

How to call decode function on column before insert statement?

I have a table and some of its columns are of type "bytea". What I want to do is: before any insert statement, check whether a column is of type "bytea" and, if so, decode the hex value being inserted into it.
Is there a way to create a trigger function like below?
INSERT INTO USERS (ID, NAME, STATUS) VALUES ('0x3BEDDASTSFSFSDS', 'test', 'new')
to
INSERT INTO USERS (ID, NAME, STATUS) VALUES (decode('0x3BEDDASTSFSFSDS', 'hex'), 'test', 'new')
There is no way to intercept the input string before Postgres parses it as a bytea.
It'll be read as an "escape" format literal, i.e. the bytea will end up holding the hex digits' ASCII codepoints. You can undo this, though it's a little unpleasant:
create function trg() returns trigger language plpgsql as $$
begin
assert substring(new.id from 1 for 2) = '0x';
new.id = decode(convert_from(substring(new.id from 3), current_setting('server_encoding')), 'hex');
return new;
end
$$;
create trigger trg
before insert on users
for each row
execute procedure trg();
If you have any control over the input, just change the 0x to \x, and Postgres will convert it for you.
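For example, a client that can rewrite the literal could send PostgreSQL's native hex format directly; this sketch uses a made-up valid hex value, since the 0x... string from the question is not valid hex:
-- hypothetical table matching the question; id is bytea
CREATE TABLE users (id bytea, name text, status text);
-- '\x...' is the native bytea hex input format, so no trigger or decode() is needed
INSERT INTO users (id, name, status) VALUES ('\x3bed01', 'test', 'new');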

Scalar-valued function does not return NULL but a 'NULL' string

I need to import data from Excel into an MS SQL database and I thought using OPENROWSET would be a good idea... well, it is not bad, but it has some side effects.
The data I'm receiving is not always 100% correct. By correct I mean that cells which should be NULL (and are empty in Excel) sometimes contain the string "NULL" or some other junk like whitespace. I tried to fix it with this script:
GO
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
ALTER FUNCTION [dbo].[NullIfEmpty](@input nvarchar)
RETURNS nvarchar(max)
AS
BEGIN
if (@input = '' or @input = 'NULL')
begin
return NULL
end
return @input
END
But strange things happen. This gives me a string with the text "NULL" instead of a real NULL so the grid cell after querying the database isn't yellow but contains normal text even though the target column allows NULL.
A simple test with:
select dbo.nullifempty('NULL')
or
select dbo.nullifempty(null)
also yields a string.
Do you know why this is happening and how I can fix it?
To get NULL for empty strings or for strings containing the word NULL, you can nest two NULLIF calls:
NULLIF(NULLIF(@input, 'NULL'), '')
Please note that the problem in your original code is that you didn't specify the length of the @input parameter, so SQL Server created it as nvarchar(1).
You should always specify a length for char/varchar/nchar/nvarchar.
From the remarks on the nchar and nvarchar documentation page:
When n is not specified in a data definition or variable declaration statement, the default length is 1. When n is not specified with the CAST function, the default length is 30.
(n referring to the n in nchar(n) or nvarchar(n))
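A quick way to see that truncation in action (a sketch in plain T-SQL; variable names are arbitrary):
-- without a length, nvarchar defaults to nvarchar(1) in a variable declaration,
-- so the assigned value is silently truncated to one character
DECLARE @untyped nvarchar = N'NULL';
DECLARE @typed nvarchar(10) = N'NULL';
SELECT @untyped AS untyped,   -- 'N'
       @typed   AS typed;     -- 'NULL'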
Replace the line with ALTER:
ALTER FUNCTION [dbo].[NullIfEmpty](@input nvarchar(max))
and the line with if:
if (LTRIM(RTRIM(@input)) = '' or @input IS NULL)
and reassign the value of the declared variable using set before returning it:
BEGIN
if (@input = '' or @input = 'NULL')
begin
set @input = NULL
end
return @input
END
Then test it.

DB2 right padding with field length

We have fields with varying lengths and want to right-pad them with spaces to the field length defined in the schema.
The following statement is working:
SELECT RPAD(field, LENGTH(field), ' ') AS field FROM schema.table;
The following INSERT, however, produces SQL error 206 with SQLState 42703 ("... is not valid in the context where it is used"):
-- Our application resolves the prepared statement's ? - this is working fine
INSERT INTO schema.table (field) VALUES (RPAD(?, LENGTH(field), ' '));
The same happens with:
INSERT INTO schema.table (field) VALUES (RPAD(?, LENGTH(schema.table.field), ' '));
Is there any possibility to avoid hardcoding the field length?
Your problem is that scalar functions operate on rows; LENGTH(field) only works within a statement that returns rows, such as a select statement. To understand why, imagine putting some other function in place of LENGTH(). LCASE(field), for example, takes the lowercase of the string in a particular row. It wouldn't make sense applied generically to a column. Even LENGTH() can vary row-by-row in some cases: if the column is of type VARCHAR, LENGTH() returns the length of the actual string.
The solution is to select any row, perform the LENGTH() operation on the field, and store the result in a variable:
CREATE OR REPLACE VARIABLE field_length INTEGER;
SET field_length = (
SELECT LENGTH(field) FROM schema.table
WHERE field IS NOT NULL
FETCH FIRST ROW ONLY
);
You only need to do this once in your code. Then, whenever you need to use the length:
INSERT INTO schema.table (field) VALUES (RPAD(?, field_length, ' '));
Note that this solution depends on field being defined as a CHAR(x) rather than a VARCHAR(x). If you had to do this with a VARCHAR, you could find out the length of the field from the syscat.columns system table.
EDIT: added handling of null values since LENGTH() could return null if the value in field is null.
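If field were a VARCHAR, the defined length could be read from the catalog instead of from a data row; a sketch (the schema, table and column names are placeholders):
-- LENGTH in SYSCAT.COLUMNS is the defined length of the column
SELECT length
FROM syscat.columns
WHERE tabschema = 'SCHEMA'
  AND tabname   = 'TABLE'
  AND colname   = 'FIELD';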
If you want a fixed length column, why are you using VARCHAR? Use CHAR - DB2 will automatically pad the values for you.

How to find the first and last occurrences of a specific character inside a string in PostgreSQL

I want to find the first and the last occurrences of a specific character inside a string. As an example, consider the string "2010-####-3434" and suppose the character to be searched for is "#". The first occurrence of the hash inside the string is at the 6th position and the last occurrence is at the 9th position.
Well...
Select position('#' in '2010-####-3434');
will give you the first.
If you want the last, just run that again with the reverse of your string. A pl/pgsql string reverse can be found here.
Select length('2010-####-3434') - position('#' in reverse_string('2010-####-3434')) + 1;
My example:
reverse(substr(reverse(newvalue), 0, strpos(reverse(newvalue), ',')))
Reverse the whole string
Take the substring up to the first occurrence of the delimiter
Reverse the result
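For instance, with a sample value in place of newvalue, the expression returns the text after the last comma (a sketch):
-- reverse('a,b,c') = 'c,b,a'; cut before the first comma; reverse back
SELECT reverse(substr(reverse('a,b,c'), 0, strpos(reverse('a,b,c'), ',')));  -- 'c'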
In the case where char = '.', an escape is needed. So the function can be written:
CREATE OR REPLACE FUNCTION last_post(text,char)
RETURNS integer LANGUAGE SQL AS $$
select length($1)- length(regexp_replace($1, E'.*\\' || $2,''));
$$;
9.5+ with array_positions
Using basic PostgreSQL array functions, split the string into an array of single characters with string_to_array() and feed that to array_positions(), like this: array_positions(string_to_array(str, null), c)
SELECT
arrpos[array_lower(arrpos,1)] AS first,
arrpos[array_upper(arrpos,1)] AS last
FROM ( VALUES
('2010-####-3434', '#')
) AS t(str,c)
CROSS JOIN LATERAL array_positions(string_to_array(str,null), c)
AS arrpos;
I do not know how to do that, but the regular expression functions like regexp_matches, regexp_replace, and regexp_split_to_array may be an alternative route to solving your problem.
This pure SQL function will provide the last position of a char inside the string, counting from 1. It returns 0 if not found ... But (big disclaimer) it breaks if the character is some regex metacharacter ( .$^()[]*+ )
CREATE FUNCTION last_post(text,char) RETURNS integer AS $$
select length($1)- length(regexp_replace($1, '.*' || $2,''));
$$ LANGUAGE SQL IMMUTABLE;
test=# select last_post('hi#-#-#byte','#');
last_post
-----------
7
test=# select last_post('hi#-#-#byte','a');
last_post
-----------
0
A more robust solution would involve pl/pgSQL, as in rfusca's answer.
Another way to find the last position is to split the string into an array, using the needed character as the delimiter, and then subtract the length of the last element from the length of the whole string:
CREATE FUNCTION last_pos(char, text) RETURNS INTEGER AS
$$
select length($2) - length(a.arr[array_length(a.arr,1)])
from (select string_to_array($2, $1) as arr) as a
$$ LANGUAGE SQL;
For the first position it is easier to use
select position('#' in '2010-####-3434');
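Putting both together for the example string from the question (a minimal sketch using the built-in reverse()):
SELECT position('#' in '2010-####-3434') AS first_pos,                                          -- 6
       length('2010-####-3434') - position('#' in reverse('2010-####-3434')) + 1 AS last_pos;  -- 9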