PostgreSQL exception catching or error handling in PostgreSQL

In the code below I have created a sample table and written a stored procedure with exception handling. The problem is that if I insert integer values into the name and email columns, the insert executes anyway. If I pass integer values for the name and email columns, it should throw an exception saying that the data types I am passing are wrong for those columns.
Can anyone help me?
CREATE TABLE people
(
id integer NOT NULL,
name text,
email text,
CONSTRAINT people_pkey PRIMARY KEY (id)
);
CREATE OR REPLACE FUNCTION test() RETURNS integer AS $$
BEGIN
  BEGIN
    INSERT INTO people(id, name, email) VALUES (1, 5, 6);
  EXCEPTION
    WHEN OTHERS THEN RETURN -1;
  END;
  RETURN 1;
END;
$$ LANGUAGE plpgsql;
select * from test();
select * from people;

This is the normal behavior, and it's not related to exception or error handling.
Assigning a numeric value to a text field in an SQL query works seamlessly, because PostgreSQL applies an implicit cast to the numeric literal (this also works with just about any datatype, since they all have a text representation through their I/O conversion routine).
This is tangentially mentioned in the doc for CREATE CAST:
It is normally not necessary to create casts between user-defined
types and the standard string types (text, varchar, and char(n), as
well as user-defined types that are defined to be in the string
category). PostgreSQL provides automatic I/O conversion casts for
that. The automatic casts to string types are treated as assignment
casts, while the automatic casts from string types are explicit-only.
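You can see the effect with pg_typeof. A quick check against the people table above (the id value is illustrative):
INSERT INTO people(id, name, email) VALUES (2, 5, 6);
SELECT name, pg_typeof(name) AS name_type,
       email, pg_typeof(email) AS email_type
FROM people
WHERE id = 2;
-- name | name_type | email | email_type
-- -----+-----------+-------+-----------
-- 5    | text      | 6     | text
The integer literals are assignment-cast on the way in and stored as the text strings '5' and '6', so no error is raised.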

Related

Create expression is not immutable

Using the below, I get ERROR: generation expression is not immutable. Why? I've read the docs, and most talk about concat being an issue, but I'm not using that anywhere, so where's my issue?
CREATE TABLE public.source
(
width integer NOT NULL,
sha1 uuid NOT NULL,
height integer NOT NULL,
lastupdated date GENERATED ALWAYS AS (current_timestamp) STORED,
PRIMARY KEY (sha1)
);
ALTER TABLE public.source
OWNER to postgres;
You need to use an immutable expression (i.e. one that always produces the same answer given the same inputs) as current_timestamp produces different answers on subsequent calls.
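If you only need the value set at insert time, one workaround (a sketch, not the only option) is a plain DEFAULT instead of a generated column, since defaults have no immutability requirement; keeping the value current on UPDATE would need a trigger instead:
CREATE TABLE public.source
(
width integer NOT NULL,
sha1 uuid NOT NULL,
height integer NOT NULL,
-- DEFAULT is evaluated once per insert, so a non-immutable expression is fine here
lastupdated date DEFAULT current_date,
PRIMARY KEY (sha1)
);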

RavenDB Sql Replication and Postgres uuid

I have set up Sql Replication using Postgres/Npgsql.
We are using Guids for ids in RavenDB.
Everything is working fine as long as my id column in Postgres is of type varchar, but if I set it to uuid, which should be the correct type to match Guid, it fails.
It also fails for columns other than id.
Postgres log gives me:
operator does not exist: uuid = text at character 34
HINT: No operator matches the given name and argument type(s). You might need to add explicit type casts.
Postgres schema looks like this:
CREATE TABLE public.materiels
(
id uuid NOT NULL,
type character varying(50),
nummer integer,
...
CONSTRAINT materiels_pkey PRIMARY KEY (id)
)
Replacing first line with
id character varying(50) NOT NULL
will make it work.
If I set the replication up to use MSSql, it works using MSSql's uniqueidentifier data type.
If you want to compare UUID with TEXT, then you need to create operators for that. The one solving your error would look like this:
CREATE FUNCTION uuid_equal_text(uuid, text)
RETURNS boolean
LANGUAGE SQL IMMUTABLE
AS $body$
  SELECT $1 = $2::uuid
$body$;
CREATE OPERATOR = (
  PROCEDURE = uuid_equal_text,
  LEFTARG = uuid,
  RIGHTARG = text
);
EDIT: An alternate solution, suggested by the author of the question himself:
CREATE CAST (text AS uuid)
WITH INOUT
AS IMPLICIT;
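With either the operator or the implicit cast in place, a text value compared to a uuid now resolves instead of raising the error. A quick check (the UUID literal is made up):
SELECT 'a0eebc99-9c0b-4ef8-bb6d-6bb9bd380a11'::uuid
     = 'a0eebc99-9c0b-4ef8-bb6d-6bb9bd380a11'::text;  -- returns true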

PostgreSQL - return n-sized varchar from function

As I found in the documentation:
Parenthesized type modifiers (e.g., the precision field for type
numeric) are discarded by CREATE FUNCTION
Are there any alternatives to return a varchar(N) type from a plpgsql function?
Question update:
In the screenshot (not reproduced here), the Name column is recognised as varchar(128), while the Number column is recognised as an unsized varchar.
The f_concat function returns cast(res as varchar(255));
You can preserve the type modifier for a function result by creating a domain. Postgres will use the underlying varchar(N) type when sending column descriptions to your client:
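A minimal sketch of that approach (the f_concat signature and the domain name are assumed, since the question doesn't show them):
CREATE DOMAIN varchar_255 AS varchar(255);
-- Returning the domain instead of plain varchar keeps the type modifier
CREATE OR REPLACE FUNCTION f_concat(a text, b text)
RETURNS varchar_255 AS $$
  SELECT (a || b)::varchar_255;
$$ LANGUAGE sql;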

Getting error for auto increment fields when inserting records without specifying columns

We're in the process of converting over from SQL Server to Postgres. I have a scenario that I am trying to accommodate. It involves inserting records from one table into another, WITHOUT listing out all of the columns. I realize this is not recommended practice, but let's set that aside for now.
drop table if exists pk_test_table;
create table public.pk_test_table
(
recordid SERIAL PRIMARY KEY NOT NULL,
name text
);
--example 1: works and will insert a record with an id of 1
insert into pk_test_table values(default,'puppies');
--example 2: fails
insert into pk_test_table
select first_name from person_test;
Error I receive in the second example:
column "recordid" is of type integer but expression is of type
character varying Hint: You will need to rewrite or cast the
expression.
The default keyword will tell the database to grab the next value.
Is there any way to utilize this keyword in the second example? Or is there some way to tell the database to ignore auto-incremented columns and just let them be populated as normal?
I would prefer to not use a subquery to grab the next "id".
This functionality works in SQL Server and hence the question.
Thanks in advance for your help!
If you can't list column names, you should instead use the DEFAULT keyword, as you've done in the simple insert example. This won't work with an insert into ... select ..., though.
For that, you need to invoke nextval. A subquery is not required, just:
insert into pk_test_table
select nextval('pk_test_table_recordid_seq'), first_name from person_test;
You do need to know the sequence name. You could get that from information_schema based on the table name and inferring its primary key, using a function that takes just the table name as an argument. It'd be ugly, but it'd work. I don't think there's any way around needing to know the table name.
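That said, pg_get_serial_sequence can look the sequence name up for you from the table and column names, so you don't have to hard-code it:
insert into pk_test_table
select nextval(pg_get_serial_sequence('pk_test_table', 'recordid')), first_name
from person_test;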
Your SELECT supplies a value for the first column (recordid), but it needs to go into the second column (name).
Therefore you can use the INSERT INTO table(field) VALUES(value) syntax.
Since you need to fetch the values from another table, you replace VALUES with the subquery:
insert into pk_test_table(name)
select first_name from person_test;
I hope this helps.
I do it this way via a separate function, though I think I'm really getting around the issue by giving the table DEFAULT settings on a per-field basis.
-- The sequence must exist before it can be referenced in a DEFAULT:
CREATE SEQUENCE pk_test_table_recordid_seq;
create table public.pk_test_table
(
recordid integer NOT NULL DEFAULT nextval('pk_test_table_recordid_seq'),
name text,
field3 integer NOT NULL DEFAULT 64,
null_field_if_not_set integer,
CONSTRAINT pk_test_table_pkey PRIMARY KEY (recordid)
);
With function:
CREATE OR REPLACE FUNCTION func_pk_test_table() RETURNS void AS
$BODY$
INSERT INTO pk_test_table (name)
SELECT first_name FROM person_test;
$BODY$
LANGUAGE sql VOLATILE;
Then just execute the function via SELECT func_pk_test_table();
Notice the insert doesn't have to specify all of the fields, as long as the constraints allow it.

Postgres Alter table to convert column type from char to bigint [duplicate]

Possible Duplicate:
how to change column datatype from character to numeric in postgresql 8.4
If I have a field of type varchar (and all the values are null or string representations of numbers) how do I use alter table to convert this column type to bigint?
To convert simply by parsing the string (casting):
alter table the_table alter column the_column type bigint using the_column::bigint;
In fact, you can use any expression in terms of the_column instead of the_column::bigint to customise the conversion.
Note this will rewrite the table, locking out even readers until it's done.
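The using expression can also clean the data up on the way through. For example (the trimming is illustrative, in case values carry stray whitespace or empty strings):
alter table the_table alter column the_column type bigint
using nullif(trim(the_column), '')::bigint;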
You could create a temporary column of type bigint, and then execute SQL like
UPDATE my_table SET bigint_column=varchar_column::bigint;
Then drop your varchar_column and rename bigint_column. This is kinda roundabout, but will not require a custom cast in postgres.
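Spelled out, that approach looks something like this (table and column names follow the UPDATE above):
ALTER TABLE my_table ADD COLUMN bigint_column bigint;
UPDATE my_table SET bigint_column = varchar_column::bigint;
ALTER TABLE my_table DROP COLUMN varchar_column;
ALTER TABLE my_table RENAME COLUMN bigint_column TO varchar_column;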
Design your own custom cast from string to bigint. Something like this:
CREATE OR REPLACE FUNCTION convert_to_bigint(v_input text)
RETURNS BIGINT AS $$
DECLARE
  v_bigint_value BIGINT DEFAULT NULL;
BEGIN
  BEGIN
    v_bigint_value := v_input::BIGINT;
  EXCEPTION WHEN OTHERS THEN
    RAISE NOTICE 'Invalid bigint value: "%". Returning 0 instead.', v_input;
    RETURN 0;
  END;
  RETURN v_bigint_value;
END;
$$ LANGUAGE plpgsql;
Then create a new table fixed_table_with_bigint with the same parameters as the old table, except changing the string column into a bigint column.
Then insert all the rows from the previous table (using the custom cast convert_to_bigint) into the new table:
insert into fixed_table_with_bigint
select mycolumn1,
convert_to_bigint(your_string_bigint_column),
mycolumn3
from incorrect_table;
You may have to modify convert_to_bigint to handle strings which are not numbers, blank strings, NULLs, control characters, and other weirdness.
Then drop the first table and rename the second table to take its place.
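The final swap could look like this (assuming the table names used above):
DROP TABLE incorrect_table;
ALTER TABLE fixed_table_with_bigint RENAME TO incorrect_table;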