RavenDB Sql Replication and Postgres uuid - postgresql

I have set up Sql Replication using Postgres/Npgsql.
We are using Guids for ids in RavenDB.
Everything works fine as long as my id column in Postgres is of type varchar, but if I set it to uuid, which should be the correct type to match Guid, it fails.
It also fails for uuid columns other than id.
Postgres log gives me:
operator does not exist: uuid = text at character 34
HINT: No operator matches the given name and argument type(s). You might need to add explicit type casts.
Postgres schema looks like this:
CREATE TABLE public.materiels
(
    id uuid NOT NULL,
    type character varying(50),
    nummer integer,
    ...
    CONSTRAINT materiels_pkey PRIMARY KEY (id)
)
Replacing first line with
id character varying(50) NOT NULL
will make it work.
If I set the replication up to use MSSql it works using MSSql's uniqueidentifier data type.

If you want to compare UUID with TEXT, then you need to create operators for that. The one solving your error would look like this:
CREATE FUNCTION uuid_equal_text(uuid, text)
    RETURNS boolean
    LANGUAGE sql IMMUTABLE
AS $body$
    SELECT $1 = $2::uuid
$body$;

CREATE OPERATOR = (
    PROCEDURE = uuid_equal_text,
    LEFTARG = uuid,
    RIGHTARG = text
);
EDIT: An alternative solution, suggested by the author of the question himself:
CREATE CAST (text AS uuid)
WITH INOUT
AS IMPLICIT;
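The error's HINT points at a third option that needs no server-side objects: if you control the generated SQL, cast the text parameter explicitly in the query itself. A minimal sketch (table and parameter names assumed):

```sql
-- Fails when the parameter arrives as text:
--   SELECT * FROM materiels WHERE id = $1;
-- An explicit cast resolves the operator lookup:
SELECT * FROM materiels WHERE id = $1::uuid;
```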

Related

Create expression is not immutable

Using the SQL below I get "ERROR: generation expression is not immutable". Why? I've read the docs, and most answers mention concat being an issue, but I'm not using that anywhere, so where's my issue?
CREATE TABLE public.source
(
    width integer NOT NULL,
    sha1 uuid NOT NULL,
    height integer NOT NULL,
    lastupdated date GENERATED ALWAYS AS (current_timestamp) STORED,
    PRIMARY KEY (sha1)
);
ALTER TABLE public.source
    OWNER TO postgres;
You need to use an immutable expression (i.e. one that always produces the same answer given the same inputs); current_timestamp produces a different answer on each call, so it cannot be used in a generated column.
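In this case the simplest fix is probably to drop the generated column in favour of a plain DEFAULT, which is allowed to be volatile. A sketch of the corrected definition (note a DEFAULT is applied on INSERT only; tracking updates as well would need a BEFORE UPDATE trigger instead):

```sql
CREATE TABLE public.source
(
    width integer NOT NULL,
    sha1 uuid NOT NULL,
    height integer NOT NULL,
    -- a DEFAULT, unlike a generated column, may use a volatile expression
    lastupdated date DEFAULT current_date,
    PRIMARY KEY (sha1)
);
```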

How to correctly associate an id generator sequence with a table

I'm using Grails 3.0.7 and Postgres 9.2. I'm very new to Postgres, so this may be a dumb question. How do I correctly associate an id generator sequence with a table? I read somewhere that if you create a table with an id column that has a serial datatype, then it will automatically create a sequence for that table.
However, the column seems to be created with a type of bigint. How do I get Grails to create the column with a bigserial datatype, and will this even solve my problem? What if I want one sequence per table? I'm just not sure how to go about setting this up because I've never really used Postgres in the past.
You can define a generator in a domain class like this:
static mapping = {
    id generator: 'sequence', params: [sequence: 'domain_sq']
}
If the sequence is already present in the database then you'll need to name it in the params.
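If it is not present yet, creating it on the Postgres side is a one-liner (the name domain_sq is just the one assumed in the mapping above):

```sql
CREATE SEQUENCE domain_sq;
```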
There are other properties also available as outlined in the documentation, for example:
static mapping = {
    id column: 'book_id', type: 'integer'
}
In Postgres 10 or later consider an IDENTITY column instead. See:
Auto increment table column
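A sketch of such an identity column, assuming a hypothetical table name (requires Postgres 10 or later):

```sql
CREATE TABLE my_domain (
    -- creates and owns a backing sequence, like bigserial,
    -- but uses the SQL-standard syntax
    id bigint GENERATED BY DEFAULT AS IDENTITY PRIMARY KEY,
    name text
);
```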
However, the column seems to be created with a type of bigint. How do
I get Grails to create the column with a bigserial datatype, and will
this even solve my problem?
That's expected behavior. Define the column as bigserial; that's all you have to do. The Postgres pseudo data types smallserial, serial and bigserial create a smallint, int or bigint column respectively, and attach a dedicated sequence. The manual:
The data types smallserial, serial and bigserial are not true types,
but merely a notational convenience for creating unique identifier
columns (similar to the AUTO_INCREMENT property supported by some
other databases). In the current implementation, specifying:
CREATE TABLE tablename (
colname SERIAL
);
is equivalent to specifying:
CREATE SEQUENCE tablename_colname_seq;
CREATE TABLE tablename (
colname integer NOT NULL DEFAULT nextval('tablename_colname_seq')
);
ALTER SEQUENCE tablename_colname_seq OWNED BY tablename.colname;
Big quote, I couldn't describe it any better than the manual.
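Applied to the question, the whole setup therefore reduces to one line (table name assumed):

```sql
CREATE TABLE my_domain (
    id bigserial PRIMARY KEY  -- bigint column + owned sequence + default
);
```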
Related:
Get table and column "owning" a sequence
Safely rename tables using serial primary key columns

PostgreSQL bigserial & nextval

I've got a PostgreSQL 9.4.3 server set up. Previously I was only using the public schema, and, for example, I created a table like this:
CREATE TABLE ma_accessed_by_members_tracking (
    reference bigserial NOT NULL,
    ma_reference bigint NOT NULL,
    membership_reference bigint NOT NULL,
    date_accessed timestamp without time zone,
    points_awarded bigint NOT NULL
);
Using the Windows program pgAdmin III, I can see it created the proper information and sequence.
However, I've recently added another schema called "test" to the same database and created the exact same table, just like before.
This time, however, I see:
CREATE TABLE test.ma_accessed_by_members_tracking
(
    reference bigint NOT NULL DEFAULT nextval('ma_accessed_by_members_tracking_reference_seq'::regclass),
    ma_reference bigint NOT NULL,
    membership_reference bigint NOT NULL,
    date_accessed timestamp without time zone,
    points_awarded bigint NOT NULL
);
My question / curiosity is why in the public schema the reference column shows bigserial, but in the test schema it shows bigint with a nextval() default?
Both work as expected; I just don't understand why the difference in schemas would produce different table definitions. I realize that bigint and bigserial allow the same range of values.
Merely A Notational Convenience
According to the documentation on Serial Types, smallserial, serial, and bigserial are not true data types. Rather, they are a notational convenience that creates, in one step, both a sequence and a column whose default value points to that sequence.
I created a test table in the public schema. The psql command \d shows a bigint column type. Maybe it's PgAdmin behavior?
Update
I checked the PgAdmin source code. The function pgColumn::GetDefinition() scans the pg_depend table for an auto dependency, and when it finds one it replaces bigint with bigserial to reconstruct the original CREATE TABLE code.
When you create a serial column in the standard way:
CREATE TABLE new_table (
    new_id serial
);
Postgres creates a sequence with commands:
CREATE SEQUENCE new_table_new_id_seq ...
ALTER SEQUENCE new_table_new_id_seq OWNED BY new_table.new_id;
From the documentation: The OWNED BY option causes the sequence to be associated with a specific table column, such that if that column (or its whole table) is dropped, the sequence will be automatically dropped as well.
The standard name of a sequence is built from the table name, the column name, and the suffix _seq.
If a serial column was created in this way, PgAdmin shows its type as serial.
If a sequence has a non-standard name or is not associated with a column, PgAdmin shows nextval() as the default value.
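You can check the association yourself with pg_get_serial_sequence(), which returns the sequence owned by a column, or NULL if there is none:

```sql
-- Returns the name of the owned sequence, or NULL if the column
-- has no OWNED BY sequence attached
SELECT pg_get_serial_sequence('test.ma_accessed_by_members_tracking',
                              'reference');
```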

postgresql exception catching or error handling in postgresql

In the code below I have created a sample table and written a stored procedure for exception handling. The problem is that if I insert integer values into the name and email columns, the insert still executes. If I pass integer values for the name and email columns, it should throw an exception saying that the data types I'm passing are wrong for those columns.
Can anyone help me?
CREATE TABLE people
(
    id integer NOT NULL,
    name text,
    email text,
    CONSTRAINT people_pkey PRIMARY KEY (id)
);
CREATE OR REPLACE FUNCTION test() RETURNS integer AS '
BEGIN
    BEGIN
        INSERT INTO people(id, name, email) VALUES (1, 5, 6);
    EXCEPTION
        WHEN OTHERS THEN RETURN -1;
    END;
    RETURN 1;
END' LANGUAGE plpgsql;

select * from test();
select * from people;
This is the normal behavior, and it's not related to exception or error handling.
Assigning a numeric value to a text field in an SQL query works seamlessly, because PostgreSQL applies an implicit cast to the numeric literal (this also works with just about any datatype, since they all have a text representation through their I/O conversion routine).
This is tangentially mentioned in the doc for CREATE CAST:
It is normally not necessary to create casts between user-defined
types and the standard string types (text, varchar, and char(n), as
well as user-defined types that are defined to be in the string
category). PostgreSQL provides automatic I/O conversion casts for
that. The automatic casts to string types are treated as assignment
casts, while the automatic casts from string types are explicit-only.
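You can see the assignment cast in action: the integer literals are silently converted to their text representations on INSERT, so the stored values really are text:

```sql
INSERT INTO people(id, name, email) VALUES (1, 5, 6);  -- succeeds
SELECT pg_typeof(name) FROM people WHERE id = 1;       -- text
```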

Postgres Alter table to convert column type from char to bigint [duplicate]

Possible Duplicate:
how to change column datatype from character to numeric in postgresql 8.4
If I have a field of type varchar (and all the values are null or string representations of numbers) how do I use alter table to convert this column type to bigint?
To convert simply by parsing the string (casting):
alter table the_table alter column the_column type bigint using the_column::bigint;
In fact, you can use any expression in terms of the_column instead of the_column::bigint to customise the conversion.
Note this will rewrite the table, locking out even readers until it's done.
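For example, if some of the values are empty strings rather than NULL, a custom USING expression (a sketch; adjust to your data) can clean them up during the conversion:

```sql
ALTER TABLE the_table
    ALTER COLUMN the_column TYPE bigint
    USING nullif(trim(the_column), '')::bigint;
```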
You could create a temporary column of type bigint, and then execute SQL like
UPDATE my_table SET bigint_column=varchar_column::bigint;
Then drop your varchar_column and rename bigint_column. This is somewhat roundabout, but does not require a custom cast in Postgres.
How to convert a string column type to numeric or bigint in postgresql
Design your own custom cast from string to bigint. Something like this:
CREATE OR REPLACE FUNCTION convert_to_bigint(v_input text)
RETURNS bigint AS $$
DECLARE
    v_bigint_value bigint DEFAULT NULL;
BEGIN
    BEGIN
        v_bigint_value := v_input::bigint;
    EXCEPTION WHEN OTHERS THEN
        RAISE NOTICE 'Invalid bigint value: "%". Returning 0 instead.', v_input;
        RETURN 0;
    END;
    RETURN v_bigint_value;
END;
$$ LANGUAGE plpgsql;
Then create a new table fixed_table_with_bigint with the same parameters as the old table except change the string column into the bigint column.
Then insert all the rows from the previous table (using the custom cast convert_to_bigint) into the new table:
insert into fixed_table_with_bigint
select mycolumn1,
       convert_to_bigint(your_string_bigint_column),
       mycolumn3
from incorrect_table;
You may have to modify convert_to_bigint to handle strings which are not numbers, blank strings, NULLs, control characters and other weirdness.
Then drop the first table and rename the second table to the first table's name.
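The final swap might look like this (names taken from the statements above), done in one transaction so readers never see the table missing:

```sql
BEGIN;
DROP TABLE incorrect_table;
ALTER TABLE fixed_table_with_bigint RENAME TO incorrect_table;
COMMIT;
```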