Using PostgreSQL CITEXT Extension with jOOQ

The Postgres CITEXT extension helps with case-insensitive data, which can be useful, for example, when working with email addresses. I've defined the following table:
CREATE EXTENSION citext;
CREATE TABLE "user" (        -- "user" is a reserved word in PostgreSQL, so it must be quoted
    user_id  INTEGER PRIMARY KEY,
    email    CITEXT NOT NULL UNIQUE,
    password TEXT NOT NULL,
    salt     TEXT NOT NULL
);
and added the following to the <database> section in the pom.xml:
<forcedTypes>
  <forcedType>
    <name>CLOB</name>
    <expression>public.user.email</expression>
    <types>CITEXT</types>
  </forcedType>
</forcedTypes>
When I run the code generator, the fields do get generated, but there are a lot of "missing name" warnings in the log output. For example:
[INFO] Generating routine : CitextLt.java
[WARNING] Missing name : Object citext_ne holds a column without a name at position 1
Am I on the right track integrating the CITEXT extension with jOOQ?
If so, how do I provide these missing names?

There are two issues in this question:
Logging
The WARN level is perhaps a bit excessive. I've registered an issue to revert that to INFO: https://github.com/jOOQ/jOOQ/issues/5385
You don't have to worry about those warnings. PostgreSQL supports declaring stored procedures whose parameters are unnamed and can be referenced only by parameter index / position. jOOQ's code generator only indicates that this is "unusual" and that a synthetic parameter name is generated.
This should not affect your using CITEXT with jOOQ.
Your forced type configuration
There is currently a bug that prevents you from matching user-defined types with <types/>: http://github.com/jOOQ/jOOQ/issues/5363
Just remove your <types/> element and it will work; see the corrected configuration below.
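For reference, this is the question's configuration with only the <types/> element removed (everything else unchanged):

<forcedTypes>
  <forcedType>
    <name>CLOB</name>
    <expression>public.user.email</expression>
  </forcedType>
</forcedTypes>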

Related

How to automatically generate new UUID in PostgreSQL?

I'm using PostgreSQL version 14.4. I installed the uuid-ossp extension.
I created a table like this:
CREATE TABLE reserved_words (
    id   uuid NOT NULL DEFAULT uuid_generate_v1(),
    word varchar(20) NOT NULL
);
Unfortunately, when I try adding a new record, rather than a new UUID being generated, the literal string "uuid_generate_v1()" shows up as the id!
I've scoured the Internet but can't find out how to alter things so that the function itself is executed. Any ideas?
My apologies, it actually does work. What's happening is that DBeaver, the DB client I use, at first shows the UUID generation function as the value, but when you save the new record it generates the UUID correctly.
Note: I don't really understand the difference between uuid_generate_v1 and uuid_generate_v4, but I'm going to opt for the latter.
uuid_generate_v1 () → uuid
Generates a version 1 UUID. This involves the MAC address of the computer and a time stamp. Note that UUIDs of this kind reveal the identity of the computer that created the identifier and the time at which it did so, which might make it unsuitable for certain security-sensitive applications.
uuid_generate_v4 () → uuid
Generates a version 4 UUID, which is derived entirely from random numbers.
(Source: PostgreSQL documentation)
The foremost point is that the column's data type should be uuid.
The 'uuid-ossp' extension offers functions to generate UUID values.
To add the extension to the database, run the following command:
CREATE EXTENSION "uuid-ossp";
Alternatively, you can use the core function gen_random_uuid() (built into PostgreSQL since version 13) to generate version 4 UUIDs.
To make use of this function in DBeaver, follow these steps:
1. Go to the table in which you want to generate UUIDs.
2. In the table properties tab, find the column to which you need to apply the UUID function.
3. Double-click the column name to expand its properties.
4. Under that column's default value, enter the function call: gen_random_uuid()
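If you prefer plain SQL over the DBeaver UI, the same default can be set directly. A minimal sketch, reusing the table from the question:

-- Set the default when creating the table:
CREATE TABLE reserved_words (
    id   uuid NOT NULL DEFAULT gen_random_uuid(),
    word varchar(20) NOT NULL
);

-- Or add the default to an existing column:
ALTER TABLE reserved_words
    ALTER COLUMN id SET DEFAULT gen_random_uuid();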

Migrating an AnyDAC app to FireDAC fails on the AutoInc fields

I have migrated an AnyDAC app to FireDAC and I can't get its autoinc fields to work.
The ID field (primary key) has been defined in PostgreSQL as BIGSERIAL, defaulting to nextval('llistapanelspuzzle_id_seq'::regclass), so the server sets its values automatically.
The column was recognized by AnyDAC as a TAutoIncField and worked correctly. But when I now open that table with FireDAC, it fails, saying that the field found is a TLargeIntField. I changed the persistent field to a TLargeIntField, but now when inserting records in Delphi I don't get the new values from the server: the dataset is left with a 0 value, and when I add a second record it raises a key violation (two records with a 0 value in the primary key).
Do you know how to define autoinc fields on FireDAC with PostgreSQL when they are recognized as LargeInt fields?
Update: I have added ID to the UpdateOptions.AutoIncFields, but it doesn't seem to have changed anything.
Thank you.
It looks like you have to activate the ExtendedMetadata parameter on the FDConnection in order for FireDAC to automatically recognize PostgreSQL autoinc columns.
Now it works correctly.

Npgsql.PostgresException: Column cannot be cast automatically to type bytea

Using EF Core with PostgreSQL, I have an entity with a field of type byte but decided to change it to type byte[]. But when I ran migrations, applying the generated migration file threw the following exception:
Npgsql.PostgresException (0x80004005): 42804: column "Logo" cannot be
cast automatically to type bytea
I have searched the internet for a solution, but all I found were similar problems with other data types, not byte arrays. Please help.
The error says exactly what is happening... In some cases PostgreSQL allows column type changes (e.g. int -> bigint), but in many cases where such a change is non-trivial or potentially destructive, it refuses to do so automatically. In this specific case, this happens because Npgsql maps your CLR byte field to PostgreSQL smallint (a 2-byte type), since PostgreSQL lacks a 1-byte data type. So PostgreSQL refuses to cast from smallint to bytea, which makes sense.
However, you can still do a migration by writing the data conversion yourself, from smallint to bytea. To do so, edit the generated migration, find the ALTER COLUMN ... ALTER TYPE statement and add a USING clause. As the PostgreSQL docs say, this allows you to provide the new value for the column based on the existing column (or even other columns). Specifically for converting an int (or smallint) to a bytea, use the following:
ALTER TABLE tab ALTER COLUMN col TYPE BYTEA USING set_byte(E'0', 0, col);  -- set_byte() writes the smallint value into byte 0
If your existing column happens to contain a value wider than a single byte (which should not be an issue for you), it will get truncated. Obviously, test the data coming out of this conversion carefully.
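A quick way to sanity-check the conversion on a scratch table (the table and column names here are made up for the demonstration):

CREATE TABLE demo (logo smallint);
INSERT INTO demo VALUES (65);

-- Same USING expression as above: writes the smallint value into byte 0
ALTER TABLE demo ALTER COLUMN logo TYPE bytea USING set_byte(E'0', 0, logo);

SELECT logo FROM demo;  -- \x41 (65 = 0x41)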

Creating a "table of tables" in PostgreSQL or achieving similar functionality?

I'm just getting started with PostgreSQL, and I'm new to database design.
I'm writing software in which I have various plugins that update a database. Each plugin periodically updates its own designated table in the database. So a plugin named 'KeyboardPlugin' will update the 'KeyboardTable', and 'MousePlugin' will update the 'MouseTable'. I'd like for my database to store these 'plugin-table' relationships while enforcing referential integrity. So ideally, I'd like a configuration table with the following columns:
Plugin-Name (type 'text')
Table-Name (type ?)
My software will read from this configuration table to help the plugins determine which table to update. Originally, my idea was to have the second column (Table-Name) be of type 'text'. But then, if someone mistypes the table name, or an existing relationship becomes invalid because of someone deleting a table, we have problems. I'd like for the 'Table-Name' column to act as a reference to another table, while enforcing referential integrity.
What is the best way to do this in PostgreSQL? Feel free to suggest an entirely new way to setup my database, different from what I'm currently exploring. Also, if it helps you answer my question, I'm using the pgAdmin tool to setup my database.
I appreciate your help.
I would go with your original plan to store the name as text, possibly enhanced by additionally storing the schema name:

CREATE TABLE addin_registry (   -- table name is just an example
    addin text
  , sch   text
  , tbl   text
);
Tables have an OID in the system catalog (pg_catalog.pg_class). You can get those with a nifty special cast:
SELECT 'myschema.mytable'::regclass
But the OID can change over a dump/restore cycle. So just store the names as text and verify that the table exists by casting the stored name at application time, as demonstrated above.
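On PostgreSQL 9.4 or later, to_regclass() is a gentler alternative: it returns NULL instead of raising an error when the table is missing. A sketch against the example registry table above:

-- Returns the table's OID if it exists, NULL otherwise:
SELECT to_regclass(format('%I.%I', sch, tbl)) AS table_oid
FROM   addin_registry
WHERE  addin = 'KeyboardPlugin';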
Of course, if you use each table for multiple addins it might pay to make a separate table:

CREATE TABLE tbl (
    tbl_id serial PRIMARY KEY
  , sch    text
  , name   text
);
and reference it in ...
CREATE TABLE addin (
    addin_id serial PRIMARY KEY
  , addin    text
  , tbl_id   integer REFERENCES tbl(tbl_id) ON UPDATE CASCADE ON DELETE CASCADE
);
Or even make it an n:m relationship if addins have multiple tables. But be aware, as @OMG_Ponies commented, that a setup like this will require you to execute a lot of dynamic SQL, because you don't know the identifiers beforehand; see the sketch below.
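A minimal sketch of that dynamic SQL (the table name and values are assumptions for the demonstration):

-- Identifiers are only known at run time, so the statement is
-- assembled with format(); %I quotes identifiers safely:
CREATE TABLE "KeyboardTable" (id int);   -- demo target table

DO $$
DECLARE
    _sch text := 'public';          -- would come from the registry table
    _tbl text := 'KeyboardTable';
    _cnt bigint;
BEGIN
    EXECUTE format('SELECT count(*) FROM %I.%I', _sch, _tbl) INTO _cnt;
    RAISE NOTICE 'rows in %.%: %', _sch, _tbl, _cnt;
END
$$;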
I guess all plugins have a set of basic attributes and then each plugin will have a set of plugin-specific attributes. If this is the case you can use a single table together with the hstore datatype (a standard extension that just needs to be installed).
Something like this:
CREATE TABLE plugins
(
    plugin_name           text not null primary key,
    common_int_attribute  integer not null,
    common_text_attribute text not null,
    plugin_attributes     hstore
);
Then you can do something like this:
INSERT INTO plugins
    (plugin_name, common_int_attribute, common_text_attribute, plugin_attributes)
VALUES
    ('plugin_1', 42, 'foobar', 'some_key => "the fish", other_key => 24'),
    ('plugin_2', 100, 'foobar', 'weird_key => 12345, more_info => "10.2.4"');
This creates two plugins named plugin_1 and plugin_2. plugin_1 has the additional attributes some_key and other_key, while plugin_2 stores the keys weird_key and more_info.
You can index those hstore columns and query them very efficiently.
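A GIN index is the typical choice here (a sketch; the index name is an assumption):

-- Supports the hstore operators ?, ?| and ?& as well as @>:
CREATE INDEX plugins_attributes_gin ON plugins USING gin (plugin_attributes);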
The following will select all plugins that have a key "weird_key" defined.
SELECT *
FROM plugins
WHERE plugin_attributes ? 'weird_key';
The following statement will select all plugins that have the key some_key with the value the fish:
SELECT *
FROM plugins
WHERE plugin_attributes @> 'some_key => "the fish"';
Much more convenient than using an EAV model in my opinion (and most probably a lot faster as well).
The only drawback is that you lose type-safety with this approach (but usually you'd lose that with the EAV concept as well).
You don't need an application catalog. Just add the application name to the keys of the table. This of course assumes that all the tables have the same structure. If not: use the application name for a table name, or, as others have suggested, as a schema name (which would also allow multiple tables per application).
EDIT:
But the real issue is of course that you should first model your data, and then build the applications that manipulate it. The data should not serve the code; the code should serve the data.

How to get the key fields for a table in plpgsql function?

I need to make a function that would be triggered after every UPDATE and INSERT operation and would check the key fields of the table that the operation is performed on vs some conditions.
The function (and the trigger) needs to be universal; it shouldn't have the table name or field names hardcoded.
I got stuck on the part where I need to access the table name and its schema, and check which fields are part of the PRIMARY KEY.
After getting the primary key info as already posted in the first answer, you can check the code at http://github.com/fgp/pg_record_inspect to get record field values dynamically in PL/pgSQL.
Have a look at How do I get the primary key(s) of a table from Postgres via plpgsql? The answer in that one should be able to help you.
Note that you can't address record fields dynamically in plain PL/pgSQL; it's too strongly typed a language for that. You'll have more luck with PL/Perl, in which you can access a hash of the columns and use ordinary Perl accessors to check them. (PL/Python would also work, but sadly it's an untrusted language only. PL/Tcl works too.)
In 8.4 you can use EXECUTE 'something' USING NEW, which in some cases is able to do the job; a sketch of the catalog lookup follows.
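For the catalog-lookup part, here is a minimal sketch of a generic trigger function that finds the primary-key columns of whatever table it fired on (the function name and the RAISE NOTICE are placeholders for your real checks):

CREATE OR REPLACE FUNCTION check_key_fields() RETURNS trigger AS
$$
DECLARE
    pk_col text;
BEGIN
    -- TG_RELID is the OID of the table the trigger fired on,
    -- so no table name has to be hardcoded:
    FOR pk_col IN
        SELECT a.attname
        FROM   pg_index i
        JOIN   pg_attribute a ON a.attrelid = i.indrelid
                             AND a.attnum   = ANY (i.indkey)
        WHERE  i.indrelid = TG_RELID
        AND    i.indisprimary
    LOOP
        RAISE NOTICE 'primary key column of %.%: %',
                     TG_TABLE_SCHEMA, TG_TABLE_NAME, pk_col;
        -- perform your condition checks here,
        -- e.g. via EXECUTE ... USING NEW as mentioned above
    END LOOP;
    RETURN NEW;
END
$$ LANGUAGE plpgsql;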