Is it bad for columns in composite keys to have mismatching types? - postgresql

Problem:
I'd like to make a composite primary key from columns id and user_id for a postgres database table. Column user_id is a foreign key with an integer type, whereas id is a string. Will this cause a conflict because the types are different?
Edit: Also, are there combinations of types that would cause problems?
Context:
Obviously, I should match the type of the User.id field for its foreign key. Also, the id for my table will be derived from a UUID to prevent data leaks. So I would prefer not to change the type of either field I want in this table.
Research:
I am using sqlalchemy. Their documentation mentions how to create a composite primary key, but it doesn't discuss dealing with different types for each column.

No, this won't be a problem.
Your question seems to indicate that you think the values of the indexed columns are somehow concatenated and then stored in the index as a single value. That is not the case. Each column value is stored independently, side by side, similar to the way the column values are stored in the actual table row.
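For illustration, a minimal sketch of the described schema (the table names users and items are assumptions; the column types mirror the question):

CREATE TABLE users (
    id integer PRIMARY KEY
);

CREATE TABLE items (
    id      text    NOT NULL,                        -- derived from a UUID
    user_id integer NOT NULL REFERENCES users (id),  -- matches users.id's type
    PRIMARY KEY (id, user_id)                        -- mixed text/integer key is fine
);

Each key column keeps its own type in the index, so any combination of types that is valid for ordinary columns is also valid in a composite key.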

Related

PostgreSQL oxymoron

Hi all,
Can anyone understand what's going on here?
The case is:
There are 2 tables, called "matricula" and "pagament", with a 1:1 relationship cardinality.
Table matricula's primary key is composed of 3 fields: "edicio", "curs" and "estudiant".
Table pagament's primary key is the same as above. Furthermore, it references matricula.
As shown, trying to insert a row into the pagament table is rejected because a matching row does not exist in table matricula. However, querying for that row returns one result.
What am I missing?
Thank you all
Carles
The problem is that the order of the primary key fields in the two tables is not the same, and, moreover, the foreign key constraint in table pagament was declared as
foreign key (estudiant,curs,edicio) references matricula
without specifying the matricula columns. When no target columns are given, the foreign key implicitly references the primary key of matricula in its declared order (edicio, curs, estudiant), so the columns were matched up in the wrong order.
It was solved by declaring the constraint as
foreign key (estudiant,curs,edicio) references matricula(estudiant,curs,edicio)
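A minimal sketch reproducing the fix (the column types are assumed to be integer):

CREATE TABLE matricula (
    edicio    integer NOT NULL,
    curs      integer NOT NULL,
    estudiant integer NOT NULL,
    PRIMARY KEY (edicio, curs, estudiant)
);

CREATE TABLE pagament (
    estudiant integer NOT NULL,
    curs      integer NOT NULL,
    edicio    integer NOT NULL,
    PRIMARY KEY (estudiant, curs, edicio),
    -- naming the target columns explicitly avoids the implicit,
    -- order-sensitive match against matricula's primary key
    FOREIGN KEY (estudiant, curs, edicio)
        REFERENCES matricula (estudiant, curs, edicio)
);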

Postgres JSONB unique constraint

I have the following table:
create table person (
    firstname varchar,
    lastname varchar,
    person_info jsonb,
    ..
);
I already have a unique constraint on firstname + lastname. I recently identified that there is always something different in the person_info jsonb, so I want to uniquely identify rows by the person_info jsonb as well.
Should I add person_info to the unique constraint, making it firstname + lastname + person_info? Is there any performance impact with such an implementation? I have heard that JSONB is not good for indexing as the amount of data increases.
I am thinking of storing the person_info hash value in a separate field and making this new hash value field part of the unique index.
I would appreciate some help from an expert on this.
This seems like the wrong idea.
A primary key should be immutable and uniquely identify a table row.
Names are not good for that, because
different people can have the same name
names can change
This is probably why you are tempted to add additional information to truly identify each individual row.
Unless you have some immutable attribute that uniquely identifies each person (such as the social security number), you should generate an artificial primary key for the table:
ALTER TABLE person
ADD id bigint
GENERATED ALWAYS AS IDENTITY
PRIMARY KEY;
Indexing a jsonb is possible, but you will get problems with long values since index entries are limited in size, and you will get an error if you exceed the limit.
I recommend that any attribute that you might want to index is not stored in a jsonb, but as a regular table column.
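For example, a sketch of pulling an attribute out of the jsonb into a regular, indexable column (the attribute name email is a hypothetical illustration):

-- promote the attribute to a real column and index that instead
ALTER TABLE person ADD email text;
UPDATE person SET email = person_info ->> 'email';
CREATE UNIQUE INDEX person_email_ukey ON person (email);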
JSONB indexing IMHO refers to the ability to index fields inside the binary JSON rather than the whole document. Be aware also that key ordering is not preserved! So if you hash the raw JSON text, you can obtain two different hashes for two documents with exactly the same data but a different key order. Instead, if you can find which JSON fields give you uniqueness, you can index those fields directly.
Try also to look at this page
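For instance, a sketch of a unique expression index on a single jsonb field (the field name national_id is hypothetical):

-- note the double parentheses required around an index expression
CREATE UNIQUE INDEX person_info_national_id_ukey
    ON person ((person_info ->> 'national_id'));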

How to make ARRAY field with foreign key constraint in SQLAlchemy?

How can I make a column with the ARRAY(Integer) type, where each integer is a primary key in some other table? If that's impossible, how can I achieve a similar table relationship with another method?
As of PostgreSQL 9.3, this is not implemented; see
http://blog.2ndquadrant.com/postgresql-9-3-development-array-element-foreign-keys/
Instead, you should turn the array into another table.
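A minimal sketch of that workaround, assuming hypothetical item and tag tables where item used to carry an ARRAY(Integer) of tag ids:

CREATE TABLE item (
    id integer PRIMARY KEY
);

CREATE TABLE tag (
    id integer PRIMARY KEY
);

-- the junction table replaces the array column; every tag_id
-- is now covered by a real foreign key constraint
CREATE TABLE item_tag (
    item_id integer NOT NULL REFERENCES item (id),
    tag_id  integer NOT NULL REFERENCES tag (id),
    PRIMARY KEY (item_id, tag_id)
);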

Composite key with user-supplied string column, foreign keys

Let's say I have the following table
TABLE subgroups (
group_id t_group_id NOT NULL REFERENCES groups(group_id),
subgroup_name t_subgroup_name NOT NULL,
more attributes ...
)
subgroup_name is UNIQUE within a group (group_id).
A group can have many subgroups.
The subgroup_names are user-supplied. (I would like to avoid using a subgroup_id column. subgroup_name has meaning in the model and is more than just a label; I am providing a list of predetermined names but allow users to add their own for flexibility.)
This table has 2 levels of referencing child tables containing subgroup attributes (with many-to-one relations).
I would like to have a PRIMARY KEY on (group_id, upper(trim(subgroup_name)));
From what I know, Postgres doesn't allow PRIMARY KEY/UNIQUE constraints on the output of a function.
IIRC, the relational model also requires columns to be used as they are stored.
CREATE UNIQUE INDEX ON subgroups (group_id, upper(trim(subgroup_name))); doesn't solve my problem
as other tables in my model will have FOREIGN KEYs pointing to those two columns.
I see two options.
Option A)
Store a cleaned-up subgroup name in subgroup_name
Add an extra column called subgroup_name_raw that would contain the uncleaned string
Option B)
Create both a UNIQUE INDEX and a PRIMARY KEY on my key pair (seems like a huge waste).
Any insights?
Note: I'm using Postgres 9.2
Actually, you can enforce UNIQUE on the output of a function. You can't do it in the table definition, though; what you need to do is create a unique index afterwards. So something like:
CREATE UNIQUE INDEX subgroups_ukey2 ON subgroups(group_id, upper(trim(subgroup_name)));
PostgreSQL has a number of absolutely amazing indexing capabilities, and the ability to create unique (and partial unique) indexes on function output is quite underrated.
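With that index in place, inserts that differ only in case or surrounding whitespace collide (the values below are illustrative):

INSERT INTO subgroups (group_id, subgroup_name) VALUES (1, 'Alpha');
INSERT INTO subgroups (group_id, subgroup_name) VALUES (1, '  alpha ');
-- ERROR: duplicate key value violates unique constraint "subgroups_ukey2"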

How does 'UQ' function if another field is 'PK' in MySQL workbench?

I'm creating a schema for a database using MySQL Workbench. One of my tables contains fields for a personId, as well as a national id number if the person has one (which they may not).
The personId field is the one used as a unique identifier throughout the schema, so I've ticked the "PK" and "NN" options for it. Now I'd like to be able to ensure that the system won't allow a new insert with a different personId if it has the same national id as an entity that already exists. However, national ids are not primary keys and may in fact be null.
I've been looking at the 'UQ' option, but I can't find clear documentation on what it actually does. I'm worried it'll create the numbers automatically when I actually want them to be inserted by a user or left null. Does anyone know?
UQ tags a field as a unique key. This enforces uniqueness in a given field, except for NULLs. This is exactly what I need for my national id field.
From http://dev.mysql.com/doc/refman/5.5/en/create-table.html :
A UNIQUE index creates a constraint such that all values in the index must be distinct. An error occurs if you try to add a new row with a key value that matches an existing row. For all engines, a UNIQUE index permits multiple NULL values for columns that can contain NULL.
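A quick sketch of that behavior (table and column names are assumed from the question):

CREATE TABLE person (
    personId   INT NOT NULL,
    nationalId VARCHAR(20) NULL,
    PRIMARY KEY (personId),
    UNIQUE KEY uq_national_id (nationalId)
);

INSERT INTO person (personId, nationalId) VALUES (1, NULL);   -- ok
INSERT INTO person (personId, nationalId) VALUES (2, NULL);   -- ok: multiple NULLs allowed
INSERT INTO person (personId, nationalId) VALUES (3, 'A123'); -- ok
INSERT INTO person (personId, nationalId) VALUES (4, 'A123'); -- fails: duplicate entry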