-7008 error in sqlrpgle even though commitment control is set to *NONE - db2

I have an SQLRPGLE program that performs a delete operation using exec sql. The problem is that I get an SQLCOD of -7008 even though I have compiled the program with commit set to *NONE, and I also tried adding "with none" at the end of the statement, but nothing seems to work.
Any ideas what can be done here?
Update:
So, the issue seems to be coming up because the tables I am trying to delete from seem to be parent tables which have a dependent table.
But what is confusing to me is that the dependent table is already cleared and does not have any data. So why would it error?
Parent table :
create table grid_details (
id integer not null generated always as identity (start with 1 increment by 1) PRIMARY KEY,
grid_name varchar(20),
grid_description varchar(40),
source_file_name varchar(30),
created_date date default current_date,
created_by varchar(30),
last_updated_date date default current_date,
updated_by varchar(30)
);
Parent table :
create table action_code_details (
id integer not null generated always as identity (start with 1 increment by 1) PRIMARY KEY,
action_code varchar(10),
action_description varchar(40),
required_input clob,
end_point clob,
created_date date default current_date,
created_by varchar(30),
last_updated_date date default current_date,
updated_by varchar(30)
);
Child table :
create table grid_action_details (
id integer not null generated always as identity (start with 1 increment by 1) PRIMARY KEY,
grid_details_id integer,
foreign key (grid_details_id) references grid_details(id),
action_code_details_id integer,
foreign key (action_code_details_id) references action_code_details(id),
action_code_status varchar(2),
created_date date default current_date,
created_by varchar(30),
last_updated_date date default current_date,
updated_by varchar(30),
required_parameter clob
);
I am able to delete everything from the child table but not from the parent table.
I found this when I tried to clear the parent tables through a CLRPFM. I don't understand this as the child table has no data. So why would the parent tables not get cleared?

What reason code are you getting? Should be returned as part of the message text.
Reason codes are:
1 -- &1 has no members.
2 -- &1 has been saved with storage free.
3 -- &1 not journaled, no authority to the journal, or the journal state is *STANDBY. Files with an RI constraint action of CASCADE, SET NULL, or SET DEFAULT must be journaled to the same journal.
4 and 5 -- &1 is in or being created into production library but the user has debug mode UPDPROD(*NO).
6 -- Schema being created, but user in debug mode with UPDPROD(*NO).
7 -- A based-on table used in creation of a view is not valid. Either the table is a program described table or it is in a temporary schema.
8 -- Based-on table resides in a different ASP than ASP of object being created.
9 -- Index is currently held or is not valid.
10 -- A constraint or trigger is being added to an invalid type of table, or the maximum number of triggers has been reached, or all nodes of the distributed table are not at the same release level.
11 -- Distributed table is being created in schema QTEMP, or a view is being created over more than one distributed table.
12 -- Table could not be created in QTEMP, QSYS, QSYS2, or SYSIBM because it contains a column of type DATALINK having the FILE LINK CONTROL option.
13 -- The table contains a DATALINK, LOB, or XML column that conflicts with the data dictionary.
14 -- A DATALINK, LOB, XML, or IDENTITY column cannot be added to a non SQL table.
15 -- Attempted to create or change an object using a commitment definition in a different ASP.
16 -- Sequence &1 in &2 was incorrectly modified with a CL command.
17 -- The table is not usable because it contains partial transactions.
EDIT
So the correction for reason code 3 is
Start journaling on &1 (STRJRNPF), get access to the journal, or
change the journal state to *ACTIVE (CHGJRN).
Now normally, that code is thrown when you try to use commitment control and the file is not journaled. But you mention that you've compiled with COMMIT(*NONE) and that you've even added with none to your statement. Thus, the problem likely lies in the fact that the table is journaled and the journal state is *STANDBY. I don't believe there'd be an authority issue with the journal itself.
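If it does turn out that the file is not journaled, or its journal is in *STANDBY, a rough sketch of the fix from an SQL session (my addition, not part of the message text; MYLIB and MYJRN are placeholder names, and on older IBM i releases QSYS2.QCMDEXC also requires a second length parameter):
CALL QSYS2.QCMDEXC('STRJRNPF FILE(MYLIB/GRID_DETAILS) JRN(MYLIB/MYJRN)');
CALL QSYS2.QCMDEXC('CHGJRN JRN(MYLIB/MYJRN) JRNSTATE(*ACTIVE)');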
EDIT2
Actually, taking a look at *STANDBY
*STANDBY - Most journal entries are not deposited into the journal. If an attempt is made to deposit an entry into the journal, there will be no errors indicating that the entry was not deposited. While in *STANDBY state, journaling can be started or stopped. However, using commitment control is not allowed while in *STANDBY state. Because commitment control is not allowed, functions where the system uses commitment control internally are also not allowed. Similarly, access paths built over files journaled to a journal in *STANDBY state will not be eligible for System-Managed Access Path Protection (SMAPP). This may impact system performance, system IPL, or independent auxiliary storage pool (IASP) vary-on duration, or both system performance and IPL or vary-on duration, as the system tries to achieve the specified access path recovery time. Note: This value cannot be specified for remote journals.
It doesn't seem that it should be a problem with a DELETE unless commitment control is being used or unless the file has a RI constraint with an action of CASCADE, SET NULL, or SET DEFAULT.
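For what it's worth, the statement-level equivalent of COMMIT(*NONE) in DB2 for i is the isolation clause WITH NC (no commit); I suspect that is what "with none" was meant to be. A minimal sketch, using one of the parent tables from the DDL above:
DELETE FROM grid_details WITH NC;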

Related

Postgres: CREATE TABLE IF NOT EXISTS ⇒ 23505

I start multiple programs that all more or less simultaneously do
CREATE TABLE IF NOT EXISTS log (...);
Sometimes this works perfectly. But most of the time, one or more of the programs crash with the error:
23505: duplicate key value violates unique constraint "pg_class_relname_nsp_index".
Can somebody explain to me how the actual Christmas tree CREATE TABLE IF NOT EXISTS is giving me an error message about the table already existing? Isn't that, like, the entire point of this command?? What is going on here? More to the point, how do I get it to actually work correctly?
After this command, there's also a couple of CREATE INDEX IF NOT EXISTS commands. These occasionally fail in a similar way too. But most of the time, it's the CREATE TABLE statement that fails.
You can reproduce this with 2 parallel sessions:
First session:
begin;
create table if not exists log(id bigint generated always as identity, t timestamp with time zone, message text not null);
Notice that the first session has not committed yet, so the table does not really exist.
Second session:
begin;
create table if not exists log(id bigint generated always as identity, t timestamp with time zone, message text not null);
The second session will now block, as the name "log" is reserved by the first session. But it is not yet known whether the transaction that reserved it will be committed or not.
Then, when you commit the first session, the second will fail:
ERROR: duplicate key value violates unique constraint "pg_class_relname_nsp_index"
DETAIL: Key (relname, relnamespace)=(log_id_seq, 2200) already exists.
To avoid it, you have to make sure that the check for the existence of the table is done after some common advisory lock is taken:
begin;
select pg_advisory_xact_lock(12345);
-- any bigint value, but has to be the same for all parallel sessions
create table if not exists log(id bigint generated always as identity, t timestamp with time zone, message text not null);
commit;
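A variation on the same idea (my addition, not part of the original answer): instead of a hard-coded number, derive the lock key from the table name with hashtext(), an internal Postgres hashing function, so every session creating the same table agrees on the key:
begin;
select pg_advisory_xact_lock(hashtext('log'));
create table if not exists log(id bigint generated always as identity, t timestamp with time zone, message text not null);
commit;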

Would this PostgreSQL model work for long-term use and security?

I'm making a real-time chat app and was stuck figuring out what the DB model should look like. I've made this diagram, but would this work? My issue is more to do with foreign keys.
I know this is a very vague question. But have been struggling with this model for a while now. This is the first database I'm setting up so it's probably got a load of errors.
Actually you are fairly close, but have over-complicated it a bit. At the conceptual/logical level you have just 2 entities, Users and Messages, with a many-to-many relationship. At the physical level the Channels table resolves the M:M into the 2 one-to-many relationships you have described. But viewing it this way reveals a couple of issues. The user attribute is not required in the Messages table, and if physically implemented it requires a validation (not easily done) that the user there exists in the Channels table. Further, everything the Message:User relationship provides is already available via the Users:Channels:Messages relationship. A similar argument applies to the Channels column in Users - it is completely resolved by the resolution table. Suggestion: drop user from the messages table and channels from users.
Now let's look at the columns of Channels. It looks like you are using a boilerplate for created_at and updated_at, but are they necessary? Well, at least for updated_at, no. What can be updated? If either User or Message changes, you have a brand-new entry. Yes, it may seem like the same physical row (actually it is not), but the meaning is completely different. How about last message sent? What does it indicate that the max created_at for the user does not already give you? I cannot see anything. I guess you could change created_at, but what is the point of tracking when you changed that column? Suggestion: drop last message sent and updated_at (unless required by institution standards).
That leaves the Users table itself. Besides Channels, mentioned above, there is the Contacts column. Physically, as an array it violates 1NF and becomes difficult to manage (as well as validating that the contact is in fact a user). Logically it creates a M:M on User:User. So resolve it the same way as Users:Messages: pull it out into another table, say User_Contacts, with 2 foreign keys to the Users table. Suggestion: drop contacts from the users table and create a resolution table.
Unfortunately, I do not have a good ERD diagrammer, so I just provide DDL.
create table users (
user_id integer generated always as identity primary key
, name text
, phone_number text
, last_login timestamptz
, created_at timestamptz
, updated_at timestamptz
) ;
create type message_type as enum ('short', 'long'); -- list all values
create table messages(
msg_id integer generated always as identity primary key
, msg_type message_type
, message text
, created_at timestamptz
, updated_at timestamptz
);
create table channels( -- resolves M:M Users:Messages
user_id integer
, msg_id integer
, created_at timestamptz
, constraint channels_pk
primary key (user_id, msg_id)
, constraint channels_2_users_fk
foreign key (user_id)
references users(user_id)
, constraint channels_2_messages_fk
foreign key (msg_id)
references messages(msg_id )
);
create table user_contacts( -- resolves M:M Users:Users
user_id integer
, contact_id integer
, created_at timestamptz
, constraint user_contacts_pk
primary key (user_id, contact_id)
, constraint user_2_users_fk
foreign key (user_id)
references users(user_id)
, constraint contact_2_user_fk
foreign key (contact_id)
references users(user_id)
, constraint contact_not_me_check check (user_id <> contact_id)
);
Notes:
Do not use text as PK, use either integer (bigint) or UUID, and generate them during insert.
Caution on ENUM. In Postgres you can add new values, but you cannot remove a value. Depending upon the number of values and how often they change, consider creating a lookup/reference table for them instead (a sketch follows these notes).
Do not use the data type TIME. It is really not that useful without the date. Simple example: I log in today at 15:00, you log in tomorrow at 13:00. Now, from the database itself, which of us logged in first?
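A possible shape for that lookup/reference table (my sketch; the name message_types and its use are illustrative, not part of the answer above):
create table message_types (
msg_type text primary key -- one row per allowed value: 'short', 'long', ...
);
-- messages.msg_type would then become: msg_type text references message_types(msg_type)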

Why does Id increase by two instead of one when using insert

I have been trying to understand this for many hours and still cannot understand why it is happening.
I have created two tables, with the foreign key added via ALTER:
CREATE TABLE stores (
id SERIAL PRIMARY KEY,
store_name TEXT
-- add more fields if needed
);
CREATE TABLE products (
id SERIAL,
store_id INTEGER NOT NULL,
title TEXT,
image TEXT,
url TEXT UNIQUE,
added_date timestamp without time zone NOT NULL DEFAULT NOW(),
PRIMARY KEY(id, store_id)
);
ALTER TABLE products
ADD CONSTRAINT "FK_products_stores" FOREIGN KEY ("store_id")
REFERENCES stores (id) MATCH SIMPLE
ON UPDATE NO ACTION
ON DELETE RESTRICT;
and every time I insert a row into products by doing
INSERT
INTO
public.products(store_id, title, image, url)
VALUES((SELECT id FROM stores WHERE store_name = 'footish'),
'Teva Flatform Universal Pride',
'https://www.footish.se/sneakers/teva-flatform-universal-pride-t1116376',
'https://www.footish.se/pub_images/large/teva-flatform-universal-pride-t1116376-p77148.jpg?timestamp=1623417840')
I can see that the id column increases by two on every insert instead of one, and I would like to know the reason behind that.
I have not been able to figure out why and it would be nice to know! :)
There could be 3 reasons:
You've tried to create data but it failed. Even on a failed insert and a transaction rollback, the sequence still counts up. A used number is never put back.
You're using a global sequence and other data was created elsewhere in the meantime. A global sequence increases whenever data is added to any table that uses it, even when the rows go into other tables.
The sequence is configured with a step size / allocation size of 2. It can be configured however you want.
Overall it is not important. The most important thing is that it increases automatically and that, even after an error or a delete, an already-used ID is never handed out again.
If you want concrete information, you need to provide the definition of the sequence. You can check it using a SQL CLI or show it via DBeaver/....
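For example (my addition; this assumes the default sequence name products_id_seq that SERIAL generates, and Postgres 10 or later for the pg_sequences view), you can inspect the increment directly:
select schemaname, sequencename, increment_by, last_value
from pg_sequences
where sequencename = 'products_id_seq';
If increment_by is 1, the gaps come from rolled-back or failed inserts rather than from the sequence configuration.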

I need the name of the enterprise to be the same as it was when it was registered and not the value it currently has

I will explain the problem with an example:
I am designing a specific case of referential integrity in a table. In the model there are two tables, enterprise and document. We register the companies, and then someone inserts the documents associated with them. The name of the enterprise is variable. When it comes to retrieving the documents, I need the name of the enterprise as it was at registration time, not the value it currently has. The solution I thought of was to register the company again on every change, with the same code; the updated name would then give the expected result, but I am not sure if it is the best solution. Can someone make a suggestion?
There are several possible solutions and it is hard to determine which one will be the easiest.
Side comment: your question is limited to managing names efficiently, but I would like to point out that your DB is sensitive to files being moved, renamed or deleted. Your database will not be able to keep records up to date if anything happens at the OS level. You should consider doing something about that too.
Amongst the few solutions I considered, the one that is best normalized is the schema below:
CREATE TABLE Enterprise
(
IdEnterprise SERIAL PRIMARY KEY
, Code VARCHAR(4) UNIQUE
, IdName INTEGER DEFAULT -1 /* This will be used to get a single active name */
);
CREATE TABLE EnterpriseName (
IDName SERIAL PRIMARY KEY
, IdEnterprise INTEGER NOT NULL REFERENCES Enterprise(IdEnterprise) ON UPDATE NO ACTION ON DELETE CASCADE
, Name TEXT NOT NULL
);
ALTER TABLE Enterprise ADD FOREIGN KEY (IdName) REFERENCES EnterpriseName(IdName) ON UPDATE NO ACTION ON DELETE NO ACTION DEFERRABLE INITIALLY DEFERRED;
CREATE TABLE Document
(
IdDocument SERIAL PRIMARY KEY
, IdName INTEGER NOT NULL REFERENCES EnterpriseName(IDName) ON UPDATE NO ACTION ON DELETE NO ACTION
, FilePath TEXT NOT NULL
, Description TEXT
);
Using flags and/or timestamps, or moving the enterprise name into the document table, are appealing solutions, but only at first glance.
In particular, ensuring that a company always has one, and only one, "active" name is not an easy thing to do.
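To illustrate how the schema above is used (my addition, not part of the original answer), a document keeps pointing at the name row that was current when it was filed, while the enterprise row points at whatever name is currently active:
-- Name at the time the document was registered:
SELECT d.IdDocument, d.FilePath, en.Name AS name_at_registration
FROM Document d
JOIN EnterpriseName en ON en.IdName = d.IdName;
-- Current name of an enterprise:
SELECT e.Code, cur.Name AS current_name
FROM Enterprise e
JOIN EnterpriseName cur ON cur.IdName = e.IdName;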
Add a date range to your enterprise: valid_from, valid_to. Initialise them to -infinity, +infinity. When you change the name of an enterprise, instead of updating it in place: update the existing row where valid_to = +infinity to set valid_to = now(), and insert the new name with valid_from = now(), valid_to = +infinity.
Add a date field to the document, something like create_date. Then when joining to enterprise you join on ID and d.create_date between e.valid_from and e.valid_to.
This is a simplistic approach and breaks things like uniqueness for your id and code. To handle that you could record the name in a separate table with the id, from, to and name, leaving your original table with just the id and code for uniqueness.
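A minimal sketch of that separate-name-table variant (my addition; table and column names such as enterprise_name_history, document.enterprise_id and document.create_date are illustrative assumptions):
CREATE TABLE enterprise_name_history (
enterprise_id integer NOT NULL REFERENCES enterprise(id),
name text NOT NULL,
valid_from timestamptz NOT NULL DEFAULT '-infinity',
valid_to timestamptz NOT NULL DEFAULT 'infinity',
PRIMARY KEY (enterprise_id, valid_from)
);
-- Each document is joined to the name that was valid when it was created:
SELECT d.id, h.name
FROM document d
JOIN enterprise_name_history h
ON h.enterprise_id = d.enterprise_id
AND d.create_date BETWEEN h.valid_from AND h.valid_to;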

How to ensure the accuracy of aggregated data in PostgreSQL table?

I have two PostgreSQL tables - table A contains individual clients' credit movement records (increases / decreases) and table B contains aggregated data from table A. Simplified structure of the tables (I removed FKs and rules):
CREATE TABLE "public"."credit_review" (
"id" SERIAL,
"client_id" INTEGER NOT NULL,
"credit_change" INTEGER DEFAULT 0 NOT NULL,
"itime" TIMESTAMP(0) WITH TIME ZONE DEFAULT now()
) WITHOUT OIDS;
CREATE TABLE "public"."credit_review_aggregated" (
"id" SERIAL,
"credit_amount" INT DEFAULT 0 NOT NULL,
"valid_to_review_id" INT NOT NULL,
"client_id" INTEGER NOT NULL,
"itime" TIMESTAMP(0) WITH TIME ZONE DEFAULT now()
) WITHOUT OIDS;
Column "credit_review_aggregated.valid_to_review_id" is FK to "credit_review.id".
Because it is very important that the data in the aggregation table is correct, I'm looking for a way of ensuring this. What occurred to me:
Disable deleting and updating records in both tables.
On the aggregated table, create a trigger to check whether the entered data is correct (and if not, reject the insert). I don't like this too much, because when a record is inserted into the aggregation table the credit_amount value will be computed twice (once in the application and a second time in the trigger).
Do you have some advice for me on how to handle this situation?
I'm not entirely clear on the invariant you're trying to enforce, but from the general outlines of the problem, I would be inclined to use trigger code to enforce it, and use SERIALIZABLE transactions. Enforcing invariants across multiple tables becomes very tricky very quickly otherwise.
http://wiki.postgresql.org/wiki/SSI
Full disclosure: Because my employer needed to enforce complex integrity rules across multiple tables, I worked on adding SSI to PostgreSQL, along with Dan R.K. Ports of MIT.
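A rough sketch of what that combination could look like (my addition, not the answerer's code; the recomputation rule - summing credit_change up to valid_to_review_id - is an assumption about the intended invariant, and EXECUTE FUNCTION needs Postgres 11+, older versions use EXECUTE PROCEDURE):
CREATE FUNCTION check_credit_aggregate() RETURNS trigger AS $$
DECLARE
actual bigint;
BEGIN
-- Recompute the client's balance up to the review row the aggregate claims to cover.
SELECT COALESCE(SUM(credit_change), 0) INTO actual
FROM credit_review
WHERE client_id = NEW.client_id
AND id <= NEW.valid_to_review_id;
IF actual <> NEW.credit_amount THEN
RAISE EXCEPTION 'credit_amount % does not match recomputed value %', NEW.credit_amount, actual;
END IF;
RETURN NEW;
END;
$$ LANGUAGE plpgsql;
CREATE TRIGGER credit_review_aggregated_check
BEFORE INSERT ON credit_review_aggregated
FOR EACH ROW EXECUTE FUNCTION check_credit_aggregate();
-- Writers run under SERIALIZABLE so a concurrent insert into credit_review cannot
-- slip in between the trigger's recomputation and the commit:
BEGIN ISOLATION LEVEL SERIALIZABLE;
-- ... INSERT INTO credit_review ... / INSERT INTO credit_review_aggregated ...
COMMIT;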