IBM DB2 time travel logging based on some criteria - db2

I have been searching for a way to handle the following condition: let's say we enable time travel on a certain table in DB2, but we don't want to capture all of the updates, only the updates made by some specific user.
I wanted to know if this is at all possible with DB2 time travel, and how we can achieve it.

It's not possible with DB2 temporal tables.

Alter the temporal table to add a user column maintained by the system.
Db2 for iSeries column shown:
EMP_CHANGE_USER VARCHAR(18) GENERATED ALWAYS AS (USER)
The new column will automatically carry over to the history table of the temporal table. You can then report on the history table and filter on emp_change_user.
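For context, a minimal sketch of the full statement, assuming a temporal table named employee (the table name and column length are illustrative, not from the question):
ALTER TABLE employee
  ADD COLUMN emp_change_user VARCHAR(18)
    GENERATED ALWAYS AS (USER); -- the USER special register records who made the change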
Note: in real life, don't single out users. You can give management a report that lists all users and let management filter it down to individuals. Programmers should not single out users for reporting and logging.
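For the report itself, a sketch of a per-user change summary, assuming the history table is named employee_history (an illustrative name):
SELECT emp_change_user, COUNT(*) AS change_count
FROM employee_history
GROUP BY emp_change_user
ORDER BY change_count DESC;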

Related

How to design security policies for a following system including counters in postgres/supabase if postgres functions are used?

I am unsure how to design security policies for a following system including counters in postgres/supabase. My database includes two tables:
Users:
uuid|name|follower_counter
------------------------------
xyz |tobi| 1
Following-Relationship
follower| following
---------------------------
uuid_1 | uuid_2
Once a user follows a different user, I would like to use a postgres function/transaction to
Insert a new following-follower relationship
Update the followed user's counter
BEGIN;
SELECT create_follower_relationship(follower_id, following_id);
SELECT increment_counter_of_followed_person(following_id);
COMMIT;
The constraint should be that the users table (e.g. the name column) can only be altered by the user owning the row. However, the follower_counter should be open to changes from users who start following that user.
What is the best security policy design here? Should I add column-level security, or should I move the counters out to a different table?
Do I have to pass parameters to the "block transaction" to ensure that the update and insert functions are called with the needed rights? With which rights should I call the block function?
It might be better to take a different approach to solve this problem. Instead of having a column dedicated to counting the followers, I would recommend actually counting the number of followers when you query the users. Since you already have the Following-Relationship table, we just need to count the rows within that table where following or follower is the user being queried.
When you have a counter, it might be hard to keep the counter accurate. You have to make sure the number gets decremented when someone unfollows. What if someone blocks a user? What if a user was deleted? There could be a lot of situations that could throw off the counter.
If you count the number of followings/followers on the fly, you don't need to worry about those situations at all.
Now, an obvious concern you might have with this approach is performance, but you should not worry too much about it. Postgres is a powerful database that has been battle-tested for decades, and with a proper index in place, it can easily perform these queries on the fly.
The easiest way of doing this in Supabase would be to create a view like the following. Once you create a view, you can query it from your Supabase client just like a typical table!
create or replace view profiles as
select
  u.uuid,
  u.name,
  (select count(*) from following_relationship r where r.following = u.uuid) as follower_count,
  (select count(*) from following_relationship r where r.follower = u.uuid) as following_count
from users u;
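To back the "proper index" point above: both counting subqueries scan the relationship table, so indexing each column keeps them fast. A minimal sketch (index names are illustrative):
create index if not exists following_relationship_following_idx
  on following_relationship (following);
create index if not exists following_relationship_follower_idx
  on following_relationship (follower);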

Does the DB2 database have a feature to add a Sensitive Data Indicator to an object?

SQL Server provides a feature to add a sensitivity indicator to columns/objects to identify what kind of data is stored in that column.
CREATE TABLE STUDENT (SNAME VARCHAR(1000));

ADD SENSITIVITY CLASSIFICATION TO
dbo.STUDENT.SNAME
WITH ( LABEL='Highly Confidential', INFORMATION_TYPE='Financial', RANK=CRITICAL );
Then we can fetch this information with the following query.
SELECT * FROM sys.sensitivity_classifications;
Does DB2 have any feature similar to this?
SQL Server documentation: SQLServer_Documention_For_Sensitive_Data_Indicator
Db2 has the security feature of Label-Based Access Control (LBAC). You can define security labels and policies and later assign those labels to data. You then define access control rules based on those labels.
INSERT INTO student VALUES ('Henrik', SECLABEL_BY_NAME('Highly Confidential', 'Financial') )
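The INSERT above presupposes that a security policy named "Highly Confidential" with a label named Financial already exists and that the student table is protected by it. A rough sketch of how such objects might be defined; the component name, label elements, and column name are assumptions, not from the original answer:
-- Illustrative LBAC setup only; adjust names and rules to your policy
CREATE SECURITY LABEL COMPONENT sensitivity
  ARRAY ['Financial', 'Internal', 'Public'];
CREATE SECURITY POLICY "Highly Confidential"
  COMPONENTS sensitivity WITH DB2LBACRULES;
CREATE SECURITY LABEL "Highly Confidential".Financial
  COMPONENT sensitivity 'Financial';
ALTER TABLE student ADD SECURITY POLICY "Highly Confidential";
ALTER TABLE student ADD COLUMN class DB2SECURITYLABEL;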

Google Data Studio: Connect to multiple schemas in multi-tenant postgres DB

I have a multi-tenant database in postgres. So, I have one schema per customer and each schema has a fixed set of tables.
When I connect to the DB using Google Data Studio (GDS), I only see the table names without their associated schema.
How do I connect to tables belonging to one or more schemas?
Also, what do I do if my tables have more than 700k rows? GDS has a limit on the number of rows that can be queried, right?
You'll have to use the "Custom Query" option instead of the basic table selection if you need anything more complex.
Regarding the row limit: I wasn't aware of one, but if that is true, I'd suggest using the Custom Query to pre-group your rows into whatever granularity makes sense (days, months, etc.) to bring the row count down.
Data Studio will likely choke on anywhere near that many rows and make for a horrible user experience. Let Postgres do as much of the heavy lifting as you can.
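A sketch of such a pre-grouping Custom Query, which also shows how to reach a specific tenant's tables by schema-qualifying them; the tenant_a schema and the purchases table and columns are made-up names:
SELECT
  date_trunc('day', created_at) AS day,
  count(*) AS purchase_count,
  sum(amount) AS revenue
FROM tenant_a.purchases
GROUP BY 1;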
Answering only: "what do I do if my tables have more than 700k rows, as GDS has a limit on number of rows that can be queried right?"
Not exactly. The limit is on the number of rows returned, not the number of rows queried. And that matters, since Data Studio will almost always push queries down to connectors.
Here's an example: let's say you have a purchase table in a PostgreSQL db with 1M+ rows, where each record is a purchase event. You add this table as a data source in your report and add a bar chart that shows the average purchase by customer type. Let's say you have 12 customer types. Data Studio will then push down the GROUP BY clause to the PostgreSQL db. Thus, your result will have only 12 rows of data instead of 1M+. In most chart types, Data Studio will aggregate or page the results, thus issuing a query statement that limits the number of rows returned.
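For illustration, the pushed-down statement would look roughly like the following (the purchase table and column names are assumed from the example above):
SELECT customer_type, avg(purchase_amount) AS average_purchase
FROM purchase
GROUP BY customer_type;
-- returns roughly 12 rows, one per customer type, instead of 1M+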
You will only run into the limit if you end up creating a scenario where Data Studio cannot issue an aggregation or paging over the query results or if the aggregated results cross the row limit.

How to update PostgreSQL full text search field when relational data changes

I have the following strategy for the full text search in my web app, which uses PostgreSQL for relational data storage. As an example I will take the invoices table.
In each such table I have one additional field, added with ALTER TABLE invoices ADD COLUMN tsv tsvector, on which the full text search query is run like this: ... WHERE tsv @@ to_tsquery('query:*') ...
On every full text search table I have set an update trigger that updates the tsv field on every change of the record. The update sets and concatenates the data from different fields into the tsv field, sets the right weights, etc.
The data that gets set into the tsv field can also be relational data from other tables. For example, in the invoices table I have a client_id field, but since I want to search invoices by client name as well, I also include clients.client_name data in the invoices.tsv field.
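The trigger is roughly along these lines (a simplified sketch; the invoice columns are assumed for illustration):
CREATE OR REPLACE FUNCTION invoices_tsv_update() RETURNS trigger AS $$
BEGIN
  -- Build the vector from the invoice's own fields plus the related client name
  NEW.tsv :=
    setweight(to_tsvector('english', coalesce(NEW.invoice_number, '')), 'A') ||
    setweight(to_tsvector('english', coalesce(
      (SELECT c.client_name FROM clients c WHERE c.id = NEW.client_id), '')), 'B');
  RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER invoices_tsv_trigger
BEFORE INSERT OR UPDATE ON invoices
FOR EACH ROW EXECUTE PROCEDURE invoices_tsv_update();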
My question is: what is the best strategy to keep the relational data in the tsv vectors in sync? In the above scenario, if a client's name changes I would need to update the tsv field of every invoice for that client...
Should I set up a cron job that would do this every night? It could also be done with triggers, but since my database schema is very large I am scared it might get out of control if I have triggers all over the place.
If you add the client's name into the tsv field you will end up with more complexity. You might want to look into materialized views, as mentioned in this article. The trade-off might be speed in showing results versus the need to refresh the view periodically. As of Postgres 9.4 you can refresh a materialized view concurrently.
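A rough sketch of that materialized-view approach, with table and column names assumed from the question:
CREATE MATERIALIZED VIEW invoice_search AS
SELECT i.id,
  setweight(to_tsvector('english', coalesce(i.invoice_number, '')), 'A') ||
  setweight(to_tsvector('english', coalesce(c.client_name, '')), 'B') AS tsv
FROM invoices i
JOIN clients c ON c.id = i.client_id;

-- A unique index is required before REFRESH ... CONCURRENTLY can be used
CREATE UNIQUE INDEX invoice_search_id_idx ON invoice_search (id);
REFRESH MATERIALIZED VIEW CONCURRENTLY invoice_search;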
Another thing you could do is create an update trigger on the clients table, so that when there's an update it updates the data in the invoices table as well.
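A sketch of that propagation trigger, with assumed names; the no-op UPDATE simply touches each affected invoice so the existing tsv trigger on invoices rebuilds the vector:
CREATE OR REPLACE FUNCTION clients_name_sync() RETURNS trigger AS $$
BEGIN
  -- Touch every invoice of this client; the invoices trigger does the tsv rebuild
  UPDATE invoices SET client_id = NEW.id WHERE client_id = NEW.id;
  RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER clients_name_sync_trigger
AFTER UPDATE OF client_name ON clients
FOR EACH ROW EXECUTE PROCEDURE clients_name_sync();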

DB2 AS400 Triggers

I've been tasked with finding a way to migrate data into a DB2 AS400 database. When the data is entered (currently manually) on the front end, the system is doing some calculations and inserting the results in a table.
My understanding is that it's using a trigger to do so. I don't know very much about this stuff, but I have written code to directly insert values into that same table. Is there a way for me to figure out what trigger is being fired when users enter data manually?
I've looked in QSYS2/SYSTRIGGERS and, besides it not making much sense to me, I see no triggers belonging to the schema containing my table.
Any help here would be awesome, as I am stuck.
SELECT *
FROM QSYS2.SYSTRIGGERS
WHERE TABSCHEMA = 'MYSCHEMA'
AND TABNAME = 'MYTABLE'
Should work fine.
If you'd prefer to use a 5250 command line, the Display File Description (DSPFD) command will show you the triggers on a file (table)
DSPFD FILE(MYSCHEMA/MYTABLE) TYPE(*TRG)
Lastly, trigger information is available via the IBM i Navigator GUI, either the older fat-client version or the newer web-based one.