DB2 records count

I have a simple user registration table that contains just two columns: an event ID and the name of the user registered to that event. This table can contain millions of records. To show how many users are registered to a certain event, I don't want to execute a COUNT(*) query every time a user loads the event web page. Instead, I want to keep each event's count in a separate table and update that count whenever a new user registers. So I could use a TRIGGER that updates the count once a new record is added to or updated in the registration table. Is this a good approach? What if 1000 users register at the same time and 1000 records are created or updated in the registration table? Will the TRIGGER work correctly? What is the best solution to automatically calculate counts? Thanks
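Roughly, the trigger being described would look like this in DB2 syntax (all table and column names here are assumptions, not from the post):

-- Hypothetical schema: registration(event_id, user_name) and
-- event_counts(event_id, user_count).
CREATE TRIGGER registration_count_ai
  AFTER INSERT ON registration
  REFERENCING NEW AS n
  FOR EACH ROW MODE DB2SQL
  -- bump the cached count for the event the new row belongs to
  UPDATE event_counts
     SET user_count = user_count + 1
   WHERE event_id = n.event_id;

Note that if 1000 users register for the same event at once, every insert must update that event's single event_counts row, so the trigger stays correct but the row becomes a locking hot spot.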

Related

Track history of ManyToMany Relationship table with extra fields

I have a many-to-many relationship table (named UserLabel) in the Postgres DB with some extra fields. I want to be able to track the history of changes to this many-to-many table. I came up with the following structure, and I'd like to know if there's a better way of implementing it.
User
    id

Label
    id

UserLabel
    id
    user_id
    label_id
    label_info (jsonb)
    is_deleted (true or false)
UserLabel can contain more than one record with the same user_id and label_id but with different label_info. At any point in time, if I want to query all the labels for a given user, I can do that using this table. Now, updates could occur on this table on the label_id, label_info, or is_deleted fields. I want to be able to know, at any given point in time, what the labels and label info of a user were. For this, I'm using the table below.
UserLabelEvent
    id
    user_label_id
    user_id
    label_id
    label_info
    change_type (value will be one of (create, update, delete))
    created_timestamp
If I want to check the user labels for any user at any time, I just have to query on user_id and created_timestamp, order the records by created_timestamp, and loop over them to reconstruct the user labels at that time.
The problems in my current approach:
Anyone looking at the schema of the UserLabel table would assume there cannot be more than one record with the same user_id and label_id.
By looking at UserLabelEvent alone, it's not obvious how that table works.
I need to do some post-processing to find the user labels at any given time; by post-processing, I mean looping over the query results and constructing the user labels.
Please do suggest any other problems you find with this approach. I will update the post with new inputs.
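For concreteness, here is the structure described above as Postgres DDL (column types are assumptions; table names are snake_cased from the post):

-- "user" must be quoted because user is a reserved word in Postgres.
CREATE TABLE user_label (
    id          bigserial PRIMARY KEY,
    user_id     bigint  NOT NULL REFERENCES "user"(id),
    label_id    bigint  NOT NULL REFERENCES label(id),
    label_info  jsonb,
    is_deleted  boolean NOT NULL DEFAULT false
    -- deliberately no unique constraint on (user_id, label_id):
    -- the same pair may appear with different label_info
);

CREATE TABLE user_label_event (
    id                bigserial PRIMARY KEY,
    user_label_id     bigint NOT NULL REFERENCES user_label(id),
    user_id           bigint NOT NULL,
    label_id          bigint NOT NULL,
    label_info        jsonb,
    change_type       text   NOT NULL
                      CHECK (change_type IN ('create', 'update', 'delete')),
    created_timestamp timestamptz NOT NULL DEFAULT now()
);

-- The reconstruction step described above then loops over:
-- SELECT * FROM user_label_event
-- WHERE user_id = $1 AND created_timestamp <= $2
-- ORDER BY created_timestamp;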

How do I set my Hasura permission to only see the rows of my table corresponding to a user?

Here's the thing. I have three tables: orders, users, and zip_user.
People on my application can place orders. Then, I want
a user with the rex permission to see all of the orders table's rows
a user with the delivery permission to see only the rows of the orders table whose zip column matches the delivery user's zip
From the orders table, I can get a zip for each order. With the zip_user table, I can get a user_id from a zip. From that user_id, I can get the delivery user from the users table.
While it is trivial to get the rex to see all of the orders table, I have not yet been able to configure the permissions for the delivery user. What do I need to do?
In other words, given that the user performing a select on the orders table has x-hasura-user-id set to some user id and x-hasura-role set to delivery, how does that user get only the rows from the orders table that match the zips associated with that user's user_id?
Hasura has the concept of relationships. If you have foreign keys, it creates the relationships automatically; if not, you can create them yourself in the UI. Once the relationships have been set up, you will be able to set deep permissions, so on the orders table you'll be able to use users.id.
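For orientation, the row filter you're trying to express for the delivery role is logically equivalent to this SQL, written against the tables described above (:x_hasura_user_id stands in for the x-hasura-user-id session variable):

-- Rows a delivery user should see: orders whose zip maps, via zip_user,
-- to that user's user_id.
SELECT o.*
FROM orders o
WHERE EXISTS (
  SELECT 1
  FROM zip_user zu
  WHERE zu.zip = o.zip
    AND zu.user_id = :x_hasura_user_id
);

Once the relationship from orders to zip_user exists in Hasura, the same condition can be written as a row permission on orders for the delivery role.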
Start here: https://hasura.io/docs/1.0/graphql/manual/schema/relationships/index.html

Query items in a DynamoDB table within a numeric range

I need some guidance on implementing mail notifications for new chat messages. The mail notification would inform the user of all the chats that had new messages in the previous hour.
To get it done, I'll need to query all chats in a table within a time interval. The first thing that came to mind was adding a new global secondary index where the hash key would be a boolean for whether the chat has unread messages, and the range key would be the timestamp of the latest message within that chat.
But I have learned that boolean hash keys are quite the anti-pattern, as they squeeze all the documents into a single partition.
Is there a different model that would allow us to query all items in a table within a numeric range?
I’m assuming that you want to query unread messages for a given user, since (again) I’m assuming that the read/unread status of a given notification should not change for one user if another user reads a notification for the same thing.
Going on that assumption, you should use a sparse index with the userId (or equivalent) as the hash key and unreadNotificationTime as the sort key. When you insert a new notification into your table, set the value of unreadNotificationTime to the timestamp of the notification. When the user has read the notification, delete the unreadNotificationTime attribute from the item.
Why does this work?
DynamoDB only requires that an item has the key attributes of the base table; any other attributes are optional. The way indexes work in DynamoDB is that an item from the base table will only appear in an index if the item has all of the key attributes of that particular index.
By setting a value for unreadNotificationTime when you store a notification, all newly created notifications will automatically be populated into the unread-messages index. By deleting the unreadNotificationTime when a message is read, you remove the notification from that index. With this schema, there's no need for any filtering or scan operations. Your index will only contain notifications that are unread, grouped by userId and sorted by date.
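To make that concrete, suppose the table is called Notifications and the sparse index is called unread-index (both names are assumptions). Using DynamoDB's PartiQL support, the hourly job could query the index directly:

-- Only unread notifications for one user within the last hour; items whose
-- unreadNotificationTime attribute was deleted (read messages) are simply
-- absent from the sparse index, so no filtering is needed.
SELECT *
FROM "Notifications"."unread-index"
WHERE userId = 'user-123'
  AND unreadNotificationTime
      BETWEEN '2020-06-01T09:00:00Z' AND '2020-06-01T10:00:00Z';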

Handling multi-table transactions in Cassandra

I have two tables:
posts : {post_id, text}
This stores each post by its id. Another table stores the counts of likes and comments for each post:
counts: {post_id, likes, comments}
I have another table that maps users who have already liked a post, so that by checking for an entry here we can decide whether or not to allow another like:
post_like_user: {post_id, user_id}
The last one is the comments table for each post:
comments: {post_id, comment_id, comment_text}
So the use cases are:
If a user makes a comment, insert it into the comments table and increment the comments count in the counts table.
If a user likes a post, first check the post_like_user table; if the entry doesn't exist, increment the likes count in the counts table and insert the user id into the post_like_user table.
Are these kinds of use cases handled by Cassandra/MongoDB in production? How can I implement them in Cassandra/MongoDB given that they don't support ACID transactions?
Cassandra has a concept of batches, which is quite similar to transactions (at least from the description). Link to documentation: https://docs.datastax.com/en/cql/3.3/cql/cql_using/useBatchGoodExample.html
So basically what you'd do:
BEGIN LOGGED BATCH
  -- your DML statements go here
APPLY BATCH;
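Filled in for the comment use case (types are assumptions: post_id text, comment_id timeuuid), with one important caveat: if counts.comments is a Cassandra counter column, counter updates are not allowed inside a logged batch and must be issued as separate statements.

BEGIN LOGGED BATCH
  INSERT INTO comments (post_id, comment_id, comment_text)
  VALUES ('p1', now(), 'Nice post!');
  -- further non-counter DML against other tables could go here
APPLY BATCH;

-- Counter increment has to run outside the logged batch:
UPDATE counts SET comments = comments + 1 WHERE post_id = 'p1';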
If a user likes a post, first check the post_like_user table; if the entry doesn't exist, increment the likes count in the counts table and insert the user id into the post_like_user table.
There are possible issues with this case:
Race condition: 'check and update' will not be performed as an atomic operation. In Cassandra there is no way to provide atomicity across several tables and several operations.
Inconsistent data in the post_like_user table between replicas, unless you require strong consistency, which will cost you some performance.
It would be better to avoid the 'check and update' behavior: do not use a separate table for the counter; use the count() function to get the number of likes per post:
SELECT COUNT(*) FROM post_like_user WHERE post_id = 'post id';
This query should be pretty fast because it is performed within one partition (if post_id is the partition key).
Another way is to keep the separate counts table, but update it in a background process that periodically requests the 'likes' count from the post_like_user table via count(*) and puts the result into the counts table.
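A sketch of that background job's two steps (note this only works if counts.likes is a regular int column; Cassandra counter columns can only be incremented or decremented, never assigned an absolute value):

-- Step 1: fetch the current count for a post (runs within one partition):
SELECT COUNT(*) FROM post_like_user WHERE post_id = 'p1';

-- Step 2: write the fetched value back from application code:
UPDATE counts SET likes = 42 WHERE post_id = 'p1';  -- 42 = value read in step 1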

APEX - Creating a page with multiple forms linked to multiple related tables... that all submit with one button?

I have two tables in APEX that are linked by their primary key. One table (APEX_MAIN) holds the basic metadata of a document in our system and the other (APEX_DATES) holds important dates related to that document's processing.
For my team I have created a control panel where they can interact with all of this data. The issue is that right now they alter the information in APEX_MAIN on one page, then they alter APEX_DATES on another. I would really like to have these forms on the same page and submit updates to their respective tables and rows with a single submit button. I have set this up currently using two different regions on the same page, but I am getting errors both with the initial fetching of the rows (whichever row is fetched second seems to work, but then the page items in the form that was fetched first are empty?) and with submitting (it gives some error about information in the DB having been altered since the update request was sent). Can anyone help me?
It is a limitation of the built-in Apex forms that you can only have one automated row fetch process per page, unfortunately. You can have more than one form region per page, but you have to code all the fetch and submit processing yourself if you do (not that difficult really, but you need to take care of optimistic locking etc. yourself too).
Splitting one table's form over several regions is perfectly possible, even using the built-in form functionality, because the region itself is just a layout object, it has no functionality associated with it.
Building forms manually is quite straightforward, but a bit more work.
Items
These should have the source set to "Static Text" rather than database column.
Buttons
You will need buttons like Create, Apply Changes, and Delete that submit the page. These need unique request values so that you know which table is being processed, e.g. CREATE_EMP. You can make the buttons display conditionally, e.g. Create only when the PK item is null.
Row Fetch Process
This will be a simple PL/SQL process like:
select ename, job, sal
into :p1_ename, :p1_job, :p1_sal
from emp
where empno = :p1_empno;
It will need to be conditional so that it only fires on entry to the form and not after every page load; otherwise, if there are validation errors, any edits will be lost. This can be controlled by a hidden item that is initially null but set to a non-null value on page load. Only fetch the row if the hidden item is null.
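For example, with a hidden item (call it P1_FETCHED; the name is an assumption), the fetch process gets the condition "P1_FETCHED is null", and a page-load process that runs after it simply sets the flag:

-- Runs after the row fetch; flags the page so that re-rendering after a
-- validation error does not refetch the row and wipe the user's edits.
:P1_FETCHED := 'Y';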
Submit Process(es)
You could have 3 separate processes for insert, update, delete associated with the buttons, or a single process that looks at the :request value to see what needs doing. Either way the processes will contain simple DML like:
insert into emp (empno, ename, job, sal)
values (:p1_empno, :p1_ename, :p1_job, :p1_sal);
Optimistic Locking
I omitted this above for simplicity, but one thing the built-in forms do for you is handle "optimistic locking" to prevent 2 users updating the same record simultaneously, with one's update overwriting the other's. There are various methods you can use to do this. A common one is to use OWA_OPT_LOCK.CHECKSUM to compare the record as it was when selected with how it is at the point of committing the update.
In fetch process:
select ename, job, sal, owa_opt_lock.checksum('SCOTT','EMP',ROWID)
into :p1_ename, :p1_job, :p1_sal, :p1_checksum
from emp
where empno = :p1_empno;
In submit process for update:
update emp
set job = :p1_job, sal = :p1_sal
where empno = :p1_empno
and owa_opt_lock.checksum('SCOTT','EMP',ROWID) = :p1_checksum;
if sql%rowcount = 0 then
-- handle fact that update failed e.g. raise_application_error
end if;
Another, easier solution for the fetching part is creating a view with all the fields that you need.
The weak point is that you later need to alter the "submit" code to insert into the tables that are the source of the view's data.
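A sketch of such a view, assuming the two tables share a doc_id key (the column names besides the join key are placeholders for the real APEX_MAIN / APEX_DATES columns):

-- One view lets a single built-in form fetch everything at once; the submit
-- processing still has to split the changes back out to the base tables.
create or replace view apex_doc_v as
select m.doc_id,
       m.title,
       d.received_date,
       d.processed_date
from   apex_main  m
join   apex_dates d on d.doc_id = m.doc_id;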