I would like to know the amount of active log space used by each connection in the database.
I know how to retrieve the amount of active log used by the database as a whole, but not by each application. Knowing the amount of active log used in the database helps me identify whether a log-full condition is approaching.
However, I want to know which application is approaching that log-full condition. For that I need to know how much log space each transaction (each application) is using, but I have not found a view, snapshot, or anything else that reports this per application.
Any ideas?
Log space is used by transactions (units of work), not connections, so perhaps something like this:
select
application_handle, uow_log_space_used
from
table(sysproc.mon_get_unit_of_work(null,null))
order by 2 desc
fetch first 5 rows only
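If you also want to see which application each handle belongs to, you could join to MON_GET_CONNECTION. A sketch, assuming a DB2 LUW level where both monitoring table functions are available:
select c.application_name,
       u.application_handle,
       u.uow_log_space_used
from table(sysproc.mon_get_unit_of_work(null, null)) u
join table(sysproc.mon_get_connection(null, null)) c
  on u.application_handle = c.application_handle
order by u.uow_log_space_used desc
fetch first 5 rows only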
I would like to query the history of the balance of an XRPL account with the new WebSocket API.
For example, how do I check the balance of an account on a particular day?
I know that with the v2 API there was a possibility to query balance_changes, but this doesn't seem to be part of the new version.
For example:
https://data.ripple.com/v2/accounts/rf1BiGeXwwQoi8Z2ueFYTEXSwuJYfV2Jpn/balance_changes?start=2018-01-01T00:00:00Z
How is this done with the new WebSocket APIs?
There's no convenient call in the WebSocket API to get this. I assume you want the XRP balance, not token/issued currency balances, which are kept in a different place.
One way to go about it is to make an account_tx call and then iterate through the metadata. Many, but not all, transactions will have a ModifiedNode entry of type AccountRoot; if that transaction changed the account's XRP balance, you can see the difference in the PreviousFields vs. FinalFields for that entry. The Look Up Transaction Results tutorial has some details on how to parse out metadata this way. There are some tricky edge cases here: for example, if you send a transaction that buys 10 drops of XRP on the exchange but burns 10 drops of XRP as the transaction cost, then the metadata won't show a balance change because the net change was zero (+10, -10).
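For example, the account_tx request sent over the WebSocket connection could look like the following (the account is the one from your Data API URL; the limit and ledger range are only illustrative). You would then walk each returned transaction's meta.AffectedNodes looking for AccountRoot entries:
{
  "id": 1,
  "command": "account_tx",
  "account": "rf1BiGeXwwQoi8Z2ueFYTEXSwuJYfV2Jpn",
  "ledger_index_min": -1,
  "ledger_index_max": -1,
  "limit": 20,
  "forward": false
}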
Another approach could be to estimate which ledger_index was most recently closed at a given time, then use account_info to look up the account's balance as of that ledger. The hard part there is figuring out what the latest ledger index was at a given time. This is one of the places where the Data API was just more convenient than the WebSocket API: there's no way to look up by date over WebSocket, so you have to try a ledger index, check the close time of that ledger, try another ledger index, check its date, and so on.
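A sketch of that probe step: ask for a ledger's close time with the ledger command, adjust the index until you bracket the date you want, then read the balance as of that ledger with account_info (the ledger index shown is arbitrary):
{ "id": 2, "command": "ledger", "ledger_index": 70000000, "transactions": false, "expand": false }
{ "id": 3, "command": "account_info", "account": "rf1BiGeXwwQoi8Z2ueFYTEXSwuJYfV2Jpn", "ledger_index": 70000000 }
The ledger response includes close_time (and close_time_human), which is what you compare against your target date.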
I have come across a problem and am not sure how to implement it in the database. I have Go on the application side.
I have a product table with a column named last_port_used. I need to assign ports to services when someone hits an API: each call needs to increment last_port_used by 1 for the given product.
One possible solution would have been to use a Redis server and keep this value in sync there. Since we don't have Redis, I want to achieve the same thing with PostgreSQL.
I read about locks and I think I need an ACCESS EXCLUSIVE lock. Is this the right way to do it?
product
  id
  name
  start_port      -- 11000
  end_port        -- 11999
  last_port_used  -- 11023
How do I handle this properly under concurrency?
You could simply do:
UPDATE products SET last_port_used = last_port_used+1
WHERE id=...
AND last_port_used < end_port
RETURNING *
This performs the update atomically, only if a port number is still available (last_port_used < end_port), and returns the row with the newly assigned port.
If you need to lock the row, you can also use SELECT FOR UPDATE.
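For example, a minimal sketch of that variant (same table and placeholder id as above; run inside one transaction):
BEGIN;

-- Lock the product row; concurrent requests for the same product wait here
SELECT last_port_used, end_port
FROM products
WHERE id = ...
FOR UPDATE;

-- If last_port_used < end_port, hand out the next port
UPDATE products
SET last_port_used = last_port_used + 1
WHERE id = ...
RETURNING last_port_used;

COMMIT;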
I am building an app that helps people transfer money from one account to another. I have two tables, "Users" and "Transactions". The way I am currently handling transfers is:
Check if the sender has enough balance to make a transfer.
Deduct the balance from the sender and update the sender's balance.
Add the amount deducted from the sender's account to the recipient's account and then update the balance of the recipient.
Then finally write the transaction record in the "Transactions" table as a single entry like below:
id | transactionId | senderAccount | recipientAccount | Amount |
---+---------------+---------------+------------------+--------+
 1 | ijiej33       | A             | B                | 100    |
So my question is: is recording a transaction as a single entry like the above good practice, or will this kind of database design produce challenges later?
Thanks
Check if the sender has enough balance to make a transfer.
Deduct the balance from the sender and update the sender's balance.
Yes, but.
If two concurrent connections attempt to deduct money from the sender at the same time, they may each check that there is enough money for their own transaction, then both proceed even though the balance is insufficient to cover both.
You must use a SELECT FOR UPDATE when checking. This will lock the row for the duration of the transaction (until COMMIT or ROLLBACK), and any concurrent connection attempting to also SELECT FOR UPDATE on the same row will have to wait.
Presumably the receiver account can always receive money, so there is no need to lock it explicitly, but the UPDATE will lock it anyway. And locks must always be acquired in the same order or you will get deadlocks.
For example, if one transaction locks rows 1 then 2 while another locks rows 2 then 1: the first one will lock row 1, the second will lock row 2, then the first will try to lock row 2 but it is already locked, and the second will try to lock row 1 but it is also already locked by the other transaction. Both transactions will wait for each other forever until the deadlock detector nukes one of them.
One simple way to dodge this is to use ORDER BY:
SELECT ... FROM users WHERE user_id IN (sender_id,receiver_id)
ORDER BY user_id FOR UPDATE;
This will lock both rows in the order of their user_ids, which will always be the same.
Then you can do the rest of the procedure.
Since it is always a good idea to hold locks for the shortest amount of time, I'd recommend putting the whole thing inside a plpgsql stored procedure, including the COMMIT/ROLLBACK and error handling. Try to make the stored procedure failsafe and atomic.
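A minimal sketch under assumed names (users(user_id, balance), plus the transactions columns from the question, where transactionId is assumed to be filled by a default). Transaction control is left to the caller here, since COMMIT inside plpgsql has restrictions, and any RAISE EXCEPTION aborts the whole transaction:

CREATE OR REPLACE PROCEDURE transfer(p_sender bigint, p_receiver bigint, p_amount numeric)
LANGUAGE plpgsql
AS $$
DECLARE
  v_sender_balance numeric;
BEGIN
  -- Lock both account rows in a fixed order to avoid deadlocks
  PERFORM user_id
  FROM users
  WHERE user_id IN (p_sender, p_receiver)
  ORDER BY user_id
  FOR UPDATE;

  SELECT balance INTO v_sender_balance FROM users WHERE user_id = p_sender;

  IF v_sender_balance IS NULL OR v_sender_balance < p_amount THEN
    RAISE EXCEPTION 'insufficient funds';  -- rolls back the surrounding transaction
  END IF;

  UPDATE users SET balance = balance - p_amount WHERE user_id = p_sender;
  UPDATE users SET balance = balance + p_amount WHERE user_id = p_receiver;

  INSERT INTO transactions (senderAccount, recipientAccount, Amount)
  VALUES (p_sender, p_receiver, p_amount);
END;
$$;

The application then just does CALL transfer(1, 2, 100); and the whole thing either succeeds or fails as a unit.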
Note, for security purposes, you should:
Store the balance of both accounts, as it was before the money transfer occurred, in the transactions table. You're already reading it in the SELECT FOR UPDATE, so you might as well use it. It will be useful for auditing.
For security, if a user gets their password stolen there's not much you can do, but if your application gets hacked it would be nice if the hacker was not able to issue global UPDATEs to all the account balances, mess with the audit tables, etc. This means you need to read up on this and create several postgres users/roles with suitable permissions for backup, web application, etc. Some tables and especially the transactions table should have all UPDATE privileges revoked, and INSERT allowed only for the transactions stored procs, for example. The aim is to make the audit tables impossible to modify, basically append-only from the point of view of the application code.
Likewise you can handle updates to the balance via stored procedures and forbid the web application role from touching it directly; a sketch of that privilege setup follows below. You could even pass a user-specific security token as a parameter to the stored proc, to authenticate the app user to the database, so the database only allows transfers from the account of the user who is logged in, not just any user.
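A hedged sketch of that setup (the role name is illustrative and the procedure is the transfer sketched above):

-- The web application connects as a role that cannot modify the tables directly
CREATE ROLE webapp LOGIN PASSWORD 'change-me';
REVOKE ALL ON users, transactions FROM webapp;

-- The procedure runs with its owner's privileges, so webapp only needs EXECUTE
ALTER PROCEDURE transfer(bigint, bigint, numeric) SECURITY DEFINER;
GRANT EXECUTE ON PROCEDURE transfer(bigint, bigint, numeric) TO webapp;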
Basically if it involves money, then it involves legislation, and you have to think about how not to go to jail when your web app gets hacked.
Is it possible to track in the ShoreTel database who has silently monitored other people's calls? If so, where is the data stored? Which tables?
Here is a basic sample query that will get you a list of calls that were silent monitor calls. There is obviously a lot of refining to do based on exactly what details you are looking for. Feel free to PM me if you want help with something more specific.
SELECT `call`.SIPCallId AS `GUID`
, `call`.StartTime AS `StartTime`
, `call`.Extension AS `DN`
, `call`.DialedNumber
FROM `call`
LEFT JOIN connect ON (`call`.ID = connect.CallTableID)
WHERE connect.connectreason = 21
ORDER BY `call`.Extension, `call`.StartTime
The WHERE clause here limits your rows to only those with a reason code of 21 (silent monitor). Look at the values in the connectreason table for more details on which reason codes are tracked.
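If you want to see the other reason codes, you can simply list that lookup table (column names vary by version, so select everything):
SELECT * FROM connectreason;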
PLEASE NOTE that this is in the CDR database (port 4309, username 'st_cdrreport', read-only password 'passwordcdrreport'); you don't want to accidentally write to the CDR database...
For reasons that are somewhat vague, we are using a replicated OrientDB for our application.
It looks likely that we will have cases when a single record could be created twice. Here is how it happens:
we have user_document entities that hold a user ID and a document ID; both users and documents are managed in another application and stored in another database;
that other application sends a broadcast event (via a RabbitMQ topic) when a new document is created;
several instances of our application receive this message and each create a user_document with the same pair of user_id and document_id.
If I understand correctly, we should use a UNIQUE index on the pair of these IDs and rely on distributed transactions.
However, for various reasons (another team is writing the layer between the application and the database) we probably cannot use UNIQUE, even though that may sound stupid :)
What are our chances then?
Could we, for example, allow all instances to create redundant records, and immediately after creation select by user_id and document_id and, if more than one record is found, delete the ones with the lexicographically higher record ID?
Sure, you can do it that way.
You can try something like:
DELETE FROM (SELECT FROM user_document WHERE user_id = ? AND document_id = ? SKIP 1)
However, note that without an index this approach may consume additional resources on the server, and you might see a significant slowdown if user_document contains a large number of records.
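A non-unique composite index would speed up that lookup without enforcing uniqueness. A sketch, assuming user_id and document_id are declared properties of the user_document class:
CREATE INDEX user_document_user_doc ON user_document (user_id, document_id) NOTUNIQUE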