Is it possible to track in the ShoreTel database who has silently monitored others' calls? If so, where is the data stored? In which tables?
Here is a basic sample query that will get you a list of calls that were silent monitor calls. There is obviously a lot of refining to do based on exactly what details you are looking for. Feel free to PM me if you want help with something more specific.
SELECT `call`.SIPCallId AS `GUID`
, `call`.StartTime AS `StartTime`
, `call`.Extension AS `DN`
, `call`.DialedNumber
FROM `call`
LEFT JOIN connect ON (`call`.ID = connect.CallTableID)
WHERE connect.connectreason = 21
ORDER BY `call`.Extension, `call`.StartTime
The where clause here limits your rows to only those with a reason code of 21, silent monitor. Look at the values in the connectreason table for more details on what reason codes are tracked.
PLEASE NOTE that this is in the CDR database (port 4309, username 'st_cdrreport', read-only password 'passwordcdrreport'); you don't want to accidentally write to the CDR database...
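If you want to see what the other reason codes mean, or pull the reason details into the results, you could also look at the connectreason lookup table directly and join it in. This is only a sketch; the join key and the columns in connectreason are assumptions here, so check the actual schema first:
-- List the reason codes and whatever descriptive columns the table has
SELECT * FROM connectreason;
-- Join the reason details into the call listing (join key cr.ID is an assumption)
SELECT `call`.SIPCallId AS `GUID`
, `call`.StartTime AS `StartTime`
, `call`.Extension AS `DN`
, `call`.DialedNumber
, cr.*
FROM `call`
LEFT JOIN connect ON (`call`.ID = connect.CallTableID)
LEFT JOIN connectreason cr ON (cr.ID = connect.connectreason)
ORDER BY `call`.Extension, `call`.StartTime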
I have come across a problem and am not sure how to implement it at the DB level. I have Go on the application side.
I have a product table with a column named last_port_used. I need to assign ports to services when someone hits an API: increment last_port_used by 1 for the given product name.
One possible solution would have been to use a Redis server and sync this value there. Since we don't have Redis, I wanted to achieve the same with PostgreSQL.
I read about locks and I think I need an ACCESS EXCLUSIVE lock. Is this the right way to do it?
product
  id
  name
  start_port      // 11000
  end_port        // 11999
  last_port_used  // 11023
How do I handle this properly under concurrency?
You could simply do:
UPDATE products SET last_port_used = last_port_used+1
WHERE id=...
AND last_port_used < end_port
RETURNING *
This performs the update atomically and only if a port number is still available (last_port_used < end_port), and RETURNING * gives you back the row with the newly assigned port. Concurrent transactions that hit the same row block on the row lock until the first one commits, so two callers can never be handed the same port.
If you need to lock the row first, for example to inspect the current value before updating, you can also use SELECT ... FOR UPDATE.
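A minimal sketch of the SELECT ... FOR UPDATE variant, using the table and column names from the question (the lookup by name is just an example):
BEGIN;
SELECT last_port_used, end_port
FROM product
WHERE name = 'my-product'   -- example lookup; could also be by id
FOR UPDATE;                 -- row stays locked until COMMIT/ROLLBACK
-- application code checks last_port_used < end_port, then:
UPDATE product
SET last_port_used = last_port_used + 1
WHERE name = 'my-product';
COMMIT;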
I would like to know the amount of active log space used by each connection in the database.
I know how to retrieve the amount of active log space used for the database as a whole, but not for each application. Knowing the overall active log usage helps me identify whether a log-full condition is approaching.
However, I want to know which application is pushing the database toward that log-full condition. For that I need to know how much log space each transaction (each application) is using, but I have not found a view, snapshot, or anything else that reports this per application.
Any ideas?
Log space is used by transactions (units of work), not connections, so something like this, maybe?
select
application_handle, uow_log_space_used
from
table(sysproc.mon_get_unit_of_work(null,null))
order by 2 desc
fetch first 5 rows only
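If you also want to see which application each handle belongs to, you could join in MON_GET_CONNECTION. This is a sketch; the column names are as documented for recent Db2 LUW releases, so double-check them on your version:
select
u.application_handle, c.application_name, c.client_hostname, u.uow_log_space_used
from
table(sysproc.mon_get_unit_of_work(null,null)) u
join table(sysproc.mon_get_connection(null,-1)) c
on u.application_handle = c.application_handle
order by u.uow_log_space_used desc
fetch first 5 rows only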
The repository in CommonDomain only exposes GetById(). So what do I do if my handler needs a list of Customers, for example?
Taking your question at face value: if you need to perform operations on multiple aggregates, you would provide the IDs of each aggregate in your command (which the client would obtain from the query side), then get each aggregate from the repository.
However, looking at one of your comments in response to another answer, I see that what you are actually referring to is set-based validation.
This very question has raised quite a lot of debate about how to do this, and Greg Young has written a blog post on it.
The classic question is 'how do I check that the username hasn't already been used when processing my 'CreateUserCommand'. I believe the suggested approach is to assume that the client has already done this check by asking the query side before issuing the command. When the user aggregate is created the UserCreatedEvent will be raised and handled by the query side. Here, the insert query will fail (either because of a check or unique constraint in the DB), and a compensating command would be issued, which would delete the newly created aggregate and perhaps email the user telling them the username is already taken.
The main point is that you assume the client has done the check. I know this approach is difficult to grasp at first, but it's the nature of eventual consistency.
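To make the read-side failure concrete, here is a minimal sketch of the kind of constraint the query-side insert would trip over. The table and column names are invented for illustration:
-- Read-model table maintained by the handler for UserCreatedEvent
CREATE TABLE users_read (
    user_id   UUID         NOT NULL PRIMARY KEY,
    username  VARCHAR(100) NOT NULL UNIQUE   -- a duplicate username makes the INSERT fail
);
-- When the INSERT fails, the handler issues the compensating command
-- described above (delete the new aggregate, notify the user).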
Also you might want to read this other question which is similar, and contains some wise words from Udi Dahan.
In the classic event sourcing model, queries like 'get all customers' would be carried out by a separate query handler which listens to all events in the domain and builds a query model to satisfy the relevant questions.
If you need to query customers by last name, for instance, you could listen to all customer created and customer name change events and just update one table of last-name to customer-id pairs. You could hold other information relevant to the UI that is showing the data, or you could simply hold IDs and go to the repository for the relevant customers in order to work further with them.
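As a sketch of what such a projection might look like on the read side (all names invented for illustration):
-- Kept up to date by the handlers for customer-created and customer-name-changed events
CREATE TABLE customer_by_last_name (
    customer_id  UUID         NOT NULL PRIMARY KEY,
    last_name    VARCHAR(100) NOT NULL
);
-- On customer created:      INSERT INTO customer_by_last_name (customer_id, last_name) VALUES (...)
-- On customer name changed: UPDATE customer_by_last_name SET last_name = ... WHERE customer_id = ...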
You don't need a list of customers in your handler. Each aggregate MUST be processed in its own transaction. If you want to show this list to the user, just build an appropriate view.
Your command needs to contain the ID of the aggregate root it should operate on.
The client sending the command looks this ID up using a view in your read model. That view is populated with data from the events that your AR emits.
Say that I have a User table in my read database (SQL Server). In a regular read/write database I can put a unique index on the table to make sure that two users aren't added with the same email address.
So if I try to add a user with an email address that already exists for a different user, SQL Server will throw an exception back.
In CQRS I can't do that, because if I decouple the write to my read database from the domain model by putting it on an asynchronous queue, I won't get the exception thrown back to me. I will return "OK" to the UI and the user will think he has been added, when in fact he will never be added to the read database.
I can do a search in the read database to check whether there is already a user with that email address, and if there is, throw an exception back to the UI. But if two users press the save button at the same time, I will do two checks against the database, see that there isn't any user with that email address, report back that it's okay, put both on my queue, and later one of them will fail (by hitting the unique index).
Am I supposed to load all users from my event source (it's a SQL Server) and then do the check on that collection, to see if I have a user who already has this email address? That sounds a bit crazy to me...
How have you people solved it?
The only way I can see is to not use an asynchronous queue but a synchronous one, but that will hurt performance really badly, especially when you have many "read storages" to write to...
Need some help here...
Searching for CQRS Set Based Validation will give you solutions to this issue.
Greg Young posted about the business impact of embracing eventual consistency http://codebetter.com/gregyoung/2010/08/12/eventual-consistency-and-set-validation/
Jérémie Chassaing posted about discovering missing aggregate roots in the domain http://thinkbeforecoding.com/post/2009/10/28/Uniqueness-validation-in-CQRS-Architecture
Related stack overflow questions:
How to handle set based consistency validation in CQRS?
CQRS Validation & uniqueness
What's a good way to implement a Web Page counter?
On the surface this is a simple problem, but it gets problematic when dealing with search engine crawlers and robots, multiple clicks by the same user, refresh clicks.
Specifically, what is a good way to ensure links aren't just 'clicked up' by a user repeatedly clicking? IP address? Cookies? Both of these have drawbacks (IP addresses aren't necessarily unique, cookies can be turned off).
Also, what is the best way to store the data? Increment a single counter, or store each click as a record in a log table and summarize occasionally?
Any live experience would be helpful,
+++ Rick ---
Use IP Addresses in conjunction with Sessions. Count every new session for an IP address as one hit against your counter. You can store this data in a log database if you think you'll ever need to look through it. This can be useful for calculating when your site gets the most traffic, how much traffic per day, per IP, etc.
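A rough sketch of what that log table and a couple of the summary queries could look like (all names invented for illustration, SQL Server flavor):
CREATE TABLE HitLog (
    HitId      BIGINT IDENTITY(1,1) PRIMARY KEY,
    SessionId  VARCHAR(64)  NOT NULL,
    IpAddress  VARCHAR(45)  NOT NULL,   -- 45 characters covers IPv6
    HitTime    DATETIME     NOT NULL
);
-- Counter value: one hit per new session
SELECT COUNT(DISTINCT SessionId) FROM HitLog;
-- Traffic per day
SELECT CAST(HitTime AS DATE) AS HitDay, COUNT(*) AS Hits
FROM HitLog GROUP BY CAST(HitTime AS DATE) ORDER BY HitDay;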
So I played around with this a bit based on the comments here. What I came up with is incrementing a counter in a simple field. In my app I have code snippet entities with a Views property.
When a snippet is viewed, a method whitelists what should hopefully be real browsers:
public bool LogSnippetView(string snippetId, string ipAddress, string userAgent)
{
    if (string.IsNullOrEmpty(userAgent))
        return false;

    userAgent = userAgent.ToLower();

    // Whitelist: only count agents that look like real browsers
    if (!(userAgent.Contains("mozilla") || userAgent.StartsWith("safari") ||
          userAgent.StartsWith("blackberry") || userAgent.StartsWith("t-mobile") ||
          userAgent.StartsWith("htc") || userAgent.StartsWith("opera")))
        return false;

    this.Context.LogSnippetClick(snippetId, ipAddress);
    return true;
}
The stored procedure then uses a separate table to temporarily hold the latest views; it stores the snippet ID, the entry date and the IP address. Each view is logged, and when a new view comes in it's checked to see whether the same IP address has accessed this snippet within the last 2 minutes. If so, nothing is logged.
If it's a new view, the view is logged (again SnippetId, IP, Entered) and the actual Views field is updated on the Snippets table.
If it's not a new view, the table is cleaned up by deleting any logged views older than 4 minutes. This should result in a minimal number of entries in the view log table at any time.
Here's the stored proc:
ALTER PROCEDURE [dbo].[LogSnippetClick]
    -- Add the parameters for the stored procedure here
    @SnippetId AS VARCHAR(MAX),
    @IpAddress AS VARCHAR(MAX)
AS
BEGIN
    SET NOCOUNT ON;

    -- Don't allow updating if this IP address has already
    -- clicked on this snippet in the last 2 minutes
    SELECT Id FROM SnippetClicks
    WHERE SnippetId = @SnippetId AND IpAddress = @IpAddress AND
          DATEDIFF(minute, Entered, GETDATE()) < 2

    IF @@ROWCOUNT = 0
    BEGIN
        INSERT INTO SnippetClicks
            (SnippetId, IpAddress, Entered) VALUES
            (@SnippetId, @IpAddress, GETDATE())

        UPDATE CodeSnippets SET Views = Views + 1
        WHERE Id = @SnippetId
    END
    ELSE
    BEGIN
        -- clean up
        DELETE FROM SnippetClicks WHERE DATEDIFF(minute, Entered, GETDATE()) > 4
    END
END
This seems to work fairly well. As others mentioned this isn't perfect but it looks like it's good enough in initial testing.
If you get to use PHP, you may use sessions to track activity from particular users. In conjunction with a database, you may track activity from particular IP addresses, which you may assume are the same user.
Use timestamps to limit hits (assume no more than 1 hit per 5 seconds, for example), and to tell when new "visits" to the site occur (if the last hit was over 10 minutes ago, for example).
You may find $_SERVER[] properties that aid you in detecting bots or visitor trends (such as browser usage).
edit:
I've tracked hits & visits before, counting a page view as a hit, and +1 to visits when a new session is created. It was fairly reliable (more than reliable enough for the purposes I used it for). Browsers that don't support cookies (and thus don't support sessions) and users that disable sessions are fairly uncommon nowadays, so I wouldn't worry about it unless there is reason to be excessively accurate.
If I were you, I'd give up on my counter being accurate in the first place. Every solution (e.g. cookies, IP addresses, etc.), like you said, tends to be unreliable. So, I think your best bet is to use redundancy in your system: use cookies, "Flash-cookies" (shared objects), IP addresses (perhaps in conjunction with user-agents), and user IDs for people who are logged in.
You could implement some sort of scheme where any unknown client is given a unique ID, which gets stored (hopefully) on the client's machine and re-transmitted with every request. Then you could tie an IP address, user agent, and/or user ID (plus anything else you can think of) to every unique ID and vice-versa. The timestamp and unique ID of every click could be logged in a database table somewhere, and each click (at least, each click to your website) could be let through or denied depending on how recent the last click was for the same unique ID. This is probably reliable enough for short term click-bursts, and long-term it wouldn't matter much anyway (for the click-up problem, not the page counter).
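A rough sketch of how that unique-ID scheme could be persisted and checked (all table, column and parameter names are invented for illustration, SQL Server flavor):
CREATE TABLE Clients (
    ClientUid  VARCHAR(64) PRIMARY KEY,    -- the ID handed out to the browser
    IpAddress  VARCHAR(45)  NULL,
    UserAgent  VARCHAR(512) NULL,
    UserId     INT NULL                    -- filled in once the visitor logs in
);
CREATE TABLE Clicks (
    ClientUid  VARCHAR(64) NOT NULL,
    ClickedAt  DATETIME    NOT NULL
);
-- Let a click through only if this client hasn't clicked within the last 30 seconds
SELECT COUNT(*) FROM Clicks
WHERE ClientUid = @ClientUid AND DATEDIFF(second, ClickedAt, GETDATE()) < 30;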
Friendly robots should have their user agent set appropriately and can be checked against a list of known robot user agents (I found one here after a simple Google search) in order to be properly identified and dealt with separately from real people.