I have an event that is triggered when a device starts a process with a unique process ID.
When the process stops, it sends another event with its Timestamp and the same process ID.
Now I want to calculate the total process time, i.e. subtract the start event's Timestamp from the end event's Timestamp.
I tried multiple ways to accomplish this but they all failed.
Is it possible to save an item from a query to a variable?
e.g.
select
#var = d.ProcessID
from table d
Or is it possible to use subqueries?
e.g.
select
d.TimeStamp
from table d
where d.ProcessID = (select
e.ProcessID
from table e)
Or if anyone has a different suggestion it would be great to have some input :)
Thanks in advance
Greets
You can use patterns to achieve this. Something like this might work:
select * from pattern [every a=StartEvent -> b=StopEvent(sourceId = a.sourceId, processId = a.processId)]
For more info have a look at the Esper doc.
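If your events carry the timestamp as a property, you can also compute the duration directly in the select clause. A minimal sketch, assuming both event types expose a long Timestamp property (the property and event type names are assumptions based on your description):
select a.processId as processId,
       b.Timestamp - a.Timestamp as totalProcessTime
from pattern [every a=StartEvent -> b=StopEvent(sourceId = a.sourceId, processId = a.processId)]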
Is there a way to list all objects from a server (for all databases) and their activity?
What I mean by activity:
If an object is a table/view, I'd like to know the last time something got updated or the table was accessed.
If an object is a function, I'd like to know the last time the function was used.
If an object is a stored procedure, I'd like to know the last time it was executed.
The goal is to eliminate some of the unused objects, or at least identify them so we can analyze them further. If there is a better way to do this, please let me know.
Without specific auditing or explicit logging instructions in your code, what you are asking might be difficult to achieve.
Here are some hints that, in my opinion, can help you retrieve the information you need:
Tables/Views You can rely on the dynamic management view that records index usage information, sys.dm_db_index_usage_stats (see the documentation for details):
SELECT last_user_update, *
FROM sys.dm_db_index_usage_stats
WHERE database_id = DB_ID('YourDBName')
AND OBJECT_ID = OBJECT_ID('[YourDBName].[dbo].[YourTableName]')
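The same view also tracks reads: the last_user_seek, last_user_scan and last_user_lookup columns tell you when the table was last read, which covers the "accessed" part of your question:
SELECT last_user_seek, last_user_scan, last_user_lookup, last_user_update
FROM sys.dm_db_index_usage_stats
WHERE database_id = DB_ID('YourDBName')
AND OBJECT_ID = OBJECT_ID('[YourDBName].[dbo].[YourTableName]')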
Stored Procedures If the SP execution plan is still cached you can query sys.dm_exec_procedure_stats (see the documentation for details):
select last_execution_time, *
from sys.dm_exec_procedure_stats
WHERE database_id = DB_ID('YourDBName')
AND OBJECT_ID = OBJECT_ID('[YourDBName].[dbo].[YourSpName]')
Functions If the function's execution plan is still cached you can query sys.dm_exec_query_stats (this approach comes from an existing Stack Overflow answer; see the documentation for details):
SELECT qs.last_execution_time
FROM sys.dm_exec_query_stats qs
CROSS APPLY (SELECT 1 AS X
             FROM sys.dm_exec_plan_attributes(qs.plan_handle)
             WHERE ( attribute = 'objectid'
                     AND value = OBJECT_ID('[YourDBName].[dbo].[YourFunctionName]') )
                OR ( attribute = 'dbid'
                     AND value = DB_ID('YourDBName') )
             HAVING COUNT(*) = 2) CA
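Keep in mind that these DMVs are reset when the SQL Server instance restarts, and cached plans can be evicted even earlier, so a missing row only means "not used since the stats were last cleared". To get an overview instead of checking one object at a time, you can join the usage stats to sys.objects. A minimal sketch for user tables (run it in the target database; extend it for other object types as needed):
SELECT o.name,
       MAX(us.last_user_update) AS last_update,
       MAX(us.last_user_seek)   AS last_seek,
       MAX(us.last_user_scan)   AS last_scan
FROM sys.objects o
LEFT JOIN sys.dm_db_index_usage_stats us
       ON us.object_id = o.object_id
      AND us.database_id = DB_ID()
WHERE o.type = 'U'   -- user tables only
GROUP BY o.name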
I'm using Siddhi to reduce the number of events in a system. To do so, I declared a batch time window that groups all the events based on their target_ip.
from Events#window.timeBatch(30 sec)
select id as meta_ID, Target_IP4 as target_ip
group by Target_IP4
insert into temp;
The result I would like to have is a single event for each target_ip, with the meta_ID value being the concatenation of the ids of the distinct events that form it.
The problem is that the previous query generates as many events as there are distinct meta_ID values. For example, I'm getting:
"id_10", "target_1"
"id_11", "target_1"
And I would like to have
"id_10,id_11", "target_1"
I'm aware that some aggregation method is missing from my query. I saw a lot of aggregation functions in Siddhi, including the siddhi-execution-string extension, which has the method str:concat, but I don't know how to use it to aggregate the meta_ID values. Any idea?
You could write an execution plan as shown below, to achieve your requirement:
define stream inputStream (id string, target string);
-- Query 1
from inputStream#window.timeBatch(30 sec)
select *
insert into temp;
-- Query 2
from temp#custom:aggregator(id, target)
select *
insert into reducedStream;
Here, the custom:aggregator is the custom stream processor extension that you will have to implement. You can follow [1] when implementing it.
Let me explain a bit about how things work:
Query 1 generates a batch of events every 30 seconds. In other words, we use Query 1 for creating a batch of events.
So, at the end of every 30-second interval, the batch of events will be fed into the custom:aggregator stream processor. When input is received by the stream processor, its process() method will be called.
#Override
protected void process(ComplexEventChunk<StreamEvent> streamEventChunk, Processor nextProcessor, StreamEventCloner streamEventCloner, ComplexEventPopulater complexEventPopulater) {
//implement the aggregation & grouping logic here
}
The batch of events is in the streamEventChunk. When implementing the process() method, you can iterate over the streamEventChunk and create one event per target. You will need to implement this logic in the process() method, as sketched below.
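As a rough illustration of that logic, here is a minimal sketch of a process() body that concatenates the ids per target. This is a hypothetical implementation against the Siddhi 3.x extension API: the attribute positions read from getOutputData() and the appended output attributes are assumptions that have to match what your extension declares in its init() method.
@Override
protected void process(ComplexEventChunk<StreamEvent> streamEventChunk, Processor nextProcessor, StreamEventCloner streamEventCloner, ComplexEventPopulater complexEventPopulater) {
    // collect the ids of this batch, grouped by target (positions 0 and 1 are assumed)
    Map<String, StringBuilder> idsByTarget = new LinkedHashMap<String, StringBuilder>();
    StreamEvent lastEvent = null;
    while (streamEventChunk.hasNext()) {
        StreamEvent event = streamEventChunk.next();
        String id = (String) event.getOutputData()[0];
        String target = (String) event.getOutputData()[1];
        StringBuilder ids = idsByTarget.get(target);
        if (ids == null) {
            idsByTarget.put(target, new StringBuilder(id));
        } else {
            ids.append(",").append(id);
        }
        lastEvent = event;
        streamEventChunk.remove(); // consume the original event
    }
    // emit one event per target carrying the concatenated id list
    ComplexEventChunk<StreamEvent> outputChunk = new ComplexEventChunk<StreamEvent>(false);
    for (Map.Entry<String, StringBuilder> entry : idsByTarget.entrySet()) {
        StreamEvent grouped = streamEventCloner.copyStreamEvent(lastEvent);
        complexEventPopulater.populateComplexEvent(grouped, new Object[]{entry.getValue().toString(), entry.getKey()});
        outputChunk.add(grouped);
    }
    nextProcessor.process(outputChunk);
}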
[1] https://docs.wso2.com/display/CEP420/Writing+a+Custom+Stream+Processor+Extension
I'm new to Esper and EPL in general. I have two use cases which are basically the opposites of one another. First I need to catch all unique events in a time window, using firstunique(*parameters*).win:time(*time*).
Now what I need to do is the exact opposite: catch all events that arrive in that window and that are NOT selected by that statement, i.e. all the duplicates.
How can I achieve this ? Thanks !
You could use a subquery and "not exists". For example:
select * from Event e1 where not exists (select * from Event#firstunique(*parameters*)#time(*time*) e2 where e1.parameters = e2.parameters)
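(Event#firstunique(...)#time(...) is the newer hash-based notation for the same data windows as the std:firstunique(...).win:time(...) view syntax used in older Esper releases.)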
I've actually found a solution. It involves using unique IDs for incoming events on top of comparing their parameters.
The query looks something like this :
select * from Event a where exists (select * from Event.std:firstUnique(*parameters*).win:time(*time*) b where a.eventId <> b.eventId)
This solves the problem I had where the exists method would return every event (duplicates and unique events) because the window in the subquery would be filled first.
I feel I am missing a simple T-SQL answer to this question. I have a Measurements table and an Activity table related by a MeasurementID column, and there are at least 3 activities (sometimes more) related to a single measurement. How do I construct a query such that the output would look like this:
Measurement ID   Activities
1                Running:Walking:Eating
2                Walking:Eating:Sleeping
I would also be satisfied if the output looked like this:
Measurement ID   Activity1   Activity2   Activity3
1                Running     Walking     Eating
Is there a simple single query way to do this, or must I use (shudder) cursors to do the trick?
Unfortunately, there is no GROUP_CONCAT() in T-SQL. There is a trick to simulate it, though:
SELECT
    MeasurementID,
    Activities = REPLACE((SELECT Activity AS [data()]
                          FROM MeasurementActivities
                          WHERE MeasurementID = ma.MeasurementID
                          FOR XML PATH('')), ' ', ':')
FROM
    MeasurementActivities AS ma
GROUP BY
    MeasurementID
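Note that the [data()] trick joins the values with spaces before REPLACE turns them into colons, so it will mangle activity names that themselves contain spaces. Also, if you are on SQL Server 2017 or later, the built-in STRING_AGG function does this directly (same hypothetical table name as above):
SELECT MeasurementID,
       STRING_AGG(Activity, ':') AS Activities
FROM MeasurementActivities
GROUP BY MeasurementID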
If you know in advance the number of activities per measurement, you can try the PIVOT operator, as in the sketch below.
Otherwise there is no easy way to do it.
If you can, I would suggest you "horizontalize" the rows on your application end.
If that is not an option, the only way I suppose would work in Transact-SQL is to convert your result to XML output and tweak it using XPath queries.
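A minimal PIVOT sketch for the fixed three-column layout (table and column names are assumed from the question; extend the IN list if a measurement can have more activities):
SELECT MeasurementID, [1] AS Activity1, [2] AS Activity2, [3] AS Activity3
FROM (SELECT MeasurementID, Activity,
             ROW_NUMBER() OVER (PARTITION BY MeasurementID ORDER BY Activity) AS rn
      FROM MeasurementActivities) src
PIVOT (MAX(Activity) FOR rn IN ([1], [2], [3])) AS p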
I have a query like this, which we use to generate data for our custom dashboard (a Rails app):
SELECT AVG(wait_time) FROM (
SELECT TIMESTAMPDIFF(MINUTE,a.finished_time,b.start_time) wait_time
FROM (
SELECT max(start_time + INTERVAL avg_time_spent SECOND) finished_time, branch
FROM mytable
WHERE name IN ('test_name')
AND status = 'SUCCESS'
GROUP by branch) a
INNER JOIN
(
SELECT MIN(start_time) start_time, branch
FROM mytable
WHERE name IN ('test_name_specific')
GROUP by branch) b
ON a.branch = b.branch
HAVING avg_time_spent between 0 and 1000)t
GROUP BY week
Now I am trying to port this to Tableau, and I am not able to find a way to represent this data in Tableau. I am stuck on how to represent the inner GROUP BY in a calculated field. I could also try to just use a custom SQL data source, but I am already using another data source.
columns in mytable -
start_time
avg_time_spent
name
branch
status
I think this could be achieved with the new Level of Detail expressions, but unfortunately I am stuck on version 8.3.
Save custom SQL for rare cases. This doesn't look like a rare case. Let Tableau generate the SQL for you.
If you simply connect to your table, then you can usually write calculated fields to get the information you want. I'm not exactly sure why you have test_name in one part of your query but test_name_specific in another, so ignoring that, here is a simplified approach to a similar query.
You could define a calculated field called worst_case_test_time as
DATEDIFF('minute', MIN([start_time]), MAX(DATEADD('second', INT([avg_time_spent]), [start_time])))
which seems close to what your original query computes: the minutes between the earliest start and the latest estimated finish.
It would help if you explained what exactly you are trying to compute. It appears to be some sort of worst-case bound for average test time. There may be an even simpler formula, but it's hard to know without a little context.
You could filter on status = "Success" and avg_time_spent < 1000, and place branch and WEEK(start_time) on say the row and column shelves.
P.S. Your query seems a little off. Don't you need an aggregation function like MAX or AVG after the HAVING keyword?