I need to implement a rule engine using Esper.
For this I have to prepare queries for the rules (if there is any other optimized way, please suggest). Rules must be declarable as well as modifiable at run time. I will also have to create a UI for defining rules.
Please suggest any better, more optimized way of doing this. An example:
Some more rules can be defined at run time.
There is very little context to go by, but a simple solution is to keep the latest state of each Order and each Price and implement each rule as an engine subscription. Your engine could be initialized with the following EPL statements:
/* Minimal schemas */
create schema LiveOrder (user string, orderid string, quantity double, symbol string);
create schema LivePrice (symbol string, price double);
/* create two windows to store the latest order by orderid, and latest price by symbol */
create window LiveOrders.std:unique(orderid) as select * from LiveOrder;
create window LivePrices.std:unique(symbol) as select * from LivePrice;
/* insert data into the windows when data arrives */
insert into LiveOrders select * from LiveOrder;
insert into LivePrices select * from LivePrice;
At this point you will have all orders and prices stored, so they can easily be "joined" for different rules. If a user requires an alert when they place an order with quantity > 100, you simply create the following EPL statement and attach a listener to it that sends the alert:
select * from LiveOrders where user='U1' and quantity > 100;
To create an alert if the order amount for any symbol exceeds 10,000, you do the same with this EPL:
select LiveOrders.symbol as symbol, LiveOrders.quantity * LivePrices.price as total
from LiveOrders
inner join LivePrices on LiveOrders.symbol = LivePrices.symbol
where LiveOrders.quantity * LivePrices.price > 10000
Whenever the alert is no longer necessary, you simply remove the listener and destroy the EPL statement.
Is there a way to list all objects from a server (for all databases) and their activities?
What I mean by activities:
If an object is a table/view, I'd like to know the last time something got updated or the table was accessed.
If an object is a function, I'd like to know the last time the function was used.
If an object is a stored procedure, I'd like to know the last time it was executed.
The goal is to eliminate some of the unused objects, or at least identify them so we can analyze them further. If there is a better way to do this, please let me know.
Without specific auditing or explicit logging in your code, what you are asking might be difficult to achieve (a minimal sketch of such logging follows the hints below).
Here are some hints that, in my opinion, can help you retrieve the information you need:
Tables/Views: you can rely on the dynamic management view that records index usage, sys.dm_db_index_usage_stats (note that its contents are cleared when the instance restarts):
SELECT last_user_update, *
FROM sys.dm_db_index_usage_stats
WHERE database_id = DB_ID('YourDBName')
AND OBJECT_ID = OBJECT_ID('[YourDBName].[dbo].[YourTableName]')
Stored Procedures: if the stored procedure's plan is still cached, you can query sys.dm_exec_procedure_stats:
SELECT last_execution_time, *
FROM sys.dm_exec_procedure_stats
WHERE database_id = DB_ID('YourDBName')
AND OBJECT_ID = OBJECT_ID('[YourDBName].[dbo].[YourSpName]')
Functions: if the function's plan is still cached, you can query sys.dm_exec_query_stats (adapted from another answer):
SELECT qs.last_execution_time
FROM sys.dm_exec_query_stats qs
CROSS APPLY (SELECT 1 AS X
             FROM sys.dm_exec_plan_attributes(qs.plan_handle)
             WHERE (attribute = 'objectid'
                    AND value = OBJECT_ID('[YourDBName].[dbo].[YourFunctionName]'))
                OR (attribute = 'dbid'
                    AND value = DB_ID('YourDBName'))
             HAVING COUNT(*) = 2) CA
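If the cached statistics above are not enough (they only cover plans still in cache, and the counters are cleared when the instance restarts), here is a minimal sketch of the explicit-logging route mentioned at the top. It assumes you are allowed to modify the objects; the table and procedure names are just placeholders:
-- Illustrative usage-log table (names are placeholders)
CREATE TABLE dbo.ObjectUsageLog (
    object_name sysname      NOT NULL,
    used_at     datetime2(0) NOT NULL DEFAULT SYSDATETIME()
);
GO
-- Add one INSERT at the top of each object you want to track
ALTER PROCEDURE [dbo].[YourSpName]
AS
BEGIN
    INSERT INTO dbo.ObjectUsageLog (object_name) VALUES (N'dbo.YourSpName');
    /* ... original procedure body ... */
END
You can then query dbo.ObjectUsageLog for objects with no recent entries.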
I have a query like this, which we use to generate data for our custom dashboard (a Rails app):
SELECT AVG(wait_time) FROM (
SELECT TIMESTAMPDIFF(MINUTE,a.finished_time,b.start_time) wait_time
FROM (
SELECT max(start_time + INTERVAL avg_time_spent SECOND) finished_time, branch
FROM mytable
WHERE name IN ('test_name')
AND status = 'SUCCESS'
GROUP by branch) a
INNER JOIN
(
SELECT MIN(start_time) start_time, branch
FROM mytable
WHERE name IN ('test_name_specific')
GROUP by branch) b
ON a.branch = b.branch
HAVING avg_time_spent between 0 and 1000)t
GROUP BY week
Now I am trying to port this to Tableau, and I am not able to find a way to represent this data there. I am stuck on how to represent the inner GROUP BY in a calculated field. I could also just use a custom SQL data source, but I am already using another data source.
columns in mytable -
start_time
avg_time_spent
name
branch
status
I think this could be achieved with the new Level of Detail expressions, but unfortunately I am stuck on version 8.3.
Save custom SQL for rare cases. This doesn't look like a rare case. Let Tableau generate the SQL for you.
If you simply connect to your table, then you can usually write calculated fields to get the information you want. I'm not exactly sure why you have test_name in one part of your query but test_name_specific in another, so ignoring that, here is a simplified example of a similar calculation.
If you define a calculated field called worst_case_test_time as
DATEDIFF('minute', MIN(start_time), MAX(DATEADD('second', avg_time_spent, start_time)))
that seems close to what your original query computes.
It would help if you explained what exactly you are trying to compute. It appears to be some sort of worst-case bound for average test time. There may be an even simpler formula, but it's hard to know without a little context.
You could then filter on status = "SUCCESS" and avg_time_spent < 1000, and place branch and WEEK(start_time) on, say, the row and column shelves.
P.S. Your query seems a little off. Don't you need an aggregation function like MAX or AVG after the HAVING keyword?
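For instance, assuming the bound was meant to apply per branch to the test_name rows (that is a guess at the intent), the first subquery could carry the filter with an aggregate in its HAVING clause:
SELECT MAX(start_time + INTERVAL avg_time_spent SECOND) finished_time, branch
FROM mytable
WHERE name IN ('test_name')
AND status = 'SUCCESS'
GROUP BY branch
-- MAX() is an assumption; AVG(avg_time_spent) would express a different bound
HAVING MAX(avg_time_spent) BETWEEN 0 AND 1000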
I have a multicolumn datastore indexed using Oracle Text, and I am running queries using Contains keyword.
To weight the different columns differently I proceed as follows.
If the user searches for "horrible", the query issued to Oracle will look like this:
WHERE CONTAINS(indexname,
'((horrible WITHIN column1) * 3)
OR ((horrible WITHIN column2) * 2)') > 1
But to add a category filter that is also indexed, I do this:
WHERE CONTAINS(indexname,
'(((horrible WITHIN Column1) * 3)
OR ((horrible WITHIN Column2) * 2))
AND (movie WITHIN CategoryColumn)', 1) > 1
This filters by category, but it completely messes up the scoring, because Oracle Text takes the lowest score from either side of the AND operator.
Instead I would like to instruct Oracle to ignore the right-hand side of my AND.
Is there a way to get this specific part of the query ignored by the scoring?
Basically, I want to score according to
((horrible WITHIN Column1) * 3)
OR ((horrible WITHIN Column2) * 2)
but I want to select according to
'(((horrible WITHIN Column1) * 3)
OR ((horrible WITHIN Column2) * 2))
AND (movie WITHIN CategoryColumn)'
There is a mention of
Specify how the score from child elements of OR and AND operators should be merged.
in the Oracle docs, in the Alternative and User-defined Scoring section, but there are not many examples.
Using query relaxation might be simpler in this case (if it works), e.g.:
where CONTAINS (indexname,
'<query>
<textquery lang="ENGLISH" grammar="CONTEXT">
<progression>
<seq>(horrible WITHIN Column1) AND (movie WITHIN CategoryColumn)</seq>
<seq>(horrible WITHIN Column2) AND (movie WITHIN CategoryColumn)</seq>
</progression>
</textquery>
<score datatype="INTEGER" algorithm="COUNT"/>
</query>')>0;
This way you don't need to assign weights, as the score from a more relaxed query never exceeds the score of the one before it in the sequence.
I have a table listing (gameid, playerid, team, max_minions) and I want to get the players with the lowest max_minions within each team, within each game. I.e. I want a list of (gameid, team, playerid_with_lowest_minions) for each game/team combination.
I tried this:
SELECT * FROM MinionView GROUP BY gameid, team
HAVING MIN(max_minions) = max_minions;
Unfortunately, this doesn't work: it seems to select a random row from the available rows for each (gameid, team) and then apply the HAVING comparison. If the randomly selected row doesn't match, the group is simply skipped.
Using WHERE won't work either since you can't use aggregate functions within WHERE clauses.
LIMIT won't work since I have many more games and LIMIT limits the total number of rows returned.
Is there any way to do this without adding another table/view that contains (gameid, teamid, MIN(max_minions))?
Example data:
sqlite> SELECT * FROM MinionView;
gameid|playerid|team|champion|max_minions
21|49|100|Champ1|124
21|52|100|Champ2|18
21|53|100|Champ3|303
21|54|200|Champ4|356
21|57|200|Champ5|180
21|58|200|Champ6|21
64|49|100|Champ7|111
64|50|100|Champ8|208
64|53|100|Champ9|8
64|54|200|Champ0|226
64|55|200|ChampA|182
64|58|200|ChampB|15
...
Expected result (I mostly care about playerid, but included champion and max_minions here for a better overview):
21|52|100|Champ2|18
21|58|200|Champ6|21
64|53|100|Champ9|8
64|58|200|ChampB|15
...
I'm using Sqlite3 under Python 3.1 if that matters.
This is in SQL Server, hopefully the syntax works for you too:
SELECT
    MV.*
FROM
    (
        SELECT
            team, gameid, MIN(max_minions) AS maxmin
        FROM
            MinionView
        GROUP BY
            team, gameid
    ) groups
    JOIN MinionView MV ON
        MV.team = groups.team
        AND MV.gameid = groups.gameid
        AND MV.max_minions = groups.maxmin
In words: first you run the usual grouping query (the nested one). At that point you have the minimum value for each group, but you don't know which row it belongs to. To find out, you join back to the original table and match on the "keys" (team, game and the minimum) to get the other columns as well.
Note that if a team has more than one member sharing the minimal max_minions, all of those rows will be selected. If you only want one of them, that takes a bit more work; see the sketch below.
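A minimal sketch of one way to do that, using the lowest playerid as an assumed tie-break rule (any other deterministic rule would work the same way):
-- Collapse ties to one row per (gameid, team); MIN(playerid) is the assumed tie-break
SELECT MV.gameid, MV.team, MIN(MV.playerid) AS playerid
FROM MinionView MV
JOIN (
    SELECT gameid, team, MIN(max_minions) AS maxmin
    FROM MinionView
    GROUP BY gameid, team
) g ON MV.gameid = g.gameid
   AND MV.team = g.team
   AND MV.max_minions = g.maxmin
GROUP BY MV.gameid, MV.team;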
I have two T-SQL scalar functions that both perform calculations over large amounts of data (taking 'a lot' of time) and return a value, e.g. CalculateAllIncomes(EmployeeID) and CalculateAllExpenditures(EmployeeID).
I run a select statement that calls these and returns results for each Employee. I also need the balance of each employee calculated as AllIncomes-AllExpenditures.
I have a function GetBalance(EmployeeID) that calls the two above-mentioned functions and returns the result {CalculateAllIncomes(EmployeeID) - CalculateAllExpenditures(EmployeeID)}. But if I do:
Select CalculateAllIncomes(EmployeeID), CalculateAllExpenditures(EmployeeID), GetBalance(EmployeeID) .... the functions CalculateAllIncomes() and CalculateAllExpenditures() get called twice (once explicitly and once inside the GetBalance function), and so the resulting query takes twice as long as it should.
I'd like to find some better solution. I tried:
select CalculateAllIncomes(EmployeeID) AS Incomes, CalculateAllExpenditures(EmployeeID) AS Expenditures, (Incomes - Expenditures) AS Balance....
but it throws errors:
Invalid column name Incomes and
Invalid column name Expenditures.
I'm sure there has to be a simple solution, but I cannot figure it out. For some reason it seems that I am not able to use column aliases in the SELECT clause. Is that so? And if so, what could be the workaround in this case?
Thanks for any suggestions.
Forget function calls: you can probably do everything in one normal query.
Misused function calls (trying for OO-style encapsulation) force you into this situation. In addition, if you call GetBalance(EmployeeID) per row of the Employee table, then you are effectively cursoring over the table. And you've now compounded this with multiple calls too.
What you need is something like this:
;WITH cSUMs AS
(
    SELECT
        EmployeeID,
        SUM(CASE WHEN type = 'Incomes'      THEN SomeValue ELSE 0 END) AS Income,
        SUM(CASE WHEN type = 'Expenditures' THEN SomeValue ELSE 0 END) AS Expenditure
    FROM
        MyTable
    WHERE
        EmployeeID = @empID  -- optional: omit to cover all employees
    GROUP BY
        EmployeeID
)
SELECT
    EmployeeID, Income, Expenditure, Income - Expenditure AS Balance
FROM
    cSUMs
I once got a query down from a weekend to under a second by eliminating this kind of OO thinking from a bog standard set based aggregate query.
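As a side note on the alias errors in the question: a column alias defined in a SELECT list cannot be referenced from the same SELECT list, which is why (Incomes - Expenditures) fails. If you do keep the functions, the usual workaround is a derived table; here is a sketch, assuming the functions live in the dbo schema and Employee is the table being queried:
SELECT Incomes, Expenditures, Incomes - Expenditures AS Balance
FROM (
    -- Each function is evaluated once per row here, and the aliases become
    -- ordinary columns that the outer query can reference.
    SELECT dbo.CalculateAllIncomes(EmployeeID)      AS Incomes,
           dbo.CalculateAllExpenditures(EmployeeID) AS Expenditures
    FROM Employee
) x
This still calls each function once per row, so the set-based rewrite above remains the better option.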