Creating a trigger for an SNMP item in Zabbix

I have a problem defining a trigger for an SNMP item in Zabbix. The SNMP OID is 'IF-MIB::ifHCInOctets.10001' and the item key is 'inbound_traffic10001'.
The item stores Delta (simple change) and the update interval is set to 120 seconds.
I have defined a trigger with the following expression:
{Template SNMP-v2 Switch C3750:inbound_traffic10001.last()}>2000000
I want the trigger to fire if the inbound traffic on port 1 of the switch goes over 2 MB/s.
But the problem is that the traffic goes over 2 MB/s and the trigger does not fire!
Any help is appreciated.

As we discovered through the discussion, the "Store value" setting on the item should be set to Delta (speed per second). With Delta (simple change), the item stores the raw difference between two consecutive polls, whereas Delta (speed per second) divides that difference by the polling interval, giving bytes per second, which is what a threshold like 2000000 is meant to compare against.
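With bytes per second stored, the original expression then compares like with like. As a sketch in the same classic trigger syntax used in the question, a slightly sturdier variant fires only when the rate stays above 2 MB/s for five minutes:

{Template SNMP-v2 Switch C3750:inbound_traffic10001.min(5m)}>2000000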


Zabbix trigger on a drop in user count

My question: is it possible to have a trigger on that item that activates if there's a difference of xx% between the last two queries?
Example :
Query at 01:00 -> 2000 users connected
Query at 01:10 -> 2100 users, difference is positive, we don't care
Query at 01:20 -> 2050 users, -50 users, around 2-3%, no big deal
Query at 01:30 -> 800 users, around 60% fewer connections, there's something wrong here
Is it possible to have a trigger that activates when the difference is, let's say, 20% negative ?
You can use the abschange function, documented as "The amount of absolute difference between last and previous values", to alert for both positive and negative changes.
Or you can use the last function to get the latest values you need. For example:
last() is always equal to last(#1)
last(#3) is the third most recent value (not the three latest values)
In both cases you need to compute the percentage in your trigger with the usual proportion: older_value : 100 = newer_value : x, i.e. x = 100 * newer_value / older_value.
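For example, a sketch of the "20% negative" trigger in classic Zabbix trigger syntax (the host name and the item key connected_users are placeholders, not from the question):

{host:connected_users.last(#1)}<0.8*{host:connected_users.last(#2)}

This fires when the newest value is more than 20% below the one before it.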

Concurrent data insert client (golang) results in first 50 rows missing in database (postgres), but the rest of the 390 are okay

I am pulling down stock market data and inserting it into a PostgreSQL database. I have 500 stocks for 60 days of historical data. Each day has 390 trading minutes, and each minute is a row in the database table. The summary of the issue is that the first 20-50 minutes of each day are missing for each stock. Sometimes it's fewer than 50, but it is never more than 50. Every minute after that for each day is fine (EDIT: on further inspection there are missing minutes all over the place). The maximum number of missing rows matches the max number of concurrent goroutines (https://github.com/korovkin/limiter).
The hardware is set up in my home. I have a laptop that pulls the data, and an 8-year-old gaming computer repurposed as a Postgres server running on Ubuntu. They are connected through a Netgear Nighthawk X6 router and communicate over the LAN.
The laptop runs a Go program that pulls data down and performs concurrent inserts. I loop through the 60 days, for each day I loop through each stock, and for each stock I loop through each minute and insert it into the database via an INSERT statement. Inside the minute loop I use a library that limits the max number of goroutines.
I am working around it by grabbing the data again and inserting rows until the Postgres server first responds that an entry is a duplicate (violating the unique constraint on the table), at which point I break out of the loop for that stock.
However, I'd like to know what happened, as I want to better understand how these problems can arise under load. Any ideas?
limit := NewConcurrencyLimiter(50)
for _, m := range ms {
    limit.Execute(func() {
        m.Insert()
    })
}
limit.Wait()
The issue is that the closure passed to limit.Execute captures the loop variable m, which is reused on every iteration, so by the time a goroutine actually runs, m may already refer to a later element. I needed to copy the value I wanted inserted within the for loop, and I also changed the method from a receiver to a function with an input parameter:
for i := range ms {
    value := ms[i]
    limit.Execute(func() {
        Insert(value)
    })
}
limit.Wait()
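For anyone who wants to see the capture bug in isolation, here is a minimal, self-contained sketch using only the standard library (the buggy output is nondeterministic, and Go 1.22 changed loop-variable scoping so the first loop is no longer buggy there):

package main

import (
    "fmt"
    "sync"
)

func main() {
    ms := []string{"a", "b", "c"}
    var wg sync.WaitGroup

    // Buggy (before Go 1.22): every closure shares the one loop variable m,
    // so the goroutines usually observe its final value.
    for _, m := range ms {
        wg.Add(1)
        go func() {
            defer wg.Done()
            fmt.Println("buggy:", m) // frequently prints "c" three times
        }()
    }
    wg.Wait()

    // Fixed: copy the element into a fresh variable on each iteration.
    for i := range ms {
        value := ms[i]
        wg.Add(1)
        go func() {
            defer wg.Done()
            fmt.Println("fixed:", value) // prints a, b, c (in some order)
        }()
    }
    wg.Wait()
}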

KSQL Hopping Window : accessing only oldest subwindow

I am tracking the rolling sum of a particular field by using a query which looks something like this :
SELECT id, SUM(quantity) AS quantity from stream \
WINDOW HOPPING (SIZE 1 MINUTE, ADVANCE BY 10 SECONDS) \
GROUP BY id;
Now, for every input tick, it seems to return six different aggregated values, which I guess are for the following time periods:
[start, start+60] seconds
[start+10, start+60] seconds
[start+20, start+60] seconds
[start+30, start+60] seconds
[start+40, start+60] seconds
[start+50, start+60] seconds
What if I am interested in getting only the [start, start+60] seconds result for every tick that comes in? Is there any way to get ONLY that?
Because you specify a hopping window, each record falls into multiple windows and all windows need to be updated when processing a record. Updating only one window would be incorrect and the result would be wrong.
Compare the Kafka Streams docs about hopping windows (Kafka Streams is KSQL's internal runtime engine): https://docs.confluent.io/current/streams/developer-guide/dsl-api.html#hopping-time-windows
Update
Kafka Streams is adding proper sliding window support via KIP-450 (https://cwiki.apache.org/confluence/display/KAFKA/KIP-450%3A+Sliding+Window+Aggregations+in+the+DSL). This should make it possible to add sliding windows to ksqlDB later, too.
I was in a similar situation, and creating a user-defined function (UDF) that keeps only the window whose collected list of values spans the whole window duration appears to be a promising track.
In the UDF, accept a List holding the collected values of one of your aggregated columns (via COLLECT_LIST). If the list size equals the number of periods in the hopping window, return the aggregate; otherwise return null.
From this, create a table that selects the data and transforms it with the UDF, as sketched below.
Then create a table from this latest table and filter out the null values on the transformed column.
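A rough sketch of that pipeline, assuming exactly one input tick per 10-second period (so a full 1-minute window collects six values) and a hypothetical UDF FULL_WINDOW_SUM that sums a list of exactly six values and returns null for shorter lists:

CREATE TABLE windowed AS \
  SELECT id, COLLECT_LIST(quantity) AS quantities FROM stream \
  WINDOW HOPPING (SIZE 1 MINUTE, ADVANCE BY 10 SECONDS) \
  GROUP BY id;

CREATE TABLE transformed AS \
  SELECT id, FULL_WINDOW_SUM(quantities) AS quantity FROM windowed;

CREATE TABLE full_windows AS \
  SELECT id, quantity FROM transformed WHERE quantity IS NOT NULL;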

Grafana Singlestat: select with timerange

I am trying to use the SingleStat Plugin of Grafana to add an online/offline indicator to one of my dashboards.
What I have so far is a query against an InfluxDB datasource.
What I am missing is the option to define a time range for this query. Let's say I want the count() of the last 30 minutes: if the count is 0, I know that the server is offline; if the count is > 0, it is online. (For example, my server adds a new entry every 20 minutes, so if I don't have an entry in the last 30 minutes, I know it must be offline.)
So is it possible to define a query with a time range? If yes, how?
UPDATE
This is what I have now, but I get an error which says a.form is undefined. Also, even if I have an entry in the last 35 minutes, it doesn't switch to online.
The singlestat panel uses, by default, the time range of the dashboard it is placed on.
For your case, make use of the 'Override relative time' option on the Time range tab and set it to "30m".
When using the count as you described, turn coloring on and set the threshold to 1. This will change the coloring when no entry is present (count is 0) in the last 30 minutes.
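For reference, the underlying InfluxQL for such a heartbeat check might look like the sketch below; the measurement and field names are assumptions, and $timeFilter is the placeholder Grafana substitutes with the panel's effective time range (here, the overridden last 30 minutes):

SELECT count("value") FROM "heartbeat" WHERE $timeFilter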

Get only unretrieved rows in a DB2 SELECT

I have a BPM application where I poll rows from a DB2 database every 5 minutes with a scheduler R1, using the query below:
SELECT * FROM Table WHERE STATUS = 'New'
Based on the rows returned, I do some processing and then change the status of these rows to 'Read'.
But this processing takes more than 5 minutes, and in the meantime scheduler R1 runs again and picks up some of the rows that were already picked up in the last run.
How can I ensure that every scheduler run picks up only the rows that were not selected in the last run? What changes do I need to make to my select statement? Please help.
How can I ensure that every scheduler picks up the rows which were not selected in last run
You will need to make every scheduler aware of what was selected by other schedulers. You can do this, for example, by locking the selected rows (SELECT ... FOR UPDATE). Of course, you will then need to handle lock timeouts.
Another option, allowing for better concurrency, would be to update the record status before processing the records. You might introduce an intermediate status, something like 'In progress', and include the status in the query condition, as sketched below.
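A sketch of that second approach in DB2 SQL (the table and status names follow the question; the SELECT ... FROM FINAL TABLE form lets one statement both claim the batch and return it, so treat this as an outline rather than tested code):

SELECT * FROM FINAL TABLE (
    UPDATE Table
    SET STATUS = 'In progress'
    WHERE STATUS = 'New'
);

-- ...process the returned rows, then mark them as done:
UPDATE Table SET STATUS = 'Read' WHERE STATUS = 'In progress';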