Fetch date of unexpected system shutdown from EventData - event-log

Windows Event ID 6008 is the Unexpected Shutdown event (visible in Event Viewer under the System log). The problem is that the time of the unexpected shutdown is written in a byte array called EventData.
I want to fetch this date from EventData in my C# application, but I don't know how the time is encoded in the byte array.

Related

Session window: how is the gap calculated?

I am trying to understand this schema of session windows:
If I understand correctly, we have four events:
12:00:00 - an event started at this time
12:00:25 - another event ended
12:00:30 - an event started at this time
12:00:50 - another event ended
How do we get a gap of 15 seconds?
Could you explain what start/end means - is it one event or two different ones?
Events don't have a start or end time; they only have a single scalar event timestamp.
If you use session windows, events whose timestamps differ by less than the gap parameter fall into the same window. The gap is not computed from the data; it is a parameter you configure on the window.
Thus, the start and end of a session window always correspond to an event.
Note that session windows are not designed for the case when you have dedicated start/end events in your input stream. Think of session windows more as a "session detection" scenario, i.e., you don't have sessions in your input stream and want to sessionize your input data based on the record timestamps.
Check out the docs for more details: https://docs.confluent.io/current/streams/developer-guide/dsl-api.html#session-windows
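Applying that rule to the four timestamps above with a 15-second gap (and assuming all four records share the same key): 12:00:25 and 12:00:30 are only 5 seconds apart, so they fall into the same session, while 12:00:00 (25 seconds before the next event) and 12:00:50 (20 seconds after the previous one) each form a session of their own. A minimal Kafka Streams sketch of such a 15-second session window (topic name and types are placeholders; the exact SessionWindows factory method varies slightly between Kafka versions):

```java
import java.time.Duration;

import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.SessionWindows;

public class SessionWindowSketch {

    public static void main(final String[] args) {
        final StreamsBuilder builder = new StreamsBuilder();

        // "events" is a placeholder topic; keys and values are assumed to be
        // Strings (configured via default serdes, not shown here).
        final KStream<String, String> events = builder.stream("events");

        events.groupByKey()
              // Records of the same key whose timestamps are less than
              // 15 seconds apart are merged into the same session window.
              .windowedBy(SessionWindows.with(Duration.ofSeconds(15)))
              .count()
              .toStream()
              // Window start/end are the timestamps of the first and last
              // event in the session -- they always correspond to actual events.
              .foreach((windowedKey, count) -> System.out.println(
                      windowedKey.key() + " [" + windowedKey.window().start() + ","
                              + windowedKey.window().end() + "] -> " + count));

        builder.build(); // creating and starting the KafkaStreams instance is omitted
    }
}
```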

Kafka consumer reading junk value from Event Hub

I am reading byte array content from Event Hub using a consumer, and I get a junk value, followed by the count of items (byte arrays), and then the byte array itself.
E.g. value: [-95, 4, 50, 48, 50, 48]
I want this junk -95 to be removed. Is it a Kafka issue or an Event Hub issue?
If so, how do I resolve it?
Your consumers receive whatever is sent to Event Hubs. Event Hubs never changes the payload of an EventData and relays it as is. Thus, if you are seeing content that looks changed from the consumer's perspective, I strongly recommend checking the senders and making sure they send the correct payload.
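One way to narrow this down is to consume the raw bytes with a plain ByteArrayDeserializer and compare them with the payload the sender actually put into the EventData body; if the extra leading byte is already present here, it was added on the producing side (for example by a serializer), not by Kafka or Event Hubs. A rough sketch, with the bootstrap server and topic name as placeholders and the Event Hubs SASL/SSL settings omitted:

```java
import java.time.Duration;
import java.util.Arrays;
import java.util.Collections;
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.ByteArrayDeserializer;

public class RawBytesDump {

    public static void main(final String[] args) {
        final Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG,
                  "my-namespace.servicebus.windows.net:9093"); // placeholder
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "raw-dump");
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, ByteArrayDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, ByteArrayDeserializer.class.getName());
        // The SASL/SSL properties required by Event Hubs are omitted here.

        try (KafkaConsumer<byte[], byte[]> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("mytopic")); // placeholder topic

            // A single poll is enough for a quick inspection; a real tool would poll in a loop.
            final ConsumerRecords<byte[], byte[]> records = consumer.poll(Duration.ofSeconds(5));
            for (final ConsumerRecord<byte[], byte[]> record : records) {
                // Prints the payload exactly as received, byte by byte (signed values,
                // so the output format matches the [-95, 4, 50, 48, 50, 48] example).
                System.out.println(Arrays.toString(record.value()));
            }
        }
    }
}
```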

StreamsException: Extracted timestamp value is negative, which is not allowed

This could look like a duplicate of Error in Kafka Streams using kafka-node - negative timestamp, but it certainly is not. My Kafka Streams app does some transformation logic on each message and forwards it to a new topic. There is no time-based aggregation/processing in the app, so there is no need to use any custom timestamp extractor. The app was running fine for several days, but all of a sudden it threw a negative timestamp exception.
Exception in thread "StreamThread-4" org.apache.kafka.streams.errors.StreamsException: Extracted timestamp value is negative, which is not allowed.
After throwing this exception from all StreamThreads (10 in total), the app was essentially frozen, as there was no further progress on the stream for several hours. No exception was thrown after that. When I restarted the app, it started to process only the newly arriving messages.
Now the question is: what happened to the messages that came in between (after the exception was thrown and before the app was restarted)? If those missing messages had no embedded timestamp (highly unlikely, as nothing changed in the broker or the producer), shouldn't the app have thrown an exception for each such message? Or does the app stop the stream's progress the first time it detects a negative timestamp in a message? Is there a way to handle this situation so that the app can keep the stream progressing, even after detecting a negative timestamp? My app uses Kafka Streams library version 0.10.0.1-cp1.
Note: I could easily plug in a custom timestamp extractor that checks for a negative timestamp in each message, but that is a lot of unnecessary overhead for my app. All I want to understand is why the stream did not progress after detecting a message with a negative timestamp.
Even if you do not have any time-based operator, a Kafka Streams application checks whether the timestamps returned by the timestamp extractor are valid, because timestamps are used to determine the processing order of records from different partitions, to ensure that records are processed in order and that all partitions are consumed in a time-aligned manner.
If a negative timestamp is detected, the application (or actually the corresponding thread) dies. Unfortunately, it is currently not possible to recover from such an exception, and you would need to restart your application. See also the Confluent FAQs: http://docs.confluent.io/3.1.1/streams/faq.html#invalid-timestamp-exception
If your application dies and you restart it, it will resume processing where it left off. Unfortunately, in Kafka 0.10.0.1 there is a bug (fixed in the upcoming release 0.10.2): in case of failure, an incorrect offset can get committed and the application "steps over" some records. I assume this happened in your case, and if only some records had an invalid timestamp, those records might have been skipped, allowing your application to resume after the restart. This behavior is actually a bug -- without it, Kafka Streams would try to process the records with invalid timestamps again and again, failing every time, until you provide a custom timestamp extractor that fixes the problem by returning a valid timestamp.
How to fix it:
The correct fix would be to provide a custom timestamp extractor that never returns an invalid (i.e., negative) timestamp.
I have no explanation for why you got invalid timestamps, though... This is quite strange, and you might want to investigate your producer setup and try to figure out whether there is any possibility that your producer writes an invalid timestamp (even if this is unlikely -- I have no other idea what the root cause of the problem could be).
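For illustration, a minimal sketch of such an extractor, assuming the single-argument TimestampExtractor interface of the 0.10.0.x line (later releases add a second previousTimestamp argument); falling back to wall-clock time is just one possible policy:

```java
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.streams.processor.TimestampExtractor;

// Defensive extractor: use the embedded record timestamp when it is valid,
// otherwise fall back to wall-clock time so the stream thread does not die.
public class WallclockFallbackTimestampExtractor implements TimestampExtractor {

    @Override
    public long extract(final ConsumerRecord<Object, Object> record) {
        final long timestamp = record.timestamp();
        return timestamp >= 0 ? timestamp : System.currentTimeMillis();
    }
}
```

It would be registered via the timestamp.extractor property (StreamsConfig.TIMESTAMP_EXTRACTOR_CLASS_CONFIG) in the Streams configuration.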
Further remarks:
In the next release (0.10.2), handling invalid timestamps is simplified, and Kafka Streams provides more built-in timestamp extractors that handle records with invalid timestamps differently. For example, this allows you to auto-skip records with invalid timestamps instead of raising an error (the current behavior). For more details see KIP-93: https://cwiki.apache.org/confluence/display/KAFKA/KIP-93%3A+Improve+invalid+timestamp+handling+in+Kafka+Streams
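Once on 0.10.2, the auto-skip behavior can be enabled through configuration alone, for example with the built-in LogAndSkipOnInvalidTimestamp extractor. A sketch, assuming the 0.10.2 property name (later releases rename it to default.timestamp.extractor):

```java
import java.util.Properties;

import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.processor.LogAndSkipOnInvalidTimestamp;

public class SkipInvalidTimestampsConfig {

    public static Properties streamsConfig() {
        final Properties props = new Properties();
        // ... application.id, bootstrap.servers, serdes, etc. ...

        // Log a warning and drop records with an invalid (e.g. negative)
        // timestamp instead of killing the stream thread.
        props.put(StreamsConfig.TIMESTAMP_EXTRACTOR_CLASS_CONFIG,
                  LogAndSkipOnInvalidTimestamp.class.getName());
        return props;
    }
}
```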

Error with pseudo clock in drools when I have two rules matching different events

I want to test Drools 6.3 with a scenario, but I have a problem in a particular situation.
Here is my scenario in a simple form:
I have two systems, A and B, in a simulated network that generate events. I want to write two rules to find patterns in these events. The two rules for testing this scenario are:
declare A
    @role( event )
    @timestamp( timestampA )
end

declare B
    @role( event )
    @timestamp( timestampB )
end

rule "1"
when
    accumulate( A() over window:time( 10s ); s : count(1); s > 1 )
then
    System.out.println( " Rule 1 matched " );
end

rule "2"
when
    B()
then
    System.out.println( " Rule 2 matched " );
end
The timestamp of each event is the timestamp from the log generated by each system, and it is applied when the event is received by Drools and inserted into working memory.
I'm using STREAM mode with the pseudo clock, because events from system B arrive with a 25-minute delay due to network congestion, so I have to adjust the session clock manually. The session clock is set to the timestamp of each event as it is inserted into the session, and all rules are fired after every insertion.
When the order of receiving and inserting events is as below, the rules match correctly:
Event A received at 10:31:21 – Session clock : 10:31:21 – insert A and fire
Event A received at 10:31:23 - Session clock : 10:31:23 – insert A and fire
Rule 1 matched
Event B received at 10:06:41 - Session clock : 10:06:41 – insert B and fire
Rule 2 matched
But when the order of receiving and inserting events is as below, the matching is incorrect:
Event A received at 10:31:21 – Session clock : 10:31:21 – insert A and fire
Event B received at 10:06:41 - Session clock : 10:06:41 – insert B and fire
Rule 2 matched
Event A received at 10:31:23 - Session clock : 10:31:23 – insert A and fire
When the second A event is inserted, two A events within the last 10 s are in working memory, but rule 1 does not match. Why?
What you are doing is somewhat in conflict with the assumptions underlying the CEP (Complex Event Processing) support of Drools. STREAM mode implies that events should be inserted in the order of their timestamps, irrespective of their origin. Setting the pseudo clock back and forth in big jumps is another good way to confuse the engine.
Don't use STREAM mode or window:time, and forget about session clocks.
You have facts containing timestamps, and you can easily write your rules by referring to these timestamps, either using plain old arithmetic or by applying the temporal operators (which are nothing but syntactic sugar for testing the relation of long (java.lang.Long) values).

In Linux, when reading an I2C-based RTC, who handles counter carry-over conditions?

When reading multiple bytes from an I2C-based RTC, it seems possible that, while the bytes are being read one at a time, one of the values may increment.
For instance, if the time is:
2014-12-31 23:59:59
as you're reading this value, the time may roll-over to
2015-01-01 00:00:00
so you may actually read:
2015-01-01 23:59:59
(depending on which values you read first).
So, is it the rtc driver's responsibility to ensure a reliable read?
Reading the datasheet for the DS1337, page 9 states:
When reading or writing the time and date registers, secondary (user)
buffers are used to prevent errors when the internal registers update.
When reading the time and date registers, the user buffers are
synchronized to the internal registers on any start or stop and when
the register pointer rolls over to zero.
Therefore, if reading (or writing) occurs with a single I2C operation (without wrapping around), the RTC device guarantees that everything is synchronized.
[I haven't examined the datasheets for any other devices, but I assume they all work similarly.]