raw.githubusercontent.com doesn't update for more than 12 hours - GitHub

This has happened for the second time. The first time I waited about a day, but now it has happened again, and I don't want to wait that long again for raw.githubusercontent.com to update.
Previously, raw.githubusercontent.com was updated successfully within 5 minutes, and I also found information online about a delay of about 5 minutes. I tried to find a solution, but without success.

Related

How do I consume Kafka messages older than x minutes, but all messages on restart?

I need some grace period before consuming Kafka messages.
My approach is to use a hopping window.
E.g. if I want to consume a message after 5 minutes, the hopping window would be 6 minutes and would advance by 1 minute.
Then I'll use a filter to get data older than 5 minutes (there's also a timestamp in the message itself). Hence I will process data from minute 0 to minute 1. Then the hopping window jumps 1 minute forward and I process data from minute 1 to minute 2, and so on.
However, I need to consume all messages when starting the application, not just those from the last 6 minutes.
I'm also open to other suggestions regarding the 5-minute grace period.
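A minimal sketch of that hopping-window-plus-filter idea in the Kafka Streams Scala API (a recent 3.x API is assumed; the topic names are hypothetical, and a simple count stands in for the real per-window processing):
import java.time.Duration
import org.apache.kafka.streams.scala.StreamsBuilder
import org.apache.kafka.streams.scala.ImplicitConversions._
import org.apache.kafka.streams.scala.serialization.Serdes._
import org.apache.kafka.streams.kstream.TimeWindows

val builder = new StreamsBuilder()

// 6-minute windows that advance by 1 minute, as described in the question
val hopping = TimeWindows
  .ofSizeWithNoGrace(Duration.ofMinutes(6))
  .advanceBy(Duration.ofMinutes(1))

builder
  .stream[String, String]("input-topic")   // hypothetical topic name
  .groupByKey
  .windowedBy(hopping)
  .count()
  .toStream
  // keep only windows that ended at least 5 minutes ago, i.e. data past the grace period
  .filter { case (windowedKey, _) => windowedKey.window().end() <= System.currentTimeMillis() - 5 * 60 * 1000 }
  .map { case (windowedKey, count) => (windowedKey.key(), count.toString) }
  .to("counts-topic")                      // hypothetical topic name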
I've made some wrong assumptions in the question above. All the data in the topic will be consumed, no matter how old it is.
E.g. it's 12:10 now and we start the Kafka Stream.
The data in the topic that we want to consume was pushed at 12:00, and we have a window of 6 minutes.
I was expecting only the data from 12:04 to 12:10 (6 minutes) to be consumed, and everything before that to be lost.
But the 12:00 data will be consumed anyway; it just falls into an older window.

Why does Gatling still send requests when scenario injection is on nothingFor?

So I have the following scenario:
setUp(
  scenario.inject(
    nothingFor(30 seconds), // 1
    rampUsers(10) during (30 seconds),
    nothingFor(1 minute),
    rampUsers(20) during (30 seconds)
  ).protocols(httpconf)
).maxDuration(3 minutes)
I expected this scenario to start by doing nothing for 30 seconds, ramp up 10 users over 30 seconds, do nothing (pause) for a minute, and finish by ramping up 20 users over 30 seconds.
But what I got was a 30-second pause, a ramp-up of 10 users over 30 seconds, a steady state of 10 users for a minute, and then an additional ramp-up of 20 users (I ended up with 30 users running).
What am I missing here?
The injection profiles only specify when users start a scenario, not how long they're active for - that will be determined by how long it takes for a user to finish the scenario. So when you ramp 10 users over 30 seconds one user will start the scenario every 3 seconds, but they keep running until they finish (however long that is). I'm guessing your scenario takes more than a couple of minutes for a user to complete.
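To make that concrete, here is a minimal sketch of a scenario body whose own duration keeps users active across the injection steps (the scenario name, request names, and endpoint are hypothetical):
import io.gatling.core.Predef._
import io.gatling.http.Predef._
import scala.concurrent.duration._

// each injected user runs this whole body; inject() only controls when they start
val longRunningUser = scenario("long running user")
  .exec(http("first request").get("/some/endpoint"))   // hypothetical endpoint
  .pause(2.minutes)                                     // keeps the user active well past the next injection step
  .exec(http("second request").get("/some/endpoint"))
With a body like this, rampUsers(10) during (30 seconds) still starts only one user every 3 seconds, but those users remain active through the nothingFor(1 minute) and the second ramp, which is why requests keep flowing and the concurrency stacks up to 30.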

Select Prometheus alerts newer than a given time

I am working with Grafana, trying to show a list of pods that are triggering a custom Prometheus alert.
This query does the trick:
sum(ALERTS{alertname="myCustomAlert"}) BY (pod_name)
The problem is that it lists all the alerts and doesn't seem to be affected when I change the time interval to show only the ones fired in the last 5 minutes, or the last hour.
Is there any way to limit the alert list by time? Thanks a lot!
That expression will produce the number of alerts by pod_name firing at the current time (just as you would expect up{instance="foo"} to tell you whether instance foo is up now, whether you're looking at a dashboard that shows the last 5 minutes or the last hour).
If you want to see the values change over time, you could graph the expression; then you'd see it change over time, and when the alert started and stopped firing for each pod.
And if you want the value at some past time, simply set the end time of the Grafana dashboard range to that time (e.g. if your dashboard was showing the time range between 2 PM and 3 PM on January 1st, then your query would return the alerts firing at 3 PM on January 1st).
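If you specifically want only the alerts that started firing recently, one possible approach (assuming your Prometheus version exposes the internal ALERTS_FOR_STATE series, whose value is the time at which each alert became active) is a query along these lines:
ALERTS_FOR_STATE{alertname="myCustomAlert"} > time() - 300
which keeps only the alerts that became active within the last 5 minutes (300 seconds).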

Does `persistentEntityRegistry.eventStream` really take at least ~8-12 seconds to get triggered?

I just wanted to know the possible reasons why my persistentEntityRegistry.eventStream takes approximately 8-12 seconds before events are emitted.
I have just figured out that it's Cassandra that's introducing the delay. What I've done is set cassandra-query-journal.eventual-consistency-delay to 200ms.
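For reference, that override in application.conf looks roughly like this (a sketch; check the default for your akka-persistence-cassandra version before lowering it):
cassandra-query-journal {
  eventual-consistency-delay = 200ms
}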
My references are the following:
https://groups.google.com/forum/#!topic/lagom-framework/cLXf6r5Ouw4
https://groups.google.com/forum/#!topic/akka-user/TH8hL-A8I4k/discussion

Designs for counting occurrences of events in stream processing?

The following discussion is in the context of Apache Flink:
Imagine that we have a keyed stream whose key is the event id and whose event time is the event timestamp, and we want to calculate, for each event, how many events arrived within the preceding 10 minutes.
The problems that need to be solved are:
How to design the window?
We could create a 10-minute window after each event arrives, but this means that every event would see a delay of 10 minutes while waiting for its window to close.
We could instead create a 10-minute window that takes the timestamp of each event as the maximum timestamp of the window, which means we don't need to wait 10 minutes, because we count the elements from the 10 minutes before the element arrived. But this kind of window is not easy to define, as far as I know.
How to deal with memory or other resource issues? Even if we succeed in creating such a window, the event ids may be very diverse, so there will be many windows like this. How does the system keep their state in memory? There is a big risk of running out of memory.
Maybe there are some problems that I haven't mentioned here, or maybe there are good solutions other than windows (e.g. patterns). If you have a good solution, please give me a clue. Thank you.
You could do this with a GlobalWindow, a Trigger that fires on every event, and an Evictor that removes events that are more than 10 minutes old before counting the remaining events. (A naive implementation could easily perform very poorly, however.)
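A minimal sketch of that combination in the Flink Scala API (the Event case class, its field names, and the function name are assumptions; event-time timestamps/watermarks are assumed to be assigned upstream):
import org.apache.flink.streaming.api.scala._
import org.apache.flink.streaming.api.windowing.assigners.GlobalWindows
import org.apache.flink.streaming.api.windowing.evictors.TimeEvictor
import org.apache.flink.streaming.api.windowing.time.Time
import org.apache.flink.streaming.api.windowing.triggers.CountTrigger
import org.apache.flink.streaming.api.windowing.windows.GlobalWindow
import org.apache.flink.util.Collector

case class Event(id: String, timestamp: Long)   // hypothetical event type

def countsOverLast10Minutes(events: DataStream[Event]): DataStream[(String, Int)] =
  events
    .keyBy(_.id)
    .window(GlobalWindows.create())
    .trigger(CountTrigger.of[GlobalWindow](1))    // fire the window function on every incoming event
    .evictor(TimeEvictor.of(Time.minutes(10)))    // evict elements older than 10 minutes before the function runs
    .apply { (id: String, window: GlobalWindow, elements: Iterable[Event], out: Collector[(String, Int)]) =>
      out.collect((id, elements.size))            // count of this id's events in the last 10 minutes
    }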
Yes, this may require keeping a lot of state -- you'll be keeping every event from the past 10 minutes (well, you only need to store the timestamp from each event). If you set up the RocksDB state backend then Flink will spill to disk if need be, but with an obvious performance penalty. It's probably better to use a cluster large enough to hold 10 minutes of traffic in memory. Even at one million events per second, each with a 32-bit timestamp, that's only 2.4 GB in 10 minutes (1 million events per second x 600 seconds x 4 bytes per event) -- that doesn't seem like a problem at all.