How to get the number of messages per second from MSMQ without using a performance counter

Is there a procedure to get the number of messages passing through a queue per second, or the number of messages a queue receives per second, without using performance counters, via System.Messaging.MessageQueue?

Related

How to read a specific number of messages per minute from an Apache Kafka message queue

How can I read a specific number of messages per minute from an Apache Kafka message queue? For example, imagine that there are 100 messages in the queue; how can I get only 5 messages read per minute? I don't know how to set "max.partition.fetch.bytes", as the byte size is not the same in every message.
Is there a way to set this dynamically so that 5 messages are read per minute?

Consume only a specific number of messages from the queue

Is it possible to consume only a specific number of messages, like 10 or 50 or 100, out of the 1000 that are in the queue? I was looking at the 'fetch.max.bytes' config, but it seems to govern message size rather than the number of messages.
I don't know how to set "max.partition.fetch.bytes", as the byte size is not the same in every message.
Is there a way to set this dynamically to read 10 or 50 or 100 messages per minute?
Or is there any other way I can do this?
Note: please note that I cannot use the "max.poll.records" option.
Per minute? No, not really, because you have little control as a consumer client over producer speeds or even network speeds.
If you just want a static number, seek the consumer to a specific partition offset and simply count the number of records consumed until you're satisfied with the number, then commit the offsets back (or don't).
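
A minimal sketch of that suggestion, assuming the plain Java consumer API (the broker address, topic, partition, and offsets below are placeholders):

```java
import org.apache.kafka.clients.consumer.*;
import org.apache.kafka.common.TopicPartition;
import java.time.Duration;
import java.util.Collections;
import java.util.Properties;

public class FixedCountConsumer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // assumption
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "fixed-count-group");       // assumption
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG,
                  "org.apache.kafka.common.serialization.StringDeserializer");
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG,
                  "org.apache.kafka.common.serialization.StringDeserializer");
        props.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, "false"); // we commit manually

        int target = 50;                                        // consume exactly this many
        TopicPartition tp = new TopicPartition("my-topic", 0);  // placeholder topic/partition

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.assign(Collections.singletonList(tp)); // manual assignment, no rebalance
            consumer.seek(tp, 0L);                          // seek to a specific offset

            int consumed = 0;
            long nextOffset = 0L;
            outer:
            while (consumed < target) {
                for (ConsumerRecord<String, String> rec : consumer.poll(Duration.ofSeconds(1))) {
                    System.out.printf("%d: %s%n", rec.offset(), rec.value());
                    nextOffset = rec.offset() + 1;
                    if (++consumed == target) break outer; // stop once satisfied with the count
                }
            }
            // Commit only what was actually processed -- or skip this to re-read next time.
            consumer.commitSync(Collections.singletonMap(tp, new OffsetAndMetadata(nextOffset)));
        }
    }
}
```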

When will the producer trigger sending a request?

If I send just one record on the producer side and then wait, when will the producer send the record to the broker?
In the Kafka docs, I found the config called "linger.ms", which says:
once we get batch.size worth of records for a partition it will be sent immediately regardless of this setting, however if we have fewer than this many bytes accumulated for this partition we will 'linger' for the specified time waiting for more records to show up.
Based on the docs above, I have two questions:
1. If the producer accumulates records whose size reaches batch.size, does it immediately send a request containing only one batch to the broker? But as we know, one request can contain many batches, so how does that happen?
2. Does it mean that even if the accumulated records do not add up to batch.size, the producer will still send a request to the broker after waiting linger.ms?
In Kafka, the lowest unit of sending is a record (a KV pair).
The Kafka producer attempts to send records in batches in order to optimize data transmission, so a single push from the producer to the cluster -- to the partition's broker leader, to be precise -- can contain multiple records.
Moreover, batching only ever applies within a given partition. Records produced to different partitions cannot be batched together, though one request can carry multiple batches.
There are a few parameters which influence the batching behaviour, as described in the documentation:
buffer.memory - The total bytes of memory the producer can use to buffer records waiting to be sent to the server. If records are sent faster than they can be delivered to the server the producer will block for max.block.ms after which it will throw an exception.
batch.size - The producer will attempt to batch records together into fewer requests whenever multiple records are being sent to the same partition. This helps performance on both the client and the server. This configuration controls the default batch size in bytes. No attempt will be made to batch records larger than this size. Requests sent to brokers will contain multiple batches, one for each partition with data available to be sent.
linger.ms - The producer groups together any records that arrive in between request transmissions into a single batched request. Normally this occurs only under load when records arrive faster than they can be sent out. However in some circumstances the client may want to reduce the number of requests even under moderate load. This setting accomplishes this by adding a small amount of artificial delay: that is, rather than immediately sending out a record the producer will wait for up to the given delay to allow other records to be sent so that the sends can be batched together. This can be thought of as analogous to Nagle's algorithm in TCP. This setting gives the upper bound on the delay for batching: once we get batch.size worth of records for a partition it will be sent immediately regardless of this setting, however if we have fewer than this many bytes accumulated for this partition we will 'linger' for the specified time waiting for more records to show up. This setting defaults to 0 (i.e. no delay). Setting linger.ms=5, for example, would have the effect of reducing the number of requests sent but would add up to 5ms of latency to records sent in the absence of load.
From the above documentation you can see that linger.ms is an artificial delay to wait when there are not enough bytes to transmit; if the producer accumulates enough bytes before linger.ms elapses, the request is sent anyway.
On top of that, batching is also influenced by max.request.size:
max.request.size - The maximum size of a request in bytes. This setting will limit the number of record batches the producer will send in a single request to avoid sending huge requests. This is also effectively a cap on the maximum record batch size. Note that the server has its own cap on record batch size which may be different from this.
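
As an illustration of how these settings fit together, here is a hedged producer sketch in Java (the broker address, topic, and the specific values are only examples): with linger.ms=5, a lone small record waits up to 5 ms for company, while a full 16 KB batch for a partition is sent immediately.

```java
import org.apache.kafka.clients.producer.*;
import java.util.Properties;

public class BatchingProducer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");  // assumption
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG,
                  "org.apache.kafka.common.serialization.StringSerializer");
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG,
                  "org.apache.kafka.common.serialization.StringSerializer");
        props.put(ProducerConfig.BATCH_SIZE_CONFIG, 16384);         // 16 KB batch per partition
        props.put(ProducerConfig.LINGER_MS_CONFIG, 5);              // wait up to 5 ms for a fuller batch
        props.put(ProducerConfig.BUFFER_MEMORY_CONFIG, 33554432L);  // 32 MB total accumulator memory
        props.put(ProducerConfig.MAX_REQUEST_SIZE_CONFIG, 1048576); // 1 MB cap on a single request

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // A single record smaller than batch.size sits in the accumulator for up
            // to linger.ms before the sender thread ships it to the partition leader.
            producer.send(new ProducerRecord<>("my-topic", "key", "value")); // placeholder topic
            producer.flush(); // flush() forces the send immediately, without waiting out linger.ms
        }
    }
}
```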

Real-time windowed counters using Kafka

I would like to build an efficient system for real-time windowed counters of events, for example the number of clicks per country in the last 30 minutes. My idea, using Kafka and Cassandra, is the following:
when a click event e happens at time t, an increment is sent to an accumulator (I would like to use Cassandra counters for this purpose); at the same time, this event should generate a decrement event scheduled for time t+30min, which marks the exit of event e from the window of interest. This decrement is then applied to the accumulator at time t+30min. Querying the accumulator at any time then gives the current, correct counter value.
I am not sure how to achieve this using Kafka. I thought about two approaches:
1. Clicks are sent to topic C; a consumer reads from C and produces events to a topic Cdelayed with message (Tdelay, key); another consumer reads from Cdelayed and checks the first message read: if the current time is at or after the timestamp in the message, the message is consumed and the decrement is sent to the accumulator; otherwise the consumer waits.
2. A main consumer polls from C, and on performing its first read triggers a delayed consumer which sleeps 30 minutes before starting to read from the earliest offset; the latter is responsible for decrements, the former for increments.
Which of the two solutions is better? Am I using the Kafka API correctly if I (1) block a consumer or (2) delay a consumer?
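
Not an authoritative answer, but the delayed consumer in approach 1 could be sketched roughly as below with the plain Java client, assuming the upstream producer stamps each Cdelayed record with its due time Tdelay as the record timestamp; the broker address and the decrement hook are assumptions:

```java
import org.apache.kafka.clients.consumer.*;
import java.time.Duration;
import java.util.Collections;
import java.util.Properties;

public class DelayedDecrementConsumer {
    public static void main(String[] args) throws InterruptedException {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // assumption
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "decrementer");             // assumption
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG,
                  "org.apache.kafka.common.serialization.StringDeserializer");
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG,
                  "org.apache.kafka.common.serialization.StringDeserializer");
        props.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, "false");
        props.put(ConsumerConfig.MAX_POLL_RECORDS_CONFIG, "1"); // never hold more than one undue record

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("Cdelayed")); // topic from the question
            while (true) {
                for (ConsumerRecord<String, String> rec : consumer.poll(Duration.ofSeconds(1))) {
                    long dueAt = rec.timestamp(); // Tdelay stamped by the upstream producer
                    long wait = dueAt - System.currentTimeMillis();
                    if (wait > 0) {
                        // Blocking here touches question (2): it works, but a wait longer
                        // than max.poll.interval.ms triggers a rebalance; pause()/resume()
                        // on the partition would be the safer way to delay.
                        Thread.sleep(wait);
                    }
                    decrement(rec.key()); // apply to the accumulator, e.g. a Cassandra counter
                    consumer.commitSync();
                }
            }
        }
    }

    static void decrement(String key) { /* hypothetical Cassandra counter update */ }
}
```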

Apache NiFi - Asynchronously process messages after Kafka consumer

Currently we are using Apache NiFi to consume messages via a Kafka consumer. The output of the Kafka consumer is connected to a DB processor which takes the messages from the queue (from the consumer) and runs a stored proc / processing on them. So the DB processor works on one message from the queue at a time; I can set the DB processor to run n threads in parallel, but each thread still works on one message at a time.
I am looking to do something like the following:
1. A processor after the consumer will just take messages from the queue and wait until it has a batch totalling, say, 1000 messages.
2. As soon as it has 1000 messages, OR 60 seconds have passed with the message count still < 1000, it pushes them to another processor (which can be the DB stored proc) to run business logic on that group of messages.
3. Mainly, I want the above to be multithreaded, i.e. if we get 3000 messages, the first processor should read them in 3 batches and push them to the DB processor in parallel.
So I want to know if there is any such processor which can do point 2 above, i.e. just read messages and push them to the next processor based on batch/time rules?
If you can leverage NiFi's record processors, then using ConsumeKafkaRecord with a batch size of 1000 followed by PutDatabaseRecord will give you behavior similar to what you are describing.
If you may not always have enough messages available in the Kafka topic at consume time, then adding MergeContent or MergeRecord in the middle would let you wait for a certain amount of time or number of messages.
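
If the NiFi record processors don't fit, the batch-or-time rule itself (1000 messages OR 60 seconds, whichever comes first) is straightforward to express outside NiFi as well. Purely as a point of comparison, here is a plain Java Kafka consumer sketch of that rule, not a NiFi processor; the topic, broker, and stored-proc stand-in are placeholders:

```java
import org.apache.kafka.clients.consumer.*;
import java.time.Duration;
import java.util.*;

public class BatchOrTimeoutConsumer {
    static final int MAX_BATCH = 1000;      // flush when the batch is full...
    static final long MAX_WAIT_MS = 60_000; // ...or when 60 seconds have passed

    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // assumption
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "batcher");                 // assumption
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG,
                  "org.apache.kafka.common.serialization.StringDeserializer");
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG,
                  "org.apache.kafka.common.serialization.StringDeserializer");
        props.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, "false");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("my-topic")); // placeholder
            List<ConsumerRecord<String, String>> batch = new ArrayList<>();
            long deadline = System.currentTimeMillis() + MAX_WAIT_MS;

            while (true) {
                for (ConsumerRecord<String, String> rec : consumer.poll(Duration.ofSeconds(1))) {
                    batch.add(rec);
                }
                // Flush when full, or when the window has elapsed with at least one record.
                if (batch.size() >= MAX_BATCH
                        || (System.currentTimeMillis() >= deadline && !batch.isEmpty())) {
                    runStoredProc(batch);   // stand-in for the DB processor / stored proc
                    consumer.commitSync();  // only commit once the batch is safely processed
                    batch.clear();
                    deadline = System.currentTimeMillis() + MAX_WAIT_MS;
                }
            }
        }
    }

    static void runStoredProc(List<ConsumerRecord<String, String>> batch) {
        System.out.println("processing batch of " + batch.size() + " messages");
    }
}
```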