I'm trying to add pgbench results to a CSV file. The result I got is in the format below:
pgbench -c 10 -j 2 -t 10000 example
starting vacuum...end.
transaction type: TPC-B (sort of)
scaling factor: 50
query mode: simple
number of clients: 10
number of threads: 2
number of transactions per client: 10000
number of transactions actually processed: 100000/100000
latency average: 4.176 ms
tps = 2394.718707 (including connections establishing)
tps = 2394.874350 (excluding connections establishing)
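One way to turn this into CSV is to parse the summary lines with awk and append one row per run. A minimal sketch, assuming the output format above (the file name results.csv and the column choices are mine):
#!/bin/bash
# Capture the pgbench summary; merge stderr since some versions write progress there.
out=$(pgbench -c 10 -j 2 -t 10000 example 2>&1)

# Pull the numbers out of the summary lines shown above.
latency=$(echo "$out" | awk '/latency average/ {print $3}')
tps_incl=$(echo "$out" | awk '/including connections/ {print $3}')
tps_excl=$(echo "$out" | awk '/excluding connections/ {print $3}')

# Write the header once, then append one row per run.
[ -f results.csv ] || echo "clients,threads,transactions,latency_ms,tps_incl,tps_excl" > results.csv
echo "10,2,10000,$latency,$tps_incl,$tps_excl" >> results.csv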
I am trying to come up with a configuration that enforces a producer quota based on the producer's average byte rate.
I did a test with a 3-node cluster. The topic, however, was created with 1 partition and a replication factor of 1, so that producer_byte_rate is measured against only 1 broker (the leader).
I set the producer_byte_rate to 20480 on client id test_producer_quota.
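For reference, a client-id quota like that is typically applied with the kafka-configs tool; a sketch (the ZooKeeper address is a placeholder, and newer Kafka versions take --bootstrap-server instead):
kafka-configs --zookeeper zk-host:2181 --alter \
  --add-config 'producer_byte_rate=20480' \
  --entity-type clients --entity-name test_producer_quota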
I used kafka-producer-perf-test to test out the throughput and throttle.
kafka-producer-perf-test --producer-props bootstrap.servers=SSL://kafka-broker1:6667 \
client.id=test_producer_quota \
--topic quota_test \
--producer.config /myfolder/client.properties \
--record-size 2048 --num-records 4000 --throughput -1
I expected the producer client to learn about the throttle and eventually smooth out the requests sent to the broker. Instead, I noticed the throughput alternating between 98 records/sec and 21 records/sec for more than 30 seconds. During this time the average latency slowly kept increasing, and when it finally hit 120000 ms I started to see a TimeoutException like the one below:
org.apache.kafka.common.errors.TimeoutException: Expiring 7 records for quota_test-0: 120000 ms has passed since batch creation.
What is possibly causing this issue?
The producer hits the timeout when latency reaches 120 seconds (the default value of delivery.timeout.ms).
Why isn't the producer learning about the throttle and quota and slowing down or backing off?
What other producer configuration could help alleviate this timeout issue?
(2048 * 4000) / 20480 = 400 (sec)
This means that if your producer tries to send the 4000 records at full speed (which is the case, because you set throughput to -1), it will batch them and put them in the queue within maybe one or two seconds (depending on your CPU).
Then, given your quota setting (20480 bytes/sec), you can be sure the broker won't complete processing those 4000 records until at least 398 or 399 seconds have passed.
The broker does not return an error when a client exceeds its quota, but instead attempts to slow the client down. The broker computes the amount of delay needed to bring a client under its quota and delays the response for that amount of time.
Since delivery.timeout.ms is set to 120 seconds (its default), you then get this TimeoutException.
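One way to avoid the expiry in this particular test (a sketch based on the command above, assuming Kafka 2.1+ where delivery.timeout.ms is a producer config) is to cap the perf-test client at the quota itself, since 10 records/sec * 2048 bytes = 20480 bytes/sec, and to give delivery.timeout.ms enough headroom for any remaining throttle delay:
# Cap the client at the quota so the broker rarely has to throttle,
# and raise delivery.timeout.ms well above the expected delay.
kafka-producer-perf-test --producer-props bootstrap.servers=SSL://kafka-broker1:6667 \
  client.id=test_producer_quota \
  delivery.timeout.ms=600000 \
  --topic quota_test \
  --producer.config /myfolder/client.properties \
  --record-size 2048 --num-records 4000 --throughput 10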
I am using the kafka-producer-perf-test utility to baseline the producer performance in our Kafka cluster. I am specifically interested in "records written/sec". I see the following lines in the output of the utility:
32561 records sent, 6407.1 records/sec (6.11 MB/sec), 1223.3 ms avg latency, 2399.0 ms max latency.
50000 records sent, 8123.476848 records/sec (7.75 MB/sec), 1628.84 ms avg latency, 2801.00 ms max latency, 1701 ms 50th, 2701 ms 95th, 2796 ms 99th, 2799 ms 99.9th
This seems to indicate that the throughput is 8123 records/sec.
However in the sections that follow there is another metric:
producer-topic-metrics:record-send-rate:{client-id=producer-1, topic=test1} : 1409.602
I am unable to reconcile the two figures. Which should be considered the throughput of the producer?
I am using PostgreSQL 10.1 on macOS, on which I am trying to set up streaming replication. I configured both master and slave to be on the same machine. I find the streaming replication lag to be larger than expected on macOS; the same test runs on a Linux Ubuntu 16.04 machine without much lag.
I have the following insert script.
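# $1 is the target table name; each iteration runs one generate_series INSERT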
for i in $(seq 1 1 1000)
do
bin/psql postgres -p 8999 -c "Insert into $1 select tz, $i * 127361::bigint, $i::real, random()*12696::bigint from generate_series('01-01-2018'::timestamptz, '02-01-2018'::timestamptz, '30 sec'::interval)tz;"
echo $i
done
The lag is measured using the following queries:
SELECT pg_last_wal_receive_lsn() - pg_last_wal_replay_lsn();
SELECT (extract(epoch FROM now()) - extract(epoch FROM pg_last_xact_replay_timestamp()))::int;
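For context, the readings below come from polling the standby with a loop along these lines (a sketch; the standby port 9001 is inferred from the localhost_9001 label):
# Print "Slave localhost_9001: <bytes behind> <seconds behind>" once a second.
while true; do
  bytes=$(bin/psql postgres -p 9001 -Atc "SELECT pg_last_wal_receive_lsn() - pg_last_wal_replay_lsn();")
  secs=$(bin/psql postgres -p 9001 -Atc "SELECT (extract(epoch FROM now()) - extract(epoch FROM pg_last_xact_replay_timestamp()))::int;")
  echo "Slave localhost_9001: $bytes $secs"
  sleep 1
done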
However, the observation is very unexpected. The lag is increasing from the moment the transactions are started on master.
Slave localhost_9001: 12680304 1
Slave localhost_9001: 12354168 1
Slave localhost_9001: 16086800 1
.
.
.
Slave localhost_9001: 3697460920 121
Slave localhost_9001: 3689335376 122
Slave localhost_9001: 3685571296 122
.
.
.
.
Slave localhost_9001: 312752632 190
Slave localhost_9001: 308177496 190
Slave localhost_9001: 303548984 190
.
.
Slave localhost_9001: 22810280 199
Slave localhost_9001: 8255144 199
Slave localhost_9001: 4214440 199
Slave localhost_9001: 0 0
It took around 4.5 minutes for a single client inserting on a single table to complete on master and another 4 minutes for the slave to catch up. Note that NO simultaneous selects are run other than the script to measure the lag.
I understand that replay in PostgreSQL is pretty simple, along the lines of "move a particular block to a location", but I am not sure what explains this behavior.
I have the following other configurations:
checkpoint_timeout = 5min
max_wal_size = 1GB
min_wal_size = 80MB
When I run the same tests with the same configuration on a Linux Ubuntu 16.04 machine, I find the lag perfectly reasonable.
Am I missing anything?
Can someone please explain how performance is tested in Kafka using:
bin/kafka-consumer-perf-test.sh --topic benchmark-3-3-none \
--zookeeper kafka-zk-1:2181,kafka-zk-2:2181,kafka-zk-3:2181 \
--messages 15000000 \
--threads 1
and
bin/kafka-producer-perf-test.sh --topic benchmark-1-1-none \
--num-records 15000000 \
--record-size 100 \
--throughput 15000000 \
--producer-props \
acks=1 \
bootstrap.servers=kafka-kf-1:9092,kafka-kf-2:9092,kafka-kf-3:9092 \
buffer.memory=67108864 \
compression.type=none \
batch.size=8196
I am not clear on what the parameters are and what output should be obtained. How would I check performance and acknowledgement if I send 1000 messages to a Kafka topic?
When we run this we get the following:
Producer
| start.time | end.time | compression | message.size | batch.size | total.data.sent.in.MB | MB.sec | total.data.sent.in.nMsg | nMsg.sec |
| 2016-02-03 21:38:28:094 | 2016-02-03 21:38:28:449 | 0 | 100 | 200 | 0.01 | 0.0269 | 100 | 281.6901 |
Where,
• total.data.sent.in.MB shows the total data sent to the cluster in MB.
• MB.sec indicates how much data was transferred per second in MB (throughput by size).
• total.data.sent.in.nMsg shows the count of total messages sent during this test.
• And finally nMsg.sec shows how many messages were sent per second (throughput by message count).
Consumer
| start.time | end.time | fetch.size | data.consumed.in.MB | MB.sec | data.consumed.in.nMsg | nMsg.sec |
| 2016-02-04 11:29:41:806 | 2016-02-04 11:29:46:854 | 1048576 | 0.0954 | 1.9869 | 1001 | 20854.1667 |
where,
• start.time, end.time show when the test started and completed.
• fetch.size shows the amount of data to fetch in a single request.
• data.consumed.in.MB shows the size of all messages consumed.
• MB.sec indicates how much data was transferred per second in MB (throughput by size).
• data.consumed.in.nMsg shows the count of total messages consumed during this test.
• And finally nMsg.sec shows how many messages were consumed per second (throughput by message count).
I would rather suggest going for a specialized performance testing tool like Apache JMeter with the Pepper-Box - Kafka Load Generator plugin in order to load test your Kafka installation.
This way you will be able to conduct the load test with full control over threads, ramp-up time, message size, message content, etc. You will also be able to generate an HTML Reporting Dashboard with tables and charts of the relevant metrics.
See the Apache Kafka - How to Load Test with JMeter article for more details if needed.
If anyone runs into this question, please note that kafka-producer-perf-test.sh produces different output as of Kafka v2.12-3.3.2.
For example, to send 1000 messages to a Kafka topic, use the command-line parameter --num-records 1000 (and --topic <topic_name>, of course). The generated output should resemble the following and include the number of messages sent in steps, the speed in terms of messages sent per second and MB per second, and average latencies (I chose to send 1M messages):
323221 records sent, 64644.2 records/sec (63.13 MB/sec), 7.5 ms avg latency, 398.0 ms max latency.
381338 records sent, 76267.6 records/sec (74.48 MB/sec), 1.2 ms avg latency, 27.0 ms max latency.
1000000 records sent, 70244.450688 records/sec (68.60 MB/sec), 15.21 ms avg latency, 475.00 ms max latency, 1 ms 50th, 96 ms 95th, 353 ms 99th, 457 ms 99.9th.
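For reference, output like the above would come from an invocation along these lines (the topic name and broker address are assumptions; --record-size 1024 matches the roughly 1 KB per record implied by 64644.2 records/sec at 63.13 MB/sec):
bin/kafka-producer-perf-test.sh --topic test-topic \
  --num-records 1000000 \
  --record-size 1024 \
  --throughput -1 \
  --producer-props bootstrap.servers=localhost:9092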
I'm using 3 VM servers, each with 16 cores, 56 GB of RAM, and 1 TB of disk, to set up a Kafka cluster. I work with Kafka version 0.10.0. I installed a broker on two of them, and created a topic with 2 partitions, 1 partition per broker and no replication.
My goal is to reach 1,000,000 messages/second.
I made a test with the kafka-producer-perf-test.sh script and I get between 150,000 msg/s and 204,000 msg/s.
My configuration is:
-batch size: 8k (8192)
-message size: 300 bytes (0.3 KB)
-thread num: 1
The producer configuration:
-request.required.acks=1
-queue.buffering.max.ms=0 #linger.ms=0
-compression.codec=none
-queue.buffering.max.messages=100000
-send.buffer.bytes=100000000
Any help getting to 1,000,000 msg/s would be appreciated.
Thank you
You're running an old version of Apache Kafka. The most recent release (0.11) brought improvements, including around performance.
You might find this useful too: https://www.confluent.io/blog/optimizing-apache-kafka-deployment/
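Independent of the upgrade, throughput in the hundreds of thousands of messages per second usually needs larger batches and a little lingering, so batches fill before they are sent. A sketch of flags to experiment with (the topic name and broker address are placeholders; the values are starting points, not tuned results):
bin/kafka-producer-perf-test.sh --topic benchmark \
  --num-records 15000000 \
  --record-size 300 \
  --throughput -1 \
  --producer-props \
    bootstrap.servers=kafka-broker:9092 \
    acks=1 \
    batch.size=65536 \
    linger.ms=5 \
    compression.type=lz4 \
    buffer.memory=67108864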