Our current implementation makes all calls to Neo4j through the REST API. We are in the process of replacing some of that code with the neo4j-java-driver.
We had some performance issues, which we tried to resolve through Cypher optimization and by moving load from the Neo4j DB to the application layer.
Will using the Java driver further reduce the load on the Neo4j DB, or will it just help in terms of reducing network latency?
The driver is a bit more optimal for some things. Version 1.5 will also allow async operations, and the next major version will add backpressure and reactive operations.
The server no longer has to generate JSON; instead it streams a binary protocol (Bolt). So that might reduce the load a bit, but I'm not sure it will have a lot of impact.
Best to measure yourself.
I did some testing, and the results below are not very encouraging.
Setup: Windows with 16 GB RAM, 100K nodes, connecting to a local Neo4j.
String defaultNodes = "1000";
if (args.length > 0) {
    defaultNodes = args[0];
}
String query = "MATCH (n) RETURN n LIMIT " + defaultNodes;

// Bolt: note that session.run() streams results lazily; this timing does not
// include consuming the records (e.g. via result.list()), so it may
// understate the full cost of fetching the data over Bolt.
long time = System.currentTimeMillis();
session.run(query);
System.out.println("With bolt for LIMIT " + defaultNodes + " -- " + (System.currentTimeMillis() - time));

// REST: runs the same Cypher through the transactional HTTP endpoint
time = System.currentTimeMillis();
Neo4jRESTHandler dbHandler = new Neo4jRESTHandler();
dbHandler.executeCypherQuery(query);
System.out.println("With REST for LIMIT " + defaultNodes + " -- " + (System.currentTimeMillis() - time));
C:\Migration>java -jar neo4jtestexamples-1.0.0-SNAPSHOT-jar-with-dependencies.jar
With bolt for LIMIT 1000 -- 131
With REST for LIMIT 1000 -- 162
C:\Migration>java -jar neo4jtestexamples-1.0.0-SNAPSHOT-jar-with-dependencies.jar
With bolt for LIMIT 1000 -- 143
With REST for LIMIT 1000 -- 156
C:\Migration>java -jar neo4jtestexamples-1.0.0-SNAPSHOT-jar-with-dependencies.jar 10000
With bolt for LIMIT 10000 -- 377
With REST for LIMIT 10000 -- 156
C:\Migration>java -jar neo4jtestexamples-1.0.0-SNAPSHOT-jar-with-dependencies.jar 10000
With bolt for LIMIT 10000 -- 335
With REST for LIMIT 10000 -- 157
C:\Migration>java -jar neo4jtestexamples-1.0.0-SNAPSHOT-jar-with-dependencies.jar
With bolt for LIMIT 1000 -- 104
With REST for LIMIT 1000 -- 161
C:\Migration>java -jar neo4jtestexamples-1.0.0-SNAPSHOT-jar-with-dependencies.jar 25000
With bolt for LIMIT 25000 -- 595
With REST for LIMIT 25000 -- 155
C:\Migration>java -jar neo4jtestexamples-1.0.0-SNAPSHOT-jar-with-dependencies.jar 25000
With bolt for LIMIT 25000 -- 544
With REST for LIMIT 25000 -- 151
I am trying to come up with a configuration that enforces a producer quota based on the producer's average byte rate.
I tested with a 3-node cluster. The topic, however, was created with 1 partition and a replication factor of 1, so that producer_byte_rate is measured against only 1 broker (the leader).
I set producer_byte_rate to 20480 for client id test_producer_quota.
I used kafka-producer-perf-test to test the throughput and the throttling.
kafka-producer-perf-test --producer-props bootstrap.servers=SSL://kafka-broker1:6667 \
client.id=test_producer_quota \
--topic quota_test \
--producer.config /myfolder/client.properties \
--record-size 2048 --num-records 4000 --throughput -1
I expected the producer client to learn about the throttle and eventually smooth out the requests sent to the broker. Instead, I noticed an alternating throughput of 98 recs/sec and 21 recs/sec for a period of more than 30 seconds. During this time the average latency slowly kept increasing, and when it finally hit 120000 ms I started to see the timeout exception below:
org.apache.kafka.common.errors.TimeoutException: Expiring 7 records for quota_test-0: 120000 ms has passed since batch creation.
What is possibly causing this issue?
The producer hits the timeout when latency reaches 120 seconds (the default value of delivery.timeout.ms).
Why isn't the producer learning about the throttle and quota and slowing down or backing off?
What other producer configuration could help alleviate this timeout issue?
(2048 * 4000) / 20480 = 400 (sec)
This means that if your producer is trying to send the 4000 records at full speed (which is the case, because you set throughput to -1), it might batch them and put them in the queue in maybe one or two seconds (depending on your CPU).
Then, given your quota setting (20480 bytes/sec), you can be sure that the broker won't 'complete' the processing of those 4000 records before at least 398 or 399 seconds have passed.
The broker does not return an error when a client exceeds its quota, but instead attempts to slow the client down. It computes the amount of delay needed to bring the client under its quota and delays the response for that amount of time.
With delivery.timeout.ms at its default of 120 seconds, batches that sit in the queue longer than that expire, which is exactly the TimeoutException you see.
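The arithmetic above can be sketched as a small back-of-the-envelope check (the class and variable names are illustrative, not a Kafka API): given a byte-rate quota, estimate how long the broker needs to drain the burst and compare that to delivery.timeout.ms.

```java
public class QuotaTimeoutEstimate {
    public static void main(String[] args) {
        long recordSizeBytes = 2048;      // --record-size
        long numRecords = 4000;           // --num-records
        long producerByteRate = 20480;    // producer_byte_rate quota (bytes/sec)
        long deliveryTimeoutMs = 120_000; // delivery.timeout.ms default

        long totalBytes = recordSizeBytes * numRecords;
        // Time the broker needs to drain the burst while enforcing the quota
        long drainSeconds = totalBytes / producerByteRate;

        System.out.println("Seconds to drain burst under quota: " + drainSeconds);
        System.out.println("Batches will expire: " + (drainSeconds * 1000 > deliveryTimeoutMs));
    }
}
```

With these numbers the drain time is 400 seconds, well past the 120-second delivery timeout, so expired batches are expected unless the client throttles itself or the timeout is raised.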
I am using the kafka-producer-perf-test utility to baseline the producer performance in our Kafka cluster. I am specifically interested in "records written/sec". I see the following lines in the output of the utility:
32561 records sent, 6407.1 records/sec (6.11 MB/sec), 1223.3 ms avg latency, 2399.0 ms max latency.
50000 records sent, 8123.476848 records/sec (7.75 MB/sec), 1628.84 ms avg latency, 2801.00 ms max latency, 1701 ms 50th, 2701 ms 95th, 2796 ms 99th, 2799 ms 99.9th
This seems to indicate that the throughput is 8123 records/sec
However in the sections that follow there is another metric:
producer-topic-metrics:record-send-rate:{client-id=producer-1, topic=test1} : 1409.602
I am unable to reconcile the two figures. Which should be considered the throughput of the producer?
Could you please help me understand the memory metrics on the Spark UI: Memory: 10 MB Used (552.6 GB Total)?
// nbPartitions = executors * cores-per-executor * 3, then used as spark.sql.shuffle.partitions
PartitionNumber.nbExecutors = conf.getInt("spark.executor.instances", 20)
PartitionNumber.nbPartitions = PartitionNumber.nbExecutors * conf.getInt("spark.executor.cores", 2) * 3
conf.set("spark.sql.shuffle.partitions", PartitionNumber.nbPartitions.toString())
Is it correct that the memory used is 10 MB and the available memory 552 GB?
Any help or suggestions you could provide would be greatly appreciated.
Thanks
The total memory available to all executors is 552.6 GB, and the total memory used by all executors is 10 MB.
You can see it in the "Storage Memory" column (memory available to a Spark executor for storing and caching RDDs/DataFrames). Each executor uses 169.1 KB out of 9.1 GB, and there are 61 executors:
61 * 169.1 KB ≈ 10 MB
61 * 9.1 GB ≈ 555 GB
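The per-executor figures can be checked with plain arithmetic (no Spark dependency; the numbers are taken from the UI screenshot described above):

```java
public class SparkUiMemoryCheck {
    public static void main(String[] args) {
        int executors = 61;
        double usedPerExecutorKb = 169.1; // "Storage Memory" used, per executor
        double totalPerExecutorGb = 9.1;  // "Storage Memory" total, per executor

        double totalUsedMb = executors * usedPerExecutorKb / 1024; // ~10 MB
        double totalAvailGb = executors * totalPerExecutorGb;      // ~555 GB
        System.out.printf("used ~= %.1f MB, available ~= %.1f GB%n",
                totalUsedMb, totalAvailGb);
    }
}
```

The small gap between 555 GB and the UI's 552.6 GB is just rounding in the per-executor figures.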
I am using PostgreSQL 10.1 on macOS, on which I am trying to set up streaming replication. I configured both master and slave on the same machine. I find the streaming replication lag higher than expected on the Mac; the same test runs on a Linux Ubuntu 16.04 machine without much lag.
I have the following insert script.
# $1 is the name of the target table
for i in $(seq 1 1 1000)
do
    bin/psql postgres -p 8999 -c "Insert into $1 select tz, $i * 127361::bigint, $i::real, random()*12696::bigint from generate_series('01-01-2018'::timestamptz, '02-01-2018'::timestamptz, '30 sec'::interval)tz;"
    echo $i
done
The lag is measured using the following queries,
SELECT pg_last_wal_receive_lsn() - pg_last_wal_replay_lsn();
SELECT (extract(epoch FROM now()) - extract(epoch FROM pg_last_xact_replay_timestamp()))::int;
However, the observation is very unexpected: the lag keeps increasing from the moment the transactions are started on the master.
Slave localhost_9001: 12680304 1
Slave localhost_9001: 12354168 1
Slave localhost_9001: 16086800 1
.
.
.
Slave localhost_9001: 3697460920 121
Slave localhost_9001: 3689335376 122
Slave localhost_9001: 3685571296 122
.
.
.
.
Slave localhost_9001: 312752632 190
Slave localhost_9001: 308177496 190
Slave localhost_9001: 303548984 190
.
.
Slave localhost_9001: 22810280 199
Slave localhost_9001: 8255144 199
Slave localhost_9001: 4214440 199
Slave localhost_9001: 0 0
It took around 4.5 minutes for a single client inserting into a single table to complete on the master, and another 4 minutes for the slave to catch up. Note that NO simultaneous selects are run other than the script measuring the lag.
I understand that replay in PostgreSQL is fairly simple, along the lines of "move a particular block to a location", but I am not sure about this behavior.
I have the following other configurations,
checkpoint_timeout = 5min
max_wal_size = 1GB
min_wal_size = 80MB
When I run the same tests with the same configuration on a Linux Ubuntu 16.04 machine, I find the lag perfectly reasonable.
Am I missing anything?
I'm using 3 VM servers (each with 16 cores / 56 GB RAM / 1 TB) to set up a Kafka cluster, running Kafka version 0.10.0. I installed a broker on two of them and created a topic with 2 partitions, 1 partition per broker and no replication.
My goal is to reach 1,000,000 messages/second.
I made a test with the kafka-producer-perf-test.sh script and I get between 150,000 msg/s and 204,000 msg/s.
My configuration is:
- batch size: 8 KB (8192)
- message size: 300 bytes (0.3 KB)
- thread num: 1
The producer configuration:
- request.required.acks=1
- queue.buffering.max.ms=0 # linger.ms=0
- compression.codec=none
- queue.buffering.max.messages=100000
- send.buffer.bytes=100000000
Any help getting to 1,000,000 msg/s will be appreciated.
Thank you
You're running an old version of Apache Kafka. The most recent release (0.11) includes improvements, including around performance.
You might find this useful too: https://www.confluent.io/blog/optimizing-apache-kafka-deployment/
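If you can move to the new Java producer, a rough starting point for throughput tuning might look like the sketch below. The values are illustrative assumptions, not a tested recipe for 1M msg/s: larger batches, a small linger, and compression usually raise throughput at the cost of a little latency.

```java
import java.util.Properties;

public class ProducerTuningSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("acks", "1");                  // equivalent of request.required.acks=1
        props.put("batch.size", "65536");        // 64 KB batches instead of 8 KB
        props.put("linger.ms", "5");             // wait a few ms so batches fill up
        props.put("compression.type", "lz4");    // cheap CPU, fewer bytes on the wire
        props.put("buffer.memory", "134217728"); // 128 MB send buffer

        // These props would be passed to a KafkaProducer along with
        // bootstrap.servers and the serializers.
        System.out.println(props.getProperty("batch.size"));
    }
}
```

Adding more partitions (and producer threads) also tends to matter as much as any single setting, since it spreads the load across both brokers.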