I am running a series of benchmarks with Cassandra. Among others, I tried the following configuration: 1 client node, 3 server nodes (same ring). All experiments are run after cleaning up the servers:
pkill -9 java; sleep 2; rm -r /var/lib/cassandra/*; ./apache-cassandra-1.2.2/bin/cassandra -f
then I run cassandra-stress from the client node (3 replicas, consistency ANY/ALL):
[stop/clean/start servers]
./tools/bin/cassandra-stress -o INSERT -d server1,server2,server3 -l 3 -e ANY
[224 seconds]
[stop/clean/start servers]
./tools/bin/cassandra-stress -o INSERT -d server1,server2,server3 -l 3 -e ALL
[368 seconds]
One would deduce that decreasing the consistency level increases performance. However, there is no reason why this should happen. The bottleneck is the CPU on the servers and they all have to eventually do a local write. In fact, a careful read of the server logs reveals that hinted hand-off has taken place. Repeating the experiment, I sometimes get UnavailableException on the client and "MUTATION messages dropped" on the server.
Is this issue documented? Should CL != ALL be considered harmful on writes?
I'm not quite sure what your point is. Things appear to be working as designed.
Yes, if you're writing at CL.ONE it will complete the write faster than at CL.ALL - because it only has to get an ACK from one node, not all of them.
However, you're not measuring the time that will be taken to repair the data later. You will still pay for queueing up and processing the hinted handoffs - and nodes only keep hints for an hour by default.
Eventually, you'll have to run a nodetool repair to correct the consistency and delete the tombstones.
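For reference, the consistency level is chosen by the client per request. A minimal sketch of how ANY vs ALL would be set on a write with the DataStax Python driver (the driver, keyspace and table names are assumptions for illustration, not part of the original benchmark):

# Minimal sketch using the DataStax Python driver (pip install cassandra-driver).
# Hostnames, keyspace and table are placeholders, not from the original benchmark.
from cassandra import ConsistencyLevel
from cassandra.cluster import Cluster
from cassandra.query import SimpleStatement

cluster = Cluster(["server1", "server2", "server3"])
session = cluster.connect("my_keyspace")

# CL.ANY: the coordinator may acknowledge the write with only a hint stored.
fast_write = SimpleStatement(
    "INSERT INTO my_table (key, value) VALUES (%s, %s)",
    consistency_level=ConsistencyLevel.ANY,
)

# CL.ALL: the coordinator waits for an ack from every replica.
safe_write = SimpleStatement(
    "INSERT INTO my_table (key, value) VALUES (%s, %s)",
    consistency_level=ConsistencyLevel.ALL,
)

session.execute(fast_write, ("k1", "v1"))
session.execute(safe_write, ("k2", "v2"))
cluster.shutdown()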
I have referred to https://redis.io/topics/mass-insert and tried the Redis protocol approach described there,
and did
cat data.txt | redis-cli -a <pass> -h <events-k8s-service> --pipe-timeout 100 > /dev/null
The redirection to /dev/null is to ignore the replies; Redis's CLIENT REPLY can't serve that purpose here from the CLI, and it can turn into a blocking command.
The data.txt file has around 18 million records/commands like
SELECT 1
SET key1 '"field1":"val1","field2":"val2","field3":"val3","field4":"val4","field5":val5,"field6":val6'
SET key2 '"field1":"val1","field2":"val2","field3":"val3","field4":"val4","field5":val5,"field6":val6'
.
.
.
This command is executed from a CronJob that execs into the Redis pod and runs it from within the pod. To understand the footprint, the Redis pod had no resource limits. The observations:
Keys loaded: 18147292
Time taken: ~31 minutes
Peak CPU: 2063m
Peak Memory: 4745Mi
The resources consumed are way too high and the time taken is too long.
The questions:
How do we mass load data on the order of 50 million keys using redis pipe? Is there an alternative approach to this problem?
Is there a Golang/Python library that does the same mass loading efficiently (less time, smaller memory and CPU footprint)?
Do we need to fine-tune Redis here?
Any help is appreciated. Thanks in advance.
If you are using redis-cli inside the pod to push millions of keys, the Redis pod won't be able to handle it well.
Also, you have not specified any resources for the Redis pod; since Redis is an in-memory store, it's better to give it proper memory - 2-3 GB, depending on usage.
You can try out RIOT: https://github.com/redis-developer/riot to load data into Redis.
There is also a good video on loading the Bigfoot dataset into Redis: https://www.youtube.com/watch?v=TqTg6RijfaU
Do we need to fine-tune Redis here?
Increase the memory for redis if it's getting OOMkilled.
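On question 2 (a Golang/Python library): redis-py supports pipelining, which batches many commands per round trip. A hedged sketch, assuming the 18M commands have already been split into key/value pairs; the file name, batch size and connection details below are placeholders:

# Hedged sketch: mass-load key/value pairs with redis-py pipelines.
# Assumes the data.txt-style input has been pre-split into (key, value) pairs.
import redis

r = redis.Redis(host="events-k8s-service", password="<pass>", db=1)

BATCH = 10_000  # flush every 10k commands to bound client-side memory

pipe = r.pipeline(transaction=False)  # no MULTI/EXEC, just batched round trips
count = 0
with open("pairs.tsv") as f:          # hypothetical "key<TAB>value" file
    for line in f:
        key, value = line.rstrip("\n").split("\t", 1)
        pipe.set(key, value)
        count += 1
        if count % BATCH == 0:
            pipe.execute()
if count % BATCH:
    pipe.execute()
print(f"loaded {count} keys")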
We're on Heroku and trying to understand if it's time to upgrade our Postgres database or not. I have two questions:
Are there any tools you know of that track Heroku Postgres logs to monitor memory and CPU usage stats over time?
Are those (Memory and CPU usage) even the best metrics to look at to determine if we should upgrade to a larger instance or not?
The most useful tool I've found for monitoring Heroku Postgres instances is the logs associated with the database's dyno, which you can monitor using heroku logs -t -d heroku-postgres. This spits out some useful stats every 5 minutes, so if you fill your logs up quickly, this might not output anything right away - use -t to wait for the next log line.
Output will look something like this:
2022-06-27T16:34:49.000000+00:00 app[heroku-postgres]: source=HEROKU_POSTGRESQL_SILVER addon=postgresql-fluffy-55941 sample#current_transaction=81770844 sample#db_size=44008084127bytes sample#tables=1988 sample#active-connections=27 sample#waiting-connections=0 sample#index-cache-hit-rate=0.99818 sample#table-cache-hit-rate=0.9647 sample#load-avg-1m=0.03 sample#load-avg-5m=0.205 sample#load-avg-15m=0.21 sample#read-iops=14.328 sample#write-iops=15.336 sample#tmp-disk-used=543633408 sample#tmp-disk-available=72435159040 sample#memory-total=16085852kB sample#memory-free=236104kB sample#memory-cached=15075900kB sample#memory-postgres=223120kB sample#wal-percentage-used=0.0692420374380985
The main stats I pay attention to are table-cache-hit-rate, which is a good proxy for how much of your active dataset fits in memory, and load-avg-1m, which tells you how much load per CPU the server is experiencing.
You can read more about all these metrics here.
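If you want to track those numbers over time yourself, the sample# fields in that output are easy to scrape. A minimal sketch that reads the log stream from stdin (only the field names come from the sample line above; everything else is an assumption):

# Minimal sketch: parse heroku-postgres log lines into a metrics dict.
# Usage: heroku logs -t -d heroku-postgres | python parse_pg_samples.py
import re
import sys

SAMPLE = re.compile(r"sample#([\w-]+)=([^ ]+)")

for line in sys.stdin:
    if "source=HEROKU_POSTGRESQL" not in line:
        continue
    metrics = dict(SAMPLE.findall(line))
    # e.g. print the two stats discussed above
    print(metrics.get("table-cache-hit-rate"), metrics.get("load-avg-1m"))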
I'm following the instructions on Scaling Out Data Ingestion, with this command:
find . -type f | xargs -n 1 -P 320 sh -c 'echo $0 `copy_to_distributed_table -C $0 table_name`'
My cluster has a master and eight workers, each with two SSDs. The table is spread across 320 shards.
Data loading is taking a very long time. The average insertion rate seems to be about 750k per minute. Is that normal or is there a way to speed it up?
The only thing I can think of is that I have replication enabled. Should that be turned off for loading and then reset?
I assume that you want to use hash partitioning. If that is the case, we're deprecating copy_to_distributed_table in favor of distributed COPY. COPY provides a native PostgreSQL experience, resolves several known issues, and improves ingest performance by more than an order of magnitude. This is now available as of Citus 5.1, which was released this month and is available in the official PostgreSQL Linux package repositories (PGDG).
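As a hedged illustration (not an official Citus snippet), driving the distributed COPY from Python with psycopg2 looks the same as a plain PostgreSQL COPY; the connection string, table and file names are placeholders:

# Hedged sketch: bulk-load a CSV into a Citus distributed table with COPY.
# Assumes the table has already been created and distributed (e.g. by hash).
import psycopg2

conn = psycopg2.connect("host=master-node dbname=mydb user=postgres")
with conn, conn.cursor() as cur, open("data.csv") as f:
    cur.copy_expert("COPY table_name FROM STDIN WITH (FORMAT csv)", f)
conn.close()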
In the documentation for celeryd-multi, we find this example:
# Advanced example starting 10 workers in the background:
# * Three of the workers processes the images and video queue
# * Two of the workers processes the data queue with loglevel DEBUG
# * the rest processes the 'default' queue.
$ celeryd-multi start 10 -l INFO -Q:1-3 images,video -Q:4,5 data
-Q default -L:4,5 DEBUG
(From here: http://docs.celeryproject.org/en/latest/reference/celery.bin.celeryd_multi.html#examples)
What would be a practical example of why it would be good to have more than one worker on a single host process the same queue, as in the above example? Isn't that what setting the concurrency is for?
More specifically, would there be any practical difference between the following two lines (A and B)?:
A:
$ celeryd-multi start 10 -c 2 -Q data
B:
$ celeryd-multi start 1 -c 20 -Q data
I am concerned that I am missing some valuable bit of knowledge about task queues by my not understanding this practical difference, and I would greatly appreciate if somebody could enlighten me.
Thanks!
What would be a practical example of why it would be good to have more
than one worker on a single host process the same queue, as in the
above example?
Answer:
So, you may want to run multiple worker instances on the same machine node if:
You're using the multiprocessing pool and want to consume messages in parallel. Some report better performance using multiple worker instances instead of running a single instance with many pool workers.
You're using the eventlet/gevent pools (and, due to the infamous GIL, also the 'threads' pool), and you want to execute tasks on multiple CPU cores.
Reference: http://www.quora.com/Celery-distributed-task-queue/What-is-the-difference-between-workers-and-processes
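To make the A/B comparison concrete: both configurations end up with 20 pool processes consuming the data queue. A minimal sketch of the producing side (broker URL and task body are placeholders), which behaves the same against either configuration:

# Minimal sketch: a task published to the 'data' queue is consumed by whichever
# workers were started with -Q data. Broker URL and task body are placeholders.
from celery import Celery

app = Celery("tasks", broker="amqp://guest@localhost//")

@app.task
def crunch(payload):
    return sum(payload)   # stand-in for real work

# Route the call to the 'data' queue; either configuration A (10 workers, -c 2)
# or B (1 worker, -c 20) will pick it up - 20 pool processes in total either way.
crunch.apply_async(args=([1, 2, 3],), queue="data")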
I run this command every time I build my project from the project directory:
egrep -r -n --include=*.java <my regex> .
And I cannot understand why consecutive runs are up to 10 times faster than the first one. I have actually seen this behavior in other disk IO operations involving large directories (calculating directory size, code commits, etc.).
I think it is related to the operating system's disk IO internals - probably caching at some level. Can somebody point my nose in the right direction?
Because recently accessed files are cached by the operating system.
Have a look here.
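If you want to see the page cache at work, timing the same scan twice from one small script shows the effect; a minimal sketch (the directory and file filter are placeholders mirroring the egrep command above):

# Quick check that repeated reads hit the OS page cache: the second pass over
# the same files is typically much faster than the first pass (assuming the
# cache was cold to begin with).
import os
import time

ROOT = "."  # the project directory

def scan(root):
    total = 0
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            if name.endswith(".java"):
                with open(os.path.join(dirpath, name), "rb") as f:
                    total += len(f.read())
    return total

for label in ("first pass", "second pass"):
    start = time.time()
    scan(ROOT)
    print(f"{label}: {time.time() - start:.2f}s")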