Druid: too much memory consumption

I have deployed a Druid single server using the command ./bin/start-micro-quickstart.
My server has 8 vCPUs and 32 GB RAM (EC2 t2.2xlarge), and I have also added 100 GB of swap.
I'm trying to ingest 27M records from Kafka into Druid.
At this point the Druid datasource shows 4M records with a total size of 506 MB across 5,400 segments (average segment size 92.99 KB).
My memory usage is:
              total        used        free      shared  buff/cache   available
Mem:            31G         30G        248M         24K        207M         96M
Swap:           99G         78G         21G
My datasource is only 506 MB, so why is memory consumption around 108 GB (30 GB of RAM plus 78 GB of swap)?
Are all of those segments held in memory?
Which Druid services use CPU and which use memory?

How many peon tasks are you running? Since you are ingesting from Kafka, I assume you are using a supervisor spec. If you have many topics, and a supervisor spec for each topic, that will take memory. Check the direct memory requirements: https://druid.apache.org/docs/latest/configuration/index.html
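For reference, the documented lower bound on direct memory for each process (including every peon) is druid.processing.buffer.sizeBytes * (druid.processing.numThreads + druid.processing.numMergeBuffers + 1). A rough sketch with illustrative values, not a recommendation:

# runtime.properties for one process (values are made up for illustration)
druid.processing.buffer.sizeBytes=100000000
druid.processing.numThreads=2
druid.processing.numMergeBuffers=2
# minimum -XX:MaxDirectMemorySize ~= 100 MB * (2 + 2 + 1) = ~500 MB for this one process
# multiply by the number of concurrently running peons to see what Kafka ingestion alone needs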

Related

High CPU utilization with ksqlDB

We are running a ksqlDB server on a Kubernetes cluster.
Node config:
AWS EKS fargate
No of nodes: 1
CPU: 2 vCPU (Request), 4 vCPU (Limit)
RAM: 4 GB (Request), 8 GB (Limit)
Java heap: 3 GB (Default)
Data size:
We have ~11 source topics with 1 partition each; some of them hold around 10k records and a few hold more than 100k. There are ~7 sink topics, but building those 7 sink topics involves ~60 ksql tables, ~38 ksql streams, and ~64 persistent queries because of joins and aggregations, so the computation is heavy.
ksqlDB version: 0.23.1, using the official Confluent ksqlDB Docker image.
The problem:
When running our KSQL script we see CPU spike to 350-360% and memory to 20-30%. When that happens, Kubernetes restarts the server instance, which causes the ksql-migrations run to fail.
Error:
io.netty.channel.AbstractChannel$AnnotatedConnectException: Connection
refused:
<deployment-name>.<namespace>.svc.cluster.local/172.20.73.150:8088
Error: io.vertx.core.VertxException: Connection was closed
We have 30 migration files, and each file contains multiple table and stream creations.
It always fails on v27.
What we have tried so far:
Running v27 on its own: it passes with no errors.
Increasing the initial CPU to 4 vCPU: no change in CPU utilization.
Using 2 nodes with 2 Kafka partitions: same issue, and in addition a few data columns ended up with no data.
So something is not right in our configuration or resource allocation.
What is the standard way to deploy ksqlDB on Kubernetes? Or maybe it is not meant for Kubernetes at all.
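Not from the question, but the first thing I would pin down is why Kubernetes restarts the instance: hitting the CPU limit only causes throttling, so a restart usually means an OOM kill or a failed liveness/readiness probe while the node is saturated. A quick way to check (pod and deployment names are placeholders):

kubectl describe pod <ksqldb-pod> | grep -A 5 'Last State'
kubectl get events --sort-by=.lastTimestamp | grep -i -e oom -e probe -e kill

If it turns out to be memory, the Confluent image reads KSQL_HEAP_OPTS (if I remember right), so the heap can be sized explicitly and kept comfortably below the container memory limit, e.g.:

kubectl set env deployment/<ksqldb-deployment> KSQL_HEAP_OPTS='-Xms3g -Xmx3g'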

Kafka with JBOD disks: what is the maximum number of disks we can use per Kafka machine?

We are planning to build 17 Kafka machines.
Since we need a huge amount of storage, we are thinking of using JBOD disks on each Kafka machine.
The plan looks like this:
Number of Kafka machines: 17
Kafka version: 2.7
Number of disks in the JBOD: 44 (each disk is 2.4 TB)
To give some more perspective from the Kafka configuration side: in the server.properties file we need to set log.dirs to list all 44 disks.
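For context, log.dirs is just a comma-separated list with one entry per mount point, so 44 disks simply means 44 entries; something like this, with illustrative paths:

log.dirs=/data/disk01/kafka-logs,/data/disk02/kafka-logs,/data/disk03/kafka-logs,/data/disk04/kafka-logs
# ...and so on, one entry per mounted disk, 44 entries in total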
Based on that, we are wondering whether such a large number of disks (44) is above some practical threshold.
We searched a lot for a useful post that discusses this, but without success.
So, to summarize: is there a limit on the number of JBOD disks that can be attached to a single Kafka machine?

Druid Kafka ingestion process drops data

My Druid server runs in single-server mode and I'm ingesting 30M records into one datasource from Kafka. The machine has 16 GB RAM and 100 GB of swap, and the Java heap size is 15.62 GB.
Druid also contains another datasource with 2.6M records, but that supervisor is suspended.
Ingestion works up to about 4.5M stored records, but when the count reaches roughly 6M I get the error "Unable to reconnect to Zookeeper service, Session expired event received" and the count drops back to 4.5M. The same cycle repeats: the count climbs to around 6.2M, the same error occurs, and it drops to 4.5M again. After 4-5 hours the Druid services restart and the datasource count starts again from 4.5M.
Segment granularity is set to HOUR.
Here are the system memory usage statistics:
              total        used        free      shared  buff/cache   available
Mem:            15G         15G        165M         36K        160M         83M
Swap:           99G         45G         54G
What should I do? Is this a memory problem?
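Not part of the question, but one likely explanation: a 15.6 GB heap on a 16 GB machine pushes most of the JVM into swap, and the resulting GC pauses are long enough for the ZooKeeper session to expire, which is exactly the error shown above. A quick way to see how much of the Druid processes is sitting in swap (commands are illustrative):

free -h
for pid in $(pgrep -f org.apache.druid); do
  echo "pid $pid"; grep VmSwap /proc/$pid/status
done
# if VmSwap is large, shrink the heap sizes in the jvm.config files under
# conf/druid/single-server/ so that all JVMs fit in physical RAM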

How are multiple executors managed on the worker nodes of a Spark standalone cluster?

Until now, I have only used Spark on a Hadoop cluster with YARN as the resource manager. In that type of cluster, I know exactly how many executors to run and how the resource management works. However, now that I am trying to use a standalone Spark cluster, I have gotten a little confused. Correct me where I am wrong.
From this article, by default a worker node uses all the memory of the node minus 1 GB. But I understand that by using SPARK_WORKER_MEMORY we can use less memory. For example, if the total memory of the node is 32 GB but I specify 16 GB, the Spark worker is not going to use any more than 16 GB on that node?
But what about executors? Say I want to run 2 executors per node: can I do that by specifying executor memory during spark-submit to be half of SPARK_WORKER_MEMORY? And if I want to run 4 executors per node, by specifying executor memory to be a quarter of SPARK_WORKER_MEMORY?
If so, besides executor memory, I think I would also have to specify executor cores correctly. For example, if I want to run 4 executors on a worker, I would have to specify executor cores to be a quarter of SPARK_WORKER_CORES? What happens if I specify a bigger number than that, for example executor memory as a quarter of SPARK_WORKER_MEMORY but executor cores as only half of SPARK_WORKER_CORES? Would I get 2 or 4 executors running on that node in that case?
In my experience, this is the best way to control the number of executors, cores, and memory:
Cores: you can set the total number of cores across all executors and the number of cores per executor.
Memory: executor memory is set per executor.
--total-executor-cores 12 --executor-cores 2 --executor-memory 6G
This would give you 6 executors, each with 2 cores and 6 GB, so in total you are looking at 12 cores and 36 GB.
You can set driver memory using
--driver-memory 2G
So, I experimented with the Spark Standalone cluster myself a bit, and this is what I noticed.
My intuition that multiple executors can be run inside a worker by tuning executor cores was indeed correct. Say your worker has 16 cores: if you specify 8 cores per executor, Spark will run 2 executors per worker.
How many executors run inside a worker also depends on the executor memory you specify. For example, if worker memory is 24 GB and you want to run 2 executors per worker, you cannot specify executor memory to be more than 12 GB.
A worker's memory can be limited when starting the worker by passing the optional --memory parameter or by changing the value of SPARK_WORKER_MEMORY. The same goes for the number of cores (--cores / SPARK_WORKER_CORES).
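As a concrete sketch of the 2-executors-per-worker case described above (host names, sizes, and the application are illustrative):

# start a worker that exposes 16 cores and 24 GB to Spark
# (the script is called start-slave.sh on older Spark releases)
./sbin/start-worker.sh spark://master-host:7077 --cores 16 --memory 24G

# request executors that each take half of the worker's cores and memory,
# so the worker has room for exactly 2 of them
./bin/spark-submit --master spark://master-host:7077 \
  --executor-cores 8 --executor-memory 12G \
  --class com.example.MyApp my-app.jar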
If you want to be able to run multiple jobs on the standalone Spark cluster at the same time, you can cap each job with the spark.cores.max configuration property when doing spark-submit, for example:
spark-submit <other parameters> --conf spark.cores.max=16 <other parameters>
So, if your standalone Spark cluster allows 64 cores in total and you give only 16 cores to your program, other Spark jobs can use the remaining 48 cores.

Kafka Producer and Broker Throughput Limitations

I have configured a two-node, six-partition Kafka cluster with a replication factor of 2 on AWS. Each Kafka node runs on an m4.2xlarge EC2 instance backed by an EBS volume.
I understand that the rate of data flow from a Kafka producer to a Kafka broker is limited by the network bandwidth of the producer.
Say the network bandwidth between the Kafka producer and the broker is 1 Gbps (approx. 125 MB/s), and the bandwidth between the Kafka broker and storage (between the EC2 instance and the EBS volume) is also 1 Gbps.
I used the org.apache.kafka.tools.ProducerPerformance tool for profiling the performance.
I observed that a single producer can write at around 90 MB/s to the broker when the message size is 100 bytes (so the network is not saturated).
I also observed that the disk write rate to the EBS volume is around 120 MB/s.
Is this 90 MB/s due to some network bottleneck, or is it a limitation of Kafka (ignoring batch size, compression, etc. for simplicity)?
Could it be due to the bandwidth limitation between the broker and the EBS volume?
I also observed that when two producers (from two separate machines) produce data, the throughput of one producer dropped to around 60 MB/s.
What could be the reason for this? Why doesn't that value reach 90 MB/s? Could this be due to a network bottleneck between the broker and the EBS volume?
What confuses me is that in both cases (single producer and two producers) the disk write rate to EBS stays around 120 MB/s (close to its upper limit).
Thank you
I ran into the same issue. As I understand it: in the first case one producer is sending data to two brokers (there is nothing else on the network), so you got 90 MB/s, with each broker receiving roughly 45 MB/s. In the second case two producers are sending data to the two brokers, so each producer can only send at about 60 MB/s, but each broker is now receiving about 60 MB/s, so in aggregate you are actually pushing more data through Kafka.
There are a couple of things to consider:
There are separate disk and network limits that apply to both the instance and the volume.
You have to account for replication. With RF=2, every byte a producer sends is written twice across the cluster, so assuming partitions and their leaders are distributed evenly, the write traffic taken by a single broker is roughly RF * (PRODUCER_TRAFFIC) / (BROKER_COUNT).
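A quick back-of-the-envelope with the numbers from the question (illustrative only):

# RF = 2, brokers = 2, single-producer rate ~ 90 MB/s
# per-broker write rate ~ RF * producer_rate / broker_count = 2 * 90 / 2 ~ 90 MB/s

which is in the same ballpark as the ~120 MB/s you see going to the EBS volume.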