Couchbase - Full Eviction vs Value Eviction

How advantageous is Value Eviction over Full Eviction? In the case of value eviction, I assume the metadata is present in RAM. How does the presence of metadata help in quicker retrieval of content? Are the NRU documents evicted when the high watermark is reached? Are there any aspects we need to consider before changing the eviction policy from value eviction to full eviction?

Value eviction keeps all document metadata in memory; full eviction does not.
Let's say you do a get on a key that does not exist. In value eviction mode you instantly know that the key is not there, since checking is a memory-only operation. In full eviction mode, if the metadata for that key is not in memory, then you have to do a disk fetch to be sure that the key does not exist.
Basically, any operation that requires knowledge of a key's metadata may require a background fetch. Other operations that may be slow when the metadata is not in memory are CAS sets (check and set only if the value has not changed), appends, prepends, increments, and decrements. Also keep in mind that the extra disk activity may cause disk contention that affects other parts of Couchbase.
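To make those cases concrete, here is a minimal sketch using the Couchbase Python SDK (the connection string, credentials, bucket, and keys are placeholders) of two operations whose cost depends on whether metadata is resident in RAM:

    from couchbase.auth import PasswordAuthenticator
    from couchbase.cluster import Cluster
    from couchbase.options import ClusterOptions
    from couchbase.exceptions import CasMismatchException, DocumentNotFoundException

    cluster = Cluster("couchbase://localhost",
                      ClusterOptions(PasswordAuthenticator("user", "password")))
    collection = cluster.bucket("example-bucket").default_collection()

    # Get on a missing key: under value eviction the miss is confirmed from
    # in-memory metadata alone; under full eviction it may need a disk fetch.
    try:
        collection.get("no-such-key")
    except DocumentNotFoundException:
        pass

    # CAS-guarded replace: the server must compare the stored CAS (metadata),
    # which again may mean a background fetch under full eviction.
    current = collection.get("some-key")
    try:
        collection.replace("some-key", {"updated": True}, cas=current.cas)
    except CasMismatchException:
        pass  # the document changed under us; retry or give up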
The NRU (not recently used) eviction algorithm is the same across both full eviction and value eviction, and Couchbase will do its best to keep your working set in memory.
I would recommend getting an idea of what your workload looks like before switching modes, and testing with full eviction first, because you may see performance issues that will vary depending on your workload.

In addition to Mike's answer, it's worth mentioning Bloom filters here, which are a very powerful feature of Couchbase and can decrease trips to disk significantly. Bloom filters are also enabled in value-only ejection, but Couchbase really leverages their functionality in full ejection mode. I was in a situation where the system had reached its limit with value-only ejection buckets, so I tested the two eviction modes, and eventually full ejection ended up being far superior - at least for my case.

Related

How does Scylla handle aggressive memtable flush & compaction on a write workload?

How does Scylla guarantee/keep write latency low for a write workload, given that more writes would produce more memtable flushes and compactions? Is there throttling for this? It would be really helpful if someone could answer.
In essence, Scylla provides consistent low latency by parallelizing the problem, and then properly prioritizing user-facing vs. back-office tasks.
Parallelizing - Scylla uses a shard-per-thread architecture. Each thread is responsible for all activities for its token range: reads, writes, compactions, repairs, etc.
Prioritizing - Each thread schedules its work according to the priorities of the tasks. User-facing tasks like reads (queries) and writes (commitlog) receive the highest priority. Back-office tasks such as memtable flushes, compaction and repair are only done when there are spare cycles. Which - given the nanosecond scale of modern CPUs - there usually are.
If there are not enough spare cycles, and RAM or Disk start to fill, Scylla will bump the priority of the back-office tasks in order to save the node. So that will, in fact, inject some latency. But that is an indication that you are probably undersized, and should add some resources.
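As a toy model of that prioritization (this is not Scylla code; the task names, priorities, and threshold are invented for the sketch), think of a priority queue where back-office tasks jump ahead of user-facing ones only once memory pressure crosses a threshold:

    import heapq

    FOREGROUND, BACKGROUND = 0, 1  # lower value = runs first

    def run(tasks, memory_pressure, bump_threshold=0.8):
        queue = []
        for seq, (kind, name) in enumerate(tasks):
            prio = kind
            # Under pressure, flushes/compactions are bumped to save the node.
            if kind == BACKGROUND and memory_pressure >= bump_threshold:
                prio = FOREGROUND - 1
            heapq.heappush(queue, (prio, seq, name))
        while queue:
            _, _, name = heapq.heappop(queue)
            print("running:", name)

    tasks = [(FOREGROUND, "read query"), (BACKGROUND, "memtable flush"),
             (FOREGROUND, "commitlog write"), (BACKGROUND, "compaction")]
    run(tasks, memory_pressure=0.5)  # normal: queries and writes go first
    run(tasks, memory_pressure=0.9)  # pressured: back-office work goes first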
I would recommend starting with the Scylla Architecture whitepaper at https://go.scylladb.com/real-time-big-data-database-principles-offer.html. There are also many in-depth talks from Scylla developers at https://www.scylladb.com/resources/tech-talks/
For example, https://www.scylladb.com/2020/03/26/avi-kivity-at-core-c-2019/ talks at great depth about shard-per-core.
https://www.scylladb.com/tech-talk/oltp-or-analytics-why-not-both/ talks at great depth about task prioritization.
Memtable flush is more urgent than regular compaction: on one hand we want to flush late, in order to create fewer sstables in level 0, and on the other we want to evacuate memory from RAM. We have a memory controller which automatically determines the flush condition. Flushing is done in the background while operations continue to go to the commitlog, which gets flushed according to the configured criteria.
Compaction is more of a background operation, and we have controllers for it too. Go ahead and search the blog for "compaction".

MongoDB: Disk I/O % utilization on Data Partition has gone

I recently got this alert from MongoDB Atlas:
Disk I/O % utilization on Data Partition has gone above 70 on nvme2n1
But I have no idea how to localize the problem: which query, index, part of the code, or collection is causing it.
How can I analyze this to find the root cause?
Not an answer, but I have seen that many people face a similar problem.
In my case the root cause was: we had a collection of huge documents that contain an array of data (in fact, a list of coordinates with some metadata), and we updated each one as many times as it had coordinates (once per new coordinate), plus some additional operations.
As far as I know, MongoDB cannot fetch just part of a document; it fetches the full document. When we fetch many different, big documents, they do not fit into MongoDB's in-memory cache, and every access goes to the hard disk, which led to this issue.
So we just split each of these documents into several smaller ones, and this fixed the issue. While we need frequent access to update/add this data, we keep it in different documents, and finally, after the process is done, we gather all these documents back into one big document for "history check" purposes.
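For illustration, that split maps roughly onto the well-known bucket pattern. A pymongo sketch (the collection and field names, and the 1000-entry cap, are all made up for the example):

    from pymongo import MongoClient

    coll = MongoClient()["tracking"]["track_buckets"]

    def add_coordinate(track_id, coord):
        # Push into a bucket document that still has room; upsert creates a
        # fresh bucket once every existing one is full.
        coll.update_one(
            {"track_id": track_id, "count": {"$lt": 1000}},
            {"$push": {"coords": coord}, "$inc": {"count": 1}},
            upsert=True,
        )

    def gather(track_id):
        # After the process is done, stitch the buckets back together for
        # the "history check" document.
        coords = []
        for bucket in coll.find({"track_id": track_id}).sort("_id", 1):
            coords.extend(bucket["coords"])
        return {"track_id": track_id, "coords": coords}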
Recently, we hit the alert Disk I/O % utilization on Data Partition has gone above 90 on MongoDB Atlas after an instance reboot during maintenance. After a discussion with the Atlas support folks, we came to clearly understand this metric.
Understanding Disk I/O % Utilization
The definitions of Disk I/O % Utilization and Disk I/O % utilization on Data Partition, per the docs:
Disk I/O % Utilization alerts indicate that the percentage of time during which requests are being issued reaches a specified threshold.
Disk I/O % utilization on Data Partition occurs if the percentage of time during which requests are being issued to any partition that contains the MongoDB collection data meets or exceeds the threshold.
Two traps in iostat: %util and svctm
Device saturation occurs when this value is close to 100% for devices serving requests serially. But for devices serving requests in parallel, such as RAID arrays and modern SSDs, this number does not reflect their performance limits.
This means if there was even just one I/O operation in progress for a given time period, the operating system would report 100% Disk Util, as the disk was in use 100% of that time.
Thus, the disk utilization percentage by itself is NOT an indicator of stress on the disk relative to its maximum IOPS capacity.
Having disk utilization at 100% does not in itself imply there is an issue. Disk utilization is the percentage of time requests are issued to any partition containing the MongoDB collection data. This includes requests from any process, not just MongoDB processes. Modern disk storage can sustain multiple I/O operations simultaneously, so having a ~100% utilization is not unusual, because it just means that the disk is constantly processing at least one operation during the 100% interval.
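In practice (on a Linux host with sysstat installed), you can watch iostat -x 1 and read %util together with the latency and queue columns such as await and aqu-sz: if %util sits near 100% but latencies stay low and the queue stays short, the disk is merely busy, not saturated.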
Conclusion
We should look at a combination of all the available disk-related metrics, as well as IOWait in the System CPU when diagnosing potential disk performance-related issues.
Possible actions to help resolve Disk Utilization % alerts
Optimize your queries
Create an Index to Support Read Operations
Pay attention to Query Selectivity and Covered Query
Use the Atlas Performance Advisor to view slow queries and suggested indexes.
Review Indexing Strategies for possible further indexing improvements.
Analyze Query Performance to review how your queries are using your indexes.
Use the Profiler to analyze and optimize queries with long execution times
Increase hardware resources, such as instance size and IOPS on Atlas
Source: MongoDB documentation
As the alert says, it is due to high utilization of the disk. The most common cause is unoptimized queries with a poor query targeting ratio, or simply reading/writing a lot of documents from/to the disk in a relatively short time window.
In order to identify these queries, start with the Profiler and look for operations with a poor Examined:Returned ratio. You can also refer to the Performance Advisor to see if it suggests any indexes for the inefficient operations. Since the Profiler's window is limited to the last 24 hours, you can also refer to your logs to identify the slow queries.
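As a concrete starting point, here is a pymongo sketch of that Profiler workflow (the database name and slowms threshold are placeholders; note that on Atlas the profile command may not be permitted, with the Profiler UI filling the same role):

    from pymongo import MongoClient

    db = MongoClient()["mydb"]
    db.command("profile", 1, slowms=100)  # record operations slower than 100 ms

    # Surface the slowest queries and their Examined:Returned ratios.
    for op in db["system.profile"].find({"op": "query"}).sort("millis", -1).limit(10):
        examined = op.get("docsExamined", 0)
        returned = op.get("nreturned", 0) or 1
        print(op["ns"], op["millis"], "ms,",
              "examined:returned =", examined, ":", returned)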
Ultimately, the effort to solve this runs in three directions:
Optimize query execution with efficient indexing and filtering strategies
Keep a check on the volume of data being read/written in one go
Increase the IOPS of the cluster
For official reference, check out the documentation here.

Couchbase - Eviction

Couchbase stores the data on disk and also retains it in RAM. Once the high watermark is reached, I guess it starts the eviction process. I assume the data would also be on disk at this point. So does eviction really mean deletion of the document from RAM? Or does it mean deletion of data from RAM and writing it to disk? If it also includes writing to disk, why should data already present on disk be overwritten?
Couchbase only evicts documents that have been persisted to disk. As you say, eviction means clearing the document data from RAM. When using the value eviction strategy, which is the default, Couchbase keeps the key and metadata in RAM and only evicts the document value. With the full eviction strategy, it evicts the key, the metadata, and the value from RAM.
Couchbase first writes to RAM and then asynchronously writes to disk as well. Depending on the configuration, you can have the document along with its metadata in RAM. And yes, when the threshold (the high water mark) is reached, Couchbase will start evicting data (value and/or metadata, per the configuration) from RAM until usage falls to the low water mark.
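If you do decide to change the policy, one way is the bucket settings REST endpoint (a sketch using the requests library; host, credentials, and bucket name are placeholders, and be aware that changing the eviction policy restarts the bucket):

    import requests

    # evictionPolicy accepts "valueOnly" or "fullEviction" per the bucket REST API.
    resp = requests.post(
        "http://localhost:8091/pools/default/buckets/example-bucket",
        auth=("Administrator", "password"),
        data={"evictionPolicy": "fullEviction"},
    )
    resp.raise_for_status()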

Garbage Collection issues on MapPartitions

I currently have a mapPartitions job which is flatMapping each value in the iterator, and I'm running into an issue where there will be major GC costs on certain executions. Some executors will take 20 minutes, 15 of which are pure garbage collection, and I believe that a lot of it has to do with the ArrayBuffer that I am outputting. Does anyone have any suggestions as to how I can do some form of a stream output?
Also, does anyone have any advice in general for tracking down/addressing GC issues in Spark?
Please refer to the documentation below, from the official Spark tuning page. I hope it will at least help give direction to your analysis:
Memory Management Overview
Memory usage in Spark largely falls under one of two categories: execution and storage. Execution memory refers to that used for computation in shuffles, joins, sorts and aggregations, while storage memory refers to that used for caching and propagating internal data across the cluster. In Spark, execution and storage share a unified region (M). When no execution memory is used, storage can acquire all the available memory and vice versa. Execution may evict storage if necessary, but only until total storage memory usage falls under a certain threshold (R). In other words, R describes a subregion within M where cached blocks are never evicted. Storage may not evict execution due to complexities in implementation.
This design ensures several desirable properties. First, applications that do not use caching can use the entire space for execution, obviating unnecessary disk spills. Second, applications that do use caching can reserve a minimum storage space (R) where their data blocks are immune to being evicted. Lastly, this approach provides reasonable out-of-the-box performance for a variety of workloads without requiring user expertise of how memory is divided internally.
Although there are two relevant configurations, the typical user should not need to adjust them as the default values are applicable to most workloads:
spark.memory.fraction expresses the size of M as a fraction of the (JVM heap space - 300MB) (default 0.6). The rest of the space (40%) is reserved for user data structures, internal metadata in Spark, and safeguarding against OOM errors in the case of sparse and unusually large records.
spark.memory.storageFraction expresses the size of R as a fraction of M (default 0.5). R is the storage space within M where cached blocks are immune to being evicted by execution.
The value of spark.memory.fraction should be set in order to fit this amount of heap space comfortably within the JVM’s old or “tenured” generation. See the discussion of advanced GC tuning below for details.
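Separately from the tuning knobs above, the asker's wish for "some form of a stream output" is essentially what a lazy iterator gives you: yield results one at a time instead of accumulating them in an ArrayBuffer. A PySpark sketch of the pattern (the original job was Scala, and the expansion logic here is invented for the example):

    from pyspark.sql import SparkSession

    spark = (SparkSession.builder
             .config("spark.memory.fraction", "0.6")         # the defaults
             .config("spark.memory.storageFraction", "0.5")  # discussed above
             .getOrCreate())

    def expand(partition):
        for value in partition:       # lazy: one input element at a time
            for out in range(value):  # flatMap-style expansion
                yield out             # nothing is buffered per partition

    rdd = spark.sparkContext.parallelize(range(1, 100), 4)
    print(rdd.mapPartitions(expand).count())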

How many requests can MongoDB handle before sharding is necessary?

Does anybody know (from personal experience or official documentation) how many concurrent requests a single MongoDB server can handle before sharding is advised?
If your working set exceeds the RAM you can afford for a single server, or your disk I/O requirements exceed what you can provide on a single server, or (less likely) your CPU requirements exceed what you can get on one server, then you'll need to shard. All these depend tremendously on your specific workload. See http://docs.mongodb.org/manual/faq/storage/#what-is-the-working-set
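One concrete check is to compare the working set against the cache from serverStatus (a pymongo sketch; these fields assume the WiredTiger storage engine):

    from pymongo import MongoClient

    cache = MongoClient().admin.command("serverStatus")["wiredTiger"]["cache"]
    print("cache used:", cache["bytes currently in the cache"])
    print("cache max: ", cache["maximum bytes configured"])
    # A steadily climbing value here means reads keep going to disk.
    print("pages read into cache:", cache["pages read into cache"])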
One factor is hardware, though for read scaling you have replica sets: they reduce the load on the master server by answering read-only queries from replicated data. Another option would be memcaching very frequent and repetitive queries, which would be even faster.
A factor in whether sharding is necessary is data size and variation. When you have to access a wide range of varying data, which would render a server's cache ineffective by spreading accesses across that range, then you would consider sharding. Off-loading work is merely a side effect of this.