I want to design my cluster and set proper sizes for the key_cache and row_cache,
depending on the size of the tables/column families.
Similar to MySQL, is there something like this in Cassandra/CQL?
SELECT table_name AS "Tables",
round(((data_length + index_length) / 1024 / 1024), 2) "Size in MB"
FROM information_schema.TABLES
WHERE table_schema = "$DB_NAME";
Or is there any other way to look at the data size and index size separately?
Or what configuration would each node need to hold my table completely in memory,
ignoring the replication factor?
The key cache and the row cache work rather differently. It's important to understand the difference when calculating sizes.
The key cache is a cache of offsets within files, i.e. of row locations: it is basically a map from (key, file) to offset. Scaling the key cache size therefore depends on the number of rows, not the overall data size. You can find the number of rows from the 'Number of keys' parameter in 'nodetool cfstats'. Note this is per node, not a total, but that's what you want for deciding on cache sizes. The default size is min(5% of heap (in MB), 100 MB), which is probably sufficient for most applications. A subtlety here is that rows may exist in multiple files (SSTables), the number depending on your write pattern; however, this duplication is accounted for (approximately) in the estimated count from nodetool.
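As a rough sketch (the keyspace and table names here are made up, and the exact cfstats labels vary a little between Cassandra versions), the sizing exercise looks like this:

nodetool cfstats my_keyspace.my_table | grep -i 'number of keys'
# e.g. "Number of keys (estimate): 2000000" on this node.
# A key cache entry is small (key + file offset + overhead), so even a
# generous ~200 bytes per entry puts caching every key at ~400 MB here.
# The default min(5% of heap, 100 MB) already holds the hottest keys;
# to raise it, set key_cache_size_in_mb in cassandra.yaml, e.g.:
#   key_cache_size_in_mb: 400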
The row cache caches the actual rows. To get a size estimate for this you can use the 'Space used' parameter in 'nodetool cfstats'. However, the row cache stores deserialized data and only the latest copy, so the size could be quite different (higher or lower).
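Along the same lines, a quick way to see the on-disk size per node and then enable the row cache for just one table (again hypothetical names; the map-style caching syntax shown is the one used by CQL in Cassandra 2.1 and later):

nodetool cfstats my_keyspace.my_table | grep 'Space used (live)'
# Compare that figure with the memory you can spare, then enable the row
# cache for this table only; row_cache_size_in_mb in cassandra.yaml must
# also be set to a non-zero value.
cqlsh -e "ALTER TABLE my_keyspace.my_table WITH caching = {'keys': 'ALL', 'rows_per_partition': 'ALL'};"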
There is also a third, less configurable cache: your OS filesystem cache. In most cases this is actually better than the row cache. It avoids duplicating data in memory, because when using the row cache the data will most likely be in the filesystem cache too. And reading from an SSTable in the filesystem cache was only about 30% slower than the row cache in my experiments (done a while ago, so probably not exact any more, but unlikely to be significantly different). The main use case for the row cache is when you have one relatively small CF that you want to ensure is cached. Otherwise, relying on the filesystem cache is probably best.
In conclusion, the Cassandra defaults of a large key cache and no row cache are the best for most setups. You should only play with the caches if you know your access pattern won't work with the defaults or if you're having performance issues.
I have a huge database (9 billion rows, 3000 columns) currently hosted on Postgres; it is about 30 TB in size. I am wondering if there are practical ways to reduce the size of the database while preserving the same information, as storage is really costly.
If you don't want to delete any data:
Vacuuming. Depending on how many updates/deletes your database performs, there may be a lot of garbage. A database that size is likely full of tables that do not cross vacuum thresholds often (PG13 has a fix for this). Manually running vacuums will mark dead rows for re-use and free up space where the ends of pages are no longer needed (see the commands after this list).
Index management. Indexes bloat over time and should normally be smaller than your tables. Re-indexing (concurrently) will give you some space back, or allow re-use of existing pages.
Data de-duplication / normalisation. See where you can remove data from tables where it is not needed or is already represented elsewhere in the database.
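As a minimal sketch of the first two points (the database, table and index names below are placeholders; REINDEX ... CONCURRENTLY needs PostgreSQL 12+, on older versions create a replacement index concurrently and drop the old one):

# Reclaim dead-row space and report what was done:
psql -d mydb -c "VACUUM (VERBOSE, ANALYZE) big_table;"
# Rebuild a bloated index without blocking writes (PostgreSQL 12+):
psql -d mydb -c "REINDEX INDEX CONCURRENTLY big_table_created_at_idx;"
# Spot the biggest offenders by comparing table size to index size:
psql -d mydb -c "SELECT relname,
                        pg_size_pretty(pg_table_size(oid))   AS table_size,
                        pg_size_pretty(pg_indexes_size(oid)) AS index_size
                 FROM pg_class
                 WHERE relkind = 'r'
                 ORDER BY pg_indexes_size(oid) DESC
                 LIMIT 10;"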
OrientDB 2.0.5
I have a database in which we create a non-unique index on 2 properties of a class called indexstat.
The two properties which make up the index are a string identifier plus a long timestamp.
Data is created in batches of a few hundred records every 5 minutes. After a few hours old records are deleted.
The file listing below shows the files related to that table.
Question:
Why is the .irs file (which, according to the documentation, is related to non-unique indexes) so monstrously huge after a few hours? It is 298056704 bytes larger than the actual data (.irs size - .cpm size - .pcl size).
I would think the index would be smaller than the actual data.
Second question:
What is best practice here? Should I be using unique indexes instead of non-unique ones? Should I find a way to make the data in the index smaller (e.g. use longs instead of strings as identifiers)?
Below are file names and the sizes of each.
indexstat.cpm      727778304
indexstatidx.irs  1799095296
indexstatidx.sbt      263168
indexstat.pcl      773260288
This is repeated for a few tables where the index size is larger than the database data.
The internals of *.irs files are organised in such a way that when you delete something from an index, there is an unused hole left in the file. At some point, when about half of the file space is wasted, those unused holes come into play again and become available for reuse and allocation. That is done for performance reasons, to lower the index data fragmentation. In your case this means that sooner or later the *.irs file will stop growing, and its maximum size should be around 2-3 times larger than the maximum observed size of the corresponding *.pcl file, assuming a single stat record is not much bigger than the id-timestamp pair.
Regarding the second question, in the long run it is almost always better to use the most specific/strict data types to model the data and the most specific/strict index types to index it.
This link points to a discussion about the index file; maybe it can help you.
For the second question, the index should be chosen according to your purpose and your data (not vice versa). The data type (long, string) should be the one that best represents your fields (and here, too, if for example an int is sufficient for your values, it is pointless to use a long). The same goes for the index: if you need to allow duplicates the choice will be non-unique, if you need an index that supports range queries choose an SB-tree instead of a hash, and so on.
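Purely as a sketch of the syntax, run through the OrientDB console (the database URL, credentials, and the property names 'ident' and 'ts' are placeholders for the string identifier and long timestamp in the question):

$ORIENTDB_HOME/bin/console.sh \
  "CONNECT remote:localhost/stats admin admin; \
   CREATE INDEX indexstat_ident_ts ON indexstat (ident, ts) NOTUNIQUE;"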
In my environments I can have DB of 5-10 GB or DB of 10 TB (video recordings).
Focusing on the 5-10 GB case: if I keep the default settings for prealloc and small-files, I can actually lose 20-40% of the disk space because of allocations.
In my production environments, the disk size can be 512G, but user can limit DB allocation to only 10G.
To implement this, I have a scheduled task that deletes the old documents from the DB when the DB dataSize reaches a certain threshold.
I can't use capped collections (GridFS, sharding limitations, cannot delete random documents...), and I can't use the --no-prealloc/small-files flags, because I need file inserts to be efficient.
So what happens is this: if dataSize gets to 10 GB, the fileSize will be at least 12 GB, so I need to take that into consideration and lower the threshold by 2 GB (and lose a lot of disk space).
What I do want is to tell mongo to pre-allocate all the 10 GB the user requested, and to disable further pre-alloc.
For example, running mongod with --no-prealloc and --small-files, but pre-allocate in advance all the 10 GB.
Another protection I gain here is protecting the user against sudden disk-full errors. If he regularly downloads Game of Thrones episodes to the same drive, he can't take space away from the DB's 10 GB, since it's already pre-allocated.
(using C# driver)
I think I found a solution: you might want to look at the --quota and --quotaFiles command-line options. In your case, you also might want to add the --smallfiles option. So
mongod --smallfiles --quota --quotaFiles 11
should give you a size of exactly 10224 MB for your data, which, adding the default namespace file size of 16 MB, equals your target size of 10 GB, excluding indices.
The following applies to regular collections as per documentation. But since metadata can be attached to files, it might very well apply to GridFS as well.
MongoDB uses what is called a record to store data. A record consists of two parts: the actual data and something called "padding". The padding is basically unused space reserved in case the document grows in size. The reason for it is that a document (or a file chunk in GridFS) never gets fragmented, to keep query performance up. Without padding, whenever a document or file chunk grew it would have to be moved to a different location in the datafile(s), which can be a very costly operation in terms of IO and time. So with the default settings, if the document or file chunk grows, the padding is used instead of moving the data, thus reducing the need to move data around in the datafile(s) and thereby improving performance. Only if the growth of the data exceeds the preallocated padding is the document or file chunk moved within the datafile(s).
The default strategy for preallocating padding space is "usePowerOf2Sizes", which determines the padding by taking the document size and using the next power of two as the size preallocated for the document. Say we have a 47-byte document: the usePowerOf2Sizes strategy would preallocate 64 bytes for it, resulting in a padding of 17 bytes.
There is another preallocation strategy, however, called "exactFit". It determines the padding space by multiplying the document size by a dynamically computed "paddingFactor". As far as I understand, the padding factor is determined by the average document growth in the respective collection. Since we are talking about static files in your case, the padding factor should always be 0, and because of this there should not be any "lost" space any more.
So I think a possible solution would be to change the allocation strategy for both the files and the chunks collection to exactFit. Could you try that and share your findings with us?
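If I remember the MMAPv1-era commands correctly, switching a collection away from powers-of-two allocation was done via collMod; a sketch against the default GridFS collection names ("mydb" is a placeholder, and you should check the collMod documentation for your exact server version before relying on this):

# Switch fs.files and fs.chunks to exact-fit allocation (mongo shell via --eval):
mongo mydb --eval 'printjson(db.runCommand({collMod: "fs.files",  usePowerOf2Sizes: false}))'
mongo mydb --eval 'printjson(db.runCommand({collMod: "fs.chunks", usePowerOf2Sizes: false}))'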
Can anyone explain how index files are loaded into memory while searching?
Is the whole file (fnm, tis, fdt, etc.) loaded at once or in chunks?
How are individual segments loaded, and in which order?
How can a Lucene index be encrypted?
The main point of having index segments is that you can rarely load the whole index into memory.
The most important limitation taken into account while designing the index format is that disk seek time is relatively long (on platter-based hard drives, which are still the most widely used). A good estimation is that the transfer time per byte is about 0.01 to 0.02 μs, while the average seek time of the disk head is about 5 ms!
So the part that is kept in memory is typically only the dictionary, used to find out the beginning block of the postings list on the disk*. The other parts are loaded only on-demand and then purged from the memory to make room for other searches.
As for encryption, it depends on whether you need to keep the index encrypted all the time (even when in memory) or if it suffices to encrypt only the index files. As for the latter, I think that an encrypted file system will be enough. As for the former, it is also certainly possible, as different index compression techniques are already in place. However, I don't think it's widely used, as the first and foremost requirement for full-text engine is speed.
[*] It's not really that simple, as we're performing binary searches against the dictionary, so we need to ensure that all entries in the first structure have equal length. As that's clearly not the case with normal words in a dictionary, and applying padding would be too costly (think of the word lengths of some chemical substances), we actually maintain two levels of dictionary: the first one (which needs to fit in memory and is stored in .tii files) keeps a sorted list of starting positions of terms in the second index (.tis files). The second index is then a concatenated array of all terms in increasing order, along with pointers to the sectors in the .frq file. The second index often fits in memory and is loaded at the start, but that can be impossible, e.g. for bigram indexes. Also note that for some time Lucene by default doesn't use individual files, but so-called compound files (with the .cfs extension) to cut down the number of open files.
We're still evaluating Cassandra for our data store. As a very simple test, I inserted a value for 4 columns into the Keyspace1/Standard1 column family on my local machine amounting to about 100 bytes of data. Then I read it back as fast as I could by row key. I can read it back at 160,000/second. Great.
Then I put in a million similar records, all with keys in the form of X.Y where X in (1..10) and Y in (1..100,000), and I queried for a random record. Performance fell to 26,000 queries per second. This is still well above the number of queries we need to support (about 1,500/sec).
Finally I put ten million records in from 1.1 up through 10.1000000 and randomly queried for one of the 10 million records. Performance is abysmal at 60 queries per second and my disk is thrashing around like crazy.
I also verified that if I ask for a subset of the data, say the 1,000 records between 3,000,000 and 3,001,000, it returns slowly at first and then as they cache, it speeds right up to 20,000 queries per second and my disk stops going crazy.
I've read all over that people are storing billions of records in Cassandra and fetching them at 5-6k per second, but I can't get anywhere near that with only 10 million records. Any idea what I'm doing wrong? Is there some setting I need to change from the defaults? I'm on an overclocked Core i7 box with 6 GB of RAM, so I don't think it's the machine.
Here's my code to fetch records, which I'm spawning across 8 threads, each asking for one value from one column via row key:
ColumnPath cp = new ColumnPath();
cp.Column_family = "Standard1";
cp.Column = utf8Encoding.GetBytes("site");
string key = (1+sRand.Next(9)) + "." + (1+sRand.Next(1000000));
ColumnOrSuperColumn logline = client.get("Keyspace1", key, cp, ConsistencyLevel.ONE);
Thanks for any insights
Purely random reads are about worst-case behavior for the caching that your OS (and Cassandra, if you set up the key or row cache) tries to do.
If you look at contrib/py_stress in the Cassandra source distribution, it has a configurable stdev to perform random reads but with some keys hotter than others. This will be more representative of most real-world workloads.
Add more Cassandra nodes and give them lots of memory (-Xms / -Xmx). The more Cassandra instances you have, the more the data will be partitioned across the nodes and the more likely it is to be in memory or more easily accessed from disk. You'll be very limited trying to scale a single workstation-class CPU. Also, check the default -Xms/-Xmx setting. I think the default is 1 GB.
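For reference, the heap is set in the startup scripts rather than in cassandra.yaml. On newer releases that is conf/cassandra-env.sh (very old releases put raw -Xms/-Xmx flags in bin/cassandra.in.sh instead); a minimal sketch:

# conf/cassandra-env.sh - set min and max heap to the same value to
# avoid heap-resizing pauses; these end up as -Xms4G -Xmx4G on the JVM:
MAX_HEAP_SIZE="4G"
HEAP_NEWSIZE="800M"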
It looks like you haven't got enough RAM to store all the records in memory.
If you swap to disk then you are in trouble, and performance is expected to drop significantly, especially if you are doing random reads.
You could also try benchmarking some other popular alternatives, like Redis or VoltDB.
VoltDB can certainly handle this level of read performance as well as writes and operates using a cluster of servers. As an in-memory solution you need to build a large enough cluster to hold all of your data in RAM.