I am trying to understand whether read latency stats obtained through
nodetool cfstats
nodetool cfhistograms
will count each read within secondary index queries separately.
I guess the answer might depend on whether secondary index queries are handled by Thrift clients or internally by Cassandra; I don't know that either.
[1] Cassandra - cfstats and meaning of read/write latency
Cassandra read latency is calculated by averaging the time taken by each read query.
A single get or multiget query is considered one read, and the average is calculated over those reads. [StorageProxy.read() is the function where Cassandra collects the time taken for each query.]
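A tiny conceptual sketch of that counting in Python (illustrative only, not Cassandra's actual code; every name here is made up):

    # Conceptual sketch: one latency sample is recorded per read request,
    # no matter how many keys that request touches.
    read_latencies_ms = []

    def record_read(duration_ms):
        # Corresponds to one StorageProxy.read() call: a multiget spanning
        # many keys still contributes a single sample.
        read_latencies_ms.append(duration_ms)

    def average_read_latency_ms():
        return sum(read_latencies_ms) / len(read_latencies_ms) if read_latencies_ms else 0.0

    record_read(1.8)   # get(key)        -> one sample
    record_read(4.2)   # multiget(keys)  -> still one sample
    print(average_read_latency_ms())     # -> 3.0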
What is the maximum theoretical number of parallel requests that we can squeeze from a single mongodb instance before deciding to shard?
Considering the database and indexes fit in memory and all requests are find() queries fetching a single document based on an indexed field. The hosting OS is Ubuntu, the data partition is an SSD. ulimits are set to max.
On my laptop, a simple test against a single instance reaches nearly 40k/sec; after that the average execution times start to increase significantly. I am wondering what the upper theoretical limit could be.
It depends. If your active dataset fits in memory - if most requests don't need to perform any disk I/O - then you can achieve 24k+ requests pretty easily. If not on a (bigger) single machine, then at least use a replica set cluster with multiple secondaries.
If the active dataset is much larger than the available RAM, then you have the same problem as with any other database. The advantage of MongoDB's newer engine WiredTiger (since v3.0) is transparent compression - it can reduce the amount of data and I/O and thus improve performance - even though compression adds CPU load.
For more performance it really helps:
if the most accessed documents are small, so it takes less time to load them, transfer them, and deserialize them in your app
if you use projections in find(), for the same reasons
if you use bulk operations to reduce networking I/O and context switches (the last two points are sketched right after this list)
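A minimal pymongo sketch of the projection and bulk-operation points (the database, collection, and field names are assumptions):

    from pymongo import MongoClient, InsertOne

    coll = MongoClient("mongodb://localhost:27017")["testdb"]["events"]

    # Projection: return only the fields the app needs, so less data is
    # transferred and deserialized per request.
    doc = coll.find_one({"user_id": 42}, {"_id": 0, "user_id": 1, "status": 1})

    # Bulk write: one round trip for many documents instead of one per insert,
    # reducing networking I/O and context switches.
    ops = [InsertOne({"user_id": i, "status": "new"}) for i in range(1000)]
    result = coll.bulk_write(ops, ordered=False)
    print(result.inserted_count)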
Even MongoDB itself has an option to limit the maximum number of incoming connections. It defaults to 64k.
For more information you can refer to this link.
As I understand it, when you perform a query that doesn't filter on a single partition key value, you perform a cross-partition query. For it to be executed, the query is sent to all physical partitions of your Cosmos DB collection, executed in parallel on each of them, and the results are then merged and returned.
As you scale to tens of thousands of requests per second, that means that each of the tens of thousands of requests is executed on each physical partition.
Does this mean that eventually each partition will reach the limit of requests per second it can serve, and horizontal scaling will no longer give any benefit? Because every new physical partition Cosmos DB adds will still need to serve all incoming requests, so it's not adding new throughput capacity, only storage.
The downstream implication being that even if at a small scale you're ok with incurring the increased RU cost for cross-partition queries, to truly be able to scale indefinitely your data model should ensure queries hit only one partition (possibly by denormalizing it).
Yes, cross-partition queries will prevent a database like Cosmos DB (or any horizontally scalable database) from scaling.
Databases like Cosmos DB provide unlimited scale because they scale horizontally. The objective for your partition strategy should be to answer your high-volume queries with one, or at a minimum, a bounded set of partitions. The effort around partition strategy is to choose a property that is nearly always passed in queries. Denormalization is generally more a function of modeling data around requests; it has less to do with partitioning directly.
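As a rough illustration with the Python azure-cosmos SDK (the endpoint, container, and queries here are assumptions, not anything from the question), the difference between targeting one partition and fanning out looks like this:

    from azure.cosmos import CosmosClient

    client = CosmosClient("https://<account>.documents.azure.com:443/", credential="<key>")
    container = client.get_database_client("appdb").get_container_client("orders")

    # Single-partition query: the partition key value is supplied, so the request
    # is routed to one physical partition and stays cheap as the container grows.
    single = list(container.query_items(
        query="SELECT * FROM c WHERE c.customerId = @id",
        parameters=[{"name": "@id", "value": "customer-123"}],
        partition_key="customer-123",
    ))

    # Cross-partition query: no partition key, so it fans out to every physical
    # partition and its RU cost grows with the number of partitions.
    fan_out = list(container.query_items(
        query="SELECT * FROM c WHERE c.total > 100",
        enable_cross_partition_query=True,
    ))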
If you would like to learn more about partitioning and modeling with Cosmos DB I highly recommend watching this video. It presents the topics very well, Data modeling & partitioning: What every relational database dev needs to know
I want to transfer data from Oracle to MongoDB using Apache NiFi. Oracle has a total of 9 million records.
I have created a NiFi flow using the QueryDatabaseTable and PutMongoRecord processors. This flow is working fine but has some performance issues.
After starting the NiFi flow, the records queued between SplitJson -> PutMongoRecord keep increasing.
Is there any way to slow down the rate at which the SplitJson processor puts records into the queue?
OR
Increase the rate of insertion in PutMongoRecord?
Right now, 100k records are inserted in 30 minutes. How can I speed up this process?
@Vishal: the solution you are looking for is to increase the concurrency of PutMongoRecord.
You can also experiment with the batch size in the configuration tab.
You can also reduce the execution time of SplitJson. However, you should remember that this processor is going to take one FlowFile and make a lot of FlowFiles regardless of the timing.
How much you can increase concurrency is going to depend on how many NiFi nodes you have and how many CPU cores each node has. Be experimental and methodical here. Move up in single increments (1-2-3-etc.) and test your flow at each increment. If you only have 1 node, you may not be able to tune the flow to your performance expectations. Tune the flow instead for stability and as fast as you can get it, then consider scaling.
How much you can increase concurrency and batch size is also going to depend on the MongoDB data source and the total number of connections you can get from NiFi to Mongo.
In addition to Steven's answer, there are two properties on QueryDatabaseTable that you should experiment with:
Max Results Per Flowfile
Use Avro logical types
With the latter, you might be able to do a direct shift from Oracle to MongoDB because it'll convert Oracle date types into Avro ones, and those should in turn be converted directly into proper Mongo date types. Max Results Per Flowfile should also allow you to specify appropriate batching without having to use the extra processors.
Is Cassandra with multiple nodes a good choice as a replacement for single-node PostgreSQL? The data being stored is a time series. It is already tens of gigabytes and is expected to grow. The database should be integrated into a pipeline with Apache Spark as the source and possibly the result destination.
What is needed:
1) redundancy: one node failure shouldn't stop the system (all data should be available)
2) speed: more nodes - less time per single insert/select for one client
3) concurrency: more nodes - better speed for simultaneous inserts/selects from different clients
For your points:
1) This is up to you when choosing the keyspace replication factor RF and the consistency level CL of your inserts and selects. To be available and consistent while handling the loss of one node, you need RF=3 on your keyspace and CL.QUORUM for both inserts and selects (for QUORUM you need RF/2+1 nodes online, 3/2+1=2 with integer division; with RF=5 you would need 5/2+1=3 nodes online, so you can handle the loss of 2).
2) A single request is handled by a single node acting as coordinator in your cluster. You do not gain much performance here with single, synchronous requests. If you issue many requests and use async execution, you will spread your requests across more nodes and gain performance.
3) With more clients you have the same effect - the coordinator is picked at random (well, there is the TokenAwarePolicy, which will pick an appropriate coordinator).
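A hedged sketch with the Python cassandra-driver tying these points together (the contact points, keyspace, and table are assumptions): RF=3 with QUORUM for point 1), token-aware routing and asynchronous requests for points 2) and 3).

    from datetime import datetime, timedelta
    from cassandra import ConsistencyLevel
    from cassandra.cluster import Cluster
    from cassandra.policies import TokenAwarePolicy, DCAwareRoundRobinPolicy

    # 2)/3) Token-aware routing picks a replica that owns the data as coordinator
    # instead of a randomly chosen node.
    cluster = Cluster(
        ["10.0.0.1", "10.0.0.2", "10.0.0.3"],
        load_balancing_policy=TokenAwarePolicy(DCAwareRoundRobinPolicy()),
    )
    session = cluster.connect()

    # 1) RF=3 means every row lives on three replicas; QUORUM (RF//2 + 1 = 2)
    # means reads and writes still succeed, consistently, with one node down.
    session.execute("""
        CREATE KEYSPACE IF NOT EXISTS metrics
        WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 3}
    """)
    session.execute("""
        CREATE TABLE IF NOT EXISTS metrics.samples (
            sensor_id text, ts timestamp, value double,
            PRIMARY KEY (sensor_id, ts))
    """)
    insert = session.prepare("INSERT INTO metrics.samples (sensor_id, ts, value) VALUES (?, ?, ?)")
    insert.consistency_level = ConsistencyLevel.QUORUM

    # 2) Many asynchronous requests are spread over coordinators and run in parallel.
    now = datetime.utcnow()
    futures = [
        session.execute_async(insert, ("sensor-1", now + timedelta(seconds=i), float(i)))
        for i in range(100)
    ]
    for f in futures:
        f.result()  # raises if an individual insert failed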
You've mentioned that you use time series data.
1. Naturally, you can vary the replication factor and consistency level. So yes, Cassandra would be good as a replacement.
2. The inserts would be really fast as Cassandra writes to memory first. So yes, Cassandra would be good as a replacement.
3. Cassandra has linear horizontal scalability. So yes, Cassandra would be good as a replacement.
The drawback is that Cassandra is a key-value store, so you should model the table structure around the queries, whereas PostgreSQL as an RDBMS is more flexible and supports the whole set of SQL operations.
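As a rough sketch of what modeling around the queries means for time series (table and column names are assumptions): the partition key mirrors how the data is read, for example one sensor per day, with rows clustered by time.

    from cassandra.cluster import Cluster

    session = Cluster(["10.0.0.1"]).connect("metrics")  # hypothetical contact point and keyspace

    # The table is shaped around the query "readings for one sensor over a time
    # range", not around normalized entities as in PostgreSQL.
    session.execute("""
        CREATE TABLE IF NOT EXISTS readings_by_sensor_day (
            sensor_id text,
            day date,
            ts timestamp,
            value double,
            PRIMARY KEY ((sensor_id, day), ts)
        ) WITH CLUSTERING ORDER BY (ts DESC)
    """)

    # This query hits a single partition and reads rows already sorted by time.
    rows = session.execute(
        "SELECT ts, value FROM readings_by_sensor_day "
        "WHERE sensor_id = %s AND day = %s AND ts >= %s",
        ("sensor-1", "2020-01-15", "2020-01-15 00:00:00"),
    )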
You can read more about some pros and cons of using Cassandra with time series data here and here.
I read the MongoDB sharding guide, but I am not sure what kind of shard key suits my application. Any suggestions are welcome.
My application is a huge database of network events. Each document has a time field and a couple of network-related values like IP address and port number. I have an insert rate of 100-1000 items per second. In my experience with a single mongod, one single shard has no problem with this insert rate.
But I extensively use the aggregation framework on a huge amount of data. All the aggregations have a time limit, i.e. mostly the recent month or the recent week. I tested the aggregations on one single mongod: a query with a response time of 5 minutes while insertion is off can take as long as two hours if 200 inserts per second are running.
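For concreteness, the kind of time-bounded aggregation I mean ("top IP addresses in the recent month") looks roughly like this in pymongo; the collection and field names are simplified placeholders:

    from datetime import datetime, timedelta
    from pymongo import MongoClient

    events = MongoClient("mongodb://localhost:27017")["netdb"]["events"]

    month_ago = datetime.utcnow() - timedelta(days=30)
    pipeline = [
        {"$match": {"time": {"$gte": month_ago}}},            # only the recent window
        {"$group": {"_id": "$src_ip", "hits": {"$sum": 1}}},  # count events per IP
        {"$sort": {"hits": -1}},
        {"$limit": 10},
    ]
    top_ips = list(events.aggregate(pipeline, allowDiskUse=True))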
Can I improve mongodb aggregation query response time by sharding?
If yes, I think I have to use time as the shard key, because in my application every query is run over a time range (e.g. top IP addresses in the recent month), and if we could separate the shard where inserting takes place from the shard the query is working on, MongoDB could work much faster.
But the documentation says
"If the shard key is a linearly increasing field, such as time, then all requests for a given time range will map to the same chunk, and thus the same shard. In this situation, a small set of shards may receive the majority of requests and the system would not scale very well."
So what shall I do?
Can I improve mongodb aggregation query response time by sharding?
Yes.
If you shard your database across different machines, this will provide parallel computing power for aggregation functions. The important thing here is the distribution of the data across your shards: it should be uniformly distributed.
If you choose as your shard key a monotonically increasing (or decreasing) field of the document, like time, and use hashed sharding, this will provide uniform distribution over the cluster.
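A hedged sketch of setting that up from Python with pymongo (the database, collection, and field names are assumptions; the same can be done with sh.shardCollection() in the mongo shell):

    from pymongo import MongoClient

    client = MongoClient("mongodb://mongos-host:27017")  # connect through a mongos router

    # Hashed index on the monotonically increasing time field, then shard the
    # collection on that hashed key so inserts spread evenly across shards.
    client["netdb"]["events"].create_index([("time", "hashed")])
    client.admin.command("enableSharding", "netdb")
    client.admin.command("shardCollection", "netdb.events", key={"time": "hashed"})

Keep in mind that with a hashed shard key a time-range $match still fans out to all shards (a scatter-gather query), so the gain is evenly distributed writes and parallel aggregation rather than shard-targeted range queries.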