Postgres greatest-n-per-group is slow - postgresql

The following query finds the greatest value per day, but it executes too slowly. How can I make it faster?
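Since the query itself isn't shown, here is a sketch of the standard rewrite for greatest-n-per-group. In Postgres the usual fast forms are `SELECT DISTINCT ON (day) ... ORDER BY day, value DESC` backed by an index on `(day, value DESC)`, or a window function. The aggregate-then-join variant of the pattern is demonstrated below with Python's built-in SQLite driver; the `readings` table and its columns are hypothetical stand-ins:

```python
import sqlite3

# Hypothetical table: readings(day, sensor, value). The classic
# greatest-n-per-group rewrite: compute the max per day in a subquery,
# then join back to recover the full winning row.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE readings (day TEXT, sensor TEXT, value REAL)")
conn.executemany(
    "INSERT INTO readings VALUES (?, ?, ?)",
    [
        ("2024-01-01", "a", 3.0),
        ("2024-01-01", "b", 7.5),
        ("2024-01-02", "a", 4.2),
        ("2024-01-02", "b", 1.1),
    ],
)

rows = conn.execute(
    """
    SELECT r.day, r.sensor, r.value
    FROM readings r
    JOIN (SELECT day, MAX(value) AS max_value
          FROM readings GROUP BY day) m
      ON m.day = r.day AND m.max_value = r.value
    ORDER BY r.day
    """
).fetchall()
print(rows)  # [('2024-01-01', 'b', 7.5), ('2024-01-02', 'a', 4.2)]
```

Whether this or `DISTINCT ON` wins in Postgres depends on the data distribution and available indexes, so it is worth running `EXPLAIN ANALYZE` on both forms.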

Related

Measuring the performance of a count query in MongoDB

I'm currently evaluating MongoDB and comparing it to OracleDB. To compare them, I measure the performance of the same dataset in both database environments.
I tried to measure the performance of the count() function in MongoDB but couldn't make it work.
This is what my MongoDB count query looks like at the moment:
db.test2.find({"Interpret": "Apashe"}).count();
It works fine, but how can I measure the time MongoDB took to perform it? I tried the usual
explain("executionStats")
but it doesn't seem to work that way with count.
Any help would be appreciated.
count() is estimated and comes out of collection stats, so benchmarking it against Oracle doesn't seem very useful.
You can try benchmarking countDocuments(), which should provide a meaningful signal. That said, I'm also confused why you decided to benchmark counts: a more sensible starting point would be finds. Once you understand how counts are implemented, you can benchmark them and get a useful signal.
According to the documentation here:
https://docs.mongodb.com/manual/reference/method/cursor.explain/#mongodb-method-cursor.explain
count() is equivalent to the db.collection.find(query).count() construct, so essentially you can measure the find query.

Why postgres is slow updating a simple JSONB field?

I am running PostgreSQL 12 and I have a pretty small users table (~5000 records).
I am logging slow queries and I found that updating JSONB fields is pretty slow. Here is an example:
update "users" set "artifacts" = '[{"xx": "xxx", "xxx": "xxx"}]' where "id" = 1000;
It is a pretty simple update through an index, but on my production node this query shows up in the slow query log (~100ms).
I ran EXPLAIN ANALYZE on it but couldn't get anything useful out of it, at least with my knowledge :)
https://explain.depesz.com/s/2DGg
If I run an UPDATE query on the same table, but on a non-JSONB field the query is super fast.
Any hint?
The slow query log only shows you the times it was slow. How many times did it run when it wasn't slow? You can use pg_stat_statements to help find that out. (You could also log the duration of every query to avoid selection bias, but that might cause excessive log bloat).
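To see whether the statement is slow every time or only occasionally, you can time repeated runs and compare the median to the worst case. A hedged sketch of that idea, with SQLite standing in for Postgres (the table mirrors the question's users/artifacts shape):

```python
import sqlite3
import statistics
import time

# SQLite stands in for Postgres here; the point is the measurement
# pattern, not the storage engine.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, artifacts TEXT)")
conn.execute("INSERT INTO users VALUES (1000, '[]')")

timings = []
for _ in range(200):
    start = time.perf_counter()
    conn.execute(
        "UPDATE users SET artifacts = ? WHERE id = ?",
        ('[{"xx": "xxx"}]', 1000),
    )
    timings.append((time.perf_counter() - start) * 1000)  # milliseconds

# A low median with a high max points to intermittent external causes
# (server load, locks, index maintenance) rather than the statement itself.
print(round(statistics.median(timings), 3), round(max(timings), 3))
```

In production, pg_stat_statements gives you the same distribution (calls, mean, max) without instrumenting anything yourself.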
If I run an UPDATE query on the same table, but on a non-JSONB field the query is super fast.
And when you ran this same query yourself, it was presumably also super fast. Maybe it is only slow when your server is severely overloaded. Is the column indexed? If it has a GIN index, maybe the update had to stop to clean up the fastupdate pending list.

mongo db -- time query along with geolocation query

I need to run a geospatial query that is filtered by time (the past 5 hours). I'm considering the following approaches, but I'm not sure which will yield the fastest query:
1. Migrate old data (older than 5 hours) to an archive table (runs periodically in the background).
2. Geospatial query with a time box (query objects within a time frame). I think this one might be computationally expensive at scale?
3. Geospatial query, then filter the results in the application (drop everything older than the time constraint).
I'm really thinking about how this will work at large scale.
Thanks!
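Whichever option wins, the time box itself is just a cutoff timestamp, and the difference between options 2 and 3 is whether the comparison runs on the server or in the application. A minimal sketch of that cutoff logic, assuming hypothetical document fields `ts` and `loc`:

```python
from datetime import datetime, timedelta, timezone

def time_box(hours=5, now=None):
    """Return the cutoff datetime for a 'past N hours' filter."""
    now = now or datetime.now(timezone.utc)
    return now - timedelta(hours=hours)

# Option 3 (filter in application code), shown on in-memory records.
# Option 2 would instead push the same predicate into the query so the
# server can discard old documents before the geo stage runs.
records = [
    {"ts": datetime(2024, 1, 1, 12, tzinfo=timezone.utc), "loc": (0, 0)},
    {"ts": datetime(2024, 1, 1, 3, tzinfo=timezone.utc), "loc": (1, 1)},
]
now = datetime(2024, 1, 1, 13, tzinfo=timezone.utc)
cutoff = time_box(5, now)
recent = [r for r in records if r["ts"] >= cutoff]
print(len(recent))  # 1
```

In MongoDB terms, option 2 would combine the cutoff with the geo predicate in one filter (e.g. a `$gte` on the timestamp field alongside the geo operator), which generally scales better than fetching everything and filtering client-side.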

Strange behavior in MongoDb large collection

Today I had a super weird issue that I couldn't fix for a couple of hours. It was pretty simple, but it involved a huge collection, so every time I ran a query with a filter I got stuck for 15 minutes with no answer. I now know what the problem was, but I would like to understand why Mongo behaves this way.
My collection is around 50 million records. It's indexed on only one field, datetime, so we can rapidly select the time period to analyze and then extract the data. Our analyses can involve between 20 thousand and 1 million records.
As I use a lot of aggregations my mistake was that in the filter clause I added a $ symbol to one field name.
Instead of:
db.collection.find({field:{$gte:...},...})
I wrote:
db.collection.find({$field:{$gte:...},...})
Running db.currentOp() in both cases looks exactly the same, no difference except $field instead of field, so it seems to be doing the same operation. But the query with the stray $ symbol never finishes and never fails.
I'm curious about what MongoDB is trying to do in this case and why it turns into a blocked query that never finishes.

java, Calculating mongodb query execution time

I want to log my query times each time a query is made.
I'm using MongoDB with the Play Framework. I'm simply thinking of subtracting the start time from the end time, for example:
long start = System.currentTimeMillis();
makeQuery();
long queryTime = System.currentTimeMillis() - start;
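One caveat with wall-clock subtraction is that the system clock can jump (NTP adjustments), so a monotonic clock is safer; in Java that would be System.nanoTime(). The same stopwatch pattern as a runnable Python sketch, with a stand-in function in place of the real query:

```python
import time

def timed(fn, *args, **kwargs):
    """Run fn and return (result, elapsed_seconds) using a monotonic clock."""
    start = time.perf_counter()
    result = fn(*args, **kwargs)
    elapsed = time.perf_counter() - start
    return result, elapsed

# The lambda is a stand-in; a real call would invoke the database driver.
result, elapsed = timed(lambda: sum(range(1000)))
print(result, elapsed >= 0.0)
```

Wrapping calls like this keeps the timing logic in one place instead of scattering start/stop pairs through the code.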
Is there any better way?
You probably want to use MongoDB's database profiler. That way you keep the timing out of your own code (less work to maintain), and it gives you more options for inspecting MongoDB's behaviour.