Query (of class "PhabricatorDaemonLogQuery") overheated: examined more than 10 raw rows without finding 1 visible objects - daemon

I got this error: Query (of class "PhabricatorDaemonLogQuery") overheated: examined more than 10 raw rows without finding 1 visible objects.
Phabricator had been running fine, and I don't know where this error came from. I restarted the daemons and Phabricator started running normally again. What I actually want to know is what this error means.

Related

AnyLogic model stops without message

I have created a model to generate a product that will be cycled through a list of machines. Technically the product list is for a single-day run, but I run the model for long durations to stabilise the model output.
The model runs properly for months, until around 20 months, when it suddenly stops without any error message, as shown in the screenshot. I do not know how to debug this since I do not know where the error comes from.
Does anyone have a similar encounter and could advise on how to approach this issue? Could it be an issue of memory overload?
Without more details it's hard to pinpoint the exact reason, but this generally happens when the run is stuck in an infinite while loop or similar. Check all the loops where such a scenario is possible; it's likely that one of them (or more) is causing the issue. A defensive guard like the sketch below can help confirm it.
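For example (a minimal sketch only; AnyLogic action code is plain Java, and machineAvailable() here is a placeholder for whatever condition your loop actually waits on):

int guard = 0;
// machineAvailable() stands in for your loop's real exit condition
while (!machineAvailable()) {
    guard++;
    if (guard > 1000000) { // generous cap; tune to your model's scale
        // error() aborts an AnyLogic run with a message, so a stuck loop
        // fails loudly instead of hanging silently
        error("Loop exceeded 1,000,000 iterations - possible infinite loop");
    }
}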

ParaView crashes when loading a saved state

I had results from two CFD simulations, which I visualized using ParaView. Let me call the two results case A and case B. In principle I had to analyze the same parameters for both cases, so I gave the states of case A and case B the same names but saved them in entirely different folders. There were about four states for each case (with the same names). I was able to load the states without any problems until yesterday. Today, when I tried to load the very same state that I was able to load yesterday, ParaView crashed. What could be the reason for this?
I thought the problem occurred because I had used the same names for the states of both cases, so I tried loading them after renaming; they still would not load. I also reinstalled ParaView from scratch, yet it crashed when I tried to load the state. The version I am using is 5.7.0.

Variable-length path query runs forever but executes immediately when edge is bidirectional

Problem
We have a graph in which locations are connected by services. Services that have common key-values are connected by service paths. We want to find all the service paths we can use to get from Location A to Location Z.
The following query matches services that go directly from A to Z and service paths that take one hop to get from A to Z:
MATCH p=(origin:location)-[:route]->(:service)-[:service_path*0..1]->
(:service)-[:route]->(destination:location)
WHERE origin.name='A' AND destination.name='Z'
RETURN p;
and runs fine.
But if we expand the search to service paths that may take two hops between A and Z:
MATCH p=(origin:location)-[:route]->(:service)-[:service_path*0..2]->
(:service)-[:route]->(destination:location)
WHERE origin.name='A' AND destination.name='Z'
RETURN p;
the query runs forever.
It never times out or crashes the server - it just runs continuously.
However, if we make the variable-length part of the query bidirectional:
MATCH p=(origin:location)-[:route]->(:service)-[:service_path*0..2]-
(:service)-[:route]->(destination:location)
WHERE origin.name='A' AND destination.name='Z'
RETURN p;
The same query that ran forever now executes instantaneously (~30ms on a dev database with default Postgres config).
More info
Behavior in different versions
This problem occurs in AgensGraph 2.2dev (cloned from GitHub). In AgensGraph 2.1.0, the first query -[:service_path*0..1]-> still works fine, but both the broken query -[:service_path*0..2]-> and the version that works in 2.2dev, -[:service_path*0..2]-, result in an error:
ERROR: btree index keys must be ordered by attribute
This leads us to believe that the problem is related to this commit, which was included as a bug fix in AgensGraph 2.1.1:
fix: Make re-scanning inner scan work
VLE threw "btree index keys must be ordered by attribute" because
re-scanning inner scan is not properly done in some cases. A regression
test is added to check it.
Duplicate paths returned, endlessly
In AgensBrowser v1.0, we are able to get results out by adding a LIMIT to the end of the broken query. The query always returns the maximum number of rows, but the resulting graph is very sparse: direct paths and paths with one hop show up, but only one longer path appears.
In the result set, the shorter paths are returned in individual records as expected, but the first occurrence of a two-hop path is duplicated for the rest of the rows.
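For reference, this is the broken query with a LIMIT appended (100 is simply the value we used):

MATCH p=(origin:location)-[:route]->(:service)-[:service_path*0..2]->
(:service)-[:route]->(destination:location)
WHERE origin.name='A' AND destination.name='Z'
RETURN p LIMIT 100;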
If we return some collection along with the paths, like RETURN p, nodes(p) LIMIT 100, the query again runs infinitely.
(Interestingly, this row duplication also occurs for a slightly different query in which we used the bidirectional fix, but the entire expected graph was returned. This may deserve its own post.)
The EXPLAIN plans are identical for the one-hop and two-hop queries
We could not compare EXPLAIN ANALYZE (you can't ANALYZE a query that never finishes running), but the query plans were identical across all of the queries, those that ran and those that didn't.
Increased logging revealed nothing
We set the logging level for Postgres to DEBUG5, the highest level, and ran the infinitely-running query. The logs showed nothing amiss.
Is this a bug or a problem with our data model?

Can you calculate active users using time series

My Atomist client exposes metrics on commands that are run. Each command is a metric with a username element as well as a status element.
I've been scraping this data for months without resetting the counts.
My requirement is to show the number of active users over a time period, i.e. 1h, 1d, 7d, and 30d, in Grafana.
The original query was:
count(count({Username=~".+"}) by (Username))
This is an issue because I don't clear the metrics, so it's always a count since inception.
I then tried this:
count(
  max_over_time(help_command{job="Application Name",Username=~".+"}[1w])
  -
  max_over_time(help_command{job="Application Name",Username=~".+"}[1w] offset 1w)
  > 0
)
which works, but only for one command; I have about 50 other commands that would need to be added to that count.
I then tried:
{__name__=~".+_command",job="app name"}[1w] offset 1w
but this is obviously very expensive (it times out in the browser), and it has issues with max_over_time, which doesn't support it.
Any help? Am I using the metric in the wrong way? Is there a better way to query? My only option at the moment is to repeat the working count format above for each command.
Thanks in advance.
To start, I will point out a number of issues with your approach.
First, the Prometheus documentation recommends against using labels with arbitrarily large sets of values (as your usernames are). As you can see from your query timing out, that advice is not entirely wrong.
Second, Prometheus may not be the right tool for analytics (such as active users). Partly due to the above, partly because it is inherently limited by the fact that it samples the metrics (which does not appear to be an issue in your case, but may turn out to be).
Third, you collect separate metrics per command (i.e. help_command, foo_command) instead of a single metric with the command name as a label (i.e. command_usage{command="help"}, command_usage{command="foo"}).
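To illustrate that third point: with a single (hypothetical) command_usage metric carrying a command label, the per-command active check would need no __name__ regex at all:

count by(command) (
  (
    command_usage{job="Application Name"}
    -
    command_usage{job="Application Name"} offset 1w
  ) > 0
)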
To get back to your question though: you don't need max_over_time; you can simply write your query as:
count by(__name__) (
  (
    {__name__=~".+_command",job="Application Name"}
    -
    {__name__=~".+_command",job="Application Name"} offset 1w
  ) > 0
)
This only works, though, because you say that whatever exports the counts never resets them. If that is simply because the exporter has never restarted, and the counts will drop to zero when it does, then you'd need to use increase instead of subtraction, and you'd run into the exact same performance issues as with max_over_time:
count by(__name__) (
  increase({__name__=~".+_command",job="Application Name"}[1w]) > 0
)

Exception in sumTypeTopicCounts

Hi, I am trying to use MALLET to obtain 500 topics, but I hit the exception below. Is this a known issue, and are there any workarounds?
overflow in merging on type 4975
Exception in thread "main" java.lang.ArrayIndexOutOfBoundsException: 3
at cc.mallet.topics.ParallelTopicModel.sumTypeTopicCounts(ParallelTopicModel.java:453)
at cc.mallet.topics.ParallelTopicModel.estimate(ParallelTopicModel.java:825)
at cc.mallet.topics.tui.TopicTrainer.main(TopicTrainer.java:245)
I am using mallet-2.0.8RC2.
Recently, I ran MALLET with two different datasets (one about 100 MB and the other around 1 GB). This kind of exception usually happened with the larger dataset, and when I ran it in parallel with a larger iteration count, like 100. It threw ArrayIndexOutOfBoundsException from two different files, WorkerRunnable and ParallelTopicModel, in different spots. The underlying behavior is that when an index runs past the end of the array, MALLET prints "overflow in merging on type" to the logger, but after that point the program does nothing to recover. I was able to patch these edge cases by checking the index before accessing the array. That lets me run without crashes, though I am not sure how it might change the output; it still prints the same "overflow in merging on type" message, but it carries on instead of throwing an exception.
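To give an idea of what the patch looks like (an illustrative sketch only, not the actual MALLET source; the method and variable names are mine):

// Sketch of the guard: log and skip the write instead of letting the
// index run off the end of the type-topic count array.
static void safeIncrement(int[] counts, int index, int type, java.util.logging.Logger logger) {
    if (index < counts.length) {
        counts[index]++;
    } else {
        // same message MALLET already prints, but without the crash
        logger.warning("overflow in merging on type " + type);
    }
}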
I have uploaded the patches to my GitHub; follow the instructions there. They have resolved the issue for me, and I haven't seen it break again under different circumstances. If they don't resolve your issue, you should probably download the latest version from the MALLET GitHub and debug and build it yourself.
I have also uploaded both datasets; both cover four years of data (1 Jan 2015 to 1 Jan 2019). The smaller one is Stack Exchange (Data Science) and the larger one is Reddit (9 data-science subreddits) (datasets), in case you would like to play with them.
Good luck.