NodeRED: chart node does not show multiple lines?

I have a NodeRED flow set up with a chart node. In my function node I have some very simple code, as follows:
var left = {payload : msg.payload.readResults[0].v };
var right = {payload : msg.payload.readResults[1].v };
return [[left,right]];
Now I have a debug node connected to both the function node and the chart node, and I was hoping the chart would output 2 lines (one for each result).
But all I see is just one line in one color, and the debug node connected to the chart is outputting arrays of multiple objects, while the debug node connected to the function is outputting the numbers as I'm expecting.
What am I doing wrong here? How do I get 2 lines (the left and right variables) with different colors?

You need to add msg.topic values to each message so the chart node knows to assign the values to different datasets.
As it is, it will just plot both messages as if they belong to the same dataset.
From the docs:
Line charts
To display two or more lines on the same chart, each msg must also
contain a topic property that identifies which data series it belongs to,
for example:
{topic:"temperature", payload:22}
{topic:"humidity", payload:66}

Related

Grafana dashboard to display a metric for a key in JSON Loki record

I'm having trouble understanding how to create a dashboard time series plot to display a single key/value from a Loki log which is in JSON format.
For example, here is my query in the Explorer:
{job="railsdevlogs"}|json
which returns log lines such as:
{"date":"2022-01-05T21:27:21.895Z","pool":{"Pool Size":50,"Current":5,"Active":1,"Idle":4,"Dead":0,"Timeout":"5 sec"},"puma":{"Started At":"2022-01-05T20:35:26Z","Max Threads":16,"Pool Capacity":16,"Running":1,"Backlog":0,"IO Handles":15,"File Handles":2,"Socket Handles":4,"Server Log Size":46750072},"process":[{"Name":"ruby.exe","Process ID":656,"Threads":11,"Working Set":150728704,"Virtual Size":288079872},{"Name":"mysqld.exe","Process ID":4836,"Threads":3,"Working Set":360448,"Virtual Size":4445065216},{"Name":"mysqld.exe","Process ID":5808,"Threads":49,"Working Set":69906432,"Virtual Size":4924059648},{"Name":"aaaaa.exe","Process ID":14460,"Threads":18,"Working Set":49565696,"Virtual Size":5478469632},{"Name":"bbbbb.exe","Process ID":9584,"Threads":14,"Working Set":35012608,"Virtual Size":4496551936},{"Name":"ccccc.exe","Process ID":11944,"Threads":14,"Working Set":29609984,"Virtual Size":4481880064}],"gc":{"count":242,"heap_allocated_pages":1277,"heap_sorted_length":1279,"heap_allocatable_pages":9,"heap_available_slots":869213,"heap_live_slots":464541,"heap_free_slots":404672,"heap_final_slots":0,"heap_marked_slots":411311,"heap_swept_slots":457903,"heap_eden_pages":1268,"heap_tomb_pages":9,"total_allocated_pages":1278,"total_freed_pages":1,"total_allocated_objects":74364715,"total_freed_objects":73900174,"malloc_increase_bytes":640096,"malloc_increase_bytes_limit":16777216,"minor_gc_count":131,"major_gc_count":111,"remembered_wb_unprotected_objects":57031,"remembered_wb_unprotected_objects_limit":114062,"old_objects":349257,"old_objects_limit":698512,"oldmalloc_increase_bytes":640288,"oldmalloc_increase_bytes_limit":16777216},"os":{"System Name":"xxxxx","Description":"","Organization":"","Operating System":"Microsoft Windows 10 Enterprise LTSC","OS Version":"10.0.17763","OS Serial Number":"xxxxx-xxxxx-xxxxx-xxxxx","System Time":"2022-01-05T16:27:22.000-05:00","System Time Zone":-300,"Last Boot Time":"2021-12-15T23:26:38.000-05:00","System Drive":"C:","Total Physical Memory":34204393472,"Free Physical Memory":20056260608,"Total Virtual Memory":39304667136,"Free Virtual Memory":13915041792,"Number of Processes":307,"Number of Users":2,"volumes":[{"Drive":"C:\\","Type":"NTFS","Total Space":1023563264000,"Free Space":681182343168,"Block Size":4096}]},"symbol":{"size":28106},"stats_collection_time":387}
Using |json automatically creates dynamic labels for all the key/value pairs in the JSON log line:
gc_count = 123
os_Free_Virtual_Memory = 456789
etc.
Now I would like to plot one of these values in a Grafana time series panel, but I am struggling to understand how to isolate one dynamic label and plot it.
Perhaps I'm using |json incorrectly. The documentation and examples I have read so far show how to filter the logs using the dynamic labels, but I don't need that, since I want to plot every log line.
Thanks.
I think this should help: https://grafana.com/go/observabilitycon/2020/keynote-what-is-observability/ If you go to minute 41, there's an example that is very similar to what you're trying to achieve.
Your query should look something like:
quantile_over_time(0.99,
  {job="railsdevlogs"}
  | json
  | unwrap gc_count [1m]
) by (job)
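If you just want to plot the raw value rather than a quantile, an unwrapped range aggregation such as avg_over_time should also work (a sketch, assuming a reasonably recent Loki version):
avg_over_time({job="railsdevlogs"}
  | json
  | unwrap gc_count [1m]
) by (job)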

How to join 2 sets of Prometheus metrics?

AKS = 1.17.9
Prometheus = 2.16.0
kube-state-metrics = 1.8.0
My use case: I want to alert when one of my persistent volumes is not in the "Bound" phase, and only when it falls within a predefined set of namespaces.
This got me to my first attempt at joining Prometheus metrics - so, please bear with me : )
I opted to use the following to obtain the pv phase:
kube_persistentvolume_status_phase{phase="Bound",job="kube-state-metrics"}
Renders:
kube_persistentvolume_status_phase{instance="10.147.5.110:8080",job="kube-state-metrics",persistentvolume="pvc-33197ae6-d42a-777e-b8ca-efbd66a8750d",phase="Bound"} 1
kube_persistentvolume_status_phase{instance="10.147.5.110:8080",job="kube-state-metrics",persistentvolume="pvc-165d5006-erd4-481e-8acc-eed4a04a3bce",phase="Bound"} 1
This worked well, except for the fact that it does not include the namespace.
So I managed to determine the persistentvolumeclaim namespaces with this:
kube_persistentvolumeclaim_info{namespace=~"monitoring|vault"}
Renders:
kube_persistentvolumeclaim_info{instance="10.147.5.110:8080",job="kube-state-metrics",namespace="vault",persistentvolumeclaim="vault-file",storageclass="default",volumename="pvc-33197ae6-d42a-777e-b8ca-efbd66a8750d"} 1
kube_persistentvolumeclaim_info{instance="10.147.5.110:8080",job="kube-state-metrics",namespace="monitoring",persistentvolumeclaim="prometheus-prometheus-db-prometheus-prometheus-0",storageclass="default",volumename="pvc-165d5006-erd4-481e-8acc-eed4a04a3bce"} 1
So my idea was to join these sets on the fields whose values match:
persistentvolume (from kube_persistentvolume_status_phase)
on
volumename (from kube_persistentvolumeclaim_info)
BUT, if I understood it correctly, you can only join two metric sets on labels that match exactly (both the label names and their values). I hence opted for the instance and job labels, as these were common to both sides and matched.
kube_persistentvolume_status_phase{phase!="Bound",job="kube-state-metrics"}  * on(instance,job) group_left(namespace) kube_persistentvolumeclaim_info{namespace=~"monitoring|vault"}
Renders:
Error executing query: found duplicate series for the match group {instance="10.147.5.110:8080" , job="kube-state-metrics"} on the right hand-side of the operation: [{__name__="kube_persistentvolumeclaim_info", instance="10.147.5.110:8080", job="kube-state-metrics", namespace="monitoring", persistentvolumeclaim="alertmanager-prometheusam-db-alertmanager-prometheusam-0", storageclass="default", volumename="pvc-b8406fb8-3262-7777-8da8-151815e05d75"}, {__name__="kube_persistentvolumeclaim_info", instance="10.147.5.110:8080", job="kube-state-metrics", namespace="vault", persistentvolumeclaim="vault-file", storageclass="default", volumename="pvc-33197ae6-d42a-777e-b8ca-efbd66a8750d"}];many-to-many matching not allowed: matching labels must be unique on one side
So, in all fairness, the query does communicate well what the problem is, so I attempted to solve this with the "ignoring" option, trying to keep only the matching labels and values (instance and job) and exclude/ignore the non-matching ones on both sides. This did not work either; it resulted in a parsing error, which in turn nudged me to take a step back and reassess what I am doing.
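I did also wonder whether something like label_replace could be used to copy volumename into a persistentvolume label, so that both sides share a label to join on, along these lines (just a rough, untested sketch):
kube_persistentvolume_status_phase{phase!="Bound",job="kube-state-metrics"}
  * on(persistentvolume) group_left(namespace)
  label_replace(kube_persistentvolumeclaim_info{namespace=~"monitoring|vault"}, "persistentvolume", "$1", "volumename", "(.+)")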
I am just a bit concerned that I am perhaps barking up the wrong tree here.
My question is: is this at all possible, and if so, how? Or is there perhaps another, more prudent way to achieve this?
Thanks in advance!

AnyLogic - is there a way to convert a Node from network.nodes() to Rectangular Node?

I am looking to dynamically add each Rectangular Node from my network to a particular collection, on simulation startup. I'm doing this because I will have 1000+ nodes and adding them manually is undesirable.
Each of these nodes is named using a convention and starts with either "lane" or "grid".
If it is "lane" then it is added to one collection, and if "grid", to the other.
Currently I am using the code:
for (Node n : network.nodes()) {
    String layoutIdentifier = n.getName().split("_")[0];
    if (layoutIdentifier.equals("lane")) {
        laneNodes.add(n);
        traceln(n);
        //Lane newLane = add_lanes(n);
    } else if (layoutIdentifier.equals("grid")) {
        gridNodes.add(n);
    }
}
This is working fine and adds them to the collections as Nodes, but I really want to add them to collections of RectangularNode (which is what they are), as I need to use this type in my agents.
I tried this code (changing Node to RectangularNode):
for (RectangularNode n : network.nodes()) {
    String layoutIdentifier = n.getName().split("_")[0];
    if (layoutIdentifier.equals("lane")) {
        laneNodes.add(n);
        traceln(n);
        //Lane newLane = add_lanes(n);
    } else if (layoutIdentifier.equals("grid")) {
        gridNodes.add(n);
    }
}
but I get the error:
Type mismatch: cannot convert from element type Node to RectangularNode. Location: pbl_congestion_simulation/Main - Agent Type
Is there a way to convert the Node to a RectangularNode? Or is there a better way to run through all the nodes in the network and add them, as rectangular nodes, to collections of the same?
I can see that the nodes are referenced as com.anylogic.engine.markup.RectangularNode#293cc7d0, so I was hoping that the RectangularNode portion could be accessed.
Many thanks.
Your first code is fine; you can have a collection with elements of type RectangularNode, and the only thing you need to change in that code is:
laneNodes.add((RectangularNode)n);
gridNodes.add((RectangularNode)n);
You can cast a node to a rectangular node, but not the other way around; that's why your second code doesn't work.
If you have other nodes in the network that are not rectangular nodes, then you can add a check like this:
if (n.getClass().equals(RectangularNode.class))
    laneNodes.add((RectangularNode) n);
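Putting that together with the loop from the question, the whole thing might look roughly like this (a sketch, assuming laneNodes and gridNodes are declared as collections of RectangularNode, and using an instanceof check, which serves the same purpose here as the getClass() comparison above):
for (Node n : network.nodes()) {
    // skip anything that is not a rectangular node before casting
    if (n instanceof RectangularNode) {
        RectangularNode rn = (RectangularNode) n;
        String layoutIdentifier = rn.getName().split("_")[0];
        if (layoutIdentifier.equals("lane")) {
            laneNodes.add(rn);
        } else if (layoutIdentifier.equals("grid")) {
            gridNodes.add(rn);
        }
    }
}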

How can I filter the result of label_values(label) to get a list of labels that match a regex?

I have several metrics with the label "service". I want to get a list of all the "service" values that begin with "abc" and end with "xyz". These will be the values of a Grafana template variable.
This is what I have tried:
label_values(service) =~ "abc.*xyz"
However, this produces an error: Template variables could not be initialized: parse error at char 13: could not parse remaining input "(service_name) "...
Any ideas on how to filter the label values?
This should work (replacing up with the metric you mention):
label_values(up{service=~"abc.*xyz"}, service)
Or, in case you actually need to look across multiple metrics (assuming that for some reason some metrics have some service label values and other metrics have other values):
label_values({__name__=~"metric1|metric2|metric3", service=~"abc.*xyz"}, service)
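Once the variable is set up this way, it can be referenced in panel queries like any other template variable, for example (assuming the variable is named service):
up{service=~"$service"}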

pygraphviz/networkx external label (xlabel)

I have a question regarding external labels in pygraphviz. Sadly, I haven't found anything regarding this on the internet.
I want to use networkx to create/parse a graph in a tree structure and then use pygraphviz/pydot to draw it. I need external labels in addition to the normal node labels, because I want to display values for the nodes as well as the node name itself.
Let's say I have the following graph (very simplified example of what I'm doing later):
g = nx.Graph()
g.add_edges_from([('A','B'), ('A','C')])
p = nx.drawing.nx_pydot.to_pydot(g)
So I'm using the last line to generate a tree-like graph, and I need external labels for B and C.
How can I do this?
For pygraphviz you can pass through arbitrary graphviz attributes, so you can do:
dot.node('point', '', shape='point', xlabel='Label')
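For the networkx/pydot route from the question, node attributes set in networkx are passed through to Graphviz by to_pydot, so something along these lines should work (a sketch; the xlabel values here are just placeholders):
import networkx as nx

g = nx.Graph()
g.add_edges_from([('A', 'B'), ('A', 'C')])

# xlabel is a regular Graphviz attribute, so it can be attached per node
g.nodes['B']['xlabel'] = 'value for B'
g.nodes['C']['xlabel'] = 'value for C'

p = nx.drawing.nx_pydot.to_pydot(g)
p.write_png('tree.png')  # external labels are drawn next to the nodes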