Possible bug in Pd patch - puredata

I have made a very simple patch in which, when a bang is triggered, it is meant to output a unique number between 0 and 2; in other words, no numbers are repeated.
The way I set it up, it should work in theory. Even my programming mentor said that it should work in theory, and he's generally a very smart man; he's informally known as the boffin of the academy.
A few more details:
This happens in both Purr Data and Pure Data, with the exact same setup.
No external libraries are used, just plain vanilla objects.
Since there doesn't seem to be a way to attach the actual file itself, I will instead post an image of the code:

The problem is the depth-first message processing used by Pd and the related stack-unrolling, which can end up setting the 2nd (right) inlet of [select] to an old value (one you didn't expect).
Example
Note: sel:in0 means the left-most inlet of [select], sel:out1 its right-most outlet, and so on. The numbers generated by [random] are shown in bold (e.g. 1) and the numbers output by the patch are shown in bold italics (e.g. 3).
Imagine [select] is initialized to 0 and the [random 3] object outputs the sequence 2 0 0 2 0 2 ... (hint: [seed 96().
The expected output would be 2 0 2 0 2 ..., however the output really is 2 0 2 2 2 ...
Now this is what happens if you consecutively send [bang( to the random generator:
1. random generates 2
   - 2 is sent to sel:in0, which compares it to 0 (no match)
   - and sends it out of sel:out1 (the reject outlet), displaying the number 2
   - after that the number is sent to sel:in1, setting its internal state to 2.
2. random generates 0
   - 0 is sent to sel:in0, which compares it to 2 (no match)
   - and sends it out of sel:out1, displaying the number 0
   - after that the number is sent to sel:in1, setting its internal state to 0.
3. random generates 0
   - 0 is sent to sel:in0, which compares it to 0 (match!)
   - and sends a bang through sel:out0 (the match outlet),
   - triggering a new call to random, which now generates 2
   - 2 is sent to sel:in0, which compares it to 0 (no match)
   - and sends it out of sel:out1, displaying the number 2
   - after that the number is sent to sel:in1, setting its internal state to 2.
   - after that the number 0 (still pending at trigger:out0) is sent to sel:in1, setting its internal state to 0!!!
4. random generates 0
   - 0 is sent to sel:in0, which compares it to 0 (match!)
   - and sends a bang through sel:out0,
   - triggering a new call to random, which now generates 2
   - 2 is sent to sel:in0, which compares it to 0 (no match)
   - and sends it out of sel:out1, displaying the number 2
   - after that the number is sent to sel:in1, setting its internal state to 2.
   - after that the number 0 (still pending at trigger:out0) is sent to sel:in1, setting its internal state to 0!!!
As you can see, at the end of #3 the internal state of [select] is 0, even though the last number generated by [random] was 2 (because the left-most outlet of [trigger] will only send the 0 after it has sent the 2, due to stack-unrolling).
Solution
The solution is simple: make sure that the state of [select] contains the last displayed value, rather than the last one generated on the stack, and avoid feedback when modifying the internal state.
E.g. (using local send/receive pairs for nicer ASCII-art):
[r $0-again]
|
[bang(
|
[random 3]
|
| [r $0-last]
| |
[select]
| |
| [t f f]
| | |
| | [s $0-last]
| |
| [print]
|
[s $0-again]


How to exclude spikes from SumoLogic alert?

We have a Sumo Logic alert that fires if more than 10 errors are logged in 60 minutes.
I would prefer something like:
if there is a spike and all the errors happen within e.g. 1 minute (consider the issue auto-resolved), do not generate an alert.
How can I write such a Sumo Logic query?
Variants of the requirement:
Logs have a clientIp field; if all errors are reported by the same client, do not generate an alert (the problem is with a particular client, not with the application).
If more than 10 errors are logged in 60 minutes, send an alert, unless the errors are of type A; but if there are more than 100 errors of type A, send the alert anyway (errors of type A are acceptable, unless their number is too big).
If more than 10 errors are logged in 60 minutes, send an alert only if the last error happened less than 30 minutes ago (otherwise consider it auto-fixed).
I am not fully sure how your data is shaped, but...
if there is a spike and all the errors happen within e.g. 1 minute (consider the issue auto-resolved), do not generate an alert.
This you can solve by aggregating:
| timeslice 1m
| count by _timeslice
| where _count > 1
or similar.
if all errors are reported by the same client, do not generate an alert
It sounds like:
| count by _timeslice, clientIp
would do the job.
if more than 10 errors are logged in 60 minutes, send an alert, unless the errors are of type A; but if there are more than 100 errors of type A,
A rough sketch of the query clause would be:
| if(something, 1, 0) as is_of_type_A
| count by is_of_type_A, ...
| where (is_of_type_A = 1 and _count > 100)
OR (is_of_type_A = 0 and _count > 10)
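Putting those two pieces together, a combined sketch might look something like the following (the type field, the thresholds, and the scope are assumptions about your data; clientIp is the field from the question, and count_distinct is Sumo Logic's operator for counting unique values):
| timeslice 60m
| if(type = "A", 1, 0) as is_of_type_A
| count as errors, count_distinct(clientIp) as clients by _timeslice, is_of_type_A
| where clients > 1
| where (is_of_type_A = 1 and errors > 100) or (is_of_type_A = 0 and errors > 10)
Note that this only combines the clientIp and type-A variants; it does not address the 30-minute recency requirement.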
Disclaimer: I am currently employed by Sumo Logic.

promql example with related fields but different labels

I'm using Prometheus and Grafana, and I'm trying to track a web server app.
I want to graph the average duration in ms of a particular query. I think I can get there from the data below, but I'm struggling.
My two sets of values:
rate(http_server_request_duration_seconds_sum[5m])
Element Value
{instance="dbserver:5000",job="control-tower",method="get",path="/api/control/v1/node/config.json"} 0.0010491088980113385
{instance="dbserver:5000",job="control-tower",method="get",path="/api/schedule/v1/programs/:id.json"} 0
{instance="dbserver:5000",job="control-tower",method="get",path="/api/schedule/v1/users.json"} 0
{instance="dbserver:5000",job="control-tower",method="get",path="/metrics"} 0.00009133616130826839
{instance="dbserver:5000",job="control-tower",method="post",path="/api/caption/v1/messages.json"} 0
{instance="dbserver:5000",job="control-tower",method="post",path="/api/caption/v1/sessions.json"} 0
{instance="dbserver:5000",job="control-tower",method="post",path="/api/schedule/v1/programs.json"} 0
{instance="dbserver:5000",job="control-tower",method="put",path="/api/caption/v1/sessions/captioners.json"} 0
{instance="dbserver:5000",job="control-tower",method="put",path="/api/control/v1/agents/:id.json"}
rate(http_server_requests_total[5m])
Element Value
{code="200",host="dbserver:5000",instance="dbserver:5000",job="control-tower",method="get",path="/api/control/v1/node/config.json"} 0.03511075688258612
{code="200",host="dbserver:5000",instance="dbserver:5000",job="control-tower",method="get",path="/api/schedule/v1/programs/:id.json"} 0
{code="200",host="dbserver:5000",instance="dbserver:5000",job="control-tower",method="get",path="/api/schedule/v1/users.json"} 0
{code="200",host="dbserver:5000",instance="dbserver:5000",job="control-tower",method="get",path="/metrics"} 0.06671043807691363
{code="200",host="dbserver:5000",instance="dbserver:5000",job="control-tower",method="post",path="/api/caption/v1/sessions.json"} 0
{code="200",host="dbserver:5000",instance="dbserver:5000",job="control-tower",method="post",path="/api/schedule/v1/programs.json"} 0
{code="200",host="dbserver:5000",instance="dbserver:5000",job="control-tower",method="put",path="/api/caption/v1/sessions/captioners.json"} 0
{code="200",host="dbserver:5000",instance="dbserver:5000",job="control-tower",method="put",path="/api/control/v1/agents/:id.json"} 0
{code="422",host="dbserver:5000",instance="dbserver:5000",job="control-tower",method="post",path="/api/schedule/v1/programs.json"} 0
{code="502",host="dbserver:5000",instance="dbserver:5000",job="control-tower",method="post",path="/api/caption/v1/messages.json"}
They have different labels. For this, I only care where path="/api/caption/v1/messages.json".
I think I need to use a combination of rate, sum, and "on" or "ignoring", but I haven't been able to get on or ignoring to work at all.
I can get the numerator (in seconds) with:
rate( http_server_request_duration_seconds_sum { path="/api/caption/v1/messages.json" }[5m])
And that returns:
{instance="dbserver:5000", job="control-tower", method="post", path="/api/caption/v1/messages.json"}
But the denominator can have different return codes, so I have to sum those, and I need to do some ignore or on or something, but I haven't found an example that helps me out, and I'm really new at this.
Anyone?
Okay, I continued to play. Because I only have one path I worry about, I figured out I could sum the rates. I think this works:
sum( rate( http_server_request_duration_seconds_sum {path="/api/caption/v1/messages.json"}[2h])) / sum( rate( http_server_requests_total{ path="/api/caption/v1/messages.json"}[2h]))
I changed the range window to 2 hours because my sample data had fallen outside the 5-minute window and I was getting zeros.
I THINK what this is doing is summing the rates, which gets rid of all the labels. And I THINK what it's also doing is using 2 hours of data: rate() gives the per-second average rate of increase over that 2-hour window.
I would love comments.
This solution won't work if I want one chart to include other paths, and I'm still not sure what to do about that, so this solves my current problem but still doesn't help me figure out how to do something similar with ignore or on.
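For the multi-path case, a common pattern (just a sketch, using the metric names from the question) is to aggregate both sides down to the labels you want to keep, so the label sets match and you don't need on or ignoring at all:
sum by (path) (rate(http_server_request_duration_seconds_sum[5m]))
  /
sum by (path) (rate(http_server_requests_total[5m]))
Because sum by (path) drops the mismatched labels (code and host only exist on the total counter), both sides end up with just a path label, the division matches series one-to-one per path, and the result is the average request duration in seconds per path (multiply by 1000 for milliseconds).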

CJ1W-CT021 Card Error Omron PLC

I got this error on a CJ1W-CT021 card. It happened all of a sudden after it had been running the program for some time. I found it by going to the IO Table and Unit Setup, clicking on parameters for that card, and finding two settings in red:
Output Control Mode and And/Or Counter Output Patterns. These were their readings:
Output Control Mode = 0x40 No Applicable Set Data
And/Or Counter Output Patterns = 0x64 No Applicable Set Data
I have no idea how or why these would change; they should have been:
Output Control Mode = Range Mode
And/Or Counter Output Patterns = Logically Or
I have added some new code, but nothing big or really even used, as I had the outputs of the new rungs jumped out. One thing I thought might cause this is that every cycle of the program it was checking the value of an encoder connected to this card. Maybe checking it too often? Anyhow, if anyone has any idea what these settings do or how they would change, please post.
Thanks
Glen
EDIT: I wanted to add the bits I used; I don't think any are part of this card's internal I/O, but I may be wrong?
Work bits 66.01-66.06, 60.02-60.07, 160.12, 160.01-160.04, 161.02, 161.03
and
Data bits (D) 20720, 20500, 20600, 20000, 20590, 20040
I would check sections 4-1 through 4-2-4 of the CT021 manual and make sure you aren't writing to reserved memory locations used for configuration data of the CT021 unit.
EDIT:
1) Check Page 26 of the above manual to see the location of the machine switch settings. The bottom dial sets the '1's digit and the top dial sets the '10's digit (ie machine number can be 0-99);
2) Per page 94, D-Memory is allocated from D20000 + (N × 100) (400 words), where N is equal to the machine number.
I would guess that your machine number is set to 0 (ie: both dials at '0'), 5, or 6. In the case of machine number '0', this would make the reserved DM range D20000 -> D20399. In this case (see pages 97, 105) D20000 would contain configuration data for Output Control Mode (bits 00-07) and Counter Output Patterns (bits 08-15). It looks like you are writing 0x6440 to D20000 (or D20500, D20600 for machine number 5 or 6, respectively) and are corrupting the configuration data.
If your machine number is 0 then stay away from D20000-D20399 unless you are directly trying to modify the counter's configuration state (ie: don't use them in your program!).
If the machine number is 1 then likewise for D20100-D20499, etc. If you have multiple counters they can overlap ranges so they should always be set with machine numbers which are 4 apart from each other.
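If it helps, here is a tiny Python sketch (the function name is mine) just to make the manual's DM allocation arithmetic explicit:
def ct021_reserved_dm_range(machine_number):
    # Per the manual: 400 DM words are allocated starting at D20000 + (N x 100),
    # where N is the machine (unit) number set on the rotary switches (0-99).
    start = 20000 + machine_number * 100
    return start, start + 399

# machine number 0 -> (20000, 20399), 5 -> (20500, 20899), 6 -> (20600, 20999)
This also shows why units need machine numbers at least 4 apart: each unit's 400-word block spans four 100-word steps.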

Identifying DNS packets

When looking at a packet's raw bytes, how would you identify a DNS packet?
The IP header's protocol field tells you that a UDP datagram follows, but inside the UDP datagram no protocol field exists to specify what comes next and, from what I can see, there is nothing inside it that would uniquely identify it as a DNS packet.
Other than it being on port 53, there are a few things you can look out for which might give a hint that you're looking at DNS traffic.
I will refer to the field names used in §4.1 of RFC 1035 a lot here:
1 1 1 1 1 1
0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5
+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+
| ID |
+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+
|QR| Opcode |AA|TC|RD|RA| Z | RCODE |
+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+
| QDCOUNT |
+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+
| ANCOUNT |
+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+
| NSCOUNT |
+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+
| ARCOUNT |
+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+
As you can see above the header is 12 bytes long - a 2 byte ID, 2 bytes of flags, and 4 x 2 bytes of counts.
In any DNS packet the QDCOUNT field will be exactly one (0x0001). Technically other values are allowed by the protocol, but in practice they are never used.
In a query (QR == 0) the ANCOUNT and NSCOUNT values will be exactly zero (0x0000), and the ARCOUNT will typically be 0, 1, or 2, depending on whether EDNS0 (RFC 2671) and TSIG (RFC 2845) are being used. RCODE will also be zero in a query.
Responses are somewhat harder to identify, unless you're observing both sides of the conversation and can correlate each response to the query that triggered it.
Obviously the QR bit will be set, and as above the QDCOUNT should still be one. The remaining counters, however, will have many and varied permutations. Still, it's exceedingly unlikely that any of the counters will be greater than 255, so you should be able to rely on bytes 4, 6, 8 and 10 all being zero.
Following the header you'll start to find resource records, the first one being the actual question that was asked (§4.1.2). The unfortunate part here is that the designers of the protocol saw fit to include a variable length label field (QNAME) in front of two fixed fields (QTYPE and QCLASS).
[To further complicate matters labels can be compressed, using a backwards pointer to somewhere else in the packet. Fortunately you will almost never see a compressed label in the Question Section, since by definition you can't go backwards from there. Technically a perverse implementor could send a compression pointer back into the header, but that shouldn't happen].
So, start reading each length byte and skipping that many bytes until you reach a null byte; the next two 16-bit words will then be QTYPE and QCLASS. There are very few legal values for QCLASS, and almost all packets will contain the value 1 for IN ("Internet"). You may occasionally see 3 for CH (Chaos).
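To make that concrete, here is a minimal Python sketch of the query-side heuristic described above (the function name is mine, and payload is assumed to be the raw UDP payload with the IP/UDP headers already stripped):
import struct

def looks_like_dns_query(payload):
    """Heuristic: does this UDP payload look like a DNS query?"""
    if len(payload) < 12 + 1 + 4:           # header + empty QNAME + QTYPE/QCLASS
        return False

    ident, flags, qdcount, ancount, nscount, arcount = struct.unpack_from("!6H", payload, 0)

    qr = (flags >> 15) & 0x1
    rcode = flags & 0xF
    if qr != 0 or rcode != 0:                # only checking queries here
        return False
    if qdcount != 1 or ancount != 0 or nscount != 0 or arcount > 2:
        return False

    # Walk the QNAME labels of the Question Section.
    pos = 12
    while True:
        if pos >= len(payload):
            return False
        length = payload[pos]
        if length == 0:                      # root label terminates the name
            pos += 1
            break
        if length & 0xC0:                    # compression pointer (or reserved bits): not expected here
            return False
        pos += 1 + length
    if pos + 4 > len(payload):
        return False

    qtype, qclass = struct.unpack_from("!2H", payload, pos)
    return qclass in (1, 3)                  # 1 = IN ("Internet"), 3 = CH (Chaos)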
That's it for now - if I think of anything else I'll add it later.
How about checking the port number? The destination port of a query (and the source port of the corresponding response) should be 53.

How to interpret memcached cachedump?

I added an entry to memcached and am looking at the cachedump.
set a 0 0 5
hello
STORED
stats cachedump 1 5
ITEM a [5 b; 1312548967 s]
END
The first value, 5 b, is the size of the item, which is 5 bytes. The second one is what I'm confused about; it looks like time since the epoch.
I tried adding another entry, and even that entry got exactly the same value.
set b 0 0 5
hello
STORED
stats cachedump 1 5
ITEM b [5 b; 1312548967 s]
ITEM a [5 b; 1312548967 s]
END
It looks like some kind of time, but I'm not sure what it really is. Can someone explain it?
The cachedump command is meant for debugging purposes only and will very likely be removed soon.
You're invited to participate in that conversation. Please let us know what you're interested in.