Can I adjust the graph-on-parent dimensions of a pd subpatch by sending messages to it (dynamic patching) - puredata

I'm currently building an abstraction which creates its parts dynamically based on the argument to the abstraction. There is one main element and several of another type of element (as many as the argument dictates) that will be attached to that main element. All need to be visible in the host patch. I know that this in itself will work fine, but I need to be able to expand the graph-on-parent frame dynamically according to how many of these elements are to be created.
I have looked here: https://puredata.info/docs/tutorials/TipsAndTricks#undocumented-pd-internal-messages but this is notoriously undocumented and I wonder which of these messages work and are appropriate...mycnv perhaps?

You are looking for the donecanvasdialog message, sent to the subpatch:
donecanvasdialog <xunit> <yunits> <gopmode> <xfrom> <yfrom> <xto> <yto> <width> <height> <xoffset> <yoffset> 1
Usually you'll leave most elements at their default values and only change the width and height:
donecanvasdialog 0 0 1 0 -1 1 1 $1 $2 100 100 1
To find out these things yourself, you can always run Pd in debug mode to intercept the Pd <-> Pd-GUI communication and see how Pd does it internally (2 shows the GUI->Pd messages, and 1 shows the Pd->GUI messages):
$ pd -nrt -d 2
And of course, all this requires a disclaimer: these messages are internal messages. There are no promises that this message will keep working in the next bugfix release. (Unlike the object-creation messages, which are engraved in the Pd patch format, these messages only ever occur within a running Pd and are therefore indeed considered private, and can change if the need ever arises.)

Related

Cimplicity Screen - one object/button that is dependent on hundreds of points

So I have created a huge screen that essentially just shows the robot status for every robot in this factory (individually)… At the very end of the project, they decided they want one object on the screen that blinks if any of the 300 robots fault. I am trying to think of a way to make this work. Maybe a global script of some kind? Problem is, I do not do much scripting in Cimplicity, so any help is appreciated.
All the points that are currently used on this screen (to indicate a fault) have very similar names… as in, the beginning is the same… so I was thinking of a script that could maybe recognize if a bit is high based on PART of its string name characteristic. The end will change a little each time, but I am sure there is a way to only look for part of a string and negate the rest. If the end has to be hard coded, that's fine.
You can use a Python script in Cimplicity.
I will not go into detail on the use of Python in Cimplicity, which is well described in the Cimplicity documentation.
Here's an example of what can be done. Note that I don't have a way to test it, and of course it will only work if the names of your robots in the declaration follow the format Robot_1, Robot_2, Robot_3 ... Robot_10 ... Robot_300. It also depends on the Name and the Type of the fault variable; as you didn't define it, I imagine it is an integer, with ZERO indicating no error. If you use something other than that, you can easily change it.
import cimplicity

(...)

OneRobotWithFault = False

# Here you get the values and check for a fault
for i in range(1, 301):  # Robot_1 .. Robot_300
    pointName = f'MyFactory.Robot_{i}.FaultCode'
    robotFaultCode = cimplicity.point_get(pointName)
    if robotFaultCode > 0:
        OneRobotWithFault = True
        break

# Set the status to the variable "WeHaveRobotWithFault"
cimplicity.point_set("WeHaveRobotWithFault", OneRobotWithFault)
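
If you would rather key off the common beginning of the point names, as suggested in the question, the filtering itself is plain Python. The list of point IDs below is hypothetical; how you obtain it (an export, a hard-coded list, ...) depends on your project:

# Hypothetical point IDs; only the common prefix/suffix below is assumed.
point_names = [
    'MyFactory.Robot_1.FaultCode',
    'MyFactory.Robot_2.FaultCode',
    'MyFactory.Conveyor_1.Speed',   # wrong prefix/suffix, will be skipped
]

fault_points = [name for name in point_names
                if name.startswith('MyFactory.Robot_') and name.endswith('.FaultCode')]

OneRobotWithFault = any(cimplicity.point_get(name) > 0 for name in fault_points)
cimplicity.point_set("WeHaveRobotWithFault", OneRobotWithFault)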

How to encode normalized(A,B) properly?

I am using clingo to solve a homework problem and stumbled upon something I can't explain:
normalized(0,0).
normalized(A,1) :-
    A != 0.
normalized(10).
In my opinion, the second argument of normalized should be 0 when the first argument is 0, and 1 in every other case.
Running clingo on that, however, produces the following:
test.pl:2:1-3:12: error: unsafe variables in:
normalized(A,1):-[#inc_base];A!=0.
test.pl:2:12-13: note: 'A' is unsafe
Why is A unsafe here?
According to Programming with CLINGO
Some error messages say that the program
has “unsafe variables.” Such a message usually indicates that the head of one of
the rules includes a variable that does not occur in its body; stable models of such
programs may be infinite.
But in this example A is present in the body.
Will clingo produce an infinite set consisting of answers for all numbers here?
I tried adding number(_) around the first parameter and pattern matching on it to avoid this situation, but got the same result:
normalized(number(0),0).
normalized(A,1) :-
    A=number(B),
    B != 0.
normalized(number(10)).
How would I write normalized properly?
With "variables occuring in the body" actually means in a positive literal in the body. I can recommend the official guide: https://github.com/potassco/guide/releases/
The second thing: ASP is not Prolog. Your rules get grounded, i.e. each first-order variable is replaced by every element of its domain. In your case A has no domain.
What would be the expected outcome of your program?
normalized(12351,1).
normalized(my_mom,1).
would all be valid instantiations of A, so you would create an infinite program. This is why A has to be bounded by a domain. For example:
dom(a). dom(b). dom(c). dom(100).
normalized(0,0).
normalized(A,1) :- dom(A).
would produce
normalized(0,0).
normalized(a,1).
normalized(b,1).
normalized(c,1).
normalized(100,1).
Also note that there is no such thing as number/1. ASP is a type-free language.
Also,
normalized(10).
is a different predicate with only one parameter; I do not know how this will fit into your program.
Maybe you are looking for something like this:
dom(1..100).
normalized(0,0).
normalized(X,1) :- dom(X).
foo(43).
bar(Y) :- normalized(X,Y), foo(X).
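
Following the same idea (bound A by a domain), a quick way to check such an encoding is the clingo Python API (pip install clingo). The dom(0..100) range here is only an assumption; use whatever values your homework actually needs:

import clingo

PROGRAM = """
dom(0..100).
normalized(0,0).
normalized(A,1) :- dom(A), A != 0.
"""

ctl = clingo.Control()
ctl.add("base", [], PROGRAM)
ctl.ground([("base", [])])
ctl.solve(on_model=lambda model: print(model))  # prints the dom/1 and normalized/2 atoms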

How do I track states across runners in a DataFlow Job?

I'm currently creating a streaming Dataflow job that carries out a computation if and only if there is an increment in the "Ring" column of my data.
My Dataflow code:
Job = (p | "Read" >> beam.io.ReadFromPubSub(topic=topic)
         | "Parse Json" >> beam.Map(json.loads)
         | "ParDo Divisors" >> ParDo(UpdateDelayTable()))
Data flowing in from pubsub:
Ring [
{...,"Ring":1},
{...,"Ring":1},
{...,"Ring":1},
{...,"Ring":2}
...]
I want my dataflow to track the current ring number and only trigger a function if and only if the ring number has incremented. How should I go about doing this?
Pub/Sub
There is no guarantee that {"Ring": 2} will definitely be received/sent by Pub/Sub after {"Ring": 1}.
It seems that you first have to enable ordered message delivery for Pub/Sub, and also make sure the publisher sends the Ring data in increasing order.
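For example, with the google-cloud-pubsub Python client, ordered delivery can be requested via ordering keys. This is only a sketch (the project, topic and key names are placeholders), and the subscription must have message ordering enabled as well:

import json
from google.cloud import pubsub_v1

# Publisher with message ordering enabled (required for ordering keys).
publisher = pubsub_v1.PublisherClient(
    publisher_options=pubsub_v1.types.PublisherOptions(enable_message_ordering=True)
)
topic_path = publisher.topic_path('my-project', 'my-topic')  # placeholder names

# Messages that share the same ordering key are delivered in publish order.
for ring in (1, 1, 1, 2):
    publisher.publish(topic_path,
                      json.dumps({'Ring': ring}).encode('utf-8'),
                      ordering_key='ring-stream')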
Dataflow
Then to achieve it with Dataflow, you can use stateful processing.
But be mindful that the "state" of "Ring" is per key (and per window). To do what you want, all the elements need to have the same key and fall into the same window (global window in this case). It's going to be a very "hot" key.
Example code:
import apache_beam as beam
from apache_beam.transforms.userstate import ReadModifyWriteStateSpec
from apache_beam.coders import coders

class RingFn(beam.DoFn):
    RING_STATE = ReadModifyWriteStateSpec(
        name='Ring', coder=coders.VarIntCoder())

    def process(self, element, ring=beam.DoFn.StateParam(RING_STATE)):
        # Stateful DoFns need a keyed input; each element arrives as (key, value).
        _, value = element
        current_ring = ring.read() or 0
        if value['Ring'] > current_ring:
            print('Carry out your computation here!')
            ring.write(value['Ring'])

# Usage (pcoll must be a keyed PCollection, see the sketch below)
pcoll | beam.ParDo(RingFn())
# Check your keys if you are not sure what they are.
pcoll | beam.Keys() | beam.Map(print)
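
Since the state lives per key (and per window), one way to make every element share a single state cell is to attach a constant key before the stateful ParDo, e.g. with beam.WithKeys. A minimal sketch, reusing the pcoll placeholder from above; note that funnelling everything through one key serialises the stateful step, which is the "hot" key caveat mentioned earlier:

keyed = (pcoll
         | 'Constant key' >> beam.WithKeys('ring')  # every element gets the same key
         | 'Track ring' >> beam.ParDo(RingFn()))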

How is "after" and "before" method implemented in Google Spanner truetime?

According to their paper, it says "The TT.after() and TT.before() methods are convenience wrappers around TT.now()."
And according to What is the TrueTime API in Google's Spanner?
It also provides two functions:
after(t) returns true if t has definitely passed. E.g. t < now().earliest.
before(t) returns true if t has definitely not arrived, or t > now().latest.
My questions are:
On all servers in Spanner, does TT.now() return the same result?
For a given time t, is it possible that on server A before(t) is true and on server B it is false?
Are they monotonic? E.g., on server A, TT.after(t) is true; some time later, is it possible that TT.after(t) is false?
For details of how TrueTime and Spanner work, look at section 3 of the Spanner whitepaper (1).
A discussion of how it could be implemented is in the question "Why is Google's TrueTime API hard to duplicate?" (2)
From the Spanner whitepaper, a TrueTime value is not a single value but a timestamp range which is guaranteed to contain the absolute value. This range takes into account the potential clock drift, which in Google's network, with the server clocks synced to the atomic/GPS reference time every 30 seconds, is up to 7 ms (from the whitepaper).
So if TTstamp1 is the range (t1_lo, t1_hi) and TTstamp2 is the range (t2_lo, t2_hi), then before() and after() simply compare the extremes of these ranges to confirm that they do not overlap.
TTstamp1.before(TTstamp2) = t1_hi < t2_lo
TTstamp1.after(TTstamp2) = t1_lo > t2_hi
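As a toy illustration of that range logic (this is only a sketch of the comparison semantics, not Google's implementation; the class and the numbers are made up):

class TTInterval:
    """A TrueTime-style timestamp: the true time lies somewhere in [earliest, latest]."""
    def __init__(self, earliest, latest):
        self.earliest = earliest
        self.latest = latest

    def before(self, other):
        # Definitely earlier only if the two ranges cannot overlap.
        return self.latest < other.earliest

    def after(self, other):
        # Definitely later only if the two ranges cannot overlap.
        return self.earliest > other.latest

# Two servers read their clocks at "the same" instant; the intervals overlap.
a = TTInterval(100, 107)  # server A, up to 7 ms of uncertainty
b = TTInterval(103, 110)  # server B

print(a.before(b), a.after(b))  # False False -- neither is definitely first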
The answers to your questions are therefore:
No, TT.now() does not return the same result on all servers, even if called at the exact same instant. However, the values obtained on all servers when called at that instant will overlap each other, meaning that none of them is definitely before or after any other.
So for a given TrueTime t, it is technically possible that on server A t.before(now) is true and on server B t.before(now) is false, due to the overlap comparison and the differences between the possible ranges. This is not a problem for Spanner because it will wait until there is no overlap (t.before(now) == true) before committing a transaction and storing its timestamp.
(Note: this information is derived from the public whitepapers and documentation)
According to the definitions quoted above, TT.after(t) is TT.now().earliest > t, so I think it's quite possible that TT.before(t) is TT.now().latest < t.
Try to answer my own question:
TT.now() does not return the same result on all servers at the same time.
Yes, it's possible, even though for the case of TT.after() this means time t has definitely passed only on some servers.
I don't know. If it were possible, it would mean t has definitely passed and then, later on, we are no longer sure about that, which sounds a little weird.
I think "definitely" in their description is a little misleading; they should simply say "higher than the latest" or "lower than the earliest".

COBOL add 0 to a Variable in COMPUTE

I ran into a strange statement when working on a COBOL program from $WORK.
We have a paragraph that opens a cursor (from DB2) and then loops over it until it hits an EOT (in pseudo code):
... working storage ...
01 I PIC S9(9) COMP VALUE ZEROS.
01 WS-SUB PIC S9(4) COMP VALUE 0.
... code area ...
PARA-ONE.
    PERFORM OPEN-CURSOR
    PERFORM FETCH-CURSOR
    PERFORM VARYING I FROM 1 BY 1 UNTIL SQLCODE = DB2EOT
        do stuff here...
    END-PERFORM
    COMPUTE WS-SUB = I + 0
    PERFORM CLOSE-CURSOR
... do another loop using WS-SUB ...
I'm wondering why that COMPUTE WS-SUB = I + 0 line is there. My understanding is that I will always at least be 1, because of the perform block above it (i.e., even if there is an EOT to start with, I will be set to one on that initial iteration).
Is that COMPUTE line even needed? Is it doing some implicit casting that I'm not aware of? Why would it be there? Why wouldn't you just MOVE I TO WS-SUB?
Call it stupid, but with some compilers (with the correct options in effect), given
01 SIGNED-NUMBER PIC S99 COMP-5 VALUE -1.
01 UNSIGNED-NUMBER PIC 99 COMP-5.
...
MOVE SIGNED-NUMBER TO UNSIGNED-NUMBER
DISPLAY UNSIGNED-NUMBER
results in: 255. But...
COMPUTE UNSIGNED-NUMBER = SIGNED-NUMBER + ZERO
results in: 1 (unsigned)
So to answer your question, this could be classified as a technique used to cast signed numbers into unsigned numbers. However, in the code example you gave it makes no sense at all.
Note that the definition of "I" was (likely) coded by one programmer and of WS-SUB by another (naming is different, VALUE clause is different for same purpose).
Programmer 2 looks like "old school": PIC S9(4), signed and taking up all the digits which "fit" in a half-word. The S9(9) is probably "far over the top" as per range of possible values, but such things concern Programmer 1 not at all.
Probably Programmer 2 had concerns about using an S9(9) COMP for something requiring (perhaps many) fewer than 9999 "things". "I'll be 'efficient' without changing the existing code". It seems to me unlikely that the field was ever defined as unsigned.
A COMP/COMP-4 with nine digits does have a performance penalty when used for calculations. Try "ADD 1" to a 9(9) and a 9(8) and a 9(10) and compare the generated code. If you can have nine digits, define with 9(10), otherwise 9(8), if you need a fullword.
Programmer 2 knows something of this.
The COMPUTE with + 0 is probably deliberate. Why did Programmer 2 use the COMPUTE like that (the original question)?
Now it is going to get complicated.
There are two "types" of "binary" fields on the Mainframe: those which will contain values limited by the PICture clause (USAGE BINARY, COMP and COMP-4); those which contain values limited by the field size (USAGE COMP-5).
With BINARY/COMP/COMP-4, the size of the field is determined from the PICture, and so are the values that can be held. PIC 9(4) is a halfword, with a maximum value of 9999. PIC S9(4) is a halfword with values -9999 through +9999.
With COMP-5 (Native Binary), the PICture just determines the size of the field; all the bits of the field are relevant for the value of the field. PIC 9(1) to 9(4) define halfwords, PIC 9(5) to 9(9) define fullwords, and 9(10) to 9(18) define doublewords. A PIC 9(1) COMP-5 can hold a maximum of 65535, an S9(1) COMP-5 -32,768 through +32,767.
All well and good. Then there is the compiler option TRUNC. It has three settings: STD (the default), BIN and OPT.
BIN can be considered to have the most far-reaching effect. BIN makes BINARY/COMP/COMP-4 behave like COMP-5. Everything becomes, in effect, COMP-5. PICtures for binary fields are ignored, except in determining the size of the field (and, curiously, with ON SIZE ERROR, which "errors" when the maxima according to the PICture are exceeded). Native Binary, in IBM Enterprise COBOL, generates, in the main though not exclusively, the "slowest" code. Truncation is to field size (halfword, fullword, doubleword).
STD, the default, is "standard" truncation. This truncates to "PICture". It is therefore a "decimal" truncation.
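As a rough illustration of the difference between the two (plain Python arithmetic, not COBOL, and the overflowing value is made up): decimal truncation keeps the low-order decimal digits allowed by the PICture, while binary truncation keeps the low-order bits of the field.

value = 123456                # too large for a PIC 9(4) halfword field

print(value % 10 ** 4)        # 3456  -- truncated to the PICture, roughly what TRUNC(STD) does
print(value % 2 ** 16)        # 57920 -- truncated to the halfword, roughly what TRUNC(BIN)/COMP-5 does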
OPT is for "performance". With OPT, the compiler truncates in whatever way is the most "performant" for a particular "code sequence". This can mean intermediate values and final values may have "bits set" which are "outside of the range" of the PICture. However, when used as a source, a binary field will always only reflect the value specified by the PICture, even if there are "excess" bits set.
It is important when using OPT that all binary fields "conform to PICture" meaning that code must never rely on bits which are set outside the PICture definition.
Note: Even though OPT has been used, the OPTimizer (OPT(STD) or OPT(FULL)) can still provide further optimisations.
This is all well and good.
However, a "pickle" can readily ensue if you "mix" TRUNC options, or if the binary definition in a CALLing program is not the same as in the CALLed program. The "mix" can occur if modules within the same run-unit are compiled with different TRUNC options, or if a binary field on a file is written with one TRUNC option and later read with another.
Now, I suspect Programmer 2 encountered something like this: Either, with TRUNC(OPT) they noticed "excess bits" in a field and thought there was a need to deal with them, or, through the "mix" of options in a run-unit or "across file usage" they noticed "excess bits" where there would be a need to do something about it (which was to "remove the mix").
Programmer 2 developed the COMPUTE A = B + 0 to "deal" with a particular problem (perceived or actual) and then applied it generally to their work.
This is a "guess", or, better, a "rationalisation" which works with the known information.
It is a "fake" fix. There was either no problem (the normal way that TRUNC(OPT) works) or the correct resolution was "normalisation" of the TRUNC option across modules/file use.
I do not want loads of people now rushing off and putting COMPUTE A = B + 0 in their code. For a start, they don't know why they are doing it. For a continuation it is the wrong thing to do.
Of course, do not just remove the "+ 0" from any of these that you find. If there is a "mix" of TRUNCs, a program may stop "working".
There is one situation in which I have used "ADD ZERO" for a BINARY/COMP/COMP-4. This is in a "Mickey Mouse" program, a program with no purpose but to try something out. Here I've used it as a method to "trick" the optimizer, as otherwise the optimizer could see unchanging values and so would generate code to use literal results, as all values were known at compile time. (A perhaps "neater" and more flexible way to do this, which I picked up from PhilinOxford, is to use ACCEPT for the field.) This is not the case, for certain, with the code in question.
I wonder if a testing version of the sources ever had
COMPUTE WS-SUB = I + 0
    ON SIZE ERROR
        DISPLAY "WS-SUB overflow"
        STOP RUN
END-COMPUTE
with the range test discarded when the developer was satisfied and cleaning up? MOVE doesn't allow declarative SIZE statements. That's as much of a reason as I could see. Or perhaps a developer habit of using COMPUTE to move, as a subtle reminder to question the need for defensive code at every step? And perhaps not knowing, as Joe pointed out, that the SIZE clause would be just as effective without the + 0? Or a maintainer struggled with off-by-one errors and there was a corrective change from 1 to 0 after testing?