Just curious: is there a particular reason (historical or otherwise) why Swift uses numbers from 90 to 160 to express the default precedence levels of operators? Thanks
According to Apple's Operator Declaration documentation:
The precedence level can be any whole number (decimal integer) from 0
to 255
Although the precedence level is a specific number, it is significant
only relative to another operator.
The simple answer is that 90 to 160 fall near the center of the 0 to 255 range.
Now if you check Apple's binary expressions documentation, you will see that the default operators range in precedence from 90 to 160, as you stated in your question. That is a span of 70, and because precedence values are only significant relative to each other, any starting/ending point for this span could have been chosen.
However, if the default values had been 0 to 70 (or 185 to 255), then a user creating a custom operator could not give it a precedence lower than 0 (or higher than 255), forcing the custom operator to be equal in precedence to the assignment operators (or the exponentiative operators, respectively).
Therefore, the only logical thing to do is to place this span in the middle of the 0 to 255 range, and rather than set the default values to 93 to 163 (as close to the actual center of the range as possible), they chose multiples of 10 (90 to 160), because in actuality the values do not matter except in relation to each other.
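For illustration, here is how a custom operator could be slotted between two of the defaults using the numeric syntax the question refers to (Swift 2 and earlier; the operator name, the implementation, and the value 155 are my own choices, not anything Apple ships):

import Foundation

// Swift 2-era declaration: a numeric precedence of 155 places this
// operator between Multiplicative (150) and Exponentiative (160).
infix operator ** { associativity right precedence 155 }

func ** (base: Double, exponent: Double) -> Double {
    return pow(base, exponent)
}

print(2 + 3 ** 2)   // 11.0: ** binds tighter than + (precedence 140)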
Related
I've read that priority can be a value between 0 and 255 (http://man7.org/linux/man-pages/man3/seccomp_syscall_priority.3.html). Why, when using seccomp_export_pfc, is the baseline priority 65535?
# filter for syscall "exit_group" (231) [priority: 65535]
if ($syscall == 231)
action ALLOW;
# filter for syscall "exit" (60) [priority: 65535]
if ($syscall == 60)
action ALLOW;
They are two different things: with seccomp_syscall_priority, you provide a priority hint, whereas seccomp_export_pfc outputs libseccomp's internal priority.
As explained in the source code comments, the internal priority contains your priority hint, if any, in the upper 16 bits. The lower 16 bits are used in case of a tie (i.e., two filters have the same upper 16 bits), in which case libseccomp gives higher priority to filters that are easier to evaluate.
So, in your case, since you did not provide any hint, the internal priority is equal to 0x0000FFFF, or 65535.
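As a rough sketch of the arithmetic described above (in Swift for brevity; this is an illustration only, not libseccomp's actual code, and the function name is mine):

// The user hint fills the upper 16 bits; the lower 16 bits hold the
// tie-breaker, which is 0xFFFF for the easiest-to-evaluate filters.
func internalPriority(hint: UInt16, tieBreaker: UInt16) -> UInt32 {
    return (UInt32(hint) << 16) | UInt32(tieBreaker)
}

print(internalPriority(hint: 0, tieBreaker: 0xFFFF))   // 65535, as in the PFC dump
print(internalPriority(hint: 1, tieBreaker: 0xFFFF))   // 131071: any hint outranks no hint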
Is there official documentation to resolve the apparent conflict between these two statements from the NetLogo 5.0.5 Programming Guide:
"A patch's coordinates are always integers" (from the Agents section)
"All numbers in NetLogo are stored internally as double precision floating point numbers" (from the Math section on the same page.)
Here's why I ask: if the integer patch coordinates are stored as floating point numbers that are merely very close to integer values, then I should avoid comparisons for equality. For example, if there are really no integers, instead of
if pxcor = pycor...
I should use the usual tolerance-checking, like
if abs (pxcor - pycor) < 0.1 ...
Is there some official word that the more complicated code is unnecessary?
The Math section also seems to imply the absence of integer literals: "No distinction is made between 3 and 3.0". So is the official policy to avoid comparisons for equality with constants? For example, is there official sanction for writing code like
if pxcor = 3...
?
Are sliders defined somewhere to produce floating point values? If so, it seems invalid to compare slider values for equality, also. That is, if so, one should avoid writing code like
if pxcor = slider-value
even when the minimum, maximum, and increment values for the slider look like integers.
The focus on official sources in this question arises because I'm not just trying to write a working program. Rather, I'm seeking to tell students how they should program. I'd hate to mislead them, so thanks for any good advice.
NetLogo isn't the only language that works this way, with all numbers stored internally as double precision floating point. The best known other such language is JavaScript.
Math in NetLogo follows IEEE 754, so what follows isn't actually specific to NetLogo, but applies to IEEE 754 generally.
There's no contradiction in the User Manual because mathematically, some floating point numbers are integers, exactly. If the fractional part is exactly zero, then mathematically, it's an integer, and IEEE 754 guarantees that arithmetic and comparison operations will behave as you would expect. If you add 2 and 2 you'll always get 4, never 3.999... or 4.00...01. Integers in, integers out. That holds for comparison, addition, subtraction, multiplication, and divisions that divide evenly. (It may not hold for other operations, so e.g. log 1000 10 isn't exactly 3, and cos 90 isn't exactly 0.)
Therefore if pxcor = 3 is completely valid, correct code. pxcor never has a fractional part, and 3 doesn't have one, either, so no issue of floating point imprecision can arise.
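Since this is IEEE 754 behavior rather than anything NetLogo-specific, the same guarantees can be demonstrated in any language with 64-bit doubles; a quick sketch in Swift:

// Integer-valued doubles are exact, so comparisons need no tolerance.
let pxcor: Double = 3          // stands in for a patch coordinate
print(pxcor == 3)              // true, exactly
print(2.0 + 2.0 == 4.0)        // true: integers in, integers out

// Imprecision only appears once fractional values are involved.
print(0.2 + 0.1)               // 0.30000000000000004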
As for NetLogo sliders, if the slider's min, max, and increment are all integers, then there's nothing to worry about; the value of the slider is also always an integer.
(Note: I am the lead developer of NetLogo, and I wrote the sections of the User Manual that you are quoting.)
Just to stress what Seth writes:
Integers in, integers out. That holds for comparison, addition,
subtraction, multiplication, and divisions that divide evenly (emphasis added).
Here's a classic instance of floating point imprecision:
observer> show (2 + 1) / 10
observer: 0.3
observer> show 2 / 10 + 1 / 10
observer: 0.30000000000000004
For nice links that explain why, check out http://0.30000000000000004.com/
I am reading Stephen Wolfram's "A New Kind of Science".
At present, I cannot understand how the cellular automata illustrations on p79 are created.
In the patterns, the active cell, representing the head, appears to change orientation between up and -45 degrees. However, none of the rules seem to include an active cell with an orientation other than up or down. How does the active cell orientation of -45 degrees come about in the patterns?
Am I missing something obvious (I am a beginner in this area)?
You have a simple rule. Just a mapping from 3 binary digits to 1 binary digit. For example:
111 - 0
110 - 0
101 - 0
100 - 1
011 - 1
010 - 1
001 - 1
000 - 0
Then you have some sequence of digits at time t0, for example 00111010. To find what happens at time t1, you apply this mapping to each cell and its two neighbors. So 001 becomes 1, then 011 becomes 1, 111 becomes 0, then ... and 010 becomes 1. This way you get a sequence of the same length for the second generation (t1), provided you decide how to handle the cells at the edges (e.g., wrap around or treat the outside as 0). And you keep going, generation after generation, until you see a repetition.
So in those pictures, your X axis is this sequence (empty square = 0, gray square = 1), and your Y axis shows how this sequence evolves under a specific rule. In my example it was rule 30 (because 00011110 is 30 in binary).
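A minimal sketch of one update step (in Swift; I treat cells outside the row as 0, though wrap-around is another common choice):

// One generation of an elementary cellular automaton. ruleNumber's
// binary digits give the output for each 3-cell neighborhood
// (111, 110, ..., 000), exactly as in the table above.
func step(_ cells: [Int], ruleNumber: Int) -> [Int] {
    return cells.indices.map { i in
        let left  = i > 0 ? cells[i - 1] : 0
        let right = i < cells.count - 1 ? cells[i + 1] : 0
        let neighborhood = (left << 2) | (cells[i] << 1) | right   // 0...7
        return (ruleNumber >> neighborhood) & 1
    }
}

print(step([0, 0, 1, 1, 1, 0, 1, 0], ruleNumber: 30))   // [0, 1, 1, 0, 0, 0, 1, 1]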
You can read a high-level overview here. Also, even though these rules are simple, they can give rise to complex behavior.
P.S. The paper was published in Nature (a high-profile science journal) and was considered revolutionary because it showed that complicated structures and motifs, like the dots on a leopard's skin or the pattern on a shell, can arise from really simple rules.
I think that it is an inconsistency between the printing of the rules and the diagrams.
If the downward (-90 degrees) arrows in the rules are replaced with arrows pointing to the bottom right (-45 degrees) then the rules seem to make sense.
I'm testing a photo application for Facebook. I'm getting object IDs from the Facebook API, but I received some incorrect ones, which doesn't make sense - why would Facebook send wrong IDs? I investigated a bit and found out that numbers with 17 or more digits are automatically turned into even numbers!
For example, let's say the ID I'm supposed to receive from Facebook is 12345678912345679. In the debugger, I've noticed that Flash Player automatically turns it into 12345678912345678. And I even tried to manually set it back to an odd number, but it keeps changing back to even.
Is there any way to stop Flash Player from rounding the numbers? BTW the variable is defined as Object; I receive it like that from Facebook.
This is related to the implementation of data types:
int is a 32-bit number, with an even distribution of positive and
negative values, including 0. So the maximum value is
(2^32 / 2) - 1 == 2,147,483,647.
uint is also a 32-bit number, but it doesn't have negative values. So the
maximum value is
2^32 - 1 == 4,294,967,295.
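These limits are easy to verify in any language with 32-bit integer types (Swift shown here; ActionScript's int and uint have the same layout):

print(Int32.max)    // 2147483647  == 2^31 - 1
print(UInt32.max)   // 4294967295  == 2^32 - 1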
When you use a numerical value greater than the maximum value of int or uint, it is automatically cast to Number. From the Adobe Doc:
The Number data type is useful when you need to use floating-point
values. Flash runtimes handle int and uint data types more efficiently
than Number, but Number is useful in situations where the range of
values required exceeds the valid range of the int and uint data
types. The Number class can be used to represent integer values well
beyond the valid range of the int and uint data types. The Number data
type can use up to 53 bits to represent integer values, compared to
the 32 bits available to int and uint.
53 bits have a maximum value of:
2^53 - 1 == 9,007,199,254,740,991 => 16 digits
So when you use any value greater than that, the inner workings of floating point numbers apply.
You can read about floating point numbers here, but in short, for any floating point value, the first couple of bits are used to specify a multiplication factor, which determines the location of the point. This allows for a greater range of values than are actually possible to represent with the number of bits available - at the cost of reduced precision.
When you have a value greater than the maximum integer a Number can represent exactly, the least significant bit (the one distinguishing odd from even) is cut off to make room for a more significant bit (the one representing 2^53) - hence, you lose the odd numbers.
There is a simple way to get around this: Keep all your IDs as Strings - they can have as many digits as your system has available bytes ;) It's unlikely you're going to do any calculations with them, anyway.
By the way, once values reach 2^54, numbers are rounded to the nearest multiple of 4; at 2^55, to the nearest multiple of 8; and so on.
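The rounding is easy to reproduce in any environment with IEEE 754 doubles; here is a sketch in Swift, whose Double behaves like Flash's Number:

import Foundation

// A 17-digit ID exceeds the 53-bit exact-integer range of a double,
// so the nearest representable (even) value is stored instead.
let id: Double = 12345678912345679
print(String(format: "%.0f", id))   // 12345678912345678

// Keeping the ID as a String preserves every digit exactly.
let idString = "12345678912345679"
print(idString)                     // 12345678912345679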
I'm currently implementing an application to perform some tasks on MIDI files, and my current problem is to output the notes I've read to a LilyPond file.
I've merged note_on and note_off events into single note objects with an absolute start and an absolute duration, but I don't really see how to convert that duration into actual music notation. I've guessed that a duration of 376 is a quarter note in the file I'm reading, because I know the song, and obviously 188 is then an eighth note, but this certainly does not generalise to all MIDI files.
Any ideas?
By default a MIDI file is set to a tempo of 120 bpm and the MThd chunk in the file will tell you the resolution in terms of "pulses per quarter note" (ppqn).
If the ppqn is, say, 96, then a delta of 96 ticks is a quarter note.
Should you be interested in the real duration (in seconds) of each sound, you should also consider the tempo, which can be changed by an "FF 51 03 tt tt tt" event; the three bytes give the number of microseconds per quarter note.
With these two values you should find what you need. Beware that the durations in a MIDI file can be approximate, especially if the MIDI file is the recording of a human player.
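As a sketch of the arithmetic (in Swift; 500,000 microseconds per quarter note corresponds to the default 120 bpm):

// Real duration in seconds of a delta of `ticks`, given the file's
// ppqn and the current tempo (microseconds per quarter note).
func seconds(ticks: Int, ppqn: Int, microsecondsPerQuarter: Int = 500_000) -> Double {
    return Double(ticks) * Double(microsecondsPerQuarter) / (Double(ppqn) * 1_000_000)
}

print(seconds(ticks: 96, ppqn: 96))   // 0.5 -- one quarter note at 120 bpm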
I put together a C library to read/write MIDI files a long time ago: https://github.com/rdentato/middl, in case it may be helpful (it's been quite some time since I last looked at the code; feel free to ask if anything is unclear).
I would suggest the following approach:
choose a "minimal note" that is compatible with your division (e.g. 1/128) and use it as a sort of grid.
Align each note to the closest grid line (i.e. to the closest integer multiple of the minimal note).
Convert it to standard notation (e.g. a quarter note, a dotted eighth note, etc.).
In your case, take 1/32 as the minimal note and 384 as the division (so the grid is 48 ticks). For your note of 376 ticks you'll have 376/48 = 7.8, which you round to 8 (the closest integer), and 8/32 = 1/4.
If you find a note whose duration is 193 ticks, you can see it's a 1/8 note, as 193/48 is 4.02 (which you round to 4) and 4/32 = 1/8.
Continuing this reasoning you can see that a note of duration 671 ticks should be a double dotted quarter note.
In fact, 671 should be approximated to 672 (the closest multiple of 48) which is 14*48. So your note is a 14/32 -> 7/16 -> (1/16 + 2/16 + 4/16) -> 1/16 + 1/8 + 1/4.
If you are comfortable with binary numbers, you can notice that 14 is 1110 and, from there, directly derive the presence of the 1/4, 1/8 and 1/16 notes.
As a further example, a note of 480 ticks of duration is a quarter note tied with a 1/16 note since 480=48*10 and 10 is 1010 in binary.
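Here is a sketch of the whole procedure (in Swift; the 1/32 grid and the division of 384 follow the examples above):

// Quantize a tick duration to a 1/32-note grid, then decompose it
// into standard note values using the binary trick described above.
// division = ticks per quarter note, from the MThd chunk.
func noteValues(ticks: Int, division: Int = 384) -> [String] {
    let grid = division / 8                                    // ticks per 1/32 note
    let units = Int((Double(ticks) / Double(grid)).rounded())  // length in 1/32 units
    // Each set bit k of `units` contributes a note of value (2^k)/32.
    return (0..<6).compactMap { k in
        (units >> k) & 1 == 1 ? "1/\(32 >> k)" : nil
    }
}

print(noteValues(ticks: 376))   // ["1/4"]: a quarter note
print(noteValues(ticks: 671))   // ["1/16", "1/8", "1/4"]: a double dotted quarter
print(noteValues(ticks: 480))   // ["1/16", "1/4"]: a quarter tied to a sixteenth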
Triplets and other groups would make things a little bit more complex. It's not by chance that the most common division values are 96 (3*2^5), 192 (3*2^6) and 384 (3*2^7); this way triplets can be represented with an integer number of ticks.
You might have to guess or simplify in some situations, that's why no "midi to standard notation" program can be 100% accurate.