What does << mean in Sprite Kit collision categories?

My friend said he implemented some code so that my collision detection would work better. I don't understand what he put in, so I wanted to know what it means; more specifically, what << means.
typedef NS_OPTIONS(NSUInteger, CollisionCategory) {
    rectangulo = (1 << 0),
    circulo    = (1 << 1),
};
thanks,
<3

This is totally unrelated to Xcode or sprite collision; << is the C left shift operator. Applied to integers, as in your case, it shifts the bits of the first operand left by the number of positions given by the second operand. Here the two expressions evaluate to 1 and 2, respectively.
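For illustration, here is a minimal Swift sketch (the category names are taken from your snippet, everything else is hypothetical) showing how each shift produces a distinct bit, so categories can be combined and tested with bitwise operations:
// Each left shift sets a different bit, so every category gets its own flag.
let rectangulo: UInt32 = 1 << 0   // 0b01 = 1
let circulo:    UInt32 = 1 << 1   // 0b10 = 2

let contactMask = rectangulo | circulo  // 0b11 = 3: both categories
print(contactMask & circulo != 0)       // true: the circulo bit is set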

Related

What's the difference between &<< and << operators in Swift?

It seems like they return the same results:
print(2 << 3) // 16
print(2 &<< 3) // 16
The FixedWidthInteger protocol defines the “masking left shift operator” &<< as:
Returns the result of shifting a value’s binary representation the specified number of digits to the left, masking the shift amount to the type’s bit width.
Use the masking left shift operator (&<<) when you need to perform a shift and are sure that the shift amount is in the range 0..<lhs.bitWidth. Before shifting, the masking left shift operator masks the shift to this range. The shift is performed using this masked value.
So the results can differ if the shift amount is larger than or equal to the bit width of the left operand. Example:
print(2 << 70) // 0
print(2 &<< 70) // 128
Here the shift amount (70) is larger than the bit width of Int (64), so 2 << 70 evaluates to zero.
In the second line, the number is shifted to the left by 70 % 64 = 6 bits.
There is also a similar “masking right shift operator” &>>. Example:
let x = Int8.min  // -128 = 0b10000000
print(x >> 8)     // -1   = 0b11111111
print(x &>> 8)    // -128 = 0b10000000
Here the first result is -1 because shifting a signed integer to the right fills empty positions on the left with the sign bit. The second result is -128 because the shift amount is zero: 8 % 8 = 0.
Naming and intended usage are also described in SR-6749:
The goal of the operator, however, is never to have this wrapping
behavior happen — it's what you use when you have static knowledge
that your shift amount is <= the bitWidth. By masking, you and the
compiler can agree that there's no branch needed, so you get slightly
faster code with zero branching and zero risk of undefined behavior
(which a negative or too-large shift would be).
The docs are confusing because they give an example I don't think
anyone would ever intentionally write—relying on the wrapping behavior
to use some out of bounds value for the shift amount.
So using the masking shift operators can improve performance. Many examples can be found in the source code of the Swift standard library, for example in UTF8.swift.
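As an illustration of the intended use, here is a hedged Swift sketch of a bit rotation, a case where the computed shift amount can legitimately equal the bit width, so the masking operators avoid both a branch and any undefined behavior (the function name is mine, not from the standard library):
// Rotate left by n bits. For n == 0 the right-shift amount would be 32,
// which &>> masks to 0, so no special-case branch is needed.
func rotateLeft(_ x: UInt32, by n: UInt32) -> UInt32 {
    return (x &<< n) | (x &>> (32 &- n))
}

print(String(rotateLeft(0x8000_0001, by: 1), radix: 16))  // 3
print(String(rotateLeft(0x8000_0001, by: 0), radix: 16))  // 80000001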

How can a+b be NOT equal to b+a?

Our professor said that in computer logic the order in which you add two numbers matters, so a+b and b+a are not always equal.
However, I couldn't find an example of when they would be different, or why they wouldn't be equal.
I think it has something to do with bits, but then again, I'm not sure.
Although you don't share a lot of context, it sounds as if your professor did not elaborate on this, or you missed something.
If he was talking about logic in general, he could have meant that the behavior of the + operator depends on how you define it.
Example: the definition (+) a b := if (a==0) then 5 else 0 results in a + operator which is not commutative, e.g. 1 + 0 would be 0 but 0 + 1 would be 5. Many programming languages allow this redefinition (overloading) of standard operators; see the sketch below.
But with the context you share, this is all speculative.
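To make the speculation concrete, here is a minimal Swift sketch of such a redefinition (the Funny type and its values are made up for illustration):
// A deliberately non-commutative "+": the result depends only on
// whether the left operand is zero, mirroring the definition above.
struct Funny { var value: Int }

func + (a: Funny, b: Funny) -> Funny {
    return Funny(value: a.value == 0 ? 5 : 0)
}

let one = Funny(value: 1), zero = Funny(value: 0)
print((one + zero).value)  // 0
print((zero + one).value)  // 5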
One obscure possibility is if one or the other of a and b is a high-resolution timer value (ticks since program start).
Due to the CPU cycle(s) consumed to fetch one of the values before the addition, the sum could differ depending on the order.
One more possibility is if a and b are expressions with side effects. E.g.:
int x = 0;
int a() {
    x += 1;   /* side effect: mutates the shared variable */
    return x;
}
int b() {
    return x;
}
a() + b() will return 2 and b() + a() will return 1, both from the initial state and assuming left-to-right evaluation (in C the evaluation order of the operands of + is actually unspecified).
Or it could be that a or b is NaN, in which case even a == a is false. Though this one isn't really connected with "when you add a number to another".
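That NaN case is easy to demonstrate; a quick Swift illustration:
let a = Double.nan
print(a == a)          // false: NaN compares unequal even to itself
print(a + 1 == 1 + a)  // false, but only because both sides are NaN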

|= operator cannot be applied to two NSWindowMask operations

In Swift 4 this fails:
self.window.styleMask |= NSWindowStyleMask.fullSizeContentView
and I'd also like to undo it:
self.window.styleMask ^= NSWindowStyleMask.fullSizeContentView
as I would in Objective-C.
In Swift, NSWindowStyleMask (in Swift 4, NSWindow.StyleMask) is an OptionSet. You need to use the methods defined by SetAlgebra.
Swift 4:
self.window!.styleMask.formUnion(NSWindow.StyleMask.fullSizeContentView)
self.window!.styleMask.formSymmetricDifference(NSWindow.StyleMask.fullSizeContentView)
The code below compiles both in Swift 3 & Swift 4:
self.window!.styleMask.formUnion(.fullSizeContentView)
self.window!.styleMask.formSymmetricDifference(.fullSizeContentView)
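If you would rather mutate the mask with single calls than build another mask, OptionSet also inherits insert and remove from SetAlgebra; a short sketch, assuming window is the NSWindow from the question:
window.styleMask.insert(.fullSizeContentView)  // set the flag (like |=)
window.styleMask.remove(.fullSizeContentView)  // clear the flag (like &= ~)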
This is ugly:
self.window.styleMask = NSWindowStyleMask(rawValue: NSWindowStyleMask.fullSizeContentView.rawValue + panel.styleMask.rawValue)
but it seems to work? (Note that + double-counts if the flag is already set; OR-ing the raw values would be safer.) The net effect is that the content shrinks (by the title height) when it's toggled, so I might go back to what I had been using: .borderless

How to do bitwise operation decently?

I'm doing analysis on binary data. Suppose I have two uint8 data values:
a = uint8(0xAB);
b = uint8(0xCD);
I want to take the lower two bits from a, and the whole content of b, to make a 10-bit value. In C style, it would be:
(a[2:1] << 8) | b
I tried bitget:
bitget(a,2:-1:1)
But this just gave me the separate logical values [1, 1], which is not a scalar and cannot be used in the later bitshift operation.
My current solution is:
Make a|b (a or b):
temp1 = bitor(bitshift(uint16(a), 8), uint16(b));
Left shift six bits to get rid of the higher six bits from a:
temp2 = bitshift(temp1, 6);
Right shift six bits to get rid of lower zeros from the previous result:
temp3 = bitshift(temp2, -6);
Putting all these on one line:
result = bitshift(bitshift(bitor(bitshift(uint16(a), 8), uint16(b)), 6), -6);
This doesn't seem efficient, right? I only want (a[2:1] << 8) | b, and it takes a long expression to get the value.
Please let me know if there's a well-known solution for this problem.
Since you are using Octave, you can make use of bitpack and bitunpack:
octave> A = bitunpack (uint8 (0xAB))
A =
1 1 0 1 0 1 0 1
octave> B = bitunpack (uint8 (0xCD))
B =
1 0 1 1 0 0 1 1
Once you have them in this form, it's dead easy to do what you want:
octave> [B A(1:2)]
ans =
1 0 1 1 0 0 1 1 1 1
Then simply pad with zeros accordingly and pack it back into an integer:
octave> postpad ([B A(1:2)], 16, false)
ans =
1 0 1 1 0 0 1 1 1 1 0 0 0 0 0 0
octave> bitpack (ans, "uint16")
ans = 973
That OR is equivalent to an addition here, because the shifted bits from a and the bits of b do not overlap:
result = bitshift(bi2de(bitget(a,1:2)),8) + b;
e.g.
a      =          01010111
b      =          10010010
result = 00000011 10010010
       = a[2]*2^9 + a[1]*2^8 + b
An alternative method could be
result = mod(a,2^x)*2^y + b;
where x is the number of bits you want to keep from a and y is the number of bits in b. In your case:
result = mod(a,4)*256 + b;
An extra alternative, close to the C solution:
result = bitor(bitshift(bitand(a,3), 8), b);
I think it is important to explain exactly what (a[2:1] << 8) | b is doing.
In assembly, referencing individual bits is a single operation. Assume all operations take the exact same time, and the "efficient" a[2:1] starts looking extremely inefficient: the convenience notation actually performs (a & 0x03).
If your compiler converts a uint8 to a uint16 based on how much it was shifted, this is not a 'free' operation either. Effectively, the compiler will first clear a uint16-sized register and then copy "a" into it; that clearing is an extra step that wouldn't otherwise be needed.
This means your statement is actually (uint16(a & 0x03) << 8) | uint16(b).
Now yes, because you're doing a power-of-two shift, you could just move a into AH, move b into AL, AND AH with 0x03, and move it all out, but that's a compiler optimization and not what your C code said to do.
The point is that directly translating that statement into MATLAB yields
bitor(bitshift(uint16(bitand(a,3)),8),uint16(b))
But it should be noted that, while it is not as TERSE as (a[2:1] << 8) | b, the number of "high level operations" is the same.
Note that all scripting languages are very slow at initiating each instruction but complete said instruction rapidly. The terse nature of Python isn't because "terse is better" but to create simple structures that the language can recognize, so it can easily switch into vectorized-operations mode and start executing code very quickly.
The point here is that you pay an "overhead" cost for calling bitand, but when operating on an array that overhead is paid only once, and the operation can use SSE. A JIT (just-in-time) compiler, which optimizes scripting languages by reducing overhead calls and emitting temporary machine code for the currently executing sections, may even recognize that the type checks for a chain of bitwise operations need only occur on the initial inputs, further reducing runtime.
Very high level languages are quite different (and frustrating) from high level languages such as C. You give up a large amount of control over code execution for ease of code production; whether MATLAB has actually implemented uint8, or is really using a double and truncating it, you do not know. A bitwise operation on a native uint8 is extremely fast, but converting from float to uint8, performing the bitwise operation, and converting back is slow. (Historically, MATLAB used doubles for everything and only rounded according to whatever 'type' you specified.)
Even now, Octave 4.0.3 has a compiled bitshift function in which bitshift(ones('uint32'),-32) wraps back to 1. BRILLIANT! VHLLs place you at the mercy of the language: it isn't about how terse or verbose you write the code, it's how the blasted language decides to interpret it and execute machine-level code. So instead of shifting, uint32(floor(ones / (2^32))) is actually FASTER and more accurate.

hash function providing unique uint from an integer coordinate pair

The problem in general: I have a big 2D point space, sparsely populated with dots. Think of it as a big white canvas sprinkled with black dots. I have to iterate over and search through these dots a lot. The canvas (point space) can be huge, bordering on the limits of int, and its size is unknown before points are set in it.
That brought me to the idea of hashing:
Ideal: I need a hash function taking a 2D point and returning a unique uint32, so that no collisions can occur. You can assume that the number of dots on the canvas is easily countable by uint32.
IMPORTANT: It is impossible to know the size of the canvas beforehand (it may even change), so things like
canvaswidth * y + x
are sadly out of the question.
I also tried a very naive
abs(x) + abs(y)
but that produces too many collisions.
Compromise: a hash function that provides keys with a very low probability of collision.
Cantor's enumeration of pairs,
n = ((x + y)*(x + y + 1)/2) + y
might be interesting, as it's closest to your original canvaswidth * y + x but works for any x and y. But for a real-world int32 hash, rather than a mapping of pairs of integers to integers, you're probably better off with a bit-mixing function such as Bob Jenkins' mix, called with x, y, and a salt.
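A minimal Swift sketch of that enumeration (the function name is mine; note that the intermediate product can overflow UInt64 once x + y exceeds roughly 6e9):
// Cantor's pairing: a bijection from pairs of non-negative integers.
func cantorPair(_ x: UInt32, _ y: UInt32) -> UInt64 {
    let s = UInt64(x) + UInt64(y)
    return s * (s + 1) / 2 + UInt64(y)
}

print(cantorPair(0, 0), cantorPair(1, 0), cantorPair(0, 1))  // 0 1 2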
A hash function that is GUARANTEED collision-free is not a hash function :)
Instead of using a hash function, you could consider using binary space partition trees (BSPs) or XY-trees (closely related).
If you want to hash two uint32s into one uint32, do not use things like Y & 0xFFFF, because that discards half of the bits. Do something like
(x * 0x1f1f1f1f) ^ y
(you need to transform one of the variables first to make sure the hash function is not commutative).
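In Swift that idea might look like this (a sketch; &* is the wrapping multiply, so overflow is well defined):
func multHash(_ x: UInt32, _ y: UInt32) -> UInt32 {
    return (x &* 0x1f1f1f1f) ^ y  // transforming x first breaks commutativity
}

print(multHash(1, 2) == multHash(2, 1))  // false for this pair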
Like Emil, but handles 16-bit overflows in x in a way that produces fewer collisions, and takes fewer instructions to compute:
hash = ( y << 16 ) ^ x;
You can recursively divide your XY plane into cells, then divide these cells into sub-cells, etc.
Gustavo Niemeyer invented in 2008 his Geohash geocoding system.
Amazon's open source Geo Library computes the hash for any longitude-latitude coordinate. The resulting Geohash value is a 63 bit number. The probability of collision depends of the hash's resolution: if two objects are closer than the intrinsic resolution, the calculated hash will be identical.
Read more:
https://en.wikipedia.org/wiki/Geohash
https://aws.amazon.com/fr/blogs/mobile/geo-library-for-amazon-dynamodb-part-1-table-structure/
https://github.com/awslabs/dynamodb-geo
Your "ideal" is impossible.
You want a mapping (x, y) -> i where x, y, and i are all 32-bit quantities, which is guaranteed not to generate duplicate values of i.
Here's why: suppose there is a function hash() so that hash(x, y) gives different integer values. There are 2^32 (about 4 billion) values for x, and 2^32 values of y. So hash(x, y) has 2^64 (about 16 million trillion) possible results. But there are only 2^32 possible values in a 32-bit int, so the result of hash() won't fit in a 32-bit int.
See also http://en.wikipedia.org/wiki/Counting_argument
Generally, you should always design your data structures to deal with collisions. (Unless your hashes are very long (at least 128 bit), very good (use cryptographic hash functions), and you're feeling lucky).
Perhaps:
hash = ((y & 0xFFFF) << 16) | (x & 0xFFFF);
This works as long as x and y can be stored as 16-bit integers. No idea how many collisions it causes for larger integers, though. One idea might be to still use this scheme but combine it with a compression step, such as taking each coordinate modulo 2^16.
If you can do a = ((y & 0xffff) << 16) | (x & 0xffff), then you can afterwards apply a reversible 32-bit mix to a, such as Thomas Wang's:
uint32_t hash(uint32_t a)
{
    a = (a ^ 61) ^ (a >> 16);
    a = a + (a << 3);
    a = a ^ (a >> 4);
    a = a * 0x27d4eb2d;
    a = a ^ (a >> 15);
    return a;
}
That way you get a random-looking result rather than high bits from one dimension and low bits from the other.
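Ported to Swift as a sketch (the packing-plus-mix function name is mine; &+ and &* stand in for C's modular unsigned arithmetic):
// Pack the low 16 bits of each coordinate, then apply Thomas Wang's
// 32-bit integer mix so the result looks random rather than structured.
func packAndMix(x: UInt32, y: UInt32) -> UInt32 {
    var a = ((y & 0xFFFF) << 16) | (x & 0xFFFF)
    a = (a ^ 61) ^ (a >> 16)
    a = a &+ (a << 3)
    a = a ^ (a >> 4)
    a = a &* 0x27d4eb2d
    a = a ^ (a >> 15)
    return a
}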
You can do
a >= b ? a * a + a + b : a + b * b
taken from here.
That works for points in the positive quadrant. If your coordinates can be negative too, then you will have to do:
A = a >= 0 ? 2 * a : -2 * a - 1;
B = b >= 0 ? 2 * b : -2 * b - 1;
A >= B ? A * A + A + B : A + B * B;
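Here is a hedged Swift sketch of those formulas (the name is mine; Int64 intermediates keep -2*a - 1 from overflowing, and for 32-bit inputs the result always fits in UInt64):
// Szudzik's pairing, extended to negative coordinates by first
// folding each input onto the non-negative integers.
func szudzikPair(_ a: Int32, _ b: Int32) -> UInt64 {
    let A = UInt64(a >= 0 ? 2 * Int64(a) : -2 * Int64(a) - 1)
    let B = UInt64(b >= 0 ? 2 * Int64(b) : -2 * Int64(b) - 1)
    return A >= B ? A * A + A + B : A + B * B
}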
But to restrict the output to uint you will have to keep an upper bound on your inputs; and if you do, it turns out that you know the bounds after all. In other words, in programming it is impractical to write a function without some idea of the integer types of its inputs and output, and every integer type has a definite lower and upper bound.
public uint GetHashCode(whatever a, whatever b)
{
if (a > ushort.MaxValue || b > ushort.MaxValue ||
a < ushort.MinValue || b < ushort.MinValue)
{
throw new ArgumentOutOfRangeException();
}
return (uint)(a * short.MaxValue + b); //very good space/speed efficiency
//or whatever your function is.
}
If you want the output to be strictly uint for an unknown range of inputs, then there will be a reasonable number of collisions, depending on that range. What I would suggest is a function that can overflow, but unchecked. Emil's solution is great; in C#:
return unchecked((uint)((a & 0xffff) << 16 | (b & 0xffff)));
See Mapping two integers to one, in a unique and deterministic way for a plethora of options.
Depending on your use case, it might be possible to use a quadtree and replace points with the strings of their branch names. This is effectively a sparse representation for points; it needs a custom quadtree structure that extends the canvas by adding branches when you add points off the canvas, but it avoids collisions and gives you benefits like quick nearest-neighbour searches.
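One concrete way to get such a branch string is bit interleaving (a Morton or Z-order code), since each pair of interleaved bits names the quadrant at one level of the tree; a sketch for 16-bit coordinates (the function name is mine):
// Interleave the bits: bit i of x lands at position 2i, bit i of y
// at position 2i + 1. Collision-free for 16-bit inputs.
func mortonCode(_ x: UInt16, _ y: UInt16) -> UInt32 {
    var code: UInt32 = 0
    for i in 0..<16 {
        code |= ((UInt32(x) >> i) & 1) << (2 * i)
        code |= ((UInt32(y) >> i) & 1) << (2 * i + 1)
    }
    return code
}

print(mortonCode(3, 1))  // 7: x's bits land at positions 0 and 2, y's at 1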
If you're using a language or platform where all objects (even primitives like integers) have built-in hash functions (JVM languages like Java, .NET languages like C#, and others such as Python and Ruby), you may use the built-in hash values as building blocks and add your own "hashing flavor" into the mix. For example:
// C# code snippet
public class SomeVerySimplePoint {
    public int X;
    public int Y;
    public override int GetHashCode() {
        return (Y.GetHashCode() << 16) ^ X.GetHashCode();
    }
}
It may also be handy to run a test suite (such as a predefined million-point set) against each candidate hash algorithm and compare them on aspects like computation time, memory required, key collision count, and edge cases (very big or very small values).
The Fibonacci hash works very well for integer pairs. Use the multiplier 0x9E3779B9 for 32-bit words (for other word sizes, take (sqrt(5)-1)/2 * 2^w, i.e. 2^w/phi, and round to odd) and compute
a1 + a2*multiplier
This gives very different values for pairs that are close together; I do not know about the result over all pairs.
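A minimal Swift sketch of that recipe, assuming 32-bit words (the function name is mine):
// Fibonacci hashing of a pair: the odd multiplier 0x9E3779B9 scatters
// consecutive a2 values across the whole 32-bit range.
func fibonacciPairHash(_ a1: UInt32, _ a2: UInt32) -> UInt32 {
    return a1 &+ a2 &* 0x9E3779B9
}

print(fibonacciPairHash(0, 1))  // 2654435769
print(fibonacciPairHash(0, 2))  // 1013904242 (wrapped around 2^32)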