My extensions to the Int type do not work for raw, negative values. I can work around it but the failure seems to be a type inference problem. Why is this not working as expected?
I first encountered this within the application development environment, but I have recreated a simple form of it here on the Playground. I am using the latest version of Xcode, Version 6.2 (6C107a).
That's because the - is interpreted as the unary minus operator applied to the expression 2.foo(), and not as part of the -2 numeric literal: the method call binds more tightly than the minus sign.
To prove that, just try this:
-(1.foo())
which generates the same error
Could not find member 'foo'
The message is probably misleading, because the error is about trying to apply the minus operator to the return value of the foo method.
I don't know if that is an intentional behavior or not, but it's how it works :)
This is likely a compiler bug (report on radar if you like). Use brackets:
println((-2).foo())
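To illustrate, here is a minimal sketch with a hypothetical foo() standing in for the asker's extension (the exact error wording varies by Swift version):

extension Int {
    // hypothetical stand-in for the asker's extension
    func foo() -> String {
        return "value is \(self)"
    }
}

// -2.foo() is parsed as -(2.foo()), so unary minus is applied to the
// String returned by foo(), and the compiler rejects the expression:
// let bad = -2.foo()

let good = (-2).foo()   // "value is -2": the parentheses bind '-' to the literal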
I am reading through "A MiniZinc Tutorial" by Kim Marriott and it says that
the combination of variable instantiation and type is called type-inst. As you start to use Minizinc, you will undoubtedly see examples of type-inst errors.
What exactly are type-inst errors?
I believe the terminology is not often used in the MiniZinc literature these days, but for every value in MiniZinc the compiler keeps track of two things: its type (int, bool, float, etc.) and whether it is a decision variable (value not known until solve time) or a problem parameter (value must be known when the model is rewritten for the solver). Together these two things are called the type instantiation, or type-inst.
A type-inst error is an error given by the type checker of the compiler. These errors can occur in many places: when the declared type instantiation of a declaration doesn't match its right-hand side, when the two sides of an if-then-else have different type instantiations, or when the arguments of a call do not match the declared type-inst of the function declaration.
The mismatch that causes these errors can come from either half of the type-inst: either the types are incompatible (e.g., a float was used where a bool was expected), or a decision variable was used where only a problem parameter is allowed. These issues are usually caused by mistakes in the model and are usually easy to resolve by changing the value used or switching to a different language construct.
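For example, a minimal (hypothetical) model that would trigger a type-inst error by assigning a decision variable to a parameter:

var 1..10: x;
% int: n = x;    % type-inst error: `n` is declared `par int`, but `x` is a `var int`
var int: m = x;  % OK: the declared type-inst matches the right-hand side
solve satisfy;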
Note that MiniZinc does allow sub-typing: you are allowed to use a bool instead of an int, and it is converted to a 0/1 value. Similarly, you can use an integer value instead of a float, and you can use a parameter in place of a variable.
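A quick sketch of those coercions:

var bool: b;
var int: zeroOne = b;    % OK: bool coerces to a 0/1 integer
var float: f = zeroOne;  % OK: int coerces to float
int: p = 3;
var int: v = p;          % OK: a parameter may be used where a variable is expected
solve satisfy;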
The newest version of the MiniZinc Tutorial can be found with its documentation: https://www.minizinc.org/doc-latest/en/part_2_tutorial.html
I copied and pasted the widely available code for the djb2 hashing function, but it generates the error shown below (I am using the CS50 IDE, which may be a factor). Since this error IS fixed by a second set of parentheses, can someone explain why those parentheses aren't in the code I find everywhere online?
dictionary.c:67:14: error: using the result of an assignment as a condition without
parentheses [-Werror,-Wparentheses]
while (c = *word++)
~~^~~~~~~~~
dictionary.c:67:14: note: place parentheses around the assignment to silence this
warning
while (c = *word++)
^
( )
dictionary.c:67:14: note: use '==' to turn this assignment into an equality comparison
while (c = *word++)
^
==
= is used for assigning a value to a variable; == is the relational operator used for comparing values for equality. Perhaps you are finding the C++ version of the function, or perhaps it is the IDE's compiler rules/config.
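For reference, a minimal sketch of the usual djb2 loop with the extra set of parentheses, which compiles cleanly even under -Werror,-Wparentheses:

#include <stdio.h>

// djb2 (Dan Bernstein): the doubled parentheses tell the compiler
// that using the assignment as the loop condition is intentional
unsigned long hash(const char *word)
{
    unsigned long hash = 5381;
    int c;
    while ((c = *word++))                 // assign, then test the assigned value
        hash = ((hash << 5) + hash) + c;  // hash * 33 + c
    return hash;
}

int main(void)
{
    printf("%lu\n", hash("hello"));
    return 0;
}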
I understand about = vs ==. My question was how come I get the compiler error with code that is correct, since it is a well-established hash function.
Turns out it is related to the CS50 makefile being more stringent than clang on its own. Needlessly frustrating.
I'm using GWT 2.8.2.
When I run the following code in SuperDev mode, it logs 123.456, which is what I expect.
double d = 123.456789;
JsNumber num = Js.cast(d);
console.log(num.toFixed(3));
When I compile to JavaScript and run, it logs 123 (i.e. it does not show the decimal places).
I have tried running the code on Android Chrome, Windows Chrome and Windows Firefox. They all exhibit the same behavior.
Any idea why there is a difference and is there anything I can do about it?
Update: After a bit more digging, I've found that it's to do with the coercion of the integer parameter.
console.log(num.toFixed(3)); // 123 (wrong)
console.log(num.toFixed(3d)); // 123.456 (correct)
It seems that the JsNumber class in Elemental2 has defined the signature as:
public native String toFixed(Object digits);
I think it should be:
public native String toFixed(int digits);
I'm still not sure why it works during SuperDev mode and not when compiled though.
Nice catch! This appears to be a bug in the jsinterop-generator configuration used when generating Elemental2's sources. Since JS doesn't have a way to say that a number is either an integer or a floating point value, the source material that jsinterop-generator works with can't accurately describe what that argument needs to be.
Usually, the fix is to add an entry to integer_entities.txt (https://github.com/google/elemental2/blob/master/java/elemental2/core/integer_entities.txt), so that the generator knows that this parameter can only be an integer. However, when I made this change, the generator didn't act on the new line, and logged that fact. It turns out that it only makes this change when the parameter is a number of some kind, which Object clearly isn't.
The proper fix, then, is probably to fix the externs that describe what JsNumber.toFixed is supposed to take as an argument. The spec says that it can actually take a non-number value, and after conversion to a number the result doesn't even need to be an integer (see https://www.ecma-international.org/ecma-262/5.1/#sec-15.7.4.5 and https://www.ecma-international.org/ecma-262/5.1/#sec-9.3).
So instead we need to be sure to pass whatever literal value the Java developer provides to the function, so that it is parsed correctly within JS; this means that the argument needs to be annotated with @DoNotAutobox. Or we could clarify this to say that the argument can be either Object or Number, and toFixed(Object) will still be emitted, but there will be a numeric version too.
Alternatively, you can work around this as you have done, or by providing a string value of the number of digits you want:
console.log(num.toFixed("3"));
Filed as https://github.com/google/elemental2/issues/129
The problem is that Java automatically wraps the int as an Integer, and GWT ends up transpiling the boxed Integer as a special object in JS (not a number). But if you use a double, the boxed Double is transpiled to a native number by GWT and the problem disappears.
I'm not absolutely sure why this works in SuperDev mode; it should not. I think the difference is that SDM maps the native toString to the Java toString, and (even weirder) the native toFixed calls toString on its argument. In SDM the boxed Integer's toString returns the string representation of the number, which ends up being coerced back to an int, but in production the boxed Integer's toString returns "[object Object]", which is handled as NaN.
There is a special annotation, @DoNotAutobox, that makes it possible to pass primitive integers to JS native APIs. It prevents the integer auto-wrapping, so the int is transpiled to a native number (see its usage on the Js#coerceToInt method). Elemental2 might add this annotation or change the type to int as you suggest. Please create an issue in the elemental2 repo to fix this (https://github.com/google/elemental2/issues/new).
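In the meantime, a minimal sketch of the workarounds mentioned above, assuming the current toFixed(Object) signature (using elemental2.core.JsNumber and jsinterop.base.Js):

import elemental2.core.JsNumber;
import jsinterop.base.Js;

public class ToFixedDemo {
    public static void run() {
        JsNumber num = Js.cast(123.456789);

        // Passing a double sidesteps the Integer boxing described above,
        // because a boxed Double is transpiled to a native JS number:
        String viaDouble = num.toFixed(3d);

        // Passing a string also works: the spec coerces it with ToNumber.
        String viaString = num.toFixed("3");
    }
}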
I recently checked out a fresh copy of the Surge framework here. I successfully added it to our Xcode project; however, I am receiving the following compile error.
Binary operator '-' cannot be applied to two '[Float]' operands
I then tried importing it into a fresh, clean Xcode project and I still get the same error. Has anyone seen this error before and knows of a fix? Or is it an issue with the framework itself?
You cannot use a binary operator on arrays, which is what you have here. You can tell because "Float" is surrounded by brackets. You can only use binary operators on individual values.
var myArray = [Float]()        // an array (note the brackets)
myArray.append(15)             // myArray now contains one value (15), but you still can't use '-' on it, because it's an array
print(myArray - myArray)       // error: binary operator '-' cannot be applied to two '[Float]' operands
print(myArray[0] + myArray[0]) // prints 30.0; the myArray[0] syntax accesses the first element of the array
Long story short: switch to a non-array Float if you don't need an array, or directly access the items you want in your array in order to use binary operators on them.
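If you do want element-wise arithmetic on the arrays themselves (which is what Surge's operators provide), a minimal sketch without the framework:

let a: [Float] = [3, 2, 1]
let b: [Float] = [0.5, 0.5, 0.5]
let diff = zip(a, b).map { $0 - $1 }  // element-wise subtraction: [2.5, 1.5, 0.5]
print(diff)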
SOLVED: Turns out that this was an oversight with the latest commit to the framework, and wasn't me missing something obvious.
I'm debugging a UIProgressView. Specifically I'm calling -setProgress: and -setProgress:animated:.
When I call it in LLDB using:
p (void) [_progressView setProgress:(float)0.5f]
the progressView ends up with a progress value of 0. Apparently, LLDB doesn't parse the float value correctly.
Any idea how I can get float arguments being parsed correctly by LLDB?
Btw, I'm experiencing the same problem in GDB.
In Xcode 4.5 and before, this was a common problem. If I remember correctly, what was really happening there was that the old C no-prototype type promotion rules were in effect. If you were passing a floating point value, it had type double. If you were passing an integral value, it was passed as int, that kind of thing. If you wrote (float)0.8f, lldb would take those 4 bytes of (float) and pass them to something that reads 8 bytes and interprets it as a double.
In Xcode 4.6, lldb will fetch the argument types from the Objective-C runtime, if it can, so it knows that the argument is really taking a float here. You shouldn't even need the (float) cast.
My guess is that when you give lldb a pointer to an object, as in p (void) [0x260da630 setProgress:..., the expression parser isn't looking at the object's isa to get the class and pull the argument types out of it. As soon as you added a cast to the object address, it got the types.
I think when you wrote setProgress:(float)0.8f for gdb, it would take this as a special indication that this argument is a float type -- in essence, you were providing the prototype. It's something that I think lldb's expression parser should do some time in the future, but the fact that clang is used to do all the expression parsing means that it's a little harder to shoehorn these non-standard meanings into it. (there are already a few, of course, e.g. p $r0 works ;)
Found the problem. In reality my LLDB command looked slightly different:
p (void) [0x260da630 setProgress:(float)0.8f animated:NO]
where 0x260da630 is my UIProgressView. Apparently, the debugger really needs to know the exact type of the receiving object and doesn't honor the cast of the argument, so
p (void) [(UIProgressView*)0x260da630 setProgress:(float)0.8f animated:NO]
works. (Even casting to id wasn't sufficient!)
Thanks for your comments, Martin R and Martin Ullrich, and apologies for having broken my question while trimming it for better readability!
Btw, I swear, I had used the property instead of the address as well. But perhaps restarting Xcode also helped…