I'm debugging a UIProgressView. Specifically, I'm calling -setProgress: and -setProgress:animated:.
When I call it in LLDB using:
p (void) [_progressView setProgress:(float)0.5f]
the progressView ends up with a progress value of 0. Apparently, LLDB doesn't parse the float value correctly.
Any idea how I can get float arguments parsed correctly by LLDB?
Btw, I'm experiencing the same problem in GDB.
In Xcode 4.5 and before, this was a common problem. If I remember correctly, what was really happening there was that the old C no-prototype type promotion rules were in effect. If you were passing a floating point value, it had type double. If you were passing an integral value, it was passed as an int, that kind of thing. If you wrote (float)0.8f, lldb would take those 4 bytes of (float) and pass them to something that reads 8 bytes and interprets them as a double.
In Xcode 4.6, lldb will fetch the argument types from the Objective-C runtime, if it can, so it knows that the argument is really taking a float here. You shouldn't even need the (float) cast.
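For what it's worth, you can inspect the encoding lldb reads by asking the runtime yourself. A minimal sketch, meant to run in app code rather than in the debugger (the exact offsets in the string vary by architecture):

#import <UIKit/UIKit.h>
#import <objc/runtime.h>

// Ask the runtime for the type encoding of -setProgress:animated:.
// The 'f' in the result marks the float argument, the 'c' the BOOL.
static void DumpSetProgressEncoding(void) {
    Method m = class_getInstanceMethod([UIProgressView class],
                                       @selector(setProgress:animated:));
    NSLog(@"%s", method_getTypeEncoding(m));  // e.g. "v16@0:4f8c12"
}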
My guess is that when you give lldb a bare pointer to an object, as in p (void) [0x260da630 setProgress:...], the expression parser isn't looking at the object's isa to get the class and fetch the argument types from it. As soon as you added a cast to the object address, it got the types.
I think when you wrote setProgress:(float)0.8f for gdb, it would take this as a special indication that this argument is a float type -- in essence, you were providing the prototype. It's something that I think lldb's expression parser should do some time in the future, but the fact that clang is used to do all the expression parsing means that it's a little harder to shoehorn these non-standard meanings into it. (there are already a few, of course, e.g. p $r0 works ;)
Found the problem. In reality my LLDB command looked slightly different:
p (void) [0x260da630 setProgress:(float)0.8f animated:NO]
where 0x260da630 is my UIProgressView. Apparently, the debugger really needs to know the exact type of the receiving object and doesn't honor the cast of the argument, so
p (void) [(UIProgressView*)0x260da630 setProgress:(float)0.8f animated:NO]
works. (Even casting to id wasn't sufficient!)
Thanks for your comments, Martin R and Martin Ullrich, and apologies for having broken my question while trimming it for better readability!
Btw, I swear, I had used the property instead of the address as well. But perhaps restarting Xcode also helped…
I have recently been learning Unity editor scripting, and I came across a piece of code in the Unity Manual like this:
EditorWindowTest window = (EditorWindowTest)EditorWindow.GetWindow(typeof(EditorWindowTest), true, "My Empty Window");
I don't understand why it bothers to cast the result with (EditorWindowTest) again, since the type has already been specified in the parameter list of GetWindow().
Thanks in advance :)
There are multiple overloads of the EditorWindow.GetWindow method.
The one used in your code snippet is one of the non-generic ones. It accepts a Type argument which it can use at runtime to create the right type of window. However, since it doesn't use generics, it's not possible to know the type of the window at compile time, so the method just returns an EditorWindow, as that's the best it can do.
You can hover over a method in your IDE to see the return type of any method for yourself.
When using one of the generic overloads of the GetWindow method, you don't need to do any manual casting, since the method already knows at compile time the exact type of the window and returns an instance of that type directly.
The generic variants should be used when possible, because it makes the code safer by removing the need for casting at runtime, which could cause exceptions.
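For example, a minimal sketch of the generic variant, reusing the EditorWindowTest class from the question:

using UnityEditor;

public class EditorWindowTest : EditorWindow
{
    [MenuItem("Window/My Empty Window")]
    static void Open()
    {
        // The generic overload already knows the window type, so no cast is needed.
        EditorWindowTest window =
            EditorWindow.GetWindow<EditorWindowTest>(true, "My Empty Window");
    }
}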
If you look closely, GetWindow's return type is EditorWindow, not EditorWindowTest, so the typecast makes sense.
https://docs.unity3d.com/ScriptReference/EditorWindow.GetWindow.html
I'm using GWT 2.8.2.
When I run the following code in SuperDev mode, it logs 123.456, which is what I expect.
double d = 123.456789;
JsNumber num = Js.cast(d);
console.log(num.toFixed(3));
When I compile to JavaScript and run, it logs 123 (i.e. it does not show the decimal places).
I have tried running the code on Android Chrome, Windows Chrome and Windows Firefox. They all exhibit the same behavior.
Any idea why there is a difference and is there anything I can do about it?
Update: After a bit more digging, I've found that it's to do with the coercion of the integer parameter.
console.log(num.toFixed(3)); // 123 (wrong)
console.log(num.toFixed(3d)); // 123.456 (correct)
It seems that the JsNumber class in Elemental2 has defined the signature as:
public native String toFixed(Object digits);
I think it should be:
public native String toFixed(int digits);
I'm still not sure why it works during SuperDev mode and not when compiled though.
Nice catch! This appears to be a bug in the jsinterop-generator configuration used when generating Elemental2's sources. Since JS doesn't have a way to say that a number is either an integer or a floating point value, the source material that jsinterop-generator works with can't accurately describe what that argument needs to be.
Usually, the fix is to add an entry to integer_entities.txt (https://github.com/google/elemental2/blob/master/java/elemental2/core/integer_entities.txt) so that the generator knows this parameter can only be an integer. However, when I made this change, the generator didn't act on the new line, and logged that fact. It turns out it only makes this change when the parameter is a number of some kind, which Object clearly isn't.
The proper fix, then, is probably to correct the externs that describe what JsNumber.toFixed is supposed to take as an argument. The spec says it can actually take a non-number value, and after conversion to a number it doesn't even need to be an integer (see https://www.ecma-international.org/ecma-262/5.1/#sec-15.7.4.5 and https://www.ecma-international.org/ecma-262/5.1/#sec-9.3).
So instead we need to be sure to pass whatever literal value the Java developer provides straight through to the function, so that it is interpreted correctly within JS - this means the argument needs to be annotated with @DoNotAutobox. Or we could clarify this to say the argument can be either Object or Number: toFixed(Object) would still be emitted, but there would now be a numeric version too.
Alternatively, you can work around this as you have done, or by providing a string value of the number of digits you want:
console.log(num.toFixed("3"));
Filed as https://github.com/google/elemental2/issues/129
The problem is that Java automatically wraps the int as an Integer, and GWT ends up transpiling the boxed Integer as a special object in JS (not a number). But if you use a double, the boxed Double is transpiled as a native JS number by GWT, and the problem disappears.
I'm not absolutely sure why this works in SuperDev mode; it should not. I think the difference is that SDM maps the native toString to the Java toString, and (even weirder) the native toFixed calls toString on its argument. In SDM, the boxed Integer's toString returns the string representation of the number, which ends up being coerced back to an int, but in production the boxed Integer's toString returns "[object Object]", which is handled as NaN.
There is a special annotation, @DoNotAutobox, to allow using primitive integers in JS native APIs. It prevents the integer auto-wrapping, so the int is transpiled to a native number (see the example usage in the Js#coerceToInt method). Elemental2 might add this annotation or change the type to int as you suggest. Please create an issue in the elemental2 repo to fix this (https://github.com/google/elemental2/issues/new).
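To make that concrete, here is a rough sketch of what the annotated signature could look like (the package of @DoNotAutobox is my assumption; I believe it lives in javaemul.internal.annotations in GWT/jsinterop-base):

import javaemul.internal.annotations.DoNotAutobox;
import jsinterop.annotations.JsPackage;
import jsinterop.annotations.JsType;

@JsType(isNative = true, name = "Number", namespace = JsPackage.GLOBAL)
public class JsNumber {
    // @DoNotAutobox keeps an int argument as a plain JS number instead of a
    // boxed Integer object, so toFixed(3) reaches JS as the number 3.
    public native String toFixed(@DoNotAutobox Object digits);
}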
My extensions to the Int type do not work for raw, negative values. I can work around it but the failure seems to be a type inference problem. Why is this not working as expected?
I first encountered this within the application development environment, but I have recreated a simple form of it here on the Playground. I am using the latest version of Xcode, Version 6.2 (6C107a).
That's because - is interpreted as the minus operator applied to the integer 2, and not as part of the -2 numeric literal.
To prove that, just try this:
-(1.foo())
which generates the same error
Could not find member 'foo'
The message is probably misleading, because the error is about trying to apply the minus operator to the return value of the foo method.
I don't know if that is an intentional behavior or not, but it's how it works :)
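To illustrate with a made-up extension (the question's foo() isn't shown, so this body is only an assumption):

extension Int {
    func foo() -> Int { return self * self }
}

println(-2.foo())    // parsed as -(2.foo()), prints -4
println((-2).foo())  // the method applied to -2 itself, prints 4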
This is likely a compiler bug (report on radar if you like). Use brackets:
println((-2).foo())
On iPhone (Xcode 4), when using the function
srandom(time(NULL));
both srand and srandom are giving this warning. But when I run it, it works fine.
Why am I getting the warning in one of my class files? I have used the same call in other files, but got no warning there.
Warning: passing argument 1 of 'srand' makes integer from pointer without a cast
However, using arc4random() can avoid this problem. But in most examples srand() is used this way and nobody complains. That's why I am confused.
Because srand is expecting an integer and time() is returning a pointer (from the looks of your particular error). Casting explicitly to an int will make the warning go away. Or perhaps dereferencing the pointer to get the actual time value is what you are looking for instead. I'm not 100% certain of time's return value here, but I'd double-check to make sure it is indeed returning a tick count rather than a pointer to a time_t object that will remain mostly the same over time.
According to what I just read, it's supposed to return a time_t value, which, when cast to an int, is the number of seconds elapsed since 1970. So it's not a pointer usually, but in your case it may be. Either way, add either a dereference and a cast, or just a cast if you can get it to return the time_t directly.
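For reference, a minimal sketch of the usual idiom, assuming <time.h> is included (time() then returns a time_t value, not a pointer, and the explicit cast silences the warning):

#include <stdio.h>
#include <stdlib.h>
#include <time.h>

int main(void) {
    srand((unsigned int)time(NULL));  /* seed once with the current time */
    printf("%d\n", rand() % 100);     /* pseudo-random value in [0, 99] */
    return 0;
}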
size_t pixelsWidth = (size_t)bitmapSize.width;
Or is it totally fine to do this without the cast to size_t? bitmapSize is of type CGSize...
You should use the proper type, which is probably CGFloat. size_t is something int'ish and inappropriate.
In this case, the type of bitmapSize.width is CGFloat (currently float on iPhone).
Converting from float to size_t has undefined behavior (according to the C standard - not sure whether Apple provides any further guarantees) if the value converted doesn't fit into a size_t. When it does fit, the conversion loses the fractional part. It makes no difference whether the conversion is implicit or explicit.
The cast is "good" in the sense that it shows, in the code, that a conversion takes place, so programmers reading the code won't forget. It's "bad" in the sense that it probably suppresses any warnings that the compiler would give you that this conversion is dangerous.
So, if you're absolutely confident that the conversion will always work then you want the cast (to suppress the warning). If you're not confident then you don't want the cast (and you probably should either avoid the conversion or else get confident).
In this case, it seems likely that the conversion is safe as far as size is concerned. Assuming that your CGSize object represents the size of an actual UI element, it won't be all that big. A comment in the code would be nice, but this is the kind of thing that programmers stop commenting after the fiftieth time, and start thinking it's "obvious" that of course an image width fits in a size_t.
A further question is whether the conversion is safe regarding fractions. Under what circumstances, if any, would the size be fractional?
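If you want the conversion to be self-checking, something along these lines would make both concerns explicit (a sketch; the helper name is mine):

#include <assert.h>
#include <math.h>
#include <stddef.h>
#include <CoreGraphics/CoreGraphics.h>

/* Convert a CGFloat width to size_t, asserting that it is non-negative
   and whole before the cast truncates it. */
static size_t WidthAsSizeT(CGSize bitmapSize) {
    CGFloat w = bitmapSize.width;
    assert(w >= 0 && w == floor(w));
    return (size_t)w;
}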
C supports implicit conversion, and you will also get a warning if size_t is less precise than CGFloat for some reason.
The size_t type is for something completely different; you should not use it for such purposes.
Its purpose is to express the sizes of types in memory. For example, the result of sizeof(int) has type size_t and gives the size of the int type.
As the others suggested, use the appropriate type for that variable.
A cast is usually not needed (and sometimes wrong).
C does the "right thing" most of the time.
In your case, the cast is not needed (but not wrong, merely redundant).
You need to cast:
- arguments to printf when they don't match the conversion specification:
  printf("%p\n", (void *)some_pointer);
- arguments to the is*() functions (isdigit, isblank, ...) and to toupper() and tolower():
  if (isxdigit((unsigned char)ch)) { /* ... */ }
(If I remember more, I'll add them here.)