Error "Cast from pointer to smaller type 'int' loses information" in EAGLView.mm when update Xcode to 5.1 (5B130a) - iphone

Yesterday, I updated Xcode to the newest version (5.1 (5B130a)) to be compatible with iOS 7.1. Now when I build my project, I get the error "Cast from pointer to smaller type 'int' loses information" in the EAGLView.mm file (line 408) whenever a 64-bit simulator (e.g. iPhone Retina 4-inch 64-bit) is selected.
I'm using cocos2d-x-2.2.2. Before I updated Xcode, my project built and ran normally on all devices.
Thanks for any recommendations.
Update: Today I downloaded the latest version of cocos2d-x (2.2.3), but the problem still occurs.
Here is the piece of code where the error occurs:
/cocos2d-x-2.2.2/cocos2dx/platform/ios/EAGLView.mm:408:18: Cast from pointer to smaller type 'int' loses information
// Pass the touches to the superview
#pragma mark EAGLView - Touch Delegate
- (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event
{
    if (isKeyboardShown_)
    {
        [self handleTouchesAfterKeyboardShow];
        return;
    }
    int ids[IOS_MAX_TOUCHES_COUNT] = {0};
    float xs[IOS_MAX_TOUCHES_COUNT] = {0.0f};
    float ys[IOS_MAX_TOUCHES_COUNT] = {0.0f};
    int i = 0;
    for (UITouch *touch in touches) {
        ids[i] = (int)touch; // error occurs here
        xs[i] = [touch locationInView:[touch view]].x * view.contentScaleFactor;
        ys[i] = [touch locationInView:[touch view]].y * view.contentScaleFactor;
        ++i;
    }
    cocos2d::CCEGLView::sharedOpenGLView()->handleTouchesBegin(i, ids, xs, ys);
}

Apparently the clang version in Xcode 5.1 and above is stricter about potential 32-bit vs. 64-bit incompatibilities in source code than older clang versions have been.
To be honest, I think clang is too restrictive here. A sane compiler may throw a warning on lines like this, but it should by no means throw an error, because this code is NOT wrong, just potentially error-prone, and can be perfectly valid.
The original code is
ids[i] = (int)touch;
with ids being an array of ints and touch being a pointer.
In a 64-bit build a pointer is 64 bits (contrary to a 32-bit build, where it is 32 bits), while an int is 32 bits, so this assignment stores a 64-bit value in 32-bit storage, which may result in a loss of information.
Therefore it is perfectly valid for the compiler to throw an error for a line like
ids[i] = touch;
However, the actual code in question contains an explicit C-style cast to int. This explicit cast clearly tells the compiler: "Shut up, I know that this code does not look correct, but I do know what I am doing."
So the compiler is being very picky here, and the correct solution to make the code compile again, while showing exactly the same behavior as in Xcode 5.0, is to first cast to an integer type whose size matches that of a pointer and then do a second cast to the int that we actually want:
ids[i] = (int)(size_t)touch;
I am using size_t here because it always has the same size as a pointer, no matter the platform. A long long would not work for 32-bit systems (where it is wider than a pointer) and a long would not work for 64-bit Windows: while 64-bit Unix and Unix-like systems such as OS X use the LP64 data model, in which a long is 64 bits, 64-bit Windows uses the LLP64 data model, in which a long is only 32 bits (http://en.wikipedia.org/wiki/64-bit_computing#64-bit_data_models).
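To illustrate, here is a minimal sketch of the double cast in plain C rather than the Objective-C++ of EAGLView.mm, with a local variable standing in for the UITouch object:
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    int value = 42;
    void *p = &value;            /* stand-in for the UITouch * in the question */

    /* (int)p alone is what the Xcode 5.1 clang rejects on 64-bit targets.
       Going through a pointer-sized integer first makes the truncation
       explicit, which restores the old behavior: the upper 32 bits of the
       address are deliberately thrown away. */
    int id = (int)(size_t)p;

    printf("pointer %p stored as id %d\n", p, id);
    return 0;
}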

I met this problem too.
ids[i] = (int)touch; // error occurs here => I changed this to the line below.
ids[i] = (uintptr_t)touch;
Then I could continue compiling. Maybe you can try this too.

Xcode 5.1 changes the standard architectures to include 64-bit.
You can restrict compilation to 32-bit by making the changes below in Build Settings:
use $(ARCHS_STANDARD_32_BIT) for Architectures instead of $(ARCHS_STANDARD)
remove arm64 from Valid Architectures
Hope it helps.

You can fix this error by replacing that line of code with:
ids[i] = (uint64_t)touch;
You should perform the conversion through a 64-bit type because in a 64-bit build a pointer is 64 bits wide and no longer fits into a 32-bit int.

Surely the solution is to change the type of ids from int to a type that is sufficiently large to hold a pointer.
I'm unfamiliar with Xcode, but the solution should be something like the following:
Change the declaration of ids to:
intptr_t ids[IOS_MAX_TOUCHES_COUNT];
and the line producing the error to:
ids[i] = (intptr_t)touch;
Most of the "solutions" above can lose part of the pointer address when casting to a smaller type. If the value is ever used as pointer again that will prove to be an extremely bad idea.

Instead of ids[i] = (int)touch;, put a * and check it:
ids[i] = *(int *)touch;

Related

How should I enable cl_khr_fp64 in OpenCL?

I'm trying to get double precision to work in my OpenCL kernel but I'm having problems enabling cl_khr_fp64. If I put #pragma OPENCL EXTENSION cl_khr_fp64 : enable at the top of my kernel file and define a variable double u = 5.0; then it defines it and allows me to use +, -, *, and / on u. But if I try to do any math functions, for example double u = exp(5.0); it throws an error that it can't find the overloaded exp function for type double. Something weird I found is that if I check whether cl_khr_fp64 is defined via
#ifdef cl_khr_fp64
#pragma OPENCL EXTENSION cl_khr_fp64 : enable
#elif defined(cl_amd_fp64)
#pragma OPENCL EXTENSION cl_amd_fp64 : enable
#else
#error "Double precision floating point not supported by OpenCL implementation."
#endif
Then it throws the error that double precision isn't supported. If I just say to enable it then it gets enabled, but if I check to see if it is able to be enabled, then it says it can't.
I've checked the extensions on my card and cl_khr_fp64 is listed and I also checked the CL_DEVICE_DOUBLE_FP_CONFIG using clGetDeviceInfo and it returns 63. I'm using a MacPro on Yosemite with the AMD FirePro D700. I'm wondering if I enabled cl_khr_fp64 in the wrong place or something. The contents of my mykernel.cl file are below. It's just a modification of the Apple 'hello_world' OpenCL Xcode project. The code, as written works just fine, but if I change the line from double u = (5.0); to double u = exp(5.0); it doesn't work. Ultimately I want to use math functions on double variables. Any help would be greatly appreciated!
#pragma OPENCL EXTENSION cl_khr_fp64 : enable

__kernel void square5(global double* input, global double* output, double mul, int nv)
{
    size_t i = get_global_id(0);
    double u = (5.0);
    float left = u/1.2;
    if (i == 0) {
        output[i] = mul*pow((float)u,left)*input[i]*input[i];
    } else if (i == nv-1) {
        output[i] = mul*u*input[i]*input[i];
    } else {
        output[i] = 0.25*mul*u*(input[i-1] + input[i+1])*(input[i-1] + input[i+1]);
    }
}
Double precision was made a core-optional feature in OpenCL 1.2 (which should be the version that your device supports under OS X). This means that you shouldn't need to enable the extension in order to use it, if it is supported by the device. Enabling the extension shouldn't have any negative effect however.
You are not doing anything wrong, so this is likely a bug in Apple's OpenCL implementation. The same code (with the exp() function) compiles fine on my MacBook for the devices that support double precision. So, if your device definitely reports that it supports double precision, then you should file a bug in Apple's Bug Reporting System.
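If it helps, here is a rough host-side sketch of the CL_DEVICE_DOUBLE_FP_CONFIG check mentioned in the question (it assumes you already have a cl_device_id from clGetDeviceIDs; the device_supports_fp64 helper name is just for illustration):
#include <stdio.h>
#ifdef __APPLE__
#include <OpenCL/opencl.h>
#else
#include <CL/cl.h>
#endif

/* Returns 1 if the device reports usable double precision, 0 otherwise. */
int device_supports_fp64(cl_device_id device)
{
    cl_device_fp_config cfg = 0;
    cl_int err = clGetDeviceInfo(device, CL_DEVICE_DOUBLE_FP_CONFIG,
                                 sizeof(cfg), &cfg, NULL);
    if (err != CL_SUCCESS)
        return 0;
    /* 0 means no double support; the question's device returns 63, i.e. the
       basic capability bits are all set. */
    printf("CL_DEVICE_DOUBLE_FP_CONFIG = %llu\n", (unsigned long long)cfg);
    return cfg != 0;
}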

Unsequenced modification and access to parameter

I'm using an open source project (NSBKeyframeAnimation) for some of the animations in my project. Here is an example of the methods I'm using:
double NSBKeyframeAnimationFunctionEaseInQuad(double t, double b, double c, double d)
{
    return c*(t/=d)*t + b;
}
I have updated my Xcode to 5.0, and every method from this project started to show warnings like this: "Unsequenced modification and access to 't'". Should I rewrite all the methods in Objective-C, or is there another approach to get rid of all these warnings?
The behavior of the expression c*(t/=d)*t + b is undefined, and you should fix it,
e.g. to
t /= d;
return c*t*t + b;
See for example Undefined behavior and sequence points for a detailed explanation.
Those warnings can be disabled.
Put this before the code triggering the warning:
#pragma clang diagnostic push
#pragma clang diagnostic ignored "-Wunsequenced"
and this after that code:
#pragma clang diagnostic pop
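Applied to the function from the question, a sketch would look like this (note that it only silences the diagnostic; the unsequenced modification itself is still there):
#pragma clang diagnostic push
#pragma clang diagnostic ignored "-Wunsequenced"
double NSBKeyframeAnimationFunctionEaseInQuad(double t, double b, double c, double d)
{
    return c*(t/=d)*t + b;   /* warning suppressed, not fixed */
}
#pragma clang diagnostic pop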
However, nothing guarantees that compilers will always handle this case gracefully.
I came to this page because I got 50 of those warnings, all in exactly the same source file.
I'm grateful for those functions, but the programmer should realize that trying to write everything on one line is very "1980s", from when compilers weren't nearly as optimized as they are today.
And back when it actually mattered to win a few processor cycles, we only had a few million of them, not the billions we have now.
I would always put readability first.
The warning you are referring to appears in all versions of Xcode, since it is not Xcode that is the source of the warning but the programmer; and it is not Xcode that generates the warning, it is the Clang compiler it uses to build your code that is responsible for identifying the potential issue the warning raises.
While it may be that rewriting the expression will resolve the error, you do not have to do that in this case (or, technically, any other case like this). You can leave it as is.
The answer is to add sequence (or order) to the modification of the variable (i.e., to the assignment of a new value) and to its expression (i.e., to the return of its new value), and that only takes a few extra characters to achieve, namely, a pair of braces ensconced in parentheses and a semicolon (see Statements and Declarations in Expressions).
double NSBKeyframeAnimationFunctionEaseInQuad(double t, double b, double c, double d)
{
    return c*({(t/=d);})*t + b;
}

How can I use a float as an argument in LLDB?

I'm debugging a UIProgressView. Specifically I'm calling -setProgress: and -setProgress:animated:.
When I call it in LLDB using:
p (void) [_progressView setProgress:(float)0.5f]
the progressView ends up with a progress value of 0. Apparently, LLDB doesn't parse the float value correctly.
Any idea how I can get float arguments being parsed correctly by LLDB?
Btw, I'm experiencing the same problem in GDB.
In Xcode 4.5 and before, this was a common problem. If I remember correctly, what was really happening there was that the old C no-prototype type promotion rules were in effect. If you were passing a floating point value, it had type double. If you were passing an integral value, it was passed as int, that kind of thing. If you wrote (float)0.8f, lldb would take those 4 bytes of (float) and pass them to something that reads 8 bytes and interprets it as a double.
In Xcode 4.6, lldb will fetch the argument types from the Objective-C runtime, if it can, so it knows that the argument is really taking a float here. You shouldn't even need the (float) cast.
My guess is that when you give lldb a pointer to an object p (void) [0x260da630 setProgress:..., the expression parser isn't looking at the object's isa to get the class & getting the types out of it. As soon as you added a cast to the object address, it got the types.
I think when you wrote setProgress:(float)0.8f for gdb, it would take this as a special indication that this argument is a float type -- in essence, you were providing the prototype. It's something that I think lldb's expression parser should do some time in the future, but the fact that clang is used to do all the expression parsing means that it's a little harder to shoehorn these non-standard meanings into it. (there are already a few, of course, e.g. p $r0 works ;)
Found the problem. In reality my LLDB command looked slightly different:
p (void) [0x260da630 setProgress:(float)0.8f animated:NO]
where 0x260da630 is my UIProgressView. Apparently, the debugger really needs to know the exact type of the receiving object and doesn't honor the cast of the argument, so
p (void) [(UIProgressView*)0x260da630 setProgress:(float)0.8f animated:NO]
works. (Even casting to id wasn't sufficient!)
Thanks for your comments, Martin R and Martin Ullrich, and apologies for having broken my question for better readability!
Btw, I swear, I had used the property instead of the address as well. But perhaps restarting Xcode also helped…

Porting Issue: Pointer with offset in VC++

Ok, this compiles fine in GCC under Linux.
char * _v3_get_msg_string(void *offset, uint16_t *len) {/*{{{*/
    char *s;
    memcpy(len, offset, 2);
    *len = ntohs(*len);
    s = malloc(*len+1);
    memset(s, 0, *len+1);
    memcpy(s, offset+2, *len);
    s[*len] = '\0';
    *len+=2;
    return s;
}/*}}}*/
However, I'm having a problem porting it to Windows, due to the line...
memcpy(s, offset+2, *len);
Being a void pointer, VC++ doesn't want to offset the pointer. The usual caveat that CPP doesn't allow pointer offsets SHOULD be moot, as the whole project is being built under extern "C".
Now, this is only 1 function in many, and finding the answer to this will allow them all to be fixed. I would really prefer not having to rewrite the library project from the ground up, and I don't want to build under MinGW. There has to be a way to do this that I'm missing, and not finding in Google.
Well, you cannot do pointer arithmetic with void*; it is a GCC extension (arguably a questionable one) that this compiles under GCC. Try memcpy(s, ((char*)offset)+2, *len);
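For reference, a sketch of the whole function with that cast applied (plus the headers the snippet implies), which should compile under both GCC and VC++; the (char *) cast on malloc is only needed if the file ends up compiled as C++:
#include <stdlib.h>
#include <string.h>
#include <stdint.h>
#ifdef _WIN32
#include <winsock2.h>    /* ntohs */
#else
#include <arpa/inet.h>   /* ntohs */
#endif

char *_v3_get_msg_string(void *offset, uint16_t *len)
{
    char *s;
    memcpy(len, offset, 2);
    *len = ntohs(*len);
    s = (char *)malloc(*len + 1);
    memset(s, 0, *len + 1);
    /* cast to char * before offsetting: arithmetic on void * is a GCC extension */
    memcpy(s, ((char *)offset) + 2, *len);
    s[*len] = '\0';
    *len += 2;
    return s;
}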

Is GDB in Xcode just flakey?

I'm developing an iPhone application using mixed Obj-C and C++. It seems that sometimes the values of various fields are totally bogus as reported by gdb when stepping from an Obj-C file to a C++ file. For instance, in a method:
int count = 1;
for (int i = 0; i < count; ++i) {
    int x = 0; // put a breakpoint here to see how many times it gets hit.
}
In this example, sometimes gdb will report a value for 'count' other than 1; it might be 126346, for example. But stepping through the code, the loop is only iterated once, indicating that the value of 'count' actually was the expected value.
I'm new to Xcode, so I'm probably just missing something basic. But it sucks to doubt your tools. Has anyone else seen oddness in this area? Solved it?
I have not seen gdb misprint variables as you say. However, if you compile in Release instead of Debug, you can run into some weird things with code being optimized away, or not being able to see some variables...
From what you describe, it almost seems more like you had an uninitialized value for "count". If your code looked like:
int count;
then count could be pretty much anything, and thus would sometimes be 0 but other times some large random number.
Are you sure that you are getting the 'value' of count all the time? Use NSLog and see the value in the console; I think it will always show 1.
Also,
int count = 1; // put a breakpoint here to see the value of count, before and after execution of the statement.
for (int i = 0; i < count; ++i) {
    int x = 0;
}
When the breakpoint is hit, that particular line has not yet executed. Step through to see the change in the value: initially the value given by gdb will be some arbitrary value, since the variable is not yet initialized; once it gets initialized, the value changes to the new value.
Saw oddness, in this regard and in many others. Never solved.
Sometimes using the GDB console directly helps. The way Xcode wraps GDB is definitely flakey.