srandom(time(NULL)) giving warning - pointer to integer without a cast - iPhone

On the iPhone (Xcode 4), using the function
srandom(time(NULL));
both srand and srandom give this warning, although everything works fine at runtime.
Why am I getting the warning in only one of my class files? I have used the same call in other files, with no warning there.
Warning: passing argument 1 of 'srand' makes integer from pointer without a cast
Using arc4random() does avoid the problem. But most examples use srand() this way and nobody complains, which is why I am confused.

Because srand is expecting an integer and, from the looks of your particular error, time is being treated as returning a pointer. Casting explicitly to an integer will make the warning go away. Or, if you really do have a pointer, dereferencing it to get the actual time value might be what you are looking for instead. I'm not 100% certain of time's return value here, so I'd double-check that it is indeed returning a seconds value rather than a pointer to a time_t object.
According to the documentation, time() is supposed to return a time_t value, which, when converted to an integer, is the number of seconds elapsed since 1970 (the Unix epoch). So it is not normally a pointer, but in your case the compiler seems to think it is. Either way, add a dereference and a cast, or just a cast if you can get the time_t directly.
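For reference, a minimal compilable sketch of the cast fix (shown as plain C++ rather than Objective-C; the modulo bound is just an example):

#include <cstdio>
#include <cstdlib>
#include <ctime>

int main() {
    // time() returns a time_t value; the explicit cast matches srand's
    // unsigned int parameter and silences the warning.
    std::srand(static_cast<unsigned int>(std::time(nullptr)));
    std::printf("%d\n", std::rand() % 100);  // pseudo-random value in [0, 99]
    return 0;
}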


How can I use a float as an argument in LLDB?

I'm debugging a UIProgressView. Specifically I'm calling -setProgress: and -setProgress:animated:.
When I call it in LLDB using:
p (void) [_progressView setProgress:(float)0.5f]
the progressView ends up with a progress value of 0. Apparently, LLDB doesn't parse the float value correctly.
Any idea how I can get float arguments being parsed correctly by LLDB?
Btw, I'm experiencing the same problem in GDB.
In Xcode 4.5 and before, this was a common problem. If I remember correctly, what was really happening there was that the old C no-prototype type promotion rules were in effect. If you were passing a floating point value, it had type double. If you were passing an integral value, it was passed as int, that kind of thing. If you wrote (float)0.8f, lldb would take those 4 bytes of (float) and pass them to something that reads 8 bytes and interprets it as a double.
In Xcode 4.6, lldb will fetch the argument types from the Objective-C runtime, if it can, so it knows that the argument is really taking a float here. You shouldn't even need the (float) cast.
My guess is that when you give lldb a raw pointer to an object, as in p (void) [0x260da630 setProgress:..., the expression parser isn't looking at the object's isa to get the class and pulling the argument types from it. As soon as you added a cast to the object address, it got the types.
I think when you wrote setProgress:(float)0.8f for gdb, it would take this as a special indication that this argument is a float type -- in essence, you were providing the prototype. It's something that I think lldb's expression parser should do some time in the future, but the fact that clang is used to do all the expression parsing means that it's a little harder to shoehorn these non-standard meanings into it. (there are already a few, of course, e.g. p $r0 works ;)
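To make the 4-bytes-read-as-8 mismatch concrete, here is a small stand-alone sketch (ordinary C++, not lldb itself; it only mimics the byte-level effect described above):

#include <cstdio>
#include <cstring>

int main() {
    float f = 0.5f;    // the 4-byte value the old debugger was handed
    double d = 0.0;
    // Copy only the float's 4 bytes into the 8-byte double, mimicking a
    // reader that expects 8 bytes where the writer supplied 4.
    std::memcpy(&d, &f, sizeof f);
    std::printf("%f\n", d);  // prints 0.000000 on little-endian machines, not 0.5
    return 0;
}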
Found the problem. In reality my LLDB command looked slightly different:
p (void) [0x260da630 setProgress:(float)0.8f animated:NO]
where 0x260da630 is my UIProgressView. Apparently, the debugger really needs to know the exact type of the receiving object and doesn't honor the cast of the argument, so
p (void) [(UIProgressView*)0x260da630 setProgress:(float)0.8f animated:NO]
works. (Even casting to id wasn't sufficient!)
Thanks for your comments, Martin R and Martin Ullrich, and apologies for having broken my question when I shortened it for readability!
Btw, I swear, I had used the property instead of the address as well. But perhaps restarting Xcode also helped…

atoi() is not converting properly

I was trying to call atoi on the strings 509951644 and 4099516441. The first one got converted without any problem. The second one is giving me the decimal value 2,147,483,647 (0x7FFFFFFF). Why is this happening?
Your second number overflows: the maximum 32-bit signed integer is 2147483647.
It's generally not recommended to use atoi anyway; use strtol instead, which actually tells you whether your value is out of range. (The behavior of atoi is undefined when the input is out of range; your implementation appears to simply spit out the maximum int value.)
You could also check if your compiler has an atoi64-style function, which would let you work with 64-bit values.
2147483647 is the maximum value of a signed int in C (on platforms with 32-bit ints). atoi is giving you the largest value it can represent; the original number is too large to convert to a signed int. I suggest looking into converting to an unsigned or wider integer type instead.
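A short sketch of the strtol-family approach suggested above, using the 64-bit strtoll so the second value from the question fits:

#include <cerrno>
#include <cstdio>
#include <cstdlib>

int main() {
    const char *inputs[] = { "509951644", "4099516441" };
    for (const char *s : inputs) {
        errno = 0;
        long long v = std::strtoll(s, nullptr, 10);  // reports range errors via errno
        if (errno == ERANGE)
            std::printf("%s is out of range\n", s);
        else
            std::printf("%s -> %lld\n", s, v);
    }
    return 0;
}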

Integer declaration issue

I am struggling with a bug in my program, and I finally found the cause: an integer shows the value 1 at the point of declaration. I cleaned and built again, but it still shows 1.
Can anyone explain why this happens?
When you declare a local variable without specifying a value, you must assign to it before reading from it. The 1 that you see in your integer variable could be any garbage value; it is unspecified, and reading it is undefined behavior. Initialize it explicitly:
int numberOfRecords = 0;
This is different from instance variables, which are initialized by default.
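A minimal sketch of the rule (shown in C++; the variable names are illustrative):

#include <cstdio>

int main() {
    int uninitialized;         // indeterminate: could happen to contain 1
    (void)uninitialized;       // reading it would be undefined behavior
    int numberOfRecords = 0;   // assigned before first read, always safe
    std::printf("%d\n", numberOfRecords);
    return 0;
}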

parseInt returns undefined

Facebook's code changes on Tuesday night have impacted how parseInt works in FBJS. Where I previously used it to convert decimal numbers to straight integers, now it always returns undefined.
For example:
return parseInt(decimalnum);
no longer works. Anyone figured out how we are supposed to round to integers now? Thanks.
Thanks for the report. It's fixed on trunk now; it should be out tomorrow unless there's another push later today.
I suspect that decimalnum is not defined in your function. Try replacing your return with return decimalnum; -- you may still be returning undefined.
parseInt is not for rounding - it actually takes the integer component of a number, or coerces a string to be a number. If you want to round, use Math.round. Depending on your usage, you may find Math.floor or Math.ceil useful.
Math.floor()
Math.ceil()
Math.round()
parseInt()
Did you try parseInt(decimalnum, 10); ?

Ambiguity of function overloading - Integers vs. Doubles

Suppose I wish to have 2 functions, one that generates a random integer within a given range, and one that generates a random double within a given range.
int GetRandomNumber( int min, int max );
double GetRandomNumber( double min, double max );
Notice that the method names are the same. I'm trying to decide whether to name the functions that or...
int GetRandomInteger( int min, int max );
double GetRandomDouble( double min, double max );
The first option has the benefit of the user not having to worry about which one they are calling. They can just call GetRandomNumber with integers or doubles and get a result.
The second option is more explicit in the names, but it reveals unneeded information to the caller.
I know this is petty, but I care about petty things.
Edit: How would C++ behave regarding implicit conversion?
Example:
GetRandomNumber( 1, 1 );
Could the arguments be implicitly converted so that the double version is called? Obviously I don't want that to occur. Will C++ prefer the int version over the double version?
I prefer your second example, it is explicit and leaves no ambiguity in interpretation. It is better to err on the side of being explicit in method names to clearly illuminate the purpose and function of that member.
The only downside to your approach is that you have coupled the method name to the return type, which is not ideal if you ever want to change the return type of one of these methods. However, in that case it would be better to add a new method and not break compatibility in your API anyway.
I prefer the second version. I like overloading a function when ultimately the two functions do the same thing; in this case they return different types so they're not quite the same thing. Another possibility if your language supports it is to define it as a generic/template method, like:
T GetRandom<T>(T min, T max);
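For what it's worth, a C++17 rendering of that generic suggestion might look like the sketch below (dispatching between integer and real distributions is one possible approach, not the only one):

#include <cstdio>
#include <random>
#include <type_traits>

// One template serves both int and double; the distribution is chosen
// from the argument type at compile time.
template <typename T>
T GetRandom(T min, T max) {
    static std::mt19937 gen{std::random_device{}()};
    if constexpr (std::is_integral_v<T>)
        return std::uniform_int_distribution<T>{min, max}(gen);
    else
        return std::uniform_real_distribution<T>{min, max}(gen);
}

int main() {
    std::printf("%d\n", GetRandom(1, 6));      // instantiates GetRandom<int>
    std::printf("%f\n", GetRandom(0.0, 1.0));  // instantiates GetRandom<double>
    return 0;
}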
A function name should tell what the function does; I do not see a point in cramming the types into the names. So definitely go for overloading - that's what it is for.
I prefer the overloaded method. Look at the Math.Min() method in .NET. It has 11 overloads: int, double, byte, etc.
I usually prefer the first example because it doesn't pollute the namespace. For example, when calling it, if I pass ints, I'm expecting to get back an int; if I pass in doubles, I probably expect to get back a double. Note that the compiler will happily accept
double d = GetRandomNumber(1, 10);
(it calls the int overload and implicitly converts the result to double), so it's not really a big issue. And you can always cast the arguments if you need an int but have doubles for input...
In some languages you cannot vary the return type of overloaded functions; that would require the second example.
Assuming C++, the second also avoids problems with ambiguity. If you said:
GetRandomNumber( 1, 5.0 );
which one should be used? In fact, it is a compilation error.
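A compilable sketch of the resolution behavior described in this thread (the bodies are trivial stand-ins, not real random-number generators):

#include <cstdio>

int GetRandomNumber(int min, int max) { return min + (max - min) / 2; }        // stand-in body
double GetRandomNumber(double min, double max) { return (min + max) / 2.0; }   // stand-in body

int main() {
    std::printf("%d\n", GetRandomNumber(1, 1));      // exact match: the int overload wins
    std::printf("%f\n", GetRandomNumber(1.0, 5.0));  // exact match: the double overload wins
    // GetRandomNumber(1, 5.0);  // does not compile: ambiguous, as noted above
    return 0;
}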
I think the ideal solution would be
Int32.GetRandom(int min, int max)
Double.GetRandom(double min, double max)
but, alas, static extension methods are not possible (yet?).
The .NET Framework seems to favor the first option (see the System.Math class):
public static decimal Abs(decimal value)
public static int Abs(int value)
Like Andrew, I would personally favor the second option to avoid ambiguity, although I do think this is a matter of taste.