What is NSParameterAssert? - iphone

What is NSParameterAssert?
Can anyone explain with example?

It is a simple way to test that a method's parameter is not nil (or, for scalar values, not 0). So basically, you use it to create a precondition, stating that some parameter must be set. If it is not, the macro logs an error identifying the failed condition, file, and line, and raises an exception that aborts the application. So:
- (void)someMethod:(id)someObjectThatMustNotBeNil
{
    // Make sure that someObjectThatMustNotBeNil is really not nil
    NSParameterAssert( someObjectThatMustNotBeNil );

    // Okay, now do things
}
Preconditions are a simple way to ensure that methods/APIs are being called correctly by the programmer. The idea is that if a programmer violates the precondition, the application terminates early--hopefully during debugging and basic testing.
NSParameterAssert can be used to test that any expression evaluates to true, however, so you can use it like this as well:
NSParameterAssert( index >= 0 ); // ensure no negative index is supplied
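For comparison: NSParameterAssert is an Objective-C macro, and Swift code would typically use precondition() instead. A minimal sketch of the same idea (the function name and message are illustrative):

func someMethod(someObject: AnyObject?) {
    // Traps at run time, reporting file and line, if the argument is nil
    precondition(someObject != nil, "someObject must not be nil")
    // Okay, now do things
}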
Apple's documentation for the NSParameterAssert() macro

Related

Is a while loop already implemented with pass-by-name parameters? : Scala

The Tour of Scala docs explain pass-by-name parameters using a whileLoop function as an example.
def whileLoop(condition: => Boolean)(body: => Unit): Unit =
  if (condition) {
    body
    whileLoop(condition)(body)
  }
var i = 2
whileLoop(i > 0) {
  println(i)
  i -= 1
} // prints 2 1
The section explains that if the condition is not met then the body is not evaluated, thus improving performance by not evaluating a body of code that isn't being used.
Does Scala's implementation of while already use pass-by-name parameters?
If there's a reason or specific cases where it's not possible for the implementation to use pass-by-name parameters, please explain to me, I haven't been able to find any information on it so far.
EDIT: As per Valy Dia's answer (https://stackoverflow.com/users/5826349/valy-dia), I would like to add another question...
Would a method implementation of the while statement perform better than the statement itself if it's possible not to evaluate the body at all for certain cases? If so, why use the while statement at all?
I will try to test this, but I'm new to Scala so it might take me some time. If someone would like to explain, that would be great.
Cheers!
The while statement is not a method, so the term by-name parameter is not really relevant... Having said that, the while statement has the following structure:
while (condition) {
  body
}
where the condition is repeatedly evaluated and the body is evaluated only when the condition holds, as these small examples show:
scala> while(false){ throw new Exception("Boom") }
// Does nothing
scala> while(true){ throw new Exception("Boom") }
// java.lang.Exception: Boom
scala> while(throw new Exception("boom")){ println("hello") }
// java.lang.Exception: boom
Would a method implementation of the while statement perform better than the statement itself if it's possible not to evaluate the body at all for certain cases?
No. The built-in while also does not evaluate the body at all unless it has to, and it is going to compile to much more efficient code (because it does not need to introduce the "thunks"/closures/lambdas/anonymous functions that are used to implement "pass-by-name" under the hood).
The example in the book was just showing how you could implement it with functions if there was no built-in while statement.
I assumed that they were also inferring that the while statement's body will be evaluated whether or not the condition was met
No, that would make the built-in while totally useless. That is not what they were driving at. They wanted to say that you can do this kind of thing with "call-by-name" (as opposed to "call-by-value", not as opposed to what the while loop does -- because the latter also works like that).
The main takeaway is that you can build something that looks like a control structure in Scala, because you have syntactic sugar like "call-by-name" and "last argument group taking a function can be called with a block".
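As an aside, the same shape can be built in any language with lightweight closures, not just with by-name parameters. Here is a rough Swift sketch of the whileLoop example, using explicit closures in place of by-name parameters (the names are illustrative):

// The condition is passed as an explicit closure instead of a by-name
// parameter; the trailing closure supplies the body.
func whileLoop(condition: () -> Bool, _ body: () -> ()) {
    if condition() {
        body()
        whileLoop(condition, body)
    }
}

var i = 2
whileLoop({ i > 0 }) {
    print(i)
    i -= 1
} // prints 2 1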

What does the '(signal_name)' notation mean in Specman?

I am new to Specman and trying to learn it by reading existing code.
I came across the following function and can't find an explanation in the Specman documentation...
VerifyNode(end_point: string, derived_value: uint) is also {
    if ('(end_point)' === ~derived_value) {
        message("Error1");
    }
    else if ('(end_point)' === derived_value) {
        message("Error2");
    }
    else {
        message("Error3");
    }
}
I assume that '(end_point)' gets the actual value of the end_point signal at run time. Is that true?
Error1 will be the message if the run-time value of end_point is the bitwise negation (~) of the derived_value unsigned integer.
Error2 will be the message if the run-time value of end_point is the same as the derived_value unsigned integer.
How can I explain the "Error3" condition?
Indeed, this is the very old school of string-based signal access. I recommend you search for "simple_port" instead.
end_point is a string that should hold the full HDL path of the signal you want to read. Example of calling this method:
// check if signal has the value 3:
VerifyNode("top.module_a.sig_val", 3);
The official name of the '' construct is the tick notation. Search for the term "tick notation" in cdnshelp and you will find it.
'(end_point)' is two things. It is old-school, non-port access to the device under test (DUT), and the (end_point) part of the snippet does string substitution directly inside the '' access to the DUT. This old style of DUT access can drastically slow down simulation if your design is big enough for that to be a concern. It also lacks static compile-time checks: the strings can be anything, and you'll only find out that a path is wrong when simulation runs and the DUT path cannot be found, producing an error.

Optionals vs Throwing functions

Consider the following lookup function that I wrote, which uses optionals and optional binding, and reports a message if the key is not found in the dictionary:
func lookUp<T: Equatable>(key: T, dictionary: [T: T]) -> T? {
    for i in dictionary.keys {
        if i == key {
            return dictionary[i]
        }
    }
    return nil
}
let dict = ["JO": "Jordan",
            "UAE": "United Arab Emirates",
            "USA": "United States Of America"]
if let a = lookUp("JO", dictionary: dict) {
    print(a) // prints Jordan
} else {
    print("cant find value")
}
I then rewrote the code using error handling and a guard statement, removing the -> T? return type and adding an enum that conforms to ErrorType:
enum lookUpErrors: ErrorType {
    case noSuchKeyInDictionary
}

func lookUpThrows<T: Equatable>(key: T, dic: [T: T]) throws {
    for i in dic.keys {
        guard i == key else {
            throw lookUpErrors.noSuchKeyInDictionary
        }
        print(dic[i]!)
    }
}
do {
    try lookUpThrows("UAE", dic: dict) // prints United Arab Emirates
}
catch lookUpErrors.noSuchKeyInDictionary {
    print("cant find value")
}
Both functions work well, but:
which function grants better performance?
which function is "safer"?
which function is recommended (based on pros and cons)?
Performance
The two approaches should have comparable performance. Under the hood they are both doing very similar things: returning a value plus a flag that is checked, and proceeding only if the flag shows the result is valid. With optionals, that flag is the enum (.None vs .Some); with throws, it's an implicit flag that triggers the jump to the catch block.
It's worth noting your two functions don't do the same thing (one returns nil if no key matches, the other throws if the first key doesn’t match).
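To make that flag concrete: Swift's Optional is itself just an enum. A simplified sketch (not the actual standard-library source, which adds literal conversion, map, and more):

// A miss is represented by .None, a hit by .Some with a payload.
enum MyOptional<T> {
    case None
    case Some(T)
}

let hit: MyOptional<String> = .Some("Jordan")
let miss: MyOptional<String> = .None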
If performance is critical, then you can write this to run much faster by eliminating the unnecessary key-subscript lookup like so:
func lookUp<T: Equatable>(key: T, dictionary: [T: T]) -> T? {
    for (k, v) in dictionary where k == key {
        return v
    }
    return nil
}
and
func lookUpThrows<T: Equatable>(key: T, dictionary: [T: T]) throws -> T {
    for (k, v) in dictionary where k == key {
        return v
    }
    throw lookUpErrors.noSuchKeyInDictionary
}
If you benchmark both of these with valid values in a tight loop, they perform identically. If you benchmark them with invalid values, the optional version runs about twice as fast, so actually throwing presumably carries a small overhead. But it's probably not noticeable unless you really are calling this function in a very tight loop and anticipating a lot of failures.
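For reference, here is a rough way to reproduce that kind of measurement yourself; the iteration count and NSDate-based timing are illustrative, not the original benchmark:

import Foundation // for NSDate

// Time one million failing lookups through the optional version.
let start = NSDate()
for _ in 0..<1_000_000 {
    _ = lookUp("XX", dictionary: dict) // "XX" is not a key: failure path
}
print("optional: \(NSDate().timeIntervalSinceDate(start))s")

// The same workload through the throwing version.
let start2 = NSDate()
for _ in 0..<1_000_000 {
    do { _ = try lookUpThrows("XX", dictionary: dict) } catch {}
}
print("throws:   \(NSDate().timeIntervalSinceDate(start2))s")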
Which is safer?
They're both identically safe. In neither case can you call the function and then accidentally use an invalid result. The compiler forces you to either unwrap the optional, or catch the error.
In both cases, you can bypass the safety checks:
// force-unwrap the optional
let name = lookUp("JO", dictionary: dict)!

// force-ignore the throw
let name = try! lookUpThrows("JO", dictionary: dict)
It really comes down to which style of forcing the caller to handle possible failure is preferable.
Which function is recommended?
While this is more subjective, I think the answer’s pretty clear. You should use the optional one and not the throwing one.
For language style guidance, we need only look at the standard library. Dictionary already has a key-based lookup (which this function duplicates), and it returns an optional.
The big reason optional is a better choice is that in this function, there is only one thing that can go wrong. When nil is returned, it is for one reason only and that is that the key is not present in the dictionary. There is no circumstance where the function needs to indicate which reason of several that it threw, and the reason nil is returned should be completely obvious to the caller.
If on the other hand there were multiple reasons, and maybe the function needs to return an explanation (for example, a function that does a network call, that might fail because of network failure, or because of corrupt data), then an error classifying the failure and maybe including some error text would be a better choice.
The other reason optional is better in this case is that failure might even be expected/common. Errors are more for unusual/unexpected failures. The benefit of returning an optional is that it's very easy to use other optional features to handle it: for example, optional chaining (lookUp("JO", dictionary: dict)?.uppercaseString) or defaulting via nil-coalescing (lookUp("JO", dictionary: dict) ?? "Team not found"). By contrast, try/catch is a bit of a pain to set up and use, unless the caller really wants "exceptional" error handling, i.e. is going to do a bunch of stuff, some of which can fail, but wants to collect that failure handling down at the bottom.
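Put together, the call sites look like this (a small sketch using the dict from the question):

// Optional chaining: evaluates to nil if the key is missing.
let upper = lookUp("JO", dictionary: dict)?.uppercaseString // "JORDAN"

// Nil-coalescing: substitutes a default on a miss.
let team = lookUp("XX", dictionary: dict) ?? "Team not found" // "Team not found"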
@AirspeedVelocity already has a great answer, but I think it's worth going a bit further into why to use optionals vs. errors.
There are basically four ways for something to go wrong:
Simple error: it fails in only one way, so you don't need to care about why something went wrong. The problem may come either from programmer logic or user data, so you need to be able to handle it at run time and design around it when coding.
This is the case for things like initializing an Int from a String (either the string is parseable as an integer or it's not) or dictionary-style lookups (either there's a value for the key or there's not). Optionals work really well for this in Swift.
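Both of those examples already read naturally as optionals in Swift (dict is the dictionary from the question):

let n = Int("42")          // Optional(42)
let bad = Int("forty-two") // nil: the string isn't parseable
let team = dict["JO"]      // Optional("Jordan")
let none = dict["XX"]      // nil: no value for that key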
Logic error: this is the kind of error that (in theory) comes up only during development, as a result of Doing It Wrong — for example, indexing beyond the bounds of an array.
In ObjC, NSException covers these kinds of cases. In Swift, we have functions like fatalError. I'd assume that part of why NSException isn't surfaced in Swift is that once your program encounters a logic error, it's not really safe to assume anything about its further operation. Logic errors should either be caught during development or cause a (nicely debuggable) crash rather than letting the program continue in an undefined (and thus unsafe) state.
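A hedged sketch of that policy; the Stack type here is invented for illustration:

struct Stack<Element> {
    private var items: [Element] = []
    mutating func push(item: Element) { items.append(item) }
    mutating func pop() -> Element {
        // Misuse by the calling programmer: trap loudly rather than
        // returning an optional or throwing.
        precondition(!items.isEmpty, "pop() on an empty Stack")
        return items.removeLast()
    }
}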
Universal error: there are loads of ways to fail, but they aren't very connected to programmer logic or user action. You might run out of memory to allocate, get a low-level interrupt, or (wait for it...) overflow the stack, but those can happen with almost anything you do and not really because of any specific thing you do.
You see universal errors getting surfaced as exceptions in some other languages, but that means that you have to code around the possibility of any and every call you make being able to fail. And at that point you're writing more error handling than you are actual code.
Recoverable error: This is for when there are lots of ways to go wrong, but not in ways that preclude further operation, and what a program does upon encountering an error might change depending on what kind of error it is. Filesystems and networking are the common examples here: if you can't load a file, it might be because the user got the name wrong (so you should tell the user that) or because the wifi momentarily dropped and will be back shortly (so you might forego the alert and just try again).
In Cocoa, historically, this is what NSError parameters are for. Swift's error handling makes this pattern part of the language.
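A sketch of that pattern in Swift 2; the file-loading scenario and all names are invented for illustration:

enum FileLoadError: ErrorType {
    case NotFound(name: String) // the user can fix this: tell them
    case NetworkDown            // transient: maybe retry quietly
}

func loadFile(name: String) throws -> String {
    // Illustrative only; a real implementation would do I/O here.
    throw FileLoadError.NotFound(name: name)
}

do {
    let contents = try loadFile("scores.txt")
    print(contents)
} catch FileLoadError.NotFound(let name) {
    print("No file named \(name) -- check the spelling?")
} catch FileLoadError.NetworkDown {
    print("Network hiccup; trying again shortly")
} catch {
    print("Unexpected error: \(error)")
}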
So, when you're writing new API (for yourself or someone else to call) in Swift, or using new ObjC annotations to make an existing API easier to use from Swift, think about what kind of errors you're dealing with.
Is there only one clear way to fail that isn't a result of API misuse? Use an Optional return type.
Can something fail only if a client doesn't follow your API contract — say, if you're writing a container class that has a subscript and a count, or requiring that some specific sequence of calls be made? Don't burden every bit of code that uses your API with error handling or optional unwrapping — just fatalError or assert (or throw NSException if your Swift API is a front to ObjC code) and document what the right way is for people to use your API.
Okay, so your ObjC init method returns nil iff [super init] returns nil. So should you mark your initializer as failable for Swift, or add an error out-parameter? Think about when that really happens: if -[NSObject init] is returning nil, it's because you chained it off an alloc call that returned nil. If alloc fails, it's already The End Times for your process, so it's not worth handling that case.
Do you have multiple failure cases, some or all of which might be worth reporting to a user? Or that a client calling your API might want to ignore some but not all of? Write a Swift function that throws and a corresponding set of ErrorType values, or an ObjC method that returns an NSError out-parameter.
If you are using Swift 2.0 you could use either version. The second one uses try/catch (introduced with 2.0) and is thus not backwards compatible, which might be a disadvantage to consider.
If there is any performance difference, it will be negligible.
My personal favorite is the first one, as it is straightforward and readable. If I had to maintain the second version, I would ask myself why the author chose a try/catch approach for such a simple case, and would just end up confused...
If you had many complex conditions with many exit points (throws) then I would go for the second one. But as I said, this is not the case here.

Breaking when a method returns null in the Eclipse debugger

I'm working on an expression evaluator. There is an evaluate() function which is called many times depending on the complexity of the expression processed.
I need to break and investigate when this method returns null. There are many paths and return statements.
It is possible to break on a method-exit event, but I can't find how to put a condition on the returned value.
I got stuck on that frustration too. One can inspect (and write conditions on) named variables, but not on something unnamed like a return value. Here are some ideas, for whoever might be interested:
One could include something like evaluate() == null in the breakpoint's condition. Tests performed (Eclipse 4.4) show that in such a case the function will be executed again for the breakpoint's purposes, but this time with the breakpoint disabled, so you will at least avoid a stack-overflow situation. Whether this is useful depends on the nature of the function under consideration: will it return the same value at breakpoint time as at run time? (Some sample code to test:)
class TestBreakpoint {
    int counter = 0;
    boolean eval() { /* <== breakpoint here, [x]on exit, [x]condition: eval()==false */
        System.out.println("Iteration " + ++counter);
        return true;
    }
    public static void main(String[] args) {
        TestBreakpoint app = new TestBreakpoint();
        System.out.println("STARTED");
        app.eval();
        System.out.println("STOPPED");
    }
}
// RESULTS:
// Normal run: shows 1 iteration of eval()
// Debug run: shows 2 iterations of eval(), no stack overflow, no stop on breakpoint
Another way to make it easier (to potentially do debugging in future) would be to have coding conventions (or personal coding style) that require one to declare a local variable that is set inside the function, and returned only once at the end. E.g.:
public MyType evaluate() {
    MyType result = null;
    if (conditionA) result = new MyType('A');
    else if (conditionB) result = new MyType('B');
    return result;
}
Then you can at least set an exit breakpoint with a condition like result == null. However, I agree that this is unnecessarily verbose for simple functions, is a bit contrary to the flow the language allows, and can only be enforced manually. (Personally, I do use this convention sometimes for more complex functions, with the name result 'reserved' just for this use, where it may make things clearer, but not for simple functions. It's difficult to draw the line, though; just this morning I had to step through a simple function to see which of 3 possible cases fired. For today's complex systems, one wants to avoid stepping.)
Barring the above, you would need to modify your code case by case, as in the previous point, so that the function assigns its return value to a variable you can test. If some work policy disallows such non-functional changes, you are quite stuck... It is of course also possible that such a rewrite inadvertently resolves the bug, if the original code was a bit convoluted; so beware of reverting to the original after debugging, only to find that the bug is back.
You didn't say which language you are working in. If it's Java or C++, you can set a condition on a Method (or Function) breakpoint using the breakpoint properties. The images below show both cases.
In the Java example you would uncheck Entry and put a check in Exit.
[Image: Java Method Breakpoint Properties dialog]
[Image: C++ Function Breakpoint Properties dialog]
This is not yet supported by the Eclipse debugger; it has been filed as an enhancement request, and I'd appreciate it if you vote for it:
https://bugs.eclipse.org/bugs/show_bug.cgi?id=425744

iPhone SDK: Please explain why "nil ==" is different than "== nil"

I have been programming with the iPhone SDK for around 6 months, but am a bit confused about a couple of things... one of which I am asking here:
Why are the following considered different?
if (varOrObject == nil)
{
}
vs
if (nil == varOrObject)
{
}
Coming from a Perl background, this is confusing to me...
Can someone please explain why one of the two (the second) would be true whereas the first would not, if the two statements are placed one after the other in code? varOrObject would not have changed between the two if statements.
There is no specific code where this is happening; I have just read in a lot of places that the two statements are different, but not why.
Thanks in advance.
They are the same. What you have read is probably about mistakenly writing = instead of ==: the first form will then assign nil to the variable and the if condition will be false, while the second will be a compile-time error:
if (variable = nil) // assigns nil to variable, not an error.
if (nil = variable) // compile time error
The latter gives you the chance to correct the mistake at compile time.
They might be different if you are using Objective-C++ and overriding the == operator for the object.
They are exactly the same, it is just a style difference. One would never be true if the other is false.
If it's a recommendation that you use the latter, it's because an accidental = instead of == is caught at compile time with the second form. Otherwise, they are identical and your book is wrong.
They can only be different if the == operator is overloaded.
Say someone redefines == for a list so that the contents of two lists are compared rather than their memory addresses (like equals() in Java). The same person could then, in theory, define emptyList == NULL to be true.
Now (emptyList == NULL) would be true, but (NULL == emptyList) would still be false.