Is there any equivalent of checking for thrown exceptions in Swift unit tests?
For example, I have a class:
class Square: NSObject {
    let sideLength: Int

    init(sideLength: Int) {
        assert(sideLength >= 0, "Wrong initialization of Square class with below zero side length")
        self.sideLength = sideLength
        super.init()
    }
}
and a test to check that it works. In Objective-C I can write a test method like this:
- (void)testInitializationWithWrongSideLengthThrowsExceptions {
    XCTAssertThrows([[Square alloc] initWithSideLength:-50], @"Should throw an exception on initialization with a wrong side length");
}
What is the equivalent technique in Swift?
If you add the following three files to your tests:
// ThrowsToBool.h
#import <Foundation/Foundation.h>

/// A 'pure' closure; has no arguments, returns nothing.
typedef void (^VoidBlock)(void);

/// Returns: true if the block throws an `NSException`, otherwise false
BOOL throwsToBool(VoidBlock block);
// ThrowsToBool.m
#import "ThrowsToBool.h"

BOOL throwsToBool(VoidBlock const block) {
    @try {
        block();
    }
    @catch (NSException * const notUsed) {
        return YES;
    }
    return NO;
}
// xxxTests-Bridging-Header.h
#import "ThrowsToBool.h"
Then you can write:
XCTAssert(throwsToBool {
    // test code that throws an NSException
})
But it doesn't work for assert or precondition :(
PS I got the idea from: http://modocache.io/xctest-the-good-parts
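For illustration, here is a minimal usage sketch, assuming the bridging header above is in place (on newer SDKs the exception name parameter is an NSExceptionName rather than a plain string):
func testThrowsToBoolDetectsNSException() {
    XCTAssert(throwsToBool {
        // Raising any NSException inside the block should be detected
        NSException(name: "TestException", reason: "example", userInfo: nil).raise()
    }, "throwsToBool should return true when the block raises")
}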
I think the assert() function should only be used for debugging purposes, not least because of the following statement from Apple's Swift book (https://itun.es/de/jEUH0.l):
"Assertions cause your app to terminate and are not a substitute for designing your code in such a way that invalid conditions are unlikely to arise."
That's why I would solve it as follows:
import Cocoa
import XCTest

class Square {
    let sideLength: Int

    init(_ sideLength: Int) {
        self.sideLength = sideLength >= 0 ? sideLength : 0
    }
}
class SquareTests: XCTestCase {
    override func setUp() { super.setUp() }
    override func tearDown() { super.tearDown() }

    func testMySquareSideLength() {
        let square1 = Square(1)
        XCTAssert(square1.sideLength == 1, "Side length should be 1")

        let square2 = Square(-1)
        XCTAssert(square2.sideLength >= 0, "Side length should not be negative")
    }
}
let tester = SquareTests()
tester.testMySquareSideLength()
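A failable initializer is another way to honor Apple's advice, rejecting invalid input instead of silently clamping it. This is my own sketch, not part of the answer above:
class Square {
    let sideLength: Int

    // Returns nil instead of trapping or clamping on invalid input
    init?(sideLength: Int) {
        if sideLength < 0 { return nil }
        self.sideLength = sideLength
    }
}
A test can then simply assert XCTAssertNil(Square(sideLength: -1)).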
There is no equivalent to XCTAssertThrows in Swift. For now you can't do it with a native function, but there are solutions with some Objective-C help. You can use Quick, or just Nimble. Or you can make your own assert function; see "Potential Improvement #2: Add XCTAssertThrows to Swift" in this article: http://modocache.io/xctest-the-good-parts
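For example, the Nimble route could look like the sketch below, assuming Nimble is linked into the test target. Its throwAssertion() matcher (backed by CwlPreconditionTesting) can trap assert and precondition failures on the simulator and macOS, though not on all device architectures:
import XCTest
import Nimble

class SquareAssertionTests: XCTestCase {
    func testNegativeSideLengthTrapsAssertion() {
        // throwAssertion() intercepts the trap raised by assert()/precondition()
        expect { () -> Void in
            _ = Square(sideLength: -1)
        }.to(throwAssertion())
    }
}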
I think the best way is to add a bridge. Please look at https://gist.github.com/akolov/8894408226dab48d3021; it works for me.
The correct way in Swift 2, with an example error type standing in for your own:
enum SquareError: ErrorType { // example error type; substitute your own
    case NegativeSideLength
}

class Square: NSObject {
    let sideLength: Int

    init(sideLength: Int) throws { // throwing initializer
        guard sideLength >= 0 else { // use a guard statement
            throw SquareError.NegativeSideLength
        }
        self.sideLength = sideLength
        super.init()
    }
}
and testing:
func testInitThrows() {
    do {
        _ = try Square(sideLength: -1) // or without the parameter name, depending on the Swift version
        XCTFail("Initialization with a negative side length should throw")
    } catch SquareError.NegativeSideLength {
        // expected: the custom error was thrown
    } catch {
        XCTFail("Unexpected error: \(error)")
    }
}
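On recent Xcode versions the same check can be written more compactly with XCTAssertThrowsError, which fails the test if the expression does not throw:
func testInitThrows() {
    XCTAssertThrowsError(try Square(sideLength: -1)) { error in
        // optionally inspect the concrete error value here
    }
}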
I have a class with an extension that overrides equality checking, but the equality check crashes in my test.
Here's my code:
extension LPItemAction {
    public override func isEqual(_ other: Any?) -> Bool {
        if (other as? LPItemAction) == self {
            return true
        } else if !(other is LPItemAction) {
            return false
        } else {
            let otherAction = other as? LPItemAction
            return hash == otherAction?.hash
        }
    }
}
and my test case is like this:
func testIsEqualSelf() {
    // Given
    let action = LPItemAction()
    action.type = .account

    // When
    let equal = action.isEqual(action)

    // Then
    XCTAssertTrue(equal)
}
I get a crash with the error: Thread 1: EXC_BAD_ACCESS (code=2, address=0x16e747fc0)
Since this is an NSObject subclass, you are probably right to override isEqual. There are some rules, though.
You cannot use == inside the override. On NSObject that operator is implemented in terms of isEqual, so your first line calls your own override again and you end up with infinite recursion; the resulting stack overflow is your EXC_BAD_ACCESS.
Another problem: using hash to decide equality is wrong. Equal objects must have equal hashes, but equal hashes do not guarantee equal objects. Compare the relevant properties instead:
extension LPItemAction {
    public override func isEqual(_ other: Any?) -> Bool {
        // test the type first
        guard let otherAction = other as? LPItemAction else {
            return false
        }
        // test reference equality; === compares pointers and never recurses into isEqual
        if self === otherAction {
            return true
        }
        // test equality of all relevant properties
        return type == otherAction.type
    }
}
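One more part of the NSObject contract worth honoring: objects that compare equal must return the same hash, so an isEqual override is usually paired with a hash override. A minimal sketch, assuming type is the only property that matters and is Hashable:
extension LPItemAction {
    public override var hash: Int {
        var hasher = Hasher()
        hasher.combine(type) // combine exactly the properties compared in isEqual
        return hasher.finalize()
    }
}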
Let's say we have a class:
class Test: NSObject {
    let batman: String
    let spiderman: Int
    let superman: NSObject

    init(batman: String, spiderman: Int, superman: NSObject) {
        self.batman = batman
        self.spiderman = spiderman
        self.superman = superman
    }
}
And a generic method for initialization:
func resolve<T: NSObject>(args: Any...) throws -> T {
    // if let object = initWithReflectionSomeHow(T.className, arg1, arg2...) {
    //     return object
    // } else {
    //     throw exception
    // }
}
I found a way to do this without parameters:
func resolve<T: NSObject>() throws -> T {
    if let objectClass = NSClassFromString(T.className) as? T.Type {
        let object = objectClass.init()
        return object
    } else {
        // throw classNotFoundException
    }
}
So I would just call:
let obj = try resolve() as Test
but I'm not sure how I can inject parameters. Is it even possible?
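One way to sidestep the runtime entirely is to require the initializer through a protocol, so the compiler can call it generically. This is a hedged sketch; ArgumentInitializable is my own invented name, not an existing API:
protocol ArgumentInitializable {
    init(arguments: [String: Any])
}

func resolve<T: NSObject>(arguments: [String: Any]) -> T where T: ArgumentInitializable {
    return T(arguments: arguments)
}

// Conformance has to live in the class declaration itself, because a
// required initializer cannot be added in an extension of a non-final class.
class Resolved: NSObject, ArgumentInitializable {
    let batman: String

    required init(arguments: [String: Any]) {
        self.batman = arguments["batman"] as? String ?? ""
        super.init()
    }
}

// Usage: let obj: Resolved = resolve(arguments: ["batman": "Bruce"])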
I am trying to use reflection on a Swift class (someClass) to invoke an init method that takes one argument (someArg). I managed to get the selector and IMP of the init that takes one argument, but when I invoke the IMP it ends up calling the init with no arguments: in the playground below I always get "called the wrong init" printed.
If I remove the override init I get the following error:
fatal error: use of unimplemented initializer 'init()' for class '__lldb_expr_15.someClass'
What am I missing?
import UIKit

public class someClass: NSObject {
    init(num: someArg) {
        print("called the right init")
    }

    override init() {
        print("called the wrong init")
    }
}

public class someArg: NSObject {
    override init() {}
}
public class Test {
    func reflect() {
        let classType: NSObject.Type = someClass.self as NSObject.Type
        let (initializerWithOneArgImp, selector) = getInitializerWithArguments(classType, argumentsCount: 1)
        typealias initializerWithOneArgImpType = @convention(c) (AnyObject, Selector, AnyObject) -> (AnyObject)
        let callback = unsafeBitCast(initializerWithOneArgImp, initializerWithOneArgImpType.self)
        callback(classType, selector, someArg())
    }
    func getInitializerWithArguments(classType: AnyClass, argumentsCount: Int) -> (IMP, Selector) {
        var methodCount: CUnsignedInt = 0
        let methodList = class_copyMethodList(classType.self, &methodCount)
        let n: Int = Int(methodCount)
        for var i: Int = 0; i < n; i++ {
            let methodSelector = method_getName(methodList[i])
            let methodName: String = String(_sel: methodSelector)
            if methodName == "init" {
                let methodArgumentsCount = method_getNumberOfArguments(methodList[i])
                if methodArgumentsCount == UInt32(argumentsCount) + 1 {
                    return (method_getImplementation(methodList[i]), methodSelector)
                }
            }
        }
        return (nil, nil)
    }
}
var test = Test()
test.reflect()
It turns out the unparameterized init has two arguments by default (every Objective-C method receives the implicit self and _cmd arguments), and the parameterized init shows up under the selector "initWithNum:" rather than "init". So the lookup should be:
if methodName.hasPrefix("init") {
    let methodArgumentsCount = method_getNumberOfArguments(methodList[i])
    // + 2 accounts for the implicit self and _cmd arguments
    if methodArgumentsCount == UInt32(argumentsCount) + 2 {
        return (method_getImplementation(methodList[i]), methodSelector)
    }
}
I'm trying to call a func from within my class and I keep getting an error saying: Missing parameter for argument #1. I've read a few posts saying it's an instance vs. class problem, but I don't get it: I'm calling the method from within the class, so there has to be an instance of the class if the method is being called, right? Here is my code. Thanks.
import Foundation
import Parse

class TestViewController {
    let photos = getWallImages() // ----- This is the line requesting an argument

    func getWallImages() -> [WallPost] {
        let query = WallPost.query()!
        query.findObjectsInBackgroundWithBlock { objects, error in
            if error == nil {
                if let objects = objects as? [WallPost] {
                    return objects
                    println("We have \(objects.count)")
                }
            } else if let error = error {
                println(error)
            }
        }
    }
}
So the "crime" you are committing is the fact that the method is applied in an instance of the class and not as a class method. The function is expecting a self parameter (a reference to the instance). That explains the error message.
Now to fix that you have two quick options:
1. Make it a class function and call it that way too:
class TestViewController {
    let photos = TestViewController.getWallImages()

    class func getWallImages() -> [WallPost] {
        // mumbo jumbo
    }
}
This approach is problematic if you ever want to do instance-specific work, because a class func is a static method and gives you no access to instance state.
2. Instantiate the object you are calling the method on:
class TestViewController {
    let photos = TestViewController().getWallImages()

    func getWallImages() -> [WallPost] {
        // mumbo jumbo
    }
}
This approach isn't right for your structure, since it makes no sense to instantiate another view controller just to call the method; but if you move the method into a separate class, it could make sense.
Beyond that, there are several other ways to make your code work: you could make the property lazy, or set it in the init method. Whatever suits you best; my answer simply explains where you went wrong.
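For instance, the lazy option mentioned above could look like this sketch (WallPost is stubbed here; in the question it is a Parse model class):
class WallPost {}

class TestViewController {
    // Computed once, on first access; by then self exists,
    // so calling an instance method is legal
    lazy var photos: [WallPost] = self.getWallImages()

    func getWallImages() -> [WallPost] {
        return [] // synchronous placeholder; the real query is asynchronous
    }
}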
There are a few ways you can set your property appropriately. You can make getWallImages() a type method:
class TestViewController {
    let photos = TestViewController.getWallImages()

    class func getWallImages() -> [WallPost] {
        // ...
    }
}
Or, you can keep your method an instance method and set your property upon initialization:
class TestViewController {
    var photos: [WallPost]!

    init() {
        // photos defaults to nil, so self is fully initialized here
        // and calling an instance method is allowed
        photos = getWallImages()
    }

    func getWallImages() -> [WallPost] {
        // ...
    }
}
If you're asking a question, you should reduce your code to a minimum and discard unnecessary details. You probably want something like this:
class MyClass {
    let x = MyClass.getStuff()

    static func getStuff() -> Int {
        return 0
    }
}
However, your method getWallImages() can't work like this, because it returns its result asynchronously: the result arrives long after the function itself has returned.
You could do something like the following instead (this is how I'd do it):
class MyClass {
    var x: Int? {
        didSet {
            if let x = x {
                // Do something with x; here it's assigned
            } else {
                // x was set to nil, something failed
            }
        }
    }

    init() {
        getStuffAsynchronous()
    }

    func getStuffAsynchronous() {
        // Do your query stuff here; assign x to your objects to invoke the didSet
        x = 0

        // If it fails somehow, set x to nil:
        // if fail {
        //     x = nil
        // }
    }
}
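Another common route for asynchronous results is a completion handler instead of a property observer. A sketch, reusing the Parse call from the question; WallPostService is my own name for the wrapper:
class WallPostService {
    // Hands the posts back whenever the background query finishes,
    // instead of trying to return them from the enclosing function
    func getWallImages(completion: [WallPost] -> Void) {
        let query = WallPost.query()!
        query.findObjectsInBackgroundWithBlock { objects, error in
            if let objects = objects as? [WallPost] where error == nil {
                completion(objects)
            } else {
                completion([])
            }
        }
    }
}

// Usage:
// WallPostService().getWallImages { posts in
//     println("We have \(posts.count)")
// }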
I'm trying to understand where the code below goes wrong.
I'm trying to locate a UIViewController of a specific class in the UITabBarController's viewControllers property, which is declared as:
var viewControllers: [AnyObject]?
So I define two UIViewController subclasses, stuff them into the viewControllers array, and run two different methods, viewControllerInfo and viewControllerInfo2, to extract them. Both yield the same result.
My understanding is that if let viewController = x as? T should evaluate true and bind viewController as type T only when x actually is of that class, just as x is T would.
Any idea why the evaluation behaves like this?
class VC1: UIViewController {}
class VC2: UIViewController {}
let tabBar = UITabBarController()
tabBar.viewControllers = [VC1(), VC2()]
extension UITabBarController {
    public func viewControllerInfo<T: UIViewController>(ofType: T.Type) -> (viewController: T, index: Int)? {
        if let tabViewControllers = self.viewControllers {
            for (idx, maybeVCTypeT) in enumerate(tabViewControllers) {
                if let viewController = maybeVCTypeT as? T {
                    return (viewController, idx)
                }
            }
        }
        return nil
    }

    public func viewControllerInfo2<T: UIViewController>(ofType: T.Type) -> (viewController: T, index: Int)? {
        if let tabViewControllers = self.viewControllers {
            for (idx, maybeVCTypeT) in enumerate(tabViewControllers) {
                if maybeVCTypeT is T {
                    return (maybeVCTypeT as! T, idx)
                }
            }
        }
        return nil
    }
}
All the tests below will end up giving exactly the same result:
"<__lldb_expr_89.VC1: 0x7f85016079f0>"
if let (vc, idx) = tabBar.viewControllerInfo(VC1.self) {
    println(vc)
}

if let (vc, idx) = tabBar.viewControllerInfo(VC2.self) {
    println(vc)
}

if let (vc, idx) = tabBar.viewControllerInfo2(VC1.self) {
    println(vc)
}

if let (vc, idx) = tabBar.viewControllerInfo2(VC2.self) {
    println(vc)
}
I'm suspicious of the enumerate(x) call, since without it I get the expected results:
if testVC1 is VC2 {
    println("\(testVC1) is \(VC2.self)")
}
The above code yields a warning:
Cast from 'VC1' to unrelated type 'VC2' always fails
which is exactly the discrimination I'm trying to achieve with enumerate...
EDIT:
Actually, my suspicion of enumerate dissolved after the following code ran perfectly:
let myArray: [AnyObject] = [VC2(), VC1()]

for (idx, x) in enumerate(myArray) {
    println(x)
    if let xAsVC1 = x as? VC1 {
        println("\(x) is supposed to be \(VC1.self)")
        // "<__lldb_expr_155.VC1: 0x7fc12a9012f0> is supposed to be __lldb_expr_155.VC1"
    }
    if x is VC2 {
        println("\(x) is supposed to be \(VC2.self)")
        // "<__lldb_expr_155.VC2: 0x7fc12a900fd0> is supposed to be __lldb_expr_155.VC2"
    }
}
This seems to be caused by the generic constraint, and I believe it is a bug (http://www.openradar.me/22218124). Removing the generic constraint, or constraining to a non-Objective-C type (such as AnyObject), seems to fix it:
public func viewControllerInfo<T>(ofType: T.Type) -> (viewController: T, index: Int)? {
You can also replace:
if maybeVCTypeT is T {
with:
if maybeVCTypeT.isKindOfClass(T.self) {
And that seems to work.
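Putting the workaround together, the whole lookup without the problematic constraint might look like this sketch (still in the question's Swift 1 style):
extension UITabBarController {
    // No UIViewController constraint on T, which sidesteps the miscast above
    public func viewControllerInfo<T>(ofType: T.Type) -> (viewController: T, index: Int)? {
        if let tabViewControllers = self.viewControllers {
            for (idx, candidate) in enumerate(tabViewControllers) {
                if let viewController = candidate as? T {
                    return (viewController, idx)
                }
            }
        }
        return nil
    }
}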