How to set a starting point for XCTest in Xcode (Swift)

I'm trying to get used to using XCTest in Xcode,
but the company app under test has to receive a token, user info, and other data from the server before its functions can be called.
So it is quite hard to add unit tests, because the unit test code executes before the token arrives from the server.
Is it possible to set the test code's starting point within the sources under test, i.e. to begin running the XCTest sources only after a specific view controller appears?
I found a workaround using "DispatchQueue.global().asyncAfter", but I don't think that is a proper solution.
Thanks.

I think the solution to your problem is called "dependency injection". You should google it. It is used in all object-oriented languages and has nothing to do with Swift or Xcode in particular.
Basically, it means that you inject the classes you don't want to test - in your case, some kind of network interface - into the class that you do want to test. In your production code you inject the actual classes that communicate with your server, and in your unit tests you instead inject fake/stub/mock objects that pretend to behave like the real class but only do whatever is needed to run the test. For example, when the class under test requests a token from the server, the fake network interface simply returns a hard-coded token value instead of actually making the request. This is the proper way to write unit tests. However, if writing unit tests was not taken into account when the architecture of your code was designed, a major refactoring of your code will usually be inevitable.
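To make that concrete, here is a minimal sketch of the pattern (in C#, since the idea is language-agnostic - in Swift you would use a protocol and constructor injection the same way; all the names below are invented for illustration, not taken from your code):

// Invented names for illustration - this is the shape, not your actual code.
public interface ITokenProvider
{
    string FetchToken();
}

// Production implementation: talks to the real server.
public class ServerTokenProvider : ITokenProvider
{
    public string FetchToken()
    {
        // ... perform the real network request here ...
        return "token-from-server";
    }
}

// Test double: no network, returns a hard-coded token immediately.
public class FakeTokenProvider : ITokenProvider
{
    public string FetchToken() => "hard-coded-test-token";
}

// The class under test receives its collaborator from outside
// instead of creating it itself - that is the "injection".
public class SessionManager
{
    private readonly ITokenProvider _tokenProvider;

    public SessionManager(ITokenProvider tokenProvider)
    {
        _tokenProvider = tokenProvider;
    }

    public bool StartSession()
    {
        var token = _tokenProvider.FetchToken();
        return !string.IsNullOrEmpty(token);
    }
}

In production you build new SessionManager(new ServerTokenProvider()); in a unit test you build new SessionManager(new FakeTokenProvider()), and the test never has to wait for the server.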
You may find these links useful:
https://www.tutorialsteacher.com/ioc/dependency-injection
What is dependency injection?
https://www.avanderlee.com/swift/dependency-injection/

Related

Where to write tests for a Frontend/Backend application?

I want to write a web application with a simple Frontend-Backend (REST API) architecture.
It's not clear to me where and how to write tests.
Frontend: should I write tests mocking API responses and testing only UX/UI?
Backend: should I write here API call testing and eventually more fine grained unit testing on classes?
But this way I'm afraid that the Frontend tests are not aware of the real API responses (because they mock them independently of the backend).
On the other hand, if I don't mock API responses and use real responses from the backend, how can the Frontend client prepare the DB to get the data it wants?
It seems to me that I need three kinds of tests:
- UX/UI testing: the Frontend is working with a set of mock responses
- API testing: the API is giving the correct answers given a set of data
- Integration testing: the Frontend is working by actually calling the backend with a set of data (generated by whom?)
Are there frameworks or tools to make this as painless as possible?
It seems very complicated to me (if the API spec changes I must rewrite a lot of tests).
Any suggestion is welcome.
Well, you are basically right. There are three types of tests in this scenario: backend logic, frontend behaviour and integration. Let's break them down:
Backend tests
You are testing mainly the business logic of the application. However, you must test the entire application stack: domain, application layer, infrastructure, presentation (API). These layers require both unit testing and integration testing, plus some pure black-box testing from the user's perspective. But this is a complex problem on its own - the full answer would be extremely long. If you are interested in techniques for testing applications in general, please start another thread.
Frontend behaviour
Here you test whether the frontend app uses the API the right way. You mock the backend layer and write mostly unit tests. Now, as you noticed, there can be some problems regarding the real API contract. However, there are ways to mitigate that kind of problem. First, a link to one of these solutions: https://github.com/spring-cloud/spring-cloud-contract. Now, some explanation. The idea is simple: the API contract is driven by the consumer - in your case, the frontend app. The frontend team cooperates with the backend team to create a sensible API that meets all of the client's needs. The frontend tests are therefore guaranteed to use the "real API". When the client tests change, the contract changes, so the backend must refactor to meet the new requirements.
As a side note - you don't really need to use any concrete framework. You can follow the same methodology if you apply some discipline to your team. Just remember - the consumer defines the contract first.
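To illustrate the framework-free version of this: pin the consumer-defined payload in one shared place, and let a backend-side test verify the real endpoint against it. A rough xUnit sketch (the path, port and payload are invented for the example):

using System;
using System.Net.Http;
using System.Threading.Tasks;
using Xunit;

public class BookApiContractTests
{
    // The consumer (frontend team) defines this payload first;
    // the frontend mocks serve it, and this backend test verifies it.
    private const string Path = "/api/books/42";
    private const string ExpectedJson =
        "{\"isbn\":\"978-0132350884\",\"author\":\"Robert C. Martin\",\"title\":\"Clean Code\"}";

    [Fact]
    public async Task Backend_honours_the_consumer_defined_contract()
    {
        // Assumes a test instance of the backend is running locally.
        using var client = new HttpClient { BaseAddress = new Uri("http://localhost:5000") };

        var actualJson = await client.GetStringAsync(Path);

        Assert.Equal(ExpectedJson, actualJson);
    }
}

This way a contract change breaks one side's build immediately, instead of surfacing in production.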
Integration tests
To make sure everything works together, you also need some integration/e2e testing. You set up a real test instance of your backend app. Then you perform integration tests using the real server instead of fake mock responses. However, you don't need to (and should not) duplicate the same tests from other layers. You want to test whether everything is integrated properly, so you don't test any real logic. You just choose some happy paths, maybe some failure scenarios, and perform these tests from the user's perspective. You assume nothing about the state of the backend app and simulate user interaction: add a new product, modify a product, fetch the updated product, or check a single authentication point. Tests of that kind don't really exercise any business logic; they only check that the real API test server communicates properly with the frontend app.
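Such a happy-path check might look roughly like this (Selenium WebDriver from C# for illustration; the URL and element IDs are invented):

using OpenQA.Selenium;
using OpenQA.Selenium.Chrome;
using Xunit;

public class ProductHappyPathTests
{
    [Fact]
    public void User_can_add_a_new_product()
    {
        // Assumes a real test instance of frontend + backend at this URL.
        using var driver = new ChromeDriver();
        driver.Navigate().GoToUrl("http://test.example.local/products/new");

        driver.FindElement(By.Id("product-name")).SendKeys("Test product");
        driver.FindElement(By.Id("save")).Click();

        // Only asserts that the layers are wired together - the business
        // logic is already covered by the other test types.
        Assert.Contains("Test product", driver.FindElement(By.Id("product-list")).Text);
    }
}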
Talking about tools: it depends on your language preferences and/or how the app is built.
For example, if your team feels comfortable with JavaScript, it could be interesting to use frameworks like Playwright or WebdriverIO (better if you plan to test mobile) for UI and integration tests. These frameworks can work together with others more specialised in API testing, like PactumJS, with the plus that they can share some functions.
If you organise the tests correctly, you will not have much extra work when the API spec changes.

How can I use a fake data context for SpecFlow tests

I am writing SpecFlow tests and I would like to run them against an in-memory database so that the tests run faster and I have more control over my data.
Currently:
Using the Unit of Work lifetime for the data context
http://blog.stevensanderson.com/2007/11/29/linq-to-sql-the-multi-tier-story/
Using a fake context set up similar to:
http://refactorthis.wordpress.com/2011/05/31/mock-faking-dbcontext-in-entity-framework-4-1-with-a-generic-repository/
How can I use this fake context with SpecFlow? I can't seem to access the current DbContext singleton from SpecFlow, so I can't just set the fake data context there and have it affect the running tests.
Could I somehow tell my website that I am testing it in SpecFlow and to use the fake context in that scenario? Should I use a button press or a URL parameter, or is there something else I can do?
How can I use this fake context with SpecFlow? I can't seem to access the current DbContext singleton from SpecFlow, so I can't just set the fake data context there and have it affect the running tests.
Since you haven't actually said what your error is, I'm going to make a wild guess that the internal static class FakeContext is declared in a different assembly from the one your SpecFlow tests are declared in. If that is the case, then you need to add
[assembly:InternalsVisibleTo("MyApplication.SpecFlowTests")] //name of the test assembly
to the AssemblyInfo.cs of the Assembly that includes FakeContext.
Or alternatively declare the class as public.
Could I somehow tell my website that I am testing it in SpecFlow and to use the fake context in that scenario? Should I use a button press or a URL parameter, or is there something else I can do?
You could tell the website that you are testing it, but if some malicious individual finds out the sequence to do that in production, then you will have problems.
Since you are talking about mocking your database and testing a website, I will assume that you are using ATDD (see the last paragraph of Development_style). A better approach would then be to automate the hosting of your website at the same time that you start your browser in an automated fashion (usually via Selenium). This way you can control how your website chooses its database without exposing that functionality to the world at large.
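As a sketch of that idea using SpecFlow's test-run hooks - this assumes an ASP.NET Core-style self-hosted site, and Startup, IDataContext and FakeContext stand in for your own types:

using Microsoft.AspNetCore;
using Microsoft.AspNetCore.Hosting;
using Microsoft.Extensions.DependencyInjection;
using TechTalk.SpecFlow;

[Binding]
public static class TestSiteHost
{
    private static IWebHost _host;

    [BeforeTestRun]
    public static void StartSite()
    {
        // Host the site in-process with the fake context registered, so
        // the deployed production site never needs a "test mode" switch.
        _host = WebHost.CreateDefaultBuilder()
            .UseStartup<Startup>() // your site's startup class
            .ConfigureServices(services =>
                services.AddSingleton<IDataContext, FakeContext>())
            .UseUrls("http://localhost:5000")
            .Build();
        _host.Start();
    }

    [AfterTestRun]
    public static void StopSite()
    {
        _host?.Dispose();
    }
}

Your Selenium-driven scenarios then point the browser at http://localhost:5000, and only the test process ever knows the database is fake.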

Implementing Chain of Responsibility with Services

I'm thinking about a platform-neutral (i.e. not .NET MEF) technique for implementing the chain-of-responsibility pattern using web services as the handlers. I want to be able to add more CoR handlers by deploying new services, not by compiling new CoR code - just changing configuration info. It seems the challenge will be managing the metadata about available handlers and ensuring the handlers conform to the interface.
My question: any ideas on how I can safely ensure:
1. The web services are implementing the interface
2. The web services are implementing the base class behavior, like calling the successor
Because, in compiled code, I can have type-safety and therefore know that any handlers have derived from the abstract base class that ensures the interface and behavior I want. That seems to be missing in the world of services.
This seems like a valid question, but a rather simple one.
You are still afforded the protection of the typing system, even if you are loading code later, at runtime, that the original code never saw before.
I would think the preferred approach here would be to have something like a properties file with a list of implementers (your chain). Then, in the code, you need a way to instantiate an instance of each handler at runtime to construct the chain. When you construct an instance, you have to check its type. In Java, that would take the form of instanceof (ordinarily an abomination, but you get a pass in loading scenarios) or isAssignableFrom. In Objective-C, it's conformsToProtocol.
If it doesn't, it can't be used and you can spit an error out to the console.
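As a rough C# sketch of that loading approach (IHandler, LoggingHandler and the config format are invented stand-ins for your abstract base class and metadata). Note the runtime check can guarantee your point 1, the interface; point 2, actually calling the successor, remains a convention you can only verify with tests:

using System;

// Illustrative contract - your abstract base class would play this role.
public interface IHandler
{
    IHandler Successor { get; set; }
    void Handle(string request);
}

// One example handler honouring the "call your successor" behaviour.
public class LoggingHandler : IHandler
{
    public IHandler Successor { get; set; }

    public void Handle(string request)
    {
        Console.WriteLine($"LoggingHandler saw: {request}");
        Successor?.Handle(request);
    }
}

public static class ChainFactory
{
    // typeNames comes from your properties/config file listing the chain.
    public static IHandler Build(string[] typeNames)
    {
        IHandler first = null, previous = null;

        foreach (var name in typeNames)
        {
            var type = Type.GetType(name, throwOnError: true);

            // The runtime type check: isAssignableFrom / instanceof territory.
            if (!typeof(IHandler).IsAssignableFrom(type))
            {
                Console.Error.WriteLine($"{name} does not implement IHandler; skipping.");
                continue;
            }

            var handler = (IHandler)Activator.CreateInstance(type);
            if (previous != null) previous.Successor = handler;
            first = first ?? handler;
            previous = handler;
        }

        return first;
    }
}

Deploying a new handler then means shipping the new type and adding one line to the configuration - no recompilation of the CoR code itself.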

Unit Testing of void methods using Entity Framework

I'm trying to familiarize myself with unit testing. I've written a small application that parses data from the Internet and stores it in a db. For this I'm using Entity Framework.
Since many of the methods are void methods, such as
public void UpdateSiteValue(Site site, ObjectContext context)
This method could be used for updating some value in the db. So I'm basically wondering how to approach this from a unit testing perspective. Maybe I could mock the object context?
Would appreciate any input.
It depends on what you're testing for.
Since you're returning void, you can't test the return value of the function - so do you want to test that your method actually makes the changes in the database? If so, then mocking your context isn't the best solution, because you aren't testing the actual code. Ladislav Mrnka made a great post about that. Could you wrap your test in a transaction scope and then do a rollback afterwards? The IDs would still get incremented, but at least you'd be testing everything.
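That rollback approach might look roughly like this (MSTest; MyObjectContext, SiteUpdater and the Site members are stand-ins for your own types):

using System.Linq;
using System.Transactions;
using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class UpdateSiteValueIntegrationTests
{
    [TestMethod]
    public void UpdateSiteValue_writes_the_new_value_to_the_database()
    {
        // scope.Complete() is never called, so disposing the scope
        // rolls everything back - IDs still advance, as noted above.
        using (var scope = new TransactionScope())
        {
            var context = new MyObjectContext();
            var site = context.Sites.First();

            new SiteUpdater().UpdateSiteValue(site, context); // method under test

            // A projection query always hits the database, bypassing
            // the context's cached copy of the entity.
            var storedValue = context.Sites
                .Where(s => s.Id == site.Id)
                .Select(s => s.Value)
                .First();
            Assert.AreEqual(site.Value, storedValue);
        }
    }
}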
Alternatively, if you want to test that your method does everything right up until you reach the database layer, there are a few ways to go about that. Something that is suggested a lot is to use the repository pattern, so that you don't have a dependency on EF in your test. Truewill made a good post about this too. He also links to an MSDN article about it, using an in-memory ObjectContext, that you may find relevant.
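With a repository in place, a void method can be tested by verifying its interactions instead of a return value. A rough sketch (ISiteRepository, SiteService and the Site stand-in are invented names; the mock uses Moq):

using Microsoft.VisualStudio.TestTools.UnitTesting;
using Moq;

// Minimal stand-in for your entity.
public class Site
{
    public string Value { get; set; }
}

// The abstraction that replaces the direct EF dependency.
public interface ISiteRepository
{
    void Update(Site site);
}

public class SiteService
{
    private readonly ISiteRepository _repository;

    public SiteService(ISiteRepository repository)
    {
        _repository = repository;
    }

    // The void method under test - no ObjectContext in sight.
    public void UpdateSiteValue(Site site, string newValue)
    {
        site.Value = newValue;
        _repository.Update(site);
    }
}

[TestClass]
public class SiteServiceTests
{
    [TestMethod]
    public void UpdateSiteValue_pushes_the_change_through_the_repository()
    {
        var repository = new Mock<ISiteRepository>();
        var site = new Site();

        new SiteService(repository.Object).UpdateSiteValue(site, "new value");

        Assert.AreEqual("new value", site.Value);
        repository.Verify(r => r.Update(site), Times.Once());
    }
}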
Here is some more general reading about unit vs. functional vs. integration testing that may help.

How to go about implementing SpecFlow with a WPF app using MVVM

We have an app which is developed in WPF + DevExpress using the MVVM pattern. We need to implement SpecFlow with MSTest at the ViewModel level.
Has anyone tried this? Any pointers? Is CodedUI any good at the ViewModel level?
I had two thoughts when I read that question.
First - think about whether you really need to automate everything through the UI. With an architecture like MVVM you have a great opportunity to hit the application beneath the UI and still get a lot out of your test automation. For example, write your step definitions against the ViewModels.
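For instance, a SpecFlow step can exercise the same command a button binds to, without any UI involved (MainViewModel, AboutCommand and IsAboutDialogOpen are invented names standing in for your own ViewModel members):

using Microsoft.VisualStudio.TestTools.UnitTesting;
using TechTalk.SpecFlow;

[Binding]
public class AboutSteps
{
    // Drives the real ViewModel directly - no window, no DevExpress
    // controls, no CodedUI needed.
    private readonly MainViewModel _viewModel = new MainViewModel();

    [When(@"the user presses the about button")]
    public void WhenTheUserPressesTheAboutButton()
    {
        // Execute the same ICommand the button's XAML binds to.
        _viewModel.AboutCommand.Execute(null);
    }

    [Then(@"the about dialog is requested")]
    public void ThenTheAboutDialogIsRequested()
    {
        Assert.IsTrue(_viewModel.IsAboutDialogOpen);
    }
}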
Testing against the UI easily runs the risk of creating brittle tests. The UI is the part of most applications that changes most frequently, and tests hitting the UI need to cope with this somehow (more on this later).
Secondly, for the things that you do need to automate against the UI, consider using White, which is a nice object-oriented abstraction above the UI Automation Library. I've used it extensively and like it.
With any automation, be sure to create an abstraction over the actual automation - with the driver pattern, for example. A simple way to do this is to create a screen/page object that has properties and methods to interact with the screen/page in question. Your step definitions then use these wrapper objects.
Keep your step definitions thin and your wrapper objects fat - a bit like a controller in the MVC pattern. More on this here
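A rough shape of such a wrapper (LoginScreen is an invented example; the actual White calls are elided):

using TechTalk.SpecFlow;

[Binding]
public class LoginSteps
{
    // Thin step definition: all UI details live in the screen object.
    private readonly LoginScreen _loginScreen = new LoginScreen();

    [When(@"I log in as ""(.*)""")]
    public void WhenILogInAs(string userName)
    {
        _loginScreen.LogIn(userName, "secret");
    }
}

// Fat wrapper: owns every White/UI Automation call, so UI churn is
// absorbed here instead of rippling through the step definitions.
public class LoginScreen
{
    public void LogIn(string userName, string password)
    {
        // ... locate the login window via White and fill the controls ...
    }
}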
I hope this was helpful
Well, I haven't tried it, but I can't see anything wrong with it. With SpecFlow you create methods that do one thing, say "The user presses the about button", and your code would be something like this:
[Given(@"The user presses the about button")]
public void TheUserPressesTheAboutButton()
{
    this.UIMap.PressAboutButton();
}
You may have to fiddle around to create all the methods, but it's not a big deal. There's a simple guide here. Something that could be a glitch is the naming and identification of the controls, so that the CodedUI test builder is able to find them.
Yes, it works pretty well. The biggest challenge is really defining your sentence structure and Given/When/Then statements so they are consistent and re-usable. Otherwise you end up with tags and 5-10 Givens for a single method, which is not really maintainable.
We used SpecFlow for unit testing MVVM as well as other business components. The Given statements would basically set up the mock data, then execute the test:
Given I have created a database context
And it contains the following Books
    | ISBN | Author | Title |
    ...
I also used SpecFlow for functional testing (end to end) for automated testing via TFS. A database and server are deployed (with real or test data), then the functional test suite is executed against that server/database (creating data, modifying data, etc.).
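Behind Givens like the ones above, the step definitions can hydrate the fake context from the Gherkin table (FakeDatabaseContext and Book are illustrative; CreateSet comes from SpecFlow's Assist helpers):

using System.Collections.Generic;
using TechTalk.SpecFlow;
using TechTalk.SpecFlow.Assist;

public class Book
{
    public string ISBN { get; set; }
    public string Author { get; set; }
    public string Title { get; set; }
}

// Simple in-memory stand-in for the mocked database context.
public class FakeDatabaseContext
{
    public List<Book> Books { get; } = new List<Book>();
}

[Binding]
public class DatabaseSteps
{
    private FakeDatabaseContext _context;

    [Given(@"I have created a database context")]
    public void GivenIHaveCreatedADatabaseContext()
    {
        _context = new FakeDatabaseContext();
    }

    [Given(@"it contains the following Books")]
    public void GivenItContainsTheFollowingBooks(Table table)
    {
        // CreateSet maps each Gherkin table row onto a Book by column name.
        foreach (var book in table.CreateSet<Book>())
            _context.Books.Add(book);
    }
}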