I am in the process of a Dagger-to-Hilt migration. There are several useful APIs, such as @OptionalInject, the @AliasOf scope annotation, and a compiler option to disable @InstallIn checks, that help migrate source code from plain Android Dagger to Hilt.
Migrating tests seems super tricky. My test infrastructure is set up such that I have a custom ActivityTestRule that creates a new app component every time a test is run. So basically I would do something like DaggerMockAppComponent.builder()... in the beforeActivityLaunched method of the activity rule. Hilt's HiltAndroidRule seems to do something similar, providing a new app component for each test case.
Refactoring all the test setup to work with Hilt seems to be a big, time-consuming upfront effort, and I would like to get the source-code migration going without having to redo the whole test-infrastructure setup.
My question is: would it be possible to continue running tests with the existing Dagger-based setup (i.e. not use @HiltAndroidTest and HiltAndroidRule) and, once the source-code migration is complete, migrate to either @CustomTestApplication or HiltTestApplication?
I tried creating a production application that uses Hilt and a test application that uses plain Android Dagger for app-component generation, and it seems to work. The issue I see is that if I annotate an activity/fragment with @AndroidEntryPoint, the corresponding tests won't run and I get the following exception:
java.lang.IllegalStateException: The component was not created. Check that you have added the HiltAndroidRule.
That makes me wonder whether it's even possible to delay the test migration until all the source-code migration is complete.
A broader question would be: what would be a good approach to an incremental Dagger-to-Hilt test migration?
Related
I'm trying to get used to using XCTest in Xcode.
However, my company's app under test has to receive a token, user info, and other data from the server before certain functions can be called.
So it is quite hard to write unit tests, because the unit test code executes before the token arrives from the server.
Is it possible to set the starting point of the test code inside the sources under test, and to start the XCTest code only after a specific view controller appears?
I found a workaround using "DispatchQueue.global().asyncAfter", but I don't think it is a proper solution.
Thanks.
I think the solution to your problem is called "dependency injection". You should google it. It is used in all object-oriented languages and has nothing to do with Swift or Xcode in particular.
Basically it means that you inject the classes that you don't want to test - in your case some kind of network interface - into the class that you do want to test. In your production code you inject the actual classes that communicate with your server; in your unit tests you instead inject fake/stub/mock objects that pretend to behave like the real class but only do whatever is needed to run your unit test. For example, when the class under test requests a token from the server, instead of actually making the request the fake network interface would simply return a hard-coded token value.
This is the proper way to write unit tests. However, if writing unit tests was not taken into account when the architecture of your code was designed, a major refactoring of your code will usually be inevitable.
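The fake-object idea above can be sketched in a few lines. It is shown in Java only because, as noted, DI looks the same in any object-oriented language; TokenService, FakeTokenService, and SessionManager are hypothetical names, not from the question.

```java
// The dependency the class under test needs.
interface TokenService {
    String fetchToken();
}

// Test double: returns a hard-coded token, no network involved.
// (The production implementation, which calls the real server, is omitted.)
class FakeTokenService implements TokenService {
    @Override
    public String fetchToken() {
        return "fake-token-123";
    }
}

// The class under test receives its dependency through the constructor.
class SessionManager {
    private final TokenService tokenService;

    SessionManager(TokenService tokenService) {
        this.tokenService = tokenService;
    }

    boolean startSession() {
        String token = tokenService.fetchToken();
        return token != null && !token.isEmpty();
    }
}

public class DiExample {
    public static void main(String[] args) {
        // In a unit test we inject the fake instead of the real service,
        // so the test never waits on the network.
        SessionManager manager = new SessionManager(new FakeTokenService());
        System.out.println(manager.startSession()); // prints "true"
    }
}
```

In a real XCTest suite the same shape applies in Swift: pass the fake into the initializer of the object under test.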
You may find these links useful:
https://www.tutorialsteacher.com/ioc/dependency-injection
What is dependency injection?
https://www.avanderlee.com/swift/dependency-injection/
The Visual Studio project templates for Service Fabric services contain code that can be reused across multiple other projects, for example ServiceEventSource.cs or ActorEventSource.cs.
My programmer instinct wants to move this code to a shared library so I don't have duplicate code. But maybe this isn't the way to go with microservices, since you want small, independent services, and introducing a library makes them more coupled. Then again, they are already dependent on the EventSource class.
My plan is to move some reusable code to a base class in a shared project and inherit from that class in my services. Is this the best approach?
I'm guessing all your services are going to be doing lots of different jobs, so once you flesh out your EventSource classes they'll be completely different from each other except for one method, the "service started" event?
As with any logging, there are many different approaches. One of the main ones I like is using AOP or interceptor proxies via an IoC container: this keeps your classes clean while allowing re-use of the ETW code, and gives you a decent amount of logging to debug with later down the line.
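A minimal sketch of the interceptor-proxy idea, using the JDK's built-in dynamic proxies rather than any particular IoC container; OrderService and its logging behavior are hypothetical stand-ins for the real ETW event calls.

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Method;
import java.lang.reflect.Proxy;
import java.util.ArrayList;
import java.util.List;

// Hypothetical service interface for illustration.
interface OrderService {
    int placeOrder(String item);
}

class OrderServiceImpl implements OrderService {
    public int placeOrder(String item) {
        return item.length(); // stand-in for real work
    }
}

// The interceptor logs every call before delegating to the real service,
// so the service class itself stays free of logging code.
class LoggingInterceptor implements InvocationHandler {
    private final Object target;
    final List<String> log = new ArrayList<>();

    LoggingInterceptor(Object target) { this.target = target; }

    @Override
    public Object invoke(Object proxy, Method method, Object[] args) throws Throwable {
        log.add("calling " + method.getName());
        return method.invoke(target, args);
    }
}

public class AopLoggingExample {
    public static void main(String[] args) {
        LoggingInterceptor interceptor = new LoggingInterceptor(new OrderServiceImpl());
        OrderService service = (OrderService) Proxy.newProxyInstance(
                OrderService.class.getClassLoader(),
                new Class<?>[] { OrderService.class },
                interceptor);

        int result = service.placeOrder("book");
        System.out.println(result);          // prints "4"
        System.out.println(interceptor.log); // prints "[calling placeOrder]"
    }
}
```

An IoC container (Castle DynamicProxy, Unity interception, etc. in .NET) does the same wiring for you at registration time.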
I moved a lot of duplicate code to my own NuGet libraries, which is working quite well. It is an extra dependency, but always better than duplicate code. Now I'm planning to make my own Service Fabric templates in Visual Studio, so I don't have to remove and adjust files each time.
I found a nice library (EventSourceProxy) that helps me manage the EventSource code for ETW: https://github.com/jonwagner/EventSourceProxy
I am writing SpecFlow tests and I would like to run against an in-memory database to make the tests run faster and have more control over my data.
Currently:
Using the Unit Of Work lifetime for datacontext
http://blog.stevensanderson.com/2007/11/29/linq-to-sql-the-multi-tier-story/
Using a fake context set up similar to:
http://refactorthis.wordpress.com/2011/05/31/mock-faking-dbcontext-in-entity-framework-4-1-with-a-generic-repository/
How can I use this fake context with SpecFlow? I can't seem to access the current DbContext singleton from SpecFlow, so I can't just set the fake data context there and have it affect the running tests.
Could I somehow tell my website that I am testing with SpecFlow so that it uses the fake context in that scenario? Should I use a button press or a URL parameter, or is there something else I can do?
How can I use this fake context with SpecFlow? I can't seem to access the current DbContext singleton from SpecFlow, so I can't just set the fake data context there and have it affect the running tests.
Since you haven't actually said what your error is, I'm going to make a wild guess that the internal static class FakeContext is declared in an assembly different from the one in which your SpecFlow tests are declared. If that is the case, you need to add
[assembly:InternalsVisibleTo("MyApplication.SpecFlowTests")] //name of the test assembly
to the AssemblyInfo.cs of the Assembly that includes FakeContext.
Or alternatively declare the class as public.
Could I somehow tell my website that I am testing with SpecFlow so that it uses the fake context in that scenario? Should I use a button press or a URL parameter, or is there something else I can do?
You could tell the website that you are testing it, but if some malicious individual finds out the sequence to do that in production, then you will have problems.
Since you are talking about mocking your database and testing a website, I will assume that you are using ATDD (see the last paragraph of Development style). A better approach would then be to automate the hosting of your website at the same time that you start your browser in an automated fashion (usually via Selenium). This way you control how your website chooses its database without exposing that functionality to the world at large.
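One common way for the test harness to control database selection without exposing it to users is a switch at the composition root that only the harness can flip, e.g. via a system property (or environment variable/app setting) set when it launches the host. A minimal sketch, with all names hypothetical and shown in Java rather than the .NET of the question, since the idea carries over directly:

```java
import java.util.HashMap;
import java.util.Map;

interface UserRepository {
    String findName(int id);
}

// In-memory fake, pre-seeded with test data.
class InMemoryUserRepository implements UserRepository {
    private final Map<Integer, String> users = new HashMap<>();
    InMemoryUserRepository() { users.put(1, "test-user"); }
    public String findName(int id) { return users.get(id); }
}

// Real implementation (body elided; would hit the actual database).
class SqlUserRepository implements UserRepository {
    public String findName(int id) {
        throw new UnsupportedOperationException("would query the real database");
    }
}

class RepositoryFactory {
    static UserRepository create() {
        // The test runner sets -Duse.fake.context=true before starting the
        // host; production never sets it, so the switch is invisible to users.
        if (Boolean.getBoolean("use.fake.context")) {
            return new InMemoryUserRepository();
        }
        return new SqlUserRepository();
    }
}

public class TestHostExample {
    public static void main(String[] args) {
        System.setProperty("use.fake.context", "true"); // done by the harness
        UserRepository repo = RepositoryFactory.create();
        System.out.println(repo.findName(1)); // prints "test-user"
    }
}
```

Because the flag lives in process configuration rather than a URL or button, it never ships as a reachable code path for a malicious visitor.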
I'm currently developing a REST web service with Spring MVC,
and I am struggling to find the best way to do integration tests on my WS.
First solution: using rest-assured
Advantage: fluent API, really easy to use with its cool DSL
Drawback: when I perform POST or PUT requests on my WS, the state of my database is modified, and subsequent tests are corrupted.
Second solution: unit test the controllers and perform integration tests at the service level separately
Advantage: I can control the state of my database using the Spring Test Framework and perform a rollback after each test
Disadvantage: I no longer perform end-to-end integration tests.
Question: how can I use rest-assured to do integration tests without modifying the state of my database?
Thanks a lot.
Why don't you delete the rest-assured doubles and redirects before every test and set them up fresh for each test?
RestClient.delete "#{RestAssured::Server.address}/redirects/all"
RestClient.delete "#{RestAssured::Server.address}/doubles/all"
Or alternatively you can use different doubles for the GET and POST/PUT calls to rest-assured, and use the redirects in between these calls.
I am not sure your request makes sense as you state it.
RestAssured is just a framework to support you with testing. You could also write unit tests that do the equivalent of PUT and DELETE (basically calling the internal implementations), and these would modify the database state just the same.
Or you could issue only HEAD and GET requests with RestAssured and not modify the database state at all.
Either option will only test part of the code path if you leave the updates out, so your issue is orthogonal to the choice between RestAssured and hand-written unit tests.
Of course you can mock your backend away, but either the mocks are trivial and you don't gain any insight, or they are complex and you will need separate tests to assure that the mock objects do what you think they are doing.
In order to perform integration tests on a REST Spring MVC web service, the SpringSource team has provided a library called spring-test-mvc, which is now integrated into spring-test.
http://blog.springsource.org/2012/11/12/spring-framework-3-2-rc1-spring-mvc-test-framework/
For my particular purpose, it is better suited than rest-assured.
We have an app developed in WPF + DevExpress using the MVVM pattern. We need to implement SpecFlow with MSTest at the ViewModel level.
Has anyone tried this? Any pointers? Is Coded UI any good at the ViewModel level?
I had two thoughts when I read that question.
First - think about whether you really need to automate everything through the UI. With an architecture like MVVM you have a great opportunity to hit the application beneath the UI and still get a lot out of your test automation, for example by writing your step definitions against the ViewModels.
Testing against the UI easily runs the risk of creating brittle tests. The UI is the part of most applications that changes most frequently, and tests hitting the UI need to cope with this somehow (more on this later).
Secondly, for the things that you need to automate against the UI, consider using White which is a nice object oriented abstraction above the UI Automation Library. I've used it extensively and like it.
With any automation, be sure to create an abstraction over the actual automation, for example with the driver pattern. A simple way to do this is to create a screen/page object that has properties and methods to interact with the screen/page in question. Your step definitions then use these wrapper objects.
Keep your step definitions thin and your wrapper objects fat, a bit like a controller in the MVC pattern. More on this here.
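A minimal sketch of such a screen object over a driver abstraction. The UiDriver interface and FakeDriver here are hypothetical stand-ins (a real suite would wrap White or Selenium), and it is shown in Java although the pattern is identical in C#:

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical minimal "driver" abstraction; in a real suite this would
// wrap White / the UI Automation library (or Selenium for the web).
interface UiDriver {
    void setText(String controlId, String value);
    void click(String controlId);
    String getText(String controlId);
}

// Screen object: the only place that knows control ids and interaction
// details, so UI changes touch one class instead of every step definition.
class LoginScreen {
    private final UiDriver driver;
    LoginScreen(UiDriver driver) { this.driver = driver; }

    LoginScreen enterUserName(String name) {
        driver.setText("userNameBox", name);
        return this;
    }

    LoginScreen pressLogin() {
        driver.click("loginButton");
        return this;
    }

    String statusMessage() {
        return driver.getText("statusLabel");
    }
}

// In-memory fake driver so the sketch runs stand-alone.
class FakeDriver implements UiDriver {
    final Map<String, String> controls = new HashMap<>();
    public void setText(String id, String value) { controls.put(id, value); }
    public void click(String id) {
        if (id.equals("loginButton")) {
            controls.put("statusLabel", "Welcome " + controls.get("userNameBox"));
        }
    }
    public String getText(String id) { return controls.get(id); }
}

public class ScreenObjectExample {
    public static void main(String[] args) {
        // A thin step definition would do nothing more than this:
        LoginScreen screen = new LoginScreen(new FakeDriver());
        String status = screen.enterUserName("alice").pressLogin().statusMessage();
        System.out.println(status); // prints "Welcome alice"
    }
}
```

The step definitions stay one or two lines each; all the "fat" lives in the screen object.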
I hope this was helpful.
Well, I haven't tried it, but I can't see anything wrong with it. Using SpecFlow you create methods that each do one thing, say "The user presses the About button", and your code would look something like this:
[Given(@"The user presses the about button")]
public void TheUserPressesTheAboutButton()
{
    this.UIMap.PressAboutButton();
}
You may have to fiddle around to create all the methods, but it's not a big deal. There's a simple guide here. One thing that could be a glitch is the naming and identification of the controls, which the Coded UI Test builder needs in order to find them.
Yes, it works pretty well. The biggest challenge is really defining your sentence structure and Given/When/Then statements so they are consistent and re-usable; otherwise you end up with tags and 5-10 Givens for a single method, which is not really maintainable.
We used SpecFlow for unit testing MVVM as well as other business components. The Given statements basically set up the mock data, then the test executes:
Given I have created a database context
And it contains the following Books
| ISBN | Author | Title |
...
I also used SpecFlow for functional testing (end to end) via automated runs through TFS. A database and server are deployed (with real or test data), then the functional test suite is executed against that server/database (creating data, modifying data, etc.).