I am writing specflow tests and I would like to run from an in-memory database to get the tests to run faster and have more control over my data.
Currently:
Using the Unit Of Work lifetime for datacontext
http://blog.stevensanderson.com/2007/11/29/linq-to-sql-the-multi-tier-story/
Using a fake context set up similar to:
http://refactorthis.wordpress.com/2011/05/31/mock-faking-dbcontext-in-entity-framework-4-1-with-a-generic-repository/
How can I use this fake context with Specflow? I can't seem to access the current dbcontext singleton from Specflow, so I can't just set the fake datacontext there and have it affect the running tests.
Could I somehow tell my website that I am testing in specflow and to use the fakecontext in that scenario? Should I use a button press or a url parameter or is there something else I can do?
How can I use this fake context with Specflow? I can't seem to access the current dbcontext singleton from Specflow, so I can't just set the fake datacontext there and have it affect the running tests.
Since you haven't actually said what your error is, I'm going to make a wild guess that the internal static class FakeContext is declared in a different assembly from the one in which your SpecFlow tests are declared. If that is the case, then you need to add
[assembly:InternalsVisibleTo("MyApplication.SpecFlowTests")] //name of the test assembly
to the AssemblyInfo.cs of the assembly that contains FakeContext.
Or alternatively declare the class as public.
Could I somehow tell my website that I am testing in specflow and to use the fakecontext in that scenario? Should I use a button press or a url parameter or is there something else I can do?
You could tell the website that you are testing it, but if some malicious individual finds out the sequence to do that in production, then you will have problems.
Since you are talking about mocking your database and testing a website, I will assume that you are practising ATDD (see the last paragraph of the Wikipedia article's Development style section). In that case, a better approach would be to automate the hosting of your website at the same time that you start your browser in an automated fashion (usually via Selenium). This way you can control how your website chooses its database without exposing that functionality to the world at large.
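To make that concrete, here is a minimal sketch of the seam such a setup needs. All of the names (IDataContext, DataContextLocator, FakeContext) are hypothetical: the point is only that the site resolves its context through one swappable factory, which a SpecFlow hook can repoint before each scenario.

```csharp
using System;

// Hypothetical seam: instead of the site new-ing its data context
// directly, it asks this locator, which the tests can repoint.
public interface IDataContext { string Name { get; } }

public class RealContext : IDataContext { public string Name => "real"; }
public class FakeContext : IDataContext { public string Name => "fake"; }

public static class DataContextLocator
{
    public static Func<IDataContext> Factory { get; set; }
        = () => new RealContext();

    public static IDataContext Current => Factory();
}

public static class Demo
{
    public static void Main()
    {
        // In a SpecFlow [BeforeScenario] hook you would run this line,
        // and restore the real factory again in [AfterScenario]:
        DataContextLocator.Factory = () => new FakeContext();

        Console.WriteLine(DataContextLocator.Current.Name); // prints "fake"
    }
}
```

Because the swap happens in the test process's composition root rather than via a URL parameter or button, nothing test-related is ever reachable from production traffic.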
I am trying to get used to XCTest in Xcode,
but the company app that is to be tested has to receive a token, user info, and other data from the server before its functions can be called.
So it is quite hard to add unit tests, because the unit test code executes before the token has arrived from the server.
Is it possible to set the test code's starting point somewhere in the sources under test, and to start the XCTest code only after a specific view controller appears?
I found a workaround using "DispatchQueue.global().asyncAfter", but I don't think that is a proper solution.
Thanks.
I think the solution to your problem is called "dependency injection". You should google it. It is used in all object-oriented languages and has nothing to do with Swift or Xcode in particular.
Basically it means that you inject the classes that you don't want to test - in your case some kind of network interface - into the class that you want to test. In your production code you inject the actual classes that communicate with your server, and in your unit tests you inject fake/stub/mock object(s) instead, which pretend to behave like the actual class but really only do whatever is needed to run your unit test.

For example, when the class you want to test requests a token from the server, instead of actually making the request the fake network interface would simply return a hard-coded token value. This is the proper way to write unit tests.

However, if writing unit tests was not taken into account when the architecture of your code was designed, this usually means a major refactoring of your code will be inevitable.
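Since the idea is language-agnostic, here is a minimal sketch in C# (all type names are made up for illustration): the class under test depends on an interface, and the test hands it a stub that returns a hard-coded token instead of hitting the network.

```csharp
using System;

// Hypothetical service interface the class under test depends on.
public interface ITokenProvider
{
    string FetchToken();
}

// Production implementation: would make the real server call.
public class NetworkTokenProvider : ITokenProvider
{
    public string FetchToken() =>
        throw new NotImplementedException("real HTTP request lives here");
}

// Test double: no network, just a hard-coded token.
public class StubTokenProvider : ITokenProvider
{
    public string FetchToken() => "test-token-123";
}

// The class under test is handed ("injected with") its dependency.
public class SessionManager
{
    private readonly ITokenProvider _tokens;

    public SessionManager(ITokenProvider tokens) => _tokens = tokens;

    public string StartSession() => "session:" + _tokens.FetchToken();
}
```

In a unit test, `new SessionManager(new StubTokenProvider()).StartSession()` returns "session:test-token-123" immediately, so the test never has to wait for a real token to arrive.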
You may find these links useful:
https://www.tutorialsteacher.com/ioc/dependency-injection
What is dependency injection?
https://www.avanderlee.com/swift/dependency-injection/
I'm currently creating an application which is highly modular (using the Prism-Framework) and accesses a database via the EntityFramework implemented in CodeFirst.
My goal is to separate the actual writing of data into the database from the "normal" use of the created entities. Writing to the database shall only be done by the main application, but the modules should still be able to use the entity classes.
Thus, they must know the DataContext or at least the entity classes. Here is the problem, though: if a module changes a property of an entity class and the main application calls "SaveChanges()" on the DataContext for some other reason, the changes made by the module are automatically saved to the database without the main application having any control over it.
How can I prevent this behaviour? The Modules must not be able to change the Database-Content, except via a defined Interface to the main-Application.
My first thought was to implement ICloneable in every entity-Class and to only pass clones of the Entity-Objects to the Modules to work with. The modules would then, if they wanted to request a change in the database, pass the cloned Objects to the main-Application which updates the original object and calls "SaveChanges()" on the DataContext.
Do you guys think this is a viable solution, or might there be a better way to implement this behaviour?
Thanks in advance!
Use the DbSet.AsNoTracking() method to read data from the database without the returned entities being tracked by the DbContext; since they are untracked, a later SaveChanges() will not persist any changes the modules make to them.
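A short sketch of that idea, assuming a hypothetical Code First context LibraryContext with a DbSet&lt;Book&gt; Books (EF 4.1+ DbContext API):

```csharp
using System.Collections.Generic;
using System.Data.Entity; // EF 4.1+ DbContext API (AsNoTracking)
using System.Linq;

public static class ModuleDataProvider
{
    // Entities read with AsNoTracking() are never registered in the
    // context's change tracker, so edits a module makes to them are
    // invisible to a later context.SaveChanges() in the main app.
    public static List<Book> GetBooksForModule(LibraryContext context)
    {
        return context.Books.AsNoTracking().ToList();
    }
}
```

Modules then persist changes only by passing entities back through the main application's defined interface, which re-attaches or copies them onto tracked instances before saving.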
We have an app which is developed in WPF + DevExpress using the MVVM pattern. We need to implement SpecFlow with MSTest at the ViewModel level.
Has anyone tried this? Any pointers? Is Coded UI any good at the ViewModel level?
I had two thoughts when I read that question.
First - consider whether you really need to automate everything through the UI. With an architecture like MVVM you have a great opportunity to hit the application beneath the UI and still get a lot out of your test automation. For example, write your step definitions against the ViewModels.
Testing against the UI easily runs the risk of creating brittle tests. The UI is the part of most applications that changes most frequently, and tests hitting the UI need to cope with this somehow (more on this later).
Secondly, for the things that you need to automate against the UI, consider using White which is a nice object oriented abstraction above the UI Automation Library. I've used it extensively and like it.
With any automation, be sure to create an abstraction over the actual automation, for example with the driver pattern. A simple way to do this is to create a screen/page object that has properties and methods to interact with the screen/page in question. Your step definitions then use these wrapper objects.
Keep your step definitions thin and your wrapper objects fat. A bit like a controller in the MVC pattern. More on this here
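A minimal sketch of such a wrapper (names are illustrative): the screen object is the only thing that knows control ids, and the automation library hides behind one small interface.

```csharp
using System;
using System.Collections.Generic;

// The seam: one place knows how to drive the real UI (e.g. a
// White-backed implementation); everything else talks to wrappers.
public interface IAutomationDriver
{
    void SetText(string automationId, string text);
    void Click(string automationId);
}

// Screen object: the "fat" wrapper that thin step definitions call.
public class LoginScreen
{
    private readonly IAutomationDriver _driver;

    public LoginScreen(IAutomationDriver driver) => _driver = driver;

    // Control ids here are hypothetical.
    public string UserName
    {
        set => _driver.SetText("UserNameBox", value);
    }

    public void PressLogin() => _driver.Click("LoginButton");
}
```

A step definition then just writes `loginScreen.UserName = "bob"; loginScreen.PressLogin();` and never mentions the automation library directly, so UI churn only ever touches the wrapper.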
I hope this was helpful.
Well, I haven't tried it, but I can't see anything wrong with it. By using SpecFlow you create methods that do one thing, say "The user presses the about button", and your code would be something like this:
[Given(@"The user presses the about button")]
public void TheUserPressesTheAboutButton()
{
    this.UIMap.PressAboutButton();
}
You may have to fiddle around to create all the methods, but it's not a big deal. There's a simple guide here. One thing that could be a glitch is the naming and identification of the controls, so that the Coded UI Test builder is able to find them.
Yes. It works pretty well. The biggest challenge is really defining your sentence structure and Given/When/Then statements so they are consistent and re-usable. Otherwise you end up with tags and 5-10 Givens for a single method, which is not really maintainable.
We used SpecFlow for unit testing MVVM as well as other business components. The Given statements would basically set up the mock data, then execute the test:
Given I have created a database context
And it contains the following Books
| ISBN | Author | Title |
...
I also used specflow for Functional Testing (end to end) for automated testing via TFS. A database and server are deployed (with real or test data), then the functional test suite is executed against that server/database (creating data, modifying data, etc).
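A step definition backing the Given statements above might look like the following sketch. The context and entity types (FakeLibraryContext, Book) are hypothetical; Table.CreateSet comes from SpecFlow's Assist helpers and maps the table's columns onto matching properties by header name.

```csharp
using TechTalk.SpecFlow;
using TechTalk.SpecFlow.Assist; // Table.CreateSet<T>()

[Binding]
public class BookSteps
{
    // Hypothetical in-memory context standing in for the real database.
    private FakeLibraryContext _context;

    [Given(@"I have created a database context")]
    public void GivenIHaveCreatedADatabaseContext()
    {
        _context = new FakeLibraryContext();
    }

    [Given(@"it contains the following Books")]
    public void GivenItContainsTheFollowingBooks(Table table)
    {
        // CreateSet maps the ISBN/Author/Title columns onto Book
        // properties of the same names.
        foreach (var book in table.CreateSet<Book>())
            _context.Books.Add(book);
    }
}
```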
I have quite a large code base using a variety of different ADO technologies (i.e. some EF and in some cases using ADO.Net directly).
I'm wondering if there is any way to globally intercept any ADO.Net calls so that I can start auditing information - exact SQL statements that executed, time taken, results returned, etc.
The main idea being that if I can do this, I shouldn't have to change any of my existing code and that I should be able to just intercept/wrap the ADO.Net calls... Is this possible?
You can globally intercept any methods that you have access to (i.e. your generated models and context). If you need to intercept methods in the framework BCL, then no.
If you just want to get the SQL generated from your EF models then intercept one of the desired methods with the OnMethodBoundaryAspect and you can do your logging in the OnEntry and OnExit methods.
Remember, you can intercept only code you have access to. Generated EF code is accessible, but regeneration will overwrite any changes you make to it, so you will need to apply the aspect using either a partial class or an assembly-level declaration. I would suggest the latter, since you want global interception.
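For reference, a minimal sketch of such an aspect, assuming PostSharp (where OnMethodBoundaryAspect lives); the aspect name and log format are made up:

```csharp
using System;
using PostSharp.Aspects;

// Times every intercepted method and writes an audit line.
[Serializable]
public class SqlAuditAspect : OnMethodBoundaryAspect
{
    public override void OnEntry(MethodExecutionArgs args)
    {
        args.MethodExecutionTag = DateTime.UtcNow; // stash the start time
    }

    public override void OnExit(MethodExecutionArgs args)
    {
        var started = (DateTime)args.MethodExecutionTag;
        var elapsed = DateTime.UtcNow - started;
        Console.WriteLine($"{args.Method.Name} took {elapsed.TotalMilliseconds:F1} ms");
    }
}
```

Applied assembly-wide via multicasting, e.g. `[assembly: SqlAuditAspect(AttributeTargetTypes = "MyApp.Data.*")]` (the namespace pattern is hypothetical), this covers the generated data-access types without editing them.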
Just my 2 cents: You might want to look at other alternatives for this such as SQL profiler or redesigning your architecture.
Afterthought is an open source tool that supports modifying an existing dll without requiring you to recompile from source to add aspect attributes. For this to work, you would need to create amendments (the way you describe your changes in Afterthought) in a separate dll, and this dll would need to have an assembly-level attribute implementing IAmendmentAttribute that would identify the types in your target assembly to process.
Take a look at the logging example to see how this works and let me know if you have any questions/issues.
Please note that Afterthought modifies your target assembly to make calls to static methods in another assembly (your tool). If you want to intercept calls without modifying the target assembly in any way, then I recommend looking into the .NET profiling API.
Jamie Thomas (primary author of Afterthought)
I am writing a custom membership provider, but I'm not sure where to put it. I don't really have any code to show you, but basically the provider needs access to System.Web.Security in order to inherit the class, yet it also needs data access (i.e. a connection string + LINQ to SQL) to do simple tasks such as ValidateUser.
How can I write a membership provider that adheres to the principles of DDD that I've read about in Pro ASP.NET MVC2 Framework by Apress? My one thought was to write another class in my domain project which does all the "work" related to database stuff. In essence I would have double the number of methods. Also, can this work with dependency injection (IoC)?
Hope this isn't too general ...
Look forward to the hive-mind's responses!
Edit: I just noticed in a default MVC2 project there is an AccountController which has a wrapper around an IMembershipService. Is this where my answer lies? The AccountController seems to have no database access component to it.
ASP.NET's user management features are super invasive.
They even spam the database with profile tables and whatnot.
When I had to implement user management for my application, I successfully avoided all that mess and was still able to use the ASP.NET built-in roles, user identities, etc. I'm moving away from even that now, though, because my domain is getting smart enough to decide what can be seen and done, so it makes no sense to duplicate that logic in the UI client.
So... yeah. Still have zero problems with this approach. Haven't changed anything for ~4 months.
Works like a charm.