iOS: Unit Testing for SQLite

Please help me answer this question: should I write unit tests for data-access code that interacts with the local database of an iOS app, in this case an SQLite database? If so, how should I write them: using mocks, or using a DB file?

Assuming you want to test your program's logic and not just the ability to access SQLite, a test double (either a mock object or a dummy object) will give you tests that are somewhat easier to maintain than a separate DB file. A separate DB file has to have the right data in the right rows, and if you modify it in one test you have to reset it before the next. If your test data gets out of sync, your tests will start to fail. A mock object with literal test values will never get out of sync.
Using a mock will pretty much force you to use dependency injection so you can substitute it for the real data-access object. Using a DB file will not force you to use dependency injection. So if you're working with a lot of existing code that doesn't follow the DI pattern, a DB file would be the "easy" choice, although not the best choice from an object-oriented perspective.
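To make that concrete, here is a minimal, hypothetical sketch of the mock-plus-injection approach. It is in Java with JUnit purely because the idea is platform-neutral; an iOS test in Objective-C or Swift against, say, an FMDB-backed store would have the same shape, and every name below is invented for illustration:

    // The data-access layer is hidden behind an interface, so a test double
    // can stand in for the real SQLite-backed implementation.
    interface UserStore {
        String findName(long id); // the real version would query SQLite
    }

    // The logic under test receives its store via constructor injection.
    class UserService {
        private final UserStore store;
        UserService(UserStore store) { this.store = store; }

        String greeting(long id) {
            String name = store.findName(id);
            return name == null ? "Hello, guest" : "Hello, " + name;
        }
    }

    public class UserServiceTest {
        @org.junit.Test
        public void greetsKnownUserAndFallsBackForUnknown() {
            // A test double with literal values: nothing on disk to keep in sync.
            UserStore fake = id -> (id == 42L) ? "Ada" : null;
            UserService service = new UserService(fake);
            org.junit.Assert.assertEquals("Hello, Ada", service.greeting(42L));
            org.junit.Assert.assertEquals("Hello, guest", service.greeting(7L));
        }
    }

Nothing here touches SQLite at all, which is exactly the point: the logic is exercised against literal values, and only the thin adapter that implements the store interface on top of the real database is left out of this kind of test.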

In the past, I've done this by creating a new SQLite DB file for each test case. In the test cases I would verify that my code writes to the DB and reads back exactly what was written. This way all the test data lives in the code, so the test cases are clearer.
This approach sacrifices some speed, but my unit tests still ran pretty fast.
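As a sketch of that round-trip style, assuming the xerial sqlite-jdbc driver and JUnit on the classpath (on iOS the equivalent would go through the SQLite C API or a wrapper such as FMDB; the table and file names here are made up):

    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    import org.junit.Assert;
    import org.junit.Test;

    public class SqliteRoundTripTest {

        @Test
        public void writeThenReadBack() throws Exception {
            // A fresh DB file per test: no state shared between test cases.
            Path dbFile = Files.createTempFile("test", ".sqlite");
            try (Connection conn = DriverManager.getConnection("jdbc:sqlite:" + dbFile)) {
                try (Statement st = conn.createStatement()) {
                    st.executeUpdate("CREATE TABLE note (id INTEGER PRIMARY KEY, body TEXT)");
                    st.executeUpdate("INSERT INTO note (body) VALUES ('hello')");
                    try (ResultSet rs = st.executeQuery("SELECT body FROM note WHERE id = 1")) {
                        Assert.assertTrue(rs.next());
                        // Read back exactly what was written: the test data is in the test itself.
                        Assert.assertEquals("hello", rs.getString("body"));
                    }
                }
            } finally {
                Files.deleteIfExists(dbFile);
            }
        }
    }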

Related

Correct way to persist an existing JPA entity in the database

In one application I am working on, I found that the JPA objects being modified are not previously loaded. I mean, instead of doing this:

    Enterprise obj = getEntityManager().find(Enterprise.class, idEnterprise);
    // some modifying operations on obj

they do this (and it works!):

    Enterprise obj = new Enterprise(idEnterprise);
    // some modifying operations on obj
    getEntityManager().persist(obj);
This second approach doesn't load the object from the database, yet it modifies the object correctly.
How is this possible?
Is it good practice? At least you avoid the query that loads from the database, right?
Thanks
It depends. If you are writing code in a controller class (or any application code), you shouldn't have to worry about JPA plumbing, so the second approach is bad and redundant.
If, instead, you are working in infrastructure code, you may want to persist your entities manually to enable some performance optimization, or simply because you want the data to persist even if the transaction fails.
I strongly suspect the second bit of code is someone creating an entity from scratch, but mixing application, domain and infrastructure code in the same method: an extremely evil practice. (Very evil, Darth Vader evil, never do that.)
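For reference, the standard JPA way to update a row without an explicit prior find() is merge() on a detached instance; persist() is meant for new rows, and for an existing id the provider may throw an EntityExistsException. A hedged sketch (the Enterprise fields and setName accessor are assumptions):

    import javax.persistence.Entity;
    import javax.persistence.EntityManager;
    import javax.persistence.Id;

    @Entity
    class Enterprise {
        @Id
        private long id;
        private String name;

        protected Enterprise() {}              // JPA requires a no-arg constructor
        Enterprise(long id) { this.id = id; }
        void setName(String name) { this.name = name; }
    }

    class EnterpriseUpdater {
        // Loads first: one SELECT, then the dirty-checked UPDATE on flush.
        void updateWithFind(EntityManager em, long idEnterprise, String newName) {
            Enterprise e = em.find(Enterprise.class, idEnterprise);
            e.setName(newName);                // managed entity, change flushed automatically
        }

        // Updates via a detached instance, like the question's second snippet,
        // but with merge(). Caveat: merge copies ALL fields of the detached
        // object, so any field left unset will overwrite the stored value.
        void updateDetached(EntityManager em, long idEnterprise, String newName) {
            Enterprise detached = new Enterprise(idEnterprise);
            detached.setName(newName);
            em.merge(detached);
        }
    }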

Strategy for deploying an EF-controlled database

I have written an application using EF 6.0 in combination with a SQL Server Compact 4.0 database. When a customer uses this application for the first time, it (the application) should create a database file in a given path with some initial values. Migrations should also be supported, since the object model may well change with future versions of the app.
Now I'm wondering what would be the best way to deploy the DB on the user's production system. I can think of three ways:
1. Create a DB file with initial values and just copy it to the right place during the installation process, using MigrateDatabaseToLatestVersionInitializer in the app.
2. In the DbContext constructors (I have two contexts), check for an existing DB file and use different database initializers accordingly: a CreateDatabaseIfNotExistsInitializer with a seed method that creates initial data if no file is found, and a MigrateDatabaseToLatestVersionInitializer if the DB file exists.
3. Always use the MigrateDatabaseToLatestVersionInitializer and, in its Seed method, check for existing table entries, creating them if they are not present.
Which of these ways is to be preferred, or is there a better way I didn't think of?
It sounds like this is a desktop application, so you might want to catch permission errors around creating the database file at installation time (i.e. option 1) rather than at run time, especially as in option 2 the database initialization is not an imperative command you're giving that you can wrap in a try...catch.
I don't think option 3 would work, as the Seed method runs after all the migrations: either the migrations will have run successfully, in which case the tables don't need creating, or they will have failed because the DB doesn't exist, in which case your Seed method won't run at all.

Entity Framework code first - development strategies

Working on a brand-new project from the ground up. That means the data model is in constant flux, doubly so because things are, inevitably, not as well planned as they should be. Model classes are being created and changed fairly regularly.
The plan was to use the latest version of EF with all the neat code-first stuff in it. But we keep tripping over the framework's limitations when adding or updating tables. The initialization options seem to allow only the complete deletion and re-creation of the database, which isn't really ideal.
I've had a look at the migrations. But this seems a sledgehammer to crack a nut: we don't need to detail every single small change and update with a new migration scaffold.
Are there better strategies for dealing with this? For instance, I started writing some unit tests to pre-populate one of the contexts with test data, but because this causes the whole DB to be dropped and re-created, it causes problems for all the other contexts. Or perhaps we could use a custom initializer to seed the data for us? How can we easily exclude these in production code?
We're also wondering about perhaps abandoning code-first and going back to EDMX diagrams. At least that way changes result in updated SQL commands which can be run directly against the database.
Any suggestions gratefully received.
I think, IMHO, that:
- as the database schema must at least match your model, you should (indeed must) detail every single change, and code-first migrations allow that and trace the changes over time
- code-first migrations can also migrate the database schema for you
- code-first migrations can also produce SQL that lets you migrate the schema yourself
For these reasons code first is as good as (if not better than) the EDMX approach.
Please take a few minutes to work through http://msdn.microsoft.com/en-us/data/jj591621.aspx
One other point, always IMHO and in a perfect world: if you unit test the business logic of your model, you should not need the DAL; use generic collections instead. Be aware of the behavioral differences between LINQ to Objects and LINQ to Entities, for example concerning case sensitivity.

Is Core Data implementing the Data Mapper pattern?

I know that Core Data should not be considered an ORM, but it still offers functionality similar to one. Just curious: is it implementing the Data Mapper pattern? I know "the Data Mapper is a layer of software that separates the in-memory objects from the database. Its responsibility is to transfer data between the two and also to isolate them from each other" (Martin Fowler). IMHO the managed object context handles all the SQL work in one transaction, which is a performance-wise design, so Core Data might be considered to implement the Data Mapper pattern.
One year later, I will contribute my two cents.
I am not an ORM expert and only recently started something using a Data Mapper, but as a long-time Core Data user I can say: no. The main objective of this pattern is having a clear cut between a domain object and all database-related operations.
Once I start writing unit tests, the first thing I notice is that I must load a database, even if it is just some in-memory store; I really must load one. Also, there are no per-class mappers; I have no control over how each relation is stored.
Core Data loads lots of meta-information about your object graph and forces some structure onto it. Although you can change the persistent store and bake something of your own, you will face lots of restrictions about how to do it, with a clear "relational" feeling to it.
The idea is good; we might say Core Data is some variation of it. Something I do love is that the save operation is done by the context, not the object itself, so there is some degree of separation.
However, look at functions like awakeFromFetch or didSave: both operations are tied to the data store, not to a plain domain object. A proper Data Mapper would let you define those operations per persistent store, not unified in a single object.
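For contrast, here is what Fowler's pattern looks like in its minimal form: a plain domain object that knows nothing about storage, and a separate mapper that owns all the SQL. This is a hypothetical sketch (Java and JDBC used for neutrality; every name and the schema are invented):

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.SQLException;

    // Plain domain object: no persistence hooks, no base class, nothing like
    // awakeFromFetch or didSave. It cannot tell whether a database even exists.
    class Person {
        final long id;
        String name;
        Person(long id, String name) { this.id = id; this.name = name; }
    }

    // The mapper owns all the SQL. Swapping stores means writing another
    // mapper, not touching Person.
    class PersonMapper {
        private final Connection conn;
        PersonMapper(Connection conn) { this.conn = conn; }

        Person find(long id) throws SQLException {
            try (PreparedStatement ps = conn.prepareStatement("SELECT name FROM person WHERE id = ?")) {
                ps.setLong(1, id);
                try (ResultSet rs = ps.executeQuery()) {
                    return rs.next() ? new Person(id, rs.getString("name")) : null;
                }
            }
        }

        void update(Person p) throws SQLException {
            try (PreparedStatement ps = conn.prepareStatement("UPDATE person SET name = ? WHERE id = ?")) {
                ps.setString(1, p.name);
                ps.setLong(2, p.id);
                ps.executeUpdate();
            }
        }
    }

Unit tests can then exercise Person with no database at all, which is exactly the separation the answer above finds missing in Core Data.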
UPDATE:
Funnily enough, one day after writing this I had to deal with an old Core Data based project and came back to improve this answer. To make things clear: I consider "seems like the pattern" not to be enough. For example, implementations of the Facade and Adapter patterns are quite similar, but you name them differently depending on how you use them.
Is Core Data implementing data mapper?
I must say that my "not quite" should have been "definitely not!"
I have just been very angry because I needed to rename some fields and later add new ones. Although I know quite well how automatic migrations work with Core Data, I had forgotten how annoying they are.
How many times do you add some new field, rename something, or experiment until you get it right... and every single tiny change requires a full-blown database migration? With Data Mappers this never happens, because domain objects are perfectly decoupled: you only touch the database to catch up with the domain objects after you finish a new feature. Core Data forces you to bind every single detail of your domain objects at every single moment.
Boy, how sweet life was while I had forgotten that "tiny" annoyance of Core Data being the exact opposite of what you can achieve with Data Mappers.

How to manage test data for Hibernate Search integration tests

I have a Spring-based system that uses Hibernate Search 3.4 (on top of Hibernate 3.5.4). Integration tests are managed by Spring, with the @Transactional annotation. At the moment the test data (entities that are to be indexed) is loaded by a Liquibase script; we use its Spring integration. It's very inconvenient to manage.
My new solution is to have test data defined as Spring beans and wire them as Resources, by name. This part works.
I tried to have these beans persisted and indexed in the setUp method of my test cases (and in the test methods themselves), but I failed. They get into the DB fine, but I can't get them indexed. I tried calling index() on FullTextEntityManager (with flushToIndexes), and I tried createIndexer().startAndWait().
What else can I do?
Or maybe there is some better option for testing Hibernate Search?
Thank you in advance.
My new solution is to have test data defined as Spring beans and wire
them as Resources, by name. This part works.
sounds like a strange setup for a unit test. To be honest, I am not quite sure how you do this.
In Hibernate Search itself, an in-memory database (H2) is used together with a Lucene RAM directory. The benefit of such a setup is that it is fast and makes it easy to avoid dependencies between tests.
I tried to have these beans persisted and indexed in setUp method of
my test cases (and in test methods themselves) but I failed. They get
into DB fine but I can't get them indexed.
If automatic indexing is enabled and the test data is persisted within a transaction, it should work. A common mistake in combination with Spring is to use the wrong transaction manager. The Hibernate Search forum has a lot of threads around this, for example this one: https://forum.hibernate.org/viewtopic.php?f=9&t=998155. Since you are not giving any concrete configuration and code examples, it is hard to give more specific advice.
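As a sketch of forcing index writes inside the test itself rather than relying on commit-time indexing (Hibernate Search 3.x API; the Book entity, its @Indexed mapping, and the Spring config file are assumptions):

    import javax.persistence.EntityManager;
    import javax.persistence.PersistenceContext;

    import org.hibernate.search.jpa.FullTextEntityManager;
    import org.hibernate.search.jpa.Search;
    import org.junit.Test;
    import org.junit.runner.RunWith;
    import org.springframework.test.context.ContextConfiguration;
    import org.springframework.test.context.junit4.SpringJUnit4ClassRunner;
    import org.springframework.transaction.annotation.Transactional;

    @RunWith(SpringJUnit4ClassRunner.class)
    @ContextConfiguration("classpath:test-context.xml") // assumed Spring config
    @Transactional
    public class BookIndexingTest {

        @PersistenceContext
        private EntityManager em;

        @Test
        public void persistedEntityBecomesSearchable() {
            Book book = new Book("Dune");      // hypothetical @Indexed entity
            em.persist(book);

            FullTextEntityManager ftem = Search.getFullTextEntityManager(em);
            ftem.index(book);                  // queue manual index work for this instance
            ftem.flushToIndexes();             // apply it without waiting for the commit

            // ...run a full-text query against ftem here and assert on the result...
        }
    }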
I tried createIndexer().startAndWait()
that is also a good approach. I would recommend it if you want to insert not just a couple of test entities, but a whole set of data. In that case it can make sense to use a framework like DbUnit to insert the test data and then index it manually; createIndexer().startAndWait() is the right tool for that. Extracting all this loading/persisting/indexing functionality into a common test base class is the way to go. The base class can also be responsible for all the Spring bootstrapping.
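For instance, a hedged sketch of such a base class (class and method names are assumptions):

    import javax.persistence.EntityManager;
    import javax.persistence.PersistenceContext;

    import org.hibernate.search.jpa.Search;

    // Shared base class: subclasses persist their fixtures (e.g. via DbUnit),
    // then rebuild the whole index in one go with the MassIndexer.
    public abstract class SearchTestBase {

        @PersistenceContext
        protected EntityManager em;

        protected void reindexAll() throws InterruptedException {
            // Rebuilds the Lucene index from whatever is currently in the database.
            Search.getFullTextEntityManager(em).createIndexer().startAndWait();
        }
    }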
Again, to give more specific feedback you have to refine your question.
I have a completely different approach. Whenever I write queries, I want to write a complete test suite for them, but data creation has always been a pain (special mention to the moment when the test customer gets corrupted and your whole test suite breaks).
To solve this I created Random-JPA. It's simple and easy to integrate. The whole idea is that you create fresh data and test against it.
You can find the full documentation here.