I have to admit I'm quite new to unit testing, and there are a lot of questions for me.
It's hard to name it, but I think it's a behaviour test. Anyway, let me go straight to the example:
I need to test the user roles listing to make sure that my endpoint works correctly and returns all the roles assigned to a given user.
That means:
I need to create a user
I need to create a role
I need to assign the created role to the created user
As we can see, there are three operations that must be executed before the actual test, and I believe that in larger applications such a list can grow much longer and more complex.
The question is how I should test such endpoints: should I just insert raw data into the DB, or write some code blocks that would do such preparations?
It's probably best to test the individual units of your service without hitting the service itself; otherwise you're also unit testing the WebApi framework.
This also allows you to mock your database, so your tests don't rely on any stored data or on authorization to your service.
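For example, a minimal sketch using xUnit and Moq (the framework choice and the IUserRepository/UserRoleService names are assumptions, not from the question):

using System.Collections.Generic;
using Moq;
using Xunit;

public interface IUserRepository
{
    IReadOnlyList<string> GetRolesForUser(int userId);
}

public class UserRoleService
{
    private readonly IUserRepository repository;
    public UserRoleService(IUserRepository repository) => this.repository = repository;

    public IReadOnlyList<string> GetRoles(int userId) => repository.GetRolesForUser(userId);
}

public class UserRoleServiceTests
{
    [Fact]
    public void GetRoles_ReturnsRolesAssignedToUser()
    {
        // The mock stands in for the database, so no user, role or
        // assignment rows have to be created before the test runs.
        var repository = new Mock<IUserRepository>();
        repository.Setup(r => r.GetRolesForUser(42))
                  .Returns(new List<string> { "Admin", "Editor" });

        var service = new UserRoleService(repository.Object);

        var roles = service.GetRoles(42);

        // Only the service logic is exercised, not WebApi or the DB.
        Assert.Equal(new[] { "Admin", "Editor" }, roles);
    }
}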
I have a REST API based on Onion Architecture.
But I have some challenges applying this way of building a server, concretely regarding what the behaviour of an Interface should be when it has some data to check before handing structured data to a Usecase.
That is one of my problems:
I have some methods in the Interface that extract info about timers from the request, but I keep facing the same question: should I extract everything, pass it to the Usecase, and do all the checks there, or should I first check whether the timer exists in the DB (if I'm updating a timer) and only then do what I need?
These kinds of checks (the role of the requester and what it is or isn't allowed to do, whether a timer exists, whether a user exists, whether a username is already taken since I want a unique-username restriction, etc.) are bothering me, because depending on where I do the check, strictly following the Onion Architecture or not, I execute more or less code that is sometimes unnecessary.
If I check some things in the Interface, I avoid executing code that would be unnecessary, but then I'm not following the Architecture correctly, and vice versa.
Any thoughts?
I have two bounded contexts which lead to two microservices:
PersonalManagement
DocumentStorage
I keep the entity model simple here.
PersonalManagement:
Entity/Table Person:
#id - int
tenantId - int
name - string
...
DocumentStorage
Entity/Table Document:
#id - int
tenantId - int
personId - int
dateIssued - string
...
You need to know that before the application is started, a company (tenant) is chosen to define the company context.
I want to store a new document by using REST/JSON.
This is a POST to /tenants/1/persons/5/documents
with the body
{
"dateIssued" : "2018-06-11"
}
On the backend side, I validate the input body.
One validation might be "if the person specified exists and really belongs to the given tenant".
Since this info is stored in the PersonalManagement-MicroService, I need to provide an operation like this:
"Does exists (personId=5,tenantId=1)"
in PersonalManagement to ensure consistency, since the caller might be evil.
Or in general:
What is the best practice for checking "ownership" of entities across databases in microservices?
It might also be an option that when a new person is created, the (tenantId, personId) pair is stored additionally(!) in DocumentStorage, but I want to avoid this redundancy.
I'm not going to extend this answer into whether your bounded contexts and service endpoints are well defined, since your question seems to simplify the issue to keep a well-defined scope, but regarding your specific question:
What is the best practice for checking "ownership" of entities across databases in microservices?
Microservice architectures strive for a "share nothing" principle, and that usually extends from the codebase to the database. So you're right to assume you're checking this constraint "cross-DB" in your scenario.
You have a few options in this particular case, each with its own set of drawbacks:
1) Your proposed "Does exists (personId=5,tenantId=1)" call from the DocumentContext to the PersonContext is not wrong in itself, but it creates a direct dependency between these two microservices, so you must ask yourself whether it is acceptable to stop accepting new documents whenever the PersonalManagement microservice is offline.
In specific situations such dependencies might be acceptable, but the more of them you have, the less your microservice architecture behaves as one and the more it behaves like a "distributed monolith", which in itself is pretty much an anti-pattern.
2) The other main option is to recognize that the DocumentContext is very much interested in some information/behavior relating to People, so it should be comfortable modelling the Person Entity inside its own boundaries.
That means you can have the DocumentContext subscribe to changes in the PersonContext to stay aware of which People currently exist and what their characteristics are, and thus keep a local copy of that information.
That way, your validation is kept entirely inside the DocumentContext, which can operate unhindered by eventual issues with the PersonContext, and you will find that your modelling of the document-related entities becomes much cleaner than before.
But in the end you will also discover that a "share nothing" principle usually costs you what looks like redundancy, but is actually independence of contexts.
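To make option 2 concrete, here is a minimal sketch of such a subscription, assuming some message bus delivers PersonalManagement's events; all type names (PersonCreated, IKnownPersonStore, etc.) are illustrative, not from any particular framework:

// Illustrative sketch: DocumentStorage keeps its own read-only copy of the
// person data it needs for validation.
public record PersonCreated(int TenantId, int PersonId, string Name);

// Local projection stored in DocumentStorage's own database
public class KnownPerson
{
    public int TenantId { get; init; }
    public int PersonId { get; init; }
}

public interface IKnownPersonStore
{
    void Save(KnownPerson person);
    bool Exists(int tenantId, int personId);
}

public class PersonCreatedHandler
{
    private readonly IKnownPersonStore store;

    public PersonCreatedHandler(IKnownPersonStore store) => this.store = store;

    // Invoked by whatever message bus carries PersonalManagement's events
    public void Handle(PersonCreated evt) =>
        store.Save(new KnownPerson { TenantId = evt.TenantId, PersonId = evt.PersonId });
}

Validating a POST to /tenants/1/persons/5/documents then becomes a local store.Exists(1, 5) call, with no runtime dependency on PersonalManagement.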
Just for the tenancy check: this can be done using the JWT token, which can store tenancy information and other metadata.
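For illustration, a minimal sketch of such a check, assuming ASP.NET-style claims and a claim named "tenantId" (both assumptions; use whatever your token issuer actually emits):

using System.Linq;
using System.Security.Claims;

public static class TenantGuard
{
    // Compares the tenant id carried in the validated JWT's claims against
    // the tenant id taken from the request URL.
    public static bool BelongsToTenant(ClaimsPrincipal user, int requestedTenantId)
    {
        var claim = user.Claims.FirstOrDefault(c => c.Type == "tenantId");
        return claim != null
            && int.TryParse(claim.Value, out var tokenTenantId)
            && tokenTenantId == requestedTenantId;
    }
}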
Let me provide another example of the same scenario that can't be solved with JWT.
Assume a Customer wants to create an Order, and our system wants to check whether the customer exists while creating the order.
As the Order and Customer services are separate, and we want minimal dependencies between them, there are multiple solutions to this problem:
Create the Order in a "validating" state, and on the OrderCreated event check the customer's validity and update the order's state to "valid" (see the sketch after this list).
Check for the customer before creating the order (this is not the right way, as it creates a dependency; unless it's very critical, do not do it).
Let the order be created, and have whoever does the final check on the order before delivery verify the customer and remove invalid orders.
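Here is a hedged sketch of the first option, the "validating state" flow; every type name below (OrderState, ICustomerDirectory, IOrderStore) is illustrative:

public enum OrderState { Validating, Valid, Rejected }

public class Order
{
    public int Id { get; init; }
    public int CustomerId { get; init; }
    public OrderState State { get; set; } = OrderState.Validating;
}

public interface ICustomerDirectory { bool Exists(int customerId); }
public interface IOrderStore { Order Load(int id); void Save(Order order); }

// Reacts to the OrderCreated event: the order was already accepted in the
// Validating state, so the Customer service being slow or down only delays
// validation instead of blocking order creation.
public class OrderCreatedHandler
{
    private readonly ICustomerDirectory customers;
    private readonly IOrderStore orders;

    public OrderCreatedHandler(ICustomerDirectory customers, IOrderStore orders)
    {
        this.customers = customers;
        this.orders = orders;
    }

    public void Handle(int orderId)
    {
        var order = orders.Load(orderId);
        order.State = customers.Exists(order.CustomerId)
            ? OrderState.Valid
            : OrderState.Rejected;
        orders.Save(order);
    }
}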
I am going to write a new endpoint to unlock the domain object, something like:
../domainObject/{id}/unlock
As I apply TDD, I have started to write an API test first. When the test fails, I am going to start writing Integration and Unit tests and implement the real code.
In the API test, I need locked domain data in the test fixture setup to test the unlock endpoint that will be created. However, there is no endpoint for locking the domain object on the system (our Quartz jobs lock the data). I mean, I need to create the data by using the database directly.
I know that using the database directly in an API test is not best practice; if you need test data, you should call the API too, e.g.
../domainObject/{id}/lock
Should this scenario be an exception in this case? Or is there any other practice should I follow?
Thanks.
There is no good or bad practice here; it's all about how much you value end-to-end testing of the system, including the database.
Testing the DB part will require a little more infrastructure, because you'll have to either use an in-memory database for faster test runs, or set up a full-fledged permanent test DB in your dev environment. When doing the latter, it might be a good idea to have a separate test suite for end-to-end tests that runs less frequently than your normal test suite, because it will inevitably be slower.
In that scenario, you'll have preexisting test data always present in the DB and a locked object can be one of them.
If you don't care about all this, you can stub the data store abstraction (repository, DAO or whatever) to return a canned locked object.
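For the stubbing option, a minimal sketch using Moq and xUnit (the framework choice and all type names, DomainObject, IDomainObjectRepository, UnlockService, are assumptions):

using Moq;
using Xunit;

public class DomainObject
{
    public int Id { get; set; }
    public bool IsLocked { get; set; }
}

public interface IDomainObjectRepository
{
    DomainObject GetById(int id);
    void Save(DomainObject obj);
}

public class UnlockService
{
    private readonly IDomainObjectRepository repo;
    public UnlockService(IDomainObjectRepository repo) => this.repo = repo;

    public void Unlock(int id)
    {
        var obj = repo.GetById(id);
        obj.IsLocked = false;
        repo.Save(obj);
    }
}

public class UnlockTests
{
    [Fact]
    public void Unlock_ClearsTheLock()
    {
        // The stub returns a canned locked object, so no Quartz job or
        // direct database insert is needed to arrange the fixture.
        var locked = new DomainObject { Id = 7, IsLocked = true };
        var repo = new Mock<IDomainObjectRepository>();
        repo.Setup(r => r.GetById(7)).Returns(locked);

        new UnlockService(repo.Object).Unlock(7);

        Assert.False(locked.IsLocked);
        repo.Verify(r => r.Save(locked), Times.Once);
    }
}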
The Problem
Say I've got a cool REST resource /account.
I can create new accounts
POST /account
{accountName:"matt"}
which might produce some json response like:
{account:"/account/matt", accountName:"matt", created:"November 5, 2013"}
and I can look up accounts created within a date range by calling:
GET /account?created-range-start="June 01, 2013"&created-range-end="December 25, 2013"
which might also produce something like:
{accounts: [{account:"/account/matt", accountName:"matt", created:"November 5, 2013"}, {...}, ...]}
Now, let's say I want to set up some sample data and write some tests against the GET /account resource within some specified creation date range.
For example I want to somehow insert the following accounts into the system
name=account1, created=January 1, 2010
name=account2, created=January 2, 2010
name=account3, created=December 29, 2010
name=account4, created=December 30, 2010
then call
GET /account?created-range-start="January 2, 2010"&created-range-end="December 29, 2010"
and verify that only accounts 2 and 3 are returned.
How should I insert these sample accounts to write my tests?
Possible Solutions
1) I could use inversion of control and allow the user to specify the creation date for new accounts.
POST /account
{account:"matt", created="June 01, 2013"}
However, even if the created field were optional, I don't like this approach because I may not want to allow my users to set the creation date of their account. I surely need to be able to do it for testing, but having that functionality as part of the public API seems wrong to me. Maybe I want to give a $5 credit to anyone who joined prior to some particular day; if users can specify their creation date, they can game the system. Not good.
2) I could add one or more testing configuration resources
PUT /account/creationDateTimestampProvider
{provider="DefaultProvider"}
or
PUT /account/creationDateTimestampProvider
{provider="FixedDateProvider", date="June 01, 2013"}
This approach affords me the ability to lock down these resources with security constraints so that only my test context can call them, but it also necessarily has side effects on the system that may become a pain to manage, especially if I have a bunch of backdoor configuration resources.
3) I could interact directly with the database circumventing the REST api altogether to set my sample data.
INSERT INTO ACCOUNTS ...
GET /account?...
However, this can put the system into states that the REST API itself would never allow, and as the DB model evolves, maintaining these SQL scripts might also become a pain.
So... how do I test my GET /account resource? Is there another way I'm not thinking of that is more elegant?
There are a lot of ways to do this, and you've come up with some solid (though maybe not perfect for your situation) solutions.
In the setup for the test, I would spin up an in-memory database like HSQLDB (there are others) and do the inserts. The test configuration will inject the appropriate database configuration into your service provider class. Run the tests, and then shut the database down on teardown.
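HSQLDB is a Java example; by analogy, here is a hedged C# sketch of the same idea using EF Core's in-memory provider, with AccountsContext and Account as hypothetical stand-ins for your persistence types:

using System;
using System.Linq;
using Microsoft.EntityFrameworkCore;
using Xunit;

public class Account
{
    public int Id { get; set; }
    public string Name { get; set; }
    public DateTime Created { get; set; }
}

public class AccountsContext : DbContext
{
    public AccountsContext(DbContextOptions<AccountsContext> options) : base(options) { }
    public DbSet<Account> Accounts => Set<Account>();
}

public class AccountQueryTests : IDisposable
{
    private readonly AccountsContext db;

    public AccountQueryTests()
    {
        // Setup: spin up an isolated in-memory database and seed it
        var options = new DbContextOptionsBuilder<AccountsContext>()
            .UseInMemoryDatabase(Guid.NewGuid().ToString())
            .Options;
        db = new AccountsContext(options);
        db.Accounts.Add(new Account { Name = "account1", Created = new DateTime(2010, 1, 1) });
        db.Accounts.Add(new Account { Name = "account2", Created = new DateTime(2010, 1, 2) });
        db.SaveChanges();
    }

    [Fact]
    public void FindsOnlyAccountsInsideTheRange()
    {
        var hits = db.Accounts
            .Where(a => a.Created >= new DateTime(2010, 1, 2)
                     && a.Created <= new DateTime(2010, 12, 29))
            .ToList();

        Assert.Single(hits);
        Assert.Equal("account2", hits[0].Name);
    }

    // Teardown: shut the database down
    public void Dispose() => db.Dispose();
}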
This post provides a good example at least for the persistence side of things.
Incidentally, do not change the API of your service just to help facilitate a test. Maybe I misunderstood and you aren't doing that anyway, but I thought I would mention it just in case.
Hope that helps.
For what it's worth, these days I'm primarily using the second approach for most of my system-level (black-box) tests.
I create backdoor admin/test APIs with security requirements that only my system tests can satisfy. These superpower APIs allow me to seed data. I try to limit their scope as much as possible so they are not overly coupled to specific implementation details, yet are flexible enough to specify whatever is needed for the desired seed data.
The reason I prefer this approach to the database solution that Vidya provided is that my tests aren't coupled to the specific data storage technology. If I decide to switch from Mongo to Dynamo or something like that, using an admin API frees me from having to update all of my tests; instead I only need to update the admin API implementation.
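As an illustration of that approach, a hypothetical seeding endpoint in ASP.NET Core; the route, the "SystemTestOnly" policy, and all type names are assumptions:

using System;
using Microsoft.AspNetCore.Authorization;
using Microsoft.AspNetCore.Mvc;

public record SeedAccountRequest(string Name, DateTime Created);

public interface IAccountStore
{
    void Insert(string name, DateTime created);
}

[ApiController]
[Route("admin/test/accounts")]
[Authorize(Policy = "SystemTestOnly")] // only the system-test client satisfies this policy
public class TestSeedController : ControllerBase
{
    private readonly IAccountStore accounts;
    public TestSeedController(IAccountStore accounts) => this.accounts = accounts;

    // Unlike the public POST /account, this backdoor accepts a creation
    // date, so tests can seed historical data without the public API
    // ever exposing that ability.
    [HttpPost]
    public IActionResult Seed(SeedAccountRequest request)
    {
        accounts.Insert(request.Name, request.Created);
        return NoContent();
    }
}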
I'm trying to wrap my head around CQRS. I'm drawing from the code example provided here. Please be gentle I'm very new to this pattern.
I'm looking at a logon scenario. I like this scenario because it's not really demonstrated in any examples I've read. In this case I do not know the aggregate id of the user, or even whether there is one, as all I start with is a username and password.
In the fohjin example, events are always fired from the domain (if needed), and the command handler calls some method on the domain. However, if a user logon is invalid, I have no domain object to call anything on. Also, most if not all of the base Command/Event classes defined in the fohjin project pass around an aggregate id.
In the case of the event LogonFailure I may want to update a LogonAudit report.
So my question is: how to handle commands that do not resolve to a particular aggregate? How would that flow?
public void Execute(UserLogonCommand command)
{
    // user looked up by username somehow; should I query the report
    // database to resolve the username to an id?
    User user = null;
    if (user == null || user.Password != command.Password)
    {
        // What to do here? I want to raise an event somehow that
        // doesn't target a specific user.
    }
    else
    {
        user.LogonSuccessful();
    }
}
You should take into account that in most cases CQRS and DDD are suitable for only some parts of the system. It is very uncommon to model an entire system with CQRS concepts; it fits best in the parts with a complex business domain, and I wouldn't call logging a user in a particularly complex business scenario. In fact, in most cases it's not business-related at all. The actual business domain starts once the user is already identified.
Another thing to remember is that, due to eventual consistency, it is extremely beneficial to check as much as we can using only the query side, without even creating any commands/events.
Assuming, however, that the information about successful/failed user log-ins is meaningful, I'd model your scenario with the following steps:
User provides name and password
Name/password is validated against some kind of query database
When the provided credentials are valid, RegisterValidUserCommand(userId) is executed, which results in a proper event
When the provided credentials are not valid, RegisterInvalidCredentialsCommand(providedUserName) is executed, which results in a proper event
The point is that checking user credentials is not necessarily part of the business domain.
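A hedged sketch of those steps; IUserQueries, ICommandBus and the read model are illustrative assumptions, and the command names mirror the steps above:

using System;

public record RegisterValidUserCommand(Guid UserId);
public record RegisterInvalidCredentialsCommand(string ProvidedUserName);

public interface ICommandBus { void Send(object command); }

public class UserReadModel
{
    public Guid Id { get; init; }
    public string Password { get; init; } // real code would store and compare a hash
}

public interface IUserQueries { UserReadModel FindByUserName(string userName); }

public class LogonService
{
    private readonly IUserQueries queries;   // query side, no aggregate involved
    private readonly ICommandBus commands;

    public LogonService(IUserQueries queries, ICommandBus commands)
    {
        this.queries = queries;
        this.commands = commands;
    }

    public void Logon(string userName, string password)
    {
        // Validate against the query database first
        var user = queries.FindByUserName(userName);

        if (user != null && user.Password == password)
            commands.Send(new RegisterValidUserCommand(user.Id));
        else
            commands.Send(new RegisterInvalidCredentialsCommand(userName));
    }
}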
That said, there is another related concept, in which not every command or event needs to be business-related; thus it is possible to handle commands that don't require aggregates to be loaded.
For example, you may want to change data that is informational only and in no way affects the business concepts of your system, like information about a person's sex (once again, assuming it has no business meaning).
In that case, when you handle SetPersonSexCommand, there's actually no need to load an aggregate, as that information doesn't even have to live on entities; instead you create a PersonSexSetEvent, register it, and publish it so the query side can project it to the screen/report.
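And a sketch of that aggregate-less case; IEventStore and IEventPublisher are illustrative, not the fohjin project's actual types:

using System;

public record SetPersonSexCommand(Guid PersonId, string Sex);
public record PersonSexSetEvent(Guid PersonId, string Sex);

public interface IEventStore { void Register(object evt); }
public interface IEventPublisher { void Publish(object evt); }

public class SetPersonSexCommandHandler
{
    private readonly IEventStore store;
    private readonly IEventPublisher publisher;

    public SetPersonSexCommandHandler(IEventStore store, IEventPublisher publisher)
    {
        this.store = store;
        this.publisher = publisher;
    }

    public void Execute(SetPersonSexCommand command)
    {
        // No aggregate is loaded: the data is informational only, so the
        // handler just records the event and hands it to the query side.
        var evt = new PersonSexSetEvent(command.PersonId, command.Sex);
        store.Register(evt);
        publisher.Publish(evt);
    }
}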