I have a very specific situation in an integration test.
I'm developing a REST API composed of a few microservices using Spring Boot. Some of those services basically expose CRUD operations, to be accessed by a UI application or consumed by internal validations/queries.
All the database manipulation is done through procedures by a legacy library (no JPA), and I'm using a non-standard database. I know that best practice says not to use real databases, but in this scenario I cannot see how to use a dummy database at test time (like DbUnit or H2). With that in mind:
1 - Is it ok to hit the real database in an integration test?
If 1 is ok, I have another question:
Usually we do not change the data state in unit/integration tests, and the tests should be independent of each other.
However, in my case, the only place I learn an entity's id is in the response of the POST method, which makes it difficult to implement tests for the GET/PUT/DELETE methods. Of course, in the GET/PUT/DELETE tests I can first insert and then perform the other operation, but then, at the end, the database will be in a different state than it was at the beginning of the test. So my other question is:
2 - How can I return the database to the same state it was in before the tests?
I know this is a very specific situation, but I would really appreciate any help in finding an elegant way of testing this scenario.
Thanks in advance.
You should ask the question differently: is it an acceptable risk to run tests against your production DB?
Meaning: if your tests only uncover problems in your code, everybody will be happy.
But if you mess up and the database gets seriously corrupted, the whole site needs to be taken down, the backup fails on the first attempt... and your business goes offline for two days. How do you think your manager will like that?
Long story short: it depends. If you can contain the risks, yes, sure. But if not, look for other alternatives. At the very least, make sure that your manager understands what you are doing.
Integration tests are fine, and a must I would say, as long as you don't run them in a production environment. They allow you to test the overall application and how you are handling responses, serializations, and deserializations. Your test cases should handle what you expect to have in your production database, and every test should be isolated: whatever you create in a test case you must delete afterwards, returning the database to its original state, otherwise you might have clashing test cases. Run integration tests against a local database or a dedicated testing database.
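A minimal sketch of that create-then-clean-up discipline, assuming JUnit 5 and Spring Boot's TestRestTemplate; the /customers endpoint and the Customer record are invented for illustration:

import static org.junit.jupiter.api.Assertions.assertEquals;

import java.util.ArrayList;
import java.util.List;

import org.junit.jupiter.api.AfterEach;
import org.junit.jupiter.api.Test;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.test.context.SpringBootTest;
import org.springframework.boot.test.web.client.TestRestTemplate;

@SpringBootTest(webEnvironment = SpringBootTest.WebEnvironment.RANDOM_PORT)
class CustomerCrudIT {

    record Customer(Long id, String name) {}   // hypothetical DTO

    @Autowired
    private TestRestTemplate rest;

    // ids created by the current test, so tear-down knows what to remove
    private final List<Long> createdIds = new ArrayList<>();

    @Test
    void getReturnsWhatPostCreated() {
        // POST first: its response is the only place the generated id is known
        Customer created = rest.postForObject("/customers",
                new Customer(null, "Alice"), Customer.class);
        createdIds.add(created.id());

        Customer fetched = rest.getForObject("/customers/" + created.id(), Customer.class);
        assertEquals("Alice", fetched.name());
    }

    @AfterEach
    void cleanUp() {
        // delete everything this test inserted, restoring the original state
        createdIds.forEach(id -> rest.delete("/customers/" + id));
        createdIds.clear();
    }
}

If your API offers no DELETE, the same clean-up can instead call the legacy delete procedure directly in the tear-down.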
You can specify the in-memory H2 database for interface integration testing and populate it as needed for specific tests. This is useful in situations where having a real database on your Jenkins or similar CI system doesn't make sense. It really depends on what you are testing, i.e. end-to-end integration or finer-grained integration.
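For example, a test profile along these lines points Spring Boot at an in-memory H2 instance; the file location and property names follow the usual Spring Boot conventions, and the test class opts in with @ActiveProfiles("test"):

# src/test/resources/application-test.properties
spring.datasource.url=jdbc:h2:mem:testdb;DB_CLOSE_DELAY=-1
spring.datasource.driver-class-name=org.h2.Driver
spring.datasource.username=sa
spring.datasource.password=

One caveat for the scenario in the question: stored procedures written for a non-standard database rarely port to H2 unchanged, so this only helps where the SQL surface is compatible.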
Related
We are currently using a direct DB connection to query MongoDB from our scripts and retrieve the required data.
Is it advisable / best practice to make the data retrieval from the DB a microservice?
It does until it doesn't :)
A service needs to get its data from somewhere, and a database is a good start. If you have high loads you may find that you need to add a cache in the middle; see this post from Instagram engineering: https://instagram-engineering.com/thundering-herds-promises-82191c8af57d
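A minimal sketch of the promise-based caching idea that post describes, in Java with an invented loadUserFromDb call: concurrent callers asking for the same key share one in-flight future instead of stampeding the database.

import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

class UserCache {

    record User(String id, String name) {}   // hypothetical entity

    // one future per key: later callers join the request already in flight
    private final ConcurrentMap<String, CompletableFuture<User>> inFlight =
            new ConcurrentHashMap<>();

    CompletableFuture<User> findUser(String id) {
        return inFlight.computeIfAbsent(id, key ->
                CompletableFuture.supplyAsync(() -> loadUserFromDb(key)));
    }

    private User loadUserFromDb(String id) {
        // placeholder for the actual MongoDB query
        return new User(id, "name-for-" + id);
    }
}

A production version would also evict completed entries and drop failed futures, so errors and stale data aren't cached forever.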
edit (after comment)
Generally speaking, a service should own its database, and other services shouldn't access another service's database directly, only via its API. The idea is to keep services autonomous and enable them to evolve independently.
Depending on the size of the microservice, that's not always practical, since it can make the overhead of having the service outweigh the utility it provides (I call these nanoservices). Also, if you have a lot of services you don't want to allow each one to talk to any other (not even via the DB), since you'd just get a huge mess. The way I see it, there should be clear logical boundaries (services or microservices), and then within each such logical service you may find that it makes sense to have more than one "part" (which I call aspects), e.g. because they have different scaling needs or different suitable technologies. When you set things up this way, aspects can access the same database while services shouldn't (and you can still tame the chaos :))
One last thing to think about: who said an API is only a REST API? You can add views on top of the data that belongs to another service, and as long as you treat those like an API (security, versioning, etc.) you can have other services access them as well.
So it occurred to me that it would be helpful in the development of my backend to have a managed sandbox, like a development environment, without having to set up a separate database. To be more specific, I'm using PostgreSQL and Node.js, but I doubt that makes a difference.
So my question is: how do services such as PayPal commonly implement a "sandbox" for developers who use their API to play with, separate from their real data? In my case all I want is a database sandbox that operates separately from the main database for the backend developers. My first idea is to tag every row with a sandbox or production id, but that seems inefficient. Is there another way to implement this idea?
I am a bit confused about testing with the NUnit framework. Below is my scenario for a web application.
1. Create a ticket.
2. Assign the ticket to a user.
3. The user can either work on the ticket or forward it for manager approval.
4. Once the manager has approved it, he will work on that ticket.
5. Close the ticket.
How do I create test cases for this in the NUnit framework? Below are a few questions.
Should I write code to create a ticket? Can we insert data into the database using the NUnit framework?
If a ticket is created, should we capture that ticket number and assign it to some user?
Should we write code to assign it to a user for approval?
I am not sure how to write NUnit tests for workflow logic.
When you write unit tests you usually write them so that they test one thing only. When testing a workflow you would typically split it up into several unit tests. In your scenario each point is a good candidate for a unit test, except for number 3, which should be split into at least two tests (one per branch).
Should I write code to create a ticket? Can we insert data into the database using the NUnit framework?
It depends on your implementation. If you need a ticket for your tests then you have to create it first. No, you cannot use the NUnit framework to insert data into the database; that is not the kind of problem the framework is intended to solve. Typically, when writing unit tests, you want to avoid accessing external resources like a database, so try to write the code so that you don't have to.
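A minimal sketch of that approach, with invented Ticket/TicketRepository/TicketService names; it is shown in Java with JUnit (the test stack used elsewhere on this page), but the same shape carries over directly to C# with NUnit:

import static org.junit.jupiter.api.Assertions.assertEquals;

import java.util.HashMap;
import java.util.Map;

import org.junit.jupiter.api.Test;

class TicketServiceTest {

    record Ticket(int number, String assignee) {}           // hypothetical entity

    interface TicketRepository {                             // abstraction over the database
        void save(Ticket ticket);
        Ticket find(int number);
    }

    // in-memory fake: the test never touches a real database
    static class InMemoryTicketRepository implements TicketRepository {
        private final Map<Integer, Ticket> store = new HashMap<>();
        public void save(Ticket t) { store.put(t.number(), t); }
        public Ticket find(int number) { return store.get(number); }
    }

    static class TicketService {                             // logic under test
        private final TicketRepository repo;
        TicketService(TicketRepository repo) { this.repo = repo; }

        void assign(int number, String user) {
            Ticket t = repo.find(number);
            repo.save(new Ticket(t.number(), user));
        }
    }

    @Test
    void assigningATicketRecordsTheAssignee() {
        TicketRepository repo = new InMemoryTicketRepository();
        repo.save(new Ticket(42, null));                     // arrange: the ticket the test needs

        new TicketService(repo).assign(42, "alice");         // act

        assertEquals("alice", repo.find(42).assignee());     // assert
    }
}

The point is that the service logic never needs a real database; the fake stands in for it, so the test creates exactly the data it needs and leaves no state behind.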
If a ticket is created, should we capture that ticket number and assign it to some user?
Should we write code to assign it to a user for approval?
This depends on how you have implemented the system. If you need this to run your tests, then yes.
I am trying to determine whether I am able to inject test case information at run time and still leverage the SoapUI tool. I understand that I can create test cases in the GUI, but is this my only option?
Background info, if interested: currently I am working on creating an automation framework at my company. We have web page testing now, and SOAP testing will be added soon. As many of these tests could (at some point in the future, as I am told by the architect) be run from both a web page and SOAP, I think it's best to store the test cases in some format (JSON, YAML, etc.) to document all the test cases and then inject them into test steps at run time.
However, my company enjoys working with SoapUI. I've used the tool and created test cases, assertions, et al. in the GUI (of course), but I cannot find any documentation which suggests that, instead of defining the test cases this way, I could inject the test information at run time (similar to what you can do with the wsdl2java Apache tool). Can this be done with testrunner? That way I could reuse the test cases. Is this possible? Does this even make sense? I just want to attempt to incorporate a tool I've been asked to use.
Any thoughts are greatly appreciated!
Here is an example of what data may look like:
Partner : [
    Organization : [
        Company Name:
        Company URL:
    ]
    Contact Information : [
        Name:
        Address:
    ]
]
As I stated below in a comment, I know that in the SoapUI GUI I can create a test suite, a test case, and test steps. But I want to store the test step information in a different place so I can use the test steps for different kinds of tests.
Your question is way too broad for me to even attempt a complete answer.
You use the SoapUI GUI to create the tests. Your data can be stored, and read by SoapUI, in Excel, a database, a flat file, generated dynamically, whatever you want. You can run everything using the testrunner from the command line, or via the Maven plugin from Jenkins.
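For instance, a command line along these lines runs a single case from a saved project and passes it a project property that a Groovy step inside the test could read to locate the external data; the suite, case, property, and file names here are placeholders:

testrunner.sh -s "PartnerSuite" -c "CreatePartner" -PdataFile=/data/partners.json my-soapui-project.xml

Here -s and -c select the suite and case, and -P sets a project property; see the testrunner documentation for the full flag list.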
Seriously, spend some time with the documentation.
Let me start by saying: this is my first post here, this is a bit lengthy, and I haven't done Windows Forms development in years... with that in mind, please excuse me if this isn't directly a programming question, and please bear with me, as I really need the help!
I have been asked to develop a Windows Forms app for our company that talks to a central (local area network) Linux server hosting a PostgreSQL database. The app is to allow users to authenticate themselves into the system and thereafter conduct the usual transactions with the PG database. Ordinarily, I would propose writing a WebForms app against Mono, but the clients need to utilise local resources such as USB peripheral devices, so that is out of the question. While it might not seem clear, my questions are highlighted below:
Dilemma #1:
The application is meant to be always connected. How should I structure my DAL/BLL - Should this reside on the server or with the client?
Dilemma #2:
I have been reading up on Client Application Services (CAS), and it seems like a great fit for authentication, as everything is exposed via URIs. I know that a .NET data provider exists for PostgreSQL, but I'm not too sure whether CAS will work at all on a Linux (Debian) server. Believe me, I would get my hands dirty and try it myself, but I need to come up with a logical design first before resources are allocated to me for "trial purposes"!
Dilemma #3:
If the DAL/BLL is to reside on the server, is there any way I can create data services and expose only these services to authenticated clients? There is a (security) requirement whereby a connection string with a username and password for the database cannot be present on any client machine, even if security on the database side is quite rigid. I'm guessing that the only way for this to work would be to create the various CRUD data service methods, expose them from an ASP.NET app, and have the Windows Forms app request or persist data through the ASP.NET app (via a URI), getting back a result set or value. Would I be correct in assuming this? Should I be looking into WCF Data Services? And will WCF work with a non-SQL Server database?
Thank you for taking the time out to read this, but know that I am desperately seeking any advice on this! THANKS A MILLION!!!!
EDIT:
I am also considering using NHibernate as my ORM.
Some parts of your questions are complicated and beyond my expertise. However, in general you can do almost anything you put effort into, the CAP theorem and the like aside.
DAL/BLL logic in general can reside in any of the tiers. I put a lot of it in my database and some in the middle tier; however, that is to allow re-use in different environments, which may or may not be a goal for you. The thing is, I would think carefully through the separation-of-concerns issues here and what sort of centralization of logic you want. The further back you push the logic, the more re-usable it becomes, but this is not always a free tradeoff.
I am not entirely familiar with CAS, but it looked like AJAX kinds of stuff from what I saw on the MSDN web site. That could be wrong, but if it is right, then you have an issue in that such requests may be stateless, and this could be a problem if you need a constant connection.
On the whole, based on what you are saying, it sounds cleanest to do a two-tier rather than a three-tier app and have the DAL/BLL sit on the client, possibly supported by stored procedures on the server. You can then set PostgreSQL up to authenticate against whatever you use on your network (KRB5, if you have AD, is what I would recommend). This simplifies your data access, and it allows you to control permissions based on authentication against the database. Since you can authenticate users based on AD, you can then set permissions accordingly.
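As a rough sketch of that setup, a pg_hba.conf entry for GSSAPI/Kerberos authentication looks something like the following; the subnet and realm are placeholders to adapt to your network:

# pg_hba.conf: accept Kerberos (GSSAPI) credentials from the LAN
host    all    all    192.168.1.0/24    gss    include_realm=0    krb5_realm=EXAMPLE.COM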
One important consideration is going to be the number of connections. PostgreSQL does have some places where every current connection must be checked and iterated through, and connection startup and tear-down overhead can be significant in some cases. So one important decision will involve connection pooling. Whether or not you use connection pooling to boost performance will depend on what you are doing, but I have seen cases where PostgreSQL handled 600 connections without serious problems.
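For illustration, here is a minimal pooled DataSource sketch using HikariCP, shown in Java since pooling looks similar across stacks; the host, credentials, and pool size are invented. In .NET, the Npgsql provider exposes equivalent pooling knobs through the connection string.

import javax.sql.DataSource;

import com.zaxxer.hikari.HikariConfig;
import com.zaxxer.hikari.HikariDataSource;

class Pool {
    static DataSource create() {
        HikariConfig config = new HikariConfig();
        config.setJdbcUrl("jdbc:postgresql://db.example.com:5432/app"); // placeholder host/db
        config.setUsername("app_user");                                  // placeholder credentials
        config.setPassword("secret");
        config.setMaximumPoolSize(20);   // caps concurrent PostgreSQL connections
        return new HikariDataSource(config);
    }
}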