Grails integration tests with managed MongoDB

I'm currently using MongoDB and I want to be able to run integration and functional tests on any machine (currently a dedicated build server, and in the future a CI server).
The main problem is that I have to be able to check the MongoDB installation (and install it if it is not present), start a MongoDB instance on startup, and shut it down once the process has finished.
There's an existing question here, Embedded MongoDB when running integration tests, that suggests installing a Gradle or Maven plugin.
This Gradle plugin https://github.com/sourcemuse/GradleMongoPlugin/ can do it, but then I would have to manage my dependencies with it; I already tried that. The problem with this approach is not Gradle itself, but that when I tried it I lost all the benefits of my IDE (STS, IntelliJ).
Has anyone managed to do this?
If someone has configured Gradle with a Grails project without losing the Grails perspective, I would appreciate that help too!
Thanks!
Trygve.

I have recently created a Grails plugin for this purpose: https://github.com/grails-plugins/grails-embedded-mongodb
It is currently a snapshot, but I plan to publish a release this week.

I've had good results using an in-memory Mongo server for integration tests. It runs fast and doesn't require starting a separate Mongo server or dealing with special Grails or Maven config. This means the tests run equally well with any JUnit test runner, i.e. within any IDE or build system. No extra setup required.
In-memory Mongo example
I have also used the "flapdoodle" embedded Mongo server for testing. It takes a different approach in that it downloads and executes a separate process running a real Mongo instance. I have found that this mechanism has more moving parts and seems to be overkill when all I really want to do is verify that my app works correctly against a Mongo server.
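For illustration, here is a minimal sketch of what the in-memory approach looks like with the Fongo library (assuming the com.github.fakemongo:fongo test dependency; the collection and field names are made up):

import com.github.fakemongo.Fongo;
import com.mongodb.BasicDBObject;
import com.mongodb.DB;
import com.mongodb.DBCollection;

public class InMemoryMongoExample {
    public static void main(String[] args) {
        // Fongo fakes the Mongo Java driver entirely in memory;
        // no external process is downloaded or started.
        Fongo fongo = new Fongo("in-memory test server");
        DB db = fongo.getDB("testdb");

        DBCollection users = db.getCollection("users");
        users.insert(new BasicDBObject("name", "Bobby"));

        // The usual driver API works against the fake store.
        System.out.println(users.count()); // prints 1
    }
}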

Better late than never -
Unfortunately I found that Fongo does not address all of my requirements; most notably, $eval is not implemented, so you cannot run integration tests with migration tools such as Mongeez.
I settled on EmbedMongo, which I am using in my Spock/Geb integration tests via JUnit ExternalResource rules. Gary is right that a real managed DB comes with more moving parts, but I'd rather take that risk than rely on a mock implementation. So far it has worked quite well, give or take an unclean database shutdown during test suite teardown, which fortunately does not impact the tests. You would use the rules as follows:
@Integration(applicationClass = Application)
@TestFor(SomeGrailsArtifact) // this will inject grailsApplication
class SomeGrailsArtifactFunctionalSpec extends Specification {

    @Shared @ClassRule
    EmbedMongoRule embedMongoRule = new EmbedMongoRule(grailsApplication)

    @Rule
    ResetDatabaseRule resetDatabaseRule = new ResetDatabaseRule(embedMongoRule.db)
    ...
For the sake of completeness, these are the rule implementations:
EmbedMongoRule.groovy
import org.junit.rules.ExternalResource

import com.mongodb.MongoClient
import com.mongodb.MongoException

import de.flapdoodle.embed.mongo.MongodProcess
import de.flapdoodle.embed.mongo.MongodStarter
import de.flapdoodle.embed.mongo.config.IMongodConfig
import de.flapdoodle.embed.mongo.config.MongodConfigBuilder
import de.flapdoodle.embed.mongo.config.Net
import de.flapdoodle.embed.mongo.distribution.Version
import de.flapdoodle.embed.process.runtime.Network

/**
 * Rule for {@code EmbedMongo}, a managed full-fledged MongoDB. The first time
 * this rule is used, it will download the current production MongoDB release.
 * It spins the server up before tests and tears it down afterwards.
 *
 * @author Michael Jess
 */
public class EmbedMongoRule extends ExternalResource {

    private def mongoConfig
    private def mongodExecutable

    public EmbedMongoRule(grailsApplication) {
        if (!grailsApplication) {
            throw new IllegalArgumentException(
                "Got null grailsApplication; have you forgotten to supply it to the rule?\n" +
                "\n" +
                "@Integration(applicationClass = Application)\n" +
                "@TestFor(MyGrailsArtifact) // will inject grailsApplication\n" +
                "class MyGrailsArtifactSpec extends ... {\n" +
                "\n" +
                "\t...\n" +
                "\t@Shared @ClassRule EmbedMongoRule embedMongoRule = new EmbedMongoRule(grailsApplication)\n" +
                "\t...\n" +
                "}")
        }
        mongoConfig = grailsApplication.config.grails.mongodb
    }

    @Override
    protected void before() throws Throwable {
        try {
            MongodStarter starter = MongodStarter.getDefaultInstance()
            IMongodConfig mongodConfig = new MongodConfigBuilder()
                    .version(Version.Main.PRODUCTION)
                    .net(new Net(mongoConfig.port, Network.localhostIsIPv6()))
                    .build()
            mongodExecutable = starter.prepare(mongodConfig)
            MongodProcess mongod = mongodExecutable.start()
        } catch (IOException e) {
            throw new IllegalStateException("Unable to start embedded mongo", e)
        }
    }

    @Override
    protected void after() {
        mongodExecutable.stop()
    }

    /**
     * Returns a new {@code DB} for the managed database.
     *
     * @return A new DB
     * @throws IllegalStateException If an {@code UnknownHostException}
     *         or a {@code MongoException} occurs
     */
    public def getDb() {
        try {
            return new MongoClient(mongoConfig.host, mongoConfig.port).getDB(mongoConfig.databaseName)
        } catch (UnknownHostException | MongoException e) {
            throw new IllegalStateException("Unable to retrieve MongoClient", e)
        }
    }
}
ResetDatabaseRule.groovy - currently not working, since GORM ignores the grails.mongodb.databaseName parameter as of org.grails.plugins:mongodb:4.0.0 (Grails 3.x)
import org.junit.rules.ExternalResource

/**
 * Rule that will clear whatever Mongo {@code DB} is provided.
 * More specifically, all non-system collections are dropped from the database.
 *
 * @author Michael Jess
 */
public class ResetDatabaseRule extends ExternalResource {

    /**
     * Prefix identifying system tables
     */
    private static final String SYSTEM_TABLE_PREFIX = "system"

    private def db

    /**
     * Create a new database reset rule for the specified datastore.
     *
     * @param db The {@link DB} instance to reset.
     */
    ResetDatabaseRule(db) {
        this.db = db
    }

    @Override
    protected void before() throws Throwable {
        db.collectionNames
            .findAll { !it.startsWith(SYSTEM_TABLE_PREFIX) }
            .each { db.getCollection(it).drop() }
    }
}

Related

Artifact to use for @BsonIgnore

I am attempting to abstract the core objects for a service I am writing out into a library. I have all the other artifacts I need straightened out, but unfortunately I cannot find an artifact for @BsonIgnore. I am using @BsonIgnore to ignore some methods that would otherwise be added to the BSON document, as the implementing service writes these objects to MongoDB.
For context, the service is written using Quarkus, and Mongo objects are handled with Panache:
implementation 'io.quarkus:quarkus-mongodb-panache'
The library I am creating is mostly just a simple POJO library; nothing terribly fancy in the Gradle build.
I have found this on Maven Central: https://mvnrepository.com/artifact/org.bson/bson?repo=novus-releases but it does not seem to be a normal release, and it doesn't solve the issue.
In case it is useful, here is my code:
@Data
public abstract class Historied {

    /** The list of history events */
    private List<HistoryEvent> history = new ArrayList<>(List.of());

    /**
     * Adds a history event to the set held, to the front of the list.
     * @param event The event to add
     * @return This historied object.
     */
    @JsonIgnore
    public Historied updated(HistoryEvent event) {
        if (this.history.isEmpty() && !EventType.CREATE.equals(event.getType())) {
            throw new IllegalArgumentException("First event must be CREATE");
        }
        if (!this.history.isEmpty() && EventType.CREATE.equals(event.getType())) {
            throw new IllegalArgumentException("Cannot add another CREATE event type.");
        }
        this.getHistory().add(0, event);
        return this;
    }

    @BsonIgnore
    @JsonIgnore
    public HistoryEvent getLastHistoryEvent() {
        return this.getHistory().get(0);
    }

    @BsonIgnore
    @JsonIgnore
    public ZonedDateTime getLastHistoryEventTime() {
        return this.getLastHistoryEvent().getTimestamp();
    }
}
This is the correct dependency: https://mvnrepository.com/artifact/org.mongodb/bson/4.3.3 (check for your specific version).
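In Gradle terms (matching the build snippet above; verify the version against what your Quarkus BOM already pulls in) that would be:
implementation 'org.mongodb:bson:4.3.3'
The annotation then resolves from org.bson.codecs.pojo.annotations.BsonIgnore.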

How to set MongoDB socket keep-alive in a Spring Boot application?

In Spring Boot, if we want to connect to MongoDB, we can either create a configuration class for MongoDB or put the datasource in application.properties.
I am following the second way.
I am getting this error:
"Timeout while receiving message; nested exception is com.mongodb.MongoSocketReadTimeoutException: Timeout while receiving message"
spring.data.mongodb.uri = mongodb://mongodb0.example.com:27017/admin
I get this error if I have not used my app for 6-7 hours and then hit any controller that retrieves data from MongoDB. After one or two tries I am able to get the data.
Question - Is this normal MongoDB behaviour?
In my case it is closing the socket after some hours.
I read some blogs saying you can set socket-keep-alive so the connection pool will not be closed.
In a Spring Boot MongoDB connection, we can pass options in the URI, like:
spring.data.mongodb.uri = mongodb://mongodb0.example.com:27017/admin?replicaSet=test&connectTimeoutMS=300000
So I want to set the socket-keep-alive option for spring.data.mongodb.uri, like replicaSet here.
I searched the official site but could not find any such option.
You can achieve this by providing a MongoClientOptions bean. Spring Data's MongoAutoConfiguration will pick this MongoClientOptions bean up and use it from then on:
@Bean
public MongoClientOptions mongoClientOptions() {
    return MongoClientOptions.builder()
            .socketKeepAlive(true)
            .build();
}
Also note that the socket-keep-alive option is deprecated (and defaults to true) since Mongo driver version 3.5 (used by Spring Data since version 2.0.0 of spring-data-mongodb).
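If you would still rather steer this from the URI alone, the driver does accept related connection-string options such as socketTimeoutMS and maxIdleTimeMS; a sketch (the values here are illustrative, not recommendations):
spring.data.mongodb.uri = mongodb://mongodb0.example.com:27017/admin?socketTimeoutMS=300000&maxIdleTimeMS=60000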
You can also pass this option using a MongoClientOptionsFactoryBean:
@Bean
public MongoClientOptions mongoClientOptions() {
    try {
        final MongoClientOptionsFactoryBean bean = new MongoClientOptionsFactoryBean();
        bean.setSocketKeepAlive(true);
        bean.afterPropertiesSet();
        return bean.getObject();
    } catch (final Exception e) {
        throw new BeanCreationException(e.getMessage(), e);
    }
}
Here is an example of this configuration, extending AbstractMongoConfiguration:
@Configuration
public class DataportalApplicationConfig extends AbstractMongoConfiguration {

    // @Value: inject property values into components
    @Value("${spring.data.mongodb.uri}")
    private String uri;

    @Value("${spring.data.mongodb.database}")
    private String database;

    /**
     * Configure the MongoClient with the uri
     *
     * @return MongoClient.class
     */
    @Override
    public MongoClient mongoClient() {
        // Seed the builder from the options bean so socketKeepAlive is applied
        return new MongoClient(new MongoClientURI(uri, MongoClientOptions.builder(mongoClientOptions())));
    }

    @Override
    protected String getDatabaseName() {
        return database;
    }
}

Swagger documentation with JAX-RS Jersey 2 and Grizzly

I have implemented a REST web service (its function is not relevant) using JAX-RS. Now I want to generate its documentation with Swagger. I have followed these steps:
1) In build.gradle I add the dependencies I need:
compile 'org.glassfish.jersey.media:jersey-media-moxy:2.13'
2) I document my code with Swagger annotations
3) I hook up Swagger in my Application subclass:
public class ApplicationConfig extends ResourceConfig {

    /**
     * Main constructor
     * @param addressBook a provided address book
     */
    public ApplicationConfig(final AddressBook addressBook) {
        register(AddressBookService.class);
        register(MOXyJsonProvider.class);
        register(new AbstractBinder() {
            @Override
            protected void configure() {
                bind(addressBook).to(AddressBook.class);
            }
        });
        register(io.swagger.jaxrs.listing.ApiListingResource.class);
        register(io.swagger.jaxrs.listing.SwaggerSerializers.class);

        BeanConfig beanConfig = new BeanConfig();
        beanConfig.setVersion("1.0.2");
        beanConfig.setSchemes(new String[]{"http"});
        beanConfig.setHost("localhost:8282");
        beanConfig.setBasePath("/");
        beanConfig.setResourcePackage("rest.addressbook");
        beanConfig.setScan(true);
    }
}
However, when I go to my service at http://localhost:8282/swagger.json, I get this output.
You can check my public repo here.
It's times like this (when there is no real explanation for the problem) that I throw in an ExceptionMapper<Throwable>. Often with server-related exceptions there are no mappers to handle the exception, so it bubbles up to the container and we get a useless 500 status code and maybe some useless message from the server (as you are seeing from Grizzly).
import javax.ws.rs.WebApplicationException;
import javax.ws.rs.core.Response;
import javax.ws.rs.ext.ExceptionMapper;

public class DebugMapper implements ExceptionMapper<Throwable> {

    @Override
    public Response toResponse(Throwable exception) {
        exception.printStackTrace();
        if (exception instanceof WebApplicationException) {
            return ((WebApplicationException) exception).getResponse();
        }
        return Response.serverError().entity(exception.getMessage()).build();
    }
}
Then just register it with the application:
public ApplicationConfig(final AddressBook addressBook) {
    ...
    register(DebugMapper.class);
}
When you run the application again and hit the endpoint, you will now see a stacktrace with the cause of the exception:
java.lang.NullPointerException
at io.swagger.jaxrs.listing.ApiListingResource.getListingJson(ApiListingResource.java:90)
If you look at the source code for ApiListingResource.java:90, you will see
Swagger swagger = (Swagger) context.getAttribute("swagger");
The only thing here that could cause the NPE is the context, which, scrolling up, you will see is the ServletContext. Now here's the reason it's null: in order for there even to be a ServletContext, the app needs to run in a Servlet environment. But look at your setup:
HttpServer server = GrizzlyHttpServerFactory
        .createHttpServer(uri, new ApplicationConfig(ab));
This does not create a Servlet container; it only creates an HTTP server. You already have the dependency required to create the Servlet container (jersey-container-grizzly2-servlet); you just need to make use of it. So instead of the previous configuration, you should do:
ServletContainer sc = new ServletContainer(new ApplicationConfig(ab));
HttpServer server = GrizzlyWebContainerFactory.create(uri, sc, null, null);
// you will need to catch IOException or add a throws clause
See the API for GrizzlyWebContainerFactory for other configuration options.
Now if you run it and hit the endpoint again, you will see the Swagger JSON. Note that the response from the endpoint is only the JSON; it is not the documentation interface. For that you need the Swagger UI, which can interpret the JSON.
Thanks for the MCVE project BTW.
Swagger fixed this issue in 1.5.7. It was Issue 1103; the fix was rolled in last February. peeskillet's answer will still work, but so will the OP's original setup now.
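With that fix in place, bumping the Swagger dependency should be enough; in Gradle terms, something like the following (artifact name assumed from the Jersey 2 setup here, so check the one your build actually uses):
compile 'io.swagger:swagger-jersey2-jaxrs:1.5.7'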

loading DB driver in Global.beforeStart

I want to implement some DB cleanup at each startup (full schema deletion and recreation while in the dev environment).
I'm doing it in Global.beforeStart. And because it runs literally before start, I need to load the DB drivers myself.
The code is:
@Override
public void beforeStart(Application app) {
    System.out.println("IN beforeStart");
    try {
        Class.forName("org.postgresql.Driver");
        System.out.println("org.postgresql.Driver LOADED");
    } catch (ClassNotFoundException cnfe) {
        System.out.println("NOT LOADED org.postgresql.Driver");
        cnfe.printStackTrace();
    }
    ServerConfig config = new ServerConfig();
    config.setName("pgtest");
    DataSourceConfig postgresDb = new DataSourceConfig();
    postgresDb.setDriver("org.postgresql.Driver");
    postgresDb.setUsername("postgres");
    postgresDb.setPassword("postgrespassword");
    postgresDb.setUrl("postgres://postgres:postgrespassword@localhost:5432/TotoIntegration2");
    config.setDataSourceConfig(postgresDb);
    config.setDefaultServer(true);
    EbeanServer server = EbeanServerFactory.create(config);
    SqlQuery countTables = Ebean.createSqlQuery("select count(*) from pg_stat_user_tables;");
    Integer numTables = countTables.findUnique().getInteger("count");
    System.out.println("numTables = " + numTables);
    if (numTables > 2) {
        DbHelper.cleanSchema();
    }
    System.out.println("beforeStart EXECUTED");
    //DbHelper.cleanSchema();
}
Class.forName("org.postgresql.Driver") passes without exceptions, but then I get:
com.avaje.ebeaninternal.server.lib.sql.DataSourceException: java.sql.SQLException: No suitable driver found for postgres
on the line EbeanServer server = EbeanServerFactory.create(config);
Why?
Use onStart instead; it is executed right after beforeStart, but it is the natural candidate for operating on the database (in production mode it doesn't wait for the first request). The javadoc for both:
/**
 * Executed before any plugin - you can set-up your database schema here, for instance.
 */
public void beforeStart(Application app) {
}

/**
 * Executed after all plugins, including the database set-up with Evolutions and the EBean wrapper.
 * This is a good place to execute some of your application code to create entries, for instance.
 */
public void onStart(Application app) {
}
Note that you don't need to include the DB config again here; you can use your models the same way you do in a controller.
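A minimal sketch of the suggested change, reusing the query and DbHelper from the question (untested, assuming Play 2's Java GlobalSettings and the Ebean wrapper):

import com.avaje.ebean.Ebean;
import com.avaje.ebean.SqlQuery;
import play.Application;
import play.GlobalSettings;

public class Global extends GlobalSettings {

    @Override
    public void onStart(Application app) {
        // Evolutions and the Ebean wrapper are already initialised at this point,
        // so the default server exists and no manual driver loading is needed.
        SqlQuery countTables = Ebean.createSqlQuery("select count(*) from pg_stat_user_tables");
        Integer numTables = countTables.findUnique().getInteger("count");
        if (numTables > 2) {
            DbHelper.cleanSchema();
        }
    }
}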

Play framework 2 + JPA with multiple persistenceUnit

I'm struggling with Play and JPA, trying to use two different javax.persistence.Entity models associated with two different persistence units (needed to be able to connect to different DBs, for example an Oracle and a MySQL database).
The problem comes from the transaction, which is always bound to the default JPA persistence unit (see the jpa.default option).
Here are two controller actions which show the solution I found to manually select the persistence unit:
package controllers;

import models.Company;
import models.User;
import play.db.jpa.JPA;
import play.db.jpa.Transactional;
import play.mvc.Controller;
import play.mvc.Result;

public class Application extends Controller {

    // This action runs with the "other" persistence unit
    @Transactional(value = "other")
    public static Result test1() {
        JPA.em().persist(new Company("MyCompany"));

        // This transaction is run with the "defaultPersistenceUnit"
        JPA.withTransaction(new play.libs.F.Callback0() {
            @Override
            public void invoke() throws Throwable {
                JPA.em().persist(new User("Bobby"));
            }
        });
        return ok();
    }

    // This action runs with the default persistence unit
    @Transactional
    public static Result test2() {
        JPA.em().persist(new User("Ryan"));
        try {
            JPA.withTransaction("other", false, new play.libs.F.Function0<Void>() {
                public Void apply() throws Throwable {
                    JPA.em().persist(new Company("YourCompany"));
                    return null;
                }
            });
        } catch (Throwable throwable) {
            throw new RuntimeException(throwable);
        }
        return ok();
    }
}
This solution doesn't seem very "clean". I'd like to know if there is a better way that avoids manually switching the transaction.
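In the meantime, the try/catch boilerplate can at least be centralised in a small helper; a minimal sketch (the MultiJpa name is hypothetical), built on the same JPA.withTransaction overload used in test2():

import play.db.jpa.JPA;

public class MultiJpa {

    // Runs the given block in a transaction of the named persistence unit,
    // wrapping the checked Throwable declared by JPA.withTransaction.
    public static <T> T withUnit(final String unitName, final play.libs.F.Function0<T> block) {
        try {
            return JPA.withTransaction(unitName, false, block);
        } catch (Throwable t) {
            throw new RuntimeException(t);
        }
    }
}

With that, test2() shrinks to a single MultiJpa.withUnit("other", ...) call around the persist.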
To show the full setup, I created a repo with a working sample application which shows how I configured the project:
https://github.com/cm0s/play2-jpa-multiple-persistenceunit
Thank you for your help
I met the same problem too. Much of the advice out there concerns the PersistenceUnit annotation or getJPAConfig, but neither seems to work in the Play framework.
I found a method which works well in my projects; maybe you can try it:
playframework2 how to open multi-datasource configuration with jpa
Good luck!