Cannot run tests on H2 in-memory database; they run on PostgreSQL instead - postgresql

(I have multiple related questions, so I have highlighted them in bold.)
I have a play app.
play: 2.6.19
scala: 2.12.6
h2: 1.4.197
postgresql: 42.2.5
play-slick/play-slick-evolutions: 3.0.1
slick-pg: 0.16.3
I am adding a test for a DAO, and I believe it should run against an H2 in-memory database that is created when the tests start and cleared when they end. However, my test always runs against the PostgreSQL database I configure and use:
# application.conf
slick.dbs.default.profile="slick.jdbc.PostgresProfile$"
slick.dbs.default.db.driver="org.postgresql.Driver"
slick.dbs.default.db.url="jdbc:postgresql://localhost:5432/postgres"
Here is my test test/dao/TodoDAOImplSpec.scala.
package dao

import play.api.inject.guice.GuiceApplicationBuilder
import play.api.test.{Injecting, PlaySpecification, WithApplication}

class TodoDAOImplSpec extends PlaySpecification {

  val conf = Map(
    "slick.dbs.test.profile" -> "slick.jdbc.H2Profile$",
    "slick.dbs.test.db.driver" -> "org.h2.Driver",
    "slick.dbs.test.db.url" -> "jdbc:h2:mem:test;MODE=PostgreSQL;DB_CLOSE_DELAY=-1;DATABASE_TO_UPPER=FALSE"
  )

  val fakeApp = new GuiceApplicationBuilder().configure(conf).build()
  //val fakeApp = new GuiceApplicationBuilder().configure(inMemoryDatabase()).build()
  //val fakeApp = new GuiceApplicationBuilder().configure(inMemoryDatabase("test")).build()

  "TodoDAO" should {
    "returns current state in local pgsql table" in new WithApplication(fakeApp) with Injecting {
      val todoDao = inject[TodoDAOImpl]
      val result = await(todoDao.index())
      result.size should_== 0
    }
  }
}
For fakeApp, I have tried all three variants, but none works as expected: my test still runs against my local PostgreSQL table (which contains 3 todo items), so the test fails.
What I have tried/found:
First, inMemoryDatabase() simply returns a Map("db.<name>.driver" -> "org.h2.Driver", "db.<name>.url" -> "jdbc:h2:mem:play-test-xxx"), which looks very similar to my own conf map. However, there is one main difference:
inMemoryDatabase uses db.<name>.xxx keys while my conf map uses slick.dbs.<name>.db.xxx. Which one is correct?
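To make the two shapes concrete, here is my own side-by-side (illustrative only; the play-test-xxx name is generated by Play, and I am assuming play-slick reads only the slick.dbs.* tree):

# what inMemoryDatabase("test") produces (Play JDBC key space)
db.test.driver = "org.h2.Driver"
db.test.url = "jdbc:h2:mem:play-test-xxx"

# what my conf map sets (play-slick key space)
slick.dbs.test.profile = "slick.jdbc.H2Profile$"
slick.dbs.test.db.driver = "org.h2.Driver"
slick.dbs.test.db.url = "jdbc:h2:mem:test;MODE=PostgreSQL;DB_CLOSE_DELAY=-1"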
Second, renaming the conf map's keys to "slick.dbs.default.profile", "slick.dbs.default.db.driver" and "slick.dbs.default.db.url" throws an error:
[error] p.a.d.e.DefaultEvolutionsApi - Unknown data type: "status_enum"; SQL statement:
ALTER TABLE todo ADD COLUMN status status_enum NOT NULL [50004-197] [ERROR:50004, SQLSTATE:HY004]
cannot create an instance for class dao.TodoDAOImplSpec
caused by @79bg46315: Database 'default' is in an inconsistent state!
This finding is interesting: is it related to my use of the PostgreSQL ENUM type and slick-pg? (See the slick-pg issue with h2.) Does it mean this is the right configuration for running H2 in-memory tests? If so, the question becomes: how do I fake a PostgreSQL ENUM in H2?
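One idea I want to try next (an untested sketch; status_enum is the type from my evolutions, and H2's INIT URL parameter runs a statement on each new connection): create a look-alike type in H2 before the evolutions run, so that the ALTER TABLE statement parses:

-- emulate the PostgreSQL ENUM with an H2 domain
CREATE DOMAIN IF NOT EXISTS status_enum AS VARCHAR(255);

e.g. wired into the test URL as jdbc:h2:mem:test;MODE=PostgreSQL;INIT=CREATE DOMAIN IF NOT EXISTS status_enum AS VARCHAR(255)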
Third, following this thread, I run sbt '; set javaOptions += "-Dconfig.file=conf/application-test.conf"; test' with a test configuration file conf/application-test.conf:
include "application.conf"
slick.dbs.default.profile="slick.jdbc.H2Profile$"
slick.dbs.default.db.driver="org.h2.Driver"
slick.dbs.default.db.url="jdbc:h2:mem:test;MODE=PostgreSQL;DB_CLOSE_DELAY=-1;DATABASE_TO_UPPER=FALSE"
Not surprisingly, I get the same error as the 2nd trial.
It seems to me that the 2nd and 3rd trials point in the right direction (I will keep working on this). But why must the database name be set to default? Is there a better approach?

In Play the default database is named default. You could, however, change that to any other database name you want, but then you need to add the database name to the configuration keys as well. For example, I want to have a comment database that has the User table:
CREATE TABLE comment.User (
    id int(250) NOT NULL AUTO_INCREMENT,
    username varchar(255),
    comment varchar(255),
    PRIMARY KEY (id)
);
Then I need the configuration to connect to it (added to the application.conf file):
db.comment.url="jdbc:mysql://localhost/comment"
db.comment.username=admin-username
db.comment.password="admin-password"
You could have the test database for your testing, as mentioned above, and use it within your test; a sketch of wiring to a named database follows.
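A minimal sketch, assuming the comment database configured above and Play's play.api.db.Database API (CommentDAO and countUsers are hypothetical names):

import javax.inject.Inject
import play.api.db.Database
import play.db.NamedDatabase

// hypothetical DAO bound to the named "comment" database
class CommentDAO @Inject()(@NamedDatabase("comment") db: Database) {
  def countUsers(): Int = db.withConnection { conn =>
    val rs = conn.createStatement().executeQuery("SELECT COUNT(*) FROM User")
    rs.next()
    rs.getInt(1)
  }
}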
Database tests locally: why not have the same database locally as you have in production? The production data is not there, and running the tests locally does not touch the production database, so why do you need an extra database?
Inconsistent state: this happens when the SQL you wrote changes the state of the database, for example by creating a new table or deleting one.
Also, status_enum is obviously not recognizable as a MySQL type. Try the commands you want to use in a MySQL console if you are not sure about them.

Related

Flyway - Flyway Schema migration failed

I have successfully configured Spring Boot in a new project to work with Flyway.
I migrated the Postgres database from version 0001.0 to 0008.0.
I manually altered a migration script locally, and now the Flyway migration fails.
Sample error message:
org.springframework.beans.factory.BeanCreationException: Error
creating bean with name 'flywayInitializer' defined in class path
resource
[org/springframework/boot/autoconfigure/flyway/FlywayAutoConfiguration$FlywayConfiguration.class]:
Invocation of init method failed; nested exception is
org.flywaydb.core.api.FlywayException: Validate failed: Migration
checksum mismatch for migration version 0006.0
How can I alter database tables without breaking the Flyway scripts recorded in flyway_schema_history?
For example, I need to change a table name using an ALTER command, but I want the Flyway migration script to execute without failing.
Any suggestions are kindly appreciated.
Note: I don't want to remove the script entries from the flyway_schema_history table.
There are a few ways to do this:
1) Create a new script file with an incremented version. Put your DDL commands to alter the table in this file, then run the migration. For instance, see the sketch below.
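Assuming a hypothetical rename in the question's versioning scheme, the whole new file could be just:

-- V0009.0__rename_table.sql (hypothetical file and table names)
ALTER TABLE old_table RENAME TO new_table;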
2) If you don't want to delete the entry from the schema_version table, you can change the checksum value in that table. To calculate the checksum, use the following method, copied from org.flywaydb.core.internal.resolver.sql.SqlMigrationResolver (you can pass null for the resource parameter):
// imports needed: java.io.BufferedReader, java.io.IOException, java.io.StringReader, java.util.zip.CRC32
/**
 * Calculates the checksum of this string.
 *
 * @param str The string to calculate the checksum for.
 * @return The CRC-32 checksum of the bytes.
 */
/* private -> for testing */
static int calculateChecksum(Resource resource, String str) {
    final CRC32 crc32 = new CRC32();
    BufferedReader bufferedReader = new BufferedReader(new StringReader(str));
    try {
        String line;
        while ((line = bufferedReader.readLine()) != null) {
            crc32.update(line.getBytes("UTF-8"));
        }
    } catch (IOException e) {
        String message = "Unable to calculate checksum";
        if (resource != null) {
            message += " for " + resource.getLocation() + " (" + resource.getLocationOnDisk() + ")";
        }
        throw new FlywayException(message, e);
    }
    return (int) crc32.getValue();
}
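Once you have the new checksum, updating the stored value is a single statement (hypothetical version number; the history table is called schema_version or flyway_schema_history depending on your Flyway version):

UPDATE flyway_schema_history SET checksum = <calculated value> WHERE version = '0006.0';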
3) If you are using Flyway Pro version 5+, you can roll back the migration: https://flywaydb.org/getstarted/undo.
The answers here are outdated but can still help you.
It sounds like you might be in one of two situations:
You want to re-run a versioned migration. This isn't really how Flyway works; as Kartik has suggested, create a new versioned migration to alter the table.
A migration file has been modified and you want to leave it that way and run new ones (e.g. 0009.0). In this situation you can try the following (see the sketch after this list):
Run repair, which will recalculate the checksums (among other things).
Turn off the validateOnMigrate option, so that modified migration files do not fail the migration.
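A sketch of the two options (spring.flyway.* is the Spring Boot 2 property prefix; the CLI command also exists as a Maven/Gradle task):

# run once from the Flyway CLI
flyway repair

# or, in application.properties, skip validation of modified files
spring.flyway.validate-on-migrate=false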
To solve this error locally without dropping your whole database:
Fix the migration error that caused the root problem.
Disconnect your database server.
Open the flyway_schema_history table, which is created automatically.
Delete the rows with the versions that are causing the mismatch problem.
Open the tables that have columns depending on the conflicting migrations and drop those columns (if needed).
Run your database server again with the new migrations.

Spring Boot 2 - H2 Database - @SpringBootTest - Failing on org.h2.jdbc.JdbcSQLException: Table already exists

Unable to test Spring Boot & H2 with a schema.sql script that creates a table.
What's happening is that I have the following properties set:
spring.datasource.driver-class-name=org.h2.Driver
spring.datasource.initialization-mode=always
spring.datasource.username=sa
spring.datasource.password=
spring.datasource.platform=h2
spring.datasource.url=jdbc:h2:mem:city;MODE=PostgreSQL;DB_CLOSE_DELAY=-1;DB_CLOSE_ON_EXIT=FALSE
spring.jpa.database-platform=org.hibernate.dialect.H2Dialect
spring.jpa.generate-ddl=false
spring.jpa.hibernate.ddl-auto=update
spring.jpa.show-sql=true
and I expect the tables to be created from schema.sql. The application works fine when I run gradle bootRun. However, when I run the tests using gradle test, my Repository test passes, but the one for my Service fails, stating that it is trying to create the table when the table already exists:
Exception raised:
Caused by: org.h2.jdbc.JdbcSQLException: Table "CITY" already exists;
SQL statement:
CREATE TABLE city ( id BIGINT NOT NULL, country VARCHAR(255) NOT NULL, map VARCHAR(255) NOT NULL, name VARCHAR(255) NOT NULL, state VARCHAR(2555) NOT NULL, PRIMARY KEY (id) ) [42101-196]
at org.h2.message.DbException.getJdbcSQLException(DbException.java:345)
at org.h2.message.DbException.get(DbException.java:179)
at org.h2.message.DbException.get(DbException.java:155)
at org.h2.command.ddl.CreateTable.update(CreateTable.java:117)
at org.h2.command.CommandContainer.update(CommandContainer.java:101)
at org.h2.command.Command.executeUpdate(Command.java:260)
at org.h2.jdbc.JdbcStatement.executeInternal(JdbcStatement.java:192)
at org.h2.jdbc.JdbcStatement.execute(JdbcStatement.java:164)
at com.zaxxer.hikari.pool.ProxyStatement.execute(ProxyStatement.java:95)
at com.zaxxer.hikari.pool.HikariProxyStatement.execute(HikariProxyStatement.java)
at org.springframework.jdbc.datasource.init.ScriptUtils.executeSqlScript(ScriptUtils.java:471)
... 105 more
The code is set up and ready to recreate the scenario; the README has all the information: https://github.com/tekpartner/learn-spring-boot-data-jpa-h2
If the tests are run individually, they pass. I think the problem is due to schema.sql being executed twice against the same database. It fails the second time as the tables already exist.
As a workaround, you could set spring.datasource.continue-on-error=true in application.properties.
Another option is to add the @AutoConfigureTestDatabase annotation where appropriate so that a unique embedded database is used for each test.
There are 2 other possible solutions you could try (see the sketch after this list):
Add a DROP TABLE IF EXISTS [tablename] in your schema.sql before you create the table.
Change the statement from CREATE TABLE to CREATE TABLE IF NOT EXISTS.
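Using the CREATE TABLE from the stack trace above, the second option would look like this (a sketch; column definitions copied from the trace):

CREATE TABLE IF NOT EXISTS city (
    id BIGINT NOT NULL,
    country VARCHAR(255) NOT NULL,
    map VARCHAR(255) NOT NULL,
    name VARCHAR(255) NOT NULL,
    state VARCHAR(2555) NOT NULL,
    PRIMARY KEY (id)
);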

export data from mongo to hive

My input: a collection ("demo1") in MongoDB (version 3.4.4).
My output: my data imported into a Hive database ("demo2") (version 1.2.1.2.3.4.7-4).
Purpose: create a connector between Mongo and Hive.
Error:
Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.DDLTask. com/mongodb/util/JSON
I tried 2 solutions following these steps (but the error remains):
1) I create a local collection in Mongo (via Robomongo) connected to Docker.
2) I upload these versions of the jars and add them in Hive:
ADD JAR /home/.../mongo-hadoop-hive-2.0.2.jar;
ADD JAR /home/.../mongo-hadoop-core-2.0.2.jar;
ADD JAR /home/.../mongo-java-driver-3.4.2.jar;
Unfortunately the error doesn't change. Unsure which versions are right for my export, I also try these:
ADD JAR /home/.../mongo-hadoop-hive-1.3.0.jar;
ADD JAR /home/.../mongo-hadoop-core-1.3.0.jar;
ADD JAR /home/.../mongo-java-driver-2.13.2.jar;
3) I create an external table
CREATE EXTERNAL TABLE demo2
(
    id INT,
    name STRING,
    password STRING,
    email STRING
)
STORED BY 'com.mongodb.hadoop.hive.MongoStorageHandler'
WITH SERDEPROPERTIES('mongo.columns.mapping'='{"id":"_id","name":"name","password":"password","email":"email"}')
TBLPROPERTIES('mongo.uri'='mongodb://localhost:27017/local.demo1');
Error returned in Hive:
Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.DDLTask. com/mongodb/util/JSON
How can I resolve this problem?
Copying the correct jar files (mongo-hadoop-core-2.0.2.jar, mongo-hadoop-hive-2.0.2.jar, mongo-java-driver-3.2.2.jar) on ALL the nodes of the cluster did the trick for me.
Other points to take care of:
Follow all steps mentioned here religiously - https://github.com/mongodb/mongo-hadoop/wiki/Hive-Usage#installation
Adhere to the requirements given here - https://github.com/mongodb/mongo-hadoop#requirements
Other useful links
https://github.com/mongodb/mongo-hadoop/wiki/FAQ#i-get-a-classnotfoundexceptionnoclassdeffounderror-when-using-the-connector-what-do-i-do
https://groups.google.com/forum/#!topic/mongodb-user/xMVoTSePgg0

Slick does not commit transaction on AWS postgres DB

We have an issue with Slick 3.0 and a PostgreSQL database (9.5) on AWS, where Slick opens a transaction but does not seem to commit it, leaving an open connection "idle in transaction"; the futures never complete.
We are just calling db.run(saveRow(row).transactionally.asTry), where:
private def saveRow(row: Row): DBIO[Int] = {
  val getExistingRow: DBIO[Option[Row]] = table.filter(_.id === row.id).result.headOption
  getExistingRow.flatMap((existingRow: Option[Row]) =>
    existingRow match {
      case None => table += row
      case Some(existing) => // renamed from `row` to avoid shadowing the method argument
        table.filter(_.id === existing.id).map(_.property).update(row.property)
    }
  )
}
Now even the first select statement created by getExistingRow does not complete. It works locally, but when running in production on AWS, the prepared statements are never committed. Logs from slick.backend show only:
#1: Start transaction
#2: StreamingInvokerAction$HeadOptionAction [select ...]
We would expect to get the following further logs from slick.backend (we see them locally), but we don't see them.
#3: SingleInsertAction [insert into ...]
#4: Commit
Is there some configuration setting I need to provide for this to work on the side of Slick, HikariCP or the postgres database that could fix this? Any other ideas on how to fix this issue?
It was actually caused by using the Play execution context. When switching to the Scala default execution context, it worked fine.
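A minimal sketch of what that change amounts to (db, Row and saveRow are the names from the question; the implicit ExecutionContext matters because the DBIO flatMap inside saveRow needs one):

import scala.concurrent.{ExecutionContext, Future}
import scala.util.Try

// before: an implicit Play execution context was in scope around this code;
// after: the plain Scala default context
implicit val ec: ExecutionContext = scala.concurrent.ExecutionContext.Implicits.global

def save(row: Row): Future[Try[Int]] =
  db.run(saveRow(row).transactionally.asTry)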

Evolution not seen

I have begun a Play Scala project and gave it a database by uncommenting the following in application.conf:
default.driver = org.h2.Driver
default.url = "jdbc:h2:mem:play"
Then, I created an evolution in conf/evolutions/default/1.sql:
# --- !Ups
CREATE SEQUENCE task_id_seq;
CREATE TABLE task (
    id integer NOT NULL DEFAULT nextval('task_id_seq'),
    label varchar(255)
);

# --- !Downs
DROP TABLE task;
DROP SEQUENCE task_id_seq;
So, when I access localhost:9000, I expect to see the message Database 'default' needs evolution! However, it does not appear.
I am running in development mode, and I don't have the setting evolutionplugin=disabled anywhere in my project.
Why is the evolution not seen?
You need to add evolutions to the list of your library dependencies, as described in the docs: https://www.playframework.com/documentation/2.4.0/Evolutions.
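A minimal sketch of the relevant build.sbt lines (the jdbc and evolutions values come from the Play sbt plugin; evolutions needs a JDBC connection pool to run against):

libraryDependencies ++= Seq(
  jdbc,       // Play's JDBC connection-pool support
  evolutions  // enables the evolutions described in the docs above
)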