Tests using Postgres images via Testcontainers time out

I'm working on a Spring Boot project with a Postgres database backend, where JUnit 5 and Testcontainers are used for integration tests that involve database access.
Testcontainers is set up by modifying the JDBC URL like this:
spring:
  datasource:
    url: jdbc:tc:postgresql:9.6.8:///test
This setup worked fine for many months, but now I've hit a roadblock.
So far there are already 20 integration test classes, and adding another one leads to failing tests with an error that looks like a timeout to me.
When adding the 21st test class, another test (let's call it RandomTest) hangs for a few minutes and then fails with this error:
java.lang.IllegalStateException at DefaultCacheAwareContextLoaderDelegate.java:98
Caused by: org.springframework.beans.factory.BeanCreationException at AbstractAutowireCapableBeanFactory.java:1804
Caused by: org.flywaydb.core.internal.exception.FlywaySqlException at JdbcUtils.java:68
Caused by: java.sql.SQLException at JdbcDatabaseContainer.java:263
Caused by: org.postgresql.util.PSQLException at ConnectionFactoryImpl.java:659
I know it can't be a problem with the test per se, because when I run it individually, there's no problem:
./gradlew test --tests RandomTest
[...]
BUILD SUCCESSFUL in 16s
It may also be noteworthy that I only have this problem when running the tests with Gradle (both locally and on the CI server). I don't see this problem when running them in IntelliJ.
So it looks to me like some kind of resource problem, e.g. the Postgres instance that Testcontainers starts up running out of memory or out of connections, but that's just guessing.
I tried different configuration modifications that I found in the Testcontainers docs:
Running the container in daemon mode like this:
spring:
  datasource:
    url: jdbc:tc:postgresql:9.6.8:///test?TC_DAEMON=true
Disabling Ryuk by setting TESTCONTAINERS_RYUK_DISABLED=true
Starting Ryuk in (un-)privileged mode explicitly with ryuk.container.privileged=true|false (I tried both because I'm not sure what the default is)
None of these had a noticeable impact in terms of my problem.
I'm thinking that maybe we are overusing Testcontainers for too many tests? Should I instead use H2 for most integration tests and use Testcontainers only for a few selected tests to make sure that everything works with the production database?
Or am I missing something?

Okay, it turned out that it actually was a problem with the newly added test.
The test author had added a method that was supposed to clean up the database after the test like this:
@AfterEach
public void afterEach() {
    fooRepository.deleteAll();
    barRepository.deleteAll();
    bazRepository.deleteAll();
}
After removing this, all the tests pass again. I guess this cleanup takes a bit longer than the execution of the test itself, so that the database connection is not released in time for the next test to use it, or something like that.
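If the cleanup itself is ever needed again, something like Spring's JdbcTestUtils might be a cheaper alternative: with Spring Data JPA, deleteAll() loads the entities and deletes them one by one, while JdbcTestUtils issues a single DELETE per table. Just a sketch, assuming plain JPA repositories; the table names are placeholders for the real schema:

import org.junit.jupiter.api.AfterEach;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.jdbc.core.JdbcTemplate;
import org.springframework.test.jdbc.JdbcTestUtils;

class RandomTest {

    @Autowired
    private JdbcTemplate jdbcTemplate;

    @AfterEach
    void cleanUp() {
        // one DELETE statement per table (children before parents),
        // so the connection is released much sooner
        JdbcTestUtils.deleteFromTables(jdbcTemplate, "baz", "bar", "foo");
    }
}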

Related

'No suitable driver found..' on AWS, works locally

I made a Ktor application using Exposed for the database layer, and it works perfectly fine on my desktop. However, when I deploy it on an AWS EC2 instance, I get the following error:
Exposed - Transaction attempt #0 failed: No suitable driver found for jdbc:postgresql://com.com:5432/DBName. Statement(s): null
java.sql.SQLException: No suitable driver found for jdbc:postgresql://
    at java.sql/java.sql.DriverManager.getConnection(DriverManager.java:702)
    at java.sql/java.sql.DriverManager.getConnection(DriverManager.java:228)
    at org.jetbrains.exposed.sql.Database$Companion$connect$10.invoke(Database.kt:206)
    at org.jetbrains.exposed.sql.Database$Companion$connect$10.invoke(Database.kt:206)
    at org.jetbrains.exposed.sql.Database$Companion$doConnect$3.invoke(Database.kt:127)
    at org.jetbrains.exposed.sql.Database$Companion$doConnect$3.invoke(Database.kt:128)
and so on.
Here's the connection:
Database.connect(DB_URL, driver = "org.postgresql.Driver", user = DB_USER, password = DB_PW)
I've tried it with both of the following driver dependencies, but no luck:
implementation("com.impossibl.pgjdbc-ng:pgjdbc-ng:0.8.9")
implementation("org.postgresql:postgresql:42.3.3")
I found potential solutions for Spring Boot (e.g. setting SPRING_DATASOURCE_DRIVER_CLASS_NAME), but I have no clue how to relate this to Ktor/Exposed, if that's even possible.
Never mind, it works now. AWS magic, I don't know.
Edit:
com.impossibl.postgres.jdbc.PGDriver did not work at all, so I tried to switch it, but org.postgresql.Driver also did nothing at first; the logs showed the same error as before.
After a while, AWS's health check switched to OK and it seems to work just fine now.
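For what it's worth, one common cause of "No suitable driver found" appearing only after deployment is a fat jar built without merging service files: DriverManager discovers drivers through META-INF/services/java.sql.Driver, and some packaging setups drop those entries. If the jar is built with the Gradle Shadow plugin, a sketch like this (version number is just an example) would keep them:

plugins {
    id("com.github.johnrengelman.shadow") version "7.1.2"
}

tasks.shadowJar {
    // keep META-INF/services/java.sql.Driver so DriverManager
    // can still discover org.postgresql.Driver at runtime
    mergeServiceFiles()
}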

PostgreSQL "forgets" default schema when closing data source connection

I am running into a very strange issue with Spring Boot and Spring Data: after I manually close a connection, the formerly working application seems to "forget" which schema it's using and complains about missing relations.
Here's the code snippet in question:
try (Connection connection = this.dataSource.getConnection()) {
    ScriptUtils.executeSqlScript(connection, new ClassPathResource("/script.sql"));
}
This code works fine, but after it executes, the application immediately starts throwing errors like the following:
org.postgresql.util.PSQLException: ERROR: relation "some_table" does not exist
Prior to executing the code above, the application works fine (including referencing the table it later complains about). If I remove the try-with-resources block and do not close the Connection, everything also works fine, except that I've now created a resource leak. I have also tried explicitly setting the default schema (public) in the following ways (see the sketch after this list):
In the JDBC URL with the currentSchema parameter
With the spring.datasource.hikari.schema parameter
With the spring.jpa.properties.hibernate.default_schema property
The last does alleviate the issue with respect to Hibernate managed classes, but the issue persists with native queries. I could, of course, make the schema explicit in those queries, but that doesn't seem to address the root issue. Why would closing a connection trigger this behavior?
My environment:
Spring Boot 2.5.1
PostgreSQL 12.7
Thanks to several users above who immediately saw what I did not. The script, adapted from an older pg_dump run, was indeed mucking with the search_path:
SELECT pg_catalog.set_config('search_path', '', false);
Removing that line, and some other unnecessary ones, resolved the problem. Big duh on my part.
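For anyone wondering about the "why": the DataSource here is presumably a Hikari pool (Spring Boot's default), so closing the connection only returns it to the pool rather than physically closing it, and session state such as search_path survives. With search_path emptied, every unqualified table name stops resolving. A minimal illustration in psql, using the table name from the error above:

-- this is what the pg_dump script did to the session
SELECT pg_catalog.set_config('search_path', '', false);
SELECT * FROM some_table;   -- ERROR: relation "some_table" does not exist

-- restoring the default fixes name resolution again
SET search_path TO public;
SELECT * FROM some_table;   -- works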

Persistent Karma test runner with autoWatch=false

I am trying to run Karma via Node (gulp, specifically) persistently in the background, but re-run the tests manually, i.e. I have autoWatch set to false. I start the server with something like:
karma_server = new karma.Server(config.karma)
karma_server.start()
Then elsewhere I would like to trigger the tests running when files are updated outside Karma. The API method that one would expect might work is server.refreshFiles(), but it does not do this.
Internally it seems like executor.schedule() might do the trick, but it seems to be undocumented, private, and inaccessible.
So when autoWatch is turned off, how does one trigger Karma testing with an existing server? I'm sure I must be missing something obvious as otherwise the autoWatch option would always need to be true for the server to be useful.
If you have a server already running you can use the karma runner to communicate with it:
var runner = require('karma').runner,
    karmaConfig = {/* The karma config here */};

runner.run(karmaConfig, callback);
The grunt-karma plugin works like this; you can check it out for more info:
https://github.com/karma-runner/grunt-karma/blob/master/tasks/grunt-karma.js
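If you'd rather not embed the runner API, the same trigger should also work from the command line against an already running server, assuming karma-cli is installed:

# start the server once, without watching
karma start karma.conf.js --no-auto-watch

# later, trigger a test run against that server (e.g. from a gulp task)
karma run karma.conf.js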

Configuring spring-xd to use oracle as job repository

I want to run Spring XD with Oracle (11g), which I already have in my environment. Currently my first concern is the jobs UI (my database has existing data of job executions that were performed by Spring Batch, and I simply want to display the details of those executions).
I'm using spring-xd-1.0.0.M5. I followed the instructions in the reference guide and changed application.yml to the following:
spring:
  datasource:
    url: jdbc:oracle:oci:MY_USERNAME/MYPWD@//orarmydomain.com:1521/myservice
    username: MY_USERNAME
    password: MYPWD
    driverClassName: oracle.jdbc.OracleDriver
  profiles:
    active: default,oracle
I also modified batch-jdbc.properties to have a database configuration similar to the above.
Yet, when I start xd-singlenode.bat (or xd-admin.bat), it seems to ignore my Oracle configuration and still uses the default HSQLDB.
What am I doing wrong?
Thanks
The likely reason is that we did not upgrade the Windows .bat scripts to take advantage of the property overriding via xd-config.yml. If you go into the Unix script for xd-singlenode, you will see that when java is invoked there is an option
-Dspring.config.location=$XD_CONFIG
For now you can hardcode your location of that file; use file: as the prefix.
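For example, the java command line in xd-singlenode.bat could be extended with something like this (the path is just a guess; point it at wherever your xd-config.yml lives):

java ... -Dspring.config.location=file:C:/spring-xd-1.0.0.M5/xd/config/xd-config.yml ...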
Also, the UI right now is very primitive; you will not be able to see many details about the job execution. There are, however, many job-related commands you can execute in the shell, and there is only one gap regarding step execution information compared to what is available via spring-batch-admin.
The issue to watch for this is https://jira.springsource.org/browse/XD-1209 and it is scheduled for the next milestone release.
Let me know how it goes, thanks!
Cheers,
Mark

play framework 2.0 evolutions, how to mark an inconsistent state as resolved in PROD

I have an application developed in Scala with Play 2.0.
It worked successfully locally, but it failed when deployed to Heroku.
The reason for the failure is that locally I was using an H2 database, and since Heroku uses PostgreSQL, I had to change one of the data types from "clob" to "text".
The problem now is that the database on Heroku is in an "inconsistent state", according to the Play 2.0 documentation.
In DEV mode (locally), you can just click on "Mark it as resolved" when the HTML page appears.
How do I "mark it as resolved" in the Heroku PROD environment?
http://www.playframework.com/documentation/2.1.1/Evolutions
PS: because it was a new application, I just deleted the database and re-started.
However, here I am asking what the proper way is to handle evolutions in the PROD environment;
that is, the "Mark it as resolved" issue for PROD is not explained here: http://www.playframework.com/documentation/2.1.1/Evolutions
Although I couldn't find a way to do it via the play command, you can do it by editing the database directly.
Imagine you're trying to go from 5.sql to 6.sql. Here's what you do:
1. Figure out and fix the problem(s) that caused the database to enter an inconsistent state (i.e. manually apply your !Ups and fix all the problems with them).
2. Manually apply your !Downs so that the database is in the state it was in after 5.sql was applied.
3. Go into your database, find the table called play_evolutions, and look at the row with id 6. It should say something like "applying ups" in the state column and have the error message in the last_problem column.
4. Delete the row with id 6 (see the SQL sketch after these steps). This will make Play think you are in the state you were in with 5.sql.
5. Now you should be able to run play -DapplyEvolutions.default=true start to evolve to 6.sql.
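A minimal SQL sketch of steps 3 and 4, assuming the default database and that 6.sql is the failed evolution:

-- inspect the stuck evolution first
SELECT id, state, last_problem FROM play_evolutions WHERE id = 6;

-- then remove it so Play treats 5.sql as the current state
DELETE FROM play_evolutions WHERE id = 6;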
Inconsistent state just means that the evolutions could not be applied, and thus the application is blocked. Update your evolution scripts and re-deploy.