@Scheduled annotation not running the underlying DDL script from Oracle DB, attaching the code below - scheduler

This is my main class; I have added @EnableScheduling here. [Screenshot: this class is running the script]

Related

Run init script on datasource devservice in quarkus?

I have a Quarkus project that uses a PostgreSQL datasource. In production, we create the necessary schemas on the DB manually beforehand.
When I run quarkusDev mode and use the Dev Services, I would therefore like to run an init script on the testcontainer to create the schemas before Liquibase does its migrations, which otherwise fail.
I tried this, without success:
quarkus.datasource.jdbc.url=jdbc:tc:postgresql:13:///quarkus?TC_INITSCRIPT=testcontainer/schema-init.sql
quarkus.datasource.jdbc.driver=org.testcontainers.jdbc.ContainerDatabaseDriver
Nothing got picked up by the Postgres testcontainer.
How can I run an init script on a datasource testcontainer with Quarkus?
As stated here: https://quarkus.io/guides/databases-dev-services#database-vendor-specific-configuration
properties specific to the Testcontainers JDBC driver, such as TC_INITSCRIPT, TC_INITFUNCTION, TC_DAEMON and TC_TMPFS, are not supported.
So this simply does not work.
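As a possible workaround for @QuarkusTest runs (not for Dev Services in quarkusDev mode, and not taken from the guide above), you can start the container yourself and hand its connection details to Quarkus; here is a minimal sketch, assuming a plain Testcontainers PostgreSQLContainer and reusing the image tag and script path from the question:

import java.util.Map;
import io.quarkus.test.common.QuarkusTestResourceLifecycleManager;
import org.testcontainers.containers.PostgreSQLContainer;

public class PostgresTestResource implements QuarkusTestResourceLifecycleManager {

    // Image tag and script path are placeholders taken from the question.
    private static final PostgreSQLContainer<?> POSTGRES =
            new PostgreSQLContainer<>("postgres:13")
                    .withInitScript("testcontainer/schema-init.sql");

    @Override
    public Map<String, String> start() {
        POSTGRES.start();
        // Point the application at the manually managed container.
        return Map.of(
                "quarkus.datasource.jdbc.url", POSTGRES.getJdbcUrl(),
                "quarkus.datasource.username", POSTGRES.getUsername(),
                "quarkus.datasource.password", POSTGRES.getPassword());
    }

    @Override
    public void stop() {
        POSTGRES.stop();
    }
}

The test class would then register this resource with @QuarkusTestResource(PostgresTestResource.class).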

Examining contents of TestContainers

I am using a PostgreSQL Testcontainer to test Liquibase schema migration in Spring Boot. I don't have any repositories. I am wondering if I can see/access the contents of the Testcontainer, and test the schema migration.
Yes, you can access the Docker container spawned by Testcontainers like any other Docker container. Using the JUnit 5 extension or the JUnit 4 rule for Testcontainers will, however, shut the container down after your tests.
You can use the container reusability feature of Testcontainers (in alpha state since 1.12.3) to keep your containers up and running after your tests finish (note that reuse additionally has to be enabled on the developer machine, typically via testcontainers.reuse.enable=true in ~/.testcontainers.properties).
As Testcontainers will launch the container on an ephemeral port, simply execute docker ps and check to which local port the container port is mapped. E.g.:
b0df4733babb postgres:9.6.12 "docker-entrypoint.s…" 19 seconds ago Up 18 seconds 0.0.0.0:32778->5432/tcp inspiring_dewdney
You can now connect to your db on localhost:32778 with e.g. PgAdmin or the database view of IntelliJ IDEA and check your database tables.
The credentials for the access are those you specify in your test:
static PostgreSQLContainer postgreSQLContainer = (PostgreSQLContainer) new PostgreSQLContainer()
.withDatabaseName("differentDatabaseName")
.withUsername("duke")
.withPassword("s3cret")
.withReuse(true);
As a workaround you could also put in a breakpoint at the end of your test, debug the test, and quickly check your database.
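Alternatively, here is a small sketch (not from the answer above) of inspecting the container programmatically from inside a test method, using the container's own connection details; it needs java.sql imports and a throws SQLException clause:

// Lists the tables that the migration created in the public schema.
try (Connection connection = DriverManager.getConnection(
        postgreSQLContainer.getJdbcUrl(),
        postgreSQLContainer.getUsername(),
        postgreSQLContainer.getPassword());
     Statement statement = connection.createStatement();
     ResultSet tables = statement.executeQuery(
         "SELECT table_name FROM information_schema.tables WHERE table_schema = 'public'")) {
    while (tables.next()) {
        System.out.println(tables.getString("table_name"));
    }
}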
UPDATE: If you want to verify the validity of your schema, you can use a Hibernate feature for this:
spring.jpa.hibernate.ddl-auto=validate
This will validate that your Java entity setup matches the underlying database schema on application startup. You can also add this to your production application.properties file as your application won't start if there is a mismatch (e.g. missing table, column).
For this to work in a test, you need to either use @DataJpaTest or use @SpringBootTest so that the whole application context connects to your local container.
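For illustration only (the class and test names are made up), a minimal @SpringBootTest that boots the full context against the Testcontainers-managed datasource, so spring.jpa.hibernate.ddl-auto=validate runs at startup:

import org.junit.jupiter.api.Test;
import org.springframework.boot.test.context.SpringBootTest;

@SpringBootTest
class SchemaValidationTest {

    @Test
    void contextLoads() {
        // Intentionally empty: startup fails if the JPA entities
        // do not match the schema created by the migration.
    }
}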
Find more information here.

How do you include postgresql.conf on docker container when using org.testcontainers

Is it possible to give postgresql testcontainer a custom postgresql.conf file via config?
I have included maven dependency
<dependency>
<groupId>org.testcontainers</groupId>
<artifactId>postgresql</artifactId>
<version>1.10.6</version>
</dependency>
And using 'Database containers launched via JDBC URL scheme' for DB url
As such, I have the following settings in my Spring Boot app:
datasource:
  url: jdbc:tc:postgresql:10-alpine:///databasename
  driver-class-name: org.testcontainers.jdbc.ContainerDatabaseDriver
I need to have a custom setting in postgresql.conf.
Is there a way of pushing postgresql.conf to the docker container started by testcontainers?
EDIT 1
Thanks @vilkg, I did know about the TC_INITSCRIPT option and the SET function, however:
I want a custom setting such as my.key.
ALTER SYSTEM does not work for your own settings, e.g. ALTER SYSTEM SET my.key = 'jehe'; gives the error: Could not execute the SQL command.
Message returned: ERROR: unrecognized configuration parameter "my.key"
I had previously tried SET and ALTER DATABASE, as below:
SET my.key = 'new value 8'; -- sets for current session
ALTER DATABASE test SET my.key = 'new value 8'; -- sets for subsequent sessions
select current_setting('my.key');
PROBLEM IS
when Testcontainers starts the Postgres container and I pass it an init script to run
url: jdbc:tc:postgresql:10-alpine:///databasename?TC_INITSCRIPT=init_pg.sql
and I include the above SQL, it's happy.
I know the setting of my.key is working correctly in this script, because it will fail on the line select current_setting('my.key'); if the other two statements are commented out.
I also know that running it against the db named test is correct (e.g. ALTER DATABASE test), because if I use a different name it fails.
Testcontainers automatically connects the app to the db named test.
So with all of the above I believe the DB is set up nicely and all should be good,
BUT
when I use current_setting('my.key') within application code it fails.
If you want to continue launching the Postgres container using the JDBC URL scheme, Testcontainers can execute an init script for you. The script must be on the classpath, referenced as follows:
jdbc:tc:postgis:9.6://hostname/databasename?TC_INITSCRIPT=somepath/init.sql
The ALTER SYSTEM SET command was introduced in PostgreSQL 9.4, so you could use it in your init script.
Another option would be to start the Postgres container using database container objects and the withCopyFileToContainer() method. Example:
JdbcDatabaseContainer<?> postgisContainer = new PostgisContainerProvider()
        .newInstance()
        .withDatabaseName( POSTGRES_DATABASE_NAME )
        .withUsername( POSTGRES_CREDENTIALS )
        .withPassword( POSTGRES_CREDENTIALS )
        .withCopyFileToContainer( MountableFile.forClasspathResource( "postgresql.conf" ), "/var/lib/postgresql/data" );
EDIT:
If none of the above works, you can reconfigure the Postgres command and pass your custom configuration keys. All you need is to extend PostgreSQLContainer and override the configure() method:
@Override
protected void configure()
{
    setCommand( "postgres -c $your_config_key=$your_config_value" );
}
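For illustration, here is a minimal sketch of such a subclass; the class name and the my.key value are assumptions carried over from the question, not something given in the answer:

import org.testcontainers.containers.PostgreSQLContainer;

// Hypothetical container that passes a custom configuration key to the postgres process.
public class CustomConfigPostgresContainer extends PostgreSQLContainer<CustomConfigPostgresContainer> {

    public CustomConfigPostgresContainer() {
        super("postgres:10-alpine");
    }

    @Override
    protected void configure() {
        super.configure(); // keep the default database/user/password setup
        // Keys containing a dot (like my.key) are accepted by postgres as custom settings.
        setCommand("postgres", "-c", "my.key=jehe");
    }
}

The test would then instantiate new CustomConfigPostgresContainer() instead of new PostgreSQLContainer().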
We have to create our database and the connection's user. This is done by using environment variables from the Docker image. To change postgresql.conf we can use a Dockerfile in which we replace the existing postgresql.conf with the new configuration.
@ClassRule
public static GenericContainer postgresql = new GenericContainer(
        new ImageFromDockerfile("postgresql:10-alpine")
                .withDockerfileFromBuilder(dockerfileBuilder -> {
                    dockerfileBuilder.from("myPostgresql:10-alpine")
                            // root password is mandatory
                            .env("PG_ROOT_PASSWORD", "root_password")
                            .env("PG_DATABASE", "postgres")
                            .env("PG_USER", "postgres")
                            .env("PG_PASSWORD", "postgres");
                })
Next, we have to create a database schema and populate the database. From the image documentation, the directory /docker-entrypoint-initdb.d is scanned at startup and all files with .sh, .sql and .sql.gz extensions are executed. So, we just have to put our files schema.sql and data.sql in this directory. Some parameters can be set by SQL requests against the running database:
.withFileFromClasspath("a_schema.sql", "db/pg/schema.sql")
.withFileFromClasspath("b_data.sql", "db/pg/data.sql"))

Flyway migrations not persistent in H2 embedded database

I'm writing a small web application with Spring Boot and want to use an (embedded) H2 database together with Spring Data JPA and Flyway for database migration.
This is my application.properties:
spring.datasource.url=jdbc:h2:~/database;DB_CLOSE_ON_EXIT=FALSE;DB_CLOSE_DELAY=-1;
spring.datasource.username=admin
spring.datasource.password=admin
spring.datasource.driver-class-name=org.h2.Driver
In the main() method of my @SpringBootApplication class I do the following:
ResourceBundle applicationProperties = ResourceBundle.getBundle("application");
Flyway flyway = new Flyway();
flyway.setDataSource(applicationProperties.getString("spring.datasource.url"), applicationProperties.getString("spring.datasource.username"), applicationProperties.getString("spring.datasource.password"));
flyway.migrate();
I added a script which creates a table USER in the database. Flyway says it is correctly migrated, but if I connect to the database, in schema PUBLIC there is only Flyway's schema_version table listed.
If I add another script, which inserts base data into the USER table, the migration fails, because the table is not present after a restart of my Spring Boot application.
Can anyone tell me if something is missing in my configuration? Or if there is any wrong assumption in my setup...
I don't have enough data about your configuration, but here are some hints:
Hint: the migration files must be part of the directory /db/migration (on the classpath).
Hint: use a naming pattern like V1.0.1__name.sql (two underscores).
Hint: depending on the Flyway version, you should start with an SQL file version greater than 1.0, for example 1.0.1.
Hint: by default, Spring Boot JPA drops your database content if you are using an in-memory database. See http://docs.spring.io/spring-boot/docs/current/reference/html/boot-features-sql.html section 28.3.3.
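For illustration, this is the default layout and naming Flyway expects in a Spring Boot project (the file names here are made up):

src/main/resources/db/migration/V1.0.1__create_user_table.sql
src/main/resources/db/migration/V1.0.2__insert_base_data.sql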

Script EF migration seed from Configuration class

I have EF migrations working nicely, but I also want to generate the sql script for the seed data from my DbMigrationsConfiguration class.
The seed data runs OK when I do Update-Database, but when I do Update-Database -Script I do not get the SQL for the seed inserts. I tried -Verbose on a normal Update-Database, but I do not see the seed statements in the output there either.
Is this possible?
No, it is not possible. The Configuration class is not part of the migration itself - it is the infrastructure executing the migration. You have a single configuration class for all your migrations, and its Seed method is executed after every migration run. You can even use the context for seeding data, and because of that this method runs only after the migration has completed, so it cannot be part of the migration. Only the content of the migration class is scripted.
Whether you are using EF or EF Core, a solution/workaround is to have SSMS generate the seed script for you:
Start with a clean database generated by your DB initializer and seed method. Make sure the data you want scripted is in there.
Using SSMS, right-click the database, go to Tasks > "Generate Scripts...", and follow the wizard. Under Advanced options, be sure to select "Data only" for "Types of data to script".
From the generated script, copy required seed statements over to your target script.
I know it's a bit of an old thread, but here is an answer that could help someone else looking for a solution.
You can use the Migrate.exe supplied by Entity Framework. This will allow you to run the Seed method on the database context.
If you need to run a specific Seed method you can place that in a separate migration config file like this:
Enable-Migrations -MigrationsDirectory "Migrations\ContextA" -ContextTypeName MyProject.Models.ContextA
Command:
Migrate.exe MyAssembly CustomConfig /startupConfigurationFile="..\web.config"
Look for it in the NuGet packages directory: "..\packages\EntityFramework.6.1.3\tools"
You can specify the migration configuration as an argument to it. The CustomConfig class should contain your code-based Seed method. This way you do not require SQL scripts to be generated from the migration.
More info here:
http://www.eidias.com/blog/2014/10/13/initialcreate-migration-and-why-is-it-important
http://www.gitshah.com/2014/06/how-to-run-entity-framework-migrations.html
Using this solution, you do not need to generate an SQL script and can run multiple Seeds for different environments.