I need to set autocommit for the database connection that Flyway is using. Typically for clients I would do this via JDBC options in the JDBC URL. I couldn't figure out how to do this using the Flyway command-line tool. Is it possible?
You can pass options to Flyway in the JDBC URL.
However, setting autocommit will not work, as Flyway will override it and run each migration inside a transaction, to make sure it can be rolled back in case of failure.
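For example (a rough sketch; the PostgreSQL driver parameters shown are just placeholders, and per the above, autocommit itself would still be overridden):

    flyway -url="jdbc:postgresql://localhost:5432/mydb?sslmode=require&connectTimeout=10" \
           -user=myuser -password=secret migrate

The same URL can also go into conf/flyway.conf as flyway.url=... if you prefer not to pass it on the command line.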
My integration tests for my asp.net core application require a connection to a PostgreSQL database. In my deployment pipeline I only want to deploy if my integration tests pass.
How do I supply a working connection string inside the Microsoft build agent?
I looked under service connections and couldn't see anything related to a database.
If you are using a Microsoft-hosted agent, then your database needs to be accessible from the internet.
Otherwise, you need to run the pipeline on a self-hosted agent that can access your database.
I assume the default connection string is in appsettings.json. You could store the actual database connection string in a secret variable, then update the appsettings.json file with that variable's value through a task (e.g. Set Json Property) or programmatically (e.g. a PowerShell script) before running the web app and starting the tests during the build.
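As a hedged sketch of that substitution step, here is one way to do it with the built-in FileTransform task instead of the Set Json Property extension; it assumes a secret pipeline variable named ConnectionStrings.IntegrationDb (the dot-notation name is matched against the JSON path in appsettings.json), and the project path is made up:

    steps:
    - task: FileTransform@1          # JSON variable substitution into appsettings.json
      inputs:
        folderPath: '$(Build.SourcesDirectory)/tests/MyApp.IntegrationTests'
        fileType: 'json'
        targetFiles: '**/appsettings.json'

    - script: dotnet test tests/MyApp.IntegrationTests --filter Category=Integration
      displayName: Run integration tests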
If any PostgreSQL database will do, you can use a service container with a Docker image that provides PostgreSQL (e.g. postgres).
For a classic pipeline, you could call the docker command to run the image.
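A rough YAML sketch of the service-container route (image tag, password, test command, and configuration key are all placeholders):

    resources:
      containers:
      - container: pg
        image: postgres:11
        env:
          POSTGRES_PASSWORD: secret     # throwaway password for the test-only database
        ports:
        - '5432:5432'                   # map to the host so the tests can reach localhost:5432

    jobs:
    - job: IntegrationTests
      pool:
        vmImage: 'ubuntu-latest'
      services:
        postgres: pg                    # starts the container for the duration of the job
      steps:
      - script: dotnet test --filter Category=Integration
        env:                            # override appsettings.json via ASP.NET Core's __ convention
          ConnectionStrings__Default: 'Host=localhost;Port=5432;Database=postgres;Username=postgres;Password=secret'

For a classic (designer) pipeline, the equivalent is roughly a command-line or Docker task that runs docker run -d -p 5432:5432 -e POSTGRES_PASSWORD=secret postgres:11 before the test step.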
I would recommend using a .runsettings file whose values you can override in the test task. That way you keep your connection string out of source control. Please check this link. As for service connections, you don't need one; all you need is a proper connection string.
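A minimal sketch of what that could look like, assuming VSTest/MSTest; the parameter name, value, and variable name are made up:

    <!-- integration.runsettings: checked in with a harmless placeholder value -->
    <RunSettings>
      <TestRunParameters>
        <Parameter name="DbConnectionString"
                   value="Host=localhost;Database=app_test;Username=postgres;Password=placeholder" />
      </TestRunParameters>
    </RunSettings>

In the VSTest task you can then point the settings file input at this file and use the override test run parameters input with something like -DbConnectionString "$(IntegrationDbConnectionString)", where the latter is a secret pipeline variable; with MSTest the test code reads it via TestContext.Properties.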
Since I don't know the details of how you connect to your DB, I can't give you more information. If you provide an example of how you already connect to the database, I can try to give a better answer.
I am using wildfly-8.2.0.Final.
There are several databases that I have to connect to. However, some of them are only used for certain features of the web application and do not need to be online all the time, so when WildFly starts, some of the datasources may be unavailable. However, a failed connection to any datasource causes WildFly to refuse to deploy the .war, and I cannot find any way around this. Is there a way?
UPDATE:
I have a single table on a remote database server. The user will be able to query the table via my web application. The thing is, I have almost no control over that database. When the web application starts, the database could be offline, and this would cause my web application to fail to start. What I want is to be able to run queries on the remote database if it is online. If it is offline, the web page can fail or the query can be cancelled. The one thing I don't want is for my web application to be limited by a remote database that I may have no control over.
My previous solution was a workaround: I would run queries on the remote database via a local database that has a foreign table pointing to the remote one. However, on PostgreSQL 9.5 the local database reads all of the data in the remote table before applying any constraints. As the remote table has a large number of rows and I am using lazy loading, a single query takes so long that it defeats the whole purpose of the lazy loading.
I found a similar question, but there is no answer.
In WildFly, you can configure the datasource so that it tries to reconnect periodically when it disconnects. In my case, though, the deployment would have to succeed initially for this to be helpful.
The deployment will fail if it references those datasources.
You could also define those datasources but leave them disabled.
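A rough sketch of what both suggestions look like in standalone.xml (JNDI name, URL, and credentials are placeholders): enabled="false" keeps the datasource defined but inactive, and background validation makes the pool periodically re-check and replace broken connections once it is enabled.

    <datasource jndi-name="java:jboss/datasources/RemoteDS" pool-name="RemoteDS" enabled="false">
        <connection-url>jdbc:postgresql://remote-host:5432/remotedb</connection-url>
        <driver>postgresql</driver>
        <security>
            <user-name>dbuser</user-name>
            <password>dbpass</password>
        </security>
        <validation>
            <background-validation>true</background-validation>
            <background-validation-millis>60000</background-validation-millis>
        </validation>
    </datasource>

The datasource can later be enabled at runtime with the management CLI, e.g. /subsystem=datasources/data-source=RemoteDS:enable.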
Has anyone been able to get Camunda to run with Spring Boot and MongoDB?
I tried several approaches and always hit a brick wall.
What I tried:
1. JPA / Hibernate OGM
I was able to initiate a connection to Mongo after creating my own CamundaDatasourceConfiguration and ProcessEngineConfigurationImpl.
It failed when Camunda tried to get the table metadata, and I couldn't disable this behavior.
2. JDBC driver for Mongo by Progress
I set up the JDBC URL and the driver class from Progress.
Camunda then gets stuck during the startup process and never reaches the point where Jetty is fully started, i.e. the "Jetty started on port XYZ" message in the log.
3. Camunda with Postgres using a Mongo FDW
FDW (foreign data wrapper) is a mechanism for Postgres to interface with an external data source. This way an application can work with Postgres over JDBC while the FDW takes care of reading and writing the data to the external source, be it a file, MongoDB, etc. (sketched below).
After realizing 1 and 2 don't work, I started working on 3.
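For reference, the kind of setup meant in point 3 looks roughly like this, assuming EnterpriseDB's mongo_fdw; the server address, credentials, and the column fragment are illustrative, not the full Camunda schema:

    CREATE EXTENSION mongo_fdw;

    CREATE SERVER mongo_server
        FOREIGN DATA WRAPPER mongo_fdw
        OPTIONS (address '127.0.0.1', port '27017');

    CREATE USER MAPPING FOR postgres
        SERVER mongo_server
        OPTIONS (username 'mongo_user', password 'mongo_pass');

    -- one foreign table per Camunda table/collection; only a fragment of columns shown
    CREATE FOREIGN TABLE act_ru_task (
        _id   name,
        id_   varchar,
        name_ varchar
    )
    SERVER mongo_server
    OPTIONS (database 'camunda', collection 'act_ru_task');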
Has anyone succeeded in doing this and can share how?
So I ran into the same problem and decided to share my findings with you.
Currently it is not possible to run the Camunda engine with a NoSQL database.
In this Camunda forum post, one of the people at Camunda also says it is not possible to run the engine completely without a database.
And in the official Camunda docs there is also a list of all supported environments; currently only SQL databases are listed:
https://docs.camunda.org/manual/7.10/introduction/supported-environments/
But in some earlier blog posts they mentioned that they want to build some proof-of-concept examples using NoSQL databases, so we can expect these databases to be supported in the future, but not at the moment.
(Note: the Flowable engine is doing the same kind of proof of concept; they mentioned that they want to be able to use NoSQL databases by the end of next year.)
Is query tracing in PostgreSQL possible? I am using 9.0 on Windows with the OLE DB interface.
Also, I need it to be in real-time, not buffered like it is by default...
I assume you mean tracing the statements on the server side?
In that case, set the parameter log_min_duration_statement to 0 in postgresql.conf.
You don't need to restart the server; just reload the configuration (pg_ctl reload).
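For example (the data directory path is just an illustration of a default Windows install):

    # in postgresql.conf: log every statement together with its execution time
    log_min_duration_statement = 0

    # reload without a restart:
    pg_ctl reload -D "C:\Program Files\PostgreSQL\9.0\data"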
I'm using EntLib v4 for Logging and currently I'm saving the events to the default text file listener.
I would like to use an MS SQL database as my event sink, and I saw that the database listener is already provided, but I don't know how to create the logging database and stored procedures.
After googling around I saw that in v3 the database creation scripts were shipped with the EntLib, but I can't find them in v4.
I just checked and it's included with the source installation. On my machine it's in C:\EntLib4Src\Blocks\Logging\Src\DatabaseTraceListener\Scripts.
You can use the createloggingdb.cmd file or parse loggingdatabase.sql yourself for the relevant commands.
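For example, from a command prompt in that Scripts folder (a sketch; the server name is a placeholder for your own SQL Server instance):

    cd C:\EntLib4Src\Blocks\Logging\Src\DatabaseTraceListener\Scripts
    createloggingdb.cmd

    rem or run the SQL script directly against your own instance:
    sqlcmd -S .\SQLEXPRESS -E -i loggingdatabase.sql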