I'm new to development with Axon Framework. My problem is that when I run my microservice (a client connecting to Axon Server), this error message
Caused by: com.mysql.jdbc.exceptions.jdbc4.MySQLSyntaxErrorException: Table 'banking2.tokenentry' doesn't exist
is displayed in the console and my microservice fails to start!
It pretty much depends on how you have configured your project and how you want to set up the tables.
A common approach in enterprises is to have scripts (in the form of migrations) that run and create the tables. If that is your case, you have to provide your own script to create this table.
If you are using Hibernate, for instance, you can set the hibernate.hbm2ddl.auto property to ask it to create the table for you (among other options; better check the Hibernate docs).
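As a minimal sketch, assuming a plain JPA setup (the persistence unit name is a placeholder; in a Spring Boot application you would more likely set the equivalent spring.jpa.hibernate.ddl-auto property), the property can also be passed programmatically:

    import java.util.HashMap;
    import java.util.Map;
    import javax.persistence.EntityManagerFactory;
    import javax.persistence.Persistence;

    public class SchemaBootstrap {
        public static EntityManagerFactory create() {
            Map<String, String> props = new HashMap<>();
            // "update" asks Hibernate to create missing tables such as the token entry table;
            // "validate" would fail fast instead. See the Hibernate docs for the other values.
            props.put("hibernate.hbm2ddl.auto", "update");
            // "my-persistence-unit" is a placeholder for the unit containing Axon's JPA entities.
            return Persistence.createEntityManagerFactory("my-persistence-unit", props);
        }
    }

Either way, once the table exists (or Hibernate is allowed to create it), the token store error should go away.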
I am using wildfly-8.2.0.Final.
There are several databases that I have to connect to. However, some of them are only used for certain features of the web application and do not need to be online all the time. So when WildFly starts, some of the datasources may not be online. However, a disconnected datasource causes WildFly to refuse to deploy the .war, and I cannot find any way to solve this problem. Is there a way?
UPDATE:
I have a single table on a remote database server. The user will be able to query the table via my web application. The thing is, I have almost no control over the mentioned database. When the web application starts, it could be offline, and this would cause my web application to fail to start. What I want is to be able to run queries on the remote database if it is online. If it is offline, the web page can fail or the query can be cancelled. The one thing I don't want is for my web application to be limited by a remote database that I may have no control over.
My previous solution was a workaround: I would run queries on the remote database via a local database that has a foreign table pointing to the remote one. However, on PostgreSQL 9.5 the local database reads all the data in the remote table before applying any constraints. As the remote table has a large number of rows and I am using lazy loading, a single query takes so long that it defeats the whole purpose of lazy loading.
I found a similar question, but there is no answer.
On WildFly, you can set the datasource so that it tries to reconnect periodically after it disconnects. In my case, though, the deployment would have to succeed initially for this to be helpful.
The deployment will fail if it references those datasources.
Alternatively, you could define those datasources but leave them disabled.
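Another way to live with an offline or disabled datasource is to avoid referencing it at deployment time and look it up lazily when the feature is actually used. A rough sketch, assuming a JNDI name such as java:/jdbc/RemoteDS (adjust it to whatever your datasource definition uses):

    import java.sql.Connection;
    import java.sql.SQLException;
    import javax.naming.InitialContext;
    import javax.naming.NamingException;
    import javax.sql.DataSource;

    public class RemoteDbAccess {
        // Looks the datasource up only when needed, so a missing or disabled
        // datasource does not prevent the .war from deploying.
        public Connection openConnectionOrNull() {
            try {
                DataSource ds = (DataSource) new InitialContext().lookup("java:/jdbc/RemoteDS");
                return ds.getConnection();
            } catch (NamingException | SQLException e) {
                // The remote database is offline (or the datasource is disabled):
                // let the caller show an error page or skip the feature.
                return null;
            }
        }
    }

With that, the page backed by the remote table can degrade gracefully instead of blocking the whole deployment.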
I am trying to build an ADO.NET entity model from an SAP HANA database. This is for SAP B1. This process is pretty straightforward with MS SQL Server, MySQL, etc.
However, when I follow the steps to create this HANA model, I get the following error on clicking "Test Connection":
general error: database 'EOH_CCL_TEST' does not exist
I have added a reference to Sap.Data.Hana.v4.5.dll.
Version is 1.0.120.0.
The database exists and I am able to perform queries on it as can be seen below.
Note: I am using the same credentials as I used to log into SAP HANA Studio.
What am I missing here?
There is a previous post: ADO.NET Provider for SAP HANA - Version mismatch issue
But in that issue, the user was able to make the connection.
You are using the schema name EOH_CCL_TEST as the database name, but the database name is different from the schema name. Did you log on to the SYSTEMDB database or to a tenant database in HANA Studio? Using the DB name you actually logged on with should solve the issue for you. PS: I also do not think that you need to add a port in the hostname property field.
Going by the screenshot, you are not using a HANA system with multiple database containers. In this "classic" setup there is no separate "database" admin object, and connections don't take a database name.
Just put in the hostname and port and leave the database name empty. EOH_CCL_TEST is indeed just the schema name.
Beyond that, it's really not a good idea to use the SYSTEM user for working with data, or for anything beyond bootstrapping the system.
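If it helps to see the same distinction outside the ADO.NET dialog, here is a rough JDBC sketch (host, port and credentials are placeholders): the connection URL carries only host and port, with no database name, and EOH_CCL_TEST only ever appears as the schema you switch to after connecting:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.Statement;

    public class HanaConnectionSketch {
        public static void main(String[] args) throws Exception {
            // hanahost:30015 and the credentials are placeholders for your system;
            // note that there is no database name in the URL for a single-container system.
            try (Connection conn = DriverManager.getConnection(
                         "jdbc:sap://hanahost:30015/", "MY_APP_USER", "secret");
                 Statement stmt = conn.createStatement()) {
                // EOH_CCL_TEST is just the schema; switch to it after connecting.
                stmt.execute("SET SCHEMA EOH_CCL_TEST");
            }
        }
    }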
Has anyone been able to get Camunda to run with Spring Boot and MongoDB?
I tried several approaches and always ran into a brick wall.
What I tried:
1. JPA / Hibernate OGM
I was able to initiate a connection to Mongo after creating my own CamundaDatasourceConfiguration and ProcessEngineConfigurationImpl.
It failed when Camunda tried to get table metadata, and I couldn't switch this behavior off.
2. JDBC driver for Mongo by Progress
I set up the JDBC URL and the Progress driver class.
Camunda then gets stuck during the startup process and does not get to the point where Jetty is fully started, i.e. the "Jetty started on port XYZ" message in the log.
3. Camunda with Postgres and a Mongo FDW
An FDW (foreign data wrapper) is a mechanism for Postgres to interface with an external data source. This way an application can work with Postgres over JDBC while the FDW takes care of reading and writing the data to the external source, be it a file, MongoDB, etc.
After realizing 1 and 2 don't work, I started working on 3.
Has anyone succeeded in doing this and can share how?
So, I ran into the same problem and decided to share my findings with you.
Currently it is not possible to run the Camunda engine with a NoSQL database.
In this Camunda forum post, one of the people at Camunda also says it is not possible to run the engine completely without a database.
The official Camunda docs also contain a list of all supported environments. Currently only SQL databases are listed:
https://docs.camunda.org/manual/7.10/introduction/supported-environments/
But in some earlier blog posts they mentioned that they want to build some proof-of-concept examples using NoSQL databases. So we can expect these databases to be supported in the future, but not at the moment.
(Note: the Flowable engine is running the same proofs of concept; they mentioned that they want to be able to use NoSQL databases by the end of next year.)
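Until that happens, the engine has to be pointed at one of the supported relational databases. As a minimal standalone sketch (the PostgreSQL URL and credentials are placeholders; any database from the supported-environments list would do), the configuration looks roughly like this:

    import org.camunda.bpm.engine.ProcessEngine;
    import org.camunda.bpm.engine.ProcessEngineConfiguration;

    public class EngineBootstrap {
        public static ProcessEngine create() {
            // JDBC settings are placeholders; the point is that the engine expects
            // a relational database, not a NoSQL store such as MongoDB.
            return ProcessEngineConfiguration.createStandaloneProcessEngineConfiguration()
                    .setJdbcDriver("org.postgresql.Driver")
                    .setJdbcUrl("jdbc:postgresql://localhost:5432/camunda")
                    .setJdbcUsername("camunda")
                    .setJdbcPassword("secret")
                    .setDatabaseSchemaUpdate(ProcessEngineConfiguration.DB_SCHEMA_UPDATE_TRUE)
                    .buildProcessEngine();
        }
    }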
We migrated from Spring Batch 2.1.7 to Spring Batch 3.0.6, but got this JBoss startup error:
org.springframework.jdbc.BadSqlGrammarException: PreparedStatementCallback; bad SQL grammar [SELECT E.JOB_EXECUTION_ID, E.START_TIME, E.END_TIME, E.STATUS, E.EXIT_CODE, E.EXIT_MESSAGE, E.CREATE_TIME, E.LAST_UPDATED, E.VERSION, E.JOB_INSTANCE_ID, E.JOB_CONFIGURATION_LOCATION from BATCH_JOB_EXECUTION E, BATCH_JOB_INSTANCE I where E.JOB_INSTANCE_ID=I.JOB_INSTANCE_ID and I.JOB_NAME=? and E.END_TIME is NULL order by E.JOB_EXECUTION_ID desc]; nested exception is java.sql.SQLSyntaxErrorException: ORA-00904: "E"."JOB_CONFIGURATION_LOCATION": invalid identifier
...which was apparently caused by the Spring Batch 3 upgrade: Spring Batch 3 has some table structure changes compared to Spring Batch 2.
To get things moving, our DBA team used the create-table script that our developer team found in one of the Spring Batch jars to write a script that updates (instead of creates) the tables, since we need to keep the job history. This is all working so far, but here is our issue:
We can't migrate all our systems forward to Spring Batch 3. We have to leave the older ones in Spring Batch 2 for a while.
Are these Spring Batch 3 table structure changes backward compatible with Spring Batch 2?
They appear to be, based on analysis by our DBA team and on our batch run results so far, but I'm asking whether this was intentional on Spring's part, i.e. when Spring altered the table structure for Spring Batch 3, did you INTENTIONALLY make it backward compatible?
So far they appear to be compatible, but I just want to make sure there isn't some subtle difference that will break our system badly down some rarely used logic path, i.e. at statement execution time (vs. JBoss startup time).
Ben Ethridge
They are not backwards compatible. The way job parameters are stored is different. The migration scripts did not remove the old columns (they just added the net-new ones). That doesn't mean you couldn't come up with a schema that works for both versions (it looks like that's what you have), but as for our intent, it was identified as a breaking change when we added non-identifying parameters.
I'm in the process of developing a deployment system for a new web app and I'm wondering where the best point in the process to manage database migrations is (the question of how to do the migrations is another problem entirely).
It seems there are two ways to go:
1. Use a migration script that can either be run manually from the command line or as part of the automatic deployment/build process.
2. Run the migrations when the app starts up (I'm using ASP.NET, so this can be done easily enough without causing a long-running user request).
Does anyone have any suggestions/insight/experience with these approaches? Any other suggestions?
I can see why #1 might be more attractive - it gives me complete control over when the DB is updated. However, I quite like #2 as it allows me to quickly iterate between deployments and reduces the manual process. #2 could also be used on my development machine to allow even quicker iterations. Hmm, starting to think having both might be a good thing...
We have a sales force system with ~100 clients, and we update the database at application startup (true, ours is a desktop application). I like this approach: it's safe and iterative when we have an indeterminate starting point (is the client database new, or only updated to version x.y.z?).
But on the server side I prefer your #1 option: we create a SQL query file on our virtual machine (based on a copy of the original database) and run this query against the real server.
So IMHO:
Disconnected clients: startup, iterative scripts
Server: query created on a VM, then run against the actual, real database
So I'm interested in this problem too, and found some (half-)frameworks such as RikMigrations. After some googling, a good starting place on DB versioning/migration frameworks is the .NET Database Migration Tool Roundup. Not necessarily the documentation, but the team blogs can be interesting.
I like option #1 better as it seems much more flexible. In lieu of actually performing migrations on each app start, I think I would verify that the database schema (version number?) matches the code, and if not, throw a warning or error about a mismatched database schema.
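Language aside (the same idea works in ASP.NET), such a startup check can be as small as comparing a version number stored in the database against the version the code expects. A rough sketch, where the schema_version table and the expected value are both placeholders:

    import java.sql.Connection;
    import java.sql.ResultSet;
    import java.sql.SQLException;
    import java.sql.Statement;

    public class SchemaVersionCheck {
        // The schema version the deployed code was built against (placeholder value).
        private static final int EXPECTED_SCHEMA_VERSION = 42;

        public static void verify(Connection conn) throws SQLException {
            try (Statement stmt = conn.createStatement();
                 ResultSet rs = stmt.executeQuery("SELECT version FROM schema_version")) {
                if (!rs.next() || rs.getInt("version") != EXPECTED_SCHEMA_VERSION) {
                    // Fail fast instead of silently running against the wrong schema.
                    throw new IllegalStateException("Database schema does not match application version");
                }
            }
        }
    }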
I'd prefer option #1 for a number of reasons. First, integration tests usually require your DB schema to be up to date, and launching a web site just to upgrade the schema is a huge time waster. Second, you cannot change the database schema while your site is running (say, add a couple of indexes to speed things up).
As for the production side of things, upgrading your database as part of a transactional, MSI-style installation is much better than attempting to upgrade at each app startup, since otherwise you can potentially end up with desynchronized database and application versions.
And if you're looking for the migration framework, take a look at Wizardby.
If the application ever has to run on a customer's machine, then migrating at startup can prevent a lot of support calls - assuming you can do a seamless migration without user intervention (I hope you aren't normally running your web app with permission to modify the database).
If the application always runs under your control, automatic migration is less of an issue - but it can still be a good feature, especially if you want to minimize downtime and manual deployment steps.
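For the startup-migration route, a dedicated migration library keeps the process seamless and idempotent. As a rough illustration (Flyway is just one example of such a tool, and the JDBC URL and credentials are placeholders), the whole startup hook can be a few lines:

    import org.flywaydb.core.Flyway;

    public class StartupMigrator {
        // Call this once during application startup, before the app starts serving requests.
        public static void migrate() {
            Flyway flyway = Flyway.configure()
                    .dataSource("jdbc:postgresql://localhost:5432/appdb", "app_user", "secret")
                    .load();
            // Applies any pending versioned scripts; does nothing if the schema is already up to date.
            flyway.migrate();
        }
    }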