I'm having trouble restoring an OrientDB database from a backup. I'm using OrientDB version 1.2.0 (this backup is from November 2012) and the backup was produced by OrientDB (same version) using the built-in backup utility. I'm trying to restore the backup to a new database using the OrientDB console:
create database remote:localhost/dbname root password local graph
import database backup.json
But when I run those commands, I get the following error in the console:
Importing indexes ...
- Index 'dictionary'...Error on database import happened just before line 22258, column 6
com.orientechnologies.orient.core.exception.OConcurrentModificationException: Cannot update record #0:1 in storage 'dbname' because the version is not the latest. Probably you are updating an old record or it has been modified by another user (db=v2 your=v1)
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:525)
at com.orientechnologies.orient.enterprise.channel.binary.OChannelBinary.createException(OChannelBinary.java:429)
at com.orientechnologies.orient.enterprise.channel.binary.OChannelBinary.handleStatus(OChannelBinary.java:382)
at com.orientechnologies.orient.enterprise.channel.binary.OChannelBinaryAsynch.beginResponse(OChannelBinaryAsynch.java:145)
at com.orientechnologies.orient.enterprise.channel.binary.OChannelBinaryAsynch.beginResponse(OChannelBinaryAsynch.java:59)
at com.orientechnologies.orient.client.remote.OStorageRemote.beginResponse(OStorageRemote.java:1556)
at com.orientechnologies.orient.client.remote.OStorageRemote.command(OStorageRemote.java:727)
at com.orientechnologies.orient.client.remote.OStorageRemoteThread.command(OStorageRemoteThread.java:191)
at com.orientechnologies.orient.core.command.OCommandRequestTextAbstract.execute(OCommandRequestTextAbstract.java:60)
at com.orientechnologies.orient.core.index.OIndexManagerRemote.dropIndex(OIndexManagerRemote.java:80)
at com.orientechnologies.orient.core.index.OIndexManagerProxy.dropIndex(OIndexManagerProxy.java:80)
at com.orientechnologies.orient.core.db.tool.ODatabaseImport.importIndexes(ODatabaseImport.java:687)
at com.orientechnologies.orient.core.db.tool.ODatabaseImport.importDatabase(ODatabaseImport.java:127)
at com.orientechnologies.orient.console.OConsoleDatabaseApp.importDatabase(OConsoleDatabaseApp.java:1419)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:601)
at com.orientechnologies.common.console.OConsoleApplication.execute(OConsoleApplication.java:238)
at com.orientechnologies.common.console.OConsoleApplication.executeCommands(OConsoleApplication.java:127)
at com.orientechnologies.common.console.OConsoleApplication.run(OConsoleApplication.java:92)
at com.orientechnologies.orient.console.OConsoleDatabaseApp.main(OConsoleDatabaseApp.java:130)
All of the records import correctly, but it fails on the indexes. I have 15+ backups of the same database and they all have this issue, so it seems unlikely that they're all corrupted. How can I restore my database? (I'm OK with having to modify the JSON if need be.)
When trying to use local mode rather than remote mode, I get a different error:
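(For reference, the local-mode attempt used console commands roughly like the following; the exact database URL/path is illustrative:)
create database local:dbname admin admin local graph
import database dbname.json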
Started import of database 'local:dbname' from dbname.json...
Importing database info...OK
Importing clusters...
- Creating cluster 'internal'...OK, assigned id=0
- Creating cluster 'default'...Error on database import happened just before line 13, column 52
com.orientechnologies.orient.core.exception.OConfigurationException: Imported cluster 'default' has id=3 different from the original: 2. To continue the import drop the cluster 'manindex' that has 1 records
at com.orientechnologies.orient.core.db.tool.ODatabaseImport.importClusters(ODatabaseImport.java:544)
at com.orientechnologies.orient.core.db.tool.ODatabaseImport.importDatabase(ODatabaseImport.java:130)
at com.orientechnologies.orient.console.OConsoleDatabaseApp.importDatabase(OConsoleDatabaseApp.java:1414)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:601)
at com.orientechnologies.common.console.OConsoleApplication.execute(OConsoleApplication.java:269)
at com.orientechnologies.common.console.OConsoleApplication.executeCommands(OConsoleApplication.java:157)
at com.orientechnologies.common.console.OConsoleApplication.run(OConsoleApplication.java:97)
at com.orientechnologies.orient.graph.console.OGremlinConsole.main(OGremlinConsole.java:53)
Error: com.orientechnologies.orient.core.db.tool.ODatabaseExportException: Error on importing database 'dbname' from file: dbname.json
Error: com.orientechnologies.orient.core.exception.OConfigurationException: Imported cluster 'default' has id=3 different from the original: 2. To continue the import drop the cluster 'manindex' that has 1 records
It seems like the issue is that my old cluster IDs don't match the ones in the new database. Perhaps there are creation options that affect which clusters are created by default?
It turns out that the issue was that the database was being imported with OrientDB version 1.2.0 but was created with 1.0.1 (or 1.0.0). The JSON file listed version 1.2.0 as the engine-version, but that was the engine that backed it up, not the engine that created it. After I tried Lvca's suggestion of using local mode, I discovered that the cluster IDs weren't matching up. From there I guessed that the version numbers might not be correct. So it seems that even though I was running the database with the 1.2.0 server and the 1.2.0 Java library, the database was never "upgraded" to use the new 1.2.0 schema.
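For anyone who hits the same thing: the version the importer sees comes from the header of the export JSON. A rough sketch of what that header looks like in a 1.x export is below (field names and values are approximate, from memory); "engine-version" reflects the engine that produced the export, not the one that originally created the database, and the IDs in the "clusters" section are what the importer compares against the freshly created database:
{
  "info": {
    "name": "dbname",
    "default-cluster-id": 2,
    "exporter-version": 2,
    "engine-version": "1.2.0"
  },
  "clusters": [
    {"name": "internal", "id": 0, "type": "PHYSICAL"},
    {"name": "default", "id": 2, "type": "PHYSICAL"}
  ],
  ...
}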
Thanks to Lvca for a nudge in the right direction.
Related
I am trying to upgrade Keycloak (running in standalone mode) from version 8 to 12. I have followed the steps mentioned here.
I deleted the data/tx-object-store/ transaction directory, and copied the standalone directory from version 8.
I ran the upgrade script. I can see that there are no failures and all the steps were SUCCESS.
I try to start the server with this command
sudo ./standalone.sh -b 0.0.0.0 &
The server started successfully; I can reach the Keycloak admin console and am also able to log in. I can see that the data (users, groups, etc.) was successfully migrated as well.
After this, I stopped Keycloak:
sudo ./jboss-cli.sh --connect command=:shutdown
which ran okay. Now if I try to start it again, I see the following errors and Keycloak doesn't boot up:
06:30:00,080 FATAL [org.keycloak.services] (ServerService Thread Pool -- 66) Error during startup: java.lang.RuntimeException: Failed to connect to database
at org.keycloak.connections.jpa.DefaultJpaConnectionProviderFactory.getConnection(DefaultJpaConnectionProviderFactory.java:377)
at org.keycloak.connections.jpa.updater.liquibase.lock.LiquibaseDBLockProvider.lazyInit(LiquibaseDBLockProvider.java:65)
at org.keycloak.connections.jpa.updater.liquibase.lock.LiquibaseDBLockProvider.lambda$waitForLock$2(LiquibaseDBLockProvider.java:96)
at org.keycloak.models.utils.KeycloakModelUtils.suspendJtaTransaction(KeycloakModelUtils.java:654)
at org.keycloak.connections.jpa.updater.liquibase.lock.LiquibaseDBLockProvider.waitForLock(LiquibaseDBLockProvider.java:94)
at org.keycloak.services.resources.KeycloakApplication$1.run(KeycloakApplication.java:136)
at org.keycloak.models.utils.KeycloakModelUtils.runJobInTransaction(KeycloakModelUtils.java:228)
at org.keycloak.services.resources.KeycloakApplication.startup(KeycloakApplication.java:129)
at org.keycloak.provider.wildfly.WildflyPlatform.onStartup(WildflyPlatform.java:29)
at org.keycloak.services.resources.KeycloakApplication.<init>(KeycloakApplication.java:115)
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at org.jboss.resteasy.core.ConstructorInjectorImpl.construct(ConstructorInjectorImpl.java:152)
at org.jboss.resteasy.spi.ResteasyProviderFactory.createProviderInstance(ResteasyProviderFactory.java:2815)
at org.jboss.resteasy.spi.ResteasyDeployment.createApplication(ResteasyDeployment.java:371)
at org.jboss.resteasy.spi.ResteasyDeployment.startInternal(ResteasyDeployment.java:283)
at org.jboss.resteasy.spi.ResteasyDeployment.start(ResteasyDeployment.java:93)
at org.jboss.resteasy.plugins.server.servlet.ServletContainerDispatcher.init(ServletContainerDispatcher.java:140)
at org.jboss.resteasy.plugins.server.servlet.HttpServletDispatcher.init(HttpServletDispatcher.java:42)
at io.undertow.servlet.core.LifecyleInterceptorInvocation.proceed(LifecyleInterceptorInvocation.java:117)
at org.wildfly.extension.undertow.security.RunAsLifecycleInterceptor.init(RunAsLifecycleInterceptor.java:78)
at io.undertow.servlet.core.LifecyleInterceptorInvocation.proceed(LifecyleInterceptorInvocation.java:103)
at io.undertow.servlet.core.ManagedServlet$DefaultInstanceStrategy.start(ManagedServlet.java:305)
at io.undertow.servlet.core.ManagedServlet.createServlet(ManagedServlet.java:145)
at io.undertow.servlet.core.DeploymentManagerImpl$2.call(DeploymentManagerImpl.java:588)
at io.undertow.servlet.core.DeploymentManagerImpl$2.call(DeploymentManagerImpl.java:559)
at io.undertow.servlet.core.ServletRequestContextThreadSetupAction$1.call(ServletRequestContextThreadSetupAction.java:42)
at io.undertow.servlet.core.ContextClassLoaderSetupAction$1.call(ContextClassLoaderSetupAction.java:43)
at org.wildfly.extension.undertow.security.SecurityContextThreadSetupAction.lambda$create$0(SecurityContextThreadSetupAction.java:105)
at org.wildfly.extension.undertow.deployment.UndertowDeploymentInfoService$UndertowThreadSetupAction.lambda$create$0(UndertowDeploymentInfoService.java:1530)
at org.wildfly.extension.undertow.deployment.UndertowDeploymentInfoService$UndertowThreadSetupAction.lambda$create$0(UndertowDeploymentInfoService.java:1530)
at org.wildfly.extension.undertow.deployment.UndertowDeploymentInfoService$UndertowThreadSetupAction.lambda$create$0(UndertowDeploymentInfoService.java:1530)
at org.wildfly.extension.undertow.deployment.UndertowDeploymentInfoService$UndertowThreadSetupAction.lambda$create$0(UndertowDeploymentInfoService.java:1530)
at io.undertow.servlet.core.DeploymentManagerImpl.start(DeploymentManagerImpl.java:601)
at org.wildfly.extension.undertow.deployment.UndertowDeploymentService.startContext(UndertowDeploymentService.java:97)
at org.wildfly.extension.undertow.deployment.UndertowDeploymentService$1.run(UndertowDeploymentService.java:78)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at org.jboss.threads.ContextClassLoaderSavingRunnable.run(ContextClassLoaderSavingRunnable.java:35)
at org.jboss.threads.EnhancedQueueExecutor.safeRun(EnhancedQueueExecutor.java:1990)
at org.jboss.threads.EnhancedQueueExecutor$ThreadBody.doRunTask(EnhancedQueueExecutor.java:1486)
at org.jboss.threads.EnhancedQueueExecutor$ThreadBody.run(EnhancedQueueExecutor.java:1377)
at java.lang.Thread.run(Thread.java:748)
at org.jboss.threads.JBossThread.run(JBossThread.java:513)
Caused by: java.sql.SQLException: javax.resource.ResourceException: IJ000453: Unable to get managed connection for java:jboss/datasources/KeycloakDS
at org.jboss.jca.adapters.jdbc.WrapperDataSource.getConnection(WrapperDataSource.java:159)
at org.jboss.as.connector.subsystems.datasources.WildFlyDataSource.getConnection(WildFlyDataSource.java:64)
at org.keycloak.connections.jpa.DefaultJpaConnectionProviderFactory.getConnection(DefaultJpaConnectionProviderFactory.java:371)
... 45 more
Caused by: javax.resource.ResourceException: IJ000453: Unable to get managed connection for java:jboss/datasources/KeycloakDS
at org.jboss.jca.core.connectionmanager.AbstractConnectionManager.getManagedConnection(AbstractConnectionManager.java:690)
at org.jboss.jca.core.connectionmanager.tx.TxConnectionManagerImpl.getManagedConnection(TxConnectionManagerImpl.java:440)
at org.jboss.jca.core.connectionmanager.AbstractConnectionManager.allocateConnection(AbstractConnectionManager.java:789)
at org.jboss.jca.adapters.jdbc.WrapperDataSource.getConnection(WrapperDataSource.java:151)
... 47 more
Caused by: javax.resource.ResourceException: IJ031084: Unable to create connection
at org.jboss.jca.adapters.jdbc.local.LocalManagedConnectionFactory.createLocalManagedConnection(LocalManagedConnectionFactory.java:345)
at org.jboss.jca.adapters.jdbc.local.LocalManagedConnectionFactory.getLocalManagedConnection(LocalManagedConnectionFactory.java:352)
at org.jboss.jca.adapters.jdbc.local.LocalManagedConnectionFactory.createManagedConnection(LocalManagedConnectionFactory.java:287)
at org.jboss.jca.core.connectionmanager.pool.mcp.SemaphoreConcurrentLinkedDequeManagedConnectionPool.createConnectionEventListener(SemaphoreConcurrentLinkedDequeManagedConnectionPool.java:1322)
at org.jboss.jca.core.connectionmanager.pool.mcp.SemaphoreConcurrentLinkedDequeManagedConnectionPool.getConnection(SemaphoreConcurrentLinkedDequeManagedConnectionPool.java:499)
at org.jboss.jca.core.connectionmanager.pool.AbstractPool.getSimpleConnection(AbstractPool.java:632)
at org.jboss.jca.core.connectionmanager.pool.AbstractPool.getConnection(AbstractPool.java:604)
at org.jboss.jca.core.connectionmanager.AbstractConnectionManager.getManagedConnection(AbstractConnectionManager.java:624)
... 50 more
Caused by: org.h2.jdbc.JdbcSQLException: Constraint "FK_OUSE064PLMLR732LXJCN1Q5F1" already exists; SQL statement:
ALTER TABLE PUBLIC.SCOPE_MAPPING ADD CONSTRAINT PUBLIC.FK_OUSE064PLMLR732LXJCN1Q5F1 FOREIGN KEY(CLIENT_ID) INDEX PUBLIC.FK_OUSE064PLMLR732LXJCN1Q5F1_INDEX_8 REFERENCES PUBLIC.CLIENT(ID) NOCHECK [90045-197]
at org.h2.message.DbException.getJdbcSQLException(DbException.java:357)
at org.h2.message.DbException.get(DbException.java:179)
at org.h2.message.DbException.get(DbException.java:155)
at org.h2.command.ddl.AlterTableAddConstraint.tryUpdate(AlterTableAddConstraint.java:110)
at org.h2.command.ddl.AlterTableAddConstraint.update(AlterTableAddConstraint.java:78)
at org.h2.engine.MetaRecord.execute(MetaRecord.java:58)
at org.h2.engine.Database.open(Database.java:775)
at org.h2.engine.Database.openDatabase(Database.java:286)
at org.h2.engine.Database.<init>(Database.java:280)
at org.h2.engine.Engine.openSession(Engine.java:66)
at org.h2.engine.Engine.openSession(Engine.java:179)
at org.h2.engine.Engine.createSessionAndValidate(Engine.java:157)
at org.h2.engine.Engine.createSession(Engine.java:140)
at org.h2.engine.Engine.createSession(Engine.java:28)
at org.h2.engine.SessionRemote.connectEmbeddedOrServer(SessionRemote.java:351)
at org.h2.jdbc.JdbcConnection.<init>(JdbcConnection.java:124)
at org.h2.jdbc.JdbcConnection.<init>(JdbcConnection.java:103)
at org.h2.Driver.connect(Driver.java:69)
at org.jboss.jca.adapters.jdbc.local.LocalManagedConnectionFactory.createLocalManagedConnection(LocalManagedConnectionFactory.java:321)
... 57 more
I am using the H2 in-memory database. I have tried installing Keycloak 12 and upgrading several times. Every time I have the same issue: it starts successfully the first time and then fails thereafter.
Can anyone please help? It looks like when I start the server the second time, it is trying to do the migration again, but I'm not sure.
1. Get a copy of H2 1.4.196 (the bug, as noted here and in "Keycloak H2 login failure: constraint already exists", is with 1.4.197):
https://repo1.maven.org/maven2/com/h2database/h2/1.4.196/h2-1.4.196.jar
2. Get a copy of H2 1.4.197 or later:
https://repo1.maven.org/maven2/com/h2database/h2/1.4.199/h2-1.4.199.jar
3. Take a copy of a DB backup from before the upgrade (ours was from 4.6.0).
4. Create a dump (this produces backup.sql) with the old H2:
java -cp ~/ba-docker/h2-1.4.196.jar org.h2.tools.Script -url "jdbc:h2:./kc-data/keycloak;AUTO_SERVER=TRUE" -user sa -password sa
(The user and password can be confirmed in your standalone file.)
5. Restore the DB with the newer version of H2 (the command reads backup.sql by default):
java -cp ~/ba-docker/h2-1.4.199.jar org.h2.tools.RunScript -url "jdbc:h2:./kcdata-restore;AUTO_SERVER=TRUE" -user sa -password sa
6. In the KC data directory, delete keycloak.mv.db and keycloak.trace.db:
rm keycloak.trace.db
rm keycloak.mv.db
7. Copy the kcdata-restore.mv.db produced in step 5 to keycloak.mv.db:
cp ../../kcdata-restore.mv.db keycloak.mv.db
8. Follow the standard KC upgrade and delete tx-object-store:
rm -Rf tx-object-store/
And you should now be able to start, stop and restart without issues.
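Optionally (an extra sanity check, not part of the original steps), you can confirm the restored file opens cleanly under the newer H2 before starting Keycloak, for example with H2's Shell tool:
java -cp ~/ba-docker/h2-1.4.199.jar org.h2.tools.Shell -url "jdbc:h2:./kcdata-restore;AUTO_SERVER=TRUE" -user sa -password sa -sql "SELECT COUNT(*) FROM INFORMATION_SCHEMA.TABLES"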
Copied answer from: https://keycloak.discourse.group/t/keycloak-10-0-2-doesnt-start-locally-twice/3320
Courtesy: @lieven_wk
I came across the same problem when trying to upgrade 3.1.0.Final to 11.0.2, and found out this was caused by https://github.com/h2database/h2database/issues/1247.
As a workaround, I dumped & restored the keycloak H2 datasource file before starting the new version:
Given:
Old KeyCloak is installed in keycloak-3.1.0.Final
New KeyCloak is installed in keycloak-11.0.2
Running standalone setup
Datasource is configured with the connection URL below:
jdbc:h2:${jboss.server.data.dir}/keycloak;AUTO_SERVER=TRUE
Dump old sql:
java -cp keycloak-3.1.0.Final/modules/system/layers/base/com/h2database/h2/main/h2-1.3.173.jar org.h2.tools.Script -url jdbc:h2:keycloak-3.1.0.Final/standalone/data/keycloak -user sa -password MYPASS -script ~/keycloak-testbackup.sql
Restore to the new H2 file (make sure the file doesn't exist prior to running the restore):
java -cp keycloak-11.0.2/modules/system/layers/base/com/h2database/h2/main/h2-1.4.197.jar org.h2.tools.RunScript -url jdbc:h2:keycloak-11.0.2/standalone/data/keycloak -user sa -password MYPASS -script ~/keycloak-testbackup.sql
Now upon starting the application server (keycloak-11.0.2) for the first time, the schema is migrated to the latest version, and remains running properly at each restart.
I am using a Kafka source connector to capture CDC from RDS Aurora Postgres and am getting the error below.
Please assist if anyone knows about this issue.
Caused by: org.postgresql.util.PSQLException: Database connection failed when reading from copy
at org.postgresql.core.v3.QueryExecutorImpl.readFromCopy(QueryExecutorImpl.java:1074)
at org.postgresql.core.v3.CopyDualImpl.readFromCopy(CopyDualImpl.java:37)
at org.postgresql.core.v3.replication.V3PGReplicationStream.receiveNextData(V3PGReplicationStream.java:158)
at org.postgresql.core.v3.replication.V3PGReplicationStream.readInternal(V3PGReplicationStream.java:123)
at org.postgresql.core.v3.replication.V3PGReplicationStream.readPending(V3PGReplicationStream.java:80)
at io.debezium.connector.postgresql.connection.PostgresReplicationConnection$1.readPending(PostgresReplicationConnection.java:397)
at io.debezium.connector.postgresql.PostgresStreamingChangeEventSource.execute(PostgresStreamingChangeEventSource.java:119)
at io.debezium.pipeline.ChangeEventSourceCoordinator.lambda$start$0(ChangeEventSourceCoordinator.java:99)
... 5 more
Caused by: java.net.SocketException: Socket is closed
at java.base/java.net.Socket.setSoTimeout(Socket.java:1155)
at java.base/sun.security.ssl.BaseSSLSocketImpl.setSoTimeout(BaseSSLSocketImpl.java:639)
at java.base/sun.security.ssl.SSLSocketImpl.setSoTimeout(SSLSocketImpl.java:73)
at org.postgresql.core.PGStream.setNetworkTimeout(PGStream.java:589)
at org.postgresql.core.PGStream.hasMessagePending(PGStream.java:139)
at org.postgresql.core.v3.QueryExecutorImpl.processCopyResults(QueryExecutorImpl.java:1109)
at org.postgresql.core.v3.QueryExecutorImpl.readFromCopy(QueryExecutorImpl.java:1072)
... 12 more
Debezium 1.1.0.CR1 already handles auto-reconnects in these cases https://debezium.io/blog/2020/03/13/debezium-1-1-c1-released/
Yes, Jiri Pechanec. The problem was indeed with the Debezium version: the older version does not support the Postgres auto-reconnect facility when the connection is lost due to a temporary issue. The newer version of Debezium (1.1.0) supports auto-reconnect.
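For completeness, here is a minimal sketch of a Debezium Postgres connector configuration targeting a 1.1.0+ build. The hostname, credentials, slot, server name, and plugin are placeholders to adapt to your Aurora setup, and retriable.restart.connector.wait.ms (which I believe defaults to 10 s) controls how long the connector waits before automatically restarting after a retriable error such as a dropped connection:
{
  "name": "aurora-postgres-cdc",
  "config": {
    "connector.class": "io.debezium.connector.postgresql.PostgresConnector",
    "database.hostname": "my-aurora-endpoint",
    "database.port": "5432",
    "database.user": "cdc_user",
    "database.password": "********",
    "database.dbname": "mydb",
    "database.server.name": "aurora",
    "plugin.name": "pgoutput",
    "slot.name": "debezium_slot",
    "retriable.restart.connector.wait.ms": "10000"
  }
}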
I have a JHipster Spring Boot project. Recently I shifted from mLab standalone sandboxes to an Atlas M0 free-tier cluster sandbox (replica set). It even worked, and I had performed some database operations on it. But now, for some reason, there is a read permission error:
Error creating bean with name 'mongobee' defined in class path resource [DatabaseConfiguration.class]: Invocation of init method failed; nested exception is com.mongodb.MongoQueryException: Query failed with error code 8000 and error message 'user is not allowed to do action [find] on [test.system.indexes]' on server ********-shard-00-01-mfwhq.mongodb.net:27017
You can see the full stack here https://pastebin.com/kaxcr7VS
I have searched high and low, and all I could find is that an M0-tier user doesn't have permission to overwrite the admin database, which I am not doing.
Even now the connection to the mLab DB works fine, but I have this issue on the Atlas M0 tier.
MongoDB version: 3.4
Jars and their versions:
name: 'mongobee', version: '0.10'
name: 'mongo-java-driver', version: '3.4.2'
@Neil Lunn
The user ID I am using to connect is the admin's, and reads and writes work through the shell or Robo3T (a Mongo client).
After a discussion with the MongoDB support team: MongoDB 3.0 deprecates direct access to the system.indexes collection, which had previously been used to list all indexes in a database. Applications should use db.<COLLECTION>.getIndexes() instead.
From the MongoDB Atlas docs it can be seen that they may forbid calls to system.* collections:
Optionally, for the read and readWrite role, you can also specify a collection. If you do not specify a collection for read and readWrite, the role applies to all collections (excluding some system. collections) in the database.
From the stack trace it's visible that Mongobee is trying to make this call, so it's the library's issue and the library needs to be updated.
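For reference, a minimal sketch of listing a collection's indexes with the legacy mongo-java-driver 3.x API (the driver version used in this thread) instead of reading system.indexes; the connection string, database, and collection names are placeholders:
import com.mongodb.DB;
import com.mongodb.DBCollection;
import com.mongodb.DBObject;
import com.mongodb.MongoClient;
import com.mongodb.MongoClientURI;
import java.util.List;

public class ListIndexesExample {
    public static void main(String[] args) {
        // Placeholder connection string; use your own Atlas/mLab URI.
        MongoClient client = new MongoClient(new MongoClientURI("mongodb://user:pass@host:27017/test"));
        try {
            DB db = client.getDB("test");
            DBCollection collection = db.getCollection("dbchangelog");
            // getIndexInfo() goes through the listIndexes command, which Atlas permits,
            // rather than querying the system.indexes collection, which it forbids.
            List<DBObject> indexes = collection.getIndexInfo();
            for (DBObject index : indexes) {
                System.out.println(index);
            }
        } finally {
            client.close();
        }
    }
}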
UPDATE:
To fix the issue until Mongobee releases a new version:
Get the latest Mongobee sources: git clone git@github.com:mongobee/mongobee.git && cd mongobee
Fetch the pull request: git fetch origin pull/87/head:mongobee-atlas
Check out the branch: git checkout mongobee-atlas
Build and install the Mongobee jar: mvn clean install
Get the compiled jar from the target folder or from your local .m2 repository
Use the jar as a dependency in your project
I came across this issue this morning. Here's a quick and dirty monkey-patch for the problem:
package com.github.mongobee.dao;
import com.github.mongobee.changeset.ChangeEntry;
import com.mongodb.BasicDBObject;
import com.mongodb.DB;
import com.mongodb.DBCollection;
import com.mongodb.DBObject;
import java.util.List;
import static com.github.mongobee.changeset.ChangeEntry.CHANGELOG_COLLECTION;
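// Drop-in replacement for com.github.mongobee.dao.ChangeEntryIndexDao (same package and class name).
// The key change is findRequiredChangeAndAuthorIndex(): it lists indexes via DBCollection.getIndexInfo()
// instead of querying the system.indexes collection, which Atlas does not allow.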
public class ChangeEntryIndexDao {

  public void createRequiredUniqueIndex(DBCollection collection) {
    collection.createIndex(new BasicDBObject()
            .append(ChangeEntry.KEY_CHANGEID, 1)
            .append(ChangeEntry.KEY_AUTHOR, 1),
        new BasicDBObject().append("unique", true));
  }

  public DBObject findRequiredChangeAndAuthorIndex(DB db) {
    DBCollection changelogCollection = db.getCollection(CHANGELOG_COLLECTION);
    List<DBObject> indexes = changelogCollection.getIndexInfo();
    if (indexes == null) return null;
    for (DBObject index : indexes) {
      BasicDBObject indexKeys = ((BasicDBObject) index.get("key"));
      if (indexKeys != null && (indexKeys.get(ChangeEntry.KEY_CHANGEID) != null && indexKeys.get(ChangeEntry.KEY_AUTHOR) != null)) {
        return index;
      }
    }
    return null;
  }

  public boolean isUnique(DBObject index) {
    Object unique = index.get("unique");
    if (unique != null && unique instanceof Boolean) {
      return (Boolean) unique;
    } else {
      return false;
    }
  }

  public void dropIndex(DBCollection collection, DBObject index) {
    collection.dropIndex(index.get("name").toString());
  }
}
Caused by: java.lang.NoSuchMethodError: com.github.mongobee.dao.ChangeEntryIndexDao.<init>(Ljava/lang/String;)V
at com.github.mongobee.dao.ChangeEntryDao.<init>(ChangeEntryDao.java:34)
at com.github.mongobee.Mongobee.<init>(Mongobee.java:87)
at com.xxx.proj.config.DatabaseConfiguration.mongobee(DatabaseConfiguration.java:62)
at com.xxx.proj.config.DatabaseConfiguration$$EnhancerBySpringCGLIB$$4ae465a5.CGLIB$mongobee$1(<generated>)
at com.xxx.proj.config.DatabaseConfiguration$$EnhancerBySpringCGLIB$$4ae465a5$$FastClassBySpringCGLIB$$f202afb.invoke(<generated>)
at org.springframework.cglib.proxy.MethodProxy.invokeSuper(MethodProxy.java:228)
at org.springframework.context.annotation.ConfigurationClassEnhancer$BeanMethodInterceptor.intercept(ConfigurationClassEnhancer.java:358)
at com.xxx.proj.config.DatabaseConfiguration$$EnhancerBySpringCGLIB$$4ae465a5.mongobee(<generated>)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.springframework.beans.factory.support.SimpleInstantiationStrategy.instantiate(SimpleInstantiationStrategy.java:162)
... 22 common frames omitted
JHipster 5 must be using a different Mongobee version, because I get that error when implementing the above code; judging by the NoSuchMethodError, it's expecting a ChangeEntryIndexDao constructor that takes a String.
This access to system.indexes is a known issue in mongobee. The issue was fixed in the project's source, but the project was abandoned before the fix was ever released.
Due to this project abandonment, two successor libraries have since been forked from mongobee which have fixed this issue: Mongock and mongobeeJ.
Switching your application's dependency from the mongobee library to one of these successor libraries will allow you to run mongobee database migrations on Atlas.
To summarize these libraries:
Mongock - Forked from mongobee in 2018. Actively maintained. Has evolved significantly from the original, including built-in support for Spring, Spring Boot, and both versions 3 & 4 of the Mongo Java driver.
mongobeeJ - Forked from mongobee in 2018. Five updated versions have been released. Minimal evolution from the original mongobee. Mongo Java Driver 4 support was implemented in August, 2020. This project was deprecated in August, 2020, with a recommendation from its creators to use a library such as Mongock instead.
I attempted to set up OFBiz on PostgreSQL, but when I run the server, I get this error when I visit http://server_address:8080/ecommerce/:
org.ofbiz.widget.screen.ScreenRenderException: Error rendering screen [component://ecommerce/widget/CommonScreens.xml#main]: java.lang.IllegalArgumentException: Error running script at location [component://order/webapp/ordermgr/WEB-INF/actions/entry/catalog/Category.groovy]: org.ofbiz.entity.GenericEntityException: org.ofbiz.entity.transaction.GenericTransactionException: The current transaction is marked for rollback, not beginning a new transaction and aborting current operation; the rollbackOnly was caused by: Failure in findByCondition operation for entity [ProdCatalogCategory]: org.ofbiz.entity.GenericEntityException: org.postgresql.Driver (org.postgresql.Driver). Rolling back transaction.org.ofbiz.entity.GenericEntityException: org.postgresql.Driver (org.postgresql.Driver) (org.postgresql.Driver (org.postgresql.Driver)) (The current transaction is marked for rollback, not beginning a new transaction and aborting current operation; the rollbackOnly was caused by: Failure in findByCondition operation for entity [ProdCatalogCategory]: org.ofbiz.entity.GenericEntityException: org.postgresql.Driver (org.postgresql.Driver). Rolling back transaction.org.ofbiz.entity.GenericEntityException: org.postgresql.Driver (org.postgresql.Driver) (org.postgresql.Driver (org.postgresql.Driver))) (Error running script at location [component://order/webapp/ordermgr/WEB-INF/actions/entry/catalog/Category.groovy]: org.ofbiz.entity.GenericEntityException: org.ofbiz.entity.transaction.GenericTransactionException: The current transaction is marked for rollback, not beginning a new transaction and aborting current operation; the rollbackOnly was caused by: Failure in findByCondition operation for entity [ProdCatalogCategory]: org.ofbiz.entity.GenericEntityException: org.postgresql.Driver (org.postgresql.Driver). Rolling back transaction.org.ofbiz.entity.GenericEntityException: org.postgresql.Driver (org.postgresql.Driver) (org.postgresql.Driver (org.postgresql.Driver)) (The current transaction is marked for rollback, not beginning a new transaction and aborting current operation; the rollbackOnly was caused by: Failure in findByCondition operation for entity [ProdCatalogCategory]: org.ofbiz.entity.GenericEntityException: org.postgresql.Driver (org.postgresql.Driver). Rolling back transaction.org.ofbiz.entity.GenericEntityException: org.postgresql.Driver (org.postgresql.Driver) (org.postgresql.Driver (org.postgresql.Driver))))
However, I have no idea what it means. I set up a basic database called ofbiz with the owner as the ofbiz user, then ran ./ant load-demo to populate the database with the demo data. My operating system is Debian GNU/Linux 7.2. I should also note that I'm a newcomer to PostgreSQL, OFBiz, and Java. I'm sorry about the formatting; this is how it came.
Your question is not clear, but I am assuming the PostgreSQL driver is missing. By default OFBiz doesn't come with the PostgreSQL driver, so you need to download it. Go to the OFBiz directory and run the following command:
ant download-PG-JDBC on Windows, or ./ant download-PG-JDBC on Linux.
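For context, a sketch (not taken from your setup) of what a PostgreSQL datasource in framework/entity/config/entityengine.xml typically contains: the jdbc-driver class named there is what the entity engine fails to load when the driver jar is missing, which is why "org.postgresql.Driver" keeps appearing in the error above. Attribute values are illustrative and should match your own database name and credentials:
<datasource name="localpostgres" field-type-name="postgres" ...>
    <!-- This class must be on the classpath; ant download-PG-JDBC fetches the jar. -->
    <inline-jdbc
        jdbc-driver="org.postgresql.Driver"
        jdbc-uri="jdbc:postgresql://127.0.0.1/ofbiz"
        jdbc-username="ofbiz"
        jdbc-password="ofbiz"/>
</datasource>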
I am using WSO2 Governance Registry v4.6.0 and am trying to migrate the structure I have in an H2-backed test instance to a Postgres-backed production instance (separate VMs for the web server and database) using the check-in client.
I have successfully checked out the registry from the H2 instance, but I am struggling to check it in to the Postgres system.
On the test instance I ran
./checkin-client.sh co https://localhost:9443/registry -u admin -p admin -f /../../../registry_checkout/registry.dump
to create the dump.
On the production system I executed
./checkin-client.sh ci https://arc-gov:9443/registry -u admin -p admin -f /../registry.dump
and got the error below. (And yes, I know the password is the same; it will change when I get it to work!) The URL here is that of the WSO2 web server, not the Postgres database.
Any help would be much appreciated.
[2014-10-09 10:34:05,672] ERROR - Error in restoring the path. Make sure the registry is up and running Or the username, password is correct! and check the user have the WRITE permission to the path.
path: /
registry url: https://arc-gov:9443/registry
username: admin {org.wso2.registry.checkin.Client}
org.wso2.carbon.registry.synchronization.SynchronizationException: message code: ERROR_IN_RESTORING, parameters: {path: /, registry url: https://arc-gov:9443/registry, username: admin
at org.wso2.carbon.registry.synchronization.operation.CheckInCommand.restoreFromFile(CheckInCommand.java:207)
at org.wso2.carbon.registry.synchronization.operation.CheckInCommand.execute(CheckInCommand.java:164)
at org.wso2.registry.checkin.Checkin.execute(Checkin.java:70)
at org.wso2.registry.checkin.Checkin.execute(Checkin.java:56)
at org.wso2.registry.checkin.Client.execute(Client.java:272)
at org.wso2.registry.checkin.Client.start(Client.java:67)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.wso2.carbon.bootstrap.Bootstrap.loadClass(Bootstrap.java:63)
at org.wso2.carbon.bootstrap.CheckinClientBootstrap.main(CheckinClientBootstrap.java:36)
Caused by: org.wso2.carbon.registry.core.exceptions.RegistryException: Restoring to / failed.
at org.wso2.carbon.registry.app.RemoteRegistry.restore(RemoteRegistry.java:1725)
at org.wso2.carbon.registry.app.RemoteRegistry.restore(RemoteRegistry.java:1665)
at org.wso2.carbon.registry.synchronization.operation.CheckInCommand.restoreFromFile(CheckInCommand.java:198)
... 11 more
WSO2 Governance Registry does not support check-out/check-in from top-level collection paths [1] (i.e. /_system/governance/ and /_system/config/).
Instead, we recommend that you check out and check in from child collection paths, as in the sketch below.
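For example (the child path and dump file name here are illustrative), checking out from the test instance and checking in to the production instance at a child collection rather than at the root might look like:
./checkin-client.sh co https://localhost:9443/registry/_system/governance -u admin -p admin -f governance.dump
./checkin-client.sh ci https://arc-gov:9443/registry/_system/governance -u admin -p admin -f governance.dump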
There also seems to be an issue with dumping remote registry path collections on G-Reg 4.6.0 [2].
You can find the fix attached to [2].
[1] https://docs.wso2.com/display/Governance460/Check-in+Client+Examples
[2] https://wso2.org/jira/browse/REGISTRY-2044