I am using Docker here to build a Liquibase image and trying to execute Liquibase commands on the fly.
Here is the Dockerfile:
FROM liquibase/liquibase
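# lpm (the Liquibase Package Manager) installs the MySQL extension globally into the image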
RUN lpm add mysql --global
I am running the below command on the fly:
docker run --rm -v ${bamboo.build.working.directory}/:/liquibase/changelog liquibase/liquibase --url="jdbc:postgresql://app-liquibase-test.us-region.rds.amazonaws.com:5432/database?currentSchema=schema" --changeLogFile=sql/changelog/db.changelog-master.xml --username=username --password=password update
I am getting the below error:
error 04-May-2022 22:08:38 Starting Liquibase at 05:08:38 (version 4.9.1 #1978 built at 2022-03-28 19:39+0000)
error 04-May-2022 22:08:38 Liquibase Version: 4.9.1
error 04-May-2022 22:08:38 Liquibase Community 4.9.1 by Liquibase
error 04-May-2022 22:08:48
error 04-May-2022 22:08:48 Unexpected error running Liquibase: Connection could not be created to jdbc:postgresql://app-liquibase-test.us-region.rds.amazonaws.com:5432/database?currentSchema=schema with driver org.postgresql.Driver. The connection attempt failed.
Please suggest a fix.
According to this October 2021 GitHub issue, you need to use single quotes around your JDBC URL instead of double quotes. Try entering it as edited below:
docker run --rm -v ${bamboo.build.working.directory}/:/liquibase/changelog liquibase/liquibase --url='jdbc:postgresql://app-liquibase-test.us-region.rds.amazonaws.com:5432/database?currentSchema=schema' --changeLogFile=sql/changelog/db.changelog-master.xml --username=username --password=password update
Related
I am trying to deploy Hyperledger Iroha on macOS (Big Sur) locally, and while running the following command
./build/bin/irohad --config example/config.postgres.sample --genesis_block example/genesis.block --keypair_name example/node0
I get the error:
Storage initialization failed: Cannot execute query. Fatal error. ERROR: relation "schema_version" does not exist LINE 1: ... test, iroha_major, iroha_minor, iroha_patch from schema_ver... ^ while executing "select 1 test, iroha_major, iroha_minor, iroha_patch from schema_version;".
I have installed PostgreSQL locally and created the iroha_data database.
Is there a schema that I must load additionally, or does it get auto-created?
I was able to overcome this issue by adding -drop_state when starting the Iroha daemon.
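For reference, a sketch of the full start command with that flag appended (the config, genesis block, and key name are copied from the question above; note that, as the name suggests, -drop_state recreates the storage, so any existing ledger state is wiped):
./build/bin/irohad --config example/config.postgres.sample --genesis_block example/genesis.block --keypair_name example/node0 -drop_state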
CentOS 7
Docker 20.10.5
PostgreSQL 9.5 is running on my machine, and I can successfully open my DB at:
localhost:5432/sonar
I can also open the DB successfully with pgAdmin.
Now I have installed SonarQube 4.5 in Docker and want to connect it to my DB.
I tried this:
sudo docker run -e SONARQUBE_JDBC_USERNAME=sonar -e SONARQUBE_JDBC_PASSWORD=sonar -e SONARQUBE_JDBC_URL=jdbc:postgresql://localhost:5432/sonar sonarqube:4.5.7
But I get this error:
2021.04.20 11:47:55 INFO web[o.s.s.p.ServerImpl] SonarQube Server / 4.5.7 / e2afb0bff1b8be759789d2c1bc9348de6f519f83
2021.04.20 11:47:55 INFO web[o.s.c.p.Database] Create JDBC datasource for jdbc:postgresql://localhost:5432/sonar
2021.04.20 11:47:55 ERROR web[o.a.c.c.C.[.[.[/]] Exception sending context initialized event to listener instance of class org.sonar.server.platform.PlatformServletContextListener
java.lang.IllegalStateException: Can not connect to database. Please check connectivity and settings (see the properties prefixed by 'sonar.jdbc.').
at org.sonar.core.persistence.DefaultDatabase.checkConnection(DefaultDatabase.java:115) ~[sonar-core-4.5.7.jar:na]
at org.sonar.core.persistence.DefaultDatabase.start(DefaultDatabase.java:73) ~[sonar-core-4.5.7.jar:na]
I am using the Zalando postgres-operator to set up backup and restore in a Kubernetes cluster. The backups are done using wal-g and uploaded to a GCP bucket. The backups are running fine:
wal-g backup-list
name last_modified wal_segment_backup_start
base_000000020000000000000007 2019-12-19T03:56:22Z 000000020000000000000007
base_00000003000000000000000A 2019-12-19T04:34:41Z 00000003000000000000000A
base_000000010000000000000003 2019-12-19T04:40:01Z 000000010000000000000003
However, when I try to restore, I get the below error:
wal-g backup-fetch /home/postgres/pgdata/pgroot/data LATEST
INFO: 2019/12/19 10:06:24.765611 LATEST backup is: 'base_000000010000000000000003'
ERROR: 2019/12/19 10:06:25.062066 Failed to fetch backup: failed to fetch sentinel: context canceled
Patroni 1.6.0
Postgres 11
wal-g 0.2.11
This was a bug in wal-g version 0.2.11. It is fixed in 0.2.14.
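With the Zalando operator, wal-g usually ships inside the Spilo/Postgres container image, so upgrading typically means moving to an image that bundles wal-g 0.2.14 or newer (how you upgrade depends on how wal-g was installed in your setup). After that, re-running the same commands from above is a quick sanity check:
wal-g backup-list
wal-g backup-fetch /home/postgres/pgdata/pgroot/data LATEST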
I simply want to create a user in OrientDB 3.0.0m2 using the console.sh.
As said in the documentation, I should be able to execute SQL statements.
Here is my use case.
In terminal 1, run the following:
docker run -it --name orientdb -p "2424:2424" -e ORIENTDB_ROOT_PASSWORD=rootpass orientdb:3.0.0m2
In terminal 2, run the following:
docker run -it --rm --link orientdb:orientdb orientdb:3.0.0m2 /orientdb/bin/console.sh
OrientDB console v.3.0.0m2 (build 4abea780acc12595bad8cbdcc61ff96980725c3b) https://www.orientdb.com
Type 'help' to display all the supported commands.
orientdb> CREATE DATABASE remote:orientdb/db root rootpass PLOCAL
Creating database [remote:orientdb/db] using the storage type [PLOCAL]...SLF4J: Failed to load class "org.slf4j.impl.StaticLoggerBinder".
SLF4J: Defaulting to no-operation (NOP) logger implementation
SLF4J: See http://www.slf4j.org/codes.html#StaticLoggerBinder for further details.
Connecting to database [remote:orientdb/db] with user 'root'...OK
Database created successfully.
Current database is: remote:orientdb/db
orientdb {db=db}> CREATE USER myuser IDENTIFIED BY userpass ROLE admin
Error: com.orientechnologies.orient.core.sql.OCommandSQLParsingException: Error parsing query:
^
Encountered " <CREATE> "create "" at line 1, column 1.
Was expecting one of:
<IF> ...
<FOREACH> ...
";" ...
<IF> ...
DB name="db"
Error Code="1"
DB name="db"
orientdb {db=db}>
It seems that console.sh doesn't understand the SQL command CREATE USER. Is this a bug, or am I doing things the wrong way?
It was fixed a few hours ago, see https://github.com/orientechnologies/orientdb/issues/7898
You can find the fix in the latest snapshot: https://oss.sonatype.org/content/repositories/snapshots/com/orientechnologies/orientdb-community/3.0.0-SNAPSHOT/
I am attempting to deploy a Flask app to Heroku. I have pushed to Heroku and can access my login page, but any call to the DB gives an OperationalError:
2014-01-29T12:12:31.801772+00:00 app[web.1]: OperationalError: (OperationalError) no such table: projects_project u'SELECT
Using Flask-Migrate, I can successfully run local migrations and upgrades:
INFO [alembic.migration] Context impl PostgresqlImpl.
INFO [alembic.migration] Will assume transactional DDL.
INFO [alembic.migration] Running upgrade None -> 4b56d58e1d4c, empty message
When I try to upgrade on Heroku using heroku run python manage.py db upgrade, the upgrade appears to happen, but the Context impl is now SQLite:
(dev01)Toms-MacBook-Pro:dev01 kt$ heroku run python manage.py db upgrade
Running `python manage.py db upgrade` attached to terminal... up, run.9069
INFO [alembic.migration] Context impl SQLiteImpl.
INFO [alembic.migration] Will assume non-transactional DDL.
INFO [alembic.migration] Running upgrade None -> 4b56d58e1d4c, empty message
Running heroku pg:info gives:
=== HEROKU_POSTGRESQL_PINK_URL (DATABASE_URL)
Plan: Dev
Status: available
Connections: 0
PG Version: 9.3.2
Created: 2014-01-27 18:55 UTC
Data Size: 6.4 MB
Tables: 0
Rows: 0/10000 (In compliance)
Fork/Follow: Unsupported
Rollback: Unsupported
The relevant logs for the Heroku upgrade are:
2014-01-29T12:55:40.112436+00:00 heroku[api]: Starting process with command `python manage.py db upgrade` by kt#gmail.com
2014-01-29T12:55:44.638957+00:00 heroku[run.9069]: Awaiting client
2014-01-29T12:55:44.667692+00:00 heroku[run.9069]: Starting process with command `python manage.py db upgrade`
2014-01-29T12:55:44.836337+00:00 heroku[run.9069]: State changed from starting to up
2014-01-29T12:55:46.643857+00:00 heroku[run.9069]: Process exited with status 0
2014-01-29T12:55:46.656134+00:00 heroku[run.9069]: State changed from up to complete
Also, heroku config gives me:
(dev01)Toms-MacBook-Pro:dev01 kt$ heroku config
=== myapp Config Vars
DATABASE_URL: postgres://xxx.compute-1.amazonaws.com:5432/da0jtkatk6057v
HEROKU_POSTGRESQL_PINK_URL: postgres://xxx.compute-1.amazonaws.com:5432/da0jtkatk6057v
(where the two xxx values are identical)
How is the Context impl set? Apart from this obvious difference between the working local setup and Heroku, I can't work out what's happening or how I should debug it. Thanks.
The URL for the database is taken from the SQLALCHEMY_DATABASE_URI configuration in your Flask app instance. This happens in the env.py configuration for Alembic that was created in the migrations folder.
Are you storing the value of os.environ['DATABASE_URL'] in the configuration before you hand over control to Flask-Migrate and Alembic? It seems you have a default SQLite-based database that never gets overwritten with the real one provided by Heroku.
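If it helps to confirm what the dyno actually sees, a quick check from the Heroku side (this only assumes the app and CLI already shown above; the exact commands are just an illustration):
heroku run bash
# then, inside the one-off dyno:
echo $DATABASE_URL
python -c 'import os; print(os.environ.get("DATABASE_URL"))'
If that prints the postgres:// URL from heroku config, then the app configuration is simply never copying DATABASE_URL into SQLALCHEMY_DATABASE_URI before Flask-Migrate is initialised, so Alembic falls back to the SQLite default.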