How to solve the init_by_lua error in Kong? - docker-compose

I have created a docker-compose file to spin up Postgres, kong-migration, and Kong in containers. All the containers were up, and I was able to use Kong for the first time. But from yesterday onwards, I have been getting the error below:
stack traceback:
  [C]: in function 'error'
  /usr/local/share/lua/5.1/kong/cmd/utils/migrations.lua:16: in function 'check_state'
  /usr/local/share/lua/5.1/kong/init.lua:432: in function 'init'
  init_by_lua:3: in main chunk
nginx: [error] init_by_lua error: /usr/local/share/lua/5.1/kong/cmd/utils/migrations.lua:16: Database needs bootstrapping or is older than Kong 1.0.
To start a new installation from scratch, run 'kong migrations bootstrap'.
To migrate from a version older than 1.0, migrate to Kong 1.5.0 first. If you still have 'apis' entities, you can convert them to Routes and Services using the 'kong migrations migrate-apis' command in Kong 1.5.0.

Can't tell without the compose file, but the error is clear: Kong is connecting to an uninitialized database, so it needs bootstrapping with kong migrations bootstrap.
Could it be that the order in which the containers are started is wrong?
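If the compose file never runs the bootstrap, or starts Kong before the migrations finish, a layout along these lines usually fixes it. This is only a sketch, since the original compose file isn't shown; the image tags, credentials, and service names are assumptions:

services:
  kong-database:
    image: postgres:11            # assumed tag
    environment:
      POSTGRES_DB: kong
      POSTGRES_USER: kong
      POSTGRES_PASSWORD: kong
    healthcheck:
      test: ["CMD", "pg_isready", "-U", "kong"]
      interval: 5s
      retries: 10

  # One-shot container: bootstraps the Kong schema, then exits.
  kong-migrations:
    image: kong:2.8               # assumed tag; match your kong service
    command: kong migrations bootstrap
    environment:
      KONG_DATABASE: postgres
      KONG_PG_HOST: kong-database
      KONG_PG_USER: kong
      KONG_PG_PASSWORD: kong
    depends_on:
      kong-database:
        condition: service_healthy

  kong:
    image: kong:2.8
    environment:
      KONG_DATABASE: postgres
      KONG_PG_HOST: kong-database
      KONG_PG_USER: kong
      KONG_PG_PASSWORD: kong
    ports:
      - "8000:8000"
      - "8001:8001"
    depends_on:
      # completion conditions require Docker Compose v2 (the Compose Spec)
      kong-migrations:
        condition: service_completed_successfully

The one-shot kong-migrations service runs the bootstrap exactly once against the database, and the kong service only starts after it has exited successfully, so the startup-order problem goes away.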

Related

Getting error while connecting dbt with postgres

I'm trying to connect my dbt project to a Postgres DB.
I'm putting all the connection credentials in profiles.yml, but it throws the error below when I run dbt debug.
dbt was unable to connect to the specified database.
The database returned the following error:
type object 'PostgresConnectionManager' has no attribute 'retry_connection'
I'm unable to find a solution anywhere.
dbt-core v1.2.0 added the retry_connection method to BaseConnectionManager to support the connection-retry feature. An adapter built against dbt-core v1.2.0 relies on this method and cannot run on dbt-core v1.1.0. Make sure the adapter and dbt-core are on matching versions, for example by upgrading dbt-core to match the adapter.
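For example, in a pip-managed environment (dbt-postgres is the standard package name for the Postgres adapter):

$ pip install --upgrade dbt-core dbt-postgres
$ pip show dbt-core dbt-postgres    # both should report matching minor versions
$ dbt debug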

Fatal error: No default database configured

When I try to migrate my database on Heroku using Vapor, I get the following error when I run heroku run Run -- migrate --env production:
FluentKit/Databases.swift:160: Fatal error: No default database configured.
I checked heroku config, and I created a database before migrating.
Local migration works for me without problems, and I can access the database from database-management software without problems.
Thanks
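For what it's worth, Fluent raises this fatal error when no database has been registered for the default ID, which usually means app.databases.use was never reached in the production environment, e.g. because DATABASE_URL was missing or failed to parse. A minimal sketch of the usual configure.swift wiring, assuming Vapor 4 with FluentPostgresDriver (CreateTodo is a placeholder migration name):

import Vapor
import Fluent
import FluentPostgresDriver
import NIOSSL

public func configure(_ app: Application) throws {
    // If this guard fails in production, no default database is registered
    // and 'migrate' aborts with "No default database configured".
    guard let databaseURL = Environment.get("DATABASE_URL"),
          var postgresConfig = PostgresConfiguration(url: databaseURL) else {
        fatalError("DATABASE_URL is not set or could not be parsed")
    }
    // Heroku Postgres requires TLS, but its certificate fails strict verification.
    var tls = TLSConfiguration.makeClientConfiguration()
    tls.certificateVerification = .none
    postgresConfig.tlsConfiguration = tls
    app.databases.use(.postgres(configuration: postgresConfig), as: .psql)

    app.migrations.add(CreateTodo())   // placeholder: add your real migrations here
}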

Move my Hasura Cloud schema, relations, tables, etc. into my offline Docker setup using docker-compose

So basically I have my cloud Hasura with its existing schema, relations, tables, etc., and I want to run it offline using Docker. I tried using metadata export and import, but that doesn't seem to work. How can I do it, or are there other ways to do it?
This is the Docker setup I want to take offline.
This is the cloud instance whose schemas/metadata I want to get.
Or maybe I should just manually recreate the tables and relations?
When using the steps outlined in the Hasura Quickstart with Docker page, the following steps will get all the table definitions, relationships, etc. set up on the local instance just as they are on the Hasura Cloud instance.
Migrate all the database schema and metadata using the steps described in Setting up migrations.
Since you want to migrate from Hasura Cloud, use the URL of the cloud instance in step 2. Perform steps 3-6 as described in the link above; a sketch of those export commands follows the apply examples below.
Bring up the local Docker environment. Ideally, edit the docker-compose.yaml file to set HASURA_GRAPHQL_ENABLE_CONSOLE: "false" before running docker-compose up -d.
Resume the process of applying migrations from step 7, using the endpoint of the local instance. For example,
$ hasura metadata apply --endpoint http://localhost:8080
$ hasura migrate apply --endpoint http://localhost:8080
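For the export side (steps 2-6 in the linked guide), the commands look roughly like this; the project directory name, cloud endpoint, and admin secret are placeholders, and depending on your CLI version you may also need --database-name on the migrate commands:

$ hasura init hasura-local
$ cd hasura-local
$ hasura migrate create "init" --from-server --endpoint https://your-app.hasura.app --admin-secret <admin-secret>
$ hasura metadata export --endpoint https://your-app.hasura.app --admin-secret <admin-secret>

This pulls the current database schema into a migration and the metadata (tracked tables, relationships, permissions) into the local project, which the two apply commands above then push into the Docker instance.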

AWS DMS Streaming replication: Logical Decoding Output Plugin (test_decoding) not accessible

I'm trying to migrate a PostgreSQL DB hosted in the cloud (on a DO droplet) to RDS using AWS Database Migration Service (DMS).
I've successfully configured the replication instance and endpoints.
I've created a task with "Migrate existing data and replicate ongoing changes". When I start the task, it fails with the error ERROR: could not access file "test_decoding": No such file or directory.
When I try to create a replication slot manually from my DB console, it throws the same error.
I've followed the procedure suggested in the DMS documentation for Postgres.
I'm using PostgreSQL 9.4.6 on my source endpoint.
I presume the problem is that the output plugin test_decoding is not accessible for replication.
Please assist me in resolving this. Thanks in advance!
You must install the postgresql-contrib additional supplied modules on your source endpoint.
If it is installed, make sure the directory where the test_decoding module is located is the one where PostgreSQL expects it.
On *nix, you can check the module directory with the command:
pg_config --pkglibdir
If it is not the same, copy the module, make a symlink, or use whatever other solution you prefer.
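For example, on a Debian/Ubuntu source host with PostgreSQL 9.4 (the package name is an assumption and varies per distribution):

$ sudo apt-get install postgresql-contrib-9.4         # ships test_decoding
$ ls "$(pg_config --pkglibdir)" | grep test_decoding   # the .so should show up here
$ psql -c "SELECT pg_create_logical_replication_slot('dms_check', 'test_decoding');"
$ psql -c "SELECT pg_drop_replication_slot('dms_check');"   # drop the test slot again

If the manual slot creation succeeds, DMS should be able to create its own slots as well.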

Play Framework + Heroku + Postgres not able to connect

I've been having a heck of a time getting my Play! Framework Java app to run on Heroku, and I think I've narrowed it down to the Postgres JDBC driver not liking Heroku's DATABASE_URL parameter, because it starts with postgres: and not postgresql:.
What is the proper way to configure a Play! 2.0 app to connect to a Heroku-provided Postgres instance?
I've tried variations on the following:
PLAY_OPTS="-Ddb.default.url=$DATABASE_URL -Ddb.default.driver=org.postgresql.Driver"
But upon startup I get a SQLException that no suitable driver can be found for $DATABASE_URL.
No need to pass them in as system properties; you can pick up Heroku environment variables in your application.conf file:
...
db.default.driver=org.postgresql.Driver
db.default.url=${DATABASE_URL}
Then define this in your Procfile:
web: target/start -Dhttp.port=${PORT} ${JAVA_OPTS} -Dconfig.resource=application.conf
It should pick up the DATABASE_URL property from the Heroku environment. That said, I recommend creating a configuration file specific to the Heroku environment (e.g. heroku-prod.conf); this is just an example.
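If the driver itself still rejects the postgres: scheme, another option is to spell out the pieces of DATABASE_URL in JDBC form; the host, port, database name, and credentials below are placeholders for the parts of your actual URL:

db.default.driver=org.postgresql.Driver
db.default.url="jdbc:postgresql://host:5432/dbname"
db.default.user=username
db.default.password=password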