Rolled back on heroku, restored db, and now getting a ton of database errors - postgresql

We deployed some code tonight but had to roll back. The rollback affected the database, so we restored our database from a backup we captured right before the deploy. Now we're getting all kinds of database errors, such as:
ActiveRecord::UnknownPrimaryKey: Unknown primary key for table billing_accounts in model BillingAccount.
PG::SyntaxError: ERROR: zero-length delimited identifier at or near """" LINE 1: ... "billing_accounts" ORDER BY "billing_accounts"."" ASC LIM.
PG::InternalError: ERROR: cache lookup failed for type 19005
PG::InFailedSqlTransaction: ERROR: current transaction is aborted, commands ignored until end of transaction block
I have no idea what to do about these errors; we're getting a ton of them. Help!!

Related

Keycloak 18 won't create default tables on Docker when configuring a second DB via quarkus.properties

I am implementing a custom UserProvider SPI for Keycloak 18.0.2 and therefore have an MSSQL database in use alongside Keycloak's default PostgreSQL DB.
The customized Keycloak and the PostgreSQL are run as Docker containers.
The problem occurs on my local MacBook M1 (but shows the same behaviour on an Intel CPU as well). When building and starting the custom Keycloak container, all volumes for both containers are removed, so there is always a fresh DB container.
(Side note: as the SPI was written for WildFly and is broken with 19.x.x, I just stepped back to 18.0.2 to get the whole process working again. Afterwards I will update to 19 and adapt the SPI implementations.)
The problem ...
Keycloak will create all tables for the default Keycloak DB (PostgreSQL) in the public schema ONLY IF I configure the connection to the MSSQL via persistence.xml. That is not acceptable for the production setup, as the connection should at least be configurable by the GitLab pipeline.
If I move the connection info from persistence.xml to quarkus.properties (as described here: https://github.com/keycloak/keycloak-quickstarts/tree/main/user-storage-jpa), the default DB tables can't be created anymore...
Logs in the Postgres container:
LOG: database system is ready to accept connections
ERROR: relation "migration_model" does not exist at character 25
STATEMENT: SELECT ID, VERSION FROM MIGRATION_MODEL ORDER BY UPDATE_TIME DESC
ERROR: syntax error at end of input at character 20
STATEMENT: call current_schema
ERROR: current transaction is aborted, commands ignored until end of transaction block
STATEMENT: SELECT COUNT(*) FROM DATABASECHANGELOG
ERROR: syntax error at end of input at character 20
STATEMENT: call current_schema
ERROR: current transaction is aborted, commands ignored until end of transaction block
STATEMENT: SELECT COUNT(*) FROM DATABASECHANGELOGLOCK
ERROR: syntax error at end of input at character 20
STATEMENT: call current_schema
ERROR: current transaction is aborted, commands ignored until end of transaction block
STATEMENT: CREATE TABLE DATABASECHANGELOGLOCK (ID INT NOT NULL, "LOCKED" BOOLEAN NOT NULL, LOCKGRANTED datetime, LOCKEDBY VARCHAR(255), CONSTRAINT PK_DATABASECHANGELOGLOCK PRIMARY KEY (ID))
ERROR: syntax error at end of input at character 20
Keycloak logs:
WARN [liquibase.database.DatabaseFactory] (main) Unknown database: PostgreSQL
WARN [org.keycloak.connections.jpa.updater.liquibase.lock.CustomLockService] (main) Failed to create lock table. Maybe other transaction created in the meantime. Retrying...
ERROR [org.keycloak.quarkus.runtime.cli.ExecutionExceptionHandler] (main) ERROR: Failed to start server in (development) mode
Does using quarkus.properties override some Keycloak defaults, so that Keycloak acts differently than it does without a custom Quarkus file?
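For reference, the quickstart linked above moves the connection info into conf/quarkus.properties as a named Quarkus datasource, roughly along these lines (a sketch only; the user-store datasource name, credentials and JDBC URL are illustrative placeholders, not my real values):
# conf/quarkus.properties (sketch, all values are placeholders)
quarkus.datasource.user-store.db-kind=mssql
quarkus.datasource.user-store.username=sa
quarkus.datasource.user-store.password=changeme
quarkus.datasource.user-store.jdbc.url=jdbc:sqlserver://mssql:1433;databaseName=userstore
The default Keycloak database (the PostgreSQL one) is still configured separately via KC_DB / KC_DB_URL, so in theory the named datasource above should not affect it.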

Data corrupted in Postgres - right sibling's left-link doesn't match: block 9550 links to 12028 instead of expected 12027 in index "log_attach_id_idx"

I am new to Postgres and we are using it for test reports. We had an issue with our environment that inserted duplicate keys into one of the tables, and since then we are getting this message when trying to run migration scripts:
error: migration failed: right sibling's left-link doesn't match: block 9550 links to 12028 instead of expected 12027 in index "log_attach_id_idx" in line 0: UPDATE log SET project_id = (SELECT project_id FROM item_project WHERE item_project.item_id=log.item_id LIMIT 1); (details: pq: right sibling's left-link doesn't match: block 9550 links to 12028 instead of expected 12027 in index "log_attach_id_idx")
I tried to run pg_dump and got this error:
pg_dump: error: query was: SELECT pg_catalog.pg_get_viewdef('457544'::pg_catalog.oid) AS viewdef
pg_dumpall: error: pg_dump failed on database "reportportal", exiting
Can anyone help here?
Restore your backup, and research what parameters you changed and what you did to end up with data corruption in the first place.
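If restoring a backup is really not possible, rebuilding the damaged index is sometimes tried first (a sketch; this only repairs the index structure and does nothing about the duplicate rows that caused the corruption, so the restore remains the safer path):
-- Sketch: rebuild only the index named in the error message.
REINDEX INDEX log_attach_id_idx;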

Is it possible to restore db from WALs twice?

I have a main database server whose WALs are periodically archived to S3. So S3 has a 'snapshot' of the database together with all the corresponding latest WALs.
I have another (local) database server that I want to periodically bring up to date with the state of the main database server.
So I once copied the "main" directory from S3 and applied all the WALs from S3 by using recovery.conf.
The only thing I've changed in this file is:
restore_command = 'aws s3 cp s3://%bucketName%/database/pg_wal/%f %p'
It was successful.
After some time I want to apply all the latest WALs from S3 again, to be "more synchronized" with the main database server. Is it possible to do that somehow? I know for certain that I did not make any updates or writes on my "copied" database server. When I try to do it in exactly the same way as before, I get the following errors (from stderr):
fatal error: An error occurred (404) when calling the HeadObject
operation: Key "database/pg_wal/00000001000001EF0000001F" does not
exist
fatal error: An error occurred (404) when calling the HeadObject
operation: Key "database/pg_wal/00000002.history" does not exist
fatal error: An error occurred (404) when calling the HeadObject
operation: Key "database/pg_wal/00000001.history" does not exist
fatal error: An error occurred (403) when calling the HeadObject
operation: Forbidden
fatal error: An error occurred (403) when calling the HeadObject
operation: Forbidden
fatal error: An error occurred (403) when calling the HeadObject
operation: Forbidden
fatal error: An error occurred (403) when calling the HeadObject
operation: Forbidden
fatal error: An error occurred (403) when calling the HeadObject
operation: Forbidden
This is a more detailed description of my procedure:
I have two directories on S3: basebackup and pg_wal. basebackup contains the base, global, pg_logical, pg_multixact and pg_xact directories plus the PG_VERSION and backup_label files.
When I recover it the first time, I do the following:
1. Stop PostgreSQL
2. aws s3 sync s3://%bucketname%/basebackup ~/10/main
3. mkdir the empty directories in ~/10/main
4. Copy recovery.conf.sample into ~/10/main/recovery.conf
5. Edit recovery.conf as above
6. Start PostgreSQL
When I do it again after some time, I repeat steps 1, 4, 5 and 6 and get the result described above.
Probably I need to somehow specify the first WAL from the S3 bucket that should be restored, because we already restored some of them before? Or is it impossible at all?
There seems to be a lot wrong with your procedures:
A complete backup does not consist only of the files and directories you list above, but of the complete data directory (pg_wal/pg_xlog can be empty); see the pg_basebackup sketch below.
After the first recovery, PostgreSQL will choose a new timeline, rename backup_label and recovery.conf, and come up as a regular database.
You cannot resume recovering such a database. I don't know what exactly you did to get into recovery mode again, but you must have broken something.
Once a database has finished recovery, the only way to recover further is to restore the initial backup again and recover from the beginning.
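For example, one common way to take such a complete backup of the whole data directory is pg_basebackup, roughly like this (a sketch; host name, user and target directory are placeholders):
pg_basebackup -h main-db-host -U replication_user -D /backup/basebackup -X stream -c fast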
Have you considered using point-in-time recovery with recovery_target_action = 'pause'? Then PostgreSQL will stay in recovery mode, and you can run queries against the database. To continue recovering, define a new recovery target and restart the server.
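On PostgreSQL 10, where the recovery settings still live in recovery.conf, that would look roughly like this (a sketch; the target timestamp is a placeholder you choose yourself):
restore_command = 'aws s3 cp s3://%bucketName%/database/pg_wal/%f %p'
recovery_target_time = '2022-01-01 00:00:00'   # placeholder: the point in time to recover to
recovery_target_action = 'pause'               # stay in recovery instead of promoting
To recover further later, set a later recovery_target_time and restart the server.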

Error with SQL query: The query failed to parse. Exception from HResult: 0x80040E14

I am using SSIS to create a data flow task targeting a PostgreSQL server database.
I get the Error with SQL query:
The query failed to parse. Exception from HResult: 0x80040E14
See the screenshots below:
https://ibb.co/7KcBnMG
https://ibb.co/zR093SQ
The query being tried is:
INSERT INTO public.controlflow_example(rollnumber) VALUES (1)
The connection itself is fine. The schema is public, the table is indeed spelled controlflow_example, and the column is an integer column named rollnumber.
Even using
SELECT *
FROM public.controlflow_example
as an even simpler query gives the same error.
If I try to run the package, it fails with the following error:
SSIS package "C:\Users\AJ\Documents\Visual Studio 2017\Projects\control_flow_example\control_flow_example\Package.dtsx" starting.
Error: 0xC002F210 at Execute SQL Task, Execute SQL Task: Executing the query "INSERT INTO public.controlflow_example(rollnumber)..." failed with the following error: "Exception from HRESULT: 0x80040E14". Possible failure reasons: Problems with the query, "ResultSet" property not set correctly, parameters not set correctly, or connection not established correctly.
Task failed: Execute SQL Task
Warning: 0x80019002 at Package: SSIS Warning Code DTS_W_MAXIMUMERRORCOUNTREACHED. The Execution method succeeded, but the number of errors raised (1) reached the maximum allowed (1); resulting in failure. This occurs when the number of errors reaches the number specified in MaximumErrorCount. Change the MaximumErrorCount or fix the errors.
SSIS package "C:\Users\AJ\Documents\Visual Studio 2017\Projects\control_flow_example\control_flow_example\Package.dtsx" finished: Failure.
The program '[15288] DtsDebugHost.exe: DTS' has exited with code 0 (0x0).
Any advice please?
I have already searched for similar questions here and elsewhere but could not find a resolution.
Thank you.
Maybe you should change the statement to name the column explicitly and quote the value; then it works properly:
INSERT INTO public.controlflow_example (rollnumber) VALUES ('1');
Are you trying to store the result set into an SSIS object? The result error means you did not set up the result properly. If you are not expecting any results back, then set the ResultSet property to None. If you are, then check to make sure you're returning the results correctly.
Check out how to return results https://www.google.com/amp/s/www.red-gate.com/simple-talk/sql/ssis/ssis-basics-using-the-execute-sql-task-to-generate-result-sets/amp/
Good Luck

PowerCenter SQL1224N error connecting to DB2

I'm running a workflow in PowerCenter that is constantly getting an SQL1224N error.
This process executes a query against one table (POLIZA) with 800k rows. It retrieves the first 10k rows and then starts querying another table with 75M rows. At that moment an idle-thread error appears in DB2, but the PowerCenter process keeps running and retrieves the 75M rows. When that finishes (after 20 minutes), the following errors come up, related to the first table:
[IBM][CLI Driver] SQL1224N A database agent could not be started to service a request, or was terminated as a result of a database system shutdown or a force command. SQLSTATE=55032
sqlstate = 40003
[IBM][CLI Driver] SQL1224N A database agent could not be started to service a request, or was terminated as a result of a database system shutdown or a force command. SQLSTATE=55032
sqlstate = 40003
Database driver error...
Function Name : Fetch
SQL Stmt : SELECT POLIZA.BSPOL_BSCODCIA, POLIZA.BSPOL_BSRAMOCO
FROM POLIZA
WHERE
EXA01.POLIZA.BSPOL_IDEMPR='0015' for read only with ur
Native error code = -1224
DB2 Fatal Error].
I have a similar process running against the same 2 tables and it is working fine; the only difference I can see is that the DB2 user is different.
Any idea how I can fix this?
Regards
The common causes for -1224 are:
Your instance or database has crashed, or
Something/somebody is forcing off your application (FORCE APPLICATION or equivalent)
As for the crash, I think you would know by now. This typically requires a database or instance restart. At any rate, can you please have a look into your DIAGPATH to check for any FODC* directories whose timestamp matches the timestamp of the -1224 errors?
As for the FORCE case, you should find some evidence of the -1224 in db2diag.log. Try searching for the decimal -1224, but also for its hex representation (0xFFFFFB38).
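For example, something along these lines can be used to check both (a sketch; the diagnostic directory is a placeholder, use wherever your DIAGPATH actually points):
# look for first-occurrence data capture directories created around the failure
ls -ld /home/db2inst1/sqllib/db2dump/FODC*
# search the diagnostic log for the return code, in decimal and in hex
grep -n -e '-1224' -e '0xFFFFFB38' /home/db2inst1/sqllib/db2dump/db2diag.log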