What should I do when system tables of PostgreSQL are damaged?

My computer shut down because of a power failure, and an error appeared in the log after I restarted the database:
ERROR: invalid page header in block 27073 of relation base/263742/11768.
I found out that the damaged relation is 'pg_class' by executing this command:
oid2name -H 127.0.0.1 -p 5432 -U postgres -f 11768
From database "postgres":
Filenode Table Name
----------------------
11768 pg_class
So, what should I do to recover my database as soon as possible? Thank you.
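A commonly suggested first step (this is not from the original thread, and the data directory path below is an assumption) is to stop the server and take a cold file-level copy before attempting any repair, because the repair options are destructive:
pg_ctl -D /var/lib/postgresql/data stop
cp -a /var/lib/postgresql/data /var/lib/postgresql/data.bak
Only then would one consider the zero_damaged_pages developer option, which zeroes the broken page (losing whatever rows were on it) so the rest of the relation can be read and dumped:
SET zero_damaged_pages = on;   -- superuser only; data on the damaged page is lost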

Related

Flyway not able to find role with postgres docker

I am trying to run my first Flyway example using the Docker postgres image, but I am getting the following error:
INFO: Flyway Community Edition 6.4.2 by Redgate
Exception in thread "main" org.flywaydb.core.internal.exception.FlywaySqlException:
Unable to obtain connection from database (jdbc:postgresql://localhost/flyway-service) for user 'flyway-service': FATAL: role "flyway-service" does not exist
-------------------------------------------------------------------------------------------------------------------------------------------------------------
SQL State : 28000
Error Code : 0
Message : FATAL: role "flyway-service" does not exist
at org.flywaydb.core.internal.jdbc.JdbcUtils.openConnection(JdbcUtils.java:65)
at org.flywaydb.core.internal.jdbc.JdbcConnectionFactory.<init>(JdbcConnectionFactory.java:80)
I looked inside the Docker container and can see that the role flyway-service is created as part of the docker-compose run:
$ docker exec -it flywayexample_postgres_1 bash
root@b2037e382112:/# psql -U flyway-service;
psql (12.2 (Debian 12.2-2.pgdg100+1))
Type "help" for help.
flyway-service=# \du;
List of roles
Role name | Attributes | Member of
----------------+------------------------------------------------------------+-----------
flyway-service | Superuser, Create role, Create DB, Replication, Bypass RLS | {}
flyway-service=#
Main class is:
public static void main( String[] args ) {
    var flyway = Flyway.configure().schemas("flyway_test_schema")
            .dataSource("jdbc:postgresql://localhost/flyway-service", "flyway-service",
                    "password")
            .load()
            .migrate();
    System.out.println( "Flyway example's hello world!" );
}
}
The migration called src/main/resources/db/migration/V1__Create_person_table.sql:
create table PERSON (
ID int not null,
NAME varchar(100) not null
);
Docker-compose yml file:
version: "3.8"
services:
postgres:
image: postgres:12.2
ports: ["5432:5432"]
environment:
- POSTGRES_PASSWORD=password
- POSTGRES_USER=flyway-service
I am running this code on macOS. I assume I am missing something obvious here, but I'm not sure what! Any pointers would be appreciated.
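One quick check (not in the original post) is to open the same host and port the app uses with psql from the host machine, outside the container; if this fails with the same "role does not exist" error, whatever is answering on 5432 is not the compose container:
psql -h localhost -p 5432 -U flyway-service -c 'select 1'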
Finally managed to figure out the issue with the help of a friend! The problem was not with the attached code but with a Postgres daemon from an old Postgres installation running on the same port, 5432. I found the complete uninstallation procedure here. After removing the extra daemon, only one process is listening on the port:
$ lsof -n -i4TCP:5432
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
com.docke 654 root 50u IPv6 0x7ae1b5f8fbcf1cb 0t0 TCP *:postgresql (LISTEN)
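For what it's worth, if the stray daemon came from a Homebrew install (an assumption; the post does not say how the old copy was installed), stopping and removing it looks like this, after which lsof should show only the Docker listener:
brew services stop postgresql
brew uninstall postgresql
lsof -n -i4TCP:5432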

PgBouncer: Can't connect to the right DB

I'm facing an issue. I've installed PgBouncer on a production server, which also hosts an Odoo instance and PostgreSQL.
In my logs, I'm seeing this:
2018-09-10 16:39:16.389 10123 WARNING C-0x1eb5478:
(nodb)/(nouser)#unix(18272):6432 pooler error: no such database: postgres
2018-09-10 16:39:16.389 10123 LOG C-0x1eb5478: (nodb)/(nouser)#unix(18272):6432 login failed: db=postgres user=oerppreprod
Here is the current PgBouncer config:
pgbouncer_archive = host=127.0.0.1 port=5432 dbname=archive
admin_users = postgres
ignore_startup_parameters = extra_float_digits
Everything else is the default config (I've only added/edited the lines above).
Why is it trying to connect to the postgres database?
When I go back to the previous setup (without PgBouncer, just swapping from port 6432 back to 5432), everything works.
Any ideas?
Thanks in advance!
I had the same issue; maybe this will be useful to somebody.
I solved it with a few steps:
First, some background: at the beginning of every request, your framework or PDO (or similar) runs an initial query against the postgres database to check that the database you are asking for exists, which is why PgBouncer needs a postgres entry to process your request.
I removed the "user=project_user password=mytestpassword" part from the line in the [databases] section of the pgbouncer.ini file. As far as I tested, if you remove this part, PgBouncer falls back to your configured auth; in my case, that was the userlist.txt file.
I added the line "postgres = host=127.0.0.1 port=5432 dbname=postgres":
[databases]
postgres = host=127.0.0.1 port=5432 dbname=postgres
my_database = host=127.0.0.1 port=5432 dbname=my_database
My userlist.txt file looks like this (I am using auth_type = md5, so my password is stored as an md5 hash):
"my_user" "md5passwordandsoelse"
I have added my admin users to my pgbouncer.ini file:
admin_users = postgres, my_user
After all these manipulations, I advise you to check which user you are running queries as, using this simple query:
select current_user;
This query should return the username you selected (in my case, my_user).
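You can also inspect what PgBouncer itself sees by connecting to its admin console as one of the admin_users configured above (that user must also be present in your auth file):
$ psql -p 6432 -U my_user pgbouncer
pgbouncer=# SHOW DATABASES;
pgbouncer=# SHOW USERS;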
P.S. I should also mention that I used 127.0.0.1 because my PgBouncer is installed on the same server as Postgres.

FATAL: GTM error, could not obtain snapshot. Current XID = 0, Autovac = 0

After running virtually any command (e.g. createdb test) from the GTM host inside the pgxc_ctl tool, I get the following error:
FATAL: GTM error, could not obtain snapshot. Current XID = 0, Autovac = 0
Postgres-XL is configured and installed on 4 nodes (/etc/hosts):
172.23.102.115 coordinator001
172.23.102.116 datanode001
172.23.103.0 datanode002
172.23.102.114 gtm001
Each node can SSH into the others, and pg_hba.conf contains:
host all all 0.0.0.0/0 trust
The GTM node has this configuration.
I would appreciate any tips or ideas on where to dig next.
[edit]
I get this error when connecting directly:
psql -h coordinator001 -p 6668 -U postgres testdb
psql: FATAL: GTM error, could not obtain snapshot. Current XID = 0, Autovac = 0
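If it helps anyone digging into the same error, two things worth checking (a sketch; the data directory paths are assumptions) are whether the GTM is actually running on gtm001 and whether the coordinator's postgresql.conf points at it via gtm_host/gtm_port:
gtm_ctl status -Z gtm -D /var/lib/pgxc/gtm                       # on gtm001
grep -E 'gtm_(host|port)' /var/lib/pgxc/coord/postgresql.conf    # on coordinator001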

pg_restore complains about integrity errors on a dump. Is that even possible?

I have dumped an OpenERP DB like this:
pg_dump -Fc -xO -f o7db.dump o7db
The source machine has:
$ pg_dump --version
pg_dump (PostgreSQL) 9.3.5
Then I scp the dump to a target machine (an OpenVZ container), where pg_restore is:
$ pg_restore --version
pg_restore (PostgreSQL) 9.3.5
I run pg_restore like this:
pg_restore -d o7db -xO -j3 o7db.dump
The only difference I can see is that the postgres user is not the same on both
machines (but that is supposed to be handled by -O). pg_restore complains
about:
pg_restore: [archiver (db)] Error from TOC entry 8561; 0 1161831 TABLE DATA account_move_line manu
pg_restore: [archiver (db)] COPY failed for table "account_move_line": ERROR: value too long for type character varying(64)
CONTEXT: COPY account_move_line, line 172, column name: "<MASKED DATA HERE....>"
This error is issued several times for several tables. After that, many such
errors about missing tuples follow:
pg_restore: [archiver (db)] Error from TOC entry 6784; 2606 1182924 FK CONSTRAINT account_account_currency_id_fkey manu
pg_restore: [archiver (db)] could not execute query: ERROR: insert or update on table "account_account" violates foreign key constraint "account_account_currency_id_fkey"
DETAIL: Key (currency_id)=(1) is not present in table "res_currency".
Command was: ALTER TABLE ONLY account_account
ADD CONSTRAINT account_account_currency_id_fkey FOREIGN KEY (currency_id) REFERENCES re..
I don't see how this is possible, since the source DB seems to be OK.
The restored DB has many empty tables (each one that failed because of too-long
values):
$ psql -d o7db -Ac "select * from account_move_line" | tail -1
(0 rows)
Furthermore, when I run the pg_restore on the same source machine:
pg_restore -d o7db_restore -xO -j3 o7db.dump
Everything works as expected. Not a single warning.
What should I do? What am I doing wrong?
The answer is actually given in Moving PostgreSQL database fails on non-ascii characters with 'value too long'.
It seems the target server creates the DB with a different encoding, so creating the DB with UTF8 before restoring solves the problem.
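Concretely, a minimal sketch of the fix (assuming the source database was UTF8; -T template0 is needed because a database with a non-default encoding cannot be cloned from template1):
createdb -T template0 -E UTF8 o7db
pg_restore -d o7db -xO -j3 o7db.dump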
Credit goes to @habe (https://stackoverflow.com/users/216458/habe).
So I have voted to close my question.

MySQL Workbench Importing

I was trying to import a dump, but I am encountering some errors.
This is my error:
08:49:13 PM Restoring dbDB (contact)
Running: mysql --defaults-extra-file="/tmp/tmpdwf14l/extraparams.cnf" --host=127.0.0.1 --user=root --port=3306 --default-character-set=utf8 --comments
ERROR 1046 (3D000) at line 22: No database selected
Operation failed with exitcode 1
08:49:13 PM Restoring dbDBB (course)
Running: mysql --defaults-extra-file="/tmp/tmpMW20Fb/extraparams.cnf" --host=127.0.0.1 --user=root --port=3306 --default-character-set=utf8
ERROR 1046 (3D000) at line 22: No database selected
Error: You have not selected the default target schema into which to import the data from the dump.
Create a schema/database in MySQL and select that database in MySQL Workbench when importing data from the dump.
Or
You can edit the dump file and prepend SQL statements at the start, something like this:
create database test;
use test;
Solution as per the user's dump file:
--
-- Table structure for table `course`
--
Rewrite it as:
create database test1;
use test1;
--
-- Table structure for table `course`
--
This should do.
The error is because you haven't selected any database. In the dump, right below create schema 'database_name' (or create database 'database_name'), add this: use 'database_name';
Replace database_name with your DB name.
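The same fix works from the command line, for completeness (the database and file names here are illustrative): create the schema first, then pass it as the default database when replaying the dump:
mysql -u root -p -e "CREATE DATABASE course"
mysql -u root -p course < dump.sql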