Scala Flyway migration marked as Success but there are no changes - scala

I'm new to Flyway. We are currently using version 6.5.7 of "org.flywaydb" % "flyway-core" in our build.sbt. The versioned migration extends BaseJavaMigration.
The migration iterates over all rows in a table and adds a field to a JSON blob column. When I run the migration, I first execute info to verify the migration is noticed, then migrate, and finally info again to confirm the status of the migration.
I can confirm the migration is picked up and listed as Pending during the first info. But after the migration is executed, no changes are applied, and I also don't see any INFO-level logs from Logback, which indicates that the migration never actually entered its business logic.
The final info command then marks the migration as Success.
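For context, the migration looks roughly like the sketch below (a minimal, hypothetical version: the table, column, and field names are placeholders and the JSON handling is simplified; it is not the actual production code):

import org.flywaydb.core.api.migration.{BaseJavaMigration, Context}
import org.slf4j.LoggerFactory

// Hypothetical sketch of the versioned migration: iterate over all rows and
// add a field to the JSON blob column, logging progress at INFO level.
class V42__MyMigrationName extends BaseJavaMigration {
  private val log = LoggerFactory.getLogger(getClass)

  override def migrate(context: Context): Unit = {
    // Flyway supplies the connection it runs the migration on.
    val connection = context.getConnection
    val rows = connection.createStatement()
      .executeQuery("SELECT id, payload FROM my_table")
    val update = connection.prepareStatement(
      "UPDATE my_table SET payload = ? WHERE id = ?")
    while (rows.next()) {
      val id = rows.getLong("id")
      log.info(s"Patching row $id") // the INFO logs I expect to see via Logback
      update.setString(1, addField(rows.getString("payload")))
      update.setLong(2, id)
      update.executeUpdate()
    }
  }

  // Placeholder JSON transformation; the real migration uses a proper JSON library.
  private def addField(json: String): String =
    if (json.trim == "{}") """{"newField":null}"""
    else json.stripSuffix("}") + ""","newField":null}"""
}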
Here are some logs:
2021-11-29 16:33:35 +0000 | Versioned | 42 | my migration name | JDBC | | Pending |
2021-11-29 16:33:35 +0000 +------------+---------+--------------------------------------------------------------+------+---------------------+------------+
2021-11-29 16:33:35 +0000 dt=2021-11-29T16:33:35.138Z level=INFO thread=main logger=DbValidate Successfully validated 208 migrations (execution time 00:00.051s)
2021-11-29 16:33:35 +0000 dt=2021-11-29T16:33:35.147Z level=INFO thread=main logger=DbMigrate Current version of schema "public": 41
2021-11-29 16:33:35 +0000 dt=2021-11-29T16:33:35.151Z level=INFO thread=main logger=DbMigrate Migrating schema "public" to version 42 - my migration name
2021-11-29 16:33:35 +0000 dt=2021-11-29T16:33:35.396Z level=INFO thread=main logger=DbMigrate Successfully applied 1 migration to schema "public" (execution time 00:00.255s)
2021-11-29 16:33:35 +0000 dt=2021-11-29T16:33:35.498Z level=INFO thread=main logger=migrations Schema version: org.flywaydb.core.internal.info.MigrationInfoImpl@fac39b2c
2021-11-29 16:33:35 +0000 dt=2021-11-29T16:33:35.503Z level=INFO thread=main logger=migrations +------------+---------+--------------------------------------------------------------+------+---------------------+------------+
2021-11-29 16:33:35 +0000 | Versioned | 42 | my migration name | JDBC | 2021-11-29 16:33:35 | Success |
It's super strange, and I'm not sure how to go about finding what could be causing this. I'm highly suspicious about the execution time (execution time 00:00.255s); can you even open a DB connection that quickly? Any suggestions on how I can investigate this further?

Related

PostgreSQL - Recovery process triggering after startup

I'm trying to run PostgreSQL 9.5 on Ubuntu 16.04. Our PostgreSQL setup looks as follows:
PostgreSQL Primary (9.5 on Ubuntu 16.04, Docker container) --rsync--> Walstore (backup server, Docker Container) --rsync--> PostgreSQL Standby (v9.5 on Ubuntu 16.04, Docker container).
If the Primary crashes, we copy the Standby PostgreSQL data (which is in sync with the Primary PostgreSQL data, excluding only the recovery.conf file) to the Primary PostgreSQL data directory and start the Primary PostgreSQL server again.
After the Primary PostgreSQL server is up and running again, a recovery process is triggered, as the following logs show:
postgresql-docker-primary-1 | * Starting PostgreSQL 9.5 database server
postgresql-docker-primary-1 | ...done.
postgresql-docker-primary-1 | 2022-11-28 10:14:59 UTC [27-1] LOG: database system was shut down in recovery at 2022-11-28 10:11:14 UTC
postgresql-docker-primary-1 | 2022-11-28 10:14:59 UTC [27-2] LOG: database system was not properly shut down; automatic recovery in progress
postgresql-docker-primary-1 | 2022-11-28 10:14:59 UTC [27-3] LOG: redo starts at 0/1700DB8
postgresql-docker-primary-1 | 2022-11-28 10:14:59 UTC [27-4] LOG: invalid record length at 0/3000060
postgresql-docker-primary-1 | 2022-11-28 10:14:59 UTC [27-5] LOG: redo done at 0/3000028
postgresql-docker-primary-1 | 2022-11-28 10:14:59 UTC [27-6] LOG: last completed transaction was at log time 2022-11-28 10:04:37.388433+00
postgresql-docker-primary-1 | 2022-11-28 10:14:59 UTC [27-7] LOG: MultiXact member wraparound protections are now enabled
postgresql-docker-primary-1 | 2022-11-28 10:14:59 UTC [26-1] LOG: database system is ready to accept connections
postgresql-docker-primary-1 | 2022-11-28 10:14:59 UTC [31-1] LOG: autovacuum launcher started
postgresql-docker-primary-1 | 2022-11-28 10:14:59 UTC [34-1] [unknown]@[unknown] LOG: incomplete startup packet
Since the Standby PostgreSQL data is in sync with the Primary PostgreSQL data, why does the Primary startup trigger the automatic recovery? My expectation is that the Primary PostgreSQL database server would not perform an automatic recovery. How does PostgreSQL decide whether a recovery process has to be triggered?
In our production case the automatic recovery would take around 3 hours, so the PostgreSQL server would be unavailable for around 3 hours, and we would like to avoid this behaviour.
Is there any documentation explaining when the PostgreSQL database server triggers the recovery process?

Unable to run Airflow and access it

I downloaded Airflow's docker-compose.yaml from the official website, put it into my folder, and ran sudo docker compose up airflow-init, which worked perfectly. Then, when I ran sudo docker compose up to start the server and tried to access it at localhost:8080, I couldn't. These are my logs when I run sudo docker compose up:
WARN[0000] The "AIRFLOW_UID" variable is not set. Defaulting to a blank string.
WARN[0000] The "AIRFLOW_UID" variable is not set. Defaulting to a blank string.
[+] Running 7/0
⠿ Container airflow-docker-redis-1 Created 0.0s
⠿ Container airflow-docker-postgres-1 Created 0.0s
⠿ Container airflow-docker-airflow-init-1 Created 0.0s
⠿ Container airflow-docker-airflow-scheduler-1 Created 0.0s
⠿ Container airflow-docker-airflow-webserver-1 Created 0.0s
⠿ Container airflow-docker-airflow-triggerer-1 Created 0.0s
⠿ Container airflow-docker-airflow-worker-1 Created 0.0s
Attaching to airflow-docker-airflow-init-1, airflow-docker-airflow-scheduler-1, airflow-docker-airflow-triggerer-1, airflow-docker-airflow-webserver-1, airflow-docker-airflow-worker-1, airflow-docker-postgres-1, airflow-docker-redis-1
airflow-docker-redis-1 | 1:C 12 Oct 2022 15:20:30.213 # oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo
airflow-docker-redis-1 | 1:C 12 Oct 2022 15:20:30.213 # Redis version=7.0.5, bits=64, commit=00000000, modified=0, pid=1, just started
airflow-docker-redis-1 | 1:C 12 Oct 2022 15:20:30.213 # Warning: no config file specified, using the default config. In order to specify a config file use redis-server /path/to/redis.conf
airflow-docker-redis-1 | 1:M 12 Oct 2022 15:20:30.214 * monotonic clock: POSIX clock_gettime
airflow-docker-redis-1 | 1:M 12 Oct 2022 15:20:30.217 * Running mode=standalone, port=6379.
airflow-docker-redis-1 | 1:M 12 Oct 2022 15:20:30.217 # Server initialized
airflow-docker-redis-1 | 1:M 12 Oct 2022 15:20:30.217 # WARNING overcommit_memory is set to 0! Background save may fail under low memory condition. To fix this issue add 'vm.overcommit_memory = 1' to /etc/sysctl.conf and then reboot or run the command 'sysctl vm.overcommit_memory=1' for this to take effect.
airflow-docker-redis-1 | 1:M 12 Oct 2022 15:20:30.218 * Loading RDB produced by version 7.0.5
airflow-docker-redis-1 | 1:M 12 Oct 2022 15:20:30.218 * RDB age 6 seconds
airflow-docker-redis-1 | 1:M 12 Oct 2022 15:20:30.218 * RDB memory usage when created 0.85 Mb
airflow-docker-redis-1 | 1:M 12 Oct 2022 15:20:30.218 * Done loading RDB, keys loaded: 0, keys expired: 0.
airflow-docker-redis-1 | 1:M 12 Oct 2022 15:20:30.218 * DB loaded from disk: 0.001 seconds
airflow-docker-redis-1 | 1:M 12 Oct 2022 15:20:30.218 * Ready to accept connections
airflow-docker-postgres-1 |
airflow-docker-postgres-1 | PostgreSQL Database directory appears to contain a database; Skipping initialization
airflow-docker-postgres-1 |
airflow-docker-postgres-1 | 2022-10-12 15:20:30.319 UTC [1] LOG: starting PostgreSQL 13.8 (Debian 13.8-1.pgdg110+1) on x86_64-pc-linux-gnu, compiled by gcc (Debian 10.2.1-6) 10.2.1 20210110, 64-bit
airflow-docker-postgres-1 | 2022-10-12 15:20:30.320 UTC [1] LOG: listening on IPv4 address "0.0.0.0", port 5432
airflow-docker-postgres-1 | 2022-10-12 15:20:30.320 UTC [1] LOG: listening on IPv6 address "::", port 5432
airflow-docker-postgres-1 | 2022-10-12 15:20:30.323 UTC [1] LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
airflow-docker-postgres-1 | 2022-10-12 15:20:30.330 UTC [26] LOG: database system was shut down at 2022-10-12 15:20:24 UTC
airflow-docker-postgres-1 | 2022-10-12 15:20:30.340 UTC [1] LOG: database system is ready to accept connections
airflow-docker-airflow-init-1 | /home/airflow/.local/lib/python3.7/site-packages/airflow/configuration.py:367: FutureWarning: The auth_backends setting in [api] has had airflow.api.auth.backend.session added in the running config, which is needed by the UI. Please update your config before Apache Airflow 3.0.
airflow-docker-airflow-init-1 | FutureWarning,
airflow-docker-airflow-init-1 |
airflow-docker-airflow-init-1 | WARNING!!!: AIRFLOW_UID not set!
airflow-docker-airflow-init-1 | If you are on Linux, you SHOULD follow the instructions below to set
airflow-docker-airflow-init-1 | AIRFLOW_UID environment variable, otherwise files will be owned by root.
airflow-docker-airflow-init-1 | For other operating systems you can get rid of the warning with manually created .env file:
airflow-docker-airflow-init-1 | See: https://airflow.apache.org/docs/apache-airflow/stable/howto/docker-compose/index.html#setting-the-right-airflow-user
airflow-docker-airflow-init-1 |
airflow-docker-airflow-init-1 | The container is run as root user. For security, consider using a regular user account.
airflow-docker-airflow-init-1 |
airflow-docker-airflow-init-1 | /home/airflow/.local/lib/python3.7/site-packages/airflow/configuration.py:367: FutureWarning: The auth_backends setting in [api] has had airflow.api.auth.backend.session added in the running config, which is needed by the UI. Please update your config before Apache Airflow 3.0.
airflow-docker-airflow-init-1 | FutureWarning,
airflow-docker-airflow-init-1 | DB: postgresql+psycopg2://airflow:***@postgres/airflow
airflow-docker-airflow-init-1 | Performing upgrade with database postgresql+psycopg2://airflow:***@postgres/airflow
airflow-docker-airflow-init-1 | [2022-10-12 15:20:40,747] {migration.py:204} INFO - Context impl PostgresqlImpl.
airflow-docker-airflow-init-1 | [2022-10-12 15:20:40,748] {migration.py:211} INFO - Will assume transactional DDL.
airflow-docker-airflow-init-1 | [2022-10-12 15:20:40,847] {db.py:1531} INFO - Creating tables
airflow-docker-airflow-init-1 | INFO [alembic.runtime.migration] Context impl PostgresqlImpl.
airflow-docker-airflow-init-1 | INFO [alembic.runtime.migration] Will assume transactional DDL.
airflow-docker-airflow-init-1 | Upgrades done
airflow-docker-airflow-init-1 | /home/airflow/.local/lib/python3.7/site-packages/airflow/configuration.py:367: FutureWarning: The auth_backends setting in [api] has had airflow.api.auth.backend.session added in the running config, which is needed by the UI. Please update your config before Apache Airflow 3.0.
airflow-docker-airflow-init-1 | FutureWarning,
airflow-docker-airflow-init-1 | [2022-10-12 15:20:55,367] {providers_manager.py:211} INFO - Optional provider feature disabled when importing 'airflow.providers.google.leveldb.hooks.leveldb.LevelDBHook' from 'apache-airflow-providers-google' package
airflow-docker-airflow-init-1 | [2022-10-12 15:20:57,136] {providers_manager.py:211} INFO - Optional provider feature disabled when importing 'airflow.providers.google.leveldb.hooks.leveldb.LevelDBHook' from 'apache-airflow-providers-google' package
airflow-docker-airflow-init-1 | airflow already exist in the db
airflow-docker-airflow-init-1 | /home/airflow/.local/lib/python3.7/site-packages/airflow/configuration.py:367: FutureWarning: The auth_backends setting in [api] has had airflow.api.auth.backend.session added in the running config, which is needed by the UI. Please update your config before Apache Airflow 3.0.
airflow-docker-airflow-init-1 | FutureWarning,
airflow-docker-airflow-init-1 | 2.4.1
airflow-docker-airflow-init-1 exited with code 0
I've already searched and tried some solutions, but nothing worked. I just want to run it and access it at port 8080. What can I do?
You need to run the following once, before running docker compose:
echo -e "AIRFLOW_UID=$(id -u)" > .env
See the "initialize environment" step at this link.

PostgreSQL 9.4.1 Switchover & Switchback without recovery_target_timeline=latest

I have tested different scenarios for doing switchover and switchback in PostgreSQL version 9.4.1.
Scenario 1: PostgreSQL Switchover and Switchback in 9.4.1
Scenario 2: Is the parameter recovery_target_timeline='latest' mandatory for switchover and switchback in PostgreSQL 9.4.1?
Scenario 3: On this page
To test scenario 3, I followed the steps below.
1) Stop the application connected to the primary server.
2) Confirm that all applications were stopped and all threads were disconnected from the primary DB.
#192.x.x.129(Primary)
3) Cleanly shut down the primary using:
pg_ctl -D $PGDATA stop -mf
# On the DR (192.x.x.128) side, check the sync status:
postgres=# select pg_last_xlog_receive_location(),pg_last_xlog_replay_location();
-[ RECORD 1 ]-----------------+-----------
pg_last_xlog_receive_location | 4/57000090
pg_last_xlog_replay_location | 4/57000090
4) Stop the DR server (192.x.x.128):
pg_ctl -D $PGDATA stop -mf
pg_log:
2019-12-02 13:16:09 IST LOG: received fast shutdown request
2019-12-02 13:16:09 IST LOG: aborting any active transactions
2019-12-02 13:16:09 IST LOG: shutting down
2019-12-02 13:16:09 IST LOG: database system is shut down
#192.x.x.128(DR)
5) Make the following change on the DR server:
mv recovery.conf recovery.conf_bkp
6) Make changes on the Primary (192.x.x.129):
[postgres@localhost data]$ cat recovery.conf
standby_mode = 'on'
primary_conninfo = 'user=replication password=postgres host=192.x.x.128 port=5432 sslmode=prefer sslcompression=1 krbsrvname=postgres'
restore_command = 'cp %p /home/postgres/restore/%f'
trigger_file='/tmp/promote'
7) Start the DR in read-write mode:
pg_ctl -D $DATA start
pg_log:
2019-12-02 13:20:21 IST LOG: database system was shut down in recovery at 2019-12-02 13:16:09 IST
2019-12-02 13:20:22 IST LOG: database system was not properly shut down; automatic recovery in progress
2019-12-02 13:20:22 IST LOG: consistent recovery state reached at 4/57000090
2019-12-02 13:20:22 IST LOG: invalid record length at 4/57000090
2019-12-02 13:20:22 IST LOG: redo is not required
2019-12-02 13:20:22 IST LOG: database system is ready to accept connections
2019-12-02 13:20:22 IST LOG: autovacuum launcher started
(END)
We can see in the above log that the old primary is now the DR of the new primary (which was the old DR), and there is no error because the timeline ID on the new primary is the same as the one that already exists on the new DR.
8) Start the old Primary in read-only (standby) mode:
pg_ctl -D $PGDATA start
logs:
2019-12-02 13:24:50 IST LOG: database system was shut down at 2019-12-02 11:14:50 IST
2019-12-02 13:24:51 IST LOG: entering standby mode
cp: cannot stat ‘pg_xlog/RECOVERYHISTORY’: No such file or directory
cp: cannot stat ‘pg_xlog/RECOVERYXLOG’: No such file or directory
2019-12-02 13:24:51 IST LOG: consistent recovery state reached at 4/57000090
2019-12-02 13:24:51 IST LOG: record with zero length at 4/57000090
2019-12-02 13:24:51 IST LOG: database system is ready to accept read only connections
2019-12-02 13:24:51 IST LOG: started streaming WAL from primary at 4/57000000 on timeline 9
2019-12-02 13:24:51 IST LOG: redo starts at 4/57000090
(END)
Question 1: In this scenario I have only performed the switchover to show you; using this method we can do both switchover and switchback. If switchover and switchback work with the method above, why did the PostgreSQL community introduce recovery_target_timeline=latest and apply patches from PostgreSQL 9.3 up to the latest version (see this blog: https://www.enterprisedb.com/blog/switchover-switchback-in-postgresql-9-3)?
Question 2: What does cp: cannot stat ‘pg_xlog/RECOVERYHISTORY’: No such file or directory in the above log mean?
Question 3: I want to know, between scenario 1 and scenario 3, which method is the correct way to do switchover and switchback? Scenario 2 fails because we must use recovery_target_timeline=latest, as all community experts know.
Answers:
1) If you shut down the standby cleanly, then remove recovery.conf and restart it, it will come up, but it has to perform crash recovery (database system was not properly shut down).
The proper way to promote a standby to a primary is by using the trigger file or running pg_ctl promote (or, from v12 on, by running the SQL function pg_promote). Then you have no downtime and don't need to perform crash recovery.
Promoting the standby will make it pick a new timeline, so you need recovery_target_timeline = 'latest' if you want the new standby to follow that timeline switch.
2) The cp: cannot stat message is caused by your restore_command.
3) The method described in answer 1 above is the correct one.

Can't connect to PostgreSQL server after moving database files

I want to move my postgresql databases to an external hard drive (HDD 2TB USB 3.0). I copied the whole directory:
/var/lib/postgresql/9.4/main/
to the external drive, preserving permissions, with the following command (run as the user postgres):
$ rsync -aHAX /var/lib/postgresql/9.4/main/* new_dir_path
The first run of this command was interrupted, but on the second attempt I copied everything (basically one database of size 800 GB). In the file
/etc/postgresql/9.4/main/postgresql.conf
I changed the line
data_directory = '/var/lib/postgresql/9.4/main'
to point to the new location. I restarted the postgresql service, and when I run the command psql as the user postgres, I get:
psql: could not connect to server: No such file or directory
Is the server running locally and accepting
connections on Unix domain socket "/var/run/postgresql/.s.PGSQL.5432"?
I didn't change any other settings. There is no pid file postmaster.pid in the new location (or in the old one). When I run the command
$ /usr/lib/postgresql/9.4/bin/postgres --single -D /etc/postgresql/9.4/main -P -d 1
I get
2017-03-16 20:47:39 CET [2314-1] DEBUG: mmap with MAP_HUGETLB failed, huge pages disabled: Cannot allocate memory
2017-03-16 20:47:39 CET [2314-2] NOTICE: database system was shut down at 2017-03-16 20:01:23 CET
2017-03-16 20:47:39 CET [2314-3] DEBUG: checkpoint record is at 647/4041B3A0
2017-03-16 20:47:39 CET [2314-4] DEBUG: redo record is at 647/4041B3A0; shutdown TRUE
2017-03-16 20:47:39 CET [2314-5] DEBUG: next transaction ID: 1/414989450; next OID: 112553
2017-03-16 20:47:39 CET [2314-6] DEBUG: next MultiXactId: 485048384; next MultiXactOffset: 1214064579
2017-03-16 20:47:39 CET [2314-7] DEBUG: oldest unfrozen transaction ID: 259446705, in database 12141
2017-03-16 20:47:39 CET [2314-8] DEBUG: oldest MultiXactId: 476142442, in database 12141
2017-03-16 20:47:39 CET [2314-9] DEBUG: transaction ID wrap limit is 2406930352, limited by database with OID 12141
2017-03-16 20:47:39 CET [2314-10] DEBUG: MultiXactId wrap limit is 2623626089, limited by database with OID 12141
2017-03-16 20:47:39 CET [2314-11] DEBUG: starting up replication slots
2017-03-16 20:47:39 CET [2314-12] DEBUG: oldest MultiXactId member is at offset 1191132700
2017-03-16 20:47:39 CET [2314-13] DEBUG: MultiXact member stop limit is now 1191060352 based on MultiXact 476142442
PostgreSQL stand-alone backend 9.4.9
backend>
but I don't know how to interpret this output. When I revert the changes in the postgresql.conf file, everything works fine. Interestingly, a few months ago I moved the database in the same way, but to a local directory, and it worked.
I use postgresql-9.4 and debian-jessie.
UPDATE
Content of the log file:
$ cat /var/log/postgresql/postgresql-9.4-main.log
2017-03-14 17:07:16 CET [13822-2] LOG: received fast shutdown request
2017-03-14 17:07:16 CET [13822-3] LOG: aborting any active transactions
2017-03-14 17:07:16 CET [13827-3] LOG: autovacuum launcher shutting down
2017-03-14 17:07:16 CET [13824-1] LOG: shutting down
2017-03-14 17:07:16 CET [13824-2] LOG: database system is shut down

At what point does PostgreSQL begin recovery

I'm going to make backups from a standby server. I use the following commands to create a binary backup:
psql -c 'select pg_xlog_replay_pause()'
tar c data --exclude=pg_xlog/* | lzop --fast > /mnt/nfs/backup/xxxx.tar.lzop
psql -c 'select pg_xlog_replay_resume()'
All WAL logs from the master database are stored on external storage for several days, and recovery using these logs works great. However, the backup becomes invalid once those logs are cleaned up. The solution is to copy all needed WAL logs, starting from some point, up to the last log present when the backup is done.
The question is: what is the first file?
pg_controldata shows:
pg_control version number: 942
Catalog version number: 201409291
Database system identifier: 6185091942558520564
Database cluster state: in archive recovery
pg_control last modified: Thu 08 Oct 2015 03:14:23 PM UTC
Latest checkpoint location: 1C41/F662E1F8
Prior checkpoint location: 1C41/B4435EE8
Latest checkpoint's REDO location: 1C41/DE003400
Latest checkpoint's REDO WAL file: 0000000200001C41000000DE
Latest checkpoint's TimeLineID: 2
Latest checkpoint's PrevTimeLineID: 2
Latest checkpoint's full_page_writes: on
Latest checkpoint's NextXID: 0/3550951620
Latest checkpoint's NextOID: 83806
Latest checkpoint's NextMultiXactId: 1
Latest checkpoint's NextMultiOffset: 0
Latest checkpoint's oldestXID: 3152230057
Latest checkpoint's oldestXID's DB: 16385
Latest checkpoint's oldestActiveXID: 3550951620
Latest checkpoint's oldestMultiXid: 1
Latest checkpoint's oldestMulti's DB: 16385
Time of latest checkpoint: Thu 08 Oct 2015 03:10:44 PM UTC
Fake LSN counter for unlogged rels: 0/1
Minimum recovery ending location: 1C42/4CC934E0
So what is the first file? AFAIK PostgreSQL always begins recovery from a checkpoint. I have tried to restore several backups and noticed that PostgreSQL starts recovery from the Prior checkpoint location. Is this always true? What's the difference between Prior checkpoint location and Latest checkpoint location?
According to pg_controldata:
First file: 1C41/B4
Minimum last file: 1C42/4C (must be greater than or equal to `Minimum recovery ending location`)
Am I right?
You need everything from the "Latest checkpoint's REDO location" - the WAL segment for it is identified by "Latest checkpoint's REDO WAL file" - through to the WAL segment containing the "Minimum recovery ending location", on timeline "Latest checkpoint's TimeLineID".
In your example that'd be from LSN 1C41/DE003400 through to 1C42/4CC934E0, both on TimeLineID 2.
That corresponds to WAL segments 0000000200001C41000000DE through 0000000200001C42????????.
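For reference, translating those LSNs into segment file names is just hex arithmetic (each default-sized WAL segment covers 16 MB, i.e. 2^24 bytes). Here is a minimal sketch; walSegmentFor is a made-up helper name, and the default segment size is assumed:

// Name of the WAL segment file containing a given LSN, assuming the
// default 16 MB (2^24-byte) WAL segment size.
def walSegmentFor(timelineId: Long, lsn: String): String = {
  val Array(hi, lo) = lsn.split("/").map(java.lang.Long.parseLong(_, 16))
  f"$timelineId%08X$hi%08X${lo >> 24}%08X"
}

// walSegmentFor(2, "1C41/DE003400") // "0000000200001C41000000DE" (first needed segment)
// walSegmentFor(2, "1C42/4CC934E0") // "0000000200001C420000004C" (segment holding the minimum recovery ending location)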