I have recently upgraded Composer/Airflow from 1.17/2.0 to 1.18/2.2 on GCP. Since the upgrade I see the following warnings in the Airflow UI:
Airflow found incompatible data in the dag_run table in the metadatabase, and has moved them to _airflow_moved__2_2__dag_run during the database migration to upgrade. Please inspect the moved data to decide whether you need to keep them, and manually drop the _airflow_moved__2_2__dag_run table to dismiss this warning. Read more about it in Upgrading.
and
Airflow found incompatible data in the task_instance table in the metadatabase, and has moved them to _airflow_moved__2_2__task_instance during the database migration to upgrade. Please inspect the moved data to decide whether you need to keep them, and manually drop the _airflow_moved__2_2__task_instance table to dismiss this warning. Read more about it in Upgrading.
To remove the tables as instructed in the warning, I followed the GCP documentation on how to access the Airflow database. Once connected I can see the tables listed, but the following attempt to remove them doesn't work, and no error message is produced.
DROP TABLE public._airflow_moved__2_2__dag_run
DROP TABLE public._airflow_moved__2_2__task_instance
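One thing worth checking, assuming the statements were typed into psql or a similar interactive client (the question doesn't say which client was used): a statement is not sent to the server until it is terminated with a semicolon, which would match the symptom of nothing happening and no error appearing. A semicolon-terminated version, with an optional look at the moved data first, might be:
-- Inspect the moved rows before deciding whether to keep them, as the warning suggests:
SELECT * FROM public._airflow_moved__2_2__dag_run LIMIT 10;
SELECT * FROM public._airflow_moved__2_2__task_instance LIMIT 10;
-- Then drop the tables to dismiss the warning; note the terminating semicolons:
DROP TABLE public._airflow_moved__2_2__dag_run;
DROP TABLE public._airflow_moved__2_2__task_instance;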
I have a PostgreSQL db that is used by a NestJS / Prisma app.
We changed the name of a field in the Prisma schema and added a new field.
Now, when we want to update the PostgreSQL structure, I'm running, as suggested by Prisma, the following commands:
npx prisma generate
and then
npx prisma migrate dev --name textSettings-added --create-only
The idea is to use the --create-only flag to review the migration before it is actually made.
However, when I run it I get a list of the changes to be made to the DB and the following message:
We need to reset the PostgreSQL database "my_database" at "my_db_name#cluster.us-east-1.rds.amazonaws.com:5432".
Do you want to continue? All data will be lost.
Of course I choose not to continue, because I don't want to lose the data. Upon inspection I see that the migration file actually contains DROP TABLE statements for the tables that I simply wanted to modify. So how do I run the update without affecting the data?
UPDATE:
I saw that running with --create-only creates a migration that can then be applied to the DB with prisma migrate dev. However, that migration file still contains commands that drop my existing tables because of the new parameters inside them. How can I run the Prisma migration without deleting my PostgreSQL data?
UPDATE 2:
I don't want Prisma to drop my tables when I have just updated them. The generated migration file, however, drops them and then alters them. What is the best procedure to avoid this drop? I saw somewhere that I could first manually update the DB with the new options and then run the migration, so Prisma can figure out how to update it, but that seems too manual to me... Are there other ideas?
For cases like renaming tables or columns, Prisma's generated migration files need to be edited by hand if the affected tables already contain data.
If that applies to your use case, Prisma's docs suggest the following:
Make updates to the prisma schema
Create migration file without applying it (--create-only flag)
Update the migration script to remove the drops and instead write your custom query (e.g. ALTER TABLE <table_name> RENAME TO <new_name>); see the example sketched below
Save and apply the migration (npx prisma migrate dev)
Note that such changes (renaming a field or model) can lead to downtime, for which Prisma has outlined the expand and contract pattern.
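To make the third step concrete, here is a sketch of what a hand-edited migration.sql might look like; the table and column names are made up for illustration:
-- Generated with --create-only, then edited by hand: the generated
-- DROP/ADD pair is replaced with a rename so existing rows survive.
ALTER TABLE "TextSettings" RENAME COLUMN "oldField" TO "renamedField";
-- The genuinely new field is simply added:
ALTER TABLE "TextSettings" ADD COLUMN "newField" TEXT;
After saving the edited file, npx prisma migrate dev applies it as written.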
It might be a Prisma bug: https://github.com/prisma/prisma/issues/8053
I also ran into this situation recently. It probably should not try to run the migration if you only want to create the migration file.
But overall it is expected that Prisma will sometimes recreate your db. If your migration is a breaking one, the data will need to be reset anyway when you apply it.
I suggest creating a seeding script so you can consistently re-create the database state; it's very useful for a development environment.
More info
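A seeding script does not have to be elaborate; as a minimal sketch (the table and values here are hypothetical), even a plain SQL file applied with psql can restore a known development state, although wiring a script into prisma db seed is the more typical Prisma setup:
-- seed.sql: hypothetical development data
INSERT INTO "TextSettings" ("id", "renamedField", "newField")
VALUES (1, 'default', 'example')
ON CONFLICT ("id") DO NOTHING;
-- applied with: psql "$DATABASE_URL" -f seed.sql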
I tried to build an Offer-Ready Docker container on Azure Cloud. Although I created a new (blank) database in PostgreSQL, I got this strange error message.
javax.servlet.ServletException: org.eclipse.jetty.servlet.ServletHolder$1: org.flywaydb.core.api.FlywayException: Found non-empty schema(s) "public" without schema history table! Use baseline() or set baselineOnMigrate to true to initialize the schema history table.
I double-checked the database, there is no table in schema "public". I didn't have that problem on AWS. Has anybody an idea what is different on Azure?
I had the same experience once.
The PostgreSQL database on Azure seemed empty (\dt returned no results), but Flyway claimed the database was not empty (and therefore would not apply the migration scripts, for fear of interfering with whatever was already there).
Here is what I did:
Create a new schema within the database e.g. myschema
Delete the default schema called public
Add the parameter currentSchema=myschema to the JDBC URL
And then it worked. I never got to find out what the root cause of this problem was.
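As a sketch of those three steps in SQL (the schema name myschema is just an example, and dropping public assumes nothing you need lives in it):
CREATE SCHEMA myschema;
DROP SCHEMA public CASCADE;
-- The JDBC URL then becomes something like:
-- jdbc:postgresql://<host>:5432/<database>?currentSchema=myschema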
EDIT: This link might provide more information on what objects are in the "public" schema by default on Azure PostgreSQL: https://community.atlassian.com/t5/Jira-questions/Re-quot-database-that-is-not-empty-quot-when-trying-to-use-azure/qaq-p/1308795/comment-id/410329#M410329
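If you want to see what Flyway might be counting as "non-empty" (it considers objects in the schema beyond the tables that \dt lists), catalog queries along these lines can help:
-- Any relations (tables, views, sequences, ...) in the public schema:
SELECT c.relkind, c.relname
FROM pg_class c
JOIN pg_namespace n ON n.oid = c.relnamespace
WHERE n.nspname = 'public';
-- Extensions installed into the public schema are another thing to check:
SELECT e.extname
FROM pg_extension e
JOIN pg_namespace n ON n.oid = e.extnamespace
WHERE n.nspname = 'public';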
When trying to connect to my Amazon PostgreSQL DB, I get the above error. With pgAdmin, I get "error saving properties".
I don't see why connecting to a server would involve any write actions.
There are several reasons why you can get this error:
The PostgreSQL cluster is in recovery (or is a streaming replication standby). You can find out if that is the case by running
SELECT pg_is_in_recovery();
The parameter default_transaction_read_only is set to on. Diagnose with
SHOW default_transaction_read_only;
The current transaction has been started with
START TRANSACTION READ ONLY;
You can find out if that is the case using the undocumented parameter
SHOW transaction_read_only;
If you understand all that but still wonder why you are getting this error even though you are not aware of attempting any data modifications, it means that the application you use to connect is trying to modify something (but pgAdmin shouldn't do that).
In that case, look into the log file to find out what statement causes the error.
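If the second case (default_transaction_read_only) turns out to be the culprit, the setting can be overridden, assuming you have the necessary privileges; the database name below is a placeholder:
SET default_transaction_read_only = off;  -- for the current session only
ALTER DATABASE mydb SET default_transaction_read_only = off;  -- for future sessions on this database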
This was a bug which has now been fixed; the fix will be available in the next release.
https://redmine.postgresql.org/issues/3973
If you want to try it right away, you can use a nightly build and check: https://www.postgresql.org/ftp/pgadmin/pgadmin4/snapshots/2019-02-17/
I am unable to complete the Moodle installation. I am hosting the site on NearlyFreeSpeech and using PHP 5.6. Moodle doesn't seem to be able to connect to the database and write any tables.
I created the moodledata folder in /protected/moodledata and moodle is in /public/moodle
I receive this error after accepting the terms and conditions.
Error reading from database
More information about this error
It is usually not possible to recover from errors triggered during installation, you may need to create a new database or use a different database prefix if you want to retry the installation.
Normally my first instinct would point to the config.php file, but if it gets far enough to establish a connection to the database and then fails with a read error ("Error reading from database"), that generally means your config.php is probably healthy but your database is not.
Firstly, check that you're using one of the following database servers that Moodle is compatible with (minimum versions):
PostgreSQL 9.1
MySQL 5.5.31
MariaDB 5.5.31
Microsoft SQL Server 2008
Oracle Database 10.2
source.
Secondly, ensure that the user assigned to access your database in config.php has ALL PRIVILEGES set on that database.
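For a MySQL-backed Moodle, that grant looks roughly like this (the database name, user and host are placeholders, and the user is assumed to exist already):
GRANT ALL PRIVILEGES ON moodle.* TO 'moodleuser'@'localhost';
FLUSH PRIVILEGES;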
Moving on... If this is a fresh install and you have no data to lose, your best bet is to start with a clean database.
You can either delete your existing database and set up a new one, or you can drop all tables from your existing database.
Option 1. Delete your existing database.
Delete your config.php file
Jump to phpMyAdmin (from the 'actions' tab on the MySQL process page)
Click on "Databases"
Delete your existing database
Hit "Create database" to generate a fresh, empty database
Go to http://your.url/install.php and follow the instructions for a fresh install.
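If you prefer the SQL tab over the phpMyAdmin UI, dropping and recreating the database is roughly the following (the database name is a placeholder; Moodle's docs recommend a utf8mb4 database):
DROP DATABASE IF EXISTS moodle;
CREATE DATABASE moodle DEFAULT CHARACTER SET utf8mb4 COLLATE utf8mb4_unicode_ci;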
Option 2. Clear your existing database
Jump to phpMyAdmin and run the following MySQL statements in the SQL tab; they build and execute a DROP TABLE statement covering every table in the current database:
SET FOREIGN_KEY_CHECKS = 0;
SET SESSION group_concat_max_len = 1000000;
SELECT CONCAT('DROP TABLE IF EXISTS `', GROUP_CONCAT(table_name SEPARATOR '`, `'), '`;')
INTO @sql
FROM information_schema.tables
WHERE table_schema = DATABASE();
PREPARE stmt FROM @sql;
EXECUTE stmt;
DEALLOCATE PREPARE stmt;
SET FOREIGN_KEY_CHECKS = 1;
source.
Then go to http://your.url/install.php and follow the instructions for a fresh install.
If you managed to start with a fresh database and you get the same error, please ensure that you have all the prerequisites available from your host. You can find a list of Moodle PHP requirements here.
The only time I've seen that error is when using the wrong MySQL version, e.g. currently MySQL 5.5 is supported but if you have 5.1 you would get that error.
Source: http://realtechtalk.com/moodle_install_error_Error_reading_from_database_-2072-articles
While restoring a (pg_dump-produced) database dump, I get the following error:
Cannot execute COPY FROM on a distributed table on master node
How can I work around this?
COPY support was added in Citus 5.1, which was released May 2016 and is available in the official PostgreSQL Linux package repositories (PGDG).
Are you trying to load data via a pg_dump output? Creating distributed tables is slightly different from creating regular tables, and requires picking a partition column and a partitioning method. Take a look at the docs to get more information on both.
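For reference, a minimal sketch of what "creating a distributed table first" looks like on recent Citus versions (the table, column and file names are made up):
-- Create the table and tell Citus how to distribute it before loading data:
CREATE TABLE events (event_id bigint, tenant_id int, payload jsonb);
SELECT create_distributed_table('events', 'tenant_id');
-- On Citus 5.1 or later the dump's data can then be loaded with COPY, e.g.:
-- \copy events FROM 'events.csv' WITH (FORMAT csv)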