How can I remove an enum label from pg_enum table in Google Cloud SQL? - postgresql

I have a Node app (Express) that I built recently, which uses Sequelize to connect to a PostgreSQL instance. I just deployed this to GCP and used Cloud SQL to set up the database. This works quite well and the app properly connects to the DB instance.
However, when I run my migrations, I get this error: permission denied for table pg_enum
This happens on a single migration that I have which tries to remove an enum value from the database:
module.exports = {
  up: (queryInterface, Sequelize) =>
    queryInterface.sequelize.query(
      `DELETE FROM pg_enum WHERE enumlabel = 'to' AND enumtypid = (SELECT oid FROM pg_type WHERE typname = 'enum_Subscriptions_emailType')`
    ),
  down: (queryInterface, Sequelize) =>
    queryInterface.sequelize.query(`ALTER TYPE "enum_Subscriptions_emailType" ADD VALUE 'to';`)
}
I've read here that since Cloud SQL is a managed service, it doesn't provide superuser privileges to customers like me. Is there some other way I can get this migration to run?
I've also tried running a Cloud Build process, but that also fails with the following error: ERROR: connect ENOENT /cloudsql/<project-id>:<project-region>:<db-instance>/.s.PGSQL.5432. For reference, my cloudbuild.yaml file looked like this:
steps:
- name: 'gcr.io/cloud-builders/yarn'
  args: ['install']
- name: 'gcr.io/cloud-builders/yarn'
  args: ['migrate']
- name: 'gcr.io/cloud-builders/gcloud'
  args: ['app', 'deploy']
and my package.json scripts were:
"scripts": {
"start": "node index.js",
"migrate": "npx sequelize-cli db:migrate",
"dev": "set DEBUG=app:* && nodemon index.js",
"deploy": "gcloud builds submit --config cloudbuild.yaml ."
}
What else can I do to get around this, so that the migrations I have following this one can run and my app can function?

Removing a value from an enum is not supported by PostgreSQL. You can only add new ones or rename existing ones.
While it might work somewhat reliably to modify the system catalogue directly, even that is not officially supported and requires superuser permissions for a reason, so there is no way to do it without them.
The supported way to do what you want to do is to recreate the type without the value.
CREATE TYPE new_enum AS ENUM('a','b','c');
ALTER TABLE table_using_old_enum
    ALTER COLUMN column_using_old_enum
    SET DATA TYPE new_enum
    USING column_using_old_enum::text::new_enum;
DROP TYPE old_enum;
ALTER TYPE new_enum RENAME TO old_enum;
This obviously only works if no entry in table_using_old_enum is still set to the value that you want to remove - you need to ensure this first, otherwise the typecast in the USING clause will fail for those rows.
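Applied to the type from the question, the whole sequence could look roughly like this. This is a sketch: it assumes the enum is used by a "Subscriptions"."emailType" column, and the remaining labels ('cc', 'bcc') and the replacement value in the UPDATE are placeholders - substitute your actual table, column, and labels.

```sql
BEGIN;

-- Make sure no row still uses the label being removed,
-- otherwise the cast below will fail. 'cc' is a placeholder.
UPDATE "Subscriptions" SET "emailType" = 'cc' WHERE "emailType" = 'to';

-- Recreate the type without the 'to' label.
CREATE TYPE "enum_Subscriptions_emailType_new" AS ENUM ('cc', 'bcc');

ALTER TABLE "Subscriptions"
    ALTER COLUMN "emailType"
    SET DATA TYPE "enum_Subscriptions_emailType_new"
    USING "emailType"::text::"enum_Subscriptions_emailType_new";

DROP TYPE "enum_Subscriptions_emailType";

ALTER TYPE "enum_Subscriptions_emailType_new" RENAME TO "enum_Subscriptions_emailType";

COMMIT;
```

Wrapping it in a transaction means a failed cast rolls everything back, so a row you missed in the UPDATE can't leave the schema half-migrated.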
--
Alternatively, you could simply comment out that query: if you already know that a later migration will remove the type completely, it shouldn't cause a problem if the value stays in until then.

Related

DBT Cloud: Trying to conditional create tables in a DB based on variable

In my profiles.yml file I have vars declared:
vars:
  database: dbtworkshop
And then within my models I have the following:
{{ config(database=var('database')) }}

select *
from {{ var('database') }}.staging.orders
The query appears to compile correctly; however, I get an error when passing the vars at runtime:
dbt run --vars '{"database":"a"}'
I get the error:
Postgres adapter: Postgres error: cross-database reference to database "a" is not supported
I'm not sure whether doing something like this (ie having models that I want to create in a database which changes based on the variable passed at runtime) is even possible using dbt or if I'm just doing something wrong.
When using Redshift with dbt, make sure to set ra3_node: true in your profiles.yml, since the older DC2 node type does not actually support cross-database queries. RA3 and Serverless do support them, and for those the setting is required.
See https://docs.getdbt.com/reference/warehouse-setups/redshift-setup
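In profiles.yml the flag sits alongside the rest of the Redshift connection settings; a minimal sketch, where the profile name, host, and credentials are all placeholders:

```yaml
my_redshift_profile:
  target: dev
  outputs:
    dev:
      type: redshift
      host: my-cluster.example.us-east-1.redshift.amazonaws.com
      port: 5439
      user: dbt_user
      password: "{{ env_var('REDSHIFT_PASSWORD') }}"
      dbname: dbtworkshop
      schema: analytics
      threads: 4
      ra3_node: true   # required for cross-database queries (RA3 / Serverless only)
```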

Prisma db push error when removing column that has a default value

After removing a column from schema.prisma and running npx prisma db push, I'm getting the error...
Error: The object 'users_role_df' is dependent on column 'role'.
I'm using Prisma 3.3.0 with the sqlserver connection.
The model looks like this...
model User {
  id   ...
  name ...
  role String @default("USER")
}
I see the initial push created the constraint users_role_df, but now when I remove the column from the model and run push, it doesn't drop the constraint before dropping the column.
How can I fix this?
You could try running the command npx prisma migrate dev --create-only to see what the generated SQL looks like for the migration.
You could manually add a DROP CONSTRAINT users_role_df; inside the generated migration or change the order of the SQL commands (if such a command already exists in the generated migration).
It is fine for you to make changes to the migration file as long as you do it before applying the migration to your database.
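For instance, the hand-edited migration could drop the default constraint before dropping the column it guards. A sketch for the SQL Server connector, assuming the table is dbo.users (adjust to your actual table name):

```sql
-- Drop the default constraint first, then the column it belongs to.
ALTER TABLE [dbo].[users] DROP CONSTRAINT [users_role_df];
ALTER TABLE [dbo].[users] DROP COLUMN [role];
```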

DROP ROLE IF EXISTS fails with an "does not exist" error, instead of a NOTICE

A set of 20-month-old DB provisioning .sql files suddenly stopped working on our PostgreSQL 12.7 instance hosted on IBM Cloud. Every DROP ROLE IF EXISTS clause now fails with an ERROR, where previously it would emit a NOTICE and continue with configuration, even when run on a new, empty database.
The exact commands:
DROP ROLE IF EXISTS "anonymous";
or
DROP ROLE IF EXISTS "FLUMMOXED";
return
ERROR: role "FLUMMOXED" does not exist
SQL state: 42704
instead of a NOTICE.
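For reference, PostgreSQL's documented behaviour for IF EXISTS is to downgrade the error to a notice and continue:

```sql
DROP ROLE IF EXISTS "FLUMMOXED";
-- NOTICE:  role "FLUMMOXED" does not exist, skipping
```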
What I've tried:
I've been through the PostgreSQL manual, and this command hasn't changed since PostgreSQL 8.
I've read through the other posts, which emphasize sensitivity to casing.
I've checked the 12.8 upgrade log, and this command isn't mentioned.
I've tried the command on an empty, new DB, and it fails there as well.
Has anyone seen this before?
Thanks.

How to set default database in Postgresql database dump script?

I need to initialize a Postgres instance in a Docker container from a SQL dump file. Otherwise it works fine, but the problem is that I cannot set the database to be anything other than "postgres". Creating a new database works fine, but schema clauses, e.g. CREATE TABLE, end up going nowhere.
I tried to set the default database with the --env option in the docker run command, but it returns the error --env requires a value.
Is there any way to set the default database? Hopefully in an SQL clause.
Apparently you need to use \connect "dbname=[database name]" before the schema clauses in order to point the script at the correct database.
This wasn't (quite understandably) included in the script, since the dump was generated for a single database rather than the whole cluster.
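A dump edited this way would start along these lines (database and table names here are examples):

```sql
CREATE DATABASE mydb;
\connect mydb

-- Everything from here on is created inside mydb, not postgres.
CREATE TABLE accounts (
    id   serial PRIMARY KEY,
    name text NOT NULL
);
```

Note that \connect is a psql meta-command, so the file must be fed through psql (as the official postgres Docker image's /docker-entrypoint-initdb.d does for .sql files) rather than executed over a plain driver connection.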

sailsjs on Heroku not creating model in postgresql

I am trying to use sails.js on Heroku. When I push my changes, the application starts, but when I run MyModel.find() methods, I receive an E_NOTFOUND error. When I log into the database with psql, I see that the tables have not been created automatically. I have the policy in models.js set to migrate: 'drop', so shouldn't I at least get empty tables when I launch the application? Is there something about Heroku that prevents sails from creating them?
edit:
I had previously been putting different settings in development.js and production.js (Heroku postgresql settings in production). I took that out and put the settings in connections.js, and now I am able to run queries on the models. However, when I do heroku pg:psql to connect to my database, I don't see the tables; if I run "select * from mytable;", it tells me no relation is found.
I believe the problem was that even though I had separate settings for the development and production databases, when NODE_ENV was set to production, sails silently defaults migrate to "safe" and will not let you change it to alter or drop. Changing NODE_ENV to something else allowed the tables to be created.