TypeORM migration needs permissions on all tables in the database - PostgreSQL

I have a NestJS app with migrations configured to run automatically on startup. Recently, a new table was added to the database for AWS DMS. This table should not be included in migrations or accessed by the app in any way, but it needs to live in the same database for DMS purposes. We did not give the user the app connects as permissions on this new table, and the migrations failed to run with the following error:
Migration "migrations1673309585875" failed, error: permission denied for table awsdms_ddl_audit
And in the NestJS logs:
ERROR [TypeOrmModule] Unable to connect to the database. Message: permission denied for sequence awsdms_ddl_audit_c_key_seq. Retrying (9)...
QueryFailedError: permission denied for sequence awsdms_ddl_audit_c_key_seq
at PostgresQueryRunner.query (/app/node_modules/typeorm/driver/postgres/PostgresQueryRunner.js:211:19)
at process.processTicksAndRejections (node:internal/process/task_queues:95:5)
Migration "migrations1673309585875" failed, error: permission denied for sequence awsdms_ddl_audit_c_key_seq
I want to reiterate that this migration does not include any changes to that table. It includes very minor changes to another table with no relation to the AWS DMS table.
The only thing I was able to find on this was a GitHub issue where someone had exactly the same problem, and their solution was to grant the user permissions on the table and a sequence. You can find that here.
This solution worked for me, but I would prefer not to grant the permissions if I can avoid it. Is there a way to exclude a table from TypeORM migrations? There isn't anything in the migration related to that table, and there never should be. I don't know what TypeORM is doing that needs permissions on this table.
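For illustration, the migration in question is of roughly this shape (the table and column names below are invented; the real migration only alters a single unrelated table):
import { MigrationInterface, QueryRunner } from 'typeorm';

export class migrations1673309585875 implements MigrationInterface {
  // hypothetical example: add a nullable column to an unrelated table
  public async up(queryRunner: QueryRunner): Promise<void> {
    await queryRunner.query(`ALTER TABLE "orders" ADD "notes" character varying`);
  }

  public async down(queryRunner: QueryRunner): Promise<void> {
    await queryRunner.query(`ALTER TABLE "orders" DROP COLUMN "notes"`);
  }
}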
Here's what I did to 'fix' the issue:
GRANT ALL
ON awsdms_ddl_audit
TO user_role;
GRANT USAGE, SELECT ON SEQUENCE awsdms_ddl_audit_c_key_seq TO user_role;
Here's an example of our TypeORM configuration:
// typeorm/config.ts
import { TypeOrmModuleOptions } from '@nestjs/typeorm';

export const databaseConfig: TypeOrmModuleOptions = {
  type: 'postgres',
  host,
  port,
  username,
  password,
  database,
  entities,
  synchronize: false,
  migrations: ['dist/migrations/**/*.js'],
  migrationsRun: true,
  logging: true,
};
// app.module.ts
import { Module } from '@nestjs/common';
import { TypeOrmModule } from '@nestjs/typeorm';
import { databaseConfig } from './typeorm/config';

@Module({
  imports: [TypeOrmModule.forRoot(databaseConfig)],
  controllers: [controllers],
  providers: [providers],
})
export class AppModule {}

Related

Creating a DataSource for Postgres schema-based multitenancy and issues with connection pooling

From the TypeORM docs:
Generally, you call initialize method of the DataSource instance on application bootstrap, and destroy it after you completely finished working with the database. In practice, if you are building a backend for your site and your backend server always stays running - you never destroy a DataSource.
But for implementing Postgres's schema-based multitenancy, I'm scoping connections per request, because each request has to be sent to a different schema. So in my connection helper I'm doing this:
import { DataSource } from 'typeorm';
import { PostgresConnectionOptions } from 'typeorm/driver/postgres/PostgresConnectionOptions';

async function getTenantConnection(
  tenantName: string,
  connectionOptions?: PostgresConnectionOptions,
) {
  if (!connectionOptions) {
    connectionOptions = baseConnection;
  }
  const options: PostgresConnectionOptions = {
    ...connectionOptions,
    schema: tenantName,
    entities: [__dirname + '/../**/*.entity.js'],
    synchronize: false,
  };
  const dataSource = new DataSource(options);
  return await dataSource.initialize();
}
So on each request I'm calling getTenantConnection, which initializes a new DataSource. The previously available getConnection() seems to be deprecated, and now, when running stress tests on the app, I'm getting TCP connection issues that I simply cannot debug:
[ExceptionsHandler] connect ETIMEDOUT 20.119.245.111:5432
2023-01-31T10:40:40.581303183Z Error: connect ETIMEDOUT 20.119.245.111:5432
2023-01-31T10:40:40.581370686Z at TCPConnectWrap.afterConnect [as oncomplete] (node:net:1278:16)
2023-01-31T10:40:40.581381886Z at TCPConnectWrap.callbackTrampoline (node:internal/async_hooks:130:17)
2023-01-31T10:40:40.589395960Z [Nest] 53 - 01/31/2023, 10:40:40 AM ERROR [ExceptionsHandler] connect ETIMEDOUT 20.119.245.111:5432
I'm just speculating that the database pool has something to do with this. I don't fully understand the code, but the TypeORM source doesn't seem to do any pooling in the initialize() method either. I had tried to take reference from this article, which demonstrates schema-based multitenancy in Postgres using TypeORM, but the methods used there aren't available anymore, so I had to resort to using .initialize(). Please let me know how I can go about implementing this.
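What I have in mind as a workaround (a rough, untested sketch assuming the same baseConnection options as above; the helper and map names are made up) is to cache one DataSource per tenant, so a request reuses an existing pool instead of opening new TCP connections every time:
import { DataSource } from 'typeorm';
import { PostgresConnectionOptions } from 'typeorm/driver/postgres/PostgresConnectionOptions';

// one DataSource (and therefore one underlying pool) per tenant schema
const tenantDataSources = new Map<string, DataSource>();

async function getCachedTenantConnection(
  tenantName: string,
  connectionOptions: PostgresConnectionOptions,
): Promise<DataSource> {
  const existing = tenantDataSources.get(tenantName);
  if (existing?.isInitialized) {
    return existing;
  }

  const dataSource = new DataSource({
    ...connectionOptions,
    schema: tenantName,
    entities: [__dirname + '/../**/*.entity.js'],
    synchronize: false,
  });
  await dataSource.initialize();
  tenantDataSources.set(tenantName, dataSource);
  return dataSource;
}
The idea is that initialize() would only run once per tenant, so the number of pools stays bounded by the number of tenants rather than growing with every request.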

MikroORM failed to connect to database despite correct username and password

I'm following Ben Awad's YouTube tutorial on writing a full-stack application. I'm using MikroORM with Postgres.
I created a database called tut, a user called tut, then gave that user access to the database. I can verify that the user has access to the db like so:
$ su - tut
Password:
user:/home/tut$ psql
tut=>
Here's what my mikro-orm.config.ts looks like:
import { Post } from "../entities/Post";
import { MikroORM } from "@mikro-orm/core";
import path from "path";

export default {
  migrations: {
    path: path.join(__dirname, "./migrations"),
    pattern: /^[\w-]+\d+.*\.[tj]s$/,
  },
  entities: [Post],
  dbName: 'tut',
  user: 'tut',
  password: 'tut',
  type: 'postgresql',
  debug: process.env.NODE_ENV !== 'production',
} as Parameters<typeof MikroORM.init>[0];
When I attempt to connect to the db in index.ts I get "MikroORM failed to connect to database tut on postgresql://tut:*****@127.0.0.1:5432" (error code 28P01).
Am I supposed to be running a psql server on localhost? The tutorial doesn't have you do that as far as I can tell.
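For context, my index.ts is essentially the minimal setup from the tutorial, roughly this (a sketch, not my exact file):
import { MikroORM } from "@mikro-orm/core";
import microConfig from "./mikro-orm.config";

const main = async () => {
  // this is the call that fails with error code 28P01 (invalid password)
  const orm = await MikroORM.init(microConfig);
  await orm.getMigrator().up();
};

main().catch((err) => console.error(err));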
I fixed this by running \password in psql as tut. Thanks @AdrianKlaver.

How to define schema name in @MappedEntity annotation for R2DBC

I have a Kotlin & Micronaut application connecting to PostgreSQL using R2DBC for a reactive approach:
r2dbc:
  datasources:
    default:
      schema-generate: NONE
      dialect: POSTGRES
      url: r2dbc:postgresql://localhost:5434/mydb
      username: postgres
      password: postgres
I have a table called Customer inside database mydb and schema myschema, but with @MappedEntity we can only define the table name. Since the table is inside myschema, the application throws an error that the entity does not exist:
15:26:15.455 [reactor-tcp-nio-1] ERROR i.m.h.n.stream.HttpStreamsHandler - Error occurred writing stream response: relation "customer" does not exist
io.r2dbc.postgresql.ExceptionFactory$PostgresqlBadGrammarException: relation "customer" does not exist
How do I define the schema name in the @MappedEntity annotation?
One way you can do it is to define the current schema in the URL using a query parameter:
url: r2dbc:postgresql://localhost:5434/mydb?currentSchema=myschema
You can use JPA's @Table annotation as a workaround.

Vapor PostgreSQL Error: invalidSQL("ERROR: relation \"pages\" already exists\n")

I am trying to revert a PostgreSQL database with the Vapor command:
vapor run prepare --revert -y
I get this output:
Running mist...
Are you sure you want to revert the database?
y/n>yes
Reverting Post
Reverted Post
Removing metadata
Reversion complete
In case you are wondering, I have tried doing this multiple times, so the Post class gets prepared, but the others don't.
This command reverts the tables for all the models except one (there are four total).
For some reason the 'pages' table will not revert.
And when I try running the app after reverting the database, I get this error:
invalidSQL("ERROR: relation \"pages\" already exists\n")
Here is the database preparation code for the model:
extension Page: Preparation {
    static func prepare(_ database: Database) throws {
        try database.create("pages", closure: { post in
            post.id()
            post.string("content", length: 10000)
            post.string("name")
            post.string("link")
        })
    }

    static func revert(_ database: Database) throws {
        try database.delete("pages")
    }
}
I managed to fix this by deleting the old DB:
dropdb `whoami`
Then creating a new one:
createdb `whoami`
Problem solved!

Postgres Liquibase: limit diff to one schema

Goal:
I created a special login (named liquibase) which has access only to one schema, CustomerXX (the schema I would like to compare, and only this one).
Problem:
Even with this limited login, the Liquibase diff operation tries to read other schemas (for example obsolete_objects, which appears in my error below).
Remark:
I am studying CustomerXX. I have another schema in the database named obsolete_objects, but I am only focused on customerxx.
The liquibase login has these grants:
GRANT all ON schema customerxx to liquibase;
GRANT ALL PRIVILEGES ON ALL TABLES IN SCHEMA customerxx to liquibase;
No other rights were granted.
My Liquibase command:
liquibase.bat
--classpath="E:\Program Files\Talend-Studio\plugins\org.talend.libraries.jdbc.postgresql_5.5.1.r118616\lib\postgresql-9.2-1003.jdbc3.jar"
--driver=org.postgresql.Driver --url=jdbc:postgresql://xxxxxxxxx
--username=USERNAME
--password=PASSWORD
--defaultSchemaName=customerxx
Diff
--referenceUrl=jdbc:postgresql://yyyyyyyyyy
--referenceUsername=liquibase
--referencePassword=liquibase
--referenceDefaultSchemaName=customerxx
My error:
Diff Results:
Unexpected error running Liquibase: liquibase.exception.DatabaseException:
org.postgresql.util.PSQLException: ERROR: permission denied for schema obsolete_objects Position: 27
Thanks in advance!