I stopped a rollback migration midway, and now I'm not sure what the problem is: it won't roll back migrations any more. I used the following command for the rollback: knex migrate:rollback --knexfile=knexfile-client.ts --verbose <name>
What I do know is that these records are kept in the migrations and migrations_lock tables that Knex creates automatically in Postgres. I was wondering whether deleting those two tables would solve the problem. Would they regenerate, making things fully functional again? My database is heavy, and I'd like to avoid dumping and recreating everything.
Currently this is the error:
$ knex migrate:rollback --knexfile=knexfile-client.ts --verbose
Requiring external module ts-node/register
Using environment: development
FS-related option specified for migration configuration. This resets migrationSource to default FsMigrations
FS-related option specified for migration configuration. This resets migrationSource to default FsMigrations
migration file "20210413082306_create_find_report.ts" failed
migration failed with error: drop table "find_report" - table "find_report" does not exist
error: drop table "find_report" - table "find_report" does not exist
at Parser.parseErrorMessage (/app/node_modules/pg-protocol/src/parser.ts:357:11)
at Parser.handlePacket (/app/node_modules/pg-protocol/src/parser.ts:186:21)
at Parser.parse (/app/node_modules/pg-protocol/src/parser.ts:101:30)
at Socket.<anonymous> (/app/node_modules/pg-protocol/src/index.ts:7:48)
at Socket.emit (node:events:369:20)
at addChunk (node:internal/streams/readable:313:12)
at readableAddChunk (node:internal/streams/readable:288:9)
at Socket.Readable.push (node:internal/streams/readable:227:10)
at TCP.onStreamRead (node:internal/stream_base_commons:190:23)
error Command failed with exit code 1.
I've triple-checked in pgAdmin that the find_report table exists. Any help would be appreciated. Thanks in advance.
PS: I'm actually using a make script to run the above command, so I don't have to specify the file name.
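For reference, rather than dropping the two bookkeeping tables outright, it is usually enough to repair their rows. A minimal sketch, assuming the default Knex table names (knex_migrations, knex_migrations_lock) and that ./knexfile-client exports a single-environment config; both are assumptions about this particular setup:

// repair-migration-state.ts
import knex from 'knex';
// Adjust if your knexfile exports per-environment objects (e.g. config.development).
import config from './knexfile-client';

async function repairMigrationState() {
  const db = knex(config);
  try {
    // See which migrations Knex still believes are applied.
    console.log(await db('knex_migrations').orderBy('id'));

    // Release the lock an interrupted run can leave behind.
    await db('knex_migrations_lock').update({ is_locked: 0 });

    // If a rollback half-completed, delete only the affected row rather than
    // the whole table, e.g.:
    // await db('knex_migrations')
    //   .where({ name: '20210413082306_create_find_report.ts' })
    //   .del();
  } finally {
    await db.destroy();
  }
}

repairMigrationState();

The tables do regenerate if deleted, but Knex then forgets which migrations have run and will try to re-apply all of them against a non-empty schema, so editing the rows is the safer route.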
Related
I don't know why, but since cloning the working repository I use on AWS to a local machine and trying to run it, I've been getting the following error:
"SCRAM-SERVER-FIRST-MESSAGE: client password must be a string"
Error: SASL: SCRAM-SERVER-FIRST-MESSAGE: client password must be a string
at Object.continueSession (C:\Users\thehe\Documents\workspace\work\nft-trading-server\node_modules\pg\lib\sasl.js:24:11)
at Client._handleAuthSASLContinue (C:\Users\thehe\Documents\workspace\work\nft-trading-server\node_modules\pg\lib\client.js:257:10)
at Connection.emit (node:events:390:28)
at C:\Users\thehe\Documents\workspace\work\nft-trading-server\node_modules\pg\lib\connection.js:114:12
at Parser.parse (C:\Users\thehe\Documents\workspace\work\nft-trading-server\node_modules\pg-protocol\src\parser.ts:104:9)
at Socket.<anonymous> (C:\Users\thehe\Documents\workspace\work\nft-trading-server\node_modules\pg-protocol\src\index.ts:7:48)
at Socket.emit (node:events:390:28)
at addChunk (node:internal/streams/readable:315:12)
at readableAddChunk (node:internal/streams/readable:289:9)
at Socket.Readable.push (node:internal/streams/readable:228:10)
at TCP.onStreamRead (node:internal/stream_base_commons:199:23)
POSTGRES_HOST=localhost
POSTGRES_PORT=5432
POSTGRES_USER=admin
POSTGRES_PASSWORD=admin
POSTGRES_DB=nftapi01
PORT=5000
Does anyone know where this is coming from and how to fix it? I'm not sure why I get this locally. I can connect to the pg database with the credentials from the .dev.env file, but the Nest app won't start.
Are you importing the "dotenv" package? You need it to access environment variables.
In your server.js file, put: require('dotenv').config();
You said that you cloned your repository... do you have a package.json file in your project? (Inside this file you declare your main file: "main": "server.js".)
And of course the process needs read access to your .env file, so check that too!
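Put together, a minimal sketch of what that looks like; the { path: '.dev.env' } option is an assumption based on the file name mentioned above, since dotenv only loads .env by default:

// server.js
// Load the env file before anything reads process.env.
require('dotenv').config({ path: '.dev.env' });

const { Pool } = require('pg');

const pool = new Pool({
  host: process.env.POSTGRES_HOST,
  port: Number(process.env.POSTGRES_PORT),
  user: process.env.POSTGRES_USER,
  // If this is undefined, pg throws the SCRAM "client password must be a string" error.
  password: process.env.POSTGRES_PASSWORD,
  database: process.env.POSTGRES_DB,
});

pool.query('SELECT 1').then(() => console.log('connected'));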
I am new to Postgres. We use it for test reports, and we had an issue with our environment that inserted duplicate keys into one of the tables. Since then we have been getting this message when trying to run migration scripts:
error: migration failed: right sibling's left-link doesn't match: block 9550 links to 12028 instead of expected 12027 in index "log_attach_id_idx" in line 0: UPDATE log SET project_id = (SELECT project_id FROM item_project WHERE item_project.item_id=log.item_id LIMIT 1); (details: pq: right sibling's left-link doesn't match: block 9550 links to 12028 instead of expected 12027 in index "log_attach_id_idx")
I tried to run pg_dump and got this error:
pg_dump: error: query was: SELECT pg_catalog.pg_get_viewdef('457544'::pg_catalog.oid) AS viewdef
pg_dumpall: error: pg_dump failed on database "reportportal", exiting
Can anyone help here?
Restore your backup, and research what parameters you changed and what you did to end up with data corruption in the first place.
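If no usable backup exists, one thing sometimes worth attempting first (not part of the answer above, and no substitute for finding the root cause) is rebuilding just the corrupted index with REINDEX. A speculative sketch via node-postgres, with the index name taken from the error message; run it against a copy of the database first:

// reindex.ts
import { Client } from 'pg';

async function rebuildCorruptIndex() {
  const client = new Client(); // connection settings come from the PG* environment variables
  await client.connect();
  try {
    // Rebuild the index named in the "right sibling's left-link" error.
    await client.query('REINDEX INDEX log_attach_id_idx;');
  } finally {
    await client.end();
  }
}

rebuildCorruptIndex();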
Setup
ASP.NET Core 2.2 website and EF Core 2.2
PostgreSQL database with multiple schemas, one of which already has an __EFMigrationsHistory table
When trying
Add-Migration x1 -Context YodaContext
it works, but when running the following statement for the first time (I do not have any tables in this schema; this is the first update-database):
Update-Database -Context YodaContext
I see the following error.
Failed executing DbCommand (85ms) [Parameters=[], CommandType='Text', CommandTimeout='30']
SELECT "MigrationId", "ProductVersion"
FROM "__EFMigrationsHistory"
ORDER BY "MigrationId";
Npgsql.PostgresException (0x80004005): 42P01: relation "__EFMigrationsHistory" does not exist
at Npgsql.NpgsqlConnector.<>c__DisplayClass161_0.<<ReadMessage>g__ReadMessageLong|0>d.MoveNext()
--- End of stack trace from previous location where exception was thrown ---
at Npgsql.NpgsqlConnector.<>c__DisplayClass161_0.<<ReadMessage>g__ReadMessageLong|0>d.MoveNext() in C:\projects\npgsql\src\Npgsql\NpgsqlConnector.cs:line 1032
--- End of stack trace from previous location where exception was thrown ---
at Npgsql.NpgsqlDataReader.NextResult(Boolean async, Boolean isConsuming) in C:\projects\npgsql\src\Npgsql\NpgsqlDataReader.cs:line 444
at Npgsql.NpgsqlDataReader.NextResult() in C:\projects\npgsql\src\Npgsql\NpgsqlDataReader.cs:line 332
at Npgsql.NpgsqlCommand.ExecuteDbDataReader(CommandBehavior behavior, Boolean async, CancellationToken cancellationToken) in C:\projects\npgsql\src\Npgsql\NpgsqlCommand.cs:line 1218
at Npgsql.NpgsqlCommand.ExecuteDbDataReader(CommandBehavior behavior) in C:\projects\npgsql\src\Npgsql\NpgsqlCommand.cs:line 1130
at System.Data.Common.DbCommand.ExecuteReader()
at Microsoft.EntityFrameworkCore.Storage.Internal.RelationalCommand.Execute(IRelationalConnection connection, DbCommandMethod executeMethod, IReadOnlyDictionary`2 parameterValues)
at Microsoft.EntityFrameworkCore.Storage.Internal.RelationalCommand.ExecuteReader(IRelationalConnection connection, IReadOnlyDictionary`2 parameterValues)
at Microsoft.EntityFrameworkCore.Migrations.HistoryRepository.GetAppliedMigrations()
at Microsoft.EntityFrameworkCore.Migrations.Internal.Migrator.Migrate(String targetMigration)
at Microsoft.EntityFrameworkCore.Design.Internal.MigrationsOperations.UpdateDatabase(String targetMigration, String contextType)
at Microsoft.EntityFrameworkCore.Design.OperationExecutor.UpdateDatabase.<>c__DisplayClass0_1.<.ctor>b__0()
at Microsoft.EntityFrameworkCore.Design.OperationExecutor.OperationBase.Execute(Action action)
42P01: relation "__EFMigrationsHistory" does not exist
What I did
I searched for this error and found this bug on GitHub.
The suggested solution there was to create the table manually; I did that, but it did not solve the problem.
I also tried opening a new .NET 5 project and installing the latest version of the Npgsql.EntityFrameworkCore.PostgreSQL provider (v5.0.0) to connect to that database, and I am still facing this problem.
This question does not solve the problem either: EF Core - Table '*.__EFMigrationsHistory' doesn't exist
I had the same problem because my PostgreSQL database had another schema that contains an __EFMigrationsHistory table. I think it is a provider bug, in my case the PostgreSQL provider. I created the same table manually in the new schema, and then the Update-Database command executed successfully.
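For anyone wanting to do the same, the history table EF Core expects has a well-known shape. A sketch that runs the DDL through node-postgres purely for illustration (any SQL client works), with "yoda" standing in for whichever schema your context maps to:

// create-history-table.ts
import { Client } from 'pg';

async function createHistoryTable() {
  const client = new Client(); // connection settings come from the PG* environment variables
  await client.connect();
  try {
    // Same columns and primary key EF Core itself generates for its history table.
    await client.query(`
      CREATE TABLE "yoda"."__EFMigrationsHistory" (
        "MigrationId"    character varying(150) NOT NULL,
        "ProductVersion" character varying(32)  NOT NULL,
        CONSTRAINT "PK___EFMigrationsHistory" PRIMARY KEY ("MigrationId")
      );
    `);
  } finally {
    await client.end();
  }
}

createHistoryTable();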
I faced this problem while working with a very large enterprise, and the solution was applied by the enterprise themselves: they deleted the schema and created it again (as they told me); maybe they did some other stuff too, I do not know.
After that everything went well, and the Update-Database statement worked perfectly.
I had the same issue, but at my company I don't have permission to delete the schema and create it again.
I solved the problem in another way:
I ran the command to create the migration: Add-Migration InitialIdentityUser -Context IdentityUserDbContext
After that, I used the handy command Script-Migration -from 0. This command generates a batch of SQL queries that create the schema and tables.
For my migrations I use a pretty cool tool, Flyway: https://flywaydb.org/
Execute the baseline command with the Flyway tool:
flyway baseline -locations=filesystem:. -url=... -user=... -password=... -baselineVersion="001" -schemas=...
Finally, execute the migrate command to run all the SQL migrations:
flyway migrate -locations=filesystem:. -url=... -user=... -password=... -schemas=...
I have a Phoenix project whose database is Redshift, using the Postgrex adapter. Locally I am using PostgreSQL and everything works fine, but when I deploy and try to run migrations, I get this error.
15:39:27.201 [error] Could not create schema migrations table. This error usually happens due to the following:
* The database does not exist
* The "schema_migrations" table, which Ecto uses for managing
migrations, was defined by another library
* There is a deadlock while migrating (such as using concurrent
indexes with a migration_lock)
To fix the first issue, run "mix ecto.create".
To address the second, you can run "mix ecto.drop" followed by
"mix ecto.create". Alternatively you may configure Ecto to use
another table for managing migrations:
config :my_service, MyService.Repo,
migration_source: "some_other_table_for_schema_migrations"
The full error report is shown below.
▸ Given the following expression: Elixir.MyService.StartupTasks.init()
▸ The remote call failed with:
▸ ** (exit) %Postgrex.Error{connection_id: 5598, message: nil, postgres: %{code: :feature_not_supported, file: "/home/ec2-user/padb/src/pg/src/backend/commands/tablecmds.c", line: "3690", message: "timestamp or timestamp with time zone column do not support precision.", pg_code: "0A000", routine: "xen_type_size_from_attr", severity: "ERROR"}, query: nil}
▸ (ecto_sql) lib/ecto/adapters/sql.ex:629: Ecto.Adapters.SQL.raise_sql_call_error/1
▸ (elixir) lib/enum.ex:1336: Enum."-map/2-lists^map/1-0-"/2
▸ (ecto_sql) lib/ecto/adapters/sql.ex:716: Ecto.Adapters.SQL.execute_ddl/4
▸ (ecto_sql) lib/ecto/migrator.ex:633: Ecto.Migrator.verbose_schema_migration/3
▸ (ecto_sql) lib/ecto/migrator.ex:477: Ecto.Migrator.lock_for_migrations/4
▸ (ecto_sql) lib/ecto/migrator.ex:401: Ecto.Migrator.run/4
▸ (my_service) lib/my_service/startup_tasks.ex:11: MyService.StartupTasks.migrate/0
▸ (stdlib) erl_eval.erl:680: :erl_eval.do_apply/6
It seems that Redshift does not support some of the data types that Postgres supports, so is there a better way to go about this, or can I create my own schema migrations table with another timestamp type?
There are limitations that the driver cannot overcome, since Redshift works on a different principle than a Postgres database; here is the documentation for the Ecto adapter.
The documentation states:
We highly recommend reading the Designing Tables section from the AWS Redshift documentation.
If you want to continue using Postgres locally, then you will need to create two separate repos and, correspondingly, two sets of migrations. Here are the commands you can use to migrate a separate repo.
However, I recommend getting a dev instance of Redshift and using it for development, since the way of working with databases like Redshift is different and you can easily make a mistake.
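As for the asker's idea of pointing Ecto at a hand-made migrations table: a speculative sketch, again through node-postgres since Redshift speaks the Postgres wire protocol. The table shape mirrors Ecto's default schema_migrations, but with a plain TIMESTAMP (no precision), which Redshift accepts; whether this sidesteps all of the adapter's limitations end-to-end is not guaranteed:

// create-migration-source.ts
import { Client } from 'pg';

async function createMigrationSource() {
  const client = new Client(); // PG* environment variables point at the Redshift cluster
  await client.connect();
  try {
    // Table name matches the migration_source example in the error output above.
    await client.query(`
      CREATE TABLE some_other_table_for_schema_migrations (
        version     BIGINT NOT NULL,
        inserted_at TIMESTAMP
      );
    `);
  } finally {
    await client.end();
  }
}

createMigrationSource();

The migration_source option shown in the error output would then be set to that table name.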
I have a multi-page MEAN.js app that I need to host. I've tried the steps in this guide, https://scotch.io/tutorials/deploying-a-mean-app-to-amazon-ec2-part-2, but when I run a local version of my app against a remote MongoDB instance on an Amazon EC2 instance, I run into the following error.
C:\Users\Forest\Desktop\CS\BOROWR\node_modules\mongoose\node_modules\mongodb\lib\mongodb\connection\base.js:246
throw message;
^
Error: Error setting TTL index on collection : sessions
at C:\Users\Forest\Desktop\CS\BOROWR\node_modules\connect-mongo\lib\connect-mongo.js:169:23
at C:\Users\Forest\Desktop\CS\BOROWR\node_modules\mongoose\node_modules\mongodb\lib\mongodb\db.js:1499:46
at C:\Users\Forest\Desktop\CS\BOROWR\node_modules\mongoose\node_modules\mongodb\lib\mongodb\db.js:1632:20
at C:\Users\Forest\Desktop\CS\BOROWR\node_modules\mongoose\node_modules\mongodb\lib\mongodb\command_cursor.js:152:16
at C:\Users\Forest\Desktop\CS\BOROWR\node_modules\mongoose\node_modules\mongodb\lib\mongodb\db.js:1196:16
at C:\Users\Forest\Desktop\CS\BOROWR\node_modules\mongoose\node_modules\mongodb\lib\mongodb\db.js:1905:9
at Server.Base._callHandler (C:\Users\Forest\Desktop\CS\BOROWR\node_modules\mongoose\node_modules\mongodb\lib\mongodb\connection\base.js:453:41)
at C:\Users\Forest\Desktop\CS\BOROWR\node_modules\mongoose\node_modules\mongodb\lib\mongodb\connection\server.js:488:18
at MongoReply.parseBody (C:\Users\Forest\Desktop\CS\BOROWR\node_modules\mongoose\node_modules\mongodb\lib\mongodb\responses\mongo_reply.js:68:5)
at null.<anonymous> (C:\Users\Forest\Desktop\CS\BOROWR\node_modules\mongoose\node_modules\mongodb\lib\mongodb\connection\server.js:446:20)
at emit (events.js:107:17)
at null.<anonymous> (C:\Users\Forest\Desktop\CS\BOROWR\node_modules\mongoose\node_modules\mongodb\lib\mongodb\connection\connection_pool.js:207:13)
at emit (events.js:110:17)
at Socket.<anonymous> (C:\Users\Forest\Desktop\CS\BOROWR\node_modules\mongoose\node_modules\mongodb\lib\mongodb\connection\connection.js:440:22)
at Socket.emit (events.js:107:17)
at readableAddChunk (_stream_readable.js:163:16)
In terms of my code, the only thing I've changed is replacing the line
module.exports = {
db: 'mongodb://localhost/borowr-dev',
with
module.exports = {
db: 'mongodb://MY_EC2_URL.com:27017/borowr-dev',
I previously tried updating my version of connect-mongo from 0.4.2 to 1.1.0, which caused other problems. Any help is appreciated, thanks.
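One speculative thing to check, purely an assumption rather than a confirmed diagnosis: a mongod reachable from the internet on EC2 is often running with auth enabled, and connect-mongo cannot create its TTL index over an unauthenticated connection. The connection string would then need credentials; DB_USER and DB_PASS below are placeholders:

// config sketch: same shape as the excerpt above, with credentials in the URL
module.exports = {
  db: 'mongodb://DB_USER:DB_PASS@MY_EC2_URL.com:27017/borowr-dev',
};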