We are using AWS Babelfish on a Postgres-backed DB, so under the hood it is effectively a Postgres database. We see frequent errors of the form "could not open relation". The same SP executes fine sometimes and fails with this error sporadically, if not more frequently. I found a thread [https://www.postgresql.org/message-id/12791.1310599941%40sss.pgh.pa.us] that discusses the error, but it doesn't pinpoint the exact cause, and other articles haven't helped me understand the pattern either. As possible fixes, I have added .dbo to all the table references and dropped all the temp tables at the end of the SP as well.
Level 16, State 1, Line 4
could not open relation with OID 54505
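For context, the temp-table handling in the procedures now looks roughly like this (a minimal sketch; #results and dbo.source_table are placeholder names, not the real objects):

-- Guard against a stale temp table left over from a previous invocation
IF OBJECT_ID('tempdb..#results') IS NOT NULL
    DROP TABLE #results;

SELECT id, name
INTO #results
FROM dbo.source_table;   -- all references schema-qualified with dbo

-- ... work with #results ...

DROP TABLE #results;     -- explicit cleanup at the end of the SP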
Related
I have a problem with a PostgreSQL database running as a replica of the master database server. The database on the master runs without any problems, but the replica runs only for a few hours (the interval is random) and then crashes for this reason:
WARNING: page 3318889 of relation base/16389/19632 is uninitialized
...
PANIC: WAL contains references to invalid pages
Do you have any idea what is wrong? I have not been able to solve this problem for many days! Thanks.
There have been several Postgres bugs with these symptoms, and a lot of them have already been fixed. Please check whether your Postgres is on the latest minor release. If it is, then report this issue to the mailing list: https://www.postgresql.org/list/pgsql-hackers/.
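A quick way to confirm which release the replica is actually running (standard queries, nothing specific to this setup):

-- Full version string, e.g. "PostgreSQL 10.4 on x86_64-pc-linux-gnu ..."
SELECT version();

-- Numeric form (e.g. 100004 for 10.4), convenient for scripted checks
SHOW server_version_num;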
I am attempting to use OPENJSON in a database running on SQL Server 2016, and I get the following error when running this simple test query (which works fine on a different 2016 database):
select * from OPENJSON('{ "test": "test" }')
Invalid object name 'OPENJSON'.
I know about the compatibility level setting, but that doesn't seem to be the problem on this database: its compatibility level is already set to 130. This particular database was migrated from an old 2008R2 database. Is there something else we need to do to access the OPENJSON function?
EDIT
As a test, I created a new empty database on the same database server, and the above query works fine. So the database server doesn't seem to be the issue; it's something related to the one database we migrated.
If it matters, I'm connected as the SA account.
The database's reported compatibility level is 130, both in the SSMS GUI and by running SELECT compatibility_level FROM sys.databases WHERE name = 'MyDB';
For no reason other than to test, I ran ALTER DATABASE MyDB SET COMPATIBILITY_LEVEL = 130, and now everything works. I don't know what would cause that.
EDIT
I think I know what caused the weird behavior now. The database that was exhibiting this behavior is a dev database that is created every single morning via replication from the prod database. After some more research, I found that the prod database was not set to compatibility level 130, but rather 100. I'm thinking that when replication occurred and restored the dev DB from the prod log files, even though the dev DB was set to 130, something was mismatched between the two. I've since upped prod to 130 as well, and all should be good going forward.
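For reference, the check-and-fix sequence described above (MyDB is a placeholder; OPENJSON requires compatibility level 130 or higher):

-- Check the current compatibility level
SELECT name, compatibility_level FROM sys.databases WHERE name = 'MyDB';

-- Re-apply level 130 explicitly (needed for OPENJSON)
ALTER DATABASE MyDB SET COMPATIBILITY_LEVEL = 130;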
pg_dump is failing with the error message:
"pg_dump FATAL: segment too big"
What does that mean?
PostgreSQL 10.4 on Ubuntu 16.04.
It appears that pg_dump passes the error messages it receives from the queries it runs through to the logs.
The following line in the logs (maybe buried deeper if your logs are busy) shows the query that failed.
In this case, we had a corrupted sequence. Any query on the sequence, whether interactive, via a column default, or via pg_dump, returned the "segment too big" error and killed the querying process.
I figured out the new start value for the sequence, dropped the dependencies, created a new sequence starting where the old one left off, and then put the dependencies back (a sketch of those steps is below).
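A minimal sketch of that repair, assuming the sequence is my_seq and it backs my_table.id (all placeholder names; substitute your own and derive the start value from the table's current maximum):

-- Find the high-water mark the new sequence must start past
SELECT max(id) FROM my_table;

-- Detach the column default so the corrupted sequence can be dropped
ALTER TABLE my_table ALTER COLUMN id DROP DEFAULT;
DROP SEQUENCE my_seq;

-- Recreate the sequence just past the old maximum and re-attach the default
CREATE SEQUENCE my_seq START WITH 100001;  -- use max(id) + 1 from the query above
ALTER TABLE my_table ALTER COLUMN id SET DEFAULT nextval('my_seq');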
pg_dump worked fine after that.
It is not clear why or how a sequence could get so corrupted that accessing it produces a session-killing error. We did have a recent database hard crash, though, so it may be related. (Although that sequence is accessed very rarely, and it is unlikely we went down in the middle of incrementing it.)
I have a PostgreSQL database on my centos server.
Unfortunately, since yesterday, all the tables in all existing schemas have been lost.
I checked the log file, and there was an unexpected reboot in recent days, probably a crash of the OS.
Now the Postgres server starts correctly, and I can still view triggers and sequences; there are no other apparent problems.
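For reference, this is how I checked that the tables are gone from the catalogs (a standard catalog query, nothing specific to my setup):

-- Lists user tables; on this server it now returns no rows
SELECT schemaname, tablename
FROM pg_catalog.pg_tables
WHERE schemaname NOT IN ('pg_catalog', 'information_schema');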
Can I do something to recover these tables?
Thanks.
I'm trying to create a Catalyst project connecting to an existing MS SQL Server database. I have the correct connection string and it's authenticating, but it's not finding any tables. Does anyone have an idea of what I might be missing?
I substituted the real IP address, database name, username, and password, but you get the idea.
This is the command I run:
script\qa_utility_create.pl model DB DBIC::Schema QA_Utility::Schema create=static "db_schema=DatabaseName" "dbi:ODBC:Driver={sql server};Server=1.1.1.1,1433;Database=DatabaseName" username password
When I run this, I get the below error:
exists "C:\strawberry\perl\site\bin\QA_Utility\lib\QA_Utility\Model"
exists "C:\strawberry\perl\site\bin\QA_Utility\t"
Dumping manual schema for QA_Utility::Schema to directory C:\strawberry\perl\site\bin\QA_Utility\lib ...
Schema dump completed.
WARNING: No tables found, did you forget to specify db_schema?
exists "C:\strawberry\perl\site\bin\QA_Utility\lib\QA_Utility\Model\DB.pm"
Check your db_schema, as the warning suggests. db_schema expects the schema name, not the database name; for SQL Server the default schema is usually dbo, so try db_schema=dbo instead of db_schema=DatabaseName.
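If you want to confirm which schema the tables actually live in before re-running the helper, a quick check on the SQL Server side (run against DatabaseName):

-- Shows every table together with its schema (typically dbo)
SELECT TABLE_SCHEMA, TABLE_NAME
FROM INFORMATION_SCHEMA.TABLES
ORDER BY TABLE_SCHEMA, TABLE_NAME;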
I had similar issues connecting to a MySQL database, which drove me crazy for about 4 hours (I'm a newbie to Catalyst).
The create script was executing OK, but it failed to pick up any tables, giving the "WARNING: No tables found..." message.
The tables were, however, present in the database.
Prior to this, I had been getting errors when the script tried to connect to the database. After playing with the arguments for a while, the connection errors cleared, and I assumed all was good at that point (wrong!).
The suggested solution of specifying db_schema was misleading in my case, as the problem was really the connection failing to return any valid data. I think it was finding the database and connecting OK, but not returning any data, thus no tables.
After about 4 hours of playing with the connection arguments, one combination just magically worked.
So here is the successful command line:
script/testcatalyst_create.pl model DB DBIC::Schema testcatalyst::Schema::perl_test create=static dbi:mysql:perl_test:user=root
The parameter that was causing the error was the last one, which specifies the DBI connection string (dbi:mysql...).
Previously I had tried:
script/testcatalyst_create.pl model DB DBIC::Schema testcatalyst::Schema::perl_test create=dynamic dbi:mysql:perl_test,username=root
and many other formats from various online searches. The ":user=root" suffix turned out to be the correct format.
Hope this helps someone else!