I'm getting "org.postgresql.util.PSQLException: FATAL: database "null" does not exist" when connecting PostgreSQL to my Spring Cloud Data Flow server app in a PCF environment.
I have successfully performed the following steps.
Deployed the SCDF (Spring Cloud Data Flow) server, version 1.7.3, in PCF.
Created a PostgreSQL service instance with the 'Standalone' plan. Note: I don't have any other database service available in the PCF marketplace.
Connected to that instance (using the host IP and autogenerated credentials) with a third-party client and created the database with 'CREATE DATABASE scdf'.
Bound the PostgreSQL service instance to the SCDF server app.
Set the following environment variables:
spring_datasource_driver_class_name = org.postgresql.Driver
spring_datasource_username = [PostgreSQL_Instance_Autogenerated_Username]
spring_datasource_password = [PostgreSQL_Instance_Autogenerated_Password]
spring_datasource_url = "jdbc:postgresql://10.254.48.231:5432/scdf"
After setting the environment variables, when I restart the SCDF server app it throws the exception below and the app crashes:
org.postgresql.util.PSQLException: FATAL: database "null" does not exist
Can anyone help, please?
A good first step is to make sure the PostgreSQL service-instance is functional on PCF.
Perhaps you could connect with the host/user/pass from outside of PCF via a DB client tool or from other applications. If this is successful standalone, then there's something wrong with how the credentials are supplied to the SCDF server.
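For example, a quick standalone connectivity check from a workstation could look like this (the host and database name are taken from the question; the user and password are placeholders):
psql "postgresql://<user>:<password>@10.254.48.231:5432/scdf" -c "SELECT 1;"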
It is unclear how you're supplying the database properties to SCDF. You may have to wrap those "datasource" properties as well-defined JSON and provide it as the value of the SPRING_APPLICATION_JSON property attached to the SCDF server. If you continue to see issues, please update the description with your manifest.yml and other information about the environment.
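As a sketch only, assuming the SCDF server app is named dataflow-server (substitute your actual app name and credentials), wrapping the properties into SPRING_APPLICATION_JSON with the cf CLI could look like this:
cf set-env dataflow-server SPRING_APPLICATION_JSON '{"spring.datasource.url":"jdbc:postgresql://10.254.48.231:5432/scdf","spring.datasource.username":"<user>","spring.datasource.password":"<password>","spring.datasource.driver-class-name":"org.postgresql.Driver"}'
cf restage dataflow-server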
I'm trying to install PostgreSQL on my Windows 10 computer for the first time. I got an error at the end of the installation saying there was a "problem running post-install step. Installation may not complete correctly. The database cluster initialization failed."
When I run the SQL Shell I get an error on the default login saying 'chcp' is not recognized as an internal or external command. I set the PATH environment variable to the bin folder of the PostgreSQL directory in Program Files. I also tried a number of other (but very dated) solutions to similar problems, such as moving my data directory outside of the PostgreSQL directory entirely. Most of these solutions date back to 2012 and don't seem to work anymore.
The one that seemed closest to working is the question "postgresql installation failed".
However, I can't find "postgres" as a user. I get an error saying:
"An object named "postgres" cannot be found. Check the selected object types and location for accuracy and ensure that you typed the object name correctly, or remove this object from the selection."
Does anybody have any updated solutions/tips for this?
I'm trying to set up the pgexercises data on my local machine. When I run psql -U <username> -f clubdata.sql -d postgres -x I get the error: psql:clubdata.sql:6: ERROR: cannot execute CREATE SCHEMA in a read-only transaction.
Why did it create a read-only database on my local machine? Can I change this?
Normally the most plausible reasons for this kind of error are (each can be checked with the queries sketched after this list):
running CREATE statements on a read-only replica (the entire instance is read-only);
<username> has default_transaction_read_only set to ON;
the database has default_transaction_read_only set to ON.
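A minimal sketch for checking each of these from psql (replace <username> with the actual role name):
SELECT pg_is_in_recovery();                     -- true means you are on a read-only replica
SELECT rolname, rolconfig FROM pg_roles
  WHERE rolname = '<username>';                 -- per-role settings such as default_transaction_read_only=on
SELECT d.datname, s.setconfig
  FROM pg_db_role_setting s
  JOIN pg_database d ON d.oid = s.setdatabase;  -- per-database overrides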
The script mentioned has in its first lines:
CREATE DATABASE exercises;
\c exercises
CREATE SCHEMA cd;
and you report that the error happens with CREATE SCHEMA at line 6, not before.
That means that the CREATE DATABASE does work, when run by <username>.
And it wouldn't work if any of the reasons above were directly applicable.
One possibility that would technically explain this would be that default_transaction_read_only would be ON in the postgresql.conf file, and set to OFF for the database postgres, the one that the invocation of psql connects to, through an ALTER DATABASE statement that supersedes the configuration file.
That would be why CREATE DATABASE works, but then as soon as it connects to a different database with \c, the default_transaction_read_only setting of the session would flip to ON.
But of course that would be a pretty weird and unusual configuration.
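A sketch of that (admittedly unusual) configuration, just to make the mechanism concrete:
# postgresql.conf: every new session starts read-only
default_transaction_read_only = on
-- SQL: the postgres database alone is switched back to read-write
ALTER DATABASE postgres SET default_transaction_read_only = off;
-- a session connected to postgres can then run CREATE DATABASE, but after \c exercises
-- the per-database override no longer applies and the session falls back to read-only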
Reached out to pgexercises.com and they were able to help me.
I ran these commands (separately):
psql -U <username> -d postgres
begin;
set transaction read write;
alter database exercises set default_transaction_read_only = off;
commit;
\q
Then I dropped the database from the terminal with dropdb exercises and ran the script again: psql -U <username> -f clubdata.sql -d postgres -x -q
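The same steps condensed into a shell sequence, for reference (a sketch; it mirrors the commands above and assumes the postgres database itself accepts writes, as it did here):
psql -U <username> -d postgres -c "BEGIN; SET TRANSACTION READ WRITE; ALTER DATABASE exercises SET default_transaction_read_only = off; COMMIT;"
dropdb -U <username> exercises
psql -U <username> -f clubdata.sql -d postgres -x -q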
I was getting cannot execute CREATE TABLE in a read-only transaction, cannot execute DELETE in a read-only transaction, and others.
They all followed a cannot execute INSERT in a read-only transaction. It was as if the connection had switched itself over to read-only in the middle of my batch processing.
Turns out, I was running out of storage!
Write access was disabled when the database could no longer write anything. I am using Postgres on Azure. I don't know if the same effect would happen if I was on a dedicated server.
I had the same issue with a PostgreSQL UPDATE statement:
SQL Error: 0, SQLState: 25006 ERROR: cannot execute UPDATE in a read-only transaction
Verify database access by running the query below; it will return either true or false:
SELECT pg_is_in_recovery()
true -> the database has read-only access
false -> the database has full access
If it returns true, check with the DBA team for full access, and also try a ping from the command prompt to ensure connectivity:
ping <database hostname or dns>
Also verify whether you have primary and standby nodes for the database.
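A short sketch of both checks from psql:
SELECT pg_is_in_recovery();          -- true: standby (read-only), false: primary (full access)
SELECT client_addr, state
  FROM pg_stat_replication;          -- run on the primary: lists the connected standby nodes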
In my case I had a master node and replication nodes, and the master node became a replication node, which I believe switched it into hot_standby mode. So I was trying to write data to a node that was meant only for reading, hence the "read-only" problem.
You can query the node in question with SELECT pg_is_in_recovery(), and if it returns true then it is "read-only", and I suppose you should switch to whatever master node you have now.
I got this information from: https://serverfault.com/questions/630753/how-to-change-postgresql-database-from-read-only-to-writable.
So full credit and my thanks goes to Craig Ringer!
DBeaver: in my case, the connection's read-only option was turned on; switching it off fixed the error.
This doesn't quite answer the original question, but I received the same error and found this page, which ultimately led to a fix.
My issue was running a function that created and dropped temp tables. The function was created with SECURITY DEFINER privileges, and the user had access locally.
In a different environment, I received the cannot execute DROP TABLE in a read-only transaction error message. This environment was AWS Aurora, and by default, non-admin developers were given read-only privileges. Their server connections were thus set up to use the read-only node of Aurora (-ro- is in the connection url), which must put the connection in the read-only state. Running the same function with the same user against the write node worked.
Seems like a good use case for table variables like SQL Server has! Or, at least, AWS should modify their flow to allow temp tables to be created and dropped on read nodes.
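A hypothetical repro of the pattern described above (all names are made up); on a read-only node the temp-table statements are what trigger the error:
CREATE FUNCTION report_snapshot() RETURNS integer
LANGUAGE plpgsql SECURITY DEFINER AS $$
DECLARE
  n integer;
BEGIN
  CREATE TEMP TABLE tmp_report AS SELECT 1 AS x;  -- fails on a read-only node
  SELECT count(*) INTO n FROM tmp_report;
  DROP TABLE tmp_report;                          -- "cannot execute DROP TABLE in a read-only transaction"
  RETURN n;
END;
$$;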
This occurred when I was restoring a production database locally; the database was still doing online recovery from the WAL records.
A little unexpected, as I assumed pgBackRest was creating instantly recoverable restores; perhaps not.
91902 postgres 20 0 1445256 14804 13180 D 4.3 0.3 0:28.06 postgres: startup recovering 000000010000001E000000A5
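If you want to watch the recovery progress from psql while this is happening, something like this works on PostgreSQL 10+ (a sketch):
SELECT pg_is_in_recovery(),
       pg_last_wal_replay_lsn(),
       pg_last_xact_replay_timestamp();   -- repeat the query to see the replay position advance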
If, like me, you are trying to create a DB on Heroku and are stuck because this message shows up on the Dataclips tab, I did this:
Choose Resources from the app's tabs (Overview, Resources, Deploy, Metrics, Activity, Access, Settings).
On the database page, choose Settings (from Overview, Durability, Settings, Dataclips).
Then, under Administration -> Database Credentials, choose View Credentials...
Then open a terminal, fill that info in here, and press Enter:
psql --host=***************.amazonaws.com --port=5432 --username=*********pubxl --password --dbname=*******lol
It will then ask for the password; copy-paste it from there and you can run Postgres commands.
I suddenly started facing this error with Postgres installed on my Windows machine while running an ALTER query from DBeaver. All I did was delete the Postgres connection in DBeaver and create a new connection.
If you are using Azure Database for PostgreSQL, your server goes into read-only mode when the storage used is near total capacity.
The error you get is exactly:
ERROR: cannot execute XXXXXXXXX in a read-only transaction
https://learn.microsoft.com/en-us/azure/postgresql/flexible-server/concepts-compute-storage
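While the server is in that state, the read-only switch is typically visible as a session default; freeing space or growing the server's storage is what clears it (a sketch):
SHOW default_transaction_read_only;                            -- expected to be "on" while storage-full
SELECT pg_size_pretty(pg_database_size(current_database()));   -- rough idea of where the space is going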
I just had this error. My cause was not having granted permission on the SEQUENCE:
GRANT ALL ON SEQUENCE word_mash_word_cube_template_description_reference_seq TO ronshome_user;
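If you are not sure which sequence backs a serial/identity column, pg_get_serial_sequence can tell you (the table and column names below are placeholders):
SELECT pg_get_serial_sequence('some_table', 'id');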
If you are facing this issue with an RDS instance cluster, check your endpoint and use the writer instance endpoint; then it should work.
The issue can be due to IntelliJ configuration:
Go to the Database view > click Data Source Properties (Shift+Enter) > select your data source >
Options tab > under Connection: uncheck Read-only.
For me it was Azure PostgreSQL failing over to the standby during maintenance in Azure and never failing back to the master while PostgreSQL was in HA mode. You can check this event in Service Health and also check which zone your current VM is running in. If it's 2 and not 1, then most likely that's the result of the events described above.
Trying to install MobileFirst 6.3 (using DB2 v10.5, Windows Server 2012 R2 Std), and during the creation of the DB2 APPCNTR database I get the error:
Creating database APPCNTR (this may take 5 minutes) ...failed:
Cannot connect to database 'APPCNTR' with user 'db2admin' after it was created: com.ibm.db2.jcc.am.SqlException: DB2 SQL Error: SQLCODE=-1035, SQLSTATE=57019, SQLERRMC=null, DRIVER=4.17.29
This is a clean installation of DB2, with no other programs using it (that I know of). The db2admin user is a member of my Windows security groups 'DB2ADMS' as well as 'DB2USERS', just in case.
If I go back in the installer and press Next again, it says the database is already created (I'm not sure whether it was fully successful or only partial)...
I believe that the database has been created successfully but for some reason, it is not possible to connect to the database via JDBC.
The explanation for the error message can be found here -- it can have many causes: http://www-01.ibm.com/support/knowledgecenter/SSEPGG_9.5.0/com.ibm.db2.luw.messages.sql.doc/doc/msql01035n.html
When that happens to me, the issue is often resolved by waiting a bit and restarting Installation Manager. Restarting the DB2 instance also helps. (If you don't restart Installation Manager, it believes it needs to create the database and fails because the database was already created.)
If the problem is not resolved by waiting, you can also create the database manually prior to running Installation Manager using this procedure (use a different database name than the one that is considered busy):
https://www-01.ibm.com/support/knowledgecenter/SSHS8R_6.3.0/com.ibm.worklight.installconfig.doc/admin/t_creating_the_db2_database_for_App_Center.html
Then use that database name when running Installation Manager.
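For orientation only, creating and smoke-testing such a database from a DB2 command window could look like this (APPCNTR2 is an example name; follow the linked procedure for the exact options the App Center database needs):
db2 create database APPCNTR2
db2 connect to APPCNTR2 user db2admin
db2 connect reset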
Hi, I am working with the ReplicAction tool to transfer data from a Lotus Notes view to an Oracle database.
When I create the link document for the Oracle DB, it is created successfully without any error.
When I create the Include Table for the Oracle DB, it is created successfully and all columns are listed.
When I create the replication, it is also created successfully.
But when the job executes, it gives this error in the log:
05/08/2012 01:37:16 AM Starting Replication: BADtoProductPortal
05/08/2012 01:37:19 AM Error: <ODBC Error> [DataDirect][ODBC Oracle driver][Oracle]ORA-12154: TNS:could not resolve service name
05/08/2012 01:37:19 AM Error: Information: Unable to open Link: PPLink
05/08/2012 01:37:19 AM Error: Replication to Link <PPLink> did not complete
05/08/2012 01:37:20 AM End of Replication: BADtoProductPortal
If the error were with the service name, then I think we should not have been able to create the link document either.
When I use an ODBC connection for the link, I am unable to create the replication job; it gives an error like: Notes data field "ID" does not match the source data field.
But I know it was working before.
I suggest checking that the TASK running the job uses the same TNS entry as you do when running it "manually".
I also suggest checking that the TASK has access to your Oracle driver. Does this task have the rights to run it?
The ORA-12154 error is thrown during the logon process to a database. It indicates that Oracle's communication software (TNS, i.e. SQL*Net or Net8) did not recognize the host/service name specified in the connection parameters.
So the issue is clearly a kind of "environment difference" between your configuration when you run the replication manually and when the job runs it.
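A concrete way to compare the two environments is to resolve the same alias in both, for example with tnsping (the alias, host, and service name below are placeholders):
tnsping PPORACLE
and then check that the corresponding entry in tnsnames.ora matches on both machines:
PPORACLE =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = dbhost.example.com)(PORT = 1521))
    (CONNECT_DATA = (SERVICE_NAME = prodportal))
  )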
Hope this helps.
I'm assuming here that when you successfully replicate you're doing it manually from your local machine, and when the job fails it's running as a scheduled job on a server. If that's the case, I agree with Emmanuel. Remember: running the job locally uses the local tnsnames.ora file, while running it scheduled uses the tnsnames.ora file on the server. You may not be aware of anything changing, but are you responsible for maintenance on the server?