How to tell if the data checksum feature is on in Postgres

Postgres 9.3 introduced a data checksum feature which can detect corruption in pages. Is there a way to query the database to determine if this is on?
Being hosted on a PaaS system, I don't have access to the actual server to check any configuration settings there. I also only have access to our database, and not the main postgres database either. Is there a way to determine if this is on from a psql console only?

show data_checksums;
data_checksums
----------------
off
http://www.postgresql.org/docs/current/static/runtime-config-preset.html

You can use pg_controldata to see whether your PostgreSQL cluster has data checksums enabled.
If the version shown is 0, the feature is disabled in your cluster.
The data_checksums parameter was added in PostgreSQL 9.3.4; if your PostgreSQL version is older than that, you can't select this GUC parameter and must check the control file instead:
pg93@db-172-16-3-150-> pg_controldata | grep checksum
Data page checksum version: 0
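For comparison, on a cluster initialized with checksums enabled (initdb --data-checksums), the same check should report a non-zero version:
pg_controldata | grep checksum
Data page checksum version: 1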

From 9.4 on, you can try the following query:
select * from pg_settings where name ~ 'checksum';
https://paquier.xyz/postgresql-2/postgres-9-4-feature-highlight-data-checksum-switch-as-a-guc-parameter/
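If memory serves, PostgreSQL 9.6 and later also expose the control file via SQL through the pg_control_init() function, so you can check the checksum version without shell access (a sketch; verify the function exists on your version):
select data_page_checksum_version from pg_control_init();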

Related

How to fix "ERROR: column c.relhasoids does not exist" in Postgres?

I'm trying to run a CREATE TABLE command in PostgreSQL.
After creating a table, if I type TABLE table_name, it works.
But when I type \d table_name, I keep getting the error below.
ERROR: column c.relhasoids does not exist
LINE 1: ...riggers, c.relrowsecurity, c.relforcerowsecurity, c.relhasoi...
I tried DROP DATABASE, then recreated the database and the table again several times, but it didn't work.
Any suggestions would be appreciated! Thank you.
I am able to reproduce your error if I am using Postgres v.12 and an older client (v.11 or earlier):
[root@def /]# psql -h 172.17.0.3
psql (11.5, server 12.0)
WARNING: psql major version 11, server major version 12.
Some psql features might not work.
Type "help" for help.
postgres=# create table mytable (id int, name text);
CREATE TABLE
postgres=# table mytable;
id | name
----+------
(0 rows)
postgres=# \d mytable;
ERROR: column c.relhasoids does not exist
LINE 1: ...riggers, c.relrowsecurity, c.relforcerowsecurity, c.relhasoi...
^
postgres=#
This is because in v. 12, table OIDs are no longer treated as special columns, and hence the relhasoids column is no longer necessary. Please make sure you're using a v. 12 psql binary so you don't encounter this error.
You may not necessarily be using psql, so the more general answer here is to make sure you’re using a compatible client.
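As a quick sanity check, you can compare both sides yourself: psql --version prints the client version from the shell, and SHOW server_version; asks the server. With the mismatch reproduced above, you would see something like:
psql --version
psql (PostgreSQL) 11.5
psql -h 172.17.0.3 -c 'SHOW server_version;'
 server_version
----------------
 12.0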
For anyone running Postgres as a Docker container:
Instead of running psql from the host, run it from inside the container e.g.
docker exec -it postgres_container_name psql your_connection_string
The Postgres image always ships with the corresponding—and thus always updated—version of psql so you don't have to worry about having the correct version installed on the host machine.
If you're using DataGrip, there's an easy fix:
Try using "Introspect using JDBC metadata". This fixed it for me when (I think) I had a version mismatch between postgresql server and DataGrip client.
Under your connection settings -> Options tab -> check Introspect using JDBC metadata
According to https://www.jetbrains.com/help/datagrip/data-sources-and-drivers-dialog.html#optionsTab :
Switch to the JDBC-based introspector.
To retrieve information about database objects (DB metadata), DataGrip
uses the following introspectors:
A native introspector (might be unavailable for certain DBMS). The
native introspector uses DBMS-specific tables and views as a source of
metadata. It can retrieve DBMS-specific details and produce a more
precise picture of database objects.
A JDBC-based introspector (available for all the DBMS). The JDBC-based
introspector uses the metadata provided by the JDBC driver. It can
retrieve only standard information about database objects and their
properties.
Consider using the JDBC-based intorspector when the native
introspector fails or is not available.
The native introspector can fail, when your database server version is
older than the minimum version supported by DataGrip.
You can try to switch to the JDBC-based introspector to fix problems
with retrieving the database structure information from your database.
For example, when the schemas that exist in your database or database
objects below the schema level are not shown in the Database tool
window.
The issue is that the client (psql) is a different version from the Postgres server. I have seen this issue with psql version 11 talking to Postgres version 12. To solve this issue, upgrade the psql version to 12.
If you are running a Docker Postgres, you can exec into the container and then use the psql client installed there.
# get the container id with this
docker ps
# then exec into the container; please note the host will now be 127.0.0.1
docker exec -it c12e8c6b8eb5 /bin/bash
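Once inside, the bundled client matches the server, so something like this should work (assuming the default postgres superuser):
psql -h 127.0.0.1 -U postgres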
I had this issue because my psql was 9.2 and the server version was 12.7.
So ... clearly the psql client needs to be updated. But how?
Before you go downloading or installing anything, though, you may already have the right version. In my case I did.
I executed which psql which showed my version was coming from /usr/bin/psql.
I then checked /usr/pgsql-12/bin and found there was a psql in there.
So all I needed to do was ensure psql was picked up from there.
There are a number of places that could be controlling this; in my case I just added this line to my .pgsql_profile (in the postgres user's home directory):
export PATH="/usr/pgsql-12/bin:$PATH"
Logging out and back in as postgres and executing which psql showed the change had been successful:
which psql
/usr/pgsql-12/bin/psql
This answer is specific to pgcli
If you are using pgcli you may be encountering this issue. It's solved by updating the python package pgspecial.
If you installed pgcli using pip, you can simply do, depending on your python version:
pip install -U pgspecial
or
pip3 install -U pgspecial
If you are using Ubuntu and installed pgcli using apt, you can either switch it to pip with:
sudo apt remove --purge pgcli
pip3 install pgcli
or update the distribution package python-pgspecial or python3-pgspecial from the Ubuntu packages web site. In that case you may need to update its dependencies as well.
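If you want to confirm what you have before and after the upgrade, these should do it (pgcli ships pgspecial as a dependency):
pgcli --version
pip3 show pgspecial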
I had this issue today and was unable to continue working because of it; strangely, the application code was working fine.
Later, I found the issue only occurred when I used the OmniDb client to connect to the DB.
I switched to the default pgAdmin 4 client that comes with the Postgres installation, and the issue no longer occurs. Link: https://www.pgadmin.org/download/pgadmin-4-windows/
It's possible that the OmniDb client is older, but I had no time to troubleshoot it, so I'm using pgAdmin 4 for now.
Hope that helps.
Just updating DataGrip solved this issue. DataGrip updated to version 2019.3.3, Build #DB-193.6494.42, built on February 12, 2020, and now it's working :)
Just for DataGrip users!
I had the same issue today too. In my case, the problem was solved when I uninstalled version 12 and installed version 11. It seems v12 dropped some catalog columns that older clients still expect.
I had the same problem, but I found the solution by downloading the latest build from 14/10/2019.
Follow the link:
https://postbird.paxa.kuber.host/2019_10_14.06_42-master-7a9e949
I hope it helps
To fix this, edit the Postgres.php file and comment out the lines in the hasObjectID function as shown below.
function hasObjectID($table) {
    $c_schema = $this->_schema;
    $this->clean($c_schema);
    $this->clean($table);
    /*
    $sql = "SELECT relhasoids FROM pg_catalog.pg_class WHERE relname='{$table}'
        AND relnamespace = (SELECT oid FROM pg_catalog.pg_namespace WHERE nspname='{$c_schema}')";
    $rs = $this->selectSet($sql);
    if ($rs->recordCount() != 1) return null;
    else {
        $rs->fields['relhasoids'] = $this->phpBool($rs->fields['relhasoids']);
        return $rs->fields['relhasoids'];
    }
    */
}
I had the same issue when using PgAdmin to query the database.
Once I installed the newest version of PgAdmin the error disappeared!
You might also try restarting pgadmin.
After upgrading from postgres96 to postgres12 I had the same issue. My pgadmin was running psql v12.0 so that wasn't the issue. I restarted pgadmin for a separate issue and the relhasoids issue went away.
If anyone could explain to me why this worked that would be appreciated.
Just use version 11.
How to install version 11:
https://websiteforstudents.com/how-to-install-postgresql-11-on-ubuntu-16-04-18-04-servers/
I also got the same issue with my PostgreSQL tables. I fixed it with the query below.
ALTER TABLE MyDataBase.table_name ADD COLUMN column_name data_type DEFAULT 0 NOT NULL;
COMMIT;

How to change max_connections for Postgres through SQL command

We have a hosted PostgreSQL, with no access to the system or *.conf files.
I do have admin access and can connect to it using Oracle SQL Developer.
Can I run any command to increase max_connections? All other parameters seem to be OK; shared memory and buffers can hold more connections, so there is no problem there.
Changing the max_connections parameter requires a Postgres restart.
Commands
Check max_connections just to keep the current value in mind:
SHOW max_connections;
Change the max_connections value:
ALTER SYSTEM SET max_connections TO '500';
Restart the PostgreSQL server.
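How you restart depends on how the server is managed. On a typical self-managed Linux box it would be one of the following (a sketch; on a hosted plan you restart from the provider's dashboard instead):
sudo systemctl restart postgresql
# or with pg_ctl directly, adjusting the data directory to yours:
pg_ctl restart -D /var/lib/postgresql/data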
Apparently, the hosted Postgres we are using (compose.io) does not provide this option.
So the workaround is to use PgBouncer to manage your connections better.
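For illustration, a minimal pgbouncer.ini for that workaround might look like the sketch below; the host name, pool sizes, and paths are made-up examples:
[databases]
mydb = host=your-db-host.example.com port=5432 dbname=mydb
[pgbouncer]
listen_addr = 127.0.0.1
listen_port = 6432
auth_type = md5
auth_file = /etc/pgbouncer/userlist.txt
pool_mode = transaction
max_client_conn = 500
default_pool_size = 20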

ERROR: cannot execute CREATE TABLE in a read-only transaction

I'm trying to set up the pgexercises data on my local machine. When I run: psql -U <username> -f clubdata.sql -d postgres -x I get the error: psql:clubdata.sql:6: ERROR: cannot execute CREATE SCHEMA in a read-only transaction.
Why did it create a read-only database on my local machine? Can I change this?
Normally the most plausible reasons for this kind of error are:
trying CREATE statements on a read-only replica (the entire instance is read-only);
<username> has default_transaction_read_only set to ON;
the database has default_transaction_read_only set to ON (queries to check these are sketched at the end of this answer).
The script mentioned has in its first lines:
CREATE DATABASE exercises;
\c exercises
CREATE SCHEMA cd;
and you report that the error happens with CREATE SCHEMA at line 6, not before.
That means that the CREATE DATABASE does work when run by <username>.
And it wouldn't work if any of the reasons above were directly applicable.
One possibility that would technically explain this is that default_transaction_read_only is ON in the postgresql.conf file, and set to OFF for the database postgres (the one that the invocation of psql connects to) through an ALTER DATABASE statement that supersedes the configuration file.
That would be why CREATE DATABASE works, but then, as soon as it connects to a different database with \c, the default_transaction_read_only setting of the session flips to ON.
But of course that would be a pretty weird and unusual configuration.
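To actually check the settings discussed above from psql, these queries should be enough (pg_db_role_setting stores the per-database and per-role overrides):
SHOW default_transaction_read_only;  -- effective value in the current session
SELECT pg_is_in_recovery();          -- true on a read-only replica
SELECT * FROM pg_db_role_setting;    -- per-database / per-role overrides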
I reached out to pgexercises.com and they were able to help me.
I ran these commands (separately):
psql -U <username> -d postgres
begin;
set transaction read write;
alter database exercises set default_transaction_read_only = off;
commit;
\q
Then I dropped the database from the terminal with dropdb exercises and ran the script again: psql -U <username> -f clubdata.sql -d postgres -x -q
I was getting cannot execute CREATE TABLE in a read-only transaction, cannot execute DELETE in a read-only transaction, and others.
They all followed a cannot execute INSERT in a read-only transaction. It was as if the connection had switched itself over to read-only in the middle of my batch processing.
Turns out, I was running out of storage!
Write access was disabled once the database could no longer write anything. I am using Postgres on Azure. I don't know if the same thing would happen on a dedicated server.
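If you suspect the same storage-full situation, a quick way to see how much space the current database uses (your provider's dashboard is the authoritative source, though):
SELECT pg_size_pretty(pg_database_size(current_database()));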
I had the same issue with a Postgres UPDATE statement:
SQL Error: 0, SQLState: 25006 ERROR: cannot execute UPDATE in a read-only transaction
I verified database access by running the query below, which returns either true or false:
SELECT pg_is_in_recovery()
true -> database has read-only access
false -> database has full access
If it returns true, check with the DBA team for full access, and also try a ping from the command prompt to ensure connectivity:
ping <database hostname or dns>
Also verify whether you have primary and standby nodes for the database.
In my case I had a master and replication nodes, and the master node became a replication node, which I believe switched it into hot_standby mode. So I was trying to write data to a node that was meant only for reading, hence the "read-only" problem.
You can query the node in question with SELECT pg_is_in_recovery(), and if it returns true then it is "read-only"; I suppose you should switch to using whatever master node you have now.
I got this information from: https://serverfault.com/questions/630753/how-to-change-postgresql-database-from-read-only-to-writable.
So full credit and my thanks go to Craig Ringer!
DBeaver: in my case, the connection's read-only option was turned on.
This doesn't quite answer the original question, but I received the same error and found this page, which ultimately led to a fix.
My issue was trying to run a function with temp tables being created and dropped. The function was created with SECURITY DEFINER privileges, and the user had access locally.
In a different environment, I received the cannot execute DROP TABLE in a read-only transaction error message. This environment was AWS Aurora, and by default, non-admin developers were given read-only privileges. Their server connections were thus set up to use the read-only node of Aurora (-ro- is in the connection url), which must put the connection in the read-only state. Running the same function with the same user against the write node worked.
Seems like a good use case for table variables like SQL Server has! Or, at least, AWS should modify their flow to allow temp tables to be created and dropped on read nodes.
This occurred when I was restoring a production database locally; the database was still doing online recovery from the WAL records.
A little unexpected, as I assumed pgBackRest was creating instantly recoverable restores; perhaps not.
91902 postgres 20 0 1445256 14804 13180 D 4.3 0.3 0:28.06 postgres: startup recovering 000000010000001E000000A5
If, like me, you are trying to create a DB on Heroku and are stuck because this message shows up on the Dataclip tab, I did this:
Choose Resources from the app tabs (Overview, Resources, Deploy, Metrics, Activity, Access, Settings).
Choose Settings out of (Overview, Durability, Settings, Dataclip).
Then under Administration -> Database Credentials, choose View Credentials...
Then open a terminal, fill that info in here, and press Enter:
psql --host=***************.amazonaws.com --port=5432 --username=*********pubxl --password --dbname=*******lol
It will then ask for the password; copy-paste it from there and you can run Postgres commands.
I suddenly started facing this error on Postgres installed on my Windows machine when I was running an ALTER query from DBeaver. All I did was delete the Postgres connection in DBeaver and create a new connection.
If you are using Azure Database for PostgreSQL, your server is put into read-only mode when the storage used is near total capacity.
The error you get is exactly:
ERROR: cannot execute XXXXXXXXX in a read-only transaction
https://learn.microsoft.com/en-us/azure/postgresql/flexible-server/concepts-compute-storage
I just had this error. My cause was not granting permission on the SEQUENCE:
GRANT ALL ON SEQUENCE word_mash_word_cube_template_description_reference_seq TO ronshome_user;
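If more than one sequence is affected, you can grant on all sequences in a schema at once; a sketch assuming the objects live in public:
GRANT USAGE, SELECT ON ALL SEQUENCES IN SCHEMA public TO ronshome_user;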
If you are facing this issue with an RDS cluster, check your endpoint and use the writer instance endpoint. Then it should work.
The issue can be due to IntelliJ config:
Go to the Database view > click Data Source Properties (Shift+Enter) > (select your data source) >
Options tab > under Connection: uncheck Read-only
For me it was Azure PostgreSQL failing over to standby during maintenance in Azure and never failing back to master when PostgreSQL was in HA mode. You can check this event in Service Health and also check which zone your current VM is running in. If it's 2 and not 1, then most likely that's the result of the events described above.

Updating Heroku Postgres from dev to basic

I am trying to update my database from Dev to Basic on Heroku. I followed all the steps mentioned here, but after heroku pg:promote HEROKU_POSTGRESQL_WHATEVER I wanted to check if my database had everything, so I just went and looked on the website, and for the Basic version it says:
Data Size: 0 B
Tables: 0
PG Version: ?
while it should show the data that was in the Dev database. I am not sure what went wrong.
The table and database size are computed via an asynchronous process, which can sometimes take a little while. If you've recently migrated, try connecting with heroku pg:psql and then running:
VACUUM ANALYZE;
This will ensure Postgres has proper statistics and reports the tables correctly when Heroku asks about the table size. Additionally, you can explore your database manually once connected, to ensure your data is there:
\dt -- to display tables
SELECT * FROM foo; -- to ensure data is there in a specific table

App to monitor PostgreSQL queries in real time?

I'd like to monitor the queries getting sent to my database from an application. To that end, I've found pg_stat_activity, but more often than not, the rows that come back read "<IDLE> in transaction". I'm either doing something wrong, am not fast enough to see the queries come through, am confused, or all of the above!
Can someone recommend the most idiot-proof way to monitor queries running against PostgreSQL? I'd prefer some sort of easy-to-use UI-based solution (example: SQL Server's "Profiler"), but I'm not too choosy.
PgAdmin offers a pretty easy-to-use server monitor tool
(Tools -> Server Status)
With PostgreSQL 8.4 or higher you can use the contrib module pg_stat_statements to gather query execution statistics of the database server.
Run the SQL script of this contrib module, pg_stat_statements.sql (on Ubuntu it can be found in /usr/share/postgresql/<version>/contrib), in your database and add this sample configuration to your postgresql.conf (requires a restart):
custom_variable_classes = 'pg_stat_statements'
pg_stat_statements.max = 1000
pg_stat_statements.track = top # top,all,none
pg_stat_statements.save = off
To see which queries are executed in real time, you might just configure the server log to show all queries, or only queries with a minimum execution time. To do so, set the logging configuration parameters log_statement and log_min_duration_statement in your postgresql.conf accordingly, for example:
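A sketch of the relevant postgresql.conf lines (the 250 ms threshold is just an illustration):
log_statement = 'all'              # log every statement
log_min_duration_statement = 250   # or: log only statements taking longer than 250 ms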
pg_activity is what we use.
https://github.com/dalibo/pg_activity
It's a great tool with a top-like interface.
You can install and run it on Ubuntu 21.10 with:
sudo apt install pg-activity
pg_activity
If you are using Docker Compose, you can add this line to your docker-compose.yaml file:
command: ["postgres", "-c", "log_statement=all"]
Now you can see the Postgres query logs in docker-compose logs with
docker-compose logs -f
or if you want to see only postgres logs
docker-compose logs -f [postgres-service-name]
https://stackoverflow.com/a/58806511/10053470
I haven't tried it myself, unfortunately, but I think pgFouine can show you some statistics.
Although it seems it does not show queries in real time, but rather generates a report of queries afterwards, perhaps it still meets your needs?
You can take a look at
http://pgfouine.projects.postgresql.org/