PostgreSQL said: MultiXactId 1085002 has not been created yet -- apparent wraparound

I got a weird error while writing to my Postgres database on Aurora PostgreSQL (AWS), PostgreSQL version 9.6.11.
My attempts to fix the issue on the table admin_user:
VACUUM admin_user;
VACUUM FREEZE admin_user;
I couldn't recreate the table, as it's connected to all the other tables and doing so would cause a big mess.
Update to the question: I can't access the table at all.

For reasons that we probably cannot fathom here (could be a PostgreSQL software bug that might have been fixed by 9.6.19, could be a hardware problem), you have suffered data corruption.
Since you are using a hosted database and have no access to the database except through SQL, your options are limited.
The best option is probably to use subtransactions to salvage as much data as you can from the table (called test in this example):
/* will receive salvaged rows */
CREATE TABLE test_copy (LIKE test);
You can run something like this code:
DO
$$DECLARE
   c CURSOR FOR SELECT * FROM test;
   r test;
   cnt bigint := 0;
BEGIN
   OPEN c;
   LOOP
      cnt := cnt + 1;
      /* block to start a subtransaction for each row */
      BEGIN
         FETCH c INTO r;
         EXIT WHEN NOT FOUND;
      EXCEPTION
         WHEN OTHERS THEN
            /* there was data corruption fetching the row */
            RAISE WARNING 'skipped corrupt data at row number %', cnt;
            /* skip over the corrupt row */
            MOVE c;
            CONTINUE;
      END;
      /* row is good, salvage it */
      INSERT INTO test_copy VALUES (r.*);
   END LOOP;
END;$$;
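One possible follow-up, once the DO block has finished and you are satisfied with the salvaged data, would be to swap the tables. Note that CREATE TABLE ... (LIKE test) copies only the column definitions, so indexes, constraints and any foreign keys pointing at the table would have to be recreated by hand:
BEGIN;
ALTER TABLE test RENAME TO test_corrupt;
ALTER TABLE test_copy RENAME TO test;
COMMIT;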
Good luck.

How to prevent or avoid running update and delete statements without where clauses in PostgreSQL

I need the equivalent of MySQL's SQL_SAFE_UPDATES setting in PostgreSQL.
For example:
UPDATE table_name SET active=1; -- Prevent this statement or throw error message.
UPDATE table_name SET active=1 WHERE id=1; -- This is allowed
My company database has many users with INSERT and UPDATE privileges, and any one of them could run such an unsafe update.
How can this scenario be handled?
Is there a trigger or extension that can catch unsafe updates in PostgreSQL?
I have switched off autocommit to avoid these errors, so I always have a transaction that I can roll back. All you have to do is modify .psqlrc:
\set AUTOCOMMIT off
\echo AUTOCOMMIT = :AUTOCOMMIT
\set PROMPT1 '%[%033[32m%]%/%[%033[0m%]%R%[%033[1;32;40m%]%x%[%033[0m%]%# '
\set PROMPT2 '%[%033[32m%]%/%[%033[0m%]%R%[%033[1;32;40m%]%x%[%033[0m%]%# '
\set PROMPT3 '>> '
You don't have to include the PROMPT settings, but they are helpful because they change the psql prompt to show the transaction status.
Another advantage of this approach is that it gives you a chance to prevent any erroneous changes.
Example (psql):
database=# SELECT * FROM my_table; -- implicit start transaction; see prompt
-- output result
database*# UPDATE my_table SET my_column = 1; -- missed where clause
UPDATE 525125 -- Oh, no!
database*# ROLLBACK; -- Puh! revert wrong changes
ROLLBACK
database=# -- I'm completely operational and all of my circuits working perfectly
There actually was a discussion on the hackers list about this very feature. It had a mixed reception, but might have been accepted if the author had persisted.
As it is, the best you can do is a statement level trigger that bleats if you modify too many rows:
CREATE TABLE deleteme
AS SELECT i FROM generate_series(1, 1000) AS i;

CREATE FUNCTION stop_mass_deletes() RETURNS trigger
   LANGUAGE plpgsql AS
$$BEGIN
   IF (SELECT count(*) FROM OLD) > TG_ARGV[0]::bigint THEN
      RAISE EXCEPTION 'must not modify more than % rows', TG_ARGV[0];
   END IF;
   RETURN NULL;
END;$$;

CREATE TRIGGER stop_mass_deletes AFTER DELETE ON deleteme
   REFERENCING OLD TABLE AS old FOR EACH STATEMENT
   EXECUTE FUNCTION stop_mass_deletes(10);
DELETE FROM deleteme WHERE i < 100;
ERROR: must not modify more than 10 rows
CONTEXT: PL/pgSQL function stop_mass_deletes() line 1 at RAISE
DELETE FROM deleteme WHERE i < 10;
DELETE 9
This will have a certain performance impact on deletes.
This works from v10 on, when transition tables were introduced (on v10 you have to write EXECUTE PROCEDURE instead of EXECUTE FUNCTION in the CREATE TRIGGER statement).
If you can afford making it a little less convenient for your users, you might try revoking the UPDATE privilege from all "standard" users and creating a stored procedure like this:
CREATE FUNCTION update_row(table_name text, col_name text,
                           new_value text, condition text) RETURNS void
/* check if the condition is acceptable, then build and run the UPDATE;
   named update_row here because UPDATE is a reserved word */
LANGUAGE plpgsql SECURITY DEFINER AS
$$BEGIN
   EXECUTE format('UPDATE %I SET %I = %L WHERE %s',
                  table_name, col_name, new_value, condition);
END;$$;
Because of SECURITY DEFINER, your users will be able to UPDATE this way despite not having the UPDATE privilege.
I'm not sure if this is a good approach, but it lets you enforce requirements on UPDATE (or anything else) as strictly as you wish.
Of course, the more complicated the required UPDATEs are, the more complicated your procedure has to be, but if this is mostly about updating a single row by ID (as in your example), it might be worth a try.
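With the update_row sketch above, a user who has no direct UPDATE privilege could then run, for example:
SELECT update_row('table_name', 'active', '1', 'id = 1');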

How to delete multiple blob large objects in Postgres

I have millions of rows in the pg_largeobject_metadata table that I want to delete. What I have tried so far:
A simple SELECT lo_unlink(oid) works fine
A PERFORM lo_unlink(oid) in a loop of 10000 rows also works fine
But when I recursively delete multiple rows, I get this error. I cannot increase max_locks_per_transaction because it is managed by AWS.
ERROR: out of shared memory
HINT: You might need to increase max_locks_per_transaction.
CONTEXT: SQL statement "SELECT lo_unlink(c_row.oid)" PL/pgSQL function inline_code_block line 21 at PERFORM
SQL state: 53200
Here is the program I tried to write, but I still get the "out of shared memory" error.
DO $proc$
DECLARE
   v_fetch bigint;
   v_offset bigint;
   nbRows bigint;
   c_row record;
   c_rows CURSOR (p_offset bigint, p_fetch bigint) FOR
      SELECT oid FROM pg_largeobject_metadata
      WHERE oid BETWEEN 1910001 AND 2900000
      OFFSET p_offset ROWS FETCH NEXT p_fetch ROWS ONLY;
BEGIN
   v_offset := 0;
   v_fetch := 100;
   SELECT count(*) INTO nbRows
      FROM pg_largeobject_metadata
      WHERE oid BETWEEN 1910001 AND 2900000;
   RAISE NOTICE 'total rows = %', nbRows;
   LOOP  -- loop over the batches
      RAISE NOTICE 'offset = %', v_offset;
      OPEN c_rows(v_offset, v_fetch);
      LOOP  -- loop through the cursor results
         FETCH c_rows INTO c_row;
         EXIT WHEN NOT FOUND;
         PERFORM lo_unlink(c_row.oid);
      END LOOP;
      CLOSE c_rows;
      EXIT WHEN v_offset > nbRows;
      v_offset := v_offset + v_fetch;  -- advance to the next batch
   END LOOP;
END;
$proc$;
I am using PostgreSQL 9.5.
Has anyone faced this issue and can anyone help, please?
Each lo_unlink() grabs a lock on the object it deletes. These locks are freed only at the end of the transaction, and the total number of such locks in the cluster is capped by max_locks_per_transaction * (max_connections + max_prepared_transactions) (see Lock Management). By default max_locks_per_transaction is 64, and cranking it up by several orders of magnitude is not a good solution.
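To put rough numbers on that formula, assuming the stock defaults of 64, 100 and 0 respectively:
SHOW max_locks_per_transaction;       -- 64 by default
SHOW max_connections;                 -- 100 by default
SHOW max_prepared_transactions;       -- 0 by default
SELECT 64 * (100 + 0) AS lock_slots;  -- about 6400 object locks for the whole cluster
A single transaction that unlinks a million large objects cannot possibly fit in that.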
The typical solution is to move the outer LOOP from your DO block into your client-side code, and commit the transaction at each iteration (so each transaction removes 10000 large objects and commits).
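In plain SQL terms, the client could simply repeat a small transaction like the following until it returns no rows; since each committed round removes its large objects from pg_largeobject_metadata, no OFFSET bookkeeping is needed (a sketch, reusing the oid range from the question):
BEGIN;
SELECT lo_unlink(oid)
FROM pg_largeobject_metadata
WHERE oid BETWEEN 1910001 AND 2900000
LIMIT 10000;
COMMIT;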
Starting with PostgreSQL version 11, a COMMIT inside the DO block would be possible, just like transaction control in procedures is possible.
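A minimal sketch of that v11+ variant, again using the oid range from the question (the COMMIT releases the locks accumulated so far; it only works when the DO block is not itself wrapped in an outer transaction):
DO $proc$
DECLARE
   c_row record;
   cnt bigint := 0;
BEGIN
   FOR c_row IN SELECT oid FROM pg_largeobject_metadata
                WHERE oid BETWEEN 1910001 AND 2900000
   LOOP
      PERFORM lo_unlink(c_row.oid);
      cnt := cnt + 1;
      IF cnt % 10000 = 0 THEN
         COMMIT;  -- frees the locks taken by the last batch
      END IF;
   END LOOP;
END;
$proc$;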

Postgres Locking a table inside the function is not working?

CREATE OR REPLACE FUNCTION()
RETURNS VOID AS
$$
BEGIN
   FOR i IN 1..5 LOOP
      LOCK TABLE tbl_Employee1 IN EXCLUSIVE MODE;
      INSERT INTO tbl_Employee1
      VALUES (i, 'test');
   END LOOP;
   COMMIT;
END;
$$ LANGUAGE plpgsql;
When I select from the table, the query hangs forever, which means the transaction never completed. Please help me out.
Your code has been stripped down so much that it doesn't really make sense any more.
However, you should only lock the table once, not in each iteration of the loop. Also, you can't use COMMIT in a function in Postgres, so you have to remove that as well. It's also bad coding style (in Postgres and Oracle) not to provide column names for the INSERT statement.
Immediate solution:
CREATE OR REPLACE FUNCTION ...
RETURNS VOID AS
$$
BEGIN
   LOCK TABLE Employee1 IN EXCLUSIVE MODE;
   FOR i IN 1..5 LOOP
      INSERT INTO Employee1 (id, name)
      VALUES (i, 'test');
   END LOOP;
   -- no commit here!
END;
$$ LANGUAGE plpgsql;
The above is needlessly complicated in Postgres and can be implemented much more efficiently without a loop:
CREATE OR REPLACE FUNCTION ...
RETURNS VOID AS
$$
BEGIN
   LOCK TABLE Employee1 IN EXCLUSIVE MODE;
   INSERT INTO Employee1 (id, name)
   SELECT i, 'test'
   FROM generate_series(1, 5) AS i;
END;
$$ LANGUAGE plpgsql;
Locking a table in exclusive mode seems like a bad idea to begin with. In Oracle as well, but in Postgres this might have more severe implications. If you want to prevent duplicates in the table, create a unique index (or constraint) and deal with the errors, or use INSERT ... ON CONFLICT in Postgres, as sketched below. That will be much more efficient (and scalable) than locking the complete table.
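A minimal sketch of the ON CONFLICT variant; it assumes a unique constraint on Employee1.id, which is not shown in the question:
-- assumed: ALTER TABLE Employee1 ADD PRIMARY KEY (id);
INSERT INTO Employee1 (id, name)
SELECT i, 'test'
FROM generate_series(1, 5) AS i
ON CONFLICT (id) DO NOTHING;  -- duplicate ids are silently skipped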
Additionally: check the semantics of LOCK TABLE ... IN EXCLUSIVE MODE carefully when moving between Oracle and Postgres. In Postgres, EXCLUSIVE mode still allows concurrent ACCESS SHARE locks, so plain read-only SELECT statements can proceed while all data modifications are blocked; only the even stricter ACCESS EXCLUSIVE mode blocks every access to the table, including SELECTs.
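A quick two-session sketch of that behaviour, using the Employee1 table from above:
-- session 1
BEGIN;
LOCK TABLE Employee1 IN EXCLUSIVE MODE;

-- session 2, while session 1 is still open
SELECT count(*) FROM Employee1;                     -- returns immediately
INSERT INTO Employee1 (id, name) VALUES (99, 'x');  -- blocks until session 1 ends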

What is the scope of a PostgreSQL Temp Table?

I have googled quite a bit, and I have fairly decent reading comprehension, but I don't understand if this script will work in multiple threads on my postgres/postgis box. Here is the code:
DO
$do$
DECLARE
   x RECORD;
   b int;
BEGIN
   CREATE TEMP TABLE geoms (id serial, geom geometry) ON COMMIT DROP;
   FOR x IN SELECT id, geom FROM asdf LOOP
      TRUNCATE TABLE geoms;
      INSERT INTO geoms (geom)
      SELECT someGeomfield FROM sometable
      WHERE st_intersects(somegeomfield, x.geom);
      -- do something with the records in geoms here...
      -- and insert that data somewhere else
   END LOOP;
END;
$do$
So, if I run this in more than one client, called from Java, will the scope of the geoms temp table cause problems? If so, any ideas for a solution to this in PostGres would be helpful.
Thanks
One subtle trap you will run into, though (which is why I am not quite ready to declare it "safe"), is that while the scope is per session, people often forget to drop the tables, so they only get dropped on disconnect.
If you don't need the temp table after your function, you are much better off dropping it explicitly when you are done with it. That prevents issues that arise from trying to run the function twice in the same transaction. (In your case the table is dropped at commit anyway, since you used ON COMMIT DROP.)
Temp tables in PostgreSQL (or Postgres; "PostGres" doesn't exist) are local to the session in which they are created, so no other session (client) can see the temp tables of another session. Both the schema and the data are invisible to others. Your code is safe.
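A quick illustration of that session scope; text stands in for the geometry type here so the demo runs without PostGIS:
-- session 1
CREATE TEMP TABLE geoms (id serial, geom text);
INSERT INTO geoms (geom) VALUES ('demo');

-- session 2, a different connection
SELECT * FROM geoms;
-- ERROR:  relation "geoms" does not exist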

Importing of a PostgreSQL DB on Ubuntu Server

So I am really new to PostgreSQL, and I have been stuck on this for the last 3 days. I need to import a PostgreSQL database. The database lies in a folder and consists of various .tab files.
I have the PostgreSQL 9.1 release. The database is imported by the user postgres, to which I granted all possible privileges. I also granted all possible privileges on the language.
The command that is run by a script is:
CREATE OR REPLACE FUNCTION unique_toplevel_corpus_name() RETURNS TRIGGER AS $$
DECLARE
   count INTEGER := 0;
BEGIN
   IF NEW.top_level = 'y' THEN
      PERFORM * FROM corpus WHERE corpus.name = NEW.name AND corpus.t$
      GET DIAGNOSTICS count = ROW_COUNT;
      IF count != 0 THEN
         RAISE EXCEPTION 'conflicting top-level corpus found: %'$
      END IF;
   END IF;
   RETURN NEW;
END;
$$ language plpgsql;
The code is not mine. It is provided, and it works fine on my local machine (Ubuntu 12.04), importing the database without any problems; it has problems only on the Ubuntu server (also 12.04).
This is the exception I keep getting:
#valian:/opt/annis/annis-service-2.2.1/bin$ ./annis-admin.sh import /home/FilesForAnnis/Korpora/relANNIS/relANNIS/
19:38:45.755 CorpusAdministration INFO: Importing corpus from: /home/FilesForAnnis/Korpora/relANNIS/relANNIS/
19:38:45.758 SpringAnnisAdministrationDao INFO: creating staging area
19:38:45.788 SpringAnnisAdministrationDao INFO: bulk-loading data
19:38:46.236 SpringAnnisAdministrationDao INFO: computing top-level corpus
Exception in thread "main" org.springframework.jdbc.BadSqlGrammarException: StatementCallback; bad SQL grammar [-- find the top-level corpus
ALTER TABLE _corpus ADD top_level boolean;
CREATE TRIGGER unique_toplevel_corpus_name BEFORE UPDATE ON _corpus FOR EACH ROW EXECUTE PROCEDURE unique_toplevel_corpus_name();
UPDATE _corpus SET top_level = 'n';
UPDATE _corpus SET top_level = 'y' WHERE pre = (SELECT min(pre) FROM _corpus);
-- add the toplevel corpus to the node table
CREATE INDEX tmp_corpus_toplevel ON _corpus (id) WHERE top_level = 'y';
ALTER TABLE _node ADD toplevel_corpus bigint;
UPDATE _node SET toplevel_corpus = _corpus.id FROM _corpus WHERE _corpus.top_level = 'y';
DROP INDEX tmp_corpus_toplevel;
]; nested exception is org.postgresql.util.PSQLException: ERROR: function unique_toplevel_corpus_name() does not exist
at org.springframework.jdbc.support.SQLStateSQLExceptionTranslator.doTranslate(SQLStateSQLExceptionTranslator.java:98)
at org.springframework.jdbc.support.AbstractFallbackSQLExceptionTranslator.translate(AbstractFallbackSQLExceptionTranslator.java:72)
at org.springframework.jdbc.support.AbstractFallbackSQLExceptionTranslator.translate(AbstractFallbackSQLExceptionTranslator.java:80)
at org.springframework.jdbc.support.AbstractFallbackSQLExceptionTranslator.translate(AbstractFallbackSQLExceptionTranslator.java:80)
at org.springframework.jdbc.core.JdbcTemplate.execute(JdbcTemplate.java:406)
at org.springframework.jdbc.core.JdbcTemplate.execute(JdbcTemplate.java:427)
at annis.administration.SpringAnnisAdministrationDao.executeSqlFromScript(SpringAnnisAdministrationDao.java:617)
at annis.administration.SpringAnnisAdministrationDao.executeSqlFromScript(SpringAnnisAdministrationDao.java:608)
at annis.administration.SpringAnnisAdministrationDao.computeTopLevelCorpus(SpringAnnisAdministrationDao.java:226)
at annis.administration.CorpusAdministration.importCorpora(CorpusAdministration.java:85)
at annis.administration.CorpusAdministration$$FastClassByCGLIB$$ce864c53.invoke(<generated>)
at net.sf.cglib.proxy.MethodProxy.invoke(MethodProxy.java:191)
at org.springframework.aop.framework.Cglib2AopProxy$CglibMethodInvocation.invokeJoinpoint(Cglib2AopProxy.java:688)
at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:150)
at org.springframework.transaction.interceptor.TransactionInterceptor.invoke(TransactionInterceptor.java:110)
at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:172)
at org.springframework.aop.framework.Cglib2AopProxy$DynamicAdvisedInterceptor.intercept(Cglib2AopProxy.java:621)
at annis.administration.CorpusAdministration$$EnhancerByCGLIB$$b6899e79.importCorpora(<generated>)
at annis.administration.AnnisAdminRunner.doImport(AnnisAdminRunner.java:176)
at annis.administration.AnnisAdminRunner.run(AnnisAdminRunner.java:74)
at annis.administration.AnnisAdminRunner.main(AnnisAdminRunner.java:49)
Caused by: org.postgresql.util.PSQLException: ERROR: function unique_toplevel_corpus_name() does not exist
at org.postgresql.core.v3.QueryExecutorImpl.receiveErrorResponse(QueryExecutorImpl.java:2062)
at org.postgresql.core.v3.QueryExecutorImpl.processResults(QueryExecutorImpl.java:1795)
at org.postgresql.core.v3.QueryExecutorImpl.execute(QueryExecutorImpl.java:257)
at org.postgresql.jdbc2.AbstractJdbc2Statement.execute(AbstractJdbc2Statement.java:479)
at org.postgresql.jdbc2.AbstractJdbc2Statement.executeWithFlags(AbstractJdbc2Statement.java:353)
at org.postgresql.jdbc2.AbstractJdbc2Statement.execute(AbstractJdbc2Statement.java:345)
at org.springframework.jdbc.core.JdbcTemplate$1ExecuteStatementCallback.doInStatement(JdbcTemplate.java:420)
at org.springframework.jdbc.core.JdbcTemplate.execute(JdbcTemplate.java:395)
... 16 more
I also updated the PostgreSQL JDBC driver. The only difference between my local machine, where everything works perfectly, and the server is that the local machine has Java 1.7 rather than 1.6, but I do not think that can be the problem.
I have no idea what else I could try.
This is not a problem with Ubuntu; it is a problem with your database import. The import error says that the trigger function is missing.
I know the code isn't yours, so you will need to follow up with whoever wrote it as to why the function isn't there. To the extent that you can replace the trigger with a partial unique index, though, your performance will be a lot better, so when you talk to them you may want to suggest replacing the trigger with this:
CREATE UNIQUE INDEX ON corpus(name) WHERE corpus.t;
This will perform better and it is easier to maintain.
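A hypothetical illustration of how such a partial unique index behaves, assuming corpus ends up with a boolean top_level column as the import script suggests (column list reduced for the example):
CREATE UNIQUE INDEX corpus_toplevel_name ON corpus (name) WHERE top_level;

INSERT INTO corpus (name, top_level) VALUES ('demo', false);  -- fine, not top-level
INSERT INTO corpus (name, top_level) VALUES ('demo', true);   -- fine, first top-level 'demo'
INSERT INTO corpus (name, top_level) VALUES ('demo', true);
-- ERROR:  duplicate key value violates unique constraint "corpus_toplevel_name"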