Our pg_dump failed with "backup: cache lookup failed for type 174104".
2016-01-06 03:08:46.572 EST,"postgres","aabbcc",13840,"[local]",568cf5bd.3610,3,"SELECT",2016-01-06 03:08:45 PST,2/24331,0,ERROR,XX000,"cache lookup failed for type 174104",,,,,,
"SELECT proretset, prosrc, probin,
pg_catalog.pg_get_function_arguments(oid) AS funcargs,
pg_catalog.pg_get_function_identity_arguments(oid) AS funciargs,
pg_catalog.pg_get_function_result(oid) AS funcresult, proiswindow,
provolatile, proisstrict, prosecdef, proleakproof, proconfig, procost,
prorows, (SELECT lanname FROM pg_catalog.pg_language WHERE oid = prolang) AS lanname
FROM pg_catalog.pg_proc WHERE oid = '174103'::pg_catalog.oid",,,"pg_dump"
I tried the following solutions, none of which worked:
1) Bounced the DB: no change.
2) Ran VACUUM FULL: no change.
3) For what it's worth, pg_basebackup itself worked.
Any idea is appreciated :)
It is a local RAID file system.
REINDEXing all the tables did not seem to help either.
I tried pg_dump on each table one by one but did not spot anything weird.
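For context, here is a minimal sketch of the catalog checks behind the error (the OIDs are the ones from the log above; 174103 is the function being dumped, 174104 the type it references; adjust to match your error):
SELECT oid, typname FROM pg_catalog.pg_type WHERE oid = 174104;  -- expect 0 rows if the type row is missing
SELECT oid, proname, prorettype FROM pg_catalog.pg_proc WHERE oid = 174103;  -- the function pg_dump was reading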
I'm running pg_dump to create a script to automate the creation of a system, like this:
pg_dump --dbname=postgresql://postgres:ohdsi@127.0.0.1:5432/OHDSI -t webapi.* > webapi.sql
This creates a SQL script, but it is not really a plain SQL script, as it contains COPY ... FROM stdin blocks like the one shown below.
When the script is run as plain SQL, it fails with the error shown below.
Is there a way to get pg_dump to create a script that is standard SQL and can be executed as an ordinary SQL script?
Code sample from the SQL generated by pg_dump:
COPY webapi.cohort_version (asset_id, comment, description, version, asset_json, archived, created_by_id, created_date) FROM stdin;
\.
--
-- Data for Name: concept_of_interest; Type: TABLE DATA; Schema: webapi; Owner: ohdsi_admin_user
--
COPY webapi.concept_of_interest (id, concept_id, concept_of_interest_id) FROM stdin;
1 4329847 4185932
2 4329847 77670
3 192671 4247120
4 192671 201340
Error seen when running the script generated by pg_dump:
--
-- Name: penelope_laertes_uni_pivot id; Type: DEFAULT; Schema: webapi; Owner: ohdsi_admin_user
--
ALTER TABLE ONLY webapi.penelope_laertes_uni_pivot ALTER COLUMN id SET DEFAULT nextval('webapi.penelope_laertes_uni_pivot_id_seq'::regclass)
--
-- Data for Name: achilles_cache; Type: TABLE DATA; Schema: webapi; Owner: ohdsi_admin_user
--
COPY webapi.achilles_cache (id, source_id, cache_name, cache) FROM stdin
Error executing: COPY webapi.achilles_cache (id, source_id, cache_name, cache) FROM stdin
. Cause: org.postgresql.util.PSQLException: ERROR: COPY from stdin failed: COPY commands are only supported using the CopyManager API.
Where: COPY achilles_cache, line 1
Exception in thread "main" java.lang.RuntimeException: org.apache.ibatis.jdbc.RuntimeSqlException: Error executing: COPY webapi.achilles_cache (id, source_id, cache_name, cache) FROM stdin
. Cause: org.postgresql.util.PSQLException: ERROR: COPY from stdin failed: COPY commands are only supported using the CopyManager API.
Where: COPY achilles_cache, line 1
at org.yaorma.database.Database.executeSqlScript(Database.java:344)
at org.yaorma.database.Database.executeSqlScript(Database.java:332)
at org.nachc.tools.fhirtoomop.tools.build.postgres.build.A04_CreateAtlasWebApiTables.exec(A04_CreateAtlasWebApiTables.java:29)
at org.nachc.tools.fhirtoomop.tools.build.postgres.build.A04_CreateAtlasWebApiTables.main(A04_CreateAtlasWebApiTables.java:19)
Caused by: org.apache.ibatis.jdbc.RuntimeSqlException: Error executing: COPY webapi.achilles_cache (id, source_id, cache_name, cache) FROM stdin
. Cause: org.postgresql.util.PSQLException: ERROR: COPY from stdin failed: COPY commands are only supported using the CopyManager API.
Where: COPY achilles_cache, line 1
at org.apache.ibatis.jdbc.ScriptRunner.executeLineByLine(ScriptRunner.java:109)
at org.apache.ibatis.jdbc.ScriptRunner.runScript(ScriptRunner.java:71)
at org.yaorma.database.Database.executeSqlScript(Database.java:342)
... 3 more
Caused by: org.postgresql.util.PSQLException: ERROR: COPY from stdin failed: COPY commands are only supported using the CopyManager API.
Where: COPY achilles_cache, line 1
at org.postgresql.core.v3.QueryExecutorImpl.receiveErrorResponse(QueryExecutorImpl.java:2675)
at org.postgresql.core.v3.QueryExecutorImpl.processResults(QueryExecutorImpl.java:2365)
at org.postgresql.core.v3.QueryExecutorImpl.execute(QueryExecutorImpl.java:355)
at org.postgresql.jdbc.PgStatement.executeInternal(PgStatement.java:490)
at org.postgresql.jdbc.PgStatement.execute(PgStatement.java:408)
at org.postgresql.jdbc.PgStatement.executeWithFlags(PgStatement.java:329)
at org.postgresql.jdbc.PgStatement.executeCachedSql(PgStatement.java:315)
at org.postgresql.jdbc.PgStatement.executeWithFlags(PgStatement.java:291)
at org.postgresql.jdbc.PgStatement.execute(PgStatement.java:286)
at org.apache.ibatis.jdbc.ScriptRunner.executeStatement(ScriptRunner.java:190)
at org.apache.ibatis.jdbc.ScriptRunner.handleLine(ScriptRunner.java:165)
at org.apache.ibatis.jdbc.ScriptRunner.executeLineByLine(ScriptRunner.java:102)
... 5 more
--- EDIT ------------------------------------
The --inserts method in the accepted answer gave me exactly what I needed.
I ended up doing this:
pg_dump --inserts --dbname=postgresql://postgres:ohdsi@127.0.0.1:5432/OHDSI -t webapi.* > webapi.sql
The client tool you are using to restore the dump cannot deal with the data from the (nonstandard) COPY command being mixed into the script. You need psql to restore such a dump.
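For example, a minimal restore invocation (target_db is a placeholder for your destination database):
psql -d target_db -f webapi.sql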
You can use the --inserts option of pg_dump to create a dump that contains INSERT statements rather than COPY. That will be slower to restore, but will work with more client tools.
However, your wish to get a standard SQL script is hopeless. PostgreSQL extends the standard in many ways, so a database cannot be dumped with a standard SQL script. Note, for example, that indexes are not defined by the SQL standard. If you are looking to transfer a PostgreSQL dump to a different RDBMS, you will be disappointed. That is more difficult.
I'm trying to run pg_dump and it's failing due to an orphaned sequence.
pg_dump -U db --format=custom --compress=0 db
pg_dump: error: query to get data of sequence "non_existing_table_id_seq" returned 0 rows (expected 1)
https://wiki.postgresql.org/wiki/Fixing_Sequences
The above wiki page has some snippets which can be used to fix this issue; the last snippet does work to display the orphaned sequences.
select ns.nspname as schema_name, seq.relname as seq_name
from pg_class as seq
join pg_namespace ns on (seq.relnamespace=ns.oid)
where seq.relkind = 'S'
and not exists (select * from pg_depend where objid=seq.oid and deptype='a')
order by seq.relname;
schema_name | seq_name
-------------+--------------------------------
public | non_existing_table_id_seq
public | another_non_existing_table_id_seq
(2 rows)
The command which should fix this issue doesn't run, because column d.adsrc does not exist.
It seems to have been removed in PostgreSQL 12.
https://stackoverflow.com/a/58798028/1891184 says I can replace d.adsrc with pg_get_expr(d.adbin, d.adrelid). That runs, however the issue still remains.
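For reference, this is the shape of that substitution on PostgreSQL 12+ (a minimal illustration only; the wiki query joins more catalogs):
-- pg_attrdef.adsrc is gone in 12+; recover the default expression instead with:
SELECT pg_get_expr(d.adbin, d.adrelid) AS default_expr FROM pg_attrdef d;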
Other than this, the database is working fine.
How can I either fix or remove the offending sequences in order to let pg_dump work?
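In case it helps, this is the kind of fix I am considering, though I am not sure it is safe (a sketch only, assuming the orphaned sequences hold no state anything still needs):
DROP SEQUENCE public.non_existing_table_id_seq;
DROP SEQUENCE public.another_non_existing_table_id_seq;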
I am saving an enumerated table, splayed into my HDB, after which we load the HDB directory.
There was a corruption which caused:
'2022.01.01T00:00:01.000 part
(.Q.L) error
https://code.kx.com/q/ref/dotq/#ql-load/
This happens during load, and I tried corrupting a partition in several ways to replicate it:
Corrupted .d file
Corrupted splayed column(s)
Removed entire partition
Most of the above cases are handled by our exception handling, e.g.:
".\2022.01.1\tbl. OS reports: No such file or directory"
but I could not replicate the case where the .Q.l 'part error happened, nor find out from my logs why it happened.
Can someone please suggest what kind of corruption could have caused the 'part error during load?
One possible cause would be a partition folder without correct read permissions:
$ mkdir -p badHDB/2001.01.01/tab1
$ mkdir -p badHDB/20011.01.01/tab2
$ mkdir -p badHDB/2002.01.01
$ chmod 000 badHDB/2002.01.01
Running it we see the error:
q badHDB
'part
[2] (.Q.L)
[0] \l badHDB
You could write a small function to try to narrow down the issues:
// https://code.kx.com/q/ref/system/#capture-stderr-output
q)tmp:first system"mktemp"
q)tab:flip `part`date`osError`files`error!flip {d:1_string x;{y:string y;(y;"D"$y),{r:system x;$[0~"J"$last r;(0b;-1_r;"");(1b;();first r)]} "ls ",x,"/",y," > ",tmp," 2>&1;echo $? >> ",tmp,";cat ",tmp}[d] each key x} `:badHDB
This would result in:
part          date       osError files   error
----------------------------------------------------------------------------------------------------
"2001.01.01"  2001.01.01 0       ,"tab1" ""
"20011.01.01"            0       ,"tab2" ""
"2002.01.01"  2002.01.01 1       ()      "ls: cannot open directory 'badHDB/2002.01.01': Permission denied"
For a larger HDB, filter down to partitions with issues:
select from tab where or[null date;osError]
I've used pg_dump --no-privileges --format custom --compress=0 some_database > my-dump.pgdump to dump a database, but I'm running into issues when I try to restore it.
Specifically, it appears to be loading function definitions before table definitions:
$ pg_restore ./my-dump.pgdump
…
create function my_function() returns …
language sql as $$
select …
from some_table
where …
$$;
… later in the dump …
create table some_table ( … );
…
Which causes an error when I try to restore the dump:
pg_restore: [archiver (db)] Error while PROCESSING TOC:
pg_restore: [archiver (db)] Error from TOC entry 4863; 0 16735 TABLE DATA some_table some_database
pg_restore: [archiver (db)] COPY failed for table "some_table": ERROR: relation "some_table" does not exist
LINE 3: from some_table
^
QUERY:
select …
from some_table
where …
CONTEXT: SQL function "my_function" during inlining
What's going on here? How can I trick pg_dump / pg_restore into doing things in the correct order?
Check your dump file for commands which mess with search_path, for example:
SELECT pg_catalog.set_config('search_path', '', false);
I encountered the same kind of error as you (relation xxx does not exist ... during inlining) in a legacy project I inherited, even though it's running PostgreSQL 9.4.x.
I traced it to the above command.
The solution for me was to remove this command from the dump file.
After I did this I was able to restore the database without errors.
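If your dump is a plain-text script, a one-liner like this would strip that command (a sketch, assuming GNU sed and a dump file named my-dump.sql; back the file up first):
sed -i "/set_config('search_path', '', false)/d" my-dump.sql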
Note: the OP is using the custom format. There is no editing the binary file that is emitted.
In my experience, pg_dump using the custom format (-Fc) doesn't set check_function_bodies = false. But since it adds random functions at the top of the dump file (instead of putting all routines at the end), this causes pg_restore to barf.
I was able to workaround this issue by setting PGOPTIONS:
export PGOPTIONS="-c check_function_bodies=false"
pg_restore ...
That is strange. Ever since commit ef88199f611e625b496ff92aa17a447d254b9796 in 2003, pg_dump and pg_restore have emitted
SET check_function_bodies = false;
This setting makes sure that an error like you describe won't happen, because PostgreSQL won't check the validity of the function bodies.
Are you using an ancient PostgreSQL version or are you doing anything else that could mess with that?
If you run pg_restore on your dump (without specifying a destination database), does it emit the line?
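For example, since pg_restore writes the plain SQL form to standard output when no -d option is given:
pg_restore ./my-dump.pgdump | head -n 20
The SET check_function_bodies = false line should appear near the top.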
The following works in PostgreSQL 8.4:
insert into credentials values('demo', pgp_sym_encrypt('password', 'longpassword'));
When I try it in version 9.1 I get this:
ERROR:  function pgp_sym_encrypt(unknown, unknown) does not exist
LINE 1: insert into credentials values('demo', pgp_sym_encrypt('pass...
                                               ^
HINT:  No function matches the given name and argument types. You might need to add explicit type casts.

*** Error ***

ERROR: function pgp_sym_encrypt(unknown, unknown) does not exist
SQL state: 42883
Hint: No function matches the given name and argument types. You might need to add explicit type casts.
Character: 40
If I try some explicit casts like this
insert into credentials values('demo', pgp_sym_encrypt(cast('password' as text), cast('longpassword' as text)))
I get a slightly different error message:
ERROR: function pgp_sym_encrypt(text, text) does not exist
I have pgcrypto installed. Does anyone have pgp_sym_encrypt() working in PostgreSQL 9.1?
One explanation could be that the module was installed into a schema that is not in your search_path, or into the wrong database.
Diagnose your problem with this query and report back the output:
SELECT n.nspname, p.proname, pg_catalog.pg_get_function_arguments(p.oid) as params
FROM pg_catalog.pg_proc p
JOIN pg_catalog.pg_namespace n ON n.oid = p.pronamespace
WHERE p.proname ~~* '%pgp_sym_encrypt%'
AND pg_catalog.pg_function_is_visible(p.oid);
Finds functions in all schemas in your database. Similar to the psql meta-command
\df *pgp_sym_encrypt*
Make sure you install the extension in the desired database and schema.
sudo -i -u postgres
psql $database
CREATE EXTENSION pgcrypto;
OK, problem solved.
I was creating the pgcrypto extension as the first operation in the script. Then I dropped and added the VGDB database. That's why pgcrypto was there immediately after creating it, but didn't exist when running the sql later in the script or when I opened pgadmin.
This script is meant for setting up new databases and if I had tried it on a new database the create extension would have failed right away.
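In other words, the script did something like this (a sketch of the failure mode, reconstructed from the above):
CREATE EXTENSION pgcrypto;   -- created in the original VGDB
DROP DATABASE "VGDB";        -- the extension is dropped along with the database
CREATE DATABASE "VGDB";      -- fresh database: pgcrypto is no longer installed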
My bad. Thanks for the help, Erwin.
Just mention the schema where pgcrypto is installed, like this:
@ColumnTransformer(forColumn = "TEST",
    read = "public.pgp_sym_decrypt(TEST, 'password')",
    write = "public.pgp_sym_encrypt(?, 'password')")
@Column(name = "TEST", columnDefinition = "bytea", nullable = false)
private String test;
I ran my (python) script again and the CREATE EXTENSION ran without error. The script also executes this command
psql -d VGDB -U postgres -c "select * from pg_available_extensions order by name"
which includes the following in the result set:
pgcrypto | 1.0 | 1.0 | cryptographic functions
So psql believes that it has installed pgcrypto.
Later in the same script when I execute
psql -d VGDB -U postgres -f sql/Create.Credentials.table.sql
where sql/Create.Credentials.table.sql includes this
insert into credentials values('demo', pgp_sym_encrypt('password', 'longpassword'));
I get this
psql:sql/Create.Credentials.table.sql:31: ERROR: function pgp_sym_encrypt(unknown, unknown) does not exist
LINE 1: insert into credentials values('demo', pgp_sym_encrypt('pass...
^
HINT: No function matches the given name and argument types. You might need to add explicit type casts.
When I open pgAdmin it does not show pgcrypto in either the VGDB or postgres databases, even though the query above, executed via psql, shows that pgcrypto is installed.
Could there be an issue with needing to commit after using psql to execute the "create extension ..." command? None of my other DDL or SQL statements require a commit when they get executed with psql.
It's starting to look like psql is just flaky. Is there another way to call "create extension pgcrypto" - e.g. with Python's database support classes - or does that have to be run through psql?
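For what it's worth, this is the sort of non-psql route I have in mind (a sketch, assuming psycopg2; the connection parameters are placeholders):
import psycopg2

# Run CREATE EXTENSION over a regular client connection instead of psql.
conn = psycopg2.connect(dbname="VGDB", user="postgres")
conn.autocommit = True  # DDL is committed immediately; no explicit commit needed
with conn.cursor() as cur:
    cur.execute("CREATE EXTENSION pgcrypto;")
conn.close()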