How to alter external table in Redshift Spectrum? - amazon-redshift

I want to add a partition of data to my external table, but I'm receiving the error: ALTER EXTERNAL TABLE cannot run inside a transaction block.
I removed the BEGIN/END transaction, but the same error still persists. I read on some forums that adding an isolation level might solve the problem, but I wanted to get others' opinions in case someone has experienced this before.

A standard statement like this works for me. If you are getting an error from this as well, please share your exact statement.
ALTER TABLE spectrum_schema.spect_test
ADD PARTITION (column_part='2019-07-23')
LOCATION 's3://bucketname/folder1/column_part=2019-07-23/';
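If the statement is being issued from application code, this error usually means the client library is implicitly wrapping it in a transaction. A minimal sketch of running it outside a transaction with psycopg2 autocommit (the connection parameters below are placeholders, not taken from the question):

import psycopg2

# Hypothetical Redshift connection parameters; replace with your own endpoint.
conn = psycopg2.connect(
    host="my-cluster.abc123.us-east-1.redshift.amazonaws.com",
    port=5439, dbname="dev", user="awsuser", password="secret",
)
conn.autocommit = True  # ALTER on an external table must not run inside a transaction block

with conn.cursor() as cur:
    cur.execute("""
        ALTER TABLE spectrum_schema.spect_test
        ADD PARTITION (column_part='2019-07-23')
        LOCATION 's3://bucketname/folder1/column_part=2019-07-23/';
    """)
conn.close()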

Related

How does one report the row or a specific element (i.e. rowid) on which an error occurred in Postgres/plpgsql?

From what I've seen so far on several websites, there is no way to do that. I'm asking here hoping I missed some crucial information. What I'm trying to do is migrate some tables of varying sizes, and if the procedure/function hits an error for any reason, I want to insert the rowid into a log table. Thanks in advance.
Alper

`ERROR: cannot execute TRUNCATE TABLE in a read-only transaction` in Heroku PostgreSQL

I am getting the ERROR: cannot execute TRUNCATE TABLE in a read-only transaction in Heroku PostgreSQL. How could I fix it?
I am trying to TRUNCATE a table.
I am using the Heroku Postgres.
I have tried to figure out in the UI how I could change my permissions or something similar so that I'm allowed to run more than read-only transactions, but with no success.
This is currently possible; you have to set the transaction to "READ WRITE" when creating a dataclip. Here is an example:
BEGIN;
SET TRANSACTION READ WRITE;
DELETE FROM mytable WHERE id > 2130;
COMMIT;
The feature you're looking at (Heroku Dataclips docs here) is intentionally made read-only. It's a reporting tool, not a database management tool. The express purpose is to allow surfacing data to a wider group of people associated with a project without the risk of someone accidentally (or otherwise) deleting or modifying data improperly. There is no way to make Dataclips read-write.
If you want full control to delete/modify data, you'll need to use an appropriate interface: psql, or pgAdmin if you prefer a GUI.
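For example, a minimal psycopg2 sketch that connects directly to the database (it assumes the DATABASE_URL config var Heroku sets for the app, and "mytable" is a placeholder name) and truncates the table outside of Dataclips:

import os
import psycopg2

# DATABASE_URL is the connection string Heroku provides for the Postgres add-on.
conn = psycopg2.connect(os.environ["DATABASE_URL"], sslmode="require")
with conn, conn.cursor() as cur:            # commits on success, rolls back on error
    cur.execute("TRUNCATE TABLE mytable;")  # placeholder table name
conn.close()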
I had the same problem, but the error was solved by adding the following:
begin; set transaction read write;
(Without COMMIT;)
I don't get errors anymore, but I can't see my new table in the Schema Explorer. Do you know why?

Deal with Postgresql Error -canceling statement due to conflict with recovery- in psycopg2

I'm creating a reporting engine that makes a couple of long queries over a standby server and processes the results with pandas. Everything works fine, but sometimes I have issues with the execution of those queries using a psycopg2 cursor: the query is cancelled with the following message:
ERROR: cancelling statement due to conflict with recovery
Detail: User query might have needed to see row versions that must be removed
I was investigating this issue
PostgreSQL ERROR: canceling statement due to conflict with recovery
https://www.postgresql.org/docs/9.0/static/hot-standby.html#HOT-STANDBY-CONFLICT
but all solutions suggest fixing the issue by making modifications to the server's configuration. I can't make those modifications (we won the last football game against the IT guys :) ), so I want to know how I can deal with this situation from the perspective of a developer. Can I resolve this issue using Python code? My temporary solution is simple: catch the exception and retry all the failed queries. Maybe it could be done better (I hope so).
Thanks in advance
There is nothing you can do to avoid that error without changing the PostgreSQL configuration (from PostgreSQL 9.1 on, you could e.g. set hot_standby_feedback to on).
You are dealing with the error in the correct fashion – simply retry the failed transaction.
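A minimal retry sketch along those lines (the connection string, query, and retry limits are placeholders; it simply looks for "conflict with recovery" in the error text and re-raises anything else):

import time
import psycopg2

def run_with_retry(dsn, sql, max_attempts=5, wait_seconds=5):
    """Retry a read-only query that may be cancelled by a recovery conflict."""
    for attempt in range(1, max_attempts + 1):
        conn = psycopg2.connect(dsn)
        try:
            with conn.cursor() as cur:
                cur.execute(sql)
                return cur.fetchall()
        except psycopg2.Error as exc:
            # Recovery-conflict cancellations mention "conflict with recovery";
            # anything else (or the last attempt) is re-raised immediately.
            if "conflict with recovery" not in str(exc) or attempt == max_attempts:
                raise
            time.sleep(wait_seconds)
        finally:
            conn.close()

rows = run_with_retry("host=standby dbname=reports user=reporter", "SELECT * FROM big_table")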
The table data on the hot standby slave server is modified while a long-running query is running. A solution (PostgreSQL 9.1+) to make sure the table data is not modified is to suspend replication on the slave and resume it after the query.
select pg_xlog_replay_pause(); -- suspend
select * from foo; -- your query
select pg_xlog_replay_resume(); --resume
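A rough psycopg2 wrapper for that approach (connection string and query are placeholders; pausing replay needs sufficient privileges on the standby, and these functions were renamed to pg_wal_replay_pause()/pg_wal_replay_resume() in PostgreSQL 10):

import psycopg2

conn = psycopg2.connect("host=standby dbname=reports user=reporter")  # placeholder DSN
conn.autocommit = True
with conn.cursor() as cur:
    cur.execute("SELECT pg_xlog_replay_pause();")       # suspend replay on the standby
    try:
        cur.execute("SELECT * FROM foo;")                # the long-running query
        rows = cur.fetchall()
    finally:
        cur.execute("SELECT pg_xlog_replay_resume();")   # always resume replay
conn.close()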
I recently encountered a similar error and was also in the position of not being a dba/devops person with access to the underlying database settings.
My solution was to reduce the time of the query wherever possible. Obviously this requires deep knowledge of your tables and data, but I was able to solve my problem with a combination of a more efficient WHERE filter, a GROUP BY aggregation, and more extensive use of indexes.
By reducing the amount of server-side execution time and data, you reduce the chance of a rollback error occurring.
However, a rollback can still occur during your shortened window, so a comprehensive solution would also make use of some retry logic for when a rollback error occurs.
Update: A colleague implemented said retry logic as well as batching the query to make the data volumes smaller. These three solutions have made the problem go away entirely.
I got the same error. What you CAN do (if the query is simple enough) is divide the data into smaller chunks as a workaround.
I did this within a Python loop that calls the query multiple times with LIMIT and OFFSET parameters, like:
query_chunk = f"""
SELECT *
FROM {database}.{datatable}
LIMIT {chunk_size} OFFSET {i_chunk * chunk_size}
"""
where database and datatable are the names of your sources. The chunk_size has to be chosen individually, and setting it to a value that is not too high is crucial for the query to finish.
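Put together, the loop might look roughly like this (the source names, chunk size, connection string, and the id ordering column are all placeholders; a stable ORDER BY is needed so OFFSET pagination doesn't skip or repeat rows):

import psycopg2

database, datatable = "reports", "big_table"   # placeholder source names
chunk_size = 100_000                            # tune per table; small enough for the query to finish

conn = psycopg2.connect("host=standby dbname=reports user=reporter")  # placeholder DSN
all_rows = []
with conn.cursor() as cur:
    i_chunk = 0
    while True:
        cur.execute(f"""
            SELECT *
            FROM {database}.{datatable}
            ORDER BY id                          -- stable ordering so chunks don't overlap
            LIMIT {chunk_size} OFFSET {i_chunk * chunk_size}
        """)
        rows = cur.fetchall()
        if not rows:
            break
        all_rows.extend(rows)
        i_chunk += 1
conn.close()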

CREATE SCHEMA IF NOT EXISTS raises duplicate key error

To give some context, the command is issued inside a task, and many tasks might issue the same command from multiple workers at the same time.
Each tasks tries to create a postgres schema. I often get the following error:
IntegrityError: (IntegrityError) duplicate key value violates unique constraint "pg_namespace_nspname_index"
DETAIL: Key (nspname)=(9621584361) already exists.
'CREATE SCHEMA IF NOT EXISTS "9621584361"'
Postgres version is PostgreSQL 9.4rc1.
Is it a bug in Postgres?
This is a bit of a wart in the implementation of IF NOT EXISTS for tables and schemas. Basically, they're an upsert attempt, and PostgreSQL doesn't handle the race conditions cleanly. It's safe, but ugly.
If the schema is being concurrently created in another session but isn't yet committed, then it both exists and does not exist, depending on who you are and how you look. It's not possible for other transactions to "see" the new schema in the system catalogs because it's uncommitted, so its entry in pg_namespace is not visible to other transactions. So CREATE SCHEMA / CREATE TABLE tries to create it because, as far as it's concerned, the object doesn't exist.
However, that inserts a row into a table with a unique constraint. Unique constraints must be able to see uncommitted rows in order to function. So the insert blocks (stops) until the first transaction that did the CREATE either commits or rolls back. If it commits, the second transaction aborts, because it tried to insert a row that violates a unique constraint. CREATE SCHEMA isn't smart enough to catch this case and re-try.
To properly fix this PostgreSQL would probably need predicate locking, where it could lock the potential for a row. This might get added as part of the current work going on for implementing UPSERT.
For these particular commands, PostgreSQL could probably do a dirty read of the system catalogs, where it can see uncommitted changes. Then it could wait for the uncommitted transaction to commit or roll back, re-do the dirty read to see if someone else is waiting, and retry. But this would have a race condition where someone else might create the schema between when you do the read to check for it and when you try to create it.
So the IF NOT EXISTS variants would have to:
Check to see if the schema exists; if it does, finish without doing anything.
Attempt to create the table
If creation fails due to a unique constraint error, retry at the start
If table creation succeeds, finish
As far as I know nobody's implemented that, or they tried and it wasn't accepted. There would be possible issues with transaction ID burn rate, etc., with this approach.
I think this is a bug of sorts, but it's a "yeah, we know" kind of bug, not a "we'll get right on fixing that" kind of bug. Feel free to post to pgsql-bugs about it; at the very least the documentation should mention this caveat about IF NOT EXISTS.
I don't recommend doing DDL concurrently like that.
I needed to work around this limitation in an application where schemas are created concurrently. What worked for me was adding
LOCK TABLE pg_catalog.pg_namespace
in the transaction that includes CREATE SCHEMA. It looks like a dirty and unsafe thing to do, but it helped me solve the problem, which occurred only in tests anyway.
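A sketch of that workaround from Python (the connection string and schema name are placeholders; taking the lock may require elevated privileges, and it is held until commit, so concurrent creators simply serialize on it):

import psycopg2

conn = psycopg2.connect("dbname=mydb user=worker")    # placeholder connection
with conn, conn.cursor() as cur:                       # one transaction, committed on exit
    # Serialize concurrent schema creation by locking the catalog first.
    cur.execute("LOCK TABLE pg_catalog.pg_namespace;")
    cur.execute('CREATE SCHEMA IF NOT EXISTS "my_schema";')
conn.close()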

How to get the name of the table that was changed in sqlite?

Does anyone here know how to get the name of the table that was changed, updated, or deleted in SQLite? I found the functions changes() and totalChanges(), but they only return the number of database rows that were changed, inserted, or deleted by the most recently completed SQL statement.
In most RDBMSs you have some kind of journaling that captures all database transactions for data backup and recovery. In Oracle, it's called a redo log. That is where you would go to check which table has changed.
But I'm not familiar enough with SQLite to know if this is available. I did find a thread where a similar question was asked, and it was recommended to implement it yourself. Try reading through this link and see if it satisfies your requirements:
But aside from all of that, I would also recommend that your app use views; that way you protect the model from changes.
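If you do end up implementing it yourself, one common approach is to use triggers that record the table name in an audit table whenever a row is inserted, updated, or deleted. A small sketch with Python's built-in sqlite3 module (the table and column names here are made up for illustration):

import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE items (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE change_log (table_name TEXT, action TEXT, changed_at TEXT DEFAULT CURRENT_TIMESTAMP);

    -- One trigger per action records which table was touched.
    CREATE TRIGGER items_ins AFTER INSERT ON items
        BEGIN INSERT INTO change_log (table_name, action) VALUES ('items', 'insert'); END;
    CREATE TRIGGER items_upd AFTER UPDATE ON items
        BEGIN INSERT INTO change_log (table_name, action) VALUES ('items', 'update'); END;
    CREATE TRIGGER items_del AFTER DELETE ON items
        BEGIN INSERT INTO change_log (table_name, action) VALUES ('items', 'delete'); END;
""")

conn.execute("INSERT INTO items (name) VALUES ('widget')")
print(conn.execute("SELECT table_name, action FROM change_log").fetchall())
# prints [('items', 'insert')]

You would need one set of triggers per table you want to track, but the audit table then tells you exactly which table was changed and how.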