Table output_values_center1 (and some others) inherits from output_values. Periodically I truncate table output_values_center1 and load new data (in one transaction). During that time a user can request some data and gets an error message. Why does this happen at all (the select query requests only one record), and how can I avoid the problem?
2010-05-19 14:43:17 UTC ERROR: deadlock detected
2010-05-19 14:43:17 UTC DETAIL: Process 25972 waits for AccessShareLock on relation 2495092 of database 16385; blocked by process 26102.
Process 26102 waits for AccessExclusiveLock on relation 2494865 of database 16385; blocked by process 25972.
Process 25972: SELECT * FROM "output_values" WHERE ("output_values".id = 122312) LIMIT 1
Process 26102: TRUNCATE TABLE "output_values_center1"
"TRUNCATE acquires an ACCESS EXCLUSIVE lock on each table it operates on, which blocks all other concurrent operations on the table. If concurrent access to a table is required, then the DELETE command should be used instead."
It's not obvious from the TRUNCATE documentation quoted above why querying the parent table is affected by an operation on its descendant. The following excerpt from the documentation for the SELECT command clarifies it:
"If ONLY is specified, only that table is scanned. If ONLY is not specified, the table and any descendant tables are scanned."
I'd try something like this for truncating (a runnable Python sketch of the idea, using psycopg2; adjust the connection settings to your setup):
import time
import psycopg2
from psycopg2 import errors

NOWAIT_TIMES = 100   # attempts with NOWAIT before falling back to a blocking lock
SLEEP_SECS = 0.1     # pause between attempts

conn = psycopg2.connect("dbname=mydb")  # assumed connection string, change as needed

attempt = 0
while True:
    try:
        with conn:  # one transaction: lock parent, truncate child, commit
            with conn.cursor() as cur:
                if attempt < NOWAIT_TIMES:
                    cur.execute("LOCK TABLE output_values IN ACCESS EXCLUSIVE MODE NOWAIT")
                else:
                    # we tried NOWAIT_TIMES times and failed, so wait this time
                    cur.execute("LOCK TABLE output_values IN ACCESS EXCLUSIVE MODE")
                cur.execute("TRUNCATE TABLE output_values_center1")
        break  # committed successfully
    except errors.LockNotAvailable:
        attempt += 1  # the failed transaction was rolled back automatically
        time.sleep(SLEEP_SECS)
This way you'll be safe from deadlocks, as the parent table will be exclusively locked. A user will just block for a fraction of a second while the truncate runs, and will automatically resume after the commit.
But be prepared for this to take several seconds on a busy server: whenever the table is in use, the lock attempt will fail and be retried.
I am getting the below error in Postgres while executing insert and delete queries. I have around 50 insert and 50 delete statements. When they are executed I get the error:
SQL Error: ERROR: not all tokens processed
The error is not consistent across runs. For example:
My 20th delete statement fails.
The next time the same queries are executed, the 25th delete statement fails.
And when those statements are executed alone, there is no failure.
I am not sure if it is a database load issue or an infrastructure-related issue.
Any suggestion would be helpful.
Below is the query:
WITH del_table_1 AS
(
    delete from table_1 where to_date('01-'||col1,'DD-mm-YYYY') < current_date-1
    RETURNING *
)
update control_table
set deleted_count = cnt,
    status = 'Completed',
    update_user_id = 'User',
    update_datetime = current_date
from (select 'Table1' as table_name, count(*) as cnt from del_table_1) aa
where control_table.table_name = aa.table_name
  and control_table.table_name = 'Table1'
  and control_table.status = 'Pending';
I'm new to Rust and I was trying to play with the postgres crate. I was able to create a table by hardcoding the table name, but the code always panics when I try to pass the table name in from a variable.
rustc --version: 1.36.0
cargo --version: 1.36.0
Cargo.toml dependency: postgres = "0.15"
use postgres::{Connection, TlsMode};

fn main() {
    let conn = Connection::connect("postgresql://postgres:postgres@localhost/db1",
                                   TlsMode::None).unwrap();
    let tname = "message";
    conn.execute("CREATE TABLE IF NOT EXISTS $1 (
        id SERIAL PRIMARY KEY,
        title VARCHAR NOT NULL,
        body VARCHAR,
    )", &[&tname]).ok().expect("Table message creation failed");
}
thread 'main' panicked at 'Table message creation failed', src/libcore/option.rs:1036:5
stack backtrace:
0: std::sys::unix::backtrace::tracing::imp::unwind_backtrace
at src/libstd/sys/unix/backtrace/tracing/gcc_s.rs:39
1: std::sys_common::backtrace::_print
at src/libstd/sys_common/backtrace.rs:71
2: std::panicking::default_hook::{{closure}}
at src/libstd/sys_common/backtrace.rs:59
at src/libstd/panicking.rs:197
3: std::panicking::default_hook
at src/libstd/panicking.rs:211
4: std::panicking::rust_panic_with_hook
at src/libstd/panicking.rs:474
5: std::panicking::continue_panic_fmt
at src/libstd/panicking.rs:381
6: rust_begin_unwind
at src/libstd/panicking.rs:308
7: core::panicking::panic_fmt
at src/libcore/panicking.rs:85
8: core::option::expect_failed
at src/libcore/option.rs:1036
9: core::option::Option<T>::expect
at /rustc/a53f9df32fbb0b5f4382caaad8f1a46f36ea887c/src/libcore/option.rs:314
10: rustdb::main
at ./main.rs:27
11: std::rt::lang_start::{{closure}}
at /rustc/a53f9df32fbb0b5f4382caaad8f1a46f36ea887c/src/libstd/rt.rs:64
12: std::panicking::try::do_call
at src/libstd/rt.rs:49
at src/libstd/panicking.rs:293
13: __rust_maybe_catch_panic
at src/libpanic_unwind/lib.rs:85
14: std::rt::lang_start_internal
at src/libstd/panicking.rs:272
at src/libstd/panic.rs:394
at src/libstd/rt.rs:48
15: std::rt::lang_start
at /rustc/a53f9df32fbb0b5f4382caaad8f1a46f36ea887c/src/libstd/rt.rs:64
16: main
17: __libc_start_main
18: _start
You cannot use placeholders (e.g. $1) to substitute a table name into a query.
One of the functions of placeholders is to allow a query to be prepared once, then executed multiple times. This saves on the overhead of planning the query each time you want to use it, which can be substantial. However it would not be possible to plan the query if the database did not even know which table was being queried.
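The restriction is visible in plain SQL as well: PREPARE plans a statement once, so column values can be parameters but identifiers cannot. A sketch using the message table from the question (the get_message name is just for illustration):
PREPARE get_message (int) AS
    SELECT title, body FROM message WHERE id = $1;  -- $1 as a value is fine
EXECUTE get_message(42);
-- PREPARE bad AS SELECT * FROM $1;  -- syntax error: the planner must know the table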
If you need to dynamically insert the table name at runtime, you will need to do that in Rust before passing the SQL to the database:
let sql = format!("CREATE TABLE IF NOT EXISTS {} (
    id SERIAL PRIMARY KEY,
    title VARCHAR NOT NULL,
    body VARCHAR
)", tname);
If the table name is being passed in from user input, don't forget to guard against SQL injection by validating it beforehand.
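If you would rather let the database do the quoting, PostgreSQL's format() with the %I specifier escapes an identifier safely. A server-side PL/pgSQL sketch of the same idea, using the message table name again:
DO $$
BEGIN
    -- %I quotes the identifier, so a hostile table name cannot break out of it
    EXECUTE format('CREATE TABLE IF NOT EXISTS %I (
                        id SERIAL PRIMARY KEY,
                        title VARCHAR NOT NULL,
                        body VARCHAR
                    )', 'message');
END
$$;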
Also note that the panic is due to the use of .ok().expect(...).
ok() takes the result of executing the SQL and converts it into an Option. If the result was an error, it is discarded, so you never get to see the error message which would probably have helped you diagnose the problem. Result implements expect directly, with the advantage that instead of discarding the error, it displays it as part of the panic. So you would be better off with:
conn.execute(&sql, &[]).expect("Failed creating table");
However, if there is a realistic chance that the SQL statement is going to fail you would be better off to check the result and handle it more gracefully than crashing the program.
I have a query which contains two parts. The first part calls a function which creates a temporary table; the second part selects data from this table.
SELECT create_data_slice(15962, NULL, ARRAY[[15726]]);
SELECT
AK."15962_15726" as AK_NAME
FROM
t15962 AK
GROUP BY
AK."15962_15726;"
If I execute this query in PgAdmin, it returns the right result with data. But if I execute it in Qt:
QSqlDatabase db = store->get_db();
QSqlQuery query(db);
query.exec(sql);
it executes only the first part (creating the temporary table), but does not execute the second part and does not return data.
You can use a transaction like this:
QSqlDatabase::database().transaction();
QSqlQuery query;
query.exec("SELECT create_data_slice(15962, NULL, ARRAY[[15726]]);");
if (query.next())
{
    int result = query.value(0).toInt(); // return value of create_data_slice (not used further)
    query.exec("SELECT AK.\"15962_15726\" as AK_NAME FROM t15962 AK GROUP BY AK.\"15962_15726\";");
    while (query.next())
    {
        qDebug() << query.value(0).toString(); // or whatever you want to do with the data
    }
}
QSqlDatabase::database().commit();
I wrote a T-SQL MERGE query to merge staging data into a data warehouse (you can find it at the bottom).
If I uncomment the OUTPUT clause, I get the error mentioned in the title.
However, if I do not include it, everything works perfectly fine and the MERGE succeeds.
I know that there are some known issues connected with the MERGE statement; however, most of them depend on the type of merge being performed.
I checked the following answer: https://dba.stackexchange.com/questions/140880/why-does-this-merge-statement-cause-the-session-to-be-killed, but in my execution plan I cannot find an index insert followed by an index merge.
Rather, what I see is the following execution plan.
The code was developed on a database attached to a SQL Server 2012 (SP4) instance.
I would really appreciate a good explanation of this problem, ideally referencing steps from my execution plan.
Thank you.
declare @changes table (chgType varchar(50), Id varchar(18))
begin try
set xact_abort on
begin tran
;with TargetUserLogHsh as (select
hsh =hashbytes('md5',concat(coalesce([AccountName],'')
,coalesce([TaxNumber],'')))
,LastLoginCast = coalesce(CONVERT(datetime,LastLogin,103),getdate())
,* from
dw.table1)
,SourceUserLogHsh as (select
hsh =hashbytes('md5',concat(coalesce([AccountName],'')
,coalesce([TaxNumber],'')))
,LastLoginCast = coalesce(CONVERT(datetime,LastLogin,103),getdate())
,* from
sta.table1)
merge TargetUserLogHsh target
using SourceUserLogHsh source
on target.ContactId = source.ContactId and target.Lastlogincast >= source.LastLoginCast
when not matched then insert (
[AccountName]
,[TaxNumber]
,[LastLogin]
)
values (
source.[AccountName]
,source.[TaxNumber]
,source.[LastLogin]
)
when matched and target.lastlogincast = source.lastlogincast
and target.hsh != source.hsh then
update
set
[AccountName] = source.[AccountName]
,[TaxNumber] = source.[TaxNumber]
,[LastLogin] = source.[LastLogin]
output $action, inserted.contactid into @changes
;
commit tran
end try
begin catch
if @@TRANCOUNT > 0 rollback tran
select ERROR_MESSAGE()
end catch
I'm writing an SQLAlchemy import/export script using the serializer's dumps and loads.
The export works, but I have problems with the import, mainly due to foreign key issues.
I'm using sorted_tables to get the list of tables sorted based on dependencies, which makes sure I won't have cross-table foreign key issues, but is there something similar for handling internal foreign keys (a table pointing to itself)?
I'm basically thinking about two possible solutions:
Find a way to sort the rows based on the dependencies
Disable all constraints -> insert the data -> enable all constraints again
but I'm not sure how to do this properly...
A table example:
class Employee(Base):
    __tablename__ = "t_employee"
    id = sa.Column(Identifier, sa.Sequence('%s_id_seq' % __tablename__), primary_key=True, nullable=False)
    first_name = sa.Column(sa.String(30))
    last_name = sa.Column(sa.String(30))
    manager_id = sa.Column(Identifier, sa.ForeignKey("t_employee.id", ondelete='SET NULL'))
and here is my script:
def export_db(tar_file):
    print "Exporting Database. This may take some time. Please wait ..."
    Base.metadata.create_all(engine)
    tables = Base.metadata.tables
    with tarfile.open(tar_file, "w:bz2") as tar:
        for tbl in tables:
            print "Exporting table %s ..." % tbl
            table_dump = dumps(engine.execute(tables[tbl].select()).fetchall())
            ti = tarfile.TarInfo(tbl)
            ti.size = len(table_dump)
            tar.addfile(ti, StringIO(table_dump))
    print "Database exported! Exiting!"
    exit(0)
def import_db(tar_file):
    print "Importing to Database. This may take some time. Please wait ..."
    print "Dropping all tables ..."
    Base.metadata.drop_all(engine)
    print "Creating all tables ..."
    Base.metadata.create_all(engine)
    tables = Base.metadata.sorted_tables
    with tarfile.open(tar_file, "r:bz2") as tar:
        for tbl in tables:
            try:
                entry = tar.getmember(tbl.name)
                print "Importing table %s ..." % entry.name
                fileobj = tar.extractfile(entry)
                table_dump = loads(fileobj.read(), Base.metadata, db)
                for data in table_dump:
                    db.execute(tbl.insert(), strip_unicode(dict(**data)))
            except:
                traceback.print_exc(file=sys.stdout)
                exit(0)
    db.commit()
    print "Database imported! Exiting!"
    exit(0)
For mass dumps, the standard technique is to disable constraints, do the import, then re-enable them. You'll also get much faster performance on the inserts.
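In PostgreSQL one way to do that is per table, since foreign keys are enforced by internal triggers. A sketch for the t_employee table above (note that DISABLE TRIGGER ALL also disables the system triggers, which requires superuser rights):
ALTER TABLE t_employee DISABLE TRIGGER ALL;
-- bulk-insert the serialized rows here, self-references included
ALTER TABLE t_employee ENABLE TRIGGER ALL;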