I'm getting this error: could not complete because of conflict with concurrent transaction.
And I can't find the other query that is conflicting with this one.
I've tried:
Using the AWS charts/metrics within the online console.
Looking in STL_TR_CONFLICT.
Querying STL_QUERY for queries running at the same time as my failing one.
None of these options helped me understand the issue. I found some queries running at the same time, but they were not using the same tables.
Relaunching the query a few minutes later worked just fine.
I just found I got this issue when the following transactions were running at the same time:
drop / create my_schema.my_table as ...
grant select on all tables in schema my_schema <-- this one failed
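In case it helps anyone hitting the same error: STL_QUERY only records queries (SELECT/DML), not DDL or utility commands, so a concurrent DROP/CREATE or GRANT will never show up there. A minimal sketch for listing everything that ran during the failure window, assuming you know the approximate timestamps (the ones below are placeholders):

select xid, pid, type, starttime, endtime, trim(text) as statement
from svl_statementtext
where starttime < '2020-01-01 12:05:00'
  and endtime > '2020-01-01 12:00:00'
order by xid, starttime, sequence;

The type column distinguishes QUERY, DDL, and UTILITY statements, which makes a conflicting DDL/GRANT pair easier to spot.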
Related
We are using AWS Babelfish on a Postgres-enabled DB, so underneath it is essentially a Postgres database. There are frequent errors related to "could not open relation". The same stored procedure executes fine sometimes and fails sporadically, if not more frequently, with this error. I found a mailing-list post (https://www.postgresql.org/message-id/12791.1310599941%40sss.pgh.pa.us) which discusses the error, but it doesn't point to the exact issue, and other articles haven't helped me understand the pattern of the error. As possible fixes I have schema-qualified all the tables with .dbo and dropped all the temp tables at the end of the SP as well (a sketch of those two changes follows the error output below).
Level 16, State 1, Line 4
could not open relation with OID 54505
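For reference, a hedged sketch of the two mitigations mentioned above (schema-qualifying permanent tables with dbo and dropping temp tables at the end of the procedure); the procedure and table names are made up for illustration:

CREATE PROCEDURE dbo.refresh_summary
AS
BEGIN
    -- permanent tables schema-qualified with dbo
    SELECT id, amount
    INTO #stage
    FROM dbo.orders;

    INSERT INTO dbo.order_summary (id, total)
    SELECT id, amount
    FROM #stage;

    -- temp table dropped explicitly at the end of the SP
    DROP TABLE #stage;
END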
I need to get the list of queries run against one particular database on an Azure PostgreSQL single server, for a particular time interval.
I tried the query below in pgAdmin 4:
Select * from pg_stat_activity
but pg_stat_activity only gives the currently running queries. I also checked the server logs and diagnostics on the Azure portal, but I can't see the executed queries there, and I also tried explain analyze.
If anyone knows how to do this on an Azure PostgreSQL server, any help would be appreciated.
You have several options in Postgres to capture queries.
You've already found that pg_stat_activity shows active, running queries.
The pg_stat_statements extension will show aggregated information for query fingerprints.
The log_statement configuration will log queries that were sent to the database into your postgres.log. Use the log_line_prefix configuration to capture more details into the logs.
The pgaudit extension will capture queries that were executed on the database, with nicely formatted output for easy parsing or to send to log aggregation tools.
Here's an Azure doc that describes how to set these in the UI and how to view the resulting logs: https://learn.microsoft.com/en-us/azure/postgresql/single-server/concepts-server-logs.
In your case, setting log_statement to 'all' should be a good start.
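For example, a minimal sketch of checking the current logging setting and using pg_stat_statements, assuming the extension has been enabled through the server parameters (column names differ by version: total_time/mean_time on Postgres 11 and earlier, total_exec_time/mean_exec_time on 13 and later):

-- confirm what statement logging is currently set to
SHOW log_statement;

-- requires pg_stat_statements to be enabled on the server first
CREATE EXTENSION IF NOT EXISTS pg_stat_statements;

-- top statements by cumulative execution time
SELECT query, calls, total_time, mean_time
FROM pg_stat_statements
ORDER BY total_time DESC
LIMIT 20;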
Having a problem with one of our external tables in Redshift.
We have over 300 tables in AWS Glue which have been added to our Redshift cluster as an external schema called events. Most of the tables in events can be queried fine, but when querying one of the tables, called item_loaded, we get the following error:
select * from events.item_loaded limit 1;
ERROR: XX000: Failed to incorporate external table "events"."item_loaded" into local catalog.
LOCATION: localize_external_table, /home/ec2-user/padb/src/external_catalog/external_catalog_api.cpp:358
What's weird is that it is in the catalog:
select *
from SVV_EXTERNAL_TABLES
where tablename = 'item_loaded';
-[ RECORD 1 ]-----+------------------------------------------
schemaname | events
tablename | item_loaded
location | s3://my_bucket/item_loaded
input_format | org.apache.hadoop.hive.ql.io.parquet.MapredParquetInputFormat
output_format | org.apache.hadoop.hive.ql.io.parquet.MapredParquetOutputFormat
serialization_lib | org.apache.hadoop.hive.ql.io.parquet.serde.ParquetHiveSerDe
serde_parameters | {"serialization.format":"1"}
compressed | 0
parameters | {"EXTERNAL":"TRUE","parquet.compress":"SNAPPY","transient_lastDdlTime":"1504792238"}
AFAICT, this table is configured the exact same way as the other tables in the same schema which are working fine. I've tried recreating a new external schema pointing to the same AWS Glue database but the same issue occurs.
What else could I potentially check? Is there anything that could occur which would cause a table to be removed from the catalog?
As per the forum post about the same issue:
The external table has a number of columns which exceeds the Redshift limits:
1,600 columns per table for a local Redshift table
1,598 columns for a Redshift Spectrum external table
You can verify the number of columns of the external table by querying SVV_EXTERNAL_COLUMNS.
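For example, a minimal sketch using the schema and table names from the question:

select count(*) as column_count
from svv_external_columns
where schemaname = 'events'
  and tablename = 'item_loaded';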
I very recently faced this problem. In addition to the above solution, there are a few more threads as well:
https://forums.aws.amazon.com/message.jspa?messageID=845538&tstart=0 (Solution by Joe)
https://forums.aws.amazon.com/thread.jspa?messageID=780552 (Says the fix is incorporated)
I was facing this issue with an IAM role that had AWS Glue Full Access. I deliberately added AthenaFullAccess as well and restarted the Redshift cluster, which resolved the issue. I'm not sure what caused the issue or how it got resolved in this case.
It can also happen if there are typos in the config. For example, the following fails:
SECRET_ARN ' arn:aws:secretsmanager:us-east-1:123:secret:stage/data/redshift-rds'
and the following works:
SECRET_ARN 'arn:aws:secretsmanager:us-east-1:123:secret:stage/data/redshift-rds'
Note the additional space at the beginning of the ARN in the failing example.
I was trying to run flyway:migrate on my project's Postgres database. I made a change to a table manually, and because of that the Flyway schema migration is failing, which is blocking the next schema migration from executing.
table : foo
required_change : ALTER TABLE foo ALTER COLUMN id DROP NOT NULL
current_schema_version : 2
next_schema_version : 3
Error:
[ERROR] com.googlecode.flyway.core.api.FlywayException: Migration of schema "public" to version 3 failed! Changes successfully rolled back.
How can I skip the failing migration and make flyway:migrate execute the next migration defined?
It might be simplest to undo the manual change so that Flyway can run successfully. For example, if you dropped the column, add it back in and then run the Flyway script that drops it.
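For the specific change in the question, a minimal sketch of undoing the manual ALTER so the version 3 migration can apply it itself (this assumes the column has no NULLs yet, otherwise SET NOT NULL will fail):

-- put the NOT NULL constraint back; the pending Flyway migration will drop it again
ALTER TABLE foo ALTER COLUMN id SET NOT NULL;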
So I found one possible solution to this problem, which goes as follows:
(1). In a MySQL database there is a table schema_version, which maintains the migration version number, its status, and other related information. For example:
mysql> desc schema_version;
version_rank
installed_rank
version
description
type
script
checksum
installed_by
installed_on
execution_time
success
For a failed migration the success field stores 0. To override that and let the next flyway:migrate run, you can manually set the value to 1 (this makes sure you don't lose the data stored in the table you created manually when the migration failed).
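A minimal sketch of that manual update, following the MySQL example above (version '3' is the failing migration from the question; adjust the literal if your database stores success as a boolean, and note that newer Flyway releases name this table flyway_schema_history):

-- mark the failed attempt as successful so the next flyway:migrate can proceed
UPDATE schema_version
SET success = 1
WHERE version = '3'
  AND success = 0;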
(2). Temporarily (while testing), adding the following to your pom.xml when running flyway:migrate also helps:
<validateOnMigrate>false</validateOnMigrate>
I have been working in SAS connecting to DB2 via ODBC for a while now and this PROC SQL step generally works:
proc sql;
connect to odbc(dsn=DSQ user="UserID" password="Password");
execute (set current degree = 'ANY' ) by ODBC;
create table tempTable as select * from connection to odbc (
select *
from schemaName.tableName
);
quit;
However, occasionally I get the following error, and when I get this error I won't be able to run another ODBC query for quite some time, because every time I try to run the query I get the same error:
ERROR: CLI error trying to establish connection: [IBM][CLI Driver] SQL1042C An unexpected system
error occurred. SQLSTATE=58004
After some time the error either resolves itself or I unknowingly do something that fixes it. This is very frustrating, and since I never know how long the problem is going to persist, I would like a more robust solution. I have checked the db2diag.log file and here is the part that describes this error:
2015-06-03-08.17.34.345000-300 I60888H446 LEVEL: Error
PID : 4452 TID : 7804 PROC : sas.exe
INSTANCE: DB2 NODE : 000
HOSTNAME:
EDUID : 7804
FUNCTION: DB2 Common, Cryptography, cryptDynamicLoadGSKitCrypto, probe:998
MESSAGE : ECF=0x90000007=-1879048185=ECF_UNKNOWN
Unknown
DATA #1 : unsigned integer, 4 bytes
60
DATA #2 : String, 11 bytes
gsk8sys.dll
I was trying to find an example to put in this post, so I ran the snippet of code below in SAS to see if the error would come back; however, it seemed to resolve the error, because after running this piece of code I ran the initial code and it worked. Here is the code that seemed to resolve the issue:
proc sql;
connect to odbc(dsn=DSQ user="UserID" password="Password");
execute (set current degree = 'ANY' ) by ODBC;
create table column_names as select * from connection to odbc (
select * from sysibm.syscolumns
);
quit;
I have tried googling this issue, but there isn't much help on this particular error. Is there any reason that the second SAS code would have fixed the issue I was having? Is there any way to fix this problem so it won't come back in the future?
Please note, when the error occurs, I am still able to run queries via ODBC in Microsoft Access without any problems. It appears this is an issue with just a particular instance.
System Setup:
Windows 7 64-bit
SAS 9.3 (32)
DB2 v10.5.300.125
Thanks in advance for your help!
Update:
On a few occasions I was able to go into Task Manager and kill some processes that were still running, and then this issue would resolve itself; however, today when this problem occurred, those processes weren't there. Any thoughts on this would be greatly appreciated.
The symptom may depend on the installation order of DB2 Connect and the program (sas.exe). If DB2 Connect is installed first, the error should not occur. So it is suggested to remove and re-install both products, installing DB2 Connect first.
Hope this helps.