S3ServiceException when using AWS RedshiftBasicEmitter - amazon-redshift

I am using the sample AWS Kinesis/Redshift code from GitHub. I ran the code on an EC2 instance and hit the following exception. Note that emitting from Kinesis to S3 actually succeeded, but emitting from S3 to Redshift failed. Since both emitters in the same program used the same credentials, I am puzzled as to why only one of them failed.
I understand that most people who get the “The AWS Access Key Id you provided does not exist in our records” exception probably have an issue with how their S3 key pair is set up. But that does not seem to be the case here, as emitting to S3 succeeded. If the credentials lacked read access, it should throw an authorization error instead.
Please comment if you have any insight.
Mar 16, 2014 4:32:49 AM com.amazonaws.services.kinesis.connectors.s3.S3Emitter emit
INFO: Successfully emitted 31 records to S3 in s3://mybucket/495362565978733426345566872055061454326385819810529281-49536256597873342638068737503047822713441029589972287489
Mar 16, 2014 4:32:50 AM com.amazonaws.services.kinesis.connectors.redshift.RedshiftBasicEmitter executeStatement
SEVERE: org.postgresql.util.PSQLException: ERROR: S3ServiceException:The AWS Access Key Id you provided does not exist in our records.,Status 403,Error InvalidAccessKeyId,Rid 5TY6Y784TT67,ExtRid qKzklJflmmgnhtttthbce+8T0NIR/sdd4RgffTgfgfdfgdfgfffgghgdse56f,CanRetry 1
Detail:
-----------------------------------------------
error: S3ServiceException:The AWS Access Key Id you provided does not exist in our records.,Status 403,Error InvalidAccessKeyId,Rid 5TY6Y784TT67,ExtRid qKzklJflmmgnhtttthbce+8T0NIR/sdd4RgffTgfgfdfgdfgfffgghgdse56f,CanRetry 1
code: 8001
context: Listing bucket=mfpredshift prefix=49536256597873342637951299872055061454326385819810529281-49536256597873342638068737503047822713441029589972287489
query: 3464108
location: s3_utility.cpp:536
process: padbmaster [pid=8116]
-----------------------------------------------
Mar 16, 2014 4:32:50 AM com.amazonaws.services.kinesis.connectors.redshift.RedshiftBasicEmitter emit
SEVERE: java.io.IOException: org.postgresql.util.PSQLException: ERROR: S3ServiceException:The AWS Access Key Id you provided does not exist in our records.,Status 403,Error InvalidAccessKeyId,Rid 5TY6Y784TT67,ExtRid qKzklJflmmgnhtttthbce+8T0NIR/sdd4RgffTgfgfdfgdfgfffgghgdse56f,CanRetry 1
Detail:
-----------------------------------------------
error: S3ServiceException:The AWS Access Key Id you provided does not exist in our records.,Status 403,Error InvalidAccessKeyId,Rid 5TY6Y784TT67,ExtRid qKzklJflmmgnhtttthbce+8T0NIR/sdd4RgffTgfgfdfgdfgfffgghgdse56f,CanRetry 1
code: 8001
context: Listing bucket=mybucket prefix=495362565978733426345566872055061454326385819810529281-49536256597873342638068737503047822713441029589972287489
query: 3464108
location: s3_utility.cpp:536
process: padbmaster [pid=8116]
-----------------------------------------------

I encountered the same errors. I'm using an IAM role to get credentials. In my case, it was solved by modifying RedshiftBasicEmitter to append ;token=TOKEN to the CREDENTIALS parameter of the COPY statement, as sketched below (eventually I created my own IEmitter).
See http://docs.aws.amazon.com/redshift/latest/dg/r_COPY.html
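For illustration, a minimal sketch of such a COPY statement with the session token appended (the table name, S3 path, and credential values are placeholders, not the connector's actual output):

-- Hypothetical example: COPY with temporary (STS/IAM-role) credentials.
-- The session token must be included, otherwise S3 rejects the temporary
-- access key with InvalidAccessKeyId.
COPY my_redshift_table
FROM 's3://mybucket/my-prefix'
CREDENTIALS 'aws_access_key_id=<ACCESS_KEY>;aws_secret_access_key=<SECRET_KEY>;token=<SESSION_TOKEN>'
DELIMITER '|';

This also explains why only one emitter failed: the S3 emitter signs its requests through the AWS SDK, which attaches the session token automatically, while the COPY statement passes the credentials as a literal string that, without token=..., looks to S3 like an unknown long-lived key pair.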

Related

Keycloak 18 won't create default tables on Docker when configuring a second DB via quarkus.properties

I am implementing a custom user provider SPI for Keycloak 18.0.2 and therefore have an MSSQL database in use alongside the default Keycloak PostgreSQL DB.
The customized Keycloak and the PostgreSQL run as Docker containers.
The problem occurs on my local MacBook M1 (with the same behaviour on an Intel CPU as well). When building and starting the custom Keycloak container, all volumes for both containers are removed, so there is always a fresh DB container.
(Side note: as the SPI was written for WildFly and is broken with 19.x.x, I stepped back to 18.0.2 to get the whole process working again. Afterwards I will update to 19 and adapt the SPI implementations.)
The problem:
Keycloak will create all tables for the default Keycloak DB (PostgreSQL) in the public schema ONLY IF I configure the connection to the MSSQL via persistence.xml. This cannot stay that way in the production setup, as the connection should at least be configurable by the GitLab pipeline.
If I move the connection infos from persistence.xml to quarkus.properties (as described here: https://github.com/keycloak/keycloak-quickstarts/tree/main/user-storage-jpa), the default DB tables can't be created anymore... (a sketch of the quarkus.properties entries in question follows after the logs below).
Logs in the Postgres container:
LOG: database system is ready to accept connections
ERROR: relation "migration_model" does not exist at character 25
STATEMENT: SELECT ID, VERSION FROM MIGRATION_MODEL ORDER BY UPDATE_TIME DESC
ERROR: syntax error at end of input at character 20
STATEMENT: call current_schema
ERROR: current transaction is aborted, commands ignored until end of transaction block
STATEMENT: SELECT COUNT(*) FROM DATABASECHANGELOG
ERROR: syntax error at end of input at character 20
STATEMENT: call current_schema
ERROR: current transaction is aborted, commands ignored until end of transaction block
STATEMENT: SELECT COUNT(*) FROM DATABASECHANGELOGLOCK
ERROR: syntax error at end of input at character 20
STATEMENT: call current_schema
ERROR: current transaction is aborted, commands ignored until end of transaction block
STATEMENT: CREATE TABLE DATABASECHANGELOGLOCK (ID INT NOT NULL, "LOCKED" BOOLEAN NOT NULL, LOCKGRANTED datetime, LOCKEDBY VARCHAR(255), CONSTRAINT PK_DATABASECHANGELOGLOCK PRIMARY KEY (ID))
ERROR: syntax error at end of input at character 20
Keycloak logs:
WARN [liquibase.database.DatabaseFactory] (main) Unknown database: PostgreSQL
WARN [org.keycloak.connections.jpa.updater.liquibase.lock.CustomLockService] (main) Failed to create lock table. Maybe other transaction created in the meantime. Retrying...
ERROR [org.keycloak.quarkus.runtime.cli.ExecutionExceptionHandler] (main) ERROR: Failed to start server in (development) mode
Does using quarkus.properties overwrite some Keycloak defaults? That is, when using it, does Keycloak act differently than it does without a custom Quarkus file?
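For reference, this is roughly what moving the second datasource into conf/quarkus.properties looks like in my setup, following the named-datasource style from the quickstart linked above; the datasource name user-store and all connection values are placeholders:

# Named datasource for the custom user-storage SPI (values are placeholders)
quarkus.datasource.user-store.db-kind=mssql
quarkus.datasource.user-store.username=sa
quarkus.datasource.user-store.password=<password>
quarkus.datasource.user-store.jdbc.url=jdbc:sqlserver://mssql:1433;databaseName=customdb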

Redshift-Postgres RDS federated query: Authentication method 10 not supported

The VPC is configured, and the secret is in Secrets Manager with the correct policy attached to the Redshift cluster.
I created the external schema using:
CREATE EXTERNAL SCHEMA schema_ext
FROM POSTGRES
DATABASE 'db' SCHEMA 'schema'
URI 'rds.some_symbols.eu-west-1.rds.amazonaws.com' PORT 5432
IAM_ROLE 'arn:aws:iam::999999999999:role/redshift-iam-role'
SECRET_ARN 'arn:aws:secretsmanager:eu-west-1:999999999999:secret:some-secret-some-symbols';
But when I try to query a table in this schema, I get an error:
SQL Error [XX000]: ERROR:
-----------------------------------------------
error: authentication method 10 not supported
code: 25300
context:
query: 0
location: pgclient.cpp:535
process: padbmaster [pid=2022]
-----------------------------------------------
The details for this error follow:
org.jkiss.dbeaver.model.sql.DBSQLException: SQL Error [XX000]: ERROR:
-----------------------------------------------
error: authentication method 10 not supported
code: 25300
context:
query: 0
location: pgclient.cpp:535
process: padbmaster [pid=2022]
-----------------------------------------------
at org.jkiss.dbeaver.model.impl.jdbc.exec.JDBCStatementImpl.executeStatement(JDBCStatementImpl.java:133)
at org.jkiss.dbeaver.ui.editors.sql.execute.SQLQueryJob.executeStatement(SQLQueryJob.java:575)
at org.jkiss.dbeaver.ui.editors.sql.execute.SQLQueryJob.lambda$1(SQLQueryJob.java:484)
at org.jkiss.dbeaver.model.exec.DBExecUtils.tryExecuteRecover(DBExecUtils.java:172)
at org.jkiss.dbeaver.ui.editors.sql.execute.SQLQueryJob.executeSingleQuery(SQLQueryJob.java:491)
at org.jkiss.dbeaver.ui.editors.sql.execute.SQLQueryJob.extractData(SQLQueryJob.java:878)
at org.jkiss.dbeaver.ui.editors.sql.SQLEditor$QueryResultsContainer.readData(SQLEditor.java:3526)
at org.jkiss.dbeaver.ui.controls.resultset.ResultSetJobDataRead.lambda$0(ResultSetJobDataRead.java:118)
at org.jkiss.dbeaver.model.exec.DBExecUtils.tryExecuteRecover(DBExecUtils.java:172)
at org.jkiss.dbeaver.ui.controls.resultset.ResultSetJobDataRead.run(ResultSetJobDataRead.java:116)
at org.jkiss.dbeaver.ui.controls.resultset.ResultSetViewer$ResultSetDataPumpJob.run(ResultSetViewer.java:4868)
at org.jkiss.dbeaver.model.runtime.AbstractJob.run(AbstractJob.java:105)
at org.eclipse.core.internal.jobs.Worker.run(Worker.java:63)
Caused by: com.amazon.redshift.util.RedshiftException: ERROR:
-----------------------------------------------
error: authentication method 10 not supported
code: 25300
context:
query: 0
location: pgclient.cpp:535
process: padbmaster [pid=2022]
-----------------------------------------------
at com.amazon.redshift.core.v3.QueryExecutorImpl.receiveErrorResponse(QueryExecutorImpl.java:2601)
at com.amazon.redshift.core.v3.QueryExecutorImpl.processResultsOnThread(QueryExecutorImpl.java:2269)
at com.amazon.redshift.core.v3.QueryExecutorImpl.processResults(QueryExecutorImpl.java:1880)
at com.amazon.redshift.core.v3.QueryExecutorImpl.processResults(QueryExecutorImpl.java:1872)
at com.amazon.redshift.core.v3.QueryExecutorImpl.execute(QueryExecutorImpl.java:368)
at com.amazon.redshift.jdbc.RedshiftStatementImpl.executeInternal(RedshiftStatementImpl.java:514)
at com.amazon.redshift.jdbc.RedshiftStatementImpl.execute(RedshiftStatementImpl.java:435)
at com.amazon.redshift.jdbc.RedshiftStatementImpl.executeWithFlags(RedshiftStatementImpl.java:376)
at com.amazon.redshift.jdbc.RedshiftStatementImpl.executeCachedSql(RedshiftStatementImpl.java:362)
at com.amazon.redshift.jdbc.RedshiftStatementImpl.executeWithFlags(RedshiftStatementImpl.java:339)
at com.amazon.redshift.jdbc.RedshiftStatementImpl.execute(RedshiftStatementImpl.java:329)
at org.jkiss.dbeaver.model.impl.jdbc.exec.JDBCStatementImpl.execute(JDBCStatementImpl.java:329)
at org.jkiss.dbeaver.model.impl.jdbc.exec.JDBCStatementImpl.lambda$0(JDBCStatementImpl.java:131)
at org.jkiss.dbeaver.utils.SecurityManagerUtils.wrapDriverActions(SecurityManagerUtils.java:94)
at org.jkiss.dbeaver.model.impl.jdbc.exec.JDBCStatementImpl.executeStatement(JDBCStatementImpl.java:131)
... 12 more
I tried to run the same query from the query editor in the AWS Console for Redshift. The error is the same:
ERROR: ----------------------------------------------- error: authentication method 10 not supported code: 25300 context: query: 0 location: pgclient.cpp:535 process: padbmaster [pid=12588] -----------------------------------------------
I tried updating the JDBC client drivers. No result.
Maybe the problem is in the custom KMS key used to encrypt the secret, but the colleague who understands how it works says that "it decrypts and only then goes for authorization".
What should I do to avoid this error?
For now, there is no answer in the Redshift documentation. Authentication method 10 is the PostgreSQL wire protocol's SASL (SCRAM-SHA-256) authentication request, which recent PostgreSQL versions use by default for passwords and which Redshift's federated query client does not support.
What helped is an explicit change of the password encryption to MD5 on the RDS side for the Redshift user:
set password_encryption = 'md5';
ALTER ROLE ... PASSWORD ...;
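Spelled out with hypothetical names (redshift_user and the password are placeholders):

-- Run on the RDS PostgreSQL side, as a superuser or the role's owner.
SET password_encryption = 'md5';  -- hash newly supplied passwords with MD5 instead of SCRAM
ALTER ROLE redshift_user WITH PASSWORD 'new_password';  -- re-store the password as an MD5 hash

The password must be re-set after changing password_encryption, since the setting only affects how newly supplied passwords are hashed.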
Also, don't forget to run ANALYZE after dropping an object on the RDS side; a SELECT on fresh objects can otherwise return an error from Redshift.

WSO2 API-Manager with Postgres database is not working properly

I have switched the default H2 database to PostgreSQL for WSO2 API Manager by following this documentation: https://apim.docs.wso2.com/en/latest/install-and-setup/setup/setting-up-databases/changing-default-databases/changing-to-postgresql/
Creating a new API now throws:
"Something went wrong while getting the Revisions!"
On the server I found this error:
ERROR - ApiMgtDAO Failed to get API Revision deployment mapping details for api uuid: a96f7266-c340-49b6-bbe1-cb252b49860e
org.postgresql.util.PSQLException: ERROR: UNION types integer and boolean cannot be matched
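The error itself is standard PostgreSQL behaviour and can be reproduced outside WSO2 with a minimal (hypothetical) query:

-- Corresponding columns of a UNION must have compatible types:
SELECT 1
UNION
SELECT true;
-- ERROR: UNION types integer and boolean cannot be matched

So one branch of the UNION in the ApiMgtDAO revision query yields an integer where the other yields a boolean, which suggests a column type mismatch between the PostgreSQL schema scripts and what the query expects.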
Any help would be greatly appreciated... Thanks...

Can not create backup of Firebird database because of the errors

When backing up a Firebird database (gbak -g -ig), I get the following error:
gbak: writing data for table ORDERS
gbak: ERROR:message length error (encountered 532, expected 528)
gbak: ERROR:gds_$receive failed
gbak:Exiting before completion due to errors
When I run gfix with different parameters (-v -full, -mend, -ignore), I get this message:
Summary of validation errors
Number of index page errors : 540
In firebird.log file I see the lines:
PC (Server) Thu Sep 20 08:37:01 2018
Database: E:\...GDB
Index 2 is corrupt on page 134706 level 1. File: ..\..\..\src\jrd\validation.cpp, line: 1699
in table COMPONENTS (197)
However, the database itself works without problems.
Please help me fix the error and make a backup.
(I need the backup to migrate to a 64-bit server.)
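Not a verified fix, but since gfix reports index page errors, one commonly suggested first step is to rebuild the suspect indices and then retry the backup; the index name below is hypothetical and should be looked up first:

-- Find the indices defined on the affected table (run in isql):
SELECT RDB$INDEX_NAME FROM RDB$INDICES WHERE RDB$RELATION_NAME = 'COMPONENTS';
-- Rebuild a suspect index by deactivating and reactivating it
-- (IDX_COMPONENTS is a placeholder for the real name):
ALTER INDEX IDX_COMPONENTS INACTIVE;
ALTER INDEX IDX_COMPONENTS ACTIVE;

If the gbak message-length error on ORDERS persists after that, the usual fallback is to pump the data into a freshly created database rather than repair in place.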

Unable to shutdown mongodb server - unexpected error: "shutdownServer failed: unauthorized" at src/mongo/shell/assert.js:7

I am trying to shut down one of the MongoDB instances in a 3-node replica set. The config file has auth set to 1. I have an admin account with the userAdminAnyDatabase role, and I logged in to the admin database with that account. However, when I run db.shutdownServer() I get the following error:
db.shutdownServer()
assert failed : unexpected error: "shutdownServer failed: unauthorized"
Error: Printing Stack Trace
at printStackTrace (src/mongo/shell/utils.js:37:15)
at doassert (src/mongo/shell/assert.js:6:5)
at assert (src/mongo/shell/assert.js:14:5)
at DB.shutdownServer (src/mongo/shell/db.js:346:9)
at (shell):1:4
Mon Jun 23 12:52:51.839 assert failed : unexpected error: "shutdownServer failed: unauthorized" at src/mongo/shell/assert.js:7
I created another user that has both the dbAdminAnyDatabase and userAdminAnyDatabase roles, and it gets the same error.
Can someone help me with this error?
If running MongoDB 2.4, a user with the clusterAdmin role is needed to run db.shutdownServer(). A full list of user roles for MongoDB 2.4 is available here: http://docs.mongodb.org/v2.4/reference/user-privileges/
On MongoDB 2.6, you would use the hostManager role instead; see the following page for the 2.6 roles: http://docs.mongodb.org/manual/reference/built-in-roles/
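For example, on 2.6 you could grant the role to your existing admin user from the admin database (adminUser is a placeholder for your account name):

// Run in the mongo shell, authenticated against the admin database.
use admin
db.grantRolesToUser("adminUser", [ { role: "hostManager", db: "admin" } ])
// After re-authenticating, db.shutdownServer() should be authorized.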