INSERT INTO PostgreSQL DB sink table with Flink SQL - postgresql

How do I write a full Flink SQL command with upsert semantics for Postgres, if Flink SQL doesn't support this syntax directly? I get the following error:
2023-02-13 21:22:47,152 ERROR org.apache.flink.connector.jdbc.internal.JdbcOutputFormat [] - JDBC executeBatch error, retry times = 2
java.sql.BatchUpdateException: Batch entry 0 INSERT INTO flink.flink(id, full_name) VALUES (1, 'test') ON CONFLICT (id) DO UPDATE SET id=EXCLUDED.id, full_name=EXCLUDED.full_name was aborted: ERROR: there is no unique or exclusion constraint matching the ON CONFLICT specification Call getNextException to see other errors in the batch.
at org.postgresql.jdbc.BatchResultHandler.handleCompletion(BatchResultHandler.java:201) ~[flink-sql-connector-postgres-cdc-2.3.0.jar:2.3.0]
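The error means the physical Postgres table has no primary key or unique constraint matching the ON CONFLICT (id) clause the connector generates. The Flink JDBC connector writes in upsert mode when the sink DDL declares a primary key, but Postgres must also carry a matching constraint. A minimal sketch, assuming the flink.flink table and columns from the error above and a hypothetical JDBC URL:

-- Postgres side: ON CONFLICT (id) needs a matching unique or primary key constraint
ALTER TABLE flink.flink ADD PRIMARY KEY (id);

-- Flink SQL side: PRIMARY KEY ... NOT ENFORCED switches the JDBC sink into upsert mode
CREATE TABLE pg_sink (
  id INT,
  full_name STRING,
  PRIMARY KEY (id) NOT ENFORCED
) WITH (
  'connector' = 'jdbc',
  'url' = 'jdbc:postgresql://localhost:5432/mydb',  -- hypothetical URL
  'table-name' = 'flink.flink',
  'username' = '...',
  'password' = '...'
);

-- A plain INSERT INTO the sink then upserts on id
INSERT INTO pg_sink SELECT id, full_name FROM source_table;  -- source_table is hypothetical

With the constraint in place, the generated INSERT ... ON CONFLICT (id) DO UPDATE statement from the log should succeed.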

Related

insert null value for auto incremented primary key

I'm using AWS Glue and I'm trying to insert (into Postgres databases) a row with a null value in the primary key. I get this error:
An error occurred while calling o204.pyWriteDynamicFrame. ERROR: null value in column "abc" violates not-null constraint.
The issue is that the primary key has a sequence default,
nextval('abc'::regclass).
Is there a parameter in Glue to avoid this error? Thanks
Conf :
AWS GLUE,
Python job,
Postgres databases
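There is no Glue write parameter that skips a column; the usual workaround is to drop the primary-key column from the DynamicFrame before writing (e.g. with the DropFields transform), so the generated INSERT omits it and Postgres fills it from the sequence. The SQL-level difference, with hypothetical table and column names:

-- Fails: an explicit NULL overrides the nextval('abc'::regclass) default
INSERT INTO my_table (abc, payload) VALUES (NULL, 'x');

-- Works: omitting the column lets the sequence default apply
INSERT INTO my_table (payload) VALUES ('x');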

Spark sql query execution fails with org.apache.parquet.io.ParquetDecodingException

I am executing a simple CREATE TABLE query in Spark SQL using spark-submit (cluster mode) and receiving org.apache.parquet.io.ParquetDecodingException. I could find only a few details on this issue online; one suggestion was to add the config spark.sql.parquet.writeLegacyFormat=true. The issue still persists after adding this setting.
Below is the query:
spark.sql("""
CREATE TABLE TestTable
STORED AS PARQUET
AS
SELECT Col1,
Col2,
Col3
FROM Stable""")
Error Description :
Caused by: org.apache.parquet.io.ParquetDecodingException: Can not read value at 1 in block 0 in file maprfs:///path/disputer/1545555-r-00000.snappy.parquet
at org.apache.parquet.hadoop.InternalParquetRecordReader.nextKeyValue(InternalParquetRecordReader.java:461)
at org.apache.parquet.hadoop.ParquetRecordReader.nextKeyValue(ParquetRecordReader.java:219)
at org.apache.spark.sql.execution.datasources.RecordReaderIterator.hasNext(RecordReaderIterator.scala:39)
at org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.hasNext(FileScanRDD.scala:109)
at org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.nextIterator(FileScanRDD.scala:186)
... 13 more
Caused by: java.lang.ClassCastException: org.apache.spark.sql.catalyst.expressions.MutableLong cannot be cast to org.apache.spark.sql.catalyst.expressions.MutableInt
Spark Configuration file :
spark.driver.memory=10G
spark.executor.memory=23G
spark.executor.cores=3
spark.executor.instances=100
spark.dynamicAllocation.enabled=false
spark.yarn.preserve.staging.files=false
spark.yarn.executor.extraJavaOptions=-XX:MaxDirectMemorySize=6144m
spark.sql.shuffle.partitions=1000
spark.shuffle.service=true
spark.yarn.maxAppAttempts=1
spark.broadcastTimeout=36000
spark.debug.maxToStringFields=100
spark.hadoop.mapreduce.fileoutputcommitter.algorithm.version=2
spark.network.timeout=600s
spark.sql.parquet.enableVectorizedReader=false
spark.scheduler.listenerbus.eventqueue.capacity=200000
spark.driver.memoryOverhead=1024
spark.yarn.executor.memoryOverhead=5120
spark.executor.extraJavaOptions=-XX:+UseG1GC
spark.driver.extraJavaOptions=-XX:+UseG1GC
This issue was occurring because spark.sql.parquet.enableVectorizedReader had been disabled; setting spark.sql.parquet.enableVectorizedReader=true resolves the issue.
For more details, visit https://jaceklaskowski.gitbooks.io/mastering-spark-sql/spark-sql-vectorized-parquet-reader.html
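For reference, the reader can also be re-enabled per session instead of in the config file; a minimal Spark SQL sketch:

-- Re-enable the vectorized Parquet reader for the current session
SET spark.sql.parquet.enableVectorizedReader=true;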

Why is liquibase deleting databasechangelog rows and trying to create a renamed database table?

I am using postgres 10.5 and liquibase 3.6.2 on a Mac.
I nuke & re-create my database, run liquibase update, and it works.
But a second liquibase update fails with an exception that the pkey already exists.
After the first liquibase update, the databasechangelog table contains 97 entries. After the second, it contains 10, and the times and deployment ids for those differ from what they were after the first update!
Table foo was created in an early change.
Later it was renamed to bar, but the pkey is still foo.pkey.
liquibase update should not be trying to re-create foo, but it does, and it fails because foo.pkey already exists.
A) In general, how can I get liquibase to output more info about what it's doing? I tried both of the commands:
liquibase --logLevel=debug --logFile=`pwd`/foo.log update
liquibase --logLevel debug --logFile `pwd`/foo.log update
Both seem to behave the same: foo.log isn't created and there's no additional output in the terminal.
B) How can I stop liquibase from trying to re-make this and nuking my databasechangelog?
I tried to make a small example that fails, but this seems to work... Others here are using it with postgres 9.5.10 with no problem...
All I see in the terminal is:
Starting Liquibase at Wed, 14 Nov 2018 13:06:44 PST (version 3.6.2 built at 2018-07-03 11:28:09)
Unexpected error running Liquibase: ERROR: relation "cant_change_pkey" already exists [Failed SQL: CREATE TABLE nuss.cant_change (message_id UUID NOT NULL, origin VARCHAR(4), type VARCHAR(12) NOT NULL, CONSTRAINT CANT_CHANGE_PKEY PRIMARY KEY (message_id), UNIQUE (message_id))]
liquibase.exception.MigrationFailedException: Migration failed for change set db/changelog/changelog-new1.xml::first-one::rstrauss:
Reason: liquibase.exception.DatabaseException: ERROR: relation "cant_change_pkey" already exists [Failed SQL: CREATE TABLE nuss.cant_change (message_id UUID NOT NULL, origin VARCHAR(4), type VARCHAR(12) NOT NULL, CONSTRAINT CANT_CHANGE_PKEY PRIMARY KEY (message_id), UNIQUE (message_id))]
at liquibase.changelog.ChangeSet.execute(ChangeSet.java:637)
at liquibase.changelog.visitor.UpdateVisitor.visit(UpdateVisitor.java:53)
at liquibase.changelog.ChangeLogIterator.run(ChangeLogIterator.java:78)
at liquibase.Liquibase.update(Liquibase.java:202)
at liquibase.Liquibase.update(Liquibase.java:179)
at liquibase.integration.commandline.Main.doMigration(Main.java:1205)
at liquibase.integration.commandline.Main.run(Main.java:191)
at liquibase.integration.commandline.Main.main(Main.java:129)
Caused by: liquibase.exception.DatabaseException: ERROR: relation "cant_change_pkey" already exists [Failed SQL: CREATE TABLE nuss.cant_change (message_id UUID NOT NULL, origin VARCHAR(4), type VARCHAR(12) NOT NULL, CONSTRAINT CANT_CHANGE_PKEY PRIMARY KEY (message_id), UNIQUE (message_id))]
at liquibase.executor.jvm.JdbcExecutor$ExecuteStatementCallback.doInStatement(JdbcExecutor.java:356)
at liquibase.executor.jvm.JdbcExecutor.execute(JdbcExecutor.java:57)
at liquibase.executor.jvm.JdbcExecutor.execute(JdbcExecutor.java:125)
at liquibase.database.AbstractJdbcDatabase.execute(AbstractJdbcDatabase.java:1229)
at liquibase.database.AbstractJdbcDatabase.executeStatements(AbstractJdbcDatabase.java:1211)
at liquibase.changelog.ChangeSet.execute(ChangeSet.java:600)
... 7 common frames omitted
Caused by: org.postgresql.util.PSQLException: ERROR: relation "cant_change_pkey" already exists
at org.postgresql.core.v3.QueryExecutorImpl.receiveErrorResponse(QueryExecutorImpl.java:2476)
at org.postgresql.core.v3.QueryExecutorImpl.processResults(QueryExecutorImpl.java:2189)
at org.postgresql.core.v3.QueryExecutorImpl.execute(QueryExecutorImpl.java:300)
at org.postgresql.jdbc.PgStatement.executeInternal(PgStatement.java:428)
at org.postgresql.jdbc.PgStatement.execute(PgStatement.java:354)
at org.postgresql.jdbc.PgStatement.executeWithFlags(PgStatement.java:301)
at org.postgresql.jdbc.PgStatement.executeCachedSql(PgStatement.java:287)
at org.postgresql.jdbc.PgStatement.executeWithFlags(PgStatement.java:264)
at org.postgresql.jdbc.PgStatement.execute(PgStatement.java:260)
at liquibase.executor.jvm.JdbcExecutor$ExecuteStatementCallback.doInStatement(JdbcExecutor.java:352)
... 12 common frames omitted
For more information, please use the --logLevel flag
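Until the logging is sorted out, one way to see what each run records is to query the tracking table directly between updates. A diagnostic sketch against the standard DATABASECHANGELOG columns:

-- Run this after the first and the second update and diff the output to see
-- which changesets were dropped or re-recorded with a new deployment id
SELECT id, author, filename, dateexecuted, deployment_id
FROM databasechangelog
ORDER BY orderexecuted;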

Keycloak not able to start - version 3.4.1.CR1

Keycloak is unable to start. The following error is thrown in the logs:
ERROR [org.keycloak.connections.jpa.updater.liquibase.conn.DefaultLiquibaseConnectionProvider] (ServerService Thread Pool -- 53) Change Set META-INF/jpa-changelog-1.5.0.xml::1.5.0::bburke@redhat.com failed.
Error: Column "USER_SETUP_ALLOWED" not found; SQL statement: ALTER TABLE PUBLIC.AUTHENTICATION_EXECUTION ALTER COLUMN USER_SETUP_ALLOWED SET DEFAULT NULL [42122-193] [Failed SQL: ALTER TABLE PUBLIC.AUTHENTICATION_EXECUTION ALTER COLUMN USER_SETUP_ALLOWED SET DEFAULT NULL]: liquibase.exception.DatabaseException: Column "USER_SETUP_ALLOWED" not found; SQL statement: ALTER TABLE PUBLIC.AUTHENTICATION_EXECUTION ALTER COLUMN USER_SETUP_ALLOWED SET DEFAULT NULL [42122-193] [Failed SQL: ALTER TABLE PUBLIC.AUTHENTICATION_EXECUTION ALTER COLUMN USER_SETUP_ALLOWED SET DEFAULT NULL]
at liquibase.executor.jvm.JdbcExecutor$ExecuteStatementCallback.doInStatement(JdbcExecutor.java:316)
at liquibase.executor.jvm.JdbcExecutor.execute(JdbcExecutor.java:55)
at liquibase.executor.jvm.JdbcExecutor.execute(JdbcExecutor.java:122)
at liquibase.database.AbstractJdbcDatabase.execute(AbstractJdbcDatabase.java:1247)
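The [42122-193] code suggests an embedded H2 database (42122 is H2's column-not-found state). As a diagnostic sketch, assuming H2's INFORMATION_SCHEMA, you can check which columns the migration actually sees before the failing ALTER:

-- List the columns Liquibase sees on the table the changelog is altering
SELECT column_name FROM information_schema.columns
WHERE table_name = 'AUTHENTICATION_EXECUTION';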

Flyway Issue with DB2

I am using Flyway for a deployment, and the tables built as a result of Flyway are all fine.
The issue I have is with the schema_version table: I am unable to query an individual column in it; I am only able to perform a SELECT *.
The error message I am getting is:
10:35:49 [SELECT - 0 row(s), 0.000 secs] 1) [Error Code: -206, SQL State: 42703] DB2 SQL Error: SQLCODE=-206, SQLSTATE=42703, SQLERRMC=SCRIPT, DRIVER=4.13.127. 2) [Error Code: -727, SQL State: 56098] DB2 SQL Error: SQLCODE=-727, SQLSTATE=56098, SQLERRMC=2;-206;42703;SCRIPT, DRIVER=4.13.127
Try enclosing the column name in double quotes, using the lower-case name Flyway actually creates (DB2 folds unquoted identifiers to upper case, which is why SQLERRMC=SCRIPT reports the column as missing):
select "script" from flyway.schema_version
This might help.
The Flyway schema history table is designed to use lower-case names.
You can change the table name using the config property below:
flyway.table=SCHEMA_VERSION
For more details, see
https://flywaydb.org/documentation/faq#case-sensitive
Try writing your query like this:
SELECT "version", "installed_on" FROM "schema_version";