kafka-connect JDBC PostgreSQL Sink Connector explicitly define the PostgreSQL schema (namespace) - postgresql

I am using the JDBC sink connector to write data to PostgreSQL.
The connector works fine, but it seems it can only write data to the default PostgreSQL schema, called public.
This is the common JDBC URL format for PostgreSQL:
jdbc:postgresql://<host>:<port>/<database>
Is it possible to explicitly define the schema name to which the PostgreSQL sink connector should write?
UPDATE:
Thanks, @Laurenz Albe, for the hint. I can define search_path in the JDBC connection URL in either of these ways:
jdbc:postgresql://<host>:<port>/<database>?options=-c%20search_path=myschema,public
jdbc:postgresql://<host>:<port>/<database>?currentSchema=myschema
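Putting this together, a sink connector configuration using the currentSchema parameter might look like the sketch below (the connector name, topic, host, database, and credentials are placeholders; the property names are those of the Confluent JDBC sink connector):

```properties
# Sketch: JDBC sink connector writing into a non-default PostgreSQL schema
name=postgres-sink
connector.class=io.confluent.connect.jdbc.JdbcSinkConnector
topics=my_topic
# currentSchema points the connector at "myschema" instead of "public"
connection.url=jdbc:postgresql://localhost:5432/mydb?currentSchema=myschema
connection.user=postgres
connection.password=secret
# Let the connector create the target table if it does not exist
auto.create=true
```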

Use the options connection parameter to set the search_path:
jdbc:postgresql://<host>:<port>/<database>?options=-c%20search_path=myschema,public

Related

Sending data from MS SQL Server to MySQL using Kafka Connect

I need to deploy apache-kafka-connect in Kubernetes. I have the following questions:
Do I need to create the MS SQL Server tables in the MySQL database, or will Kafka Connect create them?
Any reference for how to implement Kafka Connect without the Avro Schema Registry?
How do I configure the key and value converters?
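On the last two points: Kafka Connect ships with a JSON converter, so the Avro Schema Registry is not required. A minimal sketch of the worker-level converter settings (these are standard Kafka Connect property names; whether schemas.enable fits depends on your data):

```properties
# Sketch: converter settings without a Schema Registry
# JsonConverter is bundled with Kafka Connect, so no Avro/Registry is needed
key.converter=org.apache.kafka.connect.json.JsonConverter
value.converter=org.apache.kafka.connect.json.JsonConverter
# Disable embedded schemas if your messages are plain JSON
key.converter.schemas.enable=false
value.converter.schemas.enable=false
```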

Adding IBM Db2 as a datasource

Has anyone had any luck or found documentation on how to add IBM Db2 as a datasource for their dashboard on Apache Superset? I don't see any information in the Db2 service credentials about the driver or dialect.
Based on the docs for Apache Superset the connections and hence the connection URI are based on SQLAlchemy. According to the Db2 driver for SQLAlchemy, the Db2 database URI would be like this:
db2+ibm_db://user:pass@host[:port]/database
db2+ibm_db is the dialect and driver, and database is the database name on that host with the specified port (typically 50000). If you want to connect to a local database, just leave out the host/port combination:
db2+ibm_db://user:pass@/database

How to enable streaming result set when using PostgreSQL JDBC and PostgreSQL database

By default, the PostgreSQL JDBC driver reads all records into memory before processing them.
With MySQL JDBC and a MySQL database, I know how to read and process rows simultaneously, but I don't know how to do it with PostgreSQL JDBC and a PostgreSQL database.
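With the PostgreSQL JDBC driver, results are streamed via a cursor only when autocommit is off and a fetch size is set on the statement (the result set must also be forward-only, which is the default). A minimal sketch, assuming a live database with placeholder connection details and a hypothetical big_table:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

public class StreamingRead {
    public static void main(String[] args) throws SQLException {
        // Placeholder connection details
        try (Connection conn = DriverManager.getConnection(
                "jdbc:postgresql://localhost:5432/mydb", "user", "pass")) {
            // Cursor-based streaming only works inside a transaction,
            // so autocommit must be disabled.
            conn.setAutoCommit(false);
            try (Statement stmt = conn.createStatement()) {
                // Fetch rows in batches of 100 instead of loading all rows
                // into memory at once.
                stmt.setFetchSize(100);
                try (ResultSet rs = stmt.executeQuery("SELECT id FROM big_table")) {
                    while (rs.next()) {
                        // Process each row as it arrives from the cursor
                        System.out.println(rs.getLong("id"));
                    }
                }
            }
        }
    }
}
```

The key difference from MySQL is that MySQL's driver streams when fetch size is set to Integer.MIN_VALUE, whereas PostgreSQL streams with any positive fetch size, provided autocommit is off.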

Error with flyway.conf for Redshift

Can you please provide an example of flyway.conf settings for Redshift?
I tried using:
flyway.url=jdbc:Redshift://name.redshift.amazonaws.com:5439/DBName
flyway.user=user
flyway.password=pass
but that produced this error:
ERROR: Unable to autodetect JDBC driver for url: jdbc:Redshift:
There are several issues here:
redshift should be lower case in the JDBC URL.
You also need to put the Redshift JDBC driver on the classpath (the /drivers directory for the Flyway command line).
Additionally, you need to set flyway.driver to the AWS Redshift driver class name (Flyway defaults to the standard PostgreSQL driver: http://flywaydb.org/documentation/database/redshift.html).
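Applying those fixes, the flyway.conf sketch below shows the corrected shape (host and database names are the placeholders from the question; the exact driver class name depends on which version of the Amazon Redshift JDBC driver you downloaded):

```properties
# Corrected flyway.conf sketch for Redshift
flyway.url=jdbc:redshift://name.redshift.amazonaws.com:5439/DBName
# Driver class for the Amazon Redshift JDBC 4.2 driver
flyway.driver=com.amazon.redshift.jdbc42.Driver
flyway.user=user
flyway.password=pass
```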

Connect icCube with Redshift

In icCube 5.1, Redshift is not in the list of supported JDBC connections.
How can I create a data source in icCube on Amazon Redshift?
A first solution is to use the Postgres JDBC driver. Redshift is based on Postgres, so it also works (for how long is a good question).
The second is a bit more involved, as you need to add the Redshift JDBC driver to icCube. First download the JDBC driver from Amazon from here, then follow these instructions to add a library to icCube.
Once done, you have to configure a new data source:
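The data-source settings would look roughly like this sketch (the cluster hostname and database are hypothetical, and the driver class name depends on the Redshift driver version you added):

```properties
# Hypothetical icCube data-source settings for Redshift
jdbc.driver=com.amazon.redshift.jdbc42.Driver
jdbc.url=jdbc:redshift://examplecluster.abc123.us-west-2.redshift.amazonaws.com:5439/dev
jdbc.user=user
jdbc.password=pass
```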