Sending data from MS SQL Server to MySQL using Kafka Connect on Kubernetes

I need to deploy Apache Kafka Connect in Kubernetes. I have the following questions:
Do I need to create the MS SQL Server tables in the MySQL database, or will Kafka Connect create them?
Is there any reference for implementing Kafka Connect without the Avro Schema Registry?
How do I configure the key and value converters?
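For reference, a minimal JDBC sink configuration along these lines (the connector name, topic, and connection details are placeholders) would use JsonConverter with embedded schemas instead of Avro, so no Schema Registry is needed. With auto.create=true the Confluent JDBC sink connector creates the target tables in MySQL itself:

```json
{
  "name": "mysql-sink",
  "config": {
    "connector.class": "io.confluent.connect.jdbc.JdbcSinkConnector",
    "connection.url": "jdbc:mysql://mysql:3306/targetdb",
    "connection.user": "user",
    "connection.password": "password",
    "topics": "sqlserver.dbo.orders",
    "auto.create": "true",
    "auto.evolve": "true",
    "insert.mode": "upsert",
    "pk.mode": "record_key",
    "key.converter": "org.apache.kafka.connect.json.JsonConverter",
    "key.converter.schemas.enable": "true",
    "value.converter": "org.apache.kafka.connect.json.JsonConverter",
    "value.converter.schemas.enable": "true"
  }
}
```

Note that auto.create requires schema information, which is why schemas.enable is set to true here: JsonConverter then embeds the schema in each message payload rather than looking it up in a registry.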

Related

Is there a way to connect Spark with Kerberized Hadoop and Kerberized Mongo with different user IDs at the same time?

I am trying to archive data from MongoDB into Hive using Spark. The intention is to connect to MongoDB, read the data, and dump it as a Hive table. The problem is that both MongoDB and Hive are Kerberized with different principals. After successfully connecting to Hadoop, I am not able to connect to MongoDB from Spark. This is a Cloudera cluster.
I did a kinit with the HDFS credentials, then executed spark-submit and passed the MongoDB credentials as Java options. During MongoDB authentication from the executor node I get an error: GSSAPI failed.
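A submission along the lines described above (all paths, realms, and file names below are placeholders, not a verified fix) would pass a JAAS configuration for the Mongo principal to both driver and executors, shipping the keytab with --files so executor nodes can read it locally:

```shell
# Authenticate to the Kerberized Hadoop cluster first
kinit -kt /path/to/hdfs.keytab hdfs@HADOOP.REALM

# Ship the Mongo keytab and JAAS config to every executor, and point
# the JVMs at the JAAS file via java.security.auth.login.config
spark-submit \
  --master yarn \
  --files /path/to/mongo.keytab,/path/to/jaas.conf \
  --conf "spark.driver.extraJavaOptions=-Djava.security.auth.login.config=jaas.conf" \
  --conf "spark.executor.extraJavaOptions=-Djava.security.auth.login.config=jaas.conf" \
  archive_mongo_to_hive.py
```

A GSSAPI failure on the executor, as reported above, often indicates that the executor JVM cannot reach the keytab or JAAS file at the configured path, since each executor resolves the path locally.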

Connect Azure PostgreSQL DB From HDinsights

Is there a way to connect to an Azure PostgreSQL DB from an HDInsight cluster?
I can see there is an option to use a custom metastore when creating the HDInsight cluster, but I would like to know whether there is a way to connect to an Azure PostgreSQL DB from the cluster (apart from using the PostgreSQL JAR) to load some data using Spark.
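One way (a sketch, not official Azure guidance; server, database, and credential names are placeholders) is to read the table over plain JDBC from Spark. A small helper builds the Azure-style JDBC URL, which also makes the connection options easy to check in isolation:

```python
def build_pg_jdbc_url(server, database, ssl=True):
    """Build a JDBC URL for an Azure Database for PostgreSQL server.

    Azure exposes servers under the postgres.database.azure.com domain
    and requires SSL by default.
    """
    url = f"jdbc:postgresql://{server}.postgres.database.azure.com:5432/{database}"
    if ssl:
        url += "?sslmode=require"
    return url


def read_pg_table(spark, server, database, table, user, password):
    """Read a PostgreSQL table into a Spark DataFrame over JDBC.

    Requires the PostgreSQL JDBC driver JAR on the Spark classpath,
    e.g. spark-submit --jars postgresql-42.x.jar.
    """
    return (
        spark.read.format("jdbc")
        .option("url", build_pg_jdbc_url(server, database))
        .option("dbtable", table)
        .option("user", user)
        .option("password", password)
        .option("driver", "org.postgresql.Driver")
        .load()
    )
```

This still relies on the PostgreSQL JDBC driver JAR being present, which the question hoped to avoid; Spark has no built-in PostgreSQL source, so some driver on the classpath is unavoidable.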

DebeziumIO read with the PostgreSQL connector against RDS is not streaming data in Apache Beam

I have created an Apache Beam pipeline that uses DebeziumIO with the PostgreSQL connector class to connect to PostgreSQL on RDS. The setup works for a local PostgreSQL database with replication set to logical. AWS is set up with logical replication enabled and the rds_replication role granted to the user.
But the data stream is not captured by the pipeline.
I have checked with embedded Debezium using the same PostgreSQL connector class, and it does capture the change data.
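For comparison, a Debezium PostgreSQL connector configuration that works against RDS typically looks like the sketch below (host, credentials, and slot name are placeholders). On RDS, rds.logical_replication must be set to 1 in the DB parameter group, and the built-in pgoutput logical decoding plugin is the usual choice:

```
connector.class=io.debezium.connector.postgresql.PostgresConnector
database.hostname=mydb.xxxxxxxx.us-east-1.rds.amazonaws.com
database.port=5432
database.user=debezium_user
database.password=secret
database.dbname=mydb
database.server.name=rdsserver
plugin.name=pgoutput
slot.name=beam_slot
```

Since embedded Debezium captures changes with the same connector class, one thing worth comparing is whether the Beam pipeline passes the same plugin.name and slot.name, and whether the replication slot it creates is actually advancing on the RDS side.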

Debezium Connector for RDS Aurora

I'm trying to use Debezium with RDS Aurora, and I'm wondering which connector to use: the MySQL connector, or is there a separate connector for Aurora? Also, how can I connect a Debezium connector (running on localhost) to a remote AWS Aurora DB? And how can I configure Debezium to write the different tables we are monitoring into different Kafka topics? If anyone is using Debezium with Aurora, please share some info.
Regards
While creating the AWS Aurora instance you must have chosen between:
Amazon Aurora with MySQL compatibility
Amazon Aurora with PostgreSQL compatibility
Based on this, choose either the MySQL or the PostgreSQL connector for Debezium. There is no separate connector for Aurora.
"how can I connect a Debezium connector (localhost) to a remote AWS Aurora DB?"
You need to configure the connector with database.hostname and database.port.
Refer to https://debezium.io/documentation/reference/1.0/connectors/index.html
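As a sketch for the MySQL-compatible case (endpoint, credentials, and database names below are placeholders, using the Debezium 1.0 property names from the linked docs), the connector simply points database.hostname at the Aurora cluster endpoint:

```json
{
  "name": "aurora-mysql-connector",
  "config": {
    "connector.class": "io.debezium.connector.mysql.MySqlConnector",
    "database.hostname": "my-cluster.cluster-xxxxxxxx.us-east-1.rds.amazonaws.com",
    "database.port": "3306",
    "database.user": "debezium",
    "database.password": "secret",
    "database.server.id": "184054",
    "database.server.name": "aurora",
    "database.whitelist": "inventory",
    "database.history.kafka.bootstrap.servers": "kafka:9092",
    "database.history.kafka.topic": "schema-changes.inventory"
  }
}
```

For Aurora MySQL, binlog_format must be set to ROW in the cluster parameter group. Regarding the per-table topics question: no extra configuration is needed, because Debezium already writes each table's changes to its own topic, named <database.server.name>.<database>.<table>.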

Connecting Tableau to Heroku Follower Database

I have created a follower database for a Heroku Postgres DB. I am able to connect to the follower DB using pgAdmin with no problem.
When I try to connect with the Tableau 8.2 PostgreSQL connector using the same credentials, I get "no pg_hba.conf entry for host xxx.xxx.xxx.xxx, user yyyyyyyy, database zzzzzzzz, SSL off".
As an alternative I tried the Amazon Redshift connector with the same credentials, and it works, except that a full drill-through to the underlying data errors out ("Syntax error at or near 7500", or some other line number).
Does anyone know a way to connect to a Heroku follower database with Tableau 8.2 that enables drill-through (view underlying data)?
Has anyone been able to get the Postgres connector to work?
Is there a way to get the Amazon Redshift (or another) connector to work with drill-through capabilities?