How to load data from Kafka to SQL using Talend

I am trying to load data from tKafkaInput to tDBOutput. How do I load the data dynamically without stopping the Kafka input?

Related

Kafka use case to replace Control-M jobs

I have a few Control-M jobs running in production:
First job - load CSV file records and insert them into a database staging table
Second job - perform data enrichment for each record in the staging table
Third job - read data from the staging table and insert it into another table
Currently we use Apache Camel to do this.
We have bought a Confluent Kafka license, so we want to use Kafka.
Design proposal
Create a CSV Kafka source connector to read data from the CSV files and insert it into a Kafka input topic
Create a Spring Cloud Stream Kafka binder application to read data from the input topic, enrich the data, and push it to an output topic
Have a Kafka sink connector push data from the output topic to the database
The problem is that in step two we need a database connection to enrich the data, and a YouTube video I watched said a Spring Cloud Stream Kafka binder application should not have a database connection. So how should I design my flow? Which Spring technology should I use?
There's nothing preventing you from having a database connection, but if you read the database table into a Kafka stream/table, then you can join and enrich the data using Kafka Streams joins rather than remote database calls.
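For illustration, here is a minimal sketch of that pattern with the plain Kafka Streams API: the reference data is read into a KTable (for example from a compacted topic fed by a CDC/source connector) and each incoming record is enriched with a local join instead of a per-record database call. Topic names, serdes and the join logic are placeholder assumptions, and the same KStream/KTable join can be expressed inside a Spring Cloud Stream Kafka Streams binder application.

import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.KTable;
import java.util.Properties;

public class EnrichmentTopology {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "csv-enrichment");      // placeholder app id
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");   // placeholder brokers
        props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
        props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());

        StreamsBuilder builder = new StreamsBuilder();

        // Records produced by the CSV source connector
        KStream<String, String> input = builder.stream("input-topic");

        // Reference data kept in Kafka (e.g. a compacted topic fed by a CDC/source connector)
        KTable<String, String> reference = builder.table("reference-topic");

        // Enrich by joining on the record key instead of calling the database per record
        KStream<String, String> enriched =
                input.join(reference, (record, ref) -> record + "|" + ref);

        enriched.to("output-topic");

        new KafkaStreams(builder.build(), props).start();
    }
}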

How to stream data from Snowflake to Kafka

I am trying to build a data pipeline from Snowflake to Kafka.
Preferably I want to use AWS MSK.
I could find multiple docs for streaming data into Snowflake from Kafka, but I am looking for the other way around.

Confluent Kafka Connect: Run multiple sink connectors in a synchronous way

We are using the Kafka Connect S3 sink connector, which connects to Kafka and loads data into S3 buckets. Now I want to load data from the S3 buckets into AWS Redshift using the COPY command, and for that I'm creating my own custom connector. The use case is: I want to load the data written to S3 into Redshift synchronously, and then the next time around the S3 connector should replace the existing file and our custom connector should again load the data into Redshift.
How can I do this using Confluent Kafka Connect, or is there a better approach for the same task?
Thanks in advance!
If you want data to Redshift, you should probably just use the JDBC Sink Connector and download the Redshift JDBC Driver into the kafka-connect-jdbc directory.
Otherwise, rather than writing a connector, you could have an S3 event notification trigger a Lambda function that performs the Redshift upload (a rough sketch follows below).
Alternatively, if you are simply looking to query S3 data, you could use Athena instead without dealing with any databases
But basically, Sink Connectors don't communicate with one another. They are independent tasks designed to consume from a topic and write to a destination, not to trigger external, downstream systems.
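To illustrate the Lambda route mentioned above, here is a rough, hypothetical sketch: an S3 ObjectCreated notification invokes a handler that issues a COPY into Redshift over JDBC. The table name, IAM role, COPY format and the REDSHIFT_JDBC_URL environment variable are all placeholder assumptions, and the Redshift JDBC driver would need to be packaged with the function.

import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.RequestHandler;
import com.amazonaws.services.lambda.runtime.events.S3Event;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class S3ToRedshiftCopyHandler implements RequestHandler<S3Event, String> {
    @Override
    public String handleRequest(S3Event event, Context context) {
        event.getRecords().forEach(rec -> {
            String bucket = rec.getS3().getBucket().getName();
            String key = rec.getS3().getObject().getKey();

            // Placeholder table, IAM role and format; adjust FORMAT to however the S3 sink wrote the files
            String copySql = "COPY staging_table FROM 's3://" + bucket + "/" + key + "'"
                    + " IAM_ROLE 'arn:aws:iam::<account-id>:role/<redshift-copy-role>'"
                    + " FORMAT AS JSON 'auto'";

            // REDSHIFT_JDBC_URL is a placeholder env var holding the full JDBC URL incl. credentials
            try (Connection conn = DriverManager.getConnection(System.getenv("REDSHIFT_JDBC_URL"));
                 Statement stmt = conn.createStatement()) {
                stmt.execute(copySql);
            } catch (Exception e) {
                throw new RuntimeException("COPY failed for s3://" + bucket + "/" + key, e);
            }
        });
        return "ok";
    }
}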
If you want to achieve synchronous behaviour from Kafka to Redshift, then the S3 sink connector is not the right option.
If you use the S3 sink connector, it first puts the data into S3, and then you have to run the COPY command externally to push it to Redshift (the COPY command is extra overhead).
No custom code or validation can run before the data is pushed to Redshift.
The Redshift sink connector comes with a native JDBC library that is about as fast as the S3 COPY command.

How do I read a table in PostgreSQL using Flink

I want to do some analytics using Flink on the data in PostgreSQL. How and where should I give the port, address, username, and password? I was trying with the table source as mentioned in this link: https://ci.apache.org/projects/flink/flink-docs-release-1.4/dev/table/common.html#register-tables-in-the-catalog
final static ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();
final static TableSource csvSource = new CsvTableSource("localhost", port);
I am actually unable to even get started. I went through all the documentation but could not find a detailed explanation of this.
The tables and catalog referred to in the link you've shared are part of Flink's SQL support, wherein you can use SQL to express computations (queries) to be performed on data ingested into Flink. This is not about connecting Flink to a database, but rather about having Flink behave somewhat like a database.
To the best of my knowledge, there is no Postgres source connector for Flink. There is a JDBC table sink, but it only supports append mode (via INSERTs).
The CSVTableSource is for reading data from CSV files, which can then be processed by Flink.
If you want to operate on your data in batches, one approach you could take would be to export the data from Postgres to CSV, and then use a CSVTableSource to load it into Flink. On the other hand, if you wish to establish a streaming connection, you could connect Postgres to Kafka and then use one of Flink's Kafka connectors.
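As a rough sketch of that batch approach, against the Flink 1.4-era Table API referenced in the question (the CSV path and the two-column schema are placeholder assumptions; the file would first be exported from Postgres, e.g. with psql's \copy):

import org.apache.flink.api.common.typeinfo.BasicTypeInfo;
import org.apache.flink.api.java.DataSet;
import org.apache.flink.api.java.ExecutionEnvironment;
import org.apache.flink.table.api.Table;
import org.apache.flink.table.api.TableEnvironment;
import org.apache.flink.table.api.java.BatchTableEnvironment;
import org.apache.flink.table.sources.CsvTableSource;
import org.apache.flink.types.Row;

public class PostgresCsvBatchJob {
    public static void main(String[] args) throws Exception {
        ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();
        BatchTableEnvironment tableEnv = TableEnvironment.getTableEnvironment(env);

        // CSV previously exported from Postgres, e.g. with:
        //   \copy my_table TO '/tmp/my_table.csv' WITH CSV
        CsvTableSource csvSource = CsvTableSource.builder()
                .path("/tmp/my_table.csv")                      // placeholder path
                .field("id", BasicTypeInfo.LONG_TYPE_INFO)      // placeholder schema
                .field("name", BasicTypeInfo.STRING_TYPE_INFO)
                .build();

        tableEnv.registerTableSource("my_table", csvSource);

        // Run a Table API query over the registered source
        Table result = tableEnv.scan("my_table").select("id, name");

        // Convert back to a DataSet; print() triggers execution of the batch job
        DataSet<Row> rows = tableEnv.toDataSet(result, Row.class);
        rows.print();
    }
}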
Reading a Postgres instance directly isn't supported as far as I know. However, you can get realtime streaming of Postgres changes by using a Kafka server and a Debezium instance that replicates from Postgres to Kafka.
Debezium connects using the native Postgres replication mechanism on the DB side and emits all record inserts, updates or deletes as a message on the Kafka side. You can then use the Kafka topic(s) as your input in Flink.
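A minimal sketch of the Flink side of that setup, assuming Debezium is already writing change events for the table to a Kafka topic (the topic name, bootstrap servers, consumer group and the 0.11 connector class are placeholder assumptions; pick the Kafka connector artifact matching your Flink and Kafka versions):

import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer011;
import java.util.Properties;

public class PostgresCdcStreamJob {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        Properties props = new Properties();
        props.setProperty("bootstrap.servers", "localhost:9092");   // placeholder brokers
        props.setProperty("group.id", "flink-cdc-reader");          // placeholder group

        // Debezium names topics <server>.<schema>.<table>; this topic name is a placeholder
        DataStream<String> changes = env.addSource(
                new FlinkKafkaConsumer011<>("dbserver1.public.my_table",
                        new SimpleStringSchema(), props));

        // Each element is a Debezium change-event JSON string; parse and process as needed
        changes.print();

        env.execute("postgres-cdc-via-debezium");
    }
}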

Transfer data from Vertica to Redshift using Apache NiFi

I want to transfer data from Vertica to Redshift using Apache NiFi.
Which processors do I need, and what configuration do I need to set?
If Vertica and Redshift have "well-behaved" JDBC drivers, you can set up a DBCPConnectionPool for each, then a SQL processor such as ExecuteSQL, QueryDatabaseTable, or GenerateTableFetch (the latter of which generates SQL for use in ExecuteSQL). These will get your records into Avro format, then (prior to NiFi 1.2.0) you can use ConvertAvroToJSON -> ConvertJSONToSQL -> PutSQL to get your records inserted into Redshift.
In NiFi 1.2.0 and later, you can set up an AvroReader for use in PutDatabaseRecord. Then you will only need the SQL processor to get the records out of Vertica, feeding directly into PutDatabaseRecord to put them into Redshift.