I am using a File System source connector to ingest data into Kafka, but I am not able to find any file sink connector. I checked Pulse and Spooldir, and both only provide source connectors. I tried the FileStream sink connector, but it is not production-grade, as noted on the official website.
Could anyone please suggest a solution or a connector?
Note: I don't want to use a consumer application.
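For context, a minimal FileStream sink configuration looks roughly like this (the file path and topic name are placeholders):
name=local-file-sink
connector.class=org.apache.kafka.connect.file.FileStreamSinkConnector
tasks.max=1
file=/tmp/sink-output.txt
topics=my-topic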
I have already set up the Kafka JDBC Sink Connector, which consumes the data produced through the Kafka producer API. However, I want to set up FME to handle the data side and sink it to the database, where it will interact with GIS (geographic information system) and stream the spatial data. I do not have much knowledge of FME, so is there any documentation, or can anyone explain how to set up FME with the Kafka JDBC Sink Connector?
Thank you
The FME connector appears to be a plain producer/consumer and has no relation to the Kafka Connect API: https://docs.safe.com/fme/2019.1/html/FME_Desktop_Documentation/FME_Transformers/Transformers/kafkaconnector.htm
You also wouldn't "set it up with the JDBC connector". The sink writes to the database, so FME would need to read from there, or bypass Kafka Connect altogether and use FME's supported Kafka consumer processes.
I have a requirement in my project where I have to fetch data by hitting a REST service. Is there any Kafka connector that does this, or do I have to write custom code using Kafka Streams or a producer?
I tried finding a REST connector on Confluent Hub (https://www.confluent.io/hub/) but could not find anything. Can you please suggest one?
It seems you're asking for the HTTP Sink Connector, which does exist on that page.
There is also a kafka-connect-rest project on GitHub.
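For context, the Confluent HTTP Sink Connector is configured roughly along these lines (the endpoint URL and topic are illustrative; check the connector's documentation for the exact property names in your version):
name=http-sink
connector.class=io.confluent.connect.http.HttpSinkConnector
tasks.max=1
topics=my-topic
http.api.url=https://example.com/api/messages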
I installed Neo4j and I can access the server. I can create nodes through Cypher.
Now I want to use it for data streams, but I'm not sure how to do so. I have just started with Neo4j and I'm struggling with installing the Streams plugin.
Any help is highly appreciated.
You should copy the jar files for the Neo4j Streams plugin directly into your /plugins folder and configure the connections to Kafka and ZooKeeper, as well as other Neo4j property values, in the neo4j.conf file as described here. For example:
kafka.zookeeper.connect=zookeeper-host:2181
kafka.bootstrap.servers=kafka-host:9092
Alternatively, if you are looking only for a sink connection from Kafka (i.e. moving records from Kafka topics into Neo4j), you can also use Kafka Connect with the supported Kafka Connect Neo4j Sink. More at https://www.confluent.io/hub/neo4j/kafka-connect-neo4j
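A Kafka Connect Neo4j Sink configuration looks roughly like this (the Bolt URI, credentials, topic, and Cypher template are illustrative; the topic-to-Cypher mapping property follows the neo4j.topic.cypher.<topic> pattern, so adjust it to your topic and graph model):
name=neo4j-sink
connector.class=streams.kafka.connect.sink.Neo4jSinkConnector
topics=my-topic
neo4j.server.uri=bolt://neo4j-host:7687
neo4j.authentication.basic.username=neo4j
neo4j.authentication.basic.password=password
neo4j.topic.cypher.my-topic=MERGE (p:Person {id: event.id}) SET p.name = event.name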
I have a system pushing Avro data into multiple Kafka topics.
I want to push that data to HDFS. I came across Confluent, but I am not sure how I can send data to HDFS without starting kafka-avro-console-producer.
Steps I performed:
I have my own Kafka and ZooKeeper running, so I just started the Confluent Schema Registry.
I started kafka-connect-hdfs after changing the topic name.
This step was also successful; it was able to connect to HDFS.
After this I started pushing data to Kafka, but the messages were not being pushed to HDFS.
Please help. I'm new to Confluent.
You can avoid using kafka-avro-console-producer and use your own producer to send messages to the topics, but we strongly encourage you to use the Confluent Schema Registry (https://github.com/confluentinc/schema-registry) to manage your schemas and to use the Avro serializer that is bundled with the Schema Registry to keep your Avro data consistent. There's a nice write-up on the rationale for why this is a good idea here.
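For context, pointing your own producer at the Schema Registry's Avro serializer is roughly a matter of setting properties like these (hosts and ports are placeholders):
bootstrap.servers=kafka-host:9092
key.serializer=io.confluent.kafka.serializers.KafkaAvroSerializer
value.serializer=io.confluent.kafka.serializers.KafkaAvroSerializer
schema.registry.url=http://schema-registry-host:8081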
If you are able to send messages produced with kafka-avro-console-producer to HDFS, then your problem is likely that the kafka-connect-hdfs connector cannot deserialize the data. I assume you are going through the quickstart guide. You will get the best results by using the same serializer on both sides (into and out of Kafka) if you intend to write Avro to HDFS. How this process works is described in this documentation.
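For reference, a kafka-connect-hdfs configuration using the Avro converter looks roughly like this (the topic name, HDFS URL, and Schema Registry URL are illustrative):
name=hdfs-sink
connector.class=io.confluent.connect.hdfs.HdfsSinkConnector
tasks.max=1
topics=my-avro-topic
hdfs.url=hdfs://namenode-host:8020
flush.size=3
key.converter=io.confluent.connect.avro.AvroConverter
key.converter.schema.registry.url=http://schema-registry-host:8081
value.converter=io.confluent.connect.avro.AvroConverter
value.converter.schema.registry.url=http://schema-registry-host:8081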
I am trying to stream data using Kafka Connect with the HDFS Sink Connector. Both standalone and distributed modes run fine, but the connector writes to HDFS only once (based on flush.size) and does not keep streaming afterwards. Please help if I'm missing something.
Confluent 2.0.0 & Kafka 0.9.0
I faced this issue a long time back. Just check whether any of the parameters below are missing.
connect-hdfs-sink properties:
"logs.dir":"/hdfs_directory/data/log"
"request.timeout.ms":"310000"
"offset.flush.interval.ms":"5000"
"heartbeat.interval.ms":"60000"
"session.timeout.ms":"300000
"max.poll.records":"100"