Kafka Connect - File Source - apache-kafka

It seems the Connect file source reads from the beginning of the files when the connector is restarted.
I couldn't find the corresponding configuration.
How do I specify that only the appended data should be read? (Please note this only happens when Connect is restarted; while it is running, only the data that gets appended is read.)
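For context, in standalone mode the file position of the FileStreamSource connector is persisted in the offsets file configured in connect-standalone.properties; a minimal sketch, assuming standalone mode (the path shown is only an example, and keeping it outside /tmp avoids losing the offsets between restarts):
# connect-standalone.properties (excerpt)
# where Connect persists source offsets, including the FileStreamSource position
offset.storage.file.filename=/var/lib/kafka-connect/connect.offsets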

Why does Kafka Connect Sftp source directory need to be writeable?

using the connector: https://docs.confluent.io/current/connect/kafka-connect-sftp/source-connector/index.html
When I configure the connector and check its status, I get the below exception...
org.apache.kafka.connect.errors.ConnectException: Directory for 'input.path' '/FOO' it not writable.
	at io.confluent.connect.sftp.source.SftpDirectoryPermission.directoryWritable
This makes no sense from a source standpoint, especially if you are connecting to a 3rd-party source you do NOT control.
You need write permissions because the Connector will move the files that it read to the configurable finished.path. This movement into finished.path is explained in the link you have provided:
Once a file has been read, it is placed into the configured finished.path directory.
The documentation on the configuration input.path states that you need write access to it:
input.path - The directory where Kafka Connect reads files that are processed. This directory must exist and be writable by the user running Connect.
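For illustration, a minimal sketch of the relevant part of an SFTP source configuration, using only the two properties quoted above (the paths are just examples; connection, file-pattern and topic settings are omitted):
# SFTP source connector properties (excerpt)
# must exist and be writable by the user running Connect
input.path=/FOO
# files are moved here once they have been read, hence the write requirement
finished.path=/FOO/finished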

Kafka Connect - Missing Text

Kafka Version : 2.12-2.1.1
I created a very simple example to create a source and sink connector by using following commands :
bin\windows\connect-standalone.bat config\connect-standalone.properties config\connect-file-source.properties config\connect-file-sink.properties
Source file name : test_2.txt
Sink file name : test.sink_2.txt
A topic named "connect-test-2" is used and I created a consumer in PowerShell to show the result.
It works perfectly the first time. However, after I reboot my machine and start everything again, I find that some text is missing.
For example, I type the characters below into the test_2.txt file and save it as follows:
HAHAHA..
missing again
some text are missing
I am able to enter text
first letter is missing
testing testing.
The result window (consumer) and the sink file show the following:
As you can see, some text is missing and I cannot figure out why this happens. Any advice?
[Added information below]
connect-file-source.properties
name=local-file-source
connector.class=FileStreamSource
tasks.max=1
file=test_2.txt
topic=connect-test-2
connect-file-sink.properties
name=local-file-sink
connector.class=FileStreamSink
tasks.max=1
file=test.sink_2.txt
topics=connect-test-2
I think the strange behaviour is caused by the way you are modifying the source file (test_2.txt).
How you applied changes after stopping the connector:
Using some editor and saving the whole file <-- I think you used that method
Appending only new characters to the end of the file
FileStreamSource tracks changes based on the position in the file. You are using Kafka Connect in standalone mode, so the current position is written to the /tmp/connect.offsets file.
If you modify the source file using an editor, the whole content of the file is rewritten. However, FileStreamSource only checks whether the size has changed, and it polls the characters whose offsets in the file are greater than the last offset processed by the connector.
You should modify the source file only by appending new characters to the end of the file, for example as shown below.
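For example, a minimal append-only edit from a Windows command prompt (the text is only an illustration; the file name is the one configured in connect-file-source.properties):
echo another appended line >> test_2.txt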

Issue in fetching the output from local fs via StreamSet tool

I am exploring the StreamSets tool. I have a log file and I need to parse it with StreamSets. I pass the log file from a Directory origin to the Log Parser, the format of the log is the Common Log Format, and the destination is the Local FS. When I start executing, it is running but I am not getting the output. Could anyone please help me?

Spark Streaming textFileStream COPYING

I'm trying to monitor a directory in HDFS to read and process data in files copied to it (to copy files from the local system to HDFS I use hdfs dfs -put). Sometimes this generates the problem Spark Streaming: java.io.FileNotFoundException: File does not exist: ._COPYING_, so I read about the problem in forums and in the question here: Spark Streaming: java.io.FileNotFoundException: File does not exist: <input_filename>._COPYING_
According to what I read, the problem is linked to Spark Streaming reading the file before it has finished being copied into HDFS. On GitHub:
https://github.com/maji2014/spark/blob/b5af1bdc3e35c53564926dcbc5c06217884598bb/streaming/src/main/scala/org/apache/spark/streaming/dstream/FileInputDStream.scala , they say that they corrected the problem, but only for FileInputDStream as far as I can see, and I'm using textFileStream.
When I tried to use FileInputDStream, the IDE throws an error: the symbol is not accessible from this place.
Does anyone know how to filter out the files that are still being copied (_COPYING_)? Because I tried:
var lines = ssc.textFileStream(arg(0)).filter(!_.contains("_COPYING_"))
but that didn't work, and that's expected because the filter is applied to the lines that are read; I guess it should be applied to the name of the file being processed, which I can't access.
As you can see, I did plenty of research before asking the question but didn't get lucky.
Any help please?
So I had a look: -put is the wrong method. Look at the final comment: you have to use a rename (e.g. hdfs dfs -mv) in your shell script to get an atomic transition on HDFS; see the sketch below.
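A minimal sketch of that approach, with purely illustrative paths: upload to a staging location that Spark Streaming does not monitor, then move the file into the monitored directory (within a single HDFS the move is a rename, so the file appears there atomically).
# upload outside the monitored directory
hdfs dfs -put access.log /data/staging/access.log
# then move it into the directory that textFileStream watches
hdfs dfs -mv /data/staging/access.log /data/streaming-in/access.log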

Getting null.txt file when using Cygnus HDFS sink

I'm using Cygnus 0.5 with the default configuration for the HDFS sink. In order to make it run, I have deactivated the "ds" interceptor (otherwise I get an error at start time that prevents Cygnus from starting, related to not finding the matching table file).
Cygnus seems to work, but the file in which entity information is stored in HDFS gets a weird name: "null.txt". How can I fix this?
First of all, do not deactivate the DestinationExtractor interceptor. This is the piece of code inferring the destination where the context data notified by Orion is going to be persisted. Please observe that the destination may refer to an HDFS file name, a MySQL table name or a CKAN resource name, depending on the sinks you have configured. Once inferred, the destination is added to the internal Flume event as a header called destination so that the sinks know where to persist the data. Thus, if the interceptor is deactivated, such a header is not found by the sinks and a null name is used as the destination name.
Regarding the "matching table file not found" problem you experienced (and which led you to deactivate the interceptor), it was because the Cygnus configuration template had a bad default value for the cygnusagent.sources.http-source.interceptors.de.matching_table parameter. This has been solved in Cygnus 0.5.1.
A workaround until Cygnus 0.5.1 is released is:
Do not deactivate the DestinationExtractor (as @frb says in his answer)
Create an empty matching table file and use it for the matching_table configuration, i.e. touch /tmp/dummy_table.conf, then set in the Cygnus configuration file: cygnusagent.sources.http-source.interceptors.de.matching_table = /tmp/dummy_table.conf (see the sketch below)
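Put together, a minimal sketch of that workaround (the file path is just an example; the parameter name is the one from the question above):
# create an empty matching table file
touch /tmp/dummy_table.conf
# in the Cygnus agent configuration file, point the interceptor at it
cygnusagent.sources.http-source.interceptors.de.matching_table = /tmp/dummy_table.conf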