rsyslog - timestamp setting - log file(imfile) - redhat

I applied the following settings in the rsyslog .conf file, and it can read logs from the file:
module(load="imfile" PollingInterval="10")
input(
    type="imfile"
    File="/root/loggingFile.log"
    Tag="tag2"
    Severity="notice"
    Facility="local0"
)
The lines in the file look like the following - plain text with nothing extra (such as a timestamp):
trying1
trying2
trying3
...
tryingNew
It can read the logs, but every time a new line is written, it reads the whole file again from the beginning.
When I set freshStartTail="on", it does not read anything at all, even when a new line is appended.
How can I solve this problem?
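For reference, the freshStartTail variant described above would look roughly like this (freshStartTail is an imfile input() parameter; everything else is copied from the question):

module(load="imfile" PollingInterval="10")
input(
    type="imfile"
    File="/root/loggingFile.log"
    Tag="tag2"
    Severity="notice"
    Facility="local0"
    freshStartTail="on"
)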


How can kafka-connect-file-pulse be configured for continuous reading of a text file?

I have FilePulse correctly configured, so that when I create a file inside the watched folder, it reads it and ingests it into the topic.
Now I need continuous reading of each of the files in that folder, since they are continually being updated.
Do I have to change any property in the properties file?
My filePulseTxtFile.properties:
name=connect-file-pulse-txt
connector.class=io.streamthoughts.kafka.connect.filepulse.source.FilePulseSourceConnector
topic=lineas-fichero
tasks.max=1
# File types
fs.scan.filters=io.streamthoughts.kafka.connect.filepulse.scanner.local.filter.RegexFileListFilter
file.filter.regex.pattern=.*\\.log$
task.reader.class=io.streamthoughts.kafka.connect.filepulse.reader.RowFileInputReader
# File scanning
fs.cleanup.policy.class=io.streamthoughts.kafka.connect.filepulse.clean.LogCleanupPolicy
fs.scanner.class=io.streamthoughts.kafka.connect.filepulse.scanner.local.LocalFSDirectoryWalker
fs.scan.directory.path=/home/ec2-user/parser/scanDirKafka
fs.scan.interval.ms=10000
# Internal Reporting
internal.kafka.reporter.bootstrap.servers=localhost:9092
internal.kafka.reporter.id=connect-file-pulse-txt
internal.kafka.reporter.topic=connect-file-pulse-status
# Track file by name
offset.strategy=name
Thanks a lot!
Continuous reading is only supported by the RowFileInputReader, which you can configure with the read.max.wait.ms property - the maximum time to wait, in milliseconds, for more bytes after hitting the end of the file.
For example, if you configure that property to 10000 then the reader will wait 10 seconds for new lines to be added to the file before considering it completed.
Also, you should note that as long as there are tasks processing files, new files added to the source directory will not be selected. But you can configure allow.tasks.reconfiguration.after.timeout.ms to force all tasks to be restarted after a given period so that new files will be scheduled.
Finally, you must take care to set the tasks.max property correctly so that all files can be processed in parallel (a task can only process one file at a time).
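As a rough sketch (the values below are only examples, not recommendations), the relevant additions to filePulseTxtFile.properties would look something like this:

# wait up to 10 seconds for new lines after reaching end of file before completing it
read.max.wait.ms=10000
# example value: force task restarts after 60 seconds so newly added files get scheduled
allow.tasks.reconfiguration.after.timeout.ms=60000
# one file per task; raise this to process several files in parallel
tasks.max=1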

Openpanel and symbol communication not working

I am trying to make a patch that plays audio when a bang is pressed. I have put a symbol so that I don't need to keep reimporting the file. However, it works sometimes but not all the time.
A warning in the Pd console reads: Start requested with no prior open
However, I have imported an audio file.
Is there something that I have done wrong?
One problem is that whenever you send a [1( to [readsf~], you must have sent an [open ...( message directly beforehand.
Even if you have just successfully opened a file but then stopped it (with [0() or played it through (so it has been closed automatically), you have to send the filename again.
The real problem is that your messages are out of order: you should never have a fan-out (that is, connecting a message outlet to multiple inlets), as this creates undefined behavior.
Use [trigger] to get the order-of-execution correct.
(Mastering [trigger] is probably the single most important step in learning to program Pd)

Data overwritten when opening file using FatFs

If I close a file and then reopen it, I cannot write more data to it after reopening it, but if I keep it open I can write as many lines as I want and then close it when I am finished writing.
See the example below. Thanks.
if (f_mount(&FatFs, "", 1) == FR_OK) {
    f_mkdir("TEST");
    count = 0;
    while (count < 200) {
        if (f_open(&fil, "TEST/test.txt", FA_OPEN_ALWAYS | FA_WRITE) != FR_OK) {
            break;
        }
        else {
            sprintf(array, "This is file entry number: %d\r\n", count);
            f_puts(array, &fil);
            if (f_close(&fil) != FR_OK) {
                break;
            }
        }
        count++;
    }
    f_mount(0, "", 1);
}
It will count up to the maximum value, but the file ends up containing only the last entry, which is 199.
You need to set your open mode so that it appends to the file rather than writing at the start:
From the f_open documentation:
FA_OPEN_APPEND - Same as FA_OPEN_ALWAYS except the read/write pointer is set to the end of the file.
When you open the file with this:
f_open(&fil, "TEST/test.txt", FA_OPEN_ALWAYS | FA_WRITE);
you are opening the file for writing with the write pointer at the start of the file, so when you go to write to the file with:
f_puts(array, &fil);
you overwrite the previous data in the file.
If you change your open to:
f_open(&fil, "TEST/test.txt", FA_OPEN_APPEND | FA_WRITE);
then you should get the behavior you desire. There is an exception, though: each time you run this, you will continue appending to the same file. If that isn't desired, you may need to delete the file first, or open it initially with FA_OPEN_ALWAYS and then re-open it each pass with FA_OPEN_APPEND.
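As a rough sketch of the delete-first variant (f_unlink is FatFs's file-delete call; untested):

/* remove any previous log so each run starts fresh */
f_unlink("TEST/test.txt");
/* ...then open each pass with FA_OPEN_APPEND | FA_WRITE as shown above */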
Depending on what you are trying to do, you should take a look at f_sync, which performs all of the clean-up and writes that an f_close would perform, but keeps the file open. From the documentation:
This is suitable for applications that open files for a long time in write mode, such as a data logger. Performing f_sync periodically, or immediately after f_write, can minimize the risk of data loss due to a sudden blackout or an unintentional media removal.
This would cover nearly every case I can think of for why you might be repeatedly opening and closing a file to append data, so this may be a better solution to your problem.
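For illustration, here is a rough, untested sketch of the f_sync approach, reusing the variables from the question (open once, sync after every write, close at the end):

if (f_mount(&FatFs, "", 1) == FR_OK) {
    f_mkdir("TEST");
    if (f_open(&fil, "TEST/test.txt", FA_OPEN_APPEND | FA_WRITE) == FR_OK) {
        for (count = 0; count < 200; count++) {
            sprintf(array, "This is file entry number: %d\r\n", count);
            f_puts(array, &fil);
            f_sync(&fil);    /* flush cached data so it survives a power loss; file stays open */
        }
        f_close(&fil);       /* close only once, after all entries are written */
    }
    f_mount(0, "", 1);
}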

Why does this code work successfully with Enumerator.fromFile?

I wrote the file transferring code as follows:
val fileContent: Enumerator[Array[Byte]] = Enumerator.fromFile(file)
val size = file.length.toString
file.delete // (1) THE FILE IS TEMPORARY SO SHOULD BE DELETED
SimpleResult(
  header = ResponseHeader(200, Map(CONTENT_LENGTH -> size, CONTENT_TYPE -> "application/pdf")),
  body = fileContent)
This code works successfully, even if the file is rather large (2.6 MB),
but I'm confused, because my understanding is that .fromFile() is a wrapper around fromCallBack() and that SimpleResult actually reads the file buffered, yet the file is deleted before that happens.
My naive assumption is that java.io.File.delete waits until the file is released after the chunked reading completes, but I have never heard of the Java File class behaving that way.
Alternatively, .fromFile() may have already loaded everything into the Enumerator instance, but that would go against the fromCallBack() spec, I think.
Does anybody know about this mechanism?
I'm guessing you are on some kind of a Unix system, OSX or Linux for example.
On a Unix-y system you can actually delete a file that is open: any filesystem entry is just a link to the actual file, and so is the file handle you get when you open a file. The file contents won't become unreachable/deleted until the last link to them is removed.
So: it will no longer show up in the filesystem after you do file.delete, but you can still read it using the InputStream that was created in Enumerator.fromFile(file), since that created a file handle. (On Linux you can actually find it through the special /proc filesystem, which, among other things, contains the file handles of each running process.)
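A minimal, Play-independent illustration of that behaviour (the path is just a placeholder, and this only works on Unix-like systems):

import java.io.{File, FileInputStream}

val file = new File("/tmp/some-temp.pdf")  // placeholder path; assume the file exists
val in = new FileInputStream(file)         // the open stream holds a file handle
file.delete()                              // removes only the directory entry
val firstByte = in.read()                  // still readable through the handle
in.close()                                 // now the file data can actually be reclaimed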
On Windows I think you will get an error, though, so if it is to run on multiple platforms you should probably test your webapp on Windows as well.

How can I validate an image file in Perl?

How would I validate that a jpg file is a valid image file? We have files being written to a directory via FTP, but we seem to be picking up a file before the upload has finished writing it, creating invalid images. I need to be able to identify when a file is no longer being written to. Any ideas?
The easiest way might just be to write the file to a temporary directory and then move it to the real directory after the write is finished.
Or you could check here.
JPEG::Error
[arguments: none] If the file reference remains undefined after a call to new, the file is to be considered not parseable by this module, and one should issue some error message and go to another file. An error message explaining the reason of the failure can be retrieved with the Error method:
EDIT:
Image::TestJPG might be even better.
You're solving the wrong problem, I think.
What you should be doing is figuring out how to tell when whatever FTPd you're using is done writing the file - that way when you come to have the same problem for (say) GIFs, DOCs or MPEGs, you don't have to fix it again.
Precisely how you do that depends rather crucially on which FTPd you're running and on what OS. Some do, I believe, have hooks you can set to trigger when an upload is done.
If you can run your own FTPd, Net::FTPServer or POE::Component::Server::FTP are customizable to do the right thing.
In the absence of that:
1) try tailing the logs with a Perl script that looks for 'upload complete' messages
2) use something like lsof or fuser to check whether anything still has the file open before you try to copy it (a rough sketch of such a check follows below)
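As a loose sketch of option 2 (the file_in_use helper is made up here, and it shells out to the external fuser utility, so it assumes a Linux-like system):

# returns true if some process (e.g. the FTP daemon) still has the file open
sub file_in_use {
    my ($path) = @_;
    return system('fuser', '-s', $path) == 0;   # fuser exits 0 when the file is open
}

# usage inside your pickup loop: skip files that are still being written
next if file_in_use("$path/$file");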
Again looking at the FTP issue rather than the JPG issue.
I check the timestamp on the file to make sure it hasn't been modified in the last X (here, 5) minutes - that way I can be reasonably sure the upload has finished:
# time in seconds that the file was last modified
my $last_modified = (stat("$path/$file"))[9];
# get the current time in seconds since the epoch (i.e. 1970)
my $epoch_time = time();
# ensure the file has not been modified during the last 5 minutes, i.e. is not still uploading
unless ( $last_modified >= ($epoch_time - 300) ) {
    # move / edit or whatever
}
I had something similar come up once; more or less, what I did was this:
my $old_image_size = 0;
my $current_image_size;
# poll the file size until it stops changing between two checks
while ( ($current_image_size = -s $image_file) != $old_image_size ) {   # -s gives the file size in bytes
    $old_image_size = $current_image_size;
    sleep 10;                                                            # wait before checking again
}
processImage($image_file);                                               # your own processing routine
Have the FTP process set the readonly flag, then only work with files that have the readonly flag set.
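A rough Perl sketch of that idea ($dir and process_image are placeholders for your own directory and handler):

# only pick up files that the FTP process has already marked read-only
for my $file (glob "$dir/*.jpg") {
    next if -w $file;         # still writable, so probably still being uploaded
    process_image($file);     # placeholder for your own processing
}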