I'm integrating watchman via the socket/bser interface in a JVM program.
I'm seeing odd timing where:
A file is written to by the build system (a small text file)
I get a watchman notification on the bser interface
Thread A listening for bser subscription notifications puts the update onto a queue for a separate thread
Thread B reads the queue, reads the changed file, and then puts the file's data on the wire
However, somehow, Thread B is reading an empty file.
I assume the file genuinely is empty at some point; e.g. the IO/syscalls might be:
1) Clear the file contents
2) Write chunk 1
3) Write chunk 2
4) Close the file
And I assume my Thread B is reading the file between steps 1 and 2. Or maybe 1 and 4, if 4 is when the result is flushed.
My confusion is two fold:
1) I thought watchman's default 20ms settle period would account for things like this, and that I'd only see an update on my thread A (let alone have my thread B do a read) after step 4, when the data is done being written to the file.
2) Even if watchman did tell me "too soon" about the 1st syscall (say step 1), and I read the results while it was an empty file, there should be another syscall/watchman notification that "btw, the file has some content now".
FWIW/oddly enough, I was seeing this very same behavior when using the Java WatchService API, where I would get an inotify event, but read a file "too soon", and so get either empty or partial results, and then no follow up inotify event when the rest of the data was available.
I assumed this was a fluke/nuance of the WatchService, so I solved it at the time by checking the file mod time before reading it, and just waiting to ensure mod time >2 seconds old before assuming the file is "done" being written.
(Note that this also handled ~100mb+ files being written, where the build process might write a chunk of data every 100ms+, but with WatchService I was seeing 100s of inotify notifications for what was essentially a single continuous write.)
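(For reference, the hack boiled down to something like this - a simplified sketch, the real code uses different names and does a bit more:)

import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.attribute.FileTime;
import java.time.Duration;
import java.time.Instant;

final class EnsureSettled {
  // Illustrative threshold: treat the file as "done" once it hasn't been modified for 2s.
  private static final Duration SETTLE = Duration.ofSeconds(2);

  // Blocks until the file's mtime is at least SETTLE old before we go read it.
  static void awaitSettled(Path file) throws Exception {
    while (true) {
      FileTime mtime = Files.getLastModifiedTime(file);
      Duration age = Duration.between(mtime.toInstant(), Instant.now());
      if (age.compareTo(SETTLE) >= 0) {
        return; // no writes observed for the whole settle window; assume the writer finished
      }
      Thread.sleep(SETTLE.minus(age).toMillis());
    }
  }
}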
When I ported my WatchService code to watchman, I dropped this "ensureSettled" hack, because I assumed watchman's 20ms settle period (which is way lower than the 2s I was using, but hey, it's the default) plus its general robustness compared to the somewhat beta WatchService would mean it wouldn't be a problem.
But within ~a day of using the watchman-ported code, I'm seeing empty file reads, just like I was with the WatchService.
Any ideas about what I'm missing?
I can add back the ensureSettled hack, but at this point I'm curious about what is going on.
The docs aren't very clear on this, sorry!
Dispatching of subscription notifications is subject to the settle timeout, but since file updates are non-atomic it's likely that the default 20ms elapses before the file contents are visible to you. Under the covers, the kernel generates a series of notifications for the various mutations that you're doing, so if the truncate lands more than 20ms before you write (or perhaps flush) the data out, you'll likely get a notification "in the middle".
This stuff is also operating system dependent. Here's an example of a recently discovered and resolved issue: https://github.com/facebook/watchman/commit/bac383c751b248ae742a2a20df3e8272238c0ae2
It doesn't sound like it's quite the same thing you're experiencing; it just adds some color to this discussion.
If you already have code to manage the settling in your client, then it may be easier for you to add that back; we do this in watchman-make for example.
You may also wish to try setting https://facebook.github.io/watchman/docs/config.html#settle in a .watchmanconfig file in the root of the directory tree that you're watching and leave that to the watchman server. If/when you change this setting, you will need to delete and restart the watch.
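For example, a .watchmanconfig like this in the watch root raises the settle period to 500ms (the value is in milliseconds; pick whatever matches how your build writes files):

{
  "settle": 500
}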
Which you choose depends on how you want to trade ease of configuration against volume of code you want to maintain and (perhaps) volume of support questions from your user base if the .watchmanconfig isn't correctly configured for them.
Note that you can use the command invocation from https://facebook.github.io/watchman/docs/cmd/log-level.html to see the debug logging for the kernel notifications as they come in in real time; this may be helpful for you in understanding exactly which notifications are coming in and when.
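From memory the invocation is along these lines (check the linked page for the exact form):

watchman --persistent log-level debug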
Just curious, are you using https://github.com/facebook/watchman/tree/master/java to talk to the watchman server?
I'm working on a C++ program that uses boost::python to provide a python wrapper/API for the user. The program tracks and limits its own memory usage by opening /proc/self/statm using a file descriptor. Every timestep it seeks to the beginning of that file and reads the vmsize from it.
proc_self_statm_fd = open( "/proc/self/statm", O_RDONLY );
However, this causes a problem when calling fork(). In particular, when a user writes a python script that does something like this:
proc = multiprocessing.Process(name="bkg_process",target=bkg_process,daemon=True)
The problem is that the forked process gets the file descriptor pointing to /proc/self/statm from the parent process, not its own, and this reports the wrong memory usage. Even worse, if the parent process exits, the child process will fail when trying to read from the file descriptor.
What's the correct solution for this? It needs to be handled at the C++ level because we don't have control over the user's python scripts. Is there a way to have the class auto detect that a fork has happened and grab a new file descriptor? In the worst case I can have it re-open the file for every update. I'm worried that would add runtime overhead though.
You could store the PID in the class, and check it against the value of getpid() on each call, and then reopen the file if the PID has changed. getpid() is typically much cheaper than open - on some systems it doesn't even need a context switch (it just fetches the PID from a magic location in the process's own memory).
That said, you may also want to actually measure the cost of reopening the file each time - it may not actually be significant.
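A minimal sketch of that idea (Linux-specific; the class and member names are just for illustration):

#include <fcntl.h>
#include <sys/types.h>
#include <unistd.h>
#include <cstdio>

class StatmReader {
public:
    StatmReader() { reopen(); }
    ~StatmReader() { if (fd_ != -1) close(fd_); }

    long vmsize_pages() {
        if (getpid() != owner_pid_) {
            // We're in a forked child: the inherited fd still refers to the
            // parent's /proc/self/statm, so reopen it for this process.
            if (fd_ != -1) close(fd_);
            reopen();
        }
        char buf[128];
        ssize_t n = pread(fd_, buf, sizeof(buf) - 1, 0);  // pread avoids a separate lseek
        if (n <= 0) return -1;
        buf[n] = '\0';
        long pages = -1;
        std::sscanf(buf, "%ld", &pages);  // first field of statm is total program size, in pages
        return pages;
    }

private:
    void reopen() {
        fd_ = open("/proc/self/statm", O_RDONLY);
        owner_pid_ = getpid();
    }
    int fd_ = -1;
    pid_t owner_pid_ = -1;
};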
I am a newbie to both FreeRTOS and STM32. I want to know how exactly the HAL_UART_TxCpltCallback callback for HAL_UART_Transmit_IT works.
Can we edit that callback function for our convenience?
Thanks in Advance
You call HAL_UART_Transmit_IT to transmit your data in the "interrupt" (non-blocking) mode. This call returns immediately, likely well before your data gets fully transmitted.
The sequence of events is as follows:
HAL_UART_Transmit_IT stores a pointer to, and the length of, the data buffer you provide. It doesn't make a copy, so the buffer you pass needs to remain valid until the callback gets called. For example, it cannot be a buffer you'll delete[] / free before the callbacks happen, or a buffer local to a function you return from before the callback is called.
It then enables the TXE interrupt for this UART, which fires every time the DR (or TDR, depending on the STM32 in use) is empty and can have new data written.
At this point the interrupt fires immediately. In the IRQ handler (HAL_UART_IRQHandler) a new byte is put into the DR (TDR) register, which then gets transmitted - this happens in UART_Transmit_IT.
Once this byte gets transmitted, the TXE interrupt triggers again and the process repeats until the end of the buffer you've provided is reached.
If any error happens, HAL_UART_ErrorCallback will get called from the IRQ handler.
If no errors happened and the end of the buffer has been reached, HAL_UART_TxCpltCallback is called (from HAL_UART_IRQHandler -> UART_EndTransmit_IT).
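To make this concrete, here's a minimal usage sketch. HAL_UART_TxCpltCallback is declared __weak in the HAL, so you override it by defining your own copy in application code - no library edits needed. The device header, huart2 and the buffer below are assumptions for the example:

#include "stm32f4xx_hal.h"                 /* adjust to your device family */

extern UART_HandleTypeDef huart2;          /* assumed to be initialized elsewhere (e.g. CubeMX) */

static uint8_t msg[] = "hello\r\n";        /* must stay valid until the callback fires */
static volatile uint8_t tx_done = 0;

void start_send(void)
{
    tx_done = 0;
    /* returns immediately; the bytes are pushed out from the TXE interrupt */
    if (HAL_UART_Transmit_IT(&huart2, msg, sizeof(msg) - 1) != HAL_OK) {
        /* UART busy or bad parameters - handle as needed */
    }
}

/* Called from HAL_UART_IRQHandler once the last byte has gone out.
   This overrides the __weak default, so the HAL sources stay untouched. */
void HAL_UART_TxCpltCallback(UART_HandleTypeDef *huart)
{
    if (huart == &huart2) {
        tx_done = 1;   /* or give a FreeRTOS semaphore with xSemaphoreGiveFromISR() */
    }
}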
On to your second question, whether you can edit this callback "for convenience" - I'd say you can do whatever you want, but you'll have to live with the consequences of modifying what is essentially library code:
Upgrading HAL to newer versions is going to be a nightmare. You'll have to manually re-apply all the changes you've made to that code and test them again. To some extent this can be automated with some form of version control (git / svn) or even patch files, but if the code you've modified gets changed by ST, those patches will likely no longer apply and you'll have to do it all by hand again. This may require re-discovering how the implementation changed and redoing all your work from scratch.
Nobody is going to be able to help you, as your library code no longer matches the code everyone else has. If you've introduced new bugs by modifying library code, no one will be able to reproduce them. Even if you provided your modifications, I honestly doubt many here would bother to apply your changes and test them in practice.
If I were to express my personal opinion, it'd be this: if you think there are bugs in the HAL code, fix them locally and report them to ST. Once they're fixed in a future update, fully overwrite your HAL modifications with the updated official release. If you think the HAL code lacks functionality or flexibility for your needs, you have two options here:
Suggest your changes to ST. You have to keep in mind that HAL aims to serve "general purpose" needs.
Just don't use HAL for this specific peripheral. This "mixed" approach is exactly what I do personally. In some cases functionality provided by HAL for given peripheral is "good enough" to serve my needs (in my case one example is SPI where I fully rely on HAL) while in some other cases - such as UART - I use HAL only for initialization, while handling transmission myself. Even when you decide not to use HAL functions, it can still provide some value - you can for example copy their IRQ handler to your code and call your functions instead. That way you at least skip some parts in development.
From local device testing, I've seen that writing a file to the iOS file system (regardless of how low-level a call you use) will often return success before the file is fully committed to flash. Meaning, if you hard-reset the device and then reboot, your file could be rolled back (if the write completed or was atomic) or corrupted. What is the source of this delay (documentation appreciated, I haven't been able to find anything), and is there a way to get feedback when the actual filesystem write has completed? For instance, I'd like to acknowledge receipt and storage of a piece of data from a remote server, but I find that acknowledging it after the write "reports" success could result in data loss in the event of a hard crash or power failure.
Since this is a 4-year-old question, I'll provide not only the answer, but also the path I took while searching for it.
I was not able to find any clear explanation in the official documentation: File System Programming Guide. There was only a clue in the Performance Tips section. It states that:
Apps can call the BSD fcntl function with the F_NOCACHE flag to enable or disable caching for a file. For more information about this function, see fcntl.
Enabling the F_NOCACHE flag does not solve the problem you're describing; however, the manual for fcntl states there's an option you might just find interesting:
F_FULLFSYNC Does the same thing as fsync(2) then asks the drive to flush all buffered data to the permanent storage device
(from man fcntl, see here).
I've checked the manual for fsync for more details. It has given me, eventually, the clearest and most understandable explanation of both the problem and the solution:
Note that while fsync() will flush all data from the host to the drive (i.e. the "permanent storage device"), the drive itself may not physically write the data to the platters for quite some time and it may be written in an out-of-order sequence.
Specifically, if the drive loses power or the OS crashes, the application may find that only some or none of their data was written. The disk drive may also re-order the data so that later writes may be present, while earlier writes are not.
This is not a theoretical edge case. This scenario is easily reproduced with real world workloads and drive power failures.
For applications that require tighter guarantees about the integrity of their data, Mac OS X provides the F_FULLFSYNC fcntl. The F_FULLFSYNC fcntl asks the drive to flush all buffered data to permanent storage. Applications, such as databases, that require a strict ordering of writes should use F_FULLFSYNC to ensure that their data is written in the order they expect.
(from man fsync, see here).
Yeah, it's definitely not a theoretical edge case. Thankfully, once you know the problem, the solution is trivial:
let filePath: String = "your file path"
// you can use other option than read-write
let fd = open(filePath, O_RDWR)
// if fd is -1, there was an error opening file, handle it as you wish
guard fd != -1 else { return }
// syncResult is -1 if sync operation failed, handle it as you wish
let syncResult = fcntl(fd, F_FULLFSYNC)
// don't forget to close opened file
close(fd)
Once fcntl finishes, your data will be saved.
Note that this operation is slower than a usual file write (via NSFileManager or the writeToURL method family). If performance is an issue, it's best to move the write to a background thread.
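For example, a rough sketch of doing the write and the flush off the main thread (the path and payload are placeholders; error handling trimmed):

import Foundation

let path = "your file path"
let payload = Data("data from the server".utf8)

DispatchQueue.global(qos: .utility).async {
    // write the data with whatever API you prefer, then force it down to permanent storage
    try? payload.write(to: URL(fileURLWithPath: path))
    let fd = open(path, O_RDWR)
    guard fd != -1 else { return }
    _ = fcntl(fd, F_FULLFSYNC)   // returns -1 on failure
    close(fd)
    // only acknowledge the remote server once we get here
}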
I am running a Perl script on an HP-UX box. The script executes every 15 minutes and needs to compare its results with the results of the last time it executed.
I will need to store two variables (IsOccuring and ErrorCount) between the executions. What is the best way to do this?
Edit clarification:
It only compares the most recent execution to the current execution.
It doesn't matter if the value is lost between reboots.
And touching the filesystem is pretty much off limits.
If you can't touch the file system, try using a shared memory segment. There are helper modules for that like IPC::ShareLite, or you can use the shmget and related functions directly.
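A rough sketch with IPC::ShareLite (the key and the colon-separated layout are arbitrary choices for illustration):

use strict;
use warnings;
use IPC::ShareLite;

# attach to (or create) a shared memory segment keyed by an arbitrary integer
my $share = IPC::ShareLite->new(
    -key     => 1971,
    -create  => 'yes',
    -destroy => 'no',
) or die "cannot attach shared memory: $!";

# load what the previous run left behind (empty on the very first run)
my ($is_occuring, $error_count) = split /:/, ($share->fetch || '');
$is_occuring = 0 unless defined $is_occuring;
$error_count = 0 unless defined $error_count;

# ... compare against this run's results here ...

# save the current values for the next execution
$share->store("$is_occuring:$error_count");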
You'll have to store them in a file. This sort of file is often kept in /tmp, but any place where the user running the cron job has access would do. Make sure your script can handle the case where the file is missing.
You could create a separate process running a "remember stuff" service over your choice of IPC mechanism. This sounds like a rather tortured solution to "I don't want to touch the disk" but if it's important enough to offset a couple of days of development work (realistically, if you are new to IPC, and HP-SUX continues to live up to its name) then by all means read man perlipc for a start.
Does it have to be completely re-executed? Can you just have it running in a loop and sleeping for 15 minutes between iterations? Then you don't have to worry about saving the values externally; the program never stops.
I definitely think IPC is the way to go here.
I'd save off the data in a file. Then, inside the script I'd load the last results if the file exists.
Use the Storable module to serialize Perl data structures, save them anywhere you want, and deserialize them during the next script execution.
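For example, freeze/thaw turn the structure into a plain byte string that you can park wherever suits you (a file, or the shared memory segment suggested above):

use strict;
use warnings;
use Storable qw(freeze thaw);

# serialize the state into a plain byte string
my %state  = ( IsOccuring => 1, ErrorCount => 3 );
my $frozen = freeze(\%state);

# ... store $frozen wherever you like (file, shared memory segment, ...) ...

# next execution: turn the byte string back into the original structure
my $restored = thaw($frozen);
printf "IsOccuring=%d ErrorCount=%d\n", $restored->{IsOccuring}, $restored->{ErrorCount};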
I'm having this situation:
I'm parsing some log files with a Perl daemon. The daemon writes data to a MySQL DB.
The log file can:
be rotated ('solved by file size and some logic')
not exist ('ignore_nonexistant' parameter in File::Tail)
The daemon:
can be killed
can die for some reason
I'm using File::Tail to tail the file. For file rotation, a mechanism based on creation date or file size can help. But what mechanism should I use to start tailing from some position in the file? (Assume that there are a lot of such daemons and no write access to the filesystem.)
I've thought about keeping a position variable in the DB, but that won't help me.
Maybe some mechanism to pass a position parameter to the parent process?
I just don't want to reinvent the wheel.
File::Tail already detects rotation and continues reading from the new file.
To deal with the daemon dying and restarting, can you query the database for the last record written when the daemon restarts, and just skip logfile lines until you get to a later line?
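Something along these lines, assuming the table keeps the raw line (the DSN, table layout and file name are made up for the example):

use strict;
use warnings;
use DBI;
use File::Tail;

# on restart, ask the DB for the last line we managed to store
my $dbh = DBI->connect('dbi:mysql:database=logs', 'user', 'pass', { RaiseError => 1 });
my ($last_seen) = $dbh->selectrow_array(
    'SELECT raw_line FROM parsed_log ORDER BY id DESC LIMIT 1'
);

my $tail = File::Tail->new(
    name               => '/var/log/app.log',
    ignore_nonexistant => 1,
    tail               => -1,        # re-read the whole file on startup
);

my $caught_up = !defined $last_seen;  # nothing stored yet => process everything
while (defined(my $line = $tail->read)) {
    chomp $line;
    if (!$caught_up) {
        # skip lines until we pass the last one already recorded in the DB
        $caught_up = 1 if $line eq $last_seen;
        next;
    }
    # ... parse $line and insert it into MySQL here ...
}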
Try http://search.cpan.org/dist/Log-Unrotate/.
You'll have to implement your own Log::Unrotate::Cursor class if you wish to store position files in the DB instead of the local filesystem, but that should be trivial.
We wrote Log::Unrotate and have used it for 5 years in production; it tries really hard to never skip any data. (It tries so hard that it throws an exception if your cursor becomes invalid, for example if the log got rotated several times while the reader wasn't running for some reason. You may want to enable the autofix_cursor option to change this behavior.)
Also take a look at http://search.cpan.org/dist/File-LogReader/. I never used it but it's supposed to solve the same task.