How to save file at some point without closing it? - progress-4gl

OUTPUT TO "logfile.txt".
FOR EACH ...:
...
PUT "Some log data". OUTPUT CLOSE. OUTPUT TO "logfile.txt" APPEND.
...
END.
I haven't found an appropriate statement to save the file at some point. I don't want to use UNBUFFERED APPEND because it is supposedly slower. Are there built-in logging tools? Could STREAMS help me? The problem with my workaround is that I have to specify the log filename each time I open it with the OUTPUT TO statement; a nested procedure may not have a clue about the filename.

The question as it stands is still ambiguous.
If you want a way to route the output through a standard "service" similar to what LOG-MANAGER does, you can do that by:
- using static members of a class,
- using an API in a persistent procedure and PUBLISHing to it, or
- using an API in a session super-procedure and calling its API.
STREAMS will give you a way to segregate output from a single procedure or class into a single file, and to keep that output from getting mingled with the production output. However, a stream is limited to the current program, which means it is not a general solution for an application-wide logging facility.

There is no "save" option.
However... you can force output to be flushed with:
put control null(0).
"Supposedly slower" is awfully vague. Yes, there is potentially more IO with unbuffered output. But whether or not that really matters depends heavily on what you are doing and how it will be used. It is very unlikely that it actually matters.
A STREAM would certainly help to keep things organized and make it so that you don't have to know the name of the file in nested procedures.
Yes, there are built-in logging tools. Look at the LOG-MANAGER system handle.
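For example, a minimal LOG-MANAGER sketch (the log file name, the logging level, and the "MyApp" subsystem tag are placeholders):

log-manager:logfile-name = "client.log".
log-manager:logging-level = 2.
log-manager:write-message( "Some log data", "MyApp" ).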
The code in the question would be better written as:
define stream logStream.
output stream logStream to value( "log.txt" ) append unbuffered.
for each customer no-lock:
  put stream logStream customer.name skip.  /* "customer.name" assumes the sports demo database */
  /* put stream logStream control null(0). */  /* if you want to try fooling with buffered output... */
end.
output stream logStream close.
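And to address the nested-procedure concern: with a shared stream, a nested procedure needs to know only the stream name, never the file name. A sketch (the procedure names are mine):

/* caller.p -- owns the stream and the file name */
define new shared stream logStream.
output stream logStream to value( "log.txt" ) append.
run nested.p.
output stream logStream close.

/* nested.p -- knows the stream, not the file */
define shared stream logStream.
put stream logStream "logged from a nested procedure" skip.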

Related

Omnet++: getting .ned file after simulation has ended

When my simulation, using Omnet++, starts, there is a .ned file describing the initial scenario (in my case it shows a certain type of network configuration). During the simulation this scenario changes, sometimes quite a lot. Is there a way to get a .ned file describing the final scenario after the simulation has ended, so that I can analyze it with a script?
Thanks
Not the NED file, because that describes the initial network structure in an abstract way (i.e. it can contain vectors, loops, conditional statements etc.)
What you need is a simple dump of all or selected objects. You should use the 'snapshot feature' for this. It produces a nicely formatted XML output. You can read more about it in the manual:
http://www.omnetpp.org/doc/omnetpp/manual/usman.html#sec299
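If I remember the 4.x API correctly (please verify against the manual section linked above), the call is made from module code; a rough sketch:

// Rough sketch, OMNeT++ 4.x -- verify against the manual section above.
void MyModule::finish()
{
    // Dump this module's object tree to the snapshot file as XML;
    // the label string is arbitrary.
    snapshot(this, "final-topology");
}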

Is there a way to copy files in a non-blocking way in Scala?

I have checked java.nio.file.Files.copy but that blocks a thread until the copy is done. Are there any libraries that allow one to copy a file in a non-blocking way? I need to perform many of these operations simultaneously and cannot afford to have so many threads blocked.
While I could write something myself using non-blocking streams, I would rather use something tried and tested that would guarantee a correct copy every time (or detect if something went wrong).
Check this: Iterate over lines in a file in parallel (Scala)?
import scala.io.Source

val chunkSize = 128 * 1024  // lines per chunk (grouped counts lines, not bytes)
val iterator = Source.fromFile(path).getLines.grouped(chunkSize)
iterator.foreach { lines =>
  lines.par.foreach { line => process(line) }
}
Reading (copying) files by chunks in parallel. In this case "par" is used, so it is non-blocking in the sense that the work is spread across processor cores.
You could follow the same chunking idea with Akka, Futures, or Promises to get non-blocking behaviour at a wider scope.
You can tune the chunk size depending on your performance characteristics, system load, etc.
One more link explains a possible way to read/write data from a (property) file in parallel using Akka actors. It is not quite what you want, but it may give you an idea: you can build your own non-blocking way of reading/copying files.
--
And about your statement "While I could write something myself using non-blocking streams":
Keep in mind that each OS / file system may have its own rules about what and where to block. Windows, for example, locks a file (a write lock, at least) while one thread writes to it; on Linux this is configurable. So if you want something stable, I would suggest thinking it through and going with your own wrapper (over the file system) based on events, chunks, and states.
I have used the Process class, issuing an operating system command to copy the file. Of course, one has to check which OS the application is running under and issue the appropriate command, but this allows for fast and asynchronous copies.
As Marius rightly mentions in the comments, Scala's Process blocks, so I run it wrapped in a Future.
Java 8's Process introduces an isAlive() function. A non-blocking alternative would be to use Java 8 processes and poll with a scheduler at regular intervals to see whether the process has finished. However, I did not need to go to this extent.
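A minimal sketch of that approach (my own, not the answerer's actual code; the "cp" command and the copyAsync name are mine, and "cp" is POSIX-specific per the OS caveat above):

import scala.concurrent.{ExecutionContext, Future}
import scala.sys.process._

// Run the OS copy command off the calling thread; the Future completes
// with the command's exit code (0 on success).
def copyAsync(src: String, dst: String)(implicit ec: ExecutionContext): Future[Int] =
  Future { Seq("cp", src, dst).! }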
Have you checked out the async stuff in scala-io?
http://jesseeichar.github.io/scala-io-doc/0.4.2/index.html#!/core/async%20read%20write

How to preserve data between executions of program

I am running a Perl script on an HP-UX box. The script executes every 15 minutes and needs to compare its results with the results of the last execution.
I will need to store two variables (IsOccuring and ErrorCount) between the executions. What is the best way to do this?
Edit clarification:
It only compares the most recent execution to the current execution.
It doesn't matter if the value is lost between reboots.
And touching the filesystem is pretty much off limits.
If you can't touch the file system, try using a shared memory segment. There are helper modules for that like IPC::ShareLite, or you can use the shmget and related functions directly.
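A hedged sketch based on the IPC::ShareLite synopsis (the key is arbitrary and the two values are examples):

use IPC::ShareLite;

my ($is_occuring, $error_count) = (1, 3);   # example values

my $share = IPC::ShareLite->new(
    -key     => 1971,
    -create  => 'yes',
    -destroy => 'no',
) or die $!;

$share->store("$is_occuring,$error_count");   # persist between runs
my ($prev_occuring, $prev_count) = split /,/, $share->fetch;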
You'll have to store them in a file. This sort of file is often kept in /tmp, but any place where the user running the cron job has access would do. Make sure your script can handle the case where the file is missing.
You could create a separate process running a "remember stuff" service over your choice of IPC mechanism. This sounds like a rather tortured solution to "I don't want to touch the disk" but if it's important enough to offset a couple of days of development work (realistically, if you are new to IPC, and HP-SUX continues to live up to its name) then by all means read man perlipc for a start.
Does it have to be completely re-executed? Can you just have it running in a loop, sleeping for 15 minutes between iterations? Then you don't have to worry about saving the values externally; the program never stops.
I definitely think IPC is the way to go here.
I'd save off the data in a file. Then, inside the script I'd load the last results if the file exists.
Use the Storable module to serialize Perl data structures, save them anywhere you want, and deserialize them during the next script execution.
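For example (a minimal sketch; Storable's freeze/thaw produce a plain scalar, so the serialized state can go anywhere, including the shared memory segment suggested above rather than a file):

use Storable qw(freeze thaw);

my $state  = { IsOccuring => 1, ErrorCount => 3 };   # example values
my $frozen = freeze($state);    # a plain scalar; store it wherever you like
my $again  = thaw($frozen);     # restores the hashref on the next run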

What is select supposed to do if you close a monitored fd?

I can test this to find the behavior but that's not the point. In my answer to another question, a commenter recommended closing a monitored fd from another thread to wake up select. Another commenter couldn't find a reference to this behavior in the standard, and I can't find one either.
Can someone provide a pointer to the standard on this behavior?
From the description of select in "The Open Group Base Specifications Issue 7":
A descriptor shall be considered ready for reading when a call to an input function with O_NONBLOCK clear would not block, whether or not the function would transfer data successfully. (The function might return data, an end-of-file indication, or an error other than one indicating that it is blocked, and in each of these cases the descriptor shall be considered ready for reading.)
So, I would say this method is portable.
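For what it's worth, here is a throwaway C test harness (my own sketch, not from the standard) to observe the behavior directly; error handling is omitted:

#include <stdio.h>
#include <unistd.h>
#include <pthread.h>
#include <sys/select.h>

static int fds[2];

/* Close the monitored fd from another thread after select() has started. */
static void *closer(void *arg) {
    sleep(1);
    close(fds[0]);
    return NULL;
}

int main(void) {
    pipe(fds);
    pthread_t t;
    pthread_create(&t, NULL, closer, NULL);

    fd_set rset;
    FD_ZERO(&rset);
    FD_SET(fds[0], &rset);
    int n = select(fds[0] + 1, &rset, NULL, NULL, NULL);
    printf("select returned %d\n", n);   /* what happens here is the question */

    pthread_join(t, NULL);
    return 0;
}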

How can I get a callback when there is some data to read on a boost.asio stream without reading it into a buffer?

It seems that since boost 1.40.0 there has been a change to the way that the async_read_some() call works.
Previously, you could pass in null_buffers and you would get a callback when there was data to read, but without the framework reading the data into any buffer (because there wasn't one!). This basically allowed you to write code that acted like a select() call, where you would be told when your socket had some data on it.
In the new code the behaviour has been changed to work in the following way:
If the total size of all buffers in the sequence mb is 0, the asynchronous read operation shall complete immediately and pass 0 as the argument to the handler that specifies the number of bytes read.
This means that my old way of detecting data on the socket (which is, incidentally, the method shown in this official example) no longer works. The problem for me is that I need a way of detecting this, because I've layered my own streaming classes on top of the asio socket streams, and as such I cannot just read data off the sockets that my streams will expect to be there. The only workaround I can think of right now is to read a single byte, store it, and, when my stream classes then request some bytes, return that byte if one is set: not pretty.
Does anyone know of a better way to implement this kind of behaviour under the latest boost.asio code?
My quick test of an official example against boost 1.41 works... so I think it should still work (if you use null_buffers).
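For reference, a minimal sketch of the null_buffers technique against the asio API of that era (null_buffers was later deprecated in favor of async_wait); socket setup and the function names are mine:

#include <boost/asio.hpp>
#include <boost/bind.hpp>

// Called when the socket becomes readable; no data has been consumed,
// so bytes_transferred is always 0 here.
void on_readable(const boost::system::error_code& ec, std::size_t /*bytes*/)
{
    if (!ec)
    {
        // Data is waiting on the socket; the layered stream classes can
        // now read synchronously without blocking.
    }
}

void wait_for_data(boost::asio::ip::tcp::socket& socket)
{
    socket.async_read_some(boost::asio::null_buffers(),
                           boost::bind(&on_readable, _1, _2));
}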