I'm using DBeaver to execute a large script file which produces a lot of output (via PostgreSQL's RAISE NOTICE statement). I can see the output in the server output tab; however, the buffer size seems to be limited, so a lot of the output from the end of the execution is lost.
Is it somehow possible to either increase the server output tab buffer size or redirect the server output directly to a file?
I was experiencing the same issue, and I have been unable to find any setting that limits the output length.
In my case, what I eventually discovered was that an error in my script was causing it to fail silently. It looked as though part of the output was missing, but in fact the script was simply terminating prematurely.
I encourage you to consider this possibility and check your script for errors. Be aware that errors in the script don't appear in the output log.
Related
I'm using install4j to generate Windows executables.
The launcher is configured to redirect stderr and stdout to log\error.log and log\output.log, respectively.
This all works as intended: the log files are written in the expected location and with the expected content.
However, I do not know whether output is flushed or buffered.
I.e. if I kill the program via the Task Manager, can I expect to see the last line that was printed to stderr, or can I expect to lose some output?
(Both outcomes would be fine, I just need to know what will happen so I know how to interpret the log files I'm getting, and what to ask of customers to make sure that I get complete logs.)
The redirection files are flushed for every newline, but not for every character.
I have a script that runs an external process which creates some output (a file), and I also capture the console output to a file (the log):
try
{
    p4d -r $root -jc | Out-File $output
}
catch
{
    Write-Error $_   # surface any terminating error from the pipeline
}
I later check the log output, grab some info, and the script carries on.
The problem is that the external process could stall (and has, once), and I need a way to detect that on the fly so I can handle the error.
The best way I can think of to do this is to monitor the file that the process creates for increasing size. Obviously this isn't without issues, as the process could stall at any point and we don't know the final file size in advance.
I will likely check the size of the last successful run and use that to set some limits.
My question is: how do I check on a process while it's still running?
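A minimal sketch of the polling idea (assuming p4d can be started with Start-Process so the script keeps control; this replaces the Out-File pipeline above, and the 10-second interval and kill-on-stall behaviour are placeholders, not tested values):

$proc = Start-Process -FilePath 'p4d' -ArgumentList "-r $root -jc" -RedirectStandardOutput $output -PassThru -NoNewWindow
$lastSize = -1
while (-not $proc.HasExited)
{
    Start-Sleep -Seconds 10
    $size = (Get-Item $output -ErrorAction SilentlyContinue).Length
    if ($size -eq $lastSize)
    {
        # No growth since the last poll; treat it as a stall and stop waiting.
        Write-Warning 'Output file has stopped growing; assuming the process stalled.'
        $proc.Kill()
        break
    }
    $lastSize = $size
}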
I'm using MATLAB and calling an .exe via the system command.
[status,cmdout] = system(command_s);
where command_s is a command string formatted earlier in my script to pass all the desired options to the .exe. The .exe would normally write to a .csv file via the > redirection operator in Windows/DOS. Instead, this output goes to cmdout, where I use it later in the MATLAB script. It is working correctly and as expected. I'm doing it this way so that the process just uses memory and does not write a very large file to disk, which would then have to be read back and deleted when I'm done with it. In the end, it saves a .mat file that's usually hundreds of KB instead of the tens or hundreds of MB the .csv file would be (some unneeded data is thrown out at the end).
The issue I'm having is that, since I'm dealing with large files, the executable can take a significant amount of time; I typically have to wait about two minutes after executing this command. In the meantime, I have no feedback to tell me that it is progressing and that my system hasn't frozen. I know I could append the & symbol to my command string, command_s, and run MATLAB code while the command runs in the background (asynchronously, as some would say), but that brings up an external window AND leaves cmdout empty, so I cannot use the output, forcing me to sit there for two minutes wondering each time it executes.
Is there any way to run in the background AND get the stdout from the command?
Maybe you could try system(command_s,'-echo')?
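If I remember the documentation correctly, '-echo' streams the command's output to the MATLAB Command Window as it is generated while still returning it in cmdout, so you get live feedback without backgrounding the process:

% '-echo' displays output as it is generated and still populates cmdout
[status, cmdout] = system(command_s, '-echo');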
I have a script that runs a series of nested for loops; within these loops a file is created and then handed to an external program via the system command. In summary, it looks like this:
for i=1:n1
    for j=1:n2
        for k=1:n3
            fid = fopen('file.txt', 'w');
            fprintf(fid, 'Some commands to pass to external program depending on i j k');
            fclose(fid);
            system('program file.txt');
        end
    end
end
The script covers about 500k cases in total (n1 x n2 x n3). It runs fine for a small scenario (about 100 runs), but over the entire set it runs for a while and then fails for no apparent reason, giving this error:
fopen invalid file identifier object
There is no obvious reason for this, and I'm wondering if anyone could point out what is wrong?
Just a guess: an instance of your external program is still reading file.txt while the next iteration of your nested loop wants to open file.txt for writing. The more instances of your external program run at the same time, and the slower your machine is, the more likely this scenario becomes (a 'race condition').
Possible solution for this: use a separate text file per case, with a unique file name.
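A minimal sketch of that idea (the file-name pattern and the fopen check are my additions, not from the original script):

for i=1:n1
    for j=1:n2
        for k=1:n3
            fname = sprintf('file_%d_%d_%d.txt', i, j, k);
            fid = fopen(fname, 'w');
            if fid == -1
                % fail loudly here instead of with an invalid identifier later
                error('Could not open %s for writing.', fname);
            end
            fprintf(fid, 'Some commands to pass to external program depending on i j k');
            fclose(fid);
            system(['program ', fname]);
        end
    end
end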
You should also consider other ways of calling your external program, because going through files for 500k cases is likely to be very slow.
Hope that helps,
Eli
We are trying to automate some procedures using corFlags.exe and dumpbin.exe. Capturing the output from either of these programs has proven impossible so far. In detail, executing
corFlags.exe yourfavorite.dll
in cmd.exe or in powershell.exe (with the appropriate change of syntax) produces output just fine, but as soon as one attempts to capture that output, either through redirection or piping, e.g.
corflags.exe yourFavorite.dll >>out.txt
or
$l_result = &corflags yourFavorite.dll | select-string -pattern "32BIT"
the output of corflags is lost. There is a similar problem with dumpbin.
This is occurring on a Windows 7 SP1 machine (6.1.7601 SP1 build 7601).
I am guessing they suffer from the flaw of not flushing their output streams before exiting; see "Output shows up in console, but disappears when redirected to file" for an example.
We have found no way of working around this problem so far (executing in a sub-process, a batch file, etc.). Does anyone know of a workaround? Thanks.
A nice, simple demonstration of the problem is as follows. Open the PowerShell ISE and try to run "corFlags.exe some.dll" within the console window. You will not be able to get any output from it!
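One more avenue worth trying (a sketch only; I have not verified that corFlags.exe behaves any differently under it): capture stdout through .NET's Process API instead of shell redirection, which hands the program a pipe rather than a console handle:

$psi = New-Object System.Diagnostics.ProcessStartInfo
$psi.FileName = 'corflags.exe'
$psi.Arguments = 'yourFavorite.dll'
$psi.RedirectStandardOutput = $true
$psi.UseShellExecute = $false
$proc = [System.Diagnostics.Process]::Start($psi)
$stdout = $proc.StandardOutput.ReadToEnd()   # read before WaitForExit to avoid a full-pipe deadlock
$proc.WaitForExit()
$stdout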