Using Monolog to log to a variable

I would like to use Monolog to log a single process, e.g. the progress of a command, and return that log when the command is finished. The process is not necessarily a console command.
Is there a handler in Monolog that allows me to log to a variable, i.e. in memory? Or, alternatively, is there a clean way to log to a temp file and read that temp file back when done (clearing the temp file when starting)?
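For what it's worth, here is a minimal sketch of the in-memory option, assuming Monolog's StreamHandler is given an already-opened php://memory stream; the channel name and log messages below are only illustrative:

<?php
// assumes Monolog is installed via Composer
require 'vendor/autoload.php';

use Monolog\Logger;
use Monolog\Handler\StreamHandler;

// open an in-memory stream and attach it to the logger
$stream = fopen('php://memory', 'r+');
$logger = new Logger('progress');                  // channel name is illustrative
$logger->pushHandler(new StreamHandler($stream));

// ... the process logs its progress while it runs ...
$logger->info('step 1 finished');

// when the command is done, rewind and read the whole log into a variable
rewind($stream);
$log = stream_get_contents($stream);

Monolog's TestHandler takes a similar route but keeps the records in an array (retrievable via getRecords()), which may be even simpler if you don't need the formatted text; either way no temp file is required.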

Related

Macro to generate xlsx works fine manually but not from the batch through Qlikview Management Console

I am trying to export a few charts to Excel (.xlsx format) through a Qlikview macro and to save the file post-reload at a particular location. The file works perfectly fine when it is run manually or from the batch (.bat) on double click.
But when it is scheduled to run from the Qlikview Management Console through the external file (.bat file), it generates the Excel extract but the file is blank. The error is:
Error: Paste method of Worksheet class failed
I have checked the permissions/location of the file and that is not the issue.
A post-reload trigger that saves charts via a macro will not work via QMC (both post-reload triggers and frontend/chart manipulations don't work via QMC).
To solve that, I do the following.
1) Set a reload in QMC to refresh the data in your document.
2) After a successful reload, another document triggers the macro from the first document to save the charts. That also gave me trouble, as it generated a conflict (you cannot open Qlikview from Qlikview... I know that sounds like nonsense), so in the second document I run the macro from the first one like this (via PsExec):
EXECUTE "C:\Qlikview\PROD APPLICATION\modules\scripts\edx\PsExec64.exe" *\\SERVER_NAME* -u *SERVER_NAME\User* -p *password* -i 1 -d -high cmd /c ""C:\Program Files\QlikView\qv.exe" "C:\Qlikview\PROD APPLICATION\modules\$(vDocument).qvw" /vvRun=yes
I use the variable vRun to specify that the on-open macro runs only when it is set to yes, and in the macro the app is closed after saving the charts:
Sub SaveAndQuit ' sub name illustrative - the opening line was omitted in the original snippet
    ActiveDocument.UnlockAll
    ActiveDocument.ClearAll true
    ActiveDocument.Save
    ActiveDocument.GetApplication.Quit
End Sub

How to create a log file in pg_log directory for each execution of my function

I have a log file such as "/opt/postgres/9.2/data/pg_log/postgresql-2018-08-19.csv". Due to "log_rotation_age=1d", one log file is created in this pg_log directory every day.
While I am debugging a particular user-defined function which contains a lot of raise notice messages, I would like to create a new log file instead of appending the logs to the existing one. How can I achieve this?
In other words, for each and every execution of my function, I would like to get a new log file. How can I do this?

AMPL logfile doesn't work

When I write the run file for my problem, I want to use the log_file command, so after I load the data and model, I write the following option commands:
option solver cplex;
option omit_zero_rows 0;
option presolve 1;
option show_stats 1;
option csvdisplay_header 0;
option log_file AMPL_log.txt;
option cplex_options 'timelimit 900';
solve;
However, when I run it in AMPL, the terminal shows the following error:
"Error at _cmdno 6 executing "option" command can't open
"AMPL_Log.txt" "
I don't know where I made a mistake in this code.
The error can't open <filename> is often caused by not having permission to write the file (or, on Windows, by the file already being open in another application). By default AMPL creates the log file in the current working directory, but you can specify the full path to a location where you do have write permissions, e.g. your home directory:
option log_file '/path/to/log/file';

gsutil cp: concurrent execution leads to local file corruption

I have a Perl script which calls 'gsutil cp' to copy a selected file from GCS to a local folder:
$cmd = "[bin-path]/gsutil cp -n gs://[gcs-file-path] [local-folder]";
$output = `$cmd 2>&1`;
The script is called via HTTP and hence can be initiated multiple times (e.g. by double-clicking on a link). When this happens, the local file can end up being exactly double the correct size, and hence obviously corrupt. Three things appear odd:
1) gsutil seems not to be locking the local file while it is writing to it, allowing another thread (in this case another instance of gsutil) to write to the same file.
2) The '-n' option seems to have no effect. I would have expected it to prevent the second instance of gsutil from attempting the copy action.
3) The MD5 signature check is failing: normally gsutil deletes the target file if there is a signature mismatch, but this is clearly not always happening.
The files in question are larger than 2MB (typically around 5MB) so there may be some interaction with the automated resume feature. The Perl script only calls gsutil if the local file does not already exist, but this doesn't catch a double-click (because of the time lag for the GCS transfer authentication).
gsutil version: 3.42 on FreeBSD 8.2
Anyone experiencing a similar problem? Anyone with any insights?
Edward Leigh
1) You're right, I don't see a lock in the source.
2) This can be caused by a race condition - Process 1 checks, sees the file is not there. Process 2 checks, sees the file is not there. Process 1 begins upload. Process 2 begins upload. The docs say this is a HEAD operation before the actual upload process -- that's not atomic with the actual upload.
3) No input on this.
You can fix the issue by having your script maintain an atomic lock of some sort on the file prior to initiating the transfer - i.e. your check would be something along the lines of:
use Lock::File qw(lockfile);
if (my $lock = lockfile("$localfile.lock", { blocking => 0 })) {
    ... perform transfer ...
    undef $lock;
}
else {
    die "Unable to retrieve $localfile, file is locked";
}
1) gsutil doesn't currently do file locking.
2) -n does not protect against other instances of gsutil run concurrently with an overlapping destination.
3) Hash digests are calculated on the bytes as they are being downloaded as a performance optimization. This avoids a long-running computation once the download completes. If the hash validation succeeds, you're guaranteed that the bytes were written successfully at one point. But if something (even another instance of gsutil) modifies the contents in-place while the process is running, the digesters will not detect this.
Thanks to Oesor and Travis for answering all points between them. As an addendum to Oesor's suggested solution, I offer this alternative for systems lacking Lock::File:
use Fcntl ':flock'; # import LOCK_* constants
# if lock file exists ...
if (-e($lockFile))
{
    # abort if lock file still locked (or sleep and re-check)
    abort() if !unlink($lockFile);
    # otherwise delete local file and download again
    unlink($filePath);
}
# if file has not been downloaded already ...
if (!-e($filePath))
{
    $cmd = "[bin-path]/gsutil cp -n gs://[gcs-file-path] [local-dir]";
    abort() if !open(LOCKFILE, ">$lockFile");
    flock(LOCKFILE, LOCK_EX);
    my $output = `$cmd 2>&1`;
    flock(LOCKFILE, LOCK_UN);
    unlink($lockFile);
}

overwrite then append output of a cron job each time it runs

I know you can redirect the output of a cron job via ">" to overwrite and ">>" to append. However, I was wondering if there is any way to get the output from a cron job to overwrite the log file each time the job is run, but then append the output for that particular job run?
When you use > it overwrites anything previously there each time there is a linebreak in the output of the command, so you don't see historical output from that particular job.
If I understand it correctly, you want to create a new log file every time the job is run, so in crontab you use ">" like this:
* * * * * /home/myhome/some_cron_job.sh > /home/myhome/cron_job_output
Now, within some_cron_job.sh, you use ">>" to append to the log file:
(within shell script)
echo "Testing" >> /home/myhome/cron_job_output
Does that help?