How to create a log file in pg_log directory for each execution of my function - postgresql

I have a log file at "/opt/postgres/9.2/data/pg_log/postgresql-2018-08-19.csv". Because "log_rotation_age=1d" is set, one log file is created in this pg_log directory every day.
While I am debugging a particular user-defined function that contains a lot of RAISE NOTICE messages, I would like to create a new log file instead of appending the output to the existing one. How can I achieve this?
In other words, I would like to get a new log file for each execution of my function. How can I do this?
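A minimal sketch of one possible approach, assuming superuser access: include the time of day in log_filename so that every rotation starts a fresh file, and force a rotation immediately before each test run with SELECT pg_rotate_logfile();. Illustrative postgresql.conf settings (values are examples, not taken from the question):

logging_collector = on
log_destination = 'csvlog'
log_filename = 'postgresql-%Y-%m-%d_%H%M%S.log'   # CSV output replaces .log with .csv
log_rotation_age = 1d                              # keep the daily rotation as a fallback

After reloading the configuration, running SELECT pg_rotate_logfile(); from psql just before calling the function switches the logging collector to a new file, so the RAISE NOTICE output of that run lands in its own file.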

Related

Using monolog to log to a variable

I would like to use Monolog to log a single process, e.g. the progress of a command, and return that log when the command is finished. The process is not necessarily a console command.
Is there a handler in Monolog that allows me to log to a variable in memory? Or, alternatively, is it easy/clean to log to a temp file and read that temp file when done (and clear that temp file when starting)?

Skipping the errors and recording downloads in kdb

I am trying to use a kdb q script to download files from a remote source.
How can I make the download keep going if there is an error?
Also, how can I record on Linux which files have been downloaded when there are other files in the same directory?
Here is my code:
file:("abc.csv";"def.csv");
dbdir:"/home/terry/";
dlFunc:{system "download.sh abc.com user \"get /remote/path/",x,"\" ",dbdir};
dlFunc each file;
If you're asking how to continue downloading other files when one file fails, then you can put a protected eval around your dlFunc each file, e.g.
@[dlFunc;;()]each file;    / trap errors: a failed download returns () and the loop continues
You could capture the list of failed files using something like:
badfiles:();
{@[dlFunc;x;{y;badfiles,:enlist x}x]}each file;    / on error, append the failed file name to badfiles
Then inspect the badfiles list afterwards. The ones that succeeded would be:
file except badfiles

Scala Spark - overwrite parquet file failed to delete file or dir

I'm trying to create parquet files for several days locally. The first time I run the code, everything works fine. The second time it fails to delete a file. The third time it fails to delete another file. It's totally random which file cannot be deleted.
The reason I need this to work is that I want to create parquet files every day for the last seven days. So the parquet files that are already there should be overwritten with the updated data.
I use Project SDK 1.8, Scala version 2.11.8 and Spark version 2.0.2.
After running that line of code the second time:
newDF.repartition(1).write.mode(SaveMode.Overwrite).parquet(
  OutputFilePath + "/day=" + DateOfData)
this error occurs:
WARN FileUtil:
Failed to delete file or dir [C:\Users\...\day=2018-07-15\._SUCCESS.crc]:
it still exists.
Exception in thread "main" java.io.IOException:
Unable to clear output directory file:/C:/Users/.../day=2018-07-15
prior to writing to it
at org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelationCommand.run(InsertIntoHadoopFsRelationCommand.scala:91)
After the third time:
WARN FileUtil: Failed to delete file or dir
[C:\Users\day=2018-07-20\part-r-00000-8d1a2bde-c39a-47b2-81bb-decdef8ea2f9.snappy.parquet]: it still exists.
Exception in thread "main" java.io.IOException: Unable to clear output directory file:/C:/Users/day=2018-07-20 prior to writing to it
at org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelationCommand.run(InsertIntoHadoopFsRelationCommand.scala:91)
As you can see, it is a different file than in the second run of the code.
And so on. After deleting the files manually, all parquet files can be created.
Does somebody know this issue and how to fix it?
Edit: It's always a crc-file that can't be deleted.
Thanks for your answers. :)
The solution is not to write into the Users directory. There seems to be a permission problem. So I created a new folder directly in the C: directory and it works perfectly.
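For illustration, a minimal self-contained sketch of that workaround; the folder C:/data/reports and the sample rows are hypothetical and only show the output path moving out of C:\Users:

import org.apache.spark.sql.{SaveMode, SparkSession}

// Minimal local Spark job; the data and output folder below are made up for the example.
val spark = SparkSession.builder().appName("parquet-overwrite").master("local[*]").getOrCreate()
import spark.implicits._

val DateOfData = "2018-07-15"
val newDF = Seq(("reportA", 10), ("reportB", 3)).toDF("report", "hits")

val OutputFilePath = "C:/data/reports"   // a folder created outside the Users directory
newDF.repartition(1)
  .write
  .mode(SaveMode.Overwrite)
  .parquet(OutputFilePath + "/day=" + DateOfData)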
This problem occurs when you have the destination directory open in Windows (for example in Explorer). You just need to close the directory.
Perhaps another Windows process has a lock on the file so it can't be deleted.

How does nxlog track the line number?

In nxlog config I have these params set:
SavePos True
ReadFromLast True
When removing lines from a log file (this should never happen) nxlog ships the entire log file. Is this related to how nxlog tracks the line number?
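For context, these directives would normally sit in an im_file input block; a minimal sketch (the input name and file path are illustrative, not from the original config):

<Input watchfolder>
    Module        im_file
    File          "/var/log/myapp/*.log"
    SavePos       TRUE
    ReadFromLast  TRUE
</Input>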
To recreate:
I stop the nxlog service
Delete the nxlog cache (just to make sure I'm starting fresh)
Right now the log folder I've configured nxlog to watch is empty
I add a new log file to the folder
nxlog ships the log file
I open the log file and add a few lines
nxlog ships those lines
I delete those new lines I just added
nxlog ships the entire log file
NXLog, and log shippers in general, are designed to deal with append-only log files.
When you delete lines from the log file, it sees that the file size has decreased. Under the append-only assumption this can only mean that the file was replaced/rotated, and that the current file is a new one that needs to be read in full.
Also note that when you edit a log file in a text editor the editor will usually replace the file with a new one even if you only append data to the end. This is not equivalent to echo test >> test.log.
If you want to transfer arbitrary changes to files, you should use rsync or similar tools.

Informatica Session Failing

I created a mapping that pulls data from a flat file that shows me usage data for specific SSRS reports. The file is overwritten each day with the previous day's usage data. My issue is that sometimes the report doesn't have any usage for that day, and my ETL sends me a "Failed" email because there wasn't any data in the source. Is there a way to prevent the job from running if there is no data in the source, or to prevent it from failing?
--Thanks
A simple way to solve this is to create a "Passthrough" mapping that only contains a flat file source, source qualifier, and a flat file target.
You would create a session that runs this mapping at the beginning of your workflow and have it read your flat file source. The target can just be a dummy flat file that you keep overwriting. Then you would have this condition in the link to your next session that would actually process the file:
$s_Passthrough.SrcSuccessRows > 0
Yes, there are several ways you can do this.
You can provide an empty file to the ETL job when there is no source data. To do this, use a pre-session command like touch <filename> in the Informatica workflow. This will create an empty file named <filename> if it is not already present. The workflow will run successfully with 0 rows.
If you have a script that triggers the Informatica job, then you can put a check there as well like this:
if [ -e <filename> ]
then
    pmcmd ...
fi
This skips the job when the source file does not exist.
Have another session before the actual data load. Read the file, use a FALSE filter and some dummy target. Link this one to the session you already have and set the following link condition:
$yourDummySessionName.SrcSuccessRows > 0