syslog-ng - Passing FILENAME from client to server when using wildcard_file - error-logging

I am using syslog-ng to remotely log the application logs of multiple containers built from the same image. I am using the source configuration below.
source s_wild {
    wildcard-file(
        base-dir("/var/myapp/logs")
        filename-pattern("*")
        recursive(no)
        flags(no-parse)
        follow-freq(1)
    );
};
When I log on the local machine (for testing purposes) using the macro ${FILE_NAME}, it works. But the filename is not passed on over the network when testing with the remote server:
Aug 3 19:39:46 46fc878e92cf syslog-ng[2320]: Error opening file for writing; filename='', error='Is a directory (21)'
There are around 20-25 files, and I am looking for automatic mapping of the filenames on both the client and server side. Is that possible? I am not sure how wildcard-file maps to a remote server; logically it may not be possible, but I am still hoping for a solution.
I am wondering whether I can avoid manual 1-to-1 mapping by defining multiple sources and destinations, or by using log_prefix.

The ${FILE_NAME} macro works only when syslog-ng receives messages from a file() or wildcard-file() source; it is not carried over network(). A few options you have here for passing file names over the network are:
Use the structured-data section of an RFC 5424 syslog message
Use template() on the client side and json-parser() on the server side to send and parse the messages (see the sketch at the end of this answer)
Use ewmm() (enterprise-wide message model), which supports delivery of structured messages
With the first method, sending RFC 5424-formatted (IETF-syslog) messages lets you carry the FILE_NAME in an SDATA field. Use syslog() on the source and destination side instead of network() so the messages are sent using the IETF syslog protocol. The whole configuration would be something like below:
syslog-ng client side
source s_wild {
    wildcard-file(
        base-dir("/var/log_syslog")
        filename-pattern("*")
        recursive(no)
        follow-freq(1)
    );
};
rewrite r_set_filename {
    set(
        "$FILE_NAME",
        value(".SDATA.file#18372.4.name")
    );
};
rewrite r_use_basename {
    subst(
        "/var/log_syslog/",
        "",
        value(".SDATA.file#18372.4.name")
        type("string")
        flags("prefix")
    );
};
destination d_container_logs {
    syslog(
        "192.168.10.48"
        transport("tcp")
        port(5141)
    );
};
log { source(s_wild); rewrite(r_set_filename); rewrite(r_use_basename); destination(d_container_logs); };
The r_set_filename rewrite copies the absolute path of the file into the SDATA field, and r_use_basename chops off the directory prefix so that only the filename is retained.
syslog-ng server side
source s_network {
    syslog(
        transport("tcp")
        port(5141)
        keep_hostname(yes)
    );
};
destination d_container_logs {
    file(
        "/var/sys_log/${.SDATA.file#18372.4.name}"
        create_dirs(yes)
    );
};
log { source(s_network); destination(d_container_logs); };
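With this pair of configurations, a message read from, say, /var/log_syslog/app1.log on the client should end up in /var/sys_log/app1.log on the server, since only the basename survives the r_use_basename rewrite.
For the second option (JSON over a plain network() transport), a minimal sketch could look like the following; the address, port, and paths are placeholders, and it assumes a syslog-ng version that ships the format-json template function and json-parser().
Client side:
destination d_json {
    network(
        "192.168.10.48" port(5141)
        # send the standard fields plus the FILE_NAME macro as one JSON object
        template("$(format-json --scope rfc5424 --key FILE_NAME)\n")
    );
};
log { source(s_wild); destination(d_json); };
Server side:
source s_net { network(port(5141) flags(no-parse)); };
parser p_json { json-parser(prefix(".json.")); };
destination d_files {
    # the parsed file name becomes available under the .json. prefix
    file("/var/sys_log/${.json.FILE_NAME}" create_dirs(yes));
};
log { source(s_net); parser(p_json); destination(d_files); };
Note that ${.json.FILE_NAME} still carries the full client-side path; a subst() rewrite analogous to r_use_basename, applied to ".json.FILE_NAME" on the server, trims it to the basename.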

Related

Record Level Data truncation on Mainframe server while doing SFTP from spark server

Please read this fully.
I am working on sending a CSV file via SFTP from a Spark application developed in Scala to a mainframe server. I am using the JSch (Java Secure Channel) library, version 0.1.53, to establish the SFTP connection from the Spark server to the mainframe server. The issue I am facing is that on the mainframe server, the CSV file gets truncated to 1024 bytes per record line.
After research, I found that on the mainframe we have options like "lrecl" and "recfm" to control the length of each record in the file and the format of that record. But I am unable to integrate these options in Scala. I found an answer on Stack Overflow that was meant for a Java implementation. When I use the same logic in Scala, I get the following error:
EDC5129I No such file or directory., file: /+recfm=fb,lrecl=3000 at
at com.jcraft.jsch.ChannelSftp.throwStatusError(ChannelSftp.java:2846)
at com.jcraft.jsch.ChannelSftp._stat(ChannelSftp.java:2198)
at com.jcraft.jsch.ChannelSftp._stat(ChannelSftp.java:2215)
at com.jcraft.jsch.ChannelSftp.ls(ChannelSftp.java:1565)
at com.jcraft.jsch.ChannelSftp.ls(ChannelSftp.java:1526)
The Scala code block that uses the JSch library to establish the SFTP connection and transfer the file is below:
val session = jsch.getSession(username, host, port)
session.setConfig("PreferredAuthentication", "publickey")
session.setConfig("MaxAuthTries", "2")
System.out.println("Created SFTP Session")
val sftpSessionConfig: Properties = new Properties()
sftpSessionConfig.put("StrictHostKeyChecking", "no")
session.setConfig(sftpSessionConfig)
session.connect() // Connect to session
System.out.println("Connected to SFTP Session")
val channel = session.openChannel("sftp")
channel.connect()
val sftpChannel = channel.asInstanceOf[ChannelSftp]
sftpChannel.ls("/+recfm=fb,lrecl=3000") // set lrecl and recfm ---> THROWING ERROR HERE
sftpChannel.put(sourceFile, destinationPath, ChannelSftp.APPEND) // Push file from local to mainframe
Is there any way to set these options as configuration in my Scala code using the JSch library? I also tried springml's spark-sftp package, but that package has the same data-truncation problem on the mainframe server.
Please help, as this issue has become a critical blocker for my project.
EDIT: Updated the question with the Scala code block.
From slide 21 of this presentation, Dovetail SFTP Webinar:
ls /+recfm=fb,lrecl=80
it seems to me there is one '/' too many in your code.
From the error message, I think the SFTP server's current path is in the Unix file system. You do not set the data set high-level qualifier (HLQ) for the data set, do you? I cannot see it in the code. Again from the above presentation, do a cd before the ls:
cd //your-hlq-of-choice
This will do two things:
Change the current working directory to the MVS data set side.
Set the HLQ to be used.
Sorry, I cannot test this myself; I do not know Scala.
First, what SFTP server is running on z/OS? If it is the one provided with z/OS (not Dovetail's), the command you are executing is not supported, and you will receive a message like Can't ls: "/+recfm=fb,lrecl=80" not found. That would be valid, because it is not a valid file: everything to the right of the / is considered part of the filename.
I converted your code to Java, as I am not familiar with Scala and did not have time to learn it. Here is the code sample I used:
import com.jcraft.jsch.JSch;
import java.util.Properties;
import java.util.Vector;

class sftptest {
    static public void main(String[] args) {
        String username = "ibmuser";
        String host = "localhost";
        int port = 10022; // Note, my z/OS is running in a docker container so I map 10022 to 22
        JSch jsch = new JSch();
        String sourceFile = "/";
        String destinationPath = "/";
        String privateKey = "myPrivateKey";
        try {
            jsch.addIdentity(privateKey); // add private key path and file
            com.jcraft.jsch.Session session = jsch.getSession(username, host, port);
            session.setConfig("PreferredAuthentication", "password");
            session.setConfig("MaxAuthTries", "2");
            System.out.println("Created SFTP Session");
            Properties sftpSessionConfig = new Properties();
            sftpSessionConfig.put("StrictHostKeyChecking", "no");
            session.setConfig(sftpSessionConfig);
            session.connect(); // Connect to session
            System.out.println("Connected to SFTP Session");
            com.jcraft.jsch.ChannelSftp channel = (com.jcraft.jsch.ChannelSftp) session.openChannel("sftp");
            channel.connect();
            // com.jcraft.jsch.Channel sftpChannel = (ChannelSftp) channel;
            // channel.ls("/+recfm=fb,lrecl=3000"); // set lrecl and recfm ---> THROWING ERROR HERE
            // channel.ls("/");
            Vector filelist = channel.ls("/");
            for (int i = 0; i < filelist.size(); i++) {
                System.out.println(filelist.get(i).toString());
            }
            // channel.put(sourceFile, destinationPath, com.jcraft.jsch.ChannelSftp.APPEND); // Push file from local to mainframe
        } catch (Exception e) {
            System.out.println("Exception " + e.getMessage());
        }
    }
}
In my case I used an SSH key, not a password. The output with your ls call is:
Created SFTP Session
Connected to SFTP Session
Exception No such file
Dropping the + and everything to the right of it, you get:
Created SFTP Session
Connected to SFTP Session
drwxr-xr-x 2 OMVSKERN SYS1 8192 May 13 01:18 .
drwxr-xr-x 7 OMVSKERN SYS1 8192 May 13 01:18 ..
-rw-r--r-- 1 OMVSKERN SYS1 0 May 13 01:18 file 1
-rw-r--r-- 1 OMVSKERN SYS1 0 May 13 01:18 file 2
The main issue is that z/OS appears not to support the syntax you are using, which is provided by a specific SFTP implementation from Dovetail.
If you do not have Dovetail, I recommend that, since you are sending CSV files, which are generally variable in length, you send them as a USS file so the lines are properly translated and remain variable-length. Transfer them to USS (regular Unix on z/OS) and then copy them to an MVS data set that has a RECFM of VB. Assuming the data set is already allocated, you could do cp myuploadedFile.csv "//'MY.MVS.FILE'"

rsyslog 5.8 imfile outside /var/log not picking up log files

I would like to pick up logs of different types from various locations other than /var/log and send them to a central location.
Using RHEL 6.6 and rsyslog 5.8, the configuration works fine when the path is within /var/log. If I use another path, like /opt/appname/log/file.log, the rsyslog client does not pick up the log. I do not see any error or message when running rsyslogd in debug mode.
Example:
Client:
...
$InputFileName /opt/appname/test.log
$InputFileTag APPNAME1
$InputFileStateFile stat-APPNAME1
$InputFileSeverity info
$InputFilePersistStateInterval 200
$InputFileFacility local3 # also tried with other local facilities
$InputRunFileMonitor
...
Server:
...
$template HostAudit, "/opt/logs/%HOSTNAME%/test.log" # tried different paths
$template auditFormat, "%msg%\n"
local3.* ?HostAudit;auditFormat
...
Any recommendations? I appreciate your help!
Bill
I would first try these (see the sketch after this list):
Verify that the state file names are unique.
Verify that every $InputFileName points to an existing regular file.
Remove some of the monitored files from the configuration. It could be that there is a problem with only one of the monitored files, which would make rsyslog ignore the rest of them.
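For reference, a minimal imfile block along those lines, reusing the paths from the question; the $ModLoad line is included on the assumption that it may have been lost in the elided part of the configuration, since imfile must be loaded before any $InputFile directives:
$ModLoad imfile # load the file-input module once, before any $InputFile directives
$InputFileName /opt/appname/test.log
$InputFileTag APPNAME1
$InputFileStateFile stat-APPNAME1 # must be unique per monitored file
$InputFileSeverity info
$InputFileFacility local3
$InputRunFileMonitor # repeat after each $InputFileName block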
I had this with "$InputFileStateFile tomcat-log" for each of the individual Tomcat logs. Each state file name needs to be unique. For me it worked after changing them to instances of:
"$InputFileStateFile tomcat-manager"
"$InputFileStateFile tomcat-localhost"
etc...
Another option is to just add numbers to the end of the state file name.
"$InputFileStateFile tomcat-log1"
"$InputFileStateFile tomcat-log2"

How can I delete a message without mailbox file lock? I'm using Perl's Mail::Box

I run Postfix on an Ubuntu 16.04 server to send "internal email" messages, and a crontab Perl job to parse the related bounce messages (delivered to the local mailbox /var/mail/bounceparser). The Perl code basically checks the bounceparser mailbox, parses all the messages, and takes some actions (deleting bounced addresses, etc.).
The problem is that when I try to delete those already-parsed messages using the Mail::Box library, the mailbox gets locked, and if a new message arrives the Postfix daemon throws an exception trying to deliver it: "cannot update mailbox /var/mail/bounceparser for user bounceparser. cannot open file: Permission denied".
Is there a way to delete a message without locking the mailbox file? If it's not possible, any other suggested strategy?
The code I use to delete the messages:
my $mbox = Mail::Box::Mbox->new(folder => '/var/mail/bounceparser', access => 'rw');
# @mailbox_pending_deletes contains the list of message ids to delete
for my $message_id (@mailbox_pending_deletes) {
    my $message = $mbox->find($message_id);
    $message->delete;
}
my $delete_result = $mbox->close(write => 'MODIFIED');
Thank you!
As suggested by @SteffenUllrich, using a single-file mbox mailbox is not a good idea (honestly, I was using it just because it is the default in the Postfix configuration ^_^).
So, if you have a similar issue: 1. Configure Postfix to use Maildir instead of mbox for message delivery (main.cf file):
# Set Postfix to deliver messages to Maildir user folder
home_mailbox = Maildir/
and 2. Use Mail::Box::Maildir rather than the Mail::Box::Mbox I was using to find and delete the messages:
my $mbox = Mail::Box::Maildir->new(folder => '/home/bounceparser/Maildir', access => 'rw');
# @mailbox_pending_deletes contains the list of message ids to delete
for my $message_id (@mailbox_pending_deletes) {
    my $message = $mbox->find($message_id);
    $message->delete;
}
my $delete_result = $mbox->close(write => 'MODIFIED');
Fortunately, the Sisimai library I use to parse the bounce/delivery/etc. messages also accepts a Maildir path to read the messages from:
my $v = Sisimai->make('/home/bounceparser/Maildir/new','hook'=>$x);
Thanks for helping!

MailKit - FolderCache and deleted folder

I am in a situation where I have two clients (ClientA and ClientB) connected to an IMAP server. ClientA is running MailKit. When I delete or move a folder with ClientB, the MailKit client gets an error on any attempt to open or fetch messages from the deleted folder. Actually, I get disconnected from the server when I try to fetch a message from a deleted folder (I guess that is the expected behavior from the server), so I am trying to detect whether the folder I am about to execute a command against still exists.
I see MailKit uses a FolderCache, and even after I reconnect the client, the GetFolder(string path) method still returns an IMailFolder reference for the deleted folder. To avoid the FolderCache, I create a new instance of the mail client each time I am about to synchronize remote folders, so that non-existent folders are not served from the cache. I would like to know whether that is the recommended approach in this situation.
UPDATE:
So, I am now using GetSubfolders, and I can see a LIST command being sent to the server. However, there seems to be an issue with that command in the following scenario:
ClientB deletes the folder INBOX.spam.op while ClientA is trying to move the folder INBOX.spam.op.folder1. What happens is that the server creates a new folder INBOX.spam.op with the NonExistent attribute; that is the expected server behavior, needed in order to create the folder path INBOX.spam.op.folder1.
But see what happens in MailKit when I use GetSubfolders on INBOX.spam: I get an IMailFolder instance with Name = "op" and Attributes that are a mix of the new NonExistent attribute and the attributes of the old "op" folder (the one in the FolderCache). UidValidity should be 0 for a NonExistent folder, but it is the same as the UidValidity of the "op" folder in the FolderCache, even though the server response is this:
C: A00000102 LIST "" "INBOX.spam.%" RETURN (SUBSCRIBED CHILDREN STATUS (UIDVALIDITY))
S: * LIST (\NonExistent \HasChildren) "." INBOX.spam.op
S: A00000102 OK List completed (0.001 + 0.000 secs).
I tried to inherit from ImapClient and add my own GetFolderNoCache(string path) method, but that does not work because of the internal classes. Any other suggestions?
What you want to do is get the top-level folder from the namespace. Then, using that ImapFolder object, get the list of its children (and so on, if you are checking whether a deeply nested folder still exists); a sketch of that walk follows the snippet below.
var toplevel = client.GetFolder (client.PersonalNamespaces[0]);
foreach (var folder in toplevel.GetSubfolders ()) {
// look for the folder you are interested in...
// if it's not here, then the folder has been deleted
}
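For a deeply nested folder, you can walk the path one level at a time, re-listing the children at each step so the result reflects the live LIST response rather than a stale cache entry. A rough sketch, assuming an already connected and authenticated client and a '.'-separated path (the path string and the exception choice are illustrative):
// using System.Linq; using MailKit;
IMailFolder current = client.GetFolder (client.PersonalNamespaces[0]);
foreach (var name in "INBOX.spam.op.folder1".Split ('.')) {
    // re-issue LIST for this level and look for the next path segment
    var child = current.GetSubfolders (false)
                       .FirstOrDefault (f => f.Name == name);
    if (child == null || child.Attributes.HasFlag (FolderAttributes.NonExistent))
        throw new FolderNotFoundException ("INBOX.spam.op.folder1");
    current = child;
}
// 'current' now refers to a folder that existed at the time of the walk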

Diacritic Letters are mistreated by Rest Client

I'm using rest:0.8 to connect my main Grails project to another Grails project that serves as a report generator, using this code:
Map<String, String> adminConfigService = [
webURL: "http://192.168.20.21:8080/oracle-report-service/generate",
...
]
Map params = [
...
name: "Iñigo",
...
]
withHttp(uri: adminConfigService.webURL) {
html = get(query: params)
}
The receiving REST client then processes that data. Running the two projects on my local machine works fine. However, when I deploy the WAR file of the report generator to our Tomcat server, it converts the letter "ñ" to "├â┬æ", so the name "Iñigo" is treated as "I├â┬æigo".
Since the report generator project works fine when run on my local machine, does that mean I need to change some configuration files on my Tomcat server? Which settings file do I need to change?
It seems like an encoding issue.
Check Config.groovy:
grails.converters.encoding = "UTF-8"
Check the file encoding of the controllers and services where you use rest:0.8.
Check URIEncoding in tomcat's server.xml (must be UTF-8).
Also try setting useBodyEncodingForURI="true" (on the connector, like the URIEncoding parameter); an example connector is below.
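For example, the relevant connector entry in Tomcat's server.xml would look something like this (keep the port and other attributes your installation already uses; only the two encoding attributes are the point here):
<Connector port="8080" protocol="HTTP/1.1"
           connectionTimeout="20000"
           URIEncoding="UTF-8"
           useBodyEncodingForURI="true"
           redirectPort="8443" />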
Do you save this data to the database? If so, check the url parameter in your DataSource.groovy:
url = "jdbc:mysql://127.0.0.1:3306/dbname?characterEncoding=utf8"
Also check the encoding and collation of your tables and fields in the database.
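For MySQL, for example, you can inspect and convert a table like this (report_request is a hypothetical table name):
-- show the table's current character set and collation
SHOW TABLE STATUS WHERE Name = 'report_request';
-- convert the table and its text columns to UTF-8
ALTER TABLE report_request CONVERT TO CHARACTER SET utf8 COLLATE utf8_general_ci;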