I wish to better understand how the Log4cplus SocketAppender works with regard to the logging server that receives the appender's events.
I have read the Log4cplus source code for the loggingserver and the SocketAppender, and I would be glad to have the following clarified:
Can the SocketAppender only send events to the Log4cplus logging server, and not to any other server?
And if so, does it mean that if I want to send log messages to a remote machine, that machine must have the Log4cplus library installed?
I would also like to know: does this Log4cplus logging server run as a service? And does it require special configuration and pre-setup in order to use it?
Can the SocketAppender only send events to the Log4cplus logging server, and not to any other server?
Yes and yes.
does it mean that if I want to send log messages to remote machine, that machine must be installed with the Log4cplus lib?
Well, sort of. If you want to use only SocketAppender, you will have to use the logging server. Alternatively, you could use SysLogAppender and send the events to a remote syslog server; obviously, you then have to have a syslog service running there and allow it to receive from the network. You could also write your own custom appender that sends the events to whatever server you desire.
I would also like to know- does this Log4cplus logging-server run as a service?
No, it is a simple executable that listens on a socket.
and does it require special configuration and pre-setup in order to use it?
It requires a configuration file so that it knows where to log the events.
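For illustration, here is a minimal sketch of running the bundled logging server (the exact invocation may differ between log4cplus versions; the port number and file names here are assumptions):

loggingserver 9998 server.properties

where server.properties is an ordinary log4cplus properties file that tells the server where to write the received events, for example:

log4cplus.rootLogger=DEBUG, FILE
log4cplus.appender.FILE=log4cplus::FileAppender
log4cplus.appender.FILE.File=remote.log
log4cplus.appender.FILE.layout=log4cplus::PatternLayout
log4cplus.appender.FILE.layout.ConversionPattern=%d{%Y-%m-%d %H:%M:%S} [%t] %-5p %c - %m%n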
I just wanted to share how I used SocketAppender (this setup also works for Docker containers on the same network).
/usr/share/elasticsearch/config/log4j2.properties
status = error
appender.console.type = Console
appender.console.name = console
appender.console.layout.type = PatternLayout
appender.console.layout.pattern = [%d{ISO8601}][%-5p][%-25c{1.}] %marker%m%n
appender.socket.type = Socket
appender.socket.name = socket
appender.socket.port = 601
appender.socket.host = api
appender.socket.reconnectDelayMillis = 10000
appender.socket.layout.type = PatternLayout
appender.socket.layout.pattern = [%d{ISO8601}][%-5p][%-25c{1.}] %marker%m%n
rootLogger.level = info
rootLogger.appenderRef.console.ref = console
rootLogger.appenderRef.socket.ref = socket
In the second container I used syslog-ng:
apk add syslog-ng
vi /etc/syslog-ng/syslog-ng.conf
syslog-ng -f /etc/syslog-ng/syslog-ng.conf
/etc/syslog-ng/syslog-ng.conf
@version: 3.13
source s_network {
  network(
    transport(tcp)
    port(601)
  );
};

destination d_network {
  file("/var/log/es_slowlog.log", template("${MSGHDR}${MESSAGE}\n"));
};

log {
  source(s_network);
  destination(d_network);
};
Notice that the @version: line has to correspond to your version of syslog-ng. You can check it by invoking syslog-ng -V.
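To sanity-check the whole pipeline, you can push a test line into syslog-ng by hand (a sketch, assuming nc is available in the container; the port matches the config above):

echo '<13>test message' | nc localhost 601
tail /var/log/es_slowlog.log

If everything is wired up, the test line appears in /var/log/es_slowlog.log.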
Related
I need to read Kafka messages with .NET from an external server. As a first step, I installed Kafka on my local machine and then wrote the .NET code. It worked as intended. Then I moved to the cloud, but the code did not work. Here is the setup I have.
I have a Kafka Server deployed on a Windows VM (VM1: 10.0.0.4) on Azure. It is up and running. I have created a test topic and produced some messages with cmd. To test that everything is working I have opened a consumer with cmd and received the generated messages.
Then I have deployed another Windows VM (VM2, 10.0.0.5) with Visual Studio. Both of the VMs are deployed on the same virtual network so that I do not have to worry about opening ports or any other network configuration.
Then I copied my Visual Studio project code and changed the IP address of the bootstrap server to point to the Kafka server. It did not work. Then I read that I have to change the server configuration of Kafka, so I opened server.properties and modified the listeners property to listeners=PLAINTEXT://10.0.0.4:9092. It still does not work.
I have searched online and tried many of the tips, but it does not work. I think that first of all I need to provide credentials for the external server (VM1), and probably some other configuration. Unfortunately, the official Confluent documentation is very short, with very few examples. There is also no example for my case on the official GitHub. I have played with the "Sasl" properties in the ConsumerConfig class, but with no success.
The error message is:
%3|1622220986.498|FAIL|rdkafka#consumer-1| [thrd:10.0.0.4:9092/bootstrap]: 10.0.0.4:9092/bootstrap: Connect to ipv4#10.0.0.4:9092 failed: Unknown error (after 21038ms in state CONNECT)
Error: 10.0.0.4:9092/bootstrap: Connect to ipv4#10.0.0.4:9092 failed: Unknown error (after 21038ms in state CONNECT)
Error: 1/1 brokers are down
Here is my .NET Core code:
static void Main(string[] args)
{
    string topic = "AzureTopic";
    var config = new ConsumerConfig
    {
        BootstrapServers = "10.0.0.4:9092",
        GroupId = "test",
        //SecurityProtocol = SecurityProtocol.SaslPlaintext,
        //SaslMechanism = SaslMechanism.Plain,
        //SaslUsername = "[User]",
        //SaslPassword = "[Password]",
        AutoOffsetReset = AutoOffsetReset.Latest,
        //EnableAutoCommit = false
    };
    int x = 0;
    using (var consumer = new ConsumerBuilder<Ignore, string>(config)
        .SetErrorHandler((_, e) => Console.WriteLine($"Error: {e.Reason}"))
        .Build())
    {
        consumer.Subscribe(topic);
        var cancelToken = new CancellationTokenSource();
        while (!cancelToken.IsCancellationRequested)
        {
            // some tasks
        }
        consumer.Close();
    }
}
If you set listeners to a hard-coded IP, the broker will only bind to that IP and accept traffic addressed to it.
And your listener isn't defined as SASL, so I'm not sure why you've tried using that in the client. While using credentials is strongly encouraged when sending data to cloud resources, they are not required to fix a network connectivity problem. You definitely shouldn't send credentials over plaintext, however.
Start with these settings
listeners=PLAINTEXT://0.0.0.0:9092
advertised.listeners=PLAINTEXT://10.0.0.4:9092
That alone should work within the VM shared network. You can use the console tools included with Kafka to test it.
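For example, from the Kafka installation directory on VM1 (paths assume the Windows distribution; older Kafka versions use --broker-list instead of --bootstrap-server for the producer):

bin\windows\kafka-console-producer.bat --bootstrap-server 10.0.0.4:9092 --topic AzureTopic
bin\windows\kafka-console-consumer.bat --bootstrap-server 10.0.0.4:9092 --topic AzureTopic --from-beginning

If the console consumer run from VM2 sees the messages, the broker side is fine and the problem is in the client.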
And if that still doesn't work from your local client, it's because the 10.0.0.0/8 address space is a private network: you must advertise the VM's public IP and allow TCP traffic on port 9092 through the Azure firewall. It would also make sense to expose multiple listeners, one for the internal Azure network and one for external, forwarded traffic.
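A sketch of such a dual-listener setup (the listener names are arbitrary and <public-ip> is a placeholder for the VM's public address):

listeners=INTERNAL://0.0.0.0:9092,EXTERNAL://0.0.0.0:9093
advertised.listeners=INTERNAL://10.0.0.4:9092,EXTERNAL://<public-ip>:9093
listener.security.protocol.map=INTERNAL:PLAINTEXT,EXTERNAL:PLAINTEXT
inter.broker.listener.name=INTERNAL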
Details here discuss AWS and Docker, but the basics still apply
Overall, I think it'd be easier to set up Azure Event Hubs with its Kafka support.
According to the documentation there are two ways to send log information to the SwisscomDev ELK service.
Standard way via STDOUT: Every output to stdout is sent to Logstash
Directly send to Logstash
Asking about way 2: how is this achieved, and in what format is the input expected?
We're using Monolog in our PHP buildpack based application and using its stdout_handler is working fine.
I was trying the GelfHandler (connection refused) and the SyslogUdpHandler (no error, but no result), both configured to use the logstashHost and logstashPort from VCAP_SERVICES as the endpoint to send logs to.
Binding works and the environment variables are set, but I have no idea how to send log information from our application in a format the SwisscomDev ELK service's Logstash endpoint accepts.
Logstash is configured with a tcp input, which is reachable via logstashHost:logstashPort. The tcp input is configured with its default codec, which is the line codec (source code; not the plain codec as stated in the documentation).
The payload of the log event should be encoded in JSON so that the fields are automatically recognized by Elasticsearch. If this is the case, the whole log event is forwarded without further processing to Elasticsearch.
If the payload is not JSON, the whole log line will end up in the field message.
For your use case with Monolog, I suggest using the SocketHandler (pointed at logstashHost:logstashPort) in combination with the LogstashFormatter, which takes care of the JSON encoding with the log events being line-delimited.
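To illustrate the expected input: each log event should arrive as one JSON object per line on the TCP connection. The exact field set depends on your Monolog version, but a LogstashFormatter-style event looks roughly like this (all values made up):

{"@timestamp":"2017-06-01T12:34:56.000000+00:00","@version":1,"host":"my-app","message":"User logged in","channel":"app","level":"INFO"}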
I'm working on a coreaudio user-space hal plugin based on the example
developer.apple.com/library/mac/samplecode/AudioDriverExamples/Introduction/Intro.html
In the plug-in implementation, I plan to obtain audio data from another process via CFMessagePort.
However, I got the following error in the console when trying to create the port with CFMessagePortCreateLocal:
sandboxd[251]: ([2597]) coreaudiod(2597) deny mach-register com.mycompnay.audio
I did some googling and came across this article:
Technical Q&A QA1811
https://developer.apple.com/library/mac/qa/qa1811/_index.html
about adding AudioServerPlugIn_MachServices to the plist, but still no success.
Is there anything else I need to do to make this work (like adding entitlements or code signing), or is this not the correct approach?
I am not sure if the CFMessagePort mechanism works anymore under the sandbox. Would XPC services be a viable alternative?
Thank you very much for your time. Any help is greatly appreciated
update 1:
I should be creating a remote port instead of a local one in the audio plug-in. That said, with the AudioServerPlugIn_MachServices attribute in the plist, there is no longer a sandboxd[559]: ([552]) coreaudiod(552) deny mach-lookup / register message in the console.
However, in my audio hal plug-in (client side) I have
CFStringRef port_name = CFSTR("com.mycompany.audio.XPCService");
CFMessagePortRef port = CFMessagePortCreateRemote(kCFAllocatorDefault, port_name);
port comes back as NULL (0). I tried this in a different app and it works just fine.
This is my server side:
CFStringRef port_name = CFSTR("com.mycompany.audio.XPCService");
CFMessagePortRef port = CFMessagePortCreateLocal(kCFAllocatorDefault, port_name, &callback, NULL, NULL);
CFRunLoopSourceRef runLoopSource = CFMessagePortCreateRunLoopSource(NULL, port, 0);
CFRunLoopAddSource(CFRunLoopGetCurrent(), runLoopSource, kCFRunLoopCommonModes);
CFRunLoopRun();
I did get a console message regarding this:
com.apple.audio.DriverHelper[1314]: The plug-in named SimpleAudioPlugIn.driver requires extending the sandbox for the mach service named com.mycompnay.audio.XPCService
Does anyone know why?
update 2:
I noticed that when I run it in debug mode with coreaudiod, it does successfully get the object reference of the mach service. (The same thing happened when I was trying the XPC service approach.)
(screenshot: project scheme setting)
I'm pretty sure I was running into the same problems in my AudioServerPlugIn. I could look up and use every Mach service I tried, except for the ones I had created. And the ones I had created worked normally from a regular process.
Eventually I read the Daemonomicon and figured out that coreaudiod (which hosts the HAL plugins) was using the global bootstrap namespace, but my service was being registered in the per-user bootstrap namespace. And since "processes using the global namespace can only see services in the global namespace" my plugin couldn't see my service.
You can use launchctl to test this by having it run the program that registers your service, but in the same bootstrap namespace as coreaudiod. You'll probably need to have rootless (SIP) disabled.
# launchctl bsexec $(pgrep coreaudiod) your_service_executable
With that running, try to connect from your plugin again.
From Table 2 in the Daemonomicon, you can see that only launchd daemons use the global bootstrap namespace. That explains why coreaudiod uses it. And I think it means that your Mach service needs to be created by a launchd daemon.
To make one, create a launchd.plist for your service in /Library/LaunchDaemons. Set its owner to root:wheel and make it only writable by the owner. In it, set the MachServices key and add the name of your service:
<key>MachServices</key>
<dict>
    <key>com.mycompany.audio.XPCService</key>
    <true/>
</dict>
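For reference, a complete minimal launchd.plist might look like the following (the Label and the Program path are placeholders for your own daemon):

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <key>Label</key>
    <string>com.mycompany.audio.XPCService</string>
    <key>Program</key>
    <string>/usr/local/libexec/audio-xpc-service</string>
    <key>MachServices</key>
    <dict>
        <key>com.mycompany.audio.XPCService</key>
        <true/>
    </dict>
</dict>
</plist>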
Then register it:
# launchctl bootstrap system /Library/LaunchDaemons/com.mycompany.audio.XPCService.plist
This is what I ended up with: com.bearisdriving.BGM.XPCHelper.plist.template. Note that without the UserName/GroupName keys your daemon will run as root. (The code for my service and plugin is in that repo as well, in case that's helpful.)
I ended up having to use XPC, unfortunately, but I tried CFMessagePort first and it worked fine.
It also seems to all work fine whether the plugin is signed or not. Though, as you say, you do need the AudioServerPlugIn_MachServices key in your Info.plist.
tl;dr: How do I capture stderr from within a script to get a more specific error, rather than just relying on the generic error from Net::OpenSSH?
I have a tricky problem I'm trying to resolve. Net::OpenSSH only works with protocol version 2, but we have a number of devices on the network that only support version 1. I'm trying to find an elegant way of detecting whether the remote end is the wrong version.
When connecting to a version 1 device, the following message shows up on stderr:
Protocol major versions differ: 2 vs. 1
However, the error returned by Net::OpenSSH is as follows:
unable to establish master SSH connection: bad password or master process exited unexpectedly
This particular error is too general, and doesn't specifically indicate a protocol version difference. I need to handle protocol differences by switching over to another library, but I don't want to do that for every connection error.
We use a fairly complicated process that was originally wired for telnet-only access. We load up a "comm" object that then determines things like the type of router. That comm object invokes Net::OpenSSH to pass in the commands.
Example:
my $sshHandle = eval { $commsObject->go($router) };
my $sshError = $sshHandle->{ssh}->error;
if ($sshError) {
    $sshHandle->{connect_error} = $sshError;
    return $sshHandle;
}
The protocol error shows up on stderr here:
$args->{ssh} = eval {
    Net::OpenSSH->new(
        $args->{node_name},
        user        => $args->{user},
        password    => $args->{tacacs},
        timeout     => $timeout,
        master_opts => [ -o => "StrictHostKeyChecking=no" ]
    );
};
What I would like to do is pass back the stderr protocol error instead of the generic error returned by Net::OpenSSH. I would like to do this within the script, but I'm not sure how to capture stderr there.
Any ideas would be appreciated.
Capture the master stderr stream and check it afterwards.
See here how to do it.
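A minimal sketch of that approach, using the constructor's master_stderr_fh option (the regex matches the message quoted above; error handling kept short):

use File::Temp ();
use Net::OpenSSH;

# redirect the master SSH process's stderr to a temp file
my $stderr_fh = File::Temp->new();

my $ssh = Net::OpenSSH->new(
    $args->{node_name},
    user             => $args->{user},
    password         => $args->{tacacs},
    timeout          => $timeout,
    master_opts      => [ -o => "StrictHostKeyChecking=no" ],
    master_stderr_fh => $stderr_fh,
);

if ($ssh->error) {
    # rewind and inspect what the master process wrote to stderr
    seek $stderr_fh, 0, 0;
    my $stderr_output = do { local $/; <$stderr_fh> };
    if ($stderr_output =~ /Protocol major versions differ/) {
        # SSH v1 device: switch to the fallback library here
    }
}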
Another approach you can use is just to open a socket to the remote SSH server. The first thing it sends back is its version string. For instance:
$ nc localhost 22
SSH-2.0-OpenSSH_6.6.1p1 Ubuntu-8
^C
From that information you should be able to infer if the server supports SSH v2 or not.
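In Perl, that check could look something like this (a sketch; note that a server announcing version 1.99 speaks both v1 and v2):

use IO::Socket::INET;

# connect to the SSH port; the server sends its version banner first
my $sock = IO::Socket::INET->new(
    PeerAddr => $args->{node_name},
    PeerPort => 22,
    Timeout  => 5,
) or die "connect failed: $!";
my $banner = <$sock>;    # e.g. "SSH-2.0-OpenSSH_6.6.1p1 Ubuntu-8"
close $sock;

my ($proto) = $banner =~ /^SSH-([\d.]+)-/;
my $supports_v2 = defined $proto && ($proto eq '2.0' || $proto eq '1.99');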
Finally, if you also need to talk to SSH v1 servers, the development version of my other module Net::SSH::Any is able to do it using the OS native SSH client, though it establishes a new SSH connection for every command.
use Net::SSH::Any;
my $ssh = Net::SSH::Any->new(
    $args->{node_name},
    user     => $args->{user},
    password => $args->{tacacs},
    timeout  => $timeout,
    backends => 'SSH_Cmd',
    strict_host_key_checking => 0,
);
Update: In response to Bill's comment below on the issue of sending multiple commands over the same session:
The problem with sending commands over the same session is that you have to talk to the remote shell, and there isn't a way to do that reliably in a generic fashion, as every shell does things differently; this is especially true of network equipment shells, which are quite automation-unfriendly.
Anyway, there are several modules on CPAN trying to do that, implementing a handler for every kind of shell (or OS). For instance, check Oliver Gorwits's modules Net::CLI::Interact, Net::Appliance::Session, and Net::Appliance::Phrasebook. The phrasebook approach seems quite suitable.
I am trying to connect to my local queue manager using the CPAN MQSeries library from a Perl script, in a Solaris environment. When I execute my script it gives reason code 2058, which means there is a queue manager name error.
I have done the following to analyze this issue, but I still get reason code 2058:
1) Stopped and started the queue manager.
2) Checked the queue manager name in my code.
3) Successfully put and got messages on my queue using the amqsput and amqsget sample commands, but it does not work with my script.
Could anybody please help me with this? What kind of environment do I have to set up, or is there any configuration setting I am missing?
use MQSeries;

my $qm_name  = "MQTEST";
my $compCode = MQCC_WARNING;
my $Reason   = MQRC_UNEXPECTED_ERROR;

my $Hconn = MQCONN($qm_name, $compCode, $Reason)
  || die "Unable to connect to queue manager\n";
Maybe you are running into this issue?
"By default, the MQSeries module will try to dynamically determine
whether or not the localhost has any queue managers installed, and if
so, use the "server" API, otherwise, it will use the "client" API.
This will Do The Right Thing (tm) for most applications, unless you want to connect directly to a remote queue manager from a host
which is running other queue managers locally. Since the existence of
locally installed queue managers will result in the use of the
"server" API, attempts to connect to the remote queue managers will
fail with a Reason Code of 2058."
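If that is what is happening and the client API is in use, one way to direct the connection is the standard MQSERVER environment variable (a sketch; the channel name, host, and port are placeholders for your actual setup):

use MQSeries;

# hypothetical channel/host/port: replace with your own values
$ENV{MQSERVER} = 'SYSTEM.DEF.SVRCONN/TCP/mqhost(1414)';

my $compCode = MQCC_WARNING;
my $Reason   = MQRC_UNEXPECTED_ERROR;
my $Hconn    = MQCONN("MQTEST", $compCode, $Reason)
  || die "Unable to connect, reason code: $Reason\n";

Note this only applies when the module has selected the client API; with the server API, the queue manager must exist locally under exactly the name passed to MQCONN.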