Logstash TCP Input Crashing - sockets

We have a Logstash (v2.3) setup with one queue server running RabbitMQ, ten Elasticsearch nodes, and a web node for Kibana. Everything "works", and we have a large number of servers sending logs to the queue server. Most of the logs make it in, but we've noticed that many just never show up.
Looking in the logstash.log file, we see the following start to appear:
{:timestamp=>"2016-07-15T16:21:34.638000+0000", :message=>"A plugin had an unrecoverable error. Will restart this plugin.\n Plugin: <LogStash::Inputs::Tcp type=>\"syslog\", port=>5544, codec=><LogStash::Codecs::JSONLines charset=>\"UTF-8\", delimiter=>\"\\n\">, add_field=>{\"deleteme\"=>\"\"}, host=>\"0.0.0.0\", data_timeout=>-1, mode=>\"server\", ssl_enable=>false, ssl_verify=>true, ssl_key_passphrase=><password>>\n Error: closed stream", :level=>:error}
This repeats about every second or so. We initially thought the maximum connection limit was being hit, but netstat only shows roughly 4,000 connections, and our limit should be upwards of 65,000.
Why is this TCP plugin crashing so much?
Everything I've read online hints at this being an older issue that was resolved in newer versions of Logstash, which we've long since installed. What's confusing is that it is partially working: we're getting a ton of logs but also seem to be missing quite a few.
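One check worth adding here (a hedged sketch; the pgrep pattern is an assumption about how the process is named): the "closed stream" error can also show up when the Logstash process itself runs out of file descriptors, even if the system-wide connection count looks modest, so it's worth comparing the per-process limit against the open-socket count, e.g.:
netstat -tan | grep ':5544 ' | grep -c ESTABLISHED                      # connections to the TCP input
cat /proc/$(pgrep -f logstash | head -n1)/limits | grep 'open files'    # per-process fd limit
ulimit -n                                                               # limit for the shell/user starting Logstash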
Relevant conf file on Queue server:
queue.mydomain.com:
input {
  tcp {
    type => "syslog"
    port => "5544"
  }
  udp {
    type => "syslog"
    port => "5543"
  }
}
output {
  rabbitmq {
    key => "thekey"
    exchange => "theexchange"
    exchange_type => "direct"
    user => "username"
    password => "password"
    host => "127.0.0.1"
    port => 5672
    durable => true
    persistent => true
  }
}
We recently added UDP to the above conf to test with, but logs aren't reliably making it in through it either.
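For the UDP input specifically, one thing we could try (the option names below are assumptions to double-check against the installed logstash-input-udp version) is raising the input's internal packet queue and worker count, since UDP datagrams get silently dropped whenever the input can't keep up:
udp {
  type        => "syslog"
  port        => "5543"
  workers     => 4       # assumed option: threads processing received packets
  queue_size  => 10000   # assumed option: in-memory buffer of unprocessed datagrams
  buffer_size => 65536   # assumed option: max datagram size read from the socket
}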
Just in case the Elasticsearch cluster conf is relevant:
We have a 10-node Elasticsearch cluster set up to pull from the queue server; this works as intended and is on the same version of Logstash as the queue server. The nodes pull from the RabbitMQ server with this conf:
input {
  rabbitmq {
    durable => "true"
    host => "***.**.**.**"
    key => "thekey"
    exchange => "theexchange"
    queue => "thequeue"
    user => "username"
    password => "password"
  }
}
Does anyone have any ideas for us to try, to figure out what's up with the tcp input plugin?
Thanks.

Related

ipfs pubsub not working across two remote peers

If I run this code on my PC, I see my message, but another person on a different PC running the same code doesn't see mine. Is this a NAT thing, perhaps? Or am I using this wrong?
const Room = require("ipfs-pubsub-room");
const ipfs = require("ipfs");

ipfs.create({}).then(async (node) => {
  const room = new Room(node, "room-name");

  room.on("peer joined", (peer) => {
    console.log("Peer joined the room", peer);
  });
  room.on("peer left", (peer) => {
    console.log("Peer left...", peer);
  });
  // now started to listen to room
  room.on("subscribed", () => {
    console.log("Now connected!");
  });
  room.on("message", ({ from, data }) =>
    console.log(from, JSON.parse(data.toString()))
  );

  room.broadcast(JSON.stringify({ bla: "hello" }));
});
Are you using this in Node.js or in the browser?
The problem is most likely that your peers are not connected to each other (or to a common set of peers running pubsub), so the pubsub messages are not reliably forwarded. The first step in diagnosing this is to use node.swarm.peers() and see whether you can find the other peer.
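A minimal sketch of that check (run inside the same async callback where node is in scope; the multiaddr below is only a placeholder for the other peer's real address):
// list the peers this node is currently connected to
const peers = await node.swarm.peers();
console.log("connected peers:", peers.length);

// if the other peer is not in that list, dial it manually with a multiaddr
// taken from `await node.id()` on the other machine (placeholder shown here)
await node.swarm.connect("/ip4/203.0.113.10/tcp/4001/p2p/QmOtherPeerId...");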
If your peer is not connected to the other person's node, you need to connect it manually, or configure a discovery mechanism to help you (automatic discovery and connectivity is a known pain point for the community, and we will be working on improving this experience).
The simplest option is to use the webrtc-star transport + discovery. You can see examples of it in the ipfs browser exchange-files example and with libp2p.
With this, you will likely see the other peers connected and then be able to exchange pubsub messages. You can read more about the available discovery mechanisms in the libp2p config.md and the libp2p discovery examples.
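For reference, a hedged configuration sketch with webrtc-star (the signalling server address below is one of the public examples and may change; in Node.js you would additionally need a WebRTC implementation such as wrtc wired in, while browser builds ship the transport):
const node = await ipfs.create({
  config: {
    Addresses: {
      Swarm: [
        // example public webrtc-star signalling + discovery server
        "/dns4/wrtc-star1.par.dwebops.pub/tcp/443/wss/p2p-webrtc-star"
      ]
    }
  }
});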
Let me know if you manage to get your issue fixed.

How to connect Searchkick (in a Rails app and/or Sidekiq job) to multiple Elasticsearch clusters without stomping on the global Searchkick config?

Upon startup, my app sets my (global?) Searchkick client to point at my default Elasticsearch cluster.
Searchkick.client = Elasticsearch::Client.new(
  hosts: default_cluster, # this is the list of hosts in my default cluster
  retry_on_failure: true,
)
However, I am upgrading my cluster (again), and while doing so I'd like my app's reads/searches,
/search?q="some term"
# =>
Model.search("some term")
to continue to work against the default_cluster.
Where it starts to get a bit tricky is that:
I'd also like (via some specific Sidekiq background jobs?) to fill an alternate (alt) cluster's index, something like:
Model.connect_to(alternate_cluster) { |client|
  Searchkick.client = client
  Model.reindex
}
Without causing all other background jobs to interact with the alternate cluster.
And, of course:
I'd like some way to verify that the alternate_cluster is working well (i.e. for search) before making it my default_cluster, presumably via some admin route:
/admin/search?q="some search term"&cluster=alternate
# =>
Model.connect_to(alternate_cluster) { |client|
  Searchkick.client = client
  Model.search("some term")
}
And finally:
I'd like to avoid having to reconnect before every search/reindex action, i.e. I'd prefer not to have the overhead of switching clients each time (also because that probably implies that long-running tasks which keep reconnecting to Searchkick would be swapping back and forth from one cluster to the other):
Model.search("some term")
# =>
Model.connect_to(alternate_cluster) { |client|
  Searchkick.client = client
  Model.search("some term")
}
^ I don't want that
FWIW, the best I've been able to come up with so far is something like:
def self.connect_to(current_cluster, &block)
  previous_es_client = Searchkick.client
  current_es_client = Elasticsearch::Client.new(
    hosts: current_cluster,
    retry_on_failure: true,
  )
  block.call(current_es_client)
rescue Exception => e
  logger.warn(e)
ensure
  Searchkick.client = previous_es_client
end
But I suspect that will cause every other interaction within my system (via the same web worker, or other background jobs running in the same background-worker instance) to (temporarily) point at the alternate cluster.
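One hedged refinement I've considered (a sketch only: the Mutex serialises swaps within a process but does not isolate other threads, since Searchkick.client is process-global, so it really only helps if the alternate-cluster jobs run in their own worker process/queue):
CLUSTER_SWAP_LOCK = Mutex.new

def self.connect_to(cluster_hosts)
  CLUSTER_SWAP_LOCK.synchronize do
    previous_es_client = Searchkick.client
    begin
      Searchkick.client = Elasticsearch::Client.new(
        hosts: cluster_hosts,
        retry_on_failure: true,
      )
      yield Searchkick.client
    ensure
      Searchkick.client = previous_es_client
    end
  end
end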
Thanks in advance for your assistance...

How to fix the 'Redis server error: socket error on read socket' error in MediaWiki

I am receiving the following error:
Redis server error: socket error on read socket
The full error is 'JobQueueError from line xxx of JobQueueRedis.php: Redis server error: socket error on read socket'.
I tried changing the persistent connection option to true:
$wgObjectCaches['redis'] = [
    'class' => 'RedisBagOStuff',
    'servers' => [ $redisserver ],
    'persistent' => true
];
In our case, the following errors would occur because our AWS ElastiCache Redis cluster was using 100% of its available memory:
"RedisException","message":"socket error on read socket","code":0
"RedisClusterException","message":"Error processing response from Redis node!"
Increasing the number of nodes and/or allowing more memory on the instance seemed to solve the problem, but it seems our cached values take a lot of memory and need to expire faster.
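A quick way to confirm that memory pressure before resizing (the <endpoint> below is a placeholder for the ElastiCache endpoint; note that on ElastiCache the maxmemory-policy is changed via a parameter group rather than CONFIG SET):
redis-cli -h <endpoint> info memory | grep -E 'used_memory_human|maxmemory_human|maxmemory_policy'
redis-cli -h <endpoint> info stats  | grep -E 'evicted_keys|expired_keys'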
For me, it was an error in the health-check script, which used the --no-auth-warning parameter. That option did not work with Redis version 4, so redis-cli --no-auth-warning -a password ping resulted in an error; after that, the script kept restarting the Redis server and the server showed the socket error.

Using logstash for email alert

I installed Logstash 5.5.2 on our Windows server, and I would like to send an email alert when I identify certain sentences.
My output section is the following:
output {
  tcp {
    host => "host.com"
    port => 1234
    codec => "json_lines"
  }
  if "The message was Server with id " in [log_message] {
    email {
      to => "<myName#company.com>"
      from => "<otherName#company.com>"
      subject => "Issue appearance"
      body => "The input is: %{incident}"
      domain => "smtp.intra.company.com"
      port => 25
      #via => "smtp"
    }
  }
}
During my debug I got the following messages:
[2017-09-11T13:19:39,181][ERROR][logstash.plugins.registry] Problems loading a plugin with {:type=>"output", :name=>"email", :path=>"logstash/outputs/email", :error_message=>"NameError", :error_class=>NameError
[2017-09-11T13:19:39,186][DEBUG][logstash.plugins.registry] Problems loading the plugin with {:type=>"output", :name=>"email"}
[2017-09-11T13:19:39,195][ERROR][logstash.agent ] Cannot create pipeline {:reason=>"Couldn't find any output plugin named 'email'. Are you sure this is correct? Trying to load the email output plugin resulted in this error: Problems loading the requested plugin named email of type output.
I guess this says that I don't have the email plugin installed.
Can someone suggest a way to fix this?
Using another solution is not an option, just in case someone suggests it.
Thanks and regards,
Fotis
I tried to follow the instructions in the official documentation,
but the option of creating an offline plugin pack didn't work.
So what I did was set up a running Logstash instance on my client machine, run the command to install the email output plugin (logstash-plugin install logstash-output-email), and then copy this instance to my server (which has no internet access).
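For reference, the documented offline-pack workflow I was attempting looks roughly like this (Logstash 5.x command names; paths are placeholders):
# on the machine with internet access
bin/logstash-plugin install logstash-output-email
bin/logstash-plugin prepare-offline-pack --output /tmp/logstash-offline-plugins.zip logstash-output-email

# copy the zip to the offline server, then install from the local file
bin/logstash-plugin install file:///tmp/logstash-offline-plugins.zip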

How to define the same service twice in Puppet?

In order to deploy Varnish with a Puppet class, I need to stop Varnish to move and deploy files, and then, at the end, ensure that Varnish is started.
My problem is simple: how can I define a service twice in a Puppet class, so as to stop or start the service at different steps?
class varnish::install (
  (...)
  service { "varnish":
    ensure  => "stopped",
    require => Package['varnish'],
    before  => Exec['mv-lib-varnish'],
  }
  (...)
  service { "varnish":
    ensure  => "running",
    require => File["$varnishncsa_file"],
  }
}
I get a Duplicate definition: Service[varnish] (...) error, which is logical...
What's the best practice for managing services in a Puppet class? Should I split this into multiple classes, or is there a way to "rename" a service so it can be declared several times?
Try the following to get rid of the duplicate-definition error, but note that what you are trying to do is wrong.
Puppet brings a system to a certain consistent state, so telling it to "stop service X, do some work, start service X" is out of scope for proper Puppet use; Puppet is more along the lines of "restart the service if some files on which the service depends were modified".
class varnish::install (
  (...)
  service { "varnish-stop":
    name    => "varnish",
    ensure  => "stopped",
    require => Package['varnish'],
    before  => Exec['mv-lib-varnish'],
  }
  (...)
  service { "varnish-start":
    name    => "varnish",
    ensure  => "running",
    require => File["$varnishncsa_file"],
  }
}
Use an exec with a service restart as a hook (notify) for the "deploy files" action (a package or another exec). Define the service itself only once, as running, because that is what you normally want to assure. Puppet is for describing the target state.
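A minimal sketch of that pattern (resource names, paths and the command are illustrative, not a drop-in module):
class varnish::deploy {
  package { 'varnish':
    ensure => installed,
  }

  # the "deploy files" step; runs only when something it subscribes to changes
  exec { 'mv-lib-varnish':
    command     => '/bin/mv /var/lib/varnish/old /var/lib/varnish/new',
    refreshonly => true,
    subscribe   => Package['varnish'],
    notify      => Service['varnish'],   # triggers a restart of the running service
  }

  # the service is declared once, in its target state
  service { 'varnish':
    ensure  => running,
    enable  => true,
    require => Package['varnish'],
  }
}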