Logstash output receives events from another input - Kubernetes

I have an issue where my Metricbeat traffic is caught by my http pipeline.
Logstash, Elasticsearch, and Metricbeat are all running in Kubernetes.
Metricbeat is set up to send to Logstash on port 5044, which logs to a file in /tmp. This works fine. But whenever I create a pipeline with an http input, it also seems to catch the Metricbeat events and send them to the test2 index in Elasticsearch, as defined in the http pipeline.
Why does it behave like this?
/usr/share/logstash/pipeline/http.conf
input {
  http {
    port => "8080"
  }
}
output {
  #stdout { codec => rubydebug }
  elasticsearch {
    hosts => ["http://my-host.com:9200"]
    index => "test2"
  }
}
/usr/share/logstash/pipeline/beats.conf
input {
  beats {
    port => "5044"
  }
}
output {
  file {
    path => '/tmp/beats.log'
    codec => "json"
  }
}
/usr/share/logstash/config/logstash.yml
pipeline.id: main
pipeline.workers: 1
pipeline.batch.size: 125
pipeline.batch.delay: 50
http.host: "0.0.0.0"
http.port: 9600
config.reload.automatic: true
config.reload.interval: 3s
/usr/share/logstash/config/pipeline.yml
- pipeline.id: main
  path.config: "/usr/share/logstash/pipeline"

Even if you have multiple config files, Logstash reads them as a single pipeline, concatenating the inputs, filters, and outputs. If you need to run them as separate pipelines, you have two options.
Change your pipelines.yml and create different pipeline.id entries, each one pointing to one of the config files:
- pipeline.id: beats
  path.config: "/usr/share/logstash/pipeline/beats.conf"
- pipeline.id: http
  path.config: "/usr/share/logstash/pipeline/http.conf"
Or you can use tags in your input, filter and output, for example:
input {
  http {
    port => "8080"
    tags => ["http"]
  }
  beats {
    port => "5044"
    tags => ["beats"]
  }
}
output {
  if "http" in [tags] {
    elasticsearch {
      hosts => ["http://my-host.com:9200"]
      index => "test2"
    }
  }
  if "beats" in [tags] {
    file {
      path => '/tmp/beats.log'
      codec => "json"
    }
  }
}
Using the pipelines.yml file is the recommended way to run multiple pipelines.

Related

Logstash tcp input

I would like to use the Logstash tcp input plugin in client mode to connect to a web application:
input {
  tcp {
    mode => "client"
    host => "194.167.200.22"
    port => 80
  }
}
output {
  stdout { codec => rubydebug }
}
It seems to connect to the server, but does not receive any data.
Could you help me?

Can Logstash's Google Cloud Storage output plugin resume transmission after a machine restart?

I'm using this config for Logstash's output. It uses /tmp/logstash-gcs as a local folder and sends files to GCS when they reach 1024 KB.
input {
  beats {
    port => 5044
  }
}
filter {}
output {
  google_cloud_storage {
    bucket => "gcs-bucket-name"
    json_key_file => "/secrets/service_account/credentials.json"
    temp_directory => "/tmp/logstash-gcs"
    log_file_prefix => "logstash_gcs"
    max_file_size_kbytes => 1024
    output_format => "json"
    date_pattern => "%Y-%m-%dT%H:00"
    flush_interval_secs => 2
    gzip => false
    gzip_content_encoding => false
    uploader_interval_secs => 60
    include_uuid => true
    include_hostname => true
  }
}
After a machine restart, can it continue the scheduled job without any data loss?
Does it have a queue feature to manage this, like Pub/Sub?
There are two possible solutions:
Use Filebeat to ship those tmp files again.
Set a grace period in seconds to ensure Logstash has enough time to send its last task when the machine goes down.
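Following the first option, a minimal Filebeat sketch that re-ships the files left in the GCS staging directory after a crash. The staging path comes from the question's temp_directory; the Logstash endpoint is an assumption:

```yaml
filebeat.inputs:
  - type: log
    # re-read the files Logstash staged for GCS but never uploaded
    # (path taken from temp_directory in the config above)
    paths:
      - /tmp/logstash-gcs/*
output.logstash:
  # assumed Logstash beats endpoint, matching the beats input on 5044
  hosts: ["localhost:5044"]
```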

How does a Jenkins declarative pipeline define arrays?

I want to define volumes and envVars, but I get an error:
pipeline {
  agent {
    kubernetes {
      idleMinutes 5
      workspaceVolume nfsWorkspaceVolume(readOnly: false, serverAddress: '127.0.0.1', serverPath: '/data/nfs-data/kubernetes/jenkins/agent')
      containerTemplate {
        name 'maven'
        image 'maven:3.3.9-jdk-8-alpine'
        ttyEnabled true
        command 'cat'
        envVars envVar( key: 'ENV', value: 'test')
      }
    }
  }
  stages {
    stage('Clone') {
      steps {
        script {
          sh 'pwd'
        }
      }
    }
  }
}
Running the pipeline script gives the following error:
java.lang.UnsupportedOperationException: no known implementation of interface java.util.List is using symbol ‘envVar’
at org.jenkinsci.plugins.structs.describable.DescribableModel.resolveClass(DescribableModel.java:570)
at org.jenkinsci.plugins.structs.describable.UninstantiatedDescribable.instantiate(UninstantiatedDescribable.java:207)
at org.jenkinsci.plugins.structs.describable.DescribableModel.coerce(DescribableModel.java:466)
at org.jenkinsci.plugins.structs.describable.DescribableModel.injectSetters(DescribableModel.java:429)
at org.jenkinsci.plugins.structs.describable.DescribableModel.instantiate(DescribableModel.java:331)
Caused: java.lang.IllegalArgumentException: Could not instantiate {image=maven:3.3.9-jdk-8-alpine, ttyEnabled=true, name=maven, envVars=#envVar(key=127.0.0.1,value=/data/nfs-data/kubernetes/jenkins/agent), command=cat} for org.csanchez.jenkins.plugins.kubernetes.ContainerTemplate
at org.jenkinsci.plugins.structs.describable.DescribableModel.instantiate(DescribableModel.java:334)
at org.jenkinsci.plugins.structs.describable.DescribableModel.coerce(DescribableModel.java:474)
at org.jenkinsci.plugins.structs.describable.DescribableModel.injectSetters(DescribableModel.java:429)
at org.jenkinsci.plugins.structs.describable.DescribableModel.instantiate(DescribableModel.java:331)
What should I do?
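The stack trace says the envVars value could not be coerced to a java.util.List. A sketch of a likely fix, wrapping the single envVar call in a list literal (this is inferred from the error message, not a confirmed answer from the thread):

```groovy
containerTemplate {
    name 'maven'
    image 'maven:3.3.9-jdk-8-alpine'
    ttyEnabled true
    command 'cat'
    // envVars is declared as a List, so pass the entries in brackets
    envVars([envVar(key: 'ENV', value: 'test')])
}
```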

Logstash not forwarding logs to ES via Kafka

I'm using ELK 5.0.1 and Kafka 0.10.1.0, and I'm not sure why my logs aren't being forwarded. I installed kafkacat and was able to successfully produce and consume logs on all 3 servers where the Kafka cluster is installed.
shipper.conf
input {
  file {
    start_position => "beginning"
    path => "/var/log/logstash/logstash-plain.log"
  }
}
output {
  kafka {
    topic_id => "stash"
    bootstrap_servers => "<i.p1>:9092,<i.p2>:9092,<i.p3>:9092"
  }
}
receiver.conf
input {
  kafka {
    topics => ["stash"]
    group_id => "stashlogs"
    bootstrap_servers => "<i.p1>:2181,<i.p2>:2181,<i.p3>:2181"
  }
}
output {
  elasticsearch {
    hosts => ["<eip>:9200","<eip>:9200","<eip>:9200"]
    manage_template => false
    index => "logstash-%{+YYYY.MM.dd}"
  }
}
Logs: Getting the below warnings in logstash-plain.log:
[2017-04-17T16:34:28,238][WARN ][org.apache.kafka.common.protocol.Errors] Unexpected error code: 38.
[2017-04-17T16:34:28,238][WARN ][org.apache.kafka.clients.NetworkClient] Error while fetching metadata with correlation id 44 : {stash=UNKNOWN}
It looks like your bootstrap servers are using ZooKeeper ports. Try using the Kafka broker ports instead (default 9092).
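Applied to the receiver.conf above, the kafka input's bootstrap_servers would point at the brokers on 9092 rather than ZooKeeper on 2181 (the <i.pN> placeholders are the asker's, left as-is):

```
input {
  kafka {
    topics => ["stash"]
    group_id => "stashlogs"
    # Kafka brokers listen on 9092 by default; 2181 is ZooKeeper
    bootstrap_servers => "<i.p1>:9092,<i.p2>:9092,<i.p3>:9092"
  }
}
```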

Reading from multiple topics in Apache Kafka

I'm trying to read from multiple kafka topics (say 'newtest-1' and 'newtest-2') using 'white_list' configuration in the logstash input plugin. My logstash conf looks like:
input { kafka { white_list => "newtest-1|newtest-2" } } output { stdout {codec => rubydebug } }
With this configuration I can successfully read from two different topics. But I want to use regex for input topics as I'm expecting the topics to be of the form 'newtest-*'. According to the suggestion in this link, the following configuration should work:
input { kafka { white_list => "newtest-*" } } output { stdout {codec => rubydebug } }
But with this I'm not able to read from kafka. Any help is appreciated.
The white_list should be newtest-.*
This answer applies to older versions of the plugin; newer versions let you use topics instead.
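For newer versions of the Kafka input plugin, a regex subscription can be sketched with topics_pattern instead of the deprecated white_list; the broker address here is an assumption:

```
input {
  kafka {
    # subscribe to every topic matching the regex
    topics_pattern => "newtest-.*"
    bootstrap_servers => "localhost:9092"
  }
}
output { stdout { codec => rubydebug } }
```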