I would like to use the Logstash tcp input plugin in client mode to connect to a web application:
input {
  tcp {
    mode => "client"
    host => "194.167.200.22"
    port => 80
  }
}
output {
  stdout { codec => rubydebug }
}
It seems to connect to the server, but it does not receive any data.
Could you help me?
I need your help. I need to connect to AWS Aurora PostgreSQL using Liquibase. It is already configured for my local machine and works fine, but I'm having issues with the SSH configuration for it.
I'm using id 'org.hidetake.ssh' version '2.10.1' and id 'org.liquibase.gradle' version '2.0.4'.
I'm able to run commands directly on the host machine, for example getting the date with execute('date') below, but I have no idea why Liquibase fails with:
Unexpected error running Liquibase: liquibase.exception.DatabaseException: liquibase.exception.DatabaseException: Connection could not be created to jdbc:postgresql://xxxx.rds.amazonaws.com:5432/postgres with driver org.postgresql.Driver. The connection attempt failed.
Here is my build.gradle configuration:
ssh.settings {
    knownHosts = allowAnyHosts
    logging = 'stdout'
    identity = file("${System.properties['user.home']}/myfolder/.ssh/id_rsa")
}
remotes {
    dev {
        host = 'xxx.xxx.xxx.xxx'
        port = 22
        user = 'ec2-user'
        identity = file("${System.properties['user.home']}/myfolder/.ssh/id_rsa")
    }
}
ssh.run {
    session(remotes.dev) {
        forwardLocalPort port: 5432, hostPort: 5432
        execute('date')
        liquibase {
            activities {
                main {
                    //changeLogFile changeLog
                    url 'jdbc:postgresql://xxxx.rds.amazonaws.com:5432/postgres'
                    username feedSqlUserDev
                    password feedSqlUserPasswordDev
                    logLevel 'debug'
                }
            }
        }
    }
}
Could you please help me with this? What am I doing wrong?
I also had to connect to an SSH bastion host before running Liquibase updates. My solution is based on the answer by the plugin author in https://github.com/int128/gradle-ssh-plugin/issues/246.
Here is my setup:
// java.util.concurrent classes are not auto-imported in a Gradle build script
import java.util.concurrent.CountDownLatch
import java.util.concurrent.TimeUnit

ssh.settings {
    knownHosts = allowAnyHosts
    logging = 'stdout'
    identity = file("${System.properties['user.home']}/.ssh/id_rsa")
}
remotes {
    bastion {
        host = '<hostname>'
        user = '<username>'
    }
}
liquibase {
    activities {
        main {
            changeLogFile '...'
            url 'jdbc:postgresql://localhost:5438/***'
            username '***'
            password '***'
            driver 'org.postgresql.Driver'
        }
    }
}
task('sshTunnelStart') {
    doFirst {
        project.ext.ready = new CountDownLatch(1)
        project.ext.done = new CountDownLatch(1)
        Thread.start {
            ssh.run {
                session(remotes.bastion) {
                    forwardLocalPort port: 5438,
                                     host: '<real db hostname>',
                                     hostPort: 5432
                    project.ready.countDown()
                    project.done.await(5, TimeUnit.MINUTES) // liquibase update timeout
                }
            }
        }
        ready.await(10, TimeUnit.SECONDS) // start tunnel timeout
    }
}
task('sshTunnelStop') {
    doLast {
        // teardown tunnel
        project.done.countDown()
    }
}
update.dependsOn(sshTunnelStart)
update.finalizedBy(sshTunnelStop)
Note that in the Liquibase config I use localhost:5438, because that is the local port forwarded to the remote. The same port is used in forwardLocalPort as the 'port' parameter, the 'host' parameter is set to the remote database host, and 'hostPort' is accordingly the database port. The last part of the config adds task dependencies so that the Liquibase update starts the tunnel first and tears it down afterwards.
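Applied to the question above, the same pattern would mean forwarding a local port to the RDS endpoint inside the background session and pointing Liquibase at localhost rather than the RDS hostname. A partial sketch (not tested) using the question's placeholders, with the liquibase block kept at the top level of build.gradle instead of inside ssh.run:

// inside the session(remotes.dev) block started from sshTunnelStart:
forwardLocalPort port: 5432,                      // local end of the tunnel, used by Liquibase
                 host: 'xxxx.rds.amazonaws.com',  // RDS endpoint reachable from the bastion
                 hostPort: 5432                   // remote PostgreSQL port

// top-level liquibase configuration, connecting through the tunnel:
liquibase {
    activities {
        main {
            url 'jdbc:postgresql://localhost:5432/postgres'
            username feedSqlUserDev
            password feedSqlUserPasswordDev
            driver 'org.postgresql.Driver'
        }
    }
}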
I'm using this config for Logstash's output. It uses /tmp/logstash-gcs as a local folder and sends files to GCS when they reach 1024 kB.
input {
  beats {
    port => 5044
  }
}
filter {}
output {
  google_cloud_storage {
    bucket => "gcs-bucket-name"
    json_key_file => "/secrets/service_account/credentials.json"
    temp_directory => "/tmp/logstash-gcs"
    log_file_prefix => "logstash_gcs"
    max_file_size_kbytes => 1024
    output_format => "json"
    date_pattern => "%Y-%m-%dT%H:00"
    flush_interval_secs => 2
    gzip => false
    gzip_content_encoding => false
    uploader_interval_secs => 60
    include_uuid => true
    include_hostname => true
  }
}
After the machine restarts, can it continue the scheduled uploads without any data loss?
Does it have a queue feature, like Pub/Sub, to manage this?
As for a solution, there are two ways.
Use Filebeat to ship those temp files again (see the sketch below).
Set a grace period in seconds to ensure Logstash has enough time to send the last batch when the machine goes down.
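For the Filebeat option, a minimal sketch of filebeat.yml, assuming Filebeat runs on the same machine and re-ships the temp files to the existing beats input on port 5044; the paths follow the temp_directory and log_file_prefix above, everything else is an assumption:

filebeat.inputs:
- type: log
  paths:
    - /tmp/logstash-gcs/logstash_gcs*   # temp files written by the google_cloud_storage output
output.logstash:
  hosts: ["localhost:5044"]             # assumed: the Logstash beats input shown above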
I have an issue where my Metricbeat data is caught by my http pipeline.
Logstash, Elasticsearch, and Metricbeat are all running in Kubernetes.
Metricbeat is set up to send to Logstash on port 5044, and Logstash writes it to a file in /tmp; this works fine. But whenever I create a pipeline with an http input, it also seems to catch the Metricbeat input and send it to Elasticsearch, into the index defined in the http pipeline (test2).
Why does it behave like this?
/usr/share/logstash/pipeline/http.conf
input {
  http {
    port => "8080"
  }
}
output {
  #stdout { codec => rubydebug }
  elasticsearch {
    hosts => ["http://my-host.com:9200"]
    index => "test2"
  }
}
/usr/share/logstash/pipeline/beats.conf
input {
  beats {
    port => "5044"
  }
}
output {
  file {
    path => '/tmp/beats.log'
    codec => "json"
  }
}
/usr/share/logstash/config/logstash.yml
pipeline.id: main
pipeline.workers: 1
pipeline.batch.size: 125
pipeline.batch.delay: 50
http.host: "0.0.0.0"
http.port: 9600
config.reload.automatic: true
config.reload.interval: 3s
/usr/share/logstash/config/pipelines.yml
- pipeline.id: main
  path.config: "/usr/share/logstash/pipeline"
Even if you have multiple config files, they are read as a single pipeline by Logstash, which concatenates the inputs, filters, and outputs. If you need to run them as separate pipelines, you have two options.
Change your pipelines.yml and create different pipeline.ids, each one pointing to one of the config files:
- pipeline.id: beats
  path.config: "/usr/share/logstash/pipeline/beats.conf"
- pipeline.id: http
  path.config: "/usr/share/logstash/pipeline/http.conf"
Or you can use tags in your input, filter and output, for example:
input {
  http {
    port => "8080"
    tags => ["http"]
  }
  beats {
    port => "5044"
    tags => ["beats"]
  }
}
output {
  if "http" in [tags] {
    elasticsearch {
      hosts => ["http://my-host.com:9200"]
      index => "test2"
    }
  }
  if "beats" in [tags] {
    file {
      path => '/tmp/beats.log'
      codec => "json"
    }
  }
}
Using the pipelines.yml file is the recommended way to run multiple pipelines.
I'm new to Akka and wanted to connect two PCs using Akka remoting, just to run some code on both as two actors. I tried the example in the Akka docs, but as soon as I add the two IP addresses to the config file I always get this error.
The first machine gives me this error:
[info] [ERROR] [11/20/2018 13:58:48.833] [ClusterSystem-akka.remote.default-remote-dispatcher-6] [akka.remote.artery.Association(akka://ClusterSystem)] Outbound control stream to [akka://ClusterSystem@192.168.1.2:2552] failed. Restarting it. Handshake with [akka://ClusterSystem@192.168.1.2:2552] did not complete within 20000 ms (akka.remote.artery.OutboundHandshake$HandshakeTimeoutException: Handshake with [akka://ClusterSystem@192.168.1.2:2552] did not complete within 20000 ms)
And the second machine gives:
Exception in thread "main" akka.remote.RemoteTransportException: Failed to bind TCP to [192.168.1.3:2552] due to: Bind failed because of java.net.BindException: Cannot assign requested address: bind
Config file content:
akka {
  actor {
    provider = cluster
  }
  remote {
    artery {
      enabled = on
      transport = tcp
      canonical.hostname = "192.168.1.3"
      canonical.port = 0
    }
  }
  cluster {
    seed-nodes = [
      "akka://ClusterSystem@192.168.1.3:2552",
      "akka://ClusterSystem@192.168.1.2:2552"]
    # auto downing is NOT safe for production deployments.
    # you may want to use it during development, read more about it in the docs.
    auto-down-unreachable-after = 120s
  }
}

# Enable metrics extension in akka-cluster-metrics.
akka.extensions = ["akka.cluster.metrics.ClusterMetricsExtension"]

# Sigar native library extract location during tests.
# Note: use per-jvm-instance folder when running multiple jvm on one host.
akka.cluster.metrics.native-library-extract-folder = ${user.dir}/target/native
First of all, you don't need cluster configuration for Akka remoting. Remoting should be enabled on both PCs (nodes) with a concrete port instead of "0"; that way you know which port to connect to.
Use the configurations below.
PC1
akka {
  actor {
    provider = remote
  }
  remote {
    artery {
      enabled = on
      transport = tcp
      canonical.hostname = "192.168.1.3"
      canonical.port = 19000
    }
  }
}
PC2
akka {
  actor {
    provider = remote
  }
  remote {
    artery {
      enabled = on
      transport = tcp
      canonical.hostname = "192.168.1.4"
      canonical.port = 18000
    }
  }
}
Use the actor path below to connect to any actor deployed on PC2 from PC1:
akka://<PC2-ActorSystem>@192.168.1.4:18000/user/<actor deployed in PC2>
Use the actor path below to connect from PC2 to PC1:
akka://<PC1-ActorSystem>@192.168.1.3:19000/user/<actor deployed in PC1>
The port numbers and IP addresses are samples.
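As an illustration only (this is not part of the original answer), looking up a remote actor via such a path could look like the Scala sketch below; the system name PC2System and the actor name worker are hypothetical, and the PC1 remoting config shown above is assumed to be on the classpath as application.conf:

import akka.actor.ActorSystem

object RemoteLookup extends App {
  // local actor system on PC1 (picks up the artery config from application.conf)
  val system = ActorSystem("PC1System")
  // hypothetical remote actor "worker" running in PC2's actor system
  val selection = system.actorSelection("akka://PC2System@192.168.1.4:18000/user/worker")
  // fire-and-forget message to the remote actor
  selection ! "ping"
}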
I'm using ELK 5.0.1 and Kafka 0.10.1.0. I'm not sure why my logs aren't being forwarded. I installed kafkacat and was successfully able to produce and consume messages from all 3 servers where the Kafka cluster is installed.
shipper.conf
input {
  file {
    start_position => "beginning"
    path => "/var/log/logstash/logstash-plain.log"
  }
}
output {
  kafka {
    topic_id => "stash"
    bootstrap_servers => "<i.p1>:9092,<i.p2>:9092,<i.p3>:9092"
  }
}
receiver.conf
input {
  kafka {
    topics => ["stash"]
    group_id => "stashlogs"
    bootstrap_servers => "<i.p1>:2181,<i.p2>:2181,<i.p3>:2181"
  }
}
output {
  elasticsearch {
    hosts => ["<eip>:9200","<eip>:9200","<eip>:9200"]
    manage_template => false
    index => "logstash-%{+YYYY.MM.dd}"
  }
}
Logs: I'm getting the warnings below in logstash-plain.log:
[2017-04-17T16:34:28,238][WARN ][org.apache.kafka.common.protocol.Errors] Unexpected error code: 38.
[2017-04-17T16:34:28,238][WARN ][org.apache.kafka.clients.NetworkClient] Error while fetching metadata with correlation id 44 : {stash=UNKNOWN}
It looks like your bootstrap servers are using ZooKeeper ports. Try using the Kafka ports (default 9092) instead.
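For example, the kafka input in receiver.conf from the question would point at the broker port rather than the ZooKeeper port (keeping the question's placeholder IPs):

input {
  kafka {
    topics => ["stash"]
    group_id => "stashlogs"
    bootstrap_servers => "<i.p1>:9092,<i.p2>:9092,<i.p3>:9092"
  }
}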