Logstash is throwing exception "template file not found" - docker-compose

I'm trying to install the docker-elk stack using docker-compose. Elasticsearch and Kibana are working fine, but Logstash is not connecting to Elasticsearch and is throwing the error shown below. I'm installing this for the first time, so I don't have much knowledge about it.
logstash-5-6 | [2017-11-26T06:09:06,455][ERROR][logstash.outputs.elasticsearch] Failed to install template. {:message=>"Template file '' could not be found!", :class=>"ArgumentError", :backtrace=>["/usr/share/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-7.4.2-java/lib/logstash/outputs/elasticsearch/template_manager.rb:37:in `read_template_file'", "/usr/share/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-7.4.2-java/lib/logstash/outputs/elasticsearch/template_manager.rb:23:in `get_template'", "/usr/share/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-7.4.2-java/lib/logstash/outputs/elasticsearch/template_manager.rb:7:in `install_template'", "/usr/share/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-7.4.2-java/lib/logstash/outputs/elasticsearch/common.rb:58:in `install_template'", "/usr/share/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-7.4.2-java/lib/logstash/outputs/elasticsearch/common.rb:25:in `register'", "/usr/share/logstash/logstash-core/lib/logstash/output_delegator_strategies/shared.rb:9:in `register'", "/usr/share/logstash/logstash-core/lib/logstash/output_delegator.rb:43:in `register'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:290:in `register_plugin'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:301:in `register_plugins'", "org/jruby/RubyArray.java:1613:in `each'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:301:in `register_plugins'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:310:in `start_workers'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:235:in `run'", "/usr/share/logstash/logstash-core/lib/logstash/agent.rb:398:in `start_pipeline'"]}
logstash-5-6 | [2017-11-26T06:09:06,455][INFO ][logstash.outputs.elasticsearch] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["//elasticsearch-5-6:9201"]}
Logstash.conf
input {
  tcp {
    port => 5001
  }
}
## Add your filters / logstash plugins configuration here
output {
  elasticsearch {
    hosts => "localhost:9201"
  }
}

Providing a custom template and updating its path in the elasticsearch output plugin solved the issue.
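For illustration, a minimal sketch of what that output section might look like; the template path and name here are assumptions, not the original poster's actual values:

output {
  elasticsearch {
    hosts => "elasticsearch-5-6:9201"
    # Hypothetical path to a custom index template inside the Logstash container
    template => "/usr/share/logstash/templates/logstash.template.json"
    template_name => "logstash"
    template_overwrite => true
  }
}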

Related

Encoding issue when streaming logs from AWS Kinesis to ElasticSearch via Logstash

I've got an AWS Kinesis data stream called "otelpoc".
In Logstash, I'm using the Kinesis input plugin - see here.
My Logstash config is as follows:
input {
  kinesis {
    kinesis_stream_name => "otelpoc"
    region => "ap-southeast-2"
    codec => json { }
  }
}
output {
  elasticsearch {
    hosts => ["elasticsearch:9200"]
    index => "otelpoc-logstash-%{+YYYY.MM.dd}"
  }
}
I can put events to Kinesis using the AWS CLI as follows:
aws kinesis put-record --stream-name otelpoc --data file://payload.json --partition-key 1
payload.json looks like this:
{
  "message": "this is a test",
  "level": "error"
}
... but when I do this I see an error in Logstash as follows:
Received an event that has a different character encoding than you configured. {:text=>"\\x99\\xEB,j\\a\\xAD\\x86+\\\"\\xB1\\xAB^\\xB2\\xD9^\\xBD\\xE9^\\xAE\\xBA+", :expected_charset=>"UTF-8"}
Interestingly, the message still gets output to Elasticsearch and I can view it in Kibana.
Not sure what I should be doing with the character encoding. I've tried several things in Logstash but had no success, e.g. changing the codec in the kinesis input to something like the following:
codec => plain {
  charset => "UTF-8"
}
... but no luck. I tried to decode the encoded text in a few online decoders, but I'm not really sure what I'm trying to decode from. Anyone able to help?
EDIT: using v6.7.1 of ELK stack, which is quite old, but I don't think this is the issue...
I never resolved this when publishing messages to Kinesis using the AWS CLI, but for my specific use case I was trying to send logs to Kinesis using the awskinesis exporter for the OpenTelemetry (OTEL) collector agent - see here.
If I used the otlp_json encoding, it worked, e.g.
awskinesis:
  aws:
    stream_name: otelpoc
    region: ap-southeast-2
  encoding:
    name: otlp_json
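For context, a minimal sketch of where this exporter sits in a full collector configuration might look like the following; the otlp receiver and the pipeline wiring are assumptions for illustration, not part of the original post:

receivers:
  otlp:
    protocols:
      grpc:
exporters:
  awskinesis:
    aws:
      stream_name: otelpoc
      region: ap-southeast-2
    encoding:
      name: otlp_json
service:
  pipelines:
    logs:
      receivers: [otlp]
      exporters: [awskinesis]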

Using logstash for email alert

I installed Logstash 5.5.2 on our Windows server and I would like to send an email alert when I identify certain sentences.
My output section is the following:
output {
  tcp {
    host => "host.com"
    port => 1234
    codec => "json_lines"
  }
  if "The message was Server with id " in [log_message] {
    email {
      to => "<myName#company.com>"
      from => "<otherName#company.com>"
      subject => "Issue appearance"
      body => "The input is: %{incident}"
      domain => "smtp.intra.company.com"
      port => 25
      #via => "smtp"
    }
  }
}
During my debug I got the following messages:
[2017-09-11T13:19:39,181][ERROR][logstash.plugins.registry] Problems loading a plugin with {:type=>"output", :name=>"email", :path=>"logstash/outputs/email", :error_message=>"NameError", :error_class=>NameError
[2017-09-11T13:19:39,186][DEBUG][logstash.plugins.registry] Problems loading the plugin with {:type=>"output", :name=>"email"}
[2017-09-11T13:19:39,195][ERROR][logstash.agent ] Cannot create pipeline {:reason=>"Couldn't find any output plugin named 'email'. Are you sure this is correct? Trying to load the email output plugin resulted in this error: Problems loading the requested plugin named email of type output.
Which I guess says that I don't have the email plugin installed.
Can someone suggest a way to fix this?
Using another solution is not an option, just in case that someone suggests it.
Thanks and regards,
Fotis
I tried to follow the instructions in the official documentation, but the option of creating an offline plugin pack didn't work.
So what I did was create a running Logstash instance on my client, run the command to install the email output plugin (logstash-plugin install logstash-output-email), and afterwards copy this instance to my server (which had no internet access).
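For reference, the documented offline-pack approach (the one that didn't work for me here) looks roughly like this; the zip path is just an example:

# On a machine with internet access, build the pack
bin/logstash-plugin prepare-offline-pack --output /tmp/logstash-offline-plugins.zip logstash-output-email
# Copy the zip to the offline server, then install from it
bin/logstash-plugin install file:///tmp/logstash-offline-plugins.zip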

Creating PostgreSQL DataSource via pax-jdbc config file on karaf 4

On my Karaf 4.0.8 I've installed the feature pax-jdbc-postgresql. The DataSourceFactory for PostgreSQL is installed:
[org.osgi.service.jdbc.DataSourceFactory]
osgi.jdbc.driver.class org.postgresql.Driver
osgi.jdbc.driver.name PostgreSQL JDBC Driver
osgi.jdbc.driver.version PostgreSQL 9.4 JDBC4.1 (build 1203)
service.bundleid 204
service.scope singleton
Using Bundles com.eclipsesource.jaxrs.publisher (184)
I've created the file etc/org.ops4j.datasource-psql-sandbox.cfg:
osgi.jdbc.driver.class=org.postgresql.Driver
osgi.jdbc.driver.name=PostgreSQL
url=jdbc:postgresql://localhost:5432/sandbox
dataSourceName=psql-sandbox
user=sandbox
password=sandbox
After that, I see the confirmation in karaf.log that the file was processed:
2017-02-10 14:54:17,468 | INFO | 41-88b277ae0921) | DataSourceRegistration | 154 - org.ops4j.pax.jdbc.config - 0.9.0 | Detected config for DataSource psql-sandbox. Tracking DSF with filter (&(objectClass=org.osgi.service.jdbc.DataSourceFactory)(osgi.jdbc.driver.class=org.postgresql.Driver)(osgi.jdbc.driver.name=PostgreSQL))
However, I see no new DataSource in the services list in the console. What went wrong? I see no exceptions in the log...
The log message tells you that the config was processed and that pax-jdbc-config is now searching for a suitable DataSourceFactory OSGi service.
The problem in your case is that it does not find such a service. To debug this, list all DataSourceFactory services and check their properties:
service:list DataSourceFactory
In my case it shows this:
[org.osgi.service.jdbc.DataSourceFactory]
-----------------------------------------
osgi.jdbc.driver.class = org.postgresql.Driver
osgi.jdbc.driver.name = PostgreSQL JDBC Driver
...
As you can see, it does not match the filter from the log. Generally you should provide either osgi.jdbc.driver.class or osgi.jdbc.driver.name, not both. If you remove the osgi.jdbc.driver.name line, the config will work (see the sketch below).
There is no error message because the system cannot know whether the error is transient or not. Basically, as soon as you install a matching OSGi service, the DataSource will be created.
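A minimal sketch of the corrected etc/org.ops4j.datasource-psql-sandbox.cfg, keeping the same values as above and simply dropping the driver.name line:

osgi.jdbc.driver.class=org.postgresql.Driver
url=jdbc:postgresql://localhost:5432/sandbox
dataSourceName=psql-sandbox
user=sandbox
password=sandbox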

Unsupported http.type [netty3] when trying to start embedded elasticsearch node

I know that embedded Elasticsearch is not recommended. I'm just trying it for testing purposes.
I'm trying to start an embedded Elasticsearch node, giving it the configuration from the following elasticsearch.yml:
# Name for the cluster
cluster.name: elasticsearch
# Name for the embedded node
node.name: EmbeddedESNode
# Path to log files:
path.logs: logs
discovery.zen.ping.unicast.hosts: []
# Disable dynamic scripting
script.inline: false
script.stored: false
script.file: false
transport.type: local
http.type: netty3
I'm using ES 5.1.1 and my code to start the embedded node is as follows.
try {
    Settings elasticsearchSetting = Settings.builder()
            // Value for path.home is required for ES but will not be used as long as the other properties
            // path.logs, path.data and path.conf are set.
            .put(ES_PROPERTY_PATH_HOME, "nullpath")
            .put(ES_PROPERTY_PATH_CONF, confDir)
            .build();
    Node node = new Node(elasticsearchSetting).start();
    logger.info("Embedded Elasticsearch server successfully started ...");
} catch (Exception e) {
    throw e;
}
I get the following trace.
java.lang.IllegalStateException: Unsupported http.type [netty3]
at org.elasticsearch.common.network.NetworkModule.getHttpServerTransportSupplier(NetworkModule.java:194) ~[elasticsearch-5.1.1.jar:5.1.1]
at org.elasticsearch.node.Node.<init>(Node.java:396) ~[elasticsearch-5.1.1.jar:5.1.1]
at org.elasticsearch.node.Node.<init>(Node.java:229) ~[elasticsearch-5.1.1.jar:5.1.1]
at org.elasticsearch.node.Node.<init>(Node.java:225) ~[elasticsearch-5.1.1.jar:5.1.1]
... 18 more
I've tried with http.type: netty4 as well, but no luck so far. It works when http.enabled: false is set, but I want to use the HTTP REST API for testing.
P.S.:
I've been using this elasticsearch-hadoop class for reference in implementing the embedded ES, and unfortunately I couldn't find any docs on http.type.
Can't I start an embedded node with HTTP now in ES 5.x?
What am I doing wrong here?
Any help is highly appreciated.
As mentioned by Bastian, the issue was the transport module not being on the classpath. The solution was already there in the es-hadoop embedded ES implementation.
private static class PluginConfigurableNode extends Node {
    public PluginConfigurableNode(Settings settings, Collection<Class<? extends Plugin>> classpathPlugins) {
        super(InternalSettingsPreparer.prepareEnvironment(settings, null), classpathPlugins);
    }
}
We can give netty3 as a plugin as follows. Then everything works well.
Collection<Class<? extends Plugin>> plugins = Arrays.asList(Netty3Plugin.class);
Node node = new PluginConfigurableNode(elasticsearchSetting, plugins).start();
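Putting the pieces together, a minimal self-contained sketch might look like this; the settings values, paths, and class name are illustrative assumptions, not the original code, and Netty3Plugin comes from the netty3 transport module, which must also be on the classpath:

import java.util.Arrays;
import java.util.Collection;

import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.node.InternalSettingsPreparer;
import org.elasticsearch.node.Node;
import org.elasticsearch.plugins.Plugin;
import org.elasticsearch.transport.Netty3Plugin;

public class EmbeddedEsSketch {

    // Same trick as above: expose the protected Node constructor that accepts classpath plugins
    private static class PluginConfigurableNode extends Node {
        PluginConfigurableNode(Settings settings, Collection<Class<? extends Plugin>> classpathPlugins) {
            super(InternalSettingsPreparer.prepareEnvironment(settings, null), classpathPlugins);
        }
    }

    public static void main(String[] args) throws Exception {
        // Assumed settings for illustration; path.home just needs to point at a writable directory
        Settings settings = Settings.builder()
                .put("path.home", "target/es-home")
                .put("transport.type", "local")
                .put("http.type", "netty3")
                .build();

        Collection<Class<? extends Plugin>> plugins = Arrays.asList(Netty3Plugin.class);
        Node node = new PluginConfigurableNode(settings, plugins).start();

        // ... exercise the HTTP REST API on http://localhost:9200 for tests ...

        node.close();
    }
}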
You need to add the module transport-netty4 to your classpath:
<dependency>
    <groupId>org.elasticsearch.plugin</groupId>
    <artifactId>transport-netty4-client</artifactId>
    <version>5.1.1</version>
    <scope>test</scope>
</dependency>
See also my answer here
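If you instead stay with http.type: netty3 as in the accepted answer, the netty3 variant of the module has to be on the classpath; assuming it is published under the same group as above, the dependency would presumably look like:

<dependency>
    <groupId>org.elasticsearch.plugin</groupId>
    <artifactId>transport-netty3-client</artifactId>
    <version>5.1.1</version>
    <scope>test</scope>
</dependency>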

Cloud foundry on Google Compute engine can't create container

I am very new to Cloud Foundry. I have added Cloud Foundry on the Google Compute Engine platform following these guides: source1 and source2.
Terraform was used for creating the needed infrastructure. It seemed all was fine: I didn't get any errors while deploying Cloud Foundry itself, and the bosh cck command reports that there are no problems. But when I tried to deploy my hello world app, I got the following error message in the terminal after the cf push command:
Creating container
Failed to create container
FAILED
Error restarting application: StagingError.
After checking the log files I found the following messages:
{
"timestamp":"1474637304.026303530",
"source":"garden-linux",
"message":"garden-linux.loop-mounter.mount-file.mounting",
"log_level":2,
"data":{
"destPath":"/var/vcap/data/garden/aufs_graph/aufs/diff/08829a3252c1d60729e3b5482b0fb109652c9ab5beff9724e4e4ae756a0bc3ce",
"error":"exit status 32",
"filePath":"/var/vcap/data/garden/aufs_graph/backing_stores/08829a3252c1d60729e3b5482b0fb109652c9ab5beff9724e4e4ae756a0bc3ce",
"output":"mount: wrong fs type, bad option, bad superblock on /dev/loop0,\n missing codepage or helper program, or other error\n In some cases useful info is found in syslog - try\n dmesg | tail or so\n\n",
"session":"2.276"
}
}{
"timestamp":"1474637304.026949406",
"source":"garden-linux",
"message":"garden-linux.pool.acquire.provide-rootfs-failed",
"log_level":2,
"data":{
"error":"mounting file: mounting file: exit status 32",
"handle":"ec6e7469-0ef0-48a8-bcd0-82f4a2ea173f-5de2e641d9284aeea209ca447ffffb6d",
"session":"9.545"
}
}
{
"timestamp":"1474637304.027062416",
"source":"garden-linux",
"message":"garden-linux.garden-server.create.failed",
"log_level":2,
"data":{
"error":"mounting file: mounting file: exit status 32",
"request":{
"Handle":"ec6e7469-0ef0-48a8-bcd0-82f4a2ea173f-5de2e641d9284aeea209ca447ffffb6d",
"GraceTime":0,
"RootFSPath":"/var/vcap/packages/rootfs_cflinuxfs2/rootfs",
"BindMounts":[
{
"src_path":"/var/vcap/data/executor_cache/6942123d3462ad9d21a45729c3cae183-1474475979582384649-1.d",
"dst_path":"/tmp/lifecycle"
}
],
"Network":"",
"Privileged":true,
"Limits":{
"bandwidth_limits":{
},
"cpu_limits":{
"limit_in_shares":512
},
"disk_limits":{
"inode_hard":200000,
"byte_hard":6442450944,
"scope":1
},
"memory_limits":{
"limit_in_bytes":1073741824
}
}
},
"session":"11.44187"
}
}{
"timestamp":"1474637304.034646988",
"source":"garden-linux",
"message":"garden-linux.garden-server.destroy.failed",
"log_level":2,
"data":{
"error":"unknown handle: ec6e7469-0ef0-48a8-bcd0-82f4a2ea173f-5de2e641d9284aeea209ca447ffffb6d",
"handle":"ec6e7469-0ef0-48a8-bcd0-82f4a2ea173f-5de2e641d9284aeea209ca447ffffb6d",
"session":"11.44188"
}
}
And meanwhile in dmesg | tail I got the following:
[161023.238082] aufs test_add:283:garden-linux[7681]: uid/gid/perm
/var/vcap/data/garden/aufs_graph/aufs/diff/d350dcd30f6d6f8b37eabe06a3b73bcea0a87f9aff4edf15f12792269fc9f97c
4294967294/4294967294/0755, 0/0/0755 [161023.238109] aufs
au_opts_verify:1597:garden-linux[7681]: dirperm1 breaks the protection
by the permission bits on the lower branch [161023.413392] device
wtj3qdqhig0t-0 entered promiscuous mode
I'm not sure whether these issues are connected or whether they are issues at all, but I post them here to be sure I didn't miss anything.
I don't know how to fix this problem or where to look for a solution: in the Terraform scripts or in the BOSH manifest files. We have a microservice architecture with three Node.js services and one Ruby service, so deployment is a very important question for us.
Here is my application manifest.yml file:
---
applications:
- name: hello_cloud
  memory: 128M
  buildpack: https://github.com/cloudfoundry/nodejs-buildpack
  instances: 1
  random-route: true
  command: "node server.js"
My goal is to be able to deploy applications using Cloud Foundry. If you have any additional questions or if I wrote something unclear, feel free to write me.
This issue is related to a conflict between Garden and the 4.4 Linux kernel. To use the example Cloud Foundry manifest, use the following stemcell:
bosh upload stemcell https://bosh.io/d/stemcells/bosh-google-kvm-ubuntu-trusty-go_agent?v=3262.19
bosh deploy
You may need to delete your cf deployment before re-deploying due to quota issues.
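If the deployment manifest pins stemcell versions per resource pool (BOSH v1-style manifests), the pinning might look roughly like this; the pool and network names are placeholders, not taken from the example manifest:

resource_pools:
- name: default
  network: default
  stemcell:
    name: bosh-google-kvm-ubuntu-trusty-go_agent
    version: "3262.19"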