How to use fluentd output_kafka plugin? - apache-kafka

I have installed the output_kafka plugin for Fluentd with the command "gem install fluent-plugin-kafka",
but when I start the fluentd service, I get the following error message in the log file:
2012-11-09 18:18:39 +0800: temporarily failed to flush the buffer, next retry will be at 2012-11-09 18:52:46 +0800. error="uninitialized constant Kafka::Message" instance=69952455476860
It seems that output_kafka.rb cannot find the Kafka module or the Message class. How can I fix it?

This is Kaz, a Fluentd committer. This sounds like a bug in fluent-plugin-kafka. Could you report it here, with a description of your environment?
https://github.com/kiyoto/fluent-plugin-kafka/issues
Once you've reported it, I'll take a look. Thanks -K
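P.S. In the meantime, it's worth double-checking that your match section follows the form the plugin expects. A minimal sketch (the tag pattern, broker address and topic below are placeholders, and parameter names can differ between plugin versions, so check the plugin README):
<match my.logs.**>
  # placeholder values; verify parameter names against your installed version
  type kafka
  brokers localhost:9092
  default_topic test-topic
</match>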

Related

While starting Bro, I get the error "error occurred while trying to send mail: send-mail: SENDMAIL-NOTFOUND not found"

I've installed the Bro IDS, but when I try to start the service I get this error:
Error: error occurred while trying to send mail: send-mail: SENDMAIL-NOTFOUND not found
starting ...
starting bro ...
bro terminated immediately after starting; check output with "diag"
I've already run broctl install and broctl update, but I still get the same error.
Kindly help
I've checked the configuration file, i.e. node.cfg under /nsm/bro/etc, and changed the default interface eth0 to my system's interface.
Now Bro has started.
Bro can run without sendmail present, but you may have been hit by a bug in Bro where it fails to include the sendmail location in the generated config files. While that bug was supposedly fixed, I've also seen the problem in Bro 2.5. An easy fix is to do as suggested in the bug report: add SendMail = /usr/sbin/sendmail to /usr/local/bro/etc/broctl.cfg and then rerun the deployment command:
sudo /usr/local/bro/bin/broctl deploy
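For reference, the addition to /usr/local/bro/etc/broctl.cfg is just the single line from the bug report:
# point broctl explicitly at the sendmail binary
SendMail = /usr/sbin/sendmail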

How to resolve undef error in ejabberd hook

I have added a custom module named mod_confirm_delivery to ejabberd. It compiled and loaded successfully, but when I send a message the following error appears in my ejabberd error log file:
2016-03-15 17:03:38.306 [error] <0.2653.0>#ejabberd_hooks:run_fold1:368 {undef,[{mod_confirm_delivery,send_packet,[{xmlel,<<"iq">>,[{<<"xml:lang">>,<<"en">>},{<<"type">>,<<"get">>},{<<"id">>,<<"aacfa">>}],[{xmlcdata,<<"\n">>},{xmlel,<<"query">>,[{<<"xmlns">>,<<"jabber:iq:roster">>}],[]},{xmlcdata,<<"\n">>}]},{state,{socket_state,gen_tcp,#Port<0.58993>,<0.2652.0>},ejabberd_socket,#Ref<0.0.1.25301>,false,<<"12664578908237388886">>,undefined,c2s,c2s_shaper,false,false,false,false,[verify_none,compression_none],true,{jid,<<"test1">>,<<"localhost">>,<<"D-5">>,<<"test1">>,<<"localhost">>,<<"D-5">>},<<"test1">>,<<"localhost">>,<<"D-5">>,{{1458,41617,630679},<0.2653.0>},{2,{{<<"test2">>,<<"localhost">>,<<>>},{{<<"test1">>,<<"localhost">>,<<>>},nil,nil},nil}},{2,{{<<"test2">>,<<"localhost">>,<<>>},{{<<"test1">>,<<"localhost">>,<<>>},nil,nil},nil}},{0,nil},undefined,undefined,{userlist,none,[],false},c2s,ejabberd_auth_internal,{{127,0,0,1},41928},[],active,[],inactive,undefined,undefined,1000,undefined,300,300,false,0,0,true,<<"en">>},{jid,<<"test1">>,<<"localhost">>,<<"D-5">>,<<"test1">>,<<"localhost">>,<<"D-5">>},{jid,<<"test1">>,<<"localhost">>,<<>>,<<"test1">>,<<"localhost">>,<<>>}],[]},{ejabberd_hooks,safe_apply,3,[{file,"src/ejabberd_hooks.erl"},{line,382}]},{ejabberd_hooks,run_fold1,4,[{file,"src/ejabberd_hooks.erl"},{line,365}]},{ejabberd_c2s,session_established2,2,[{file,"src/ejabberd_c2s.erl"},{line,1268}]},{p1_fsm,handle_msg,10,[{file,"src/p1_fsm.erl"},{line,582}]},{proc_lib,init_p_do_apply,3,[{file,"proc_lib.erl"},{line,240}]}]}
I have ejabberd 16.02.26 and my module code is:
mod_confirm_delivery.erl
This module works fine with ejabberd 2.1.13, but I want to upgrade my ejabberd installation. I can't understand what the problem is or how to resolve this error.
An undef error means the function or module is not found. The most likely cause is that the mod_confirm_delivery.beam file is not in the Erlang VM's code path.
Try placing the compiled .beam file alongside the other ejabberd .beam files, or add the directory containing mod_confirm_delivery.beam to the path used to launch Erlang (this is the -pa option of the Erlang VM).
If your code is in the right place, the other possibility is that the function itself is undefined. The hook tries to call mod_confirm_delivery:send_packet/4, but your code only defines send_packet/3, not send_packet/4. You need to update your code to match the new signature of the user_send_packet hook:
user_send_packet(Packet, C2SState, From, To) -> Packet
If in doubt, you can refer to the official hook list in the ejabberd documentation: https://docs.ejabberd.im/developer/hooks/
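Concretely, a minimal sketch of the updated handler (the function body is a placeholder for your delivery-confirmation logic):
%% sketch: the hook now calls send_packet/4 rather than send_packet/3
send_packet(Packet, C2SState, From, To) ->
    %% ... your confirmation logic on Packet goes here ...
    Packet.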

Cloudera Kafka cannot run

I installed ZooKeeper and tried to install Kafka 0.8.0 on Cloudera 5.4.4. It deployed successfully, but when I ran it, it failed. The error log is as follows:
[Errno 2] No such file or directory: '/var/log/kafka/server.log'
I really have no idea.
Thanks in advance!
I ran into this problem before: the default maximum broker memory was set to 0 MB by Cloudera (it seems to be a Cloudera bug), which prevented Kafka from starting, and the parameter fetch.message.max.bytes was also set too low by default. Check the stderr installation log and search for the keyword ERROR, otherwise the log is too messy to read through; there you will find the root error message. The message above, [Errno 2] No such file or directory: '/var/log/kafka/server.log', is not the root exception.

Error running hadoop application in Eclipse on Windows

I'm trying to set up an Eclipse environment for developing and debugging Hadoop. I'm following Tom White's Hadoop: The Definitive Guide, 3rd ed. What I would like to do is get the MaxTemperature app working locally on my Windows machine within Eclipse before moving it to my Hortonworks sandbox VM. The comment on page 158 about using the local job runner seems to be what I want. I don't want to set up a full Hadoop installation on Windows. I'm hoping that with the right config params I can convince it to run as a Java application inside Eclipse.
Windows: 7
Eclipse: Luna
Hadoop: 2.4.0
JDK: 7
When I set the Run configuration for MaxTemperatureDriver (source code on page 157) to
inputfile outputdir foo (a deliberately bogus 3rd parameter)
I get the usage message, so I know I'm running my program with those params.
If I remove the bogus third param I get
Exception in thread "main" java.io.IOException: Cannot initialize Cluster. Please check your configuration for mapreduce.framework.name and the correspond server addresses.
at org.apache.hadoop.mapreduce.Cluster.initialize(Cluster.java:120)
at org.apache.hadoop.mapreduce.Cluster.<init>(Cluster.java:82)
at org.apache.hadoop.mapreduce.Cluster.<init>(Cluster.java:75)
at org.apache.hadoop.mapreduce.Job$9.run(Job.java:1255)
at org.apache.hadoop.mapreduce.Job$9.run(Job.java:1251)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1548)
at org.apache.hadoop.mapreduce.Job.connect(Job.java:1250)
at org.apache.hadoop.mapreduce.Job.submit(Job.java:1279)
at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:1303)
at mark.MaxTemperatureDriver.run(MaxTemperatureDriver.java:52)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:84)
at mark.MaxTemperatureDriver.main(MaxTemperatureDriver.java:56)
I've tried inserting -conf, but it seems to be ignored: there is no error message even if I specify a nonexistent path.
I've tried inserting -fs file:/// -jt local, but it makes no difference
I've tried inserting -D mapreduce.framework.name=local
I've tried specifying the input and output with the file: format
Note: I'm not asking how to configure Eclipse to connect to a remote Hadoop installation. I want the application to run within Eclipse.
Is this possible? Any ideas?
Additional info:
I turned on debugging. I saw:
582 [main] DEBUG org.apache.hadoop.mapreduce.Cluster - Trying ClientProtocolProvider : org.apache.hadoop.mapred.YarnClientProtocolProvider
583 [main] DEBUG org.apache.hadoop.mapreduce.Cluster - Cannot pick org.apache.hadoop.mapred.YarnClientProtocolProvider as the ClientProtocolProvider - returned null protocol
I'm wondering not why YarnClientProtocolProvider failed, but why it didn't try LocalClientProtocolProvider.
New info:
It seems that this is an issue with Hadoop 2.4.0. I recreated my environment with Hadoop 1.2.1, followed the instructions in
http://gerrymcnicol.com/index.php/2014/01/02/hadoop-and-cassandra-part-4-writing-your-first-mapreduce-job/
added the Windows hack from
http://bigdatanerd.wordpress.com/2013/11/14/mapreduce-running-mapreduce-in-windows-file-system-debug-mapreduce-in-eclipse
and it all started working.
The following blog will be useful:
Running mapreduce in Windows filesystem
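If you would rather stay on Hadoop 2.x, here is a minimal sketch of forcing the local job runner from inside the driver. The property names are the standard Hadoop 2.x ones; the class name comes from the book's example, and the mapper/reducer wiring is elided:
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.conf.Configured;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.util.Tool;
import org.apache.hadoop.util.ToolRunner;

public class MaxTemperatureDriver extends Configured implements Tool {
    @Override
    public int run(String[] args) throws Exception {
        Configuration conf = getConf();
        // Force the local job runner and the local filesystem so the job
        // runs entirely inside Eclipse, with no cluster involved.
        conf.set("mapreduce.framework.name", "local");
        conf.set("fs.defaultFS", "file:///");

        Job job = Job.getInstance(conf, "Max temperature");
        job.setJarByClass(MaxTemperatureDriver.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        // Mapper/Reducer setup omitted; wire in the book's
        // MaxTemperatureMapper and MaxTemperatureReducer as usual.
        return job.waitForCompletion(true) ? 0 : 1;
    }

    public static void main(String[] args) throws Exception {
        System.exit(ToolRunner.run(new MaxTemperatureDriver(), args));
    }
}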

Wicket warning allow_url_include=On

From time to time I get the following warning in the logfile of my Wicket application:
04.10.2012 14:52:08,525 WARN [org.apache.wicket.core.request.mapper.AbstractBookmarkableMapper]
Unknown listener interface 'd allow_url_include=On '
What does that mean and how do I fix it? I tried Google, but I could only find results for the PHP configuration allow_url_include.
I'm using Wicket 6.0.0
Most likely an automated tool is trying to exploit some PHP application. Wicket can't handle the request and prints the warning. Check the access log for the HTTP requests that hit your server at this timestamp to see which request caused the warning.
It's safe to ignore this warning in this case.
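For example, something along these lines usually turns up the offending request (the access-log path here is just an assumption about your setup):
grep 'allow_url_include' /var/log/apache2/access.log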