Failed to load kafka module - apache-kafka

I am trying out the nxlog Kafka output module from the link below:
Link
I get the error message below:
ERROR Failed to load module from /usr/local/libexec/nxlog/modules/output/om_kafka.so, /usr/local/libexec/nxlog/modules/output/om_kafka.so: undefined symbol: rd_kafka_topic_new;DSO load failed
ERROR module 'outKafka' is not declared at /usr/local/etc/nxlog/nxlog.conf:65
ERROR route tcproute is not functional without output modules, ignored at /usr/local/etc/nxlog/nxlog.conf:65
I am using:
Nxlog Version - nxlog-ce-2.8.1248
Kafka Version - kafka_2.9.2-0.8.1.1
Latest librdkafka
Also, the example program of librdkafka (rdkafka) runs fine for both producer and consumer, so I guess the environment is set up correctly for librdkafka, but I am unable to determine what's causing this problem.

The problem is that om_kafka.so is not linked with librdkafka.
You will need this in Makefile.am:
om_kafka_la_LIBADD = $(LIBRDKAFKA) $(LIBNX)
The value of $(LIBRDKAFKA) should be properly set; normally this is done in configure.in. Otherwise you could just use the full path to the library (.so, .la, or .a).
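For illustration, a minimal sketch of how the two build files might fit together, assuming a standard autoconf/automake setup (the AC_CHECK_LIB test and the LIBRDKAFKA variable handling here are illustrative, not taken from the nxlog sources):

# configure.in: probe for librdkafka and export the linker flag
AC_CHECK_LIB([rdkafka], [rd_kafka_new],
             [LIBRDKAFKA="-lrdkafka"],
             [AC_MSG_ERROR([librdkafka not found])])
AC_SUBST([LIBRDKAFKA])

# Makefile.am: link the output module against it
om_kafka_la_LIBADD = $(LIBRDKAFKA) $(LIBNX)

With this in place, libtool links om_kafka.so against librdkafka, so symbols like rd_kafka_topic_new resolve when the module is loaded.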

Related

Use Kafka Connect with jcustenborder / kafka-connect-twitter

I'm trying to use Kafka Connect with kafka-connect-twitter from jcustenborder on GitHub to ingest Twitter tweets into Kafka. The instructions say:
mvn clean package
export CLASSPATH="$(find target/ -type f -name '*.jar'| grep '\-package' | tr '\n' ':')"
$CONFLUENT_HOME/bin/connect-standalone connect/connect-avro-docker.properties config/TwitterSourceConnector.properties
The export CLASSPATH line in fact does not work and returns nothing when run. The connect-avro-docker.properties file seems to expect the jars available in target/kafka-connect-target/usr/share/kafka-connect after running mvn clean package in the kafka-connect-twitter repository.
When I run
connect-standalone.sh connect-avro-docker.properties TwitterSourceConnector.properties
in the directory where these two .properties files are present (connect-standalone.sh is on the path), I get the error:
[2021-11-12 18:22:05,267] ERROR Stopping due to error (org.apache.kafka.connect.cli.ConnectStandalone:126)
org.apache.kafka.common.config.ConfigException: Invalid value io.confluent.connect.avro.AvroConverter for configuration key.converter: Class io.confluent.connect.avro.AvroConverter could not be found.
at org.apache.kafka.common.config.ConfigDef.parseType(ConfigDef.java:744)
at org.apache.kafka.common.config.ConfigDef.parseValue(ConfigDef.java:490)
at org.apache.kafka.common.config.ConfigDef.parse(ConfigDef.java:483)
at org.apache.kafka.common.config.AbstractConfig.<init>(AbstractConfig.java:108)
at org.apache.kafka.common.config.AbstractConfig.<init>(AbstractConfig.java:129)
at org.apache.kafka.connect.runtime.WorkerConfig.<init>(WorkerConfig.java:452)
at org.apache.kafka.connect.runtime.standalone.StandaloneConfig.<init>(StandaloneConfig.java:42)
at org.apache.kafka.connect.cli.ConnectStandalone.main(ConnectStandalone.java:80)
It's not finding the jar where the AvroConverter is.
I'm using Kafka 2.13-2.8.0 and the 0.3.34 jcustenborder kafka-connect-twitter.
I can't see any jars in the Kafka distribution where the AvroConverter might be. Does the distribution include Kafka Connect?
Note that I'm using an install of Kafka on an iMac; I'm not using Docker to run Kafka.
EDIT:
Instead of using the Avro properties file, I'm now using connect-standalone.properties. Although the log says it has loaded the guava jar:
INFO Loading plugin from: /Users/paupaches/dev/books/kafkabeginnerscourse/kafka-connect/connectors/kafka-connectors-twitter/guava-30.1.1-jre.jar (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:246)
[2021-11-13 10:50:22,992] INFO Registered loader: PluginClassLoader{pluginLocation=file:/Users/paupaches/dev/books/kafkabeginnerscourse/kafka-connect/connectors/kafka-connectors-twitter/guava-30.1.1-jre.jar} (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:269)
I get the error
ERROR Stopping due to error (org.apache.kafka.connect.cli.ConnectStandalone:126)
java.lang.NoClassDefFoundError: com/google/common/collect/Multimap
I am using openjdk 17.
It is not necessary to export the CLASSPATH; setting the "connectors" path in the plugin.path property of connect-standalone.properties is enough.
The "connectors" directory contains the kafka-connect-twitter directory, which holds all the jars generated by running "mvn clean package" in the connector working copy.
In the end I used openjdk 8, although 17 is also installed in my Mac.

Unable to start zookeeper getting exception saying that unable to load

I am unable to start ZooKeeper on my local machine and am getting this exception:
Error: Could not find or load main class software\kafka_2.12-2.3.0\libs\activation-1.1.1.jar;
I got the same error.
Finally I found the reason: I had put Kafka in a folder whose name contains a space, like this:
C:\Open source\kafka\kafka_2.12-2.3.0
Then I copied the whole kafka_2.12-2.3.0 folder to a new folder without a space in its name, such as C:\kafka_2.12-2.3.0, and Kafka could run.
Note that the command line is: zookeeper-server-start.bat ../../config/zookeeper.properties
NOT: zookeeper-server-start.bat config/zookeeper.properties

Starting KsqlRestApplication from Scala and getting NoSuchMethodError org.apache.kafka.streams.StreamsConfig.getConsumerConfigs

I am trying to write a program that lets me run predefined KSQL operations on Kafka topics in Scala, without having to open the KSQL CLI every time. Therefore I want to start the KSQL "server" from within my Scala program. If I understand the KSQL source code correctly, I have to build and start a KsqlRestApplication:
def restServer = KsqlRestApplication.buildApplication(
  new KsqlRestConfig(defaultServerProperties),
  true,
  new VersionCheckerAgent {
    override def start(ksqlModuleType: KsqlModuleType, properties: Properties): Unit = ???
  }
)
But when I try doing that, I get the following error:
Exception in thread "main" java.lang.NoSuchMethodError: org.apache.kafka.streams.StreamsConfig.getConsumerConfigs(Ljava/lang/String;Ljava/lang/String;)Ljava/util/Map;
at io.confluent.ksql.rest.server.BrokerCompatibilityCheck.create(BrokerCompatibilityCheck.java:62)
at io.confluent.ksql.rest.server.KsqlRestApplication.buildApplication(KsqlRestApplication.java:241)
I looked into BrokerCompatibilityCheck: in its create function it calls StreamsConfig.getConsumerConfigs() with two Strings as parameters, instead of the parameters defined in
https://kafka.apache.org/0102/javadoc/org/apache/kafka/streams/StreamsConfig.html#getConsumerConfigs(StreamThread,%20java.lang.String,%20java.lang.String).
Are my KSQL and Kafka versions simply not compatible, or am I doing something wrong?
I am using KSQL version 4.1.0-SNAPSHOT and Kafka version 1.0.0.
Yes, NoSuchMethodError typically indicates a version incompatibility between libraries.
The link you posted is to the javadoc for Kafka 0.10.2. The method hasn't changed in 1.0, but in the upcoming 1.1 it indeed takes only two Strings:
https://kafka.apache.org/11/javadoc/org/apache/kafka/streams/StreamsConfig.html#getConsumerConfigs(java.lang.String,%20java.lang.String)
That suggests the version of KSQL you're using (4.1.0-SNAPSHOT) depends on version 1.1 of Kafka Streams, which is currently in the release candidate phase and I believe should be out soon:
https://lists.apache.org/thread.html/780c4458b16590e99261b69d7b41b9ec374a3226d72c8d38885a008a#%3Cusers.kafka.apache.org%3E
As per that email you can find the latest (1.1.0-rc2) artifacts in the apache staging repo:
https://repository.apache.org/content/groups/staging/
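If you need the pre-release streams library before 1.1.0 is final, a minimal sbt sketch along these lines should work, assuming an sbt build (the resolver URL comes from the link above; the 1.1.0 version string is an assumption about how the staged artifacts are numbered):

resolvers += "Apache Staging" at "https://repository.apache.org/content/groups/staging/"
libraryDependencies += "org.apache.kafka" % "kafka-streams" % "1.1.0"

Once the dependency matches what KSQL 4.1.0-SNAPSHOT was compiled against, the NoSuchMethodError should go away.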

How to resolve undef error in ejabberd hook

I have added a custom module named mod_confirm_delivery to ejabberd. It compiled and was added successfully, but when I send a message the following error appears in my ejabberd error log file:
2016-03-15 17:03:38.306 [error] <0.2653.0>#ejabberd_hooks:run_fold1:368 {undef,[{mod_confirm_delivery,send_packet,[{xmlel,<<"iq">>,[{<<"xml:lang">>,<<"en">>},{<<"type">>,<<"get">>},{<<"id">>,<<"aacfa">>}],[{xmlcdata,<<"\n">>},{xmlel,<<"query">>,[{<<"xmlns">>,<<"jabber:iq:roster">>}],[]},{xmlcdata,<<"\n">>}]},{state,{socket_state,gen_tcp,#Port<0.58993>,<0.2652.0>},ejabberd_socket,#Ref<0.0.1.25301>,false,<<"12664578908237388886">>,undefined,c2s,c2s_shaper,false,false,false,false,[verify_none,compression_none],true,{jid,<<"test1">>,<<"localhost">>,<<"D-5">>,<<"test1">>,<<"localhost">>,<<"D-5">>},<<"test1">>,<<"localhost">>,<<"D-5">>,{{1458,41617,630679},<0.2653.0>},{2,{{<<"test2">>,<<"localhost">>,<<>>},{{<<"test1">>,<<"localhost">>,<<>>},nil,nil},nil}},{2,{{<<"test2">>,<<"localhost">>,<<>>},{{<<"test1">>,<<"localhost">>,<<>>},nil,nil},nil}},{0,nil},undefined,undefined,{userlist,none,[],false},c2s,ejabberd_auth_internal,{{127,0,0,1},41928},[],active,[],inactive,undefined,undefined,1000,undefined,300,300,false,0,0,true,<<"en">>},{jid,<<"test1">>,<<"localhost">>,<<"D-5">>,<<"test1">>,<<"localhost">>,<<"D-5">>},{jid,<<"test1">>,<<"localhost">>,<<>>,<<"test1">>,<<"localhost">>,<<>>}],[]},{ejabberd_hooks,safe_apply,3,[{file,"src/ejabberd_hooks.erl"},{line,382}]},{ejabberd_hooks,run_fold1,4,[{file,"src/ejabberd_hooks.erl"},{line,365}]},{ejabberd_c2s,session_established2,2,[{file,"src/ejabberd_c2s.erl"},{line,1268}]},{p1_fsm,handle_msg,10,[{file,"src/p1_fsm.erl"},{line,582}]},{proc_lib,init_p_do_apply,3,[{file,"proc_lib.erl"},{line,240}]}]}
I have ejabberd 16.02.26 and my module code is:
mod_confirm_delivery.erl
This module works fine with ejabberd 2.1.13, but I want to upgrade my ejabberd. I can't understand what the problem is or how I can resolve this error.
An undef error means the function or module is not found. The most likely cause is that the mod_confirm_delivery.beam file is not on the Erlang VM path.
You should try moving the compiled beam file next to the other ejabberd beam files, or add the directory where your mod_confirm_delivery.beam file is located to the path used to launch Erlang. This is the -pa option of the Erlang VM.
If your code is in the right place, the other possibility is that the function itself is undefined. The hook tries to call mod_confirm_delivery:send_packet/4, but your code only defines send_packet/3, not send_packet/4. You need to update your code to match the new signature of the user_send_packet hook:
user_send_packet(Packet, C2SState, From, To) -> Packet
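As an illustration, a minimal sketch of what the updated export and function head could look like (the body is hypothetical; your existing confirmation logic from send_packet/3 would move inside):

-export([send_packet/4]).

send_packet(Packet, _C2SState, _From, _To) ->
    %% delivery-confirmation logic from the old send_packet/3 goes here;
    %% the hook is a fold over the packet, so return the (possibly
    %% modified) packet for ejabberd to keep routing.
    Packet.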
In case of doubt, you can refer to official hook list in ejabberd documentation: https://docs.ejabberd.im/developer/hooks/

Error running hadoop application in Eclipse on Windows

I'm trying to set up an Eclipse environment for developing and debugging Hadoop. I'm following Tom White's Hadoop: The Definitive Guide, 3rd ed. What I would like to do is get the MaxTemperature app working locally on my Windows machine within Eclipse before moving it to my Hortonworks sandbox VM. The comment on page 158 about using the local job runner seems to be what I want. I don't want to set up a full Hadoop installation on Windows; I'm hoping that with the right config params I can convince it to run as a Java application inside Eclipse.
Windows: 7
Eclipse: Luna
Hadoop: 2.4.0
JDK: 7
When I set the Run configuration for MaxTemperatureDriver (Source code on page 157) to
inputfile outputdir foo (deliberate bogus 3rd parameter)
I get the usage message so I know I'm running my program with those params.
If I remove the bogus third param I get
Exception in thread "main" java.io.IOException: Cannot initialize Cluster. Please check your configuration for mapreduce.framework.name and the correspond server addresses.
at org.apache.hadoop.mapreduce.Cluster.initialize(Cluster.java:120)
at org.apache.hadoop.mapreduce.Cluster.<init>(Cluster.java:82)
at org.apache.hadoop.mapreduce.Cluster.<init>(Cluster.java:75)
at org.apache.hadoop.mapreduce.Job$9.run(Job.java:1255)
at org.apache.hadoop.mapreduce.Job$9.run(Job.java:1251)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1548)
at org.apache.hadoop.mapreduce.Job.connect(Job.java:1250)
at org.apache.hadoop.mapreduce.Job.submit(Job.java:1279)
at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:1303)
at mark.MaxTemperatureDriver.run(MaxTemperatureDriver.java:52)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:84)
at mark.MaxTemperatureDriver.main(MaxTemperatureDriver.java:56)
I've tried inserting -conf but it seems to be ignored. There is no error message if I specify a nonexistent path.
I've tried inserting -fs file:/// -jt local, but it makes no difference
I've tried inserting -D mapreduce.framework.name=local
I've tried specifying the input and output with the file: format
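For completeness, here is the programmatic equivalent of those flags, set directly on the driver's Configuration before submission. This is only a sketch: the LocalRunnerLauncher wrapper class is hypothetical, MaxTemperatureDriver is the Tool from the book, and as noted above these settings alone did not make the 2.4.0 setup pick the local runner.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.util.ToolRunner;

public class LocalRunnerLauncher {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Request the local job runner and local filesystem explicitly,
        // mirroring the -D / -fs / -jt flags tried above.
        conf.set("mapreduce.framework.name", "local");
        conf.set("fs.defaultFS", "file:///");
        // args = input path, output dir
        int exitCode = ToolRunner.run(conf, new MaxTemperatureDriver(), args);
        System.exit(exitCode);
    }
}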
Note: I'm not asking how to configure Eclipse to connect to a remote Hadoop installation. I want the application to run within Eclipse.
Is this possible? Any ideas?
Additional info:
I turned on debugging. I saw:
582 [main] DEBUG org.apache.hadoop.mapreduce.Cluster - Trying ClientProtocolProvider : org.apache.hadoop.mapred.YarnClientProtocolProvider
583 [main] DEBUG org.apache.hadoop.mapreduce.Cluster - Cannot pick org.apache.hadoop.mapred.YarnClientProtocolProvider as the ClientProtocolProvider - returned null protocol
I'm wondering not why YarnClientProtocolProvider failed, but why it didn't try LocalClientProtocolProvider.
New info:
It seems that this is an issue with Hadoop 2.4.0. I recreated my environment with Hadoop 1.2.1, followed the instructions in
http://gerrymcnicol.com/index.php/2014/01/02/hadoop-and-cassandra-part-4-writing-your-first-mapreduce-job/
added the Windows hack from
http://bigdatanerd.wordpress.com/2013/11/14/mapreduce-running-mapreduce-in-windows-file-system-debug-mapreduce-in-eclipse
and it all started working.
The following blog will be useful:
Running mapreduce in Windows filesystem