Could not find or load main class org.h2.tools.Console - wildfly

I want to connect to the H2 console. I have built the Keycloak source code, and from the bin directory I typed the following command
java -cp jar org.h2.tools.Console -url "$url" -user sa -password ""
to connect to the H2 console, but I get the following error:
Error: Could not find or load main class org.h2.tools.Console
Caused by: java.lang.ClassNotFoundException: org.h2.tools.Console

I would use this approach instead; it should fix the class resolution:
java -cp h2-1.4.200.jar org.h2.tools.Console
This will launch the console. Adapt the path to the jar according to the directory you're launching java from.
Keep in mind that the console web app will prompt you for credentials.
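For reference, a fuller invocation might look like this (a sketch only: the jar location, H2 version, and JDBC URL below are assumptions; locate the jar first and use your own datasource URL):
# locate the H2 jar somewhere under the Keycloak distribution
find . -name 'h2-*.jar'
# example invocation -- substitute the path found above and your own JDBC URL
java -cp path/to/h2-1.4.200.jar org.h2.tools.Console -url "jdbc:h2:~/keycloak" -user sa -password ""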

Related

Use Kafka Connect with jcustenborder / kafka-connect-twitter

I'm trying to use Kafka Connect with jcustenborder's kafka-connect-twitter from GitHub to feed Twitter tweets into Kafka. The instructions say:
mvn clean package
export CLASSPATH="$(find target/ -type f -name '*.jar'| grep '\-package' | tr '\n' ':')"
$CONFLUENT_HOME/bin/connect-standalone connect/connect-avro-docker.properties config/TwitterSourceConnector.properties
The export CLASSPATH line in fact does not work: it returns nothing when run. The connect-avro-docker.properties file seems to want to use the jars available in target/kafka-connect-target/usr/share/kafka-connect after running mvn clean package in the kafka-connect-twitter repository.
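To see what that pipeline actually produces, it can be run piecewise from the repository root (a sketch, simply decomposing the command from the instructions above):
# list every jar the build produced
find target/ -type f -name '*.jar'
# then apply the same grep filter the instructions use
find target/ -type f -name '*.jar' | grep '\-package'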
When I run
connect-standalone.sh connect-avro-docker.properties TwitterSourceConnector.properties
in the directory where the two .properties files are present (connect-standalone.sh is on the PATH), I get the error:
[2021-11-12 18:22:05,267] ERROR Stopping due to error (org.apache.kafka.connect.cli.ConnectStandalone:126)
org.apache.kafka.common.config.ConfigException: Invalid value io.confluent.connect.avro.AvroConverter for configuration key.converter: Class io.confluent.connect.avro.AvroConverter could not be found.
at org.apache.kafka.common.config.ConfigDef.parseType(ConfigDef.java:744)
at org.apache.kafka.common.config.ConfigDef.parseValue(ConfigDef.java:490)
at org.apache.kafka.common.config.ConfigDef.parse(ConfigDef.java:483)
at org.apache.kafka.common.config.AbstractConfig.<init>(AbstractConfig.java:108)
at org.apache.kafka.common.config.AbstractConfig.<init>(AbstractConfig.java:129)
at org.apache.kafka.connect.runtime.WorkerConfig.<init>(WorkerConfig.java:452)
at org.apache.kafka.connect.runtime.standalone.StandaloneConfig.<init>(StandaloneConfig.java:42)
at org.apache.kafka.connect.cli.ConnectStandalone.main(ConnectStandalone.java:80)
It's not finding the jar that contains the AvroConverter.
I'm using Kafka 2.13-2.8.0 and the 0.3.34 jcustenborder kafka-connect-twitter.
I can't see any jars in the Kafka distribution where the AvroConverter might be. Does the distribution even include Kafka Connect?
Note that I'm using a Kafka install on an iMac; I'm not running Kafka in Docker.
EDIT:
Instead of using the Avro properties file, I'm now using connect-standalone.properties. Although the log says it has loaded the guava jar:
INFO Loading plugin from: /Users/paupaches/dev/books/kafkabeginnerscourse/kafka-connect/connectors/kafka-connectors-twitter/guava-30.1.1-jre.jar (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:246)
[2021-11-13 10:50:22,992] INFO Registered loader: PluginClassLoader{pluginLocation=file:/Users/paupaches/dev/books/kafkabeginnerscourse/kafka-connect/connectors/kafka-connectors-twitter/guava-30.1.1-jre.jar} (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:269)
I get the error
ERROR Stopping due to error (org.apache.kafka.connect.cli.ConnectStandalone:126)
java.lang.NoClassDefFoundError: com/google/common/collect/Multimap
I am using openjdk 17.
It is not necessary to export the CLASSPATH: setting the plugin.path property in connect-standalone.properties to the "connectors" path is enough.
The "connectors" directory contains the kafka-connect-twitter directory, which holds all the jars generated by running "mvn clean package" in the connector working copy.
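A minimal sketch of the relevant line in connect-standalone.properties (the directory below is the asker's layout from the log output; point it at wherever your built jars live):
# plugin.path takes a comma-separated list of directories that Connect scans for plugins
plugin.path=/Users/paupaches/dev/books/kafkabeginnerscourse/kafka-connect/connectors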
In the end I used OpenJDK 8, although 17 is also installed on my Mac.

OpenSource: Encryption of JDBC Password in configuration properties file

As I noticed there is a plugin available for the Enterprise version (https://download.rundeck.com/plugins/encrypted-datasource-plugin.html); is there an option for users of Rundeck open source to perform the same kind of encryption of the datasource password in the configuration file?
As I noticed many people mention writing their own Java programs leveraging the Jasypt utilities, I tried this. I have two jar files (one for encrypt and one for decrypt). Since I'm using an RPM-based Rundeck 3.3 installation, I created the directory /var/lib/rundeck/lib and added it to the JVM classpath in /etc/sysconfig/rundeckd via:
export RDECK_JVM_SETTINGS="-Djava.class.path=/var/lib/rundeck/lib/*"
I converted my /etc/rundeck/rundeck-config.properties file to Groovy format and updated /etc/sysconfig/rundeckd with:
export RDECK_CONFIG_FILE="/etc/rundeck/rundeck-config.groovy"
However, when I change the datasource.password entry in /etc/rundeck/rundeck-config.groovy to:
datasource.password=MyDecrypt("MyTest123Password")
I get an error in the Rundeck logs after restarting:
[2020-09-08T18:01:03,168] WARN context.AnnotationConfigServletWebServerApplicationContext - Exception encountered during context initialization - cancelling refresh attempt: org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'application': Initialization of bean failed; nested exception is groovy.lang.MissingMethodException: No signature of method: groovy.util.ConfigSlurper$_parse_closure5.MyDecrypt() is applicable for argument types: (String) values: [MyTest123Password]
Any suggestions?
That encryption feature is only for Rundeck Enterprise; perhaps the best approach on Rundeck Community is to secure the rundeck-config.properties file through UNIX file permissions.
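A minimal sketch of the permissions approach (assuming Rundeck runs as the rundeck user, the default for the RPM install):
# make the file readable only by the Rundeck service account
chown rundeck:rundeck /etc/rundeck/rundeck-config.properties
chmod 600 /etc/rundeck/rundeck-config.properties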

forgerock openam ssoadm STS configuration error while running create-sub-cfg

I am getting the following exception while running ssoadm's create-sub-cfg on ForgeRock OpenAM 13. I would appreciate any leads or hints to resolve this. Thanks.
Command:
create-sub-cfg --servicename RestSecurityTokenService --subconfigname "test" --realm myrealm --datafile mydir1/my_realm_sts_attrs.properties
Exception:
Executing class, com.sun.identity.cli.schema.AddSubConfiguration.
com.sun.identity.cli.CLIException: Message:Unable to add subConfig test
at com.sun.identity.cli.schema.AddSubConfiguration.addSubConfigToRealm(AddSubConfiguration.java:150)
at com.sun.identity.cli.schema.AddSubConfiguration.handleRequest(AddSubConfiguration.java:103)
at com.sun.identity.cli.SubCommand.execute(SubCommand.java:296)
at com.sun.identity.cli.CLIRequest.process(CLIRequest.java:217)
at com.sun.identity.cli.CLIRequest.process(CLIRequest.java:139)
at com.sun.identity.cli.CommandManager.serviceRequestQueue(CommandManager.java:576)
at com.sun.identity.cli.CommandManager.<init>(CommandManager.java:173)
at com.sun.identity.cli.CommandManager.main(CommandManager.java:150)
Caused by: Message:Unable to add subConfig test
at com.sun.identity.sm.ServiceConfig.addSubConfig(ServiceConfig.java:343)
at com.sun.identity.cli.schema.AddSubConfiguration.addSubConfig(AddSubConfiguration.java:228)
at com.sun.identity.cli.schema.AddSubConfiguration.addSubConfigToRealm(AddSubConfiguration.java:131)
... 7 more
Unable to add subConfig test
Command process exited with value 127
You might want to have a look at OPENAM-8006
You probably need to replace:
--subconfigname "test"
with
--subconfigid "test"
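If that is the issue, the full command from the question would then read (everything else unchanged):
create-sub-cfg --servicename RestSecurityTokenService --subconfigid "test" --realm myrealm --datafile mydir1/my_realm_sts_attrs.properties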

Error CREATEing SolrCore ... Specified config does not exist in Zookeeper:default

I used the following command:
./solr -e cloud -z localhost:2181 -noprompt
The final message is the following:
{
  "responseHeader":{
    "status":0,
    "QTime":1616},
  "failure":{
    "":"org.apache.solr.client.solrj.impl.HttpSolrServer$RemoteSolrException:Error CREATEing SolrCore 'gettingstarted_shard2_replica1': Unable to create core [gettingstarted_shard2_replica1] Caused by: Specified config does not exist in ZooKeeper:default",
    "":"org.apache.solr.client.solrj.impl.HttpSolrServer$RemoteSolrException:Error CREATEing SolrCore 'gettingstarted_shard1_replica1': Unable to create core [gettingstarted_shard1_replica1] Caused by: Specified config does not exist in ZooKeeper:default",
    "":"org.apache.solr.client.solrj.impl.HttpSolrServer$RemoteSolrException:Error CREATEing SolrCore 'gettingstarted_shard2_replica2': Unable to create core [gettingstarted_shard2_replica2] Caused by: Specified config does not exist in ZooKeeper:default",
    "":"org.apache.solr.client.solrj.impl.HttpSolrServer$RemoteSolrException:Error CREATEing SolrCore 'gettingstarted_shard1_replica2': Unable to create core [gettingstarted_shard1_replica2] Caused by: Specified config does not exist in ZooKeeper:default"}}
Confirmed ZooKeeper is running:
[zk: localhost:2181(CONNECTED) 0]
I've looked almost everywhere and can't get past this. Can anyone help?
I recently downloaded and installed Solr 4.10.3 and was going through the official Quick Start using the command:
bin/solr start -e cloud -noprompt
and encountered the same-looking exceptions as you:
"org.apache.solr.client.solrj.impl.HttpSolrServer$RemoteSolrException:Error CREATEing SolrCore 'gettingstarted_shard2_replica2': Unable to create core [gettingstarted_shard2_replica2] Caused by: Specified config does not exist in ZooKeeper:default"
Scrolling up in the bash output, I also saw the error line:
bin/solr: line 1085: jar: command not found
This was the reason for the exceptions: the jar command was not on the current path. After putting jar on my path, the exceptions no longer show up. It might be the same reason you are getting them.
I am on a Fedora machine and used the following guide to set up jar, java, javac, etc. via the alternatives command (but I think you could just add the JDK's bin directory to your current path to solve the issue):
https://ask.fedoraproject.org/en/question/59412/cannot-find-oracle-jdk-on-fedora-21/
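A minimal sketch of the PATH approach (the JDK location below is an assumption; substitute wherever your JDK is installed):
# put the JDK's bin directory, which contains the jar command, on the PATH
export PATH="$PATH:/usr/java/default/bin"
# verify that jar now resolves
which jar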

Error running hadoop application in Eclipse on Windows

I'm trying to set up an Eclipse environment for developing and debugging Hadoop. I'm following Tom White's Hadoop: The Definitive Guide, 3rd ed. What I would like to do is get the MaxTemperature app working locally on Windows within Eclipse before moving it to my Hortonworks sandbox VM. The comment on page 158 about using the local job runner seems to be what I want. I don't want to set up a full Hadoop installation on Windows; I'm hoping that with the right config params I can convince it to run as a Java application inside Eclipse.
Windows: 7
Eclipse: Luna
Hadoop: 2.4.0
JDK: 7
When I set the Run configuration for MaxTemperatureDriver (source code on page 157) to
inputfile outputdir foo (deliberately bogus 3rd parameter)
I get the usage message, so I know I'm running my program with those params.
If I remove the bogus third param I get:
Exception in thread "main" java.io.IOException: Cannot initialize Cluster. Please check your configuration for mapreduce.framework.name and the correspond server addresses.
at org.apache.hadoop.mapreduce.Cluster.initialize(Cluster.java:120)
at org.apache.hadoop.mapreduce.Cluster.<init>(Cluster.java:82)
at org.apache.hadoop.mapreduce.Cluster.<init>(Cluster.java:75)
at org.apache.hadoop.mapreduce.Job$9.run(Job.java:1255)
at org.apache.hadoop.mapreduce.Job$9.run(Job.java:1251)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1548)
at org.apache.hadoop.mapreduce.Job.connect(Job.java:1250)
at org.apache.hadoop.mapreduce.Job.submit(Job.java:1279)
at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:1303)
at mark.MaxTemperatureDriver.run(MaxTemperatureDriver.java:52)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:84)
at mark.MaxTemperatureDriver.main(MaxTemperatureDriver.java:56)
I've tried inserting -conf, but it seems to be ignored: there is no error message if I specify a nonexistent path.
I've tried inserting -fs file:/// -jt local, but it makes no difference.
I've tried inserting -D mapreduce.framework.name=local.
I've tried specifying the input and output with the file: format.
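For reference, here is a representative combination of those options as a single set of program arguments (a sketch: input and output are placeholder paths, and the driver parses the generic options via ToolRunner, as in the book):
-fs file:/// -jt local -D mapreduce.framework.name=local input output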
Note: I'm not asking how to configure Eclipse to connect to a remote Hadoop installation. I want the application to run within Eclipse.
Is this possible? Any ideas?
Additional info:
I turned on debugging. I saw:
582 [main] DEBUG org.apache.hadoop.mapreduce.Cluster - Trying ClientProtocolProvider : org.apache.hadoop.mapred.YarnClientProtocolProvider
583 [main] DEBUG org.apache.hadoop.mapreduce.Cluster - Cannot pick org.apache.hadoop.mapred.YarnClientProtocolProvider as the ClientProtocolProvider - returned null protocol
I'm wondering not why YarnClientProtocolProvider failed, but why it didn't try LocalClientProtocolProvider.
New info:
It seems that this is an issue with Hadoop 2.4.0. I recreated my environment with Hadoop 1.2.1, followed the instructions in
http://gerrymcnicol.com/index.php/2014/01/02/hadoop-and-cassandra-part-4-writing-your-first-mapreduce-job/
added the Windows hack from
http://bigdatanerd.wordpress.com/2013/11/14/mapreduce-running-mapreduce-in-windows-file-system-debug-mapreduce-in-eclipse
and it all started working.
The following blog will be useful:
Running mapreduce in Windows filesystem